
Storing triggerId and deviceRoleName in TriggeredTask is redundant #429

Open
Whathecode opened this issue Jan 11, 2023 · 2 comments
Labels
enhancement Nice to have, non-functional requirements.

Comments

Whathecode (Member) commented Jan 11, 2023

A TriggeredTask always gets uploaded using a DataStreamPoint, which already contains triggerIds and deviceRoleName.

For TriggeredTask data points, the "reason" for collecting them is always just the single trigger which triggered the task. Therefore, triggerIds will always contain exactly that one trigger ID, which makes triggerId in TriggeredTask redundant.

Similarly, deviceRoleName in TriggeredTask always corresponds to the deviceRoleName of the data stream it is uploaded to.
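
To make the duplication concrete, here is a minimal sketch of the two types involved. The field sets are simplified assumptions for illustration, not the actual core definitions:

    import kotlinx.serialization.Serializable

    // Simplified sketch of the data stream point a TriggeredTask is uploaded in;
    // fields are assumptions based on this issue, not the actual core definitions.
    @Serializable
    data class DataStreamPoint(
        val studyDeploymentId: String,
        val deviceRoleName: String,  // Identifies the device the data belongs to ...
        val triggerIds: List<Int>,   // ... and the trigger(s) which caused collection.
        val data: TriggeredTask
    )

    // The payload repeats both values.
    @Serializable
    data class TriggeredTask(
        val triggerId: Int,          // Redundant: always equals triggerIds.single().
        val taskName: String,
        val deviceRoleName: String   // Redundant: always matches deviceRoleName above.
    )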

@Whathecode Whathecode added the enhancement Nice to have, non-functional requirements. label Jan 11, 2023
@Whathecode Whathecode changed the title from "TriggeredTask.triggerId redundancy" to "TriggeredTask triggerId and deviceRoleName` redundancy" Jan 11, 2023
@Whathecode Whathecode changed the title from "TriggeredTask triggerId and deviceRoleName` redundancy" to "TriggeredTask triggerId and deviceRoleName redundancy" Jan 11, 2023
@Whathecode Whathecode changed the title from "TriggeredTask triggerId and deviceRoleName redundancy" to "Storing triggerId and deviceRoleName in TriggeredTask is redundant" Jan 11, 2023
bardram (Collaborator) commented Oct 26, 2023

I agree. In general there seems to be some redundancy in the data stream; for example, the measure type is stated several times.

Whathecode (Member, Author) commented Oct 30, 2023

I presume you mean that the polymorphic type descriptor of individual data points duplicates the data stream data type on data uploads:

            "batch": [
                {
                    "dataStream": {
                        "studyDeploymentId": "3de35fb5-c39e-4fea-8ccc-3a70ca618ce7",
                        "deviceRoleName": "Device",
                        "dataType": "dk.cachet.carp.stubpoint"         // DEFINES TYPE
                    },
                    "firstSequenceId": 0,
                    "measurements": [
                        {
                            "sensorStartTime": 0,
                            "data": {
                                "__type": "dk.cachet.carp.stubpoint",       // REDUNDANT; ALWAYS THE SAME
                                "data": "Stub"
                            }
                        }
                     ]
                  }
               ]

I've made a separate issue for this, as they are quite distinct.

However, I've marked both of these as enhancements. It may become relevant if we need to reduce payload sizes (and even then, other approaches such as compression may be preferable first). But I believe optimizing this comes with added complexity in the serializers on all potential target platforms: implementing custom serializers in core will work for both the backend and JS/TS, but you'd still need to do something similar in Dart.
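
As a rough illustration of where that complexity comes from, the sketch below strips and restores the redundant discriminator using plain JsonObject manipulation (the helper names are made up; an actual solution would live inside custom serializers). Restoring the discriminator needs the dataType of the enclosing data stream, which a measurement serializer does not normally have access to:

    import kotlinx.serialization.json.*

    // Hypothetical helper: drop the "__type" discriminator from a measurement's
    // "data" object before upload, since it always equals the stream's dataType.
    fun stripRedundantType(measurement: JsonObject): JsonObject {
        val data = measurement.getValue("data").jsonObject
        val withoutType = JsonObject(data.filterKeys { it != "__type" })
        return JsonObject(measurement + ("data" to withoutType))
    }

    // Hypothetical helper: reading the batch back requires the dataType of the
    // enclosing data stream to reconstruct the discriminator; passing that context
    // down into (de)serializers on every target platform is the added complexity.
    fun restoreType(measurement: JsonObject, dataType: String): JsonObject {
        val data = measurement.getValue("data").jsonObject
        val withType = JsonObject(mapOf("__type" to JsonPrimitive(dataType)) + data)
        return JsonObject(measurement + ("data" to withType))
    }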
