-
I spent yesterday working on the SLC-1 issues and a few hours today. The functionality is complete, but there are a few improvements needed before this release is ready.

The first thing I tackled was getting the data ready for the new, simpler way we're working with it. A description of the layout of our Entities is in issue #82.

Once our Entities were all in place in the system, I could work on getting the aggregated usages output into the correct tables again. This is done similarly to the first prototype version, but now it's backed by a database, so we have proper integrity and a full audit trail back to where the calculations are made. Not much to shout about for this issue - details in #84.

The next issue tackled was handling Cargo files. The new Entities made this a breeze to implement - take a look at the CargoUpload class (there's a rough sketch of the detection idea at the end of this comment). I'm sure these Upload classes will become more complex over time, but for now we can make a pretty safe assumption that the uploaded file is a Cargo file if it matches the few known column headers. Something I want to work into a future release is the ability to upload a zip file containing the Cargo data (I'm told this is how the files are received by labels). For now, it requires the files to be extracted and uploaded individually.

Now that we're working towards getting this product in front of people, this tabbed interface is introduced: it works in a similar way to before, but we're going to make the rows clickable to open the Product view, and the Balance/Output/Profit will be editable in the next release.

I'm leaving the fun task until last: an implementation of Biff's fun animation idea.
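For context on the header detection mentioned above, here's a rough sketch of the idea. The header names and class internals below are placeholders I've invented for illustration; they're not the actual CargoUpload implementation.

```php
<?php
// Sketch of header-based detection. The header names here are placeholders,
// not necessarily the columns the real CargoUpload class checks for.
class CargoUploadSketch {
	private const KNOWN_HEADERS = ["Reporting Period", "ISRC", "Quantity", "Value"];

	public static function isCargoFile(string $filePath):bool {
		$fh = fopen($filePath, "r");
		$firstRow = fgetcsv($fh);
		fclose($fh);

		if(!$firstRow) {
			return false;
		}

		// Treat the file as a Cargo statement only if every known header
		// appears in the first row, regardless of column order.
		foreach(self::KNOWN_HEADERS as $header) {
			if(!in_array($header, $firstRow, true)) {
				return false;
			}
		}

		return true;
	}
}
```

If two providers ever share the same headers, a simple check like this will need replacing with something more robust, which is part of why I expect the Upload classes to grow over time.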
-
What's next? There are three things I need to work on before it's release-ready:
-
The first thing I've started on today is Cargo imports. Currently, the files import and are handled by the importer, but the products are not assigned correctly. I'm building up some test data to include with the Behat tests (there's a rough sketch of the idea below). While I'm in the tests, I need to finish off getting them bulletproof for GitHub Actions. There are a few flaky checks in there (Code Sniffer and Mess Detector) that need buckling up so GitHub Actions is happy running them and we get nice automated deployments. Once all of this is completed, I'll get on to Spotify and getting background work running, so any heavy lifting can be shifted away from the foreground process.
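As a rough illustration of the kind of fixture generation mentioned above (the column headers, values and file path are invented for the example, not the real statement format), something like this could churn out a large fake statement for the Behat suite:

```php
<?php
// Sketch only: writes a fake statement CSV to use as Behat test data.
// Column headers, values and the output path are all placeholders.
$columns = ["Track Title", "Artist", "ISRC", "Units", "Net Receipts"];

$fh = fopen("test/fixture/fake-statement.csv", "w");
fputcsv($fh, $columns);

for($i = 0; $i < 10_000; $i++) {
	fputcsv($fh, [
		"Track $i",
		"Artist " . ($i % 100),
		sprintf("GBTEST%07d", $i),
		rand(1, 5_000),
		number_format(rand(1, 99_999) / 100, 2, ".", ""),
	]);
}

fclose($fh);
```

Scaling the loop up gives us arbitrarily large files without having to store real statements in the repository.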
-
For SLC1 to be release-ready, it needs to be fast and accurate. Those two points are what I'm working on now, to get the codebase bulletproof and ready for release.

I would like to create some test files with lots of fake, generated data to use in integration tests, as it's a bit too easy to test with files of fewer than 10 lines. Maybe we can come up with something together @richardbirkin ? I'm talking about a real-looking file with a million rows, so we default to testing with these huge files, building in headroom for when a user drops something of that size.

One feature that was added last week was the Spotify API integration, so we can start looking up metadata such as album artwork. I actually found a weird bug in Spotify's API that meant every so often we'd get seemingly random artwork back, but luckily we're building our own library for Spotify API usage, so we can fix it in the library rather than have a hack in Trackshift, or have to wait for a fix from Spotify. The new Spotify code is integrated in our library, and this is being used directly by Trackshift. Now when I drag in a big statement, all tracks that have matching artwork are correctly matched, and all of this is done through the lazy loading script - to the user, there isn't really any noticeable delay in waiting for the artwork now.

On to the primary focus of today: the processing speed. I've built the current iteration with the least amount of code possible. I think that's a great style of development, as it lets you expand and optimise when necessary. It's necessary now because of the introduction of the database. Each time an operation is performed on a database, it's done in a transaction. It takes a fraction of a second per transaction to work with data in a database, and currently each record is being inserted individually, which adds up to a lot of fractions of a second. I'm working on a mechanism to perform all the inserts in a single transaction, so we're only limited by the upload bottleneck.
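Here's a minimal sketch of the single-transaction idea using plain PDO; the table and column names are placeholders, not Trackshift's real schema.

```php
<?php
// Sketch: one transaction around all the inserts, instead of one per row.
// $pdo, the table and the columns are placeholders, not the real schema.
function insertUsageRecords(PDO $pdo, iterable $rows):void {
	$statement = $pdo->prepare(
		"INSERT INTO usage_record (product_id, units, earnings) VALUES (?, ?, ?)"
	);

	$pdo->beginTransaction();

	try {
		foreach($rows as $row) {
			// The prepared statement is reused for every row; the commit
			// only happens once, after all the rows have been inserted.
			$statement->execute([
				$row["product_id"],
				$row["units"],
				$row["earnings"],
			]);
		}

		$pdo->commit();
	}
	catch(Throwable $exception) {
		// Roll back so a failed import doesn't leave half the rows behind.
		$pdo->rollBack();
		throw $exception;
	}
}
```

The per-transaction overhead is paid once rather than once per record, which is where the speed-up comes from.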
-
There are also a couple of other outstanding points about file uploads:
This last point will be a good segue into SLC2, as physical distributor accounts often contain manufacturing costs (if the manufacturing has been done by the distributor on behalf of the label), and those costs can therefore be added to ProductCosts at import.
-
Also:
-
I've been setting up the production server. We're all ready to go now, so I'll grab something to eat and DEPLOY!
-
Since all we're doing in SLC1 is dragging and dropping files, I don't think we need to make a big deal of the 3 week thing, so I've just removed it from the page entirely. SLC2 involves some data entry from the user, but even then I don't think we need the expiry button. Instead, we turn off that functionality. What do you think @g105b ?
-
I've committed and pushed all my changes and am happy for this to go live @g105b
-
Amazing! I'll take one last look and then I'll press the button 😨
-
Biff's going to be supplying the wireframes for the accepted aggregation functionality, and while he's doodling those I've been working on two things:
Trackshift tidy-up - now that we have a clear path of where the development is going, there were a few things that needed sorting on the project before we progressed. Notably, I've got the test bed passing again (so we can be fully TDD) and introduced two new quality tools (Code Sniffer and Mess Detector) that will highlight any mistakes and areas of complex code before they make their way into the codebase.
Spotify API - I started building a type-safe API client for Spotify, so it documents itself rather than us having to go back and forth between documentation pages. I'm only building out the functionality we're using in Trackshift, as there's a lot more of the API that we won't use (audiobooks, playlists, etc.). One day I'll code the rest of the functionality, but it's not needed yet. There's a rough sketch of what I mean by type-safe below.
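To give a flavour of the type-safe approach (this is a simplified, hypothetical shape, not the real library's classes or method names), the idea is that each endpoint returns a typed object rather than a raw array:

```php
<?php
// Hypothetical sketch of a type-safe endpoint wrapper; the class, property
// and method names are illustrative only, not the real library's API.
class AlbumArtwork {
	public function __construct(
		public readonly string $url,
		public readonly int $width,
		public readonly int $height,
	) {}
}

class TrackInfo {
	public function __construct(
		public readonly string $id,
		public readonly string $name,
		public readonly AlbumArtwork $artwork,
	) {}
}

interface SpotifyClient {
	/** Look up a track and return a typed object, or null if not found. */
	public function getTrackByIsrc(string $isrc):?TrackInfo;
}
```

Because the return values are typed, a call site like `$client->getTrackByIsrc($isrc)?->artwork->url` is checked by the IDE and static analysis, instead of relying on array keys being spelled correctly.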