* Prevent importing unsupported tracker from backup
Importing them would lead to GraphQL field validation errors (a field declared as non-null being null) once the track records get used, since they would point to trackers that do not exist
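A minimal sketch of the idea, with hypothetical types and names rather than the actual Suwayomi implementation: track records referencing unknown trackers get dropped before the backup is restored.
```kotlin
// Hypothetical types; the real backup/tracker classes differ.
data class BackupTrackRecord(val trackerId: Int, val mangaId: Int, val remoteId: Long)

class TrackerRegistry(private val supportedTrackerIds: Set<Int>) {
    fun isSupported(trackerId: Int): Boolean = trackerId in supportedTrackerIds
}

fun filterRestorableTrackRecords(
    records: List<BackupTrackRecord>,
    registry: TrackerRegistry,
): List<BackupTrackRecord> =
    // Records of unknown trackers would later surface as "non-null field is null"
    // GraphQL errors, so they are skipped during import.
    records.filter { registry.isSupported(it.trackerId) }
```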
* Delete track records of unsupported trackers
* Always return all track records of manga
This was already partially changed in 7df5f1c4c4, but this occurrence was missed
* Include tracking in validation of backup
* Always return track records
It is not clear why an empty list should be returned when no trackers are logged in
* Include tracking in backup creation
* Restore tracking from backup
If the new chapters include duplicates from different scanlators, they would count toward the limit, potentially causing the auto download to download only duplicated chapters even though more non-duplicated chapters are available.
Instead, the limit should only consider unique chapters and then include all duplicates of the chapters that get downloaded
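A minimal sketch of that selection logic, using a hypothetical Chapter type and field names:
```kotlin
data class Chapter(val id: Int, val chapterNumber: Float, val scanlator: String?)

fun selectChaptersToDownload(newChapters: List<Chapter>, limit: Int): List<Chapter> {
    // Count the download limit against distinct chapter numbers only ...
    val limitedChapterNumbers = newChapters
        .map { it.chapterNumber }
        .distinct()
        .take(limit)
        .toSet()

    // ... but include every duplicate (e.g. from a different scanlator) of those chapters.
    return newChapters.filter { it.chapterNumber in limitedChapterNumbers }
}
```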
* Remove overrides of "ChapterFilesProvider::downloadImpl"
* Check final download folder for existing page on download
Downloads were changed to be written to the system temp folder instead of directly into the final download folder.
This broke the check for existing pages, because only the temp folder was checked instead of both the temp and the final download folder.
Regression introduced with 1c9a139006
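A minimal sketch of the intended check, with hypothetical names (not the actual ChapterFilesProvider code):
```kotlin
import java.io.File

// A page counts as already downloaded if it exists in either the temp download
// folder or the final download folder.
fun isPageDownloaded(tempFolder: File, downloadFolder: File, fileName: String): Boolean =
    File(tempFolder, fileName).exists() || File(downloadFolder, fileName).exists()
```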
* Properly check for already existing downloaded pages
The previous check always evaluated to false because the file extension of the page file is unknown and thus missing from the constructed file path
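A minimal sketch of an extension-agnostic lookup, assuming a hypothetical page naming scheme:
```kotlin
import java.io.File

// The extension is only known once the page was downloaded, so an exact path check
// like File(folder, "00001").exists() never matches "00001.png"; compare base names instead.
fun folderContainsPage(folder: File, pageBaseName: String): Boolean =
    folder.listFiles()?.any { it.nameWithoutExtension == pageBaseName } ?: false
```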
* Cleanup cache download folder
* Update test/server-reference file
* Properly handle re-uploaded chapters in auto download of new chapters
Re-uploaded chapters that cannot be handled (i.e. with different chapter numbers) could potentially have prevented auto downloads because they were considered unread.
Additionally, they would not have been considered for download because they did not have a higher chapter number than the latest chapter that existed before the chapter list fetch
* Add option to ignore re-uploads for auto downloads
* Extract check for manga category download inclusion
* Extract logic to get new chapter ids to download
* Simplify manga category download inclusion check
If the DEFAULT category does not exist, someone has messed with the database and it is effectively corrupted
* Add mutation to fetch the latest track data from the tracker
* Update Track.kt
---------
Co-authored-by: Mitchell Syer <Syer10@users.noreply.github.com>
* Extract unbinding track into function
* Introduce new unbind mutation
* Add option to delete track binding on track service
---------
Co-authored-by: Mitchell Syer <Syer10@users.noreply.github.com>
Triggering the progress update on the server side does not work because the client needs to receive the mutation result; otherwise, the client's cache gets outdated
* Update lastReadChapter on bind in case it's greater than remote
* Update lastReadChapter on chapter read in case it's greater than remote
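A minimal sketch of the comparison, with hypothetical tracker types: the remote progress only gets updated when the local progress is ahead, so it is never downgraded.
```kotlin
data class RemoteTrack(val lastChapterRead: Double)

interface Tracker {
    suspend fun update(track: RemoteTrack): RemoteTrack
}

suspend fun syncReadProgress(
    tracker: Tracker,
    remote: RemoteTrack,
    localLastRead: Double,
): RemoteTrack =
    if (localLastRead > remote.lastChapterRead) {
        // Local progress is ahead of the tracker, push it.
        tracker.update(remote.copy(lastChapterRead = localLastRead))
    } else {
        // Keep the remote value, never lower it.
        remote
    }
```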
* [Logging] Improve logs
* Extract thumbnail url refresh into function
* Remove incorrect non-null assertion
According to the typing, there is no guarantee that fetching a manga from the source provides a thumbnail url
* Refresh manga thumbnail url on 404 error
* Refresh manga thumbnail url on unreachable origin cloudflare errors
* Set updater running flag to false only at the end of the update
To clear the data loader cache properly, the update status subscription requires the update to be running.
For the last completed manga update, the flag was set to false immediately, which prevented the dataloader cache from getting cleared and caused outdated data to be returned for the last updated manga
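A minimal sketch of the ordering, using hypothetical status types rather than the actual updater: the running flag is only flipped after the last manga update has been published.
```kotlin
import kotlinx.coroutines.flow.MutableStateFlow

data class UpdateStatus(val running: Boolean, val updatedMangaIds: List<Int>)

suspend fun runUpdate(
    statusFlow: MutableStateFlow<UpdateStatus>,
    mangaIds: List<Int>,
    updateManga: suspend (Int) -> Unit,
) {
    statusFlow.value = UpdateStatus(running = true, updatedMangaIds = emptyList())
    val updated = mutableListOf<Int>()
    for (id in mangaIds) {
        updateManga(id)
        updated += id
        // Publish progress while the update is still marked as running, so subscribers
        // can clear their dataloader caches for this manga.
        statusFlow.value = UpdateStatus(running = true, updatedMangaIds = updated.toList())
    }
    // Only mark the update as finished after the last manga was published.
    statusFlow.value = UpdateStatus(running = false, updatedMangaIds = updated.toList())
}
```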
* Correctly clear the "MangaForIdsDataLoader" cache
The cache keys for this dataloader are lists of manga ids.
Thus, it is not possible to clear only the cached data of the provided manga id; instead, every cache entry whose key includes the manga id has to be cleared
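A minimal sketch of that clearing logic over a hypothetical cache map (the real dataloader cache API differs):
```kotlin
class MangaForIdsCache {
    private val cache = mutableMapOf<List<Int>, Any>()

    fun put(mangaIds: List<Int>, value: Any) {
        cache[mangaIds] = value
    }

    fun clearForManga(mangaId: Int) {
        // Every entry keyed by a list that contains the manga id has to go, since
        // clearing by the single id alone would not match any key.
        cache.keys.filter { mangaId in it }.forEach { cache.remove(it) }
    }
}
```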
* Ensure that manga dataloader caches get cleared during global update
The "StateFlow" drops value updates in case the collector is too slow, which was the case for the "UpdateSubscription".
This caused the dataloader cache to not get properly cleared because the running state of the update was already set to false.
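For illustration, the difference between the two flow types (not necessarily the exact fix applied here): a StateFlow conflates values, so a slow collector can miss intermediate updates, while a buffered SharedFlow keeps them until they are consumed.
```kotlin
import kotlinx.coroutines.channels.BufferOverflow
import kotlinx.coroutines.flow.MutableSharedFlow

data class UpdateEvent(val mangaId: Int, val running: Boolean)

// Slow collectors still receive every emitted event; emit() suspends instead of
// dropping once the buffer is full.
val updateEvents = MutableSharedFlow<UpdateEvent>(
    extraBufferCapacity = 64,
    onBufferOverflow = BufferOverflow.SUSPEND,
)
```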
* Log "Browser::openInBrowser" errors
The error was never written to the log file.
It was only visible in the console
* Remove "printStackTrace" usage with logs
The local manga thumbnail got "downloaded" to the thumbnail download folder of in-library manga.
Since the "thumbnail url" of a local source manga never changes, the "downloaded" manga thumbnail never got updated
Regression introduced with f2dd67d87f
* Remove download ahead logic
This is unnecessary on the server side and should just be done by the client
* Rename "autoDownloadAheadLimit" to "autoDownloadNewChaptersLimit"
* Deprecate the old field
* Update Stable WebUI
* Update Stable WebUI
---------
Co-authored-by: Syer10 <syer10@users.noreply.github.com>
* Run functions for specific webui flavor
* Set default flavor of WebUIFlavor enum
* Consider flavor of served webUI when checking for update
If the flavor was changed and the served webUI files are still from the previous flavor, the update check could incorrectly detect that no update is available
* Skip validation during initial setup
If the initial setup is triggered because of an invalid local webUI, doing the validation again is unnecessary
* Handle changed flavor on startup
When a socket got disconnected, the session state of the subscriptions did not get cleaned up correctly.
The active operations were closed but not removed, so when the client tried to reconnect, the server incorrectly detected an active subscription for the operation and immediately terminated the subscription
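A minimal sketch of the cleanup, with a hypothetical session type (not the actual GraphQL-over-WebSocket handler):
```kotlin
import java.util.concurrent.ConcurrentHashMap

class SubscriptionSession {
    private val activeOperations = ConcurrentHashMap<String, AutoCloseable>()

    fun add(operationId: String, subscription: AutoCloseable): Boolean =
        // Returns false when the operation id is already in use.
        activeOperations.putIfAbsent(operationId, subscription) == null

    fun onDisconnect() {
        activeOperations.values.forEach { it.close() }
        // Removing the entries is the crucial part: otherwise a reconnect that reuses
        // the same operation id gets rejected as a duplicate subscription.
        activeOperations.clear()
    }
}
```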
If there is no internet connection, it is not possible to verify the webUI files, which caused the server to fail to start up.
Instead, the existing webUI should just be used
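A minimal sketch of the fallback, with hypothetical helpers:
```kotlin
fun setupWebUi(
    verifyFiles: () -> Boolean,
    downloadWebUi: () -> Unit,
    log: (String) -> Unit,
) {
    val valid = try {
        verifyFiles()
    } catch (e: Exception) {
        // e.g. no internet connection: keep serving the existing webUI instead of failing.
        log("Could not verify the webUI files (${e.message}), using the existing webUI")
        return
    }
    if (!valid) {
        downloadWebUi()
    }
}
```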
* Remove log of mangas to update
This logged the full manga data objects in the list, including information that is not needed (e.g. the description of a manga).
Once a manga gets updated via the updater, it gets logged, which should be enough
* Include manga id in updater log
* Use "toString" to log mangas
* Change "HttpLoggingInterceptor" level to "BASIC"
Was unintentionally merged with d658e07583
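For reference, a minimal OkHttp client setup with the interceptor at BASIC level (which logs only request and response lines, not headers or bodies); this is a sketch, not the project's actual client configuration:
```kotlin
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor

fun buildHttpClient(): OkHttpClient =
    OkHttpClient.Builder()
        .addInterceptor(
            HttpLoggingInterceptor().apply {
                // BASIC: log method, URL, response code and body size only.
                level = HttpLoggingInterceptor.Level.BASIC
            },
        )
        .build()
```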