* Correctly check for non-PREVIEW channel latest compatible version
Only the PREVIEW channel worked, since any other channel would have fetched the actual preview version and used it as the potential latest compatible version.
This was caused by incorrectly checking whether the preview version should be ignored.
* Remove PREVIEW version constant
* Consider versions of different channels
In case the current server version isn't compatible with the latest version of the selected webUI channel, versions of other channels should be considered, depending on the selected channel.
E.g. if PREVIEW is the latest available version, any version of another channel is also compatible with the PREVIEW channel.
* Restrict min compatible version to the bundled version
The oldest compatible version for a server is the bundled version; thus, any version older than the bundled one should not be considered compatible.
* Switch to new Ktlint plugin
* Add ktlintCheck to PR builds
* Run formatter
* Put ktlint version in libs toml
* Fix lint
* Use Zip4Java from libs.toml
* Add "download ahead" mutation
Checks whether the specified number of unread chapters that should be downloaded is available.
In case not enough chapters are downloaded, the missing unread chapters will get downloaded.
* Optionally pass the latest read chapter id of a manga
In case a chapter will get marked as read and also triggered the download ahead call, it's possible that the chapter hasn't been marked as read yet by the time the download ahead logic runs.
This could then cause this chapter to be included in the chapters to download.
By providing the chapter id, this chapter will be used as the latest read chapter instead, and thus not be included in the chapters to download.
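The idea can be sketched as follows. This is a minimal, hypothetical model (the `Chapter` type and function names are illustrative, not the actual server API): the optionally passed chapter id is treated as already read, so the selection excludes it even before the database update has landed.

```kotlin
// Hypothetical sketch of the download-ahead selection.
data class Chapter(val id: Int, val read: Boolean)

fun chaptersToDownloadAhead(
    chapters: List<Chapter>,          // in source order
    limit: Int,                       // how many unread chapters to keep downloaded
    latestReadChapterId: Int? = null, // the chapter whose "mark read" triggered the call
): List<Chapter> {
    // Treat the triggering chapter as read, even if the db hasn't caught up yet.
    val effective = chapters.map { chapter ->
        if (chapter.id == latestReadChapterId) chapter.copy(read = true) else chapter
    }
    return effective.filter { !it.read }.take(limit)
}
```

Without the `latestReadChapterId` hint, the triggering chapter would still count as unread and get queued for download.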
In case a newer version of the extension is installed and the extension gets manually downgraded, the version in the database is still that of the newer version.
This prevents detection of available updates, since it won't be recognized that an older version is currently installed.
Chapters were added to the queue in database index order.
In case chapters of different mangas got added to the queue, the downloads got interleaved instead of being inserted grouped per manga.
Also sort manga chapters by source order, to make sure that, in case chapters of a manga are for some reason not in the correct order in the database, they will still get downloaded in the order of the source.
When using cursors for pagination while sorting, the sort order was inverted (desc -> asc, asc -> desc).
However, this was not considered when selecting results based on the cursor.
Before/after results were always selected via greater/less than the cursor.
Due to inverting the sort order, this comparison also needs to be inverted depending on the sort order (desc or asc).
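Simplified to an in-memory list (the real code builds SQL, so this is only a sketch of the comparison logic): the greater/less-than check against the cursor has to follow the effective sort direction.

```kotlin
// Sketch: the cursor comparison must match the (possibly inverted) sort order.
enum class SortOrder { ASC, DESC }

fun pageAfterCursor(ids: List<Int>, cursor: Int, order: SortOrder): List<Int> =
    when (order) {
        // ascending: "after the cursor" means strictly greater values
        SortOrder.ASC -> ids.filter { it > cursor }.sorted()
        // descending: "after the cursor" means strictly smaller values
        SortOrder.DESC -> ids.filter { it < cursor }.sortedDescending()
    }
```

The bug was equivalent to always using the `ASC` branch's `>` comparison, regardless of the effective order.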
Since the number of chapters gets converted to be index-based, 1 available chapter would result in 0.
Due to this, in case a manga had exactly one chapter before updating the chapters, it was incorrectly detected as the initial fetch and the new chapters did not get automatically downloaded.
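One plausible shape of this off-by-one, sketched with hypothetical helper names (the actual check in the server may differ): comparing the zero-based value against 0 conflates "one chapter" with "no chapters".

```kotlin
// Sketch: converting the count to a zero-based index before the check
// makes exactly one pre-existing chapter look like an initial (empty) fetch.
fun wasInitialFetchBuggy(previousChapterCount: Int): Boolean =
    previousChapterCount - 1 <= 0 // 1 chapter -> 0 -> wrongly treated as initial fetch

fun wasInitialFetchFixed(previousChapterCount: Int): Boolean =
    previousChapterCount == 0 // only a truly empty previous list is the initial fetch
```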
Flow::stateIn has "strong equality-based conflation" (see its documentation).
Thus, it drops every value that is equal to the previous one.
Since the DownloadManager::getStatus function returns a status with a queue that contains all current "DownloadChapters" by reference, the equality check was always true.
Thus, progress changes of downloads were never sent to subscribers.
Subscribers were only notified about finished downloads (size of the queue changed) or downloader status changes.
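The pitfall can be reproduced without any coroutines machinery, since it boils down to `equals()`: two status objects wrapping the same mutable queue stay equal even after the queue's contents change in place, so equality-based conflation drops the "new" value. A snapshot copy taken at emission time breaks the false equality. The `DownloadStatus` type here is a stand-in, not the server's actual class:

```kotlin
// Sketch of why by-reference queues defeat equality-based conflation.
data class DownloadStatus(val queue: List<String>)

fun main() {
    val liveQueue = mutableListOf("chapter 1 @ 10%")

    val staleSnapshot = DownloadStatus(liveQueue.toList()) // copy before mutation
    val liveBefore = DownloadStatus(liveQueue)             // wraps the live list

    liveQueue[0] = "chapter 1 @ 90%" // progress updated in place

    val liveAfter = DownloadStatus(liveQueue)
    val freshSnapshot = DownloadStatus(liveQueue.toList())

    println(liveBefore == liveAfter)        // true  -> stateIn would conflate the update away
    println(staleSnapshot == freshSnapshot) // false -> the update gets through
}
```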
In case a download was finished, but the downloader got stopped before it was able to remove the finished download from the queue, the downloader got stuck in an endless loop of starting and pausing downloads.
This was caused by selecting the next chapter to download and then recognizing in "Downloader::step" that another chapter comes before the current one in the queue.
However, since this recognized chapter was already downloaded, the downloader selected the next queued chapter again.
It was then stuck in this loop until the finished chapter was manually removed from the queue.
* Rename "newChapters" to "updatedChapterList"
* Do not auto download new chapters of entries with unread chapters
Makes it possible to prevent unnecessary chapter downloads in case the entry hasn't yet been caught up
* Optionally limit auto new chapter downloads
* Prevent downloading new chapters for mangas not in the library
In case the user config file has to be updated, the file needs to be reset.
While doing the reset, the already loaded internal state of the config was also reset, but was never updated again.
Due to this, the internal state of the config was the default config reference until the next server startup.
Regression introduced with a31446557d.
The function always returned the PREVIEW version as the latest compatible version.
This was caused by incorrectly selecting the version from the JSON object, which resulted in the version being wrapped in '"'.
* Create manga download dir in case it's missing for cbz downloads
The directory in which the cbz file should have been saved was never created.
* Correctly copy chapter download to final download location
"renameTo" does not include the content of a directory.
Thus, it just created an empty chapter folder int the final download directory
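A hedged sketch of the fix (not the actual server code): fall back to a recursive copy plus delete when `renameTo` can't move the directory together with its contents, e.g. across filesystems.

```kotlin
// Sketch: move a chapter directory reliably, content included.
import java.io.File

fun moveDirectory(src: File, dest: File) {
    if (!src.renameTo(dest)) {
        // renameTo failed (or would leave content behind): copy, then delete.
        src.copyRecursively(dest, overwrite = true)
        src.deleteRecursively()
    }
}
```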
This information is necessary for nearly all manga requests.
It could be selected via the categories mutation, but this only works for a single manga.
It is not possible to select this information for lists of mangas without having to request all chapters for every manga in the list.
* Set graphql logs to error level
Set the log level for the loggers with the following names:
- ExecutionStrategy (spams logs with "... completing field ...")
- notprivacysafe (logs every received request up to 4 times (received, parse, validate, execute))
* Extract logic to get logger for name into function
* Add function to set log level for a logger
* Add settings to enable graphql debug logging
* Move chapter download logic to base class
* Do not reuse "FolderProvider" in "ArchiveProviders" download function
Due to reusing the "FolderProvider" to download a chapter as a cbz file, a normal chapter download folder was created.
In case the download was aborted before the cbz file was created and the folder deleted, the wrong "FileProvider" was selected the next time the chapter got downloaded, causing the chapter not to be downloaded as a cbz file.
In case e.g. no manga exists for the passed id, the query returned null.
This makes it harder to have "streamlined" error handling in the client, since these types of queries need special handling.
* Update chapter page refresh logic with logic from "ChapterMutation"
* Rename function to "getChapterDownloadReadyByIndex"
* Update "ChapterForDownload" to work with only "chapterId" being passed
* Return database chapter page list in case chapter is downloaded
In case the chapter is downloaded, fetching the chapter page info should not be needed.
It also currently breaks reading downloaded chapters while offline, since the page request will always fail without an internet connection.
* Provide last global update timestamp
* Provide skipped mangas in update status
* Extract update status logic into function
* Rename update "statusMap" to "mangaStatusMap"
* Provide info about categories in update status
* Add "uiName" to WebUI enum
* Add "Custom" WebUI to enum
* Rename "WebUI" enum to "WebUIFlavor"
* Add "WebUIInterface" enum
* Add query for server settings
* Add mutation for server settings
* Add mutation to reset the server settings
* Only update the config in case the value changed
In case the value of the config is already the same as the new value of the state flow, it is not necessary to update the config file
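A minimal sketch of this guard (the `SettingsStore` class and its members are hypothetical, not the server's config implementation): compare the current value before persisting, so setting an unchanged value skips the file write.

```kotlin
// Sketch: only persist when the value actually changed.
class SettingsStore {
    var writes = 0 // stands in for counting config file rewrites
        private set
    private val values = mutableMapOf<String, Any?>()

    fun set(key: String, value: Any?) {
        if (values.containsKey(key) && values[key] == value) return // unchanged, skip
        values[key] = value
        writes++
    }
}
```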
* Make server config value changes subscribable
* Make server config value changes subscribable - Update usage
* Add util functions to listen to server config value changes
* Listen to server config value changes - Auto backups
* Listen to server config value changes - Auto global update
* Listen to server config value changes - WebUI auto updates
* Listen to server config value changes - Javalin update ip and port
* Listen to server config value changes - Update socks proxy
* Listen to server config value changes - Update debug log level
* Listen to server config value changes - Update system tray icon
* Update config values one at a time
In case settings are changed in quick succession, it's possible that each setting update reverts the change of the previously changed setting, because the internal config hasn't been updated yet.
E.g.
1. settingA changed
2. settingB changed
3. settingA updates config file
4. settingB updates config file (internal config hasn't been updated yet with the change from settingA)
5. settingA updates internal config (settingA updated)
6. settingB updates internal config (settingB updated, settingA outdated)
Now settingA is unchanged, because settingB reverted its change while updating the config with its own new value.
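The "one at a time" fix amounts to making each update a single read-modify-write unit, so a later update always sees the previous update's result instead of a stale config. A minimal sketch with hypothetical names (the real code also persists to the config file, indicated by the comment):

```kotlin
// Sketch: serialize config updates so they can't interleave.
class ConfigUpdater {
    private val lock = Any()
    var internalConfig: Map<String, String> = emptyMap()
        private set

    fun update(key: String, value: String) = synchronized(lock) {
        // Read the latest internal config, apply the change, then publish it.
        internalConfig = internalConfig + (key to value)
        // writeConfigFile(internalConfig) // hypothetical persistence step
    }
}
```

With the lock in place, the interleaving from the six-step example above can no longer occur: step 4 cannot start before step 5 has finished.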
* Always add log interceptor to OkHttpClient
In case debug logs are disabled, the KotlinLogging log level will be set to a level above debug, and thus these logs won't get logged anyway.
* Rename "maxParallelUpdateRequests" to "maxSourcesInParallel"
* Use server setting "maxSourcesInParallel" for downloads
* Listen to server config value changes - downloads
* Always use latest server settings - Browser
* Always use latest server settings - folders
* [Test] Fix type error
This already gets called by "Chapter::fetchChapterList"; thus, it is unnecessary.
Additionally, "chapters.toList()" and "chapters.map()" have to be called in a transaction block, which they are not, and thus cause an unhandled exception, breaking the mutation.
There were cases where the last read page was greater than the max page count of a chapter.
This is not possible and is just invalid data saved in the database, possibly leading to other errors down the line.
This could happen in case the chapter was loaded at some point with e.g. 18 pages and after some time got fetched again from the source, now with fewer pages than before, e.g. 15.
If the chapter's last page was already read by that time, the last read page would be 18, while the chapter now only has 15 pages.
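The sanitization boils down to a clamp. This sketch assumes a zero-based page index (the function name is illustrative): the stored last-read page is bounded by the refetched, possibly shrunken page count.

```kotlin
// Sketch: clamp the stored last-read page into the valid zero-based range.
fun sanitizedLastPageRead(lastPageRead: Int, pageCount: Int): Int =
    lastPageRead.coerceIn(0, (pageCount - 1).coerceAtLeast(0))
```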
* Rename "DownloadedFilesProvider" to "ChaptersFilesProvider"
* Move files into sub packages
* Further abstract "DownloadedFilesProvider"
* Rename "getCachedImageResponse" to "getImageResponse"
* Extract getting cached image response into new function
* Decouple thumbnail cache and download
* Download and delete permanent thumbnails
When adding/removing a manga to/from the library, make sure the permanent thumbnail files get handled properly.
* Move thumbnail cache to actual temp folder
* Rename "mangaDownloadsRoot" to "downloadRoot"
* Move manga downloads into "mangas" subfolder
* Clear downloaded thumbnail
* Add "server" to "checkForUpdate" logic names
* Use "webUIRoot" as default path for "getLocalVersion"
* Use local version as default version for "isUpdateAvailable"
* Return the version with the webUI update check
* Update WebinterfaceManager to be async
* Add query, mutation and subscription for webUI update
* Catch error and return default error value for missing local WebUI version
* Catch error when updating to bundled webUI
In case the bundled webUI is missing, the webUI setup threw an error and made the server startup fail.
Since a local webUI exists, the error should be ignored; it's only an attempt to update to a newer webUI version.
* Extract logic to setup bundled webUI version
* Try to download a missing bundled webUI when updating to the bundled webUI
* Get rid of multiple static "assets/" usage
* Correctly add new zip entry
The name of the entry has to be a "/"-separated path, otherwise the files can't be found.
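The ZIP format mandates "/" as the path separator inside entry names regardless of the host OS, so platform-specific separators have to be normalized before adding an entry. A minimal sketch (helper name is hypothetical):

```kotlin
// Sketch: normalize a relative file path into a valid zip entry name.
fun toZipEntryName(relativePath: String): String =
    relativePath.replace('\\', '/')
```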
* Extract reorder logic into function
* Save download queue every time a download is finished
The download queue was never saved after a download finished.
This caused finished downloads to be restored on server start, which caused unnecessary "downloads" that most of the time would just finish immediately since the pages were still in the cache.
* Wait for download queue save process to be finished
Since multiple downloaders could finish at the same time, the download queue should be saved synchronously.
* Remove unnecessary download queue save trigger
This gets called every time a downloader finished downloading all chapters of its source.
Since the queue is now saved every time a download is finished, this trigger is no longer needed.
* Log extension load failure
In case the extension couldn't be loaded, the error was never logged, making it impossible to analyse what was going on.
* Log exception in "GetCatalogueSource:: getCatalogueSourceOrNull"
In case "GetCatalogueSource::getCatalogueSource" threw an error, this was never logged here
In case the file could not be retrieved, the page retrieval just failed and was never triggered again.
In the case of the downloader, the chapter download just kept failing 3 times and was then aborted.
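A bounded retry around the retrieval would address this. The helper below is a generic sketch (not the server's actual retry mechanism): it re-runs the block up to `attempts` times and rethrows the last failure only if every attempt failed.

```kotlin
// Sketch: retry a failing retrieval a bounded number of times.
fun <T> retry(attempts: Int, block: () -> T): T {
    var lastError: Throwable? = null
    repeat(attempts) {
        try {
            return block() // success: hand back the result immediately
        } catch (e: Exception) {
            lastError = e // remember the failure and try again
        }
    }
    throw lastError ?: IllegalArgumentException("attempts must be > 0")
}
```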