This information is necessary for nearly all manga requests.
It could be selected via the categories mutation, but that only works for a single manga.
It is not possible to select this information for lists of mangas without requesting all chapters of every manga in the list.
* Set graphql logs to error level
Set the log level for the loggers with the following names:
- ExecutionStrategy (spams logs with "... completing field ...")
- notprivacysafe (logs every received request up to 4 times (received, parse, validate, execute))
* Extract logic to get logger for name into function
* Add function to set log level for a logger
* Add settings to enable graphql debug logging
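The level-setting helper mentioned above boils down to something like this minimal sketch, assuming logback-classic is the logging backend (the logger names are taken from the list above; the exact graphql-java logger names may differ):

```kotlin
import ch.qos.logback.classic.Level
import ch.qos.logback.classic.Logger
import org.slf4j.LoggerFactory

// Look up a logger by name and change its level at runtime
fun setLogLevelFor(name: String, level: Level) {
    (LoggerFactory.getLogger(name) as Logger).level = level
}

fun silenceGraphQlSpam() {
    // names as mentioned in the list above; the real names may differ
    setLogLevelFor("ExecutionStrategy", Level.ERROR)
    setLogLevelFor("notprivacysafe", Level.ERROR)
}
```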
* Move chapter download logic to base class
* Do not reuse "FolderProvider" in "ArchiveProviders" download function
Due to reusing the "FolderProvider" to download a chapter as a cbz file, a normal chapter download folder was created.
In case the download was aborted before the cbz file was created and the folder was deleted, the wrong "FileProvider" was selected the next time the chapter got downloaded, causing the chapter not to be downloaded as a cbz file.
In case e.g. no manga exists for the passed id, the query returned null.
This makes it harder to have "streamlined" error handling in the client, since these types of queries need special handling.
* Update chapter page refresh logic with logic from "ChapterMutation"
* Rename function to "getChapterDownloadReadyByIndex"
* Update "ChapterForDownload" to work with only "chapterId" being passed
* Return database chapter page list in case chapter is downloaded
In case the chapter is downloaded, fetching the chapter pages info should not be needed.
This should also currently break reading downloaded chapters while offline, since the page request will always fail without an internet connection.
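A rough sketch of the intended behavior; all names here ("loadPagesFromDatabase", "fetchPagesFromSource", the data class) are placeholders, not the actual code:

```kotlin
data class ChapterDataClass(val id: Int, val downloaded: Boolean)

// placeholder implementations for illustration only
suspend fun loadPagesFromDatabase(chapterId: Int): List<String> = emptyList()
suspend fun fetchPagesFromSource(chapter: ChapterDataClass): List<String> = emptyList()

// Skip the network fetch for downloaded chapters so offline reading keeps working
suspend fun getChapterPageList(chapter: ChapterDataClass): List<String> =
    if (chapter.downloaded) loadPagesFromDatabase(chapter.id)
    else fetchPagesFromSource(chapter)
```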
* Provide last global update timestamp
* Provide skipped mangas in update status
* Extract update status logic into function
* Rename update "statusMap" to "mangaStatusMap"
* Provide info about categories in update status
* Add "uiName" to WebUI enum
* Add "Custom" WebUI to enum
* Rename "WebUI" enum to "WebUIFlavor"
* Add "WebUIInterface" enum
* Add query for server settings
* Add mutation for server settings
* Add mutation to reset the server settings
* Only update the config in case the value changed
If the config value is already the same as the new value of the state flow, it is not necessary to update the config file.
* Make server config value changes subscribable
* Make server config value changes subscribable - Update usage
* Add util functions to listen to server config value changes
* Listen to server config value changes - Auto backups
* Listen to server config value changes - Auto global update
* Listen to server config value changes - WebUI auto updates
* Listen to server config value changes - Javalin update ip and port
* Listen to server config value changes - Update socks proxy
* Listen to server config value changes - Update debug log level
* Listen to server config value changes - Update system tray icon
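The util functions mentioned above could look roughly like this sketch, assuming each config value is backed by a StateFlow ("subscribeTo" is an illustrative name, not the actual API):

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.drop
import kotlinx.coroutines.launch

// Invoke "onChange" for every new value, skipping the currently set one
fun <T> CoroutineScope.subscribeTo(flow: StateFlow<T>, onChange: suspend (T) -> Unit) {
    launch {
        flow.drop(1) // a StateFlow already skips values equal to the current one
            .collect { value -> onChange(value) }
    }
}
```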
* Update config values one at a time
When settings are changed in quick succession, it is possible that each setting update reverts the previously changed setting, because the internal config hasn't been updated yet.
E.g.
1. settingA changed
2. settingB changed
3. settingA updates config file
4. settingB updates config file (internal config hasn't been updated yet with change from settingA)
5. settingA updates internal config (settingA updated)
6. settingB updates internal config (settingB updated, settingA outdated)
Now settingA's change is lost, because settingB reverted it while writing its own new value to the config.
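One way to enforce one-at-a-time updates, sketched with a channel that applies them strictly sequentially (all names here are illustrative, not the actual implementation):

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch

class ConfigUpdater(scope: CoroutineScope = CoroutineScope(Dispatchers.IO)) {
    private var config: Map<String, String> = emptyMap()
    private val updates = Channel<(Map<String, String>) -> Map<String, String>>(Channel.UNLIMITED)

    init {
        scope.launch {
            // Updates are applied strictly one after another: each update sees
            // the result of the previous one before the file is written again
            for (update in updates) {
                config = update(config)
                persist(config)
            }
        }
    }

    fun enqueue(update: (Map<String, String>) -> Map<String, String>) {
        updates.trySend(update)
    }

    private fun persist(config: Map<String, String>) {
        // write the config file here
    }
}
```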
* Always add log interceptor to OkHttpClient
If debug logs are disabled, the KotlinLogging log level is set above debug, so these logs won't get logged anyway.
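A sketch of the idea, assuming OkHttp's logging-interceptor artifact and the older "mu" package of kotlin-logging: the interceptor is always installed, and the logging backend decides whether the messages show up anywhere.

```kotlin
import mu.KotlinLogging
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor

private val logger = KotlinLogging.logger("OkHttp")

// The interceptor is always added; whether its output actually gets logged
// is decided by the configured level of the "OkHttp" logger
val client: OkHttpClient = OkHttpClient.Builder()
    .addInterceptor(
        HttpLoggingInterceptor { message -> logger.debug { message } }
            .apply { level = HttpLoggingInterceptor.Level.BASIC },
    )
    .build()
```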
* Rename "maxParallelUpdateRequests" to "maxSourcesInParallel"
* Use server setting "maxSourcesInParallel" for downloads
* Listen to server config value changes - downloads
* Always use latest server settings - Browser
* Always use latest server settings - folders
* [Test] Fix type error
This is already called by "Chapter::fetchChapterList" and is thus unnecessary.
Additionally, "chapters.toList()" and "chapters.map()" have to be called in a transaction block, which they are not, causing an unhandled exception that breaks the mutation.
There were cases where the last page read was greater than the max page count of a chapter.
This is not possible and is just invalid data saved in the database, possibly leading to other errors down the line.
This could happen when the chapter was loaded at some point with e.g. 18 pages and, after some time, got fetched again from the source, now with fewer pages than before, e.g. 15.
If the chapter's last page had already been read by then, the last read page would still be 18, while the chapter now only has 15 pages.
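The fix boils down to clamping the stored value, roughly like this (function and field names are illustrative):

```kotlin
// Clamp the persisted last read page to the chapter's current page count
fun sanitizedLastPageRead(lastPageRead: Int, pageCount: Int): Int =
    lastPageRead.coerceAtMost((pageCount - 1).coerceAtLeast(0))
```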
* Rename "DownloadedFilesProvider" to "ChaptersFilesProvider"
* Move files into sub packages
* Further abstract "DownloadedFilesProvider"
* Rename "getCachedImageResponse" to "getImageResponse"
* Extract getting cached image response into new function
* Decouple thumbnail cache and download
* Download and delete permanent thumbnails
When adding manga to or removing them from the library, make sure the permanent thumbnail files get handled properly.
* Move thumbnail cache to actual temp folder
* Rename "mangaDownloadsRoot" to "downloadRoot"
* Move manga downloads into "mangas" subfolder
* Clear downloaded thumbnail
* Add "server" to "checkForUpdate" logic names
* Use "webUIRoot" as default path for "getLocalVersion"
* Use local version as default version for "isUpdateAvailable"
* Return the version with the webUI update check
* Update WebinterfaceManager to be async
* Add query, mutation and subscription for webUI update
* Catch error and return default error value for missing local WebUI version
* Catch error when updating to bundled webUI
In case the bundled webUI is missing, the webUI setup threw an error and made the server startup fail.
Since a local webUI exists, the error should be ignored, because it is only an attempt to update to a newer webUI version.
* Extract logic to setup bundled webUI version
* Try to download a missing bundled webUI when updating to the bundled webUI
* Get rid of multiple static "assets/" usage
* Correctly add new zip entry
The name of the entry has to be a "/" separated path, otherwise, the files can't be found.
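For illustration, deriving the entry name from a relative path with invariant separators avoids the problem on Windows (a sketch, not the actual code):

```kotlin
import java.io.File
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

fun addFileToZip(zipOut: ZipOutputStream, baseDir: File, file: File) {
    // ZIP entry names must use "/" regardless of the OS file separator;
    // "invariantSeparatorsPath" normalizes "\" to "/" on Windows
    val entryName = file.relativeTo(baseDir).invariantSeparatorsPath
    zipOut.putNextEntry(ZipEntry(entryName))
    file.inputStream().use { it.copyTo(zipOut) }
    zipOut.closeEntry()
}
```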
* Extract reorder logic into function
* Save download queue everytime a download was finished
The download queue was never saved after a download was finished.
This caused finished downloads to be restored on server start, triggering unnecessary "downloads" which most of the time would just finish immediately since the pages were still in the cache.
* Wait for download queue save process to be finished
Since multiple Downloaders could finish at the same time, the download queue has to be saved synchronously.
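A minimal way to serialize the save, sketched with a coroutine Mutex (names are illustrative):

```kotlin
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
import java.io.File

private val saveLock = Mutex()

// Concurrent callers wait here, so saves can't interleave and
// overwrite each other with stale queue snapshots
suspend fun saveDownloadQueue(file: File, serializedQueue: String) {
    saveLock.withLock {
        file.writeText(serializedQueue)
    }
}
```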
* Remove unnecessary download queue save trigger
This got called every time a Downloader finished downloading all chapters of its source.
Since the queue is now saved every time a download finishes, this trigger is not needed anymore.
* Log extension load failure
In case the extension couldn't be loaded, the error was never logged, making it impossible to analyse what was going on.
* Log exception in "GetCatalogueSource:: getCatalogueSourceOrNull"
In case "GetCatalogueSource::getCatalogueSource" threw an error, this was never logged here
In case the file could not be retrieved, the page retrieval just failed and wasn't triggered again.
In case of the downloader, the chapter download just kept failing 3 times and was aborted
* Use the Launcher
* Test launcher
* a
* Revert "a"
This reverts commit eb8667e439.
* Move launcher
* Test launcher 2
* Update dex2jar
* Fixes
* Use regular java with deb install
* Improve linux installs
* Revert "Test launcher 2"
This reverts commit 265825808f.
* Revert "Test launcher"
This reverts commit 7ff83c7ab9.
* Rename functions
* Require version to be passed to "downloadVersion"
Makes it possible to download versions other than the latest compatible one, with retry functionality.
* Fallback to downloading bundled webUI in case it's missing
In case no download was possible and the fallback to the bundled version also failed because it doesn't exist, the server tries to download the webUI version matching the bundled one as a last resort.
* Handle exception of "getLatestCompatibleVersion"
* Move validation of download to actual download function
* Extract retry logic into function
* Retry every fetch up to 3 times
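The retry helper could look roughly like this (the actual function shape may differ):

```kotlin
// Retry a suspending block up to "attempts" times, rethrowing the last failure
suspend fun <T> retryOnFailure(attempts: Int = 3, block: suspend () -> T): T {
    var lastError: Throwable? = null
    repeat(attempts) {
        try {
            return block()
        } catch (e: Exception) {
            lastError = e
        }
    }
    throw lastError!!
}
```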
* Log full exception and change log level
If no webUI update was available on startup but the bundled version of the server is newer than the currently used version, the bundled version should be used.
This can be the case when a new server version was installed and no compatible webUI version is available.
* Return actual version for "PREVIEW" in "getLatestCompatibleVersion"
In case "PREVIEW" is the latest available version, the function should immediately fetch the actual webUI version that is currently the latest released version.
Thus, the function always returns a valid version and the preview version no longer has to be considered at other places in the code.
* Ignore download failure in case local webUI version is valid
In case the download failed, e.g. due to internet connection issues, the server should only fall back to another version if the local version is invalid or missing.
* Change log level of download error
* Change type of sourceId in Downloader
It's unclear why it was converted to Long, since it just got converted back to String anyway when used in the Downloader.
* Only stop downloads from source of the Downloader
The downloader just changed the state of all downloads, ignoring whether or not they belong to the source the Downloader is responsible for.
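Conceptually, the fix is a filter on the Downloader's own source, as in this sketch with illustrative names:

```kotlin
data class Download(val sourceId: String, var stopped: Boolean = false)

// Only stop the downloads that belong to this Downloader's source
fun stopDownloadsOf(sourceId: String, queue: List<Download>) {
    queue.filter { it.sourceId == sourceId }
        .forEach { it.stopped = true }
}
```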
* Remove unnecessary DownloadManager::start calls
In case chapters were added to the queue the DownloadManager will start itself
* Extract download filtering into property
* Improve Downloader logging
* Notify clients only in case Downloader was started
In case nothing was done there is nothing to notify about
* Do not start Downloaders for failed downloads
In case there were failed chapter downloads in the queue the DownloadManager still created a Downloader and started it.
This Downloader would then immediately call "onComplete", since there is no available download, which would refresh the Downloaders again, creating an infinite loop until the failed download got removed from the queue.
* Retry a failed download in case it gets re-added to the queue
When a failed download that was still in the queue got added to the queue again, nothing happened.
Instead of doing nothing, the download should get retried.
Thus, it also provides the logic to easily retry a failed download by just "adding" the chapter to the queue again.
Currently, to retry a failed download, it has to be removed from the queue and then re-added.
* Rename function "unqueue" to "dequeue"
* Move "dequeue" function
* Extract dequeue logic into function
* Improve DownloadManager logging
* Override "toString" of DownloadChapter
See the Kotlin documentation ("%"/rem vs. mod) for the differences.
Example of the "issue" that occurred:
mathematical: -4 % 6 = 2 (expected)
kotlin: -4 % 6 = -4 (unexpected)
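In Kotlin, "%" is rem and takes the sign of the dividend, while "Int.mod" gives the mathematical result:

```kotlin
fun main() {
    println(-4 % 6)      // -4 (rem: takes the sign of the dividend)
    println((-4).mod(6)) //  2 (mod: takes the sign of the divisor)
}
```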
* Trigger missed auto global update immediately on server start
In case the last execution was missed, it was never immediately scheduled.
Thus, one had to wait for the next scheduled execution.
* Schedule auto global updates at a later point during the startup
In case a global update was triggered immediately, the server setup hadn't progressed far enough, causing an error due to trying to use things (e.g. the database) that weren't initialized yet.
* Correctly set the "firstExecutionTime" of a "HATask"
In case an initial delay is used for "Timer::scheduleAtFixedRate" (e.g. when rescheduling) then the "firstExecutionTime" of the "HATask" was incorrect, since it considered the first execution to be based on the actual interval.
This caused
- calculations for execution times (e.g. "timeToNextExecution", "nextExecutionTime") to be incorrect
- the ordering of the "scheduledTasks" queue to be incorrect
* Add logging
* Do not modify queue during forEach loop
Caused a "ConcurrentModificationException" and broke the system suspension detection due to the unhandled exception canceling the task
* Log all uncaught exceptions
In case an exception is uncaught/unhandled, it only gets logged in the console, but is not included in the log file.
E.g. the "HAScheduler::scheduleHibernateCheckerTask" task caused an unhandled "ConcurrentModificationException" which caused the task to get dropped.
This error could not be seen in the log files, and thus analysing why the suspension detection stopped working was not possible via the logs.
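This can be achieved with a default uncaught exception handler that routes through the logging framework, e.g. (a sketch using kotlin-logging's "mu" package):

```kotlin
import mu.KotlinLogging

private val logger = KotlinLogging.logger {}

fun registerGlobalExceptionHandler() {
    Thread.setDefaultUncaughtExceptionHandler { thread, throwable ->
        // Goes through logback, so the exception also ends up in the log file
        logger.error(throwable) { "uncaught exception in thread ${thread.name}" }
    }
}
```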
* Schedule "HATask" immediately when its last execution was missed
The missed execution was never triggered
* Calculate the "HATask" "last execution time" correctly
When scheduling a task for the first time, the "first execution time" is in the future.
This time is used by all functions calculating times for this task (e.g. next/last execution time).
In case the first execution hasn't happened yet and the current time would have been an "execution time" based on the interval, the "hibernation detection" would trigger for this task: it would think the last execution was missed, because the "last execution" lies in the future.
To prevent this, it has to be ensured that the "last execution time" is in the past.
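A sketch of that clamping under these assumptions (the actual calculation in the "HAScheduler" may differ):

```kotlin
// Derive the most recent interval-aligned execution time that is not in
// the future, even when the first execution is still ahead of "now"
fun lastExecutionTime(firstExecutionTime: Long, interval: Long, now: Long): Long =
    if (firstExecutionTime > now) {
        val missingIntervals = (firstExecutionTime - now) / interval + 1
        firstExecutionTime - missingIntervals * interval
    } else {
        val passedIntervals = (now - firstExecutionTime) / interval
        firstExecutionTime + passedIntervals * interval
    }
```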
* Check correctly if task threshold was met
It was incorrectly considered to be met when the remaining time until the next execution was less than the threshold.
Instead, it has to be greater, since that would mean the next execution is far enough away to not trigger a double execution.
Thus, the current logic does not, as intended, prevent possible double executions, and instead makes sure to only execute missed tasks when doing so will lead to double executions...
* Always trigger missed executions
The idea to have a threshold to prevent double executions in case the next scheduled execution isn't too far in the future doesn't really work with big intervals (e.g. in the days range).
For such cases, a next execution that is still multiple days away could be considered to cause double executions.
Decreasing the threshold doesn't really work either, since then it wouldn't work for small intervals.
Instead, it makes more sense to just allow possible double executions and to just live with it.
In case it would be a problem for a specific task, the task should handle this issue itself.
* Rename schedule functions
* Introduce Base task for "HATask"
* Support kotlin Timer repeated interval in HAScheduler
It's not possible to schedule a task via a cron expression to run every x hours when the set hours are greater than 23.
To be able to do this and still keep the functionality provided by the "HAScheduler", it also has to support repeated tasks scheduled via the default Timer.
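Kotlin's stdlib extension on "java.util.Timer" handles such intervals directly, e.g. (a sketch; "scheduleEvery" is an illustrative name):

```kotlin
import java.util.Timer
import java.util.concurrent.TimeUnit
import kotlin.concurrent.scheduleAtFixedRate

fun scheduleEvery(hours: Long, action: () -> Unit): Timer {
    val interval = TimeUnit.HOURS.toMillis(hours) // works fine for > 23 hours
    return Timer().apply {
        scheduleAtFixedRate(delay = interval, period = interval) { action() }
    }
}
```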
* Support global update interval greater than 23 hours
* Use "globalUpdateInterval" to disable auto updates
Gets rid of an unnecessary setting
* Setup "logback" to write to file
To be able to dynamically set the log file save location, logback has to be set up via code instead of a config file.
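A condensed sketch of a programmatic logback setup, assuming logback-classic; the log file path parameter stands in for the dynamically configured location:

```kotlin
import ch.qos.logback.classic.Level
import ch.qos.logback.classic.Logger
import ch.qos.logback.classic.LoggerContext
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.FileAppender
import org.slf4j.LoggerFactory

fun setupLogbackFileLogging(logFilePath: String) {
    val loggerContext = LoggerFactory.getILoggerFactory() as LoggerContext

    val encoder = PatternLayoutEncoder().apply {
        context = loggerContext
        pattern = "%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n"
        start()
    }

    val fileAppender = FileAppender<ILoggingEvent>().apply {
        context = loggerContext
        file = logFilePath // only known after the config was loaded
        this.encoder = encoder
        start()
    }

    (LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME) as Logger).apply {
        level = Level.INFO
        addAppender(fileAppender)
    }
}
```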
* Log OkHttp via logback
Otherwise, the logs would only get written to the console and thus, not be included in the log file
* Init logback
Has to be done after the config was loaded, otherwise the root directory would be unknown.
Moved the log of the loaded config to "applicationSetup", since otherwise it would not be included in the log file.