* Log extension load failure
When an extension could not be loaded, the error was never logged, making it impossible to analyse what went wrong
* Log exception in "GetCatalogueSource::getCatalogueSourceOrNull"
When "GetCatalogueSource::getCatalogueSource" threw an exception, it was never logged here
When the file could not be retrieved, the page retrieval simply failed and was not retried.
In the case of the downloader, the chapter download just failed three times in a row and was then aborted
* Use the Launcher
* Test launcher
* a
* Revert "a"
This reverts commit eb8667e439.
* Move launcher
* Test launcher 2
* Update dex2jar
* Fixes
* Use regular java with deb install
* Improve linux installs
* Revert "Test launcher 2"
This reverts commit 265825808f.
* Revert "Test launcher"
This reverts commit 7ff83c7ab9.
* Rename functions
* Require version to be passed to "downloadVersion"
Makes it possible to download versions other than the latest compatible one while keeping the retry functionality
* Fall back to downloading the bundled webUI in case it's missing
If no download was possible and falling back to the bundled version also failed because it does not exist, try to download the bundled webUI's version as a last resort (see the sketch below).
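A rough sketch of the resulting fallback chain; all names here are assumptions for illustration, not the project's actual API:

```kotlin
// All names in this sketch are assumptions, not the project's actual API
const val BUNDLED_WEBUI_VERSION = "r1000"

fun isLocalWebUIValid(): Boolean = false               // stub
fun downloadLatestCompatibleVersion(): Boolean = false // stub
fun extractBundledWebUI(): Boolean = false             // stub
fun downloadVersion(version: String) { /* stub */ }

fun setupWebUI() {
    if (isLocalWebUIValid()) return
    if (downloadLatestCompatibleVersion()) return
    if (extractBundledWebUI()) return
    // Last resort: the bundled archive is missing from the jar, so try to
    // download exactly the version that would have been bundled
    downloadVersion(BUNDLED_WEBUI_VERSION)
}
```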
* Handle exception of "getLatestCompatibleVersion"
* Move validation of download to actual download function
* Extract retry logic into function
* Retry every fetch up to 3 times
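A minimal sketch of what such an extracted retry helper can look like (name and signature are assumptions):

```kotlin
// Run "block" up to "retries" times; rethrow the last failure if all
// attempts failed
fun <T> executeWithRetry(retries: Int = 3, block: () -> T): T {
    var lastError: Throwable? = null
    repeat(retries) {
        try {
            return block()
        } catch (e: Exception) {
            lastError = e
        }
    }
    throw lastError ?: IllegalStateException("retries must be > 0")
}

// Usage (hypothetical): val version = executeWithRetry { fetchLatestCompatibleVersion() }
```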
* Log full exception and change log level
If, during startup, no webUI update is available but the version bundled with the server is newer than the currently used version, the bundled version should be used.
This can happen when a new server version was installed and no compatible webUI version is available
* Return actual version for "PREVIEW" in "getLatestCompatibleVersion"
In case "PREVIEW" is the latest available version, the function should immediately fetch the actual webUI version that is currently the latest released version.
Thus, the function always returns a valid version and the preview version has not to be considered anymore at other places in the code
* Ignore download failure in case local webUI version is valid
If the download failed, e.g. due to internet connection issues, the server should only fall back to another version if the local version is invalid or missing
* Change log level of download error
* Change type of sourceId in Downloader
It is unclear why it was converted to a Long, since it just got converted back to a String anyway when it was used in the Downloader
* Only stop downloads from source of the Downloader
The Downloader changed the state of all downloads, ignoring whether or not they belong to the source the Downloader is responsible for
* Remove unnecessary DownloadManager::start calls
When chapters are added to the queue, the DownloadManager starts itself
* Extract download filtering into property
* Improve Downloader logging
* Notify clients only in case Downloader was started
If nothing was done, there is nothing to notify about
* Do not start Downloaders for failed downloads
If there were failed chapter downloads in the queue, the DownloadManager still created a Downloader and started it.
This Downloader would then immediately call "onComplete" since there is no available download, which would refresh the Downloaders again, creating an infinite loop until the failed download got removed from the queue
* Retry a failed download in case it gets re-added to the queue
When a failed download that was still in the queue was added to the queue again, nothing happened.
Instead of doing nothing, the download should get retried.
This also provides the logic to easily retry a failed download by simply "adding" the chapter to the queue again.
Previously, to retry a failed download, it had to be removed from the queue and then re-added.
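The enqueue logic could then look roughly like this (a sketch; the type and function names are assumed, not the project's actual code):

```kotlin
enum class DownloadState { Queued, Downloading, Error, Finished }

class DownloadChapter(val chapterId: Long) {
    var state = DownloadState.Queued
}

class DownloadQueue {
    private val queue = mutableListOf<DownloadChapter>()

    fun enqueue(chapterId: Long) {
        val existing = queue.find { it.chapterId == chapterId }
        if (existing != null) {
            // Re-adding an already queued download now retries it in case
            // it previously failed, instead of silently doing nothing
            if (existing.state == DownloadState.Error) {
                existing.state = DownloadState.Queued
            }
            return
        }
        queue.add(DownloadChapter(chapterId))
    }
}
```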
* Rename function "unqueue" to "dequeue"
* Move "dequeue" function
* Extract dequeue logic into function
* Improve DownloadManager logging
* Override "toString" of DownloadChapter
See the Kotlin documentation ("%"/"rem" vs. "mod") for the differences.
Example of the "issue" that occurred:
mathematical: -4 % 6 = 2 (expected)
kotlin: -4 % 6 = -4 (unexpected)
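In Kotlin, "%" maps to "rem", which takes the sign of the dividend; the "mod" function takes the sign of the divisor and matches the mathematical definition:

```kotlin
fun main() {
    println(-4 % 6)       // -4: "%" is "rem" and takes the sign of the dividend
    println((-4).mod(6))  //  2: "mod" takes the sign of the divisor (mathematical modulo)
}
```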
* Trigger missed auto global update immediately on server start
If the last execution was missed, it was never scheduled immediately.
Thus, one had to wait for the next scheduled execution.
* Schedule auto global updates at a later point during the startup
If a global update was triggered immediately, the server setup had not progressed far enough yet, causing an error due to trying to use things (e.g. the database) that were not initialized
* Correctly set the "firstExecutionTime" of a "HATask"
If an initial delay is used for "Timer::scheduleAtFixedRate" (e.g. when rescheduling), the "firstExecutionTime" of the "HATask" was incorrect, since it assumed the first execution to be based on the regular interval.
This caused:
- calculations for execution times (e.g. "timeToNextExecution", "nextExecutionTime") to be incorrect
- the ordering of the "scheduledTasks" queue to be incorrect
* Add logging
* Do not modify queue during forEach loop
Modifying the queue while iterating caused a "ConcurrentModificationException" and broke the system suspension detection, since the unhandled exception cancelled the task
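A generic illustration of the bug and the fix (the actual queue and task types in the scheduler differ):

```kotlin
fun main() {
    val tasks = mutableListOf("a", "b", "c")

    // Broken: mutating the list while forEach iterates over it throws a
    // ConcurrentModificationException
    // tasks.forEach { if (it == "b") tasks.remove(it) }

    // Safe: iterate over a snapshot, or let the list remove in one pass
    tasks.toList().forEach { if (it == "b") tasks.remove(it) }
    tasks.removeAll { it == "c" }

    println(tasks) // [a]
}
```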
* Log all uncaught exceptions
When an exception is uncaught/unhandled, it only gets logged to the console and is not included in the log file.
E.g. the "HAScheduler::scheduleHibernateCheckerTask" task caused an unhandled "ConcurrentModificationException", which caused the task to get dropped.
This error was not visible in the log files, and thus it was not possible to analyse via the logs why the suspension detection stopped working
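One common way to achieve this is a default uncaught exception handler that routes everything through the logging framework (a sketch, assuming "KotlinLogging" as the logging facade):

```kotlin
import mu.KotlinLogging

private val logger = KotlinLogging.logger {}

fun setupUncaughtExceptionLogging() {
    // Every exception that no thread handles itself ends up here and is
    // therefore written to the log file as well, not just to the console
    Thread.setDefaultUncaughtExceptionHandler { thread, throwable ->
        logger.error(throwable) { "Uncaught exception on thread \"${thread.name}\"" }
    }
}
```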
* Schedule "HATask" immediately when its last execution was missed
The missed execution was never triggered
* Calculate the "HATask" "last execution time" correctly
When scheduling a task for the first time, the "first execution time" is in the future.
This time is used by all functions calculating times for this task (e.g. the next/last execution time).
If the first execution has not happened yet and the current time would have been an "execution time" based on the interval, the "hibernation detection" would trigger for this task: it would think the last execution was missed, because the calculated "last execution" lies in the future.
To prevent this, it has to be ensured that the "last execution time" is in the past.
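A sketch of the idea (name and signature assumed): derive the last execution time from the first execution time and the interval, and clamp it into the past:

```kotlin
// Calculate the most recent execution time that is guaranteed to lie in
// the past, even when the first execution is still ahead of us
fun lastExecutionTime(firstExecutionTime: Long, interval: Long, now: Long): Long {
    var last = if (now >= firstExecutionTime) {
        // Snap "now" down to the latest interval boundary
        firstExecutionTime + ((now - firstExecutionTime) / interval) * interval
    } else {
        firstExecutionTime
    }
    // The first execution may still be in the future; move back until the
    // hibernation detection can no longer mistake it for a missed execution
    while (last > now) {
        last -= interval
    }
    return last
}
```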
* Check correctly if task threshold was met
It was incorrectly considered met when the remaining time until the next execution was less than the threshold.
Instead, it has to be greater, since that means the next execution is far enough away not to cause a double execution.
Thus, the current logic did not, as intended, prevent possible double executions; instead, it made sure missed tasks only got executed when doing so would lead to double executions...
* Always trigger missed executions
The idea of using a threshold to prevent double executions when the next scheduled execution is not too far in the future does not really work with big intervals (e.g. in the range of days).
For such cases, a next execution that is still multiple days away could be considered to cause a double execution.
Decreasing the threshold does not help either, since then it would not work for small intervals.
Instead, it makes more sense to simply allow possible double executions and live with them.
If that is a problem for a specific task, the task should handle it itself.
* Rename schedule functions
* Introduce Base task for "HATask"
* Support Kotlin Timer repeated interval in HAScheduler
It is not possible to schedule a task via a cron expression to run every x hours if x is greater than 23.
To be able to do this while keeping the functionality provided by the "HAScheduler", it also has to support repeated tasks scheduled via the default Timer
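With a plain Timer, such intervals are straightforward (sketch; the surrounding "HAScheduler" wiring is omitted):

```kotlin
import java.util.Timer
import java.util.concurrent.TimeUnit
import kotlin.concurrent.scheduleAtFixedRate

fun main() {
    // "Every 30 hours" cannot be expressed as a cron expression, but it is
    // just a period for a regular Timer
    val interval = TimeUnit.HOURS.toMillis(30)
    val timer = Timer("example-scheduler", true)
    timer.scheduleAtFixedRate(delay = interval, period = interval) {
        // task body, e.g. trigger the global update
    }
}
```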
* Support global update interval greater than 23 hours
* Use "globalUpdateInterval" to disable auto updates
Gets rid of an unnecessary setting
* Setup "logback" to write to file
To be able to dynamically set the log file save location, logback has to be setup via code instead of a config file
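A minimal sketch of a programmatic logback setup with a file appender (the pattern and path handling are illustrative):

```kotlin
import ch.qos.logback.classic.LoggerContext
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.FileAppender
import org.slf4j.Logger
import org.slf4j.LoggerFactory

fun initLogback(logFilePath: String) {
    val context = LoggerFactory.getILoggerFactory() as LoggerContext

    val encoder = PatternLayoutEncoder().apply {
        this.context = context
        pattern = "%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n"
        start()
    }

    // The file location comes from the loaded config, which is why this
    // cannot live in a static logback.xml
    val fileAppender = FileAppender<ILoggingEvent>().apply {
        this.context = context
        name = "FILE"
        file = logFilePath
        setEncoder(encoder)
        start()
    }

    context.getLogger(Logger.ROOT_LOGGER_NAME).addAppender(fileAppender)
}
```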
* Log OkHttp via logback
Otherwise, its logs would only get written to the console and thus not be included in the log file
* Init logback
This has to be done after the config is loaded; otherwise, the root directory would be unknown.
The logging of the loaded config was moved to "applicationSetup", since otherwise it would not be included in the log file
The actual version of the preview was never loaded and compared to the local version.
Instead, for the preview channel, every update check incorrectly concluded that a new version is available
* Convert "WebInterfaceManager" to singleton
* Move server webUI mapping to the webUI
* Extract logic into functions
* Retry failed download
* Validate downloaded webUI
* Automatically check for webUI updates
* Add logic to support different webUIs
* Update logs
* Close ZipFile after extracting it
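Since "ZipFile" is "Closeable", Kotlin's "use" is the natural way to guarantee this (a generic illustration, not the project's exact code):

```kotlin
import java.util.zip.ZipFile

fun extractAll(archivePath: String) {
    // "use" closes the ZipFile even when the extraction throws
    ZipFile(archivePath).use { zip ->
        for (entry in zip.entries()) {
            // ... extract entry via zip.getInputStream(entry) ...
        }
    }
}
```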
* Add option to disable cleanup of backups
* Ensure a minimum backup TTL of 1 day
* Schedule the automated backup at a specific time of day
* Introduce scheduler that takes system hibernation time into account
While the system was hibernating/suspended, scheduled tasks that should have run during that time would not get triggered and thus miss an execution.
To prevent this, the new scheduler periodically checks whether the system was suspended and, if it was, triggers every task that missed its last execution
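The detection can be sketched with a fine-grained heartbeat: if far more wall-clock time passed between two ticks than the heartbeat interval, the system must have been suspended in between (names and thresholds are illustrative):

```kotlin
import java.util.Timer
import kotlin.concurrent.scheduleAtFixedRate

const val HEARTBEAT: Long = 5_000 // ms

fun startHibernationChecker(onSuspensionDetected: (suspendedForMs: Long) -> Unit) {
    var lastTick = System.currentTimeMillis()
    Timer("hibernation-checker", true).scheduleAtFixedRate(delay = HEARTBEAT, period = HEARTBEAT) {
        val now = System.currentTimeMillis()
        val elapsed = now - lastTick
        lastTick = now
        // A normal tick takes ~HEARTBEAT ms; a much larger gap means the
        // system was suspended and tasks in that window missed their run
        if (elapsed > HEARTBEAT * 2) {
            onSuspensionDetected(elapsed - HEARTBEAT)
        }
    }
}
```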
* Use new scheduler
Some extensions require assets to work properly.
Currently, the extracted jar file does not contain these assets, so those extensions would not work
The server reference config file could only be read in dev mode.
When using the built jar, the content of the file was empty, since resources in a jar are not actual files anymore, they are streams.
This caused the user config content to be replaced with an empty string.
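Inside a jar, the reference config has to be read as a stream rather than as a file (a sketch; the resource name is an assumption):

```kotlin
// Works both in dev mode and from inside the built jar, because it never
// assumes the resource is a real file on disk
fun readReferenceConfig(): String =
    object {}.javaClass.getResourceAsStream("/server-reference.conf")
        ?.bufferedReader()
        ?.use { it.readText() }
        ?: error("reference config resource not found")
```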
Currently, the "UserConfig" was created in case it was missing.
But in case settings changed (added/removed), an already existing "UserConfig" never reflected these changes and thus, was out of date
* Exclude "default" category from reordering
Due to the "default" category having been added to the database, the index based approach to reorder the categories didn't work anymore.
In case one tried to move a category to or from pos 1, the default category was selected due to being at index 0
* Normalize categories after reordering
Makes sure that the ordering is correct.
E.g. the "default" category should always be at position 0
There is a possibility that a partially downloaded file remains after an error.
In that case, the next time the image gets requested, the existing file would be handled as a successfully cached image.
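A common way to rule out this class of bug (a sketch, not the project's exact code) is to write to a temporary file and atomically move it into place only after the write succeeded:

```kotlin
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.StandardCopyOption

fun cacheImage(bytes: ByteArray, target: Path) {
    // An interrupted write only ever leaves a *.tmp file behind, which is
    // never mistaken for a valid cache entry
    val tmp = Files.createTempFile(target.parent, target.fileName.toString(), ".tmp")
    try {
        Files.write(tmp, bytes)
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING)
    } finally {
        Files.deleteIfExists(tmp)
    }
}
```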