A manga can be bound to a tracker even though no tracker search result exists.
This happens when e.g. restoring a backup that includes track bindings for which a tracker search was never performed.
In that case, trying to e.g. copy the binding to another manga would make the mutation fail because no search result is found.
These cases can be handled by additionally checking the TrackRecordTable to get the necessary track info, as sketched below.
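A minimal sketch of such a fallback, assuming Exposed's newer query DSL; the table layout, column names, and cache shape are assumptions, not the actual Suwayomi implementation:

```kotlin
import org.jetbrains.exposed.dao.id.IntIdTable
import org.jetbrains.exposed.sql.*
import org.jetbrains.exposed.sql.transactions.transaction

// Hypothetical table layout; the real TrackRecordTable has more columns.
object TrackRecordTable : IntIdTable() {
    val mangaId = integer("manga_id")
    val trackerId = integer("tracker_id")
    val remoteId = long("remote_id")
    val title = varchar("title", 512)
}

data class TrackInfo(val remoteId: Long, val title: String)

// Use the cached tracker search result if present; otherwise fall back to
// the persisted track record (e.g. after a backup restore without a search).
fun resolveTrackInfo(
    searchResults: Map<Long, TrackInfo>,
    mangaId: Int,
    trackerId: Int,
): TrackInfo? = transaction {
    TrackRecordTable
        .selectAll()
        .where { (TrackRecordTable.mangaId eq mangaId) and (TrackRecordTable.trackerId eq trackerId) }
        .firstOrNull()
        ?.let { row ->
            searchResults[row[TrackRecordTable.remoteId]]
                ?: TrackInfo(row[TrackRecordTable.remoteId], row[TrackRecordTable.title])
        }
}
```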
* Update to exposed-migrations v3.5.0
* Update to kotlin-logging v7.0.0
* Update to exposed v0.46.0
* Update to exposed v0.47.0
* Update to exposed v0.55.0
* Update to exposed v0.56.0
* Update to exposed v0.57.0
* Update graphqlkotlin to v8
* Go back to JsonMapper
* Add context to data loaders
* Compile fixes
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Syer10 <syer10@users.noreply.github.com>
* Properly set download update type on exceptions
* Always send FINISHED download update to client for deprecated subscription
By the time the status was sent to the client, the finished download item had already been removed from the queue, so the client never received the latest status and was left with an outdated cache.
Regression introduced with 168b76cb0c
* Validate setting values on mutation
* Handle invalid negative setting values
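A minimal sketch of what such validation could look like; the setting names and allowed ranges below are assumptions, not the actual server settings:

```kotlin
// Setting names and bounds are illustrative only. `require` throws
// IllegalArgumentException, which surfaces as a GraphQL mutation error.
fun validateSettings(maxSourcesInParallel: Int?, autoBackupTTL: Int?) {
    maxSourcesInParallel?.let {
        require(it in 1..20) { "maxSourcesInParallel must be in 1..20, was $it" }
    }
    autoBackupTTL?.let {
        require(it >= 0) { "autoBackupTTL must not be negative, was $it" }
    }
}
```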
* Ensure at least one source is downloading at all times
* Prevent possible IllegalArgumentException
The "serverConfig.maxSourcesInParallel" value could have changed after the if-condition
* Emit only download changes instead of full status
The download subscription emitted the full download status. Depending on the queue size, this took very long because GraphQL subscriptions do not support data loader batching, so the resolver ran into the n+1 problem.
* Rename "DownloadManager#status" to "DownloadManager#updates"
* Add initial queue to download subscription type
Adds the current queue at the time the initial message is sent.
This field is null for all messages after the initial one.
* Optionally limit and omit download updates
To prevent the n+1 data loader issue, the maximum number of updates included in the download subscription can be limited.
This circumvents the problem; the latest download status should instead be (re-)fetched via the download status query, which is not affected by it.
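A sketch of the resulting payload shape, with type and field names assumed from the description rather than taken from the actual schema:

```kotlin
// Assumed payload shape for the download updates subscription.
data class DownloadUpdate(
    val chapterId: Int,
    val state: String,
    val progress: Float,
)

data class DownloadUpdates(
    // The full queue; only present on the first message of the
    // subscription, null on every following message.
    val initialQueue: List<DownloadUpdate>?,
    // Only the entries that changed since the last message; capped by the
    // subscription's update limit.
    val updates: List<DownloadUpdate>?,
    // True when updates were dropped because of the limit; the client
    // should then re-fetch the download status query.
    val omittedUpdates: Boolean,
)
```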
* Formatting
* Update graphqlkotlin to v6.8.5
* Replace Jackson with Kotlinx.Serialization where possible
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Syer10 <syer10@users.noreply.github.com>
* Keep up to 31 log files
On average, one log file gets created per day, so keeping 31 files stores roughly one month of logs
* Decrease total log files size to 100 MB
* Make log appender settings configurable
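A hedged sketch of wiring those limits into a programmatic logback setup; the file names, pattern, and defaults are assumptions, not the actual server config:

```kotlin
import ch.qos.logback.classic.LoggerContext
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.rolling.RollingFileAppender
import ch.qos.logback.core.rolling.TimeBasedRollingPolicy
import ch.qos.logback.core.util.FileSize
import org.slf4j.LoggerFactory

// maxFiles and totalSizeCap would come from the (now configurable) settings.
fun buildFileAppender(maxFiles: Int = 31, totalSizeCap: String = "100MB"): RollingFileAppender<ILoggingEvent> {
    val loggerContext = LoggerFactory.getILoggerFactory() as LoggerContext

    val encoder = PatternLayoutEncoder().apply {
        context = loggerContext
        pattern = "%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger{36} - %msg%n"
        start()
    }

    val appender = RollingFileAppender<ILoggingEvent>().apply {
        context = loggerContext
        this.encoder = encoder
        file = "logs/application.log"
    }

    val policy = TimeBasedRollingPolicy<ILoggingEvent>().apply {
        context = loggerContext
        setParent(appender)
        fileNamePattern = "logs/application.%d{yyyy-MM-dd}.log"
        maxHistory = maxFiles // one file per day on average, ~one month of logs
        setTotalSizeCap(FileSize.valueOf(totalSizeCap))
        start()
    }

    appender.rollingPolicy = policy
    appender.start()
    return appender
}
```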
* feat(comicinfo): add date fields to comic info
These fields are parsed by Komga, Kavita, and other library management software, so the chapter's date is available there as well.
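The ComicInfo.xml schema defines Year, Month, and Day elements; the Kotlin model below is a hypothetical sketch, not the actual class in the project:

```kotlin
import kotlinx.serialization.Serializable
import java.time.Instant
import java.time.ZoneOffset

// Hypothetical model; @Serializable is for illustration, actual XML output
// needs an XML serialization format on top.
@Serializable
data class ComicInfo(
    val title: String? = null,
    val year: Int? = null,   // ComicInfo.xml "Year"
    val month: Int? = null,  // ComicInfo.xml "Month"
    val day: Int? = null,    // ComicInfo.xml "Day"
)

// Fill the date fields from a chapter's upload timestamp (epoch millis).
fun ComicInfo.withUploadDate(uploadDateMillis: Long): ComicInfo {
    val date = Instant.ofEpochMilli(uploadDateMillis).atZone(ZoneOffset.UTC).toLocalDate()
    return copy(year = date.year, month = date.monthValue, day = date.dayOfMonth)
}
```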
* refactor: improve code readability
Co-authored-by: Mitchell Syer <Syer10@users.noreply.github.com>
---------
Co-authored-by: Mitchell Syer <Syer10@users.noreply.github.com>
This PR:
https://github.com/FlareSolverr/FlareSolverr/pull/1300
It solves a lot of issues with challenges not being solved; however, the returned cookies don't include path, httpOnly, secure, and sameSite.
Making these fields optional should work for both versions of FlareSolverr, as sketched below.
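A minimal sketch of the deserialization model, assuming kotlinx.serialization; the field names follow the FlareSolverr cookie JSON, while the type name and defaults are assumptions:

```kotlin
import kotlinx.serialization.Serializable

// Cookie model with the fields made optional so responses from both
// FlareSolverr versions deserialize without errors.
@Serializable
data class FlareSolverCookie(
    val name: String,
    val value: String,
    val domain: String? = null,
    // Older FlareSolverr versions may omit these entirely:
    val path: String? = null,
    val httpOnly: Boolean? = null,
    val secure: Boolean? = null,
    val sameSite: String? = null,
)
```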
* Launch missed auto backup task in background
* Launch missed auto global update task in background
* Launch missed auto webui update check task in background
In case a manga that has not been initialized yet gets added to the library, the server should try to initialize it.
Since it's not an error to have uninitialized manga in the library, this can be done in the background via the updater and the client receives the updated data via the update subscription.
They were only initialized when the setting to refresh manga metadata during an update was enabled.
However, this should always be done for uninitialized manga, regardless of the setting.
06bfc33e72 prevented uninitialized manga from getting filtered out, but it did not ensure that the manga actually get initialized.
* Properly check for first page in cbz files
The download check for cbz files only checked whether the archive existed but didn't verify that a first page is present.
* Streamline getImageImpl of ChapterDownloadProviders
* Exclude comic info file from page list
In case the download folder did not contain any page files but only the comic info file, the download check incorrectly detected it as the first page.
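A hedged sketch of such a check; the helper name and image extension list are assumptions:

```kotlin
import java.io.File
import java.util.zip.ZipFile

private val IMAGE_EXTENSIONS = setOf("jpg", "jpeg", "png", "gif", "webp")

// A chapter only counts as downloaded if the archive actually contains a
// page image; a merely existing file, or one that only holds
// ComicInfo.xml, is not enough.
fun cbzHasFirstPage(archive: File): Boolean {
    if (!archive.isFile) return false
    ZipFile(archive).use { zip ->
        for (entry in zip.entries()) {
            if (entry.isDirectory) continue
            if (entry.name.equals("ComicInfo.xml", ignoreCase = true)) continue
            if (entry.name.substringAfterLast('.').lowercase() in IMAGE_EXTENSIONS) return true
        }
    }
    return false
}
```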
* Add logging to ChapterForDownload#asDownloadReady
Manga can be added to the library while they have not been initialized yet.
In this case, depending on the manga exclusion setting, they will never be updated automatically unless they get refreshed once manually.
* Persist page count during chapter list update
In case a downloaded chapter got deleted during a chapter list update, an attempt was made to preserve its download status.
However, when the status could be preserved, the page count was lost, so the chapter ended up marked as downloaded with a page count of -1.
* Mark downloaded chapters without page count as not downloaded
* Prevent adding duplicated chapters into the db
It's possible that the source returns a list containing chapters with the same URL.
Once such duplicated chapters have been added, they are no longer removed as long as a chapter with the same URL is present in the fetched chapter list, even if the duplicated chapter itself no longer exists on the source.
* Drop duplicated chapters from database table
* Add unique constraint to chapter table
This completely prevents duplicated chapters from being added to the database: once a duplicated chapter has been added, it does not get removed again as long as a chapter with the same URL is included in the requested source chapter list.
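A minimal sketch of such a constraint with Exposed, using assumed column names:

```kotlin
import org.jetbrains.exposed.dao.id.IntIdTable

// Column names are assumptions; the real ChapterTable has more columns.
object ChapterTable : IntIdTable() {
    val url = varchar("url", 2048)
    val manga = integer("manga")

    init {
        // One chapter per (manga, url); the migration must first drop the
        // already-duplicated rows, otherwise creating the index fails.
        uniqueIndex(manga, url)
    }
}
```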
The automated backup cleanup simply deleted every file (recursively, in subfolders as well) in the configured folder if it was older than the configured backup TTL.
This made it impossible to store the automated backups in a folder containing other files.
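A sketch of a safer cleanup, under the assumption that automated backups follow a recognizable file name pattern (the regex below is illustrative, not the real naming scheme):

```kotlin
import java.io.File
import java.util.concurrent.TimeUnit

// Illustrative pattern; the real automated backup file names may differ.
private val AUTO_BACKUP_REGEX = Regex("""backup_\d{4}-\d{2}-\d{2}.*""")

// Only delete files in the folder itself (not in subfolders) whose names
// match the backup pattern and that are older than the TTL.
fun cleanupAutomatedBackups(folder: File, ttlDays: Long) {
    val cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(ttlDays)
    folder.listFiles()
        ?.filter { it.isFile && it.name.matches(AUTO_BACKUP_REGEX) && it.lastModified() < cutoff }
        ?.forEach { it.delete() }
}
```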
* Remove code duplication
* Remove unnecessary functions
* Simplify filtering for multiple values in queries
Makes it easier to filter for multiple values at once without having to nest filters with multiple "and".
e.g.
```gql
query MyQuery {
  mangas(
    filter: {genre: {includesInsensitive: "action"}, and: {genre: {includesInsensitive: "adventure"}, and: { ... }}}
  ) {
    nodes {
      id
    }
  }
}
```
can be simplified to
```gql
query MyQuery {
  mangas(
    filter: {genre: {includesInsensitive: ["action", "adventure", ...]}}
  ) {
    nodes {
      id
    }
  }
}
```
* Add filter for matching "any" value in list
Makes it easier to filter for entries that match any value without having to nest filters with multiple "or".
e.g.
```gql
query MyQuery {
  mangas(
    filter: {genre: {includesInsensitiveAny: ["action", "adventure", ...]}}
  ) {
    nodes {
      id
    }
  }
}
```
instead of
```gql
query MyQuery {
  mangas(
    filter: {genre: {includesInsensitive: "action"}, or: {genre: {includesInsensitive: "adventure"}, or: { ... }}}
  ) {
    nodes {
      id
    }
  }
}
```
* Add util function to apply "andWhere/All/Any"
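Such a util could look like the following Exposed sketch; the function names are assumptions based on the commit title:

```kotlin
import org.jetbrains.exposed.sql.*

// AND one condition per value: the row must match all of them.
fun <T> Query.andWhereAll(
    values: List<T>,
    condition: SqlExpressionBuilder.(T) -> Op<Boolean>,
): Query = values.fold(this) { query, value -> query.andWhere { condition(value) } }

// OR the per-value conditions, then AND the combined clause: the row must
// match at least one of them.
fun <T> Query.andWhereAny(
    values: List<T>,
    condition: SqlExpressionBuilder.(T) -> Op<Boolean>,
): Query = andWhere {
    values.map { condition(it) }.reduceOrNull { acc, op -> acc or op } ?: Op.TRUE
}
```

For example, using the hypothetical ChapterTable from above, `ChapterTable.selectAll().andWhereAny(urls) { ChapterTable.url eq it }` would match chapters with any of the given URLs.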