* Fix indention
Signed-off-by: kolaente <k@knt.li>
* Add option to merge a pr right now without waiting for the checks to succeed
Signed-off-by: kolaente <k@knt.li>
* Fix lint
Signed-off-by: kolaente <k@knt.li>
* Add scheduled pr merge to tables used for testing
Signed-off-by: kolaente <k@knt.li>
* Add status param to make GetPullRequestByHeadBranch reusable
Signed-off-by: kolaente <k@knt.li>
* Move "Merge now" to a seperate button to make the ui clearer
Signed-off-by: kolaente <k@knt.li>
* Update models/scheduled_pull_request_merge.go
Co-authored-by: 赵智超 <1012112796@qq.com>
* Update web_src/js/index.js
Co-authored-by: 赵智超 <1012112796@qq.com>
* Update web_src/js/index.js
Co-authored-by: 赵智超 <1012112796@qq.com>
* Re-add migration after merge
* Fix frontend lint
* Fix version compare
* Add vendored dependencies
* Add basic tests
* Make sure the api route is capable of scheduling PRs for merging
* Fix comparing version
* make vendor
* adopt refactor
* apply suggestion: User -> Doer
* init var once
* Fix Test
* Update templates/repo/issue/view_content/comments.tmpl
* adopt
* nits
* next
* code format
* lint
* use the same naming scheme; rm CreateUnScheduledPRToAutoMergeComment
* API: can not create schedule twice
* Add TestGetBranchNamesForSha
* nits
* new goroutine for each pull to merge
* Update models/pull.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* Update models/scheduled_pull_request_merge.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* fix & add renaming sugestions
* Update services/automerge/pull_auto_merge.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* fix conflict leftovers
* apply latest refactors
* fix: migration after merge
* Update models/error.go
Co-authored-by: delvh <dev.lh@web.de>
* Update options/locale/locale_en-US.ini
Co-authored-by: delvh <dev.lh@web.de>
* Update options/locale/locale_en-US.ini
Co-authored-by: delvh <dev.lh@web.de>
* adapt latest refactors
* fix test
* use more context
* skip potential edge cases
* document func usage
* GetBranchNamesForSha() -> GetRefsBySha()
* start refactoring
* adjust to new changes
* nit
* docs nit
* the great check move
* move checks for branchprotection into own package
* resolve todo now ...
* move & rename
* unexport if possible
* fix
* check if merge is allowed before merge on scheduled pull
* debug
* wording
* improve SetDefaults & nits
* NotAllowedToMerge -> DisallowedToMerge
* fix test
* merge files
* use package "errors"
* merge files
* add string names
* other implementation for gogit
* adapt refactor
* more context for models/pull.go
* GetUserRepoPermission use context
* more ctx
* use context for loading pull head/base-repo
* more ctx
* more ctx
* models.LoadIssueCtx()
* models.LoadIssueCtx()
* Handle pull_service.Merge in one DB transaction
* add TODOs
* next
* next
* next
* more ctx
* more ctx
* Start refactoring structure of old pull code ...
* move code into new packages
* shorter names ... and finish **restructure**
* Update models/branches.go
Co-authored-by: zeripath <art27@cantab.net>
* finish UpdateProtectBranch
* more and fix
* update date
* template: use "svg" helper
* rename prQueue to prPatchCheckerQueue
* handle automerge in queue
* lock pull on git&db actions ...
* lock pull on git&db actions ...
* add TODO notes
* the regex
* transaction in tests
* GetRepositoryByIDCtx
* shorter table name and lint fix
* close transaction before notify
* Update models/pull.go
* next
* CheckPullMergable check all branch protections!
* Update routers/web/repo/pull.go
* CheckPullMergable check all branch protections!
* Revert "PullService lock via pullID (#19520)" (for now...)
This reverts commit 6cde7c9159a5ea75a10356feb7b8c7ad4c434a9a.
* Update services/pull/check.go
* Use one database transaction for a repo action
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: delvh <dev.lh@web.de>
* Update services/issue/status.go
Co-authored-by: delvh <dev.lh@web.de>
* Update services/issue/status.go
Co-authored-by: delvh <dev.lh@web.de>
* use db.WithTx()
* gofmt
* make pr.GetDefaultMergeMessage() context aware
* make MergePullRequestForm.SetDefaults context aware
* use db.WithTx()
* pull.SetMerged only with context
* fix deadlock in `test-sqlite#TestAPIBranchProtection`
* don't forget templates
* db.WithTx: allow setting the parentCtx
* handle db transactions in service packages but not in the router
* issue_service.ChangeStatus just caused another deadlock :/
it has something to do with how the notification package is handled
* if we merge a pull in one database transaction, we get a lock, because merge invokes the internal API, which can't handle open DB sessions to the same repo
* adjust to current master
* Apply suggestions from code review
Co-authored-by: delvh <dev.lh@web.de>
* don't open db transaction in router
* make generate-swagger
* one _success less
* wording nit
* rm
* adapt
* remove not needed test files
* rm less diff & use attr in JS
* ...
* Update services/repository/files/commit.go
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* adjust db schema for PullAutoMerge
* skip broken pull refs
* more context in error messages
* remove webUI part for another pull
* remove more WebUI only parts
* API: add CancleAutoMergePR
* Apply suggestions from code review
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* fix lint
* Apply suggestions from code review
* cancle -> cancel
Co-authored-by: delvh <dev.lh@web.de>
* change queue identifier
* fix swagger
* prevent nil issue
* fix and dont drop error
* as per @zeripath
* Update integrations/git_test.go
Co-authored-by: delvh <dev.lh@web.de>
* Update integrations/git_test.go
Co-authored-by: delvh <dev.lh@web.de>
* more declarative integration tests (dedup code)
* use assert.False/True helper
Co-authored-by: 赵智超 <1012112796@qq.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* chore: add health check endpoint
docs: update document about health check
fix: fix up SQLite3 ping; the current ping succeeds even if the DB file is missing
fix: do not expose private information in the output field
* refactor: remove HealthChecker struct
* Added `/api/healthz` to install routes.
This was needed to use the /api/healthz endpoint in Docker health checks;
otherwise, Docker would never become healthy when using the healthz endpoint
and users would not be able to complete the installation of Gitea.
* Update modules/cache/cache.go
* fine tune
* Remove unnecessary test code. Now there are 2 routes for installation (and maybe more in the future)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Marcos de Oliveira <marcossantos@furb.br>
* Only check for non-finished migrating task
- Only check if a non-finished migrating task exists for a mirror before
fetching the mirror details from the database.
- Resolves #19600
- Regression: #19588
* Clarify function
- When a repository is still being migrated, don't try to fetch the
Mirror from the database. Instead, skip it. This allows visiting
repositories that are still being migrated and were configured to be
mirrored.
- Resolves #19585
- Regression: #19295
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* Apply DefaultUserIsRestricted in CreateUser
* Enforce system defaults in CreateUser
Allow for overwrites with CreateUserOverwriteOptions
* Fix compilation errors
* Add "restricted" option to create user command
* Add "restricted" option to create user admin api
* Respect default setting.Service.RegisterEmailConfirm and setting.Service.RegisterManualConfirm where needed
* Revert "Respect default setting.Service.RegisterEmailConfirm and setting.Service.RegisterManualConfirm where needed"
This reverts commit ee95d3e8dc.
Targeting #14936, #15332
Adds a collaborator permissions API endpoint according to the GitHub API: https://docs.github.com/en/rest/collaborators/collaborators#get-repository-permissions-for-a-user to retrieve a collaborator's permissions for a specific repository.
### Checks the repository permissions of a collaborator.
`GET` `/repos/{owner}/{repo}/collaborators/{collaborator}/permission`
Possible `permission` values are `admin`, `write`, `read`, `owner`, `none`.
```json
{
"permission": "admin",
"role_name": "admin",
"user": {}
}
```
Where `permission` and `role_name` hold the same `permission` value and `user` is filled with the user API object. Only admins are allowed to use this API endpoint.
Adds a feature [like GitHub has](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) (step 7).
If you create a new PR from a forked repo, you can select (and change later, but only if you are the PR creator/poster) the "Allow edits from maintainers" option.
Then users with write access to the base branch get more permissions on this branch:
* use the update pull request button
* push directly from the command line (`git push`)
* edit/delete/upload files via web UI
* use related API endpoints
You can't merge PRs to this branch with this enabled; you'll need "full" code write permissions.
This feature has a pretty big impact on the permission system. I might have forgotten to change some things or might have missed security vulnerabilities. In this case, please leave a review or comment on this PR.
Closes #17728
Co-authored-by: 6543 <6543@obermui.de>
There is a rare potential race whereby the c.running channel could
be closed twice. Looking at the code I do not see a need for this c.running
channel and therefore I think we can remove this. (I think the c.running
might have been some attempt to prevent a hang but the use of os.Pipes should
prevent that.)
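For context, closing an already-closed channel always panics in Go, which is why removing c.running entirely (rather than guarding it) is the attractive option. A minimal stand-alone illustration of the hazard (not Gitea code):
```go
package main

import "fmt"

func main() {
	running := make(chan struct{})

	close(running) // first close: fine
	func() {
		defer func() {
			if r := recover(); r != nil {
				fmt.Println("second close panics:", r) // "close of closed channel"
			}
		}()
		close(running) // second close: always panics
	}()
}
```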
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
- Doing 64-bit atomic operations on 32-bit machines is a bit tricky in
Go, as they can only be done under a certain set of
conditions (https://pkg.go.dev/sync/atomic#pkg-note-BUG).
- This PR fixes such a case where the conditions weren't met: it moves
the int64 to the first field of the struct, which allows 64-bit operations
to happen on this field on 32-bit machines (see the sketch below).
- Resolves #19518
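A minimal sketch of the fix pattern (not the actual Gitea struct, field names invented): placing the int64 first guarantees the 64-bit alignment that sync/atomic needs on 32-bit platforms.
```go
package main

import (
	"fmt"
	"sync/atomic"
)

// stats keeps its int64 counter as the first field: only the first word of an
// allocated struct is guaranteed to be 64-bit aligned on 32-bit platforms,
// and sync/atomic requires that alignment for 64-bit operations.
type stats struct {
	counter int64 // 64-bit atomics happen on this field
	name    string
	enabled bool
}

func main() {
	s := &stats{name: "example", enabled: true}
	atomic.AddInt64(&s.counter, 1)
	fmt.Println(atomic.LoadInt64(&s.counter))
}
```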
Within doArchive there is a service goroutine that performs the
archiving function. This goroutine reports its error using a `chan
error` called `done`. Prior to this PR this channel had 0 capacity
meaning that the goroutine would block until the `done` channel was
cleared - however there are a couple of ways in which this channel might
not be read.
The simplest solution is to add a single slot of capacity to the
`done` channel, which will mean that the goroutine will always complete and,
even if the `done` channel is not read, it will be simply garbage
collected away.
(The PR also contains two other places when setting up the indexers
which do not leak but where the blocking of the sending goroutine is
also unnecessary and so we should just add a small amount of capacity
and let the sending goroutine complete as soon as it can.)
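A minimal sketch of the pattern (names hypothetical, not the doArchive code): with a capacity of one, the worker goroutine can always deliver its result and exit, even if the `done` channel is never read.
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	// Capacity 1: the send below can never block, so the goroutine always
	// completes and is garbage collected even if done is never read.
	done := make(chan error, 1)

	go func() {
		// ... archiving work would happen here ...
		done <- errors.New("archive failed") // non-blocking thanks to the buffer
	}()

	select {
	case err := <-done:
		fmt.Println("finished:", err)
	case <-time.After(50 * time.Millisecond):
		fmt.Println("caller gave up; the goroutine still exits cleanly")
	}
}
```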
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
If an `os/exec.Command` is passed something other than an `*os.File` as an input/output, Go
will create `os.Pipe`s and wait for their closure in `cmd.Wait()`. If
the code following this is responsible for closing `io.Pipe`s or other
handles, then on process death from context cancellation the `Wait` can
hang.
There are two possible solutions:
1. use `os.Pipe` as the input/output as `cmd.Wait` does not wait for these.
2. create a goroutine waiting on the context cancellation that will close the inputs.
This PR provides the second option - which is a simpler change that can
be more easily backported.
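A rough sketch of option 2 (helper name and shape are hypothetical, not the Gitea implementation): a goroutine tied to the context closes the non-`*os.File` input so `cmd.Wait()` cannot hang on exec's internal copy goroutine.
```go
package main

import (
	"context"
	"io"
	"os/exec"
	"time"
)

// runWithInput illustrates the idea: when the context is cancelled, close the
// input pipe so cmd.Wait() is not stuck waiting for exec's internal copy
// goroutine to finish reading from it.
func runWithInput(ctx context.Context, input io.ReadCloser, name string, args ...string) error {
	cmd := exec.CommandContext(ctx, name, args...)
	cmd.Stdin = input // not an *os.File, so exec copies via an internal goroutine

	if err := cmd.Start(); err != nil {
		return err
	}

	// On cancellation, close the input so the copy goroutine - and therefore
	// Wait - can return.
	go func() {
		<-ctx.Done()
		_ = input.Close()
	}()

	return cmd.Wait()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// The writer is deliberately never closed, simulating a stalled producer;
	// without the close-on-cancel goroutine, Wait would hang here.
	r, _ := io.Pipe()
	_ = runWithInput(ctx, r, "cat")
}
```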
Closes#19448
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Set correct PR status on 3-way conflict checking
- When 3-way merge is enabled for conflict checking, it has a new
interesting behavior: it doesn't return any error when it finds a
conflict. So we change the condition to not check for the error, but
instead check whether conflictedFiles is populated; this fixes an issue
whereby the PR status wasn't set correctly on conflicted PRs.
- Refactor the mergeable property (which was incorrectly set and led me to this
bug) to be more maintainable.
- Add a dedicated test for conflict checking, so it should prevent
future issues with this.
* Fix linter
When a mirror repo's interval is updated via the UI it is rescheduled with that interval;
however, the API does not do this. The API also lacks the enable_prune option.
This PR adds this functionality to the API Edit Repo endpoint.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Refactor the CSRF-related code and remove most unnecessary functions.
Parse the generated token's issue time and regenerate the token every few minutes.
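A minimal sketch of the regeneration idea (token format and helper names here are hypothetical, not the actual Gitea CSRF code): the token carries its issue time, and anything older than the regeneration interval is replaced.
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

const regenerateAfter = 10 * time.Minute // hypothetical interval

// issueTimeOf parses the unix timestamp embedded in a "<random>-<unix>" token.
// The real token format differs; this only illustrates the idea.
func issueTimeOf(token string) (time.Time, bool) {
	parts := strings.Split(token, "-")
	if len(parts) != 2 {
		return time.Time{}, false
	}
	sec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return time.Time{}, false
	}
	return time.Unix(sec, 0), true
}

func maybeRegenerate(token string, newToken func() string) string {
	issued, ok := issueTimeOf(token)
	if !ok || time.Since(issued) > regenerateAfter {
		return newToken() // stale or unparsable: issue a fresh token
	}
	return token // still fresh: keep it
}

func main() {
	old := fmt.Sprintf("abc123-%d", time.Now().Add(-time.Hour).Unix())
	fresh := maybeRegenerate(old, func() string {
		return fmt.Sprintf("def456-%d", time.Now().Unix())
	})
	fmt.Println(fresh)
}
```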
Reusing `/api/v1` from Gitea UI Pages has pros and cons.
Pros:
1) Less code copying
Cons:
1) API/v1 has to support shared sessions with page requests.
2) You need to consider both sides whenever you want to change something about api/v1 or the pages.
This PR moves all dependencies to API/v1 from UI Pages.
Partially replace #16052
Remove two unmaintained vendor packages `i18n` and `paginater`. Changes:
* Rewrite `i18n` package with a more clear fallback mechanism. Fix an unstable `Tr` behavior, add more tests.
* Refactor the legacy `Paginater` to `Paginator`, test cases are kept unchanged.
Trivial enhancement (no breaking for end users):
* Use the first locale in the LANGS setting option as the default, and add a log message to avoid surprising users.
There appears to be an intermittent NPE in queue tests relating to the deferred
shutdown/terminate functions.
This PR more formally asserts that shutdown and termination occur before starting
and finishing the tests, but leaves the defer in place to ensure that if there is an
issue, shutdown/termination will still occur.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Follows: #19284
* The `CopyDir` is only used inside test code
* Rewrite `ToSnakeCase` with more test cases
* The `RedisCacher` only puts strings into the cache, here we use the internal `toStr` to replace the legacy `ToStr`
* The `UniqueQueue` can use string as ID directly, no need to call `ToStr`
Right now, a pull-mirror repo does not get marked as such until *after* the
mirroring completes. In the meantime, it will show up (in API and UI) as a
regular repo.
The main purpose is to refactor the legacy `unknwon/com` package.
1. Remove most imports of `unknwon/com`, only `util/legacy.go` imports the legacy `unknwon/com`
2. Use golangci's depguard to process denied packages
3. Fix some incorrect values in golangci.yml, eg, the version should be quoted string `"1.18"`
4. Use correctly escaped content for `go-import` and `go-source` meta tags
5. Refactor `com.Expand` to our stable (and equally fast) `vars.Expand`; our `vars.Expand` can still return partially rendered content even if the template is not good (eg: key mismatch).
Follows #19266, #8553, Closes #18553; now there are only three `Run..(&RunOpts{})` functions.
* before: `stdout, err := RunInDir(path)`
* now: `stdout, _, err := RunStdString(&git.RunOpts{Dir:path})`
- Upgrade all JS dependencies minus vue and vue-loader
- Adapt to breaking change of octicons
- Update eslint rules
- Tested Swagger UI, sortablejs and prod build
Continues on from #19202.
Following the addition of pprof labels we can now more easily understand the relationship between a goroutine and the requests that spawned it.
This PR takes advantage of the labels and adds a few others, then provides a mechanism for the monitoring page to query the pprof goroutine profile.
The binary profile that results is immediately piped into the Google library for parsing, and stack traces are then formed for the goroutines.
If the goroutine is within a context, or has been created from a goroutine within a process context, it will acquire the process description labels for that process.
The goroutines are mapped with their associated PIDs, and any that do not have an associated PID are placed in a group at the bottom as unbound.
In this way we should be able to more easily examine goroutines that have become stuck.
A manager command `gitea manager processes` is also provided that can export the processes (with or without stacktraces) to the command line.
Signed-off-by: Andrew Thornton <art27@cantab.net>
This addresses https://github.com/go-gitea/gitea/issues/18352
It aims to improve performance (and resource use) of the `SyncReleasesWithTags` operation for pull-mirrors.
For large repositories with many tags, `SyncReleasesWithTags` can be a costly operation (taking several minutes to complete). The reason is two-fold:
1. on sync, every upstream repo tag is compared (for changes) against existing local entries in the release table to ensure that they are up-to-date.
2. the procedure for getting _each tag_ involves a series of git operations
```bash
git show-ref --tags -- v8.2.4477
git cat-file -t 29ab6ce9f36660cffaad3c8789e71162e5db5d2f
git cat-file -p 29ab6ce9f36660cffaad3c8789e71162e5db5d2f
git rev-list --count 29ab6ce9f36660cffaad3c8789e71162e5db5d2f
```
of which the `git rev-list --count` can be particularly heavy.
This PR optimizes performance for pull-mirrors. We utilize the fact that a pull-mirror is always identical to its upstream and rebuild the entire release table on every sync and use a batch `git for-each-ref .. refs/tags` call to retrieve all tags in one go.
For large mirror repos, with hundreds of annotated tags, this brings down the duration of the sync operation from several minutes to a few seconds. A few unscientific examples run on my local machine:
- https://github.com/spring-projects/spring-boot (223 tags)
- before: `0m28,673s`
- after: `0m2,244s`
- https://github.com/kubernetes/kubernetes (890 tags)
- before: `8m00s`
- after: `0m8,520s`
- https://github.com/vim/vim (13954 tags)
- before: `14m20,383s`
- after: `0m35,467s`
I added a `foreachref` package which contains a flexible way of specifying which reference fields are of interest (`git-for-each-ref(1)`) and to produce a parser for the expected output. These could be reused in other places where `for-each-ref` is used. I'll add unit tests for those if the overall PR looks promising.
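For reference, the batch call that the optimization relies on can be sketched like this (a simplified stand-in for the `foreachref` package; the format fields are chosen for illustration):
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One git invocation returns every tag with the fields needed to rebuild
	// the release table, instead of several git calls per tag.
	format := strings.Join([]string{
		"%(refname:short)", "%(objecttype)", "%(objectname)",
		"%(creatordate:iso8601)", "%(contents:subject)",
	}, "%00") // NUL-separated fields are safe to split

	out, err := exec.Command("git", "for-each-ref", "--format="+format, "refs/tags").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(strings.TrimSuffix(string(out), "\n"), "\n") {
		fields := strings.Split(line, "\x00")
		if len(fields) < 3 {
			continue
		}
		fmt.Printf("tag=%s type=%s sha=%s\n", fields[0], fields[1], fields[2])
	}
}
```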
This follows
* https://github.com/go-gitea/gitea/issues/18553
Introduce `RunWithContextString` and `RunWithContextBytes` to help the refactoring. Add related unit tests. They keep the same behavior to save stderr into err.Error() as `RunInXxx` before.
Remove `RunInDirTimeoutPipeline` `RunInDirTimeoutFullPipeline` `RunInDirTimeout` `RunInDirTimeoutEnv` `RunInDirPipeline` `RunInDirFullPipeline` `RunTimeout`, `RunInDirTimeoutEnvPipeline`, `RunInDirTimeoutEnvFullPipeline`, `RunInDirTimeoutEnvFullPipelineFunc`.
Then remaining `RunInDir` `RunInDirBytes` `RunInDirWithEnv` can be easily refactored in next PR with a simple search & replace:
* before: `stdout, err := RunInDir(path)`
* next: `stdout, _, err := RunWithContextString(&git.RunContext{Dir:path})`
Other changes:
1. When `timeout <= 0`, use the default, because `timeout == 0` is meaningless and could cause bugs. Many functions also become simpler, eg: `GitGcRepos` goes from 9 lines to 1, `Fsck` from 6 lines to 1.
2. Only set defaultCommandExecutionTimeout when the option `setting.Git.Timeout.Default > 0`
Gitea was not able to supply any authentication parameters to it. So this brings support to do that, along with some light extraction of a couple of bits into some separate functions for easier testing.
I looked at other libraries supporting similar RedisUri-style connection strings (e.g. Lettuce), but it looks like this type of configuration is beyond what would typically be done in a connection string. Since gitea doesn't have configuration options for manually specifying all this redis connection detail, I went ahead and just chose straightforward names for these new parameters.
Strangely #19038 appears to relate to an issue whereby a tag appears to
be listed in `git show-ref --tags` but then does not appear when `git
show-ref --tags -- short_name` is called.
As a solution though I propose to stop the second call as it is
unnecessary and only likely to cause problems.
I've also noticed that the tags calls are wildly inefficient and aren't using the common cat-files - so these have been added.
I've also noticed that the git commit-graph is not being written on mirroring - so I've also added writing this to the migration which should improve mirror rendering somewhat.
Fix #19038
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
The last PR about clone buttons introduced a JS error when visiting an empty repo page:
* https://github.com/go-gitea/gitea/pull/19028
* `Uncaught ReferenceError: isSSH is not defined`, because the variables are scoped and aren't shared between sub-templates.
This:
1. Simplify `templates/repo/clone_buttons.tmpl` and make code clear
2. Move most JS code into `initRepoCloneLink`
3. Remove unused `CloneLink.Git`
4. Remove `ctx.Data["DisableSSH"] / ctx.Data["ExposeAnonSSH"] / ctx.Data["DisableHTTP"]`, and only set them when it is needed (eg: deploy keys / ssh keys)
5. Introduce `Data["CloneButton*"]` to provide data for clone buttons and links
6. Introduce `Data["RepoCloneLink"]` for the repo clone link (not the wiki)
7. Remove most `ctx.Data["PageIsWiki"]` because it has been set in the `/wiki` middleware
8. Remove incorrect `quickstart` class in `migrating.tmpl`
So whilst #19225 fixes one issue it caused another. We need to initialise the Git
module first.
Related #19225, Fix #19162
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
The git command by default adds a number of global arguments. These are not
helpful to display in the process manager and so should be skipped in
default process descriptions.
Signed-off-by: Andrew Thornton <art27@cantab.net>
The RepoIndexerTest is failing with considerable frequency due to a race inherent in
its design. This PR adjusts the test to avoid relying on waiting for the populate
repo indexer to run, and forcibly adds the repo to the queue. It then flushes the queue.
It may be worth separating out the tests somewhat by testing the Index function
directly, away from the queue; however, this forceful method should solve the current
problem.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Set the default branch for repositories generated from templates
* Allows default branch to be set through the API for repos generated from templates
* Update swagger API template
* Only set default branch to the one from the template if not specified
* Use specified default branch if it exists while generating git commits
Fix #19082
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
* Add auto logging of goroutine pid label
This PR uses unsafe to export the hidden runtime_getProfLabel function from the
runtime package and then casts the result to a map[string]string.
We can then interrogate this map to get the pid label from the goroutine allowing
us to log it with any logging request.
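For reference, attaching such a label with the public API looks roughly like this; the PR then reads the labels back through the unexported runtime hook described above (this sketch only uses the documented runtime/pprof functions):
```go
package main

import (
	"context"
	"fmt"
	"runtime/pprof"
)

func main() {
	// Attach a pid label to everything run inside pprof.Do; goroutines started
	// from within inherit it, which is what the goroutine profile reports.
	pprof.Do(context.Background(), pprof.Labels("pid", "10001"), work)
}

func work(ctx context.Context) {
	if pid, ok := pprof.Label(ctx, "pid"); ok {
		fmt.Println("running with pid label:", pid)
	}
}
```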
Reference #19202
Signed-off-by: Andrew Thornton <art27@cantab.net>
This PR adds a middleware which sets a ContextUser (like GetUserByParams before) in a single place which can be used by other methods. For routes which represent a repo or org the respective middlewares set the field too.
Also fix a bug in modules/context/org.go during refactoring.
Unhelpfully Locations starting with `/\` will be converted by the
browser to `//` because ... well I do not fully understand. Certainly
the RFCs and MDN do not indicate that this would be expected. Providing
"compatibility" with the (mis)behaviour of a certain proprietary OS is
my suspicion. However, we clearly have to protect against this.
Therefore we should reject redirection locations that match the regular
expression: `^/[\\\\/]+`
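A minimal sketch of the check (helper name hypothetical; the regex is the unescaped form of the expression above):
```go
package main

import (
	"fmt"
	"regexp"
)

// protocolRelative matches locations such as "/\evil.example" or
// "//evil.example" that browsers may treat as protocol-relative URLs.
var protocolRelative = regexp.MustCompile(`^/[\\/]+`)

func isSafeRedirect(location string) bool {
	return !protocolRelative.MatchString(location)
}

func main() {
	for _, loc := range []string{`/\evil.example`, "//evil.example", "/user/repo"} {
		fmt.Printf("%-18q safe=%v\n", loc, isSafeRedirect(loc))
	}
}
```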
Reference #9678
Signed-off-by: Andrew Thornton <art27@cantab.net>
Unfortunately #19169 causes a panic at startup in prod mode. This was hidden by dev
mode because the templates are compiled dynamically there. The issue is that DotEscape
is not in the original FuncMap at the time of compilation which causes a panic.
Ref #19169
Signed-off-by: Andrew Thornton <art27@cantab.net>
Redirect .wiki/* UI links to /wiki
fix #18590
Signed-off-by: a1012112796 <1012112796@qq.com>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Unfortunately many email readers will (helpfully) detect URLs or URL-like names and
automatically create links to them, even in HTML emails. This is not ideal when
usernames can have dots in them.
This PR tries to prevent this behaviour by sticking ZWJ characters between dots and
also by setting the meta tag to prevent format detection.
Not every email template has been changed in this way - just the activation emails - but
it may be that we should be setting the above meta tag in all of our emails too.
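A minimal sketch of the dot-breaking idea (the exact template helper isn't shown here; assume ZWJs are inserted around each dot): the result is visually identical but no longer looks like a hostname to link detectors.
```go
package main

import (
	"fmt"
	"strings"
)

// breakAutoLink inserts zero-width joiners (U+200D) around dots so that mail
// clients do not auto-detect names like "jane.doe" as URLs.
func breakAutoLink(s string) string {
	const zwj = "\u200d"
	return strings.ReplaceAll(s, ".", zwj+"."+zwj)
}

func main() {
	fmt.Println(breakAutoLink("jane.doe"))   // looks identical, ZWJs are invisible
	fmt.Printf("%q\n", breakAutoLink("a.b")) // "a\u200d.\u200db"
}
```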
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Clean paths when looking in Storage
Ensure paths are clean for Minio as well as local storage.
Use url.Path not RequestURI/EscapedPath in storageHandler.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Apply suggestions from code review
Co-authored-by: Lauris BH <lauris@nix.lv>
* Remove `db.DefaultContext` usage in routers, use `ctx` directly
* Use `ctx` directly if there is one, remove some `db.DefaultContext` in `services`
* Use ctx instead of db.DefaultContext for `cmd` and some `modules` packages
* fix incorrect context usage
* Clean up protected_branches when deleting user
fixes #19094
* Clean up protected_branches when deleting teams
* fix issue
Co-authored-by: Lauris BH <lauris@nix.lv>
Make SKIP_TLS_VERIFY apply to git data migrations too, by adding the `-c http.sslVerify=false` option to the git clone command.
Fix #18998
Signed-off-by: Andrew Thornton <art27@cantab.net>
Storing the foreign identifier of an imported issue in the database is a prerequisite to implementing idempotent migrations or mirroring for issues. It is a baby step towards mirroring that introduces a new table.
At the moment, when an issue is created by the Gitea uploader, it fails if the issue already exists. The Gitea uploader could be modified so that, instead of failing, it looks up the database to find an existing issue. And if it does, it would update the issue instead of creating a new one. However this is not currently possible because a piece of information is missing from the database: the foreign identifier that uniquely represents the issue being migrated is not persisted. With this change, the foreign identifier is stored in the database and the Gitea uploader will then be able to run a query to figure out if a given issue being imported already exists.
The implementation of mirroring for issues, pull requests, releases, etc. can be done in three steps:
1. Store an identifier for the element being mirrored (issue, pull request...) in the database (this is the purpose of these changes)
2. Modify the Gitea uploader to be able to update an existing repository with all it contains (issues, pull request...) instead of failing if it exists
3. Optimize the Gitea uploader to speed up the updates, when possible.
The second step creates code that does not yet exist to enable idempotent migrations with the Gitea uploader. When a migration is done for the first time, the behavior is not changed. But when a migration is done for a repository that already exists, this new code is used to update it.
The third step can use the code created in the second step to optimize and speed up migrations. For instance, when a migration is resumed, an issue that has an update time that is not more recent can be skipped and only newly created issues or updated ones will be updated. Another example of optimization could be that a webhook notifies Gitea when an issue is updated. The code triggered by the webhook would download only this issue and call the code created in the second step to update the issue, as if it was in the process of an idempotent migration.
The ForeignReferences table is added to contain local and foreign ID pairs relative to a given repository. It can later be used for pull requests and other artifacts that can be mirrored. Although the foreign id could be added as a single field in issues or pull requests, it would need to be added to all tables that represent something that can be mirrored. Creating a new table makes for a simpler and more generic design. The drawback is that it requires an extra lookup to obtain the information. However, this extra information is only required during migration or mirroring and does not impact the way Gitea currently works.
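A sketch of what a row in such a table could look like as an xorm model (field names are illustrative; see the actual migration for the merged schema):
```go
package models

// ForeignReference maps a local resource index inside a repository to the
// identifier it had on the foreign (migrated or mirrored) platform.
type ForeignReference struct {
	RepoID       int64  `xorm:"UNIQUE(repo_foreign_type)"`
	LocalIndex   int64  `xorm:"UNIQUE(repo_foreign_type)"` // e.g. the issue index inside the repo
	ForeignIndex string `xorm:"INDEX UNIQUE(repo_foreign_type)"`
	Type         string `xorm:"VARCHAR(16) INDEX UNIQUE(repo_foreign_type)"` // "issue", "pull_request", ...
}
```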
The foreign identifier of an issue or pull request is similar to the identifier of an external user, which is stored in reactions, issues, etc. as OriginalPosterID and so on. The representation of a user is however different and the ability of users to link their account to an external user at a later time is also a logic that is different from what is involved in mirroring or migrations. For these reasons, despite some commonalities, it is unclear at this time how the two tables (foreign reference and external user) could be merged together.
The ForeignID field is extracted from the issue migration context so that it can be dumped in files with dump-repo and later restored via restore-repo.
The GetAllComments downloader method is introduced to simplify the implementation and not overload the Context for the purpose of pagination. It also clarifies in which context the comments are paginated and in which context they are not.
The Context interface is no longer useful for the purpose of retrieving the LocalID and ForeignID since they are now both available from the PullRequest and Issue struct. The Reviewable and Commentable interfaces replace and serve the same purpose.
The Context data member of PullRequest and Issue becomes a DownloaderContext to clarify that its purpose is not to support in memory operations while the current downloader is acting but is not otherwise persisted. It is, for instance, used by the GitLab downloader to store the IsMergeRequest boolean and sort out issues.
---
[source](https://lab.forgefriends.org/forgefriends/forgefriends/-/merge_requests/36)
Signed-off-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Loïc Dachary <loic@dachary.org>
* use go1.18 to build gitea & update min go version to 1.17
* bump in a few more places
* add a few simple tests for isipprivate
* update go.mod
* update URL to https://go.dev/dl/
* golangci-lint
* attempt golangci-lint workaround
* change version
* bump fumpt version
* skip strings.title test
* go mod tidy
* update tests as some aren't private??
* update tests
Unfortunately #18642 does not work because a `*net.OpError` does not implement
the `Is` interface to make `errors.Is` work correctly - thus leading to the
irritating conclusion that a `*net.OpError` is not a `*net.OpError`.
Here we keep the `errors.Is` because presumably this will be fixed at
some point in the Go source code, but we also add a simple type
cast to check as well.
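A sketch of the belt-and-braces check described here (the surrounding Gitea function is omitted): keep `errors.Is`, but also unwrap with `errors.As`, which is the idiomatic form of the type cast.
```go
package main

import (
	"errors"
	"fmt"
	"net"
)

// isNetOpError keeps the errors.Is check (in case upstream ever implements Is)
// but falls back to a type check, which is what actually works today.
func isNetOpError(err error) bool {
	var opErr *net.OpError
	return errors.Is(err, &net.OpError{}) || errors.As(err, &opErr)
}

func main() {
	err := fmt.Errorf("wrapped: %w", &net.OpError{Op: "write", Net: "tcp"})
	fmt.Println(isNetOpError(err)) // true, via the errors.As fallback
}
```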
Fix #18629
Signed-off-by: Andrew Thornton <art27@cantab.net>
Yet another issue has come up where the logging from SyncMirrors does not provide
enough context. This PR adds more context to these logging events.
Related #19038
Signed-off-by: Andrew Thornton <art27@cantab.net>
Add new feature to delete issues and pulls via API
Co-authored-by: fnetx <git@fralix.ovh>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: 6543 <6543@obermui.de>
- Add helper method to reduce redundancy
- Expand the scope from displaying days to years
- Reduce irrelevance by not displaying small units (hours, minutes, seconds) when bigger ones apply (years); see the sketch below
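A rough sketch of such a helper (not the exact `util.SecToTime` signature): small units are dropped once the duration reaches a year.
```go
package main

import (
	"fmt"
	"strings"
)

// secToTime renders a duration in seconds as "1y 2d 3h 4m 5s", omitting the
// small units (minutes, seconds) once the duration reaches a year.
func secToTime(sec int64) string {
	years := sec / (365 * 24 * 3600)
	sec %= 365 * 24 * 3600
	days := sec / (24 * 3600)
	sec %= 24 * 3600
	hours := sec / 3600
	sec %= 3600
	minutes := sec / 60
	seconds := sec % 60

	parts := []string{}
	if years > 0 {
		parts = append(parts, fmt.Sprintf("%dy", years))
	}
	if days > 0 {
		parts = append(parts, fmt.Sprintf("%dd", days))
	}
	if hours > 0 {
		parts = append(parts, fmt.Sprintf("%dh", hours))
	}
	if years == 0 { // drop the small units when the duration spans years
		if minutes > 0 {
			parts = append(parts, fmt.Sprintf("%dm", minutes))
		}
		if seconds > 0 {
			parts = append(parts, fmt.Sprintf("%ds", seconds))
		}
	}
	if len(parts) == 0 {
		return "0s"
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(secToTime(97307))    // 1d 3h 1m 47s
	fmt.Println(secToTime(40000000)) // 1y 97d 23h (small units dropped)
}
```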
This PR adjusts the error returned when there is a failure to lock the leveldb, and
permits connections to the same leveldb where there is a different connection string.
Reference #18921
Reference #18917
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Don't treat the BOM escape sequence as a hidden character.
- The BOM sequence is a common, non-harmful escape sequence; it shouldn't be
shown as a hidden character.
- Follows GitHub's behavior.
- Resolves #18837
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
The service worker causes a lot of issues with JS errors after instance
upgrades while not bringing any real performance gain over regular HTTP
caching.
Disable it by default for this reason. Maybe later we can remove it
completely, as I simply see no benefit in having it.
* Add tests for references with dashes
This commit adds tests for full URLs referencing repo names and user
names containing a dash.
* Extend regex to match URLs to repos/users with dashes
* logs: add the buffer logger to inspect logs during testing
Signed-off-by: Loïc Dachary <loic@dachary.org>
* migrations: add test for importing pull requests in gitea uploader
Signed-off-by: Loïc Dachary <loic@dachary.org>
* for each git.OpenRepositoryCtx, call Close
* Content is expected to return the content of the log
* test for errors before defer
Co-authored-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Repositories missing their directory should not report an error from the stats
indexer.
Close #18847
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
We can't depend on the `latest` version of gofumpt because the output will
not be stable across versions. Lock it down to the latest version
released yesterday and run it again.
Currently Gitea will wait for HammerTime or nice shutdown if kill -1 or kill -2
is sent. We should just immediately hammer if there is a second kill.
Signed-off-by: Andrew Thornton <art27@cantab.net>
There is a potential panic due to a mistaken resetting of the length parameter when
multibyte characters go over a read boundary.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Fix display time of milestones
* Move the SecToTime function
From the models/issue_stopwatch.go file to the modules/util package
* Rename the sec_to_time file
* Updated formatting
* Include copyright notice in sec_to_time.go
* Apply PR review suggestions
- Update copyright notice dates to 2022
- Change `1 day 3h 5min 7s` to `1d 3h 5m 7s`
* Rename hrs var and combine conditions
* Update unit tests to match new time pattern
Changed `1min` to `1m`
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
It appears possible that there could be a hang due to unread data from the
repo-attribute command pipes. This PR simply closes these during the defer.
Signed-off-by: Andrew Thornton <art27@cantab.net>
I want to address #17892, where email notifications are not sent to assignees (issue and PR) and reviewers (PR) when they have the email setting "Only email on mention" enabled.
From the user experience perspective, when a user gets an issue/PR assigned or a PR review request, he/she would expect to be implicitly mentioned, since the assignment or request is personal and targets a single person only. Thus I see #17892 as a bug. Could we therefore mark this ticket as such?
The changed code just explicitly checks for the EmailNotificationsOnMention setting besides the existing EmailNotificationsEnabled check. Too rude?
@lunny mentioned a mock mail server for tests - is there something ready? How could I make use of it?
#12774 (comment)
Fix #17892
Add number in queue status to the monitor page so that administrators can
assess how much work is left to be done in the queues.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
- Use a better and more curated list of Ciphers and KeyExchanges; these roughly follow OpenSSH's defaults.
- Remove some deprecated cryptography values.
This code adds a simple endpoint to apply patches to repositories and
branches on Gitea. This is then used along with the conflict checking
code in #18004 to provide a basic implementation of cherry-pick and revert.
Now, because the buttons necessary for cherry-pick and revert have
required us to create a dropdown next to the Browse Source button,
I've also implemented Create Branch and Create Tag operations.
Fix #3880, Fix #17986
Signed-off-by: Andrew Thornton <art27@cantab.net>
WebAuthn may cause a security exception if the provided APP_ID is not allowed for the
current origin. Therefore we should reattempt authentication without the appid
extension.
Also we should allow [u2f] as well as [U2F] sections.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Simplify Boost/Pause logic
#18658 added a check to see if we need to boost because there is still work to do;
however, the check is slightly complex and not ideal. There's no point boosting if
the queue is paused or can't scale. Therefore merge the two selects into one and add
a check on p.paused.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* And on resume add a zeroboost if necessary
Signed-off-by: Andrew Thornton <art27@cantab.net>
* simplify
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Restart zero worker if there is still work to do
It is possible for the zero worker to time out before all the work is finished.
This may mean that work takes a long time to complete because a worker will only
be induced on repushing.
Also ensure that the requested count is reset after pull and push-mirror sync requests, and add some more trace logging to the queue push.
Fix #18607
Signed-off-by: Andrew Thornton <art27@cantab.net>
* remove unnecessary web context data fields, and unify the i18n/translation related functions to `Locale`
* in development, show an error if a translation key is missing
* remove the unnecessary loops `for _, lang := range translation.AllLangs()` for every request, which improves the performance slightly
* use `ctx.Locale.Language()` instead of `ctx.Data["Lang"].(string)`
* add more comments about how the Locale/LangType fields are used
When a net.OpError occurs during rendering, the underlying connection is essentially
dead and therefore attempting to render further data will only cause further errors.
Therefore in serverErrorInternal detect if the passed in error is an OpError and
if so do not attempt any further rendering.
Fix #18629
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Only attempt to flush queue if the underlying worker pool is not finished
There is a possible race whereby a worker pool could be cancelled while the
underlying queue is not yet empty. This will lead to flush-all cycling because it
cannot empty the pool.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Apply suggestions from code review
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
- Switch to use `CryptoRandomBytes` instead of `CryptoRandomString`; OAuth secrets are copy-pasted and don't need to avoid dubious characters etc.
- `CryptoRandomBytes` gives 2^256 = 1.15 * 10^77 possible states, `CryptoRandomString` gives 62^44 = 7.33 * 10^78.
- Add a prefix, such that code scanners can easily grep these in source code.
- 32 bytes + prefix (see the sketch below)
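A minimal sketch of the scheme (the `gta_` prefix is a placeholder, not necessarily the merged one): 32 bytes of CSPRNG output plus a greppable prefix.
```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// newClientSecret returns a prefixed secret built from 32 bytes of
// cryptographically secure randomness (~2^256 possible states).
// The "gta_" prefix is a placeholder so scanners can grep for leaked secrets.
func newClientSecret() (string, error) {
	raw := make([]byte, 32)
	if _, err := rand.Read(raw); err != nil {
		return "", err
	}
	return "gta_" + base64.RawURLEncoding.EncodeToString(raw), nil
}

func main() {
	secret, err := newClientSecret()
	if err != nil {
		panic(err)
	}
	fmt.Println(secret)
}
```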
* Collaborator trust model should trust collaborators
There was an unintended regression in #17917 which led to only
repository admin commits being trusted. This PR restores the old logic.
Fix #18501
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Correctly use `UserID` in `SearchTeams`
- Use `UserID` in the `SearchTeams` function; currently it is useless
to pass such information. Now it does an INNER JOIN on `team_user`,
which obtains UserID -> TeamID data.
- Make OrgID optional.
- Resolves #18484
* Separate searching for a specific user
* Add condition back
* Use correct struct type
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* add test coverage for original author conversion during migrations
And create a function to factorize a code snippet that is repeated
five times and would otherwise be more difficult to test and maintain
consistently.
Signed-off-by: Loïc Dachary <loic@dachary.org>
* fix variable scope and int64 formatting
* add missing calls to remapExternalUser and fix misplaced %d
Co-authored-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>