Backport #32102 by @lunny
Fix #31930 and other places that use `http.TimeFormat` incorrectly.
`http.TimeFormat` requires a UTC time; see
https://pkg.go.dev/net/http#TimeFormat
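A minimal standard-library illustration of the pitfall:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	t := time.Now() // typically in the server's local time zone

	// Wrong: formatting a non-UTC time yields an invalid HTTP date,
	// e.g. ending in "CST" instead of the required "GMT".
	fmt.Println(t.Format(http.TimeFormat))

	// Right: convert to UTC first, as the net/http documentation requires.
	fmt.Println(t.UTC().Format(http.TimeFormat))
}
```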
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Backport #32099 by @maantje
This PR addresses the missing `bin` field in Composer metadata, which
currently causes vendor-provided binaries to not be symlinked to
`vendor/bin` during installation.
In the current implementation, running `composer install` does not
publish the binaries, leading to issues where expected binaries are not
available.
By properly declaring the `bin` field, this PR ensures that binaries are
correctly symlinked upon installation, as described in the [Composer
documentation](https://getcomposer.org/doc/articles/vendor-binaries.md).
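For reference, this is the field the registry metadata needs to carry; per the Composer documentation, each listed file is symlinked into `vendor/bin` on install (package name and path below are illustrative):

```json
{
    "name": "acme/example-tool",
    "bin": ["bin/example-tool"]
}
```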
Co-authored-by: Jamie Schouten <j4mie@hey.com>
Backport #32025 by @wolfogre
Fix #32024; follow-up to #27655.
After this PR, every use of a new dial context needs to provide a proxy,
so I dropped the old `NewDialContext` and renamed
`NewDialContextWithProxy` to `NewDialContext`.
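A minimal sketch of a proxy-aware dial context built on `golang.org/x/net/proxy`; the signature is an assumption for illustration, not Gitea's exact API:

```go
package proxyutil

import (
	"context"
	"net"
	"net/url"
	"time"

	"golang.org/x/net/proxy"
)

// NewDialContext always takes a proxy URL; a nil URL falls back to a direct
// connection, but the caller must make that decision explicitly.
func NewDialContext(timeout time.Duration, proxyURL *url.URL) func(ctx context.Context, network, addr string) (net.Conn, error) {
	direct := &net.Dialer{Timeout: timeout}
	return func(ctx context.Context, network, addr string) (net.Conn, error) {
		if proxyURL == nil {
			return direct.DialContext(ctx, network, addr)
		}
		dialer, err := proxy.FromURL(proxyURL, direct)
		if err != nil {
			return nil, err
		}
		// x/net/proxy dialers may implement ContextDialer; prefer it.
		if cd, ok := dialer.(proxy.ContextDialer); ok {
			return cd.DialContext(ctx, network, addr)
		}
		return dialer.Dial(network, addr)
	}
}
```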
Co-authored-by: Jason Song <i@wolfogre.com>
Backport #32011 by @wolfogre
Replace #32001.
To prevent the context cache from being misused for long-term work
(which would result in using invalid cache without awareness), the
context cache is designed to exist for a maximum of 10 seconds. This
leads to many false reports, especially in the case of slow SQL.
This PR increases it to 5 minutes to reduce false reports.
5 minutes is not a very safe value, as a lot of changes may have
occurred within that time frame. However, as far as I know, no misuse of
the context cache has been discovered so far, so I think 5 minutes
should be OK.
Please note that after this PR, if the warning logs show up again, they
deserve attention; at that point it is almost 100% certain to be a
misuse.
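A self-contained sketch of the lifetime check, with assumed names rather than Gitea's exact code:

```go
package cache

import (
	"log"
	"time"
)

// Raised from 10*time.Second to cut down false warnings from slow requests.
const maxCacheContextLifetime = 5 * time.Minute

type cacheContext struct {
	data    map[any]any
	created time.Time
}

func (c *cacheContext) expired() bool {
	return time.Since(c.created) > maxCacheContextLifetime
}

// Get returns nil once the context outlives its window; after this change,
// seeing the warning almost certainly means the cache is being misused for
// long-term work.
func (c *cacheContext) Get(key any) any {
	if c.expired() {
		log.Printf("WARN: cache context is %v old, likely misused", time.Since(c.created))
		return nil
	}
	return c.data[key]
}
```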
Co-authored-by: Jason Song <i@wolfogre.com>
Backport #31754 by @lunny
When opening a repository, Gitea calls `ensureValidRepository` and
`CatFileBatch`, but sometimes these are never used before the repository
is closed. Invoking three git commands for every opened repository is
therefore a waste of CPU.
This PR removes all of that from `OpenRepository`, keeping only the
check that the folder exists. The necessary functions are invoked only
when a batch is actually needed.
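A sketch of the lazy pattern with assumed names (not the exact Gitea code):

```go
package git

import (
	"fmt"
	"os"
)

type Repository struct {
	Path         string
	batchStarted bool // "git cat-file --batch" handles, created on demand
}

// OpenRepository only verifies that the directory exists; no git subprocess
// is spawned until a caller actually needs one.
func OpenRepository(path string) (*Repository, error) {
	if fi, err := os.Stat(path); err != nil || !fi.IsDir() {
		return nil, fmt.Errorf("no such repository: %s", path)
	}
	return &Repository{Path: path}, nil
}

// ensureBatch lazily starts the batch process on first use.
func (r *Repository) ensureBatch() error {
	if r.batchStarted {
		return nil
	}
	// spawn "git cat-file --batch" here, only when first needed
	r.batchStarted = true
	return nil
}
```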
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Backport #31825 by @Zettat123
Fix #31395
This regression is introduced by #30273. To find out how GitHub handles
this case, I did [some
tests](https://github.com/go-gitea/gitea/issues/31395#issuecomment-2278929115).
In this PR I use a redirect instead of checking whether the
corresponding `.md` file exists when rendering the link, because GitHub
also uses a redirect.
With this PR, there is no need to resolve the raw wiki link when
rendering a wiki page. If a wiki link points to a raw file, access is
redirected to the raw link.
---------
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: yp05327 <576951401@qq.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Backport #31790 by @wolfogre
Fix #31271.
When gogit is enabled, `IsObjectExist` calls
`repo.gogitRepo.ResolveRevision`, which is not correct: that function
checks references, not objects. It happens to work with a commit hash,
since a commit hash is both a valid reference and a commit object, but
it does not work with blob objects.
So it causes #31271, because it reports that all blob objects do not
exist.
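A sketch of the corrected check using go-git's object storage directly (function name assumed):

```go
package git

import (
	gogit "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
)

// ResolveRevision answers "does this revision resolve?", which only happens
// to work for commit hashes; asking the object storage works for commits,
// trees, and blobs alike.
func isObjectExist(repo *gogit.Repository, name string) bool {
	return repo.Storer.HasEncodedObject(plumbing.NewHash(name)) == nil
}
```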
Co-authored-by: Jason Song <i@wolfogre.com>
Backport #31778 by @lunny
Fix #31738
When pushing a new branch, the old commit is the zero commit, which most
git commands cannot recognize. To get the files changed by the push, we
need to find the first commit where this branch diverged. In most
situations, we can check commits one by one until one is contained by
another branch, and treat that commit as the divergence point.
In a pre-receive hook this is more difficult, because the pushed commits
have not been merged yet and are actually stored by git in a temporary
place. So we need to pass some environment variables to let git know
those commits exist.
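For illustration, a minimal sketch of forwarding git's quarantine environment to a subcommand (helper name assumed):

```go
package hook

import (
	"os"
	"os/exec"
)

// gitCommandWithQuarantine forwards the quarantine-related variables that git
// sets for pre-receive hooks, so the subcommand can see the pushed but
// not-yet-accepted objects.
func gitCommandWithQuarantine(args ...string) *exec.Cmd {
	cmd := exec.Command("git", args...)
	cmd.Env = append(cmd.Env,
		"GIT_OBJECT_DIRECTORY="+os.Getenv("GIT_OBJECT_DIRECTORY"),
		"GIT_ALTERNATE_OBJECT_DIRECTORIES="+os.Getenv("GIT_ALTERNATE_OBJECT_DIRECTORIES"),
		"GIT_QUARANTINE_PATH="+os.Getenv("GIT_QUARANTINE_PATH"),
	)
	return cmd
}
```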
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Backport #31702 by @wolfogre
Fix #31137.
Replace #31623 and #31697.
When migrating LFS objects, if any object fails (for example because
some objects are lost, which is not really critical), Gitea stops
migrating LFS immediately but still treats the migration as successful.
This PR checks the error according to the [LFS api
doc](https://github.com/git-lfs/git-lfs/blob/main/docs/api/batch.md#successful-responses).
> LFS object error codes should match HTTP status codes where possible:
>
> - 404 - The object does not exist on the server.
> - 409 - The specified hash algorithm disagrees with the server's acceptable options.
> - 410 - The object was removed by the owner.
> - 422 - Validation error.
If the error is `404`, it's safe to ignore it and continue the
migration. Otherwise, stop the migration and mark it as failed to ensure
the data integrity of the LFS objects.
Maybe we should also ignore other errors (perhaps `410`? I'm not sure
what the difference is between "does not exist" and "removed by the
owner"); we can add them later if users report LFS migrations failing
because of an error that should be ignored.
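A sketch of the per-object check against the batch response (struct shapes follow the LFS batch API, names assumed):

```go
package lfs

import (
	"fmt"
	"net/http"
)

type ObjectError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

type BatchObject struct {
	Oid   string       `json:"oid"`
	Error *ObjectError `json:"error,omitempty"`
}

// checkObjects skips objects that are merely absent upstream (404) and
// aborts the migration on any other per-object error.
func checkObjects(objs []BatchObject) error {
	for _, obj := range objs {
		if obj.Error == nil {
			continue
		}
		if obj.Error.Code == http.StatusNotFound {
			continue // missing upstream; not critical, keep migrating
		}
		return fmt.Errorf("LFS object %s failed: %d %s", obj.Oid, obj.Error.Code, obj.Error.Message)
	}
	return nil
}
```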
Co-authored-by: Jason Song <i@wolfogre.com>
Backport #31548 by @brechtvl
Running `git update-index` for every individual file is slow, so add and
remove everything with a single git command.
When such a big commit lands in the default branch, it could cause PR
creation and patch checking for all open PRs to be slow, or time out
entirely. For example, a commit that removes 1383 files was measured to
take more than 60 seconds and timed out. With this change checking took
about a second.
This is related to #27967, though this will not help with commits that
change many lines in few files.
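For illustration, a sketch of the batched invocation (helper name assumed):

```go
package repo

import (
	"os/exec"
	"strings"
)

// updateIndexBatch stages additions and removals with one git process,
// feeding NUL-terminated paths on stdin instead of spawning git per file.
func updateIndexBatch(repoPath string, paths []string) error {
	cmd := exec.Command("git", "update-index", "--add", "--remove", "-z", "--stdin")
	cmd.Dir = repoPath
	cmd.Stdin = strings.NewReader(strings.Join(paths, "\x00") + "\x00")
	return cmd.Run()
}
```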
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Backport #31061 by @sergeyvfx
This change fixes cases where a Wiki page refers to a video stored in
the Wiki repository using a relative path. It follows the approach
already implemented for images.
Test plan:
- Create repository and Wiki page
- Clone the Wiki repository
- Add video to it, say `video.mp4`
- Modify the markdown file to refer to the video using `<video
src="video.mp4">`
- Commit the Wiki page
- Observe that the video is properly displayed
Co-authored-by: Sergey Sharybin <sergey.vfx@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Backport #31420 by charles7668
Co-authored-by: charles <30816317+charles7668@users.noreply.github.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Backport #31410 by tobiasbp
This PR modifies the structs for editing and creating org teams to allow
team names to be up to 255 characters. The previous maximum length was
30 characters.
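A sketch of the struct change (binding tags are an assumption, not the exact Gitea code):

```go
// CreateTeamForm previously capped TeamName at 30 characters.
type CreateTeamForm struct {
	TeamName string `binding:"Required;AlphaDashDot;MaxSize(255)"` // was MaxSize(30)
}
```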
Co-authored-by: Tobias Balle-Petersen <tobias.petersen@unity3d.com>
Backport #31299
Parse the base path and tree path so that media links can be correctly
created with `/media/`.
Resolves #31294
---------
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Backport #31319 by @lunny
Fix a hash rendering problem for references like `<hash>: xxxxx`, a
form commonly used in release notes.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Backport #31333 by @lunny
Fix #31330 and #31311
A workaround for existing databases is to update `object_format_name`
to `sha1` where it is empty or null.
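For example, such a manual fix could look like this (table name assumed to match Gitea's `repository` table):

```sql
UPDATE repository
SET object_format_name = 'sha1'
WHERE object_format_name IS NULL OR object_format_name = '';
```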
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Backport #31003 by wxiaoguang
Fix #31002
1. Mention that the `Host` and `X-Forwarded-Proto` headers must be correctly passed to Gitea
2. Clarify the basic requirements and move the "general configuration" to the top
3. Add a comment for the "container registry"
4. Use the 1.21 behavior if the reverse proxy is not correctly configured
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Backport #30805 by @lunny
Merging a PR may fail because of various problems. The pull request may
be left in a dirty state because there is no transaction around merging
a pull request. ref
https://github.com/go-gitea/gitea/pull/25741#issuecomment-2074126393
This PR moves all database update operations for merging a pull request
into the post-receive handler and wraps them in a database transaction.
That means that if the database operations fail, the git merge fails and
the git client gets a failure result.
There are already many tests for pull request merging, so we don't need
to add a new one.
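A self-contained sketch of the idea (table and column names are assumptions for illustration, not Gitea's schema):

```go
package pull

import (
	"context"
	"database/sql"
)

// markPullMerged wraps the merge bookkeeping in one transaction, so a failed
// update rolls back cleanly and the error propagates to the git client.
func markPullMerged(ctx context.Context, db *sql.DB, prID int64, commitID string) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	if _, err := tx.ExecContext(ctx,
		`UPDATE pull_request SET has_merged = ?, merged_commit_id = ? WHERE id = ?`,
		true, commitID, prID); err != nil {
		return err // rolled back: the repository state stays consistent
	}
	return tx.Commit()
}
```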
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Backport #30843 by wxiaoguang
Reduce the number of context lines to 1, make the "git grep" search
respect the include/exclude patterns, and fix #30785
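A sketch of what the resulting invocation can look like (helper name and patterns are illustrative):

```go
package code

import "os/exec"

// grepCmd searches with a single line of context and lets git itself apply
// include/exclude rules through pathspec magic.
func grepCmd(repoPath, keyword string) *exec.Cmd {
	cmd := exec.Command("git", "grep",
		"--context", "1", // reduced context
		"-e", keyword,
		"--", ":(glob)**/*.go", ":(exclude)vendor/**", // include/exclude patterns
	)
	cmd.Dir = repoPath
	return cmd
}
```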
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>