The StaticInit pass asserts that the operand of &v is a global,
but this is not so for the &autotemp desugaring of new(expr).
(The variable has by that point escaped to the heap, so
the object code calls runtime.newobject. A future optimization
would be to statically allocate the variable when it is safe
and advantageous to do so.)
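A hypothetical reproducer (our guess at the failing shape; the actual
test may differ):

	// new(expr) desugars to allocating an autotemp initialized with
	// expr and taking its address, so StaticInit sees &autotemp,
	// whose operand is not a global.
	var p = new(40 + 2)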
Thanks to khr for suggesting the fix.
+ static test
Fixes #77237
Change-Id: I71b34a1353fe0f3e297beab9851f8f87d765d8f1
Reviewed-on: https://go-review.googlesource.com/c/go/+/737680
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
This makes it easier to identify which functions are used for memory
allocation by looking for functions that start with mallocgc. The Size
suffix is added so that the isSpecializedMalloc function in
cmd/compile/internal/ssa can distinguish between the generated functions
and the mallocgcTiny function called by mallocgc, similar to the SC
suffixes for the mallocgcSmallNoScanSC* and mallocgcSmallScanNoHeaderSC*
functions.
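A hedged sketch of the kind of name test this enables (a hypothetical
illustration, not the actual isSpecializedMalloc implementation):

	import "strings"

	// The Size and SC suffixes let a plain name check separate the
	// generated specializations from helpers such as mallocgcTiny,
	// which also start with "mallocgc".
	func looksSpecialized(name string) bool {
		return strings.HasPrefix(name, "runtime.mallocgc") &&
			(strings.Contains(name, "Size") || strings.Contains(name, "SC"))
	}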
Change-Id: I6ad7f15617bf6f18ae5d1bfa2a0b94e86a6a6964
Reviewed-on: https://go-review.googlesource.com/c/go/+/735780
Reviewed-by: Michael Matloob <matloob@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Fix a pattern in test/codegen/comparisons.go to use a space instead
of a tab. The test needs the space to fail properly if the regalloc
change from CL 686655 were missing (in that case we would have a
spill immediately after the call to memequal, which this pattern is
meant to catch).
Change-Id: I5b6fb5e861b9327c7f071660592b8ffa265e0030
Reviewed-on: https://go-review.googlesource.com/c/go/+/727620
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Merge List:
+ 2025-12-08 a33bbf1988 weak: fix weak pointer test to correctly iterate over weak pointers after GC
+ 2025-12-08 a88a96330f cmd/cgo: use doc link for cgo.Handle
+ 2025-12-08 276cc4d3db cmd/link: fix AIX builds after recent linker changes
+ 2025-12-08 f2d96272cb runtime/trace: update TestSubscribers to dump traces
+ 2025-12-08 4837bcc92c internal/trace: skip tests for alloc/free experiment by default
+ 2025-12-08 b5f6816cea cmd/link: generate DWARF for moduledata
+ 2025-12-08 44a39c9dac runtime: only run TestNotInGoMetricCallback when building with cgo
+ 2025-12-08 3a6a034cd6 runtime: disable TestNotInGoMetricCallback on FreeBSD + race
+ 2025-12-08 4122d3e9ea runtime: use atomic C types with atomic C functions
+ 2025-12-08 34397865b1 runtime: deflake TestProfBufWakeup
+ 2025-12-08 d4972f6295 runtime: mark getfp as nosplit
+ 2025-12-05 0d0d5c9a82 test/codegen: test negation with add/sub on riscv64
+ 2025-12-05 2e509e61ef cmd/go: convert some more tests to script tests
+ 2025-12-05 c270e71835 cmd/go/internal/vet: skip -fix on pkgs from vendor or non-main mod
+ 2025-12-05 745349712e runtime: don't count nGsyscallNoP for extra Ms in C
+ 2025-12-05 f3d572d96a cmd/go: fix race applying fixes in fix and vet -fix modes
+ 2025-12-05 76345533f7 runtime: expand Pinner documentation
+ 2025-12-05 b133524c0f cmd/go/testdata/script: skip vet_cache in short mode
+ 2025-12-05 96e142ba2b runtime: skip TestArenaCollision if we run out of hints
+ 2025-12-05 fe4952f116 runtime: relax threadsSlack in TestReadMetricsSched
+ 2025-12-05 8947f092a8 runtime: skip mayMoreStackMove in goroutine leak tests
+ 2025-12-05 44cb82449e runtime/race: set missing argument frame for ppc64x atomic And/Or wrappers
+ 2025-12-05 435e61c801 runtime: reject any goroutine leak test failure that failed to execute
+ 2025-12-05 54e5540014 runtime: print output in case of segfault in goroutine leak tests
+ 2025-12-05 9616c33295 runtime: don't specify GOEXPERIMENT=greenteagc in goroutine leak tests
+ 2025-12-05 2244bd7eeb crypto/subtle: add speculation barrier after DIT
+ 2025-12-05 f84f8d86be cmd/compile: fix mis-infer bounds in slice len/cap calculations
+ 2025-12-05 a70addd3b3 all: fix some comment issues
+ 2025-12-05 93b49f773d internal/runtime/maps: clarify probeSeq doc comment
+ 2025-12-04 91267f0a70 all: update vendored x/tools
+ 2025-12-04 a753a9ed54 cmd/internal/fuzztest: move fuzz tests out of cmd/go test suite
+ 2025-12-04 1681c3b67f crypto: use rand.IsDefaultReader instead of comparing to boring.RandReader
+ 2025-12-04 7b67b68a0d cmd/compile: use isUnsignedPowerOfTwo rather than isPowerOfTwo for unsigneds
+ 2025-12-03 2b62144069 all: REVERSE MERGE dev.simd (9ac524a) into master
Change-Id: Ia0cdf06cdde89b6a4db30ed15ed8e0bcbac6ae30
Also removes a few leftover TODOs and scraps of commented-out code
from simd development.
Updated etetest.sh so that it behaves correctly whether or not amd64
implies the experiment.
Fixes #76473.
Change-Id: I6d9792214d7f514cb90c21b101dbf7d07c1d0e55
Reviewed-on: https://go-review.googlesource.com/c/go/+/728220
TryBot-Bypass: David Chase <drchase@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and reducing memory use by up to a factor of ~3 with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is to automatically free the
intermediate memory for the appends in the loop in this function:
	func f1(input []int) []int {
		var s []int
		for _, x := range input {
			s = append(s, g(x)) // s cannot be aliased here
			if h(x) {
				s = append(s, x) // s cannot be aliased here
			}
		}
		return s // slice escapes at end
	}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
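In user-level terms, the contract is roughly the following (a
simulation only; the real work happens inside the runtime, and
growNoAlias is a name made up for this sketch):

	// growNoAlias simulates one grow step: allocate a larger backing
	// store, copy, append, then release the old store, which the
	// compiler has proven unreachable.
	func growNoAlias(old []int, x int) []int {
		s := make([]int, len(old)+1, max(2*cap(old), len(old)+1))
		copy(s, old)
		s[len(old)] = x
		// runtime.growsliceNoAlias frees the old backing store via
		// runtime.freegc here; in pure Go we can only drop the
		// reference and let the GC reclaim it eventually.
		return s
	}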
The high-level approach here is to step through the IR starting
from a slice declaration and look for any operations that alias the
slice or might do so; any IR construct we don't specifically handle
counts as a potential alias, so we conservatively fall back to
treating the slice as aliased whenever we encounter something we
don't understand.
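For instance (an illustrative sketch, not taken from the CL), storing
the slice somewhere visible elsewhere marks it aliased:

	var global []int

	func f0(x int) []int {
		var s []int
		s = append(s, x) // aliased: s is stored on the next line
		global = s       // potential alias; the analysis gives up on s
		s = append(s, x) // the old backing store must not be freed
		return s
	}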
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
	func f2() {
		var s []int
		for i := range 10 {
			s = append(s, i) // aliased due to next line
			alias = s
		}
	}
For nested loops, we analyze the nesting appropriately so that,
for example, this append is still proven non-aliased in the
inner loop even though the slice is aliased in the outer loop:
	func f3() {
		for range 10 {
			var s []int
			for i := range 10 {
				s = append(s, i) // append using non-aliased slice
			}
			alias = s
		}
	}
A good starting point is the beginning of the test/escape_alias.go file,
which opens with ~10 introductory examples with brief comments that
illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
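For reference, a minimal sketch (our reconstruction; the CL's actual
benchmark may differ) of a benchmark of this shape:

	import "testing"

	type elem [8]int64 // 64-byte slice element

	var sink []elem

	func benchmarkAppend64Bytes(b *testing.B, count int) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			var s []elem
			for j := 0; j < count; j++ {
				s = append(s, elem{}) // builds the slice from nothing
			}
			sink = s // the final slice escapes, as in f1 above
		}
	}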
As the number of appends increases, for example to 5, we see a
~20% speed improvement and a ~45% memory reduction, and so on, until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports as cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25 that allows some <= 32 byte dynamically-sized slices to be on
the stack, so this CL is neutral for <= 32 bytes. Once the 5th append
occurs at count=5, a grow happens via the runtime and heap allocates
as normal, but freegc does not yet have anything to free, so we see
a small ~1.4ns penalty reported there. But once the second growth
happens, the older heap memory is now automatically freed by freegc,
so we start to see some benefit in memory reductions and speed
improvements, starting at a tiny speed improvement (close to a wash,
or maybe noise) by the second growth before count=10, and building up to
~2x faster with ~68% fewer allocated bytes reported.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and is able to get
runtime.freegc working automatically with iterators such as those used
by slices.Collect, which then becomes able to automatically free the
intermediate memory from its repeated appends (something that earlier
in this work required a temporary hand edit to the slices package).
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
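Illustratively (not from the CL):

	var a []int  // element type has no pointers: NoAlias path eligible
	var b []*int // element type has pointers: ordinary growslice for now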
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
Given a type definition of the form:

	type T RHS

the setDefType function would set T.fromRHS as soon as we knew its
top-level type. For instance, in:

	type S struct { ... }

S.fromRHS is set to a struct type before type-checking anything inside
the struct.
This permitted access to the (incomplete) RHS type in a cyclic type
declaration. Accessing this information is fraught (as it's incomplete),
but it was used for reporting certain kinds of cycles.
This CL replaces setDefType with a check that ensures no value of type
T is used before its RHS is set up. This is strictly more complete
than what setDefType achieved. For instance, it enables correct
reporting for the cycles below:
	type A [unsafe.Sizeof(A{})]int

	var v any = 42
	type B [v.(B)]int

	func f() C {
		return C{}
	}
	type C [unsafe.Sizeof(f())]int
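By contrast (an illustrative example, not from the CL), self-reference
that does not require a value of T remains valid:

	// Legal: the RHS mentions T only through a pointer, so no value
	// of T is needed before the RHS is set up.
	type T struct {
		next *T
	}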
Fixes #76383
Fixes #76384
Change-Id: I9dfab5b708013b418fa66e43362bb4d8483fedec
Reviewed-on: https://go-review.googlesource.com/c/go/+/724140
Auto-Submit: Mark Freeman <markfreeman@google.com>
Reviewed-by: Robert Griesemer <gri@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Introduce a new MemEq SSA operation for runtime.memequal. The operation
is initially implemented for arm64. The change adds opt rules (following
the existing rules for calls to runtime.memequal) that work with MemEq,
and a later-phase op, LoweredMemEq, which may be lowered differently for
more constant-size cases in the future (for other targets as well as
for arm64).
The new MemEq SSA operation does not have a memory result, allowing CSE
of load operations around it.
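For context, a comparison like the following typically compiles to a
call to runtime.memequal (an illustrative example; exact sizes and
targets vary):

	type key [64]byte

	// Equality of large comparable values compiles to runtime.memequal.
	// Since MemEq carries no memory result, loads around the comparison
	// can now be CSEd.
	func eq(a, b key) bool {
		return a == b
	}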
Code size difference (for arm64 linux):
	Executable    Old .text    New .text    Change
	-----------------------------------------------
	asm             1970420      1969668    -0.04%
	cgo             1741220      1740212    -0.06%
	compile         8956756      8959428    +0.03%
	cover           1879332      1878772    -0.03%
	link            2574116      2572660    -0.06%
	preprofile       867124       866820    -0.04%
	vet             2890404      2888596    -0.06%
Change-Id: I6ab507929b861884d17d5818cfbd152cf7879751
Reviewed-on: https://go-review.googlesource.com/c/go/+/686655
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Conflicts:
- src/cmd/compile/internal/typecheck/builtin.go
Merge List:
+ 2025-11-20 ca37d24e0b net/http: drop unused "broken" field from persistConn
+ 2025-11-20 4b740af56a cmd/internal/obj/x86: handle global reference in From3 in dynlink mode
+ 2025-11-20 790384c6c2 spec: adjust rule for type parameter on RHS of alias declaration
+ 2025-11-20 a49b0302d0 net/http: correctly close fake net.Conns
+ 2025-11-20 32f5aadd2f cmd/compile: stack allocate backing stores during append
+ 2025-11-20 a18aff8057 runtime: select GC mark workers during start-the-world
+ 2025-11-20 829779f4fe runtime: split findRunnableGCWorker in two
+ 2025-11-20 ab59569099 go/version: use "custom" as an example of a version suffix
+ 2025-11-19 c4bb9653ba cmd/compile: Implement LoweredZeroLoop with LSX Instruction on loong64
+ 2025-11-19 7f2ae21fb4 cmd/internal/obj/loong64: add MULW.D.W[U] instructions
+ 2025-11-19 a2946f2385 crypto: add Encapsulator and Decapsulator interfaces
+ 2025-11-19 6b83bd7146 crypto/ecdh: add KeyExchanger interface
+ 2025-11-19 4fef9f8b55 go/types, types2: fix object path for grouped declaration statements
+ 2025-11-19 33529db142 spec: escape double-ampersands
+ 2025-11-19 dc42565a20 cmd/compile: fix control flow for unsigned divisions proof relations
+ 2025-11-19 e64023dcbf cmd/compile: cleanup useless if statement in prove
+ 2025-11-19 2239520d1c test: go fmt prove.go tests
+ 2025-11-19 489d3dafb7 math: switch s390x math.Pow to generic implementation
+ 2025-11-18 8c41a482f9 runtime: add dlog.hexdump
+ 2025-11-18 e912618bd2 runtime: add hexdumper
+ 2025-11-18 2cf9d4b62f Revert "net/http: do not discard body content when closing it within request handlers"
+ 2025-11-18 4d0658bb08 cmd/compile: prefer fixed registers for values
+ 2025-11-18 ba634ca5c7 cmd/compile: fold boolean NOT into branches
+ 2025-11-18 8806d53c10 cmd/link: align sections, not symbols after DWARF compress
+ 2025-11-18 c93766007d runtime: do not print recovered when double panic with the same value
+ 2025-11-18 9859b43643 cmd/asm,cmd/compile,cmd/internal/obj/riscv: use compressed instructions on riscv64
+ 2025-11-17 b9ef0633f6 cmd/internal/sys,internal/goarch,runtime: enable the use of compressed instructions on riscv64
+ 2025-11-17 a087dea869 debug/elf: sync new loong64 relocation types up to LoongArch ELF psABI v20250521
+ 2025-11-17 e1a12c781f cmd/compile: use 32x32->64 multiplies on arm64
+ 2025-11-17 6caab99026 runtime: relax TestMemoryLimit on darwin a bit more
+ 2025-11-17 eda2e8c683 runtime: clear frame pointer at thread entry points
+ 2025-11-17 6919858338 runtime: rename findrunnable references to findRunnable
+ 2025-11-17 8e734ec954 go/ast: fix BasicLit.End position for raw strings containing \r
+ 2025-11-17 592775ec7d crypto/mlkem: avoid a few unnecessary inverse NTT calls
+ 2025-11-17 590cf18daf crypto/mlkem/mlkemtest: add derandomized Encapsulate768/1024
+ 2025-11-17 c12c337099 cmd/compile: teach prove about subtract idioms
+ 2025-11-17 bc15963813 cmd/compile: clean up prove pass
+ 2025-11-17 1297fae708 go/token: add (*File).End method
+ 2025-11-17 65c09eafdf runtime: hoist invariant code out of heapBitsSmallForAddrInline
+ 2025-11-17 594129b80c internal/runtime/maps: update doc for table.Clear
+ 2025-11-15 c58d075e9a crypto/rsa: deprecate PKCS#1 v1.5 encryption
+ 2025-11-14 d55ecea9e5 runtime: usleep before stealing runnext only if not in syscall
+ 2025-11-14 410ef44f00 cmd: update x/tools to 59ff18c
+ 2025-11-14 50128a2154 runtime: support runtime.freegc in size-specialized mallocs for noscan objects
+ 2025-11-14 c3708350a4 cmd/go: tests: rename git-min-vers->git-sha256
+ 2025-11-14 aea881230d std: fix printf("%q", int) mistakes
+ 2025-11-14 120f1874ef runtime: add more precise test of assist credit handling for runtime.freegc
+ 2025-11-14 fecfcaa4f6 runtime: add runtime.freegc to reduce GC work
+ 2025-11-14 5a347b775e runtime: set GOEXPERIMENT=runtimefreegc to disabled by default
+ 2025-11-14 1a03d0db3f runtime: skip tests for GOEXPERIMENT=arenas that do not handle clobberfree=1
+ 2025-11-14 cb0d9980f5 net/http: do not discard body content when closing it within request handlers
+ 2025-11-14 03ed43988f cmd/compile: allow multi-field structs to be stored directly in interfaces
+ 2025-11-14 1bb1f2bf0c runtime: put AddCleanup cleanup arguments in their own allocation
+ 2025-11-14 9fd2e44439 runtime: add AddCleanup benchmark
+ 2025-11-14 80c91eedbb runtime: ensure weak handles end up in their own allocation
+ 2025-11-14 7a8d0b5d53 runtime: add debug mode to extend _Grunning-without-P windows
+ 2025-11-14 710abf74da internal/runtime/cgobench: add Go function call benchmark for comparison
+ 2025-11-14 b24aec598b doc, cmd/internal/obj/riscv: document the riscv64 assembler
+ 2025-11-14 a0e738c657 cmd/compile/internal: remove incorrect riscv64 SLTI rule
+ 2025-11-14 2cdcc4150b cmd/compile: fold negation into multiplication
+ 2025-11-14 b57962b7c7 bytes: fix panic in bytes.Buffer.Peek
+ 2025-11-14 0a569528ea cmd/compile: optimize comparisons with single bit difference
+ 2025-11-14 1e5e6663e9 cmd/compile: remove unnecessary casts and types from riscv64 rules
+ 2025-11-14 ddd8558e61 go/types, types2: swap object.color for Checker.objPathIdx
+ 2025-11-14 9daaab305c cmd/link/internal/ld: make runtime.buildVersion with experiments valid
+ 2025-11-13 d50a571ddf test: fix tests to work with sizespecializedmalloc turned off
+ 2025-11-13 704f841eab cmd/trace: annotation proc start/stop with thread and proc always
+ 2025-11-13 17a02b9106 net/http: remove unused isLitOrSingle and isNotToken
+ 2025-11-13 ff61991aed cmd/go: fix flaky TestScript/mod_get_direct
+ 2025-11-13 129d0cb543 net/http/cgi: accept INCLUDED as protocol for server side includes
+ 2025-11-13 77c5130100 go/types: minor simplification
+ 2025-11-13 7601cd3880 go/types: generate cycles.go
+ 2025-11-13 7a372affd9 go/types, types2: rename definedType to declaredType and clarify docs
Change-Id: Ibaa9bdb982364892f80e511c1bb12661fcd5fb86
We can already stack allocate the backing store during append if the
resulting backing store doesn't escape. See CL 664299.
This CL enables us to often stack allocate the backing store during
append *even if* the result escapes. Typically, for code like:
	func f(n int) []int {
		var r []int
		for i := range n {
			r = append(r, i)
		}
		return r
	}
the backing store for r escapes, but only by returning it.
Could we operate with r on the stack for most of its lifetime,
and only move it to the heap at the return point?
The current implementation of append will need to do an allocation
each time it calls growslice. This will happen on the 1st, 2nd, 4th,
8th, etc. append calls. The allocations done by all but the
last growslice call will then immediately be garbage.
We'd like to avoid doing some of those intermediate allocations
if possible. We rewrite the above code by introducing a move2heap
operation:
	func f(n int) []int {
		var r []int
		for i := range n {
			r = append(r, i)
		}
		r = move2heap(r)
		return r
	}
Using the move2heap runtime function, which does:

	move2heap(r):
		If r is already backed by heap storage, return r.
		Otherwise, copy r to the heap and return the copy.
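A user-level sketch of these semantics (the real operation is a runtime
call inserted by the compiler; backingOnStack stands in for the
runtime's actual check):

	func move2heap(r []int, backingOnStack bool) []int {
		if !backingOnStack {
			return r // already heap-backed: nothing to do
		}
		h := make([]int, len(r), cap(r)) // heap copy
		copy(h, r)
		return h
	}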
Now we can treat the backing store of r allocated at the
append site as not escaping. Previous stack allocation
optimizations now apply, which can use a fixed-size
stack-allocated backing store for r when appending.
See the description in cmd/compile/internal/slice/slice.go
for how we ensure that this optimization is safe.
Change-Id: I81f36e58bade2241d07f67967d8d547fff5302b8
Reviewed-on: https://go-review.googlesource.com/c/go/+/707755
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>