Compare commits


10 Commits

Author SHA1 Message Date
Michael Anthony Knyszek
ec5170397c [release-branch.go1.17] go1.17
Change-Id: Ie59feb7f043574c7db4c7f14184603442638e4c0
Reviewed-on: https://go-review.googlesource.com/c/go/+/342470
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
2021-08-16 16:23:06 +00:00
Jeff Wentworth
fdd6dfd507 [release-branch.go1.17] sync/atomic: fix documentation for CompareAndSwap
The documentation for CompareAndSwap on sync/atomic's Value incorrectly
labelled the method as CompareAndSwapPointer. This PR fixes that.

Updates #47699.
Fixes #47703.

Change-Id: I6db08fdfe166570b775248fd24550f5d28e3434e
GitHub-Last-Rev: 41f7870792
GitHub-Pull-Request: golang/go#47700
Reviewed-on: https://go-review.googlesource.com/c/go/+/342210
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Trust: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Go Bot <gobot@golang.org>
(cherry picked from commit 0a0a160d4d)
Reviewed-on: https://go-review.googlesource.com/c/go/+/342329
Trust: Dmitri Shuralyov <dmitshur@golang.org>
Run-TryBot: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2021-08-15 22:50:19 +00:00
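
For context, a minimal sketch of the (*Value).CompareAndSwap method whose documentation the CL above fixes (Go 1.17+; the stored values here are illustrative):

package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var v atomic.Value
	v.Store("go1.16") // an atomic.Value holds interchangeable values of one concrete type

	// CompareAndSwap replaces the stored value only if it currently equals old.
	if v.CompareAndSwap("go1.16", "go1.17") {
		fmt.Println("swapped:", v.Load()) // swapped: go1.17
	}

	// A stale old value leaves the stored value unchanged and reports false.
	fmt.Println(v.CompareAndSwap("go1.16", "go1.18")) // false
}
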
Michael Pratt
f7b9470992 [release-branch.go1.17] runtime: drop SIGPROF while in ARM < 7 kernel helpers
On Linux ARMv6 and below, runtime/internal/atomic.Cas calls into a kernel
cas helper at a fixed address. If a SIGPROF arrives while the kernel
helper is executing, the sigprof lostAtomic logic will miss that we are
potentially in the spinlock critical section, which could cause a
deadlock when sigprof later uses atomics.

For #47505
Fixes #47688

Change-Id: If8ba0d0fc47e45d4e6c68eca98fac4c6ed4e43c1
Reviewed-on: https://go-review.googlesource.com/c/go/+/341889
Trust: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Pratt <mpratt@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
(cherry picked from commit 20a620fd9f)
Reviewed-on: https://go-review.googlesource.com/c/go/+/341890
2021-08-13 16:34:25 +00:00
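
The kernel cas helper mentioned above lives at a fixed high address on linux/arm (the kuser helper page), and the fix makes sigprof treat samples that land there like its other lostAtomic cases. A rough sketch of the idea, with assumed constants, not the runtime's actual code:

package sketch

// kuserHelperPage is the fixed page where the Linux ARM kuser helpers
// (including the cas helper used by runtime/internal/atomic.Cas on
// ARMv6 and below) are mapped. The mask check is an assumption made
// for illustration; the real check lives in runtime.sigprof.
const kuserHelperPage = 0xffff0000

// inKernelCasHelper reports whether the interrupted PC is inside the
// kernel helper page.
func inKernelCasHelper(pc uintptr) bool {
	return pc&0xffff0000 == kuserHelperPage
}

// onSIGPROF drops the sample instead of taking a traceback when the
// signal landed in the helper, since the traceback machinery may need
// the very spinlock the interrupted code holds.
func onSIGPROF(pc uintptr, lostAtomic *uint64) (takeTraceback bool) {
	if inKernelCasHelper(pc) {
		*lostAtomic++
		return false
	}
	return true
}
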
Russ Cox
4397d66bdd [release-branch.go1.17] time: fix docs for new comma layouts
The current text is slightly inaccurate. Make it more correct.

Change-Id: Iebe0051b74649d13982d7eefe3697f9e69c9b75d
Reviewed-on: https://go-review.googlesource.com/c/go/+/340449
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Jay Conrod <jayconrod@google.com>
Reviewed-by: Damien Neil <dneil@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-on: https://go-review.googlesource.com/c/go/+/341949
Trust: Dmitri Shuralyov <dmitshur@golang.org>
Run-TryBot: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2021-08-13 16:26:40 +00:00
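
The "comma layouts" referred to above allow a comma as the fractional-second separator in time layouts (added in Go 1.17); a small example with a made-up timestamp:

package main

import (
	"fmt"
	"time"
)

func main() {
	// ",000" works like ".000": a fractional second with three digits,
	// but written with a comma separator in the input.
	const layout = "2006-01-02 15:04:05,000"

	t, err := time.Parse(layout, "2021-08-13 16:26:40,123")
	if err != nil {
		panic(err)
	}
	fmt.Println(t.UTC()) // 2021-08-13 16:26:40.123 +0000 UTC
}
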
Dmitri Shuralyov
eb5a7b5050 [release-branch.go1.17] all: merge master (5805efc) into release-branch.go1.17
Merge List:

+ 2021-08-12 5805efc78e doc/go1.17: remove draft notice
+ 2021-08-12 39634e7dae CONTRIBUTORS: update for the Go 1.17 release
+ 2021-08-12 095bb790e1 os/exec: re-enable LookPathTest/16
+ 2021-08-11 dea23e9ca8 src/make.*: make --no-clean flag a no-op that prints a warning
+ 2021-08-11 d4c0ed26ac doc/go1.17: linker passes -I to extld as -Wl,--dynamic-linker
+ 2021-08-10 1f9c9d8530 doc: use "high address/low address" instead of "top/bottom"
+ 2021-08-09 f1dce319ff cmd/go: with -mod=vendor, don't panic if there are duplicate requirements
+ 2021-08-09 7aeaad5c86 runtime/cgo: when using msan explicitly unpoison cgoCallers
+ 2021-08-08 507cc341ec doc: add example for conversion from slice expressions to array ptr
+ 2021-08-07 891547e2d4 doc/go1.17: fix a typo introduced in CL 335135
+ 2021-08-06 8eaf4d16bc make.bash: do not overwrite GO_LDSO if already set
+ 2021-08-06 63b968f4f8 doc/go1.17: clarify Modules changes
+ 2021-08-06 70546f6404 runtime: allow arm64 SEH to be called if illegal instruction
+ 2021-08-05 fd45e267c2 runtime: warn that KeepAlive is not an unsafe.Pointer workaround
+ 2021-08-04 6e738868a7 net/http: speed up and deflake TestCancelRequestWhenSharingConnection
+ 2021-08-02 8a7ee4c51e io/fs: don't use absolute path in DirEntry.Name doc

Change-Id: Ie9ce48bf4e99457cde966c5f972b04140468fcf0
2021-08-12 14:04:53 -04:00
Alexander Rakoczy
72ab3ff68b [release-branch.go1.17] go1.17rc2
Change-Id: I1f1ada15d7365db3e48599fbaddd6a65a8e5d06c
Reviewed-on: https://go-review.googlesource.com/c/go/+/339130
Run-TryBot: Alexander Rakoczy <alex@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Alexander Rakoczy <alex@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2021-08-02 17:57:22 +00:00
Alexander Rakoczy
c3ccb77d1e [release-branch.go1.17] all: merge master (b8ca6e5) into release-branch.go1.17
Merge List:

+ 2021-07-31 b8ca6e59ed all: gofmt
+ 2021-07-30 b7a85e0003 net/http/httputil: close incoming ReverseProxy request body
+ 2021-07-29 70fd4e47d7 runtime: avoid possible preemption when returning from Go to C
+ 2021-07-28 9eee0ed439 cmd/go: fix go.mod file name printed in error messages for replacements
+ 2021-07-28 b39e0f461c runtime: don't crash on nil pointers in checkptrAlignment
+ 2021-07-27 7cd10c1149 cmd/go: use .mod instead of .zip to determine if version has go.mod file
+ 2021-07-27 c8cf0f74e4 cmd/go: add missing flag in UsageLine
+ 2021-07-27 7ba8e796c9 testing: clarify T.Name returns a distinct name of the running test
+ 2021-07-27 33ff155970 go/types: preserve untyped constants on the RHS of a shift expression
+ 2021-07-26 840e583ff3 runtime: correct variable name in comment
+ 2021-07-26 bfbb288574 runtime: remove adjustTimers counter
+ 2021-07-26 9c81fd53b3 cmd/vet: add missing copyright header
+ 2021-07-26 ecaa6816bf doc: clarify non-nil zero length slice to array pointer conversion
+ 2021-07-26 1868f8296e crypto/x509: update iOS bundled roots to version 55188.120.1.0.1
+ 2021-07-25 849b791129 spec: use consistent capitalization for rune literal hex constants
+ 2021-07-23 0914646ab9 doc/1.17: fix two dead rfc links
+ 2021-07-22 052da5717e cmd/compile: do not change field offset in ABI analysis
+ 2021-07-22 798ec73519 runtime: don't clear timerModifiedEarliest if adjustTimers is 0
+ 2021-07-22 fdb45acd1f runtime: move mem profile sampling into m-acquired section
+ 2021-07-21 3e48c0381f reflect: add missing copyright header
+ 2021-07-21 48c88f1b1b reflect: add Value.CanConvert
+ 2021-07-20 9e26569293 cmd/go: don't add C compiler ID to hash for standard library
+ 2021-07-20 d568e6e075 runtime/debug: skip TestPanicOnFault on netbsd/arm
+ 2021-07-19 c8f4e6152d spec: correct example comment in Conversions from slice to array
+ 2021-07-19 1d91551b73 time: correct typo in documentation for UnixMicro
+ 2021-07-19 404127c30f cmd/compile: fix off-by-one error in traceback argument counting
+ 2021-07-19 6298cfe672 cmd/compile: fix typo in fatal message of builtinCall
+ 2021-07-19 49402bee36 cmd/{compile,link}: fix bug in map.zero handling
+ 2021-07-18 a66190ecee test/bench/go1: fix size for RegexpMatchMedium_32
+ 2021-07-18 650fc2117a text/scanner: use Go convention in Position doc comment
+ 2021-07-16 aa4e0f528e net/http:  correct capitalization in cancelTimeBody comment
+ 2021-07-15 0941dbca6a testing: clarify in docs that TestMain is advanced
+ 2021-07-15 69728ead87 cmd/go: update error messages in tests to match CL 332573
+ 2021-07-15 c1cc9f9c3d cmd/compile: fix lookup package of redeclared dot import symbol
+ 2021-07-15 21a04e3335 doc/go1.17: mention GOARCH=loong64
+ 2021-07-14 2b00a54baf go/build, runtime/internal/sys: reserve GOARCH=loong64
+ 2021-07-14 60ddf42b46 cmd/go: change link in error message from /wiki to /doc.
+ 2021-07-13 d8f348a589 cmd/go: remove a duplicated word from 'go help mod graph'

Change-Id: I63a540ba823bcbde7249348999ac5a7d9908b531
2021-08-02 11:56:53 -04:00
Dmitri Shuralyov
c3b47cb598 [release-branch.go1.17] update codereview.cfg for release-branch.go1.17
The initial "branch: master" configuration was inherited from when
the release-branch.go1.17 branch was branched off the master branch.
Update the "branch" key; otherwise, a modern git-codereview will mail
CLs to the wrong branch.

We can set "parent-branch" so that 'git codereview sync-branch' works
to merge the latest master into release-branch.go1.17, something we want
to do for subsequent RCs, and maybe up to the final release. (At some point
the release freeze will end and the tree will open for Go 1.18 development,
so we'll switch to using cherry-picks only. Having parent-branch will not
be useful then, but it is harmless.)

Change-Id: I36ab977eaeb52bc0a84cc59e807572054f54c789
Reviewed-on: https://go-review.googlesource.com/c/go/+/334376
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Trust: Dmitri Shuralyov <dmitshur@golang.org>
2021-07-13 22:45:26 +00:00
Cherry Mui
ddfd72f7d1 [release-branch.go1.17] go1.17rc1
Change-Id: Idc207e34c54b9bff0ae59348e4bc97474b1c11d2
Reviewed-on: https://go-review.googlesource.com/c/go/+/334254
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Cherry Mui <cherryyz@google.com>
2021-07-13 17:12:38 +00:00
Cherry Mui
d6f4d9a2be [release-branch.go1.17] all: merge master into release-branch.go1.17
a98589711d crypto/tls: test key type when casting
cfbd73ba33 doc/go1.17: editing pass over the "Compiler" section
ab4085ce84 runtime/pprof: call runtime.GC twice in memory profile test

Change-Id: I5b19d559629353886752e2a73ce8f37f983772df
2021-07-13 11:17:41 -04:00
1275 changed files with 21899 additions and 46373 deletions

VERSION (new file, 1 line)

@@ -0,0 +1 @@
go1.17

View File

@@ -1,108 +0,0 @@
pkg syscall (darwin-amd64), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (darwin-amd64), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (darwin-amd64), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (darwin-amd64), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (darwin-amd64-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (darwin-amd64-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (darwin-amd64-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (darwin-amd64-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (freebsd-386), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (freebsd-386), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (freebsd-386), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (freebsd-386), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (freebsd-386-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (freebsd-386-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (freebsd-386-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (freebsd-386-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (freebsd-amd64), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (freebsd-amd64), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (freebsd-amd64), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (freebsd-amd64), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (freebsd-amd64-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (freebsd-amd64-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (freebsd-amd64-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (freebsd-amd64-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (freebsd-arm), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (freebsd-arm), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (freebsd-arm), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (freebsd-arm), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (freebsd-arm-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (freebsd-arm-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (freebsd-arm-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (freebsd-arm-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (linux-386), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (linux-386), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (linux-386), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (linux-386), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (linux-386-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (linux-386-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (linux-386-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (linux-386-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (linux-amd64), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (linux-amd64), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (linux-amd64), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (linux-amd64), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (linux-amd64-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (linux-amd64-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (linux-amd64-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (linux-amd64-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (linux-arm), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (linux-arm), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (linux-arm), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (linux-arm), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (linux-arm-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (linux-arm-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (linux-arm-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (linux-arm-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-386), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-386), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-386), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-386), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-386-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-386-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-386-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-386-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-amd64), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-amd64), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-amd64), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-amd64), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-amd64-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-amd64-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-amd64-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-amd64-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-arm), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-arm), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-arm), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-arm), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-arm-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-arm-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-arm-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-arm-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-arm64), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-arm64), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-arm64), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-arm64), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (netbsd-arm64-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (netbsd-arm64-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (netbsd-arm64-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (netbsd-arm64-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (openbsd-386), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (openbsd-386), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (openbsd-386), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (openbsd-386), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (openbsd-386-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (openbsd-386-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (openbsd-386-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (openbsd-386-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (openbsd-amd64), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (openbsd-amd64), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (openbsd-amd64), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (openbsd-amd64), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (openbsd-amd64-cgo), func RecvfromInet4(int, []uint8, int, *SockaddrInet4) (int, error)
pkg syscall (openbsd-amd64-cgo), func RecvfromInet6(int, []uint8, int, *SockaddrInet6) (int, error)
pkg syscall (openbsd-amd64-cgo), func SendtoInet4(int, []uint8, int, SockaddrInet4) error
pkg syscall (openbsd-amd64-cgo), func SendtoInet6(int, []uint8, int, SockaddrInet6) error
pkg syscall (windows-386), func WSASendtoInet4(Handle, *WSABuf, uint32, *uint32, uint32, SockaddrInet4, *Overlapped, *uint8) error
pkg syscall (windows-386), func WSASendtoInet6(Handle, *WSABuf, uint32, *uint32, uint32, SockaddrInet6, *Overlapped, *uint8) error
pkg syscall (windows-amd64), func WSASendtoInet4(Handle, *WSABuf, uint32, *uint32, uint32, SockaddrInet4, *Overlapped, *uint8) error
pkg syscall (windows-amd64), func WSASendtoInet6(Handle, *WSABuf, uint32, *uint32, uint32, SockaddrInet6, *Overlapped, *uint8) error

View File

@@ -1,2 +1,2 @@
branch: dev.cmdgo
branch: release-branch.go1.17
parent-branch: master

doc/go1.17.html (new file, 1240 lines); diff suppressed because it is too large.

View File

@@ -1,101 +0,0 @@
<!--{
"Title": "Go 1.18 Release Notes",
"Path": "/doc/go1.18"
}-->
<!--
NOTE: In this document and others in this directory, the convention is to
set fixed-width phrases with non-fixed-width spaces, as in
<code>hello</code> <code>world</code>.
Do not send CLs removing the interior tags from such phrases.
-->
<style>
main ul li { margin: 0.5em 0; }
</style>
<h2 id="introduction">DRAFT RELEASE NOTES — Introduction to Go 1.18</h2>
<p>
<strong>
Go 1.18 is not yet released. These are work-in-progress
release notes. Go 1.18 is expected to be released in February 2022.
</strong>
</p>
<h2 id="language">Changes to the language</h2>
<p>
TODO: complete this section
</p>
<h2 id="ports">Ports</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="tools">Tools</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h3 id="go-command">Go command</h3>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="runtime">Runtime</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="compiler">Compiler</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="linker">Linker</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="library">Core library</h2>
<p>
TODO: complete this section
</p>
<h3 id="minor_library_changes">Minor changes to the library</h3>
<p>
As always, there are various minor changes and updates to the library,
made with the Go 1 <a href="/doc/go1compat">promise of compatibility</a>
in mind.
</p>
<p>
TODO: complete this section
</p>
<dl id="syscall"><dt><a href="/pkg/syscall/">syscall</a></dt>
<dd>
<p><!-- CL 336550 -->
The new function <a href="/pkg/syscall/?GOOS=windows#SyscallN"><code>SyscallN</code></a>
has been introduced for Windows, allowing calls with an arbitrary number
of arguments. As a result,
<a href="/pkg/syscall/?GOOS=windows#Syscall"><code>Syscall</code></a>,
<a href="/pkg/syscall/?GOOS=windows#Syscall6"><code>Syscall6</code></a>,
<a href="/pkg/syscall/?GOOS=windows#Syscall9"><code>Syscall9</code></a>,
<a href="/pkg/syscall/?GOOS=windows#Syscall12"><code>Syscall12</code></a>,
<a href="/pkg/syscall/?GOOS=windows#Syscall15"><code>Syscall15</code></a>, and
<a href="/pkg/syscall/?GOOS=windows#Syscall18"><code>Syscall18</code></a> are
deprecated in favor of <a href="/pkg/syscall/?GOOS=windows#SyscallN"><code>SyscallN</code></a>.
</p>
</dd>
</dl><!-- syscall -->
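
The draft text above describes syscall.SyscallN, which did land in Go 1.18 on Windows; a hedged, Windows-only sketch of a call through it (DLL and procedure chosen only for illustration):

//go:build windows

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// With SyscallN there is no need to pick Syscall/Syscall6/... to match
	// the argument count; extra arguments would simply be appended.
	kernel32 := syscall.NewLazyDLL("kernel32.dll")
	getTickCount64 := kernel32.NewProc("GetTickCount64")

	// GetTickCount64 takes no arguments and cannot fail, so the Errno is ignored.
	r1, _, _ := syscall.SyscallN(getTickCount64.Addr())
	fmt.Println("milliseconds since boot:", r1)
}
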

View File

@@ -1,6 +1,6 @@
<!--{
"Title": "The Go Programming Language Specification",
"Subtitle": "Version of Aug 23, 2021",
"Subtitle": "Version of Jul 26, 2021",
"Path": "/ref/spec"
}-->
@@ -3000,18 +3000,6 @@ method value; the saved copy is then used as the receiver in any calls,
which may be executed later.
</p>
<pre>
type S struct { *T }
type T int
func (t T) M() { print(t) }
t := new(T)
s := S{T: t}
f := t.M // receiver *t is evaluated and stored in f
g := s.M // receiver *(s.T) is evaluated and stored in g
*t = 42 // does not affect stored receivers in f and g
</pre>
<p>
The type <code>T</code> may be an interface or non-interface type.
</p>

View File

@@ -4,7 +4,7 @@ The IANA asserts that the database is in the public domain.
For more information, see
https://www.iana.org/time-zones
ftp://ftp.iana.org/tz/code/tz-link.html
https://datatracker.ietf.org/doc/html/rfc6557
ftp://ftp.iana.org/tz/code/tz-link.htm
http://tools.ietf.org/html/rfc6557
To rebuild the archive, read and run update.bash.

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ignore
// +build ignore
// This program can be used as go_android_GOARCH_exec by the Go tool.

View File

@@ -91,18 +91,10 @@ func main() {
// issue 26745
_ = func(i int) int {
// typecheck reports at column 14 ('+'), but types2 reports at
// column 10 ('C').
// TODO(mdempsky): Investigate why, and see if types2 can be
// updated to match typecheck behavior.
return C.i + 1 // ERROR HERE: \b(10|14)\b
return C.i + 1 // ERROR HERE: 14
}
_ = func(i int) {
// typecheck reports at column 7 ('('), but types2 reports at
// column 8 ('i'). The types2 position is more correct, but
// updating typecheck here is fundamentally challenging because of
// IR limitations.
C.fi(i) // ERROR HERE: \b(7|8)\b
C.fi(i) // ERROR HERE: 7
}
C.fi = C.fi // ERROR HERE

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ignore
// +build ignore
// Compute Fibonacci numbers with two goroutines

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ignore
// +build ignore
package main

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && freebsd && openbsd
// +build linux,freebsd,openbsd
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows
// +build !windows
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && cgo
// +build linux,cgo
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows
// +build !windows
// Issue 18146: pthread_create failure during syscall.Exec.

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build darwin && cgo && !internal
// +build darwin,cgo,!internal
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !darwin || !cgo || internal
// +build !darwin !cgo internal
package cgotest

View File

@@ -2,9 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows && !static && (!darwin || (!internal_pie && !arm64))
// +build !windows
// +build !static
// +build !windows,!static
// +build !darwin !internal_pie,!arm64
// Excluded in darwin internal linking PIE mode, as dynamic export is not

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build windows || static || (darwin && internal_pie) || (darwin && arm64)
// +build windows static darwin,internal_pie darwin,arm64
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !android
// +build !android
// Test that pthread_cancel works as expected

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows
// +build !windows
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !android
// +build !android
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows && !android
// +build !windows,!android
// Test that the Go runtime still works if C code changes the signal stack.

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows
// +build !windows
package cgotest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows
// +build !windows
package cgotest

View File

@@ -1,9 +0,0 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cgotest
// Issue 43639: No runtime test needed, make sure package cgotest/issue43639 compiles well.
import _ "cgotest/issue43639"

View File

@@ -1,8 +0,0 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package issue43639
// #cgo CFLAGS: -W -Wall -Werror
import "C"

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !cgo
// +build !cgo
package so_test

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build cgo
// +build cgo
package so_test

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !cgo
// +build !cgo
package so_test

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build cgo
// +build cgo
package so_test

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !windows
// +build !windows
package cgotlstest

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ignore
// +build ignore
// detect attempts to autodetect the correct

View File

@@ -2,7 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build explicit
// +build explicit
// Package experiment_toolid_test verifies that GOEXPERIMENT settings built

View File

@@ -316,10 +316,10 @@ func invertSparseEntries(src []sparseEntry, size int64) []sparseEntry {
// fileState tracks the number of logical (includes sparse holes) and physical
// (actual in tar archive) bytes remaining for the current file.
//
// Invariant: logicalRemaining >= physicalRemaining
// Invariant: LogicalRemaining >= PhysicalRemaining
type fileState interface {
logicalRemaining() int64
physicalRemaining() int64
LogicalRemaining() int64
PhysicalRemaining() int64
}
// allowedFormats determines which formats can be used.
@@ -413,22 +413,22 @@ func (h Header) allowedFormats() (format Format, paxHdrs map[string]string, err
// Check basic fields.
var blk block
v7 := blk.toV7()
ustar := blk.toUSTAR()
gnu := blk.toGNU()
verifyString(h.Name, len(v7.name()), "Name", paxPath)
verifyString(h.Linkname, len(v7.linkName()), "Linkname", paxLinkpath)
verifyString(h.Uname, len(ustar.userName()), "Uname", paxUname)
verifyString(h.Gname, len(ustar.groupName()), "Gname", paxGname)
verifyNumeric(h.Mode, len(v7.mode()), "Mode", paxNone)
verifyNumeric(int64(h.Uid), len(v7.uid()), "Uid", paxUid)
verifyNumeric(int64(h.Gid), len(v7.gid()), "Gid", paxGid)
verifyNumeric(h.Size, len(v7.size()), "Size", paxSize)
verifyNumeric(h.Devmajor, len(ustar.devMajor()), "Devmajor", paxNone)
verifyNumeric(h.Devminor, len(ustar.devMinor()), "Devminor", paxNone)
verifyTime(h.ModTime, len(v7.modTime()), "ModTime", paxMtime)
verifyTime(h.AccessTime, len(gnu.accessTime()), "AccessTime", paxAtime)
verifyTime(h.ChangeTime, len(gnu.changeTime()), "ChangeTime", paxCtime)
v7 := blk.V7()
ustar := blk.USTAR()
gnu := blk.GNU()
verifyString(h.Name, len(v7.Name()), "Name", paxPath)
verifyString(h.Linkname, len(v7.LinkName()), "Linkname", paxLinkpath)
verifyString(h.Uname, len(ustar.UserName()), "Uname", paxUname)
verifyString(h.Gname, len(ustar.GroupName()), "Gname", paxGname)
verifyNumeric(h.Mode, len(v7.Mode()), "Mode", paxNone)
verifyNumeric(int64(h.Uid), len(v7.UID()), "Uid", paxUid)
verifyNumeric(int64(h.Gid), len(v7.GID()), "Gid", paxGid)
verifyNumeric(h.Size, len(v7.Size()), "Size", paxSize)
verifyNumeric(h.Devmajor, len(ustar.DevMajor()), "Devmajor", paxNone)
verifyNumeric(h.Devminor, len(ustar.DevMinor()), "Devminor", paxNone)
verifyTime(h.ModTime, len(v7.ModTime()), "ModTime", paxMtime)
verifyTime(h.AccessTime, len(gnu.AccessTime()), "AccessTime", paxAtime)
verifyTime(h.ChangeTime, len(gnu.ChangeTime()), "ChangeTime", paxCtime)
// Check for header-only types.
var whyOnlyPAX, whyOnlyGNU string

View File

@@ -156,28 +156,28 @@ var zeroBlock block
type block [blockSize]byte
// Convert block to any number of formats.
func (b *block) toV7() *headerV7 { return (*headerV7)(b) }
func (b *block) toGNU() *headerGNU { return (*headerGNU)(b) }
func (b *block) toSTAR() *headerSTAR { return (*headerSTAR)(b) }
func (b *block) toUSTAR() *headerUSTAR { return (*headerUSTAR)(b) }
func (b *block) toSparse() sparseArray { return sparseArray(b[:]) }
func (b *block) V7() *headerV7 { return (*headerV7)(b) }
func (b *block) GNU() *headerGNU { return (*headerGNU)(b) }
func (b *block) STAR() *headerSTAR { return (*headerSTAR)(b) }
func (b *block) USTAR() *headerUSTAR { return (*headerUSTAR)(b) }
func (b *block) Sparse() sparseArray { return sparseArray(b[:]) }
// GetFormat checks that the block is a valid tar header based on the checksum.
// It then attempts to guess the specific format based on magic values.
// If the checksum fails, then FormatUnknown is returned.
func (b *block) getFormat() Format {
func (b *block) GetFormat() Format {
// Verify checksum.
var p parser
value := p.parseOctal(b.toV7().chksum())
chksum1, chksum2 := b.computeChecksum()
value := p.parseOctal(b.V7().Chksum())
chksum1, chksum2 := b.ComputeChecksum()
if p.err != nil || (value != chksum1 && value != chksum2) {
return FormatUnknown
}
// Guess the magic values.
magic := string(b.toUSTAR().magic())
version := string(b.toUSTAR().version())
trailer := string(b.toSTAR().trailer())
magic := string(b.USTAR().Magic())
version := string(b.USTAR().Version())
trailer := string(b.STAR().Trailer())
switch {
case magic == magicUSTAR && trailer == trailerSTAR:
return formatSTAR
@@ -190,23 +190,23 @@ func (b *block) getFormat() Format {
}
}
// setFormat writes the magic values necessary for specified format
// SetFormat writes the magic values necessary for specified format
// and then updates the checksum accordingly.
func (b *block) setFormat(format Format) {
func (b *block) SetFormat(format Format) {
// Set the magic values.
switch {
case format.has(formatV7):
// Do nothing.
case format.has(FormatGNU):
copy(b.toGNU().magic(), magicGNU)
copy(b.toGNU().version(), versionGNU)
copy(b.GNU().Magic(), magicGNU)
copy(b.GNU().Version(), versionGNU)
case format.has(formatSTAR):
copy(b.toSTAR().magic(), magicUSTAR)
copy(b.toSTAR().version(), versionUSTAR)
copy(b.toSTAR().trailer(), trailerSTAR)
copy(b.STAR().Magic(), magicUSTAR)
copy(b.STAR().Version(), versionUSTAR)
copy(b.STAR().Trailer(), trailerSTAR)
case format.has(FormatUSTAR | FormatPAX):
copy(b.toUSTAR().magic(), magicUSTAR)
copy(b.toUSTAR().version(), versionUSTAR)
copy(b.USTAR().Magic(), magicUSTAR)
copy(b.USTAR().Version(), versionUSTAR)
default:
panic("invalid format")
}
@@ -214,17 +214,17 @@ func (b *block) setFormat(format Format) {
// Update checksum.
// This field is special in that it is terminated by a NULL then space.
var f formatter
field := b.toV7().chksum()
chksum, _ := b.computeChecksum() // Possible values are 256..128776
field := b.V7().Chksum()
chksum, _ := b.ComputeChecksum() // Possible values are 256..128776
f.formatOctal(field[:7], chksum) // Never fails since 128776 < 262143
field[7] = ' '
}
// computeChecksum computes the checksum for the header block.
// ComputeChecksum computes the checksum for the header block.
// POSIX specifies a sum of the unsigned byte values, but the Sun tar used
// signed byte values.
// We compute and return both.
func (b *block) computeChecksum() (unsigned, signed int64) {
func (b *block) ComputeChecksum() (unsigned, signed int64) {
for i, c := range b {
if 148 <= i && i < 156 {
c = ' ' // Treat the checksum field itself as all spaces.
@@ -236,68 +236,68 @@ func (b *block) computeChecksum() (unsigned, signed int64) {
}
// Reset clears the block with all zeros.
func (b *block) reset() {
func (b *block) Reset() {
*b = block{}
}
type headerV7 [blockSize]byte
func (h *headerV7) name() []byte { return h[000:][:100] }
func (h *headerV7) mode() []byte { return h[100:][:8] }
func (h *headerV7) uid() []byte { return h[108:][:8] }
func (h *headerV7) gid() []byte { return h[116:][:8] }
func (h *headerV7) size() []byte { return h[124:][:12] }
func (h *headerV7) modTime() []byte { return h[136:][:12] }
func (h *headerV7) chksum() []byte { return h[148:][:8] }
func (h *headerV7) typeFlag() []byte { return h[156:][:1] }
func (h *headerV7) linkName() []byte { return h[157:][:100] }
func (h *headerV7) Name() []byte { return h[000:][:100] }
func (h *headerV7) Mode() []byte { return h[100:][:8] }
func (h *headerV7) UID() []byte { return h[108:][:8] }
func (h *headerV7) GID() []byte { return h[116:][:8] }
func (h *headerV7) Size() []byte { return h[124:][:12] }
func (h *headerV7) ModTime() []byte { return h[136:][:12] }
func (h *headerV7) Chksum() []byte { return h[148:][:8] }
func (h *headerV7) TypeFlag() []byte { return h[156:][:1] }
func (h *headerV7) LinkName() []byte { return h[157:][:100] }
type headerGNU [blockSize]byte
func (h *headerGNU) v7() *headerV7 { return (*headerV7)(h) }
func (h *headerGNU) magic() []byte { return h[257:][:6] }
func (h *headerGNU) version() []byte { return h[263:][:2] }
func (h *headerGNU) userName() []byte { return h[265:][:32] }
func (h *headerGNU) groupName() []byte { return h[297:][:32] }
func (h *headerGNU) devMajor() []byte { return h[329:][:8] }
func (h *headerGNU) devMinor() []byte { return h[337:][:8] }
func (h *headerGNU) accessTime() []byte { return h[345:][:12] }
func (h *headerGNU) changeTime() []byte { return h[357:][:12] }
func (h *headerGNU) sparse() sparseArray { return sparseArray(h[386:][:24*4+1]) }
func (h *headerGNU) realSize() []byte { return h[483:][:12] }
func (h *headerGNU) V7() *headerV7 { return (*headerV7)(h) }
func (h *headerGNU) Magic() []byte { return h[257:][:6] }
func (h *headerGNU) Version() []byte { return h[263:][:2] }
func (h *headerGNU) UserName() []byte { return h[265:][:32] }
func (h *headerGNU) GroupName() []byte { return h[297:][:32] }
func (h *headerGNU) DevMajor() []byte { return h[329:][:8] }
func (h *headerGNU) DevMinor() []byte { return h[337:][:8] }
func (h *headerGNU) AccessTime() []byte { return h[345:][:12] }
func (h *headerGNU) ChangeTime() []byte { return h[357:][:12] }
func (h *headerGNU) Sparse() sparseArray { return sparseArray(h[386:][:24*4+1]) }
func (h *headerGNU) RealSize() []byte { return h[483:][:12] }
type headerSTAR [blockSize]byte
func (h *headerSTAR) v7() *headerV7 { return (*headerV7)(h) }
func (h *headerSTAR) magic() []byte { return h[257:][:6] }
func (h *headerSTAR) version() []byte { return h[263:][:2] }
func (h *headerSTAR) userName() []byte { return h[265:][:32] }
func (h *headerSTAR) groupName() []byte { return h[297:][:32] }
func (h *headerSTAR) devMajor() []byte { return h[329:][:8] }
func (h *headerSTAR) devMinor() []byte { return h[337:][:8] }
func (h *headerSTAR) prefix() []byte { return h[345:][:131] }
func (h *headerSTAR) accessTime() []byte { return h[476:][:12] }
func (h *headerSTAR) changeTime() []byte { return h[488:][:12] }
func (h *headerSTAR) trailer() []byte { return h[508:][:4] }
func (h *headerSTAR) V7() *headerV7 { return (*headerV7)(h) }
func (h *headerSTAR) Magic() []byte { return h[257:][:6] }
func (h *headerSTAR) Version() []byte { return h[263:][:2] }
func (h *headerSTAR) UserName() []byte { return h[265:][:32] }
func (h *headerSTAR) GroupName() []byte { return h[297:][:32] }
func (h *headerSTAR) DevMajor() []byte { return h[329:][:8] }
func (h *headerSTAR) DevMinor() []byte { return h[337:][:8] }
func (h *headerSTAR) Prefix() []byte { return h[345:][:131] }
func (h *headerSTAR) AccessTime() []byte { return h[476:][:12] }
func (h *headerSTAR) ChangeTime() []byte { return h[488:][:12] }
func (h *headerSTAR) Trailer() []byte { return h[508:][:4] }
type headerUSTAR [blockSize]byte
func (h *headerUSTAR) v7() *headerV7 { return (*headerV7)(h) }
func (h *headerUSTAR) magic() []byte { return h[257:][:6] }
func (h *headerUSTAR) version() []byte { return h[263:][:2] }
func (h *headerUSTAR) userName() []byte { return h[265:][:32] }
func (h *headerUSTAR) groupName() []byte { return h[297:][:32] }
func (h *headerUSTAR) devMajor() []byte { return h[329:][:8] }
func (h *headerUSTAR) devMinor() []byte { return h[337:][:8] }
func (h *headerUSTAR) prefix() []byte { return h[345:][:155] }
func (h *headerUSTAR) V7() *headerV7 { return (*headerV7)(h) }
func (h *headerUSTAR) Magic() []byte { return h[257:][:6] }
func (h *headerUSTAR) Version() []byte { return h[263:][:2] }
func (h *headerUSTAR) UserName() []byte { return h[265:][:32] }
func (h *headerUSTAR) GroupName() []byte { return h[297:][:32] }
func (h *headerUSTAR) DevMajor() []byte { return h[329:][:8] }
func (h *headerUSTAR) DevMinor() []byte { return h[337:][:8] }
func (h *headerUSTAR) Prefix() []byte { return h[345:][:155] }
type sparseArray []byte
func (s sparseArray) entry(i int) sparseElem { return sparseElem(s[i*24:]) }
func (s sparseArray) isExtended() []byte { return s[24*s.maxEntries():][:1] }
func (s sparseArray) maxEntries() int { return len(s) / 24 }
func (s sparseArray) Entry(i int) sparseElem { return sparseElem(s[i*24:]) }
func (s sparseArray) IsExtended() []byte { return s[24*s.MaxEntries():][:1] }
func (s sparseArray) MaxEntries() int { return len(s) / 24 }
type sparseElem []byte
func (s sparseElem) offset() []byte { return s[00:][:12] }
func (s sparseElem) length() []byte { return s[12:][:12] }
func (s sparseElem) Offset() []byte { return s[00:][:12] }
func (s sparseElem) Length() []byte { return s[12:][:12] }

View File

@@ -65,7 +65,7 @@ func (tr *Reader) next() (*Header, error) {
format := FormatUSTAR | FormatPAX | FormatGNU
for {
// Discard the remainder of the file and any padding.
if err := discard(tr.r, tr.curr.physicalRemaining()); err != nil {
if err := discard(tr.r, tr.curr.PhysicalRemaining()); err != nil {
return nil, err
}
if _, err := tryReadFull(tr.r, tr.blk[:tr.pad]); err != nil {
@@ -355,7 +355,7 @@ func (tr *Reader) readHeader() (*Header, *block, error) {
}
// Verify the header matches a known format.
format := tr.blk.getFormat()
format := tr.blk.GetFormat()
if format == FormatUnknown {
return nil, nil, ErrHeader
}
@@ -364,30 +364,30 @@ func (tr *Reader) readHeader() (*Header, *block, error) {
hdr := new(Header)
// Unpack the V7 header.
v7 := tr.blk.toV7()
hdr.Typeflag = v7.typeFlag()[0]
hdr.Name = p.parseString(v7.name())
hdr.Linkname = p.parseString(v7.linkName())
hdr.Size = p.parseNumeric(v7.size())
hdr.Mode = p.parseNumeric(v7.mode())
hdr.Uid = int(p.parseNumeric(v7.uid()))
hdr.Gid = int(p.parseNumeric(v7.gid()))
hdr.ModTime = time.Unix(p.parseNumeric(v7.modTime()), 0)
v7 := tr.blk.V7()
hdr.Typeflag = v7.TypeFlag()[0]
hdr.Name = p.parseString(v7.Name())
hdr.Linkname = p.parseString(v7.LinkName())
hdr.Size = p.parseNumeric(v7.Size())
hdr.Mode = p.parseNumeric(v7.Mode())
hdr.Uid = int(p.parseNumeric(v7.UID()))
hdr.Gid = int(p.parseNumeric(v7.GID()))
hdr.ModTime = time.Unix(p.parseNumeric(v7.ModTime()), 0)
// Unpack format specific fields.
if format > formatV7 {
ustar := tr.blk.toUSTAR()
hdr.Uname = p.parseString(ustar.userName())
hdr.Gname = p.parseString(ustar.groupName())
hdr.Devmajor = p.parseNumeric(ustar.devMajor())
hdr.Devminor = p.parseNumeric(ustar.devMinor())
ustar := tr.blk.USTAR()
hdr.Uname = p.parseString(ustar.UserName())
hdr.Gname = p.parseString(ustar.GroupName())
hdr.Devmajor = p.parseNumeric(ustar.DevMajor())
hdr.Devminor = p.parseNumeric(ustar.DevMinor())
var prefix string
switch {
case format.has(FormatUSTAR | FormatPAX):
hdr.Format = format
ustar := tr.blk.toUSTAR()
prefix = p.parseString(ustar.prefix())
ustar := tr.blk.USTAR()
prefix = p.parseString(ustar.Prefix())
// For Format detection, check if block is properly formatted since
// the parser is more liberal than what USTAR actually permits.
@@ -396,23 +396,23 @@ func (tr *Reader) readHeader() (*Header, *block, error) {
hdr.Format = FormatUnknown // Non-ASCII characters in block.
}
nul := func(b []byte) bool { return int(b[len(b)-1]) == 0 }
if !(nul(v7.size()) && nul(v7.mode()) && nul(v7.uid()) && nul(v7.gid()) &&
nul(v7.modTime()) && nul(ustar.devMajor()) && nul(ustar.devMinor())) {
if !(nul(v7.Size()) && nul(v7.Mode()) && nul(v7.UID()) && nul(v7.GID()) &&
nul(v7.ModTime()) && nul(ustar.DevMajor()) && nul(ustar.DevMinor())) {
hdr.Format = FormatUnknown // Numeric fields must end in NUL
}
case format.has(formatSTAR):
star := tr.blk.toSTAR()
prefix = p.parseString(star.prefix())
hdr.AccessTime = time.Unix(p.parseNumeric(star.accessTime()), 0)
hdr.ChangeTime = time.Unix(p.parseNumeric(star.changeTime()), 0)
star := tr.blk.STAR()
prefix = p.parseString(star.Prefix())
hdr.AccessTime = time.Unix(p.parseNumeric(star.AccessTime()), 0)
hdr.ChangeTime = time.Unix(p.parseNumeric(star.ChangeTime()), 0)
case format.has(FormatGNU):
hdr.Format = format
var p2 parser
gnu := tr.blk.toGNU()
if b := gnu.accessTime(); b[0] != 0 {
gnu := tr.blk.GNU()
if b := gnu.AccessTime(); b[0] != 0 {
hdr.AccessTime = time.Unix(p2.parseNumeric(b), 0)
}
if b := gnu.changeTime(); b[0] != 0 {
if b := gnu.ChangeTime(); b[0] != 0 {
hdr.ChangeTime = time.Unix(p2.parseNumeric(b), 0)
}
@@ -439,8 +439,8 @@ func (tr *Reader) readHeader() (*Header, *block, error) {
// See https://golang.org/issues/21005
if p2.err != nil {
hdr.AccessTime, hdr.ChangeTime = time.Time{}, time.Time{}
ustar := tr.blk.toUSTAR()
if s := p.parseString(ustar.prefix()); isASCII(s) {
ustar := tr.blk.USTAR()
if s := p.parseString(ustar.Prefix()); isASCII(s) {
prefix = s
}
hdr.Format = FormatUnknown // Buggy file is not GNU
@@ -465,38 +465,38 @@ func (tr *Reader) readOldGNUSparseMap(hdr *Header, blk *block) (sparseDatas, err
// Make sure that the input format is GNU.
// Unfortunately, the STAR format also has a sparse header format that uses
// the same type flag but has a completely different layout.
if blk.getFormat() != FormatGNU {
if blk.GetFormat() != FormatGNU {
return nil, ErrHeader
}
hdr.Format.mayOnlyBe(FormatGNU)
var p parser
hdr.Size = p.parseNumeric(blk.toGNU().realSize())
hdr.Size = p.parseNumeric(blk.GNU().RealSize())
if p.err != nil {
return nil, p.err
}
s := blk.toGNU().sparse()
spd := make(sparseDatas, 0, s.maxEntries())
s := blk.GNU().Sparse()
spd := make(sparseDatas, 0, s.MaxEntries())
for {
for i := 0; i < s.maxEntries(); i++ {
for i := 0; i < s.MaxEntries(); i++ {
// This termination condition is identical to GNU and BSD tar.
if s.entry(i).offset()[0] == 0x00 {
if s.Entry(i).Offset()[0] == 0x00 {
break // Don't return, need to process extended headers (even if empty)
}
offset := p.parseNumeric(s.entry(i).offset())
length := p.parseNumeric(s.entry(i).length())
offset := p.parseNumeric(s.Entry(i).Offset())
length := p.parseNumeric(s.Entry(i).Length())
if p.err != nil {
return nil, p.err
}
spd = append(spd, sparseEntry{Offset: offset, Length: length})
}
if s.isExtended()[0] > 0 {
if s.IsExtended()[0] > 0 {
// There are more entries. Read an extension header and parse its entries.
if _, err := mustReadFull(tr.r, blk[:]); err != nil {
return nil, err
}
s = blk.toSparse()
s = blk.Sparse()
continue
}
return spd, nil // Done
@@ -678,13 +678,11 @@ func (fr *regFileReader) WriteTo(w io.Writer) (int64, error) {
return io.Copy(w, struct{ io.Reader }{fr})
}
// logicalRemaining implements fileState.logicalRemaining.
func (fr regFileReader) logicalRemaining() int64 {
func (fr regFileReader) LogicalRemaining() int64 {
return fr.nb
}
// logicalRemaining implements fileState.physicalRemaining.
func (fr regFileReader) physicalRemaining() int64 {
func (fr regFileReader) PhysicalRemaining() int64 {
return fr.nb
}
@@ -696,9 +694,9 @@ type sparseFileReader struct {
}
func (sr *sparseFileReader) Read(b []byte) (n int, err error) {
finished := int64(len(b)) >= sr.logicalRemaining()
finished := int64(len(b)) >= sr.LogicalRemaining()
if finished {
b = b[:sr.logicalRemaining()]
b = b[:sr.LogicalRemaining()]
}
b0 := b
@@ -726,7 +724,7 @@ func (sr *sparseFileReader) Read(b []byte) (n int, err error) {
return n, errMissData // Less data in dense file than sparse file
case err != nil:
return n, err
case sr.logicalRemaining() == 0 && sr.physicalRemaining() > 0:
case sr.LogicalRemaining() == 0 && sr.PhysicalRemaining() > 0:
return n, errUnrefData // More data in dense file than sparse file
case finished:
return n, io.EOF
@@ -748,7 +746,7 @@ func (sr *sparseFileReader) WriteTo(w io.Writer) (n int64, err error) {
var writeLastByte bool
pos0 := sr.pos
for sr.logicalRemaining() > 0 && !writeLastByte && err == nil {
for sr.LogicalRemaining() > 0 && !writeLastByte && err == nil {
var nf int64 // Size of fragment
holeStart, holeEnd := sr.sp[0].Offset, sr.sp[0].endOffset()
if sr.pos < holeStart { // In a data fragment
@@ -756,7 +754,7 @@ func (sr *sparseFileReader) WriteTo(w io.Writer) (n int64, err error) {
nf, err = io.CopyN(ws, sr.fr, nf)
} else { // In a hole fragment
nf = holeEnd - sr.pos
if sr.physicalRemaining() == 0 {
if sr.PhysicalRemaining() == 0 {
writeLastByte = true
nf--
}
@@ -781,18 +779,18 @@ func (sr *sparseFileReader) WriteTo(w io.Writer) (n int64, err error) {
return n, errMissData // Less data in dense file than sparse file
case err != nil:
return n, err
case sr.logicalRemaining() == 0 && sr.physicalRemaining() > 0:
case sr.LogicalRemaining() == 0 && sr.PhysicalRemaining() > 0:
return n, errUnrefData // More data in dense file than sparse file
default:
return n, nil
}
}
func (sr sparseFileReader) logicalRemaining() int64 {
func (sr sparseFileReader) LogicalRemaining() int64 {
return sr.sp[len(sr.sp)-1].endOffset() - sr.pos
}
func (sr sparseFileReader) physicalRemaining() int64 {
return sr.fr.physicalRemaining()
func (sr sparseFileReader) PhysicalRemaining() int64 {
return sr.fr.PhysicalRemaining()
}
type zeroReader struct{}

View File

@@ -1021,12 +1021,12 @@ func TestParsePAX(t *testing.T) {
func TestReadOldGNUSparseMap(t *testing.T) {
populateSparseMap := func(sa sparseArray, sps []string) []string {
for i := 0; len(sps) > 0 && i < sa.maxEntries(); i++ {
copy(sa.entry(i), sps[0])
for i := 0; len(sps) > 0 && i < sa.MaxEntries(); i++ {
copy(sa.Entry(i), sps[0])
sps = sps[1:]
}
if len(sps) > 0 {
copy(sa.isExtended(), "\x80")
copy(sa.IsExtended(), "\x80")
}
return sps
}
@@ -1034,19 +1034,19 @@ func TestReadOldGNUSparseMap(t *testing.T) {
makeInput := func(format Format, size string, sps ...string) (out []byte) {
// Write the initial GNU header.
var blk block
gnu := blk.toGNU()
sparse := gnu.sparse()
copy(gnu.realSize(), size)
gnu := blk.GNU()
sparse := gnu.Sparse()
copy(gnu.RealSize(), size)
sps = populateSparseMap(sparse, sps)
if format != FormatUnknown {
blk.setFormat(format)
blk.SetFormat(format)
}
out = append(out, blk[:]...)
// Write extended sparse blocks.
for len(sps) > 0 {
var blk block
sps = populateSparseMap(blk.toSparse(), sps)
sps = populateSparseMap(blk.Sparse(), sps)
out = append(out, blk[:]...)
}
return out
@@ -1359,7 +1359,7 @@ func TestFileReader(t *testing.T) {
wantCnt int64
wantErr error
}
testRemaining struct { // logicalRemaining() == wantLCnt, physicalRemaining() == wantPCnt
testRemaining struct { // LogicalRemaining() == wantLCnt, PhysicalRemaining() == wantPCnt
wantLCnt int64
wantPCnt int64
}
@@ -1596,11 +1596,11 @@ func TestFileReader(t *testing.T) {
t.Errorf("test %d.%d, expected %d more operations", i, j, len(f.ops))
}
case testRemaining:
if got := fr.logicalRemaining(); got != tf.wantLCnt {
t.Errorf("test %d.%d, logicalRemaining() = %d, want %d", i, j, got, tf.wantLCnt)
if got := fr.LogicalRemaining(); got != tf.wantLCnt {
t.Errorf("test %d.%d, LogicalRemaining() = %d, want %d", i, j, got, tf.wantLCnt)
}
if got := fr.physicalRemaining(); got != tf.wantPCnt {
t.Errorf("test %d.%d, physicalRemaining() = %d, want %d", i, j, got, tf.wantPCnt)
if got := fr.PhysicalRemaining(); got != tf.wantPCnt {
t.Errorf("test %d.%d, PhysicalRemaining() = %d, want %d", i, j, got, tf.wantPCnt)
}
default:
t.Fatalf("test %d.%d, unknown test operation: %T", i, j, tf)

View File

@@ -50,7 +50,7 @@ func (tw *Writer) Flush() error {
if tw.err != nil {
return tw.err
}
if nb := tw.curr.logicalRemaining(); nb > 0 {
if nb := tw.curr.LogicalRemaining(); nb > 0 {
return fmt.Errorf("archive/tar: missed writing %d bytes", nb)
}
if _, tw.err = tw.w.Write(zeroBlock[:tw.pad]); tw.err != nil {
@@ -117,8 +117,8 @@ func (tw *Writer) writeUSTARHeader(hdr *Header) error {
// Pack the main header.
var f formatter
blk := tw.templateV7Plus(hdr, f.formatString, f.formatOctal)
f.formatString(blk.toUSTAR().prefix(), namePrefix)
blk.setFormat(FormatUSTAR)
f.formatString(blk.USTAR().Prefix(), namePrefix)
blk.SetFormat(FormatUSTAR)
if f.err != nil {
return f.err // Should never happen since header is validated
}
@@ -208,7 +208,7 @@ func (tw *Writer) writePAXHeader(hdr *Header, paxHdrs map[string]string) error {
var f formatter // Ignore errors since they are expected
fmtStr := func(b []byte, s string) { f.formatString(b, toASCII(s)) }
blk := tw.templateV7Plus(hdr, fmtStr, f.formatOctal)
blk.setFormat(FormatPAX)
blk.SetFormat(FormatPAX)
if err := tw.writeRawHeader(blk, hdr.Size, hdr.Typeflag); err != nil {
return err
}
@@ -250,10 +250,10 @@ func (tw *Writer) writeGNUHeader(hdr *Header) error {
var spb []byte
blk := tw.templateV7Plus(hdr, f.formatString, f.formatNumeric)
if !hdr.AccessTime.IsZero() {
f.formatNumeric(blk.toGNU().accessTime(), hdr.AccessTime.Unix())
f.formatNumeric(blk.GNU().AccessTime(), hdr.AccessTime.Unix())
}
if !hdr.ChangeTime.IsZero() {
f.formatNumeric(blk.toGNU().changeTime(), hdr.ChangeTime.Unix())
f.formatNumeric(blk.GNU().ChangeTime(), hdr.ChangeTime.Unix())
}
// TODO(dsnet): Re-enable this when adding sparse support.
// See https://golang.org/issue/22735
@@ -293,7 +293,7 @@ func (tw *Writer) writeGNUHeader(hdr *Header) error {
f.formatNumeric(blk.GNU().RealSize(), realSize)
}
*/
blk.setFormat(FormatGNU)
blk.SetFormat(FormatGNU)
if err := tw.writeRawHeader(blk, hdr.Size, hdr.Typeflag); err != nil {
return err
}
@@ -321,28 +321,28 @@ type (
// The block returned is only valid until the next call to
// templateV7Plus or writeRawFile.
func (tw *Writer) templateV7Plus(hdr *Header, fmtStr stringFormatter, fmtNum numberFormatter) *block {
tw.blk.reset()
tw.blk.Reset()
modTime := hdr.ModTime
if modTime.IsZero() {
modTime = time.Unix(0, 0)
}
v7 := tw.blk.toV7()
v7.typeFlag()[0] = hdr.Typeflag
fmtStr(v7.name(), hdr.Name)
fmtStr(v7.linkName(), hdr.Linkname)
fmtNum(v7.mode(), hdr.Mode)
fmtNum(v7.uid(), int64(hdr.Uid))
fmtNum(v7.gid(), int64(hdr.Gid))
fmtNum(v7.size(), hdr.Size)
fmtNum(v7.modTime(), modTime.Unix())
v7 := tw.blk.V7()
v7.TypeFlag()[0] = hdr.Typeflag
fmtStr(v7.Name(), hdr.Name)
fmtStr(v7.LinkName(), hdr.Linkname)
fmtNum(v7.Mode(), hdr.Mode)
fmtNum(v7.UID(), int64(hdr.Uid))
fmtNum(v7.GID(), int64(hdr.Gid))
fmtNum(v7.Size(), hdr.Size)
fmtNum(v7.ModTime(), modTime.Unix())
ustar := tw.blk.toUSTAR()
fmtStr(ustar.userName(), hdr.Uname)
fmtStr(ustar.groupName(), hdr.Gname)
fmtNum(ustar.devMajor(), hdr.Devmajor)
fmtNum(ustar.devMinor(), hdr.Devminor)
ustar := tw.blk.USTAR()
fmtStr(ustar.UserName(), hdr.Uname)
fmtStr(ustar.GroupName(), hdr.Gname)
fmtNum(ustar.DevMajor(), hdr.Devmajor)
fmtNum(ustar.DevMinor(), hdr.Devminor)
return &tw.blk
}
@@ -351,7 +351,7 @@ func (tw *Writer) templateV7Plus(hdr *Header, fmtStr stringFormatter, fmtNum num
// It uses format to encode the header format and will write data as the body.
// It uses default values for all of the other fields (as BSD and GNU tar does).
func (tw *Writer) writeRawFile(name, data string, flag byte, format Format) error {
tw.blk.reset()
tw.blk.Reset()
// Best effort for the filename.
name = toASCII(name)
@@ -361,15 +361,15 @@ func (tw *Writer) writeRawFile(name, data string, flag byte, format Format) erro
name = strings.TrimRight(name, "/")
var f formatter
v7 := tw.blk.toV7()
v7.typeFlag()[0] = flag
f.formatString(v7.name(), name)
f.formatOctal(v7.mode(), 0)
f.formatOctal(v7.uid(), 0)
f.formatOctal(v7.gid(), 0)
f.formatOctal(v7.size(), int64(len(data))) // Must be < 8GiB
f.formatOctal(v7.modTime(), 0)
tw.blk.setFormat(format)
v7 := tw.blk.V7()
v7.TypeFlag()[0] = flag
f.formatString(v7.Name(), name)
f.formatOctal(v7.Mode(), 0)
f.formatOctal(v7.UID(), 0)
f.formatOctal(v7.GID(), 0)
f.formatOctal(v7.Size(), int64(len(data))) // Must be < 8GiB
f.formatOctal(v7.ModTime(), 0)
tw.blk.SetFormat(format)
if f.err != nil {
return f.err // Only occurs if size condition is violated
}
@@ -511,13 +511,10 @@ func (fw *regFileWriter) ReadFrom(r io.Reader) (int64, error) {
return io.Copy(struct{ io.Writer }{fw}, r)
}
// logicalRemaining implements fileState.logicalRemaining.
func (fw regFileWriter) logicalRemaining() int64 {
func (fw regFileWriter) LogicalRemaining() int64 {
return fw.nb
}
// logicalRemaining implements fileState.physicalRemaining.
func (fw regFileWriter) physicalRemaining() int64 {
func (fw regFileWriter) PhysicalRemaining() int64 {
return fw.nb
}
@@ -529,9 +526,9 @@ type sparseFileWriter struct {
}
func (sw *sparseFileWriter) Write(b []byte) (n int, err error) {
overwrite := int64(len(b)) > sw.logicalRemaining()
overwrite := int64(len(b)) > sw.LogicalRemaining()
if overwrite {
b = b[:sw.logicalRemaining()]
b = b[:sw.LogicalRemaining()]
}
b0 := b
@@ -559,7 +556,7 @@ func (sw *sparseFileWriter) Write(b []byte) (n int, err error) {
return n, errMissData // Not possible; implies bug in validation logic
case err != nil:
return n, err
case sw.logicalRemaining() == 0 && sw.physicalRemaining() > 0:
case sw.LogicalRemaining() == 0 && sw.PhysicalRemaining() > 0:
return n, errUnrefData // Not possible; implies bug in validation logic
case overwrite:
return n, ErrWriteTooLong
@@ -581,12 +578,12 @@ func (sw *sparseFileWriter) ReadFrom(r io.Reader) (n int64, err error) {
var readLastByte bool
pos0 := sw.pos
for sw.logicalRemaining() > 0 && !readLastByte && err == nil {
for sw.LogicalRemaining() > 0 && !readLastByte && err == nil {
var nf int64 // Size of fragment
dataStart, dataEnd := sw.sp[0].Offset, sw.sp[0].endOffset()
if sw.pos < dataStart { // In a hole fragment
nf = dataStart - sw.pos
if sw.physicalRemaining() == 0 {
if sw.PhysicalRemaining() == 0 {
readLastByte = true
nf--
}
@@ -616,18 +613,18 @@ func (sw *sparseFileWriter) ReadFrom(r io.Reader) (n int64, err error) {
return n, errMissData // Not possible; implies bug in validation logic
case err != nil:
return n, err
case sw.logicalRemaining() == 0 && sw.physicalRemaining() > 0:
case sw.LogicalRemaining() == 0 && sw.PhysicalRemaining() > 0:
return n, errUnrefData // Not possible; implies bug in validation logic
default:
return n, ensureEOF(rs)
}
}
func (sw sparseFileWriter) logicalRemaining() int64 {
func (sw sparseFileWriter) LogicalRemaining() int64 {
return sw.sp[len(sw.sp)-1].endOffset() - sw.pos
}
func (sw sparseFileWriter) physicalRemaining() int64 {
return sw.fw.physicalRemaining()
func (sw sparseFileWriter) PhysicalRemaining() int64 {
return sw.fw.PhysicalRemaining()
}
// zeroWriter may only be written with NULs, otherwise it returns errWriteHole.

View File

@@ -987,11 +987,11 @@ func TestIssue12594(t *testing.T) {
// The prefix field should never appear in the GNU format.
var blk block
copy(blk[:], b.Bytes())
prefix := string(blk.toUSTAR().prefix())
prefix := string(blk.USTAR().Prefix())
if i := strings.IndexByte(prefix, 0); i >= 0 {
prefix = prefix[:i] // Truncate at the NUL terminator
}
if blk.getFormat() == FormatGNU && len(prefix) > 0 && strings.HasPrefix(name, prefix) {
if blk.GetFormat() == FormatGNU && len(prefix) > 0 && strings.HasPrefix(name, prefix) {
t.Errorf("test %d, found prefix in GNU format: %s", i, prefix)
}
@@ -1029,7 +1029,7 @@ func TestFileWriter(t *testing.T) {
wantCnt int64
wantErr error
}
testRemaining struct { // logicalRemaining() == wantLCnt, physicalRemaining() == wantPCnt
testRemaining struct { // LogicalRemaining() == wantLCnt, PhysicalRemaining() == wantPCnt
wantLCnt int64
wantPCnt int64
}
@@ -1292,11 +1292,11 @@ func TestFileWriter(t *testing.T) {
t.Errorf("test %d.%d, expected %d more operations", i, j, len(f.ops))
}
case testRemaining:
if got := fw.logicalRemaining(); got != tf.wantLCnt {
t.Errorf("test %d.%d, logicalRemaining() = %d, want %d", i, j, got, tf.wantLCnt)
if got := fw.LogicalRemaining(); got != tf.wantLCnt {
t.Errorf("test %d.%d, LogicalRemaining() = %d, want %d", i, j, got, tf.wantLCnt)
}
if got := fw.physicalRemaining(); got != tf.wantPCnt {
t.Errorf("test %d.%d, physicalRemaining() = %d, want %d", i, j, got, tf.wantPCnt)
if got := fw.PhysicalRemaining(); got != tf.wantPCnt {
t.Errorf("test %d.%d, PhysicalRemaining() = %d, want %d", i, j, got, tf.wantPCnt)
}
default:
t.Fatalf("test %d.%d, unknown test operation: %T", i, j, tf)

View File

@@ -102,7 +102,7 @@ func (z *Reader) init(r io.ReaderAt, size int64) error {
// indicate it contains up to 1 << 128 - 1 files. Since each file has a
// header which will be _at least_ 30 bytes we can safely preallocate
// if (data size / 30) >= end.directoryRecords.
if end.directorySize < uint64(size) && (uint64(size)-end.directorySize)/30 >= end.directoryRecords {
if (uint64(size)-end.directorySize)/30 >= end.directoryRecords {
z.File = make([]*File, 0, end.directoryRecords)
}
z.Comment = end.comment
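One side of the hunk above guards the preallocation heuristic with an extra end.directorySize < uint64(size) check. Below is a minimal, self-contained sketch (with made-up numbers, not values from any real archive) of why that guard matters: uint64 subtraction wraps around when the claimed directory size exceeds the actual archive size, so the quotient test alone can be tricked into preallocating an enormous File slice.

    package main

    import "fmt"

    func main() {
        // Hypothetical fields from a malformed end-of-central-directory record.
        var size uint64 = 100                 // actual archive size in bytes
        var directorySize uint64 = 1 << 40    // claimed directory size, larger than the archive
        var directoryRecords uint64 = 1 << 47 // claimed number of files

        // Without the guard: the subtraction wraps (uint64 arithmetic is modular),
        // producing a huge quotient, so the preallocation check passes.
        fmt.Println((size-directorySize)/30 >= directoryRecords) // true

        // With the guard: the wrapped case is rejected before the division.
        fmt.Println(directorySize < size) // false, so no preallocation happens
    }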

View File

@@ -1384,21 +1384,3 @@ func TestCVE202133196(t *testing.T) {
t.Errorf("Archive has unexpected number of files, got %d, want 5", len(r.File))
}
}
func TestCVE202139293(t *testing.T) {
// directory size is so large, that the check in Reader.init
// overflows when subtracting from the archive size, causing
// the pre-allocation check to be bypassed.
data := []byte{
0x50, 0x4b, 0x06, 0x06, 0x05, 0x06, 0x31, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x50, 0x4b,
0x06, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x50, 0x4b, 0x05, 0x06, 0x00, 0x1a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x50, 0x4b,
0x06, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x50, 0x4b, 0x05, 0x06, 0x00, 0x31, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff,
0xff, 0x50, 0xfe, 0x00, 0xff, 0x00, 0x3a, 0x00, 0x00, 0x00, 0xff,
}
_, err := NewReader(bytes.NewReader(data), int64(len(data)))
if err != ErrFormat {
t.Fatalf("unexpected error, got: %v, want: %v", err, ErrFormat)
}
}

View File

@@ -888,6 +888,11 @@ func (as *asciiSet) contains(c byte) bool {
}
func makeCutsetFunc(cutset string) func(r rune) bool {
if len(cutset) == 1 && cutset[0] < utf8.RuneSelf {
return func(r rune) bool {
return r == rune(cutset[0])
}
}
if as, isASCII := makeASCIISet(cutset); isASCII {
return func(r rune) bool {
return r < utf8.RuneSelf && as.contains(byte(r))
@@ -906,44 +911,21 @@ func makeCutsetFunc(cutset string) func(r rune) bool {
// Trim returns a subslice of s by slicing off all leading and
// trailing UTF-8-encoded code points contained in cutset.
func Trim(s []byte, cutset string) []byte {
if len(cutset) == 1 && cutset[0] < utf8.RuneSelf {
return trimLeftByte(trimRightByte(s, cutset[0]), cutset[0])
}
return TrimFunc(s, makeCutsetFunc(cutset))
}
// TrimLeft returns a subslice of s by slicing off all leading
// UTF-8-encoded code points contained in cutset.
func TrimLeft(s []byte, cutset string) []byte {
if len(cutset) == 1 && cutset[0] < utf8.RuneSelf {
return trimLeftByte(s, cutset[0])
}
return TrimLeftFunc(s, makeCutsetFunc(cutset))
}
func trimLeftByte(s []byte, c byte) []byte {
for len(s) > 0 && s[0] == c {
s = s[1:]
}
return s
}
// TrimRight returns a subslice of s by slicing off all trailing
// UTF-8-encoded code points that are contained in cutset.
func TrimRight(s []byte, cutset string) []byte {
if len(cutset) == 1 && cutset[0] < utf8.RuneSelf {
return trimRightByte(s, cutset[0])
}
return TrimRightFunc(s, makeCutsetFunc(cutset))
}
func trimRightByte(s []byte, c byte) []byte {
for len(s) > 0 && s[len(s)-1] == c {
s = s[:len(s)-1]
}
return s
}
// TrimSpace returns a subslice of s by slicing off all leading and
// trailing white space, as defined by Unicode.
func TrimSpace(s []byte) []byte {
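As a quick usage illustration (inputs mirroring entries in the package's own trim tests), a one-byte ASCII cutset takes the byte-comparison fast path shown above and must produce the same results as the general rune-based path:

    package main

    import (
        "bytes"
        "fmt"
    )

    func main() {
        // Single-byte cutsets: eligible for the byte fast path.
        fmt.Printf("%q\n", bytes.Trim([]byte("  hello  "), " ")) // "hello"
        fmt.Printf("%q\n", bytes.TrimLeft([]byte("abba"), "a"))  // "bba"
        fmt.Printf("%q\n", bytes.TrimRight([]byte("abba"), "a")) // "abb"

        // Multi-byte cutsets still go through the general path.
        fmt.Printf("%q\n", bytes.Trim([]byte("<tag>"), "<>")) // "tag"
    }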

View File

@@ -1251,9 +1251,7 @@ var trimTests = []TrimTest{
{"TrimLeft", "abba", "ab", ""},
{"TrimRight", "abba", "ab", ""},
{"TrimLeft", "abba", "a", "bba"},
{"TrimLeft", "abba", "b", "abba"},
{"TrimRight", "abba", "a", "abb"},
{"TrimRight", "abba", "b", "abba"},
{"Trim", "<tag>", "<>", "tag"},
{"Trim", "* listitem", " *", "listitem"},
{"Trim", `"quote"`, `"`, "quote"},
@@ -1965,13 +1963,6 @@ func BenchmarkTrimASCII(b *testing.B) {
}
}
func BenchmarkTrimByte(b *testing.B) {
x := []byte(" the quick brown fox ")
for i := 0; i < b.N; i++ {
Trim(x, " ")
}
}
func BenchmarkIndexPeriodic(b *testing.B) {
key := []byte{1, 1}
for _, skip := range [...]int{2, 4, 8, 16, 32, 64} {

View File

@@ -165,21 +165,27 @@ func ARM64RegisterExtension(a *obj.Addr, ext string, reg, num int16, isAmount, i
}
}
if reg <= arm64.REG_R31 && reg >= arm64.REG_R0 {
if !isAmount {
return errors.New("invalid register extension")
}
switch ext {
case "UXTB":
if !isAmount {
return errors.New("invalid register extension")
}
if a.Type == obj.TYPE_MEM {
return errors.New("invalid shift for the register offset addressing mode")
}
a.Reg = arm64.REG_UXTB + Rnum
case "UXTH":
if !isAmount {
return errors.New("invalid register extension")
}
if a.Type == obj.TYPE_MEM {
return errors.New("invalid shift for the register offset addressing mode")
}
a.Reg = arm64.REG_UXTH + Rnum
case "UXTW":
if !isAmount {
return errors.New("invalid register extension")
}
// effective address of memory is a base register value and an offset register value.
if a.Type == obj.TYPE_MEM {
a.Index = arm64.REG_UXTW + Rnum
@@ -187,33 +193,48 @@ func ARM64RegisterExtension(a *obj.Addr, ext string, reg, num int16, isAmount, i
a.Reg = arm64.REG_UXTW + Rnum
}
case "UXTX":
if !isAmount {
return errors.New("invalid register extension")
}
if a.Type == obj.TYPE_MEM {
return errors.New("invalid shift for the register offset addressing mode")
}
a.Reg = arm64.REG_UXTX + Rnum
case "SXTB":
if a.Type == obj.TYPE_MEM {
return errors.New("invalid shift for the register offset addressing mode")
if !isAmount {
return errors.New("invalid register extension")
}
a.Reg = arm64.REG_SXTB + Rnum
case "SXTH":
if !isAmount {
return errors.New("invalid register extension")
}
if a.Type == obj.TYPE_MEM {
return errors.New("invalid shift for the register offset addressing mode")
}
a.Reg = arm64.REG_SXTH + Rnum
case "SXTW":
if !isAmount {
return errors.New("invalid register extension")
}
if a.Type == obj.TYPE_MEM {
a.Index = arm64.REG_SXTW + Rnum
} else {
a.Reg = arm64.REG_SXTW + Rnum
}
case "SXTX":
if !isAmount {
return errors.New("invalid register extension")
}
if a.Type == obj.TYPE_MEM {
a.Index = arm64.REG_SXTX + Rnum
} else {
a.Reg = arm64.REG_SXTX + Rnum
}
case "LSL":
if !isAmount {
return errors.New("invalid register extension")
}
a.Index = arm64.REG_LSL + Rnum
default:
return errors.New("unsupported general register extension type: " + ext)

View File

@@ -334,8 +334,6 @@ TEXT foo(SB), DUPOK|NOSPLIT, $-8
EONW $0x6006000060060, R5 // EONW $1689262177517664, R5 // 1b0c8052db00a072a5003b4a
ORNW $0x6006000060060, R5 // ORNW $1689262177517664, R5 // 1b0c8052db00a072a5003b2a
BICSW $0x6006000060060, R5 // BICSW $1689262177517664, R5 // 1b0c8052db00a072a5003b6a
AND $1, ZR // fb0340b2ff031b8a
ANDW $1, ZR // fb030032ff031b0a
// TODO: this could have better encoding
ANDW $-1, R10 // 1b0080124a011b0a
AND $8, R0, RSP // 1f007d92
@@ -371,9 +369,9 @@ TEXT foo(SB), DUPOK|NOSPLIT, $-8
MOVD $-1, R1 // 01008092
MOVD $0x210000, R0 // MOVD $2162688, R0 // 2004a0d2
MOVD $0xffffffffffffaaaa, R1 // MOVD $-21846, R1 // a1aa8a92
MOVW $1, ZR // 3f008052
MOVW $1, ZR
MOVW $1, R1
MOVD $1, ZR // 3f0080d2
MOVD $1, ZR
MOVD $1, R1
MOVK $1, R1
MOVD $0x1000100010001000, RSP // MOVD $1152939097061330944, RSP // ff8304b2
@@ -388,10 +386,10 @@ TEXT foo(SB), DUPOK|NOSPLIT, $-8
VMOVQ $0x8040201008040202, $0x7040201008040201, V20 // VMOVQ $-9205322385119247870, $8088500183983456769, V20
// mov(to/from sp)
MOVD $0x1002(RSP), R1 // MOVD $4098(RSP), R1 // e107409121080091
MOVD $0x1708(RSP), RSP // MOVD $5896(RSP), RSP // ff074091ff231c91
MOVD $0x2001(R7), R1 // MOVD $8193(R7), R1 // e108409121040091
MOVD $0xffffff(R7), R1 // MOVD $16777215(R7), R1 // e1fc7f9121fc3f91
MOVD $0x1002(RSP), R1 // MOVD $4098(RSP), R1 // fb074091610b0091
MOVD $0x1708(RSP), RSP // MOVD $5896(RSP), RSP // fb0740917f231c91
MOVD $0x2001(R7), R1 // MOVD $8193(R7), R1 // fb08409161070091
MOVD $0xffffff(R7), R1 // MOVD $16777215(R7), R1 // fbfc7f9161ff3f91
MOVD $-0x1(R7), R1 // MOVD $-1(R7), R1 // e10400d1
MOVD $-0x30(R7), R1 // MOVD $-48(R7), R1 // e1c000d1
MOVD $-0x708(R7), R1 // MOVD $-1800(R7), R1 // e1201cd1

View File

@@ -3,7 +3,7 @@
// license that can be found in the LICENSE file.
TEXT errors(SB),$0
AND $1, RSP // ERROR "illegal source register"
AND $1, RSP // ERROR "illegal combination"
ANDS $1, R0, RSP // ERROR "illegal combination"
ADDSW R7->32, R14, R13 // ERROR "shift amount out of range 0 to 31"
ADD R1.UXTB<<5, R2, R3 // ERROR "shift amount out of range 0 to 4"
@@ -419,8 +419,4 @@ TEXT errors(SB),$0
ADD R1>>2, RSP, R3 // ERROR "illegal combination"
ADDS R2<<3, R3, RSP // ERROR "unexpected SP reference"
CMP R1<<5, RSP // ERROR "the left shift amount out of range 0 to 4"
MOVD.P y+8(FP), R1 // ERROR "illegal combination"
MOVD.W x-8(SP), R1 // ERROR "illegal combination"
LDP.P x+8(FP), (R0, R1) // ERROR "illegal combination"
LDP.W x+8(SP), (R0, R1) // ERROR "illegal combination"
RET

View File

@@ -23,13 +23,10 @@ import (
"internal/xcoff"
"math"
"os"
"os/exec"
"strconv"
"strings"
"unicode"
"unicode/utf8"
"cmd/internal/str"
)
var debugDefine = flag.Bool("debug-define", false, "print relevant #defines")
@@ -385,7 +382,7 @@ func (p *Package) guessKinds(f *File) []*Name {
stderr = p.gccErrors(b.Bytes())
}
if stderr == "" {
fatalf("%s produced no output\non input:\n%s", gccBaseCmd[0], b.Bytes())
fatalf("%s produced no output\non input:\n%s", p.gccBaseCmd()[0], b.Bytes())
}
completed := false
@@ -460,7 +457,7 @@ func (p *Package) guessKinds(f *File) []*Name {
}
if !completed {
fatalf("%s did not produce error at completed:1\non input:\n%s\nfull error output:\n%s", gccBaseCmd[0], b.Bytes(), stderr)
fatalf("%s did not produce error at completed:1\non input:\n%s\nfull error output:\n%s", p.gccBaseCmd()[0], b.Bytes(), stderr)
}
for i, n := range names {
@@ -491,7 +488,7 @@ func (p *Package) guessKinds(f *File) []*Name {
// to users debugging preamble mistakes. See issue 8442.
preambleErrors := p.gccErrors([]byte(f.Preamble))
if len(preambleErrors) > 0 {
error_(token.NoPos, "\n%s errors for preamble:\n%s", gccBaseCmd[0], preambleErrors)
error_(token.NoPos, "\n%s errors for preamble:\n%s", p.gccBaseCmd()[0], preambleErrors)
}
fatalf("unresolved names")
@@ -1548,37 +1545,20 @@ func gofmtPos(n ast.Expr, pos token.Pos) string {
return fmt.Sprintf("/*line :%d:%d*/%s", p.Line, p.Column, s)
}
// checkGCCBaseCmd returns the start of the compiler command line.
// gccBaseCmd returns the start of the compiler command line.
// It uses $CC if set, or else $GCC, or else the compiler recorded
// during the initial build as defaultCC.
// defaultCC is defined in zdefaultcc.go, written by cmd/dist.
//
// The compiler command line is split into arguments on whitespace. Quotes
// are understood, so arguments may contain whitespace.
//
// checkGCCBaseCmd confirms that the compiler exists in PATH, returning
// an error if it does not.
func checkGCCBaseCmd() ([]string, error) {
func (p *Package) gccBaseCmd() []string {
// Use $CC if set, since that's what the build uses.
value := os.Getenv("CC")
if value == "" {
// Try $GCC if set, since that's what we used to use.
value = os.Getenv("GCC")
if ret := strings.Fields(os.Getenv("CC")); len(ret) > 0 {
return ret
}
if value == "" {
value = defaultCC(goos, goarch)
// Try $GCC if set, since that's what we used to use.
if ret := strings.Fields(os.Getenv("GCC")); len(ret) > 0 {
return ret
}
args, err := str.SplitQuotedFields(value)
if err != nil {
return nil, err
}
if len(args) == 0 {
return nil, errors.New("CC not set and no default found")
}
if _, err := exec.LookPath(args[0]); err != nil {
return nil, fmt.Errorf("C compiler %q not found: %v", args[0], err)
}
return args[:len(args):len(args)], nil
return strings.Fields(defaultCC(goos, goarch))
}
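The doc comment above notes that the compiler command line is split into arguments on whitespace, with one variant understanding quotes (via the quote-aware splitter in the hunk) and the other using plain strings.Fields. A small sketch, with hypothetical CC values and only the standard library, of what plain whitespace splitting does and where it falls short:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // A multi-word CC value is common, e.g. a compiler launcher plus flags.
        os.Setenv("CC", "ccache clang -target aarch64-linux-gnu")
        args := strings.Fields(os.Getenv("CC"))
        fmt.Println(args[0])  // "ccache": the binary that must exist on PATH
        fmt.Println(args[1:]) // remaining words become compiler flags

        // Plain Fields cannot keep a quoted argument together; it splits this
        // value into three fields, which is what quote-aware splitting avoids.
        os.Setenv("CC", `gcc "-DMSG=hello world"`)
        fmt.Println(len(strings.Fields(os.Getenv("CC")))) // 3
    }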
// gccMachine returns the gcc -m flag to use, either "-m32", "-m64" or "-marm".
@@ -1624,7 +1604,7 @@ func gccTmp() string {
// gccCmd returns the gcc command line to use for compiling
// the input.
func (p *Package) gccCmd() []string {
c := append(gccBaseCmd,
c := append(p.gccBaseCmd(),
"-w", // no warnings
"-Wno-error", // warnings are not errors
"-o"+gccTmp(), // write object to tmp
@@ -2025,7 +2005,7 @@ func (p *Package) gccDebug(stdin []byte, nnames int) (d *dwarf.Data, ints []int6
// #defines that gcc encountered while processing the input
// and its included files.
func (p *Package) gccDefines(stdin []byte) string {
base := append(gccBaseCmd, "-E", "-dM", "-xc")
base := append(p.gccBaseCmd(), "-E", "-dM", "-xc")
base = append(base, p.gccMachine()...)
stdout, _ := runGcc(stdin, append(append(base, p.GccOptions...), "-"))
return stdout

View File

@@ -21,6 +21,7 @@ import (
"io"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"reflect"
"runtime"
@@ -247,7 +248,6 @@ var importSyscall = flag.Bool("import_syscall", true, "import syscall in generat
var trimpath = flag.String("trimpath", "", "applies supplied rewrites or trims prefixes to recorded source file paths")
var goarch, goos, gomips, gomips64 string
var gccBaseCmd []string
func main() {
objabi.AddVersionFlag() // -V
@@ -305,10 +305,10 @@ func main() {
p := newPackage(args[:i])
// We need a C compiler to be available. Check this.
var err error
gccBaseCmd, err = checkGCCBaseCmd()
gccName := p.gccBaseCmd()[0]
_, err := exec.LookPath(gccName)
if err != nil {
fatalf("%v", err)
fatalf("C compiler %q not found: %v", gccName, err)
os.Exit(2)
}

View File

@@ -59,9 +59,9 @@ func (p *Package) writeDefs() {
// Write C main file for using gcc to resolve imports.
fmt.Fprintf(fm, "int main() { return 0; }\n")
if *importRuntimeCgo {
fmt.Fprintf(fm, "void crosscall2(void(*fn)(void*) __attribute__((unused)), void *a __attribute__((unused)), int c __attribute__((unused)), __SIZE_TYPE__ ctxt __attribute__((unused))) { }\n")
fmt.Fprintf(fm, "void crosscall2(void(*fn)(void*), void *a, int c, __SIZE_TYPE__ ctxt) { }\n")
fmt.Fprintf(fm, "__SIZE_TYPE__ _cgo_wait_runtime_init_done(void) { return 0; }\n")
fmt.Fprintf(fm, "void _cgo_release_context(__SIZE_TYPE__ ctxt __attribute__((unused))) { }\n")
fmt.Fprintf(fm, "void _cgo_release_context(__SIZE_TYPE__ ctxt) { }\n")
fmt.Fprintf(fm, "char* _cgo_topofstack(void) { return (char*)0; }\n")
} else {
// If we're not importing runtime/cgo, we *are* runtime/cgo,
@@ -70,8 +70,8 @@ func (p *Package) writeDefs() {
fmt.Fprintf(fm, "__SIZE_TYPE__ _cgo_wait_runtime_init_done(void);\n")
fmt.Fprintf(fm, "void _cgo_release_context(__SIZE_TYPE__);\n")
}
fmt.Fprintf(fm, "void _cgo_allocate(void *a __attribute__((unused)), int c __attribute__((unused))) { }\n")
fmt.Fprintf(fm, "void _cgo_panic(void *a __attribute__((unused)), int c __attribute__((unused))) { }\n")
fmt.Fprintf(fm, "void _cgo_allocate(void *a, int c) { }\n")
fmt.Fprintf(fm, "void _cgo_panic(void *a, int c) { }\n")
fmt.Fprintf(fm, "void _cgo_reginit(void) { }\n")
// Write second Go output: definitions of _C_xxx.

View File

@@ -505,128 +505,6 @@ control bits specified by the ELF AMD64 ABI.
The x87 floating-point control word is not used by Go on amd64.
### arm64 architecture
The arm64 architecture uses R0 – R15 for integer arguments and results.
It uses F0 – F15 for floating-point arguments and results.
*Rationale*: 16 integer registers and 16 floating-point registers are
more than enough for passing arguments and results for practically all
functions (see Appendix). While there are more registers available,
using more registers provides little benefit. Additionally, it will add
overhead on code paths where the number of arguments is not statically
known (e.g. reflect call), and will consume more stack space when there
is only limited stack space available to fit in the nosplit limit.
Registers R16 and R17 are permanent scratch registers. They are also
used as scratch registers by the linker (Go linker and external
linker) in trampolines.
Register R18 is reserved and never used. It is reserved for the OS
on some platforms (e.g. macOS).
Registers R19 – R25 are permanent scratch registers. In addition,
R27 is a permanent scratch register used by the assembler when
expanding instructions.
Floating-point registers F16 – F31 are also permanent scratch
registers.
Special-purpose registers are as follows:
| Register | Call meaning | Return meaning | Body meaning |
| --- | --- | --- | --- |
| RSP | Stack pointer | Same | Same |
| R30 | Link register | Same | Scratch (non-leaf functions) |
| R29 | Frame pointer | Same | Same |
| R28 | Current goroutine | Same | Same |
| R27 | Scratch | Scratch | Scratch |
| R26 | Closure context pointer | Scratch | Scratch |
| R18 | Reserved (not used) | Same | Same |
| ZR | Zero value | Same | Same |
*Rationale*: These register meanings are compatible with Go's
stack-based calling convention.
*Rationale*: The link register, R30, holds the function return
address at the function entry. For functions that have frames
(including most non-leaf functions), R30 is saved to stack in the
function prologue and restored in the epilogue. Within the function
body, R30 can be used as a scratch register.
*Implementation note*: Registers with fixed meaning at calls but not
in function bodies must be initialized by "injected" calls such as
signal-based panics.
#### Stack layout
The stack pointer, RSP, grows down and is always aligned to 16 bytes.
*Rationale*: The arm64 architecture requires the stack pointer to be
16-byte aligned.
A function's stack frame, after the frame is created, is laid out as
follows:
    +------------------------------+
    | ... locals ...               |
    | ... outgoing arguments ...   |
    | return PC                    | ← RSP points to
    | frame pointer on entry       |
    +------------------------------+ ↓ lower addresses
The "return PC" is loaded to the link register, R30, as part of the
arm64 `CALL` operation.
On entry, a function subtracts from RSP to open its stack frame, and
saves the values of R30 and R29 at the bottom of the frame.
Specifically, R30 is saved at 0(RSP) and R29 is saved at -8(RSP),
after RSP is updated.
A leaf function that does not require any stack space may omit the
saved R30 and R29.
The Go ABI's use of R29 as a frame pointer register is compatible with
the arm64 architecture's requirements, so Go can interoperate with
platform debuggers and profilers.
This stack layout is used by both register-based (ABIInternal) and
stack-based (ABI0) calling conventions.
#### Flags
The arithmetic status flags (NZCV) are treated like scratch registers
and not preserved across calls.
All other bits in PSTATE are system flags and are not modified by Go.
The floating-point status register (FPSR) is treated like a scratch
register and is not preserved across calls.
At calls, the floating-point control register (FPCR) bits are always
set as follows:
| Flag | Bit | Value | Meaning |
| --- | --- | --- | --- |
| DN | 25 | 0 | Propagate NaN operands |
| FZ | 24 | 0 | Do not flush to zero |
| RC | 23/22 | 0 (RN) | Round to nearest, choose even if tied |
| IDE | 15 | 0 | Denormal operations trap disabled |
| IXE | 12 | 0 | Inexact trap disabled |
| UFE | 11 | 0 | Underflow trap disabled |
| OFE | 10 | 0 | Overflow trap disabled |
| DZE | 9 | 0 | Divide-by-zero trap disabled |
| IOE | 8 | 0 | Invalid operations trap disabled |
| NEP | 2 | 0 | Scalar operations do not affect higher elements in vector registers |
| AH | 1 | 0 | No alternate handling of de-normal inputs |
| FIZ | 0 | 0 | Do not zero de-normals |
*Rationale*: Having a fixed FPCR control configuration allows Go
functions to use floating-point and vector (SIMD) operations without
modifying or saving the FPCR.
Functions are allowed to modify it between calls (as long as they
restore it), but as of this writing Go code never does.
## Future directions
### Spill path improvements

View File

@@ -722,17 +722,14 @@ func setup() {
types.NewField(nxp, fname("len"), ui),
types.NewField(nxp, fname("cap"), ui),
})
types.CalcStructSize(synthSlice)
synthString = types.NewStruct(types.NoPkg, []*types.Field{
types.NewField(nxp, fname("data"), unsp),
types.NewField(nxp, fname("len"), ui),
})
types.CalcStructSize(synthString)
synthIface = types.NewStruct(types.NoPkg, []*types.Field{
types.NewField(nxp, fname("f1"), unsp),
types.NewField(nxp, fname("f2"), unsp),
})
types.CalcStructSize(synthIface)
})
}

View File

@@ -18,10 +18,11 @@ func Init(arch *ssagen.ArchInfo) {
arch.ZeroRange = zerorange
arch.Ginsnop = ginsnop
arch.Ginsnopdefer = ginsnop
arch.SSAMarkMoves = ssaMarkMoves
arch.SSAGenValue = ssaGenValue
arch.SSAGenBlock = ssaGenBlock
arch.LoadRegResult = loadRegResult
arch.LoadRegResults = loadRegResults
arch.SpillArgReg = spillArgReg
}

View File

@@ -57,6 +57,7 @@ func dzDI(b int64) int64 {
func zerorange(pp *objw.Progs, p *obj.Prog, off, cnt int64, state *uint32) *obj.Prog {
const (
r13 = 1 << iota // if R13 is already zeroed.
x15 // if X15 is already zeroed. Note: in new ABI, X15 is always zero.
)
if cnt == 0 {
@@ -84,6 +85,11 @@ func zerorange(pp *objw.Progs, p *obj.Prog, off, cnt int64, state *uint32) *obj.
}
p = pp.Append(p, x86.AMOVQ, obj.TYPE_REG, x86.REG_R13, 0, obj.TYPE_MEM, x86.REG_SP, off)
} else if !isPlan9 && cnt <= int64(8*types.RegSize) {
if !buildcfg.Experiment.RegabiG && *state&x15 == 0 {
p = pp.Append(p, x86.AXORPS, obj.TYPE_REG, x86.REG_X15, 0, obj.TYPE_REG, x86.REG_X15, 0)
*state |= x15
}
for i := int64(0); i < cnt/16; i++ {
p = pp.Append(p, x86.AMOVUPS, obj.TYPE_REG, x86.REG_X15, 0, obj.TYPE_MEM, x86.REG_SP, off+i*16)
}
@@ -92,6 +98,10 @@ func zerorange(pp *objw.Progs, p *obj.Prog, off, cnt int64, state *uint32) *obj.
p = pp.Append(p, x86.AMOVUPS, obj.TYPE_REG, x86.REG_X15, 0, obj.TYPE_MEM, x86.REG_SP, off+cnt-int64(16))
}
} else if !isPlan9 && (cnt <= int64(128*types.RegSize)) {
if !buildcfg.Experiment.RegabiG && *state&x15 == 0 {
p = pp.Append(p, x86.AXORPS, obj.TYPE_REG, x86.REG_X15, 0, obj.TYPE_REG, x86.REG_X15, 0)
*state |= x15
}
// Save DI to r12. With the amd64 Go register abi, DI can contain
// an incoming parameter, whereas R12 is always scratch.
p = pp.Append(p, x86.AMOVQ, obj.TYPE_REG, x86.REG_DI, 0, obj.TYPE_REG, x86.REG_R12, 0)

View File

@@ -823,7 +823,7 @@ func ssaGenValue(s *ssagen.State, v *ssa.Value) {
p.To.Reg = v.Args[0].Reg()
ssagen.AddAux2(&p.To, v, sc.Off64())
case ssa.OpAMD64MOVOstorezero:
if s.ABI != obj.ABIInternal {
if !buildcfg.Experiment.RegabiG || s.ABI != obj.ABIInternal {
// zero X15 manually
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
}
@@ -914,7 +914,7 @@ func ssaGenValue(s *ssagen.State, v *ssa.Value) {
p.To.Type = obj.TYPE_REG
p.To.Reg = v.Reg()
case ssa.OpAMD64DUFFZERO:
if s.ABI != obj.ABIInternal {
if !buildcfg.Experiment.RegabiG || s.ABI != obj.ABIInternal {
// zero X15 manually
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
}
@@ -997,26 +997,22 @@ func ssaGenValue(s *ssagen.State, v *ssa.Value) {
// Closure pointer is DX.
ssagen.CheckLoweredGetClosurePtr(v)
case ssa.OpAMD64LoweredGetG:
if s.ABI == obj.ABIInternal {
if buildcfg.Experiment.RegabiG && s.ABI == obj.ABIInternal {
v.Fatalf("LoweredGetG should not appear in ABIInternal")
}
r := v.Reg()
getgFromTLS(s, r)
case ssa.OpAMD64CALLstatic:
if s.ABI == obj.ABI0 && v.Aux.(*ssa.AuxCall).Fn.ABI() == obj.ABIInternal {
if buildcfg.Experiment.RegabiG && s.ABI == obj.ABI0 && v.Aux.(*ssa.AuxCall).Fn.ABI() == obj.ABIInternal {
// zeroing X15 when entering ABIInternal from ABI0
if buildcfg.GOOS != "plan9" { // do not use SSE on Plan 9
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
}
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
// set G register from TLS
getgFromTLS(s, x86.REG_R14)
}
s.Call(v)
if s.ABI == obj.ABIInternal && v.Aux.(*ssa.AuxCall).Fn.ABI() == obj.ABI0 {
if buildcfg.Experiment.RegabiG && s.ABI == obj.ABIInternal && v.Aux.(*ssa.AuxCall).Fn.ABI() == obj.ABI0 {
// zeroing X15 when entering ABIInternal from ABI0
if buildcfg.GOOS != "plan9" { // do not use SSE on Plan 9
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
}
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
// set G register from TLS
getgFromTLS(s, x86.REG_R14)
}
@@ -1308,11 +1304,9 @@ func ssaGenBlock(s *ssagen.State, b, next *ssa.Block) {
case ssa.BlockRet:
s.Prog(obj.ARET)
case ssa.BlockRetJmp:
if s.ABI == obj.ABI0 && b.Aux.(*obj.LSym).ABI() == obj.ABIInternal {
if buildcfg.Experiment.RegabiG && s.ABI == obj.ABI0 && b.Aux.(*obj.LSym).ABI() == obj.ABIInternal {
// zeroing X15 when entering ABIInternal from ABI0
if buildcfg.GOOS != "plan9" { // do not use SSE on Plan 9
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
}
opregreg(s, x86.AXORPS, x86.REG_X15, x86.REG_X15)
// set G register from TLS
getgFromTLS(s, x86.REG_R14)
}
@@ -1354,15 +1348,20 @@ func ssaGenBlock(s *ssagen.State, b, next *ssa.Block) {
}
}
func loadRegResult(s *ssagen.State, f *ssa.Func, t *types.Type, reg int16, n *ir.Name, off int64) *obj.Prog {
p := s.Prog(loadByType(t))
p.From.Type = obj.TYPE_MEM
p.From.Name = obj.NAME_AUTO
p.From.Sym = n.Linksym()
p.From.Offset = n.FrameOffset() + off
p.To.Type = obj.TYPE_REG
p.To.Reg = reg
return p
func loadRegResults(s *ssagen.State, f *ssa.Func) {
for _, o := range f.OwnAux.ABIInfo().OutParams() {
n := o.Name.(*ir.Name)
rts, offs := o.RegisterTypesAndOffsets()
for i := range o.Registers {
p := s.Prog(loadByType(rts[i]))
p.From.Type = obj.TYPE_MEM
p.From.Name = obj.NAME_AUTO
p.From.Sym = n.Linksym()
p.From.Offset = n.FrameOffset() + offs[i]
p.To.Type = obj.TYPE_REG
p.To.Reg = ssa.ObjRegForAbiReg(o.Registers[i], f.Config)
}
}
}
func spillArgReg(pp *objw.Progs, p *obj.Prog, f *ssa.Func, t *types.Type, reg int16, n *ir.Name, off int64) *obj.Prog {

View File

@@ -18,6 +18,7 @@ func Init(arch *ssagen.ArchInfo) {
arch.SoftFloat = buildcfg.GOARM == 5
arch.ZeroRange = zerorange
arch.Ginsnop = ginsnop
arch.Ginsnopdefer = ginsnop
arch.SSAMarkMoves = func(s *ssagen.State, b *ssa.Block) {}
arch.SSAGenValue = ssaGenValue

View File

@@ -18,10 +18,9 @@ func Init(arch *ssagen.ArchInfo) {
arch.PadFrame = padframe
arch.ZeroRange = zerorange
arch.Ginsnop = ginsnop
arch.Ginsnopdefer = ginsnop
arch.SSAMarkMoves = func(s *ssagen.State, b *ssa.Block) {}
arch.SSAGenValue = ssaGenValue
arch.SSAGenBlock = ssaGenBlock
arch.LoadRegResult = loadRegResult
arch.SpillArgReg = spillArgReg
}

View File

@@ -10,7 +10,6 @@ import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/logopt"
"cmd/compile/internal/objw"
"cmd/compile/internal/ssa"
"cmd/compile/internal/ssagen"
"cmd/compile/internal/types"
@@ -162,18 +161,6 @@ func ssaGenValue(s *ssagen.State, v *ssa.Value) {
p.From.Type = obj.TYPE_REG
p.From.Reg = v.Args[0].Reg()
ssagen.AddrAuto(&p.To, v)
case ssa.OpArgIntReg, ssa.OpArgFloatReg:
// The assembler needs to wrap the entry safepoint/stack growth code with spill/unspill
// The loop only runs once.
for _, a := range v.Block.Func.RegArgs {
// Pass the spill/unspill information along to the assembler, offset by size of
// the saved LR slot.
addr := ssagen.SpillSlotAddr(a, arm64.REGSP, base.Ctxt.FixedFrameSize())
s.FuncInfo().AddSpill(
obj.RegSpill{Reg: a.Reg, Addr: addr, Unspill: loadByType(a.Type), Spill: storeByType(a.Type)})
}
v.Block.Func.RegArgs = nil
ssagen.CheckArgReg(v)
case ssa.OpARM64ADD,
ssa.OpARM64SUB,
ssa.OpARM64AND,
@@ -1114,34 +1101,8 @@ func ssaGenValue(s *ssagen.State, v *ssa.Value) {
v.Fatalf("FlagConstant op should never make it to codegen %v", v.LongString())
case ssa.OpARM64InvertFlags:
v.Fatalf("InvertFlags should never make it to codegen %v", v.LongString())
case ssa.OpClobber:
// MOVW $0xdeaddead, REGTMP
// MOVW REGTMP, (slot)
// MOVW REGTMP, 4(slot)
p := s.Prog(arm64.AMOVW)
p.From.Type = obj.TYPE_CONST
p.From.Offset = 0xdeaddead
p.To.Type = obj.TYPE_REG
p.To.Reg = arm64.REGTMP
p = s.Prog(arm64.AMOVW)
p.From.Type = obj.TYPE_REG
p.From.Reg = arm64.REGTMP
p.To.Type = obj.TYPE_MEM
p.To.Reg = arm64.REGSP
ssagen.AddAux(&p.To, v)
p = s.Prog(arm64.AMOVW)
p.From.Type = obj.TYPE_REG
p.From.Reg = arm64.REGTMP
p.To.Type = obj.TYPE_MEM
p.To.Reg = arm64.REGSP
ssagen.AddAux2(&p.To, v, v.AuxInt+4)
case ssa.OpClobberReg:
x := uint64(0xdeaddeaddeaddead)
p := s.Prog(arm64.AMOVD)
p.From.Type = obj.TYPE_CONST
p.From.Offset = int64(x)
p.To.Type = obj.TYPE_REG
p.To.Reg = v.Reg()
case ssa.OpClobber, ssa.OpClobberReg:
// TODO: implement for clobberdead experiment. Nop is ok for now.
default:
v.Fatalf("genValue not implemented: %s", v.LongString())
}
@@ -1305,22 +1266,3 @@ func ssaGenBlock(s *ssagen.State, b, next *ssa.Block) {
b.Fatalf("branch not implemented: %s", b.LongString())
}
}
func loadRegResult(s *ssagen.State, f *ssa.Func, t *types.Type, reg int16, n *ir.Name, off int64) *obj.Prog {
p := s.Prog(loadByType(t))
p.From.Type = obj.TYPE_MEM
p.From.Name = obj.NAME_AUTO
p.From.Sym = n.Linksym()
p.From.Offset = n.FrameOffset() + off
p.To.Type = obj.TYPE_REG
p.To.Reg = reg
return p
}
func spillArgReg(pp *objw.Progs, p *obj.Prog, f *ssa.Func, t *types.Type, reg int16, n *ir.Name, off int64) *obj.Prog {
p = pp.Append(p, storeByType(t), obj.TYPE_REG, reg, 0, obj.TYPE_MEM, 0, n.FrameOffset()+off)
p.To.Name = obj.NAME_PARAM
p.To.Sym = n.Linksym()
p.Pos = p.Pos.WithNotStmt()
return p
}

View File

@@ -1,12 +0,0 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !compiler_bootstrap
// +build !compiler_bootstrap
package base
// CompilerBootstrap reports whether the current compiler binary was
// built with -tags=compiler_bootstrap.
const CompilerBootstrap = false

View File

@@ -1,12 +0,0 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build compiler_bootstrap
// +build compiler_bootstrap
package base
// CompilerBootstrap reports whether the current compiler binary was
// built with -tags=compiler_bootstrap.
const CompilerBootstrap = true

View File

@@ -44,11 +44,8 @@ type DebugFlags struct {
Panic int `help:"show all compiler panics"`
Slice int `help:"print information about slice compilation"`
SoftFloat int `help:"force compiler to emit soft-float code"`
SyncFrames int `help:"how many writer stack frames to include at sync points in unified export data"`
TypeAssert int `help:"print information about type assertion inlining"`
TypecheckInl int `help:"eager typechecking of inline function bodies"`
Unified int `help:"enable unified IR construction"`
UnifiedQuirks int `help:"enable unified IR construction's quirks mode"`
WB int `help:"print information about write barriers"`
ABIWrap int `help:"print information about ABI wrapper generation"`

View File

@@ -140,7 +140,6 @@ type CmdFlags struct {
// ParseFlags parses the command-line flags into Flag.
func ParseFlags() {
Flag.G = 3
Flag.I = addImportDir
Flag.LowerC = 1
@@ -160,11 +159,7 @@ func ParseFlags() {
Flag.LinkShared = &Ctxt.Flag_linkshared
Flag.Shared = &Ctxt.Flag_shared
Flag.WB = true
Debug.InlFuncsWithClosures = 1
if buildcfg.Experiment.Unified {
Debug.Unified = 1
}
Debug.Checkptr = -1 // so we can tell whether it is set explicitly

View File

@@ -233,27 +233,6 @@ func FatalfAt(pos src.XPos, format string, args ...interface{}) {
ErrorExit()
}
// Assert reports "assertion failed" with Fatalf, unless b is true.
func Assert(b bool) {
if !b {
Fatalf("assertion failed")
}
}
// Assertf reports a fatal error with Fatalf, unless b is true.
func Assertf(b bool, format string, args ...interface{}) {
if !b {
Fatalf(format, args...)
}
}
// AssertfAt reports a fatal error with FatalfAt, unless b is true.
func AssertfAt(b bool, pos src.XPos, format string, args ...interface{}) {
if !b {
FatalfAt(pos, format, args...)
}
}
// hcrash crashes the compiler when -h is set, to find out where a message is generated.
func hcrash() {
if Flag.LowerH != 0 {

View File

@@ -38,7 +38,6 @@ func Func(fn *ir.Func) {
}
}
ir.VisitList(fn.Body, markHiddenClosureDead)
fn.Body = []ir.Node{ir.NewBlockStmt(base.Pos, nil)}
}
@@ -63,11 +62,9 @@ func stmts(nn *ir.Nodes) {
if ir.IsConst(n.Cond, constant.Bool) {
var body ir.Nodes
if ir.BoolVal(n.Cond) {
ir.VisitList(n.Else, markHiddenClosureDead)
n.Else = ir.Nodes{}
body = n.Body
} else {
ir.VisitList(n.Body, markHiddenClosureDead)
n.Body = ir.Nodes{}
body = n.Else
}
@@ -153,13 +150,3 @@ func expr(n ir.Node) ir.Node {
}
return n
}
func markHiddenClosureDead(n ir.Node) {
if n.Op() != ir.OCLOSURE {
return
}
clo := n.(*ir.ClosureExpr)
if clo.Func.IsHiddenClosure() {
clo.Func.SetIsDeadcodeClosure(true)
}
}

View File

@@ -214,7 +214,7 @@ func createDwarfVars(fnsym *obj.LSym, complexOK bool, fn *ir.Func, apDecls []*ir
Type: base.Ctxt.Lookup(typename),
DeclFile: declpos.RelFilename(),
DeclLine: declpos.RelLine(),
DeclCol: declpos.RelCol(),
DeclCol: declpos.Col(),
InlIndex: int32(inlIndex),
ChildIndex: -1,
})
@@ -371,7 +371,7 @@ func createSimpleVar(fnsym *obj.LSym, n *ir.Name) *dwarf.Var {
Type: base.Ctxt.Lookup(typename),
DeclFile: declpos.RelFilename(),
DeclLine: declpos.RelLine(),
DeclCol: declpos.RelCol(),
DeclCol: declpos.Col(),
InlIndex: int32(inlIndex),
ChildIndex: -1,
}
@@ -475,7 +475,7 @@ func createComplexVar(fnsym *obj.LSym, fn *ir.Func, varID ssa.VarID) *dwarf.Var
StackOffset: ssagen.StackOffset(debug.Slots[debug.VarSlots[varID][0]]),
DeclFile: declpos.RelFilename(),
DeclLine: declpos.RelLine(),
DeclCol: declpos.RelCol(),
DeclCol: declpos.Col(),
InlIndex: int32(inlIndex),
ChildIndex: -1,
}

View File

@@ -244,7 +244,7 @@ func makePreinlineDclMap(fnsym *obj.LSym) map[varPos]int {
DeclName: unversion(n.Sym().Name),
DeclFile: pos.RelFilename(),
DeclLine: pos.RelLine(),
DeclCol: pos.RelCol(),
DeclCol: pos.Col(),
}
if _, found := m[vp]; found {
// We can see collisions (variables with the same name/file/line/col) in obfuscated or machine-generated code -- see issue 44378 for an example. Skip duplicates in such cases, since it is unlikely that a human will be debugging such code.

View File

@@ -1,120 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
)
// addr evaluates an addressable expression n and returns a hole
// that represents storing into the represented location.
func (e *escape) addr(n ir.Node) hole {
if n == nil || ir.IsBlank(n) {
// Can happen in select case, range, maybe others.
return e.discardHole()
}
k := e.heapHole()
switch n.Op() {
default:
base.Fatalf("unexpected addr: %v", n)
case ir.ONAME:
n := n.(*ir.Name)
if n.Class == ir.PEXTERN {
break
}
k = e.oldLoc(n).asHole()
case ir.OLINKSYMOFFSET:
break
case ir.ODOT:
n := n.(*ir.SelectorExpr)
k = e.addr(n.X)
case ir.OINDEX:
n := n.(*ir.IndexExpr)
e.discard(n.Index)
if n.X.Type().IsArray() {
k = e.addr(n.X)
} else {
e.discard(n.X)
}
case ir.ODEREF, ir.ODOTPTR:
e.discard(n)
case ir.OINDEXMAP:
n := n.(*ir.IndexExpr)
e.discard(n.X)
e.assignHeap(n.Index, "key of map put", n)
}
return k
}
func (e *escape) addrs(l ir.Nodes) []hole {
var ks []hole
for _, n := range l {
ks = append(ks, e.addr(n))
}
return ks
}
func (e *escape) assignHeap(src ir.Node, why string, where ir.Node) {
e.expr(e.heapHole().note(where, why), src)
}
// assignList evaluates the assignment dsts... = srcs....
func (e *escape) assignList(dsts, srcs []ir.Node, why string, where ir.Node) {
ks := e.addrs(dsts)
for i, k := range ks {
var src ir.Node
if i < len(srcs) {
src = srcs[i]
}
if dst := dsts[i]; dst != nil {
// Detect implicit conversion of uintptr to unsafe.Pointer when
// storing into reflect.{Slice,String}Header.
if dst.Op() == ir.ODOTPTR && ir.IsReflectHeaderDataField(dst) {
e.unsafeValue(e.heapHole().note(where, why), src)
continue
}
// Filter out some no-op assignments for escape analysis.
if src != nil && isSelfAssign(dst, src) {
if base.Flag.LowerM != 0 {
base.WarnfAt(where.Pos(), "%v ignoring self-assignment in %v", e.curfn, where)
}
k = e.discardHole()
}
}
e.expr(k.note(where, why), src)
}
e.reassigned(ks, where)
}
// reassigned marks the locations associated with the given holes as
// reassigned, unless the location represents a variable declared and
// assigned exactly once by where.
func (e *escape) reassigned(ks []hole, where ir.Node) {
if as, ok := where.(*ir.AssignStmt); ok && as.Op() == ir.OAS && as.Y == nil {
if dst, ok := as.X.(*ir.Name); ok && dst.Op() == ir.ONAME && dst.Defn == nil {
// Zero-value assignment for variable declared without an
// explicit initial value. Assume this is its initialization
// statement.
return
}
}
for _, k := range ks {
loc := k.dst
// Variables declared by range statements are assigned on every iteration.
if n, ok := loc.n.(*ir.Name); ok && n.Defn == where && where.Op() != ir.ORANGE {
continue
}
loc.reassigned = true
}
}

View File

@@ -1,428 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/typecheck"
"cmd/compile/internal/types"
"cmd/internal/src"
)
// call evaluates a call expression, including builtin calls. ks
// should contain the holes representing where the function callee's
// results flow.
func (e *escape) call(ks []hole, call ir.Node) {
var init ir.Nodes
e.callCommon(ks, call, &init, nil)
if len(init) != 0 {
call.(*ir.CallExpr).PtrInit().Append(init...)
}
}
func (e *escape) callCommon(ks []hole, call ir.Node, init *ir.Nodes, wrapper *ir.Func) {
// argumentPragma handles escape analysis of argument *argp to the
// given hole. If the function callee is known, pragma is the
// function's pragma flags; otherwise 0.
argumentFunc := func(fn *ir.Name, k hole, argp *ir.Node) {
e.rewriteArgument(argp, init, call, fn, wrapper)
e.expr(k.note(call, "call parameter"), *argp)
}
argument := func(k hole, argp *ir.Node) {
argumentFunc(nil, k, argp)
}
switch call.Op() {
default:
ir.Dump("esc", call)
base.Fatalf("unexpected call op: %v", call.Op())
case ir.OCALLFUNC, ir.OCALLMETH, ir.OCALLINTER:
call := call.(*ir.CallExpr)
typecheck.FixVariadicCall(call)
typecheck.FixMethodCall(call)
// Pick out the function callee, if statically known.
//
// TODO(mdempsky): Change fn from *ir.Name to *ir.Func, but some
// functions (e.g., runtime builtins, method wrappers, generated
// eq/hash functions) don't have it set. Investigate whether
// that's a concern.
var fn *ir.Name
switch call.Op() {
case ir.OCALLFUNC:
// If we have a direct call to a closure (not just one we were
// able to statically resolve with ir.StaticValue), mark it as
// such so batch.outlives can optimize the flow results.
if call.X.Op() == ir.OCLOSURE {
call.X.(*ir.ClosureExpr).Func.SetClosureCalled(true)
}
switch v := ir.StaticValue(call.X); v.Op() {
case ir.ONAME:
if v := v.(*ir.Name); v.Class == ir.PFUNC {
fn = v
}
case ir.OCLOSURE:
fn = v.(*ir.ClosureExpr).Func.Nname
case ir.OMETHEXPR:
fn = ir.MethodExprName(v)
}
case ir.OCALLMETH:
base.FatalfAt(call.Pos(), "OCALLMETH missed by typecheck")
}
fntype := call.X.Type()
if fn != nil {
fntype = fn.Type()
}
if ks != nil && fn != nil && e.inMutualBatch(fn) {
for i, result := range fn.Type().Results().FieldSlice() {
e.expr(ks[i], ir.AsNode(result.Nname))
}
}
var recvp *ir.Node
if call.Op() == ir.OCALLFUNC {
// Evaluate callee function expression.
//
// Note: We use argument and not argumentFunc, because while
// call.X here may be an argument to runtime.{new,defer}proc,
// it's not an argument to fn itself.
argument(e.discardHole(), &call.X)
} else {
recvp = &call.X.(*ir.SelectorExpr).X
}
args := call.Args
if recv := fntype.Recv(); recv != nil {
if recvp == nil {
// Function call using method expression. Receiver argument is
// at the front of the regular arguments list.
recvp = &args[0]
args = args[1:]
}
argumentFunc(fn, e.tagHole(ks, fn, recv), recvp)
}
for i, param := range fntype.Params().FieldSlice() {
argumentFunc(fn, e.tagHole(ks, fn, param), &args[i])
}
case ir.OINLCALL:
call := call.(*ir.InlinedCallExpr)
e.stmts(call.Body)
for i, result := range call.ReturnVars {
k := e.discardHole()
if ks != nil {
k = ks[i]
}
e.expr(k, result)
}
case ir.OAPPEND:
call := call.(*ir.CallExpr)
args := call.Args
// Appendee slice may flow directly to the result, if
// it has enough capacity. Alternatively, a new heap
// slice might be allocated, and all slice elements
// might flow to heap.
appendeeK := ks[0]
if args[0].Type().Elem().HasPointers() {
appendeeK = e.teeHole(appendeeK, e.heapHole().deref(call, "appendee slice"))
}
argument(appendeeK, &args[0])
if call.IsDDD {
appendedK := e.discardHole()
if args[1].Type().IsSlice() && args[1].Type().Elem().HasPointers() {
appendedK = e.heapHole().deref(call, "appended slice...")
}
argument(appendedK, &args[1])
} else {
for i := 1; i < len(args); i++ {
argument(e.heapHole(), &args[i])
}
}
case ir.OCOPY:
call := call.(*ir.BinaryExpr)
argument(e.discardHole(), &call.X)
copiedK := e.discardHole()
if call.Y.Type().IsSlice() && call.Y.Type().Elem().HasPointers() {
copiedK = e.heapHole().deref(call, "copied slice")
}
argument(copiedK, &call.Y)
case ir.OPANIC:
call := call.(*ir.UnaryExpr)
argument(e.heapHole(), &call.X)
case ir.OCOMPLEX:
call := call.(*ir.BinaryExpr)
argument(e.discardHole(), &call.X)
argument(e.discardHole(), &call.Y)
case ir.ODELETE, ir.OPRINT, ir.OPRINTN, ir.ORECOVER:
call := call.(*ir.CallExpr)
fixRecoverCall(call)
for i := range call.Args {
argument(e.discardHole(), &call.Args[i])
}
case ir.OLEN, ir.OCAP, ir.OREAL, ir.OIMAG, ir.OCLOSE:
call := call.(*ir.UnaryExpr)
argument(e.discardHole(), &call.X)
case ir.OUNSAFEADD, ir.OUNSAFESLICE:
call := call.(*ir.BinaryExpr)
argument(ks[0], &call.X)
argument(e.discardHole(), &call.Y)
}
}
// goDeferStmt analyzes a "go" or "defer" statement.
//
// In the process, it also normalizes the statement to always use a
// simple function call with no arguments and no results. For example,
// it rewrites:
//
// defer f(x, y)
//
// into:
//
// x1, y1 := x, y
// defer func() { f(x1, y1) }()
func (e *escape) goDeferStmt(n *ir.GoDeferStmt) {
k := e.heapHole()
if n.Op() == ir.ODEFER && e.loopDepth == 1 {
// Top-level defer arguments don't escape to the heap,
// but they do need to last until they're invoked.
k = e.later(e.discardHole())
// force stack allocation of defer record, unless
// open-coded defers are used (see ssa.go)
n.SetEsc(ir.EscNever)
}
call := n.Call
init := n.PtrInit()
init.Append(ir.TakeInit(call)...)
e.stmts(*init)
// If the function is already a zero argument/result function call,
// just escape analyze it normally.
if call, ok := call.(*ir.CallExpr); ok && call.Op() == ir.OCALLFUNC {
if sig := call.X.Type(); sig.NumParams()+sig.NumResults() == 0 {
if clo, ok := call.X.(*ir.ClosureExpr); ok && n.Op() == ir.OGO {
clo.IsGoWrap = true
}
e.expr(k, call.X)
return
}
}
// Create a new no-argument function that we'll hand off to defer.
fn := ir.NewClosureFunc(n.Pos(), true)
fn.SetWrapper(true)
fn.Nname.SetType(types.NewSignature(types.LocalPkg, nil, nil, nil, nil))
fn.Body = []ir.Node{call}
clo := fn.OClosure
if n.Op() == ir.OGO {
clo.IsGoWrap = true
}
e.callCommon(nil, call, init, fn)
e.closures = append(e.closures, closure{e.spill(k, clo), clo})
// Create new top level call to closure.
n.Call = ir.NewCallExpr(call.Pos(), ir.OCALL, clo, nil)
ir.WithFunc(e.curfn, func() {
typecheck.Stmt(n.Call)
})
}
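The rewrite described in the comment on goDeferStmt above copies the arguments into x1, y1 before building the zero-argument wrapper because Go evaluates a deferred call's arguments when the defer statement executes, not when the deferred function finally runs. A minimal illustration of that language rule, independent of the compiler internals:

    package main

    import "fmt"

    func main() {
        x := 1
        defer fmt.Println("deferred sees:", x) // x is evaluated here, while it is still 1
        x = 2
        fmt.Println("before return:", x)
        // Output:
        // before return: 2
        // deferred sees: 1
    }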
// rewriteArgument rewrites the argument *argp of the given call expression.
// fn is the static callee function, if known.
// wrapper is the go/defer wrapper function for call, if any.
func (e *escape) rewriteArgument(argp *ir.Node, init *ir.Nodes, call ir.Node, fn *ir.Name, wrapper *ir.Func) {
var pragma ir.PragmaFlag
if fn != nil && fn.Func != nil {
pragma = fn.Func.Pragma
}
// unsafeUintptr rewrites "uintptr(ptr)" arguments to syscall-like
// functions, so that ptr is kept alive and/or escaped as
// appropriate. unsafeUintptr also reports whether it modified arg0.
unsafeUintptr := func(arg0 ir.Node) bool {
if pragma&(ir.UintptrKeepAlive|ir.UintptrEscapes) == 0 {
return false
}
// If the argument is really a pointer being converted to uintptr,
// arrange for the pointer to be kept alive until the call returns,
// by copying it into a temp and marking that temp
// still alive when we pop the temp stack.
if arg0.Op() != ir.OCONVNOP || !arg0.Type().IsUintptr() {
return false
}
arg := arg0.(*ir.ConvExpr)
if !arg.X.Type().IsUnsafePtr() {
return false
}
// Create and declare a new pointer-typed temp variable.
tmp := e.wrapExpr(arg.Pos(), &arg.X, init, call, wrapper)
if pragma&ir.UintptrEscapes != 0 {
e.flow(e.heapHole().note(arg, "//go:uintptrescapes"), e.oldLoc(tmp))
}
if pragma&ir.UintptrKeepAlive != 0 {
call := call.(*ir.CallExpr)
// SSA implements CallExpr.KeepAlive using OpVarLive, which
// doesn't support PAUTOHEAP variables. I tried changing it to
// use OpKeepAlive, but that ran into issues of its own.
// For now, the easy solution is to explicitly copy to (yet
// another) new temporary variable.
keep := tmp
if keep.Class == ir.PAUTOHEAP {
keep = e.copyExpr(arg.Pos(), tmp, call.PtrInit(), wrapper, false)
}
keep.SetAddrtaken(true) // ensure SSA keeps the tmp variable
call.KeepAlive = append(call.KeepAlive, keep)
}
return true
}
visit := func(pos src.XPos, argp *ir.Node) {
// Optimize a few common constant expressions. By leaving these
// untouched in the call expression, we let the wrapper handle
// evaluating them, rather than taking up closure context space.
switch arg := *argp; arg.Op() {
case ir.OLITERAL, ir.ONIL, ir.OMETHEXPR:
return
case ir.ONAME:
if arg.(*ir.Name).Class == ir.PFUNC {
return
}
}
if unsafeUintptr(*argp) {
return
}
if wrapper != nil {
e.wrapExpr(pos, argp, init, call, wrapper)
}
}
// Peel away any slice lits.
if arg := *argp; arg.Op() == ir.OSLICELIT {
list := arg.(*ir.CompLitExpr).List
for i := range list {
visit(arg.Pos(), &list[i])
}
} else {
visit(call.Pos(), argp)
}
}
// wrapExpr replaces *exprp with a temporary variable copy. If wrapper
// is non-nil, the variable will be captured for use within that
// function.
func (e *escape) wrapExpr(pos src.XPos, exprp *ir.Node, init *ir.Nodes, call ir.Node, wrapper *ir.Func) *ir.Name {
tmp := e.copyExpr(pos, *exprp, init, e.curfn, true)
if wrapper != nil {
// Currently for "defer i.M()" if i is nil it panics at the point
// of defer statement, not when deferred function is called. We
// need to do the nil check outside of the wrapper.
if call.Op() == ir.OCALLINTER && exprp == &call.(*ir.CallExpr).X.(*ir.SelectorExpr).X {
check := ir.NewUnaryExpr(pos, ir.OCHECKNIL, ir.NewUnaryExpr(pos, ir.OITAB, tmp))
init.Append(typecheck.Stmt(check))
}
e.oldLoc(tmp).captured = true
tmp = ir.NewClosureVar(pos, wrapper, tmp)
}
*exprp = tmp
return tmp
}
// copyExpr creates and returns a new temporary variable within fn;
// appends statements to init to declare and initialize it to expr;
// and escape analyzes the data flow if analyze is true.
func (e *escape) copyExpr(pos src.XPos, expr ir.Node, init *ir.Nodes, fn *ir.Func, analyze bool) *ir.Name {
if ir.HasUniquePos(expr) {
pos = expr.Pos()
}
tmp := typecheck.TempAt(pos, fn, expr.Type())
stmts := []ir.Node{
ir.NewDecl(pos, ir.ODCL, tmp),
ir.NewAssignStmt(pos, tmp, expr),
}
typecheck.Stmts(stmts)
init.Append(stmts...)
if analyze {
e.newLoc(tmp, false)
e.stmts(stmts)
}
return tmp
}
// tagHole returns a hole for evaluating an argument passed to param.
// ks should contain the holes representing where the function
// callee's results flow. fn is the statically-known callee function,
// if any.
func (e *escape) tagHole(ks []hole, fn *ir.Name, param *types.Field) hole {
// If this is a dynamic call, we can't rely on param.Note.
if fn == nil {
return e.heapHole()
}
if e.inMutualBatch(fn) {
return e.addr(ir.AsNode(param.Nname))
}
// Call to previously tagged function.
var tagKs []hole
esc := parseLeaks(param.Note)
if x := esc.Heap(); x >= 0 {
tagKs = append(tagKs, e.heapHole().shift(x))
}
if ks != nil {
for i := 0; i < numEscResults; i++ {
if x := esc.Result(i); x >= 0 {
tagKs = append(tagKs, ks[i].shift(x))
}
}
}
return e.teeHole(tagKs...)
}

View File

@@ -1,37 +0,0 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/typecheck"
"cmd/compile/internal/types"
)
// TODO(mdempsky): Desugaring doesn't belong during escape analysis,
// but for now it's the most convenient place for some rewrites.
// fixRecoverCall rewrites an ORECOVER call into ORECOVERFP,
// adding an explicit frame pointer argument.
// If call is not an ORECOVER call, it's left unmodified.
func fixRecoverCall(call *ir.CallExpr) {
if call.Op() != ir.ORECOVER {
return
}
pos := call.Pos()
// FP is equal to caller's SP plus FixedFrameSize().
var fp ir.Node = ir.NewCallExpr(pos, ir.OGETCALLERSP, nil, nil)
if off := base.Ctxt.FixedFrameSize(); off != 0 {
fp = ir.NewBinaryExpr(fp.Pos(), ir.OADD, fp, ir.NewInt(off))
}
// TODO(mdempsky): Replace *int32 with unsafe.Pointer, without upsetting checkptr.
fp = ir.NewConvExpr(pos, ir.OCONVNOP, types.NewPtr(types.Types[types.TINT32]), fp)
call.SetOp(ir.ORECOVERFP)
call.Args = []ir.Node{typecheck.Expr(fp)}
}

File diff suppressed because it is too large

View File

@@ -1,335 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/types"
)
// expr models evaluating an expression n and flowing the result into
// hole k.
func (e *escape) expr(k hole, n ir.Node) {
if n == nil {
return
}
e.stmts(n.Init())
e.exprSkipInit(k, n)
}
func (e *escape) exprSkipInit(k hole, n ir.Node) {
if n == nil {
return
}
lno := ir.SetPos(n)
defer func() {
base.Pos = lno
}()
if k.derefs >= 0 && !n.Type().HasPointers() {
k.dst = &e.blankLoc
}
switch n.Op() {
default:
base.Fatalf("unexpected expr: %s %v", n.Op().String(), n)
case ir.OLITERAL, ir.ONIL, ir.OGETG, ir.OGETCALLERPC, ir.OGETCALLERSP, ir.OTYPE, ir.OMETHEXPR, ir.OLINKSYMOFFSET:
// nop
case ir.ONAME:
n := n.(*ir.Name)
if n.Class == ir.PFUNC || n.Class == ir.PEXTERN {
return
}
e.flow(k, e.oldLoc(n))
case ir.OPLUS, ir.ONEG, ir.OBITNOT, ir.ONOT:
n := n.(*ir.UnaryExpr)
e.discard(n.X)
case ir.OADD, ir.OSUB, ir.OOR, ir.OXOR, ir.OMUL, ir.ODIV, ir.OMOD, ir.OLSH, ir.ORSH, ir.OAND, ir.OANDNOT, ir.OEQ, ir.ONE, ir.OLT, ir.OLE, ir.OGT, ir.OGE:
n := n.(*ir.BinaryExpr)
e.discard(n.X)
e.discard(n.Y)
case ir.OANDAND, ir.OOROR:
n := n.(*ir.LogicalExpr)
e.discard(n.X)
e.discard(n.Y)
case ir.OADDR:
n := n.(*ir.AddrExpr)
e.expr(k.addr(n, "address-of"), n.X) // "address-of"
case ir.ODEREF:
n := n.(*ir.StarExpr)
e.expr(k.deref(n, "indirection"), n.X) // "indirection"
case ir.ODOT, ir.ODOTMETH, ir.ODOTINTER:
n := n.(*ir.SelectorExpr)
e.expr(k.note(n, "dot"), n.X)
case ir.ODOTPTR:
n := n.(*ir.SelectorExpr)
e.expr(k.deref(n, "dot of pointer"), n.X) // "dot of pointer"
case ir.ODOTTYPE, ir.ODOTTYPE2:
n := n.(*ir.TypeAssertExpr)
e.expr(k.dotType(n.Type(), n, "dot"), n.X)
case ir.ODYNAMICDOTTYPE, ir.ODYNAMICDOTTYPE2:
n := n.(*ir.DynamicTypeAssertExpr)
e.expr(k.dotType(n.Type(), n, "dot"), n.X)
// n.T doesn't need to be tracked; it always points to read-only storage.
case ir.OINDEX:
n := n.(*ir.IndexExpr)
if n.X.Type().IsArray() {
e.expr(k.note(n, "fixed-array-index-of"), n.X)
} else {
// TODO(mdempsky): Fix why reason text.
e.expr(k.deref(n, "dot of pointer"), n.X)
}
e.discard(n.Index)
case ir.OINDEXMAP:
n := n.(*ir.IndexExpr)
e.discard(n.X)
e.discard(n.Index)
case ir.OSLICE, ir.OSLICEARR, ir.OSLICE3, ir.OSLICE3ARR, ir.OSLICESTR:
n := n.(*ir.SliceExpr)
e.expr(k.note(n, "slice"), n.X)
e.discard(n.Low)
e.discard(n.High)
e.discard(n.Max)
case ir.OCONV, ir.OCONVNOP:
n := n.(*ir.ConvExpr)
if ir.ShouldCheckPtr(e.curfn, 2) && n.Type().IsUnsafePtr() && n.X.Type().IsPtr() {
// When -d=checkptr=2 is enabled, treat
// conversions to unsafe.Pointer as an
// escaping operation. This allows better
// runtime instrumentation, since we can more
// easily detect object boundaries on the heap
// than the stack.
e.assignHeap(n.X, "conversion to unsafe.Pointer", n)
} else if n.Type().IsUnsafePtr() && n.X.Type().IsUintptr() {
e.unsafeValue(k, n.X)
} else {
e.expr(k, n.X)
}
case ir.OCONVIFACE, ir.OCONVIDATA:
n := n.(*ir.ConvExpr)
if !n.X.Type().IsInterface() && !types.IsDirectIface(n.X.Type()) {
k = e.spill(k, n)
}
e.expr(k.note(n, "interface-converted"), n.X)
case ir.OEFACE:
n := n.(*ir.BinaryExpr)
// Note: n.X is not needed because it can never point to memory that might escape.
e.expr(k, n.Y)
case ir.OIDATA, ir.OSPTR:
n := n.(*ir.UnaryExpr)
e.expr(k, n.X)
case ir.OSLICE2ARRPTR:
// the slice pointer flows directly to the result
n := n.(*ir.ConvExpr)
e.expr(k, n.X)
case ir.ORECV:
n := n.(*ir.UnaryExpr)
e.discard(n.X)
case ir.OCALLMETH, ir.OCALLFUNC, ir.OCALLINTER, ir.OINLCALL, ir.OLEN, ir.OCAP, ir.OCOMPLEX, ir.OREAL, ir.OIMAG, ir.OAPPEND, ir.OCOPY, ir.ORECOVER, ir.OUNSAFEADD, ir.OUNSAFESLICE:
e.call([]hole{k}, n)
case ir.ONEW:
n := n.(*ir.UnaryExpr)
e.spill(k, n)
case ir.OMAKESLICE:
n := n.(*ir.MakeExpr)
e.spill(k, n)
e.discard(n.Len)
e.discard(n.Cap)
case ir.OMAKECHAN:
n := n.(*ir.MakeExpr)
e.discard(n.Len)
case ir.OMAKEMAP:
n := n.(*ir.MakeExpr)
e.spill(k, n)
e.discard(n.Len)
case ir.OMETHVALUE:
// Flow the receiver argument to both the closure and
// to the receiver parameter.
n := n.(*ir.SelectorExpr)
closureK := e.spill(k, n)
m := n.Selection
// We don't know how the method value will be called
// later, so conservatively assume the result
// parameters all flow to the heap.
//
// TODO(mdempsky): Change ks into a callback, so that
// we don't have to create this slice?
var ks []hole
for i := m.Type.NumResults(); i > 0; i-- {
ks = append(ks, e.heapHole())
}
name, _ := m.Nname.(*ir.Name)
paramK := e.tagHole(ks, name, m.Type.Recv())
e.expr(e.teeHole(paramK, closureK), n.X)
case ir.OPTRLIT:
n := n.(*ir.AddrExpr)
e.expr(e.spill(k, n), n.X)
case ir.OARRAYLIT:
n := n.(*ir.CompLitExpr)
for _, elt := range n.List {
if elt.Op() == ir.OKEY {
elt = elt.(*ir.KeyExpr).Value
}
e.expr(k.note(n, "array literal element"), elt)
}
case ir.OSLICELIT:
n := n.(*ir.CompLitExpr)
k = e.spill(k, n)
for _, elt := range n.List {
if elt.Op() == ir.OKEY {
elt = elt.(*ir.KeyExpr).Value
}
e.expr(k.note(n, "slice-literal-element"), elt)
}
case ir.OSTRUCTLIT:
n := n.(*ir.CompLitExpr)
for _, elt := range n.List {
e.expr(k.note(n, "struct literal element"), elt.(*ir.StructKeyExpr).Value)
}
case ir.OMAPLIT:
n := n.(*ir.CompLitExpr)
e.spill(k, n)
// Map keys and values are always stored in the heap.
for _, elt := range n.List {
elt := elt.(*ir.KeyExpr)
e.assignHeap(elt.Key, "map literal key", n)
e.assignHeap(elt.Value, "map literal value", n)
}
case ir.OCLOSURE:
n := n.(*ir.ClosureExpr)
k = e.spill(k, n)
e.closures = append(e.closures, closure{k, n})
if fn := n.Func; fn.IsHiddenClosure() {
for _, cv := range fn.ClosureVars {
if loc := e.oldLoc(cv); !loc.captured {
loc.captured = true
// Ignore reassignments to the variable in straightline code
// preceding the first capture by a closure.
if loc.loopDepth == e.loopDepth {
loc.reassigned = false
}
}
}
for _, n := range fn.Dcl {
// Add locations for local variables of the
// closure, if needed, in case we're not including
// the closure func in the batch for escape
// analysis (happens for escape analysis called
// from reflectdata.methodWrapper)
if n.Op() == ir.ONAME && n.Opt == nil {
e.with(fn).newLoc(n, false)
}
}
e.walkFunc(fn)
}
case ir.ORUNES2STR, ir.OBYTES2STR, ir.OSTR2RUNES, ir.OSTR2BYTES, ir.ORUNESTR:
n := n.(*ir.ConvExpr)
e.spill(k, n)
e.discard(n.X)
case ir.OADDSTR:
n := n.(*ir.AddStringExpr)
e.spill(k, n)
// Arguments of OADDSTR never escape;
// runtime.concatstrings makes sure of that.
e.discards(n.List)
case ir.ODYNAMICTYPE:
// Nothing to do - argument is a *runtime._type (+ maybe a *runtime.itab) pointing to static data section
}
}
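
As a concrete reference point for the OADDR and binary-operator cases above, the following standalone program (written for this comparison, not part of the change; function names are illustrative) exercises both kinds of flow. Building it with `go build -gcflags=-m` should report `moved to heap: x` for addrOf and nothing for sum.

// Illustrative standalone program (not compiler source). Returning &x flows
// the address of x into the caller's result hole (the OADDR case above, with
// derefs shifted to -1), so x must be heap allocated; the operands of a+b are
// only discarded (the OADD case), so nothing in sum escapes.
package main

import "fmt"

func addrOf() *int {
	x := 42
	return &x // expected: "moved to heap: x" under -gcflags=-m
}

func sum(a, b int) int {
	return a + b
}

func main() {
	fmt.Println(*addrOf(), sum(1, 2))
}
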
// unsafeValue evaluates a uintptr-typed arithmetic expression looking
// for conversions from an unsafe.Pointer.
func (e *escape) unsafeValue(k hole, n ir.Node) {
if n.Type().Kind() != types.TUINTPTR {
base.Fatalf("unexpected type %v for %v", n.Type(), n)
}
if k.addrtaken {
base.Fatalf("unexpected addrtaken")
}
e.stmts(n.Init())
switch n.Op() {
case ir.OCONV, ir.OCONVNOP:
n := n.(*ir.ConvExpr)
if n.X.Type().IsUnsafePtr() {
e.expr(k, n.X)
} else {
e.discard(n.X)
}
case ir.ODOTPTR:
n := n.(*ir.SelectorExpr)
if ir.IsReflectHeaderDataField(n) {
e.expr(k.deref(n, "reflect.Header.Data"), n.X)
} else {
e.discard(n.X)
}
case ir.OPLUS, ir.ONEG, ir.OBITNOT:
n := n.(*ir.UnaryExpr)
e.unsafeValue(k, n.X)
case ir.OADD, ir.OSUB, ir.OOR, ir.OXOR, ir.OMUL, ir.ODIV, ir.OMOD, ir.OAND, ir.OANDNOT:
n := n.(*ir.BinaryExpr)
e.unsafeValue(k, n.X)
e.unsafeValue(k, n.Y)
case ir.OLSH, ir.ORSH:
n := n.(*ir.BinaryExpr)
e.unsafeValue(k, n.X)
// RHS need not be uintptr-typed (#32959) and can't meaningfully
// flow pointers anyway.
e.discard(n.Y)
default:
e.exprSkipInit(e.discardHole(), n)
}
}
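
For readers unfamiliar with the pattern unsafeValue is chasing, here is a hand-written illustration (an assumed example, not taken from this change) of a pointer round-tripping through uintptr arithmetic. The pointer value flows through the uintptr expression, which is why the conversion and arithmetic cases above keep tracking it, while the shift-count operand mentioned at #32959 carries no pointer and is simply discarded.

// Illustrative-only example of uintptr arithmetic on a pointer. The
// unsafe.Pointer -> uintptr -> unsafe.Pointer round trip happens in a single
// expression, per the unsafe.Pointer rules, so the pointer derived on the
// right still refers into *p -- the value flow that expr/unsafeValue track.
package main

import (
	"fmt"
	"unsafe"
)

type pair struct {
	a, b int64
}

func main() {
	p := &pair{a: 1, b: 2}
	bp := (*int64)(unsafe.Pointer(uintptr(unsafe.Pointer(p)) + unsafe.Offsetof(p.b)))
	fmt.Println(*bp) // 2
}
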
// discard evaluates an expression n for side-effects, but discards
// its value.
func (e *escape) discard(n ir.Node) {
e.expr(e.discardHole(), n)
}
func (e *escape) discards(l ir.Nodes) {
for _, n := range l {
e.discard(n)
}
}
// spill allocates a new location associated with expression n, flows
// its address to k, and returns a hole that flows values to it. It's
// intended for use with most expressions that allocate storage.
func (e *escape) spill(k hole, n ir.Node) hole {
loc := e.newLoc(n, true)
e.flow(k.addr(n, "spill"), loc)
return loc.asHole()
}


@@ -1,324 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/logopt"
"cmd/compile/internal/types"
"fmt"
)
// Below we implement the methods for walking the AST and recording
// data flow edges. Note that because a sub-expression might have
// side-effects, it's important to always visit the entire AST.
//
// For example, write either:
//
// if x {
// e.discard(n.Left)
// } else {
// e.value(k, n.Left)
// }
//
// or
//
// if x {
// k = e.discardHole()
// }
// e.value(k, n.Left)
//
// Do NOT write:
//
// // BAD: possibly loses side-effects within n.Left
// if !x {
// e.value(k, n.Left)
// }
// A location represents an abstract location that stores a Go
// variable.
type location struct {
n ir.Node // represented variable or expression, if any
curfn *ir.Func // enclosing function
edges []edge // incoming edges
loopDepth int // loopDepth at declaration
// resultIndex records the tuple index (starting at 1) for
// PPARAMOUT variables within their function's result type.
// For non-PPARAMOUT variables it's 0.
resultIndex int
// derefs and walkgen are used during walkOne to track the
// minimal dereferences from the walk root.
derefs int // >= -1
walkgen uint32
// dst and dstEdgeIdx track the next immediate assignment
// destination location during walkOne, along with the index
// of the edge pointing back to this location.
dst *location
dstEdgeIdx int
// queued is used by walkAll to track whether this location is
// in the walk queue.
queued bool
// escapes reports whether the represented variable's address
// escapes; that is, whether the variable must be heap
// allocated.
escapes bool
// transient reports whether the represented expression's
// address does not outlive the statement; that is, whether
// its storage can be immediately reused.
transient bool
// paramEsc records the represented parameter's leak set.
paramEsc leaks
captured bool // has a closure captured this variable?
reassigned bool // has this variable been reassigned?
addrtaken bool // has this variable's address been taken?
}
// An edge represents an assignment edge between two Go variables.
type edge struct {
src *location
derefs int // >= -1
notes *note
}
func (l *location) asHole() hole {
return hole{dst: l}
}
// leakTo records that parameter l leaks to sink.
func (l *location) leakTo(sink *location, derefs int) {
// If sink is a result parameter that doesn't escape (#44614)
// and we can fit return bits into the escape analysis tag,
// then record as a result leak.
if !sink.escapes && sink.isName(ir.PPARAMOUT) && sink.curfn == l.curfn {
ri := sink.resultIndex - 1
if ri < numEscResults {
// Leak to result parameter.
l.paramEsc.AddResult(ri, derefs)
return
}
}
// Otherwise, record as heap leak.
l.paramEsc.AddHeap(derefs)
}
func (l *location) isName(c ir.Class) bool {
return l.n != nil && l.n.Op() == ir.ONAME && l.n.(*ir.Name).Class == c
}
// A hole represents a context for evaluation of a Go
// expression. E.g., when evaluating p in "x = **p", we'd have a hole
// with dst==x and derefs==2.
type hole struct {
dst *location
derefs int // >= -1
notes *note
// addrtaken indicates whether this context is taking the address of
// the expression, independent of whether the address will actually
// be stored into a variable.
addrtaken bool
}
type note struct {
next *note
where ir.Node
why string
}
func (k hole) note(where ir.Node, why string) hole {
if where == nil || why == "" {
base.Fatalf("note: missing where/why")
}
if base.Flag.LowerM >= 2 || logopt.Enabled() {
k.notes = &note{
next: k.notes,
where: where,
why: why,
}
}
return k
}
func (k hole) shift(delta int) hole {
k.derefs += delta
if k.derefs < -1 {
base.Fatalf("derefs underflow: %v", k.derefs)
}
k.addrtaken = delta < 0
return k
}
func (k hole) deref(where ir.Node, why string) hole { return k.shift(1).note(where, why) }
func (k hole) addr(where ir.Node, why string) hole { return k.shift(-1).note(where, why) }
func (k hole) dotType(t *types.Type, where ir.Node, why string) hole {
if !t.IsInterface() && !types.IsDirectIface(t) {
k = k.shift(1)
}
return k.note(where, why)
}
func (b *batch) flow(k hole, src *location) {
if k.addrtaken {
src.addrtaken = true
}
dst := k.dst
if dst == &b.blankLoc {
return
}
if dst == src && k.derefs >= 0 { // dst = dst, dst = *dst, ...
return
}
if dst.escapes && k.derefs < 0 { // dst = &src
if base.Flag.LowerM >= 2 || logopt.Enabled() {
pos := base.FmtPos(src.n.Pos())
if base.Flag.LowerM >= 2 {
fmt.Printf("%s: %v escapes to heap:\n", pos, src.n)
}
explanation := b.explainFlow(pos, dst, src, k.derefs, k.notes, []*logopt.LoggedOpt{})
if logopt.Enabled() {
var e_curfn *ir.Func // TODO(mdempsky): Fix.
logopt.LogOpt(src.n.Pos(), "escapes", "escape", ir.FuncName(e_curfn), fmt.Sprintf("%v escapes to heap", src.n), explanation)
}
}
src.escapes = true
return
}
// TODO(mdempsky): Deduplicate edges?
dst.edges = append(dst.edges, edge{src: src, derefs: k.derefs, notes: k.notes})
}
func (b *batch) heapHole() hole { return b.heapLoc.asHole() }
func (b *batch) discardHole() hole { return b.blankLoc.asHole() }
func (b *batch) oldLoc(n *ir.Name) *location {
if n.Canonical().Opt == nil {
base.Fatalf("%v has no location", n)
}
return n.Canonical().Opt.(*location)
}
func (e *escape) newLoc(n ir.Node, transient bool) *location {
if e.curfn == nil {
base.Fatalf("e.curfn isn't set")
}
if n != nil && n.Type() != nil && n.Type().NotInHeap() {
base.ErrorfAt(n.Pos(), "%v is incomplete (or unallocatable); stack allocation disallowed", n.Type())
}
if n != nil && n.Op() == ir.ONAME {
if canon := n.(*ir.Name).Canonical(); n != canon {
base.Fatalf("newLoc on non-canonical %v (canonical is %v)", n, canon)
}
}
loc := &location{
n: n,
curfn: e.curfn,
loopDepth: e.loopDepth,
transient: transient,
}
e.allLocs = append(e.allLocs, loc)
if n != nil {
if n.Op() == ir.ONAME {
n := n.(*ir.Name)
if n.Class == ir.PPARAM && n.Curfn == nil {
// ok; hidden parameter
} else if n.Curfn != e.curfn {
base.Fatalf("curfn mismatch: %v != %v for %v", n.Curfn, e.curfn, n)
}
if n.Opt != nil {
base.Fatalf("%v already has a location", n)
}
n.Opt = loc
}
}
return loc
}
// teeHole returns a new hole that flows into each hole of ks,
// similar to the Unix tee(1) command.
func (e *escape) teeHole(ks ...hole) hole {
if len(ks) == 0 {
return e.discardHole()
}
if len(ks) == 1 {
return ks[0]
}
// TODO(mdempsky): Optimize if there's only one non-discard hole?
// Given holes "l1 = _", "l2 = **_", "l3 = *_", ..., create a
// new temporary location ltmp, wire it into place, and return
// a hole for "ltmp = _".
loc := e.newLoc(nil, true)
for _, k := range ks {
// N.B., "p = &q" and "p = &tmp; tmp = q" are not
// semantically equivalent. To combine holes like "l1
// = _" and "l2 = &_", we'd need to wire them as "l1 =
// *ltmp" and "l2 = ltmp" and return "ltmp = &_"
// instead.
if k.derefs < 0 {
base.Fatalf("teeHole: negative derefs")
}
e.flow(k, loc)
}
return loc.asHole()
}
// later returns a new hole that flows into k, but some time later.
// Its main effect is to prevent immediate reuse of temporary
// variables introduced during Order.
func (e *escape) later(k hole) hole {
loc := e.newLoc(nil, false)
e.flow(k, loc)
return loc.asHole()
}
// Fmt is called from node printing to print information about escape analysis results.
func Fmt(n ir.Node) string {
text := ""
switch n.Esc() {
case ir.EscUnknown:
break
case ir.EscHeap:
text = "esc(h)"
case ir.EscNone:
text = "esc(no)"
case ir.EscNever:
text = "esc(N)"
default:
text = fmt.Sprintf("esc(%d)", n.Esc())
}
if n.Op() == ir.ONAME {
n := n.(*ir.Name)
if loc, ok := n.Opt.(*location); ok && loc.loopDepth != 0 {
if text != "" {
text += " "
}
text += fmt.Sprintf("ld(%d)", loc.loopDepth)
}
}
return text
}


@@ -1,106 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"math"
"strings"
)
const numEscResults = 7
// A leaks value represents a set of assignment flows from a parameter
// to the heap or to any of its function's (first numEscResults)
// result parameters.
type leaks [1 + numEscResults]uint8
// Empty reports whether l is an empty set (i.e., no assignment flows).
func (l leaks) Empty() bool { return l == leaks{} }
// Heap returns the minimum deref count of any assignment flow from l
// to the heap. If no such flows exist, Heap returns -1.
func (l leaks) Heap() int { return l.get(0) }
// Result returns the minimum deref count of any assignment flow from
// l to its function's i'th result parameter. If no such flows exist,
// Result returns -1.
func (l leaks) Result(i int) int { return l.get(1 + i) }
// AddHeap adds an assignment flow from l to the heap.
func (l *leaks) AddHeap(derefs int) { l.add(0, derefs) }
// AddResult adds an assignment flow from l to its function's i'th
// result parameter.
func (l *leaks) AddResult(i, derefs int) { l.add(1+i, derefs) }
func (l *leaks) setResult(i, derefs int) { l.set(1+i, derefs) }
func (l leaks) get(i int) int { return int(l[i]) - 1 }
func (l *leaks) add(i, derefs int) {
if old := l.get(i); old < 0 || derefs < old {
l.set(i, derefs)
}
}
func (l *leaks) set(i, derefs int) {
v := derefs + 1
if v < 0 {
base.Fatalf("invalid derefs count: %v", derefs)
}
if v > math.MaxUint8 {
v = math.MaxUint8
}
l[i] = uint8(v)
}
// Optimize removes result flow paths that are equal in length or
// longer than the shortest heap flow path.
func (l *leaks) Optimize() {
// If we have a path to the heap, then there's no use in
// keeping equal or longer paths elsewhere.
if x := l.Heap(); x >= 0 {
for i := 0; i < numEscResults; i++ {
if l.Result(i) >= x {
l.setResult(i, -1)
}
}
}
}
var leakTagCache = map[leaks]string{}
// Encode converts l into a binary string for export data.
func (l leaks) Encode() string {
if l.Heap() == 0 {
// Space optimization: empty string encodes more
// efficiently in export data.
return ""
}
if s, ok := leakTagCache[l]; ok {
return s
}
n := len(l)
for n > 0 && l[n-1] == 0 {
n--
}
s := "esc:" + string(l[:n])
leakTagCache[l] = s
return s
}
// parseLeaks parses a binary string representing a leaks value.
func parseLeaks(s string) leaks {
var l leaks
if !strings.HasPrefix(s, "esc:") {
l.AddHeap(0)
return l
}
copy(l[:], s[4:])
return l
}
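
The encoding above is compact but easy to misread, so here is a simplified standalone model (an assumed reimplementation for illustration, not the compiler's own code): each slot stores derefs+1 as a byte, 0 means "no flow", slot 0 is the heap, and slots 1..numEscResults are result parameters.

// Simplified model of the leaks encoding (illustration only).
package main

import "fmt"

const numEscResults = 7

type leaks [1 + numEscResults]uint8

// add records a flow with the given deref count, keeping the minimum.
func (l *leaks) add(i, derefs int) {
	if old := int(l[i]) - 1; old < 0 || derefs < old {
		v := derefs + 1
		if v > 255 {
			v = 255 // saturate, as the real encoding does
		}
		l[i] = uint8(v)
	}
}

func (l leaks) Heap() int        { return int(l[0]) - 1 }
func (l leaks) Result(i int) int { return int(l[1+i]) - 1 }

func main() {
	var l leaks
	l.add(0, 2) // leaks to the heap after two dereferences
	l.add(1, 0) // leaks directly to result parameter 0
	fmt.Println(l.Heap(), l.Result(0), l.Result(1)) // 2 0 -1
}
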


@@ -1,289 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/logopt"
"cmd/internal/src"
"fmt"
"strings"
)
// walkAll computes the minimal dereferences between all pairs of
// locations.
func (b *batch) walkAll() {
// We use a work queue to keep track of locations that we need
// to visit, and repeatedly walk until we reach a fixed point.
//
// We walk once from each location (including the heap), and
// then re-enqueue each location on its transition from
// transient->!transient and !escapes->escapes, which can each
// happen at most once. So we take Θ(len(e.allLocs)) walks.
// LIFO queue, has enough room for e.allLocs and e.heapLoc.
todo := make([]*location, 0, len(b.allLocs)+1)
enqueue := func(loc *location) {
if !loc.queued {
todo = append(todo, loc)
loc.queued = true
}
}
for _, loc := range b.allLocs {
enqueue(loc)
}
enqueue(&b.heapLoc)
var walkgen uint32
for len(todo) > 0 {
root := todo[len(todo)-1]
todo = todo[:len(todo)-1]
root.queued = false
walkgen++
b.walkOne(root, walkgen, enqueue)
}
}
// walkOne computes the minimal number of dereferences from root to
// all other locations.
func (b *batch) walkOne(root *location, walkgen uint32, enqueue func(*location)) {
// The data flow graph has negative edges (from addressing
// operations), so we use the Bellman-Ford algorithm. However,
// we don't have to worry about infinite negative cycles since
// we bound intermediate dereference counts to 0.
root.walkgen = walkgen
root.derefs = 0
root.dst = nil
todo := []*location{root} // LIFO queue
for len(todo) > 0 {
l := todo[len(todo)-1]
todo = todo[:len(todo)-1]
derefs := l.derefs
// If l.derefs < 0, then l's address flows to root.
addressOf := derefs < 0
if addressOf {
// For a flow path like "root = &l; l = x",
// l's address flows to root, but x's does
// not. We recognize this by lower bounding
// derefs at 0.
derefs = 0
// If l's address flows to a non-transient
// location, then l can't be transiently
// allocated.
if !root.transient && l.transient {
l.transient = false
enqueue(l)
}
}
if b.outlives(root, l) {
// l's value flows to root. If l is a function
// parameter and root is the heap or a
// corresponding result parameter, then record
// that value flow for tagging the function
// later.
if l.isName(ir.PPARAM) {
if (logopt.Enabled() || base.Flag.LowerM >= 2) && !l.escapes {
if base.Flag.LowerM >= 2 {
fmt.Printf("%s: parameter %v leaks to %s with derefs=%d:\n", base.FmtPos(l.n.Pos()), l.n, b.explainLoc(root), derefs)
}
explanation := b.explainPath(root, l)
if logopt.Enabled() {
var e_curfn *ir.Func // TODO(mdempsky): Fix.
logopt.LogOpt(l.n.Pos(), "leak", "escape", ir.FuncName(e_curfn),
fmt.Sprintf("parameter %v leaks to %s with derefs=%d", l.n, b.explainLoc(root), derefs), explanation)
}
}
l.leakTo(root, derefs)
}
// If l's address flows somewhere that
// outlives it, then l needs to be heap
// allocated.
if addressOf && !l.escapes {
if logopt.Enabled() || base.Flag.LowerM >= 2 {
if base.Flag.LowerM >= 2 {
fmt.Printf("%s: %v escapes to heap:\n", base.FmtPos(l.n.Pos()), l.n)
}
explanation := b.explainPath(root, l)
if logopt.Enabled() {
var e_curfn *ir.Func // TODO(mdempsky): Fix.
logopt.LogOpt(l.n.Pos(), "escape", "escape", ir.FuncName(e_curfn), fmt.Sprintf("%v escapes to heap", l.n), explanation)
}
}
l.escapes = true
enqueue(l)
continue
}
}
for i, edge := range l.edges {
if edge.src.escapes {
continue
}
d := derefs + edge.derefs
if edge.src.walkgen != walkgen || edge.src.derefs > d {
edge.src.walkgen = walkgen
edge.src.derefs = d
edge.src.dst = l
edge.src.dstEdgeIdx = i
todo = append(todo, edge.src)
}
}
}
}
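
To make the Bellman-Ford comment above concrete, the following toy program (a deliberate simplification written for this comparison; the real walk also handles walkgen, transience, queuing, and leak recording) relaxes a two-edge graph for the statements "heap = &x; x = y" and reports the address-of flow.

// Toy model of walkOne's relaxation (illustration only): incoming edges carry
// a deref delta that may be -1 for address-of, intermediate counts are clamped
// at 0, and reaching a location with derefs < 0 means its address flows to root.
package main

import "fmt"

type loc struct {
	name   string
	edges  []edge // incoming assignment edges, as in the compiler's graph
	derefs int
	seen   bool
}

type edge struct {
	src    *loc
	derefs int // >= -1
}

func walkOne(root *loc) {
	root.derefs, root.seen = 0, true
	todo := []*loc{root} // LIFO queue
	for len(todo) > 0 {
		l := todo[len(todo)-1]
		todo = todo[:len(todo)-1]
		derefs := l.derefs
		if derefs < 0 {
			fmt.Printf("address of %s flows to %s\n", l.name, root.name)
			derefs = 0 // lower-bound intermediate deref counts at 0
		}
		for _, e := range l.edges {
			if d := derefs + e.derefs; !e.src.seen || e.src.derefs > d {
				e.src.seen, e.src.derefs = true, d
				todo = append(todo, e.src)
			}
		}
	}
}

func main() {
	// heap = &x (derefs -1); x = y (derefs 0)
	x, y, heap := &loc{name: "x"}, &loc{name: "y"}, &loc{name: "heap"}
	heap.edges = []edge{{src: x, derefs: -1}}
	x.edges = []edge{{src: y, derefs: 0}}
	walkOne(heap) // prints: address of x flows to heap
}
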
// explainPath prints an explanation of how src flows to the walk root.
func (b *batch) explainPath(root, src *location) []*logopt.LoggedOpt {
visited := make(map[*location]bool)
pos := base.FmtPos(src.n.Pos())
var explanation []*logopt.LoggedOpt
for {
// Prevent infinite loop.
if visited[src] {
if base.Flag.LowerM >= 2 {
fmt.Printf("%s: warning: truncated explanation due to assignment cycle; see golang.org/issue/35518\n", pos)
}
break
}
visited[src] = true
dst := src.dst
edge := &dst.edges[src.dstEdgeIdx]
if edge.src != src {
base.Fatalf("path inconsistency: %v != %v", edge.src, src)
}
explanation = b.explainFlow(pos, dst, src, edge.derefs, edge.notes, explanation)
if dst == root {
break
}
src = dst
}
return explanation
}
func (b *batch) explainFlow(pos string, dst, srcloc *location, derefs int, notes *note, explanation []*logopt.LoggedOpt) []*logopt.LoggedOpt {
ops := "&"
if derefs >= 0 {
ops = strings.Repeat("*", derefs)
}
print := base.Flag.LowerM >= 2
flow := fmt.Sprintf(" flow: %s = %s%v:", b.explainLoc(dst), ops, b.explainLoc(srcloc))
if print {
fmt.Printf("%s:%s\n", pos, flow)
}
if logopt.Enabled() {
var epos src.XPos
if notes != nil {
epos = notes.where.Pos()
} else if srcloc != nil && srcloc.n != nil {
epos = srcloc.n.Pos()
}
var e_curfn *ir.Func // TODO(mdempsky): Fix.
explanation = append(explanation, logopt.NewLoggedOpt(epos, "escflow", "escape", ir.FuncName(e_curfn), flow))
}
for note := notes; note != nil; note = note.next {
if print {
fmt.Printf("%s: from %v (%v) at %s\n", pos, note.where, note.why, base.FmtPos(note.where.Pos()))
}
if logopt.Enabled() {
var e_curfn *ir.Func // TODO(mdempsky): Fix.
explanation = append(explanation, logopt.NewLoggedOpt(note.where.Pos(), "escflow", "escape", ir.FuncName(e_curfn),
fmt.Sprintf(" from %v (%v)", note.where, note.why)))
}
}
return explanation
}
func (b *batch) explainLoc(l *location) string {
if l == &b.heapLoc {
return "{heap}"
}
if l.n == nil {
// TODO(mdempsky): Omit entirely.
return "{temp}"
}
if l.n.Op() == ir.ONAME {
return fmt.Sprintf("%v", l.n)
}
return fmt.Sprintf("{storage for %v}", l.n)
}
// outlives reports whether values stored in l may survive beyond
// other's lifetime if stack allocated.
func (b *batch) outlives(l, other *location) bool {
// The heap outlives everything.
if l.escapes {
return true
}
// We don't know what callers do with returned values, so
// pessimistically we need to assume they flow to the heap and
// outlive everything too.
if l.isName(ir.PPARAMOUT) {
// Exception: Directly called closures can return
// locations allocated outside of them without forcing
// them to the heap. For example:
//
// var u int // okay to stack allocate
// *(func() *int { return &u }()) = 42
if containsClosure(other.curfn, l.curfn) && l.curfn.ClosureCalled() {
return false
}
return true
}
// If l and other are within the same function, then l
// outlives other if it was declared outside other's loop
// scope. For example:
//
// var l *int
// for {
// l = new(int)
// }
if l.curfn == other.curfn && l.loopDepth < other.loopDepth {
return true
}
// If other is declared within a child closure of where l is
// declared, then l outlives it. For example:
//
// var l *int
// func() {
// l = new(int)
// }
if containsClosure(l.curfn, other.curfn) {
return true
}
return false
}
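
The loop-depth rule in the second comment above is easy to reproduce from the outside. This standalone program (illustrative, not compiler code) stores each iteration's allocation into a variable declared outside the loop; `go build -gcflags=-m` should report the new(int) as escaping to the heap.

// Illustrative program for the loop-depth case of outlives.
package main

import "fmt"

func lastSquare(n int) int {
	var p *int // declared at the outer loop depth
	for i := 0; i < n; i++ {
		v := new(int) // allocated one loop depth deeper
		*v = i * i
		p = v // the allocation's address flows to a location that outlives the iteration
	}
	return *p
}

func main() {
	fmt.Println(lastSquare(4)) // 9
}
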
// containsClosure reports whether c is a closure contained within f.
func containsClosure(f, c *ir.Func) bool {
// Common case.
if f == c {
return false
}
// Closures within function Foo are named like "Foo.funcN..."
// TODO(mdempsky): Better way to recognize this.
fn := f.Sym().Name
cn := c.Sym().Name
return len(cn) > len(fn) && cn[:len(fn)] == fn && cn[len(fn)] == '.'
}


@@ -1,207 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"fmt"
)
// stmt evaluates a single Go statement.
func (e *escape) stmt(n ir.Node) {
if n == nil {
return
}
lno := ir.SetPos(n)
defer func() {
base.Pos = lno
}()
if base.Flag.LowerM > 2 {
fmt.Printf("%v:[%d] %v stmt: %v\n", base.FmtPos(base.Pos), e.loopDepth, e.curfn, n)
}
e.stmts(n.Init())
switch n.Op() {
default:
base.Fatalf("unexpected stmt: %v", n)
case ir.ODCLCONST, ir.ODCLTYPE, ir.OFALL, ir.OINLMARK:
// nop
case ir.OBREAK, ir.OCONTINUE, ir.OGOTO:
// TODO(mdempsky): Handle dead code?
case ir.OBLOCK:
n := n.(*ir.BlockStmt)
e.stmts(n.List)
case ir.ODCL:
// Record loop depth at declaration.
n := n.(*ir.Decl)
if !ir.IsBlank(n.X) {
e.dcl(n.X)
}
case ir.OLABEL:
n := n.(*ir.LabelStmt)
switch e.labels[n.Label] {
case nonlooping:
if base.Flag.LowerM > 2 {
fmt.Printf("%v:%v non-looping label\n", base.FmtPos(base.Pos), n)
}
case looping:
if base.Flag.LowerM > 2 {
fmt.Printf("%v: %v looping label\n", base.FmtPos(base.Pos), n)
}
e.loopDepth++
default:
base.Fatalf("label missing tag")
}
delete(e.labels, n.Label)
case ir.OIF:
n := n.(*ir.IfStmt)
e.discard(n.Cond)
e.block(n.Body)
e.block(n.Else)
case ir.OFOR, ir.OFORUNTIL:
n := n.(*ir.ForStmt)
e.loopDepth++
e.discard(n.Cond)
e.stmt(n.Post)
e.block(n.Body)
e.loopDepth--
case ir.ORANGE:
// for Key, Value = range X { Body }
n := n.(*ir.RangeStmt)
// X is evaluated outside the loop.
tmp := e.newLoc(nil, false)
e.expr(tmp.asHole(), n.X)
e.loopDepth++
ks := e.addrs([]ir.Node{n.Key, n.Value})
if n.X.Type().IsArray() {
e.flow(ks[1].note(n, "range"), tmp)
} else {
e.flow(ks[1].deref(n, "range-deref"), tmp)
}
e.reassigned(ks, n)
e.block(n.Body)
e.loopDepth--
case ir.OSWITCH:
n := n.(*ir.SwitchStmt)
if guard, ok := n.Tag.(*ir.TypeSwitchGuard); ok {
var ks []hole
if guard.Tag != nil {
for _, cas := range n.Cases {
cv := cas.Var
k := e.dcl(cv) // type switch variables have no ODCL.
if cv.Type().HasPointers() {
ks = append(ks, k.dotType(cv.Type(), cas, "switch case"))
}
}
}
e.expr(e.teeHole(ks...), n.Tag.(*ir.TypeSwitchGuard).X)
} else {
e.discard(n.Tag)
}
for _, cas := range n.Cases {
e.discards(cas.List)
e.block(cas.Body)
}
case ir.OSELECT:
n := n.(*ir.SelectStmt)
for _, cas := range n.Cases {
e.stmt(cas.Comm)
e.block(cas.Body)
}
case ir.ORECV:
// TODO(mdempsky): Consider e.discard(n.Left).
n := n.(*ir.UnaryExpr)
e.exprSkipInit(e.discardHole(), n) // already visited n.Ninit
case ir.OSEND:
n := n.(*ir.SendStmt)
e.discard(n.Chan)
e.assignHeap(n.Value, "send", n)
case ir.OAS:
n := n.(*ir.AssignStmt)
e.assignList([]ir.Node{n.X}, []ir.Node{n.Y}, "assign", n)
case ir.OASOP:
n := n.(*ir.AssignOpStmt)
// TODO(mdempsky): Worry about OLSH/ORSH?
e.assignList([]ir.Node{n.X}, []ir.Node{n.Y}, "assign", n)
case ir.OAS2:
n := n.(*ir.AssignListStmt)
e.assignList(n.Lhs, n.Rhs, "assign-pair", n)
case ir.OAS2DOTTYPE: // v, ok = x.(type)
n := n.(*ir.AssignListStmt)
e.assignList(n.Lhs, n.Rhs, "assign-pair-dot-type", n)
case ir.OAS2MAPR: // v, ok = m[k]
n := n.(*ir.AssignListStmt)
e.assignList(n.Lhs, n.Rhs, "assign-pair-mapr", n)
case ir.OAS2RECV, ir.OSELRECV2: // v, ok = <-ch
n := n.(*ir.AssignListStmt)
e.assignList(n.Lhs, n.Rhs, "assign-pair-receive", n)
case ir.OAS2FUNC:
n := n.(*ir.AssignListStmt)
e.stmts(n.Rhs[0].Init())
ks := e.addrs(n.Lhs)
e.call(ks, n.Rhs[0])
e.reassigned(ks, n)
case ir.ORETURN:
n := n.(*ir.ReturnStmt)
results := e.curfn.Type().Results().FieldSlice()
dsts := make([]ir.Node, len(results))
for i, res := range results {
dsts[i] = res.Nname.(*ir.Name)
}
e.assignList(dsts, n.Results, "return", n)
case ir.OCALLFUNC, ir.OCALLMETH, ir.OCALLINTER, ir.OINLCALL, ir.OCLOSE, ir.OCOPY, ir.ODELETE, ir.OPANIC, ir.OPRINT, ir.OPRINTN, ir.ORECOVER:
e.call(nil, n)
case ir.OGO, ir.ODEFER:
n := n.(*ir.GoDeferStmt)
e.goDeferStmt(n)
case ir.OTAILCALL:
// TODO(mdempsky): Treat like a normal call? esc.go used to just ignore it.
}
}
func (e *escape) stmts(l ir.Nodes) {
for _, n := range l {
e.stmt(n)
}
}
// block is like stmts, but preserves loopDepth.
func (e *escape) block(l ir.Nodes) {
old := e.loopDepth
e.stmts(l)
e.loopDepth = old
}
func (e *escape) dcl(n *ir.Name) hole {
if n.Curfn != e.curfn || n.IsClosureVar() {
base.Fatalf("bad declaration of %v", n)
}
loc := e.oldLoc(n)
loc.loopDepth = e.loopDepth
return loc.asHole()
}


@@ -1,215 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package escape
import (
"cmd/compile/internal/ir"
"cmd/compile/internal/typecheck"
)
func isSliceSelfAssign(dst, src ir.Node) bool {
// Detect the following special case.
//
// func (b *Buffer) Foo() {
// n, m := ...
// b.buf = b.buf[n:m]
// }
//
// This assignment is a no-op for escape analysis;
// it does not store any new pointers into b that were not already there.
// However, without this special case b will escape, because we assign to OIND/ODOTPTR.
// Here we assume that the statement will not contain calls,
// that is, that order will move any calls to init.
// Otherwise base ONAME value could change between the moments
// when we evaluate it for dst and for src.
// dst is ONAME dereference.
var dstX ir.Node
switch dst.Op() {
default:
return false
case ir.ODEREF:
dst := dst.(*ir.StarExpr)
dstX = dst.X
case ir.ODOTPTR:
dst := dst.(*ir.SelectorExpr)
dstX = dst.X
}
if dstX.Op() != ir.ONAME {
return false
}
// src is a slice operation.
switch src.Op() {
case ir.OSLICE, ir.OSLICE3, ir.OSLICESTR:
// OK.
case ir.OSLICEARR, ir.OSLICE3ARR:
// Since arrays are embedded into the containing object,
// a slice of a non-pointer array will introduce a new pointer into b that was not already there
// (a pointer to b itself). After such an assignment, if b's contents escape,
// b escapes as well. If we ignore such OSLICEARR, we will conclude
// that b does not escape when its contents do.
//
// Pointer to an array is OK since it's not stored inside b directly.
// For slicing an array (not pointer to array), there is an implicit OADDR.
// We check that to determine non-pointer array slicing.
src := src.(*ir.SliceExpr)
if src.X.Op() == ir.OADDR {
return false
}
default:
return false
}
// slice is applied to ONAME dereference.
var baseX ir.Node
switch base := src.(*ir.SliceExpr).X; base.Op() {
default:
return false
case ir.ODEREF:
base := base.(*ir.StarExpr)
baseX = base.X
case ir.ODOTPTR:
base := base.(*ir.SelectorExpr)
baseX = base.X
}
if baseX.Op() != ir.ONAME {
return false
}
// dst and src reference the same base ONAME.
return dstX.(*ir.Name) == baseX.(*ir.Name)
}
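
The Buffer example in the comment above corresponds to code like the following (a hypothetical type written for illustration): the re-slice stores no pointer into b that was not already there, which is exactly what isSliceSelfAssign detects.

// Hypothetical Buffer type illustrating the whitelisted self-assignment.
package main

import "fmt"

type Buffer struct {
	buf []byte
}

// Shift drops the first n bytes: both sides of the assignment dereference the
// same receiver, and the source is a slice of the destination's own storage.
func (b *Buffer) Shift(n int) {
	b.buf = b.buf[n:]
}

func main() {
	b := Buffer{buf: []byte("hello")}
	b.Shift(2)
	fmt.Println(string(b.buf)) // llo
}
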
// isSelfAssign reports whether assignment from src to dst can
// be ignored by the escape analysis as it's effectively a self-assignment.
func isSelfAssign(dst, src ir.Node) bool {
if isSliceSelfAssign(dst, src) {
return true
}
// Detect trivial assignments that assign back to the same object.
//
// It covers these cases:
// val.x = val.y
// val.x[i] = val.y[j]
// val.x1.x2 = val.x1.y2
// ... etc
//
// These assignments do not change assigned object lifetime.
if dst == nil || src == nil || dst.Op() != src.Op() {
return false
}
// The expression prefix must be both "safe" and identical.
switch dst.Op() {
case ir.ODOT, ir.ODOTPTR:
// Safe trailing accessors that are permitted to differ.
dst := dst.(*ir.SelectorExpr)
src := src.(*ir.SelectorExpr)
return ir.SameSafeExpr(dst.X, src.X)
case ir.OINDEX:
dst := dst.(*ir.IndexExpr)
src := src.(*ir.IndexExpr)
if mayAffectMemory(dst.Index) || mayAffectMemory(src.Index) {
return false
}
return ir.SameSafeExpr(dst.X, src.X)
default:
return false
}
}
// mayAffectMemory reports whether evaluation of n may affect the program's
// memory state. If the expression can't affect memory state, then it can be
// safely ignored by the escape analysis.
func mayAffectMemory(n ir.Node) bool {
// We may want to use a list of "memory safe" ops instead of generally
// "side-effect free", which would include all calls and other ops that can
// allocate or change global state. For now, it's safer to start with the latter.
//
// We're ignoring things like division by zero, index out of range,
// and nil pointer dereference here.
// TODO(rsc): It seems like it should be possible to replace this with
// an ir.Any looking for any op that's not the ones in the case statement.
// But that produces changes in the compiled output detected by buildall.
switch n.Op() {
case ir.ONAME, ir.OLITERAL, ir.ONIL:
return false
case ir.OADD, ir.OSUB, ir.OOR, ir.OXOR, ir.OMUL, ir.OLSH, ir.ORSH, ir.OAND, ir.OANDNOT, ir.ODIV, ir.OMOD:
n := n.(*ir.BinaryExpr)
return mayAffectMemory(n.X) || mayAffectMemory(n.Y)
case ir.OINDEX:
n := n.(*ir.IndexExpr)
return mayAffectMemory(n.X) || mayAffectMemory(n.Index)
case ir.OCONVNOP, ir.OCONV:
n := n.(*ir.ConvExpr)
return mayAffectMemory(n.X)
case ir.OLEN, ir.OCAP, ir.ONOT, ir.OBITNOT, ir.OPLUS, ir.ONEG, ir.OALIGNOF, ir.OOFFSETOF, ir.OSIZEOF:
n := n.(*ir.UnaryExpr)
return mayAffectMemory(n.X)
case ir.ODOT, ir.ODOTPTR:
n := n.(*ir.SelectorExpr)
return mayAffectMemory(n.X)
case ir.ODEREF:
n := n.(*ir.StarExpr)
return mayAffectMemory(n.X)
default:
return true
}
}
// HeapAllocReason returns the reason the given Node must be heap
// allocated, or the empty string if it does not need to be.
func HeapAllocReason(n ir.Node) string {
if n == nil || n.Type() == nil {
return ""
}
// Parameters are always passed via the stack.
if n.Op() == ir.ONAME {
n := n.(*ir.Name)
if n.Class == ir.PPARAM || n.Class == ir.PPARAMOUT {
return ""
}
}
if n.Type().Width > ir.MaxStackVarSize {
return "too large for stack"
}
if (n.Op() == ir.ONEW || n.Op() == ir.OPTRLIT) && n.Type().Elem().Width > ir.MaxImplicitStackVarSize {
return "too large for stack"
}
if n.Op() == ir.OCLOSURE && typecheck.ClosureType(n.(*ir.ClosureExpr)).Size() > ir.MaxImplicitStackVarSize {
return "too large for stack"
}
if n.Op() == ir.OMETHVALUE && typecheck.MethodValueType(n.(*ir.SelectorExpr)).Size() > ir.MaxImplicitStackVarSize {
return "too large for stack"
}
if n.Op() == ir.OMAKESLICE {
n := n.(*ir.MakeExpr)
r := n.Cap
if r == nil {
r = n.Len
}
if !ir.IsSmallIntConst(r) {
return "non-constant size"
}
if t := n.Type(); t.Elem().Width != 0 && ir.Int64Val(r) > ir.MaxImplicitStackVarSize/t.Elem().Width {
return "too large for stack"
}
}
return ""
}
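
Two of the cases above are easy to observe directly. In this standalone example (written for illustration), building with `go build -gcflags=-m` should report the non-constant make as escaping to the heap (the "non-constant size" case), while the small constant-size slice can stay on the stack.

// Illustrative program for the OMAKESLICE cases of HeapAllocReason.
package main

import "fmt"

func sumDynamic(n int) int {
	s := make([]int, n) // size is not a constant -> heap allocated
	total := 0
	for i := range s {
		s[i] = i
		total += s[i]
	}
	return total
}

func sumFixed() int {
	s := make([]int, 8) // small constant size -> may stay on the stack
	total := 0
	for i := range s {
		s[i] = i
		total += s[i]
	}
	return total
}

func main() {
	fmt.Println(sumDynamic(4), sumFixed())
}
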


@@ -5,16 +5,46 @@
package gc
import (
"fmt"
"go/constant"
"cmd/compile/internal/base"
"cmd/compile/internal/inline"
"cmd/compile/internal/ir"
"cmd/compile/internal/typecheck"
"cmd/compile/internal/types"
"cmd/internal/bio"
"fmt"
"go/constant"
)
func exportf(bout *bio.Writer, format string, args ...interface{}) {
fmt.Fprintf(bout, format, args...)
if base.Debug.Export != 0 {
fmt.Printf(format, args...)
}
}
func dumpexport(bout *bio.Writer) {
p := &exporter{marked: make(map[*types.Type]bool)}
for _, n := range typecheck.Target.Exports {
// Must catch it here rather than Export(), because the type can be
// not fully set (still TFORW) when Export() is called.
if n.Type() != nil && n.Type().HasTParam() {
base.Fatalf("Cannot (yet) export a generic type: %v", n)
}
p.markObject(n)
}
// The linker also looks for the $$ marker - use char after $$ to distinguish format.
exportf(bout, "\n$$B\n") // indicate binary export format
off := bout.Offset()
typecheck.WriteExports(bout.Writer)
size := bout.Offset() - off
exportf(bout, "\n$$\n")
if base.Debug.Export != 0 {
fmt.Printf("BenchmarkExportSize:%s 1 %d bytes\n", base.Ctxt.Pkgpath, size)
}
}
func dumpasmhdr() {
b, err := bio.Create(base.Flag.AsmHdr)
if err != nil {
@@ -49,3 +79,83 @@ func dumpasmhdr() {
b.Close()
}
type exporter struct {
marked map[*types.Type]bool // types already seen by markType
}
// markObject visits a reachable object.
func (p *exporter) markObject(n ir.Node) {
if n.Op() == ir.ONAME {
n := n.(*ir.Name)
if n.Class == ir.PFUNC {
inline.Inline_Flood(n, typecheck.Export)
}
}
p.markType(n.Type())
}
// markType recursively visits types reachable from t to identify
// functions whose inline bodies may be needed.
func (p *exporter) markType(t *types.Type) {
if p.marked[t] {
return
}
p.marked[t] = true
// If this is a named type, mark all of its associated
// methods. Skip interface types because t.Methods contains
// only their unexpanded method set (i.e., exclusive of
// interface embeddings), and the switch statement below
// handles their full method set.
if t.Sym() != nil && t.Kind() != types.TINTER {
for _, m := range t.Methods().Slice() {
if types.IsExported(m.Sym.Name) {
p.markObject(ir.AsNode(m.Nname))
}
}
}
// Recursively mark any types that can be produced given a
// value of type t: dereferencing a pointer; indexing or
// iterating over an array, slice, or map; receiving from a
// channel; accessing a struct field or interface method; or
// calling a function.
//
// Notably, we don't mark function parameter types, because
// the user already needs some way to construct values of
// those types.
switch t.Kind() {
case types.TPTR, types.TARRAY, types.TSLICE:
p.markType(t.Elem())
case types.TCHAN:
if t.ChanDir().CanRecv() {
p.markType(t.Elem())
}
case types.TMAP:
p.markType(t.Key())
p.markType(t.Elem())
case types.TSTRUCT:
for _, f := range t.FieldSlice() {
if types.IsExported(f.Sym.Name) || f.Embedded != 0 {
p.markType(f.Type)
}
}
case types.TFUNC:
for _, f := range t.Results().FieldSlice() {
p.markType(f.Type)
}
case types.TINTER:
for _, f := range t.AllMethods().Slice() {
if types.IsExported(f.Sym.Name) {
p.markType(f.Type)
}
}
}
}
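
A rough analogue of this walk can be written against the public go/types API (the compiler uses its internal types package, so the sketch below is an assumed approximation, not the exporter's code): starting from a type, visit only what a user of the package could reach -- element types, receivable channel elements, map keys and elements, exported or embedded struct fields, function results, and exported methods.

// Approximate standalone analogue of markType using the public go/types API.
package main

import (
	"fmt"
	"go/types"
)

func mark(t types.Type, seen map[types.Type]bool) {
	if t == nil || seen[t] {
		return
	}
	seen[t] = true
	switch t := t.(type) {
	case *types.Named:
		for i := 0; i < t.NumMethods(); i++ {
			if m := t.Method(i); m.Exported() {
				mark(m.Type(), seen)
			}
		}
		mark(t.Underlying(), seen)
	case *types.Pointer:
		mark(t.Elem(), seen)
	case *types.Slice:
		mark(t.Elem(), seen)
	case *types.Array:
		mark(t.Elem(), seen)
	case *types.Chan:
		if t.Dir() != types.SendOnly { // only receivable channels produce values
			mark(t.Elem(), seen)
		}
	case *types.Map:
		mark(t.Key(), seen)
		mark(t.Elem(), seen)
	case *types.Struct:
		for i := 0; i < t.NumFields(); i++ {
			if f := t.Field(i); f.Exported() || f.Embedded() {
				mark(f.Type(), seen)
			}
		}
	case *types.Signature:
		if res := t.Results(); res != nil {
			for i := 0; i < res.Len(); i++ {
				mark(res.At(i).Type(), seen)
			}
		}
	case *types.Interface:
		for i := 0; i < t.NumMethods(); i++ {
			if m := t.Method(i); m.Exported() {
				mark(m.Type(), seen)
			}
		}
	}
}

func main() {
	// struct{ F *chan int }: marking it should also reach *chan int, chan int, and int.
	ch := types.NewChan(types.SendRecv, types.Typ[types.Int])
	f := types.NewField(0, nil, "F", types.NewPointer(ch), false)
	st := types.NewStruct([]*types.Var{f}, nil)
	seen := map[types.Type]bool{}
	mark(st, seen)
	fmt.Println(len(seen), "reachable types") // 4 reachable types
}
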


@@ -32,7 +32,6 @@ import (
"log"
"os"
"runtime"
"sort"
)
func hidePanic() {
@@ -84,7 +83,7 @@ func Main(archInit func(*ssagen.ArchInfo)) {
types.BuiltinPkg.Prefix = "go.builtin" // not go%2ebuiltin
// pseudo-package, accessed by import "unsafe"
types.UnsafePkg = types.NewPkg("unsafe", "unsafe")
ir.Pkgs.Unsafe = types.NewPkg("unsafe", "unsafe")
// Pseudo-package that contains the compiler's builtin
// declarations for package runtime. These are declared in a
@@ -160,6 +159,9 @@ func Main(archInit func(*ssagen.ArchInfo)) {
dwarf.EnableLogging(base.Debug.DwarfInl != 0)
}
if base.Debug.SoftFloat != 0 {
if buildcfg.Experiment.RegabiArgs {
log.Fatalf("softfloat mode with GOEXPERIMENT=regabiargs not implemented ")
}
ssagen.Arch.SoftFloat = true
}
@@ -179,41 +181,23 @@ func Main(archInit func(*ssagen.ArchInfo)) {
typecheck.Target = new(ir.Package)
typecheck.NeedITab = func(t, iface *types.Type) { reflectdata.ITabAddr(t, iface) }
typecheck.NeedRuntimeType = reflectdata.NeedRuntimeType // TODO(rsc): TypeSym for lock?
base.AutogeneratedPos = makePos(src.NewFileBase("<autogenerated>", "<autogenerated>"), 1, 0)
typecheck.InitUniverse()
typecheck.InitRuntime()
// Parse and typecheck input.
noder.LoadPackage(flag.Args())
dwarfgen.RecordPackageName()
// Prepare for backend processing. This must happen before pkginit,
// because it generates itabs for initializing global variables.
ssagen.InitConfig()
// Build init task.
if initTask := pkginit.Task(); initTask != nil {
typecheck.Export(initTask)
}
// Stability quirk: sort top-level declarations, so we're not
// sensitive to the order that functions are added. In particular,
// the order that noder+typecheck add function closures is very
// subtle, and not important to reproduce.
//
// Note: This needs to happen after pkginit.Task, otherwise it risks
// changing the order in which top-level variables are initialized.
if base.Debug.UnifiedQuirks != 0 {
s := typecheck.Target.Decls
sort.SliceStable(s, func(i, j int) bool {
return s[i].Pos().Before(s[j].Pos())
})
}
// Eliminate some obviously dead code.
// Must happen after typechecking.
for _, n := range typecheck.Target.Decls {
@@ -268,11 +252,6 @@ func Main(archInit func(*ssagen.ArchInfo)) {
base.Timer.Start("fe", "escapes")
escape.Funcs(typecheck.Target.Decls)
// TODO(mdempsky): This is a hack. We need a proper, global work
// queue for scheduling function compilation so components don't
// need to adjust their behavior depending on when they're called.
reflectdata.AfterGlobalEscapeAnalysis = true
// Collect information for go:nowritebarrierrec
// checking. This must happen before transforming closures during Walk
// We'll do the final check after write barriers are
@@ -281,7 +260,17 @@ func Main(archInit func(*ssagen.ArchInfo)) {
ssagen.EnableNoWriteBarrierRecCheck()
}
// Prepare for SSA compilation.
// This must be before CompileITabs, because CompileITabs
// can trigger function compilation.
typecheck.InitRuntime()
ssagen.InitConfig()
// Just before compilation, compile itabs found on
// the right side of OCONVIFACE so that methods
// can be de-virtualized during compilation.
ir.CurFunc = nil
reflectdata.CompileITabs()
// Compile top level functions.
// Don't use range--walk can add functions to Target.Decls.
@@ -289,10 +278,6 @@ func Main(archInit func(*ssagen.ArchInfo)) {
fcount := int64(0)
for i := 0; i < len(typecheck.Target.Decls); i++ {
if fn, ok := typecheck.Target.Decls[i].(*ir.Func); ok {
// Don't try compiling dead hidden closure.
if fn.IsDeadcodeClosure() {
continue
}
enqueueFunc(fn)
fcount++
}


@@ -7,7 +7,6 @@ package gc
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/noder"
"cmd/compile/internal/objw"
"cmd/compile/internal/reflectdata"
"cmd/compile/internal/staticdata"
@@ -104,7 +103,7 @@ func finishArchiveEntry(bout *bio.Writer, start int64, name string) {
func dumpCompilerObj(bout *bio.Writer) {
printObjHeader(bout)
noder.WriteExports(bout)
dumpexport(bout)
}
func dumpdata() {
@@ -117,7 +116,7 @@ func dumpdata() {
addsignats(typecheck.Target.Externs)
reflectdata.WriteRuntimeTypes()
reflectdata.WriteTabs()
numPTabs := reflectdata.CountPTabs()
numPTabs, numITabs := reflectdata.CountTabs()
reflectdata.WriteImportStrings()
reflectdata.WriteBasicTypes()
dumpembeds()
@@ -158,10 +157,13 @@ func dumpdata() {
if numExports != len(typecheck.Target.Exports) {
base.Fatalf("Target.Exports changed after compile functions loop")
}
newNumPTabs := reflectdata.CountPTabs()
newNumPTabs, newNumITabs := reflectdata.CountTabs()
if newNumPTabs != numPTabs {
base.Fatalf("ptabs changed after compile functions loop")
}
if newNumITabs != numITabs {
base.Fatalf("itabs changed after compile functions loop")
}
}
func dumpLinkerObj(bout *bio.Writer) {


@@ -1,3 +1,4 @@
// UNREVIEWED
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.


@@ -1,3 +1,4 @@
// UNREVIEWED
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
@@ -155,7 +156,7 @@ func Import(packages map[string]*types2.Package, path, srcDir string, lookup fun
// binary export format starts with a 'c', 'd', or 'v'
// (from "version"). Select appropriate importer.
if len(data) > 0 && data[0] == 'i' {
pkg, err = ImportData(packages, string(data[1:]), id)
_, pkg, err = iImportData(packages, data[1:], id)
} else {
err = fmt.Errorf("import %q: old binary export format no longer supported (recompile library)", path)
}


@@ -1,3 +1,4 @@
// UNREVIEWED
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
@@ -9,6 +10,7 @@ import (
"cmd/compile/internal/types2"
"fmt"
"internal/testenv"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
@@ -62,7 +64,7 @@ const maxTime = 30 * time.Second
func testDir(t *testing.T, dir string, endTime time.Time) (nimports int) {
dirname := filepath.Join(runtime.GOROOT(), "pkg", runtime.GOOS+"_"+runtime.GOARCH, dir)
list, err := os.ReadDir(dirname)
list, err := ioutil.ReadDir(dirname)
if err != nil {
t.Fatalf("testDir(%s): %s", dirname, err)
}
@@ -90,7 +92,7 @@ func testDir(t *testing.T, dir string, endTime time.Time) (nimports int) {
}
func mktmpdir(t *testing.T) string {
tmpdir, err := os.MkdirTemp("", "gcimporter_test")
tmpdir, err := ioutil.TempDir("", "gcimporter_test")
if err != nil {
t.Fatal("mktmpdir:", err)
}
@@ -140,7 +142,7 @@ func TestVersionHandling(t *testing.T) {
}
const dir = "./testdata/versions"
list, err := os.ReadDir(dir)
list, err := ioutil.ReadDir(dir)
if err != nil {
t.Fatal(err)
}
@@ -193,7 +195,7 @@ func TestVersionHandling(t *testing.T) {
// create file with corrupted export data
// 1) read file
data, err := os.ReadFile(filepath.Join(dir, name))
data, err := ioutil.ReadFile(filepath.Join(dir, name))
if err != nil {
t.Fatal(err)
}
@@ -210,7 +212,7 @@ func TestVersionHandling(t *testing.T) {
// 4) write the file
pkgpath += "_corrupted"
filename := filepath.Join(corruptdir, pkgpath) + ".a"
os.WriteFile(filename, data, 0666)
ioutil.WriteFile(filename, data, 0666)
// test that importing the corrupted file results in an error
_, err = Import(make(map[string]*types2.Package), pkgpath, corruptdir, nil)
@@ -259,7 +261,8 @@ var importedObjectTests = []struct {
{"io.Reader", "type Reader interface{Read(p []byte) (n int, err error)}"},
{"io.ReadWriter", "type ReadWriter interface{Reader; Writer}"},
{"go/ast.Node", "type Node interface{End() go/token.Pos; Pos() go/token.Pos}"},
{"go/types.Type", "type Type interface{String() string; Underlying() Type}"},
// go/types.Type has grown much larger - excluded for now
// {"go/types.Type", "type Type interface{String() string; Underlying() Type}"},
}
func TestImportedTypes(t *testing.T) {
@@ -454,17 +457,17 @@ func TestIssue13898(t *testing.T) {
t.Fatal("go/types not found")
}
// look for go/types.Object type
// look for go/types2.Object type
obj := lookupObj(t, goTypesPkg.Scope(), "Object")
typ, ok := obj.Type().(*types2.Named)
if !ok {
t.Fatalf("go/types.Object type is %v; wanted named type", typ)
t.Fatalf("go/types2.Object type is %v; wanted named type", typ)
}
// lookup go/types.Object.Pkg method
// lookup go/types2.Object.Pkg method
m, index, indirect := types2.LookupFieldOrMethod(typ, false, nil, "Pkg")
if m == nil {
t.Fatalf("go/types.Object.Pkg not found (index = %v, indirect = %v)", index, indirect)
t.Fatalf("go/types2.Object.Pkg not found (index = %v, indirect = %v)", index, indirect)
}
// the method must belong to go/types

View File

@@ -1,3 +1,4 @@
// UNREVIEWED
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
@@ -8,6 +9,7 @@
package importer
import (
"bytes"
"cmd/compile/internal/syntax"
"cmd/compile/internal/types2"
"encoding/binary"
@@ -17,11 +19,10 @@ import (
"io"
"math/big"
"sort"
"strings"
)
type intReader struct {
*strings.Reader
*bytes.Reader
path string
}
@@ -41,21 +42,6 @@ func (r *intReader) uint64() uint64 {
return i
}
// Keep this in sync with constants in iexport.go.
const (
iexportVersionGo1_11 = 0
iexportVersionPosCol = 1
// TODO: before release, change this back to 2.
iexportVersionGenerics = iexportVersionPosCol
iexportVersionCurrent = iexportVersionGenerics
)
type ident struct {
pkg string
name string
}
const predeclReserved = 32
type itag uint64
@@ -71,9 +57,6 @@ const (
signatureType
structType
interfaceType
typeParamType
instType
unionType
)
const io_SeekCurrent = 1 // io.SeekCurrent (not defined in Go 1.4)
@@ -82,8 +65,8 @@ const io_SeekCurrent = 1 // io.SeekCurrent (not defined in Go 1.4)
// and returns the number of bytes consumed and a reference to the package.
// If the export data version is not recognized or the format is otherwise
// compromised, an error is returned.
func ImportData(imports map[string]*types2.Package, data, path string) (pkg *types2.Package, err error) {
const currentVersion = iexportVersionCurrent
func iImportData(imports map[string]*types2.Package, data []byte, path string) (_ int, pkg *types2.Package, err error) {
const currentVersion = 1
version := int64(-1)
defer func() {
if e := recover(); e != nil {
@@ -95,17 +78,13 @@ func ImportData(imports map[string]*types2.Package, data, path string) (pkg *typ
}
}()
r := &intReader{strings.NewReader(data), path}
r := &intReader{bytes.NewReader(data), path}
version = int64(r.uint64())
switch version {
case /* iexportVersionGenerics, */ iexportVersionPosCol, iexportVersionGo1_11:
case currentVersion, 0:
default:
if version > iexportVersionGenerics {
errorf("unstable iexport format version %d, just rebuild compiler and std library", version)
} else {
errorf("unknown iexport format version %d", version)
}
errorf("unknown iexport format version %d", version)
}
sLen := int64(r.uint64())
@@ -117,20 +96,16 @@ func ImportData(imports map[string]*types2.Package, data, path string) (pkg *typ
r.Seek(sLen+dLen, io_SeekCurrent)
p := iimporter{
exportVersion: version,
ipath: path,
version: int(version),
ipath: path,
version: int(version),
stringData: stringData,
pkgCache: make(map[uint64]*types2.Package),
posBaseCache: make(map[uint64]*syntax.PosBase),
stringData: stringData,
stringCache: make(map[uint64]string),
pkgCache: make(map[uint64]*types2.Package),
declData: declData,
pkgIndex: make(map[*types2.Package]map[string]uint64),
typCache: make(map[uint64]types2.Type),
// Separate map for typeparams, keyed by their package and unique
// name (name with subscript).
tparamIndex: make(map[ident]types2.Type),
}
for i, pt := range predeclared {
@@ -142,22 +117,17 @@ func ImportData(imports map[string]*types2.Package, data, path string) (pkg *typ
pkgPathOff := r.uint64()
pkgPath := p.stringAt(pkgPathOff)
pkgName := p.stringAt(r.uint64())
pkgHeight := int(r.uint64())
_ = r.uint64() // package height; unused by go/types
if pkgPath == "" {
pkgPath = path
}
pkg := imports[pkgPath]
if pkg == nil {
pkg = types2.NewPackageHeight(pkgPath, pkgName, pkgHeight)
pkg = types2.NewPackage(pkgPath, pkgName)
imports[pkgPath] = pkg
} else {
if pkg.Name() != pkgName {
errorf("conflicting names %s and %s for package %q", pkg.Name(), pkgName, path)
}
if pkg.Height() != pkgHeight {
errorf("conflicting heights %v and %v for package %q", pkg.Height(), pkgHeight, path)
}
} else if pkg.Name() != pkgName {
errorf("conflicting names %s and %s for package %q", pkg.Name(), pkgName, path)
}
p.pkgCache[pkgPathOff] = pkg
@@ -183,6 +153,10 @@ func ImportData(imports map[string]*types2.Package, data, path string) (pkg *typ
p.doDecl(localpkg, name)
}
for _, typ := range p.interfaceList {
typ.Complete()
}
// record all referenced packages as imports
list := append(([]*types2.Package)(nil), pkgList[1:]...)
sort.Sort(byPath(list))
@@ -191,22 +165,21 @@ func ImportData(imports map[string]*types2.Package, data, path string) (pkg *typ
// package was imported completely and without errors
localpkg.MarkComplete()
return localpkg, nil
consumed, _ := r.Seek(0, io_SeekCurrent)
return int(consumed), localpkg, nil
}
type iimporter struct {
exportVersion int64
ipath string
version int
ipath string
version int
stringData string
pkgCache map[uint64]*types2.Package
posBaseCache map[uint64]*syntax.PosBase
stringData []byte
stringCache map[uint64]string
pkgCache map[uint64]*types2.Package
declData string
pkgIndex map[*types2.Package]map[string]uint64
typCache map[uint64]types2.Type
tparamIndex map[ident]types2.Type
declData []byte
pkgIndex map[*types2.Package]map[string]uint64
typCache map[uint64]types2.Type
interfaceList []*types2.Interface
}
@@ -226,21 +199,24 @@ func (p *iimporter) doDecl(pkg *types2.Package, name string) {
// Reader.Reset is not available in Go 1.4.
// Use bytes.NewReader for now.
// r.declReader.Reset(p.declData[off:])
r.declReader = *strings.NewReader(p.declData[off:])
r.declReader = *bytes.NewReader(p.declData[off:])
r.obj(name)
}
func (p *iimporter) stringAt(off uint64) string {
var x [binary.MaxVarintLen64]byte
n := copy(x[:], p.stringData[off:])
if s, ok := p.stringCache[off]; ok {
return s
}
slen, n := binary.Uvarint(x[:n])
slen, n := binary.Uvarint(p.stringData[off:])
if n <= 0 {
errorf("varint failed")
}
spos := off + uint64(n)
return p.stringData[spos : spos+slen]
s := string(p.stringData[spos : spos+slen])
p.stringCache[off] = s
return s
}
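
The string section being indexed here is just a sequence of uvarint-length-prefixed byte strings; the following standalone sketch (an assumed simplification with made-up data, not importer code) shows the same decode step using encoding/binary.

// Standalone decode of a uvarint-length-prefixed string (illustration only).
package main

import (
	"encoding/binary"
	"fmt"
)

func stringAt(data []byte, off uint64) string {
	slen, n := binary.Uvarint(data[off:]) // length prefix
	if n <= 0 {
		panic("varint failed")
	}
	spos := off + uint64(n)
	return string(data[spos : spos+slen])
}

func main() {
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len("hello")))
	data := append(lenBuf[:n], "hello"...)
	fmt.Println(stringAt(data, 0)) // hello
}
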
func (p *iimporter) pkgAt(off uint64) *types2.Package {
@@ -252,16 +228,6 @@ func (p *iimporter) pkgAt(off uint64) *types2.Package {
return nil
}
func (p *iimporter) posBaseAt(off uint64) *syntax.PosBase {
if posBase, ok := p.posBaseCache[off]; ok {
return posBase
}
filename := p.stringAt(off)
posBase := syntax.NewTrimmedFileBase(filename, true)
p.posBaseCache[off] = posBase
return posBase
}
func (p *iimporter) typAt(off uint64, base *types2.Named) types2.Type {
if t, ok := p.typCache[off]; ok && (base == nil || !isInterface(t)) {
return t
@@ -275,7 +241,7 @@ func (p *iimporter) typAt(off uint64, base *types2.Named) types2.Type {
// Reader.Reset is not available in Go 1.4.
// Use bytes.NewReader for now.
// r.declReader.Reset(p.declData[off-predeclReserved:])
r.declReader = *strings.NewReader(p.declData[off-predeclReserved:])
r.declReader = *bytes.NewReader(p.declData[off-predeclReserved:])
t := r.doType(base)
if base == nil || !isInterface(t) {
@@ -285,12 +251,12 @@ func (p *iimporter) typAt(off uint64, base *types2.Named) types2.Type {
}
type importReader struct {
p *iimporter
declReader strings.Reader
currPkg *types2.Package
prevPosBase *syntax.PosBase
prevLine int64
prevColumn int64
p *iimporter
declReader bytes.Reader
currPkg *types2.Package
prevFile string
prevLine int64
prevColumn int64
}
func (r *importReader) obj(name string) {
@@ -308,28 +274,16 @@ func (r *importReader) obj(name string) {
r.declare(types2.NewConst(pos, r.currPkg, name, typ, val))
case 'F', 'G':
var tparams []*types2.TypeParam
if tag == 'G' {
tparams = r.tparamList()
}
case 'F':
sig := r.signature(nil)
sig.SetTParams(tparams)
r.declare(types2.NewFunc(pos, r.currPkg, name, sig))
case 'T', 'U':
var tparams []*types2.TypeParam
if tag == 'U' {
tparams = r.tparamList()
}
case 'T':
// Types can be recursive. We need to setup a stub
// declaration before recursing.
obj := types2.NewTypeName(pos, r.currPkg, name, nil)
named := types2.NewNamed(obj, nil, nil)
if tag == 'U' {
named.SetTParams(tparams)
}
r.declare(obj)
underlying := r.p.typAt(r.uint64(), named).Underlying()
@@ -342,43 +296,10 @@ func (r *importReader) obj(name string) {
recv := r.param()
msig := r.signature(recv)
// If the receiver has any targs, set those as the
// rparams of the method (since those are the
// typeparams being used in the method sig/body).
targs := baseType(msig.Recv().Type()).TArgs()
if targs.Len() > 0 {
rparams := make([]*types2.TypeParam, targs.Len())
for i := range rparams {
rparams[i] = types2.AsTypeParam(targs.At(i))
}
msig.SetRParams(rparams)
}
named.AddMethod(types2.NewFunc(mpos, r.currPkg, mname, msig))
}
}
case 'P':
// We need to "declare" a typeparam in order to have a name that
// can be referenced recursively (if needed) in the type param's
// bound.
if r.p.exportVersion < iexportVersionGenerics {
errorf("unexpected type param type")
}
name0, sub := parseSubscript(name)
tn := types2.NewTypeName(pos, r.currPkg, name0, nil)
t := (*types2.Checker)(nil).NewTypeParam(tn, nil)
if sub == 0 {
errorf("missing subscript")
}
t.SetId(sub)
// To handle recursive references to the typeparam within its
// bound, save the partial type in tparamIndex before reading the bounds.
id := ident{r.currPkg.Name(), name}
r.p.tparamIndex[id] = t
t.SetConstraint(r.typ())
case 'V':
typ := r.typ()
@@ -518,11 +439,12 @@ func (r *importReader) pos() syntax.Pos {
r.posv0()
}
if (r.prevPosBase == nil || r.prevPosBase.Filename() == "") && r.prevLine == 0 && r.prevColumn == 0 {
if r.prevFile == "" && r.prevLine == 0 && r.prevColumn == 0 {
return syntax.Pos{}
}
return syntax.MakePos(r.prevPosBase, uint(r.prevLine), uint(r.prevColumn))
// TODO(gri) fix this
// return r.p.fake.pos(r.prevFile, int(r.prevLine), int(r.prevColumn))
return syntax.Pos{}
}
func (r *importReader) posv0() {
@@ -532,7 +454,7 @@ func (r *importReader) posv0() {
} else if l := r.int64(); l == -1 {
r.prevLine += deltaNewFile
} else {
r.prevPosBase = r.posBase()
r.prevFile = r.string()
r.prevLine = l
}
}
@@ -544,7 +466,7 @@ func (r *importReader) posv1() {
delta = r.int64()
r.prevLine += delta >> 1
if delta&1 != 0 {
r.prevPosBase = r.posBase()
r.prevFile = r.string()
}
}
}
@@ -558,9 +480,8 @@ func isInterface(t types2.Type) bool {
return ok
}
func (r *importReader) pkg() *types2.Package { return r.p.pkgAt(r.uint64()) }
func (r *importReader) string() string { return r.p.stringAt(r.uint64()) }
func (r *importReader) posBase() *syntax.PosBase { return r.p.posBaseAt(r.uint64()) }
func (r *importReader) pkg() *types2.Package { return r.p.pkgAt(r.uint64()) }
func (r *importReader) string() string { return r.p.stringAt(r.uint64()) }
func (r *importReader) doType(base *types2.Named) types2.Type {
switch k := r.kind(); k {
@@ -633,49 +554,6 @@ func (r *importReader) doType(base *types2.Named) types2.Type {
typ := types2.NewInterfaceType(methods, embeddeds)
r.p.interfaceList = append(r.p.interfaceList, typ)
return typ
case typeParamType:
if r.p.exportVersion < iexportVersionGenerics {
errorf("unexpected type param type")
}
pkg, name := r.qualifiedIdent()
id := ident{pkg.Name(), name}
if t, ok := r.p.tparamIndex[id]; ok {
// We're already in the process of importing this typeparam.
return t
}
// Otherwise, import the definition of the typeparam now.
r.p.doDecl(pkg, name)
return r.p.tparamIndex[id]
case instType:
if r.p.exportVersion < iexportVersionGenerics {
errorf("unexpected instantiation type")
}
// pos does not matter for instances: they are positioned on the original
// type.
_ = r.pos()
len := r.uint64()
targs := make([]types2.Type, len)
for i := range targs {
targs[i] = r.typ()
}
baseType := r.typ()
// The imported instantiated type doesn't include any methods, so
// we must always use the methods of the base (orig) type.
// TODO provide a non-nil *Checker
t, _ := types2.Instantiate(nil, baseType, targs, false)
return t
case unionType:
if r.p.exportVersion < iexportVersionGenerics {
errorf("unexpected instantiation type")
}
terms := make([]*types2.Term, r.uint64())
for i := range terms {
terms[i] = types2.NewTerm(r.bool(), r.typ())
}
return types2.NewUnion(terms)
}
}
@@ -690,19 +568,6 @@ func (r *importReader) signature(recv *types2.Var) *types2.Signature {
return types2.NewSignature(recv, params, results, variadic)
}
func (r *importReader) tparamList() []*types2.TypeParam {
n := r.uint64()
if n == 0 {
return nil
}
xs := make([]*types2.TypeParam, n)
for i := range xs {
typ := r.typ()
xs[i] = types2.AsTypeParam(typ)
}
return xs
}
func (r *importReader) paramList() *types2.Tuple {
xs := make([]*types2.Var, r.uint64())
for i := range xs {
@@ -745,33 +610,3 @@ func (r *importReader) byte() byte {
}
return x
}
func baseType(typ types2.Type) *types2.Named {
// pointer receivers are never types2.Named types
if p, _ := typ.(*types2.Pointer); p != nil {
typ = p.Elem()
}
// receiver base types are always (possibly generic) types2.Named types
n, _ := typ.(*types2.Named)
return n
}
func parseSubscript(name string) (string, uint64) {
// Extract the subscript value from the type param name. We export
// and import the subscript value, so that all type params have
// unique names.
sub := uint64(0)
startsub := -1
for i, r := range name {
if '₀' <= r && r < '₀'+10 {
if startsub == -1 {
startsub = i
}
sub = sub*10 + uint64(r-'₀')
}
}
if startsub >= 0 {
name = name[:startsub]
}
return name, sub
}
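The helper above strips trailing Unicode subscript digits ('₀' through '₉') from a type parameter name and decodes them as a base-10 subscript. A minimal standalone sketch of the same parsing, runnable outside the compiler (editorial illustration; splitSubscript is a hypothetical name, not part of this diff):

package main

import "fmt"

// splitSubscript mirrors parseSubscript above: it strips the Unicode
// subscript digits ('₀'..'₉') from a type parameter name and decodes
// them as a base-10 value, so "T₁₂" yields ("T", 12).
func splitSubscript(name string) (string, uint64) {
	sub := uint64(0)
	start := -1
	for i, r := range name {
		if '₀' <= r && r < '₀'+10 {
			if start == -1 {
				start = i
			}
			sub = sub*10 + uint64(r-'₀')
		}
	}
	if start >= 0 {
		name = name[:start]
	}
	return name, sub
}

func main() {
	base, sub := splitSubscript("T₁₂")
	fmt.Println(base, sub) // T 12
}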

View File

@@ -1,3 +1,4 @@
// UNREVIEWED
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
@@ -119,9 +120,6 @@ var predeclared = []types2.Type{
// used internally by gc; never used by this package or in .a files
anyType{},
// comparable
types2.Universe.Lookup("comparable").Type(),
}
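For context, the "comparable" entry above is the universe-scope constraint that the generics-aware importer has to recognize. At the source level it only matters together with type parameters, for example (editorial sketch; assumes a generics-capable toolchain, which go1.17 itself does not ship):

package main

import "fmt"

// index returns the position of target in xs, or -1. The comparable
// constraint is what makes the == on the type parameter legal.
func index[T comparable](xs []T, target T) int {
	for i, x := range xs {
		if x == target {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(index([]string{"a", "b", "c"}, "b")) // 1
}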
type anyType struct{}

View File

@@ -179,8 +179,6 @@ func CanInline(fn *ir.Func) {
Cost: inlineMaxBudget - visitor.budget,
Dcl: pruneUnusedAutos(n.Defn.(*ir.Func).Dcl, &visitor),
Body: inlcopylist(fn.Body),
CanDelayResults: canDelayResults(fn),
}
if base.Flag.LowerM > 1 {
@@ -193,36 +191,60 @@ func CanInline(fn *ir.Func) {
}
}
// canDelayResults reports whether inlined calls to fn can delay
// declaring the result parameter until the "return" statement.
func canDelayResults(fn *ir.Func) bool {
// We can delay declaring+initializing result parameters if:
// (1) there's exactly one "return" statement in the inlined function;
// (2) it's not an empty return statement (#44355); and
// (3) the result parameters aren't named.
// Inline_Flood marks n's inline body for export and recursively ensures
// all called functions are marked too.
func Inline_Flood(n *ir.Name, exportsym func(*ir.Name)) {
if n == nil {
return
}
if n.Op() != ir.ONAME || n.Class != ir.PFUNC {
base.Fatalf("Inline_Flood: unexpected %v, %v, %v", n, n.Op(), n.Class)
}
fn := n.Func
if fn == nil {
base.Fatalf("Inline_Flood: missing Func on %v", n)
}
if fn.Inl == nil {
return
}
nreturns := 0
ir.VisitList(fn.Body, func(n ir.Node) {
if n, ok := n.(*ir.ReturnStmt); ok {
nreturns++
if len(n.Results) == 0 {
nreturns++ // empty return statement (case 2)
if fn.ExportInline() {
return
}
fn.SetExportInline(true)
typecheck.ImportedBody(fn)
var doFlood func(n ir.Node)
doFlood = func(n ir.Node) {
switch n.Op() {
case ir.OMETHEXPR, ir.ODOTMETH:
Inline_Flood(ir.MethodExprName(n), exportsym)
case ir.ONAME:
n := n.(*ir.Name)
switch n.Class {
case ir.PFUNC:
Inline_Flood(n, exportsym)
exportsym(n)
case ir.PEXTERN:
exportsym(n)
}
}
})
if nreturns != 1 {
return false // not exactly one return statement (case 1)
}
// temporaries for return values.
for _, param := range fn.Type().Results().FieldSlice() {
if sym := types.OrigSym(param.Sym); sym != nil && !sym.IsBlank() {
return false // found a named result parameter (case 3)
case ir.OCALLPART:
// Okay, because we don't yet inline indirect
// calls to method values.
case ir.OCLOSURE:
// VisitList doesn't visit closure bodies, so force a
// recursive call to VisitList on the body of the closure.
ir.VisitList(n.(*ir.ClosureExpr).Func.Body, doFlood)
}
}
return true
// Recursively identify all referenced functions for
// reexport. We want to include even non-called functions,
// because after inlining they might be callable.
ir.VisitList(ir.Nodes(fn.Inl.Body), doFlood)
}
// hairyVisitor visits a function body to determine its inlining
@@ -273,19 +295,6 @@ func (v *hairyVisitor) doNode(n ir.Node) bool {
}
}
}
if n.X.Op() == ir.OMETHEXPR {
if meth := ir.MethodExprName(n.X); meth != nil {
fn := meth.Func
if fn != nil && types.IsRuntimePkg(fn.Sym().Pkg) && fn.Sym().Name == "heapBits.nextArena" {
// Special case: explicitly allow
// mid-stack inlining of
// runtime.heapBits.next even though
// it calls slow-path
// runtime.heapBits.nextArena.
break
}
}
}
if ir.IsIntrinsicCall(n) {
// Treat like any other node.
@@ -300,8 +309,28 @@ func (v *hairyVisitor) doNode(n ir.Node) bool {
// Call cost for non-leaf inlining.
v.budget -= v.extraCallCost
// Call is okay if inlinable and we have the budget for the body.
case ir.OCALLMETH:
base.FatalfAt(n.Pos(), "OCALLMETH missed by typecheck")
n := n.(*ir.CallExpr)
t := n.X.Type()
if t == nil {
base.Fatalf("no function type for [%p] %+v\n", n.X, n.X)
}
fn := ir.MethodExprName(n.X).Func
if types.IsRuntimePkg(fn.Sym().Pkg) && fn.Sym().Name == "heapBits.nextArena" {
// Special case: explicitly allow
// mid-stack inlining of
// runtime.heapBits.next even though
// it calls slow-path
// runtime.heapBits.nextArena.
break
}
if fn.Inl != nil {
v.budget -= fn.Inl.Cost
break
}
// Call cost for non-leaf inlining.
v.budget -= v.extraCallCost
// Things that are too hairy, irrespective of the budget
case ir.OCALL, ir.OCALLINTER:
@@ -416,7 +445,7 @@ func (v *hairyVisitor) doNode(n ir.Node) bool {
// and don't charge for the OBLOCK itself. The ++ undoes the -- below.
v.budget++
case ir.OMETHVALUE, ir.OSLICELIT:
case ir.OCALLPART, ir.OSLICELIT:
v.budget-- // Hack for toolstash -cmp.
case ir.OMETHEXPR:
@@ -470,6 +499,9 @@ func inlcopy(n ir.Node) ir.Node {
// x.Func.Body for iexport and local inlining.
oldfn := x.Func
newfn := ir.NewFunc(oldfn.Pos())
if oldfn.ClosureCalled() {
newfn.SetClosureCalled(true)
}
m.(*ir.ClosureExpr).Func = newfn
newfn.Nname = ir.NewNameAt(oldfn.Nname.Pos(), oldfn.Nname.Sym())
// XXX OK to share fn.Type() ??
@@ -512,6 +544,37 @@ func InlineCalls(fn *ir.Func) {
ir.CurFunc = savefn
}
// Turn an OINLCALL into a statement.
func inlconv2stmt(inlcall *ir.InlinedCallExpr) ir.Node {
n := ir.NewBlockStmt(inlcall.Pos(), nil)
n.List = inlcall.Init()
n.List.Append(inlcall.Body.Take()...)
return n
}
// Turn an OINLCALL into a single valued expression.
// The result of inlconv2expr MUST be assigned back to n, e.g.
// n.Left = inlconv2expr(n.Left)
func inlconv2expr(n *ir.InlinedCallExpr) ir.Node {
r := n.ReturnVars[0]
return ir.InitExpr(append(n.Init(), n.Body...), r)
}
// Turn the rlist (with the return values) of the OINLCALL n
// into an expression list, lumping the ninit and body (which
// contain the inlined statements) onto the first list element
// so that order is preserved. Used in return, oas2func and call
// statements.
func inlconv2list(n *ir.InlinedCallExpr) []ir.Node {
if n.Op() != ir.OINLCALL || len(n.ReturnVars) == 0 {
base.Fatalf("inlconv2list %+v\n", n)
}
s := n.ReturnVars
s[0] = ir.InitExpr(append(n.Init(), n.Body...), s[0])
return s
}
// inlnode recurses over the tree to find inlineable calls, which will
// be turned into OINLCALLs by mkinlcall. When the recursion comes
// back up, it will examine left, right, list, rlist, ninit, ntest, nincr,
@@ -534,9 +597,7 @@ func inlnode(n ir.Node, maxCost int32, inlMap map[*ir.Func]bool, edit func(ir.No
case ir.ODEFER, ir.OGO:
n := n.(*ir.GoDeferStmt)
switch call := n.Call; call.Op() {
case ir.OCALLMETH:
base.FatalfAt(call.Pos(), "OCALLMETH missed by typecheck")
case ir.OCALLFUNC:
case ir.OCALLFUNC, ir.OCALLMETH:
call := call.(*ir.CallExpr)
call.NoInline = true
}
@@ -546,18 +607,11 @@ func inlnode(n ir.Node, maxCost int32, inlMap map[*ir.Func]bool, edit func(ir.No
case ir.OCLOSURE:
return n
case ir.OCALLMETH:
base.FatalfAt(n.Pos(), "OCALLMETH missed by typecheck")
case ir.OCALLFUNC:
// Prevent inlining some reflect.Value methods when using checkptr,
// even when package reflect was compiled without it (#35073).
n := n.(*ir.CallExpr)
if n.X.Op() == ir.OMETHEXPR {
// Prevent inlining some reflect.Value methods when using checkptr,
// even when package reflect was compiled without it (#35073).
if meth := ir.MethodExprName(n.X); meth != nil {
s := meth.Sym()
if base.Debug.Checkptr != 0 && types.IsReflectPkg(s.Pkg) && (s.Name == "Value.UnsafeAddr" || s.Name == "Value.Pointer") {
return n
}
}
if s := ir.MethodExprName(n.X).Sym(); base.Debug.Checkptr != 0 && types.IsReflectPkg(s.Pkg) && (s.Name == "Value.UnsafeAddr" || s.Name == "Value.Pointer") {
return n
}
}
@@ -565,18 +619,31 @@ func inlnode(n ir.Node, maxCost int32, inlMap map[*ir.Func]bool, edit func(ir.No
ir.EditChildren(n, edit)
if as := n; as.Op() == ir.OAS2FUNC {
as := as.(*ir.AssignListStmt)
if as.Rhs[0].Op() == ir.OINLCALL {
as.Rhs = inlconv2list(as.Rhs[0].(*ir.InlinedCallExpr))
as.SetOp(ir.OAS2)
as.SetTypecheck(0)
n = typecheck.Stmt(as)
}
}
// with all the branches out of the way, it is now time to
// transmogrify this node itself unless inhibited by the
// switch at the top of this function.
switch n.Op() {
case ir.OCALLMETH:
base.FatalfAt(n.Pos(), "OCALLMETH missed by typecheck")
case ir.OCALLFUNC:
call := n.(*ir.CallExpr)
if call.NoInline {
break
case ir.OCALLFUNC, ir.OCALLMETH:
n := n.(*ir.CallExpr)
if n.NoInline {
return n
}
}
var call *ir.CallExpr
switch n.Op() {
case ir.OCALLFUNC:
call = n.(*ir.CallExpr)
if base.Flag.LowerM > 3 {
fmt.Printf("%v:call to func %+v\n", ir.Line(n), call.X)
}
@@ -586,10 +653,38 @@ func inlnode(n ir.Node, maxCost int32, inlMap map[*ir.Func]bool, edit func(ir.No
if fn := inlCallee(call.X); fn != nil && fn.Inl != nil {
n = mkinlcall(call, fn, maxCost, inlMap, edit)
}
case ir.OCALLMETH:
call = n.(*ir.CallExpr)
if base.Flag.LowerM > 3 {
fmt.Printf("%v:call to meth %v\n", ir.Line(n), call.X.(*ir.SelectorExpr).Sel)
}
// typecheck should have resolved ODOTMETH->type, whose nname points to the actual function.
if call.X.Type() == nil {
base.Fatalf("no function type for [%p] %+v\n", call.X, call.X)
}
n = mkinlcall(call, ir.MethodExprName(call.X).Func, maxCost, inlMap, edit)
}
base.Pos = lno
if n.Op() == ir.OINLCALL {
ic := n.(*ir.InlinedCallExpr)
switch call.Use {
default:
ir.Dump("call", call)
base.Fatalf("call missing use")
case ir.CallUseExpr:
n = inlconv2expr(ic)
case ir.CallUseStmt:
n = inlconv2stmt(ic)
case ir.CallUseList:
// leave for caller to convert
}
}
return n
}
@@ -645,12 +740,7 @@ var inlgen int
// when producing output for debugging the compiler itself.
var SSADumpInline = func(*ir.Func) {}
// NewInline allows the inliner implementation to be overridden.
// If it returns nil, the legacy inliner will handle this call
// instead.
var NewInline = func(call *ir.CallExpr, fn *ir.Func, inlIndex int) *ir.InlinedCallExpr { return nil }
// If n is a OCALLFUNC node, and fn is an ONAME node for a
// If n is a call node (OCALLFUNC or OCALLMETH), and fn is an ONAME node for a
// function with an inlinable body, return an OINLCALL node that can replace n.
// The returned node's Ninit has the parameter assignments, the Nbody is the
// inlined function body, and (List, Rlist) contain the (input, output)
@@ -703,90 +793,38 @@ func mkinlcall(n *ir.CallExpr, fn *ir.Func, maxCost int32, inlMap map[*ir.Func]b
defer func() {
inlMap[fn] = false
}()
typecheck.FixVariadicCall(n)
parent := base.Ctxt.PosTable.Pos(n.Pos()).Base().InliningIndex()
sym := fn.Linksym()
inlIndex := base.Ctxt.InlTree.Add(parent, n.Pos(), sym)
if base.Flag.GenDwarfInl > 0 {
if !sym.WasInlined() {
base.Ctxt.DwFixups.SetPrecursorFunc(sym, fn)
sym.Set(obj.AttrWasInlined, true)
}
if base.Debug.TypecheckInl == 0 {
typecheck.ImportedBody(fn)
}
if base.Flag.LowerM != 0 {
// We have a function node, and it has an inlineable body.
if base.Flag.LowerM > 1 {
fmt.Printf("%v: inlining call to %v %v { %v }\n", ir.Line(n), fn.Sym(), fn.Type(), ir.Nodes(fn.Inl.Body))
} else if base.Flag.LowerM != 0 {
fmt.Printf("%v: inlining call to %v\n", ir.Line(n), fn)
}
if base.Flag.LowerM > 2 {
fmt.Printf("%v: Before inlining: %+v\n", ir.Line(n), n)
}
res := NewInline(n, fn, inlIndex)
if res == nil {
res = oldInline(n, fn, inlIndex)
}
// transitive inlining
// might be nice to do this before exporting the body,
// but can't emit the body with inlining expanded.
// instead we emit the things that the body needs
// and each use must redo the inlining.
// luckily these are small.
ir.EditChildren(res, edit)
if base.Flag.LowerM > 2 {
fmt.Printf("%v: After inlining %+v\n\n", ir.Line(res), res)
}
return res
}
// CalleeEffects appends any side effects from evaluating callee to init.
func CalleeEffects(init *ir.Nodes, callee ir.Node) {
for {
switch callee.Op() {
case ir.ONAME, ir.OCLOSURE, ir.OMETHEXPR:
return // done
case ir.OCONVNOP:
conv := callee.(*ir.ConvExpr)
init.Append(ir.TakeInit(conv)...)
callee = conv.X
case ir.OINLCALL:
ic := callee.(*ir.InlinedCallExpr)
init.Append(ir.TakeInit(ic)...)
init.Append(ic.Body.Take()...)
callee = ic.SingleResult()
default:
base.FatalfAt(callee.Pos(), "unexpected callee expression: %v", callee)
}
}
}
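CalleeEffects exists because the callee expression itself can carry observable side effects that must survive once the call is replaced by an inlined body (the #42703 concern referenced below). A runnable illustration of that call shape (editorial sketch, not compiler code):

package main

import "fmt"

// pick has a visible side effect and returns the function that is
// actually called. If pick itself is inlined, that side effect ends
// up inside the callee expression of the outer call, which is the
// kind of thing CalleeEffects hoists into the init list.
func pick() func(int) int {
	fmt.Println("evaluating callee expression")
	return func(x int) int { return x * 2 }
}

func main() {
	fmt.Println(pick()(21)) // prints the side effect, then 42
}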
// oldInline creates an InlinedCallExpr to replace the given call
// expression. fn is the callee function to be inlined. inlIndex is
// the inlining tree position index, for use with src.NewInliningBase
// when rewriting positions.
func oldInline(call *ir.CallExpr, fn *ir.Func, inlIndex int) *ir.InlinedCallExpr {
if base.Debug.TypecheckInl == 0 {
typecheck.ImportedBody(fn)
}
SSADumpInline(fn)
ninit := call.Init()
ninit := n.Init()
// For normal function calls, the function callee expression
// may contain side effects. Make sure to preserve these,
// may contain side effects (e.g., added by addinit during
// inlconv2expr or inlconv2list). Make sure to preserve these,
// if necessary (#42703).
if call.Op() == ir.OCALLFUNC {
CalleeEffects(&ninit, call.X)
if n.Op() == ir.OCALLFUNC {
callee := n.X
for callee.Op() == ir.OCONVNOP {
conv := callee.(*ir.ConvExpr)
ninit.Append(ir.TakeInit(conv)...)
callee = conv.X
}
if callee.Op() != ir.ONAME && callee.Op() != ir.OCLOSURE && callee.Op() != ir.OMETHEXPR {
base.Fatalf("unexpected callee expression: %v", callee)
}
}
// Make temp names to use instead of the originals.
@@ -816,6 +854,25 @@ func oldInline(call *ir.CallExpr, fn *ir.Func, inlIndex int) *ir.InlinedCallExpr
}
// We can delay declaring+initializing result parameters if:
// (1) there's exactly one "return" statement in the inlined function;
// (2) it's not an empty return statement (#44355); and
// (3) the result parameters aren't named.
delayretvars := true
nreturns := 0
ir.VisitList(ir.Nodes(fn.Inl.Body), func(n ir.Node) {
if n, ok := n.(*ir.ReturnStmt); ok {
nreturns++
if len(n.Results) == 0 {
delayretvars = false // empty return statement (case 2)
}
}
})
if nreturns != 1 {
delayretvars = false // not exactly one return statement (case 1)
}
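The three conditions above (exactly one return statement, that return non-empty, and no named result parameters) can also be checked from plain syntax. A standalone sketch of the same check over go/ast (editorial illustration; the compiler performs this on its own IR, and its visitor, unlike ast.Inspect here, does not descend into nested function literals):

package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// canDelay reports whether fn has exactly one return statement, that
// return is non-empty, and no result parameter is named; these are
// the same three conditions used above to decide delayretvars.
func canDelay(fn *ast.FuncDecl) bool {
	if fn.Type.Results != nil {
		for _, f := range fn.Type.Results.List {
			if len(f.Names) > 0 {
				return false // named result (case 3)
			}
		}
	}
	returns, empty := 0, false
	ast.Inspect(fn.Body, func(n ast.Node) bool {
		if r, ok := n.(*ast.ReturnStmt); ok {
			returns++
			empty = empty || len(r.Results) == 0
		}
		return true
	})
	return returns == 1 && !empty // cases 1 and 2
}

func main() {
	src := `package p
func a() int { return 1 }
func b() (x int) { x = 1; return }
`
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "p.go", src, 0)
	if err != nil {
		panic(err)
	}
	for _, d := range file.Decls {
		if fn, ok := d.(*ast.FuncDecl); ok {
			fmt.Println(fn.Name.Name, canDelay(fn)) // a true, b false
		}
	}
}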
// temporaries for return values.
var retvars []ir.Node
for i, t := range fn.Type().Results().Fields().Slice() {
@@ -825,6 +882,7 @@ func oldInline(call *ir.CallExpr, fn *ir.Func, inlIndex int) *ir.InlinedCallExpr
m = inlvar(n)
m = typecheck.Expr(m).(*ir.Name)
inlvars[n] = m
delayretvars = false // found a named result parameter (case 3)
} else {
// anonymous return values, synthesize names for use in assignment that replaces return
m = retvar(t, i)
@@ -847,23 +905,61 @@ func oldInline(call *ir.CallExpr, fn *ir.Func, inlIndex int) *ir.InlinedCallExpr
// Assign arguments to the parameters' temp names.
as := ir.NewAssignListStmt(base.Pos, ir.OAS2, nil, nil)
as.Def = true
if call.Op() == ir.OCALLMETH {
base.FatalfAt(call.Pos(), "OCALLMETH missed by typecheck")
if n.Op() == ir.OCALLMETH {
sel := n.X.(*ir.SelectorExpr)
if sel.X == nil {
base.Fatalf("method call without receiver: %+v", n)
}
as.Rhs.Append(sel.X)
}
as.Rhs.Append(call.Args...)
as.Rhs.Append(n.Args...)
// For non-dotted calls to variadic functions, we assign the
// variadic parameter's temp name separately.
var vas *ir.AssignStmt
if recv := fn.Type().Recv(); recv != nil {
as.Lhs.Append(inlParam(recv, as, inlvars))
}
for _, param := range fn.Type().Params().Fields().Slice() {
as.Lhs.Append(inlParam(param, as, inlvars))
// For ordinary parameters or variadic parameters in
// dotted calls, just add the variable to the
// assignment list, and we're done.
if !param.IsDDD() || n.IsDDD {
as.Lhs.Append(inlParam(param, as, inlvars))
continue
}
// Otherwise, we need to collect the remaining values
// to pass as a slice.
x := len(as.Lhs)
for len(as.Lhs) < len(as.Rhs) {
as.Lhs.Append(argvar(param.Type, len(as.Lhs)))
}
varargs := as.Lhs[x:]
vas = ir.NewAssignStmt(base.Pos, nil, nil)
vas.X = inlParam(param, vas, inlvars)
if len(varargs) == 0 {
vas.Y = typecheck.NodNil()
vas.Y.SetType(param.Type)
} else {
lit := ir.NewCompLitExpr(base.Pos, ir.OCOMPLIT, ir.TypeNode(param.Type), nil)
lit.List = varargs
vas.Y = lit
}
}
if len(as.Rhs) != 0 {
ninit.Append(typecheck.Stmt(as))
}
if !fn.Inl.CanDelayResults {
if vas != nil {
ninit.Append(typecheck.Stmt(vas))
}
if !delayretvars {
// Zero the return parameters.
for _, n := range retvars {
ninit.Append(ir.NewDecl(base.Pos, ir.ODCL, n.(*ir.Name)))
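The variadic handling above mirrors the language semantics the inliner has to reproduce: a non-dotted call collects the trailing arguments into a fresh slice for the variadic parameter, a call with no trailing arguments passes a nil slice, and a dotted call passes the slice through unchanged. A runnable reminder of those three shapes (editorial sketch):

package main

import "fmt"

func sum(label string, xs ...int) {
	total := 0
	for _, x := range xs {
		total += x
	}
	// xs == nil exactly when no variadic arguments were supplied.
	fmt.Println(label, total, xs == nil)
}

func main() {
	sum("three args", 1, 2, 3)    // equivalent to sum("three args", []int{1, 2, 3}...)
	sum("no args")                // the variadic slice is nil
	sum("dotted", []int{4, 5}...) // dotted call: the slice is passed through as-is
}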
@@ -876,21 +972,40 @@ func oldInline(call *ir.CallExpr, fn *ir.Func, inlIndex int) *ir.InlinedCallExpr
inlgen++
parent := -1
if b := base.Ctxt.PosTable.Pos(n.Pos()).Base(); b != nil {
parent = b.InliningIndex()
}
sym := fn.Linksym()
newIndex := base.Ctxt.InlTree.Add(parent, n.Pos(), sym)
// Add an inline mark just before the inlined body.
// This mark is inline in the code so that it's a reasonable spot
// to put a breakpoint. Not sure if that's really necessary or not
// (in which case it could go at the end of the function instead).
// Note issue 28603.
ninit.Append(ir.NewInlineMarkStmt(call.Pos().WithIsStmt(), int64(inlIndex)))
inlMark := ir.NewInlineMarkStmt(base.Pos, types.BADWIDTH)
inlMark.SetPos(n.Pos().WithIsStmt())
inlMark.Index = int64(newIndex)
ninit.Append(inlMark)
if base.Flag.GenDwarfInl > 0 {
if !sym.WasInlined() {
base.Ctxt.DwFixups.SetPrecursorFunc(sym, fn)
sym.Set(obj.AttrWasInlined, true)
}
}
subst := inlsubst{
retlabel: retlabel,
retvars: retvars,
inlvars: inlvars,
defnMarker: ir.NilExpr{},
bases: make(map[*src.PosBase]*src.PosBase),
newInlIndex: inlIndex,
fn: fn,
retlabel: retlabel,
retvars: retvars,
delayretvars: delayretvars,
inlvars: inlvars,
defnMarker: ir.NilExpr{},
bases: make(map[*src.PosBase]*src.PosBase),
newInlIndex: newIndex,
fn: fn,
}
subst.edit = subst.node
@@ -911,11 +1026,26 @@ func oldInline(call *ir.CallExpr, fn *ir.Func, inlIndex int) *ir.InlinedCallExpr
//dumplist("ninit post", ninit);
res := ir.NewInlinedCallExpr(base.Pos, body, retvars)
res.SetInit(ninit)
res.SetType(call.Type())
res.SetTypecheck(1)
return res
call := ir.NewInlinedCallExpr(base.Pos, nil, nil)
*call.PtrInit() = ninit
call.Body = body
call.ReturnVars = retvars
call.SetType(n.Type())
call.SetTypecheck(1)
// transitive inlining
// might be nice to do this before exporting the body,
// but can't emit the body with inlining expanded.
// instead we emit the things that the body needs
// and each use must redo the inlining.
// luckily these are small.
ir.EditChildren(call, edit)
if base.Flag.LowerM > 2 {
fmt.Printf("%v: After inlining %+v\n\n", ir.Line(call), call)
}
return call
}
// Every time we expand a function we generate a new set of tmpnames,
@@ -928,10 +1058,8 @@ func inlvar(var_ *ir.Name) *ir.Name {
n := typecheck.NewName(var_.Sym())
n.SetType(var_.Type())
n.SetTypecheck(1)
n.Class = ir.PAUTO
n.SetUsed(true)
n.SetAutoTemp(var_.AutoTemp())
n.Curfn = ir.CurFunc // the calling function, not the called one
n.SetAddrtaken(var_.Addrtaken())
@@ -943,7 +1071,18 @@ func inlvar(var_ *ir.Name) *ir.Name {
func retvar(t *types.Field, i int) *ir.Name {
n := typecheck.NewName(typecheck.LookupNum("~R", i))
n.SetType(t.Type)
n.SetTypecheck(1)
n.Class = ir.PAUTO
n.SetUsed(true)
n.Curfn = ir.CurFunc // the calling function, not the called one
ir.CurFunc.Dcl = append(ir.CurFunc.Dcl, n)
return n
}
// Synthesize a variable to store the inlined function's arguments
// when they come from a multiple return call.
func argvar(t *types.Type, i int) ir.Node {
n := typecheck.NewName(typecheck.LookupNum("~arg", i))
n.SetType(t.Elem())
n.Class = ir.PAUTO
n.SetUsed(true)
n.Curfn = ir.CurFunc // the calling function, not the called one
@@ -960,6 +1099,10 @@ type inlsubst struct {
// Temporary result variables.
retvars []ir.Node
// Whether result variables should be initialized at the
// "return" statement.
delayretvars bool
inlvars map[*ir.Name]*ir.Name
// defnMarker is used to mark a Node for reassignment.
// inlsubst.clovar sets this when creating a new ONAME.
@@ -1014,21 +1157,17 @@ func (subst *inlsubst) fields(oldt *types.Type) []*types.Field {
// clovar creates a new ONAME node for a local variable or param of a closure
// inside a function being inlined.
func (subst *inlsubst) clovar(n *ir.Name) *ir.Name {
m := ir.NewNameAt(n.Pos(), n.Sym())
m.Class = n.Class
m.SetType(n.Type())
m.SetTypecheck(1)
if n.IsClosureVar() {
m.SetIsClosureVar(true)
}
if n.Addrtaken() {
m.SetAddrtaken(true)
}
if n.Used() {
m.SetUsed(true)
}
m.Defn = n.Defn
// TODO(danscales): want to get rid of this shallow copy, with code like the
// following, but it is hard to copy all the necessary flags in a maintainable way.
// m := ir.NewNameAt(n.Pos(), n.Sym())
// m.Class = n.Class
// m.SetType(n.Type())
// m.SetTypecheck(1)
//if n.IsClosureVar() {
// m.SetIsClosureVar(true)
//}
m := &ir.Name{}
*m = *n
m.Curfn = subst.newclofn
switch defn := n.Defn.(type) {
@@ -1083,6 +1222,8 @@ func (subst *inlsubst) clovar(n *ir.Name) *ir.Name {
// closure does the necessary substitutions for a ClosureExpr n and returns the new
// closure node.
func (subst *inlsubst) closure(n *ir.ClosureExpr) ir.Node {
m := ir.Copy(n)
// Prior to the subst edit, set a flag in the inlsubst to
// indicate that we don't want to update the source positions in
// the new closure. If we do this, it will appear that the closure
@@ -1090,16 +1231,29 @@ func (subst *inlsubst) closure(n *ir.ClosureExpr) ir.Node {
// issue #46234 for more details.
defer func(prev bool) { subst.noPosUpdate = prev }(subst.noPosUpdate)
subst.noPosUpdate = true
ir.EditChildren(m, subst.edit)
//fmt.Printf("Inlining func %v with closure into %v\n", subst.fn, ir.FuncName(ir.CurFunc))
// The following is similar to funcLit
oldfn := n.Func
newfn := ir.NewClosureFunc(oldfn.Pos(), true)
newfn := ir.NewFunc(oldfn.Pos())
// These three lines are not strictly necessary, but just to be clear
// that new function needs to redo typechecking and inlinability.
newfn.SetTypecheck(0)
newfn.SetInlinabilityChecked(false)
newfn.Inl = nil
newfn.SetIsHiddenClosure(true)
newfn.Nname = ir.NewNameAt(n.Pos(), ir.BlankNode.Sym())
newfn.Nname.Func = newfn
// Ntype can be nil for -G=3 mode.
if oldfn.Nname.Ntype != nil {
newfn.Nname.Ntype = subst.node(oldfn.Nname.Ntype).(ir.Ntype)
}
newfn.Nname.Defn = newfn
m.(*ir.ClosureExpr).Func = newfn
newfn.OClosure = m.(*ir.ClosureExpr)
if subst.newclofn != nil {
//fmt.Printf("Inlining a closure with a nested closure\n")
@@ -1149,9 +1303,13 @@ func (subst *inlsubst) closure(n *ir.ClosureExpr) ir.Node {
// Actually create the named function for the closure, now that
// the closure is inlined in a specific function.
newclo := newfn.OClosure
newclo.SetInit(subst.list(n.Init()))
return typecheck.Expr(newclo)
m.SetTypecheck(0)
if oldfn.ClosureCalled() {
typecheck.Callee(m)
} else {
typecheck.Expr(m)
}
return m
}
// node recursively copies a node from the saved pristine body of the
@@ -1233,7 +1391,7 @@ func (subst *inlsubst) node(n ir.Node) ir.Node {
}
as.Rhs = subst.list(n.Results)
if subst.fn.Inl.CanDelayResults {
if subst.delayretvars {
for _, n := range as.Lhs {
as.PtrInit().Append(ir.NewDecl(base.Pos, ir.ODCL, n.(*ir.Name)))
n.Name().Defn = as

View File

@@ -142,15 +142,28 @@ func (n *BinaryExpr) SetOp(op Op) {
}
}
// A CallUse records how the result of the call is used:
type CallUse byte
const (
_ CallUse = iota
CallUseExpr // single expression result is used
CallUseList // list of results are used
CallUseStmt // results not used - call is a statement
)
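The three CallUse values correspond to the three syntactic positions a call can occupy, and the inlconv2stmt/inlconv2expr/inlconv2list helpers earlier in this diff apply the matching rewrite. A small mirror of that mapping (editorial sketch; callUse below is a hypothetical stand-in, not the ir type):

package main

import "fmt"

// callUse mirrors the idea of ir.CallUse for illustration only.
type callUse byte

const (
	callUseExpr callUse = iota + 1 // x := f()    : single result used as an expression
	callUseList                    // a, b := g() : results used as a list
	callUseStmt                    // f()         : results discarded; the call is a statement
)

func (u callUse) String() string {
	switch u {
	case callUseExpr:
		return "CallUseExpr"
	case callUseList:
		return "CallUseList"
	case callUseStmt:
		return "CallUseStmt"
	}
	return "unknown"
}

func main() {
	examples := []struct {
		src string
		use callUse
	}{
		{"x := f()", callUseExpr},
		{"a, b := g()", callUseList},
		{"f()", callUseStmt},
	}
	for _, e := range examples {
		fmt.Printf("%-12s -> %v\n", e.src, e.use)
	}
}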
// A CallExpr is a function call X(Args).
type CallExpr struct {
miniExpr
origNode
X Node
Args Nodes
KeepAlive []*Name // vars to be kept alive until call returns
IsDDD bool
NoInline bool
X Node
Args Nodes
KeepAlive []*Name // vars to be kept alive until call returns
IsDDD bool
Use CallUse
NoInline bool
PreserveClosure bool // disable directClosureCall for this call
}
func NewCallExpr(pos src.XPos, op Op, fun Node, args []Node) *CallExpr {
@@ -168,12 +181,8 @@ func (n *CallExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OAPPEND,
OCALL, OCALLFUNC, OCALLINTER, OCALLMETH,
ODELETE,
OGETG, OGETCALLERPC, OGETCALLERSP,
OMAKE, OPRINT, OPRINTN,
ORECOVER, ORECOVERFP:
case OCALL, OCALLFUNC, OCALLINTER, OCALLMETH,
OAPPEND, ODELETE, OGETG, OMAKE, OPRINT, OPRINTN, ORECOVER:
n.op = op
}
}
@@ -183,10 +192,8 @@ type ClosureExpr struct {
miniExpr
Func *Func `mknode:"-"`
Prealloc *Name
IsGoWrap bool // whether this is wrapper closure of a go statement
}
// Deprecated: Use NewClosureFunc instead.
func NewClosureExpr(pos src.XPos, fn *Func) *ClosureExpr {
n := &ClosureExpr{Func: fn}
n.op = OCLOSURE
@@ -270,12 +277,12 @@ func (n *ConvExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OCONV, OCONVIFACE, OCONVIDATA, OCONVNOP, OBYTES2STR, OBYTES2STRTMP, ORUNES2STR, OSTR2BYTES, OSTR2BYTESTMP, OSTR2RUNES, ORUNESTR, OSLICE2ARRPTR:
case OCONV, OCONVIFACE, OCONVNOP, OBYTES2STR, OBYTES2STRTMP, ORUNES2STR, OSTR2BYTES, OSTR2BYTESTMP, OSTR2RUNES, ORUNESTR, OSLICE2ARRPTR:
n.op = op
}
}
// An IndexExpr is an index expression X[Index].
// An IndexExpr is an index expression X[Y].
type IndexExpr struct {
miniExpr
X Node
@@ -316,24 +323,26 @@ func NewKeyExpr(pos src.XPos, key, value Node) *KeyExpr {
// A StructKeyExpr is a Field: Value composite literal key.
type StructKeyExpr struct {
miniExpr
Field *types.Field
Value Node
Field *types.Sym
Value Node
Offset int64
}
func NewStructKeyExpr(pos src.XPos, field *types.Field, value Node) *StructKeyExpr {
func NewStructKeyExpr(pos src.XPos, field *types.Sym, value Node) *StructKeyExpr {
n := &StructKeyExpr{Field: field, Value: value}
n.pos = pos
n.op = OSTRUCTKEY
n.Offset = types.BADWIDTH
return n
}
func (n *StructKeyExpr) Sym() *types.Sym { return n.Field.Sym }
func (n *StructKeyExpr) Sym() *types.Sym { return n.Field }
// An InlinedCallExpr is an inlined function call.
type InlinedCallExpr struct {
miniExpr
Body Nodes
ReturnVars Nodes // must be side-effect free
ReturnVars Nodes
}
func NewInlinedCallExpr(pos src.XPos, body, retvars []Node) *InlinedCallExpr {
@@ -345,21 +354,6 @@ func NewInlinedCallExpr(pos src.XPos, body, retvars []Node) *InlinedCallExpr {
return n
}
func (n *InlinedCallExpr) SingleResult() Node {
if have := len(n.ReturnVars); have != 1 {
base.FatalfAt(n.Pos(), "inlined call has %v results, expected 1", have)
}
if !n.Type().HasShape() && n.ReturnVars[0].Type().HasShape() {
// If the type of the call is not a shape, but the type of the return value
// is a shape, we need to do an implicit conversion, so the real type
// of n is maintained.
r := NewConvExpr(n.Pos(), OCONVNOP, n.Type(), n.ReturnVars[0])
r.SetTypecheck(1)
return r
}
return n.ReturnVars[0]
}
// A LogicalExpr is an expression X Op Y where Op is && or ||.
// It is separate from BinaryExpr to make room for statements
// that must be executed before Y but after X.
@@ -454,20 +448,6 @@ func (n *ParenExpr) SetOTYPE(t *types.Type) {
t.SetNod(n)
}
// A RawOrigExpr represents an arbitrary Go expression as a string value.
// When printed in diagnostics, the string value is written out exactly as-is.
type RawOrigExpr struct {
miniExpr
Raw string
}
func NewRawOrigExpr(pos src.XPos, op Op, raw string) *RawOrigExpr {
n := &RawOrigExpr{Raw: raw}
n.pos = pos
n.op = op
return n
}
// A ResultExpr represents a direct access to a result.
type ResultExpr struct {
miniExpr
@@ -514,15 +494,10 @@ func NewNameOffsetExpr(pos src.XPos, name *Name, offset int64, typ *types.Type)
// A SelectorExpr is a selector expression X.Sel.
type SelectorExpr struct {
miniExpr
X Node
// Sel is the name of the field or method being selected, without (in the
// case of methods) any preceding type specifier. If the field/method is
// exported, then the Sym uses the local package regardless of the package
// of the containing type.
Sel *types.Sym
// The actual selected field - may not be filled in until typechecking.
X Node
Sel *types.Sym
Selection *types.Field
Prealloc *Name // preallocated storage for OMETHVALUE, if any
Prealloc *Name // preallocated storage for OCALLPART, if any
}
func NewSelectorExpr(pos src.XPos, op Op, x Node, sel *types.Sym) *SelectorExpr {
@@ -536,7 +511,7 @@ func (n *SelectorExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OXDOT, ODOT, ODOTPTR, ODOTMETH, ODOTINTER, OMETHVALUE, OMETHEXPR:
case OXDOT, ODOT, ODOTPTR, ODOTMETH, ODOTINTER, OCALLPART, OMETHEXPR:
n.op = op
}
}
@@ -570,11 +545,10 @@ func (*SelectorExpr) CanBeNtype() {}
// A SliceExpr is a slice expression X[Low:High] or X[Low:High:Max].
type SliceExpr struct {
miniExpr
X Node
Low Node
High Node
Max Node
CheckPtrCall *CallExpr
X Node
Low Node
High Node
Max Node
}
func NewSliceExpr(pos src.XPos, op Op, x, low, high, max Node) *SliceExpr {
@@ -678,38 +652,6 @@ func (n *TypeAssertExpr) SetOp(op Op) {
}
}
// A DynamicTypeAssertExpr asserts that X is of dynamic type T.
type DynamicTypeAssertExpr struct {
miniExpr
X Node
// N = not an interface
// E = empty interface
// I = nonempty interface
// For E->N, T is a *runtime.type for N
// For I->N, T is a *runtime.itab for N+I
// For E->I, T is a *runtime.type for I
// For I->I, ditto
// For I->E, T is a *runtime.type for interface{} (unnecessary, but just to fill in the slot)
// For E->E, ditto
T Node
}
func NewDynamicTypeAssertExpr(pos src.XPos, op Op, x, t Node) *DynamicTypeAssertExpr {
n := &DynamicTypeAssertExpr{X: x, T: t}
n.pos = pos
n.op = op
return n
}
func (n *DynamicTypeAssertExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case ODYNAMICDOTTYPE, ODYNAMICDOTTYPE2:
n.op = op
}
}
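The ops handled above (present on one side of this diff) are the IR form of a type assertion whose target type is, or is derived from, a type parameter, so the *runtime._type or itab operand must be supplied dynamically instead of being baked into the instruction. Roughly what that looks like in user source (editorial sketch; requires a generics-capable toolchain):

package main

import "fmt"

// as is roughly the source-level shape behind ODYNAMICDOTTYPE2: the
// assertion target T is a type parameter, so its type descriptor is
// only known dynamically at the point of the assertion.
func as[T any](x any) (T, bool) {
	v, ok := x.(T)
	return v, ok
}

func main() {
	fmt.Println(as[int](42))    // 42 true
	fmt.Println(as[string](42)) // zero string and false
}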
// A UnaryExpr is a unary expression Op X,
// or Op(X) for a builtin function that does not end up being a call.
type UnaryExpr struct {
@@ -736,11 +678,6 @@ func (n *UnaryExpr) SetOp(op Op) {
}
}
// Probably temporary: using Implicit() flag to mark generic function nodes that
// are called to make getGfInfo analysis easier in one pre-order pass.
func (n *InstExpr) Implicit() bool { return n.flags&miniExprImplicit != 0 }
func (n *InstExpr) SetImplicit(b bool) { n.flags.set(miniExprImplicit, b) }
// An InstExpr is a generic function or type instantiation.
type InstExpr struct {
miniExpr
@@ -836,11 +773,6 @@ func StaticValue(n Node) Node {
continue
}
if n.Op() == OINLCALL {
n = n.(*InlinedCallExpr).SingleResult()
continue
}
n1 := staticValue1(n)
if n1 == nil {
return n
@@ -1139,7 +1071,7 @@ func MethodExprName(n Node) *Name {
// MethodExprFunc is like MethodExprName, but returns the types.Field instead.
func MethodExprFunc(n Node) *types.Field {
switch n.Op() {
case ODOTMETH, OMETHEXPR, OMETHVALUE:
case ODOTMETH, OMETHEXPR, OCALLPART:
return n.(*SelectorExpr).Selection
}
base.Fatalf("unexpected node: %v (%v)", n, n.Op())

View File

@@ -185,7 +185,6 @@ var OpPrec = []int{
OCLOSE: 8,
OCOMPLIT: 8,
OCONVIFACE: 8,
OCONVIDATA: 8,
OCONVNOP: 8,
OCONV: 8,
OCOPY: 8,
@@ -238,7 +237,7 @@ var OpPrec = []int{
ODOTTYPE: 8,
ODOT: 8,
OXDOT: 8,
OMETHVALUE: 8,
OCALLPART: 8,
OMETHEXPR: 8,
OPLUS: 7,
ONOT: 7,
@@ -547,7 +546,7 @@ func exprFmt(n Node, s fmt.State, prec int) {
n = nn.X
continue
}
case OCONV, OCONVNOP, OCONVIFACE, OCONVIDATA:
case OCONV, OCONVNOP, OCONVIFACE:
nn := nn.(*ConvExpr)
if nn.Implicit() {
n = nn.X
@@ -568,11 +567,6 @@ func exprFmt(n Node, s fmt.State, prec int) {
return
}
if n, ok := n.(*RawOrigExpr); ok {
fmt.Fprint(s, n.Raw)
return
}
switch n.Op() {
case OPAREN:
n := n.(*ParenExpr)
@@ -715,10 +709,6 @@ func exprFmt(n Node, s fmt.State, prec int) {
fmt.Fprintf(s, "... argument")
return
}
if typ := n.Type(); typ != nil {
fmt.Fprintf(s, "%v{%s}", typ, ellipsisIf(len(n.List) != 0))
return
}
if n.Ntype != nil {
fmt.Fprintf(s, "%v{%s}", n.Ntype, ellipsisIf(len(n.List) != 0))
return
@@ -762,7 +752,7 @@ func exprFmt(n Node, s fmt.State, prec int) {
n := n.(*StructKeyExpr)
fmt.Fprintf(s, "%v:%v", n.Field, n.Value)
case OXDOT, ODOT, ODOTPTR, ODOTINTER, ODOTMETH, OMETHVALUE, OMETHEXPR:
case OXDOT, ODOT, ODOTPTR, ODOTINTER, ODOTMETH, OCALLPART, OMETHEXPR:
n := n.(*SelectorExpr)
exprFmt(n.X, s, nprec)
if n.Sel == nil {
@@ -814,7 +804,6 @@ func exprFmt(n Node, s fmt.State, prec int) {
case OCONV,
OCONVIFACE,
OCONVIDATA,
OCONVNOP,
OBYTES2STR,
ORUNES2STR,
@@ -865,15 +854,6 @@ func exprFmt(n Node, s fmt.State, prec int) {
}
fmt.Fprintf(s, "(%.v)", n.Args)
case OINLCALL:
n := n.(*InlinedCallExpr)
// TODO(mdempsky): Print Init and/or Body?
if len(n.ReturnVars) == 1 {
fmt.Fprintf(s, "%v", n.ReturnVars[0])
return
}
fmt.Fprintf(s, "(.%v)", n.ReturnVars)
case OMAKEMAP, OMAKECHAN, OMAKESLICE:
n := n.(*MakeExpr)
if n.Cap != nil {
@@ -1006,7 +986,7 @@ func (l Nodes) Format(s fmt.State, verb rune) {
// Dump prints the message s followed by a debug dump of n.
func Dump(s string, n Node) {
fmt.Printf("%s%+v\n", s, n)
fmt.Printf("%s [%p]%+v\n", s, n, n)
}
// DumpList prints the message s followed by a debug dump of each node in the list.
@@ -1134,21 +1114,16 @@ func dumpNodeHeader(w io.Writer, n Node) {
}
if n.Pos().IsKnown() {
fmt.Fprint(w, " # ")
pfx := ""
switch n.Pos().IsStmt() {
case src.PosNotStmt:
fmt.Fprint(w, "_") // "-" would be confusing
pfx = "_" // "-" would be confusing
case src.PosIsStmt:
fmt.Fprint(w, "+")
}
for i, pos := range base.Ctxt.AllPos(n.Pos(), nil) {
if i > 0 {
fmt.Fprint(w, ",")
}
// TODO(mdempsky): Print line pragma details too.
file := filepath.Base(pos.Filename())
fmt.Fprintf(w, "%s:%d:%d", file, pos.Line(), pos.Col())
pfx = "+"
}
pos := base.Ctxt.PosTable.Pos(n.Pos())
file := filepath.Base(pos.Filename())
fmt.Fprintf(w, " # %s%s:%d", pfx, file, pos.Line())
}
}

View File

@@ -9,7 +9,6 @@ import (
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/src"
"fmt"
)
// A Func corresponds to a single function in a Go program
@@ -40,14 +39,14 @@ import (
// constructs a fresh node.
//
// A method value (t.M) is represented by ODOTMETH/ODOTINTER
// when it is called directly and by OMETHVALUE otherwise.
// when it is called directly and by OCALLPART otherwise.
// These are like method expressions, except that for ODOTMETH/ODOTINTER,
// the method name is stored in Sym instead of Right.
// Each OMETHVALUE ends up being implemented as a new
// Each OCALLPART ends up being implemented as a new
// function, a bit like a closure, with its own ODCLFUNC.
// The OMETHVALUE uses n.Func to record the linkage to
// The OCALLPART uses n.Func to record the linkage to
// the generated ODCLFUNC, but there is no
// pointer from the Func back to the OMETHVALUE.
// pointer from the Func back to the OCALLPART.
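The comment above distinguishes a direct method call (ODOTMETH/ODOTINTER), a method value (OMETHVALUE on one side of this rename, OCALLPART on the other), and a method expression (OMETHEXPR). For reference, the three source forms (editorial sketch):

package main

import "fmt"

type T struct{ n int }

func (t T) Add(x int) int { return t.n + x }

func main() {
	t := T{n: 10}

	fmt.Println(t.Add(1)) // direct call: ODOTMETH

	mv := t.Add           // method value (OMETHVALUE/OCALLPART): receiver t is captured
	fmt.Println(mv(2))    // 12

	me := T.Add           // method expression (OMETHEXPR): receiver becomes the first argument
	fmt.Println(me(t, 3)) // 13
}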
type Func struct {
miniNode
Body Nodes
@@ -167,11 +166,6 @@ type Inline struct {
// another package is imported.
Dcl []*Name
Body []Node
// CanDelayResults reports whether it's safe for the inliner to delay
// initializing the result parameters until immediately before the
// "return" statement.
CanDelayResults bool
}
// A Mark represents a scope boundary.
@@ -196,14 +190,13 @@ const (
// true if closure inside a function; false if a simple function or a
// closure in a global variable initialization
funcIsHiddenClosure
funcIsDeadcodeClosure // true if closure is deadcode
funcHasDefer // contains a defer statement
funcNilCheckDisabled // disable nil checks when compiling this function
funcInlinabilityChecked // inliner has already determined whether the function is inlinable
funcExportInline // include inline body in export data
funcInstrumentBody // add race/msan instrumentation during SSA construction
funcOpenCodedDeferDisallowed // can't do open-coded defers
funcClosureCalled // closure is only immediately called; used by escape analysis
funcClosureCalled // closure is only immediately called
)
type SymAndPos struct {
@@ -217,7 +210,6 @@ func (f *Func) ABIWrapper() bool { return f.flags&funcABIWrapper !
func (f *Func) Needctxt() bool { return f.flags&funcNeedctxt != 0 }
func (f *Func) ReflectMethod() bool { return f.flags&funcReflectMethod != 0 }
func (f *Func) IsHiddenClosure() bool { return f.flags&funcIsHiddenClosure != 0 }
func (f *Func) IsDeadcodeClosure() bool { return f.flags&funcIsDeadcodeClosure != 0 }
func (f *Func) HasDefer() bool { return f.flags&funcHasDefer != 0 }
func (f *Func) NilCheckDisabled() bool { return f.flags&funcNilCheckDisabled != 0 }
func (f *Func) InlinabilityChecked() bool { return f.flags&funcInlinabilityChecked != 0 }
@@ -232,7 +224,6 @@ func (f *Func) SetABIWrapper(b bool) { f.flags.set(funcABIWrapper,
func (f *Func) SetNeedctxt(b bool) { f.flags.set(funcNeedctxt, b) }
func (f *Func) SetReflectMethod(b bool) { f.flags.set(funcReflectMethod, b) }
func (f *Func) SetIsHiddenClosure(b bool) { f.flags.set(funcIsHiddenClosure, b) }
func (f *Func) SetIsDeadcodeClosure(b bool) { f.flags.set(funcIsDeadcodeClosure, b) }
func (f *Func) SetHasDefer(b bool) { f.flags.set(funcHasDefer, b) }
func (f *Func) SetNilCheckDisabled(b bool) { f.flags.set(funcNilCheckDisabled, b) }
func (f *Func) SetInlinabilityChecked(b bool) { f.flags.set(funcInlinabilityChecked, b) }
@@ -281,17 +272,6 @@ func PkgFuncName(f *Func) string {
var CurFunc *Func
// WithFunc invokes do with CurFunc and base.Pos set to curfn and
// curfn.Pos(), respectively, and then restores their previous values
// before returning.
func WithFunc(curfn *Func, do func()) {
oldfn, oldpos := CurFunc, base.Pos
defer func() { CurFunc, base.Pos = oldfn, oldpos }()
CurFunc, base.Pos = curfn, curfn.Pos()
do()
}
func FuncSymName(s *types.Sym) string {
return s.Name + "·f"
}
@@ -299,7 +279,7 @@ func FuncSymName(s *types.Sym) string {
// MarkFunc marks a node as a function.
func MarkFunc(n *Name) {
if n.Op() != ONAME || n.Class != Pxxx {
base.FatalfAt(n.Pos(), "expected ONAME/Pxxx node, got %v (%v/%v)", n, n.Op(), n.Class)
base.Fatalf("expected ONAME/Pxxx node, got %v", n)
}
n.Class = PFUNC
@@ -316,8 +296,8 @@ func ClosureDebugRuntimeCheck(clo *ClosureExpr) {
base.WarnfAt(clo.Pos(), "stack closure, captured vars = %v", clo.Func.ClosureVars)
}
}
if base.Flag.CompilingRuntime && clo.Esc() == EscHeap && !clo.IsGoWrap {
base.ErrorfAt(clo.Pos(), "heap-allocated closure %s, not allowed in runtime", FuncName(clo.Func))
if base.Flag.CompilingRuntime && clo.Esc() == EscHeap {
base.ErrorfAt(clo.Pos(), "heap-allocated closure, not allowed in runtime")
}
}
@@ -326,109 +306,3 @@ func ClosureDebugRuntimeCheck(clo *ClosureExpr) {
func IsTrivialClosure(clo *ClosureExpr) bool {
return len(clo.Func.ClosureVars) == 0
}
// globClosgen is like Func.Closgen, but for the global scope.
var globClosgen int32
// closureName generates a new unique name for a closure within outerfn.
func closureName(outerfn *Func) *types.Sym {
pkg := types.LocalPkg
outer := "glob."
prefix := "func"
gen := &globClosgen
if outerfn != nil {
if outerfn.OClosure != nil {
prefix = ""
}
pkg = outerfn.Sym().Pkg
outer = FuncName(outerfn)
// There may be multiple functions named "_". In those
// cases, we can't use their individual Closgens as it
// would lead to name clashes.
if !IsBlank(outerfn.Nname) {
gen = &outerfn.Closgen
}
}
*gen++
return pkg.Lookup(fmt.Sprintf("%s.%s%d", outer, prefix, *gen))
}
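The scheme above yields symbols such as "pkg.Outer.func1" for a closure inside a function, "pkg.glob..func1" for one in a package-scope initializer, and an extra numeric suffix for a closure nested inside another closure. One way to observe the generated names from ordinary code (editorial sketch; the printed names are indicative, not guaranteed):

package main

import (
	"fmt"
	"reflect"
	"runtime"
)

// nameOf reports the linker-level name of a function value.
func nameOf(f interface{}) string {
	return runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
}

func outer() (func(), func()) {
	a := func() {}
	b := func() {
		_ = func() {} // nested closure: gets an extra ".1"-style suffix
	}
	return a, b
}

func main() {
	a, b := outer()
	fmt.Println(nameOf(a)) // e.g. main.outer.func1
	fmt.Println(nameOf(b)) // e.g. main.outer.func2
}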
// NewClosureFunc creates a new Func to represent a function literal.
// If hidden is true, then the closure is marked hidden (i.e., as a
// function literal contained within another function, rather than a
// package-scope variable initialization expression).
func NewClosureFunc(pos src.XPos, hidden bool) *Func {
fn := NewFunc(pos)
fn.SetIsHiddenClosure(hidden)
fn.Nname = NewNameAt(pos, BlankNode.Sym())
fn.Nname.Func = fn
fn.Nname.Defn = fn
fn.OClosure = NewClosureExpr(pos, fn)
return fn
}
// NameClosure generates a unique name for the given function literal,
// which must have appeared within outerfn.
func NameClosure(clo *ClosureExpr, outerfn *Func) {
fn := clo.Func
if fn.IsHiddenClosure() != (outerfn != nil) {
base.FatalfAt(clo.Pos(), "closure naming inconsistency: hidden %v, but outer %v", fn.IsHiddenClosure(), outerfn)
}
name := fn.Nname
if !IsBlank(name) {
base.FatalfAt(clo.Pos(), "closure already named: %v", name)
}
name.SetSym(closureName(outerfn))
MarkFunc(name)
}
// UseClosure checks that the given function literal has been set up
// correctly, and then returns it as an expression.
// It must be called after clo.Func.ClosureVars has been set.
func UseClosure(clo *ClosureExpr, pkg *Package) Node {
fn := clo.Func
name := fn.Nname
if IsBlank(name) {
base.FatalfAt(fn.Pos(), "unnamed closure func: %v", fn)
}
// Caution: clo.Typecheck() is still 0 when UseClosure is called by
// tcClosure.
if fn.Typecheck() != 1 || name.Typecheck() != 1 {
base.FatalfAt(fn.Pos(), "missed typecheck: %v", fn)
}
if clo.Type() == nil || name.Type() == nil {
base.FatalfAt(fn.Pos(), "missing types: %v", fn)
}
if !types.Identical(clo.Type(), name.Type()) {
base.FatalfAt(fn.Pos(), "mismatched types: %v", fn)
}
if base.Flag.W > 1 {
s := fmt.Sprintf("new closure func: %v", fn)
Dump(s, fn)
}
if pkg != nil {
pkg.Decls = append(pkg.Decls, fn)
}
if false && IsTrivialClosure(clo) {
// TODO(mdempsky): Investigate if we can/should optimize this
// case. walkClosure already handles it later, but it could be
// useful to recognize earlier (e.g., it might allow multiple
// inlined calls to a function to share a common trivial closure
// func, rather than cloning it for each inlined call).
}
return clo
}

View File

@@ -358,74 +358,39 @@ func (n *Name) Byval() bool {
return n.Canonical().flags&nameByval != 0
}
// NewClosureVar returns a new closure variable for fn to refer to
// outer variable n.
func NewClosureVar(pos src.XPos, fn *Func, n *Name) *Name {
c := NewNameAt(pos, n.Sym())
c.Curfn = fn
c.Class = PAUTOHEAP
c.SetIsClosureVar(true)
c.Defn = n.Canonical()
c.Outer = n
c.SetType(n.Type())
c.SetTypecheck(n.Typecheck())
fn.ClosureVars = append(fn.ClosureVars, c)
return c
}
// NewHiddenParam returns a new hidden parameter for fn with the given
// name and type.
func NewHiddenParam(pos src.XPos, fn *Func, sym *types.Sym, typ *types.Type) *Name {
if fn.OClosure != nil {
base.FatalfAt(fn.Pos(), "cannot add hidden parameters to closures")
}
fn.SetNeedctxt(true)
// Create a fake parameter, disassociated from any real function, to
// pretend to capture.
fake := NewNameAt(pos, sym)
fake.Class = PPARAM
fake.SetType(typ)
fake.SetByval(true)
return NewClosureVar(pos, fn, fake)
}
// CaptureName returns a Name suitable for referring to n from within function
// fn or from the package block if fn is nil. If n is a free variable declared
// within a function that encloses fn, then CaptureName returns the closure
// variable that refers to n within fn, creating it if necessary.
// Otherwise, it simply returns n.
// within a function that encloses fn, then CaptureName returns a closure
// variable that refers to n and adds it to fn.ClosureVars. Otherwise, it simply
// returns n.
func CaptureName(pos src.XPos, fn *Func, n *Name) *Name {
if n.Op() != ONAME || n.Curfn == nil {
return n // okay to use directly
}
if n.IsClosureVar() {
base.FatalfAt(pos, "misuse of CaptureName on closure variable: %v", n)
}
c := n.Innermost
if c == nil {
c = n
if n.Op() != ONAME || n.Curfn == nil || n.Curfn == fn {
return n // okay to use directly
}
if c.Curfn == fn {
return c
}
if fn == nil {
base.FatalfAt(pos, "package-block reference to %v, declared in %v", n, n.Curfn)
}
c := n.Innermost
if c != nil && c.Curfn == fn {
return c
}
// Do not have a closure var for the active closure yet; make one.
c = NewClosureVar(pos, fn, c)
c = NewNameAt(pos, n.Sym())
c.Curfn = fn
c.Class = PAUTOHEAP
c.SetIsClosureVar(true)
c.Defn = n
// Link into list of active closure variables.
// Popped from list in FinishCaptureNames.
c.Outer = n.Innermost
n.Innermost = c
fn.ClosureVars = append(fn.ClosureVars, c)
return c
}
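CaptureName arranges for at most one closure variable per enclosing function for a given outer variable, chained through Innermost/Outer, so every nesting level shares the same captured storage. The observable behavior that machinery implements (editorial sketch):

package main

import "fmt"

func main() {
	x := 0

	inc := func() {
		x++ // captures x from main by reference
		deeper := func() {
			x += 10 // nested closure: still the same x
		}
		deeper()
	}

	inc()
	fmt.Println(x) // 11: both closure levels mutated the one variable
}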

View File

@@ -159,6 +159,7 @@ const (
OCALLFUNC // X(Args) (function call f(args))
OCALLMETH // X(Args) (direct method call x.Method(args))
OCALLINTER // X(Args) (interface method call x.Method(args))
OCALLPART // X.Sel (method expression x.Method, not called)
OCAP // cap(X)
OCLOSE // close(X)
OCLOSURE // func Type { Func.Closure.Body } (func literal)
@@ -170,7 +171,6 @@ const (
OPTRLIT // &X (X is composite literal)
OCONV // Type(X) (type conversion)
OCONVIFACE // Type(X) (type conversion, to interface)
OCONVIDATA // Builds a data word to store X in an interface. Equivalent to IDATA(CONVIFACE(X)). Is an ir.ConvExpr.
OCONVNOP // Type(X) (type conversion, no effect)
OCOPY // copy(X, Y)
ODCL // var X (declares X of type X.Type)
@@ -237,7 +237,6 @@ const (
OSLICE3ARR // X[Low : High : Max] (X is pointer to array)
OSLICEHEADER // sliceheader{Ptr, Len, Cap} (Ptr is unsafe.Pointer, Len is length, Cap is capacity)
ORECOVER // recover()
ORECOVERFP // recover(Args) w/ explicit FP argument
ORECV // <-X
ORUNESTR // Type(X) (Type is string, X is rune)
OSELRECV2 // like OAS2: Lhs = Rhs where len(Lhs)=2, len(Rhs)=1, Rhs[0].Op = ORECV (appears as .Var of OCASE)
@@ -250,16 +249,14 @@ const (
OSIZEOF // unsafe.Sizeof(X)
OUNSAFEADD // unsafe.Add(X, Y)
OUNSAFESLICE // unsafe.Slice(X, Y)
OMETHEXPR // X(Args) (method expression T.Method(args), first argument is the method receiver)
OMETHVALUE // X.Sel (method expression t.Method, not called)
OMETHEXPR // method expression
// statements
OBLOCK // { List } (block of code)
OBREAK // break [Label]
// OCASE: case List: Body (List==nil means default)
// For OTYPESW, List is a OTYPE node for the specified type (or OLITERAL
// for nil) or an ODYNAMICTYPE indicating a runtime type for generics.
// If a type-switch variable is specified, Var is an
// for nil), and, if a type-switch variable is specified, Rlist is an
// ONAME for the version of the type-switch variable with the specified
// type.
OCASE
@@ -320,30 +317,13 @@ const (
OINLMARK // start of an inlined body, with file/line of caller. Xoffset is an index into the inline tree.
OLINKSYMOFFSET // offset within a name
// opcodes for generics
ODYNAMICDOTTYPE // x = i.(T) where T is a type parameter (or derived from a type parameter)
ODYNAMICDOTTYPE2 // x, ok = i.(T) where T is a type parameter (or derived from a type parameter)
ODYNAMICTYPE // a type node for type switches (represents a dynamic target type for a type switch)
// arch-specific opcodes
OTAILCALL // tail call to another function
OGETG // runtime.getg() (read g pointer)
OGETCALLERPC // runtime.getcallerpc() (continuation PC in caller frame)
OGETCALLERSP // runtime.getcallersp() (stack pointer in caller frame)
OTAILCALL // tail call to another function
OGETG // runtime.getg() (read g pointer)
OEND
)
// IsCmp reports whether op is a comparison operation (==, !=, <, <=,
// >, or >=).
func (op Op) IsCmp() bool {
switch op {
case OEQ, ONE, OLT, OLE, OGT, OGE:
return true
}
return false
}
// Nodes is a pointer to a slice of *Node.
// For fields that are not used in most nodes, this is used instead of
// a slice to save space.
@@ -456,19 +436,18 @@ func (s NameSet) Sorted(less func(*Name, *Name) bool) []*Name {
return res
}
type PragmaFlag uint16
type PragmaFlag int16
const (
// Func pragmas.
Nointerface PragmaFlag = 1 << iota
Noescape // func parameters don't escape
Norace // func must not have race detector annotations
Nosplit // func should not execute on separate stack
Noinline // func should not be inlined
NoCheckPtr // func should not be instrumented by checkptr
CgoUnsafeArgs // treat a pointer to one arg as a pointer to them all
UintptrKeepAlive // pointers converted to uintptr must be kept alive (compiler internal only)
UintptrEscapes // pointers converted to uintptr escape
Nointerface PragmaFlag = 1 << iota
Noescape // func parameters don't escape
Norace // func must not have race detector annotations
Nosplit // func should not execute on separate stack
Noinline // func should not be inlined
NoCheckPtr // func should not be instrumented by checkptr
CgoUnsafeArgs // treat a pointer to one arg as a pointer to them all
UintptrEscapes // pointers converted to uintptr escape
// Runtime-only func pragmas.
// See ../../../../runtime/README.md for detailed descriptions.
@@ -584,7 +563,7 @@ func OuterValue(n Node) Node {
for {
switch nn := n; nn.Op() {
case OXDOT:
base.FatalfAt(n.Pos(), "OXDOT in walk: %v", n)
base.Fatalf("OXDOT in walk")
case ODOT:
nn := nn.(*SelectorExpr)
n = nn.X

View File

@@ -463,62 +463,6 @@ func (n *Decl) editChildren(edit func(Node) Node) {
}
}
func (n *DynamicType) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *DynamicType) copy() Node {
c := *n
c.init = copyNodes(c.init)
return &c
}
func (n *DynamicType) doChildren(do func(Node) bool) bool {
if doNodes(n.init, do) {
return true
}
if n.X != nil && do(n.X) {
return true
}
if n.ITab != nil && do(n.ITab) {
return true
}
return false
}
func (n *DynamicType) editChildren(edit func(Node) Node) {
editNodes(n.init, edit)
if n.X != nil {
n.X = edit(n.X).(Node)
}
if n.ITab != nil {
n.ITab = edit(n.ITab).(Node)
}
}
func (n *DynamicTypeAssertExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *DynamicTypeAssertExpr) copy() Node {
c := *n
c.init = copyNodes(c.init)
return &c
}
func (n *DynamicTypeAssertExpr) doChildren(do func(Node) bool) bool {
if doNodes(n.init, do) {
return true
}
if n.X != nil && do(n.X) {
return true
}
if n.T != nil && do(n.T) {
return true
}
return false
}
func (n *DynamicTypeAssertExpr) editChildren(edit func(Node) Node) {
editNodes(n.init, edit)
if n.X != nil {
n.X = edit(n.X).(Node)
}
if n.T != nil {
n.T = edit(n.T).(Node)
}
}
func (n *ForStmt) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *ForStmt) copy() Node {
c := *n
@@ -1003,22 +947,6 @@ func (n *RangeStmt) editChildren(edit func(Node) Node) {
}
}
func (n *RawOrigExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *RawOrigExpr) copy() Node {
c := *n
c.init = copyNodes(c.init)
return &c
}
func (n *RawOrigExpr) doChildren(do func(Node) bool) bool {
if doNodes(n.init, do) {
return true
}
return false
}
func (n *RawOrigExpr) editChildren(edit func(Node) Node) {
editNodes(n.init, edit)
}
func (n *ResultExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *ResultExpr) copy() Node {
c := *n

View File

@@ -41,18 +41,18 @@ func _() {
_ = x[OCALLFUNC-30]
_ = x[OCALLMETH-31]
_ = x[OCALLINTER-32]
_ = x[OCAP-33]
_ = x[OCLOSE-34]
_ = x[OCLOSURE-35]
_ = x[OCOMPLIT-36]
_ = x[OMAPLIT-37]
_ = x[OSTRUCTLIT-38]
_ = x[OARRAYLIT-39]
_ = x[OSLICELIT-40]
_ = x[OPTRLIT-41]
_ = x[OCONV-42]
_ = x[OCONVIFACE-43]
_ = x[OCONVIDATA-44]
_ = x[OCALLPART-33]
_ = x[OCAP-34]
_ = x[OCLOSE-35]
_ = x[OCLOSURE-36]
_ = x[OCOMPLIT-37]
_ = x[OMAPLIT-38]
_ = x[OSTRUCTLIT-39]
_ = x[OARRAYLIT-40]
_ = x[OSLICELIT-41]
_ = x[OPTRLIT-42]
_ = x[OCONV-43]
_ = x[OCONVIFACE-44]
_ = x[OCONVNOP-45]
_ = x[OCOPY-46]
_ = x[ODCL-47]
@@ -109,72 +109,65 @@ func _() {
_ = x[OSLICE3ARR-98]
_ = x[OSLICEHEADER-99]
_ = x[ORECOVER-100]
_ = x[ORECOVERFP-101]
_ = x[ORECV-102]
_ = x[ORUNESTR-103]
_ = x[OSELRECV2-104]
_ = x[OIOTA-105]
_ = x[OREAL-106]
_ = x[OIMAG-107]
_ = x[OCOMPLEX-108]
_ = x[OALIGNOF-109]
_ = x[OOFFSETOF-110]
_ = x[OSIZEOF-111]
_ = x[OUNSAFEADD-112]
_ = x[OUNSAFESLICE-113]
_ = x[OMETHEXPR-114]
_ = x[OMETHVALUE-115]
_ = x[OBLOCK-116]
_ = x[OBREAK-117]
_ = x[OCASE-118]
_ = x[OCONTINUE-119]
_ = x[ODEFER-120]
_ = x[OFALL-121]
_ = x[OFOR-122]
_ = x[OFORUNTIL-123]
_ = x[OGOTO-124]
_ = x[OIF-125]
_ = x[OLABEL-126]
_ = x[OGO-127]
_ = x[ORANGE-128]
_ = x[ORETURN-129]
_ = x[OSELECT-130]
_ = x[OSWITCH-131]
_ = x[OTYPESW-132]
_ = x[OFUNCINST-133]
_ = x[OTCHAN-134]
_ = x[OTMAP-135]
_ = x[OTSTRUCT-136]
_ = x[OTINTER-137]
_ = x[OTFUNC-138]
_ = x[OTARRAY-139]
_ = x[OTSLICE-140]
_ = x[OINLCALL-141]
_ = x[OEFACE-142]
_ = x[OITAB-143]
_ = x[OIDATA-144]
_ = x[OSPTR-145]
_ = x[OCFUNC-146]
_ = x[OCHECKNIL-147]
_ = x[OVARDEF-148]
_ = x[OVARKILL-149]
_ = x[OVARLIVE-150]
_ = x[ORESULT-151]
_ = x[OINLMARK-152]
_ = x[OLINKSYMOFFSET-153]
_ = x[ODYNAMICDOTTYPE-154]
_ = x[ODYNAMICDOTTYPE2-155]
_ = x[ODYNAMICTYPE-156]
_ = x[OTAILCALL-157]
_ = x[OGETG-158]
_ = x[OGETCALLERPC-159]
_ = x[OGETCALLERSP-160]
_ = x[OEND-161]
_ = x[ORECV-101]
_ = x[ORUNESTR-102]
_ = x[OSELRECV2-103]
_ = x[OIOTA-104]
_ = x[OREAL-105]
_ = x[OIMAG-106]
_ = x[OCOMPLEX-107]
_ = x[OALIGNOF-108]
_ = x[OOFFSETOF-109]
_ = x[OSIZEOF-110]
_ = x[OUNSAFEADD-111]
_ = x[OUNSAFESLICE-112]
_ = x[OMETHEXPR-113]
_ = x[OBLOCK-114]
_ = x[OBREAK-115]
_ = x[OCASE-116]
_ = x[OCONTINUE-117]
_ = x[ODEFER-118]
_ = x[OFALL-119]
_ = x[OFOR-120]
_ = x[OFORUNTIL-121]
_ = x[OGOTO-122]
_ = x[OIF-123]
_ = x[OLABEL-124]
_ = x[OGO-125]
_ = x[ORANGE-126]
_ = x[ORETURN-127]
_ = x[OSELECT-128]
_ = x[OSWITCH-129]
_ = x[OTYPESW-130]
_ = x[OFUNCINST-131]
_ = x[OTCHAN-132]
_ = x[OTMAP-133]
_ = x[OTSTRUCT-134]
_ = x[OTINTER-135]
_ = x[OTFUNC-136]
_ = x[OTARRAY-137]
_ = x[OTSLICE-138]
_ = x[OINLCALL-139]
_ = x[OEFACE-140]
_ = x[OITAB-141]
_ = x[OIDATA-142]
_ = x[OSPTR-143]
_ = x[OCFUNC-144]
_ = x[OCHECKNIL-145]
_ = x[OVARDEF-146]
_ = x[OVARKILL-147]
_ = x[OVARLIVE-148]
_ = x[ORESULT-149]
_ = x[OINLMARK-150]
_ = x[OLINKSYMOFFSET-151]
_ = x[OTAILCALL-152]
_ = x[OGETG-153]
_ = x[OEND-154]
}
const _Op_name = "XXXNAMENONAMETYPEPACKLITERALNILADDSUBORXORADDSTRADDRANDANDAPPENDBYTES2STRBYTES2STRTMPRUNES2STRSTR2BYTESSTR2BYTESTMPSTR2RUNESSLICE2ARRPTRASAS2AS2DOTTYPEAS2FUNCAS2MAPRAS2RECVASOPCALLCALLFUNCCALLMETHCALLINTERCAPCLOSECLOSURECOMPLITMAPLITSTRUCTLITARRAYLITSLICELITPTRLITCONVCONVIFACECONVIDATACONVNOPCOPYDCLDCLFUNCDCLCONSTDCLTYPEDELETEDOTDOTPTRDOTMETHDOTINTERXDOTDOTTYPEDOTTYPE2EQNELTLEGEGTDEREFINDEXINDEXMAPKEYSTRUCTKEYLENMAKEMAKECHANMAKEMAPMAKESLICEMAKESLICECOPYMULDIVMODLSHRSHANDANDNOTNEWNOTBITNOTPLUSNEGORORPANICPRINTPRINTNPARENSENDSLICESLICEARRSLICESTRSLICE3SLICE3ARRSLICEHEADERRECOVERRECOVERFPRECVRUNESTRSELRECV2IOTAREALIMAGCOMPLEXALIGNOFOFFSETOFSIZEOFUNSAFEADDUNSAFESLICEMETHEXPRMETHVALUEBLOCKBREAKCASECONTINUEDEFERFALLFORFORUNTILGOTOIFLABELGORANGERETURNSELECTSWITCHTYPESWFUNCINSTTCHANTMAPTSTRUCTTINTERTFUNCTARRAYTSLICEINLCALLEFACEITABIDATASPTRCFUNCCHECKNILVARDEFVARKILLVARLIVERESULTINLMARKLINKSYMOFFSETDYNAMICDOTTYPEDYNAMICDOTTYPE2DYNAMICTYPETAILCALLGETGGETCALLERPCGETCALLERSPEND"
const _Op_name = "XXXNAMENONAMETYPEPACKLITERALNILADDSUBORXORADDSTRADDRANDANDAPPENDBYTES2STRBYTES2STRTMPRUNES2STRSTR2BYTESSTR2BYTESTMPSTR2RUNESSLICE2ARRPTRASAS2AS2DOTTYPEAS2FUNCAS2MAPRAS2RECVASOPCALLCALLFUNCCALLMETHCALLINTERCALLPARTCAPCLOSECLOSURECOMPLITMAPLITSTRUCTLITARRAYLITSLICELITPTRLITCONVCONVIFACECONVNOPCOPYDCLDCLFUNCDCLCONSTDCLTYPEDELETEDOTDOTPTRDOTMETHDOTINTERXDOTDOTTYPEDOTTYPE2EQNELTLEGEGTDEREFINDEXINDEXMAPKEYSTRUCTKEYLENMAKEMAKECHANMAKEMAPMAKESLICEMAKESLICECOPYMULDIVMODLSHRSHANDANDNOTNEWNOTBITNOTPLUSNEGORORPANICPRINTPRINTNPARENSENDSLICESLICEARRSLICESTRSLICE3SLICE3ARRSLICEHEADERRECOVERRECVRUNESTRSELRECV2IOTAREALIMAGCOMPLEXALIGNOFOFFSETOFSIZEOFUNSAFEADDUNSAFESLICEMETHEXPRBLOCKBREAKCASECONTINUEDEFERFALLFORFORUNTILGOTOIFLABELGORANGERETURNSELECTSWITCHTYPESWFUNCINSTTCHANTMAPTSTRUCTTINTERTFUNCTARRAYTSLICEINLCALLEFACEITABIDATASPTRCFUNCCHECKNILVARDEFVARKILLVARLIVERESULTINLMARKLINKSYMOFFSETTAILCALLGETGEND"
var _Op_index = [...]uint16{0, 3, 7, 13, 17, 21, 28, 31, 34, 37, 39, 42, 48, 52, 58, 64, 73, 85, 94, 103, 115, 124, 136, 138, 141, 151, 158, 165, 172, 176, 180, 188, 196, 205, 208, 213, 220, 227, 233, 242, 250, 258, 264, 268, 277, 286, 293, 297, 300, 307, 315, 322, 328, 331, 337, 344, 352, 356, 363, 371, 373, 375, 377, 379, 381, 383, 388, 393, 401, 404, 413, 416, 420, 428, 435, 444, 457, 460, 463, 466, 469, 472, 475, 481, 484, 487, 493, 497, 500, 504, 509, 514, 520, 525, 529, 534, 542, 550, 556, 565, 576, 583, 592, 596, 603, 611, 615, 619, 623, 630, 637, 645, 651, 660, 671, 679, 688, 693, 698, 702, 710, 715, 719, 722, 730, 734, 736, 741, 743, 748, 754, 760, 766, 772, 780, 785, 789, 796, 802, 807, 813, 819, 826, 831, 835, 840, 844, 849, 857, 863, 870, 877, 883, 890, 903, 917, 932, 943, 951, 955, 966, 977, 980}
var _Op_index = [...]uint16{0, 3, 7, 13, 17, 21, 28, 31, 34, 37, 39, 42, 48, 52, 58, 64, 73, 85, 94, 103, 115, 124, 136, 138, 141, 151, 158, 165, 172, 176, 180, 188, 196, 205, 213, 216, 221, 228, 235, 241, 250, 258, 266, 272, 276, 285, 292, 296, 299, 306, 314, 321, 327, 330, 336, 343, 351, 355, 362, 370, 372, 374, 376, 378, 380, 382, 387, 392, 400, 403, 412, 415, 419, 427, 434, 443, 456, 459, 462, 465, 468, 471, 474, 480, 483, 486, 492, 496, 499, 503, 508, 513, 519, 524, 528, 533, 541, 549, 555, 564, 575, 582, 586, 593, 601, 605, 609, 613, 620, 627, 635, 641, 650, 661, 669, 674, 679, 683, 691, 696, 700, 703, 711, 715, 717, 722, 724, 729, 735, 741, 747, 753, 761, 766, 770, 777, 783, 788, 794, 800, 807, 812, 816, 821, 825, 830, 838, 844, 851, 858, 864, 871, 884, 892, 896, 899}
func (i Op) String() string {
if i >= Op(len(_Op_index)-1) {

View File

@@ -32,4 +32,7 @@ type Package struct {
// Exported (or re-exported) symbols.
Exports []*Name
// Map from function names of stencils to already-created stencils.
Stencils map[*types.Sym]*Func
}
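
The added Stencils field is a cache keyed by function symbol, so each instantiation ("stencil") is created once and then reused. A minimal sketch of the lookup-or-create pattern such a map supports (the Sym/Func types and the getStencil helper are hypothetical, for illustration only):

package main

import "fmt"

// Stand-ins for the compiler's symbol and function types.
type Sym struct{ Name string }
type Func struct{ Sym *Sym }

// stencils caches one instantiated Func per symbol, like Package.Stencils above.
var stencils = map[*Sym]*Func{}

func getStencil(sym *Sym) *Func {
	if f, ok := stencils[sym]; ok {
		return f // already created: reuse it
	}
	f := &Func{Sym: sym}
	stencils[sym] = f
	return f
}

func main() {
	s := &Sym{Name: "min[int]"}
	a, b := getStencil(s), getStencil(s)
	fmt.Println(a == b) // true: the second call hits the cache
}
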


@@ -90,7 +90,7 @@ func (v *bottomUpVisitor) visit(n *Func) uint32 {
if n := n.(*Name); n.Class == PFUNC {
do(n.Defn)
}
case ODOTMETH, OMETHVALUE, OMETHEXPR:
case ODOTMETH, OCALLPART, OMETHEXPR:
if fn := MethodExprName(n); fn != nil {
do(fn.Defn)
}
@@ -116,11 +116,12 @@ func (v *bottomUpVisitor) visit(n *Func) uint32 {
var i int
for i = len(v.stack) - 1; i >= 0; i-- {
x := v.stack[i]
v.nodeID[x] = ^uint32(0)
if x == n {
break
}
v.nodeID[x] = ^uint32(0)
}
v.nodeID[n] = ^uint32(0)
block := v.stack[i:]
// Run escape analysis on this set of functions.
v.stack = v.stack[:i]
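
The loop above is the root-popping step of the bottom-up (Tarjan-style) visitor: once n is known to be the root of a strongly connected component, every entry from the top of the stack down to n is marked fully visited (^uint32(0)), and v.stack[i:] becomes the block handed to escape analysis. A self-contained sketch of that step with a toy node type (the names here are illustrative, not the compiler's):

package main

import "fmt"

type node string

// popComponent marks every node from the top of the stack down to root as
// fully visited and returns that strongly connected component plus the
// remaining stack, mirroring the loop shown above.
func popComponent(stack []node, nodeID map[node]uint32, root node) (block, rest []node) {
	var i int
	for i = len(stack) - 1; i >= 0; i-- {
		x := stack[i]
		nodeID[x] = ^uint32(0) // done: never revisit
		if x == root {
			break
		}
	}
	return stack[i:], stack[:i]
}

func main() {
	stack := []node{"a", "b", "c", "d"}
	ids := map[node]uint32{"a": 1, "b": 2, "c": 3, "d": 4}
	block, rest := popComponent(stack, ids, "b")
	fmt.Println(block, rest) // [b c d] [a]
}
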


@@ -244,7 +244,7 @@ func NewGoDeferStmt(pos src.XPos, op Op, call Node) *GoDeferStmt {
return n
}
// An IfStmt is an if statement: if Init; Cond { Body } else { Else }.
// An IfStmt is an if statement: if Init; Cond { Then } else { Else }.
type IfStmt struct {
miniStmt
Cond Node


@@ -68,4 +68,5 @@ var Pkgs struct {
Go *types.Pkg
Itab *types.Pkg
Runtime *types.Pkg
Unsafe *types.Pkg
}


@@ -300,36 +300,11 @@ func (n *typeNode) CanBeNtype() {}
// TypeNode returns the Node representing the type t.
func TypeNode(t *types.Type) Ntype {
return TypeNodeAt(src.NoXPos, t)
}
// TypeNodeAt is like TypeNode, but allows specifying the position
// information if a new OTYPE needs to be constructed.
//
// Deprecated: Use TypeNode instead. For typical use, the position for
// an anonymous OTYPE node should not matter. However, TypeNodeAt is
// available for use with toolstash -cmp to refactor existing code
// that is sensitive to OTYPE position.
func TypeNodeAt(pos src.XPos, t *types.Type) Ntype {
if n := t.Obj(); n != nil {
if n.Type() != t {
base.Fatalf("type skew: %v has type %v, but expected %v", n, n.Type(), t)
}
return n.(Ntype)
}
return newTypeNode(pos, t)
}
// A DynamicType represents the target type in a type switch.
type DynamicType struct {
miniExpr
X Node // a *runtime._type for the targeted type
ITab Node // for type switches from nonempty interfaces to non-interfaces, this is the itab for that pair.
}
func NewDynamicType(pos src.XPos, x Node) *DynamicType {
n := &DynamicType{X: x}
n.pos = pos
n.op = ODYNAMICTYPE
return n
return newTypeNode(src.NoXPos, t)
}


@@ -66,7 +66,7 @@ func Float64Val(v constant.Value) float64 {
func AssertValidTypeForConst(t *types.Type, v constant.Value) {
if !ValidTypeForConst(t, v) {
base.Fatalf("%v (%v) does not represent %v (%v)", t, t.Kind(), v, v.Kind())
base.Fatalf("%v does not represent %v", t, v)
}
}
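
Both variants of the message report a constant that a type cannot represent; they differ only in whether the kinds of t and v are included. A minimal sketch of the same assert-and-report pattern using go/constant (validForInt is a stand-in predicate, not the compiler's actual check):

package main

import (
	"fmt"
	"go/constant"
	"log"
)

// validForInt is a stand-in: a real check would compare the constant's
// kind (and range) against the target type.
func validForInt(v constant.Value) bool {
	return v.Kind() == constant.Int
}

// assertValidIntConst mirrors the assert-and-report shape above,
// printing both the offending value and its kind.
func assertValidIntConst(v constant.Value) {
	if !validForInt(v) {
		log.Fatalf("int does not represent %v (%v)", v, v.Kind())
	}
}

func main() {
	assertValidIntConst(constant.MakeInt64(42)) // ok
	fmt.Println("42 is a valid int constant")
	assertValidIntConst(constant.MakeString("x")) // exits: kind mismatch
}
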


@@ -1082,10 +1082,6 @@ func (lv *liveness) showlive(v *ssa.Value, live bitvec.BitVec) {
if base.Flag.Live == 0 || ir.FuncName(lv.fn) == "init" || strings.HasPrefix(ir.FuncName(lv.fn), ".") {
return
}
if lv.fn.Wrapper() || lv.fn.Dupok() {
// Skip reporting liveness information for compiler-generated wrappers.
return
}
if !(v == nil || v.Op.IsCall()) {
// Historically we only printed this information at
// calls. Keep doing so.


@@ -209,7 +209,7 @@ func s15a8(x *[15]int64) [15]int64 {
want(t, slogged, `{"range":{"start":{"line":11,"character":6},"end":{"line":11,"character":6}},"severity":3,"code":"isInBounds","source":"go compiler","message":""}`)
want(t, slogged, `{"range":{"start":{"line":7,"character":6},"end":{"line":7,"character":6}},"severity":3,"code":"canInlineFunction","source":"go compiler","message":"cost: 35"}`)
// escape analysis explanation
want(t, slogged, `{"range":{"start":{"line":7,"character":13},"end":{"line":7,"character":13}},"severity":3,"code":"leak","source":"go compiler","message":"parameter z leaks to ~r0 with derefs=0",`+
want(t, slogged, `{"range":{"start":{"line":7,"character":13},"end":{"line":7,"character":13}},"severity":3,"code":"leak","source":"go compiler","message":"parameter z leaks to ~r2 with derefs=0",`+
`"relatedInformation":[`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":13},"end":{"line":9,"character":13}}},"message":"escflow: flow: y = z:"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":13},"end":{"line":9,"character":13}}},"message":"escflow: from y := z (assign-pair)"},`+
@@ -220,8 +220,8 @@ func s15a8(x *[15]int64) [15]int64 {
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":13},"end":{"line":9,"character":13}}},"message":"escflow: from \u0026y.b (address-of)"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":4,"character":9},"end":{"line":4,"character":9}}},"message":"inlineLoc"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":13},"end":{"line":9,"character":13}}},"message":"escflow: from ~R0 = \u0026y.b (assign-pair)"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":3},"end":{"line":9,"character":3}}},"message":"escflow: flow: ~r0 = ~R0:"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":3},"end":{"line":9,"character":3}}},"message":"escflow: from return ~R0 (return)"}]}`)
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":3},"end":{"line":9,"character":3}}},"message":"escflow: flow: ~r2 = ~R0:"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":3},"end":{"line":9,"character":3}}},"message":"escflow: from return (*int)(~R0) (return)"}]}`)
})
}
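
The strings being asserted above are LSP-style diagnostic records (range, severity, code, source, message, plus relatedInformation). A small sketch of decoding one of them, with struct names chosen here purely for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative structs mirroring the fields asserted above;
// the names are chosen for this sketch, not taken from the compiler.
type Position struct {
	Line      int `json:"line"`
	Character int `json:"character"`
}

type Range struct {
	Start Position `json:"start"`
	End   Position `json:"end"`
}

type Diagnostic struct {
	Range    Range  `json:"range"`
	Severity int    `json:"severity"`
	Code     string `json:"code"`
	Source   string `json:"source"`
	Message  string `json:"message"`
}

func main() {
	line := `{"range":{"start":{"line":7,"character":6},"end":{"line":7,"character":6}},` +
		`"severity":3,"code":"canInlineFunction","source":"go compiler","message":"cost: 35"}`
	var d Diagnostic
	if err := json.Unmarshal([]byte(line), &d); err != nil {
		panic(err)
	}
	fmt.Println(d.Code, d.Range.Start.Line, d.Message) // canInlineFunction 7 cost: 35
}
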
