mirror of https://github.com/golang/go.git
synced 2026-01-30 07:32:05 +03:00

Compare commits
13 Commits: go1.8rc2 ... dev.garbage
| SHA1 |
|---|
| 1d4942afe0 |
| 8b25a00e6d |
| 42afbd9e63 |
| f9f6c90ed1 |
| 69161e279e |
| fb4c718209 |
| 81b74bf9c5 |
| edb54c300f |
| 312aa09996 |
| d491e550c3 |
| 641c32dafa |
| 2e495a1df6 |
| 344476d23c |
README.md (54 lines changed)
@@ -5,37 +5,39 @@ reliable, and efficient software.

For documentation about how to install and use Go,
visit https://golang.org/ or load doc/install-source.html
in your web browser.

Our canonical Git repository is located at https://go.googlesource.com/go.
There is a mirror of the repository at https://github.com/golang/go.

Unless otherwise noted, the Go source files are distributed under the
BSD-style license found in the LICENSE file.

### Download and Install

#### Binary Distributions

Official binary distributions are available at https://golang.org/dl/.

After downloading a binary release, visit https://golang.org/doc/install
or load doc/install.html in your web browser for installation
instructions.

#### Install From Source

If a binary distribution is not available for your combination of
operating system and architecture, visit
https://golang.org/doc/install/source or load doc/install-source.html
in your web browser for source installation instructions.

### Contributing

Go is the work of hundreds of contributors. We appreciate your help!

To contribute, please read the contribution guidelines:
https://golang.org/doc/contribute.html

Note that the Go project does not use GitHub pull requests, and that
we use the issue tracker for bug reports and proposals only. See
https://golang.org/wiki/Questions for a list of places to ask
questions about the Go language.

##### Note that we do not accept pull requests and that we use the issue tracker for bug reports and proposals only. Please ask questions on https://forum.golangbridge.org or https://groups.google.com/forum/#!forum/golang-nuts.

Unless otherwise noted, the Go source files are distributed
under the BSD-style license found in the LICENSE file.

--

## Binary Distribution Notes

If you have just untarred a binary Go distribution, you need to set
the environment variable $GOROOT to the full path of the go
directory (the one containing this file). You can omit the
variable if you unpack it into /usr/local/go, or if you rebuild
from sources by running all.bash (see doc/install-source.html).
You should also add the Go binary directory $GOROOT/bin
to your shell's path.

For example, if you extracted the tar file into $HOME/go, you might
put the following in your .profile:

    export GOROOT=$HOME/go
    export PATH=$PATH:$GOROOT/bin

See https://golang.org/doc/install or doc/install.html for more details.
@@ -698,7 +698,7 @@ These files will be periodically updated based on the commit logs.

<p>Code that you contribute should use the standard copyright header:</p>

<pre>
// Copyright 2017 The Go Authors. All rights reserved.
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
</pre>

@@ -4,8 +4,7 @@
}-->

<p><i>
This applies to the standard toolchain (the <code>gc</code> Go
compiler and tools). Gccgo has native gdb support.
This applies to the <code>gc</code> toolchain. Gccgo has native gdb support.
Besides this overview you might want to consult the
<a href="http://sourceware.org/gdb/current/onlinedocs/gdb/">GDB manual</a>.
</i></p>

@@ -50,14 +49,6 @@ when debugging, pass the flags <code>-gcflags "-N -l"</code> to the
debugged.
</p>

<p>
If you want to use gdb to inspect a core dump, you can trigger a dump
on a program crash, on systems that permit it, by setting
<code>GOTRACEBACK=crash</code> in the environment (see the
<a href="/pkg/runtime/#hdr-Environment_Variables">runtime package
documentation</a> for more info).
</p>
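As a hedged aside, the same crash-style behavior that paragraph describes can also be requested from inside a program through runtime/debug.SetTraceback; a minimal sketch (the panic message is illustrative):

    package main

    import "runtime/debug"

    func main() {
        // Equivalent to setting GOTRACEBACK=crash for this process:
        // a fatal panic dumps all goroutines and then aborts, letting
        // the OS write a core file on systems that permit it.
        debug.SetTraceback("crash")
        panic("boom")
    }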
<h3 id="Common_Operations">Common Operations</h3>

<ul>

@@ -139,7 +130,7 @@ the DWARF code.

<p>
If you're interested in what the debugging information looks like, run
'<code>objdump -W a.out</code>' and browse through the <code>.debug_*</code>
'<code>objdump -W 6.out</code>' and browse through the <code>.debug_*</code>
sections.
</p>

@@ -386,9 +377,7 @@ $3 = struct hchan<*testing.T>
</pre>

<p>
That <code>struct hchan<*testing.T></code> is the
runtime-internal representation of a channel. It is currently empty,
or gdb would have pretty-printed its contents.
That <code>struct hchan<*testing.T></code> is the runtime-internal representation of a channel. It is currently empty, or gdb would have pretty-printed it's contents.
</p>

<p>

@@ -93,8 +93,7 @@ On OpenBSD, Go now requires OpenBSD 5.9 or later. <!-- CL 34093 -->
<p>
The Plan 9 port's networking support is now much more complete
and matches the behavior of Unix and Windows with respect to deadlines
and cancelation. For Plan 9 kernel requirements, see the
<a href="https://golang.org/wiki/Plan9">Plan 9 wiki page</a>.
and cancelation.
</p>

<p>

@@ -809,6 +808,11 @@ Optimizations and minor bug fixes are not listed.

<dl id="crypto_x509"><dt><a href="/pkg/crypto/x509/">crypto/x509</a></dt>
<dd>
<p> <!-- CL 30578 -->
<a href="/pkg/crypto/x509/#SystemCertPool"><code>SystemCertPool</code></a>
is now implemented on Windows.
</p>

<p> <!-- CL 24743 -->
PSS signatures are now supported.
</p>

@@ -1613,9 +1617,9 @@ crypto/x509: return error for missing SerialNumber (CL 27238)
June 31 and July 32.
</p>

<p> <!-- CL 33029 --> <!-- CL 34816 -->
<p> <!-- CL 33029 -->
The <code>tzdata</code> database has been updated to version
2016j for systems that don't already have a local time zone
2016i for systems that don't already have a local time zone
database.
</p>
@@ -250,7 +250,7 @@ $ <b>cd $HOME/go/src/hello</b>
$ <b>go build</b>
</pre>

<pre class="testWindows">
<pre class="testWindows" style="display: none">
C:\> <b>cd %USERPROFILE%\go\src\hello</b>
C:\Users\Gopher\go\src\hello> <b>go build</b>
</pre>

@@ -267,7 +267,7 @@ $ <b>./hello</b>
hello, world
</pre>

<pre class="testWindows">
<pre class="testWindows" style="display: none">
C:\Users\Gopher\go\src\hello> <b>hello</b>
hello, world
</pre>

@@ -1,46 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package main

import (
    "iface_i"
    "log"
    "plugin"
)

func main() {
    a, err := plugin.Open("iface_a.so")
    if err != nil {
        log.Fatalf(`plugin.Open("iface_a.so"): %v`, err)
    }
    b, err := plugin.Open("iface_b.so")
    if err != nil {
        log.Fatalf(`plugin.Open("iface_b.so"): %v`, err)
    }

    af, err := a.Lookup("F")
    if err != nil {
        log.Fatalf(`a.Lookup("F") failed: %v`, err)
    }
    bf, err := b.Lookup("F")
    if err != nil {
        log.Fatalf(`b.Lookup("F") failed: %v`, err)
    }
    if af.(func() interface{})() != bf.(func() interface{})() {
        panic("empty interfaces not equal")
    }

    ag, err := a.Lookup("G")
    if err != nil {
        log.Fatalf(`a.Lookup("G") failed: %v`, err)
    }
    bg, err := b.Lookup("G")
    if err != nil {
        log.Fatalf(`b.Lookup("G") failed: %v`, err)
    }
    if ag.(func() iface_i.I)() != bg.(func() iface_i.I)() {
        panic("nonempty interfaces not equal")
    }
}
@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package main

import "iface_i"

//go:noinline
func F() interface{} {
    return (*iface_i.T)(nil)
}

//go:noinline
func G() iface_i.I {
    return (*iface_i.T)(nil)
}

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package main

import "iface_i"

//go:noinline
func F() interface{} {
    return (*iface_i.T)(nil)
}

//go:noinline
func G() iface_i.I {
    return (*iface_i.T)(nil)
}

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package iface_i

type I interface {
    M()
}

type T struct {
}

func (t *T) M() {
}

// *T implements I

@@ -1,13 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package dynamodbstreamsevt

import "encoding/json"

var foo json.RawMessage

type Event struct{}

func (e *Event) Dummy() {}

@@ -1,31 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// The bug happened like this:
// 1) The main binary adds an itab for *json.UnsupportedValueError / error
// (concrete type / interface type). This itab goes in hash bucket 0x111.
// 2) The plugin adds that same itab again. That makes a cycle in the itab
// chain rooted at hash bucket 0x111.
// 3) The main binary then asks for the itab for *dynamodbstreamsevt.Event /
// json.Unmarshaler. This itab happens to also live in bucket 0x111.
// The lookup code goes into an infinite loop searching for this itab.
// The code is carefully crafted so that the two itabs are both from the
// same bucket, and so that the second itab doesn't exist in
// the itab hashmap yet (so the entire linked list must be searched).
package main

import (
    "encoding/json"
    "issue18676/dynamodbstreamsevt"
    "plugin"
)

func main() {
    plugin.Open("plugin.so")

    var x interface{} = (*dynamodbstreamsevt.Event)(nil)
    if _, ok := x.(json.Unmarshaler); !ok {
        println("something")
    }
}
@@ -1,11 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package main

import "C"

import "issue18676/dynamodbstreamsevt"

func F(evt *dynamodbstreamsevt.Event) {}

@@ -15,8 +15,8 @@ goos=$(go env GOOS)
goarch=$(go env GOARCH)

function cleanup() {
    rm -f plugin*.so unnamed*.so iface*.so
    rm -rf host pkg sub iface issue18676
    rm -f plugin*.so unnamed*.so
    rm -rf host pkg sub
}
trap cleanup EXIT

@@ -32,15 +32,3 @@ GOPATH=$(pwd) go build -buildmode=plugin unnamed2.go
GOPATH=$(pwd) go build host

LD_LIBRARY_PATH=$(pwd) ./host

# Test that types and itabs get properly uniqified.
GOPATH=$(pwd) go build -buildmode=plugin iface_a
GOPATH=$(pwd) go build -buildmode=plugin iface_b
GOPATH=$(pwd) go build iface
LD_LIBRARY_PATH=$(pwd) ./iface

# Test for issue 18676 - make sure we don't add the same itab twice.
# The buggy code hangs forever, so use a timeout to check for that.
GOPATH=$(pwd) go build -buildmode=plugin -o plugin.so src/issue18676/plugin.go
GOPATH=$(pwd) go build -o issue18676 src/issue18676/main.go
timeout 10s ./issue18676
@@ -815,14 +815,3 @@ func TestImplicitInclusion(t *testing.T) {
    goCmd(t, "install", "-linkshared", "implicitcmd")
    run(t, "running executable linked against library that contains same package as it", "./bin/implicitcmd")
}

// Tests to make sure that the type fields of empty interfaces and itab
// fields of nonempty interfaces are unique even across modules,
// so that interface equality works correctly.
func TestInterface(t *testing.T) {
    goCmd(t, "install", "-buildmode=shared", "-linkshared", "iface_a")
    // Note: iface_i gets installed implicitly as a dependency of iface_a.
    goCmd(t, "install", "-buildmode=shared", "-linkshared", "iface_b")
    goCmd(t, "install", "-linkshared", "iface")
    run(t, "running type/itab uniqueness tester", "./bin/iface")
}

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package main

import "iface_a"
import "iface_b"

func main() {
    if iface_a.F() != iface_b.F() {
        panic("empty interfaces not equal")
    }
    if iface_a.G() != iface_b.G() {
        panic("non-empty interfaces not equal")
    }
}

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package iface_a

import "iface_i"

//go:noinline
func F() interface{} {
    return (*iface_i.T)(nil)
}

//go:noinline
func G() iface_i.I {
    return (*iface_i.T)(nil)
}

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package iface_b

import "iface_i"

//go:noinline
func F() interface{} {
    return (*iface_i.T)(nil)
}

//go:noinline
func G() iface_i.I {
    return (*iface_i.T)(nil)
}

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package iface_i

type I interface {
    M()
}

type T struct {
}

func (t *T) M() {
}

// *T implements I
@@ -99,7 +99,7 @@ func main() {
    // Approximately 1 in a 100 binaries fail to start. If it happens,
    // try again. These failures happen for several reasons beyond
    // our control, but all of them are safe to retry as they happen
    // before lldb encounters the initial SIGUSR2 stop. As we
    // before lldb encounters the initial getwd breakpoint. As we
    // know the tests haven't started, we are not hiding flaky tests
    // with this retry.
    for i := 0; i < 5; i++ {

@@ -204,11 +204,6 @@ func run(bin string, args []string) (err error) {
    var opts options
    opts, args = parseArgs(args)

    // Pass the suffix for the current working directory as the
    // first argument to the test. For iOS, cmd/go generates
    // special handling of this argument.
    args = append([]string{"cwdSuffix=" + pkgpath}, args...)

    // ios-deploy invokes lldb to give us a shell session with the app.
    s, err := newSession(appdir, args, opts)
    if err != nil {

@@ -229,7 +224,6 @@ func run(bin string, args []string) (err error) {
    s.do(`process handle SIGHUP --stop false --pass true --notify false`)
    s.do(`process handle SIGPIPE --stop false --pass true --notify false`)
    s.do(`process handle SIGUSR1 --stop false --pass true --notify false`)
    s.do(`process handle SIGUSR2 --stop true --pass false --notify true`) // sent by test harness
    s.do(`process handle SIGCONT --stop false --pass true --notify false`)
    s.do(`process handle SIGSEGV --stop false --pass true --notify false`) // does not work
    s.do(`process handle SIGBUS --stop false --pass true --notify false`)  // does not work

@@ -242,9 +236,20 @@ func run(bin string, args []string) (err error) {
        return nil
    }

    s.do(`breakpoint set -n getwd`) // in runtime/cgo/gcc_darwin_arm.go

    started = true

    s.doCmd("run", "stop reason = signal SIGUSR2", 20*time.Second)
    s.doCmd("run", "stop reason = breakpoint", 20*time.Second)

    // Move the current working directory into the faux gopath.
    if pkgpath != "src" {
        s.do(`breakpoint delete 1`)
        s.do(`expr char* $mem = (char*)malloc(512)`)
        s.do(`expr $mem = (char*)getwd($mem, 512)`)
        s.do(`expr $mem = (char*)strcat($mem, "/` + pkgpath + `")`)
        s.do(`call (void)chdir($mem)`)
    }

    startTestsLen := s.out.Len()
    fmt.Fprintln(s.in, `process continue`)

@@ -515,11 +520,13 @@ func copyLocalData(dstbase string) (pkgpath string, err error) {

    // Copy timezone file.
    //
    // Apps have the zoneinfo.zip in the root of their app bundle,
    // Typical apps have the zoneinfo.zip in the root of their app bundle,
    // read by the time package as the working directory at initialization.
    // As we move the working directory to the GOROOT pkg directory, we
    // install the zoneinfo.zip file in the pkgpath.
    if underGoRoot {
        err := cp(
            dstbase,
            filepath.Join(dstbase, pkgpath),
            filepath.Join(cwd, "lib", "time", "zoneinfo.zip"),
        )
        if err != nil {
@@ -303,9 +303,7 @@ func genhash(sym *Sym, t *Type) {
    typecheckslice(fn.Nbody.Slice(), Etop)
    Curfn = nil
    popdcl()
    if debug_dclstack != 0 {
        testdclstack()
    }
    testdclstack()

    // Disable safemode while compiling this code: the code we
    // generate internally can refer to unsafe.Pointer.

@@ -495,9 +493,7 @@ func geneq(sym *Sym, t *Type) {
    typecheckslice(fn.Nbody.Slice(), Etop)
    Curfn = nil
    popdcl()
    if debug_dclstack != 0 {
        testdclstack()
    }
    testdclstack()

    // Disable safemode while compiling this code: the code we
    // generate internally can refer to unsafe.Pointer.

@@ -217,9 +217,7 @@ func Import(in *bufio.Reader) {
    typecheckok = tcok
    resumecheckwidth()

    if debug_dclstack != 0 {
        testdclstack()
    }
    testdclstack() // debugging only
}

func formatErrorf(format string, args ...interface{}) {

@@ -30,12 +30,11 @@ var (
)

var (
    Debug_append   int
    Debug_closure  int
    debug_dclstack int
    Debug_panic    int
    Debug_slice    int
    Debug_wb       int
    Debug_append  int
    Debug_closure int
    Debug_panic   int
    Debug_slice   int
    Debug_wb      int
)

// Debug arguments.

@@ -49,7 +48,6 @@ var debugtab = []struct {
    {"append", &Debug_append},         // print information about append compilation
    {"closure", &Debug_closure},       // print information about closure compilation
    {"disablenil", &disable_checknil}, // disable nil checks
    {"dclstack", &debug_dclstack},     // run internal dclstack checks
    {"gcprog", &Debug_gcprog},         // print dump of GC programs
    {"nil", &Debug_checknil},          // print information about nil checks
    {"panic", &Debug_panic},           // do not hide any compiler panic

@@ -327,6 +325,7 @@ func Main() {
    timings.Stop()
    timings.AddEvent(int64(lexlineno-lexlineno0), "lines")

    testdclstack()
    mkpackage(localpkg.Name) // final import not used checks
    finishUniverse()
@@ -34,7 +34,6 @@ func parseFile(filename string) {
    }

    if nsyntaxerrors == 0 {
        // Always run testdclstack here, even when debug_dclstack is not set, as a sanity measure.
        testdclstack()
    }
}

@@ -1833,9 +1833,7 @@ func genwrapper(rcvr *Type, method *Field, newnam *Sym, iface int) {
    funcbody(fn)
    Curfn = fn
    popdcl()
    if debug_dclstack != 0 {
        testdclstack()
    }
    testdclstack()

    // wrappers where T is anonymous (struct or interface) can be duplicated.
    if rcvr.IsStruct() || rcvr.IsInterface() || rcvr.IsPtr() && rcvr.Elem().IsStruct() {

@@ -3117,12 +3117,12 @@ func walkcompare(n *Node, init *Nodes) *Node {
        cmpr = cmpr.Left
    }

    if !islvalue(cmpl) || !islvalue(cmpr) {
        Fatalf("arguments of comparison must be lvalues - %v %v", cmpl, cmpr)
    }

    // Chose not to inline. Call equality function directly.
    if !inline {
        if !islvalue(cmpl) || !islvalue(cmpr) {
            Fatalf("arguments of comparison must be lvalues - %v %v", cmpl, cmpr)
        }

        // eq algs take pointers
        pl := temp(ptrto(t))
        al := nod(OAS, pl, nod(OADDR, cmpl, nil))

@@ -35,7 +35,7 @@ func writebarrier(f *Func) {
valueLoop:
    for i, v := range b.Values {
        switch v.Op {
        case OpStoreWB, OpMoveWB, OpMoveWBVolatile, OpZeroWB:
        case OpStoreWB, OpMoveWB, OpMoveWBVolatile:
            if IsStackAddr(v.Args[0]) {
                switch v.Op {
                case OpStoreWB:
@@ -20,10 +20,11 @@ import (
var cmdBug = &Command{
    Run:       runBug,
    UsageLine: "bug",
    Short:     "start a bug report",
    Short:     "print information for bug reports",
    Long: `
Bug opens the default browser and starts a new bug report.
The report includes useful system information.
Bug prints information that helps file effective bug reports.

Bugs may be reported at https://golang.org/issue/new.
`,
}

@@ -894,13 +894,9 @@ func (b *builder) test(p *Package) (buildAction, runAction, printAction *action,

    if buildContext.GOOS == "darwin" {
        if buildContext.GOARCH == "arm" || buildContext.GOARCH == "arm64" {
            t.IsIOS = true
            t.NeedOS = true
            t.NeedCgo = true
        }
    }
    if t.TestMain == nil {
        t.NeedOS = true
    }

    for _, cp := range pmain.imports {
        if len(cp.coverVars) > 0 {

@@ -1347,8 +1343,7 @@ type testFuncs struct {
    NeedTest    bool
    ImportXtest bool
    NeedXtest   bool
    NeedOS      bool
    IsIOS       bool
    NeedCgo     bool
    Cover       []coverInfo
}

@@ -1449,7 +1444,7 @@ var testmainTmpl = template.Must(template.New("main").Parse(`
package main

import (
{{if .NeedOS}}
{{if not .TestMain}}
    "os"
{{end}}
    "testing"

@@ -1465,10 +1460,8 @@ import (
    _cover{{$i}} {{$p.Package.ImportPath | printf "%q"}}
{{end}}

{{if .IsIOS}}
    "os/signal"
{{if .NeedCgo}}
    _ "runtime/cgo"
    "syscall"
{{end}}
)

@@ -1530,32 +1523,6 @@ func coverRegisterFile(fileName string, counter []uint32, pos []uint32, numStmts
{{end}}

func main() {
{{if .IsIOS}}
    // Send a SIGUSR2, which will be intercepted by LLDB to
    // tell the test harness that installation was successful.
    // See misc/ios/go_darwin_arm_exec.go.
    signal.Notify(make(chan os.Signal), syscall.SIGUSR2)
    syscall.Kill(0, syscall.SIGUSR2)
    signal.Reset(syscall.SIGUSR2)

    // The first argument supplied to an iOS test is an offset
    // suffix for the current working directory.
    // Process it here, and remove it from os.Args.
    const hdr = "cwdSuffix="
    if len(os.Args) < 2 || len(os.Args[1]) <= len(hdr) || os.Args[1][:len(hdr)] != hdr {
        panic("iOS test not passed a working directory suffix")
    }
    suffix := os.Args[1][len(hdr):]
    dir, err := os.Getwd()
    if err != nil {
        panic(err)
    }
    if err := os.Chdir(dir + "/" + suffix); err != nil {
        panic(err)
    }
    os.Args = append([]string{os.Args[0]}, os.Args[2:]...)
{{end}}

{{if .CoverEnabled}}
    testing.RegisterCover(testing.Cover{
        Mode: {{printf "%q" .CoverMode}},
@@ -168,7 +168,7 @@ func container(s *Symbol) int {
    if s == nil {
        return 0
    }
    if Buildmode == BuildmodePlugin && Headtype == obj.Hdarwin && onlycsymbol(s) {
    if Buildmode == BuildmodePlugin && onlycsymbol(s) {
        return 1
    }
    // We want to generate func table entries only for the "lowest level" symbols,

@@ -204,6 +204,12 @@ func TestMTF(t *testing.T) {
    }
}

var (
    digits = mustLoadFile("testdata/e.txt.bz2")
    twain  = mustLoadFile("testdata/Mark.Twain-Tom.Sawyer.txt.bz2")
    random = mustLoadFile("testdata/random.data.bz2")
)

func benchmarkDecode(b *testing.B, compressed []byte) {
    // Determine the uncompressed size of testfile.
    uncompressedSize, err := io.Copy(ioutil.Discard, NewReader(bytes.NewReader(compressed)))

@@ -221,18 +227,6 @@ func benchmarkDecode(b *testing.B, compressed []byte) {
    }
}

func BenchmarkDecodeDigits(b *testing.B) {
    digits := mustLoadFile("testdata/e.txt.bz2")
    b.ResetTimer()
    benchmarkDecode(b, digits)
}
func BenchmarkDecodeTwain(b *testing.B) {
    twain := mustLoadFile("testdata/Mark.Twain-Tom.Sawyer.txt.bz2")
    b.ResetTimer()
    benchmarkDecode(b, twain)
}
func BenchmarkDecodeRand(b *testing.B) {
    random := mustLoadFile("testdata/random.data.bz2")
    b.ResetTimer()
    benchmarkDecode(b, random)
}
func BenchmarkDecodeDigits(b *testing.B) { benchmarkDecode(b, digits) }
func BenchmarkDecodeTwain(b *testing.B)  { benchmarkDecode(b, twain) }
func BenchmarkDecodeRand(b *testing.B)   { benchmarkDecode(b, random) }

@@ -136,17 +136,14 @@ func (d *compressor) fillDeflate(b []byte) int {
    delta := d.hashOffset - 1
    d.hashOffset -= delta
    d.chainHead -= delta

    // Iterate over slices instead of arrays to avoid copying
    // the entire table onto the stack (Issue #18625).
    for i, v := range d.hashPrev[:] {
    for i, v := range d.hashPrev {
        if int(v) > delta {
            d.hashPrev[i] = uint32(int(v) - delta)
        } else {
            d.hashPrev[i] = 0
        }
    }
    for i, v := range d.hashHead[:] {
    for i, v := range d.hashHead {
        if int(v) > delta {
            d.hashHead[i] = uint32(int(v) - delta)
        } else {
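The comment in that hunk is the whole story of Issue #18625: ranging over an array value with a value variable copies the array operand, while ranging over a slice copies only the slice header. A minimal standalone sketch of the difference (the table size is illustrative, not the compressor's real one):

    package main

    import "fmt"

    func main() {
        var table [1 << 15]uint32 // large array, like the compressor's hash tables

        // Ranging over table[:] iterates through a slice header; nothing is copied.
        // Ranging over table itself would first copy the whole 128 KiB array.
        var sum uint32
        for _, v := range table[:] {
            sum += v
        }
        fmt.Println(sum)
    }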
@@ -12,7 +12,6 @@ import (
    "io"
    "io/ioutil"
    "reflect"
    "runtime/debug"
    "sync"
    "testing"
)

@@ -865,33 +864,3 @@ func TestBestSpeedMaxMatchOffset(t *testing.T) {
        }
    }
}

func TestMaxStackSize(t *testing.T) {
    // This test must not run in parallel with other tests as debug.SetMaxStack
    // affects all goroutines.
    n := debug.SetMaxStack(1 << 16)
    defer debug.SetMaxStack(n)

    var wg sync.WaitGroup
    defer wg.Wait()

    b := make([]byte, 1<<20)
    for level := HuffmanOnly; level <= BestCompression; level++ {
        // Run in separate goroutine to increase probability of stack regrowth.
        wg.Add(1)
        go func(level int) {
            defer wg.Done()
            zw, err := NewWriter(ioutil.Discard, level)
            if err != nil {
                t.Errorf("level %d, NewWriter() = %v, want nil", level, err)
            }
            if n, err := zw.Write(b); n != len(b) || err != nil {
                t.Errorf("level %d, Write() = (%d, %v), want (%d, nil)", level, n, err, len(b))
            }
            if err := zw.Close(); err != nil {
                t.Errorf("level %d, Close() = %v, want nil", level, err)
            }
            zw.Reset(ioutil.Discard)
        }(level)
    }
}

@@ -60,7 +60,7 @@ func newDeflateFast() *deflateFast {
func (e *deflateFast) encode(dst []token, src []byte) []token {
    // Ensure that e.cur doesn't wrap.
    if e.cur > 1<<30 {
        e.resetAll()
        *e = deflateFast{cur: maxStoreBlockSize, prev: e.prev[:0]}
    }

    // This check isn't in the Snappy implementation, but there, the caller

@@ -265,21 +265,6 @@ func (e *deflateFast) reset() {

    // Protect against e.cur wraparound.
    if e.cur > 1<<30 {
        e.resetAll()
    }
}

// resetAll resets the deflateFast struct and is only called in rare
// situations to prevent integer overflow. It manually resets each field
// to avoid causing large stack growth.
//
// See https://golang.org/issue/18636.
func (e *deflateFast) resetAll() {
    // This is equivalent to:
    // *e = deflateFast{cur: maxStoreBlockSize, prev: e.prev[:0]}
    e.cur = maxStoreBlockSize
    e.prev = e.prev[:0]
    for i := range e.table {
        e.table[i] = tableEntry{}
        *e = deflateFast{cur: maxStoreBlockSize, prev: e.prev[:0]}
    }
}
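The resetAll comment above describes the companion fix for Issue 18636: assigning a composite literal to *e materializes the whole struct, including its large array field, as a temporary, which forces a big stack frame. A hedged sketch of the same pattern with illustrative names (fastEnc is not the real deflateFast type):

    type fastEnc struct {
        cur   int
        prev  []byte
        table [1 << 15]uint32
    }

    func (e *fastEnc) resetAll() {
        // Equivalent in effect to *e = fastEnc{prev: e.prev[:0]},
        // but zeroing the array in place avoids building a huge
        // temporary struct value on the stack.
        e.cur = 0
        e.prev = e.prev[:0]
        for i := range e.table {
            e.table[i] = 0
        }
    }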
@@ -9,17 +9,11 @@ import (
    "testing"
)

// TestGZIPFilesHaveZeroMTimes checks that every .gz file in the tree
// has a zero MTIME. This is a requirement for the Debian maintainers
// to be able to have deterministic packages.
//
// See https://golang.org/issue/14937.
// Per golang.org/issue/14937, check that every .gz file
// in the tree has a zero mtime.
func TestGZIPFilesHaveZeroMTimes(t *testing.T) {
    // To avoid spurious false positives due to untracked GZIP files that
    // may be in the user's GOROOT (Issue 18604), we only run this test on
    // the builders, which should have a clean checkout of the tree.
    if testenv.Builder() == "" {
        t.Skip("skipping test on non-builder")
    if testing.Short() && testenv.Builder() == "" {
        t.Skip("skipping in short mode")
    }
    goroot, err := filepath.EvalSymlinks(runtime.GOROOT())
    if err != nil {

@@ -95,7 +95,7 @@ func TestSignAndVerify(t *testing.T) {
func TestSigningWithDegenerateKeys(t *testing.T) {
    // Signing with degenerate private keys should not cause an infinite
    // loop.
    badKeys := []struct {
    badKeys := []struct{
        p, q, g, y, x string
    }{
        {"00", "01", "00", "00", "00"},

@@ -105,7 +105,7 @@ func TestSigningWithDegenerateKeys(t *testing.T) {
    for i, test := range badKeys {
        priv := PrivateKey{
            PublicKey: PublicKey{
                Parameters: Parameters{
                Parameters: Parameters {
                    P: fromHex(test.p),
                    Q: fromHex(test.q),
                    G: fromHex(test.g),

@@ -84,15 +84,15 @@ var cipherSuites = []*cipherSuite{
    {TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, 16, 0, 4, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12, nil, nil, aeadAESGCM},
    {TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, 32, 0, 4, ecdheRSAKA, suiteECDHE | suiteTLS12 | suiteSHA384, nil, nil, aeadAESGCM},
    {TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, 32, 0, 4, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12 | suiteSHA384, nil, nil, aeadAESGCM},
    {TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheRSAKA, suiteECDHE | suiteTLS12 | suiteDefaultOff, cipherAES, macSHA256, nil},
    {TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheRSAKA, suiteECDHE | suiteTLS12, cipherAES, macSHA256, nil},
    {TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, 16, 20, 16, ecdheRSAKA, suiteECDHE, cipherAES, macSHA1, nil},
    {TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12 | suiteDefaultOff, cipherAES, macSHA256, nil},
    {TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12, cipherAES, macSHA256, nil},
    {TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, 16, 20, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA, cipherAES, macSHA1, nil},
    {TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, 32, 20, 16, ecdheRSAKA, suiteECDHE, cipherAES, macSHA1, nil},
    {TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, 32, 20, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA, cipherAES, macSHA1, nil},
    {TLS_RSA_WITH_AES_128_GCM_SHA256, 16, 0, 4, rsaKA, suiteTLS12, nil, nil, aeadAESGCM},
    {TLS_RSA_WITH_AES_256_GCM_SHA384, 32, 0, 4, rsaKA, suiteTLS12 | suiteSHA384, nil, nil, aeadAESGCM},
    {TLS_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, rsaKA, suiteTLS12 | suiteDefaultOff, cipherAES, macSHA256, nil},
    {TLS_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, rsaKA, suiteTLS12, cipherAES, macSHA256, nil},
    {TLS_RSA_WITH_AES_128_CBC_SHA, 16, 20, 16, rsaKA, 0, cipherAES, macSHA1, nil},
    {TLS_RSA_WITH_AES_256_CBC_SHA, 32, 20, 16, rsaKA, 0, cipherAES, macSHA1, nil},
    {TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, 24, 20, 8, ecdheRSAKA, suiteECDHE, cipher3DES, macSHA1, nil},

@@ -6,8 +6,8 @@
package tls

// BUG(agl): The crypto/tls package only implements some countermeasures
// against Lucky13 attacks on CBC-mode encryption, and only on SHA1
// variants. See http://www.isg.rhul.ac.uk/tls/TLStiming.pdf and
// against Lucky13 attacks on CBC-mode encryption. See
// http://www.isg.rhul.ac.uk/tls/TLStiming.pdf and
// https://www.imperialviolet.org/2013/02/04/luckythirteen.html.

import (

@@ -4,11 +4,7 @@

package x509

import (
    "encoding/pem"
    "errors"
    "runtime"
)
import "encoding/pem"

// CertPool is a set of certificates.
type CertPool struct {

@@ -30,11 +26,6 @@ func NewCertPool() *CertPool {
// Any mutations to the returned pool are not written to disk and do
// not affect any other pool.
func SystemCertPool() (*CertPool, error) {
    if runtime.GOOS == "windows" {
        // Issue 16736, 18609:
        return nil, errors.New("crypto/x509: system root pool is not available on Windows")
    }

    return loadSystemRoots()
}
@@ -226,11 +226,6 @@ func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate
}

func loadSystemRoots() (*CertPool, error) {
    // TODO: restore this functionality on Windows. We tried to do
    // it in Go 1.8 but had to revert it. See Issue 18609.
    // Returning (nil, nil) was the old behavior, prior to CL 30578.
    return nil, nil

    const CRYPT_E_NOT_FOUND = 0x80092004

    store, err := syscall.CertOpenSystemStore(0, syscall.StringToUTF16Ptr("ROOT"))

@@ -24,7 +24,6 @@ import (
    "net"
    "os/exec"
    "reflect"
    "runtime"
    "strings"
    "testing"
    "time"

@@ -1478,9 +1477,6 @@ func TestMultipleRDN(t *testing.T) {
}

func TestSystemCertPool(t *testing.T) {
    if runtime.GOOS == "windows" {
        t.Skip("not implemented on Windows; Issue 16736, 18609")
    }
    _, err := SystemCertPool()
    if err != nil {
        t.Fatal(err)

@@ -70,8 +70,10 @@ func (s *Scope) String() string {
// The Data fields contains object-specific data:
//
//	Kind    Data type         Data value
//	Pkg     *Scope            package scope
//	Pkg     *types.Package    package scope
//	Con     int               iota for the respective declaration
//	Con     != nil            constant value
//	Typ     *Scope            (used as method scope during type checking - transient)
//
type Object struct {
    Kind ObjKind

@@ -25,7 +25,7 @@ var files = flag.String("files", "", "consider only Go test files matching this

const dataDir = "testdata"

var templateTxt *template.Template
var templateTxt = readTemplate("template.txt")

func readTemplate(filename string) *template.Template {
    t := template.New(filename)

@@ -96,9 +96,6 @@ func test(t *testing.T, mode Mode) {
    if err != nil {
        t.Fatal(err)
    }
    if templateTxt == nil {
        templateTxt = readTemplate("template.txt")
    }

    // test packages
    for _, pkg := range pkgs {

@@ -10,12 +10,17 @@ import (
    "testing"
)

func BenchmarkParse(b *testing.B) {
    src, err := ioutil.ReadFile("parser.go")
var src = readFile("parser.go")

func readFile(filename string) []byte {
    data, err := ioutil.ReadFile(filename)
    if err != nil {
        b.Fatal(err)
        panic(err)
    }
    b.ResetTimer()
    return data
}

func BenchmarkParse(b *testing.B) {
    b.SetBytes(int64(len(src)))
    for i := 0; i < b.N; i++ {
        if _, err := ParseFile(token.NewFileSet(), "", src, ParseComments); err != nil {
@@ -5173,142 +5173,3 @@ func TestServerDuplicateBackgroundRead(t *testing.T) {
    }
    wg.Wait()
}

// Test that the bufio.Reader returned by Hijack includes any buffered
// byte (from the Server's backgroundRead) in its buffer. We want the
// Handler code to be able to tell that a byte is available via
// bufio.Reader.Buffered(), without resorting to Reading it
// (potentially blocking) to get at it.
func TestServerHijackGetsBackgroundByte(t *testing.T) {
    if runtime.GOOS == "plan9" {
        t.Skip("skipping test; see https://golang.org/issue/18657")
    }
    setParallel(t)
    defer afterTest(t)
    done := make(chan struct{})
    inHandler := make(chan bool, 1)
    ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
        defer close(done)

        // Tell the client to send more data after the GET request.
        inHandler <- true

        // Wait until the HTTP server sees the extra data
        // after the GET request. The HTTP server fires the
        // close notifier here, assuming it's a pipelined
        // request, as documented.
        select {
        case <-w.(CloseNotifier).CloseNotify():
        case <-time.After(5 * time.Second):
            t.Error("timeout")
            return
        }

        conn, buf, err := w.(Hijacker).Hijack()
        if err != nil {
            t.Error(err)
            return
        }
        defer conn.Close()
        n := buf.Reader.Buffered()
        if n != 1 {
            t.Errorf("buffered data = %d; want 1", n)
        }
        peek, err := buf.Reader.Peek(3)
        if string(peek) != "foo" || err != nil {
            t.Errorf("Peek = %q, %v; want foo, nil", peek, err)
        }
    }))
    defer ts.Close()

    cn, err := net.Dial("tcp", ts.Listener.Addr().String())
    if err != nil {
        t.Fatal(err)
    }
    defer cn.Close()
    if _, err := cn.Write([]byte("GET / HTTP/1.1\r\nHost: e.com\r\n\r\n")); err != nil {
        t.Fatal(err)
    }
    <-inHandler
    if _, err := cn.Write([]byte("foo")); err != nil {
        t.Fatal(err)
    }

    if err := cn.(*net.TCPConn).CloseWrite(); err != nil {
        t.Fatal(err)
    }
    select {
    case <-done:
    case <-time.After(2 * time.Second):
        t.Error("timeout")
    }
}

// Like TestServerHijackGetsBackgroundByte above but sending a
// immediate 1MB of data to the server to fill up the server's 4KB
// buffer.
func TestServerHijackGetsBackgroundByte_big(t *testing.T) {
    if runtime.GOOS == "plan9" {
        t.Skip("skipping test; see https://golang.org/issue/18657")
    }
    setParallel(t)
    defer afterTest(t)
    done := make(chan struct{})
    const size = 8 << 10
    ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
        defer close(done)

        // Wait until the HTTP server sees the extra data
        // after the GET request. The HTTP server fires the
        // close notifier here, assuming it's a pipelined
        // request, as documented.
        select {
        case <-w.(CloseNotifier).CloseNotify():
        case <-time.After(5 * time.Second):
            t.Error("timeout")
            return
        }

        conn, buf, err := w.(Hijacker).Hijack()
        if err != nil {
            t.Error(err)
            return
        }
        defer conn.Close()
        slurp, err := ioutil.ReadAll(buf.Reader)
        if err != nil {
            t.Error("Copy: %v", err)
        }
        allX := true
        for _, v := range slurp {
            if v != 'x' {
                allX = false
            }
        }
        if len(slurp) != size {
            t.Errorf("read %d; want %d", len(slurp), size)
        } else if !allX {
            t.Errorf("read %q; want %d 'x'", slurp, size)
        }
    }))
    defer ts.Close()

    cn, err := net.Dial("tcp", ts.Listener.Addr().String())
    if err != nil {
        t.Fatal(err)
    }
    defer cn.Close()
    if _, err := fmt.Fprintf(cn, "GET / HTTP/1.1\r\nHost: e.com\r\n\r\n%s",
        strings.Repeat("x", size)); err != nil {
        t.Fatal(err)
    }
    if err := cn.(*net.TCPConn).CloseWrite(); err != nil {
        t.Fatal(err)
    }

    select {
    case <-done:
    case <-time.After(2 * time.Second):
        t.Error("timeout")
    }
}

@@ -164,7 +164,7 @@ type Flusher interface {
// should always test for this ability at runtime.
type Hijacker interface {
    // Hijack lets the caller take over the connection.
    // After a call to Hijack the HTTP server library
    // After a call to Hijack(), the HTTP server library
    // will not do anything else with the connection.
    //
    // It becomes the caller's responsibility to manage

@@ -174,9 +174,6 @@ type Hijacker interface {
    // already set, depending on the configuration of the
    // Server. It is the caller's responsibility to set
    // or clear those deadlines as needed.
    //
    // The returned bufio.Reader may contain unprocessed buffered
    // data from the client.
    Hijack() (net.Conn, *bufio.ReadWriter, error)
}
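A minimal handler sketch of the documented contract (illustrative only; handle is not part of the diff): after a successful Hijack, read through buf.Reader before touching the connection directly, since the Server's background read may already have consumed bytes from the client:

    package main

    import "net/http"

    func handle(w http.ResponseWriter, r *http.Request) {
        hj, ok := w.(http.Hijacker)
        if !ok {
            http.Error(w, "hijacking not supported", http.StatusInternalServerError)
            return
        }
        conn, buf, err := hj.Hijack()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer conn.Close()
        // Bytes the server already read from the client sit in buf.Reader;
        // reading conn directly at this point would silently skip them.
        if n := buf.Reader.Buffered(); n > 0 {
            data, _ := buf.Reader.Peek(n)
            _ = data // process the already-buffered client data
        }
    }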
@@ -296,11 +293,6 @@ func (c *conn) hijackLocked() (rwc net.Conn, buf *bufio.ReadWriter, err error) {
    rwc.SetDeadline(time.Time{})

    buf = bufio.NewReadWriter(c.bufr, bufio.NewWriter(rwc))
    if c.r.hasByte {
        if _, err := c.bufr.Peek(c.bufr.Buffered() + 1); err != nil {
            return nil, nil, fmt.Errorf("unexpected Peek failure reading buffered byte: %v", err)
        }
    }
    c.setState(rwc, StateHijacked)
    return
}

@@ -36,7 +36,6 @@ import (
    "strconv"
    "strings"
    "sync"
    "sync/atomic"
    "testing"
    "time"
)

@@ -2546,13 +2545,6 @@ type closerFunc func() error

func (f closerFunc) Close() error { return f() }

type writerFuncConn struct {
    net.Conn
    write func(p []byte) (n int, err error)
}

func (c writerFuncConn) Write(p []byte) (n int, err error) { return c.write(p) }

// Issue 4677. If we try to reuse a connection that the server is in the
// process of closing, we may end up successfully writing out our request (or a
// portion of our request) only to find a connection error when we try to read

@@ -2565,78 +2557,66 @@ func (c writerFuncConn) Write(p []byte) (n int, err error) { return c.write(p) }
func TestRetryIdempotentRequestsOnError(t *testing.T) {
    defer afterTest(t)

    var (
        mu     sync.Mutex
        logbuf bytes.Buffer
    )
    logf := func(format string, args ...interface{}) {
        mu.Lock()
        defer mu.Unlock()
        fmt.Fprintf(&logbuf, format, args...)
        logbuf.WriteByte('\n')
    }

    ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
        logf("Handler")
        w.Header().Set("X-Status", "ok")
    }))
    defer ts.Close()

    var writeNumAtomic int32
    tr := &Transport{
        Dial: func(network, addr string) (net.Conn, error) {
            logf("Dial")
            c, err := net.Dial(network, ts.Listener.Addr().String())
            if err != nil {
                logf("Dial error: %v", err)
                return nil, err
            }
            return &writerFuncConn{
                Conn: c,
                write: func(p []byte) (n int, err error) {
                    if atomic.AddInt32(&writeNumAtomic, 1) == 2 {
                        logf("intentional write failure")
                        return 0, errors.New("second write fails")
                    }
                    logf("Write(%q)", p)
                    return c.Write(p)
                },
            }, nil
        },
    }
    defer tr.CloseIdleConnections()
    tr := &Transport{}
    c := &Client{Transport: tr}

    const N = 2
    retryc := make(chan struct{}, N)
    SetRoundTripRetried(func() {
        logf("Retried.")
        retryc <- struct{}{}
    })
    defer SetRoundTripRetried(nil)

    for i := 0; i < 3; i++ {
        res, err := c.Get("http://fake.golang/")
        if err != nil {
            t.Fatalf("i=%d: Get = %v", i, err)
    for n := 0; n < 100; n++ {
        // open 2 conns
        errc := make(chan error, N)
        for i := 0; i < N; i++ {
            // start goroutines, send on errc
            go func() {
                res, err := c.Get(ts.URL)
                if err == nil {
                    res.Body.Close()
                }
                errc <- err
            }()
        }
        for i := 0; i < N; i++ {
            if err := <-errc; err != nil {
                t.Fatal(err)
            }
        }
        res.Body.Close()
    }

    mu.Lock()
    got := logbuf.String()
    mu.Unlock()
    const want = `Dial
Write("GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n")
Handler
intentional write failure
Retried.
Dial
Write("GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n")
Handler
Write("GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n")
Handler
`
    if got != want {
        t.Errorf("Log of events differs. Got:\n%s\nWant:\n%s", got, want)
        ts.CloseClientConnections()
        for i := 0; i < N; i++ {
            go func() {
                res, err := c.Get(ts.URL)
                if err == nil {
                    res.Body.Close()
                }
                errc <- err
            }()
        }

        for i := 0; i < N; i++ {
            if err := <-errc; err != nil {
                t.Fatal(err)
            }
        }
        for i := 0; i < N; i++ {
            select {
            case <-retryc:
                // we triggered a retry, test was successful
                t.Logf("finished after %d runs\n", n)
                return
            default:
            }
        }
    }
    t.Fatal("did not trigger any retries")
}

// Issue 6981
@@ -54,15 +54,12 @@ var sysdir = func() *sysDir {
    case "darwin":
        switch runtime.GOARCH {
        case "arm", "arm64":
            /// At this point the test harness has not had a chance
            // to move us into the ./src/os directory, so the
            // current working directory is the root of the app.
            wd, err := syscall.Getwd()
            if err != nil {
                wd = err.Error()
            }
            return &sysDir{
                wd,
                filepath.Join(wd, "..", ".."),
                []string{
                    "ResourceRules.plist",
                    "Info.plist",

@@ -26,8 +26,6 @@ import (
    "unsafe"
)

var sink interface{}

func TestBool(t *testing.T) {
    v := ValueOf(true)
    if v.Bool() != true {

@@ -5333,72 +5331,6 @@ func TestCallGC(t *testing.T) {
    f2("four", "five5", "six666", "seven77", "eight888")
}

// Issue 18635 (function version).
func TestKeepFuncLive(t *testing.T) {
    // Test that we keep makeFuncImpl live as long as it is
    // referenced on the stack.
    typ := TypeOf(func(i int) {})
    var f, g func(in []Value) []Value
    f = func(in []Value) []Value {
        clobber()
        i := int(in[0].Int())
        if i > 0 {
            // We can't use Value.Call here because
            // runtime.call* will keep the makeFuncImpl
            // alive. However, by converting it to an
            // interface value and calling that,
            // reflect.callReflect is the only thing that
            // can keep the makeFuncImpl live.
            //
            // Alternate between f and g so that if we do
            // reuse the memory prematurely it's more
            // likely to get obviously corrupted.
            MakeFunc(typ, g).Interface().(func(i int))(i - 1)
        }
        return nil
    }
    g = func(in []Value) []Value {
        clobber()
        i := int(in[0].Int())
        MakeFunc(typ, f).Interface().(func(i int))(i)
        return nil
    }
    MakeFunc(typ, f).Call([]Value{ValueOf(10)})
}

// Issue 18635 (method version).
type KeepMethodLive struct{}

func (k KeepMethodLive) Method1(i int) {
    clobber()
    if i > 0 {
        ValueOf(k).MethodByName("Method2").Interface().(func(i int))(i - 1)
    }
}

func (k KeepMethodLive) Method2(i int) {
    clobber()
    ValueOf(k).MethodByName("Method1").Interface().(func(i int))(i)
}

func TestKeepMethodLive(t *testing.T) {
    // Test that we keep methodValue live as long as it is
    // referenced on the stack.
    KeepMethodLive{}.Method1(10)
}

// clobber tries to clobber unreachable memory.
func clobber() {
    runtime.GC()
    for i := 1; i < 32; i++ {
        for j := 0; j < 10; j++ {
            obj := make([]*byte, i)
            sink = obj
        }
    }
    runtime.GC()
}

type funcLayoutTest struct {
    rcvr, t                  Type
    size, argsize, retOffset uintptr

@@ -538,11 +538,6 @@ func callReflect(ctxt *makeFuncImpl, frame unsafe.Pointer) {
            off += typ.size
        }
    }

    // runtime.getArgInfo expects to be able to find ctxt on the
    // stack when it finds our caller, makeFuncStub. Make sure it
    // doesn't get garbage collected.
    runtime.KeepAlive(ctxt)
}

// methodReceiver returns information about the receiver

@@ -655,9 +650,6 @@ func callMethod(ctxt *methodValue, frame unsafe.Pointer) {
    // though it's a heap object.
    memclrNoHeapPointers(args, frametype.size)
    framePool.Put(args)

    // See the comment in callReflect.
    runtime.KeepAlive(ctxt)
}

// funcName returns the name of f, for use in error messages.

@@ -111,7 +111,7 @@ const (

    // Tiny allocator parameters, see "Tiny allocator" comment in malloc.go.
    _TinySize      = 16
    _TinySizeClass = 2
    _TinySizeClass = int8(2)

    _FixAllocChunk = 16 << 10               // Chunk size for FixAlloc
    _MaxMHeapList  = 1 << (20 - _PageShift) // Maximum page length for fixed-size list in MHeap.

@@ -512,8 +512,8 @@ func nextFreeFast(s *mspan) gclinkptr {
// weight allocation. If it is a heavy weight allocation the caller must
// determine whether a new GC cycle needs to be started or if the GC is active
// whether this goroutine needs to assist the GC.
func (c *mcache) nextFree(sizeclass uint8) (v gclinkptr, s *mspan, shouldhelpgc bool) {
    s = c.alloc[sizeclass]
func (c *mcache) nextFree(spc spanClass) (v gclinkptr, s *mspan, shouldhelpgc bool) {
    s = c.alloc[spc]
    shouldhelpgc = false
    freeIndex := s.nextFreeIndex()
    if freeIndex == s.nelems {

@@ -523,10 +523,10 @@ func (c *mcache) nextFree(sizeclass uint8) (v gclinkptr, s *mspan, shouldhelpgc
        throw("s.allocCount != s.nelems && freeIndex == s.nelems")
    }
    systemstack(func() {
        c.refill(int32(sizeclass))
        c.refill(spc)
    })
    shouldhelpgc = true
    s = c.alloc[sizeclass]
    s = c.alloc[spc]

    freeIndex = s.nextFreeIndex()
}

@@ -650,10 +650,10 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
                return x
            }
            // Allocate a new maxTinySize block.
            span := c.alloc[tinySizeClass]
            span := c.alloc[tinySpanClass]
            v := nextFreeFast(span)
            if v == 0 {
                v, _, shouldhelpgc = c.nextFree(tinySizeClass)
                v, _, shouldhelpgc = c.nextFree(tinySpanClass)
            }
            x = unsafe.Pointer(v)
            (*[2]uint64)(x)[0] = 0

@@ -673,10 +673,11 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
                sizeclass = size_to_class128[(size-smallSizeMax+largeSizeDiv-1)/largeSizeDiv]
            }
            size = uintptr(class_to_size[sizeclass])
            span := c.alloc[sizeclass]
            spc := makeSpanClass(sizeclass, noscan)
            span := c.alloc[spc]
            v := nextFreeFast(span)
            if v == 0 {
                v, span, shouldhelpgc = c.nextFree(sizeclass)
                v, span, shouldhelpgc = c.nextFree(spc)
            }
            x = unsafe.Pointer(v)
            if needzero && span.needzero != 0 {

@@ -687,7 +688,7 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
        var s *mspan
        shouldhelpgc = true
        systemstack(func() {
            s = largeAlloc(size, needzero)
            s = largeAlloc(size, needzero, noscan)
        })
        s.freeindex = 1
        s.allocCount = 1

@@ -696,9 +697,7 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
    }

    var scanSize uintptr
    if noscan {
        heapBitsSetTypeNoScan(uintptr(x))
    } else {
    if !noscan {
        // If allocating a defer+arg block, now that we've picked a malloc size
        // large enough to hold everything, cut the "asked for" size down to
        // just the defer header, so that the GC bitmap will record the arg block

@@ -776,7 +775,7 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
    return x
}

func largeAlloc(size uintptr, needzero bool) *mspan {
func largeAlloc(size uintptr, needzero bool, noscan bool) *mspan {
    // print("largeAlloc size=", size, "\n")

    if size+_PageSize < size {

@@ -792,7 +791,7 @@ func largeAlloc(size uintptr, needzero bool) *mspan {
    // pays the debt down to npage pages.
    deductSweepCredit(npages*_PageSize, npages)

    s := mheap_.alloc(npages, 0, true, needzero)
    s := mheap_.alloc(npages, makeSpanClass(0, noscan), true, needzero)
    if s == nil {
        throw("out of memory")
    }
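The spanClass value threaded through these hunks packs the size class together with the noscan bit. A hedged reconstruction of the encoding, inferred from how makeSpanClass is used here (the real definitions live elsewhere in the runtime; treat this as a sketch, not the diff's own code):

    // spanClass numbering: the size class shifted left one bit,
    // with the low bit recording whether the span is noscan.
    type spanClass uint8

    func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
        sc := spanClass(sizeclass) << 1
        if noscan {
            sc |= 1
        }
        return sc
    }

    func (sc spanClass) sizeclass() uint8 { return uint8(sc >> 1) }
    func (sc spanClass) noscan() bool     { return sc&1 != 0 }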
|
||||
|
||||
@@ -45,6 +45,11 @@
|
||||
// not checkmarked, and is the dead encoding.
|
||||
// These properties must be preserved when modifying the encoding.
|
||||
//
|
||||
// The bitmap for noscan spans is not maintained. Code must ensure
|
||||
// that an object is scannable before consulting its bitmap by
|
||||
// checking either the noscan bit in the span or by consulting its
|
||||
// type's information.
|
||||
//
|
||||
// Checkmarks
|
||||
//
|
||||
// In a concurrent garbage collector, one worries about failing to mark
|
||||
@@ -184,6 +189,90 @@ type markBits struct {
|
||||
index uintptr
|
||||
}
|
||||
|
||||
//go:nosplit
|
||||
func inBss(p uintptr) bool {
|
||||
for datap := &firstmoduledata; datap != nil; datap = datap.next {
|
||||
if p >= datap.bss && p < datap.ebss {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
//go:nosplit
|
||||
func inData(p uintptr) bool {
|
||||
for datap := &firstmoduledata; datap != nil; datap = datap.next {
|
||||
if p >= datap.data && p < datap.edata {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// isPublic checks whether the object has been published.
|
||||
// ptr may not point to the start of the object.
|
||||
// This is conservative in the sense that it will return true
|
||||
// for any object that hasn't been allocated by this
|
||||
// goroutine since the last roc checkpoint was performed.
|
||||
// Must run on the system stack to prevent stack growth and
|
||||
// moving of goroutine stack.
|
||||
//go:systemstack
|
||||
func isPublic(ptr uintptr) bool {
|
||||
if debug.gcroc == 0 {
|
||||
// Unexpected call to ROC specific routine while not running ROC.
|
||||
// blowup without supressing inlining.
|
||||
_ = *(*int)(nil)
|
||||
}
|
||||
|
||||
if inStack(ptr, getg().stack) {
|
||||
return false
|
||||
}
|
||||
|
||||
if getg().m != nil && getg().m.curg != nil && inStack(ptr, getg().m.curg.stack) {
|
||||
return false
|
||||
}
|
||||
|
||||
if inBss(ptr) {
|
||||
return true
|
||||
}
|
||||
if inData(ptr) {
|
||||
return true
|
||||
}
|
||||
if !inheap(ptr) {
|
||||
// Note: Objects created using persistentalloc are not in the heap
|
||||
// so any pointers from such object to local objects needs to be dealt
|
||||
// with specially. nil is also considered not in the heap.
|
||||
return true
|
||||
}
|
||||
// At this point we know the object is in the heap.
|
||||
s := spanOf(ptr)
|
||||
oldSweepgen := atomic.Load(&s.sweepgen)
|
||||
sg := mheap_.sweepgen
|
||||
if oldSweepgen != sg {
|
||||
// We have an unswept span which means that the pointer points to a public object since it will
|
||||
// be found to be marked once it is swept.
|
||||
return true
|
||||
}
|
||||
abits := s.allocBitsForAddr(ptr)
|
||||
if abits.isMarked() {
|
||||
return true
|
||||
} else if s.freeindex <= abits.index {
|
||||
// Unmarked and beyond freeindex yet reachable object encountered.
|
||||
// blowup without supressing inlining.
|
||||
_ = *(*int)(nil)
|
||||
}
|
||||
|
||||
// The object is not marked. If it is part of the current
|
||||
// ROC epoch then it is not public.
|
||||
if s.startindex*s.elemsize <= ptr-s.base() {
|
||||
// Object allocated in this ROC epoch and since it is
|
||||
// not marked it has not been published.
|
||||
return false
|
||||
}
|
||||
// Object allocated since last GC but in a previous ROC epoch so it is public.
|
||||
return true
|
||||
}
|
||||
|
||||
//go:nosplit
|
||||
func (s *mspan) allocBitsForIndex(allocBitIndex uintptr) markBits {
|
||||
whichByte := allocBitIndex / 8
|
||||
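
To make the ordering of isPublic's checks easier to follow, here is a compilable sketch with the runtime's predicates stubbed out as parameters. Every name below is a hypothetical stand-in, not a runtime API; only the order of the tests mirrors the function above.

    package main

    import "fmt"

    // isPublicSketch mirrors isPublic's ladder: own stacks are private;
    // globals, non-heap memory, unswept spans and marked objects are
    // public; otherwise only objects from the current ROC epoch are
    // still private.
    func isPublicSketch(ptr uintptr, inOwnStack, inGlobals, inHeap, swept, marked, inThisEpoch func(uintptr) bool) bool {
        switch {
        case inOwnStack(ptr):
            return false
        case inGlobals(ptr), !inHeap(ptr), !swept(ptr), marked(ptr):
            return true
        default:
            return !inThisEpoch(ptr)
        }
    }

    func main() {
        yes := func(uintptr) bool { return true }
        no := func(uintptr) bool { return false }
        // A fresh, unmarked heap object of the current epoch is private.
        fmt.Println(isPublicSketch(0x1000, no, no, yes, yes, no, yes)) // false
    }

The conservative direction matters: anything the function cannot prove private is treated as published.
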
@@ -192,6 +281,16 @@ func (s *mspan) allocBitsForIndex(allocBitIndex uintptr) markBits {
 	return markBits{bytePtr, uint8(1 << whichBit), allocBitIndex}
 }

+//go:nosplit
+func (s *mspan) allocBitsForAddr(p uintptr) markBits {
+	byteOffset := p - s.base()
+	allocBitIndex := byteOffset / s.elemsize
+	whichByte := allocBitIndex / 8
+	whichBit := allocBitIndex % 8
+	bytePtr := addb(s.allocBits, whichByte)
+	return markBits{bytePtr, uint8(1 << whichBit), allocBitIndex}
+}
+
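
A worked instance of allocBitsForAddr's index arithmetic, with plain values standing in for a real span (the base address and element size below are made up for the example):

    package main

    import "fmt"

    func main() {
        // Assume a span at base 0x10000 holding 16-byte elements.
        base, elemsize := uintptr(0x10000), uintptr(16)
        p := base + 528 // some interior address
        allocBitIndex := (p - base) / elemsize
        whichByte := allocBitIndex / 8
        whichBit := allocBitIndex % 8
        fmt.Println(allocBitIndex, whichByte, whichBit) // 33 4 1
    }
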
 // refillAllocCache takes 8 bytes s.allocBits starting at whichByte
 // and negates them so that ctz (count trailing zeros) instructions
 // can be used. It then places these 8 bytes into the cached 64 bit
@@ -509,16 +608,6 @@ func (h heapBits) isPointer() bool {
 	return h.bits()&bitPointer != 0
 }

-// hasPointers reports whether the given object has any pointers.
-// It must be told how large the object at h is for efficiency.
-// h must describe the initial word of the object.
-func (h heapBits) hasPointers(size uintptr) bool {
-	if size == sys.PtrSize { // 1-word objects are always pointers
-		return true
-	}
-	return (*h.bitp>>h.shift)&bitScan != 0
-}
-
 // isCheckmarked reports whether the heap bits have the checkmarked bit set.
 // It must be told how large the object at h is, because the encoding of the
 // checkmark bit varies by size.
@@ -1363,13 +1452,6 @@ Phase4:
 	}
 }

-// heapBitsSetTypeNoScan marks x as noscan by setting the first word
-// of x in the heap bitmap to scalar/dead.
-func heapBitsSetTypeNoScan(x uintptr) {
-	h := heapBitsForAddr(uintptr(x))
-	*h.bitp &^= (bitPointer | bitScan) << h.shift
-}
-
 var debugPtrmask struct {
 	lock mutex
 	data *byte

@@ -33,7 +33,8 @@ type mcache struct {
 	local_tinyallocs uintptr // number of tiny allocs not counted in other stats

 	// The rest is not accessed on every malloc.
-	alloc [_NumSizeClasses]*mspan // spans to allocate from
+
+	alloc [numSpanClasses]*mspan // spans to allocate from, indexed by spanClass

 	stackcache [_NumStackOrders]stackfreelist

@@ -77,7 +78,7 @@ func allocmcache() *mcache {
 	lock(&mheap_.lock)
 	c := (*mcache)(mheap_.cachealloc.alloc())
 	unlock(&mheap_.lock)
-	for i := 0; i < _NumSizeClasses; i++ {
+	for i := range c.alloc {
 		c.alloc[i] = &emptymspan
 	}
 	c.next_sample = nextSample()
@@ -103,12 +104,12 @@ func freemcache(c *mcache) {

 // Gets a span that has a free object in it and assigns it
 // to be the cached span for the given sizeclass. Returns this span.
-func (c *mcache) refill(sizeclass int32) *mspan {
+func (c *mcache) refill(spc spanClass) *mspan {
 	_g_ := getg()

 	_g_.m.locks++
 	// Return the current cached span to the central lists.
-	s := c.alloc[sizeclass]
+	s := c.alloc[spc]

 	if uintptr(s.allocCount) != s.nelems {
 		throw("refill of span with free space remaining")
@@ -119,7 +120,7 @@ func (c *mcache) refill(sizeclass int32) *mspan {
 	}

 	// Get a new cached span from the central lists.
-	s = mheap_.central[sizeclass].mcentral.cacheSpan()
+	s = mheap_.central[spc].mcentral.cacheSpan()
 	if s == nil {
 		throw("out of memory")
 	}
@@ -128,13 +129,13 @@ func (c *mcache) refill(sizeclass int32) *mspan {
 		throw("span has no free space")
 	}

-	c.alloc[sizeclass] = s
+	c.alloc[spc] = s
 	_g_.m.locks--
 	return s
 }

 func (c *mcache) releaseAll() {
-	for i := 0; i < _NumSizeClasses; i++ {
+	for i := range c.alloc {
 		s := c.alloc[i]
 		if s != &emptymspan {
 			mheap_.central[i].mcentral.uncacheSpan(s)

@@ -19,14 +19,14 @@ import "runtime/internal/atomic"
 //go:notinheap
 type mcentral struct {
 	lock      mutex
-	sizeclass int32
+	spanclass spanClass
 	nonempty  mSpanList // list of spans with a free object, ie a nonempty free list
 	empty     mSpanList // list of spans with no free objects (or cached in an mcache)
 }

 // Initialize a single central free list.
-func (c *mcentral) init(sizeclass int32) {
-	c.sizeclass = sizeclass
+func (c *mcentral) init(spc spanClass) {
+	c.spanclass = spc
 	c.nonempty.init()
 	c.empty.init()
 }
@@ -34,7 +34,7 @@ func (c *mcentral) init(sizeclass int32) {
 // Allocate a span to use in an MCache.
 func (c *mcentral) cacheSpan() *mspan {
 	// Deduct credit for this span allocation and sweep if necessary.
-	spanBytes := uintptr(class_to_allocnpages[c.sizeclass]) * _PageSize
+	spanBytes := uintptr(class_to_allocnpages[c.spanclass.sizeclass()]) * _PageSize
 	deductSweepCredit(spanBytes, 0)

 	lock(&c.lock)
@@ -205,11 +205,11 @@ func (c *mcentral) freeSpan(s *mspan, preserve bool, wasempty bool) bool {

 // grow allocates a new empty span from the heap and initializes it for c's size class.
 func (c *mcentral) grow() *mspan {
-	npages := uintptr(class_to_allocnpages[c.sizeclass])
-	size := uintptr(class_to_size[c.sizeclass])
+	npages := uintptr(class_to_allocnpages[c.spanclass.sizeclass()])
+	size := uintptr(class_to_size[c.spanclass.sizeclass()])
 	n := (npages << _PageShift) / size

-	s := mheap_.alloc(npages, c.sizeclass, false, true)
+	s := mheap_.alloc(npages, c.spanclass, false, true)
 	if s == nil {
 		return nil
 	}

@@ -441,7 +441,7 @@ func findObject(v unsafe.Pointer) (s *mspan, x unsafe.Pointer, n uintptr) {
 	}

 	n = s.elemsize
-	if s.sizeclass != 0 {
+	if s.spanclass.sizeclass() != 0 {
 		x = add(x, (uintptr(v)-uintptr(x))/n*n)
 	}
 	return

@@ -244,6 +244,7 @@ var writeBarrier struct {
 	pad     [3]byte // compiler uses 32-bit load for "enabled" field
 	needed  bool    // whether we need a write barrier for current GC phase
 	cgo     bool    // whether we need a write barrier for a cgo check
+	roc     bool    // whether we need a write barrier for the ROC algorithm
 	alignme uint64  // guarantee alignment so that compiler can use a 32 or 64-bit load
 }

@@ -1129,6 +1130,8 @@ top:
 		// sitting in the per-P work caches.
 		// Flush and disable work caches.

+		gcMarkRootCheck()
+
 		// Disallow caching workbufs and indicate that we're in mark 2.
 		gcBlackenPromptly = true

@@ -1151,16 +1154,6 @@ top:
 			})
 		})

-		// Check that roots are marked. We should be able to
-		// do this before the forEachP, but based on issue
-		// #16083 there may be a (harmless) race where we can
-		// enter mark 2 while some workers are still scanning
-		// stacks. The forEachP ensures these scans are done.
-		//
-		// TODO(austin): Figure out the race and fix this
-		// properly.
-		gcMarkRootCheck()
-
 		// Now we can start up mark 2 workers.
 		atomic.Xaddint64(&gcController.dedicatedMarkWorkersNeeded, 0xffffffff)
 		atomic.Xaddint64(&gcController.fractionalMarkWorkersNeeded, 0xffffffff)

@@ -1268,7 +1268,7 @@ func scanobject(b uintptr, gcw *gcWork) {
 		// paths), in which case we must *not* enqueue
 		// oblets since their bitmaps will be
 		// uninitialized.
-		if !hbits.hasPointers(n) {
+		if s.spanclass.noscan() {
 			// Bypass the whole scan.
 			gcw.bytesMarked += uint64(n)
 			return
@@ -1396,7 +1396,7 @@ func greyobject(obj, base, off uintptr, hbits heapBits, span *mspan, gcw *gcWork
 		atomic.Or8(mbits.bytep, mbits.mask)
 		// If this is a noscan object, fast-track it to black
 		// instead of greying it.
-		if !hbits.hasPointers(span.elemsize) {
+		if span.spanclass.noscan() {
 			gcw.bytesMarked += uint64(span.elemsize)
 			return
 		}
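
The two marking hunks above replace a per-object heap-bitmap load (the hasPointers helper deleted earlier) with a single bit test on the span descriptor. A schematic, compile-only comparison; the types are simplified and the bitScan position is illustrative:

    package p

    import "unsafe"

    const bitScan = 1 << 4 // illustrative bit position

    type heapBits struct {
        bitp  *uint8
        shift uint8
    }

    // Old: a heap-bitmap load per object, plus a one-word special case.
    func hasPointers(h heapBits, size uintptr) bool {
        if size == unsafe.Sizeof(uintptr(0)) {
            return true // 1-word objects are always pointers
        }
        return (*h.bitp >> h.shift) & bitScan != 0
    }

    type spanClass uint8

    // New: the noscan property is one bit of the span class, no bitmap access.
    func (sc spanClass) noscan() bool { return sc&1 != 0 }
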
@@ -1429,7 +1429,7 @@ func gcDumpObject(label string, obj, off uintptr) {
 		print(" s=nil\n")
 		return
 	}
-	print(" s.base()=", hex(s.base()), " s.limit=", hex(s.limit), " s.sizeclass=", s.sizeclass, " s.elemsize=", s.elemsize, " s.state=")
+	print(" s.base()=", hex(s.base()), " s.limit=", hex(s.limit), " s.spanclass=", s.spanclass, " s.elemsize=", s.elemsize, " s.state=")
 	if 0 <= s.state && int(s.state) < len(mSpanStateNames) {
 		print(mSpanStateNames[s.state], "\n")
 	} else {

@@ -183,7 +183,7 @@ func (s *mspan) sweep(preserve bool) bool {

 	atomic.Xadd64(&mheap_.pagesSwept, int64(s.npages))

-	cl := s.sizeclass
+	spc := s.spanclass
 	size := s.elemsize
 	res := false
 	nfree := 0
@@ -277,7 +277,7 @@ func (s *mspan) sweep(preserve bool) bool {

 	// Count the number of free objects in this span.
 	nfree = s.countFree()
-	if cl == 0 && nfree != 0 {
+	if spc.sizeclass() == 0 && nfree != 0 {
 		s.needzero = 1
 		freeToHeap = true
 	}
@@ -318,9 +318,9 @@ func (s *mspan) sweep(preserve bool) bool {
 		atomic.Store(&s.sweepgen, sweepgen)
 	}

-	if nfreed > 0 && cl != 0 {
-		c.local_nsmallfree[cl] += uintptr(nfreed)
-		res = mheap_.central[cl].mcentral.freeSpan(s, preserve, wasempty)
+	if nfreed > 0 && spc.sizeclass() != 0 {
+		c.local_nsmallfree[spc.sizeclass()] += uintptr(nfreed)
+		res = mheap_.central[spc].mcentral.freeSpan(s, preserve, wasempty)
 		// MCentral_FreeSpan updates sweepgen
 	} else if freeToHeap {
 		// Free large span to heap

@@ -98,7 +98,8 @@ type mheap struct {
 	// the padding makes sure that the MCentrals are
 	// spaced CacheLineSize bytes apart, so that each MCentral.lock
 	// gets its own cache line.
-	central [_NumSizeClasses]struct {
+	// central is indexed by spanClass.
+	central [numSpanClasses]struct {
 		mcentral mcentral
 		pad      [sys.CacheLineSize]byte
 	}
@@ -194,6 +195,13 @@ type mspan struct {
 	// helps performance.
 	nelems uintptr // number of objects in the span.

+	// startindex is the object index where the owner G started allocating in this span.
+	//
+	// This is used in conjunction with nextUsedSpan to implement ROC checkpoints and recycles.
+	startindex uintptr
+	// nextUsedSpan links together all spans that have the same span class and owner G.
+	nextUsedSpan *mspan
+
 	// Cache of the allocBits at freeindex. allocCache is shifted
 	// such that the lowest bit corresponds to the bit freeindex.
 	// allocCache holds the complement of allocBits, thus allowing
@@ -237,7 +245,7 @@ type mspan struct {
 	divMul     uint16     // for divide by elemsize - divMagic.mul
 	baseMask   uint16     // if non-0, elemsize is a power of 2, & this will get object allocation base
 	allocCount uint16     // capacity - number of objects in freelist
-	sizeclass  uint8      // size class
+	spanclass  spanClass  // size class and noscan (uint8)
 	incache    bool       // being used by an mcache
 	state      mSpanState // mspaninuse etc
 	needzero   uint8      // needs to be zeroed before allocation
@@ -292,6 +300,31 @@ func recordspan(vh unsafe.Pointer, p unsafe.Pointer) {
 	h.allspans = append(h.allspans, s)
 }

+// A spanClass represents the size class and noscan-ness of a span.
+//
+// Each size class has a noscan spanClass and a scan spanClass. The
+// noscan spanClass contains only noscan objects, which do not contain
+// pointers and thus do not need to be scanned by the garbage
+// collector.
+type spanClass uint8
+
+const (
+	numSpanClasses = _NumSizeClasses << 1
+	tinySpanClass  = spanClass(tinySizeClass<<1 | 1)
+)
+
+func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
+	return spanClass(sizeclass<<1) | spanClass(bool2int(noscan))
+}
+
+func (sc spanClass) sizeclass() int8 {
+	return int8(sc >> 1)
+}
+
+func (sc spanClass) noscan() bool {
+	return sc&1 != 0
+}
+
 // inheap reports whether b is a pointer into a (potentially dead) heap object.
 // It returns false for pointers into stack spans.
 // Non-preemptible because it is used by write barriers.
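
A standalone round trip through the helpers just added. The shift-and-or packing is exactly the code above; bool2int is replaced by an explicit branch so the sketch has no runtime dependencies, and the 67-size-class count is an assumption about this tree:

    package main

    import "fmt"

    type spanClass uint8

    func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
        n := spanClass(0)
        if noscan {
            n = 1
        }
        return spanClass(sizeclass<<1) | n
    }

    func (sc spanClass) sizeclass() int8 { return int8(sc >> 1) }
    func (sc spanClass) noscan() bool    { return sc&1 != 0 }

    func main() {
        spc := makeSpanClass(5, true)
        fmt.Println(spc, spc.sizeclass(), spc.noscan()) // 11 5 true
        fmt.Println(67 << 1)                            // 134 span classes for 67 size classes
    }
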
@@ -376,7 +409,7 @@ func mlookup(v uintptr, base *uintptr, size *uintptr, sp **mspan) int32 {
 	}

 	p := s.base()
-	if s.sizeclass == 0 {
+	if s.spanclass.sizeclass() == 0 {
 		// Large object.
 		if base != nil {
 			*base = p
@@ -424,7 +457,7 @@ func (h *mheap) init(spansStart, spansBytes uintptr) {
 	h.freelarge.init()
 	h.busylarge.init()
 	for i := range h.central {
-		h.central[i].mcentral.init(int32(i))
+		h.central[i].mcentral.init(spanClass(i))
 	}

 	sp := (*slice)(unsafe.Pointer(&h.spans))
@@ -533,7 +566,7 @@ func (h *mheap) reclaim(npage uintptr) {

 // Allocate a new span of npage pages from the heap for GC'd memory
 // and record its size class in the HeapMap and HeapMapCache.
-func (h *mheap) alloc_m(npage uintptr, sizeclass int32, large bool) *mspan {
+func (h *mheap) alloc_m(npage uintptr, spanclass spanClass, large bool) *mspan {
 	_g_ := getg()
 	if _g_ != _g_.m.g0 {
 		throw("_mheap_alloc not on g0 stack")
@@ -567,8 +600,8 @@ func (h *mheap) alloc_m(npage uintptr, sizeclass int32, large bool) *mspan {
 		h.sweepSpans[h.sweepgen/2%2].push(s) // Add to swept in-use list.
 		s.state = _MSpanInUse
 		s.allocCount = 0
-		s.sizeclass = uint8(sizeclass)
-		if sizeclass == 0 {
+		s.spanclass = spanclass
+		if sizeclass := spanclass.sizeclass(); sizeclass == 0 {
 			s.elemsize = s.npages << _PageShift
 			s.divShift = 0
 			s.divMul = 0
@@ -618,13 +651,13 @@ func (h *mheap) alloc_m(npage uintptr, sizeclass int32, large bool) *mspan {
 	return s
 }

-func (h *mheap) alloc(npage uintptr, sizeclass int32, large bool, needzero bool) *mspan {
+func (h *mheap) alloc(npage uintptr, spanclass spanClass, large bool, needzero bool) *mspan {
 	// Don't do any operations that lock the heap on the G stack.
 	// It might trigger stack growth, and the stack growth code needs
 	// to be able to allocate heap.
 	var s *mspan
 	systemstack(func() {
-		s = h.alloc_m(npage, sizeclass, large)
+		s = h.alloc_m(npage, spanclass, large)
 	})

 	if s != nil {
@@ -1020,7 +1053,7 @@ func (span *mspan) init(base uintptr, npages uintptr) {
 	span.startAddr = base
 	span.npages = npages
 	span.allocCount = 0
-	span.sizeclass = 0
+	span.spanclass = 0
 	span.incache = false
 	span.elemsize = 0
 	span.state = _MSpanDead

@@ -553,12 +553,12 @@ func updatememstats(stats *gcstats) {
 		if s.state != mSpanInUse {
 			continue
 		}
-		if s.sizeclass == 0 {
+		if sizeclass := s.spanclass.sizeclass(); sizeclass == 0 {
 			memstats.nmalloc++
 			memstats.alloc += uint64(s.elemsize)
 		} else {
 			memstats.nmalloc += uint64(s.allocCount)
-			memstats.by_size[s.sizeclass].nmalloc += uint64(s.allocCount)
+			memstats.by_size[sizeclass].nmalloc += uint64(s.allocCount)
 			memstats.alloc += uint64(s.allocCount) * uint64(s.elemsize)
 		}
 	}

@@ -56,9 +56,7 @@ func plugin_lastmoduleinit() (path string, syms map[string]interface{}, mismatch

 	lock(&ifaceLock)
 	for _, i := range md.itablinks {
-		if i.inhash == 0 {
-			additab(i, true, false)
-		}
+		additab(i, true, false)
 	}
 	unlock(&ifaceLock)

@@ -324,6 +324,7 @@ var debug struct {
 	gcrescanstacks int32
 	gcstoptheworld int32
 	gctrace        int32
+	gcroc          int32
 	invalidptr     int32
 	sbrk           int32
 	scavenge       int32
@@ -344,6 +345,7 @@ var dbgvars = []dbgVar{
 	{"gcrescanstacks", &debug.gcrescanstacks},
 	{"gcstoptheworld", &debug.gcstoptheworld},
 	{"gctrace", &debug.gctrace},
+	{"gcroc", &debug.gcroc},
 	{"invalidptr", &debug.invalidptr},
 	{"sbrk", &debug.sbrk},
 	{"scavenge", &debug.scavenge},
@@ -409,6 +411,11 @@ func parsedebugvars() {
 		writeBarrier.cgo = true
 		writeBarrier.enabled = true
 	}
+	// For the roc algorithm we turn on the write barrier at all times
+	if debug.gcroc >= 1 {
+		writeBarrier.roc = true
+		writeBarrier.enabled = true
+	}
 }

 //go:linkname setTraceback runtime/debug.SetTraceback
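
The two hunks before this one register gcroc in the debug-variable table, and the parsedebugvars hunk then keeps the ROC write barrier permanently on whenever gcroc >= 1, e.g. for a program started with GODEBUG=gcroc=1. As a rough sketch of how such a name=value list maps onto a dbgvars-style table (simplified; not the runtime's actual parser):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    type dbgVar struct {
        name  string
        value *int32
    }

    func main() {
        var gcroc int32
        vars := []dbgVar{{"gcroc", &gcroc}}
        godebug := "gcroc=1" // stands in for the GODEBUG environment value
        for _, f := range strings.Split(godebug, ",") {
            kv := strings.SplitN(f, "=", 2)
            if len(kv) != 2 {
                continue
            }
            for _, v := range vars {
                if v.name == kv[0] {
                    n, _ := strconv.Atoi(kv[1])
                    *v.value = int32(n)
                }
            }
        }
        fmt.Println("gcroc =", gcroc) // gcroc = 1
    }
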
@@ -1225,3 +1225,8 @@ func morestackc() {
 		throw("attempt to execute C code on Go stack")
 	})
 }
+
+//go:nosplit
+func inStack(p uintptr, s stack) bool {
+	return s.lo <= p && p < s.hi
+}

@@ -295,3 +295,10 @@ func checkASM() bool

 func memequal_varlen(a, b unsafe.Pointer) bool
 func eqstring(s1, s2 string) bool
+
+// bool2int returns 0 if x is false or 1 if x is true.
+func bool2int(x bool) int {
+	// Avoid branches. In the SSA compiler, this compiles to
+	// exactly what you would want it to.
+	return int(uint8(*(*uint8)(unsafe.Pointer(&x))))
+}
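
bool2int, added above, relies on a bool being a single byte holding 0 or 1, so reinterpreting the byte avoids a branch. A tiny self-contained demo of the same trick:

    package main

    import (
        "fmt"
        "unsafe"
    )

    func bool2int(x bool) int {
        return int(*(*uint8)(unsafe.Pointer(&x)))
    }

    func main() {
        fmt.Println(bool2int(false), bool2int(true)) // 0 1
    }

makeSpanClass in the mheap.go hunk uses it to fold the noscan flag into the span class's low bit.
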
@@ -330,9 +330,9 @@ sigtrampnog:
 	// Lock sigprofCallersUse.
 	MOVL	$0, AX
 	MOVL	$1, CX
-	MOVQ	$runtime·sigprofCallersUse(SB), R11
+	MOVQ	$runtime·sigprofCallersUse(SB), BX
 	LOCK
-	CMPXCHGL	CX, 0(R11)
+	CMPXCHGL	CX, 0(BX)
 	JNZ	sigtramp  // Skip stack trace if already locked.

 	// Jump to the traceback function in runtime/cgo.

@@ -18,7 +18,6 @@ import (
 	"log"
 	"os"
 	"regexp"
-	"strings"
 )

 func main() {
@@ -39,16 +38,10 @@ func main() {
 	re = regexp.MustCompile("Pad_cgo[A-Za-z0-9_]*")
 	s = re.ReplaceAllString(s, "_")

-	// We want to keep X__val in Fsid. Hide it and restore it later.
-	s = strings.Replace(s, "X__val", "MKPOSTFSIDVAL", 1)
-
 	// Replace other unwanted fields with blank identifiers.
 	re = regexp.MustCompile("X_[A-Za-z0-9_]*")
 	s = re.ReplaceAllString(s, "_")

-	// Restore X__val in Fsid.
-	s = strings.Replace(s, "MKPOSTFSIDVAL", "X__val", 1)
-
 	// Force the type of RawSockaddr.Data to [14]int8 to match
 	// the existing gccgo API.
 	re = regexp.MustCompile("(Data\\s+\\[14\\])uint8")

@@ -140,7 +140,7 @@ type Dirent struct {
 }

 type Fsid struct {
-	X__val [2]int32
+	_ [2]int32
 }

 type Flock_t struct {

@@ -219,7 +219,7 @@ func (b *B) run1() bool {
 	}
 	// Only print the output if we know we are not going to proceed.
 	// Otherwise it is printed in processBench.
-	if atomic.LoadInt32(&b.hasSub) != 0 || b.finished {
+	if b.hasSub || b.finished {
 		tag := "BENCH"
 		if b.skipped {
 			tag = "SKIP"
@@ -460,13 +460,10 @@ func (ctx *benchContext) processBench(b *B) {
 //
 // A subbenchmark is like any other benchmark. A benchmark that calls Run at
 // least once will not be measured itself and will be called once with N=1.
-//
-// Run may be called simultaneously from multiple goroutines, but all such
-// calls must happen before the outer benchmark function for b returns.
 func (b *B) Run(name string, f func(b *B)) bool {
 	// Since b has subbenchmarks, we will no longer run it as a benchmark itself.
 	// Release the lock and acquire it on exit to ensure locks stay paired.
-	atomic.StoreInt32(&b.hasSub, 1)
+	b.hasSub = true
 	benchmarkLock.Unlock()
 	defer benchmarkLock.Lock()

@@ -6,7 +6,6 @@ package testing

 import (
 	"bytes"
-	"fmt"
 	"regexp"
 	"strings"
 	"sync/atomic"
@@ -516,19 +515,3 @@ func TestBenchmarkOutput(t *T) {
 	Benchmark(func(b *B) { b.Error("do not print this output") })
 	Benchmark(func(b *B) {})
 }
-
-func TestParallelSub(t *T) {
-	c := make(chan int)
-	block := make(chan int)
-	for i := 0; i < 10; i++ {
-		go func(i int) {
-			<-block
-			t.Run(fmt.Sprint(i), func(t *T) {})
-			c <- 1
-		}(i)
-	}
-	close(block)
-	for i := 0; i < 10; i++ {
-		<-c
-	}
-}

@@ -216,7 +216,6 @@ import (
 	"strconv"
 	"strings"
 	"sync"
-	"sync/atomic"
 	"time"
 )

@@ -268,8 +267,8 @@ type common struct {
 	skipped    bool // Test or benchmark has been skipped.
 	finished   bool // Test function has completed.
 	done       bool // Test is finished and all subtests have completed.
-	hasSub     int32 // written atomically
-	raceErrors int   // number of races detected during test
+	hasSub     bool
+	raceErrors int // number of races detected during test

 	parent   *common
 	level    int // Nesting depth of test or benchmark.
@@ -646,7 +645,7 @@ func tRunner(t *T, fn func(t *T)) {
 		// Do not lock t.done to allow race detector to detect race in case
 		// the user does not appropriately synchronize a goroutine.
 		t.done = true
-		if t.parent != nil && atomic.LoadInt32(&t.hasSub) == 0 {
+		if t.parent != nil && !t.hasSub {
 			t.setRan()
 		}
 		t.signal <- true
@@ -660,11 +659,8 @@ func tRunner(t *T, fn func(t *T)) {

 // Run runs f as a subtest of t called name. It reports whether f succeeded.
 // Run will block until all its parallel subtests have completed.
-//
-// Run may be called simultaneously from multiple goroutines, but all such
-// calls must happen before the outer test function for t returns.
 func (t *T) Run(name string, f func(t *T)) bool {
-	atomic.StoreInt32(&t.hasSub, 1)
+	t.hasSub = true
 	testName, ok := t.context.match.fullName(&t.common, name)
 	if !ok {
 		return true
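
With hasSub reverted to a plain bool, the pattern the deleted doc comment promised (calling Run from several goroutines before the outer function returns) would be a data race, which is also why TestParallelSub is removed above. A hypothetical test showing the now-unsupported pattern (TestFanOut is an invented name for illustration):

    package p

    import (
        "fmt"
        "testing"
    )

    func TestFanOut(t *testing.T) { // would race on t.hasSub after this revert
        done := make(chan struct{})
        for i := 0; i < 3; i++ {
            go func(i int) {
                t.Run(fmt.Sprint(i), func(t *testing.T) {})
                done <- struct{}{}
            }(i)
        }
        for i := 0; i < 3; i++ {
            <-done
        }
    }
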
@@ -54,9 +54,9 @@
 	ADCQ  t3, h1; \
 	ADCQ  $0, h2

-DATA ·poly1305Mask<>+0x00(SB)/8, $0x0FFFFFFC0FFFFFFF
-DATA ·poly1305Mask<>+0x08(SB)/8, $0x0FFFFFFC0FFFFFFC
-GLOBL ·poly1305Mask<>(SB), RODATA, $16
+DATA poly1305Mask<>+0x00(SB)/8, $0x0FFFFFFC0FFFFFFF
+DATA poly1305Mask<>+0x08(SB)/8, $0x0FFFFFFC0FFFFFFC
+GLOBL poly1305Mask<>(SB), RODATA, $16

 // func poly1305(out *[16]byte, m *byte, mlen uint64, key *[32]key)
 TEXT ·poly1305(SB), $0-32
@@ -67,8 +67,8 @@ TEXT ·poly1305(SB), $0-32

 	MOVQ 0(AX), R11
 	MOVQ 8(AX), R12
-	ANDQ ·poly1305Mask<>(SB), R11   // r0
-	ANDQ ·poly1305Mask<>+8(SB), R12 // r1
+	ANDQ poly1305Mask<>(SB), R11    // r0
+	ANDQ poly1305Mask<>+8(SB), R12  // r1
 	XORQ R8, R8   // h0
 	XORQ R9, R9   // h1
 	XORQ R10, R10 // h2

@@ -9,12 +9,12 @@
 // This code was translated into a form compatible with 5a from the public
 // domain source by Andrew Moon: github.com/floodyberry/poly1305-opt/blob/master/app/extensions/poly1305.

-DATA ·poly1305_init_constants_armv6<>+0x00(SB)/4, $0x3ffffff
-DATA ·poly1305_init_constants_armv6<>+0x04(SB)/4, $0x3ffff03
-DATA ·poly1305_init_constants_armv6<>+0x08(SB)/4, $0x3ffc0ff
-DATA ·poly1305_init_constants_armv6<>+0x0c(SB)/4, $0x3f03fff
-DATA ·poly1305_init_constants_armv6<>+0x10(SB)/4, $0x00fffff
-GLOBL ·poly1305_init_constants_armv6<>(SB), 8, $20
+DATA poly1305_init_constants_armv6<>+0x00(SB)/4, $0x3ffffff
+DATA poly1305_init_constants_armv6<>+0x04(SB)/4, $0x3ffff03
+DATA poly1305_init_constants_armv6<>+0x08(SB)/4, $0x3ffc0ff
+DATA poly1305_init_constants_armv6<>+0x0c(SB)/4, $0x3f03fff
+DATA poly1305_init_constants_armv6<>+0x10(SB)/4, $0x00fffff
+GLOBL poly1305_init_constants_armv6<>(SB), 8, $20

 // Warning: the linker may use R11 to synthesize certain instructions. Please
 // take care and verify that no synthetic instructions use it.
@@ -27,7 +27,7 @@ TEXT poly1305_init_ext_armv6<>(SB), NOSPLIT, $0
 	ADD $4, R13, R8
 	MOVM.IB [R4-R7], (R8)
 	MOVM.IA.W (R1), [R2-R5]
-	MOVW $·poly1305_init_constants_armv6<>(SB), R7
+	MOVW $poly1305_init_constants_armv6<>(SB), R7
 	MOVW R2, R8
 	MOVW R2>>26, R9
 	MOVW R3>>20, g

@@ -1,18 +0,0 @@
-// compile
-
-// Copyright 2017 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package p
-
-var (
-	e interface{}
-	s = struct{ a *int }{}
-	b = e == s
-)
-
-func test(obj interface{}) {
-	if obj != struct{ a *string }{} {
-	}
-}