Compare commits

..

13 Commits

Author SHA1 Message Date
Rick Hudson
1d4942afe0 [dev.garbage] runtime: determine if an object is public
ROC (request-oriented collector) needs to determine
if an object is visible to other goroutines, i.e.
public. In a later CL this will be used by the write
barrier and the publishing logic to distinguish between
local and public objects and act accordingly.
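
To make the notion concrete, here is a minimal, self-contained sketch of the
local-vs-public distinction; the types and helpers are hypothetical
illustrations, not the runtime's actual implementation.

package main

import "fmt"

// object is a stand-in for a heap object; the fields are illustrative only.
type object struct {
	ownerG int  // id of the goroutine that allocated it (hypothetical)
	public bool // set once the object becomes reachable by other goroutines
}

// publish marks an object as visible to other goroutines, e.g. after it is
// stored in a global or sent on a channel.
func publish(o *object) { o.public = true }

// isPublic reports whether the write barrier and publishing logic must treat
// o as shared rather than goroutine-local.
func isPublic(o *object, currentG int) bool {
	return o.public || o.ownerG != currentG
}

func main() {
	o := &object{ownerG: 1}
	fmt.Println(isPublic(o, 1)) // false: still local to its allocating goroutine
	publish(o)
	fmt.Println(isPublic(o, 1)) // true: now treated as public
}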

Change-Id: I6a80da9deb21f57e831a2ec04e41477f997a8c33
Reviewed-on: https://go-review.googlesource.com/25056
Reviewed-by: Austin Clements <austin@google.com>
2017-05-25 18:05:53 +00:00
Austin Clements
8b25a00e6d Merge branch 'master' into dev.garbage
Change-Id: I36274cf72b8e1908efc8e375cab7880d7b0b3f43
2017-01-11 11:34:07 -05:00
Austin Clements
42afbd9e63 Merge branch 'master' into dev.garbage
Sync to 1.8 beta 1.

Change-Id: Iddd3773babd8fe50aed791ef7150d0c7c455cc8d
2016-12-05 12:15:50 -05:00
Austin Clements
f9f6c90ed1 [dev.garbage] Merge branch 'master' into dev.garbage
This merges master as of the Go 1.7 release.

Change-Id: I95ac0f96a0837173a2b8d7e8aaadf6fecc1baeaf
2016-08-25 11:55:08 -04:00
Rick Hudson
69161e279e [dev.garbage] runtime: Add GODEBUG=gcroc=n
Add command-line boilerplate that turns the ROC algorithm on.
gcroc=0 or not specified:
	does not turn ROC on but some benign code may run.
gcroc=1
	simply turns the algorithm on
gcroc>1
	same as gcroc=1 but with increasing levels of diagnostics
	being turned on. Expect gcroc>1 not to be as performant as
	gcroc=1.
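
For context, GODEBUG settings are comma-separated name=value pairs that the
runtime reads from the environment, so a harness could enable the knob along
these lines (a sketch; ./garbage-bench is a hypothetical binary name):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("./garbage-bench") // hypothetical benchmark binary
	// Append to the inherited environment so only GODEBUG changes.
	cmd.Env = append(os.Environ(), "GODEBUG=gcroc=1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}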

Change-Id: I6348b6768f7a3f8c2f69ef01ea20efebf992029e
Reviewed-on: https://go-review.googlesource.com/21368
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-26 13:21:53 +00:00
Austin Clements
fb4c718209 [dev.garbage] Merge branch 'master' into dev.garbage
Change-Id: I8ba4b012d82921f9521f471b1c0b5a1f6149a986
2016-07-19 17:54:41 -04:00
Austin Clements
81b74bf9c5 [dev.garbage] runtime: make _TinySizeClass an int8 to prevent use as spanClass
Currently _TinySizeClass is untyped, which means it can accidentally
be used as a spanClass (not that I would know this from experience or
anything). Make it an int8 to avoid this mix-up.
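
The hazard being fixed is a general property of Go's untyped constants; a
small standalone illustration (generic names, not the runtime's actual
declarations):

package main

type spanClass uint8

const (
	untypedClass      = 2 // untyped: assignable wherever an integer constant fits
	typedClass   int8 = 2 // typed int8: needs an explicit conversion to spanClass
)

func take(c spanClass) {}

func main() {
	take(untypedClass) // compiles: the untyped constant converts implicitly
	// take(typedClass)         // rejected: cannot use an int8 value as spanClass
	take(spanClass(typedClass)) // allowed only with an explicit conversion
}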

Change-Id: I1e69eccee436ea5aa45e9a9828a013e369e03f1a
Reviewed-on: https://go-review.googlesource.com/24372
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-06-23 16:26:08 +00:00
Austin Clements
edb54c300f [dev.garbage] runtime: eliminate heapBitsSetTypeNoScan
It's no longer necessary to maintain the bitmap of noscan objects,
since we now use the span metadata, rather than the bitmap, to
determine that they're noscan.

The combined effect of segregating noscan spans and the follow-on
optimizations is essentially no change in the go1 benchmarks and a 1.19%
improvement in the garbage benchmark:

name              old time/op  new time/op  delta
XBenchGarbage-12  2.13ms ± 2%  2.11ms ± 1%  -1.19%  (p=0.000 n=19+20)

name                      old time/op    new time/op    delta
BinaryTree17-12              2.42s ± 1%     2.41s ± 1%  -0.50%  (p=0.001 n=20+17)
Fannkuch11-12                2.14s ± 0%     2.12s ± 0%  -0.91%  (p=0.000 n=20+17)
FmtFprintfEmpty-12          45.2ns ± 0%    45.2ns ± 1%    ~     (p=0.677 n=16+19)
FmtFprintfString-12          131ns ± 0%     132ns ± 1%  +0.57%  (p=0.000 n=16+20)
FmtFprintfInt-12             126ns ± 1%     126ns ± 0%    ~     (p=0.078 n=18+16)
FmtFprintfIntInt-12          199ns ± 0%     195ns ± 0%  -2.19%  (p=0.000 n=14+20)
FmtFprintfPrefixedInt-12     196ns ± 1%     196ns ± 0%    ~     (p=0.155 n=19+16)
FmtFprintfFloat-12           254ns ± 0%     253ns ± 0%  -0.50%  (p=0.000 n=14+19)
FmtManyArgs-12               803ns ± 1%     798ns ± 0%  -0.71%  (p=0.000 n=18+19)
GobDecode-12                7.11ms ± 1%    7.07ms ± 1%  -0.50%  (p=0.024 n=18+19)
GobEncode-12                5.87ms ± 0%    5.86ms ± 1%    ~     (p=0.113 n=19+20)
Gzip-12                      218ms ± 1%     218ms ± 1%    ~     (p=0.879 n=19+20)
Gunzip-12                   37.2ms ± 0%    37.3ms ± 0%  +0.14%  (p=0.047 n=19+20)
HTTPClientServer-12         80.5µs ± 6%    82.8µs ± 8%  +2.91%  (p=0.008 n=19+20)
JSONEncode-12               15.4ms ± 1%    15.4ms ± 1%  -0.32%  (p=0.003 n=18+19)
JSONDecode-12               55.1ms ± 1%    53.0ms ± 1%  -3.87%  (p=0.000 n=18+20)
Mandelbrot200-12            4.08ms ± 1%    4.10ms ± 1%  +0.34%  (p=0.001 n=19+17)
GoParse-12                  3.20ms ± 1%    3.21ms ± 1%    ~     (p=0.138 n=19+19)
RegexpMatchEasy0_32-12      70.6ns ± 2%    70.4ns ± 2%    ~     (p=0.343 n=20+20)
RegexpMatchEasy0_1K-12       240ns ± 0%     242ns ± 2%  +0.88%  (p=0.000 n=15+20)
RegexpMatchEasy1_32-12      70.5ns ± 1%    70.2ns ± 3%    ~     (p=0.053 n=18+20)
RegexpMatchEasy1_1K-12       374ns ± 1%     374ns ± 1%    ~     (p=0.705 n=20+19)
RegexpMatchMedium_32-12      108ns ± 1%     108ns ± 1%    ~     (p=0.854 n=20+19)
RegexpMatchMedium_1K-12     33.5µs ± 1%    33.6µs ± 2%    ~     (p=0.897 n=18+20)
RegexpMatchHard_32-12       1.76µs ± 1%    1.75µs ± 1%    ~     (p=0.771 n=18+18)
RegexpMatchHard_1K-12       52.8µs ± 1%    52.8µs ± 1%    ~     (p=0.678 n=17+19)
Revcomp-12                   381ms ± 1%     380ms ± 0%    ~     (p=0.320 n=20+16)
Template-12                 65.6ms ± 1%    65.1ms ± 2%  -0.75%  (p=0.003 n=20+20)
TimeParse-12                 324ns ± 1%     326ns ± 1%  +0.72%  (p=0.000 n=18+18)
TimeFormat-12                342ns ± 0%     343ns ± 1%  +0.22%  (p=0.004 n=15+18)
[Geo mean]                  52.4µs         52.3µs       -0.18%

Change-Id: Ic77faaa15cdac3bfbbb0032dde5c204e05a0fd8e
Reviewed-on: https://go-review.googlesource.com/23702
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-06-15 21:17:15 +00:00
Austin Clements
312aa09996 [dev.garbage] runtime: eliminate heapBits.hasPointers
This is no longer necessary now that we can more efficiently consult
the span's noscan bit.
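
A rough sketch of the difference; the types below are illustrative stand-ins,
not the actual heapBits/mspan structures:

package main

import "fmt"

// span is a stand-in for a run of same-sized heap objects.
type span struct {
	noscan  bool   // new scheme: one flag for the whole span
	objBits []bool // old scheme: one "noscan" bit per object
}

// hasPointersOld consults the per-object bitmap.
func hasPointersOld(s *span, objIndex int) bool { return !s.objBits[objIndex] }

// hasPointersNew consults the span-level flag; no per-object lookup is needed.
func hasPointersNew(s *span) bool { return !s.noscan }

func main() {
	s := &span{noscan: true, objBits: []bool{true, true, true}}
	fmt.Println(hasPointersOld(s, 0), hasPointersNew(s)) // false false
}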

Change-Id: Id0b00b278533660973f45eb6efa5b00f373d58af
Reviewed-on: https://go-review.googlesource.com/23701
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-06-15 21:17:04 +00:00
Austin Clements
d491e550c3 [dev.garbage] runtime: separate spans of noscan objects
Currently, we mix objects with pointers and objects without pointers
("noscan" objects) together in memory. As a result, for every object
we grey, we have to check that object's heap bits to find out if it's
noscan, which adds to the per-object cost of GC. This also hurts the
TLB footprint of the garbage collector because it decreases the
density of scannable objects at the page level.

This commit improves the situation by using separate spans for noscan
objects. This will allow a much simpler noscan check (in a follow-up
CL), eliminate the need to clear the bitmap of noscan objects (in a
follow-up CL), and improve the TLB footprint by increasing the density of
scannable objects.

This is also a step toward eliminating dead bits, since the current
noscan check depends on checking the dead bit of the first word.

This has no effect on the heap size of the garbage benchmark.

We'll measure the performance change of this after the follow-up
optimizations.
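
One way to realize "separate spans for noscan objects" is to fold the noscan
property into the span's size-class value, so the allocator naturally keeps
the two kinds apart. The sketch below uses illustrative names; the tree's
final encoding may differ in detail.

package main

import "fmt"

type spanClass uint8

// makeSpanClass packs a size class and a noscan flag into one value:
// the size class in the high bits, noscan in the low bit.
func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
	sc := spanClass(sizeclass) << 1
	if noscan {
		sc |= 1
	}
	return sc
}

func (sc spanClass) sizeclass() uint8 { return uint8(sc >> 1) }
func (sc spanClass) noscan() bool     { return sc&1 != 0 }

func main() {
	sc := makeSpanClass(5, true)
	fmt.Println(sc.sizeclass(), sc.noscan()) // 5 true
}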

Change-Id: I13bdc4869538ece5649a8d2a41c6605371618e40
Reviewed-on: https://go-review.googlesource.com/23700
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-06-15 21:16:53 +00:00
Austin Clements
641c32dafa [dev.garbage] Merge branch 'master' into dev.garbage
Change-Id: I7ab2afca656e8c145804d9823cd084b8a85bccd7
2016-06-06 10:01:41 -04:00
Austin Clements
2e495a1df6 [dev.garbage] Merge branch 'master' into dev.garbage
Change-Id: I35edefb4464566601850081ecc84dd3535d60ceb
2016-05-16 14:29:53 -04:00
Austin Clements
344476d23c [dev.garbage] Merge branch 'master' into dev.garbage
Change-Id: I2e04fd9e7071efe33ce76f2f10a8dbde53ba90b9
2016-05-09 14:49:54 -04:00
103 changed files with 572 additions and 1547 deletions

View File

@@ -5,37 +5,39 @@ reliable, and efficient software.
![Gopher image](doc/gopher/fiveyears.jpg)
For documentation about how to install and use Go,
visit https://golang.org/ or load doc/install-source.html
in your web browser.
Our canonical Git repository is located at https://go.googlesource.com/go.
There is a mirror of the repository at https://github.com/golang/go.
Unless otherwise noted, the Go source files are distributed under the
BSD-style license found in the LICENSE file.
### Download and Install
#### Binary Distributions
Official binary distributions are available at https://golang.org/dl/.
After downloading a binary release, visit https://golang.org/doc/install
or load doc/install.html in your web browser for installation
instructions.
#### Install From Source
If a binary distribution is not available for your combination of
operating system and architecture, visit
https://golang.org/doc/install/source or load doc/install-source.html
in your web browser for source installation instructions.
### Contributing
Go is the work of hundreds of contributors. We appreciate your help!
To contribute, please read the contribution guidelines:
https://golang.org/doc/contribute.html
Note that the Go project does not use GitHub pull requests, and that
we use the issue tracker for bug reports and proposals only. See
https://golang.org/wiki/Questions for a list of places to ask
questions about the Go language.
##### Note that we do not accept pull requests and that we use the issue tracker for bug reports and proposals only. Please ask questions on https://forum.golangbridge.org or https://groups.google.com/forum/#!forum/golang-nuts.
Unless otherwise noted, the Go source files are distributed
under the BSD-style license found in the LICENSE file.
--
## Binary Distribution Notes
If you have just untarred a binary Go distribution, you need to set
the environment variable $GOROOT to the full path of the go
directory (the one containing this file). You can omit the
variable if you unpack it into /usr/local/go, or if you rebuild
from sources by running all.bash (see doc/install-source.html).
You should also add the Go binary directory $GOROOT/bin
to your shell's path.
For example, if you extracted the tar file into $HOME/go, you might
put the following in your .profile:
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin
See https://golang.org/doc/install or doc/install.html for more details.

View File

@@ -1 +0,0 @@
go1.8rc3

View File

@@ -698,7 +698,7 @@ These files will be periodically updated based on the commit logs.
<p>Code that you contribute should use the standard copyright header:</p>
<pre>
// Copyright 2017 The Go Authors. All rights reserved.
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
</pre>

View File

@@ -4,8 +4,7 @@
}-->
<p><i>
This applies to the standard toolchain (the <code>gc</code> Go
compiler and tools). Gccgo has native gdb support.
This applies to the <code>gc</code> toolchain. Gccgo has native gdb support.
Besides this overview you might want to consult the
<a href="http://sourceware.org/gdb/current/onlinedocs/gdb/">GDB manual</a>.
</i></p>
@@ -50,14 +49,6 @@ when debugging, pass the flags <code>-gcflags "-N -l"</code> to the
debugged.
</p>
<p>
If you want to use gdb to inspect a core dump, you can trigger a dump
on a program crash, on systems that permit it, by setting
<code>GOTRACEBACK=crash</code> in the environment (see the
<a href="/pkg/runtime/#hdr-Environment_Variables"> runtime package
documentation</a> for more info).
</p>
<h3 id="Common_Operations">Common Operations</h3>
<ul>
@@ -139,7 +130,7 @@ the DWARF code.
<p>
If you're interested in what the debugging information looks like, run
'<code>objdump -W a.out</code>' and browse through the <code>.debug_*</code>
'<code>objdump -W 6.out</code>' and browse through the <code>.debug_*</code>
sections.
</p>
@@ -386,9 +377,7 @@ $3 = struct hchan&lt;*testing.T&gt;
</pre>
<p>
That <code>struct hchan&lt;*testing.T&gt;</code> is the
runtime-internal representation of a channel. It is currently empty,
or gdb would have pretty-printed its contents.
That <code>struct hchan&lt;*testing.T&gt;</code> is the runtime-internal representation of a channel. It is currently empty, or gdb would have pretty-printed it's contents.
</p>
<p>

View File

@@ -52,19 +52,6 @@ user libraries. The Go 1.4 runtime is not fully merged, but that
should not be visible to Go programs.
</p>
<p>
The GCC 6 releases include a complete implementation of the Go 1.6.1
user libraries. The Go 1.6 runtime is not fully merged, but that
should not be visible to Go programs.
</p>
<p>
The GCC 7 releases are expected to include a complete implementation
of the Go 1.8 user libraries. As with earlier releases, the Go 1.8
runtime is not fully merged, but that should not be visible to Go
programs.
</p>
<h2 id="Source_code">Source code</h2>
<p>
@@ -173,6 +160,23 @@ make
make install
</pre>
<h3 id="Ubuntu">A note on Ubuntu</h3>
<p>
Current versions of Ubuntu and versions of GCC before 4.8 disagree on
where system libraries and header files are found. This is not a
gccgo issue. When building older versions of GCC, setting these
environment variables while configuring and building gccgo may fix the
problem.
</p>
<pre>
LIBRARY_PATH=/usr/lib/x86_64-linux-gnu
C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu
CPLUS_INCLUDE_PATH=/usr/include/x86_64-linux-gnu
export LIBRARY_PATH C_INCLUDE_PATH CPLUS_INCLUDE_PATH
</pre>
<h2 id="Using_gccgo">Using gccgo</h2>
<p>
@@ -360,15 +364,12 @@ or with C++ code compiled using <code>extern "C"</code>.
<h3 id="Types">Types</h3>
<p>
Basic types map directly: an <code>int32</code> in Go is
an <code>int32_t</code> in C, an <code>int64</code> is
an <code>int64_t</code>, etc.
The Go type <code>int</code> is an integer that is the same size as a
pointer, and as such corresponds to the C type <code>intptr_t</code>.
Go <code>byte</code> is equivalent to C <code>unsigned char</code>.
Pointers in Go are pointers in C.
A Go <code>struct</code> is the same as C <code>struct</code> with the
same fields and types.
Basic types map directly: an <code>int</code> in Go is an <code>int</code>
in C, an <code>int32</code> is an <code>int32_t</code>,
etc. Go <code>byte</code> is equivalent to C <code>unsigned
char</code>.
Pointers in Go are pointers in C. A Go <code>struct</code> is the same as C
<code>struct</code> with the same fields and types.
</p>
<p>
@@ -379,7 +380,7 @@ structure (this is <b style="color: red;">subject to change</b>):
<pre>
struct __go_string {
const unsigned char *__data;
intptr_t __length;
int __length;
};
</pre>
@@ -399,8 +400,8 @@ A slice in Go is a structure. The current definition is
<pre>
struct __go_slice {
void *__values;
intptr_t __count;
intptr_t __capacity;
int __count;
int __capacity;
};
</pre>
@@ -525,3 +526,15 @@ This procedure is full of unstated caveats and restrictions and we make no
guarantee that it will not change in the future. It is more useful as a
starting point for real Go code than as a regular procedure.
</p>
<h2 id="RTEMS_Port">RTEMS Port</h2>
<p>
The gccgo compiler has been ported to <a href="http://www.rtems.com/">
<code>RTEMS</code></a>. <code>RTEMS</code> is a real-time executive
that provides a high performance environment for embedded applications
on a range of processors and embedded hardware. The current gccgo
port is for x86. The goal is to extend the port to most of the
<a href="http://www.rtems.org/wiki/index.php/SupportedCPUs">
architectures supported by <code>RTEMS</code></a>. For more information on the port,
as well as instructions on how to install it, please see this
<a href="http://www.rtems.org/wiki/index.php/GCCGoRTEMS"><code>RTEMS</code> Wiki page</a>.

View File

@@ -93,8 +93,7 @@ On OpenBSD, Go now requires OpenBSD 5.9 or later. <!-- CL 34093 -->
<p>
The Plan 9 port's networking support is now much more complete
and matches the behavior of Unix and Windows with respect to deadlines
and cancelation. For Plan 9 kernel requirements, see the
<a href="https://golang.org/wiki/Plan9">Plan 9 wiki page</a>.
and cancelation.
</p>
<p>
@@ -435,11 +434,11 @@ version of gccgo.
<h3 id="plugin">Plugins</h3>
<p>
Go now provides early support for plugins with a “<code>plugin</code>
build mode for generating plugins written in Go, and a
Go now supports a “<code>plugin</code> build mode for generating
plugins written in Go, and a
new <a href="/pkg/plugin/"><code>plugin</code></a> package for
loading such plugins at run time. Plugin support is currently only
available on Linux. Please report any issues.
loading such plugins at run time. Plugin support is only currently
available on Linux.
</p>
<h2 id="runtime">Runtime</h2>
@@ -809,6 +808,11 @@ Optimizations and minor bug fixes are not listed.
<dl id="crypto_x509"><dt><a href="/pkg/crypto/x509/">crypto/x509</a></dt>
<dd>
<p> <!-- CL 30578 -->
<a href="/pkg/crypto/x509/#SystemCertPool"><code>SystemCertPool</code></a>
is now implemented on Windows.
</p>
<p> <!-- CL 24743 -->
PSS signatures are now supported.
</p>
@@ -1613,9 +1617,9 @@ crypto/x509: return error for missing SerialNumber (CL 27238)
June 31 and July 32.
</p>
<p> <!-- CL 33029 --> <!-- CL 34816 -->
<p> <!-- CL 33029 -->
The <code>tzdata</code> database has been updated to version
2016j for systems that don't already have a local time zone
2016i for systems that don't already have a local time zone
database.
</p>
@@ -1645,17 +1649,6 @@ crypto/x509: return error for missing SerialNumber (CL 27238)
and only the overall execution of the test binary would fail.
</p>
<p><!-- CL 32455 -->
The signature of the
<a href="/pkg/testing/#MainStart"><code>MainStart</code></a>
function has changed, as allowed by the documentation. It is an
internal detail and not part of the Go 1 compatibility promise.
If you're not calling <code>MainStart</code> directly but see
errors, that likely means you set the
normally-empty <code>GOROOT</code> environment variable and it
doesn't match the version of your <code>go</code> command's binary.
</p>
</dd>
</dl>

View File

@@ -147,9 +147,6 @@ either the git branch <code>release-branch.go1.4</code> or
which contains the Go 1.4 source code plus accumulated fixes
to keep the tools running on newer operating systems.
(Go 1.4 was the last distribution in which the tool chain was written in C.)
After unpacking the Go 1.4 source, <code>cd</code> to
the <code>src</code> subdirectory and run <code>make.bash</code> (or,
on Windows, <code>make.bat</code>).
</p>
<p>

View File

@@ -250,7 +250,7 @@ $ <b>cd $HOME/go/src/hello</b>
$ <b>go build</b>
</pre>
<pre class="testWindows">
<pre class="testWindows" style="display: none">
C:\&gt; <b>cd %USERPROFILE%\go\src\hello</b>
C:\Users\Gopher\go\src\hello&gt; <b>go build</b>
</pre>
@@ -267,7 +267,7 @@ $ <b>./hello</b>
hello, world
</pre>
<pre class="testWindows">
<pre class="testWindows" style="display: none">
C:\Users\Gopher\go\src\hello&gt; <b>hello</b>
hello, world
</pre>

View File

@@ -73,7 +73,7 @@ func test18146(t *testing.T) {
}
runtime.GOMAXPROCS(threads)
argv := append(os.Args, "-test.run=NoSuchTestExists")
if err := syscall.Exec(os.Args[0], argv, os.Environ()); err != nil {
if err := syscall.Exec(os.Args[0], argv, nil); err != nil {
t.Fatal(err)
}
}

View File

@@ -17,7 +17,7 @@ package cgotest
static stack_t oss;
static char signalStack[SIGSTKSZ];
static void changeSignalStack(void) {
static void changeSignalStack() {
stack_t ss;
memset(&ss, 0, sizeof ss);
ss.ss_sp = signalStack;
@@ -29,7 +29,7 @@ static void changeSignalStack(void) {
}
}
static void restoreSignalStack(void) {
static void restoreSignalStack() {
#if (defined(__x86_64__) || defined(__i386__)) && defined(__APPLE__)
// The Darwin C library enforces a minimum that the kernel does not.
// This is OK since we allocated this much space in mpreinit,
@@ -42,7 +42,7 @@ static void restoreSignalStack(void) {
}
}
static int zero(void) {
static int zero() {
return 0;
}
*/

View File

@@ -1,46 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"iface_i"
"log"
"plugin"
)
func main() {
a, err := plugin.Open("iface_a.so")
if err != nil {
log.Fatalf(`plugin.Open("iface_a.so"): %v`, err)
}
b, err := plugin.Open("iface_b.so")
if err != nil {
log.Fatalf(`plugin.Open("iface_b.so"): %v`, err)
}
af, err := a.Lookup("F")
if err != nil {
log.Fatalf(`a.Lookup("F") failed: %v`, err)
}
bf, err := b.Lookup("F")
if err != nil {
log.Fatalf(`b.Lookup("F") failed: %v`, err)
}
if af.(func() interface{})() != bf.(func() interface{})() {
panic("empty interfaces not equal")
}
ag, err := a.Lookup("G")
if err != nil {
log.Fatalf(`a.Lookup("G") failed: %v`, err)
}
bg, err := b.Lookup("G")
if err != nil {
log.Fatalf(`b.Lookup("G") failed: %v`, err)
}
if ag.(func() iface_i.I)() != bg.(func() iface_i.I)() {
panic("nonempty interfaces not equal")
}
}

View File

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import "iface_i"
//go:noinline
func F() interface{} {
return (*iface_i.T)(nil)
}
//go:noinline
func G() iface_i.I {
return (*iface_i.T)(nil)
}

View File

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import "iface_i"
//go:noinline
func F() interface{} {
return (*iface_i.T)(nil)
}
//go:noinline
func G() iface_i.I {
return (*iface_i.T)(nil)
}

View File

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package iface_i
type I interface {
M()
}
type T struct {
}
func (t *T) M() {
}
// *T implements I

View File

@@ -1,13 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package dynamodbstreamsevt
import "encoding/json"
var foo json.RawMessage
type Event struct{}
func (e *Event) Dummy() {}

View File

@@ -1,31 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The bug happened like this:
// 1) The main binary adds an itab for *json.UnsupportedValueError / error
// (concrete type / interface type). This itab goes in hash bucket 0x111.
// 2) The plugin adds that same itab again. That makes a cycle in the itab
// chain rooted at hash bucket 0x111.
// 3) The main binary then asks for the itab for *dynamodbstreamsevt.Event /
// json.Unmarshaler. This itab happens to also live in bucket 0x111.
// The lookup code goes into an infinite loop searching for this itab.
// The code is carefully crafted so that the two itabs are both from the
// same bucket, and so that the second itab doesn't exist in
// the itab hashmap yet (so the entire linked list must be searched).
package main
import (
"encoding/json"
"issue18676/dynamodbstreamsevt"
"plugin"
)
func main() {
plugin.Open("plugin.so")
var x interface{} = (*dynamodbstreamsevt.Event)(nil)
if _, ok := x.(json.Unmarshaler); !ok {
println("something")
}
}

View File

@@ -1,11 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import "C"
import "issue18676/dynamodbstreamsevt"
func F(evt *dynamodbstreamsevt.Event) {}

View File

@@ -15,8 +15,8 @@ goos=$(go env GOOS)
goarch=$(go env GOARCH)
function cleanup() {
rm -f plugin*.so unnamed*.so iface*.so
rm -rf host pkg sub iface issue18676
rm -f plugin*.so unnamed*.so
rm -rf host pkg sub
}
trap cleanup EXIT
@@ -32,15 +32,3 @@ GOPATH=$(pwd) go build -buildmode=plugin unnamed2.go
GOPATH=$(pwd) go build host
LD_LIBRARY_PATH=$(pwd) ./host
# Test that types and itabs get properly uniqified.
GOPATH=$(pwd) go build -buildmode=plugin iface_a
GOPATH=$(pwd) go build -buildmode=plugin iface_b
GOPATH=$(pwd) go build iface
LD_LIBRARY_PATH=$(pwd) ./iface
# Test for issue 18676 - make sure we don't add the same itab twice.
# The buggy code hangs forever, so use a timeout to check for that.
GOPATH=$(pwd) go build -buildmode=plugin -o plugin.so src/issue18676/plugin.go
GOPATH=$(pwd) go build -o issue18676 src/issue18676/main.go
timeout 10s ./issue18676

View File

@@ -1,12 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This program segfaulted during libpreinit when built with -msan:
// http://golang.org/issue/18707
package main
import "C"
func main() {}

View File

@@ -68,25 +68,6 @@ fi
status=0
testmsanshared() {
goos=$(go env GOOS)
suffix="-installsuffix testsanitizers"
libext="so"
if [ "$goos" == "darwin" ]; then
libext="dylib"
fi
go build -msan -buildmode=c-shared $suffix -o ${TMPDIR}/libmsanshared.$libext msan_shared.go
echo 'int main() { return 0; }' > ${TMPDIR}/testmsanshared.c
$CC $(go env GOGCCFLAGS) -fsanitize=memory -o ${TMPDIR}/testmsanshared ${TMPDIR}/testmsanshared.c ${TMPDIR}/libmsanshared.$libext
if ! LD_LIBRARY_PATH=. ${TMPDIR}/testmsanshared; then
echo "FAIL: msan_shared"
status=1
fi
rm -f ${TMPDIR}/{testmsanshared,testmsanshared.c,libmsanshared.$libext}
}
if test "$msan" = "yes"; then
if ! go build -msan std; then
echo "FAIL: build -msan std"
@@ -127,8 +108,6 @@ if test "$msan" = "yes"; then
echo "FAIL: msan_fail"
status=1
fi
testmsanshared
fi
if test "$tsan" = "yes"; then

View File

@@ -815,14 +815,3 @@ func TestImplicitInclusion(t *testing.T) {
goCmd(t, "install", "-linkshared", "implicitcmd")
run(t, "running executable linked against library that contains same package as it", "./bin/implicitcmd")
}
// Tests to make sure that the type fields of empty interfaces and itab
// fields of nonempty interfaces are unique even across modules,
// so that interface equality works correctly.
func TestInterface(t *testing.T) {
goCmd(t, "install", "-buildmode=shared", "-linkshared", "iface_a")
// Note: iface_i gets installed implicitly as a dependency of iface_a.
goCmd(t, "install", "-buildmode=shared", "-linkshared", "iface_b")
goCmd(t, "install", "-linkshared", "iface")
run(t, "running type/itab uniqueness tester", "./bin/iface")
}

View File

@@ -5,8 +5,6 @@ import (
"reflect"
)
var SlicePtr interface{} = &[]int{}
var V int = 1
var HasMask []string = []string{"hi"}

View File

@@ -19,8 +19,6 @@ func F() *C {
return nil
}
var slicePtr interface{} = &[]int{}
func main() {
defer depBase.ImplementedInAsm()
// This code below causes various go.itab.* symbols to be generated in
@@ -34,11 +32,4 @@ func main() {
if reflect.TypeOf(F).Out(0) != reflect.TypeOf(c) {
panic("bad reflection results, see golang.org/issue/18252")
}
sp := reflect.New(reflect.TypeOf(slicePtr).Elem())
s := sp.Interface()
if reflect.TypeOf(s) != reflect.TypeOf(slicePtr) {
panic("bad reflection results, see golang.org/issue/18729")
}
}

View File

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import "iface_a"
import "iface_b"
func main() {
if iface_a.F() != iface_b.F() {
panic("empty interfaces not equal")
}
if iface_a.G() != iface_b.G() {
panic("non-empty interfaces not equal")
}
}

View File

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package iface_a
import "iface_i"
//go:noinline
func F() interface{} {
return (*iface_i.T)(nil)
}
//go:noinline
func G() iface_i.I {
return (*iface_i.T)(nil)
}

View File

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package iface_b
import "iface_i"
//go:noinline
func F() interface{} {
return (*iface_i.T)(nil)
}
//go:noinline
func G() iface_i.I {
return (*iface_i.T)(nil)
}

View File

@@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package iface_i
type I interface {
M()
}
type T struct {
}
func (t *T) M() {
}
// *T implements I

View File

@@ -99,7 +99,7 @@ func main() {
// Approximately 1 in a 100 binaries fail to start. If it happens,
// try again. These failures happen for several reasons beyond
// our control, but all of them are safe to retry as they happen
// before lldb encounters the initial SIGUSR2 stop. As we
// before lldb encounters the initial getwd breakpoint. As we
// know the tests haven't started, we are not hiding flaky tests
// with this retry.
for i := 0; i < 5; i++ {
@@ -204,11 +204,6 @@ func run(bin string, args []string) (err error) {
var opts options
opts, args = parseArgs(args)
// Pass the suffix for the current working directory as the
// first argument to the test. For iOS, cmd/go generates
// special handling of this argument.
args = append([]string{"cwdSuffix=" + pkgpath}, args...)
// ios-deploy invokes lldb to give us a shell session with the app.
s, err := newSession(appdir, args, opts)
if err != nil {
@@ -229,7 +224,6 @@ func run(bin string, args []string) (err error) {
s.do(`process handle SIGHUP --stop false --pass true --notify false`)
s.do(`process handle SIGPIPE --stop false --pass true --notify false`)
s.do(`process handle SIGUSR1 --stop false --pass true --notify false`)
s.do(`process handle SIGUSR2 --stop true --pass false --notify true`) // sent by test harness
s.do(`process handle SIGCONT --stop false --pass true --notify false`)
s.do(`process handle SIGSEGV --stop false --pass true --notify false`) // does not work
s.do(`process handle SIGBUS --stop false --pass true --notify false`) // does not work
@@ -242,9 +236,20 @@ func run(bin string, args []string) (err error) {
return nil
}
s.do(`breakpoint set -n getwd`) // in runtime/cgo/gcc_darwin_arm.go
started = true
s.doCmd("run", "stop reason = signal SIGUSR2", 20*time.Second)
s.doCmd("run", "stop reason = breakpoint", 20*time.Second)
// Move the current working directory into the faux gopath.
if pkgpath != "src" {
s.do(`breakpoint delete 1`)
s.do(`expr char* $mem = (char*)malloc(512)`)
s.do(`expr $mem = (char*)getwd($mem, 512)`)
s.do(`expr $mem = (char*)strcat($mem, "/` + pkgpath + `")`)
s.do(`call (void)chdir($mem)`)
}
startTestsLen := s.out.Len()
fmt.Fprintln(s.in, `process continue`)
@@ -515,11 +520,13 @@ func copyLocalData(dstbase string) (pkgpath string, err error) {
// Copy timezone file.
//
// Apps have the zoneinfo.zip in the root of their app bundle,
// Typical apps have the zoneinfo.zip in the root of their app bundle,
// read by the time package as the working directory at initialization.
// As we move the working directory to the GOROOT pkg directory, we
// install the zoneinfo.zip file in the pkgpath.
if underGoRoot {
err := cp(
dstbase,
filepath.Join(dstbase, pkgpath),
filepath.Join(cwd, "lib", "time", "zoneinfo.zip"),
)
if err != nil {

View File

@@ -303,9 +303,7 @@ func genhash(sym *Sym, t *Type) {
typecheckslice(fn.Nbody.Slice(), Etop)
Curfn = nil
popdcl()
if debug_dclstack != 0 {
testdclstack()
}
testdclstack()
// Disable safemode while compiling this code: the code we
// generate internally can refer to unsafe.Pointer.
@@ -495,9 +493,7 @@ func geneq(sym *Sym, t *Type) {
typecheckslice(fn.Nbody.Slice(), Etop)
Curfn = nil
popdcl()
if debug_dclstack != 0 {
testdclstack()
}
testdclstack()
// Disable safemode while compiling this code: the code we
// generate internally can refer to unsafe.Pointer.

View File

@@ -217,9 +217,7 @@ func Import(in *bufio.Reader) {
typecheckok = tcok
resumecheckwidth()
if debug_dclstack != 0 {
testdclstack()
}
testdclstack() // debugging only
}
func formatErrorf(format string, args ...interface{}) {

View File

@@ -30,12 +30,11 @@ var (
)
var (
Debug_append int
Debug_closure int
debug_dclstack int
Debug_panic int
Debug_slice int
Debug_wb int
Debug_append int
Debug_closure int
Debug_panic int
Debug_slice int
Debug_wb int
)
// Debug arguments.
@@ -49,7 +48,6 @@ var debugtab = []struct {
{"append", &Debug_append}, // print information about append compilation
{"closure", &Debug_closure}, // print information about closure compilation
{"disablenil", &disable_checknil}, // disable nil checks
{"dclstack", &debug_dclstack}, // run internal dclstack checks
{"gcprog", &Debug_gcprog}, // print dump of GC programs
{"nil", &Debug_checknil}, // print information about nil checks
{"panic", &Debug_panic}, // do not hide any compiler panic
@@ -327,6 +325,7 @@ func Main() {
timings.Stop()
timings.AddEvent(int64(lexlineno-lexlineno0), "lines")
testdclstack()
mkpackage(localpkg.Name) // final import not used checks
finishUniverse()

View File

@@ -34,7 +34,6 @@ func parseFile(filename string) {
}
if nsyntaxerrors == 0 {
// Always run testdclstack here, even when debug_dclstack is not set, as a sanity measure.
testdclstack()
}
}

View File

@@ -1833,9 +1833,7 @@ func genwrapper(rcvr *Type, method *Field, newnam *Sym, iface int) {
funcbody(fn)
Curfn = fn
popdcl()
if debug_dclstack != 0 {
testdclstack()
}
testdclstack()
// wrappers where T is anonymous (struct or interface) can be duplicated.
if rcvr.IsStruct() || rcvr.IsInterface() || rcvr.IsPtr() && rcvr.Elem().IsStruct() {

View File

@@ -57,13 +57,8 @@ func startProfile() {
Fatalf("%v", err)
}
atExit(func() {
// Profile all outstanding allocations.
runtime.GC()
// compilebench parses the memory profile to extract memstats,
// which are only written in the legacy pprof format.
// See golang.org/issue/18641 and runtime/pprof/pprof.go:writeHeap.
const writeLegacyFormat = 1
if err := pprof.Lookup("heap").WriteTo(f, writeLegacyFormat); err != nil {
runtime.GC() // profile all outstanding allocations
if err := pprof.WriteHeapProfile(f); err != nil {
Fatalf("%v", err)
}
})

View File

@@ -3117,12 +3117,12 @@ func walkcompare(n *Node, init *Nodes) *Node {
cmpr = cmpr.Left
}
if !islvalue(cmpl) || !islvalue(cmpr) {
Fatalf("arguments of comparison must be lvalues - %v %v", cmpl, cmpr)
}
// Chose not to inline. Call equality function directly.
if !inline {
if !islvalue(cmpl) || !islvalue(cmpr) {
Fatalf("arguments of comparison must be lvalues - %v %v", cmpl, cmpr)
}
// eq algs take pointers
pl := temp(ptrto(t))
al := nod(OAS, pl, nod(OADDR, cmpl, nil))

View File

@@ -885,9 +885,9 @@
(MOVDEQ y _ (FlagLT)) -> y
(MOVDEQ y _ (FlagGT)) -> y
(MOVDNE y _ (FlagEQ)) -> y
(MOVDNE _ x (FlagLT)) -> x
(MOVDNE _ x (FlagGT)) -> x
(MOVDNE _ y (FlagEQ)) -> y
(MOVDNE x _ (FlagLT)) -> x
(MOVDNE x _ (FlagGT)) -> x
(MOVDLT y _ (FlagEQ)) -> y
(MOVDLT _ x (FlagLT)) -> x

View File

@@ -82,7 +82,7 @@ func nilcheckelim(f *Func) {
}
}
// Next, eliminate any redundant nil checks in this block.
// Next, process values in the block.
i := 0
for _, v := range b.Values {
b.Values[i] = v
@@ -105,10 +105,13 @@ func nilcheckelim(f *Func) {
f.Config.Warnl(v.Line, "removed nil check")
}
v.reset(OpUnknown)
// TODO: f.freeValue(v)
i--
continue
}
// Record the fact that we know ptr is non nil, and remember to
// undo that information when this dominator subtree is done.
nonNilValues[ptr.ID] = true
work = append(work, bp{op: ClearPtr, ptr: ptr})
}
}
for j := i; j < len(b.Values); j++ {
@@ -116,21 +119,6 @@ func nilcheckelim(f *Func) {
}
b.Values = b.Values[:i]
// Finally, find redundant nil checks for subsequent blocks.
// Note that we can't add these until the loop above is done, as the
// values in the block are not ordered in any way when this pass runs.
// This was the cause of issue #18725.
for _, v := range b.Values {
if v.Op != OpNilCheck {
continue
}
ptr := v.Args[0]
// Record the fact that we know ptr is non nil, and remember to
// undo that information when this dominator subtree is done.
nonNilValues[ptr.ID] = true
work = append(work, bp{op: ClearPtr, ptr: ptr})
}
// Add all dominated blocks to the work list.
for w := sdom[node.block.ID].child; w != nil; w = sdom[w.ID].sibling {
work = append(work, bp{op: Work, block: w})

View File

@@ -9847,11 +9847,11 @@ func rewriteValueS390X_OpS390XMOVDNE(v *Value, config *Config) bool {
v.AddArg(cmp)
return true
}
// match: (MOVDNE y _ (FlagEQ))
// match: (MOVDNE _ y (FlagEQ))
// cond:
// result: y
for {
y := v.Args[0]
y := v.Args[1]
v_2 := v.Args[2]
if v_2.Op != OpS390XFlagEQ {
break
@@ -9861,11 +9861,11 @@ func rewriteValueS390X_OpS390XMOVDNE(v *Value, config *Config) bool {
v.AddArg(y)
return true
}
// match: (MOVDNE _ x (FlagLT))
// match: (MOVDNE x _ (FlagLT))
// cond:
// result: x
for {
x := v.Args[1]
x := v.Args[0]
v_2 := v.Args[2]
if v_2.Op != OpS390XFlagLT {
break
@@ -9875,11 +9875,11 @@ func rewriteValueS390X_OpS390XMOVDNE(v *Value, config *Config) bool {
v.AddArg(x)
return true
}
// match: (MOVDNE _ x (FlagGT))
// match: (MOVDNE x _ (FlagGT))
// cond:
// result: x
for {
x := v.Args[1]
x := v.Args[0]
v_2 := v.Args[2]
if v_2.Op != OpS390XFlagGT {
break

View File

@@ -35,7 +35,7 @@ func writebarrier(f *Func) {
valueLoop:
for i, v := range b.Values {
switch v.Op {
case OpStoreWB, OpMoveWB, OpMoveWBVolatile, OpZeroWB:
case OpStoreWB, OpMoveWB, OpMoveWBVolatile:
if IsStackAddr(v.Args[0]) {
switch v.Op {
case OpStoreWB:

View File

@@ -20,10 +20,11 @@ import (
var cmdBug = &Command{
Run: runBug,
UsageLine: "bug",
Short: "start a bug report",
Short: "print information for bug reports",
Long: `
Bug opens the default browser and starts a new bug report.
The report includes useful system information.
Bug prints information that helps file effective bug reports.
Bugs may be reported at https://golang.org/issue/new.
`,
}

View File

@@ -3787,26 +3787,3 @@ GLOBL ·constants<>(SB),8,$8
tg.setenv("GOPATH", tg.path("go"))
tg.run("build", "p")
}
// Issue 18778.
func TestDotDotDotOutsideGOPATH(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.tempFile("pkgs/a.go", `package x`)
tg.tempFile("pkgs/a_test.go", `package x_test
import "testing"
func TestX(t *testing.T) {}`)
tg.tempFile("pkgs/a/a.go", `package a`)
tg.tempFile("pkgs/a/a_test.go", `package a_test
import "testing"
func TestA(t *testing.T) {}`)
tg.cd(tg.path("pkgs"))
tg.run("build", "./...")
tg.run("test", "./...")
tg.run("list", "./...")
tg.grepStdout("pkgs$", "expected package not listed")
tg.grepStdout("pkgs/a", "expected package not listed")
}

View File

@@ -429,7 +429,7 @@ func setErrorPos(p *Package, importPos []token.Position) *Package {
func cleanImport(path string) string {
orig := path
path = pathpkg.Clean(path)
if strings.HasPrefix(orig, "./") && path != ".." && !strings.HasPrefix(path, "../") {
if strings.HasPrefix(orig, "./") && path != ".." && path != "." && !strings.HasPrefix(path, "../") {
path = "./" + path
}
return path

View File

@@ -894,13 +894,9 @@ func (b *builder) test(p *Package) (buildAction, runAction, printAction *action,
if buildContext.GOOS == "darwin" {
if buildContext.GOARCH == "arm" || buildContext.GOARCH == "arm64" {
t.IsIOS = true
t.NeedOS = true
t.NeedCgo = true
}
}
if t.TestMain == nil {
t.NeedOS = true
}
for _, cp := range pmain.imports {
if len(cp.coverVars) > 0 {
@@ -1347,8 +1343,7 @@ type testFuncs struct {
NeedTest bool
ImportXtest bool
NeedXtest bool
NeedOS bool
IsIOS bool
NeedCgo bool
Cover []coverInfo
}
@@ -1449,7 +1444,7 @@ var testmainTmpl = template.Must(template.New("main").Parse(`
package main
import (
{{if .NeedOS}}
{{if not .TestMain}}
"os"
{{end}}
"testing"
@@ -1465,10 +1460,8 @@ import (
_cover{{$i}} {{$p.Package.ImportPath | printf "%q"}}
{{end}}
{{if .IsIOS}}
"os/signal"
{{if .NeedCgo}}
_ "runtime/cgo"
"syscall"
{{end}}
)
@@ -1530,32 +1523,6 @@ func coverRegisterFile(fileName string, counter []uint32, pos []uint32, numStmts
{{end}}
func main() {
{{if .IsIOS}}
// Send a SIGUSR2, which will be intercepted by LLDB to
// tell the test harness that installation was successful.
// See misc/ios/go_darwin_arm_exec.go.
signal.Notify(make(chan os.Signal), syscall.SIGUSR2)
syscall.Kill(0, syscall.SIGUSR2)
signal.Reset(syscall.SIGUSR2)
// The first argument supplied to an iOS test is an offset
// suffix for the current working directory.
// Process it here, and remove it from os.Args.
const hdr = "cwdSuffix="
if len(os.Args) < 2 || len(os.Args[1]) <= len(hdr) || os.Args[1][:len(hdr)] != hdr {
panic("iOS test not passed a working directory suffix")
}
suffix := os.Args[1][len(hdr):]
dir, err := os.Getwd()
if err != nil {
panic(err)
}
if err := os.Chdir(dir + "/" + suffix); err != nil {
panic(err)
}
os.Args = append([]string{os.Args[0]}, os.Args[2:]...)
{{end}}
{{if .CoverEnabled}}
testing.RegisterCover(testing.Cover{
Mode: {{printf "%q" .CoverMode}},

View File

@@ -1080,7 +1080,7 @@ func writelines(ctxt *Link, syms []*Symbol) ([]*Symbol, []*Symbol) {
epcs = s
dsym := ctxt.Syms.Lookup(dwarf.InfoPrefix+s.Name, int(s.Version))
dsym.Attr |= AttrHidden | AttrReachable
dsym.Attr |= AttrHidden
dsym.Type = obj.SDWARFINFO
for _, r := range dsym.R {
if r.Type == obj.R_DWARFREF && r.Sym.Size == 0 {

View File

@@ -168,7 +168,7 @@ func container(s *Symbol) int {
if s == nil {
return 0
}
if Buildmode == BuildmodePlugin && Headtype == obj.Hdarwin && onlycsymbol(s) {
if Buildmode == BuildmodePlugin && onlycsymbol(s) {
return 1
}
// We want to generate func table entries only for the "lowest level" symbols,

View File

@@ -204,6 +204,12 @@ func TestMTF(t *testing.T) {
}
}
var (
digits = mustLoadFile("testdata/e.txt.bz2")
twain = mustLoadFile("testdata/Mark.Twain-Tom.Sawyer.txt.bz2")
random = mustLoadFile("testdata/random.data.bz2")
)
func benchmarkDecode(b *testing.B, compressed []byte) {
// Determine the uncompressed size of testfile.
uncompressedSize, err := io.Copy(ioutil.Discard, NewReader(bytes.NewReader(compressed)))
@@ -221,18 +227,6 @@ func benchmarkDecode(b *testing.B, compressed []byte) {
}
}
func BenchmarkDecodeDigits(b *testing.B) {
digits := mustLoadFile("testdata/e.txt.bz2")
b.ResetTimer()
benchmarkDecode(b, digits)
}
func BenchmarkDecodeTwain(b *testing.B) {
twain := mustLoadFile("testdata/Mark.Twain-Tom.Sawyer.txt.bz2")
b.ResetTimer()
benchmarkDecode(b, twain)
}
func BenchmarkDecodeRand(b *testing.B) {
random := mustLoadFile("testdata/random.data.bz2")
b.ResetTimer()
benchmarkDecode(b, random)
}
func BenchmarkDecodeDigits(b *testing.B) { benchmarkDecode(b, digits) }
func BenchmarkDecodeTwain(b *testing.B) { benchmarkDecode(b, twain) }
func BenchmarkDecodeRand(b *testing.B) { benchmarkDecode(b, random) }

View File

@@ -136,17 +136,14 @@ func (d *compressor) fillDeflate(b []byte) int {
delta := d.hashOffset - 1
d.hashOffset -= delta
d.chainHead -= delta
// Iterate over slices instead of arrays to avoid copying
// the entire table onto the stack (Issue #18625).
for i, v := range d.hashPrev[:] {
for i, v := range d.hashPrev {
if int(v) > delta {
d.hashPrev[i] = uint32(int(v) - delta)
} else {
d.hashPrev[i] = 0
}
}
for i, v := range d.hashHead[:] {
for i, v := range d.hashHead {
if int(v) > delta {
d.hashHead[i] = uint32(int(v) - delta)
} else {

View File

@@ -12,7 +12,6 @@ import (
"io"
"io/ioutil"
"reflect"
"runtime/debug"
"sync"
"testing"
)
@@ -865,33 +864,3 @@ func TestBestSpeedMaxMatchOffset(t *testing.T) {
}
}
}
func TestMaxStackSize(t *testing.T) {
// This test must not run in parallel with other tests as debug.SetMaxStack
// affects all goroutines.
n := debug.SetMaxStack(1 << 16)
defer debug.SetMaxStack(n)
var wg sync.WaitGroup
defer wg.Wait()
b := make([]byte, 1<<20)
for level := HuffmanOnly; level <= BestCompression; level++ {
// Run in separate goroutine to increase probability of stack regrowth.
wg.Add(1)
go func(level int) {
defer wg.Done()
zw, err := NewWriter(ioutil.Discard, level)
if err != nil {
t.Errorf("level %d, NewWriter() = %v, want nil", level, err)
}
if n, err := zw.Write(b); n != len(b) || err != nil {
t.Errorf("level %d, Write() = (%d, %v), want (%d, nil)", level, n, err, len(b))
}
if err := zw.Close(); err != nil {
t.Errorf("level %d, Close() = %v, want nil", level, err)
}
zw.Reset(ioutil.Discard)
}(level)
}
}

View File

@@ -60,7 +60,7 @@ func newDeflateFast() *deflateFast {
func (e *deflateFast) encode(dst []token, src []byte) []token {
// Ensure that e.cur doesn't wrap.
if e.cur > 1<<30 {
e.resetAll()
*e = deflateFast{cur: maxStoreBlockSize, prev: e.prev[:0]}
}
// This check isn't in the Snappy implementation, but there, the caller
@@ -265,21 +265,6 @@ func (e *deflateFast) reset() {
// Protect against e.cur wraparound.
if e.cur > 1<<30 {
e.resetAll()
}
}
// resetAll resets the deflateFast struct and is only called in rare
// situations to prevent integer overflow. It manually resets each field
// to avoid causing large stack growth.
//
// See https://golang.org/issue/18636.
func (e *deflateFast) resetAll() {
// This is equivalent to:
// *e = deflateFast{cur: maxStoreBlockSize, prev: e.prev[:0]}
e.cur = maxStoreBlockSize
e.prev = e.prev[:0]
for i := range e.table {
e.table[i] = tableEntry{}
*e = deflateFast{cur: maxStoreBlockSize, prev: e.prev[:0]}
}
}

View File

@@ -9,17 +9,11 @@ import (
"testing"
)
// TestGZIPFilesHaveZeroMTimes checks that every .gz file in the tree
// has a zero MTIME. This is a requirement for the Debian maintainers
// to be able to have deterministic packages.
//
// See https://golang.org/issue/14937.
// Per golang.org/issue/14937, check that every .gz file
// in the tree has a zero mtime.
func TestGZIPFilesHaveZeroMTimes(t *testing.T) {
// To avoid spurious false positives due to untracked GZIP files that
// may be in the user's GOROOT (Issue 18604), we only run this test on
// the builders, which should have a clean checkout of the tree.
if testenv.Builder() == "" {
t.Skip("skipping test on non-builder")
if testing.Short() && testenv.Builder() == "" {
t.Skip("skipping in short mode")
}
goroot, err := filepath.EvalSymlinks(runtime.GOROOT())
if err != nil {

View File

@@ -95,7 +95,7 @@ func TestSignAndVerify(t *testing.T) {
func TestSigningWithDegenerateKeys(t *testing.T) {
// Signing with degenerate private keys should not cause an infinite
// loop.
badKeys := []struct {
badKeys := []struct{
p, q, g, y, x string
}{
{"00", "01", "00", "00", "00"},
@@ -105,7 +105,7 @@ func TestSigningWithDegenerateKeys(t *testing.T) {
for i, test := range badKeys {
priv := PrivateKey{
PublicKey: PublicKey{
Parameters: Parameters{
Parameters: Parameters {
P: fromHex(test.p),
Q: fromHex(test.q),
G: fromHex(test.g),

View File

@@ -84,15 +84,15 @@ var cipherSuites = []*cipherSuite{
{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, 16, 0, 4, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12, nil, nil, aeadAESGCM},
{TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, 32, 0, 4, ecdheRSAKA, suiteECDHE | suiteTLS12 | suiteSHA384, nil, nil, aeadAESGCM},
{TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, 32, 0, 4, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12 | suiteSHA384, nil, nil, aeadAESGCM},
{TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheRSAKA, suiteECDHE | suiteTLS12 | suiteDefaultOff, cipherAES, macSHA256, nil},
{TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheRSAKA, suiteECDHE | suiteTLS12, cipherAES, macSHA256, nil},
{TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, 16, 20, 16, ecdheRSAKA, suiteECDHE, cipherAES, macSHA1, nil},
{TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12 | suiteDefaultOff, cipherAES, macSHA256, nil},
{TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12, cipherAES, macSHA256, nil},
{TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, 16, 20, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA, cipherAES, macSHA1, nil},
{TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, 32, 20, 16, ecdheRSAKA, suiteECDHE, cipherAES, macSHA1, nil},
{TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, 32, 20, 16, ecdheECDSAKA, suiteECDHE | suiteECDSA, cipherAES, macSHA1, nil},
{TLS_RSA_WITH_AES_128_GCM_SHA256, 16, 0, 4, rsaKA, suiteTLS12, nil, nil, aeadAESGCM},
{TLS_RSA_WITH_AES_256_GCM_SHA384, 32, 0, 4, rsaKA, suiteTLS12 | suiteSHA384, nil, nil, aeadAESGCM},
{TLS_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, rsaKA, suiteTLS12 | suiteDefaultOff, cipherAES, macSHA256, nil},
{TLS_RSA_WITH_AES_128_CBC_SHA256, 16, 32, 16, rsaKA, suiteTLS12, cipherAES, macSHA256, nil},
{TLS_RSA_WITH_AES_128_CBC_SHA, 16, 20, 16, rsaKA, 0, cipherAES, macSHA1, nil},
{TLS_RSA_WITH_AES_256_CBC_SHA, 32, 20, 16, rsaKA, 0, cipherAES, macSHA1, nil},
{TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, 24, 20, 8, ecdheRSAKA, suiteECDHE, cipher3DES, macSHA1, nil},

View File

@@ -6,8 +6,8 @@
package tls
// BUG(agl): The crypto/tls package only implements some countermeasures
// against Lucky13 attacks on CBC-mode encryption, and only on SHA1
// variants. See http://www.isg.rhul.ac.uk/tls/TLStiming.pdf and
// against Lucky13 attacks on CBC-mode encryption. See
// http://www.isg.rhul.ac.uk/tls/TLStiming.pdf and
// https://www.imperialviolet.org/2013/02/04/luckythirteen.html.
import (

View File

@@ -4,11 +4,7 @@
package x509
import (
"encoding/pem"
"errors"
"runtime"
)
import "encoding/pem"
// CertPool is a set of certificates.
type CertPool struct {
@@ -30,11 +26,6 @@ func NewCertPool() *CertPool {
// Any mutations to the returned pool are not written to disk and do
// not affect any other pool.
func SystemCertPool() (*CertPool, error) {
if runtime.GOOS == "windows" {
// Issue 16736, 18609:
return nil, errors.New("crypto/x509: system root pool is not available on Windows")
}
return loadSystemRoots()
}

View File

@@ -226,11 +226,6 @@ func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate
}
func loadSystemRoots() (*CertPool, error) {
// TODO: restore this functionality on Windows. We tried to do
// it in Go 1.8 but had to revert it. See Issue 18609.
// Returning (nil, nil) was the old behavior, prior to CL 30578.
return nil, nil
const CRYPT_E_NOT_FOUND = 0x80092004
store, err := syscall.CertOpenSystemStore(0, syscall.StringToUTF16Ptr("ROOT"))

View File

@@ -24,7 +24,6 @@ import (
"net"
"os/exec"
"reflect"
"runtime"
"strings"
"testing"
"time"
@@ -1478,9 +1477,6 @@ func TestMultipleRDN(t *testing.T) {
}
func TestSystemCertPool(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("not implemented on Windows; Issue 16736, 18609")
}
_, err := SystemCertPool()
if err != nil {
t.Fatal(err)

View File

@@ -1357,7 +1357,16 @@ func (db *DB) begin(ctx context.Context, opts *TxOptions, strategy connReuseStra
cancel: cancel,
ctx: ctx,
}
go tx.awaitDone()
go func(tx *Tx) {
select {
case <-tx.ctx.Done():
if !tx.isDone() {
// Discard and close the connection used to ensure the transaction
// is closed and the resources are released.
tx.rollback(true)
}
}
}(tx)
return tx, nil
}
@@ -1379,11 +1388,6 @@ func (db *DB) Driver() driver.Driver {
type Tx struct {
db *DB
// closemu prevents the transaction from closing while there
// is an active query. It is held for read during queries
// and exclusively during close.
closemu sync.RWMutex
// dc is owned exclusively until Commit or Rollback, at which point
// it's returned with putConn.
dc *driverConn
@@ -1409,20 +1413,6 @@ type Tx struct {
ctx context.Context
}
// awaitDone blocks until the context in Tx is canceled and rolls back
// the transaction if it's not already done.
func (tx *Tx) awaitDone() {
// Wait for either the transaction to be committed or rolled
// back, or for the associated context to be closed.
<-tx.ctx.Done()
// Discard and close the connection used to ensure the
// transaction is closed and the resources are released. This
// rollback does nothing if the transaction has already been
// committed or rolled back.
tx.rollback(true)
}
func (tx *Tx) isDone() bool {
return atomic.LoadInt32(&tx.done) != 0
}
@@ -1434,31 +1424,16 @@ var ErrTxDone = errors.New("sql: Transaction has already been committed or rolle
// close returns the connection to the pool and
// must only be called by Tx.rollback or Tx.Commit.
func (tx *Tx) close(err error) {
tx.closemu.Lock()
defer tx.closemu.Unlock()
tx.db.putConn(tx.dc, err)
tx.cancel()
tx.dc = nil
tx.txi = nil
}
// hookTxGrabConn specifies an optional hook to be called on
// a successful call to (*Tx).grabConn. For tests.
var hookTxGrabConn func()
func (tx *Tx) grabConn(ctx context.Context) (*driverConn, error) {
select {
default:
case <-ctx.Done():
return nil, ctx.Err()
}
if tx.isDone() {
return nil, ErrTxDone
}
if hookTxGrabConn != nil { // test hook
hookTxGrabConn()
}
return tx.dc, nil
}
@@ -1528,9 +1503,6 @@ func (tx *Tx) Rollback() error {
// for the execution of the returned statement. The returned statement
// will run in the transaction context.
func (tx *Tx) PrepareContext(ctx context.Context, query string) (*Stmt, error) {
tx.closemu.RLock()
defer tx.closemu.RUnlock()
// TODO(bradfitz): We could be more efficient here and either
// provide a method to take an existing Stmt (created on
// perhaps a different Conn), and re-create it on this Conn if
@@ -1595,9 +1567,6 @@ func (tx *Tx) Prepare(query string) (*Stmt, error) {
// The returned statement operates within the transaction and will be closed
// when the transaction has been committed or rolled back.
func (tx *Tx) StmtContext(ctx context.Context, stmt *Stmt) *Stmt {
tx.closemu.RLock()
defer tx.closemu.RUnlock()
// TODO(bradfitz): optimize this. Currently this re-prepares
// each time. This is fine for now to illustrate the API but
// we should really cache already-prepared statements
@@ -1649,9 +1618,6 @@ func (tx *Tx) Stmt(stmt *Stmt) *Stmt {
// ExecContext executes a query that doesn't return rows.
// For example: an INSERT and UPDATE.
func (tx *Tx) ExecContext(ctx context.Context, query string, args ...interface{}) (Result, error) {
tx.closemu.RLock()
defer tx.closemu.RUnlock()
dc, err := tx.grabConn(ctx)
if err != nil {
return nil, err
@@ -1695,9 +1661,6 @@ func (tx *Tx) Exec(query string, args ...interface{}) (Result, error) {
// QueryContext executes a query that returns rows, typically a SELECT.
func (tx *Tx) QueryContext(ctx context.Context, query string, args ...interface{}) (*Rows, error) {
tx.closemu.RLock()
defer tx.closemu.RUnlock()
dc, err := tx.grabConn(ctx)
if err != nil {
return nil, err
@@ -2075,21 +2038,25 @@ type Rows struct {
// closed value is 1 when the Rows is closed.
// Use atomic operations on value when checking value.
closed int32
cancel func() // called when Rows is closed, may be nil.
ctxClose chan struct{} // closed when Rows is closed, may be null.
lastcols []driver.Value
lasterr error // non-nil only if closed is true
closeStmt *driverStmt // if non-nil, statement to Close on close
}
func (rs *Rows) initContextClose(ctx context.Context) {
ctx, rs.cancel = context.WithCancel(ctx)
go rs.awaitDone(ctx)
}
if ctx.Done() == context.Background().Done() {
return
}
// awaitDone blocks until the rows are closed or the context canceled.
func (rs *Rows) awaitDone(ctx context.Context) {
<-ctx.Done()
rs.Close()
rs.ctxClose = make(chan struct{})
go func() {
select {
case <-ctx.Done():
rs.Close()
case <-rs.ctxClose:
}
}()
}
// Next prepares the next result row for reading with the Scan method. It
@@ -2347,9 +2314,7 @@ func (rs *Rows) Scan(dest ...interface{}) error {
return nil
}
// rowsCloseHook returns a function so tests may install the
// hook throug a test only mutex.
var rowsCloseHook = func() func(*Rows, *error) { return nil }
var rowsCloseHook func(*Rows, *error)
func (rs *Rows) isClosed() bool {
return atomic.LoadInt32(&rs.closed) != 0
@@ -2363,15 +2328,13 @@ func (rs *Rows) Close() error {
if !atomic.CompareAndSwapInt32(&rs.closed, 0, 1) {
return nil
}
if rs.ctxClose != nil {
close(rs.ctxClose)
}
err := rs.rowsi.Close()
if fn := rowsCloseHook(); fn != nil {
if fn := rowsCloseHook; fn != nil {
fn(rs, &err)
}
if rs.cancel != nil {
rs.cancel()
}
if rs.closeStmt != nil {
rs.closeStmt.Close()
}

View File

@@ -14,7 +14,6 @@ import (
"runtime"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
)
@@ -327,7 +326,9 @@ func TestQueryContext(t *testing.T) {
// And verify that the final rows.Next() call, which hit EOF,
// also closed the rows connection.
waitForFree(t, db, 5*time.Second, 1)
if n := db.numFreeConns(); n != 1 {
t.Fatalf("free conns after query hitting EOF = %d; want 1", n)
}
if prepares := numPrepares(t, db) - prepares0; prepares != 1 {
t.Errorf("executed %d Prepare statements; want 1", prepares)
}
@@ -344,18 +345,6 @@ func waitCondition(waitFor, checkEvery time.Duration, fn func() bool) bool {
return false
}
// waitForFree checks db.numFreeConns until either it equals want or
// the maxWait time elapses.
func waitForFree(t *testing.T, db *DB, maxWait time.Duration, want int) {
var numFree int
if !waitCondition(maxWait, 5*time.Millisecond, func() bool {
numFree = db.numFreeConns()
return numFree == want
}) {
t.Fatalf("free conns after hitting EOF = %d; want %d", numFree, want)
}
}
func TestQueryContextWait(t *testing.T) {
db := newTestDB(t, "people")
defer closeDB(t, db)
@@ -372,7 +361,9 @@ func TestQueryContextWait(t *testing.T) {
}
// Verify closed rows connection after error condition.
waitForFree(t, db, 5*time.Second, 1)
if n := db.numFreeConns(); n != 1 {
t.Fatalf("free conns after query hitting EOF = %d; want 1", n)
}
if prepares := numPrepares(t, db) - prepares0; prepares != 1 {
t.Errorf("executed %d Prepare statements; want 1", prepares)
}
@@ -397,7 +388,13 @@ func TestTxContextWait(t *testing.T) {
t.Fatalf("expected QueryContext to error with context deadline exceeded but returned %v", err)
}
waitForFree(t, db, 5*time.Second, 0)
var numFree int
if !waitCondition(5*time.Second, 5*time.Millisecond, func() bool {
numFree = db.numFreeConns()
return numFree == 0
}) {
t.Fatalf("free conns after hitting EOF = %d; want 0", numFree)
}
// Ensure the dropped connection allows more connections to be made.
// Checked on DB Close.
@@ -474,7 +471,9 @@ func TestMultiResultSetQuery(t *testing.T) {
// And verify that the final rows.Next() call, which hit EOF,
// also closed the rows connection.
waitForFree(t, db, 5*time.Second, 1)
if n := db.numFreeConns(); n != 1 {
t.Fatalf("free conns after query hitting EOF = %d; want 1", n)
}
if prepares := numPrepares(t, db) - prepares0; prepares != 1 {
t.Errorf("executed %d Prepare statements; want 1", prepares)
}
@@ -1136,24 +1135,6 @@ func TestQueryRowClosingStmt(t *testing.T) {
}
}
var atomicRowsCloseHook atomic.Value // of func(*Rows, *error)
func init() {
rowsCloseHook = func() func(*Rows, *error) {
fn, _ := atomicRowsCloseHook.Load().(func(*Rows, *error))
return fn
}
}
func setRowsCloseHook(fn func(*Rows, *error)) {
if fn == nil {
// Can't change an atomic.Value back to nil, so set it to this
// no-op func instead.
fn = func(*Rows, *error) {}
}
atomicRowsCloseHook.Store(fn)
}
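One side of this hunk stores the test hook in a plain package variable, the other routes it through an atomic.Value behind a getter so tests can install or clear the hook without racing with Rows.Close. A generic sketch of that pattern (hypothetical names, not the database/sql code):

package hookdemo

import "sync/atomic"

// closeHook is consulted at an interesting point in production code; tests
// may swap it. atomic.Value requires one concrete type and can never be
// reset to nil, so a no-op is installed up front.
var closeHook atomic.Value // of func(err *error)

func init() {
	closeHook.Store(func(err *error) {})
}

// setCloseHook installs fn for a test; passing nil restores the no-op.
func setCloseHook(fn func(err *error)) {
	if fn == nil {
		fn = func(err *error) {}
	}
	closeHook.Store(fn)
}

// runCloseHook is what the production path calls; it is safe to run
// concurrently with setCloseHook.
func runCloseHook(err *error) {
	closeHook.Load().(func(err *error))(err)
}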
// Test issue 6651
func TestIssue6651(t *testing.T) {
db := newTestDB(t, "people")
@@ -1166,7 +1147,6 @@ func TestIssue6651(t *testing.T) {
return fmt.Errorf(want)
}
defer func() { rowsCursorNextHook = nil }()
err := db.QueryRow("SELECT|people|name|").Scan(&v)
if err == nil || err.Error() != want {
t.Errorf("error = %q; want %q", err, want)
@@ -1174,10 +1154,10 @@ func TestIssue6651(t *testing.T) {
rowsCursorNextHook = nil
want = "error in rows.Close"
setRowsCloseHook(func(rows *Rows, err *error) {
rowsCloseHook = func(rows *Rows, err *error) {
*err = fmt.Errorf(want)
})
defer setRowsCloseHook(nil)
}
defer func() { rowsCloseHook = nil }()
err = db.QueryRow("SELECT|people|name|").Scan(&v)
if err == nil || err.Error() != want {
t.Errorf("error = %q; want %q", err, want)
@@ -1850,9 +1830,7 @@ func TestStmtCloseDeps(t *testing.T) {
db.dumpDeps(t)
}
if !waitCondition(5*time.Second, 5*time.Millisecond, func() bool {
return len(stmt.css) <= nquery
}) {
if len(stmt.css) > nquery {
t.Errorf("len(stmt.css) = %d; want <= %d", len(stmt.css), nquery)
}
@@ -2598,10 +2576,10 @@ func TestIssue6081(t *testing.T) {
if err != nil {
t.Fatal(err)
}
setRowsCloseHook(func(rows *Rows, err *error) {
rowsCloseHook = func(rows *Rows, err *error) {
*err = driver.ErrBadConn
})
defer setRowsCloseHook(nil)
}
defer func() { rowsCloseHook = nil }()
for i := 0; i < 10; i++ {
rows, err := stmt.Query()
if err != nil {
@@ -2664,10 +2642,7 @@ func TestIssue18429(t *testing.T) {
if err != nil {
return
}
// This is expected to give a cancel error much of the time, but not always.
// Test failure will happen with a panic or other race condition being
// reported.
rows, _ := tx.QueryContext(ctx, "WAIT|"+qwait+"|SELECT|people|name|")
rows, err := tx.QueryContext(ctx, "WAIT|"+qwait+"|SELECT|people|name|")
if rows != nil {
rows.Close()
}
@@ -2680,56 +2655,6 @@ func TestIssue18429(t *testing.T) {
time.Sleep(milliWait * 3 * time.Millisecond)
}
// TestIssue18719 closes the context right before use. The sql.driverConn
// will nil out the ci on close in a lock, but if another process uses it right after
// it will panic on the nil reference.
//
// See https://golang.org/cl/35550 .
func TestIssue18719(t *testing.T) {
db := newTestDB(t, "people")
defer closeDB(t, db)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
t.Fatal(err)
}
hookTxGrabConn = func() {
cancel()
// Wait for the context to cancel and tx to rollback.
for tx.isDone() == false {
time.Sleep(time.Millisecond * 3)
}
}
defer func() { hookTxGrabConn = nil }()
// This call will grab the connection and cancel the context
// after it has done so. Code after must deal with the canceled state.
rows, err := tx.QueryContext(ctx, "SELECT|people|name|")
if err != nil {
rows.Close()
t.Fatalf("expected error %v but got %v", nil, err)
}
// Rows may be ignored because it will be closed when the context is canceled.
// Do not explicitly rollback. The rollback will happen from the
// canceled context.
// Wait for connections to return to pool.
var numOpen int
if !waitCondition(5*time.Second, 5*time.Millisecond, func() bool {
numOpen = db.numOpenConns()
return numOpen == 0
}) {
t.Fatalf("open conns after hitting EOF = %d; want 0", numOpen)
}
}
func TestConcurrency(t *testing.T) {
doConcurrentTest(t, new(concurrentDBQueryTest))
doConcurrentTest(t, new(concurrentDBExecTest))


@@ -70,8 +70,10 @@ func (s *Scope) String() string {
// The Data field contains object-specific data:
//
// Kind Data type Data value
// Pkg *Scope package scope
// Pkg *types.Package package scope
// Con int iota for the respective declaration
// Con != nil constant value
// Typ *Scope (used as method scope during type checking - transient)
//
type Object struct {
Kind ObjKind


@@ -25,7 +25,7 @@ var files = flag.String("files", "", "consider only Go test files matching this
const dataDir = "testdata"
var templateTxt *template.Template
var templateTxt = readTemplate("template.txt")
func readTemplate(filename string) *template.Template {
t := template.New(filename)
@@ -96,9 +96,6 @@ func test(t *testing.T, mode Mode) {
if err != nil {
t.Fatal(err)
}
if templateTxt == nil {
templateTxt = readTemplate("template.txt")
}
// test packages
for _, pkg := range pkgs {


@@ -10,12 +10,17 @@ import (
"testing"
)
func BenchmarkParse(b *testing.B) {
src, err := ioutil.ReadFile("parser.go")
var src = readFile("parser.go")
func readFile(filename string) []byte {
data, err := ioutil.ReadFile(filename)
if err != nil {
b.Fatal(err)
panic(err)
}
b.ResetTimer()
return data
}
func BenchmarkParse(b *testing.B) {
b.SetBytes(int64(len(src)))
for i := 0; i < b.N; i++ {
if _, err := ParseFile(token.NewFileSet(), "", src, ParseComments); err != nil {


@@ -733,7 +733,7 @@ func (p *printer) expr1(expr ast.Expr, prec1, depth int) {
case *ast.FuncLit:
p.expr(x.Type)
p.funcBody(p.distanceFrom(x.Type.Pos()), blank, x.Body)
p.adjBlock(p.distanceFrom(x.Type.Pos()), blank, x.Body)
case *ast.ParenExpr:
if _, hasParens := x.X.(*ast.ParenExpr); hasParens {
@@ -825,7 +825,6 @@ func (p *printer) expr1(expr ast.Expr, prec1, depth int) {
if x.Type != nil {
p.expr1(x.Type, token.HighestPrec, depth)
}
p.level++
p.print(x.Lbrace, token.LBRACE)
p.exprList(x.Lbrace, x.Elts, 1, commaTerm, x.Rbrace)
// do not insert extra line break following a /*-style comment
@@ -838,7 +837,6 @@ func (p *printer) expr1(expr ast.Expr, prec1, depth int) {
mode |= noExtraBlank
}
p.print(mode, x.Rbrace, token.RBRACE, mode)
p.level--
case *ast.Ellipsis:
p.print(token.ELLIPSIS)
@@ -1559,23 +1557,18 @@ func (p *printer) bodySize(b *ast.BlockStmt, maxSize int) int {
return bodySize
}
// funcBody prints a function body following a function header of given headerSize.
// adjBlock prints an "adjacent" block (e.g., a for-loop or function body) following
// a header (e.g., a for-loop control clause or function signature) of given headerSize.
// If the header's and block's sizes are "small enough" and the block is "simple enough",
// the block is printed on the current line, without line breaks, spaced from the header
// by sep. Otherwise the block's opening "{" is printed on the current line, followed by
// lines for the block's statements and its closing "}".
//
func (p *printer) funcBody(headerSize int, sep whiteSpace, b *ast.BlockStmt) {
func (p *printer) adjBlock(headerSize int, sep whiteSpace, b *ast.BlockStmt) {
if b == nil {
return
}
// save/restore composite literal nesting level
defer func(level int) {
p.level = level
}(p.level)
p.level = 0
const maxSize = 100
if headerSize+p.bodySize(b, maxSize) <= maxSize {
p.print(sep, b.Lbrace, token.LBRACE)
@@ -1620,7 +1613,7 @@ func (p *printer) funcDecl(d *ast.FuncDecl) {
}
p.expr(d.Name)
p.signature(d.Type.Params, d.Type.Results)
p.funcBody(p.distanceFrom(d.Pos()), vtab, d.Body)
p.adjBlock(p.distanceFrom(d.Pos()), vtab, d.Body)
}
func (p *printer) decl(decl ast.Decl) {


@@ -58,7 +58,6 @@ type printer struct {
// Current state
output []byte // raw printer result
indent int // current indentation
level int // level == 0: outside composite literal; level > 0: inside composite literal
mode pmode // current printer mode
impliedSemi bool // if set, a linebreak implies a semicolon
lastTok token.Token // last token printed (token.ILLEGAL if it's whitespace)
@@ -745,19 +744,15 @@ func (p *printer) intersperseComments(next token.Position, tok token.Token) (wro
// follows on the same line but is not a comma, and not a "closing"
// token immediately following its corresponding "opening" token,
// add an extra separator unless explicitly disabled. Use a blank
// as separator unless we have pending linebreaks, they are not
// disabled, and we are outside a composite literal, in which case
// we want a linebreak (issue 15137).
// TODO(gri) This has become overly complicated. We should be able
// to track whether we're inside an expression or statement and
// use that information to decide more directly.
// as separator unless we have pending linebreaks and they are not
// disabled, in which case we want a linebreak (issue 15137).
needsLinebreak := false
if p.mode&noExtraBlank == 0 &&
last.Text[1] == '*' && p.lineFor(last.Pos()) == next.Line &&
tok != token.COMMA &&
(tok != token.RPAREN || p.prevOpen == token.LPAREN) &&
(tok != token.RBRACK || p.prevOpen == token.LBRACK) {
if p.containsLinebreak() && p.mode&noExtraLinebreak == 0 && p.level == 0 {
if p.containsLinebreak() && p.mode&noExtraLinebreak == 0 {
needsLinebreak = true
} else {
p.writeByte(' ', 1)


@@ -103,62 +103,3 @@ label:
mask := uint64(1)<<c - 1 // Allocation mask
used := atomic.LoadUint64(&h.used) // Current allocations
}
// Test cases for issue 18782
var _ = [][]int{
/* a, b, c, d, e */
/* a */ {0, 0, 0, 0, 0},
/* b */ {0, 5, 4, 4, 4},
/* c */ {0, 4, 5, 4, 4},
/* d */ {0, 4, 4, 5, 4},
/* e */ {0, 4, 4, 4, 5},
}
var _ = T{ /* a */ 0}
var _ = T{ /* a */ /* b */ 0}
var _ = T{ /* a */ /* b */
/* c */ 0,
}
var _ = T{ /* a */ /* b */
/* c */
/* d */ 0,
}
var _ = T{
/* a */
/* b */ 0,
}
var _ = T{ /* a */ {}}
var _ = T{ /* a */ /* b */ {}}
var _ = T{ /* a */ /* b */
/* c */ {},
}
var _ = T{ /* a */ /* b */
/* c */
/* d */ {},
}
var _ = T{
/* a */
/* b */ {},
}
var _ = []T{
func() {
var _ = [][]int{
/* a, b, c, d, e */
/* a */ {0, 0, 0, 0, 0},
/* b */ {0, 5, 4, 4, 4},
/* c */ {0, 4, 5, 4, 4},
/* d */ {0, 4, 4, 5, 4},
/* e */ {0, 4, 4, 4, 5},
}
},
}


@@ -103,66 +103,3 @@ label:
mask := uint64(1)<<c - 1 // Allocation mask
used := atomic.LoadUint64(&h.used) // Current allocations
}
// Test cases for issue 18782
var _ = [][]int{
/* a, b, c, d, e */
/* a */ {0, 0, 0, 0, 0},
/* b */ {0, 5, 4, 4, 4},
/* c */ {0, 4, 5, 4, 4},
/* d */ {0, 4, 4, 5, 4},
/* e */ {0, 4, 4, 4, 5},
}
var _ = T{ /* a */ 0,
}
var _ = T{ /* a */ /* b */ 0,
}
var _ = T{ /* a */ /* b */
/* c */ 0,
}
var _ = T{ /* a */ /* b */
/* c */
/* d */ 0,
}
var _ = T{
/* a */
/* b */ 0,
}
var _ = T{ /* a */ {},
}
var _ = T{ /* a */ /* b */ {},
}
var _ = T{ /* a */ /* b */
/* c */ {},
}
var _ = T{ /* a */ /* b */
/* c */
/* d */ {},
}
var _ = T{
/* a */
/* b */ {},
}
var _ = []T{
func() {
var _ = [][]int{
/* a, b, c, d, e */
/* a */ {0, 0, 0, 0, 0},
/* b */ {0, 5, 4, 4, 4},
/* c */ {0, 4, 5, 4, 4},
/* d */ {0, 4, 4, 5, 4},
/* e */ {0, 4, 4, 4, 5},
}
},
}


@@ -413,12 +413,11 @@ func (c *Client) checkRedirect(req *Request, via []*Request) error {
// redirectBehavior describes what should happen when the
// client encounters a 3xx status code from the server
func redirectBehavior(reqMethod string, resp *Response, ireq *Request) (redirectMethod string, shouldRedirect, includeBody bool) {
func redirectBehavior(reqMethod string, resp *Response, ireq *Request) (redirectMethod string, shouldRedirect bool) {
switch resp.StatusCode {
case 301, 302, 303:
redirectMethod = reqMethod
shouldRedirect = true
includeBody = false
// RFC 2616 allowed automatic redirection only with GET and
// HEAD requests. RFC 7231 lifts this restriction, but we still
@@ -430,7 +429,6 @@ func redirectBehavior(reqMethod string, resp *Response, ireq *Request) (redirect
case 307, 308:
redirectMethod = reqMethod
shouldRedirect = true
includeBody = true
// Treat 307 and 308 specially, since they're new in
// Go 1.8, and they also require re-sending the request body.
@@ -451,7 +449,7 @@ func redirectBehavior(reqMethod string, resp *Response, ireq *Request) (redirect
shouldRedirect = false
}
}
return redirectMethod, shouldRedirect, includeBody
return redirectMethod, shouldRedirect
}
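The includeBody result threaded through this hunk is what separates 301/302/303 (follow with GET, drop the body) from 307/308 (same method, re-send the body via GetBody). A simplified, standalone restatement of that decision table, not the net/http internals:

package redirectdemo

import "net/http"

// redirectPlan sketches the three results discussed above for the common cases.
func redirectPlan(method string, status int) (nextMethod string, follow, includeBody bool) {
	switch status {
	case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther: // 301, 302, 303
		// Followed with GET (HEAD stays HEAD) and without a body.
		nextMethod = method
		if method != "GET" && method != "HEAD" {
			nextMethod = "GET"
		}
		return nextMethod, true, false
	case http.StatusTemporaryRedirect, http.StatusPermanentRedirect: // 307, 308
		// Same method, and the body must be re-sent, hence includeBody.
		return method, true, true
	default:
		return method, false, false
	}
}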
// Do sends an HTTP request and returns an HTTP response, following
@@ -494,14 +492,11 @@ func (c *Client) Do(req *Request) (*Response, error) {
}
var (
deadline = c.deadline()
reqs []*Request
resp *Response
copyHeaders = c.makeHeadersCopier(req)
// Redirect behavior:
deadline = c.deadline()
reqs []*Request
resp *Response
copyHeaders = c.makeHeadersCopier(req)
redirectMethod string
includeBody bool
)
uerr := func(err error) error {
req.closeBody()
@@ -539,7 +534,7 @@ func (c *Client) Do(req *Request) (*Response, error) {
Cancel: ireq.Cancel,
ctx: ireq.ctx,
}
if includeBody && ireq.GetBody != nil {
if ireq.GetBody != nil {
req.Body, err = ireq.GetBody()
if err != nil {
return nil, uerr(err)
@@ -603,7 +598,7 @@ func (c *Client) Do(req *Request) (*Response, error) {
}
var shouldRedirect bool
redirectMethod, shouldRedirect, includeBody = redirectBehavior(req.Method, resp, reqs[0])
redirectMethod, shouldRedirect = redirectBehavior(req.Method, resp, reqs[0])
if !shouldRedirect {
return resp, nil
}


@@ -360,25 +360,25 @@ func TestPostRedirects(t *testing.T) {
wantSegments := []string{
`POST / "first"`,
`POST /?code=301&next=302 "c301"`,
`GET /?code=302 ""`,
`GET / ""`,
`GET /?code=302 "c301"`,
`GET / "c301"`,
`POST /?code=302&next=302 "c302"`,
`GET /?code=302 ""`,
`GET / ""`,
`GET /?code=302 "c302"`,
`GET / "c302"`,
`POST /?code=303&next=301 "c303wc301"`,
`GET /?code=301 ""`,
`GET / ""`,
`GET /?code=301 "c303wc301"`,
`GET / "c303wc301"`,
`POST /?code=304 "c304"`,
`POST /?code=305 "c305"`,
`POST /?code=307&next=303,308,302 "c307"`,
`POST /?code=303&next=308,302 "c307"`,
`GET /?code=308&next=302 ""`,
`GET /?code=308&next=302 "c307"`,
`GET /?code=302 "c307"`,
`GET / ""`,
`GET / "c307"`,
`POST /?code=308&next=302,301 "c308"`,
`POST /?code=302&next=301 "c308"`,
`GET /?code=301 ""`,
`GET / ""`,
`GET /?code=301 "c308"`,
`GET / "c308"`,
`POST /?code=404 "c404"`,
}
want := strings.Join(wantSegments, "\n")
@@ -399,20 +399,20 @@ func TestDeleteRedirects(t *testing.T) {
wantSegments := []string{
`DELETE / "first"`,
`DELETE /?code=301&next=302,308 "c301"`,
`GET /?code=302&next=308 ""`,
`GET /?code=308 ""`,
`GET /?code=302&next=308 "c301"`,
`GET /?code=308 "c301"`,
`GET / "c301"`,
`DELETE /?code=302&next=302 "c302"`,
`GET /?code=302 ""`,
`GET / ""`,
`GET /?code=302 "c302"`,
`GET / "c302"`,
`DELETE /?code=303 "c303"`,
`GET / ""`,
`GET / "c303"`,
`DELETE /?code=307&next=301,308,303,302,304 "c307"`,
`DELETE /?code=301&next=308,303,302,304 "c307"`,
`GET /?code=308&next=303,302,304 ""`,
`GET /?code=308&next=303,302,304 "c307"`,
`GET /?code=303&next=302,304 "c307"`,
`GET /?code=302&next=304 ""`,
`GET /?code=304 ""`,
`GET /?code=302&next=304 "c307"`,
`GET /?code=304 "c307"`,
`DELETE /?code=308&next=307 "c308"`,
`DELETE /?code=307 "c308"`,
`DELETE / "c308"`,
@@ -432,11 +432,7 @@ func testRedirectsByMethod(t *testing.T, method string, table []redirectTest, wa
ts = httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
log.Lock()
slurp, _ := ioutil.ReadAll(r.Body)
fmt.Fprintf(&log.Buffer, "%s %s %q", r.Method, r.RequestURI, slurp)
if cl := r.Header.Get("Content-Length"); r.Method == "GET" && len(slurp) == 0 && (r.ContentLength != 0 || cl != "") {
fmt.Fprintf(&log.Buffer, " (but with body=%T, content-length = %v, %q)", r.Body, r.ContentLength, cl)
}
log.WriteByte('\n')
fmt.Fprintf(&log.Buffer, "%s %s %q\n", r.Method, r.RequestURI, slurp)
log.Unlock()
urlQuery := r.URL.Query()
if v := urlQuery.Get("code"); v != "" {
@@ -479,24 +475,7 @@ func testRedirectsByMethod(t *testing.T, method string, table []redirectTest, wa
want = strings.TrimSpace(want)
if got != want {
got, want, lines := removeCommonLines(got, want)
t.Errorf("Log differs after %d common lines.\n\nGot:\n%s\n\nWant:\n%s\n", lines, got, want)
}
}
func removeCommonLines(a, b string) (asuffix, bsuffix string, commonLines int) {
for {
nl := strings.IndexByte(a, '\n')
if nl < 0 {
return a, b, commonLines
}
line := a[:nl+1]
if !strings.HasPrefix(b, line) {
return a, b, commonLines
}
commonLines++
a = a[len(line):]
b = b[len(line):]
t.Errorf("Log differs.\n Got:\n%s\nWant:\n%s\n", got, want)
}
}


@@ -5173,142 +5173,3 @@ func TestServerDuplicateBackgroundRead(t *testing.T) {
}
wg.Wait()
}
// Test that the bufio.Reader returned by Hijack includes any buffered
// byte (from the Server's backgroundRead) in its buffer. We want the
// Handler code to be able to tell that a byte is available via
// bufio.Reader.Buffered(), without resorting to Reading it
// (potentially blocking) to get at it.
func TestServerHijackGetsBackgroundByte(t *testing.T) {
if runtime.GOOS == "plan9" {
t.Skip("skipping test; see https://golang.org/issue/18657")
}
setParallel(t)
defer afterTest(t)
done := make(chan struct{})
inHandler := make(chan bool, 1)
ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
defer close(done)
// Tell the client to send more data after the GET request.
inHandler <- true
// Wait until the HTTP server sees the extra data
// after the GET request. The HTTP server fires the
// close notifier here, assuming it's a pipelined
// request, as documented.
select {
case <-w.(CloseNotifier).CloseNotify():
case <-time.After(5 * time.Second):
t.Error("timeout")
return
}
conn, buf, err := w.(Hijacker).Hijack()
if err != nil {
t.Error(err)
return
}
defer conn.Close()
n := buf.Reader.Buffered()
if n != 1 {
t.Errorf("buffered data = %d; want 1", n)
}
peek, err := buf.Reader.Peek(3)
if string(peek) != "foo" || err != nil {
t.Errorf("Peek = %q, %v; want foo, nil", peek, err)
}
}))
defer ts.Close()
cn, err := net.Dial("tcp", ts.Listener.Addr().String())
if err != nil {
t.Fatal(err)
}
defer cn.Close()
if _, err := cn.Write([]byte("GET / HTTP/1.1\r\nHost: e.com\r\n\r\n")); err != nil {
t.Fatal(err)
}
<-inHandler
if _, err := cn.Write([]byte("foo")); err != nil {
t.Fatal(err)
}
if err := cn.(*net.TCPConn).CloseWrite(); err != nil {
t.Fatal(err)
}
select {
case <-done:
case <-time.After(2 * time.Second):
t.Error("timeout")
}
}
// Like TestServerHijackGetsBackgroundByte above but sending an
// immediate 1MB of data to the server to fill up the server's 4KB
// buffer.
func TestServerHijackGetsBackgroundByte_big(t *testing.T) {
if runtime.GOOS == "plan9" {
t.Skip("skipping test; see https://golang.org/issue/18657")
}
setParallel(t)
defer afterTest(t)
done := make(chan struct{})
const size = 8 << 10
ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
defer close(done)
// Wait until the HTTP server sees the extra data
// after the GET request. The HTTP server fires the
// close notifier here, assuming it's a pipelined
// request, as documented.
select {
case <-w.(CloseNotifier).CloseNotify():
case <-time.After(5 * time.Second):
t.Error("timeout")
return
}
conn, buf, err := w.(Hijacker).Hijack()
if err != nil {
t.Error(err)
return
}
defer conn.Close()
slurp, err := ioutil.ReadAll(buf.Reader)
if err != nil {
t.Errorf("Copy: %v", err)
}
allX := true
for _, v := range slurp {
if v != 'x' {
allX = false
}
}
if len(slurp) != size {
t.Errorf("read %d; want %d", len(slurp), size)
} else if !allX {
t.Errorf("read %q; want %d 'x'", slurp, size)
}
}))
defer ts.Close()
cn, err := net.Dial("tcp", ts.Listener.Addr().String())
if err != nil {
t.Fatal(err)
}
defer cn.Close()
if _, err := fmt.Fprintf(cn, "GET / HTTP/1.1\r\nHost: e.com\r\n\r\n%s",
strings.Repeat("x", size)); err != nil {
t.Fatal(err)
}
if err := cn.(*net.TCPConn).CloseWrite(); err != nil {
t.Fatal(err)
}
select {
case <-done:
case <-time.After(2 * time.Second):
t.Error("timeout")
}
}


@@ -164,7 +164,7 @@ type Flusher interface {
// should always test for this ability at runtime.
type Hijacker interface {
// Hijack lets the caller take over the connection.
// After a call to Hijack the HTTP server library
// After a call to Hijack(), the HTTP server library
// will not do anything else with the connection.
//
// It becomes the caller's responsibility to manage
@@ -174,9 +174,6 @@ type Hijacker interface {
// already set, depending on the configuration of the
// Server. It is the caller's responsibility to set
// or clear those deadlines as needed.
//
// The returned bufio.Reader may contain unprocessed buffered
// data from the client.
Hijack() (net.Conn, *bufio.ReadWriter, error)
}
@@ -296,11 +293,6 @@ func (c *conn) hijackLocked() (rwc net.Conn, buf *bufio.ReadWriter, err error) {
rwc.SetDeadline(time.Time{})
buf = bufio.NewReadWriter(c.bufr, bufio.NewWriter(rwc))
if c.r.hasByte {
if _, err := c.bufr.Peek(c.bufr.Buffered() + 1); err != nil {
return nil, nil, fmt.Errorf("unexpected Peek failure reading buffered byte: %v", err)
}
}
c.setState(rwc, StateHijacked)
return
}
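The Peek added in hijackLocked pulls the background-read byte into c.bufr before handing it out, which is why the Hijacker doc comment now warns that the returned bufio.Reader may already contain client data. A minimal handler showing how callers can observe that buffered data without blocking (a sketch, not code from this change):

package main

import (
	"fmt"
	"net/http"
)

func hijackHandler(w http.ResponseWriter, r *http.Request) {
	hj, ok := w.(http.Hijacker)
	if !ok {
		http.Error(w, "hijacking unsupported", http.StatusInternalServerError)
		return
	}
	conn, buf, err := hj.Hijack()
	if err != nil {
		return
	}
	defer conn.Close()
	// Bytes the server already read from the client (a pipelined request,
	// or the background-read byte mentioned above) show up here without a
	// blocking Read.
	if n := buf.Reader.Buffered(); n > 0 {
		peek, _ := buf.Reader.Peek(n)
		fmt.Printf("client already sent %d byte(s): %q\n", n, peek)
	}
}

func main() {
	http.HandleFunc("/", hijackHandler)
	http.ListenAndServe("localhost:8080", nil) // address is arbitrary for the sketch
}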


@@ -36,7 +36,6 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
)
@@ -2546,13 +2545,6 @@ type closerFunc func() error
func (f closerFunc) Close() error { return f() }
type writerFuncConn struct {
net.Conn
write func(p []byte) (n int, err error)
}
func (c writerFuncConn) Write(p []byte) (n int, err error) { return c.write(p) }
// Issue 4677. If we try to reuse a connection that the server is in the
// process of closing, we may end up successfully writing out our request (or a
// portion of our request) only to find a connection error when we try to read
@@ -2565,78 +2557,66 @@ func (c writerFuncConn) Write(p []byte) (n int, err error) { return c.write(p) }
func TestRetryIdempotentRequestsOnError(t *testing.T) {
defer afterTest(t)
var (
mu sync.Mutex
logbuf bytes.Buffer
)
logf := func(format string, args ...interface{}) {
mu.Lock()
defer mu.Unlock()
fmt.Fprintf(&logbuf, format, args...)
logbuf.WriteByte('\n')
}
ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
logf("Handler")
w.Header().Set("X-Status", "ok")
}))
defer ts.Close()
var writeNumAtomic int32
tr := &Transport{
Dial: func(network, addr string) (net.Conn, error) {
logf("Dial")
c, err := net.Dial(network, ts.Listener.Addr().String())
if err != nil {
logf("Dial error: %v", err)
return nil, err
}
return &writerFuncConn{
Conn: c,
write: func(p []byte) (n int, err error) {
if atomic.AddInt32(&writeNumAtomic, 1) == 2 {
logf("intentional write failure")
return 0, errors.New("second write fails")
}
logf("Write(%q)", p)
return c.Write(p)
},
}, nil
},
}
defer tr.CloseIdleConnections()
tr := &Transport{}
c := &Client{Transport: tr}
const N = 2
retryc := make(chan struct{}, N)
SetRoundTripRetried(func() {
logf("Retried.")
retryc <- struct{}{}
})
defer SetRoundTripRetried(nil)
for i := 0; i < 3; i++ {
res, err := c.Get("http://fake.golang/")
if err != nil {
t.Fatalf("i=%d: Get = %v", i, err)
for n := 0; n < 100; n++ {
// open 2 conns
errc := make(chan error, N)
for i := 0; i < N; i++ {
// start goroutines, send on errc
go func() {
res, err := c.Get(ts.URL)
if err == nil {
res.Body.Close()
}
errc <- err
}()
}
for i := 0; i < N; i++ {
if err := <-errc; err != nil {
t.Fatal(err)
}
}
res.Body.Close()
}
mu.Lock()
got := logbuf.String()
mu.Unlock()
const want = `Dial
Write("GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n")
Handler
intentional write failure
Retried.
Dial
Write("GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n")
Handler
Write("GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n")
Handler
`
if got != want {
t.Errorf("Log of events differs. Got:\n%s\nWant:\n%s", got, want)
ts.CloseClientConnections()
for i := 0; i < N; i++ {
go func() {
res, err := c.Get(ts.URL)
if err == nil {
res.Body.Close()
}
errc <- err
}()
}
for i := 0; i < N; i++ {
if err := <-errc; err != nil {
t.Fatal(err)
}
}
for i := 0; i < N; i++ {
select {
case <-retryc:
// we triggered a retry, test was successful
t.Logf("finished after %d runs\n", n)
return
default:
}
}
}
t.Fatal("did not trigger any retries")
}
// Issue 6981


@@ -54,15 +54,12 @@ var sysdir = func() *sysDir {
case "darwin":
switch runtime.GOARCH {
case "arm", "arm64":
// At this point the test harness has not had a chance
// to move us into the ./src/os directory, so the
// current working directory is the root of the app.
wd, err := syscall.Getwd()
if err != nil {
wd = err.Error()
}
return &sysDir{
wd,
filepath.Join(wd, "..", ".."),
[]string{
"ResourceRules.plist",
"Info.plist",


@@ -26,8 +26,6 @@ import (
"unsafe"
)
var sink interface{}
func TestBool(t *testing.T) {
v := ValueOf(true)
if v.Bool() != true {
@@ -5333,72 +5331,6 @@ func TestCallGC(t *testing.T) {
f2("four", "five5", "six666", "seven77", "eight888")
}
// Issue 18635 (function version).
func TestKeepFuncLive(t *testing.T) {
// Test that we keep makeFuncImpl live as long as it is
// referenced on the stack.
typ := TypeOf(func(i int) {})
var f, g func(in []Value) []Value
f = func(in []Value) []Value {
clobber()
i := int(in[0].Int())
if i > 0 {
// We can't use Value.Call here because
// runtime.call* will keep the makeFuncImpl
// alive. However, by converting it to an
// interface value and calling that,
// reflect.callReflect is the only thing that
// can keep the makeFuncImpl live.
//
// Alternate between f and g so that if we do
// reuse the memory prematurely it's more
// likely to get obviously corrupted.
MakeFunc(typ, g).Interface().(func(i int))(i - 1)
}
return nil
}
g = func(in []Value) []Value {
clobber()
i := int(in[0].Int())
MakeFunc(typ, f).Interface().(func(i int))(i)
return nil
}
MakeFunc(typ, f).Call([]Value{ValueOf(10)})
}
// Issue 18635 (method version).
type KeepMethodLive struct{}
func (k KeepMethodLive) Method1(i int) {
clobber()
if i > 0 {
ValueOf(k).MethodByName("Method2").Interface().(func(i int))(i - 1)
}
}
func (k KeepMethodLive) Method2(i int) {
clobber()
ValueOf(k).MethodByName("Method1").Interface().(func(i int))(i)
}
func TestKeepMethodLive(t *testing.T) {
// Test that we keep methodValue live as long as it is
// referenced on the stack.
KeepMethodLive{}.Method1(10)
}
// clobber tries to clobber unreachable memory.
func clobber() {
runtime.GC()
for i := 1; i < 32; i++ {
for j := 0; j < 10; j++ {
obj := make([]*byte, i)
sink = obj
}
}
runtime.GC()
}
type funcLayoutTest struct {
rcvr, t Type
size, argsize, retOffset uintptr


@@ -538,11 +538,6 @@ func callReflect(ctxt *makeFuncImpl, frame unsafe.Pointer) {
off += typ.size
}
}
// runtime.getArgInfo expects to be able to find ctxt on the
// stack when it finds our caller, makeFuncStub. Make sure it
// doesn't get garbage collected.
runtime.KeepAlive(ctxt)
}
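The runtime.KeepAlive(ctxt) lines in this hunk only exist to keep ctxt reachable until the end of the frame, per the comment about runtime.getArgInfo. For background, the general KeepAlive shape looks like the sketch below; useRawFD is a hypothetical stand-in for any code that uses a raw descriptor without holding the *os.File:

package main

import (
	"os"
	"runtime"
)

// useRawFD stands in for cgo or raw syscalls that take the descriptor alone.
func useRawFD(fd uintptr) { _ = fd }

func main() {
	f, err := os.Open(os.DevNull)
	if err != nil {
		return
	}
	fd := f.Fd()
	// Below this point only the raw descriptor is referenced; without
	// KeepAlive the GC could finalize f (closing fd) before useRawFD runs.
	useRawFD(fd)
	runtime.KeepAlive(f) // f stays reachable until at least this point
}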
// methodReceiver returns information about the receiver
@@ -655,9 +650,6 @@ func callMethod(ctxt *methodValue, frame unsafe.Pointer) {
// though it's a heap object.
memclrNoHeapPointers(args, frametype.size)
framePool.Put(args)
// See the comment in callReflect.
runtime.KeepAlive(ctxt)
}
// funcName returns the name of f, for use in error messages.


@@ -111,7 +111,7 @@ const (
// Tiny allocator parameters, see "Tiny allocator" comment in malloc.go.
_TinySize = 16
_TinySizeClass = 2
_TinySizeClass = int8(2)
_FixAllocChunk = 16 << 10 // Chunk size for FixAlloc
_MaxMHeapList = 1 << (20 - _PageShift) // Maximum page length for fixed-size list in MHeap.
@@ -512,8 +512,8 @@ func nextFreeFast(s *mspan) gclinkptr {
// weight allocation. If it is a heavy weight allocation the caller must
// determine whether a new GC cycle needs to be started or if the GC is active
// whether this goroutine needs to assist the GC.
func (c *mcache) nextFree(sizeclass uint8) (v gclinkptr, s *mspan, shouldhelpgc bool) {
s = c.alloc[sizeclass]
func (c *mcache) nextFree(spc spanClass) (v gclinkptr, s *mspan, shouldhelpgc bool) {
s = c.alloc[spc]
shouldhelpgc = false
freeIndex := s.nextFreeIndex()
if freeIndex == s.nelems {
@@ -523,10 +523,10 @@ func (c *mcache) nextFree(sizeclass uint8) (v gclinkptr, s *mspan, shouldhelpgc
throw("s.allocCount != s.nelems && freeIndex == s.nelems")
}
systemstack(func() {
c.refill(int32(sizeclass))
c.refill(spc)
})
shouldhelpgc = true
s = c.alloc[sizeclass]
s = c.alloc[spc]
freeIndex = s.nextFreeIndex()
}
@@ -650,10 +650,10 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
return x
}
// Allocate a new maxTinySize block.
span := c.alloc[tinySizeClass]
span := c.alloc[tinySpanClass]
v := nextFreeFast(span)
if v == 0 {
v, _, shouldhelpgc = c.nextFree(tinySizeClass)
v, _, shouldhelpgc = c.nextFree(tinySpanClass)
}
x = unsafe.Pointer(v)
(*[2]uint64)(x)[0] = 0
@@ -673,10 +673,11 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
sizeclass = size_to_class128[(size-smallSizeMax+largeSizeDiv-1)/largeSizeDiv]
}
size = uintptr(class_to_size[sizeclass])
span := c.alloc[sizeclass]
spc := makeSpanClass(sizeclass, noscan)
span := c.alloc[spc]
v := nextFreeFast(span)
if v == 0 {
v, span, shouldhelpgc = c.nextFree(sizeclass)
v, span, shouldhelpgc = c.nextFree(spc)
}
x = unsafe.Pointer(v)
if needzero && span.needzero != 0 {
@@ -687,7 +688,7 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
var s *mspan
shouldhelpgc = true
systemstack(func() {
s = largeAlloc(size, needzero)
s = largeAlloc(size, needzero, noscan)
})
s.freeindex = 1
s.allocCount = 1
@@ -696,9 +697,7 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
}
var scanSize uintptr
if noscan {
heapBitsSetTypeNoScan(uintptr(x))
} else {
if !noscan {
// If allocating a defer+arg block, now that we've picked a malloc size
// large enough to hold everything, cut the "asked for" size down to
// just the defer header, so that the GC bitmap will record the arg block
@@ -776,7 +775,7 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
return x
}
func largeAlloc(size uintptr, needzero bool) *mspan {
func largeAlloc(size uintptr, needzero bool, noscan bool) *mspan {
// print("largeAlloc size=", size, "\n")
if size+_PageSize < size {
@@ -792,7 +791,7 @@ func largeAlloc(size uintptr, needzero bool) *mspan {
// pays the debt down to npage pages.
deductSweepCredit(npages*_PageSize, npages)
s := mheap_.alloc(npages, 0, true, needzero)
s := mheap_.alloc(npages, makeSpanClass(0, noscan), true, needzero)
if s == nil {
throw("out of memory")
}


@@ -45,6 +45,11 @@
// not checkmarked, and is the dead encoding.
// These properties must be preserved when modifying the encoding.
//
// The bitmap for noscan spans is not maintained. Code must ensure
// that an object is scannable before consulting its bitmap by
// checking either the noscan bit in the span or the object's type
// information.
//
// Checkmarks
//
// In a concurrent garbage collector, one worries about failing to mark
@@ -184,6 +189,90 @@ type markBits struct {
index uintptr
}
//go:nosplit
func inBss(p uintptr) bool {
for datap := &firstmoduledata; datap != nil; datap = datap.next {
if p >= datap.bss && p < datap.ebss {
return true
}
}
return false
}
//go:nosplit
func inData(p uintptr) bool {
for datap := &firstmoduledata; datap != nil; datap = datap.next {
if p >= datap.data && p < datap.edata {
return true
}
}
return false
}
// isPublic checks whether the object has been published.
// ptr may not point to the start of the object.
// This is conservative in the sense that it will return true
// for any object that hasn't been allocated by this
// goroutine since the last roc checkpoint was performed.
// Must run on the system stack to prevent stack growth and
// moving of the goroutine stack.
//go:systemstack
func isPublic(ptr uintptr) bool {
if debug.gcroc == 0 {
// Unexpected call to a ROC-specific routine while not running ROC.
// Blow up without suppressing inlining.
_ = *(*int)(nil)
}
if inStack(ptr, getg().stack) {
return false
}
if getg().m != nil && getg().m.curg != nil && inStack(ptr, getg().m.curg.stack) {
return false
}
if inBss(ptr) {
return true
}
if inData(ptr) {
return true
}
if !inheap(ptr) {
// Note: Objects created using persistentalloc are not in the heap
// so any pointers from such objects to local objects need to be dealt
// with specially. nil is also considered not in the heap.
return true
}
// At this point we know the object is in the heap.
s := spanOf(ptr)
oldSweepgen := atomic.Load(&s.sweepgen)
sg := mheap_.sweepgen
if oldSweepgen != sg {
// We have an unswept span which means that the pointer points to a public object since it will
// be found to be marked once it is swept.
return true
}
abits := s.allocBitsForAddr(ptr)
if abits.isMarked() {
return true
} else if s.freeindex <= abits.index {
// Unmarked and beyond freeindex yet reachable object encountered.
// Blow up without suppressing inlining.
_ = *(*int)(nil)
}
// The object is not marked. If it is part of the current
// ROC epoch then it is not public.
if s.startindex*s.elemsize <= ptr-s.base() {
// Object allocated in this ROC epoch and since it is
// not marked it has not been published.
return false
}
// Object allocated since last GC but in a previous ROC epoch so it is public.
return true
}
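isPublic boils down to an ordered series of range and mark-bit tests. Restating that order as a standalone sketch, with the concrete runtime checks abstracted into caller-supplied predicates (all names here are illustrative):

package rocdemo

// publicity mirrors the decision order above: stack-local wins first, then
// globals, then non-heap memory, then mark/sweep state, then the ROC epoch.
func publicity(p uintptr,
	onOwnStack, inGlobals, inHeap, markedOrUnswept, inCurrentEpoch func(uintptr) bool) bool {
	switch {
	case onOwnStack(p):
		return false // still local to the allocating goroutine's stack
	case inGlobals(p):
		return true // bss/data are reachable by anyone
	case !inHeap(p):
		return true // persistentalloc memory, nil, etc. are treated as public
	case markedOrUnswept(p):
		return true // marked, or in a not-yet-swept span: already published
	case inCurrentEpoch(p):
		return false // allocated by this goroutine since the last ROC checkpoint
	default:
		return true // allocated earlier than the current epoch, hence public
	}
}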
//go:nosplit
func (s *mspan) allocBitsForIndex(allocBitIndex uintptr) markBits {
whichByte := allocBitIndex / 8
@@ -192,6 +281,16 @@ func (s *mspan) allocBitsForIndex(allocBitIndex uintptr) markBits {
return markBits{bytePtr, uint8(1 << whichBit), allocBitIndex}
}
//go:nosplit
func (s *mspan) allocBitsForAddr(p uintptr) markBits {
byteOffset := p - s.base()
allocBitIndex := byteOffset / s.elemsize
whichByte := allocBitIndex / 8
whichBit := allocBitIndex % 8
bytePtr := addb(s.allocBits, whichByte)
return markBits{bytePtr, uint8(1 << whichBit), allocBitIndex}
}
// refillAllocCache takes 8 bytes of s.allocBits starting at whichByte
// and negates them so that ctz (count trailing zeros) instructions
// can be used. It then places these 8 bytes into the cached 64 bit
@@ -509,16 +608,6 @@ func (h heapBits) isPointer() bool {
return h.bits()&bitPointer != 0
}
// hasPointers reports whether the given object has any pointers.
// It must be told how large the object at h is for efficiency.
// h must describe the initial word of the object.
func (h heapBits) hasPointers(size uintptr) bool {
if size == sys.PtrSize { // 1-word objects are always pointers
return true
}
return (*h.bitp>>h.shift)&bitScan != 0
}
// isCheckmarked reports whether the heap bits have the checkmarked bit set.
// It must be told how large the object at h is, because the encoding of the
// checkmark bit varies by size.
@@ -1363,13 +1452,6 @@ Phase4:
}
}
// heapBitsSetTypeNoScan marks x as noscan by setting the first word
// of x in the heap bitmap to scalar/dead.
func heapBitsSetTypeNoScan(x uintptr) {
h := heapBitsForAddr(uintptr(x))
*h.bitp &^= (bitPointer | bitScan) << h.shift
}
var debugPtrmask struct {
lock mutex
data *byte


@@ -33,7 +33,8 @@ type mcache struct {
local_tinyallocs uintptr // number of tiny allocs not counted in other stats
// The rest is not accessed on every malloc.
alloc [_NumSizeClasses]*mspan // spans to allocate from
alloc [numSpanClasses]*mspan // spans to allocate from, indexed by spanClass
stackcache [_NumStackOrders]stackfreelist
@@ -77,7 +78,7 @@ func allocmcache() *mcache {
lock(&mheap_.lock)
c := (*mcache)(mheap_.cachealloc.alloc())
unlock(&mheap_.lock)
for i := 0; i < _NumSizeClasses; i++ {
for i := range c.alloc {
c.alloc[i] = &emptymspan
}
c.next_sample = nextSample()
@@ -103,12 +104,12 @@ func freemcache(c *mcache) {
// Gets a span that has a free object in it and assigns it
// to be the cached span for the given sizeclass. Returns this span.
func (c *mcache) refill(sizeclass int32) *mspan {
func (c *mcache) refill(spc spanClass) *mspan {
_g_ := getg()
_g_.m.locks++
// Return the current cached span to the central lists.
s := c.alloc[sizeclass]
s := c.alloc[spc]
if uintptr(s.allocCount) != s.nelems {
throw("refill of span with free space remaining")
@@ -119,7 +120,7 @@ func (c *mcache) refill(sizeclass int32) *mspan {
}
// Get a new cached span from the central lists.
s = mheap_.central[sizeclass].mcentral.cacheSpan()
s = mheap_.central[spc].mcentral.cacheSpan()
if s == nil {
throw("out of memory")
}
@@ -128,13 +129,13 @@ func (c *mcache) refill(sizeclass int32) *mspan {
throw("span has no free space")
}
c.alloc[sizeclass] = s
c.alloc[spc] = s
_g_.m.locks--
return s
}
func (c *mcache) releaseAll() {
for i := 0; i < _NumSizeClasses; i++ {
for i := range c.alloc {
s := c.alloc[i]
if s != &emptymspan {
mheap_.central[i].mcentral.uncacheSpan(s)


@@ -19,14 +19,14 @@ import "runtime/internal/atomic"
//go:notinheap
type mcentral struct {
lock mutex
sizeclass int32
spanclass spanClass
nonempty mSpanList // list of spans with a free object, ie a nonempty free list
empty mSpanList // list of spans with no free objects (or cached in an mcache)
}
// Initialize a single central free list.
func (c *mcentral) init(sizeclass int32) {
c.sizeclass = sizeclass
func (c *mcentral) init(spc spanClass) {
c.spanclass = spc
c.nonempty.init()
c.empty.init()
}
@@ -34,7 +34,7 @@ func (c *mcentral) init(sizeclass int32) {
// Allocate a span to use in an MCache.
func (c *mcentral) cacheSpan() *mspan {
// Deduct credit for this span allocation and sweep if necessary.
spanBytes := uintptr(class_to_allocnpages[c.sizeclass]) * _PageSize
spanBytes := uintptr(class_to_allocnpages[c.spanclass.sizeclass()]) * _PageSize
deductSweepCredit(spanBytes, 0)
lock(&c.lock)
@@ -205,11 +205,11 @@ func (c *mcentral) freeSpan(s *mspan, preserve bool, wasempty bool) bool {
// grow allocates a new empty span from the heap and initializes it for c's size class.
func (c *mcentral) grow() *mspan {
npages := uintptr(class_to_allocnpages[c.sizeclass])
size := uintptr(class_to_size[c.sizeclass])
npages := uintptr(class_to_allocnpages[c.spanclass.sizeclass()])
size := uintptr(class_to_size[c.spanclass.sizeclass()])
n := (npages << _PageShift) / size
s := mheap_.alloc(npages, c.sizeclass, false, true)
s := mheap_.alloc(npages, c.spanclass, false, true)
if s == nil {
return nil
}


@@ -146,16 +146,10 @@ move_1or2:
move_0:
RET
move_3or4:
CMPQ BX, $4
JB move_3
MOVL (SI), AX
MOVL AX, (DI)
RET
move_3:
MOVW (SI), AX
MOVB 2(SI), CX
MOVW -2(SI)(BX*1), CX
MOVW AX, (DI)
MOVB CX, 2(DI)
MOVW CX, -2(DI)(BX*1)
RET
move_5through7:
MOVL (SI), AX


@@ -6,7 +6,6 @@ package runtime_test
import (
"crypto/rand"
"encoding/binary"
"fmt"
"internal/race"
. "runtime"
@@ -448,22 +447,3 @@ func BenchmarkCopyFat1024(b *testing.B) {
_ = y
}
}
func BenchmarkIssue18740(b *testing.B) {
// This tests that memmove uses one 4-byte load/store to move 4 bytes.
// It used to do 2 2-byte load/stores, which leads to a pipeline stall
// when we try to read the result with one 4-byte load.
var buf [4]byte
for j := 0; j < b.N; j++ {
s := uint32(0)
for i := 0; i < 4096; i += 4 {
copy(buf[:], g[i:])
s += binary.LittleEndian.Uint32(buf[:])
}
sink = uint64(s)
}
}
// TODO: 2 byte and 8 byte benchmarks also.
var g [4096]byte


@@ -441,7 +441,7 @@ func findObject(v unsafe.Pointer) (s *mspan, x unsafe.Pointer, n uintptr) {
}
n = s.elemsize
if s.sizeclass != 0 {
if s.spanclass.sizeclass() != 0 {
x = add(x, (uintptr(v)-uintptr(x))/n*n)
}
return


@@ -244,6 +244,7 @@ var writeBarrier struct {
pad [3]byte // compiler uses 32-bit load for "enabled" field
needed bool // whether we need a write barrier for current GC phase
cgo bool // whether we need a write barrier for a cgo check
roc bool // whether we need a write barrier for the ROC algorithm
alignme uint64 // guarantee alignment so that compiler can use a 32 or 64-bit load
}
@@ -1129,6 +1130,8 @@ top:
// sitting in the per-P work caches.
// Flush and disable work caches.
gcMarkRootCheck()
// Disallow caching workbufs and indicate that we're in mark 2.
gcBlackenPromptly = true
@@ -1151,16 +1154,6 @@ top:
})
})
// Check that roots are marked. We should be able to
// do this before the forEachP, but based on issue
// #16083 there may be a (harmless) race where we can
// enter mark 2 while some workers are still scanning
// stacks. The forEachP ensures these scans are done.
//
// TODO(austin): Figure out the race and fix this
// properly.
gcMarkRootCheck()
// Now we can start up mark 2 workers.
atomic.Xaddint64(&gcController.dedicatedMarkWorkersNeeded, 0xffffffff)
atomic.Xaddint64(&gcController.fractionalMarkWorkersNeeded, 0xffffffff)


@@ -1268,7 +1268,7 @@ func scanobject(b uintptr, gcw *gcWork) {
// paths), in which case we must *not* enqueue
// oblets since their bitmaps will be
// uninitialized.
if !hbits.hasPointers(n) {
if s.spanclass.noscan() {
// Bypass the whole scan.
gcw.bytesMarked += uint64(n)
return
@@ -1396,7 +1396,7 @@ func greyobject(obj, base, off uintptr, hbits heapBits, span *mspan, gcw *gcWork
atomic.Or8(mbits.bytep, mbits.mask)
// If this is a noscan object, fast-track it to black
// instead of greying it.
if !hbits.hasPointers(span.elemsize) {
if span.spanclass.noscan() {
gcw.bytesMarked += uint64(span.elemsize)
return
}
@@ -1429,7 +1429,7 @@ func gcDumpObject(label string, obj, off uintptr) {
print(" s=nil\n")
return
}
print(" s.base()=", hex(s.base()), " s.limit=", hex(s.limit), " s.sizeclass=", s.sizeclass, " s.elemsize=", s.elemsize, " s.state=")
print(" s.base()=", hex(s.base()), " s.limit=", hex(s.limit), " s.spanclass=", s.spanclass, " s.elemsize=", s.elemsize, " s.state=")
if 0 <= s.state && int(s.state) < len(mSpanStateNames) {
print(mSpanStateNames[s.state], "\n")
} else {


@@ -183,7 +183,7 @@ func (s *mspan) sweep(preserve bool) bool {
atomic.Xadd64(&mheap_.pagesSwept, int64(s.npages))
cl := s.sizeclass
spc := s.spanclass
size := s.elemsize
res := false
nfree := 0
@@ -277,7 +277,7 @@ func (s *mspan) sweep(preserve bool) bool {
// Count the number of free objects in this span.
nfree = s.countFree()
if cl == 0 && nfree != 0 {
if spc.sizeclass() == 0 && nfree != 0 {
s.needzero = 1
freeToHeap = true
}
@@ -318,9 +318,9 @@ func (s *mspan) sweep(preserve bool) bool {
atomic.Store(&s.sweepgen, sweepgen)
}
if nfreed > 0 && cl != 0 {
c.local_nsmallfree[cl] += uintptr(nfreed)
res = mheap_.central[cl].mcentral.freeSpan(s, preserve, wasempty)
if nfreed > 0 && spc.sizeclass() != 0 {
c.local_nsmallfree[spc.sizeclass()] += uintptr(nfreed)
res = mheap_.central[spc].mcentral.freeSpan(s, preserve, wasempty)
// MCentral_FreeSpan updates sweepgen
} else if freeToHeap {
// Free large span to heap


@@ -98,7 +98,8 @@ type mheap struct {
// the padding makes sure that the MCentrals are
// spaced CacheLineSize bytes apart, so that each MCentral.lock
// gets its own cache line.
central [_NumSizeClasses]struct {
// central is indexed by spanClass.
central [numSpanClasses]struct {
mcentral mcentral
pad [sys.CacheLineSize]byte
}
@@ -194,6 +195,13 @@ type mspan struct {
// helps performance.
nelems uintptr // number of objects in the span.
// startindex is the object index where the owner G started allocating in this span.
//
// This is used in conjunction with nextUsedSpan to implement ROC checkpoints and recycles.
startindex uintptr
// nextUsedSpan links together all spans that have the same span class and owner G.
nextUsedSpan *mspan
// Cache of the allocBits at freeindex. allocCache is shifted
// such that the lowest bit corresponds to the bit freeindex.
// allocCache holds the complement of allocBits, thus allowing
@@ -237,7 +245,7 @@ type mspan struct {
divMul uint16 // for divide by elemsize - divMagic.mul
baseMask uint16 // if non-0, elemsize is a power of 2, & this will get object allocation base
allocCount uint16 // capacity - number of objects in freelist
sizeclass uint8 // size class
spanclass spanClass // size class and noscan (uint8)
incache bool // being used by an mcache
state mSpanState // mspaninuse etc
needzero uint8 // needs to be zeroed before allocation
@@ -292,6 +300,31 @@ func recordspan(vh unsafe.Pointer, p unsafe.Pointer) {
h.allspans = append(h.allspans, s)
}
// A spanClass represents the size class and noscan-ness of a span.
//
// Each size class has a noscan spanClass and a scan spanClass. The
// noscan spanClass contains only noscan objects, which do not contain
// pointers and thus do not need to be scanned by the garbage
// collector.
type spanClass uint8
const (
numSpanClasses = _NumSizeClasses << 1
tinySpanClass = spanClass(tinySizeClass<<1 | 1)
)
func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
return spanClass(sizeclass<<1) | spanClass(bool2int(noscan))
}
func (sc spanClass) sizeclass() int8 {
return int8(sc >> 1)
}
func (sc spanClass) noscan() bool {
return sc&1 != 0
}
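spanClass packs the size class and the noscan bit into a single byte (class<<1 | noscan), which is what lets mcache.alloc and mheap.central be indexed directly by it; with tinySizeClass = 2, tinySpanClass works out to 2<<1|1 = 5. A standalone round-trip sketch of the encoding, outside the runtime:

package spandemo

// spanClass packs a size class and a noscan bit into one byte, mirroring the
// runtime helpers above.
type spanClass uint8

func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
	sc := spanClass(sizeclass << 1)
	if noscan {
		sc |= 1
	}
	return sc
}

func (sc spanClass) sizeclass() int8 { return int8(sc >> 1) }
func (sc spanClass) noscan() bool    { return sc&1 != 0 }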
// inheap reports whether b is a pointer into a (potentially dead) heap object.
// It returns false for pointers into stack spans.
// Non-preemptible because it is used by write barriers.
@@ -376,7 +409,7 @@ func mlookup(v uintptr, base *uintptr, size *uintptr, sp **mspan) int32 {
}
p := s.base()
if s.sizeclass == 0 {
if s.spanclass.sizeclass() == 0 {
// Large object.
if base != nil {
*base = p
@@ -424,7 +457,7 @@ func (h *mheap) init(spansStart, spansBytes uintptr) {
h.freelarge.init()
h.busylarge.init()
for i := range h.central {
h.central[i].mcentral.init(int32(i))
h.central[i].mcentral.init(spanClass(i))
}
sp := (*slice)(unsafe.Pointer(&h.spans))
@@ -533,7 +566,7 @@ func (h *mheap) reclaim(npage uintptr) {
// Allocate a new span of npage pages from the heap for GC'd memory
// and record its size class in the HeapMap and HeapMapCache.
func (h *mheap) alloc_m(npage uintptr, sizeclass int32, large bool) *mspan {
func (h *mheap) alloc_m(npage uintptr, spanclass spanClass, large bool) *mspan {
_g_ := getg()
if _g_ != _g_.m.g0 {
throw("_mheap_alloc not on g0 stack")
@@ -567,8 +600,8 @@ func (h *mheap) alloc_m(npage uintptr, sizeclass int32, large bool) *mspan {
h.sweepSpans[h.sweepgen/2%2].push(s) // Add to swept in-use list.
s.state = _MSpanInUse
s.allocCount = 0
s.sizeclass = uint8(sizeclass)
if sizeclass == 0 {
s.spanclass = spanclass
if sizeclass := spanclass.sizeclass(); sizeclass == 0 {
s.elemsize = s.npages << _PageShift
s.divShift = 0
s.divMul = 0
@@ -618,13 +651,13 @@ func (h *mheap) alloc_m(npage uintptr, sizeclass int32, large bool) *mspan {
return s
}
func (h *mheap) alloc(npage uintptr, sizeclass int32, large bool, needzero bool) *mspan {
func (h *mheap) alloc(npage uintptr, spanclass spanClass, large bool, needzero bool) *mspan {
// Don't do any operations that lock the heap on the G stack.
// It might trigger stack growth, and the stack growth code needs
// to be able to allocate heap.
var s *mspan
systemstack(func() {
s = h.alloc_m(npage, sizeclass, large)
s = h.alloc_m(npage, spanclass, large)
})
if s != nil {
@@ -1020,7 +1053,7 @@ func (span *mspan) init(base uintptr, npages uintptr) {
span.startAddr = base
span.npages = npages
span.allocCount = 0
span.sizeclass = 0
span.spanclass = 0
span.incache = false
span.elemsize = 0
span.state = _MSpanDead


@@ -28,11 +28,9 @@ const msanenabled = true
// the runtime, but operations like a slice copy can call msanread
// anyhow for values on the stack. Just ignore msanread when running
// on the system stack. The other msan functions are fine.
//
//go:nosplit
func msanread(addr unsafe.Pointer, sz uintptr) {
g := getg()
if g == nil || g.m == nil || g == g.m.g0 || g == g.m.gsignal {
if g == g.m.g0 || g == g.m.gsignal {
return
}
domsanread(addr, sz)


@@ -553,12 +553,12 @@ func updatememstats(stats *gcstats) {
if s.state != mSpanInUse {
continue
}
if s.sizeclass == 0 {
if sizeclass := s.spanclass.sizeclass(); sizeclass == 0 {
memstats.nmalloc++
memstats.alloc += uint64(s.elemsize)
} else {
memstats.nmalloc += uint64(s.allocCount)
memstats.by_size[s.sizeclass].nmalloc += uint64(s.allocCount)
memstats.by_size[sizeclass].nmalloc += uint64(s.allocCount)
memstats.alloc += uint64(s.allocCount) * uint64(s.elemsize)
}
}


@@ -56,9 +56,7 @@ func plugin_lastmoduleinit() (path string, syms map[string]interface{}, mismatch
lock(&ifaceLock)
for _, i := range md.itablinks {
if i.inhash == 0 {
additab(i, true, false)
}
additab(i, true, false)
}
unlock(&ifaceLock)


@@ -7,7 +7,6 @@ package runtime_test
import (
"bytes"
"fmt"
"go/build"
"internal/testenv"
"io/ioutil"
"os"
@@ -68,6 +67,7 @@ func checkGdbPython(t *testing.T) {
}
const helloSource = `
package main
import "fmt"
var gslice []string
func main() {
@@ -85,20 +85,9 @@ func main() {
`
func TestGdbPython(t *testing.T) {
testGdbPython(t, false)
}
func TestGdbPythonCgo(t *testing.T) {
testGdbPython(t, true)
}
func testGdbPython(t *testing.T, cgo bool) {
if runtime.GOARCH == "mips64" {
testenv.SkipFlaky(t, 18173)
}
if cgo && !build.Default.CgoEnabled {
t.Skip("skipping because cgo is not enabled")
}
t.Parallel()
checkGdbEnvironment(t)
@@ -111,15 +100,8 @@ func testGdbPython(t *testing.T, cgo bool) {
}
defer os.RemoveAll(dir)
var buf bytes.Buffer
buf.WriteString("package main\n")
if cgo {
buf.WriteString(`import "C"` + "\n")
}
buf.WriteString(helloSource)
src := filepath.Join(dir, "main.go")
err = ioutil.WriteFile(src, buf.Bytes(), 0644)
err = ioutil.WriteFile(src, []byte(helloSource), 0644)
if err != nil {
t.Fatalf("failed to create file: %v", err)
}


@@ -324,6 +324,7 @@ var debug struct {
gcrescanstacks int32
gcstoptheworld int32
gctrace int32
gcroc int32
invalidptr int32
sbrk int32
scavenge int32
@@ -344,6 +345,7 @@ var dbgvars = []dbgVar{
{"gcrescanstacks", &debug.gcrescanstacks},
{"gcstoptheworld", &debug.gcstoptheworld},
{"gctrace", &debug.gctrace},
{"gcroc", &debug.gcroc},
{"invalidptr", &debug.invalidptr},
{"sbrk", &debug.sbrk},
{"scavenge", &debug.scavenge},
@@ -409,6 +411,11 @@ func parsedebugvars() {
writeBarrier.cgo = true
writeBarrier.enabled = true
}
// For the roc algorithm we turn on the write barrier at all times
if debug.gcroc >= 1 {
writeBarrier.roc = true
writeBarrier.enabled = true
}
}
//go:linkname setTraceback runtime/debug.SetTraceback


@@ -1225,3 +1225,8 @@ func morestackc() {
throw("attempt to execute C code on Go stack")
})
}
//go:nosplit
func inStack(p uintptr, s stack) bool {
return s.lo <= p && p < s.hi
}


@@ -295,3 +295,10 @@ func checkASM() bool
func memequal_varlen(a, b unsafe.Pointer) bool
func eqstring(s1, s2 string) bool
// bool2int returns 0 if x is false or 1 if x is true.
func bool2int(x bool) int {
// Avoid branches. In the SSA compiler, this compiles to
// exactly what you would want it to.
return int(uint8(*(*uint8)(unsafe.Pointer(&x))))
}
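The pointer trick above leans on bool occupying exactly one byte holding 0 or 1; per the comment, it keeps the SSA compiler from emitting a branch. The obvious branching equivalent, for comparison:

package booldemo

// bool2intBranchy is the straightforward, branching counterpart of bool2int.
func bool2intBranchy(x bool) int {
	if x {
		return 1
	}
	return 0
}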


@@ -285,25 +285,6 @@ func modulesinit() {
md.gcbssmask = progToPointerMask((*byte)(unsafe.Pointer(md.gcbss)), md.ebss-md.bss)
}
}
// Modules appear in the moduledata linked list in the order they are
// loaded by the dynamic loader, with one exception: the
// firstmoduledata itself, the module that contains the runtime. This
// is not always the first module (when using -buildmode=shared, it
// is typically libstd.so, the second module). The order matters for
// typelinksinit, so we swap the first module with whatever module
// contains the main function.
//
// See Issue #18729.
mainText := funcPC(main_main)
for i, md := range *modules {
if md.text <= mainText && mainText <= md.etext {
(*modules)[0] = md
(*modules)[i] = &firstmoduledata
break
}
}
atomicstorep(unsafe.Pointer(&modulesSlice), unsafe.Pointer(modules))
}


@@ -330,9 +330,9 @@ sigtrampnog:
// Lock sigprofCallersUse.
MOVL $0, AX
MOVL $1, CX
MOVQ $runtime·sigprofCallersUse(SB), R11
MOVQ $runtime·sigprofCallersUse(SB), BX
LOCK
CMPXCHGL CX, 0(R11)
CMPXCHGL CX, 0(BX)
JNZ sigtramp // Skip stack trace if already locked.
// Jump to the traceback function in runtime/cgo.


@@ -18,7 +18,6 @@ import (
"log"
"os"
"regexp"
"strings"
)
func main() {
@@ -39,16 +38,10 @@ func main() {
re = regexp.MustCompile("Pad_cgo[A-Za-z0-9_]*")
s = re.ReplaceAllString(s, "_")
// We want to keep X__val in Fsid. Hide it and restore it later.
s = strings.Replace(s, "X__val", "MKPOSTFSIDVAL", 1)
// Replace other unwanted fields with blank identifiers.
re = regexp.MustCompile("X_[A-Za-z0-9_]*")
s = re.ReplaceAllString(s, "_")
// Restore X__val in Fsid.
s = strings.Replace(s, "MKPOSTFSIDVAL", "X__val", 1)
// Force the type of RawSockaddr.Data to [14]int8 to match
// the existing gccgo API.
re = regexp.MustCompile("(Data\\s+\\[14\\])uint8")


@@ -140,7 +140,7 @@ type Dirent struct {
}
type Fsid struct {
X__val [2]int32
_ [2]int32
}
type Flock_t struct {


@@ -219,7 +219,7 @@ func (b *B) run1() bool {
}
// Only print the output if we know we are not going to proceed.
// Otherwise it is printed in processBench.
if atomic.LoadInt32(&b.hasSub) != 0 || b.finished {
if b.hasSub || b.finished {
tag := "BENCH"
if b.skipped {
tag = "SKIP"
@@ -460,13 +460,10 @@ func (ctx *benchContext) processBench(b *B) {
//
// A subbenchmark is like any other benchmark. A benchmark that calls Run at
// least once will not be measured itself and will be called once with N=1.
//
// Run may be called simultaneously from multiple goroutines, but all such
// calls must happen before the outer benchmark function for b returns.
func (b *B) Run(name string, f func(b *B)) bool {
// Since b has subbenchmarks, we will no longer run it as a benchmark itself.
// Release the lock and acquire it on exit to ensure locks stay paired.
atomic.StoreInt32(&b.hasSub, 1)
b.hasSub = true
benchmarkLock.Unlock()
defer benchmarkLock.Lock()
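The Unlock/defer Lock pair in Run temporarily gives up benchmarkLock for the duration of the sub-benchmark while guaranteeing the lock is re-acquired on every return path, so lock and unlock stay paired. The same shape as a tiny standalone helper (illustrative only):

package lockdemo

import "sync"

var mu sync.Mutex

// withLockReleased assumes mu is held on entry; it releases mu around f and
// re-acquires it before returning, on every path, via the deferred Lock.
func withLockReleased(f func()) {
	mu.Unlock()
	defer mu.Lock()
	f()
}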


@@ -6,7 +6,6 @@ package testing
import (
"bytes"
"fmt"
"regexp"
"strings"
"sync/atomic"
@@ -516,19 +515,3 @@ func TestBenchmarkOutput(t *T) {
Benchmark(func(b *B) { b.Error("do not print this output") })
Benchmark(func(b *B) {})
}
func TestParallelSub(t *T) {
c := make(chan int)
block := make(chan int)
for i := 0; i < 10; i++ {
go func(i int) {
<-block
t.Run(fmt.Sprint(i), func(t *T) {})
c <- 1
}(i)
}
close(block)
for i := 0; i < 10; i++ {
<-c
}
}


@@ -216,7 +216,6 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
)
@@ -268,8 +267,8 @@ type common struct {
skipped bool // Test or benchmark has been skipped.
finished bool // Test function has completed.
done bool // Test is finished and all subtests have completed.
hasSub int32 // written atomically
raceErrors int // number of races detected during test
hasSub bool
raceErrors int // number of races detected during test
parent *common
level int // Nesting depth of test or benchmark.
@@ -646,7 +645,7 @@ func tRunner(t *T, fn func(t *T)) {
// Do not lock t.done to allow race detector to detect race in case
// the user does not appropriately synchronizes a goroutine.
t.done = true
if t.parent != nil && atomic.LoadInt32(&t.hasSub) == 0 {
if t.parent != nil && !t.hasSub {
t.setRan()
}
t.signal <- true
@@ -660,11 +659,8 @@ func tRunner(t *T, fn func(t *T)) {
// Run runs f as a subtest of t called name. It reports whether f succeeded.
// Run will block until all its parallel subtests have completed.
//
// Run may be called simultaneously from multiple goroutines, but all such
// calls must happen before the outer test function for t returns.
func (t *T) Run(name string, f func(t *T)) bool {
atomic.StoreInt32(&t.hasSub, 1)
t.hasSub = true
testName, ok := t.context.match.fullName(&t.common, name)
if !ok {
return true


@@ -54,9 +54,9 @@
ADCQ t3, h1; \
ADCQ $0, h2
DATA ·poly1305Mask<>+0x00(SB)/8, $0x0FFFFFFC0FFFFFFF
DATA ·poly1305Mask<>+0x08(SB)/8, $0x0FFFFFFC0FFFFFFC
GLOBL ·poly1305Mask<>(SB), RODATA, $16
DATA poly1305Mask<>+0x00(SB)/8, $0x0FFFFFFC0FFFFFFF
DATA poly1305Mask<>+0x08(SB)/8, $0x0FFFFFFC0FFFFFFC
GLOBL poly1305Mask<>(SB), RODATA, $16
// func poly1305(out *[16]byte, m *byte, mlen uint64, key *[32]key)
TEXT ·poly1305(SB), $0-32
@@ -67,8 +67,8 @@ TEXT ·poly1305(SB), $0-32
MOVQ 0(AX), R11
MOVQ 8(AX), R12
ANDQ ·poly1305Mask<>(SB), R11 // r0
ANDQ ·poly1305Mask<>+8(SB), R12 // r1
ANDQ poly1305Mask<>(SB), R11 // r0
ANDQ poly1305Mask<>+8(SB), R12 // r1
XORQ R8, R8 // h0
XORQ R9, R9 // h1
XORQ R10, R10 // h2


@@ -9,12 +9,12 @@
// This code was translated into a form compatible with 5a from the public
// domain source by Andrew Moon: github.com/floodyberry/poly1305-opt/blob/master/app/extensions/poly1305.
DATA ·poly1305_init_constants_armv6<>+0x00(SB)/4, $0x3ffffff
DATA ·poly1305_init_constants_armv6<>+0x04(SB)/4, $0x3ffff03
DATA ·poly1305_init_constants_armv6<>+0x08(SB)/4, $0x3ffc0ff
DATA ·poly1305_init_constants_armv6<>+0x0c(SB)/4, $0x3f03fff
DATA ·poly1305_init_constants_armv6<>+0x10(SB)/4, $0x00fffff
GLOBL ·poly1305_init_constants_armv6<>(SB), 8, $20
DATA poly1305_init_constants_armv6<>+0x00(SB)/4, $0x3ffffff
DATA poly1305_init_constants_armv6<>+0x04(SB)/4, $0x3ffff03
DATA poly1305_init_constants_armv6<>+0x08(SB)/4, $0x3ffc0ff
DATA poly1305_init_constants_armv6<>+0x0c(SB)/4, $0x3f03fff
DATA poly1305_init_constants_armv6<>+0x10(SB)/4, $0x00fffff
GLOBL poly1305_init_constants_armv6<>(SB), 8, $20
// Warning: the linker may use R11 to synthesize certain instructions. Please
// take care and verify that no synthetic instructions use it.
@@ -27,7 +27,7 @@ TEXT poly1305_init_ext_armv6<>(SB), NOSPLIT, $0
ADD $4, R13, R8
MOVM.IB [R4-R7], (R8)
MOVM.IA.W (R1), [R2-R5]
MOVW $·poly1305_init_constants_armv6<>(SB), R7
MOVW $poly1305_init_constants_armv6<>(SB), R7
MOVW R2, R8
MOVW R2>>26, R9
MOVW R3>>20, g

Some files were not shown because too many files have changed in this diff.