Compare commits

...

311 Commits

Author SHA1 Message Date
David Chase
d08010f94e [dev.ssa] cmd/compile: PPC64, FP to/from int conversions.
Passes ssa_test.

Requires a few new instructions and some scratchpad
memory to move data between G and F registers.

Also fixed comparisons to be correct in case of NaN.
Added missing instructions for run.bash.
Removed some FP registers that are apparently "reserved"
(but that are apparently also unused, except for a
gratuitous multiplication by two where y = x+x would work
just as well).

Currently failing stack splits.

Updates #16010.

Change-Id: I73b161bfff54445d72bd7b813b1479f89fc72602
Reviewed-on: https://go-review.googlesource.com/26813
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-08-15 14:47:49 +00:00
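
For illustration, a minimal Go snippet that exercises these conversions; on
PPC64 the compiled code moves the bits between G (integer) and F
(floating-point) registers through the scratchpad memory mentioned above:

    package main

    import "fmt"

    func main() {
        i := int64(-42)
        f := float64(i) // int -> FP: bits move from a G register to an F register
        j := int32(f)   // FP -> int: bits move back from F to G
        fmt.Println(f, j)
    }
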
Cherry Zhang
d99cee79b9 [dev.ssa] cmd/compile, etc.: more ARM64 optimizations, and enable SSA by default
Add more ARM64 optimizations:
- use hardware zero register when it is possible.
- use shifted ops.
  The assembler supports shifted ops, but they are not documented,
  nor does it know how to print them. This CL adds both.
- enable fast division.
  This was disabled because it makes the old backend generate slower
  code. But with SSA it generates faster code.

Turn on SSA by default, also adjust tests.

Change-Id: I7794479954c83bb65008dcb457bc1e21d7496da6
Reviewed-on: https://go-review.googlesource.com/26950
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-15 03:37:34 +00:00
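
For illustration, small Go functions of the shape these optimizations target
(the instruction-selection details are assumptions, not taken from the CL):

    // Storing zero can come straight from the hardware zero register (ZR).
    func zero(p *int64) { *p = 0 }

    // The address arithmetic can fold into a shifted op such as ADD R1<<2, R2.
    func index(a []int32, i int) int32 { return a[i] }

    // Benefits from the fast division now enabled under SSA.
    func div(x int64) int64 { return x / 3 }
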
Keith Randall
94c8e59ae1 [dev.ssa] cmd/compile: simplify 386+PIC+globals a bit
We shouldn't issue instructions like MOVL foo(SB), AX directly from the
SSA backend.  Instead we should do LEAL foo(SB), AX; MOVL (AX), AX.

This simplifies obj logic because now only LEAL needs to be treated
specially.  The register allocator uses the LEAL to in effect allocate
the temporary register required for the shared library thunk calls.

Also, the LEALs can now be CSEd.  So code like
    var g int
    func f() { g += 5 }
requires only one thunk call instead of two.

Change-Id: Ib87d465f617f73af437445871d0ea91a630b2355
Reviewed-on: https://go-review.googlesource.com/26814
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-11 20:34:47 +00:00
Keith Randall
8f955d3664 [dev.ssa] cmd/compile: fix fp constant loads for 386+PIC
In position-independent 386 code, loading floating-point constants from
the constant pool requires two steps: materializing the address of
the constant pool entry (requires calling a thunk) and then loading
from that address.

Before this CL, the materializing happened implicitly in CX, which
clobbered that register.

Change-Id: Id094e0fb2d3be211089f299e8f7c89c315de0a87
Reviewed-on: https://go-review.googlesource.com/26811
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-11 19:52:45 +00:00
Cherry Zhang
ed1ad8f56c [dev.ssa] cmd/compile: add some ARM64 optimizations
Mostly mirrors ARM, includes:
- constant folding
- simplification of load, store, extension, and arithmetics
- nilcheck removal

Change-Id: Iffaa5fcdce100fe327429ecab316cb395e543469
Reviewed-on: https://go-review.googlesource.com/26710
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-11 18:08:47 +00:00
Cherry Zhang
748aa84424 [dev.ssa] cmd/internal/obj/arm64: fix encoding constant into some instructions
When a constant can be encoded in a logical instruction (BITCON), do
it this way instead of using the constant pool. The BITCON testing
code runs faster than table lookup (using map):

(on AMD64 machine, with pseudo random input)
BenchmarkIsBitcon-4   	300000000	         4.04 ns/op
BenchmarkTable-4      	50000000	        27.3 ns/op

The equivalent C code of the BITCON test is formally verified with
the CBMC model checker against linear search of the lookup table.

Also handle cases where a constant can be encoded in a MOV instruction.
In this case, materialize the constant into REGTMP without using the
constant pool.

When constants need to be added to the constant pool, make sure to
check whether they fit in 32 bits. If not, store them as 64 bits.

Both legacy and SSA compiler backends are happy with this.

Fixes #16226.

Change-Id: I883e3069dee093a1cdc40853c42221a198a152b0
Reviewed-on: https://go-review.googlesource.com/26631
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-10 20:33:11 +00:00
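
A BITCON immediate is a pattern that repeats with a period of 2, 4, 8, 16,
32, or 64 bits, where the repeating unit is a rotated contiguous run of
ones. A Go sketch of such a test, along the lines of the approach described
(not necessarily the exact code in cmd/internal/obj/arm64):

    // isBitcon reports whether x is encodable as the immediate of an ARM64
    // logical instruction.
    func isBitcon(x uint64) bool {
        if x == 0 || x == ^uint64(0) {
            return false // all-zeros and all-ones are not encodable
        }
        // Determine the period and sign-extend one unit to 64 bits.
        switch {
        case x != x>>32|x<<32:
            // period is 64; use x as is
        case x != x>>16|x<<48:
            x = uint64(int64(int32(x))) // period is 32
        case x != x>>8|x<<56:
            x = uint64(int64(int16(x))) // period is 16
        case x != x>>4|x<<60:
            x = uint64(int64(int8(x))) // period is 8
        default:
            // period is 4 or 2: every such pattern is a rotated run of ones
            return true
        }
        // After sign extension, a rotated run of ones shows up as a plain
        // contiguous run in either x or its complement.
        return sequenceOfOnes(x) || sequenceOfOnes(^x)
    }

    // sequenceOfOnes reports whether x is a single contiguous run of ones.
    func sequenceOfOnes(x uint64) bool {
        y := x & -x // lowest set bit of x
        y += x      // a run of ones plus its lowest bit is a power of two
        return (y-1)&y == 0
    }
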
Keith Randall
c069bc4996 [dev.ssa] cmd/compile: implement GO386=387
Last part of the 386 SSA port.

Modify the x86 backend to simulate SSE registers and
instructions with 387 registers and instructions.
The simulation isn't terribly performant, but it works,
and the old implementation wasn't very performant either.
Optimizing is left to people who care about 387, if they want.

Turn on SSA backend for 386 by default.

Fixes #16358

Change-Id: I678fb59132620b2c47e993c1c10c4c21135f70c0
Reviewed-on: https://go-review.googlesource.com/25271
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-08-10 17:41:01 +00:00
Keith Randall
77ef597f38 [dev.ssa] cmd/compile: more fixes for 386 shared libraries
Use the destination register for materializing the pc
for GOT references also. See https://go-review.googlesource.com/c/25442/
The SSA backend assumes CX does not get clobbered for these instructions.

Mark duffzero as clobbering CX. The linker needs to clobber CX
to materialize the address to call. (This affects the non-shared-library
duffzero also, but hopefully forbidding one register across duffzero
won't be a big deal.)

Hopefully this is all the cases where the linker is clobbering CX
under the hood and SSA assumes it isn't.

Change-Id: I080c938170193df57cd5ce1f2a956b68a34cc886
Reviewed-on: https://go-review.googlesource.com/26611
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
2016-08-10 17:09:38 +00:00
David Chase
ff37d0e681 [dev.ssa] cmd/compile: PPC: FP load/store/const/cmp/neg; div/mod
FP<->int conversions remain.

Updates #16010.

Change-Id: I38d7a4923e34d0a489935fffc4c96c020cafdba2
Reviewed-on: https://go-review.googlesource.com/25589
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-08-09 17:13:43 +00:00
Keith Randall
2cbdd55d64 [dev.ssa] cmd/compile: fix PIC for SSA-generated code
Access to globals requires a 2-instruction sequence on PIC 386.

    MOVL foo(SB), AX

is translated by the obj package into:

    CALL getPCofNextInstructionInTempRegister(SB)
    MOVL (&foo-&thisInstruction)(tmpReg), AX

The call returns the PC of the next instruction in a register.
The next instruction then offsets from that register to get the
address required.  The tricky part is the allocation of the
temp register.  The legacy compiler always used CX, and forbade
the register allocator from allocating CX when in PIC mode.
We can't easily do that in SSA because CX is actually a required
register for shift instructions. (I think the old backend got away
with this because the register allocator never uses CX, only
codegen knows that shifts must use CX.)

Instead, we allow the temp register to be anything.  When the
destination of the MOV (or LEA) is an integer register, we can
use that register.  Otherwise, we make sure to compile the
operation using an LEA to reference the global.  So

    MOVL AX, foo(SB)

is never generated directly.  Instead, SSA generates:

    LEAL foo(SB), DX
    MOVL AX, (DX)

which is then rewritten by the obj package to:

    CALL getPcInDX(SB)
    LEAL (&foo-&thisInstruction)(DX), AX
    MOVL AX, (DX)

So this CL modifies the obj package to use different thunks
to materialize the pc into different registers.  We use the
registers that regalloc chose so that SSA can still allocate
the full set of registers.

Change-Id: Ie095644f7164a026c62e95baf9d18a8bcaed0bba
Reviewed-on: https://go-review.googlesource.com/25442
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-09 15:50:07 +00:00
Keith Randall
69a755b602 [dev.ssa] cmd/compile: port SSA backend to amd64p32
It's not a new backend, just a PtrSize==4 modification
of the existing AMD64 backend.

Change-Id: Icc63521a5cf4ebb379f7430ef3f070894c09afda
Reviewed-on: https://go-review.googlesource.com/25586
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-09 15:48:26 +00:00
Gerrit Code Review
f3b4e78516 Merge "[dev.ssa] Merge commit 'f135c326402aaa757aa96aad283a91873d4ae124' into mergebranch" into dev.ssa
2016-08-08 18:21:58 +00:00
Cherry Zhang
0484052358 [dev.ssa] cmd/compile: remove flags from regMask
Reg allocator skips flag-typed values. Flag allocator uses the type
and whether the op has "clobberFlags" set.

Tested on AMD64, ARM, ARM64, 386. Passed 'toolstash -cmp' on AMD64.
PPC64 is coded blindly.

Change-Id: Ib1cc27efecef6a1bb27f7d7ed035a582660d244f
Reviewed-on: https://go-review.googlesource.com/25480
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-07 03:08:03 +00:00
David Chase
01ae4b1da4 [dev.ssa] cmd/compile: PPC64, load/store by type, shifts, divisions, bools
Updates #16010.

Change-Id: Ie520d64fd1c4f881f45623303ed0b7cbdf0e4764
Reviewed-on: https://go-review.googlesource.com/25493
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-08-06 04:09:01 +00:00
David Chase
dd1d9b36c6 [dev.ssa] cmd/compile: PPC64, add cmp->bool, some shifts, hmul
Includes hmul (all widths), compare for boolean result and
simplifications, and shift operations, plus the changes/additions
needed to implement them (ORN, ADDME, ADDC).

Also fixed a backwards-operand CMP.

Change-Id: Id723c4e25125c38e0d9ab9ec9448176b75f4cdb4
Reviewed-on: https://go-review.googlesource.com/25410
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-08-04 18:17:52 +00:00
Keith Randall
01dbfb81a0 [dev.ssa] Merge commit 'f135c326402aaa757aa96aad283a91873d4ae124' into mergebranch
Pick up shared library fix in dev.ssa.

Change-Id: I5bdd0e9e0f1d6f7c14b518343ee323ed9a894b9c
2016-08-04 10:52:24 -07:00
David Crawshaw
f135c32640 runtime: initialize hash algs before typemap
When compiling with -buildmode=shared, a map[int32]*_type is created for
each extra module mapping duplicate types back to a canonical object.
This is done in the function typelinksinit, which is called before the
init function that sets up the hash functions for the map
implementation. The result is that typemap becomes unusable after
runtime initialization.

The fix in this CL is to move algorithm init before typelinksinit in
the runtime setup process. (For 1.8, we may want to turn typemap into
a sorted slice of types and use binary search.)

Manually tested on GOOS=linux with:

	GOHOSTARCH=386 GOARCH=386 ./make.bash && \
		go install -buildmode=shared std && \
		cd ../test && \
		go run run.go -linkshared

Fixes #16590

Change-Id: Idc08c50cc70d20028276fbf564509d2cd5405210
Reviewed-on: https://go-review.googlesource.com/25469
Run-TryBot: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-08-04 17:39:05 +00:00
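
A sketch of the sorted-slice alternative suggested for 1.8 (the entry layout
and key here are assumptions for illustration):

    import "sort"

    type _type struct{} // stand-in for the runtime type descriptor

    type typeEntry struct {
        off int32 // sort key
        t   *_type
    }

    // lookup uses binary search instead of map hashing, so it would be safe
    // to call before the hash algorithms are initialized.
    func lookup(entries []typeEntry, off int32) *_type {
        i := sort.Search(len(entries), func(i int) bool { return entries[i].off >= off })
        if i < len(entries) && entries[i].off == off {
            return entries[i].t
        }
        return nil
    }
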
Keith Randall
d2286ea284 [dev.ssa] Merge remote-tracking branch 'origin/master' into mergebranch
Semi-regular merge from tip into dev.ssa.

Change-Id: Iadb60e594ef65a99c0e1404b14205fa67c32a9e9
2016-08-04 10:08:20 -07:00
Josh Bleecher Snyder
6a1153acb4 [dev.ssa] cmd/compile: refactor out rulegen value parsing
Previously, genMatch0 and genResult0 contained
lots of duplication: locating the op, parsing
the value, validation, etc.
Parsing and validation were mixed in with code gen.

Extract a helper, parseValue. It is responsible
for parsing the value, locating the op, and doing
shared validation.

As a bonus (and possibly as my original motivation),
make op selection pay attention to the number
of args present.
This allows arch-specific ops to share a name
with generic ops as long as there is no ambiguity.
It also detects and reports unresolved ambiguity,
unlike before, where it would simply always
pick the generic op, with no warning.

Also use parseValue when generating the top-level
op dispatch, to ensure its opinion about ops
matches genMatch0 and genResult0.

The order of statements in the generated code used
to depend on the exact rule. It is now somewhat
independent of the rule. That is the source
of some of the generated code changes in this CL.
See rewritedec64 and rewritegeneric for examples.
It is a one-time change.

The op dispatch switch and functions used to be
sorted by opname without architecture. The sort
now includes the architecture, leading to further
generated code changes.
See rewriteARM and rewriteAMD64 for examples.
Again, it is a one-time change.

There are no functional changes.

Change-Id: I22c989183ad5651741ebdc0566349c5fd6c6b23c
Reviewed-on: https://go-review.googlesource.com/24649
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2016-08-03 22:51:51 +00:00
Chris Broadfoot
50edddb738 VERSION: remove erroneously committed VERSION file
Change-Id: I1134a4758b7e1a7da243c56f12ad9d2200c8ba41
Reviewed-on: https://go-review.googlesource.com/25414
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-08-02 21:31:58 +00:00
Chris Broadfoot
ae68090d00 all: merge master into release-branch.go1.7
Change-Id: I177856ea2bc9943cbde28ca9afa145b6ea5b0942
2016-08-02 14:06:13 -07:00
Brad Fitzpatrick
2da5633eb9 runtime: fix nanotime for macOS Sierra, again.
macOS Sierra beta4 changed the kernel interface for getting time.
DX now optionally points to an address for additional info.
Set it to zero to avoid corrupting memory.

Fixes #16570

Change-Id: I9f537e552682045325cdbb68b7d0b4ddafade14a
Reviewed-on: https://go-review.googlesource.com/25400
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Quentin Smith <quentin@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-02 20:17:50 +00:00
Joe Tsai
6317c213c9 cmd/doc: ensure functions with unexported return values are shown
The commit in golang.org/cl/22354 groups constructor functions under
the type that they construct. However, this caused a minor regression
where functions that had unexported return values were not being printed
at all. Thus, we forgo the grouping logic if the type the constructor falls
under is not going to be printed.

Fixes #16568

Change-Id: Idc14f5d03770282a519dc22187646bda676af612
Reviewed-on: https://go-review.googlesource.com/25369
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
Reviewed-by: Rob Pike <r@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-02 03:24:48 +00:00
Chris Broadfoot
c628d83ec5 go1.7rc4
Change-Id: Icf861dd28bfe29a2e4b90529e53644b43b6f7969
Reviewed-on: https://go-review.googlesource.com/25368
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-02 02:22:22 +00:00
Joe Tsai
f5758739a8 cmd/doc: handle embedded interfaces properly
Changes made:
* Disallow star expressions on interfaces, as this is not possible.
* Show an embedded "error" in an interface as public, similar to
how godoc does it.
* Properly handle selector expressions in both structs and interfaces.
This is possible since a type may refer to something defined in
another package (e.g. io.Reader).

Before:
<<<
$ go doc runtime.Error
type Error interface {

    // RuntimeError is a no-op function but
    // serves to distinguish types that are run time
    // errors from ordinary errors: a type is a
    // run time error if it has a RuntimeError method.
    RuntimeError()
    // Has unexported methods.
}

$ go doc compress/flate Reader
doc: invalid program: unexpected type for embedded field
doc: invalid program: unexpected type for embedded field
type Reader interface {
    io.Reader
    io.ByteReader
}
>>>

After:
<<<
$ go doc runtime.Error
type Error interface {
    error

    // RuntimeError is a no-op function but
    // serves to distinguish types that are run time
    // errors from ordinary errors: a type is a
    // run time error if it has a RuntimeError method.
    RuntimeError()
}

$ go doc compress/flate Reader
type Reader interface {
    io.Reader
    io.ByteReader
}
>>>

Fixes #16567

Change-Id: I272dede971eee9f43173966233eb8810e4a8c907
Reviewed-on: https://go-review.googlesource.com/25365
Reviewed-by: Rob Pike <r@golang.org>
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-02 01:58:14 +00:00
Chris Broadfoot
8ea89ba858 all: merge master into release-branch.go1.7
Change-Id: Ifb9647fa9817ed57aa4835a35a05020aba00a24e
2016-08-01 18:27:16 -07:00
Brad Fitzpatrick
28ee179657 net: prevent cancelation goroutine from adjusting fd timeout after connect
This was previously fixed in https://golang.org/cl/21497 but not enough.

Fixes #16523

Change-Id: I678543a656304c82d654e25e12fb094cd6cc87e8
Reviewed-on: https://go-review.googlesource.com/25330
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-02 00:55:45 +00:00
Brad Fitzpatrick
2629446df0 doc/go1.7.html: mention Server.Serve HTTP/2 behavior change
Fixes #16550
Updates #15908

Change-Id: Ic951080dbc88f96e4c00cdb3ffe24a5c03079efd
Reviewed-on: https://go-review.googlesource.com/25389
Reviewed-by: Chris Broadfoot <cbro@golang.org>
2016-08-02 00:46:47 +00:00
Brad Fitzpatrick
c558a539b5 net/http: update bundled http2
Updates bundled http2 to x/net/http2 rev 28d1bd4f for:

    http2: make Transport work around mod_h2 bug
    https://golang.org/cl/25362

    http2: don't ignore DATA padding in flow control
    https://golang.org/cl/25382

Updates #16519
Updates #16556
Updates #16481

Change-Id: I51f5696e977c91bdb2d80d2d56b8a78e3222da3f
Reviewed-on: https://go-review.googlesource.com/25388
Reviewed-by: Chris Broadfoot <cbro@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-02 00:33:01 +00:00
David Chase
dede2061f3 [dev.ssa] cmd/compile: PPC64, add more zeroing and moves
Passes light testing.
Modified to avoid possible exposure of "exterior" pointers
to GC.

Updates #16010.

Change-Id: I41fced4fa83cefb9542dff8c8dee1a0c48056b3c
Reviewed-on: https://go-review.googlesource.com/25310
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-08-01 18:26:03 +00:00
Cherry Zhang
111d590f86 cmd/compile: fix possible spill of invalid pointer with DUFFZERO on AMD64
The SSA compiler on AMD64 may spill a Duff-adjusted address as a scalar. If
the object is on the stack and the stack moves, the spilled address becomes
invalid.

Making the spill pointer-typed does not work. The Duff-adjusted address
points to the memory before the area to be zeroed and may be invalid.
This may cause stack scanning code panic.

Fix it by doing Duff-adjustment in genValue, so the intermediate value
is not seen by the reg allocator, and will not be spilled.

Add a test to cover both cases. As it depends on allocation, it may
not always be triggered.

Fixes #16515.

Change-Id: Ia81d60204782de7405b7046165ad063384ede0db
Reviewed-on: https://go-review.googlesource.com/25309
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-29 01:09:55 +00:00
Brad Fitzpatrick
be91515907 doc/go1.7.html: add known issues section for FreeBSD crashes
Updates #16396

Change-Id: I7b4f85610e66f2c77c17cf8898cc41d81b2efc8c
Reviewed-on: https://go-review.googlesource.com/25283
Reviewed-by: Chris Broadfoot <cbro@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-28 03:03:42 +00:00
Cherry Zhang
0069240216 [dev.ssa] cmd/compile: fix build for old backend on ARM64
Apparently the old backend needs the NEG instruction to have RegRead set,
even though this instruction does not take a Reg field... I don't think SSA
uses this flag, so just leave it as it was. SSA is still happy.

Fix ARM64 build on https://build.golang.org/?branch=dev.ssa

Change-Id: Ia7e7f2ca217ddae9af314d346af5406bbafb68e8
Reviewed-on: https://go-review.googlesource.com/25302
Reviewed-by: David Chase <drchase@google.com>
2016-07-28 02:14:24 +00:00
Rhys Hiltner
ccca9c9cc0 runtime: reduce GC assist extra credit
Mutator goroutines that allocate memory during the concurrent mark
phase are required to spend some time assisting the garbage
collector. The magnitude of this mandatory assistance is proportional
to the goroutine's allocation debt and subject to the assistance
ratio as calculated by the pacer.

When assisting the garbage collector, a mutator goroutine will go
beyond paying off its allocation debt. It will build up extra credit
to amortize the overhead of the assist.

In fast-allocating applications with high assist ratios, building up
this credit can take the affected goroutine's entire time slice.
Reduce the penalty on each goroutine being selected to assist the GC
in two ways, to spread the responsibility more evenly.

First, do a consistent amount of extra scan work without regard for
the pacer's assistance ratio. Second, reduce the magnitude of the
extra scan work so it can be completed within a few hundred
microseconds.

Commentary on gcOverAssistWork is by Austin Clements, originally in
https://golang.org/cl/24704

Updates #14812
Fixes #16432

Change-Id: I436f899e778c20daa314f3e9f0e2a1bbd53b43e1
Reviewed-on: https://go-review.googlesource.com/25155
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Chris Broadfoot <cbro@golang.org>
2016-07-27 18:56:04 +00:00
Cherry Zhang
114c05962c [dev.ssa] cmd/compile: fix possible invalid pointer spill in large Zero/Move on ARM
Instead of comparing against the address of the end of the memory to
zero/copy, compare against the address of the last element, which is a
valid pointer.
Also unify large and unaligned Zero/Move, by passing alignment as AuxInt.

Fixes #16515 for ARM.

Change-Id: I19a62b31c5acf5c55c16a89bea1039c926dc91e5
Reviewed-on: https://go-review.googlesource.com/25300
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-27 18:00:19 +00:00
Cherry Zhang
83208504fe [dev.ssa] cmd/compile: add more on ARM64 SSA
Support the following:
- Shifts. ARM64 machine instructions only use the lowest 6 bits of the
  shift count (i.e. mod 64). Use a conditional selection instruction to
  ensure Go semantics.
- Zero/Move. Alignment is ensured.
- Hmul, Avg64u, Sqrt.
- reserve R18 (platform register in ARM64 ABI) and R29 (frame pointer
  in ARM64 ABI).

Everything compiles, all.bash passed (with non-SSA test disabled).

Change-Id: Ia8ed58dae5cbc001946f0b889357b258655078b1
Reviewed-on: https://go-review.googlesource.com/25290
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-27 16:37:23 +00:00
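
On the shift point: Go requires an unsigned shift by 64 or more to produce
0, while ARM64 shift instructions use only the low 6 bits of the count, so
the backend reconciles the two with a conditional select. Conceptually:

    // Go semantics: the result must be 0 when s >= 64. A raw LSL alone
    // would compute x << (s & 63), so the backend emits, in effect:
    //     r = x LSL (s & 63)
    //     r = (s < 64) ? r : 0    // CSEL
    func shl(x uint64, s uint) uint64 {
        return x << s
    }
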
Brad Fitzpatrick
c80e0d374b net/http: fix data race with concurrent use of Server.Serve
Fixes #16505

Change-Id: I0afabcc8b1be3a5dbee59946b0c44d4c00a28d71
Reviewed-on: https://go-review.googlesource.com/25280
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Chris Broadfoot <cbro@golang.org>
2016-07-27 05:43:36 +00:00
Brad Fitzpatrick
4a15508c66 crypto/x509: detect OS X version for FetchPEMRoots at run time
https://golang.org/cl/25233 was detecting the OS X release at compile
time, not run time. Detect it at run time instead.

Fixes #16473 (again)

Change-Id: I6bec4996e57aa50c52599c165aa6f1fae7423fa7
Reviewed-on: https://go-review.googlesource.com/25281
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Chris Broadfoot <cbro@golang.org>
2016-07-26 23:16:15 +00:00
Brad Fitzpatrick
66b47431cb net/http: update bundled http2
Updates x/net/http2 to git rev 6a513af for:

  http2: return flow control for closed streams
  https://golang.org/cl/25231

  http2: make Transport prefer HTTP response header recv before body write error
  https://golang.org/cl/24984

  http2: make Transport treat "Connection: close" the same as Request.Close
  https://golang.org/cl/24982

Fixes golang/go#16481

Change-Id: Iaddb166387ca2df1cfbbf09a166f8605578bec49
Reviewed-on: https://go-review.googlesource.com/25282
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-26 23:04:15 +00:00
Austin Clements
b11fff3886 runtime/pprof: document use of pprof package
Currently the pprof package gives almost no guidance for how to use it
and, despite the standard boilerplate used to create CPU and memory
profiles, this boilerplate appears nowhere in the pprof documentation.

Update the pprof package documentation to give the standard
boilerplate in a form people can copy, paste, and tweak. This
boilerplate is based on rsc's 2011 blog post on profiling Go programs
at https://blog.golang.org/profiling-go-programs, which is where I
always go when I need to copy-paste the boilerplate.

Change-Id: I74021e494ea4dcc6b56d6fb5e59829ad4bb7b0be
Reviewed-on: https://go-review.googlesource.com/25182
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-07-26 22:16:55 +00:00
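
The CPU-profiling boilerplate in question, in the form it has conventionally
circulated since that blog post:

    package main

    import (
        "flag"
        "log"
        "os"
        "runtime/pprof"
    )

    var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to file")

    func main() {
        flag.Parse()
        if *cpuprofile != "" {
            f, err := os.Create(*cpuprofile)
            if err != nil {
                log.Fatal(err)
            }
            if err := pprof.StartCPUProfile(f); err != nil {
                log.Fatal(err)
            }
            defer pprof.StopCPUProfile()
        }
        // ... the work to be profiled ...
    }
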
Brad Fitzpatrick
ff60da6962 crypto/x509: use Go 1.6 implementation for FetchPEMRoots for OS X 10.8
Conservative fix for the OS X 10.8 crash. We can unify them back together
during the Go 1.8 dev cycle.

Fixes #16473

Change-Id: If07228deb2be36093dd324b3b3bcb31c23a95035
Reviewed-on: https://go-review.googlesource.com/25233
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-26 21:18:26 +00:00
David Chase
2d16e43158 [dev.ssa] cmd/compile: PPC64, basic support for all calls and "miscellaneous"
Added support for ClosureCall, DeferCall, InterCall
(GoCall not yet tested).

Added support for GetClosurePtr, IsNonNil, IsInBounds, IsSliceInBounds, NilCheck
(Convert and GetG not yet tested)

Still need to implement NilCheck optimizations.
Fixed moving boolean constants, and the order of operands to subtract.

Updates #16010.

Change-Id: Ibe0f6a6e688df4396cd77de0e9095997e4ca8ed2
Reviewed-on: https://go-review.googlesource.com/25241
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-26 20:59:26 +00:00
Jack Lindamood
8876061149 context: add test for WithDeadline in the past
Adds a test case for calling context.WithDeadline() where the deadline
exists in the past.  This change increases the code coverage of the
context package.

Change-Id: Ib486bf6157e779fafd9dab2b7364cdb5a06be36e
Reviewed-on: https://go-review.googlesource.com/25007
Reviewed-by: Sameer Ajmani <sameer@golang.org>
Run-TryBot: Sameer Ajmani <sameer@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-26 14:53:38 +00:00
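
The behavior under test: a deadline in the past yields a context that is
already expired. An assumed test shape (imports context, testing, time; not
the actual test added by this CL):

    func TestDeadlineInPast(t *testing.T) {
        ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Minute))
        defer cancel()

        <-ctx.Done() // the deadline has already passed, so Done is closed
        if err := ctx.Err(); err != context.DeadlineExceeded {
            t.Fatalf("err = %v, want DeadlineExceeded", err)
        }
    }
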
Brad Fitzpatrick
ea2376fcea net/http: make Transport.RoundTrip return raw Conn.Read error on peek failure
From at least Go 1.4 to Go 1.6, Transport.RoundTrip would return the
error value from net.Conn.Read directly when the initial Read (1 byte
Peek) failed while reading the HTTP response, if a request was
outstanding. While never a documented or tested promise, Go 1.7 changed the
behavior (starting at https://golang.org/cl/23160).

This restores the old behavior and adds a test (but no documentation
promises yet) while keeping the fix for spammy logging reported in #15446.

This looks larger than it is: it just changes errServerClosedConn from
a variable to a type, where the type preserves the underlying
net.Conn.Read error, for unwrapping later in Transport.RoundTrip.

Fixes #16465

Change-Id: I6fa018991221e93c0cfe3e4129cb168fbd98bd27
Reviewed-on: https://go-review.googlesource.com/25153
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-26 05:28:06 +00:00
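
A simplified sketch of the variable-to-type change described above (the type
name comes from the commit message; the method body is an assumption):

    // Before: a plain sentinel value, which discards the net.Conn.Read error.
    //     var errServerClosedConn = errors.New("http: server closed connection")

    // After: a type that preserves the raw Read error so that
    // Transport.RoundTrip can unwrap and return it.
    type errServerClosedConn struct{ err error }

    func (e errServerClosedConn) Error() string {
        return "http: server closed connection: " + e.err.Error()
    }
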
Michael Munday
67f799c42c doc: add s390x information to asm.html
Fixes #16362

Change-Id: I676718a1149ed2f3ff80cb031e25de7043805399
Reviewed-on: https://go-review.googlesource.com/25157
Reviewed-by: Rob Pike <r@golang.org>
2016-07-26 00:18:42 +00:00
Joe Tsai
d0256118de compress/flate: document HuffmanOnly
Fixes #16489

Change-Id: I13e2ed6de59102f977566de637d8d09b4e541980
Reviewed-on: https://go-review.googlesource.com/25200
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-25 23:20:40 +00:00
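
Typical use of the now-documented mode; HuffmanOnly skips Lempel-Ziv match
searching and performs only Huffman entropy coding, trading compression
ratio for speed (imports bytes and compress/flate):

    func huffmanOnly(data []byte) []byte {
        var buf bytes.Buffer
        w, _ := flate.NewWriter(&buf, flate.HuffmanOnly) // error is only possible for an invalid level
        w.Write(data)
        w.Close()
        return buf.Bytes()
    }
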
David Chase
806cacc7c6 [dev.ssa] cmd/compile: replace storeconst w/ storezero, fold addressing
Because PPC lacks store-immediate, remove the instruction
that implies that it exists.  Replace it with storezero for
the special case of storing zero, because R0 is reserved as zero
for Go (though the assembler knows this, do it in SSA).

Also added address folding for storezero.
(Now corrected to use right-sized stores in bulk-zero code.)

Hello.go now compiles to
genssa main
    00000 (...hello.go:7) TEXT "".main(SB), $0
    00001 (...hello.go:7) FUNCDATA $0, "".gcargs·0(SB)
    00002 (...hello.go:7) FUNCDATA $1, "".gclocals·1(SB)
v23 00003 (...hello.go:8) MOVD $go.string."Hello, World!\n"(SB), R3
v11 00004 (...hello.go:8) MOVD R3, 32(R1)
v22 00005 (...hello.go:8) MOVD $14, R3
v6  00006 (...hello.go:8) MOVD R3, 40(R1)
v20 00007 (...hello.go:8) MOVD R0, 48(R1)
v18 00008 (...hello.go:8) MOVD R0, 56(R1)
v9  00009 (...hello.go:8) MOVD R0, 64(R1)
v10 00010 (...hello.go:8) CALL fmt.Printf(SB)
b2  00011 (...hello.go:9) RET
    00012 (<unknown line number>) END

Updates #16010

Change-Id: I33cfd98c21a1617502260ac753fa8cad68c8d85a
Reviewed-on: https://go-review.googlesource.com/25151
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-24 20:00:30 +00:00
Cherry Zhang
ae9570a5b9 [dev.ssa] cmd/compile: initial ARM64 SSA port
Mostly copied from ARM port, with instruction names and Prog fields
adjusted, and 64-bit int ops added. Not complete.

Fib compiles and runs correctly.

Change-Id: Id3ecb0d4b571200a035344b3e8e4408769f76221
Reviewed-on: https://go-review.googlesource.com/25130
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-23 21:25:58 +00:00
Brad Fitzpatrick
10538a8f9e net/http: fix potential for-select spin with closed Context.Done channel
Noticed when investigating a separate issue.

No external bug report or repro yet.

Change-Id: I8a1641a43163f22b09accd3beb25dd9e2a68a238
Reviewed-on: https://go-review.googlesource.com/25152
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-22 22:23:14 +00:00
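
The hazard in its general form: once ctx.Done() is closed, a select with a
default case is always immediately ready, and the loop burns CPU unless the
Done case exits:

    func poll(ctx context.Context) error {
        for {
            select {
            case <-ctx.Done():
                return ctx.Err() // must exit here, or the loop spins forever
            default:
                // do one unit of polling work
            }
        }
    }
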
David Chase
7bca2c599d [dev.ssa] cmd/compile: some improvements to PPC codegen
Runs fibonacci for all integer types.
Fold addressing arithmetic into stores.

Updates #16010.

Change-Id: I257982c82c00c80b00679757c3da345045968022
Reviewed-on: https://go-review.googlesource.com/25103
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: David Chase <drchase@google.com>
2016-07-22 15:52:06 +00:00
Chris Broadfoot
8707f31c0a go1.7rc3
Change-Id: Iaef13003979c68926c260c415d6074a50ae137b2
Reviewed-on: https://go-review.googlesource.com/25142
Run-TryBot: Chris Broadfoot <cbro@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-21 21:19:33 +00:00
Keith Randall
df2f813bd2 [dev.ssa] cmd/compile: 386 port now works
GOARCH=386 SSATEST=1 ./all.bash passes

Caveat: still needs changes to test/ files to use *_ssa.go versions.  I
won't check those changes in with this CL because the builders will
complain as they don't have SSATEST=1.

Mostly minor fixes.

Implement float <-> uint32 in assembly.  It seems the simplest option
for now.

GO386=387 does not work.  That's why I can't make SSA the default for
386 yet.

Change-Id: Ic4d4402104d32bcfb1fd612f5bb6539f9acb8ae0
Reviewed-on: https://go-review.googlesource.com/25119
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-07-21 20:41:18 +00:00
Chris Broadfoot
16a2af03f1 all: merge master into release-branch.go1.7
Change-Id: I2511c3f7583887b641c9b3694aae54789fbc5342
2016-07-21 12:38:21 -07:00
Brad Fitzpatrick
243d51f05e misc/trace: disable trace resolution warning
It was removed in upstream Chrome https://codereview.chromium.org/2016863004

Rather than update to the latest version, make the minimal change for Go 1.7 and
change the "showToUser" boolean from true to false.

Tested by hand that it goes away after this change.

Updates #16247

Change-Id: I051f49da878e554b1a34a88e9abc70ab50e18780
Reviewed-on: https://go-review.googlesource.com/25117
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-21 18:17:31 +00:00
Cherry Zhang
d8181d5d75 [dev.ssa] cmd/compile: simplify MOVWreg on ARM
For register-register move, if there is only one use, allocate it in
the same register so we don't need to emit an instruction.

Updates #15365.

Change-Id: Iad41843854a506c521d577ad93fcbe73e8de8065
Reviewed-on: https://go-review.googlesource.com/25059
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-21 16:46:58 +00:00
David Chase
846bc6c5ab cmd/compile: change phi location to be optimistic at backedges
This is:

(1) a simple trick that cuts the number of phi-nodes
(temporarily) inserted into the ssa representation by a factor
of 10, and can cut the user time to compile tricky inputs like
gogo/protobuf tests from 13 user minutes to 9.5, and memory
allocation from 3.4GB to 2.4GB.

(2) a fix to sparse lookup, that does not rely on
an assumption proven false by at least one pathological
input "etldlen".

These two changes fix unrelated compiler performance bugs,
both necessary to obtain good performance compiling etldlen.
Without them it takes 20 minutes or longer, with them it
completes in 2 minutes, without a gigantic memory footprint.

Updates #16407

Change-Id: Iaa8aaa8c706858b3d49de1c4865a7fd79e6f4ff7
Reviewed-on: https://go-review.googlesource.com/23136
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-21 16:09:45 +00:00
Keith Randall
305a0ac123 cmd/compile: move phi args which are constants closer to the phi
entry:
   x = MOVQconst [7]
   ...
b1:
   goto b2
b2:
   v = Phi(x, y, z)

Transform that program to:

entry:
   ...
b1:
   x = MOVQconst [7]
   goto b2
b2:
   v = Phi(x, y, z)

This CL moves constant-generating instructions used by a phi to the
appropriate immediate predecessor of the phi's block.

We used to put all constants in the entry block.  Unfortunately, in
large functions we have lots of constants at the start of the
function, all of which are used by lots of phis throughout the
function.  This leads to the constants being live through most of the
function (especially if there is an outer loop).  That's an O(n^2)
problem.

Note that most of the non-phi uses of constants have already been
folded into instructions (ADDQconst, MOVQstoreconst, etc.).

This CL may be generally useful for other instances of compiler
slowness, I'll have to check.  It may cause some programs to run
slower, but probably not by much, as rematerializeable values like
these constants are allocated late (not at their originally scheduled
location) anyway.

This CL is definitely a minimal change that can be considered for 1.7.
We probably want to do a better job in the tighten pass generally, not
just for phi args.  Leaving that for 1.8.

Update #16407

Change-Id: If112a8883b4ef172b2f37dea13e44bda9346c342
Reviewed-on: https://go-review.googlesource.com/25046
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-07-21 15:17:23 +00:00
Ian Lance Taylor
ff227b8a56 runtime: add explicit INT $3 at end of Darwin amd64 sigtramp
The omission of this instruction could confuse the traceback code if a
SIGPROF occurred during a signal handler.  The traceback code would
trace up to sigtramp, but would then get confused because it would see a
PC address that did not appear to be in the function.

Fixes #16453.

Change-Id: I2b3d53e0b272fb01d9c2cb8add22bad879d3eebc
Reviewed-on: https://go-review.googlesource.com/25104
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-07-21 01:04:22 +00:00
Austin Clements
f407ca9288 runtime: support smaller physical pages than PhysPageSize
Most operations need an upper bound on the physical page size, which
is what sys.PhysPageSize is for (this is checked at runtime init on
Linux). However, a few operations need a *lower* bound on the physical
page size. Introduce a "minPhysPageSize" constant to act as this lower
bound and use it where it makes sense:

1) In addrspace_free, we have to query each page in the given range.
   Currently we increment by the upper bound on the physical page
   size, which means we may skip over pages if the true size is
   smaller. Worse, we currently pass a result buffer that only has
   enough room for one page. If there are actually multiple pages in
   the range passed to mincore, the kernel will overflow this buffer.
   Fix these problems by incrementing by the lower-bound on the
   physical page size and by passing "1" for the length, which the
   kernel will round up to the true physical page size.

2) In the write barrier, the bad pointer check tests for pointers to
   the first physical page, which are presumably small integers
   masquerading as pointers. However, if physical pages are smaller
   than we think, we may have legitimate pointers below
   sys.PhysPageSize. Hence, use minPhysPageSize for this test since
   pointers should never fall below that.

In particular, this applies to ARM64 and MIPS. The runtime is
configured to use 64kB pages on ARM64, but by default Linux uses 4kB
pages. Similarly, the runtime assumes 16kB pages on MIPS, but both 4kB
and 16kB kernel configurations are common. This also applies to ARM on
systems where the runtime is recompiled to deal with a larger page
size. It is also a step toward making the runtime use only a
dynamically-queried page size.

Change-Id: I1fdfd18f6e7cbca170cc100354b9faa22fde8a69
Reviewed-on: https://go-review.googlesource.com/25020
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Austin Clements <austin@google.com>
2016-07-20 18:28:43 +00:00
Cherry Zhang
7b9873b9b9 [dev.ssa] cmd/internal/obj, etc.: add and use NEGF, NEGD instructions on ARM
Updates #15365.

Change-Id: I372a5617c2c7d91de545cac0464809b96711b63a
Reviewed-on: https://go-review.googlesource.com/24646
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
2016-07-20 18:15:37 +00:00
Dmitry Vyukov
d73ca5f4d8 runtime/race: fix memory leak
The leak was reported internally on a server canary that runs for days.
After one day the server consumes 5.6GB; after 6 days, 12.2GB.
The leak is exposed by the added benchmark.
The leak is fixed upstream in:
http://llvm.org/viewvc/llvm-project/compiler-rt/trunk/lib/tsan/rtl/tsan_rtl_thread.cc?view=diff&r1=276102&r2=276103&pathrev=276103

Fixes #16441

Change-Id: I9d4f0adef48ca6cf2cd781b9a6990ad4661ba49b
Reviewed-on: https://go-review.googlesource.com/25091
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
2016-07-20 14:17:44 +00:00
Ian Lance Taylor
50048a4e8e runtime: add as many extra M's as needed
When a non-Go thread calls into Go, the runtime needs an M to run the Go
code. The runtime keeps a list of extra M's available. When the last
extra M is allocated, the needextram field is set to tell it to allocate
a new extra M as soon as it is running in Go. This ensures that an extra
M will always be available for the next thread.

However, if many threads need an extra M at the same time, this
serializes them all. One thread will get an extra M with the needextram
field set. All the other threads will see that there is no M available
and will go to sleep. The one thread that succeeded will create a new
extra M. One lucky thread will get it. All the other threads will see
that there is no M available and will go to sleep. The effect is
thundering herd, as all the threads looking for an extra M go through
the process one by one. This seems to have a particularly bad effect on
the FreeBSD scheduler for some reason.

With this change, we track the number of threads waiting for an M, and
create all of them as soon as one thread gets through. This still means
that all the threads will fight for the lock to pick up the next M. But
at least each thread that gets the lock will succeed, instead of going
to sleep only to fight again.

This smooths out the performance greatly on FreeBSD, reducing the
average wall time of `testprogcgo CgoCallbackGC` by 74%.  On GNU/Linux
the average wall time goes down by 9%.

Fixes #13926
Fixes #16396

Change-Id: I6dc42a4156085a7ed4e5334c60b39db8f8ef8fea
Reviewed-on: https://go-review.googlesource.com/25047
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2016-07-20 13:31:55 +00:00
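
A sketch of the batching idea (names mirror the commit's description;
oneNewExtraM stands in for the allocation step, and the real code lives in
the runtime scheduler):

    var extraMWaiters uint32 // incremented by threads that found no extra M

    func newextram() {
        // Create enough Ms for every waiter in one batch, instead of one M
        // per wake/sleep round trip.
        if n := atomic.SwapUint32(&extraMWaiters, 0); n > 0 {
            for i := uint32(0); i < n; i++ {
                oneNewExtraM()
            }
        } else {
            oneNewExtraM() // keep at least one extra M in reserve
        }
    }
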
Brad Fitzpatrick
883e128f45 net/smtp: document that the smtp package is frozen
This copies the frozen wording from the log/syslog package.

Fixes #16436

Change-Id: If5d478023328925299399f228d8aaf7fb117c1b4
Reviewed-on: https://go-review.googlesource.com/25080
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-20 05:08:37 +00:00
Keith Randall
4a33af6bb6 [dev.ssa] cmd/compile: more 386 port changes
Fix up zero/move code, including duff calls and rep movs.

Handle the new ops generated by dec64.rules.

Fix constant shifts.

Change-Id: I7d89194b29b04311bfafa0fd93b9f5644af04df9
Reviewed-on: https://go-review.googlesource.com/25033
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-19 15:16:23 +00:00
Keith Randall
1b0404c4ca [dev.ssa] cmd/compile: fix verbose typing of DIV
Use Cherry's awesome pair type constructor.

Change-Id: I282156a570ee4dd3548bd82fbf15b8d8eb5bedf6
Reviewed-on: https://go-review.googlesource.com/25009
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-07-18 21:13:15 +00:00
Austin Clements
1d2ca9e30c doc/go1.7.html: start sentence on a new line
Change-Id: Ia1c2ebcd2ccf7b98d89b378633bf4fc435d2364d
Reviewed-on: https://go-review.googlesource.com/25019
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-18 21:07:34 +00:00
Austin Clements
3ad586155b doc/go1.7.html: avoid term of art
Rather than saying "stop-the-world", say "garbage collection pauses".

Change-Id: Ifb2931781ab3094e04bea93f01f18f1acb889bdc
Reviewed-on: https://go-review.googlesource.com/25018
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2016-07-18 21:07:30 +00:00
Keith Randall
aee8d8b9dd [dev.ssa] cmd/compile: implement more 64-bit ops on 386
add/sub/mul, plus constant input variants.

Change-Id: I1c8006727c4fdf73558da0e646e7d1fa130ed773
Reviewed-on: https://go-review.googlesource.com/25006
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-07-18 19:52:28 +00:00
Keith Randall
cf92e3845f [dev.ssa] cmd/compile: use 2-result divide op
We now allow Values to have 2 outputs.  Use that ability for amd64.
This allows x,y := a/b,a%b to use just a single divide instruction.

Update #6815

Change-Id: Id70bcd20188a2dd8445e631a11d11f60991921e4
Reviewed-on: https://go-review.googlesource.com/25004
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
2016-07-18 19:41:05 +00:00
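
The effect at the Go level:

    // With a 2-result divide op, both results come out of a single hardware
    // divide: on amd64 one DIVQ leaves the quotient in AX and the remainder
    // in DX.
    func divmod(a, b int64) (q, r int64) {
        return a / b, a % b
    }
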
Keith Randall
25e0a367da [dev.ssa] cmd/compile: clean up tuple types and selects
Make tuple types and their SelectX ops fully generic.
These ops no longer need to be lowered.
Regalloc understands them and their tuple-generating arguments.
We can now have opcodes returning arbitrary pairs of results.
(And it would be easy to move to >2 results if needed.)

Update arm implementation to the new standard.
Implement just enough in 386 port to do 64-bit add.

Change-Id: I370ed5aacce219c82e1954c61d1f63af76c16f79
Reviewed-on: https://go-review.googlesource.com/24976
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-18 16:11:36 +00:00
Ian Lance Taylor
d66cbec37a doc/go1.7.html: the 1.6.3 release supports Sierra
Updates #16354
Updates #16272

Change-Id: I73e8df40621a0a17a1990f3b10ea996f4fa738aa
Reviewed-on: https://go-review.googlesource.com/25014
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-18 15:52:45 +00:00
Chris Broadfoot
0ebf6ce087 [release-branch.go1.7] go1.7rc2
Change-Id: I5473071f672f8352fbd3352e158d5be12823b58a
Reviewed-on: https://go-review.googlesource.com/25017
Run-TryBot: Chris Broadfoot <cbro@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-18 15:36:41 +00:00
Chris Broadfoot
b3b0b7a182 doc: document go1.6.3
Change-Id: Ib33d7fb529aafcaf8ca7d43b2c9480f30d5c28cc
Reviewed-on: https://go-review.googlesource.com/25011
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-18 15:13:30 +00:00
Brad Fitzpatrick
cad4e97af8 [release-branch.go1.7] net/http, net/http/cgi: fix for CGI + HTTP_PROXY security issue
Because,

* The CGI spec defines that incoming request header "Foo: Bar" maps to
  environment variable HTTP_FOO == "Bar". (see RFC 3875 4.1.18)

* The HTTP_PROXY environment variable is conventionally used to configure
  the HTTP proxy for HTTP clients (and is respected by default for
  Go's net/http.Client and Transport)

That means Go programs running in a CGI environment (as a child
process under a CGI host) are vulnerable to an incoming request
containing "Proxy: attacker.com:1234", setting HTTP_PROXY, and
changing where Go by default proxies all outbound HTTP requests.

This is CVE-2016-5386, aka https://httpoxy.org/

Fixes #16405

Change-Id: I6f68ade85421b4807785799f6d98a8b077e871f0
Reviewed-on: https://go-review.googlesource.com/25010
Run-TryBot: Chris Broadfoot <cbro@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Chris Broadfoot <cbro@golang.org>
Reviewed-on: https://go-review.googlesource.com/25013
2016-07-18 15:13:06 +00:00
Brad Fitzpatrick
b97df54c31 net/http, net/http/cgi: fix for CGI + HTTP_PROXY security issue
Because,

* The CGI spec defines that incoming request header "Foo: Bar" maps to
  environment variable HTTP_FOO == "Bar". (see RFC 3875 4.1.18)

* The HTTP_PROXY environment variable is conventionally used to configure
  the HTTP proxy for HTTP clients (and is respected by default for
  Go's net/http.Client and Transport)

That means Go programs running in a CGI environment (as a child
process under a CGI host) are vulnerable to an incoming request
containing "Proxy: attacker.com:1234", setting HTTP_PROXY, and
changing where Go by default proxies all outbound HTTP requests.

This is CVE-2016-5386, aka https://httpoxy.org/

Fixes #16405

Change-Id: I6f68ade85421b4807785799f6d98a8b077e871f0
Reviewed-on: https://go-review.googlesource.com/25010
Run-TryBot: Chris Broadfoot <cbro@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Chris Broadfoot <cbro@golang.org>
2016-07-18 14:58:26 +00:00
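
The header-to-environment mapping at the heart of the issue, per RFC 3875
4.1.18 (an illustrative helper, not code from the CL):

    // cgiEnvName converts an incoming request header name to its CGI
    // meta-variable: "Proxy" becomes "HTTP_PROXY", which HTTP client
    // libraries conventionally read as the outbound proxy setting.
    func cgiEnvName(header string) string {
        return "HTTP_" + strings.ToUpper(strings.Replace(header, "-", "_", -1))
    }
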
Austin Clements
2837c39552 doc/go1.7.html: mention specific runtime improvements
Most of the runtime improvements are hard to quantify or summarize,
but it's worth mentioning some of the substantial improvements in STW
time, and that the scavenger now actually works on ARM64, PPC64, and
MIPS.

Change-Id: I0e951038516378cc3f95b364716ef1c183f3445a
Reviewed-on: https://go-review.googlesource.com/24966
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-18 14:10:42 +00:00
Brad Fitzpatrick
a6dbfc12c6 net: demote TestDialerDualStack to a flaky test
Only run TestDialerDualStack on the builders, so as not to annoy or
otherwise distract users when it's not their fault.

Even though the intention is to only run this on the builders, very
few of the builders have IPv6 support. Oh well. We'll get some
coverage.

Updates #13324

Change-Id: I13e7e3bca77ac990d290cabec88984cc3d24fb67
Reviewed-on: https://go-review.googlesource.com/24985
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
2016-07-17 01:25:19 +00:00
Joe Tsai
510fb6397d fmt: properly handle early io.EOF Reads in readRune.readByte
Change https://golang.org/cl/19895 caused a regression
where the last character in a string would be dropped if it was
accompanied by an io.EOF.

This change fixes the logic so that the last byte is still returned
without a problem.

Fixes #16393

Change-Id: I7a4d0abf761c2c15454136a79e065fe002d736ea
Reviewed-on: https://go-review.googlesource.com/24981
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-16 19:14:58 +00:00
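
The underlying io.Reader contract: a Read may return n > 0 together with
err == io.EOF, and the caller must consume those final bytes before treating
EOF as terminal. A sketch (process is a stand-in):

    func readAll(r io.Reader, process func([]byte)) error {
        buf := make([]byte, 4096)
        for {
            n, err := r.Read(buf)
            if n > 0 {
                process(buf[:n]) // the regression dropped these last bytes
            }
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
        }
    }
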
Cherry Zhang
6b6de15d32 [dev.ssa] cmd/compile: support NaCl in SSA for ARM
NaCl code runs in a sandbox, and there are restrictions on its
instruction use
(https://developer.chrome.com/native-client/reference/sandbox_internals/arm-32-bit-sandbox).

Like the legacy backend, on NaCl,
- don't use R9, which is used as NaCl's "thread pointer".
- don't use Duff's device.
- don't use indexed load/stores.
- the assembler rewrites DIV/MOD to runtime calls, which on NaCl
  clobbers R12, so R12 is marked as clobbered for DIV/MOD.
- other restrictions are satisfied by the assembler.

Enable SSA specific tests on nacl/arm, and disable non-SSA ones.

Updates #15365.

Change-Id: I9262693ec6756b89ca29d3ae4e52a96fe5403b02
Reviewed-on: https://go-review.googlesource.com/24859
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-07-16 03:13:45 +00:00
Cherry Zhang
7d70f84f54 [dev.ssa] cmd/compile: add floating point optimizations in SSA for ARM
Add some simplification rules for floating point ops.

cmd/internal/obj/arm supports instructions that compare an FP register
to 0, but the runtime softfloat simulator does not. This CL adds these
instructions to the softfloat simulator as well.

Updates #15365.

Change-Id: I29405b2bfcb4c8cf106cb7a1a811409fec91b170
Reviewed-on: https://go-review.googlesource.com/24790
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-16 03:13:22 +00:00
Ian Lance Taylor
2b6eb27651 doc/go1.7.html: remove erroneous note about ppc64 and power8
We decided that ppc64 should maintain power5 compatibility.
ppc64le requires power8.

Fixes #16372.

Change-Id: If5b309a0563f55a3c1fe9c853d29a463f5b71101
Reviewed-on: https://go-review.googlesource.com/24915
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-15 21:20:49 +00:00
Cherry Zhang
6adb97bde7 [dev.ssa] cmd/compile: fix argument size of runtime call in SSA for ARM
The argument size for runtime calls incorrectly includes the size
of LR (FixedFrameSize in general). This sometimes makes the stack frame
unnecessarily 4 bytes larger on ARM.
For example,
	func f(b []byte) byte { return b[0] }
compiles to
	0x0000 00000 (h.go:6)	TEXT	"".f(SB), $4-16 // <-- framesize = 4
	0x0000 00000 (h.go:6)	MOVW	8(g), R1
	0x0004 00004 (h.go:6)	CMP	R1, R13
	0x0008 00008 (h.go:6)	BLS	52
	0x000c 00012 (h.go:6)	MOVW.W	R14, -8(R13)
	0x0010 00016 (h.go:6)	FUNCDATA	$0, gclocals·8355ad952265fec823c17fcf739bd009(SB)
	0x0010 00016 (h.go:6)	FUNCDATA	$1, gclocals·69c1753bd5f81501d95132d08af04464(SB)
	0x0010 00016 (h.go:6)	MOVW	"".b+4(FP), R0
	0x0014 00020 (h.go:6)	CMP	$0, R0
	0x0018 00024 (h.go:6)	BLS	44
	0x001c 00028 (h.go:6)	MOVW	"".b(FP), R0
	0x0020 00032 (h.go:6)	MOVBU	(R0), R0
	0x0024 00036 (h.go:6)	MOVB	R0, "".~r1+12(FP)
	0x0028 00040 (h.go:6)	MOVW.P	8(R13), R15
	0x002c 00044 (h.go:6)	PCDATA	$0, $1
	0x002c 00044 (h.go:6)	CALL	runtime.panicindex(SB)
	0x0030 00048 (h.go:6)	UNDEF
	0x0034 00052 (h.go:6)	NOP
	0x0034 00052 (h.go:6)	MOVW	R14, R3
	0x0038 00056 (h.go:6)	CALL	runtime.morestack_noctxt(SB)
	0x003c 00060 (h.go:6)	JMP	0

Note that the frame size is 4, but there are actually no locals. The
compiler incorrectly thinks the call to runtime.panicindex needs 4 bytes
of space for arguments.

This CL fixes it.

Updates #15365.

Change-Id: Ic65d55283a6aa8a7861d7a3fbc7b63c35785eeec
Reviewed-on: https://go-review.googlesource.com/24909
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-15 18:20:29 +00:00
Cherry Zhang
7bd88a651d [dev.ssa] cmd/compile: don't sink spills that satisfy merge edges in SSA
If a spill is used to satisfy a merge edge (in shuffle), don't sink
it out of the loop.

This is found in the following code (on ARM) where there is a stack
Phi (v268) inside a loop (b36 -> ... -> b47 -> b38 -> b36).

(before shuffle)
  b36: <- b34 b38
    ...
    v268 = Phi <int> v410 v360 : autotmp_198[int]
    ...
    ... -> b47
  b47: <- b44
    ...
    v360 = ... : R6
    v230 = StoreReg <int> v360 : autotmp_198[int]
    v261 = CMPconst <flags> [0] v360
    EQ v261 -> b49 b38 (unlikely)
  b38: <- b47
    ...
    Plain -> b36

During shuffle, v230 (as the spill of v360) is found to satisfy v268, but
its use in shuffle was not recorded, and v230 is sunk out of the loop
(to b49), which leads to a bad value in v268.

This seems to have never happened on AMD64 (in make.bash) until 4
registers were removed.

Change-Id: I01dfc28ae461e853b36977c58bcfc0669e556660
Reviewed-on: https://go-review.googlesource.com/24858
Reviewed-by: David Chase <drchase@google.com>
2016-07-15 18:20:17 +00:00
Cherry Zhang
8cc3f4a17e [dev.ssa] cmd/compile: use shifted and indexed ops in SSA for ARM
This CL implements the following optimizations for ARM:
- use shifted ops (e.g. ADD R1<<2, R2) and indexed load/stores
- break up shift ops. Shifts used to be one SSA op that generated
  multiple instructions. We break them up into multiple ops, which
  allows constant folding and CSE for comparisons. Conditional moves
  are introduced for this.
- simplify zero/sign-extension ops.

Updates #15365.

Change-Id: I55e262a776a7ef2a1505d75e04d1208913c35d39
Reviewed-on: https://go-review.googlesource.com/24512
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-15 18:19:59 +00:00
Josh Bleecher Snyder
4054769a31 runtime/internal/atomic: fix assembly arg sizes
Change-Id: I80ccf40cd3930aff908ee64f6dcbe5f5255198d3
Reviewed-on: https://go-review.googlesource.com/24914
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-07-14 16:35:37 +00:00
Keith Randall
14cf6e2083 [dev.ssa] cmd/compile: initial 386 SSA port
Basically just copied all the amd64 files, removed all the *Q ops,
and rebuilt.

Compiles fib successfully.

Still need to do:
 - all the 64->32 bit op translations.
 - audit for instructions that aren't available on 386.
 - GO386=387?

Update #16358

Change-Id: Ib8c684586416a554a527a5eefa0cff71424e36f5
Reviewed-on: https://go-review.googlesource.com/24912
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-07-13 23:43:50 +00:00
Ian Lance Taylor
29ed5da5f2 runtime/pprof: don't print extraneous 0 after goexit
This fixes erroneous handling of the more result parameter of
runtime.Frames.Next.
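
For reference, a consumer that handles the more result correctly looks
roughly like this (a usage sketch, not the code changed by this CL):

	pc := make([]uintptr, 16)
	n := runtime.Callers(0, pc)
	frames := runtime.CallersFrames(pc[:n])
	for {
		frame, more := frames.Next()
		fmt.Println(frame.Function)
		if !more {
			break // stop; do not print a bogus extra frame
		}
	}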

Fixes #16349.

Change-Id: I4f1c0263dafbb883294b31dbb8922b9d3e650200
Reviewed-on: https://go-review.googlesource.com/24911
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-13 21:18:19 +00:00
Brad Fitzpatrick
4d00937cec all: rename vendored golang.org/x/net packages to golang_org
Regression from Go 1.6 to Go 1.7rc1: we had broken the ability for
users to vendor "golang.org/x/net/http2" or "golang.org/x/net/route"
because we were vendoring them ourselves and cmd/go and cmd/compile do
not understand multiple vendor directories across multiple GOPATH
workspaces (e.g. user's $GOPATH and default $GOROOT).

As a short-term fix, since fixing cmd/go and cmd/compile is too
invasive at this point in the cycle, just rename "golang.org" to
"golang_org" for the standard library's vendored copy.

Fixes #16333

Change-Id: I9bfaed91e9f7d4ca6bab07befe80d71d437a21af
Reviewed-on: https://go-review.googlesource.com/24902
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-07-13 18:35:40 +00:00
Keith Randall
efefd11725 [dev.ssa] Merge remote-tracking branch 'origin/master' into mergebranch
Semi-regular merge of tip into dev.ssa.

Change-Id: I855817c4746237792a2dab6eaf471087a3646be4
2016-07-13 11:12:44 -07:00
Ian Lance Taylor
1cb3f7169c doc/go1.7.html: earlier Go versions don't work on macOS Sierra
Updates #16272.

Change-Id: If5444b8de8678eeb9be10b62a929e2e101d1dd91
Reviewed-on: https://go-review.googlesource.com/24900
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-13 18:07:35 +00:00
Emmanuel Odeke
76da6491e8 doc/go1.7.html: document that http.Server now enforces request versions
Document that the http.Server is now stricter about rejecting
requests with invalid HTTP versions, and also that it rejects plaintext
HTTP/2 requests, except for `PRI * HTTP/2.0` upgrade requests.
The relevant CL is https://golang.org/cl/24505.

Updates #15810.

Change-Id: Ibbace23e001b5e2eee053bd341de50f9b6d3fde8
Reviewed-on: https://go-review.googlesource.com/24731
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-13 16:52:52 +00:00
Bryan C. Mills
2fcb25e07b doc/effective_go: clarify advice on returning interfaces
New Gophers sometimes misconstrue the advice in the "Generality" section
as "export interfaces instead of implementations" and add needless
interfaces to their code as a result.  Down the road, they end up
needing to add methods and either break existing callers or have to
resort to unpleasant hacks (e.g. using "magic method" type-switches).

Weaken the first paragraph of this section to only advise leaving types
unexported when they will never need additional methods.

Change-Id: I32a1ae44012b5896faf167c02e192398a4dfc0b8
Reviewed-on: https://go-review.googlesource.com/24892
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-13 16:19:07 +00:00
Brad Fitzpatrick
a1110c3930 cmd/go: don't fail on invalid GOOS/GOARCH pair when using gccgo
Fixes #12272

Change-Id: I0306ce0ef4a87df2158df3b7d4d8d93a1cb6dabc
Reviewed-on: https://go-review.googlesource.com/24864
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
2016-07-12 19:18:08 +00:00
Ian Lance Taylor
b30814bbd6 runtime: add ctxt parameter to cgocallback called from Go
The cgocallback function picked up a ctxt parameter in CL 22508.
That CL updated the assembler implementation, but there are a few
mentions in Go code that were not updated. This CL fixes that.

Fixes #16326

Change-Id: I5f68e23565c6a0b11057aff476d13990bff54a66
Reviewed-on: https://go-review.googlesource.com/24848
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
2016-07-12 16:39:00 +00:00
Ian Lance Taylor
1f4e68d92b reflect: an unnamed type has no PkgPath
The reflect package was returning a non-empty PkgPath for an unnamed
type with methods, such as a type whose methods have a pointer
receiver.
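
A minimal illustration of the expected behavior (T is a hypothetical
type with a pointer-receiver method):

	package main

	import (
		"fmt"
		"reflect"
	)

	type T struct{}

	func (*T) M() {}

	func main() {
		t := reflect.TypeOf(&T{}) // *T: an unnamed type with a method
		fmt.Println(t.Name() == "", t.PkgPath() == "") // true true
	}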

Fixes #16328.

Change-Id: I733e93981ebb5c5c108ef9b03bf5494930b93cf3
Reviewed-on: https://go-review.googlesource.com/24862
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-07-12 12:43:48 +00:00
Ian Lance Taylor
a84b18ac86 Revert "regexp: add the Fanout benchmark
This is a copy of the "FANOUT" benchmark recently added to RE2 with the
following comment:

    // This has quite a high degree of fanout.
    // NFA execution will be particularly slow.

Most of the benchmarks on the regexp package have very little fanout and
are designed for comparing the regexp package's NFA with backtracking
engines found in other regular expression libraries. This benchmark
exercises the performance of the NFA on expressions with high fanout.
Reviewed-on: https://go-review.googlesource.com/24846
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
"

This reverts commit fc803874d3.

Reason for revert: Breaks the -race build because the benchmark takes too long to run.

Change-Id: I6ed4b466f74a4108d8bcd5b019b9abe971eb483e
Reviewed-on: https://go-review.googlesource.com/24861
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
2016-07-12 04:59:34 +00:00
Michael Matloob
fc803874d3 regexp: add the Fanout benchmark
This is a copy of the "FANOUT" benchmark recently added to RE2 with the
following comment:

    // This has quite a high degree of fanout.
    // NFA execution will be particularly slow.

Most of the benchmarks on the regexp package have very little fanout and
are designed for comparing the regexp package's NFA with backtracking
engines found in other regular expression libraries. This benchmark
exercises the performance of the NFA on expressions with high fanout.

Change-Id: Ie9c8e3bbeffeb1fe9fb90474ddd19e53f2f57a52
Reviewed-on: https://go-review.googlesource.com/24846
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-07-11 21:03:09 +00:00
Francesc Campoy
296b618dc8 gofmt: remove unneeded call to os.Exit
PrintDefaults already calls os.Exit(2).

Change-Id: I0d783a6476f42b6157853cdb34ba69618e3f3fcb
Reviewed-on: https://go-review.googlesource.com/24844
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-11 18:37:12 +00:00
Ian Lance Taylor
38de5b71f2 doc/go1.7.html: no concurrent calls of math/rand methods
A follow-on to https://golang.org/cl/24852 that mentions the
documentation clarifications.

Updates #16308.

Change-Id: Ic2a6e1d4938d74352f93a6649021fb610efbfcd0
Reviewed-on: https://go-review.googlesource.com/24857
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-07-11 17:19:26 +00:00
Ian Lance Taylor
fb3cf5c686 math/rand: fix raciness in Rand.Read
There are no synchronization points protecting the readVal and readPos
variables. This leads to a race when Read is called concurrently.
Fix this by adding methods to lockedSource, which is the case where
a race matters.
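
The pattern, roughly (a minimal sketch of guarding the read state with
the same lock as the source; imports of sync and math/rand assumed;
this is not the actual lockedSource code):

	type lockedReader struct {
		mu      sync.Mutex
		src     rand.Source
		readVal int64 // leftover random bits from a previous Read
		readPos int8  // number of leftover bytes still valid
	}

	func (r *lockedReader) Read(p []byte) (int, error) {
		r.mu.Lock()
		defer r.mu.Unlock()
		for i := range p {
			if r.readPos == 0 {
				r.readVal = r.src.Int63() // 63 bits: 7 usable bytes
				r.readPos = 7
			}
			p[i] = byte(r.readVal)
			r.readVal >>= 8
			r.readPos--
		}
		return len(p), nil
	}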

Fixes #16308.

Change-Id: Ic028909955700906b2d71e5c37c02da21b0f4ad9
Reviewed-on: https://go-review.googlesource.com/24852
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-07-11 15:11:44 +00:00
Brad Fitzpatrick
54ffdf364f net/http: fix vet warning of leaked context in error paths
Updates #16230

Change-Id: Ie38f85419c41c00108f8843960280428a39789b5
Reviewed-on: https://go-review.googlesource.com/24850
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-11 04:33:31 +00:00
Ian Lance Taylor
54b499e3f1 syscall: add another output for TestGroupCleanupUserNamespace
Fixes #16303.

Change-Id: I2832477ce0117a66da53ca1f198ebb6121953d56
Reviewed-on: https://go-review.googlesource.com/24833
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-08 19:40:59 +00:00
Ian Lance Taylor
12f2b4ff0e runtime: fix case in KeepAlive comment
Fixes #16299.

Change-Id: I76f541c7f11edb625df566f2f1035147b8bcd9dd
Reviewed-on: https://go-review.googlesource.com/24830
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-08 16:50:26 +00:00
Ian Lance Taylor
915398f14f doc/go1.7.html: fix name of IsExist
For better or for worse, it's IsExist, not IsExists.

Change-Id: I4503f961486edd459c0c81cf3f32047dff7703a4
Reviewed-on: https://go-review.googlesource.com/24819
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-08 15:06:27 +00:00
Ian Lance Taylor
53da5fd4d4 [release-branch.go1.7] runtime: fix nanotime for macOS Sierra
In the beta version of the macOS Sierra (10.12) release, the
gettimeofday system call changed on x86. Previously it always returned
the time in the AX/DX registers. Now, if AX is returned as 0, it means
that the system call has stored the values into the memory pointed to by
the first argument, just as the libc gettimeofday function does. The
libc function handles both cases, and we need to do so as well.

Fixes #16272.

Change-Id: Ibe5ad50a2c5b125e92b5a4e787db4b5179f6b723
Reviewed-on: https://go-review.googlesource.com/24812
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-on: https://go-review.googlesource.com/24755
Reviewed-by: Chris Broadfoot <cbro@golang.org>
2016-07-08 03:48:20 +00:00
Ian Lance Taylor
fad2bbdc6a runtime: fix nanotime for macOS Sierra
In the beta version of the macOS Sierra (10.12) release, the
gettimeofday system call changed on x86. Previously it always returned
the time in the AX/DX registers. Now, if AX is returned as 0, it means
that the system call has stored the values into the memory pointed to by
the first argument, just as the libc gettimeofday function does. The
libc function handles both cases, and we need to do so as well.

Fixes #16272.

Change-Id: Ibe5ad50a2c5b125e92b5a4e787db4b5179f6b723
Reviewed-on: https://go-review.googlesource.com/24812
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-08 03:17:18 +00:00
Chris Broadfoot
a91416e7ab [release-branch.go1.7] go1.7rc1
Change-Id: Ifbf1c13ce740428add68d68133c7f10876bad404
Reviewed-on: https://go-review.googlesource.com/24816
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-07-08 02:57:35 +00:00
Ian Lance Taylor
84bb9e62f0 runtime: handle selects with duplicate channels in shrinkstack
The shrinkstack code locks all the channels a goroutine is waiting for,
but didn't handle the case of the same channel appearing in the list
multiple times. This led to a deadlock. The channels are sorted so it's
easy to avoid locking the same channel twice.
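
The dedup is a one-comparison scan over the sorted list, roughly
(an illustrative sketch using mutexes rather than runtime internals):

	// locks is sorted by address, so duplicates are adjacent.
	func lockAll(locks []*sync.Mutex) {
		var prev *sync.Mutex
		for _, l := range locks {
			if l == prev {
				continue // same channel again; already locked
			}
			l.Lock()
			prev = l
		}
	}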

Fixes #16286.

Change-Id: Ie514805d0532f61c942e85af5b7b8ac405e2ff65
Reviewed-on: https://go-review.googlesource.com/24815
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-07-08 02:05:40 +00:00
Brad Fitzpatrick
e5ff529679 lib/time: update to IANA release 2016f (July 2016)
Fixes #16273

Change-Id: I443e1f254fd683c4ff61beadae89c1c45ff5d972
Reviewed-on: https://go-review.googlesource.com/24744
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Quentin Smith <quentin@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-07-07 16:15:13 +00:00
Brad Fitzpatrick
d8722012af net/http: deflake TestClientRedirectContext
The test was checking for 1 of 2 possible error values. But based on
goroutine scheduling and the randomness of select statement receive
cases, it was possible for a 3rd type of error to be returned.

This modifies the code (not the test) to make that third type of error
actually the second type of error, which is a nicer error message.

The test is no longer flaky. The flake was very reproducible with a
5ms sleep before the select at the end of Transport.getConn.

Thanks to Github user @jaredborner for debugging.

Fixes #16049

Change-Id: I0d2a036c9555a8d2618b07bab01f28558d2b0b2c
Reviewed-on: https://go-review.googlesource.com/24748
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-07 04:06:52 +00:00
Ian Lance Taylor
df7c159f06 path/filepath: fix typo in comment
Change-Id: I0c76e8deae49c1149647de421503c5175028b948
Reviewed-on: https://go-review.googlesource.com/24781
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-07 02:59:09 +00:00
Ian Lance Taylor
94477121bd path/filepath: document Clean behavior for each function
Document explicitly which functions Clean the result rather than
documenting it in the package comment.

Updates #10122.
Fixes #16111.

Change-Id: Ia589c7ee3936c9a6a758725ac7f143054d53e41e
Reviewed-on: https://go-review.googlesource.com/24747
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-07-06 23:22:31 +00:00
Ian Lance Taylor
bbe5da4260 cmd/compile, syscall: add //go:uintptrescapes comment, and use it
This new comment can be used to declare that the uintptr arguments to a
function may be converted from pointers, and that those pointers should
be considered to escape. This is used for the Call methods in
dll_windows.go that take uintptr arguments, because they call Syscall.

We can't treat these functions as we do syscall.Syscall, because unlike
Syscall they may cause the stack to grow. For Syscall we can assume that
stack arguments can remain on the stack, but for these functions we need
them to escape.
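
A hypothetical illustration of the directive (rawSyscall and buffer
are stand-ins for this sketch, not real API):

	package main

	import "unsafe"

	type buffer struct{ b [64]byte }

	// rawSyscall stands in for a routine that receives a uintptr
	// derived from a pointer and, unlike syscall.Syscall, may grow
	// the stack.
	func rawSyscall(fn, arg uintptr) uintptr { return arg }

	// Call declares via the directive that its uintptr arguments may
	// be converted pointers, which are kept alive and treated as
	// escaping.
	//go:uintptrescapes
	func Call(fn, arg uintptr) uintptr {
		return rawSyscall(fn, arg)
	}

	func main() {
		p := new(buffer) // stays live across Call even if the stack grows
		Call(0, uintptr(unsafe.Pointer(p)))
	}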

Fixes #16035.

Change-Id: Ia0e5b4068c04f8d303d95ab9ea394939f1f57454
Reviewed-on: https://go-review.googlesource.com/24551
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-06 20:48:41 +00:00
Sam Whited
820e30f5b0 encoding/xml: update docs to follow convention
Fixes #8833

Change-Id: I4523a1de112ed02371504e27882659bce8028a45
Reviewed-on: https://go-review.googlesource.com/24745
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-06 17:19:45 +00:00
Josh Bleecher Snyder
f0bab31660 [dev.ssa] cmd/compile: add some constant folding optimizations
These were helpful for some autogenerated code
I'm working with.

Change-Id: I7b89c69552ca99bf560a14bfbcd6bd238595ddf6
Reviewed-on: https://go-review.googlesource.com/24742
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-06 16:06:57 +00:00
Cherry Zhang
8599fdd9b6 [dev.ssa] cmd/compile: add some ARM optimization rewriting rules
Mostly constant folding rules, analogous to AMD64 ones. Along with
some simplifications.

Updates #15365.

Change-Id: If83bc1188bb05acb982ef3a1c21704c187e3eb24
Reviewed-on: https://go-review.googlesource.com/24210
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-06 15:55:29 +00:00
Cherry Zhang
42181ad852 [dev.ssa] cmd/compile: enable SSA on ARM by default
As Josh mentioned in CL 24716, there has been requests for using SSA
for ARM. SSA can still be disabled by setting -ssa=0 for cmd/compile,
or partially enabled with GOSSAFUNC, GOSSAPKG, and GOSSAHASH.

Do not enable SSA by default on NaCl, which is not supported yet.

Enable SSA-specific tests on ARM: live_ssa.go and nilptr3_ssa.go;
disable non-SSA tests: live.go, nilptr3.go, and sliceopt.go.

Updates #15365.

Change-Id: Ic2ca8d166aeca8517b9d262a55e92f2130683a16
Reviewed-on: https://go-review.googlesource.com/23953
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
2016-07-06 15:05:50 +00:00
Emmanuel Odeke
5a9d5c3747 encoding/gob: document Encode, EncodeValue nil pointer panics
Fixes #16258.

The docs for Encode and EncodeValue did not mention that nil
pointers are not permitted and cause a panic: gobs encode values,
and nil pointers have no value to encode. This change also moves a
comment that was internal to EncodeValue to the top level to make it
clearer to users what to expect when they pass in nil pointers.
It supplements the test TestTopLevelNilPointer.
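
For example (illustrative; the exact panic message may differ):

	var p *int // nil: there is no value to encode
	enc := gob.NewEncoder(new(bytes.Buffer))
	enc.Encode(p) // panics: top-level nil pointer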

Change-Id: Ie54f609fde4b791605960e088456047eb9aa8738
Reviewed-on: https://go-review.googlesource.com/24740
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Andrew Gerrand <adg@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-05 06:36:21 +00:00
Ian Lance Taylor
003a68bc7f cmd/vet: remove copylock warning about result types and calls
Don't issue a copylock warning about a result type; the function may
return a composite literal with a zero value, which is OK.

Don't issue a copylock warning about a function call on the RHS, or an
indirection of a function call; the function may return a composite
literal with a zero value, which is OK.

Updates #16227.

Change-Id: I94f0e066bbfbca5d4f8ba96106210083e36694a2
Reviewed-on: https://go-review.googlesource.com/24711
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2016-07-04 03:23:23 +00:00
Mikio Hara
878e002bb9 syscall: fix missing use of use function in Getfsstat
Updates #13372.

Change-Id: If383c14af14839a303425ba7b80b97e35ca9b698
Reviewed-on: https://go-review.googlesource.com/24750
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-07-04 03:17:02 +00:00
Josh Bleecher Snyder
41a7dca272 [dev.ssa] cmd/compile: unify and check LoweredGetClosurePtr
The comments were mostly duplicated; unify them.
Add a check that the required invariant holds.

Change-Id: I42fe09dcd1fac76d3c4e191f7a58c591c5ce429b
Reviewed-on: https://go-review.googlesource.com/24719
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
2016-07-04 01:29:28 +00:00
Josh Bleecher Snyder
ad8b8f644e [dev.ssa] cmd/compile: remove dead amd64 ITab lowering rule
ITab is handled by decomposition.
The rule is vestigial. Remove it.

Change-Id: I6fdf3d14d466761c7665c7ea14f34ca0e1e3e646
Reviewed-on: https://go-review.googlesource.com/24718
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-04 01:21:13 +00:00
Monty Taylor
afccfb829f cmd/go: remove noVCSSuffix check for OpenStack
The original intent of the code was to allow imports both with and
without the .git suffix for now, to allow a transition period. The
noVCSSuffix check was a copy-paste error.

Fixes #15979.

Change-Id: I3d39aba8d026b40fc445244d6d01d8bc1979d1e4
Reviewed-on: https://go-review.googlesource.com/24645
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-07-03 23:07:59 +00:00
Cherry Zhang
f55317828b [dev.ssa] cmd/compile: ensure alignment for Zero and Move in SSA for ARM
Encode the size and the alignment into AuxInt of Zero and Move ops.
On AMD64, we simply don't look at the alignment. On ARM and PPC64, we
only generate aligned stores.
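
One plausible packing, for illustration only (the exact bit layout is
an assumption, not necessarily what this CL uses):

	// makeSizeAndAlign packs a byte size and an alignment into a
	// single int64 so both fit in an op's AuxInt field.
	func makeSizeAndAlign(size, align int64) int64 {
		return size<<8 | align
	}

	func sizeOf(x int64) int64  { return x >> 8 }
	func alignOf(x int64) int64 { return x & 0xff }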

Updates #15365.

Change-Id: Ifdcc205c364f67c4516b9adebfe7d50d223b6863
Reviewed-on: https://go-review.googlesource.com/24511
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-02 22:22:12 +00:00
Ian Lance Taylor
519b469795 cmd/compile: mark live heap-allocated pparamout vars as needzero
If we don't mark them as needzero, we have a live pointer variable
containing possible garbage, which will baffle the GC.

Fixes #16249.

Change-Id: I7c423ceaca199ddd46fc2c23e5965e7973f07584
Reviewed-on: https://go-review.googlesource.com/24715
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-07-02 00:40:40 +00:00
Robert Griesemer
575a871662 cmd/compile: don't lose //go:nointerface pragma in export data
Fixes #16243.

Change-Id: I207d1e8aa48abe453a23c709ccf4f8e07368595b
Reviewed-on: https://go-review.googlesource.com/24648
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-07-01 19:20:11 +00:00
Cherry Zhang
29f0984a35 cmd/compile: don't set line number to 0 when building SSA
The frontend may emit a node with its line number missing. In this
case, use the parent's line number. Instead of changing every call
site of pushLine, do it in pushLine itself.

Fixes #16214.

Change-Id: I80390550b56e4d690fc770b01ff725b892ffd6dc
Reviewed-on: https://go-review.googlesource.com/24641
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-07-01 01:12:24 +00:00
Brad Fitzpatrick
b5aae1a284 net/http: update bundled http2
Updates x/net/http2 to git rev b400c2e for https://golang.org/cl/24214,
"http2: add additional blacklisted ciphersuites"

Both TLS_RSA_WITH_AES_128_GCM_SHA256 & TLS_RSA_WITH_AES_256_GCM_SHA384
are now blacklisted, per http://httpwg.org/specs/rfc7540.html#BadCipherSuites

Change-Id: I8b9a7f4dc3c152d0675e196523ddd36111744984
Reviewed-on: https://go-review.googlesource.com/24684
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-30 23:21:30 +00:00
Alan Donovan
08086e6246 cmd/vet: lostcancel: treat naked return as a use of named results
+ test.

Fixes #16230

Change-Id: Idac995437146a9df9e73f094d2a31abc25b1fa62
Reviewed-on: https://go-review.googlesource.com/24681
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-30 21:53:32 +00:00
Brad Fitzpatrick
7ea62121a7 all: be consistent about spelling of cancelation
We had ~30 one way, and these four new occurrences the other way.

Updates #11626

Change-Id: Ic6403dc4905874916ae292ff739d33482ed8e5bf
Reviewed-on: https://go-review.googlesource.com/24683
Reviewed-by: Rob Pike <r@golang.org>
2016-06-30 21:09:56 +00:00
Josh Bleecher Snyder
95427d2549 [dev.ssa] cmd/compile: improve stability of generated code
If the files in cmd/compile/internal/ssa/gen
are passed to go run in a different order,
e.g. due to shell differences or manual entry,
then the order of constants in opGen churns.

Sort archs by name to enforce stability.
The movement of the PPC constants is a one time cost.

Change-Id: Iebcfdb9e612d7dd8cde575f920f1292891f2f24a
Reviewed-on: https://go-review.googlesource.com/24680
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-06-30 20:10:16 +00:00
Alan Donovan
fc12bb2636 context: cancel the context in ExampleWithTimeout, with explanation
Fixes #16230

Change-Id: Ibb10234a6c3ab8bd0cfd93c2ebe8cfa66f80f6b0
Reviewed-on: https://go-review.googlesource.com/24682
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-06-30 19:56:02 +00:00
Austin Clements
9c8809f82a runtime/internal/sys: implement Ctz and Bswap in assembly for 386
Ctz is a hot-spot in the Go 1.7 memory manager. In SSA it's
implemented as an intrinsic that compiles to a few instructions, but
on the old backend (all architectures other than amd64), it's
implemented as a fairly complex Go function. As a result, switching to
bitmap-based allocation was a significant hit to allocation-heavy
workloads like BinaryTree17 on non-SSA platforms.

For unknown reasons, this hit 386 particularly hard. We can regain a
lot of the lost performance by implementing Ctz in assembly on the
386. This isn't as good as an intrinsic, since it still generates a
function call and prevents useful inlining, but it's much better than
the pure Go implementation:

name                      old time/op    new time/op    delta
BinaryTree17-12              3.59s ± 1%     3.06s ± 1%  -14.74%  (p=0.000 n=19+20)
Fannkuch11-12                3.72s ± 1%     3.64s ± 1%   -2.09%  (p=0.000 n=17+19)
FmtFprintfEmpty-12          52.3ns ± 3%    52.3ns ± 3%     ~     (p=0.829 n=20+19)
FmtFprintfString-12          156ns ± 1%     148ns ± 3%   -5.20%  (p=0.000 n=18+19)
FmtFprintfInt-12             137ns ± 1%     136ns ± 1%   -0.56%  (p=0.000 n=19+13)
FmtFprintfIntInt-12          227ns ± 2%     225ns ± 2%   -0.93%  (p=0.000 n=19+17)
FmtFprintfPrefixedInt-12     210ns ± 1%     208ns ± 1%   -0.91%  (p=0.000 n=19+17)
FmtFprintfFloat-12           375ns ± 1%     371ns ± 1%   -1.06%  (p=0.000 n=19+18)
FmtManyArgs-12               995ns ± 2%     978ns ± 1%   -1.63%  (p=0.000 n=17+17)
GobDecode-12                9.33ms ± 1%    9.19ms ± 0%   -1.59%  (p=0.000 n=20+17)
GobEncode-12                7.73ms ± 1%    7.73ms ± 1%     ~     (p=0.771 n=19+20)
Gzip-12                      375ms ± 1%     374ms ± 1%     ~     (p=0.141 n=20+18)
Gunzip-12                   61.8ms ± 1%    61.8ms ± 1%     ~     (p=0.602 n=20+20)
HTTPClientServer-12         87.7µs ± 2%    86.9µs ± 3%   -0.87%  (p=0.024 n=19+20)
JSONEncode-12               20.2ms ± 1%    20.4ms ± 0%   +0.53%  (p=0.000 n=18+19)
JSONDecode-12               65.3ms ± 0%    65.4ms ± 1%     ~     (p=0.385 n=16+19)
Mandelbrot200-12            4.11ms ± 1%    4.12ms ± 0%   +0.29%  (p=0.020 n=19+19)
GoParse-12                  3.75ms ± 1%    3.61ms ± 2%   -3.90%  (p=0.000 n=20+20)
RegexpMatchEasy0_32-12       104ns ± 0%     103ns ± 0%   -0.96%  (p=0.000 n=13+16)
RegexpMatchEasy0_1K-12       805ns ± 1%     803ns ± 1%     ~     (p=0.189 n=18+18)
RegexpMatchEasy1_32-12       111ns ± 0%     111ns ± 3%     ~     (p=1.000 n=14+19)
RegexpMatchEasy1_1K-12      1.00µs ± 1%    1.00µs ± 1%   +0.50%  (p=0.003 n=19+19)
RegexpMatchMedium_32-12      133ns ± 2%     133ns ± 2%     ~     (p=0.218 n=20+20)
RegexpMatchMedium_1K-12     41.2µs ± 1%    42.2µs ± 1%   +2.52%  (p=0.000 n=18+16)
RegexpMatchHard_32-12       2.35µs ± 1%    2.38µs ± 1%   +1.53%  (p=0.000 n=18+18)
RegexpMatchHard_1K-12       70.9µs ± 2%    72.0µs ± 1%   +1.42%  (p=0.000 n=19+17)
Revcomp-12                   1.06s ± 0%     1.05s ± 0%   -1.36%  (p=0.000 n=20+18)
Template-12                 86.2ms ± 1%    84.6ms ± 0%   -1.89%  (p=0.000 n=20+18)
TimeParse-12                 425ns ± 2%     428ns ± 1%   +0.77%  (p=0.000 n=18+19)
TimeFormat-12                517ns ± 1%     519ns ± 1%   +0.43%  (p=0.001 n=20+19)
[Geo mean]                  74.3µs         73.5µs        -1.05%

Prior to this commit, BinaryTree17-12 on 386 was 33% slower than at
the go1.6 tag. With this commit, it's 13% slower.

On arm and arm64, BinaryTree17-12 is only ~5% slower than it was at
go1.6. It may be worth implementing Ctz for them as well.

I consider this change low risk, since the functions it replaces are
simple, very well specified, and well tested.
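
For reference, trailing-zero count can be written in pure Go roughly
as follows (a naive sketch; the runtime's Go version is table-based,
and this CL replaces it with assembly on 386):

	// ctz returns the number of trailing zero bits in x.
	// x must be nonzero.
	func ctz(x uint32) int {
		n := 0
		for x&1 == 0 {
			x >>= 1
			n++
		}
		return n
	}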

For #16117.

Change-Id: Ic39d851d5aca91330134596effd2dab9689ba066
Reviewed-on: https://go-review.googlesource.com/24640
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-30 19:35:44 +00:00
Ian Lance Taylor
95483f262b os/exec: start checking for context cancelation in Start
Previously we started checking for context cancelation in Wait, but
that meant that when using StdoutPipe context cancelation never took
effect.
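
The case that now works, roughly (a sketch; assumes a long-running
command such as sleep, with context, os/exec, io, io/ioutil, log,
and time imported):

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "sleep", "60")
	out, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	io.Copy(ioutil.Discard, out) // now returns once ctx expires
	cmd.Wait()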

Fixes #16222.

Change-Id: I89cd26d3499a6080bf1a07718ce38d825561899e
Reviewed-on: https://go-review.googlesource.com/24650
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-30 16:35:56 +00:00
Ian Lance Taylor
6c13649301 syscall: accept more variants of id output when testing as root
This should fix the report at #16224, and also fixes running the test as
root on my Ubuntu Trusty system.

Fixes #16224.

Change-Id: I4e3b5527aa63366afb33a7e30efab088d34fb302
Reviewed-on: https://go-review.googlesource.com/24670
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-30 15:49:01 +00:00
Brad Fitzpatrick
e0c8af090e net/http: update bundled http2
Updates x/net/http2 to git rev 8e573f40 for https://golang.org/cl/24600,
"http2: merge multiple GOAWAY frames' contents into error message"

Fixes #14627 (more)

Change-Id: I5231607c2c9e0d854ad6199ded43c59e59f62f52
Reviewed-on: https://go-review.googlesource.com/24612
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-30 00:25:29 +00:00
Brad Fitzpatrick
51b08d511e net/http: be consistent about spelling of HTTP/1.x
There was only one use of "HTTP/1.n" compared to "HTTP/1.x":

h2_bundle.go://   "Just as in HTTP/1.x, header field names are strings of ASCII
httputil/dump.go:// DumpRequest returns the given request in its HTTP/1.x wire
httputil/dump.go:// intact. HTTP/2 requests are dumped in HTTP/1.x form, not in their
response.go:// Write writes r to w in the HTTP/1.x server response format,
server.go:      // Request.Body. For HTTP/1.x requests, handlers should read any
server.go:// The default HTTP/1.x and HTTP/2 ResponseWriter implementations
server.go:// The default ResponseWriter for HTTP/1.x connections supports
server.go:// http1ServerSupportsRequest reports whether Go's HTTP/1.x server
server.go:      // about HTTP/1.x Handlers concurrently reading and writing, like
server.go:      // HTTP/1.x from here on.
transport.go:   return fmt.Errorf("net/http: HTTP/1.x transport connection broken: %v", err)

Be consistent.

Change-Id: I93c4c873e500f51af2b4762055e22f5487a625ac
Reviewed-on: https://go-review.googlesource.com/24610
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-29 23:59:41 +00:00
Nick Harper
cc6f5f6ce1 crypto/ecdsa: Update documentation for Sign
Change-Id: I2b7a81cb809d109f10d5f0db957c614f466d6bfd
Reviewed-on: https://go-review.googlesource.com/24582
Reviewed-by: Adam Langley <agl@golang.org>
2016-06-29 18:44:36 +00:00
Tom Bergan
ad82f2cf4b crypto/tls: Use the same buffer size in the client and server in the TLS throughput benchmark
I believe it's necessary to use a buffer size smaller than 64KB because
(at least some versions of) Windows use a TCP receive window of less
than 64KB. Currently the client and server use buffer sizes of 16KB and 32KB,
respectively (the server uses io.Copy, which defaults to 32KB internally).
Since the server has been using 32KB, it should be safe for the client to
do so as well.

Fixes #15899

Change-Id: I36d44b29f2a5022c03fc086213d3c1adf153e983
Reviewed-on: https://go-review.googlesource.com/24581
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-29 18:34:53 +00:00
Dmitry Vyukov
bb337372fb runtime: fix race atomic operations on external memory
The assembly is broken: it does `MOVQ g(R12), R14`, expecting that
R12 contains the TLS address, but it does not do get_tls(R12) beforehand.
This magically works on linux: `MOVQ g(R12), R14` is compiled to
`mov %fs:0xfffffffffffffff8,%r14` which does not use R12.
But it crashes on windows.

Add explicit `get_tls(R12)`.

Fixes #16206

Change-Id: Ic1f21a6fef2473bcf9147de6646929781c9c1e98
Reviewed-on: https://go-review.googlesource.com/24590
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-29 15:30:54 +00:00
Ian Lance Taylor
25a609556a runtime: correct printing of blocked field in scheduler trace
When the blocked field was first introduced back in
https://golang.org/cl/61250043 the scheduler trace code incorrectly used
m->blocked instead of mp->blocked.  That has carried through the
conversion to Go.  This CL fixes it.

Change-Id: Id81907b625221895aa5c85b9853f7c185efd8f4b
Reviewed-on: https://go-review.googlesource.com/24571
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-06-29 01:38:39 +00:00
Ian Lance Taylor
c7ae41e577 runtime: better error message for newosproc failure
If creating a new thread fails with EAGAIN, point the user at ulimit.

Fixes #15476.

Change-Id: Ib36519614b5c72776ea7f218a0c62df1dd91a8ea
Reviewed-on: https://go-review.googlesource.com/24570
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-06-29 01:37:19 +00:00
Brad Fitzpatrick
8641e6fe21 net/http: update bundled http2
Updates x/net/http2 to git rev ef2e00e88 for https://golang.org/cl/24560,
"http2: make Transport return server's GOAWAY error back to the user"

Fixes #14627

Change-Id: I2bb123a3041e168db7c9446beef4ee47638f17ee
Reviewed-on: https://go-review.googlesource.com/24561
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Andrew Gerrand <adg@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-29 00:41:02 +00:00
Konstantin Shaposhnikov
85a4f44745 cmd/vet: make checking example names in _test packages more robust
Prior to this change, package "foo" had to be installed in order to
check example names in the "foo_test" package.

However, by the time the "foo_test" package is checked, a parsed "foo"
package has already been constructed. Use it to check example names.

Also change the TestDivergentPackagesExamples test to pass the
directory of the package to the vet tool, as that is the most common
way to invoke it. This requires changes to errchk to support grabbing
source files from a directory.

Fixes #16189

Change-Id: Ief103d07b024822282b86c24250835cc591793e8
Reviewed-on: https://go-review.googlesource.com/24488
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-28 22:09:00 +00:00
Brad Fitzpatrick
733aefd06e database/sql: deflake TestPendingConnsAfterErr and fix races, panics
TestPendingConnsAfterErr only cared that things didn't deadlock, so 5
seconds is a sufficient timer. We don't need 100 milliseconds.

I was able to reproduce with a tiny (5 nanosecond) timeout value,
instead of 100 milliseconds. In the process of testing with -race and
a high -count= value, I noticed several data races and panics
(sendings on a closed channel) which are also fixed in this change.

Fixes #15684

Change-Id: Ib4605fcc0f296e658cb948352ed642b801cb578c
Reviewed-on: https://go-review.googlesource.com/24550
Reviewed-by: Marko Tiikkaja <marko@joh.to>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-28 21:37:53 +00:00
Andrew Gerrand
0692891808 cmd/go: restore support for git submodules and update docs
Fixes #16165

Change-Id: Ic90e5873e0c8ee044f09543177192dcae1dcdbed
Reviewed-on: https://go-review.googlesource.com/24531
Run-TryBot: Andrew Gerrand <adg@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-28 21:36:47 +00:00
Justyn Temme
b0838ca292 strconv: clarify doc for Atoi return type
Change-Id: I47bd98509663d75b0d4dedbdb778e803d90053cf
Reviewed-on: https://go-review.googlesource.com/24216
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-06-28 18:16:25 +00:00
Brad Fitzpatrick
b5f0aff495 net/http: conditionally configure HTTP/2 in Server.Serve(Listener)
Don't configure HTTP/2 in http.Server.Serve(net.Listener) if the
Server's TLSConfig is set and doesn't include the "h2" NextProto
value. This avoids mutating a *tls.Config already in use if
previously passed to tls.NewListener.

Also document this. (it's come up a few times now)

Fixes #15908

Change-Id: I283eed82fdb29a791f80d801aadd9f75db244de0
Reviewed-on: https://go-review.googlesource.com/24508
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-28 18:14:56 +00:00
Marcel van Lohuizen
996ed3be9a doc: update 1.7 release notes on Unicode upgrade
Fixes #16201

Change-Id: I38c17859db78c2868905da24217e0ad47739c320
Reviewed-on: https://go-review.googlesource.com/24541
Run-TryBot: Marcel van Lohuizen <mpvl@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-28 17:01:30 +00:00
Joe Tsai
3001334e77 doc/go1.7.html: mention recent changes to Rand.Read
Updates #16124

Change-Id: Ib58f2bb37fd1559efc512a2e3cba976f09b939a0
Reviewed-on: https://go-review.googlesource.com/24520
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-28 16:55:20 +00:00
Lynn Boger
03d152f36f [dev.ssa] cmd/compile: Add initial SSA configuration for PPC64
This adds the initial SSA implementation for PPC64.
Builds the Go distribution, and all.bash runs correctly. A simple
hello.go builds but does not run.

Change-Id: I7cec211b934cd7a2dd75a6cdfaf9f71867063466
Reviewed-on: https://go-review.googlesource.com/24453
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-28 15:41:20 +00:00
Marcel van Lohuizen
a2a4db7375 unicode: upgrade to version 9.0.0
Changes beyond generated tables:
- Now supports aliases to handle deprecated
  property classes.
- Some Mongolian letters are now modifiers.

Other changes:
- strconv: newly generated table to be in sync
- regexp/syntax: updated maxFold

Fixes #16191

Change-Id: I56bdf21ee2f775f2a82d0465b3772faf5c24cb61
Reviewed-on: https://go-review.googlesource.com/24496
Run-TryBot: Marcel van Lohuizen <mpvl@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-28 15:08:11 +00:00
David Crawshaw
ed9362f769 reflect, runtime: optimize Name method
Several minor changes that remove a good chunk of the overhead added
to the reflect Name method over the 1.7 cycle, as seen from the
non-SSA architectures.

In particular, there are ~20 fewer instructions in reflect.name.name
on 386, and the method now qualifies for inlining.

The simple JSON decoding benchmark on darwin/386:

	name           old time/op    new time/op    delta
	CodeDecoder-8    49.2ms ± 0%    48.9ms ± 1%  -0.77%  (p=0.000 n=10+9)

	name           old speed      new speed      delta
	CodeDecoder-8  39.4MB/s ± 0%  39.7MB/s ± 1%  +0.77%  (p=0.000 n=10+9)

On darwin/amd64 the effect is less pronounced:

	name           old time/op    new time/op    delta
	CodeDecoder-8    38.9ms ± 0%    38.7ms ± 1%  -0.38%  (p=0.005 n=10+10)

	name           old speed      new speed      delta
	CodeDecoder-8  49.9MB/s ± 0%  50.1MB/s ± 1%  +0.38%  (p=0.006 n=10+10)

Counterintuitively, I get much more useful benchmark data out of my
MacBook Pro than a linux workstation with more expensive Intel chips.
While the laptop has fewer cores and an active GUI, the single-threaded
performance is significantly better (nearly 1.5x decoding throughput)
so the differences are more pronounced.

For #16117.

Change-Id: I4e0cc1cc2d271d47d5127b1ee1ca926faf34cabf
Reviewed-on: https://go-review.googlesource.com/24510
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-28 12:28:05 +00:00
Lynn Boger
b75b0630fe runtime/internal/atomic: Use power5 compatible instructions for ppc64
This modifies a recent performance improvement to the
And8 and Or8 atomic functions which required both ppc64le
and ppc64 to use power8 instructions. Since then it was
decided that ppc64 (BE) should work for power5 and later.
This change uses instructions compatible with power5 for
ppc64 and uses power8 for ppc64le.

Fixes #16004

Change-Id: I623c75e8e6fd1fa063a53d250d86cdc9d0890dc7
Reviewed-on: https://go-review.googlesource.com/24181
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Andrew Gerrand <adg@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-28 04:49:33 +00:00
Raul Silvera
05ecf53456 net/http/pprof: remove comments pointing to gperftools
The version of pprof in gperftools has been deprecated.
No need to have a pointer to that version since go tool pprof
is included with the Go distro.

Change-Id: I6d769a68f64280f5db89ff6fbc67bfea9c8f1526
Reviewed-on: https://go-review.googlesource.com/24509
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-06-28 02:40:20 +00:00
David Crawshaw
73516c5f48 encoding/gob: avoid allocating string for map key
On linux/386 compared to tip:

	name                     old time/op  new time/op  delta
	DecodeInterfaceSlice-40  1.23ms ± 1%  1.17ms ± 1%  -4.93%  (p=0.000 n=9+10)

Recovers about half the performance regression from Go 1.6 on 386.

For #16117.

Change-Id: Ie8676d92a4da3e27ff21b91a98b3e13d16730ba1
Reviewed-on: https://go-review.googlesource.com/24468
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-28 01:50:48 +00:00
Dmitri Popov
8d966bad6e math/rand: fix io.Reader implementation
Do not throw away the rest of the Int63 value used for generating
random bytes. Save it in the Rand struct and re-use it during the
next Read call.

Fixes #16124

Change-Id: Ic70bd80c3c3a6590e60ac615e8b3c2324589bea3
Reviewed-on: https://go-review.googlesource.com/24251
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-27 22:18:09 +00:00
Vladimir Mihailenco
0ce100dc96 compress/flate: don't ignore dict in Reader.Reset
Fixes #16162.

Change-Id: I6f4ae906630079ef5fc29ee5f70e2e3d1c962170
Reviewed-on: https://go-review.googlesource.com/24390
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-27 21:28:34 +00:00
Ian Lance Taylor
db58021047 crypto/tls: don't copy Mutex or Once values
This fixes some 40 warnings from go vet.

Fixes #16134.

Change-Id: Ib9fcba275fe692f027a2a07b581c8cf503b11087
Reviewed-on: https://go-review.googlesource.com/24287
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
2016-06-27 21:13:54 +00:00
Konstantin Shaposhnikov
b43fe463ff net/http/httptest: show usage of httptest.NewRequest in example
Change ExampleResponseRecorder to use httptest.NewRequest instead of
http.NewRequest. This makes the example shorter and shows how to use
one more function from the httptest package.
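
The shape of the example, roughly (handler is assumed):

	req := httptest.NewRequest("GET", "http://example.com/foo", nil)
	w := httptest.NewRecorder()
	handler(w, req)
	resp := w.Result()
	fmt.Println(resp.StatusCode)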

Change-Id: I3d35869bd0a4daf1c7551b649428bb2f2a45eba2
Reviewed-on: https://go-review.googlesource.com/24480
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-27 21:08:47 +00:00
Brad Fitzpatrick
3e9d6e064d net/http: reject faux HTTP/0.9 and HTTP/2+ requests
Fixes #16197

Change-Id: Icaabacbb22bc18c52b9e04b47385ac5325fcccd1
Reviewed-on: https://go-review.googlesource.com/24505
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-27 21:07:11 +00:00
Ian Lance Taylor
e0f986bf26 cmd/compile: avoid function literal name collision with "glob"
The compiler was treating all global function literals as occurring in a
function named "glob", which caused a symbol name collision when there
was an actual function named "glob".  Fixed by adding a period.

Fixes #16193.

Change-Id: I67792901a8ca04635ba41d172bfaee99944f594d
Reviewed-on: https://go-review.googlesource.com/24500
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
2016-06-27 21:05:28 +00:00
Raul Silvera
c0e5d44506 runtime/pprof: update comments to point to new pprof
In the comments for this file there is a reference to gperftools
for more info on pprof. pprof now lives in its own repo on GitHub,
and the version in gperftools is deprecated.

Change-Id: I8a188f129534f73edd132ef4e5a2d566e69df7e9
Reviewed-on: https://go-review.googlesource.com/24502
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-27 18:06:11 +00:00
Keith Randall
6effdd28de cmd/compile: keep heap pointer for escaping output parameters live
Make sure the pointer to the heap copy of an output parameter is kept
live throughout the function.  The function could panic at any point,
and then a defer could recover.  Thus, we need the pointer to the heap
copy always available so the post-deferreturn code can copy the return
value back to the stack.

Before this CL, the pointer to the heap copy could be considered dead in
certain situations, like code which is reverse dominated by a panic call.

Fixes #16095.

Change-Id: Ic3800423e563670e5b567b473bf4c84cddb49a4c
Reviewed-on: https://go-review.googlesource.com/24213
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-06-27 16:48:48 +00:00
Konstantin Shaposhnikov
e96b1ef99b cmd/vet: fix name check for examples in _test package
This fixes the obvious bug and makes go vet look for identifiers in
the foo package when checking example names in the foo_test package.

Note that for this check to work the foo package has to be installed
(using go install).

This commit, however, doesn't fix the TestDivergentPackagesExamples
test, which is not implemented correctly and passes only by chance.

Updates #16189

Change-Id: I5c2f675cd07e5b66cf0432b2b3e422ab45c3dedd
Reviewed-on: https://go-review.googlesource.com/24487
Reviewed-by: Dmitri Shuralyov <shurcool@gmail.com>
Run-TryBot: Rob Pike <r@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2016-06-27 15:40:09 +00:00
David Crawshaw
5f209aba6d encoding/json: copy-on-write cacheTypeFields
Switch from a sync.RWMutex to atomic.Value for cacheTypeFields.
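
The copy-on-write shape, roughly (a sketch with stand-in types, not
the json package's actual code; imports of reflect and sync/atomic
assumed). Concurrent writers may each publish a copy and one entry
can be lost, which only costs a recomputation later:

	var cache atomic.Value // holds a map[reflect.Type][]string

	// computeFields stands in for the expensive typeFields work.
	func computeFields(t reflect.Type) []string {
		names := make([]string, 0, t.NumField())
		for i := 0; i < t.NumField(); i++ {
			names = append(names, t.Field(i).Name)
		}
		return names
	}

	func cachedFields(t reflect.Type) []string {
		m, _ := cache.Load().(map[reflect.Type][]string)
		if f, ok := m[t]; ok {
			return f // lock-free hit
		}
		f := computeFields(t)
		nm := make(map[reflect.Type][]string, len(m)+1)
		for k, v := range m {
			nm[k] = v
		}
		nm[t] = f
		cache.Store(nm) // publish a fresh copy; readers never block
		return f
	}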

On GOARCH=386, this recovers most of the remaining performance
difference from the 1.6 release. Compared with tip on linux/386:

	name            old time/op    new time/op    delta
	CodeDecoder-40    92.8ms ± 1%    87.7ms ± 1%  -5.50%  (p=0.000 n=10+10)

	name            old speed      new speed      delta
	CodeDecoder-40  20.9MB/s ± 1%  22.1MB/s ± 1%  +5.83%  (p=0.000 n=10+10)

With more time and care, I believe more of the JSON decoder's work
could be shifted so it is done before decoding, and independent of
the number of bytes processed. Maybe someone could explore that for
Go 1.8.

For #16117.

Change-Id: I049655b2e5b76384a0d5f4b90e3ec7cc8d8c4340
Reviewed-on: https://go-review.googlesource.com/24472
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-27 15:08:12 +00:00
Josh Bleecher Snyder
68dc102ed1 [dev.ssa] cmd/compile: provide default types for all extension ops
Change-Id: I655327818297cc6792c81912f2cebdc321381561
Reviewed-on: https://go-review.googlesource.com/24465
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-06-26 13:35:44 +00:00
Konstantin Shaposhnikov
33fa855e6c math/rand: fix comment about bits of seed used by the default Source
Fixes #15788

Change-Id: I5a1fd1e5992f1c16cf8d8437d742bf02e1653b9c
Reviewed-on: https://go-review.googlesource.com/23461
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-26 05:00:39 +00:00
Ian Lance Taylor
4fcb4eb279 cmd/pprof: don't use local symbolization for remote source
If we are using a remote source (a URL), and the user did not specify
the executable file to use, then don't try to use a local source.
This was misbehaving because the local symbolizer will not fail
if there is any memory map available, but the presence of a memory map
does not ensure that the files and symbols are actually available.

We still need a pprof testsuite.

Fixes #16159.

Change-Id: I0250082a4d5181c7babc7eeec6bc95b2f3bcaec9
Reviewed-on: https://go-review.googlesource.com/24464
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-25 00:36:40 +00:00
Ian Lance Taylor
83e839f86f cmd/pprof: ignore symbols with address 0 and size 0
Handling a symbol with address 0 and size 0, such as an ELF STT_FILE
symbol, was causing us to disassemble the entire program. We started
adding STT_FILE symbols to help fix issue #13247.

Fixes #16154.

Change-Id: I174b9614e66ddc3d65801f7c1af7650f291ac2af
Reviewed-on: https://go-review.googlesource.com/24460
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
2016-06-24 22:52:15 +00:00
Cherry Zhang
df43cf033f [dev.ssa] cmd/compile: optimize NilCheck in SSA for ARM
Like AMD64, don't issue NilCheck instruction if the subsequent block
has a load or store at the same address.
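
Illustrative Go source: in code like the following, the load faults
on nil by itself, so a separate nil-check op is redundant.

	func load(p *int) int {
		return *p // the faulting load doubles as the nil check
	}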

Pass test/nilptr3_ssa.go.

Updates #15365.

Change-Id: Ic88780dab8c4893c57d1c95f663760cc185fe51e
Reviewed-on: https://go-review.googlesource.com/24451
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
2016-06-24 20:51:42 +00:00
Nathan VanBenschoten
5e43dc943a math/big: special-case a 0 mantissa during Rat parsing
Previously, a 0 mantissa was special-cased during big.Float
parsing, but not during big.Rat parsing. This meant that a value
like 0e9999999999 would parse successfully in big.Float.SetString,
but would hang in big.Rat.SetString. This discrepancy became an
issue in https://golang.org/src/go/constant/value.go?#L250,
where the big.Float would report an exponent of 0, so
big.Rat.SetString would be used and would subsequently hang.

A Go Playground example of this is https://play.golang.org/p/3fy28eUJuF

The solution is to special-case a zero mantissa during big.Rat
parsing as well, so that neither big.Rat nor big.Float will hang when
parsing a value with 0 mantissa but a large exponent.
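
For example (previously this hung; with the fix it returns at once):

	r, ok := new(big.Rat).SetString("0e9999999999")
	fmt.Println(r, ok) // 0/1 true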

This was discovered using go-fuzz on CockroachDB:
https://github.com/cockroachdb/go-fuzz/blob/master/examples/parser/main.go

Fixes #16176

Change-Id: I775558a8682adbeba1cc9d20ba10f8ed26259c56
Reviewed-on: https://go-review.googlesource.com/24430
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-24 20:51:06 +00:00
David Crawshaw
797dc58457 cmd/compile, etc: use tflag to optimize Name()==""
Improves JSON decoding benchmark:

	name                  old time/op    new time/op    delta
	CodeDecoder-8           41.3ms ± 6%    39.8ms ± 1%  -3.61%  (p=0.000 n=10+10)

	name                  old speed      new speed      delta
	CodeDecoder-8         47.0MB/s ± 6%  48.7MB/s ± 1%  +3.66%  (p=0.000 n=10+10)

Change-Id: I524ee05c432fad5252e79b29222ec635c1dee4b4
Reviewed-on: https://go-review.googlesource.com/24452
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-24 20:05:34 +00:00
Rob Pike
2834526fd9 time: update documentation for Duration.String regarding the zero value
It was out of date; in 1.7 the format changes to 0s.
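
That is, in 1.7:

	fmt.Println(time.Duration(0)) // 0s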

Change-Id: I2013a1b0951afc5607828f313641b51c74433257
Reviewed-on: https://go-review.googlesource.com/24421
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-24 19:41:45 +00:00
Sameer Ajmani
e4dc7f1ed2 context: update documentation on cancelation and go vet check.
Also replace double spaces after periods with single spaces.

Change-Id: Iedaea47595c5ce64e7e8aa3a368f36d49061c555
Reviewed-on: https://go-review.googlesource.com/24431
Reviewed-by: Alan Donovan <adonovan@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-24 19:21:21 +00:00
David Crawshaw
3c6ed76da2 reflect: avoid lock for some NumMethod()==0 cases
The encoding/json package uses NumMethod()==0 as a fast check for
interface satisfaction. In the case when a type has no methods at
all, we don't need to grab the RWMutex.
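
The shape of the fast path, roughly (a sketch, not the package's
exact code):

	func isMarshaler(t reflect.Type) bool {
		if t.NumMethod() == 0 {
			return false // no methods at all: no lock needed
		}
		return t.Implements(reflect.TypeOf((*json.Marshaler)(nil)).Elem())
	}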

Improves JSON decoding benchmark on linux/amd64:

	name           old time/op    new time/op    delta
	CodeDecoder-8    44.2ms ± 2%    40.6ms ± 1%  -8.11%  (p=0.000 n=10+10)

	name           old speed      new speed      delta
	CodeDecoder-8  43.9MB/s ± 2%  47.8MB/s ± 1%  +8.82%  (p=0.000 n=10+10)

For #16117

Change-Id: Id717e7fcd2f41b7d51d50c26ac167af45bae3747
Reviewed-on: https://go-review.googlesource.com/24433
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-24 18:01:08 +00:00
Cherry Zhang
8eadb89266 [dev.ssa] cmd/compile: move tuple selectors to generator's block in CSE
CSE may substitute a tuple generator with another one in a different
block. In this case, since we want tuple selectors to stay together
with the tuple generator, copy the selector to the new generator's
block and rewrite its use.

Op.isTupleGenerator and Op.isTupleSelector are introduced to assert
tuple ops. Use it in tighten as well.

Updates #15365.

Change-Id: Ia9e8c734b9cc3bc9fca4a2750041eef9cdfac5a5
Reviewed-on: https://go-review.googlesource.com/24137
Reviewed-by: David Chase <drchase@google.com>
2016-06-24 17:33:39 +00:00
Brad Fitzpatrick
f9d6b909b1 A+C: automated updates
Add Aaron Zinman (corporate CLA for Empirical Interfaces Inc.)
Add Ayanamist Yang (individual CLA)
Add Christian Couder (individual CLA)
Add Eric Engestrom (individual CLA)
Add Filippo Valsorda (individual CLA)
Add Gyu-Ho Lee (individual CLA)
Add H. İbrahim Güngör (individual CLA)
Add Jacob Hoffman-Andrews (individual CLA)
Add Jason Barnett (individual CLA)
Add Joe Farrell (individual CLA)
Add Julian Kornberger (individual CLA)
Add Kris Rousey (corporate CLA for Google Inc.)
Add Miguel Mendez (individual CLA)
Add Nic Day (individual CLA)
Add Paulo Casaretto (individual CLA)
Add Philip Børgesen (individual CLA)
Add Quan Tran (individual CLA)
Add Sai Cheemalapati (corporate CLA for Google Inc.)
Add Sasha Sobol (individual CLA)
Add Seth Vargo (individual CLA)
Add Simon Thulbourn (individual CLA)
Add Wisdom Omuya (individual CLA)

Updates #12042

Change-Id: Ie8ab5e3500ee62000c0b176d4d71340446e72ab7
Reviewed-on: https://go-review.googlesource.com/24420
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-24 02:21:02 +00:00
David Crawshaw
e75c899a10 reflect: optimize (reflect.Type).Name
Improves JSON decoding on linux/amd64.

name                   old time/op    new time/op    delta
CodeUnmarshal-40         89.3ms ± 2%    86.3ms ± 2%  -3.31%  (p=0.000 n=22+22)

name                   old speed      new speed      delta
CodeUnmarshal-40       21.7MB/s ± 2%  22.5MB/s ± 2%  +3.44%  (p=0.000 n=22+22)

Updates #16117

Change-Id: I52acf31d7729400cfe6693e46292d41e1addba3d
Reviewed-on: https://go-review.googlesource.com/24410
Run-TryBot: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-23 18:19:52 +00:00
David Crawshaw
e369490fb7 cmd/compile, etc: bring back ptrToThis
This was removed in CL 19695 but it slows down reflect.New, which ends
up on the hot path of things like JSON decoding.

There is no immediate cost in binary size, but it will make it harder to
further shrink run time type information in Go 1.8.

Before

	BenchmarkNew-40         30000000                36.3 ns/op

After

	BenchmarkNew-40         50000000                29.5 ns/op

Fixes #16161
Updates #16117

Change-Id: If7cb7f3e745d44678f3f5cf3a5338c59847529d2
Reviewed-on: https://go-review.googlesource.com/24400
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-23 17:39:38 +00:00
Ian Lance Taylor
aa6345d3c9 testing: document that logs are dumped to standard output
Since at least 1.0.3, the testing package has said that logs are dumped
to standard error, but has in fact dumped the logs to standard output.
We could change it to dump to standard error, but after doing it this
way for so long I think it's better to change the docs.

Fixes #16138.

Change-Id: If39c7ce91f51c7113f33ebabfb8f84fd4611b9e1
Reviewed-on: https://go-review.googlesource.com/24311
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-23 04:31:19 +00:00
Ian Lance Taylor
bc3bcfd4e7 html/template: update security model link
Fixes #16148.

Change-Id: Ifab773e986b768602476824d005eea9200761236
Reviewed-on: https://go-review.googlesource.com/24327
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-23 04:30:07 +00:00
Ian Lance Taylor
b31ec5c564 cmd/yacc: error rather than panic when TEMPSIZE is too small
I tried simply increasing the size of the slice but then I got an error
because NSTATES was too small. Leaving a real fix for after 1.7.

Update #16144.

Change-Id: I8676772cb79845dd4ca1619977d4d54a2ce6de59
Reviewed-on: https://go-review.googlesource.com/24321
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-23 04:29:35 +00:00
Keith Randall
0dae2fd149 cmd/objdump: fix disassembly suffixes
MOVB $1, (AX) was being disassembled as MOVL $1, (AX).

Use the memory size to override the standard size.
Fix the tests.

Fixes #15922

Change-Id: If92fe74c33a21e5427c8c5cc97dd15e087edb860
Reviewed-on: https://go-review.googlesource.com/23608
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-23 02:28:18 +00:00
Michael Munday
395f6ebaf9 CONTRIBUTORS: add people who contributed to s390x port (IBM CLA)
Add Bill O'Farrell (corporate CLA for IBM)
Add Karan Dhiman (corporate CLA for IBM)
Add Sam Ding (corporate CLA for IBM)
Add Tristan Amini (corporate CLA for IBM)
Add Yu Heng Zhang (corporate CLA for IBM)
Add Yu Xuan Zhang (corporate CLA for IBM)

Change-Id: I9ab15e33954afc2c208fc2e420a72c5a4d865f9b
Reviewed-on: https://go-review.googlesource.com/24350
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-22 21:15:51 +00:00
Alan Donovan
4764d6fd6e cmd/vet/internal/cfg: don't crash on malformed goto statement
Change-Id: Ib285c02e240f02e9d5511bd448163ec1d4e75516
Reviewed-on: https://go-review.googlesource.com/24323
Reviewed-by: Rob Pike <r@golang.org>
2016-06-22 17:09:26 +00:00
Alan Donovan
f2c13d713d cmd/vet: fix a crash in lostcancel check
Fixes issue 16143

Change-Id: Id9d257aee54d31fbf0d478cb07339729cd9712c0
Reviewed-on: https://go-review.googlesource.com/24325
Reviewed-by: Rob Pike <r@golang.org>
2016-06-22 16:57:45 +00:00
Robert Griesemer
1f446432dd cmd/compile: fix error msg mentioning different packages with same name
This is a regression from 1.6. The respective code in importimport
(export.go) was not exactly replicated with the new importer. Also
copied over the missing cyclic import check.

Added test cases.

Fixes #16133.

Change-Id: I1e0a39ff1275ca62a8054874294d400ed83fb26a
Reviewed-on: https://go-review.googlesource.com/24312
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Robert Griesemer <gri@golang.org>
2016-06-22 00:12:55 +00:00
Robert Griesemer
845992eeed test: add -s flag to commands understood by run.go
If -s is specified, each file is considered a separate
package even if multiple files have the same package names.

For instance, the action and flag "errorcheckdir -s"
will compile all files in the respective directory as
individual packages.

Change-Id: Ic5c2f9e915a669433f66c2d3fe0ac068227a502f
Reviewed-on: https://go-review.googlesource.com/24313
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-22 00:06:19 +00:00
Alan Donovan
eaf4ad6f74 doc: describe vet -lostcancel in go1.7 release notes
Change-Id: Ie1c95fd0869307551bfcf76bf45c13372723fbba
Reviewed-on: https://go-review.googlesource.com/24288
Reviewed-by: Rob Pike <r@golang.org>
2016-06-21 18:25:53 +00:00
Alan Donovan
b65cb7f198 cmd/vet: -lostcancel: check for discarded result of context.WithCancel
The cfg subpackage builds a control flow graph of ast.Nodes.
The lostcancel module checks this graph to find paths, from a call to
WithCancel to a return statement, on which the cancel variable is
not used.  (A trivial case is simply assigning the cancel result to
the blank identifier.)

In a sample of 50,000 source files, the new check found 2068 blank
assignments and 118 return-before-cancel errors.  I manually inspected
20 of the latter and didn't find a single false positive among them.
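The flagged and fixed patterns, as a sketch:

	package main

	import (
		"context"
		"time"
	)

	// bad is what the check flags: the CancelFunc is discarded, so the
	// context's subtree is only released when parent is canceled.
	func bad(parent context.Context) context.Context {
		ctx, _ := context.WithCancel(parent)
		return ctx
	}

	// good releases the context on every return path.
	func good(parent context.Context) {
		ctx, cancel := context.WithCancel(parent)
		defer cancel()
		select {
		case <-ctx.Done():
		case <-time.After(10 * time.Millisecond):
		}
	}

	func main() {
		good(context.Background())
		_ = bad(context.Background())
	}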

Change-Id: I84cd49445f9f8d04908b04881eb1496a96611205
Reviewed-on: https://go-review.googlesource.com/24150
Reviewed-by: Robert Griesemer <gri@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2016-06-21 14:58:33 +00:00
qeed
d282427248 cmd/cgo: error, not panic, if not enough arguments to function
Fixes #16116.

Change-Id: Ic3cb0b95382bb683368743bda49b4eb5fdcc35c0
Reviewed-on: https://go-review.googlesource.com/24286
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-21 04:32:04 +00:00
Rob Pike
86b0310185 text/template: clarify the default formatting used for values
Fixes #16105.

Change-Id: I94467f2adf861eb38f3119ad30d46a87456d5305
Reviewed-on: https://go-review.googlesource.com/24281
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-21 02:15:44 +00:00
Ian Lance Taylor
252eda470a cmd/pprof: don't use offset if we don't have a start address
The test is in the runtime package because there are other tests of
pprof there. At some point we should probably move them all into a pprof
testsuite.

Fixes #16128.

Change-Id: Ieefa40c61cf3edde11fe0cf04da1debfd8b3d7c0
Reviewed-on: https://go-review.googlesource.com/24274
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-21 01:44:38 +00:00
Ian Lance Taylor
09834d1c08 runtime: panic with the right error on iface conversion
A straight conversion from a type T to an interface type I, where T does
not implement I, should always panic with an interface conversion error
that shows the missing method.  This was not happening if the conversion
was done once using the comma-ok form (the result would not be OK) and
then again in a straight conversion.  Due to an error in the runtime
package the second conversion was failing with a nil pointer
dereference.

Fixes #16130.
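Both forms side by side (sketch):

	package main

	import "fmt"

	type I interface{ M() }
	type T struct{}

	func main() {
		var x interface{} = T{}
		_, ok := x.(I) // comma-ok form: ok is false, no panic
		fmt.Println(ok)
		defer func() { fmt.Println("recovered:", recover()) }()
		_ = x.(I) // straight form: panics, naming the missing method M
	}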

Change-Id: I8b9fca0f1bb635a6181b8b76de8c2385bb7ac2d2
Reviewed-on: https://go-review.googlesource.com/24284
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Michel Lespinasse <walken@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-06-21 01:43:42 +00:00
Ian Lance Taylor
659b9a19aa runtime: set PPROF_TMPDIR before running pprof
Fixes #16121.

Change-Id: I7b838fb6fb9f098e6c348d67379fdc81fb0d69a4
Reviewed-on: https://go-review.googlesource.com/24270
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
2016-06-20 23:58:59 +00:00
Andrew Gerrand
109823ec93 cmd/go: for generate, use build context values for GOOS/GOARCH
Fixes #16120

Change-Id: Ia352558231e00baab5c698e93d7267564c07ec0c
Reviewed-on: https://go-review.googlesource.com/24242
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Andrew Gerrand <adg@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-20 23:55:43 +00:00
Ian Lance Taylor
e1a6e71e74 test: add missing copyright notice
Change-Id: I2a5353203ca2958fa37fc7a5ea3f22ad4fc62b0e
Reviewed-on: https://go-review.googlesource.com/24282
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-06-20 23:46:33 +00:00
Andrew Gerrand
349f0fb89a doc: update architectures on source install instructions
Fixes #16099

Change-Id: I334c1f04dfc98c4a07e33745819d890b5fcb1673
Reviewed-on: https://go-review.googlesource.com/24243
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-06-20 21:59:22 +00:00
Mikio Hara
20fd1bb8bd doc/go1.7.html: don't mention obsolete RFC
Change-Id: Ia978c10a97e4c24fd7cf1fa4a7c3bd886d20bbf8
Reviewed-on: https://go-review.googlesource.com/24241
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-20 01:44:37 +00:00
Emmanuel Odeke
153d31da16 doc/go1.7.html: net/http RFC 2616 conformance + timeoutHandler on empty body
- Mention RFC 2616 conformance, in which the server now only sends one
"Transfer-Encoding" header when "chunked" is explicitly set.
- Mention that a timeout handler now sends a 200 status code on
encountering an empty response body instead of sending back 0.

Change-Id: Id45e2867390f7e679ab40d7a66db1f7b9d92ce17
Reviewed-on: https://go-review.googlesource.com/24250
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-20 00:54:17 +00:00
Alex Brainman
691c5c1568 debug/pe: handle files with no string table
pecoff.doc (https://goo.gl/ayvckk) in section 5.6 says:

Immediately following the COFF symbol table is the COFF string table.
The position of this table is found by taking the symbol table address
in the COFF header, and adding the number of symbols multiplied by
the size of a symbol.

So it is unclear what to do when the symbol table address is 0.
Let's assume the executable does not have a string table.
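The offset computation described above, sketched with the debug/pe API:

	package main

	import (
		"debug/pe"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := pe.Open(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if f.FileHeader.PointerToSymbolTable == 0 {
			fmt.Println("no symbol table; assume no string table")
			return
		}
		const coffSymbolSize = 18 // bytes per COFF symbol record
		off := int64(f.FileHeader.PointerToSymbolTable) +
			int64(f.FileHeader.NumberOfSymbols)*coffSymbolSize
		fmt.Println("string table offset:", off)
	}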

Added a new test with an executable that has no symbol table. The

gcc -s testdata\hello.c -o testdata\gcc-386-mingw-no-symbols-exec.

command was used to generate the executable.

Fixes #16084

Change-Id: Ie74137ac64b15daadd28e1f0315f3b62d1bf2059
Reviewed-on: https://go-review.googlesource.com/24200
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-19 05:18:09 +00:00
Michael Munday
f3d54789f7 cmd/compile: use power5 instructions for uint64 to float casts
Use the same technique as mips64 for these casts (CL 22835).

We could use the FCFIDU instruction for ppc64le; however, it seems
better to keep it the same as ppc64 for now.

Updates #15539, updates #16004.

Change-Id: I550680e485327568bf3238c4615a6cc8de6438d7
Reviewed-on: https://go-review.googlesource.com/24191
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-17 19:15:29 +00:00
Austin Clements
9e8fa1e99c runtime: eliminate poisonStack checks
We haven't used poisonStack since we switched to 1-bit stack maps
(4d0f3a1), but the checks are still there. However, nothing prevents
us from genuinely allocating an object at this address on 32-bit and
causing the runtime to crash claiming that it's found a bad pointer.

Since we're not using poisonStack anyway, just pull it out.

Fixes #15831.

Change-Id: Ia6ef604675b8433f75045e369f5acd4644a5bb38
Reviewed-on: https://go-review.googlesource.com/24211
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2016-06-17 15:18:39 +00:00
Austin Clements
fca9fc52c8 runtime: fix stale comment in lfstack
Change-Id: I6ef08f6078190dc9df0b2df4f26a76456602f5e8
Reviewed-on: https://go-review.googlesource.com/24176
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-06-16 19:45:33 +00:00
Austin Clements
79f2f008a3 cmd/dist: make zosarch.go deterministic
Currently zosarch.go is written out in non-deterministic map order.
Sort the keys and write it out in sorted order to make the generated
file contents deterministic.
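The usual fix, sketched in Go: iterate over sorted keys rather than the map itself.

	package main

	import (
		"fmt"
		"sort"
	)

	func main() {
		m := map[string]bool{"linux/amd64": true, "darwin/arm": true, "plan9/386": false}
		keys := make([]string, 0, len(m))
		for k := range m {
			keys = append(keys, k)
		}
		sort.Strings(keys) // fixed order, regardless of map iteration order
		for _, k := range keys {
			fmt.Printf("%q: %v,\n", k, m[k])
		}
	}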

Change-Id: Id490f0e8665a2c619c5a7a00a30f4fc64f333258
Reviewed-on: https://go-review.googlesource.com/24174
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-16 19:32:33 +00:00
Hana Kim
c3818e56d0 internal/trace: err if binary is not supplied for old trace
Change-Id: Id25c90993c4cbb7449d7031301b6d214a67d7633
Reviewed-on: https://go-review.googlesource.com/24134
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-16 16:22:03 +00:00
Josh Bleecher Snyder
8086ce44c4 [dev.ssa] cmd/compile: unify OpARMMOVWaddr cases
Minor code cleanup. Done as part of understanding
OpARMMOVWaddr, since other architectures will
need to do something similar.

Change-Id: Iea2ecf3defb4f884e63902c369cd55e4647bce7a
Reviewed-on: https://go-review.googlesource.com/24157
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-06-16 14:34:57 +00:00
Josh Bleecher Snyder
22d1318e7b [dev.ssa] cmd/compile: refactor out CheckLoweredPhi
This will be used verbatim in other architectures.

Change-Id: I307891ae597d797fd45f296b6a38ffe9fac6b975
Reviewed-on: https://go-review.googlesource.com/24155
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-16 14:34:28 +00:00
Josh Bleecher Snyder
a2beee000b [dev.ssa] cmd/compile: improve special register error checking
Provide better diagnostic messages.

Use an int for numRegs comparisons,
to avoid asking whether a uint8 is > 255.

Change-Id: I33ae193ce292b24b369865abda3902c3207d7d3f
Reviewed-on: https://go-review.googlesource.com/24135
Reviewed-by: Keith Randall <khr@golang.org>
2016-06-16 14:34:01 +00:00
Josh Bleecher Snyder
d0fa6c2f9e [dev.ssa] cmd/compile: add and use SSAReg
This will be needed by other architectures as well.
Put a cleaner encapsulation around it.

Change-Id: I0ac25d600378042b2233301678e9d037e20701d8
Reviewed-on: https://go-review.googlesource.com/24154
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-06-16 14:12:30 +00:00
Ian Lance Taylor
ea2ac3fe5f runtime: remove useless loop from CgoCCodeSIGPROF test program
I verified that the test fails if I undo the change that it tests for.

Updates #14732.

Change-Id: Ib30352580236adefae946450ddd6cd65a62b7cdf
Reviewed-on: https://go-review.googlesource.com/24151
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
2016-06-16 03:52:18 +00:00
Matthew Dempsky
d78d0de4d1 go/ast: fix comments misinterpreted as documentation
The comments describing blocks of Pos/End implementations for various
node types are being misinterpreted as documentation for BadDecl,
BadExpr, BadStmt, and ImportSpec's Pos methods.

Change-Id: I935b0bc38dbc13e9305f3efeb437dd3a6575d9a1
Reviewed-on: https://go-review.googlesource.com/24152
Reviewed-by: Robert Griesemer <gri@golang.org>
2016-06-15 20:40:38 +00:00
Cherry Zhang
93b8aab5c9 [dev.ssa] cmd/compile: handle GetG on ARM
Use the hardware g register (R10) for GetG, and allow g to appear on
the LHS of some ops.

Progress on SSA backend for ARM. Now everything compiles and runs.

Updates #15365.

Change-Id: Icdf93585579faa86cc29b1e17ab7c90f0119fc4e
Reviewed-on: https://go-review.googlesource.com/23952
Reviewed-by: David Chase <drchase@google.com>
2016-06-15 15:36:35 +00:00
Ian Lance Taylor
26d6dc6bf8 runtime: if the test program hangs, try to get a stack trace
This is an attempt to get more information for #14809, which seems to
occur rarely.

Updates #14809.

Change-Id: Idbeb136ceb57993644e03266622eb699d2685d02
Reviewed-on: https://go-review.googlesource.com/24110
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
Reviewed-by: Austin Clements <austin@google.com>
2016-06-15 15:03:48 +00:00
Cherry Zhang
48cc3c4b58 syscall: skip TestUnshare if kernel does not support net namespace
Fixes #16056.

Change-Id: Ic3343914742713851b8ae969b101521f25e85e7b
Reviewed-on: https://go-review.googlesource.com/24132
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-15 11:41:49 +00:00
Andrew Gerrand
0ec62565f9 net/http: pass through server side Transfer-Encoding headers
Fixes #16063

Change-Id: I2e8695beb657b0aef067e83f086828d8857787ed
Reviewed-on: https://go-review.googlesource.com/24130
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-15 03:10:47 +00:00
Sameer Ajmani
c4692da923 context: document how to release resources associated with Contexts.
Some users don't realize that creating a Context with a CancelFunc
attaches a subtree to the parent, and that that subtree is not released
until the CancelFunc is called or the parent is canceled.  Make this
clear early in the package docs, so that people learning about this
package have the right conceptual model.
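The conceptual model, sketched:

	package main

	import "context"

	func main() {
		parent, cancelParent := context.WithCancel(context.Background())
		child, cancelChild := context.WithCancel(parent)

		cancelChild()  // releases child's subtree from parent immediately
		cancelParent() // canceling the parent would also cancel child

		<-child.Done()
	}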

Change-Id: I7c77a546c19c3751dd1f3a5bc827ad106dd1afbf
Reviewed-on: https://go-review.googlesource.com/24090
Reviewed-by: Alan Donovan <adonovan@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-15 00:30:46 +00:00
Ian Lance Taylor
68697a3e82 net: don't run TestLookupDotsWithLocalSource in short mode
Do run it on the builders, though.

Fixes #15881.

Change-Id: Ie42204d553cb18547ffd6441afc261717bbd9205
Reviewed-on: https://go-review.googlesource.com/24111
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-14 22:50:09 +00:00
Mikio Hara
9208ed3224 os: fix blockUntilWaitable on freebsd/{386,arm}
The previous fix was wrong because it rested on two misunderstandings
of the freebsd32 calling convention:
- that the 32-bit id1 is always the upper half of the 64-bit id; in
  fact, which half it occupies depends on machine endianness.
- that the 32-bit ARM calling convention doesn't conform to
  freebsd32_args; in fact, it does.

This change fixes the bugs and makes blockUntilWaitable work correctly
on freebsd/{386,arm}.

Fixes #16064.

Change-Id: I820c6d01d59a43ac4f2ab381f757c03b14bca75e
Reviewed-on: https://go-review.googlesource.com/24064
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-14 22:39:56 +00:00
David Crawshaw
af0fc83985 cmd/compile, etc: handle many struct fields
This adds 8 bytes of binary size to every type that has methods. It is
the smallest change I could come up with for 1.7.

Fixes #16037

Change-Id: Ibe15c3165854a21768596967757864b880dbfeed
Reviewed-on: https://go-review.googlesource.com/24070
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-14 15:32:34 +00:00
Keith Randall
0393ed8201 [dev.ssa] Merge remote-tracking branch 'origin/master' into mergebranch
Change-Id: Idd150294aaeced0176b53d6b95852f5d21ff4fdc
2016-06-14 07:34:09 -07:00
Ian Lance Taylor
53242e49b1 crypto/x509: don't ignore asn1.Marshal error
I don't see how the call could fail, so, no test. Just a code cleanup in
case it can fail in the future.

Fixes #15987.

Change-Id: If4af0d5e7d19cc8b13fb6a4f7661c37fb0015e83
Reviewed-on: https://go-review.googlesource.com/23860
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
2016-06-14 05:17:57 +00:00
Ian Lance Taylor
0deb49f9c0 cmd/go: include .syso files even if CGO_ENABLED=0
A .syso file may include information that should go into the object file
that is not object code, and should be included even if not using cgo.
The example in the issue is a Windows manifest file.

Fixes #16050.

Change-Id: I1f4f3f80bb007e84d153ca2d26e5919213ea4f8d
Reviewed-on: https://go-review.googlesource.com/24032
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
2016-06-14 03:44:27 +00:00
Ian Lance Taylor
9273e25ecc cmd/go: remove obsolete comment referring to deleted parameter
The dir parameter was removed in https://golang.org/cl/5732045.

Fixes #15503.

Change-Id: I02a6d8317233bea08633715a095ea2514822032b
Reviewed-on: https://go-review.googlesource.com/24011
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <shurcool@gmail.com>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-14 03:33:45 +00:00
Mikio Hara
cab87a60de os: fix build on freebsd/arm
Change-Id: I21fad94ff94e342ada18e0e41ca90296d030115f
Reviewed-on: https://go-review.googlesource.com/24061
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-14 01:42:36 +00:00
Mikio Hara
5d876e3e2e os: use wait6 to avoid wait/kill race on freebsd
This change is a followup to https://go-review.googlesource.com/23967
for FreeBSD.

Updates #13987.
Updates #16028.

Change-Id: I0f0737372fce6df89d090fe9847305749b79eb4c
Reviewed-on: https://go-review.googlesource.com/24021
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-13 22:13:56 +00:00
Mikio Hara
ccd9a55609 os: use waitid to avoid wait/kill race on darwin
This change is a followup to https://go-review.googlesource.com/23967
for Darwin.

Updates #13987.
Updates #16028.

Change-Id: Ib1fb9f957fafd0f91da6fceea56620e29ad82b00
Reviewed-on: https://go-review.googlesource.com/24020
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-13 22:13:12 +00:00
Ian Lance Taylor
84d8aff94c runtime: collect stack trace if SIGPROF arrives on non-Go thread
Fixes #15994.

Change-Id: I5aca91ab53985ac7dcb07ce094ec15eb8ec341f8
Reviewed-on: https://go-review.googlesource.com/23891
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-13 21:43:19 +00:00
Keith Randall
5701174c52 cmd/link: put padding between functions, not at the end of a function
Functions should be declared to end after the last real instruction, not
after the last padding byte. We achieve this by adding the padding while
assembling the text section in the linker instead of adding the padding
to the function symbol in the compiler. This change makes dtrace happy.

TODO: check that this works with external linking

Fixes #15969

Change-Id: I973e478d0cd34b61be1ddc55410552cbd645ad62
Reviewed-on: https://go-review.googlesource.com/24040
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-13 14:05:52 +00:00
Cherry Zhang
c40dcff2f2 [dev.ssa] cmd/compile: use MOVWaddr for address on ARM
Introduce an op MOVWaddr for addresses on ARM, instead of overusing
ADDconst.

Mark MOVWaddr as rematerializable. This fixes a liveness problem: if
it were not rematerializable, the address of a variable might be
spilled, and a later use of the address might just load the spilled
value without mentioning the variable, so the liveness code could
think the variable is dead prematurely.

Update #15365.

Change-Id: Ib0b0fa826bdb75c9e6bb362b95c6cf132cc6b1c0
Reviewed-on: https://go-review.googlesource.com/23942
Reviewed-by: David Chase <drchase@google.com>
2016-06-13 12:55:51 +00:00
Cherry Zhang
e3a6d00876 [dev.ssa] cmd/compile: ensure OffPtr has pointer type
SSA treats SP as constant throughout a function, and so is OffPtr [off] SP.
When the stack moves, spilled OffPtr values become invalid if they are
not pointer-typed.

(Currently this is fine because of the optimization rules that fold
OffPtr into Load/Store. But that should be an optimization, not a
requirement.)

Updates #15365.

Change-Id: I76cf4008dfdc169e1cb5a55a2605b6678efc915d
Reviewed-on: https://go-review.googlesource.com/23941
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-06-13 12:55:30 +00:00
David Chase
595426c0d9 cmd/compile: fix OASWB rewriting in racewalk
The special case for rewriting OAS inits omitted OASWB; added that
and OAS2FUNC. The special case cannot be the default case; that
causes racewalk to fail in horrible ways.

Fixes #16008.

Change-Id: Ie0d2f5735fe9d8255a109597b36d196d4f86703a
Reviewed-on: https://go-review.googlesource.com/23954
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-12 23:58:25 +00:00
Dmitri Shuralyov
2ba3d5fc96 cmd/go: remove invalid space in import comment docs
Generate the package comment in alldocs.go using line comments rather
than general comments. This scales better: general comments cannot
contain the "*/" character sequence, while line comments place no
restrictions on the comment text.

Remove the dependency on sed, an external command that is neither
cross-platform nor go-gettable.

Remove trailing whitespace from usage string in test.go. It's unnecessary.

Fixes #16030.
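A small illustration of the difference:

	// A line comment may safely mention */ because it has no
	// terminator sequence.

	/* A general comment ends at the first star-slash, so generated
	   text containing that sequence cannot be wrapped this way. */
	package p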

Change-Id: I3c0bc9955e7c7603c3d1fb4878218b0719d02e04
Reviewed-on: https://go-review.googlesource.com/23968
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-12 04:13:07 +00:00
Keith Randall
c83e6f50d9 runtime: aeshash, xor seed in earlier
Instead of doing:

x = input
one round of aes on x
x ^= seed
two rounds of aes on x

Do:

x = input
x ^= seed
three rounds of aes on x

This change provides some additional seed-dependent scrambling
which should help prevent collisions.
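In Go-like pseudocode, with a stand-in mixing function for the AESENC
rounds done in assembly:

	package main

	import "fmt"

	// round stands in for one AES round; any mixer shows the reordering.
	func round(x uint64) uint64 { return x*0x9e3779b97f4a7c15 ^ x>>29 }

	func hashOld(x, seed uint64) uint64 {
		x = round(x) // one round before the seed is mixed in
		x ^= seed
		return round(round(x)) // two rounds after
	}

	func hashNew(x, seed uint64) uint64 {
		x ^= seed // seed first, so all three rounds depend on it
		return round(round(round(x)))
	}

	func main() { fmt.Println(hashOld(1, 2), hashNew(1, 2)) }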

Change-Id: I02c774d09c2eb6917cf861513816a1024a9b65d7
Reviewed-on: https://go-review.googlesource.com/23577
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-11 00:35:47 +00:00
Ian Lance Taylor
cea29c4a35 os: on GNU/Linux use waitid to avoid wait/kill race
On systems that support the POSIX.1-2008 waitid function, we can use it
to block until a wait will succeed. This avoids a possible race
condition: if a program calls p.Kill/p.Signal and p.Wait from two
different goroutines, then it is possible for the wait to complete just
before the signal is sent. In that case, it is possible that the system
will start a new process using the same PID between the wait and the
signal, causing the signal to be sent to the wrong process. The
Process.isdone field attempts to avoid that race, but there is a small
gap of time between when wait returns and isdone is set when the race
can occur.

This CL avoids that race by using waitid to wait until the process has
exited without actually collecting the PID. Then it sets isdone, then
waits for any active signals to complete, and only then collects the PID.

No test because any plausible test would require starting enough
processes to recycle all the process IDs.
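The racy usage this protects against, as a sketch:

	package main

	import "os/exec"

	func main() {
		cmd := exec.Command("sleep", "60")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		go cmd.Process.Kill() // signal from one goroutine...
		cmd.Wait()            // ...wait from another: the race closed here
	}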

Update #13987.
Update #16028.

Change-Id: Id2939431991d3b355dfb22f08793585fc0568ce8
Reviewed-on: https://go-review.googlesource.com/23967
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-10 18:10:14 +00:00
Robert Griesemer
e980a3d885 go/parser: document that parse functions need valid token.FileSet
+ panic with an explicit error if no file set is provided

(Not providing a file set is invalid use of the API; panic
is the appropriate action rather than returning an error.)

Fixes #16018.
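Correct use of the API (sketch):

	package main

	import (
		"fmt"
		"go/parser"
		"go/token"
	)

	func main() {
		fset := token.NewFileSet() // must be non-nil; nil now panics
		f, err := parser.ParseFile(fset, "p.go", "package p", 0)
		if err != nil {
			panic(err)
		}
		fmt.Println(f.Name.Name) // "p"
	}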

Change-Id: I207f5b2a2e318d65826bdd9522fce46d614c24ee
Reviewed-on: https://go-review.googlesource.com/24010
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-10 17:43:35 +00:00
Ian Lance Taylor
fee02d270b cmd/go: clarify go get documentation
Make the documentation for `go get` match the documentation for `go
install`, since `go get` essentially invokes `go install`.

Update #15825.

Change-Id: I374d80efd301814b6d98b86b7a4a68dd09704c92
Reviewed-on: https://go-review.googlesource.com/23925
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-10 04:12:50 +00:00
Michael Munday
5f3eb43288 syscall: add a padding field to EpollEvent on s390x
Fixes #16021.

Change-Id: I55df38bbccd2641abcb54704115002a9aa04325d
Reviewed-on: https://go-review.googlesource.com/23962
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-10 03:54:43 +00:00
Jess Frazelle
8042bfe347 encoding/csv: update doc about comments whitespace
This patch updates the encoding/csv documentation on comment
whitespace to reflect that leading whitespace before the hash means
the line is not treated as a comment.

Fixes #13775.
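A sketch of the documented behavior:

	package main

	import (
		"encoding/csv"
		"fmt"
		"strings"
	)

	func main() {
		in := "#skipped\n  #kept,as,data\na,b,c\n"
		r := csv.NewReader(strings.NewReader(in))
		r.Comment = '#'
		recs, err := r.ReadAll()
		fmt.Println(recs, err) // [[  #kept as data] [a b c]] <nil>
	}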

Change-Id: Ia468c75b242a487b4b2b4cd3d342bfb8e07720ba
Reviewed-on: https://go-review.googlesource.com/23302
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-10 01:00:09 +00:00
Cherry Zhang
cbc26869b7 runtime: set $sp before $pc in gdb python script
When setting $pc, gdb does a backtrace using the current value of $sp,
and it may complain if $sp does not match that $pc (although the
assignment went through successfully).

This happens with ARM SSA backend: when setting $pc it prints
> Cannot access memory at address 0x0

As well as occasionally on MIPS64:
> warning: GDB can't find the start of the function at 0xc82003fe07.
> ...

Setting $sp before setting $pc makes it happy.

Change-Id: Idd96dbef3e9b698829da553c6d71d5b4c6d492db
Reviewed-on: https://go-review.googlesource.com/23940
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-06-09 20:02:59 +00:00
Keith Randall
e3f1c66f31 cmd/compile: for tail calls in stubs, ensure args are alive
The generated code for interface stubs sometimes just messes
with a few of the args and then tail-calls to the target routine.
The args that aren't explicitly modified appear to not be used.
But they are used, by the thing we're tail calling.

Fixes #16016

Change-Id: Ib9b3a8311bb714a201daee002885fcb59e0463fa
Reviewed-on: https://go-review.googlesource.com/23960
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-06-09 19:32:51 +00:00
Ian Lance Taylor
1bdf1c3024 cmd/cgo: fix use of unsafe argument in new deferred function
The combination of https://golang.org/cl/23650 and
https://golang.org/cl/23675 did not work--they were tested separately
but not together.

The problem was that 23650 introduced deferred argument checking, and
the deferred function loses the type that 23675 started requiring. The
fix is to go back to using an empty interface type in a deferred
argument check.

No new test required--fixes broken build.

Change-Id: I5ea023c5aed71d70e57b11c4551242d3ef25986d
Reviewed-on: https://go-review.googlesource.com/23961
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-06-09 19:05:19 +00:00
Ian Lance Taylor
837984f372 cmd/cgo: use function arg type for _cgoCheckPointerN function
When cgo writes a _cgoCheckPointerN function to handle unsafe.Pointer,
use the function's argument type rather than interface{}. This permits
type errors to be detected at build time rather than run time.

Fixes #13830.

Change-Id: Ic7090905e16b977e2379670e0f83640dc192b565
Reviewed-on: https://go-review.googlesource.com/23675
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-06-09 16:02:03 +00:00
Ian Lance Taylor
894803c11e time: document that RFC822/1123 don't parse all RFC formats
Fixes #14505.

Change-Id: I46196b26c9339609e6e3ef9159de38c5b50c2a1b
Reviewed-on: https://go-review.googlesource.com/23922
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-09 15:32:48 +00:00
Kenny Grant
e2a30b8ffb time: genzabbrs.go skips Feb when checking months
getAbbrs appears to check each month for a change in the time zone
abbreviation, but it starts in Dec of the previous year and skips the
month of February because of the overflow rules for AddDate.
Changing the day to 1 starts at Jan 1 and tries all months
in the current year. This isn't very important or likely to change
output as zones usually span several months. Discovered when
looking into time.AddDate behavior when adding months.

Change-Id: I685254c8d21c402ba82cc4176e9a86b64ce8f7f7
Reviewed-on: https://go-review.googlesource.com/23322
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-09 15:31:08 +00:00
Jason Barnett
6662897b2a crypto/subtle: expand abbreviation to eliminate confusion
Change-Id: I68d66fccf9cc8f7137c92b94820ce7d6f478a185
Reviewed-on: https://go-review.googlesource.com/23310
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-09 15:30:48 +00:00
Ian Lance Taylor
a8c6c4837c os: document that the runtime can write to standard error
Fixes #15970.

Change-Id: I3f7d8316069a69d0e3859aaa96bc1414487fead0
Reviewed-on: https://go-review.googlesource.com/23921
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-09 13:47:13 +00:00
Ian Lance Taylor
763883632e cmd/go: only run TestGoGetHTTPS404 where it works
The test TestGoGetHTTPS404 downloads a package that does not build on
every OS, so change it to only run where the package builds. It's not
great for the test to depend on an external package, but this is an
improvement on the current situation.

Fixes #15644.

Change-Id: I1679cee5ab1e61a5b26f4ad39dc8a397fbc0da69
Reviewed-on: https://go-review.googlesource.com/23920
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-09 03:48:34 +00:00
Cherry Zhang
fa54bf16e0 [dev.ssa] cmd/compile: fix a few bugs for SSA for ARM
- 64-bit signed right shift was wrong for shifts larger than 0x80000000.
- for Lsh-followed-by-Rsh, the intermediate value should be full int
  width, so when it is spilled MOVW should be used.
- use RET for RetJmp, so the assembler can take care of restoring LR
  for the non-leaf case.
- reserve R9 in dynlink mode. R9 is used for GOT by the assembler.

Progress on SSA backend for ARM. Still not complete.

Updates #15365.

Change-Id: I3caca256b92ff7cf96469da2feaf4868a592efc5
Reviewed-on: https://go-review.googlesource.com/23793
Reviewed-by: David Chase <drchase@google.com>
2016-06-08 20:37:31 +00:00
Cherry Zhang
225ef76c25 [dev.ssa] cmd/compile: fix scheduling of tuple ops
We want tuple-reading ops to immediately follow the tuple-generating
op, so that tuple values will not be spilled/copied.

The mechanism introduced in the previous CL cannot really prevent
tuples from interleaving. In this CL we always emit a tuple and its
selectors together. Maybe remove the tuple scores if they do not help
performance (todo).

Also let tighten not move tuple-reading ops across blocks.

In the previous CL a special case for regenerating flags with a
tuple-reading pseudo-op was added, but it did not cover the
end-of-block case. This is fixed in this CL and the condition is
generalized.

Progress on SSA backend for ARM. Still not complete.

Updates #15365.

Change-Id: I8980b34e7a64eb98153540e9e19a3782e20406ff
Reviewed-on: https://go-review.googlesource.com/23792
Reviewed-by: David Chase <drchase@google.com>
2016-06-08 20:37:13 +00:00
Cherry Zhang
f3689d1382 cmd/compile: nilcheck interface value in go/defer interface call for SSA
This matches the behavior of the legacy backend.

Fixes #15975 (if this is the intended behavior)

Change-Id: Id277959069b8b8bf9958fa8f2cbc762c752a1a19
Reviewed-on: https://go-review.googlesource.com/23820
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-08 20:35:53 +00:00
Michael Munday
0324a3f828 runtime/cgo: restore the g pointer correctly in crosscall_s390x
R13 needs to be set to g because C code may have clobbered R13.

Fixes #16006.

Change-Id: I66311fe28440e85e589a1695fa1c42416583b4c6
Reviewed-on: https://go-review.googlesource.com/23910
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-08 18:09:47 +00:00
Elias Naur
09eedc32e1 misc/android: make the exec wrapper exit code parsing more robust
Before, the Android exec wrapper expected the trailing exit code
output on its own line, like this:

PASS
exitcode=0

However, some tests can sometimes squeeze in some output after
the test harness outputs "PASS" and the newline. The
TestWriteHeapDumpFinalizers test is particularly prone to this,
since its finalizers println to standard out. When it happens, the
output looks like this:

PASS
finalizedexitcode=0

Two recent failures caused by this race:

https://build.golang.org/log/185605e1b936142c22350eef22d20e982be53c29
https://build.golang.org/log/e61cf6a050551d10360bd90be3c5f58c3eb07605

Since the "exitcode=" string is always echoed after the test output,
the fix is simple: instead of looking for the last newline in the
output, look for the last exitcode string instead.
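A sketch of the parsing strategy (the real wrapper lives in
misc/android; exitCode is a hypothetical helper):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func exitCode(out string) (int, error) {
		i := strings.LastIndex(out, "exitcode=")
		if i < 0 {
			return 0, fmt.Errorf("no exit code in output")
		}
		return strconv.Atoi(strings.TrimSpace(out[i+len("exitcode="):]))
	}

	func main() {
		code, err := exitCode("PASS\nfinalizedexitcode=0\n")
		fmt.Println(code, err) // 0 <nil>
	}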

Change-Id: Icd6e53855eeba60b982ad3108289d92549328b86
Reviewed-on: https://go-review.googlesource.com/23750
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-06-08 05:24:53 +00:00
Ian Lance Taylor
d1b5d08f34 misc/cgo/testsanitizers: don't run some TSAN tests on GCC < 7
Before GCC 7, GCC did not define __SANITIZE_THREAD__ when using TSAN,
so runtime/cgo/libcgo.h could not reliably determine whether TSAN was
in use.

Fixes #15983.

Change-Id: I5581c9f88e1cde1974c280008b2230fe5e971f44
Reviewed-on: https://go-review.googlesource.com/23833
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
2016-06-08 05:08:11 +00:00
Andrew Gerrand
f605c77bbc net/http: update bundled http2
Updates x/net/http2 to git rev 313cf39 for CLs 23812 and 23880:

	http2: GotFirstResponseByte hook should only fire once
	http2: fix data race on pipe

Fixes #16000

Change-Id: I9c3f1b2528bbd99968aa5a0529ae9c5295979d1d
Reviewed-on: https://go-review.googlesource.com/23881
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
2016-06-08 04:34:30 +00:00
Keith Randall
afad74ec30 cmd/compile: cgen_append can handle complex targets
After the liveness fix, the slices on both sides can now be
indirects of & variables. The cgen code handles those
cases just fine.

Fixes #15988

Change-Id: I378ad1d5121587e6107a9879c167291a70bbb9e4
Reviewed-on: https://go-review.googlesource.com/23863
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-06-08 00:01:09 +00:00
Keith Randall
41dd1696ab cmd/compile: fix heap dump test on android
go_android_exec is looking for "exitcode=" to decide the result
of running a test.  The heap dump test nondeterministically prints
"finalized" right at the end of the test.  When the timing is just
right, we print "finalizedexitcode=0" and confuse go_android_exec.

This failure happens occasionally on the android builders.

Change-Id: I4f73a4db05d8f40047ecd3ef3a881a4ae3741e26
Reviewed-on: https://go-review.googlesource.com/23861
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-06-07 17:34:48 +00:00
Keith Randall
2f088884ae cmd/compile: use fake package for allocating autos
Make sure auto names don't conflict with function names. Before this CL,
we confused the name a.len (the len field of the slice a) with a.len
(the function len declared on a).

Fixes #15961

Change-Id: I14913de697b521fb35db9a1b10ba201f25d552bb
Reviewed-on: https://go-review.googlesource.com/23789
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-07 06:04:23 +00:00
Andrew Gerrand
27c5dcffef cmd/dist: use "set" instead of "export" in diagnostic message
On Windows, "export" doesn't mean anything, but Windows users are the
most likely to see this message.

Fixes #15977

Change-Id: Ib09ca08a7580713cacb65d0cdc43730765c377a8
Reviewed-on: https://go-review.googlesource.com/23840
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-07 04:31:29 +00:00
Andrew Gerrand
3ba31558d1 net/http: send StatusOK on empty body with TimeoutHandler
Fixes #15948

Change-Id: Idd79859b3e98d61cd4e3ef9caa5d3b2524fd026a
Reviewed-on: https://go-review.googlesource.com/23810
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-06 21:15:40 +00:00
Andrew Gerrand
a71af25401 time: warn about correct use of a Timer's Stop/Reset methods
Updates #14038
Fixes #14383
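The documented-safe pattern, assuming no other goroutine receives
from t.C:

	package main

	import "time"

	func main() {
		t := time.NewTimer(time.Millisecond)
		time.Sleep(2 * time.Millisecond) // the timer may already have fired

		if !t.Stop() {
			<-t.C // drain before Reset so the stale event is not mistaken for the new one
		}
		t.Reset(time.Second)
	}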

Change-Id: Icf6acb7c5d13ff1d3145084544c030a778482a38
Reviewed-on: https://go-review.googlesource.com/23575
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-06 21:15:35 +00:00
Andrew Gerrand
f9b4556de0 net/http: send one Transfer-Encoding header when "chunked" set manually
Fixes #15960

Change-Id: I7503f6ede33e6a1a93cee811d40f7b297edf47bc
Reviewed-on: https://go-review.googlesource.com/23811
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-06 21:14:27 +00:00
Cherry Zhang
59e11d7827 [dev.ssa] cmd/compile: handle floating point on ARM
The machine supports (or the runtime simulates, in soft float mode)
(u)int32<->float conversions. The frontend rewrites int64<->float
conversions into calls to runtime functions.

For int64->float32 conversion, the frontend generates

.   .   AS u(100) l(10) tc(1)
.   .   .   NAME-main.~r1 u(1) a(true) g(1) l(9) x(8+0) class(PPARAMOUT) f(1) float32
.   .   .   CALLFUNC u(100) l(10) tc(1) float32
.   .   .   .   NAME-runtime.int64tofloat64 u(1) a(true) x(0+0) class(PFUNC) tc(1) used(true) FUNC-func(int64) float64

The CALLFUNC node has type float32, whereas runtime.int64tofloat64
returns float64. The legacy backend implicitly makes a float64->float32
conversion. The SSA backend does not do implicit conversion, so we
insert an explicit CONV here.

All cmd/compile/internal/gc/testdata/*_ssa.go tests passed.

Progress on SSA for ARM. Still not complete.

Update #15365.

Change-Id: I30937c8ff977271246b068f48224693776804339
Reviewed-on: https://go-review.googlesource.com/23652
Reviewed-by: Keith Randall <khr@golang.org>
2016-06-06 14:06:38 +00:00
Keith Randall
a871464e5a runtime: fix typo
Fixes #15962

Change-Id: I1949e0787f6c2b1e19b9f9d3af2f712606a6d4cf
Reviewed-on: https://go-review.googlesource.com/23786
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-05 18:10:01 +00:00
Cherry Zhang
e78d90beeb [dev.ssa] cmd/compile: handle Div, Convert, GetClosurePtr etc. on ARM
This CL adds support of Div, Mod, Convert, GetClosurePtr and 64-bit indexing
support to SSA backend for ARM.

Add tests for 64-bit indexing to cmd/compile/internal/gc/testdata/string_ssa.go.

Tests cmd/compile/internal/gc/testdata/*_ssa.go passed, except compound_ssa.go
and fp_ssa.go.

Progress on SSA for ARM. Still not complete. Essentially the only unsupported
part is floating point.

Updates #15365.

Change-Id: I269e88b67f641c25e7a813d910c96d356d236bff
Reviewed-on: https://go-review.googlesource.com/23542
Reviewed-by: David Chase <drchase@google.com>
2016-06-05 03:56:42 +00:00
Mikio Hara
0326e28f17 Revert "cmd/go: re-enable TestCgoConsistentResults on solaris"
This reverts commit b89bcc1dae.

Change-Id: Ief2f317ffc175f7e6002d0c39694876f46788c69
Reviewed-on: https://go-review.googlesource.com/23744
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
2016-06-03 22:41:47 +00:00
Mikio Hara
b89bcc1dae cmd/go: re-enable TestCgoConsistentResults on solaris
Updates #13247.

Change-Id: If5e4c9f4db05f58608b0eeed1a2312a04015b207
Reviewed-on: https://go-review.googlesource.com/23741
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-03 22:18:16 +00:00
Ian Lance Taylor
7b48020cfe cmd/cgo: check pointers for deferred C calls at the right time
We used to check them at the point of the defer statement. This change
fixes cgo to check them when the deferred function is executed.

Fixes #15921.

Change-Id: I72a10e26373cad6ad092773e9ebec4add29b9561
Reviewed-on: https://go-review.googlesource.com/23650
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-06-03 20:51:39 +00:00
Ian Lance Taylor
55559f159e doc/go1.7.html: html tidy
Change-Id: I0e07610bae641cd63769b520089f5d854d796648
Reviewed-on: https://go-review.googlesource.com/23770
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2016-06-03 20:51:17 +00:00
Stephen McQuay (smcquay)
adf261b7d6 cmd/go: match go-import package prefixes by slash
The existing implementation for path collision resolution would
incorrectly determine that:

    example.org/aa

collides with:

    example.org/a

This change splits by slash rather than comparing on a byte-by-byte
basis.

Fixes: #15947
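The element-wise prefix test, sketched with a hypothetical helper name:

	package main

	import (
		"fmt"
		"strings"
	)

	// hasPathPrefix matches whole path elements, so "example.org/a"
	// is not a prefix of "example.org/aa".
	func hasPathPrefix(s, prefix string) bool {
		if !strings.HasPrefix(s, prefix) {
			return false
		}
		return len(s) == len(prefix) || s[len(prefix)] == '/'
	}

	func main() {
		fmt.Println(hasPathPrefix("example.org/aa", "example.org/a"))  // false
		fmt.Println(hasPathPrefix("example.org/a/b", "example.org/a")) // true
	}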

Change-Id: I18b3aaafbc787c81253203cf1328bb3c4420a0c4
Reviewed-on: https://go-review.googlesource.com/23732
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
2016-06-03 19:47:37 +00:00
David Crawshaw
4b64c53c03 reflect: clear tflag for StructOf type
Fixes #15923

Change-Id: I3e56564365086ceb0bfc15db61db6fb446ab7448
Reviewed-on: https://go-review.googlesource.com/23760
Reviewed-by: Sebastien Binet <seb.binet@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-03 19:01:03 +00:00
Ian Lance Taylor
cf862478c8 runtime/cgo: add TSAN locks around mmap call
Change-Id: I806cc5523b7b5e3278d01074bc89900d78700e0c
Reviewed-on: https://go-review.googlesource.com/23736
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2016-06-03 18:26:01 +00:00
Ian Lance Taylor
b4c7f6280e doc/go1.7.html: add missing <code> and </a>
Change-Id: I5f4bf89345dc139063dcf34da653e914386bcde6
Reviewed-on: https://go-review.googlesource.com/23735
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-03 17:09:56 +00:00
Ian Lance Taylor
6901b08482 cmd/link: avoid name collision with DWARF .def suffix
Adding a .def suffix for DWARF info collided with the DWARF info,
without the suffix, for a method named def. Change the suffix to ..def
instead.

Fixes #15926.

Change-Id: If1bf1bcb5dff1d7f7b79f78e3f7a3bbfcd2201bb
Reviewed-on: https://go-review.googlesource.com/23733
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-06-03 16:56:29 +00:00
Mikio Hara
c3bd93aa26 net: don't leak test helper goroutine in TestAcceptTimeout
Fixes #15109.

Change-Id: Ibfdedd6807322ebec84bacfeb492fb53fe066960
Reviewed-on: https://go-review.googlesource.com/23742
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Marcel van Lohuizen <mpvl@golang.org>
2016-06-03 11:39:40 +00:00
Marcel van Lohuizen
9e112a3fe4 bytes: use Run method for benchmarks
Change-Id: I34ab1003099570f0ba511340e697a648de31d08a
Reviewed-on: https://go-review.googlesource.com/23427
Run-TryBot: Marcel van Lohuizen <mpvl@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
2016-06-03 07:03:03 +00:00
Michael Hudson-Doyle
26849746c9 cmd/internal/obj, runtime: fixes for defer in 386 shared libraries
Any defer in a shared object crashed when GOARCH=386. This turns out to be two
bugs:

 1) Calls to morestack were not processed to be PIC safe (must have been
    possible to trigger this another way too)
 2) jmpdefer needs to rewind the return address of the deferred function past
    the instructions that load the GOT pointer into BX, not just past the call

Bug 2) requires re-introducing a way for .s files to know when they
are being compiled for dynamic linking, but I've tried to do that in
as minimal a way as possible.

Fixes #15916

Change-Id: Ia0d09b69ec272a176934176b8eaef5f3bfcacf04
Reviewed-on: https://go-review.googlesource.com/23623
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-03 02:50:27 +00:00
Andrew Gerrand
5799973c3e cmd/go: fix staleness test for releases, also deflake it
Fixes #15933

Change-Id: I2cd6365e6d0ca1cafdc812fbfaaa55aa64b2b289
Reviewed-on: https://go-review.googlesource.com/23731
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-03 02:46:58 +00:00
Ian Lance Taylor
7825ca6a63 doc/go1.7.html: net/mail.ParseAddress is stricter
Fixes #15940.

Change-Id: Ie6da6fef235c6a251caa96d45f606c05d118a0ac
Reviewed-on: https://go-review.googlesource.com/23710
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Travis Beatty <travisby@gmail.com>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-03 01:48:45 +00:00
David Glasser
92cd6e3af9 encoding/json: fix docs on valid key names
This has been inaccurate since https://golang.org/cl/6048047.

Fixes #15317.

Change-Id: If93d2161f51ccb91912cb94a35318cf33f4d526a
Reviewed-on: https://go-review.googlesource.com/23691
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-03 00:40:59 +00:00
Mikio Hara
49c680f948 syscall: deflake TestUnshare
Change-Id: I21a08c2ff5ebb74e158723cca323574432870ba8
Reviewed-on: https://go-review.googlesource.com/23662
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-02 22:26:03 +00:00
Marcel van Lohuizen
0f5697a81d strconv: use Run for some benchmarks
This serves as an example of table-driven benchmarks, which are
analogous to the common pattern for table-driven tests.
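A minimal instance of the pattern (sketch; run with go test -bench=.):

	package strconv_test

	import (
		"strconv"
		"testing"
	)

	func BenchmarkParseInt(b *testing.B) {
		for _, tc := range []struct{ name, in string }{
			{"small", "7"},
			{"large", "123456789012345"},
		} {
			b.Run(tc.name, func(b *testing.B) {
				for i := 0; i < b.N; i++ {
					strconv.ParseInt(tc.in, 10, 64)
				}
			})
		}
	}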

Change-Id: I47f94c121a7117dd1e4ba03b3f2f8bcb5da38063
Reviewed-on: https://go-review.googlesource.com/23470
Run-TryBot: Marcel van Lohuizen <mpvl@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
2016-06-02 20:47:29 +00:00
Ian Lance Taylor
03abde4971 runtime: only permit SetCgoTraceback to be called once
Accept a duplicate call, but nothing else.

Change-Id: Iec24bf5ddc3b0f0c559ad2158339aca698601743
Reviewed-on: https://go-review.googlesource.com/23692
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-02 19:24:55 +00:00
Ian Lance Taylor
88e0ec2979 runtime/cgo: avoid races on cgo_context_function
Change-Id: Ie9e6fda675e560234e90b9022526fd689d770818
Reviewed-on: https://go-review.googlesource.com/23610
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-02 18:47:48 +00:00
Alexander Morozov
853cd1f4a6 syscall: call setgroups for no groups on GNU/Linux
Skip setgroups only for one particular case: GidMappings != nil and
GidMappingsEnableSetgroup == false and list of supplementary groups is
empty.
This patch restores pre-1.5 behavior for simple exec and still allows
using GidMappings with a non-empty Credential.

Change-Id: Ia91c77e76ec5efab7a7f78134ffb529910108fc1
Reviewed-on: https://go-review.googlesource.com/23524
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-02 17:21:34 +00:00
Steve Phillips
e90a49a0f5 doc/go1.7.html: typo fix; replace "," at end of sentence with "."
Signed-off-by: Steven Phillips <steve@tryingtobeawesome.com>

Change-Id: Ie7c3253a5e1cd43be8fa12bad340204cc6c5ca76
Reviewed-on: https://go-review.googlesource.com/23677
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-02 15:44:24 +00:00
Alberto Donizetti
04888c9770 doc/go1.7: fix typo in nsswitch.conf name
Fixes #15939

Change-Id: I120cbeac73a052fb3f328774e6d5e1534f11bf6b
Reviewed-on: https://go-review.googlesource.com/23682
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-02 15:26:44 +00:00
Sebastien Binet
905ced0e6a reflect: document StructOf embedded fields limitation
This CL documents that StructOf currently does not generate wrapper
methods for embedded fields.

Updates #15924

Change-Id: I932011b1491d68767709559f515f699c04ce70d4
Reviewed-on: https://go-review.googlesource.com/23681
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-06-02 14:29:03 +00:00
Elias Naur
14968bc1e5 cmd/dist: skip an unsupported test on darwin/arm
Fixes the darwin/arm builder (I hope)

Change-Id: I8a3502a1cdd468d4bf9a1c895754ada420b305ce
Reviewed-on: https://go-review.googlesource.com/23684
Run-TryBot: Elias Naur <elias.naur@gmail.com>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-06-02 14:02:07 +00:00
Elias Naur
6c4f8cd0d1 misc/cgo/test: fix issue9400 test on android/386
The test for #9400 relies on an assembler function that manipulates
the stack pointer. Meanwhile, it uses a global variable for
synchronization. However, position-independent code on 386 uses a
function call to fetch the base address for global variables.
That function call in turn overwrites the Go stack.

Fix that by fetching the global variable address once before the
stack register manipulation.

Fixes the android/386 builder.

Change-Id: Ib77bd80affaa12f09d582d09d8b84a73bd021b60
Reviewed-on: https://go-review.googlesource.com/23683
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-06-02 14:01:47 +00:00
Cherry Zhang
4636d02244 [dev.ssa] cmd/compile: handle 64-bit shifts on ARM
Also fix a mistake in previous CL about x8 and x16 shifts:
the shift needs ZeroExt.

Progress on SSA for ARM. Still not complete.

Updates #15365.

Change-Id: Ibc352760023d38bc6b9c5251e929fe26e016637a
Reviewed-on: https://go-review.googlesource.com/23486
Reviewed-by: David Chase <drchase@google.com>
2016-06-02 13:03:59 +00:00
Cherry Zhang
90883091ff [dev.ssa] cmd/compile: clean up hardcoded regmasks in ssa/regalloc.go
Auto-generate register masks and load them through Config.

Passed toolstash -cmp on AMD64.

Tests phi_ssa.go and regalloc_ssa.go in cmd/compile/internal/gc/testdata
passed on ARM.

Updates #15365.

Change-Id: I393924d68067f2dbb13dab82e569fb452c986593
Reviewed-on: https://go-review.googlesource.com/23292
Reviewed-by: David Chase <drchase@google.com>
2016-06-02 13:01:44 +00:00
Cherry Zhang
8756d9253f [dev.ssa] cmd/compile: decompose 64-bit integer on ARM
Introduce dec64 rules to (generically) decompose 64-bit integer
operations on 32-bit architectures. A 64-bit integer is composed and
decomposed with Int64Make/Hi/Lo ops, as for complex types.

The idea for handling Add64 is the following:

(Add64 (Int64Make xh xl) (Int64Make yh yl))
->
(Int64Make
	(Add32withcarry xh yh (Select0 (Add32carry xl yl)))
	(Select1 (Add32carry xl yl)))

where Add32carry returns a tuple (flags,uint32). Select0 and Select1
read the first and the second component of the tuple, respectively.
The two Add32carry will be CSE'd.

Similarly for multiplication, Mul32uhilo returns a tuple (hi, lo).
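
As an illustrative sketch of the Add64 decomposition (a plain-Go
analogue, not the compiler's rule syntax; add64 is a hypothetical
helper), the carry propagation corresponds to:

	func add64(xh, xl, yh, yl uint32) (zh, zl uint32) {
		zl = xl + yl         // Add32carry: low word of the sum
		var carry uint32
		if zl < xl {         // the carry-out flag from the low add
			carry = 1
		}
		zh = xh + yh + carry // Add32withcarry: high word
		return
	}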

Also add support for KeepAlive, to fix the build after the merge.

Tests addressed_ssa.go, array_ssa.go, break_ssa.go, chan_ssa.go,
cmp_ssa.go, ctl_ssa.go, map_ssa.go, and string_ssa.go in
cmd/compile/internal/gc/testdata passed.

Progress on SSA for ARM. Still not complete.

Updates #15365.

Change-Id: I7867c76785a456312de5d8398a6b3f7ca5a4f7ec
Reviewed-on: https://go-review.googlesource.com/23213
Reviewed-by: Keith Randall <khr@golang.org>
2016-06-02 13:01:09 +00:00
Elias Naur
42c51debe8 misc/cgo/test,cmd/dist: enable (more) Cgo tests on iOS
For #15919

Change-Id: I9fc38d9c8a9cc9406b551315e1599750fe212d0d
Reviewed-on: https://go-review.googlesource.com/23635
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-02 10:09:12 +00:00
Dmitry Vyukov
ba22172832 runtime: fix typo in comment
Change-Id: I82e35770b45ccd1433dfae0af423073c312c0859
Reviewed-on: https://go-review.googlesource.com/23680
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-02 06:02:01 +00:00
Anmol Sethi
15db3654b8 net/http: http.Request.Context doc fix
The comment on http.Request.Context says that the context
is canceled when the client's connection closes even though
this has not been implemented. See #15927

Change-Id: I50b68638303dafd70f77f8f778e6caff102d3350
Reviewed-on: https://go-review.googlesource.com/23672
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-06-02 05:25:46 +00:00
Andrew Gerrand
6a0fd18016 doc: mention net/http/httptrace package in release notes
Updates #15810

Change-Id: I689e18409a88c9e8941aa2e98f472c331efd455e
Reviewed-on: https://go-review.googlesource.com/23674
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-02 04:38:52 +00:00
Ian Lance Taylor
17f7461ed6 doc/go1.7.html: fix spelling of cancelation
We say "cancelation," not "cancellation."

Fixes #15928.

Change-Id: I66d545404665948a27281133cb9050eebf1debbb
Reviewed-on: https://go-review.googlesource.com/23673
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-02 04:13:08 +00:00
Michael Hudson-Doyle
d25c3eadea cmd/compile: do not generate tail calls when dynamic linking on ppc64le
When a wrapper method calls the real implementation, it's not possible to use a
tail call when dynamic linking on ppc64le. The bad scenario is when a local
call is made to the wrapper: the wrapper will call the implementation, which
might be in a different module and so set the TOC to the appropriate value for
that module. But if it returns directly to the wrapper's caller, nothing will
reset it to the correct value for that function.

Change-Id: Icebf24c9a2a0a9a7c2bce6bd6f1358657284fb10
Reviewed-on: https://go-review.googlesource.com/23468
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-02 02:34:01 +00:00
Mikio Hara
068c745e1e vendor: update vendored route
Updates golang.org/x/net/route to rev fac978c for:
- route: fix typos in test

Change-Id: I35de1d3f8e887c6bb5fe50e7299f2fc12e4426de
Reviewed-on: https://go-review.googlesource.com/23660
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-06-02 00:59:46 +00:00
David Chase
31e13c83c2 [dev.ssa] Merge branch 'master' into dev.ssa
Change-Id: Iabc80b6e0734efbd234d998271e110d2eaad41dd
2016-05-27 15:19:33 -04:00
Cherry Zhang
d108bc0e73 [dev.ssa] cmd/compile: implement Defer, RetJmp on SSA for ARM
Also fix argument offset for runtime calls.

Also fix LoadReg/StoreReg by generating instructions by type.

Progress on SSA backend for ARM. Still not complete.
Tests append_ssa.go, assert_ssa.go, loadstore_ssa.go, short_ssa.go, and
deferNoReturn.go in cmd/compile/internal/gc/testdata passed.

Updates #15365.

Change-Id: I0f0a2398cab8bbb461772a55241a16a7da2ecedf
Reviewed-on: https://go-review.googlesource.com/23212
Reviewed-by: David Chase <drchase@google.com>
2016-05-27 12:53:22 +00:00
Cherry Zhang
8357ec37ae [dev.ssa] cmd/compile: implement Zero, Move, Copy for SSA on ARM
Generate load/stores for small zeroing/move, DUFFZERO/DUFFCOPY for
medium zeroing/move, and loops for large zeroing/move.

cmd/compile/internal/gc/testdata/{copy_ssa.go,zero_ssa.go} tests
passed.

Progress on SSA backend for ARM. Still not complete. A few packages
in the standard library compile and tests passed, including
container/list, hash/crc32, unicode/utf8, etc.

Updates #15365.

Change-Id: Ieb4b68b44ee7de66bf7b68f5f33a605349fcc6fa
Reviewed-on: https://go-review.googlesource.com/23097
Reviewed-by: Keith Randall <khr@golang.org>
2016-05-19 02:55:35 +00:00
Cherry Zhang
8f72690711 [dev.ssa] cmd/compile: implement shifts & multiplications for SSA on ARM
Implement shifts and multiplications for up to 32-bit values.

Also handle Exit block.

Progress on SSA backend for ARM. Still not complete.
container/heap, crypto/subtle, hash/adler32 packages compile and
tests passed.

Updates #15365.

Change-Id: I6bee4d5b0051e51d5de97e8a1938c4b87a36cbf8
Reviewed-on: https://go-review.googlesource.com/23096
Reviewed-by: Keith Randall <khr@golang.org>
2016-05-19 02:49:09 +00:00
Cherry Zhang
ccaed50c7b [dev.ssa] cmd/compile: handle boolean values for SSA on ARM
Fix hardcoded flag register mask in ssa/flagalloc.go by auto-generating
the mask.

Also fix a mistake (in a previous CL) about conditional branches.

Progress on SSA backend for ARM. Still not complete. Now "container/ring"
package compiles and tests passed.

Updates #15365.

Change-Id: Id7c8805c30dbb8107baedb485ed0f71f59ed6ea8
Reviewed-on: https://go-review.googlesource.com/23093
Reviewed-by: Keith Randall <khr@golang.org>
2016-05-19 02:48:36 +00:00
Cherry Zhang
e2848de9ef [dev.ssa] cmd/compile: implement the following for SSA on ARM
- generic Ops: Phi, CALL variants, NilCheck
- generic Blocks: Plain, Check
- 32-bit arithmetics
- CMP and conditional branches
- load/store
- zero/sign-extensions (8 to 16, 8 to 32, 16 to 32)

Progress on SSA backend for ARM. Still not complete. Now "errors"
package compiles and tests passed.

Updates #15365.

Change-Id: If126fd17f8695cbf55d64085bb3f1a4a53205701
Reviewed-on: https://go-review.googlesource.com/22856
Reviewed-by: Keith Randall <khr@golang.org>
2016-05-10 19:38:11 +00:00
Cherry Zhang
fdc4a964d2 [dev.ssa] cmd/compile/internal/gc, runtime: use 32-bit load for writeBarrier check
Use 32-bit load for writeBarrier check on all architectures.
Padding added to runtime structure.

Updates #15365, #15492.

Change-Id: I5d3dadf8609923fe0fe4fcb384a418b7b9624998
Reviewed-on: https://go-review.googlesource.com/22855
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-05-10 17:34:30 +00:00
Cherry Zhang
e1a2ea88d0 [dev.ssa] cmd/compile: handle symbolic constant for SSA on ARM
Progress on SSA backend for ARM. Still not complete. Now "helloworld"
function compiles and runs.

Updates #15365.

Change-Id: I02f66983cefdf07a6aed262fb4af8add464d8e9a
Reviewed-on: https://go-review.googlesource.com/22854
Reviewed-by: Keith Randall <khr@golang.org>
2016-05-10 13:30:51 +00:00
Keith Randall
802966f7b3 [dev.ssa] Merge remote-tracking branch 'origin/master' into mergebranch
Merge from tip into ssa.

Change-Id: Icbc1c46d9f4721e4a0f99a24dd708044407ee9f7
2016-05-05 14:24:52 -07:00
Keith Randall
ab150e1ac9 [dev.ssa] all: merge from tip to get dev.ssa current
So we can start working on other architectures here.

Change is a dummy to keep git happy.

Change-Id: I1caa62a242790601810a1ff72af7ea9773d4da76
Reviewed-on: https://go-review.googlesource.com/22822
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-05-05 20:47:51 +00:00
422 changed files with 96713 additions and 9890 deletions

AUTHORS
View File

@@ -105,6 +105,7 @@ Audrey Lim <audreylh@gmail.com>
Augusto Roman <aroman@gmail.com>
Aulus Egnatius Varialus <varialus@gmail.com>
awaw fumin <awawfumin@gmail.com>
Ayanamist Yang <ayanamist@gmail.com>
Aymerick Jéhanne <aymerick@jehanne.org>
Ben Burkert <ben@benburkert.com>
Ben Olive <sionide21@gmail.com>
@@ -147,6 +148,7 @@ Chris Jones <chris@cjones.org>
Chris Kastorff <encryptio@gmail.com>
Chris Lennert <calennert@gmail.com>
Chris McGee <sirnewton_01@yahoo.ca> <newton688@gmail.com>
Christian Couder <chriscool@tuxfamily.org>
Christian Himpel <chressie@googlemail.com>
Christine Hansmann <chhansmann@gmail.com>
Christoffer Buchholz <christoffer.buchholz@gmail.com>
@@ -237,8 +239,10 @@ Eivind Uggedal <eivind@uggedal.com>
Elias Naur <elias.naur@gmail.com>
Emil Hessman <c.emil.hessman@gmail.com> <emil@hessman.se>
Emmanuel Odeke <emm.odeke@gmail.com> <odeke@ualberta.ca>
Empirical Interfaces Inc.
Eoghan Sherry <ejsherry@gmail.com>
Eric Clark <zerohp@gmail.com>
Eric Engestrom <eric@engestrom.ch>
Eric Lagergren <ericscottlagergren@gmail.com>
Eric Milliken <emilliken@gmail.com>
Eric Roshan-Eisner <eric.d.eisner@gmail.com>
@@ -258,6 +262,7 @@ Fastly, Inc.
Fatih Arslan <fatih@arslan.io>
Fazlul Shahriar <fshahriar@gmail.com>
Felix Geisendörfer <haimuiba@gmail.com>
Filippo Valsorda <hi@filippo.io>
Firmansyah Adiputra <frm.adiputra@gmail.com>
Florian Uekermann <florian@uekermann-online.de>
Florian Weimer <fw@deneb.enyo.de>
@@ -290,6 +295,8 @@ Guobiao Mei <meiguobiao@gmail.com>
Gustav Paul <gustav.paul@gmail.com>
Gustavo Niemeyer <gustavo@niemeyer.net>
Gwenael Treguier <gwenn.kahz@gmail.com>
Gyu-Ho Lee <gyuhox@gmail.com>
H. İbrahim Güngör <igungor@gmail.com>
Hajime Hoshi <hajimehoshi@gmail.com>
Hari haran <hariharan.uno@gmail.com>
Hariharan Srinath <srinathh@gmail.com>
@@ -321,6 +328,7 @@ Intel Corporation
Irieda Noboru <irieda@gmail.com>
Isaac Wagner <ibw@isaacwagner.me>
Ivan Ukhov <ivan.ukhov@gmail.com>
Jacob Hoffman-Andrews <github@hoffman-andrews.com>
Jae Kwon <jae@tendermint.com>
Jakob Borg <jakob@nym.se>
Jakub Ryszard Czarnowicz <j.czarnowicz@gmail.com>
@@ -342,6 +350,7 @@ Jan Newmarch <jan.newmarch@gmail.com>
Jan Ziak <0xe2.0x9a.0x9b@gmail.com>
Jani Monoses <jani.monoses@ubuntu.com>
Jaroslavas Počepko <jp@webmaster.ms>
Jason Barnett <jason.w.barnett@gmail.com>
Jason Del Ponte <delpontej@gmail.com>
Jason Travis <infomaniac7@gmail.com>
Jay Weisskopf <jay@jayschwa.net>
@@ -359,6 +368,7 @@ Jingcheng Zhang <diogin@gmail.com>
Jingguo Yao <yaojingguo@gmail.com>
Jiong Du <londevil@gmail.com>
Joakim Sernbrant <serbaut@gmail.com>
Joe Farrell <joe2farrell@gmail.com>
Joe Harrison <joehazzers@gmail.com>
Joe Henke <joed.henke@gmail.com>
Joe Poirier <jdpoirier@gmail.com>
@@ -392,6 +402,7 @@ Joshua Chase <jcjoshuachase@gmail.com>
Jostein Stuhaug <js@solidsystem.no>
JT Olds <jtolds@xnet5.com>
Jukka-Pekka Kekkonen <karatepekka@gmail.com>
Julian Kornberger <jk+github@digineo.de>
Julian Phillips <julian@quantumfyre.co.uk>
Julien Schmidt <google@julienschmidt.com>
Justin Nuß <nuss.justin@gmail.com>
@@ -502,6 +513,7 @@ Michael Vetter <g.bluehut@gmail.com>
Michal Bohuslávek <mbohuslavek@gmail.com>
Michał Derkacz <ziutek@lnet.pl>
Miek Gieben <miek@miek.nl>
Miguel Mendez <stxmendez@gmail.com>
Mihai Borobocea <MihaiBorobocea@gmail.com>
Mikael Tillenius <mikti42@gmail.com>
Mike Andrews <mra@xoba.com>
@@ -530,6 +542,7 @@ Netflix, Inc.
Nevins Bartolomeo <nevins.bartolomeo@gmail.com>
ngmoco, LLC
Niall Sheridan <nsheridan@gmail.com>
Nic Day <nic.day@me.com>
Nicholas Katsaros <nick@nickkatsaros.com>
Nicholas Presta <nick@nickpresta.ca> <nick1presta@gmail.com>
Nicholas Sullivan <nicholas.sullivan@gmail.com>
@@ -575,6 +588,7 @@ Paul Rosania <paul.rosania@gmail.com>
Paul Sbarra <Sbarra.Paul@gmail.com>
Paul Smith <paulsmith@pobox.com> <paulsmith@gmail.com>
Paul van Brouwershaven <paul@vanbrouwershaven.com>
Paulo Casaretto <pcasaretto@gmail.com>
Pavel Paulau <pavel.paulau@gmail.com>
Pavel Zinovkin <pavel.zinovkin@gmail.com>
Pawel Knap <pawelknap88@gmail.com>
@@ -591,6 +605,7 @@ Péter Szilágyi <peterke@gmail.com>
Peter Waldschmidt <peter@waldschmidt.com>
Peter Waller <peter.waller@gmail.com>
Peter Williams <pwil3058@gmail.com>
Philip Børgesen <philip.borgesen@gmail.com>
Philip Hofer <phofer@umich.edu>
Philip K. Warren <pkwarren@gmail.com>
Pierre Durand <pierredurand@gmail.com>
@@ -599,6 +614,7 @@ Pieter Droogendijk <pieter@binky.org.uk>
Pietro Gagliardi <pietro10@mac.com>
Prashant Varanasi <prashant@prashantv.com>
Preetam Jinka <pj@preet.am>
Quan Tran <qeed.quan@gmail.com>
Quan Yong Zhai <qyzhai@gmail.com>
Quentin Perez <qperez@ocs.online.net>
Quoc-Viet Nguyen <afelion@gmail.com>
@@ -644,6 +660,7 @@ Salmān Aljammāz <s@0x65.net>
Sam Hug <samuel.b.hug@gmail.com>
Sam Whited <sam@samwhited.com>
Sanjay Menakuru <balasanjay@gmail.com>
Sasha Sobol <sasha@scaledinference.com>
Scott Barron <scott.barron@github.com>
Scott Bell <scott@sctsm.com>
Scott Ferguson <scottwferg@gmail.com>
@@ -654,6 +671,7 @@ Sergei Skorobogatov <skorobo@rambler.ru>
Sergey 'SnakE' Gromov <snake.scaly@gmail.com>
Sergio Luis O. B. Correia <sergio@correia.cc>
Seth Hoenig <seth.a.hoenig@gmail.com>
Seth Vargo <sethvargo@gmail.com>
Shahar Kohanim <skohanim@gmail.com>
Shane Hansen <shanemhansen@gmail.com>
Shaozhen Ding <dsz0111@gmail.com>
@@ -663,6 +681,7 @@ Shinji Tanaka <shinji.tanaka@gmail.com>
Shivakumar GN <shivakumar.gn@gmail.com>
Silvan Jegen <s.jegen@gmail.com>
Simon Jefford <simon.jefford@gmail.com>
Simon Thulbourn <simon+github@thulbourn.com>
Simon Whitehead <chemnova@gmail.com>
Sokolov Yura <funny.falcon@gmail.com>
Spencer Nelson <s@spenczar.com>
@@ -734,6 +753,7 @@ Wei Guangjing <vcc.163@gmail.com>
Willem van der Schyff <willemvds@gmail.com>
William Josephson <wjosephson@gmail.com>
William Orr <will@worrbase.com> <ay1244@gmail.com>
Wisdom Omuya <deafgoat@gmail.com>
Xia Bin <snyh@snyh.org>
Xing Xing <mikespook@gmail.com>
Xudong Zhang <felixmelon@gmail.com>

View File

@@ -37,6 +37,7 @@ Aaron France <aaron.l.france@gmail.com>
Aaron Jacobs <jacobsa@google.com>
Aaron Kemp <kemp.aaron@gmail.com>
Aaron Torres <tcboox@gmail.com>
Aaron Zinman <aaron@azinman.com>
Abe Haskins <abeisgreat@abeisgreat.com>
Abhinav Gupta <abhinav.g90@gmail.com>
Adam Langley <agl@golang.org>
@@ -141,6 +142,7 @@ Augusto Roman <aroman@gmail.com>
Aulus Egnatius Varialus <varialus@gmail.com>
Austin Clements <austin@google.com> <aclements@csail.mit.edu>
awaw fumin <awawfumin@gmail.com>
Ayanamist Yang <ayanamist@gmail.com>
Aymerick Jéhanne <aymerick@jehanne.org>
Balazs Lecz <leczb@google.com>
Ben Burkert <ben@benburkert.com>
@@ -155,6 +157,7 @@ Benny Siegert <bsiegert@gmail.com>
Benoit Sigoure <tsunanet@gmail.com>
Berengar Lehr <Berengar.Lehr@gmx.de>
Bill Neubauer <wcn@golang.org> <wcn@google.com> <bill.neubauer@gmail.com>
Bill O'Farrell <billo@ca.ibm.com>
Bill Thiede <couchmoney@gmail.com>
Billie Harold Cleek <bhcleek@gmail.com>
Bjorn Tillenius <bjorn@tillenius.me>
@@ -212,6 +215,7 @@ Chris Lennert <calennert@gmail.com>
Chris Manghane <cmang@golang.org>
Chris McGee <sirnewton_01@yahoo.ca> <newton688@gmail.com>
Chris Zou <chriszou@ca.ibm.com>
Christian Couder <chriscool@tuxfamily.org>
Christian Himpel <chressie@googlemail.com> <chressie@gmail.com>
Christine Hansmann <chhansmann@gmail.com>
Christoffer Buchholz <christoffer.buchholz@gmail.com>
@@ -330,6 +334,7 @@ Emil Hessman <c.emil.hessman@gmail.com> <emil@hessman.se>
Emmanuel Odeke <emm.odeke@gmail.com> <odeke@ualberta.ca>
Eoghan Sherry <ejsherry@gmail.com>
Eric Clark <zerohp@gmail.com>
Eric Engestrom <eric@engestrom.ch>
Eric Garrido <ekg@google.com>
Eric Koleda <ekoleda+devrel@google.com>
Eric Lagergren <ericscottlagergren@gmail.com>
@@ -356,6 +361,7 @@ Fatih Arslan <fatih@arslan.io>
Fazlul Shahriar <fshahriar@gmail.com>
Federico Simoncelli <fsimonce@redhat.com>
Felix Geisendörfer <haimuiba@gmail.com>
Filippo Valsorda <hi@filippo.io>
Firmansyah Adiputra <frm.adiputra@gmail.com>
Florian Uekermann <florian@uekermann-online.de> <f1@uekermann-online.de>
Florian Weimer <fw@deneb.enyo.de>
@@ -397,6 +403,8 @@ Gustav Paul <gustav.paul@gmail.com>
Gustavo Franco <gustavorfranco@gmail.com>
Gustavo Niemeyer <gustavo@niemeyer.net> <n13m3y3r@gmail.com>
Gwenael Treguier <gwenn.kahz@gmail.com>
Gyu-Ho Lee <gyuhox@gmail.com>
H. İbrahim Güngör <igungor@gmail.com>
Hajime Hoshi <hajimehoshi@gmail.com>
Hallgrimur Gunnarsson <halg@google.com>
Han-Wen Nienhuys <hanwen@google.com>
@@ -435,6 +443,7 @@ Ivan Ukhov <ivan.ukhov@gmail.com>
Jaana Burcu Dogan <jbd@google.com> <jbd@golang.org> <burcujdogan@gmail.com>
Jacob Baskin <jbaskin@google.com>
Jacob H. Haven <jacob@cloudflare.com>
Jacob Hoffman-Andrews <github@hoffman-andrews.com>
Jae Kwon <jae@tendermint.com>
Jakob Borg <jakob@nym.se>
Jakub Čajka <jcajka@redhat.com>
@@ -465,6 +474,7 @@ Jan Newmarch <jan.newmarch@gmail.com>
Jan Ziak <0xe2.0x9a.0x9b@gmail.com>
Jani Monoses <jani.monoses@ubuntu.com> <jani.monoses@gmail.com>
Jaroslavas Počepko <jp@webmaster.ms>
Jason Barnett <jason.w.barnett@gmail.com>
Jason Del Ponte <delpontej@gmail.com>
Jason Hall <jasonhall@google.com>
Jason Travis <infomaniac7@gmail.com>
@@ -489,6 +499,7 @@ Jingcheng Zhang <diogin@gmail.com>
Jingguo Yao <yaojingguo@gmail.com>
Jiong Du <londevil@gmail.com>
Joakim Sernbrant <serbaut@gmail.com>
Joe Farrell <joe2farrell@gmail.com>
Joe Harrison <joehazzers@gmail.com>
Joe Henke <joed.henke@gmail.com>
Joe Poirier <jdpoirier@gmail.com>
@@ -539,6 +550,7 @@ JP Sugarbroad <jpsugar@google.com>
JT Olds <jtolds@xnet5.com>
Jukka-Pekka Kekkonen <karatepekka@gmail.com>
Julia Hansbrough <flowerhack@google.com>
Julian Kornberger <jk+github@digineo.de>
Julian Phillips <julian@quantumfyre.co.uk>
Julien Schmidt <google@julienschmidt.com>
Jungho Ahn <jhahn@google.com>
@@ -548,6 +560,7 @@ Kai Backman <kaib@golang.org>
Kamal Aboul-Hosn <aboulhosn@google.com>
Kamil Kisiel <kamil@kamilkisiel.net> <kamil.kisiel@gmail.com>
Kang Hu <hukangustc@gmail.com>
Karan Dhiman <karandhi@ca.ibm.com>
Kato Kazuyoshi <kato.kazuyoshi@gmail.com>
Katrina Owen <katrina.owen@gmail.com>
Kay Zhu <kayzhu@google.com>
@@ -575,6 +588,7 @@ Kim Shrier <kshrier@racktopsystems.com>
Kirklin McDonald <kirklin.mcdonald@gmail.com>
Klaus Post <klauspost@gmail.com>
Konstantin Shaposhnikov <k.shaposhnikov@gmail.com>
Kris Rousey <krousey@google.com>
Kristopher Watts <traetox@gmail.com>
Kun Li <likunarmstrong@gmail.com>
Kyle Consalus <consalus@gmail.com>
@@ -686,6 +700,7 @@ Michał Derkacz <ziutek@lnet.pl>
Michalis Kargakis <michaliskargakis@gmail.com>
Michel Lespinasse <walken@google.com>
Miek Gieben <miek@miek.nl> <remigius.gieben@gmail.com>
Miguel Mendez <stxmendez@gmail.com>
Mihai Borobocea <MihaiBorobocea@gmail.com>
Mikael Tillenius <mikti42@gmail.com>
Mike Andrews <mra@xoba.com>
@@ -716,6 +731,7 @@ Nathan(yinian) Hu <nathanhu@google.com>
Neelesh Chandola <neelesh.c98@gmail.com>
Nevins Bartolomeo <nevins.bartolomeo@gmail.com>
Niall Sheridan <nsheridan@gmail.com>
Nic Day <nic.day@me.com>
Nicholas Katsaros <nick@nickkatsaros.com>
Nicholas Presta <nick@nickpresta.ca> <nick1presta@gmail.com>
Nicholas Sullivan <nicholas.sullivan@gmail.com>
@@ -769,6 +785,7 @@ Paul Sbarra <Sbarra.Paul@gmail.com>
Paul Smith <paulsmith@pobox.com> <paulsmith@gmail.com>
Paul van Brouwershaven <paul@vanbrouwershaven.com>
Paul Wankadia <junyer@google.com>
Paulo Casaretto <pcasaretto@gmail.com>
Pavel Paulau <pavel.paulau@gmail.com>
Pavel Zinovkin <pavel.zinovkin@gmail.com>
Pawel Knap <pawelknap88@gmail.com>
@@ -793,6 +810,7 @@ Peter Waller <peter.waller@gmail.com>
Peter Weinberger <pjw@golang.org>
Peter Williams <pwil3058@gmail.com>
Phil Pennock <pdp@golang.org>
Philip Børgesen <philip.borgesen@gmail.com>
Philip Hofer <phofer@umich.edu>
Philip K. Warren <pkwarren@gmail.com>
Pierre Durand <pierredurand@gmail.com>
@@ -801,6 +819,7 @@ Pieter Droogendijk <pieter@binky.org.uk>
Pietro Gagliardi <pietro10@mac.com>
Prashant Varanasi <prashant@prashantv.com>
Preetam Jinka <pj@preet.am>
Quan Tran <qeed.quan@gmail.com>
Quan Yong Zhai <qyzhai@gmail.com>
Quentin Perez <qperez@ocs.online.net>
Quentin Smith <quentin@golang.org>
@@ -857,7 +876,9 @@ Ryan Lower <rpjlower@gmail.com>
Ryan Seys <ryan@ryanseys.com>
Ryan Slade <ryanslade@gmail.com>
S.Çağlar Onur <caglar@10ur.org>
Sai Cheemalapati <saicheems@google.com>
Salmān Aljammāz <s@0x65.net>
Sam Ding <samding@ca.ibm.com>
Sam Hug <samuel.b.hug@gmail.com>
Sam Thorogood <thorogood@google.com> <sam.thorogood@gmail.com>
Sam Whited <sam@samwhited.com>
@@ -865,6 +886,7 @@ Sameer Ajmani <sameer@golang.org> <ajmani@gmail.com>
Sami Commerot <samic@google.com>
Sanjay Menakuru <balasanjay@gmail.com>
Sasha Lionheart <lionhearts@google.com>
Sasha Sobol <sasha@scaledinference.com>
Scott Barron <scott.barron@github.com>
Scott Bell <scott@sctsm.com>
Scott Ferguson <scottwferg@gmail.com>
@@ -882,6 +904,7 @@ Sergey 'SnakE' Gromov <snake.scaly@gmail.com>
Sergey Arseev <sergey.arseev@intel.com>
Sergio Luis O. B. Correia <sergio@correia.cc>
Seth Hoenig <seth.a.hoenig@gmail.com>
Seth Vargo <sethvargo@gmail.com>
Shahar Kohanim <skohanim@gmail.com>
Shane Hansen <shanemhansen@gmail.com>
Shaozhen Ding <dsz0111@gmail.com>
@@ -894,6 +917,7 @@ Shivakumar GN <shivakumar.gn@gmail.com>
Shun Fan <sfan@google.com>
Silvan Jegen <s.jegen@gmail.com>
Simon Jefford <simon.jefford@gmail.com>
Simon Thulbourn <simon+github@thulbourn.com>
Simon Whitehead <chemnova@gmail.com>
Sokolov Yura <funny.falcon@gmail.com>
Spencer Nelson <s@spenczar.com>
@@ -957,6 +981,7 @@ Totoro W <tw19881113@gmail.com>
Travis Cline <travis.cline@gmail.com>
Trevor Strohman <trevor.strohman@gmail.com>
Trey Tacon <ttacon@gmail.com>
Tristan Amini <tamini01@ca.ibm.com>
Tudor Golubenco <tudor.g@gmail.com>
Tyler Bunnell <tylerbunnell@gmail.com>
Tyler Treat <ttreat31@gmail.com>
@@ -986,6 +1011,7 @@ Willem van der Schyff <willemvds@gmail.com>
William Chan <willchan@chromium.org>
William Josephson <wjosephson@gmail.com>
William Orr <will@worrbase.com> <ay1244@gmail.com>
Wisdom Omuya <deafgoat@gmail.com>
Xia Bin <snyh@snyh.org>
Xing Xing <mikespook@gmail.com>
Xudong Zhang <felixmelon@gmail.com>
@@ -999,6 +1025,8 @@ Yissakhar Z. Beck <yissakhar.beck@gmail.com>
Yo-An Lin <yoanlin93@gmail.com>
Yongjian Xu <i3dmaster@gmail.com>
Yoshiyuki Kanno <nekotaroh@gmail.com> <yoshiyuki.kanno@stoic.co.jp>
Yu Heng Zhang <annita.zhang@cn.ibm.com>
Yu Xuan Zhang <zyxsh@cn.ibm.com>
Yuki Yugui Sonoda <yugui@google.com>
Yusuke Kagiwada <block.rxckin.beats@gmail.com>
Yuusei Kuwana <kuwana@kumama.org>

View File

@@ -329,3 +329,4 @@ pkg syscall (netbsd-arm-cgo), const SizeofIfData = 132
pkg syscall (netbsd-arm-cgo), type IfMsghdr struct, Pad_cgo_1 [4]uint8
pkg unicode, const Version = "6.3.0"
pkg unicode, const Version = "7.0.0"
pkg unicode, const Version = "8.0.0"

View File

@@ -274,3 +274,12 @@ pkg syscall (linux-arm-cgo), type SysProcAttr struct, Unshareflags uintptr
pkg testing, method (*B) Run(string, func(*B)) bool
pkg testing, method (*T) Run(string, func(*T)) bool
pkg testing, type InternalExample struct, Unordered bool
pkg unicode, const Version = "9.0.0"
pkg unicode, var Adlam *RangeTable
pkg unicode, var Bhaiksuki *RangeTable
pkg unicode, var Marchen *RangeTable
pkg unicode, var Newa *RangeTable
pkg unicode, var Osage *RangeTable
pkg unicode, var Prepended_Concatenation_Mark *RangeTable
pkg unicode, var Sentence_Terminal *RangeTable
pkg unicode, var Tangut *RangeTable

View File

@@ -780,6 +780,64 @@ mode as on the x86, but the only scale allowed is <code>1</code>.
</ul>
<h3 id="s390x">IBM z/Architecture, a.k.a. s390x</h3>
<p>
The registers <code>R10</code> and <code>R11</code> are reserved.
The assembler uses them to hold temporary values when assembling some instructions.
</p>
<p>
<code>R13</code> points to the <code>g</code> (goroutine) structure.
This register must be referred to as <code>g</code>; the name <code>R13</code> is not recognized.
</p>
<p>
<code>R15</code> points to the stack frame and should typically only be accessed using the
virtual registers <code>SP</code> and <code>FP</code>.
</p>
<p>
Load- and store-multiple instructions operate on a range of registers.
The range of registers is specified by a start register and an end register.
For example, <code>LMG</code> <code>(R9),</code> <code>R5,</code> <code>R7</code> would load
<code>R5</code>, <code>R6</code> and <code>R7</code> with the 64-bit values at
<code>0(R9)</code>, <code>8(R9)</code> and <code>16(R9)</code> respectively.
</p>
<p>
Storage-and-storage instructions such as <code>MVC</code> and <code>XC</code> are written
with the length as the first argument.
For example, <code>XC</code> <code>$8,</code> <code>(R9),</code> <code>(R9)</code> would clear
eight bytes at the address specified in <code>R9</code>.
</p>
<p>
If a vector instruction takes a length or an index as an argument then it will be the
first argument.
For example, <code>VLEIF</code> <code>$1,</code> <code>$16,</code> <code>V2</code> will load
the value sixteen into index one of <code>V2</code>.
Care should be taken when using vector instructions to ensure that they are available at
runtime.
To use vector instructions a machine must have both the vector facility (bit 129 in the
facility list) and kernel support.
Without kernel support a vector instruction will have no effect (it will be equivalent
to a <code>NOP</code> instruction).
</p>
<p>
Addressing modes:
</p>
<ul>
<li>
<code>(R5)(R6*1)</code>: The location at <code>R5</code> plus <code>R6</code>.
It is a scaled mode as on the x86, but the only scale allowed is <code>1</code>.
</li>
</ul>
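<p>
For example (an illustrative fragment only; the registers repeat the
examples given above):
</p>
<pre>
LMG  (R9), R5, R7   // R5, R6, R7 = 64-bit values at 0(R9), 8(R9), 16(R9)
XC   $8, (R9), (R9) // clear the eight bytes at 0(R9)
</pre>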
<h3 id="unsupported_opcodes">Unsupported opcodes</h3>
<p>

View File

@@ -53,6 +53,14 @@ See the <a href="https://github.com/golang/go/issues?q=milestone%3AGo1.6.2">Go
1.6.2 milestone</a> on our issue tracker for details.
</p>
<p>
go1.6.3 (released 2016/07/17) includes security fixes to the
<code>net/http/cgi</code> package and <code>net/http</code> package when used in
a CGI environment. This release also adds support for macOS Sierra.
See the <a href="https://github.com/golang/go/issues?q=milestone%3AGo1.6.3">Go
1.6.3 milestone</a> on our issue tracker for details.
</p>
<h2 id="go1.5">go1.5 (released 2015/08/19)</h2>
<p>

View File

@@ -2238,13 +2238,12 @@ if str, ok := value.(string); ok {
<h3 id="generality">Generality</h3>
<p>
If a type exists only to implement an interface
and has no exported methods beyond that interface,
there is no need to export the type itself.
Exporting just the interface makes it clear that
it's the behavior that matters, not the implementation,
and that other implementations with different properties
can mirror the behavior of the original type.
If a type exists only to implement an interface and will
never have exported methods beyond that interface, there is
no need to export the type itself.
Exporting just the interface makes it clear the value has no
interesting behavior beyond what is described in the
interface.
It also avoids the need to repeat the documentation
on every instance of a common method.
</p>
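<p>
A sketch of the guideline (the names are illustrative):
</p>
<pre>
type ticker struct{} // unexported implementation

func (ticker) Tick() {}

// Ticker is the only exported name; callers depend on behavior alone.
type Ticker interface{ Tick() }

func NewTicker() Ticker { return ticker{} }
</pre>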
@@ -3665,4 +3664,3 @@ var _ image.Color = Black
var _ image.Image = Black
</pre>
-->

View File

@@ -74,6 +74,13 @@ This change has no effect on the correctness of existing programs.
<h2 id="ports">Ports</h2>
<p>
Go 1.7 adds support for macOS 10.12 Sierra.
This support was backported to Go 1.6.3.
Binaries built with versions of Go before 1.6.3 will not work
correctly on Sierra.
</p>
<p>
Go 1.7 adds an experimental port to <a href="https://en.wikipedia.org/wiki/Linux_on_z_Systems">Linux on z Systems</a> (<code>linux/s390x</code>)
and the beginning of a port to Plan 9 on ARM (<code>plan9/arm</code>).
@@ -85,17 +92,30 @@ added in Go 1.6 now have full support for cgo and external linking.
</p>
<p>
The experimental port to Linux on big-endian 64-bit PowerPC (<code>linux/ppc64</code>)
The experimental port to Linux on little-endian 64-bit PowerPC (<code>linux/ppc64le</code>)
now requires the POWER8 architecture or later.
Big-endian 64-bit PowerPC (<code>linux/ppc64</code>) only requires the
POWER5 architecture.
</p>
<p>
The OpenBSD port now requires OpenBSD 5.6 or later, for access to the <a href="http://man.openbsd.org/getentropy.2"><i>getentropy</i>(2)</a> system call.
</p>
<h3 id="known_issues">Known Issues</h3>
<p>
There are some instabilities on FreeBSD that are known but not understood.
These can lead to program crashes in rare cases.
See <a href="https://golang.org/issue/16136">issue 16136</a>,
<a href="https://golang.org/issue/15658">issue 15658</a>,
and <a href="https://golang.org/issue/16396">issue 16396</a>.
Any help in solving these FreeBSD-specific issues would be appreciated.
</p>
<h2 id="tools">Tools</h2>
<h3 id="cmd/asm">Assembler</h3>
<h3 id="cmd_asm">Assembler</h3>
<p>
For 64-bit ARM systems, the vector register names have been
@@ -196,7 +216,7 @@ To build a toolchain that does not use frame pointers, set
<code>make.bash</code>, <code>make.bat</code>, or <code>make.rc</code>.
</p>
<h3 id="cmd/cgo">Cgo</h3>
<h3 id="cmd_cgo">Cgo</h3>
<p>
Packages using <a href="/cmd/cgo/">cgo</a> may now include
@@ -230,7 +250,7 @@ GCC release 6 contains the Go 1.6.1 version of gccgo.
The next release, GCC 7, will likely have the Go 1.8 version of gccgo.
</p>
<h3 id="cmd/go">Go command</h3>
<h3 id="cmd_go">Go command</h3>
<p>
The <a href="/cmd/go/"><code>go</code></a> command's basic operation
@@ -270,7 +290,7 @@ will not work with such packages, and there are no plans to support
such packages in the “<code>go</code> <code>get</code>” command.
</p>
<h3 id="cmd/doc">Go doc</h3>
<h3 id="cmd_doc">Go doc</h3>
<p>
The “<code>go</code> <code>doc</code>” command
@@ -278,7 +298,7 @@ now groups constructors with the type they construct,
following <a href="/cmd/godoc/"><code>godoc</code></a>.
</p>
<h3 id="cmd/vet">Go vet</h3>
<h3 id="cmd_vet">Go vet</h3>
<p>
The “<code>go</code> <code>vet</code>” command
@@ -288,14 +308,26 @@ To avoid confusion with the new <code>-tests</code> check, the old, unadvertised
<code>-test</code> option has been removed; it was equivalent to <code>-all</code> <code>-shadow</code>.
</p>
<h3 id="cmd/dist">Go tool dist</h3>
<p id="vet_lostcancel">
The <code>vet</code> command also has a new check,
<code>-lostcancel</code>, which detects failure to call the
cancelation function returned by the <code>WithCancel</code>,
<code>WithTimeout</code>, and <code>WithDeadline</code> functions in
Go 1.7's new <code>context</code> package (see <a
href='#context'>below</a>).
Failure to call the function prevents the new <code>Context</code>
from being reclaimed until its parent is canceled.
(The background context is never canceled.)
</p>
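<p>
As a minimal sketch of the mistake the check reports
(<code>doWork</code> is a placeholder):
</p>
<pre>
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel() // omitting this line triggers the -lostcancel report
doWork(ctx)
</pre>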
<h3 id="cmd_dist">Go tool dist</h3>
<p>
The new subcommand “<code>go</code> <code>tool</code> <code>dist</code> <code>list</code>”
prints all supported operating system/architecture pairs.
</p>
<h3 id="cmd/trace">Go tool trace</h3>
<h3 id="cmd_trace">Go tool trace</h3>
<p>
The “<code>go</code> <code>tool</code> <code>trace</code>” command,
@@ -335,7 +367,7 @@ the code generation changes alone typically reduce program CPU time by 5-35%.
</p>
<p>
<!-- git log --grep '-[0-9][0-9]\.[0-9][0-9]%' go1.6.. -->
<!-- git log &#45&#45grep '-[0-9][0-9]\.[0-9][0-9]%' go1.6.. -->
There have been significant optimizations bringing more than 10% improvements
to implementations in the
<a href="/pkg/crypto/sha1/"><code>crypto/sha1</code></a>,
@@ -355,6 +387,12 @@ and
packages.
</p>
<p>
Garbage collection pauses should be significantly shorter than they
were in Go 1.6 for programs with large numbers of idle goroutines,
substantial stack size fluctuation, or large package-level variables.
</p>
<h2 id="library">Core library</h2>
<h3 id="context">Context</h3>
@@ -362,7 +400,7 @@ packages.
<p>
Go 1.7 moves the <code>golang.org/x/net/context</code> package
into the standard library as <a href="/pkg/context/"><code>context</code></a>.
This allows the use of contexts for cancellation, timeouts, and passing
This allows the use of contexts for cancelation, timeouts, and passing
request-scoped data in other standard library packages,
including
<a href="#net">net</a>,
@@ -379,6 +417,13 @@ and the Go blog post
<a href="https://blog.golang.org/context">Go Concurrent Patterns: Context</a>.”
</p>
<h3 id="httptrace">HTTP Tracing</h3>
<p>
Go 1.7 introduces <a href="/pkg/net/http/httptrace/"><code>net/http/httptrace</code></a>,
a package that provides mechanisms for tracing events within HTTP requests.
</p>
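<p>
A minimal sketch of its use (the URL and logging are illustrative):
</p>
<pre>
req, _ := http.NewRequest("GET", "http://example.com", nil)
trace := &httptrace.ClientTrace{
	GotConn: func(info httptrace.GotConnInfo) {
		log.Printf("connection reused: %v", info.Reused)
	},
}
req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
resp, err := http.DefaultClient.Do(req)
if err == nil {
	resp.Body.Close()
}
</pre>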
<h3 id="testing">Testing</h3>
<p>
@@ -394,7 +439,8 @@ See the <a href="/pkg/testing/#hdr-Subtests_and_Sub_benchmarks">package document
<p>
All panics started by the runtime now use panic values
that implement both the builtin <a href="/ref/spec#Errors">error</code>,
that implement both the
builtin <a href="/ref/spec#Errors"><code>error</code></a>,
and
<a href="/pkg/runtime/#Error"><code>runtime.Error</code></a>,
as
@@ -442,6 +488,13 @@ eliminating the
common in some environments.
</p>
<p>
The runtime can now return unused memory to the operating system on
all architectures.
In Go 1.6 and earlier, the runtime could not
release memory on ARM64, 64-bit PowerPC, or MIPS.
</p>
<p>
On Windows, Go programs in Go 1.5 and earlier forced
the global Windows timer resolution to 1ms at startup
@@ -462,7 +515,7 @@ made with the Go 1 <a href="/doc/go1compat">promise of compatibility</a>
in mind.
</p>
<dl id="bufio"><a href="/pkg/bufio/">bufio</a></dl>
<dl id="bufio"><dt><a href="/pkg/bufio/">bufio</a></dt>
<dd>
<p>
@@ -474,8 +527,9 @@ it would return an empty slice and the error <code>ErrBufferFull</code>.
Now it returns the entire underlying buffer, still accompanied by the error <code>ErrBufferFull</code>.
</p>
</dd>
</dl>
<dl id="bytes"><a href="/pkg/bytes/">bytes</a></dl>
<dl id="bytes"><dt><a href="/pkg/bytes/">bytes</a></dt>
<dd>
<p>
@@ -502,8 +556,9 @@ The
<a href="/pkg/bytes/#Reader.Reset"><code>Reset</code></a> to allow reuse of a <code>Reader</code>.
</p>
</dd>
</dl>
<dl id="compress/flate"><a href="/pkg/compress/flate/">compress/flate</a></dl>
<dl id="compress_flate"><dt><a href="/pkg/compress/flate/">compress/flate</a></dt>
<dd>
<p>
@@ -549,8 +604,9 @@ Now, it reports
<a href="/pkg/io/#EOF"><code>io.EOF</code></a> more eagerly when reading the last set of bytes.
</p>
</dd>
</dl>
<dl id="crypto/tls"><a href="/pkg/crypto/tls/">crypto/tls</a></dl>
<dl id="crypto_tls"><dt><a href="/pkg/crypto/tls/">crypto/tls</a></dt>
<dd>
<p>
@@ -586,8 +642,9 @@ When generating self-signed certificates, the package no longer sets the
“Authority Key Identifier” field by default.
</p>
</dd>
</dl>
<dl id="crypto/x509"><a href="/pkg/crypto/x509/">crypto/x509</a></dl>
<dl id="crypto_x509"><dt><a href="/pkg/crypto/x509/">crypto/x509</a></dt>
<dd>
<p>
@@ -598,8 +655,9 @@ There is also a new associated error type
<a href="/pkg/crypto/x509/#SystemRootsError"><code>SystemRootsError</code></a>.
</p>
</dd>
</dl>
<dl id="debug/dwarf"><a href="/pkg/debug/dwarf/">debug/dwarf</a></dl>
<dl id="debug_dwarf"><dt><a href="/pkg/debug/dwarf/">debug/dwarf</a></dt>
<dd>
<p>
@@ -613,8 +671,9 @@ help to find the compilation unit to pass to a
and to identify the specific function for a given program counter.
</p>
</dd>
</dl>
<dl id="debug/elf"><a href="/pkg/debug/elf/">debug/elf</a></dl>
<dl id="debug_elf"><dt><a href="/pkg/debug/elf/">debug/elf</a></dt>
<dd>
<p>
@@ -624,8 +683,9 @@ and its many predefined constants
support the S390 port.
</p>
</dd>
</dl>
<dl id="encoding/asn1"><a href="/pkg/encoding/asn1/">encoding/asn1</a></dl>
<dl id="encoding_asn1"><dt><a href="/pkg/encoding/asn1/">encoding/asn1</a></dt>
<dd>
<p>
@@ -633,8 +693,9 @@ The ASN.1 decoder now rejects non-minimal integer encodings.
This may cause the package to reject some invalid but formerly accepted ASN.1 data.
</p>
</dd>
</dl>
<dl id="encoding/json"><a href="/pkg/encoding/json/">encoding/json</a></dl>
<dl id="encoding_json"><dt><a href="/pkg/encoding/json/">encoding/json</a></dt>
<dd>
<p>
@@ -698,8 +759,9 @@ so this change should be semantically backwards compatible with earlier versions
even though it does change the chosen encoding.
</p>
</dd>
</dl>
<dl id="go/build"><a href="/pkg/go/build/">go/build</a></dl>
<dl id="go_build"><dt><a href="/pkg/go/build/">go/build</a></dt>
<dd>
<p>
@@ -710,8 +772,9 @@ the
adds new fields <code>BinaryOnly</code>, <code>CgoFFLAGS</code>, and <code>FFiles</code>.
</p>
</dd>
</dl>
<dl id="go/doc"><a href="/pkg/go/doc/">go/doc</a></dl>
<dl id="go_doc"><dt><a href="/pkg/go/doc/">go/doc</a></dt>
<dd>
<p>
@@ -720,8 +783,9 @@ To support the corresponding change in <code>go</code> <code>test</code> describ
indicating whether the example may generate its output lines in any order.
</p>
</dd>
</dl>
<dl id="io"><a href="/pkg/io/">io</a></dl>
<dl id="io"><dt><a href="/pkg/io/">io</a></dt>
<dd>
<p>
@@ -734,8 +798,9 @@ These constants are preferred over <code>os.SEEK_SET</code>, <code>os.SEEK_CUR</
but the latter will be preserved for compatibility.
</p>
</dd>
</dl>
<dl id="math/big"><a href="/pkg/math/big/">math/big</a></dl>
<dl id="math_big"><dt><a href="/pkg/math/big/">math/big</a></dt>
<dd>
<p>
@@ -748,8 +813,33 @@ so that values of type <code>Float</code> can now be encoded and decoded using t
package.
</p>
</dd>
</dl>
<dl id="mime/multipart"><a href="/pkg/mime/multipart/">mime/multipart</a></dl>
<dl id="math_rand"><dt><a href="/pkg/math/rand/">math/rand</a></dt>
<dd>
<p>
The
<a href="/pkg/math/rand/#Read"><code>Read</code></a> function and
<a href="/pkg/math/rand/#Rand"><code>Rand</code></a>'s
<a href="/pkg/math/rand/#Rand.Read"><code>Read</code></a> method
now produce a pseudo-random stream of bytes that is consistent and not
dependent on the size of the input buffer.
</p>
<p>
The documentation clarifies that
Rand's <a href="/pkg/math/rand/#Rand.Seed"><code>Seed</code></a>
and <a href="/pkg/math/rand/#Rand.Read"><code>Read</code></a> methods
are not safe to call concurrently, though the global
functions <a href="/pkg/math/rand/#Seed"><code>Seed</code></a>
and <a href="/pkg/math/rand/#Read"><code>Read</code></a> are (and have
always been) safe.
</p>
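<p>
A sketch of the new guarantee (the seed is arbitrary):
</p>
<pre>
r := rand.New(rand.NewSource(1))
buf := make([]byte, 8)
r.Read(buf) // the bytes no longer depend on how the reads are batched
</pre>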
</dd>
</dl>
<dl id="mime_multipart"><dt><a href="/pkg/mime/multipart/">mime/multipart</a></dt>
<dd>
<p>
@@ -760,8 +850,9 @@ Previously, iteration over a map caused the section header to use a
non-deterministic order.
</p>
</dd>
</dl>
<dl id="net"><a href="/pkg/net/">net</a></dl>
<dl id="net"><dt><a href="/pkg/net/">net</a></dt>
<dd>
<p>
@@ -788,13 +879,14 @@ Go 1.7 adds the hexadecimal encoding of the bytes, as in <code>"?12ab"</code>.
<p>
The pure Go <a href="/pkg/net/#hdr-Name_Resolution">name resolution</a>
implementation now respects <code>nsswtch.conf</code>'s
implementation now respects <code>nsswitch.conf</code>'s
stated preference for the priority of DNS lookups compared to
local file (that is, <code>/etc/hosts</code>) lookups.
</p>
</dd>
</dl>
<dl id="net/http"><a href="/pkg/net/http/">net/http</a></dl>
<dl id="net_http"><dt><a href="/pkg/net/http/">net/http</a></dt>
<dd>
<p>
@@ -824,6 +916,12 @@ For example, the address on which a request received is
<code>req.Context().Value(http.LocalAddrContextKey).(net.Addr)</code>.
</p>
<p>
The server's <a href="/pkg/net/http/#Server.Serve"><code>Serve</code></a> method
now only enables HTTP/2 support if the <code>Server.TLSConfig</code> field is <code>nil</code>
or includes <code>"h2"</code> in its <code>TLSConfig.NextProtos</code>.
</p>
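<p>
For example (a sketch; the certificate files are placeholders), a server
keeps HTTP/2 enabled by listing <code>"h2"</code> explicitly:
</p>
<pre>
srv := &http.Server{
	Addr:      ":443",
	TLSConfig: &tls.Config{NextProtos: []string{"h2", "http/1.1"}},
}
log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
</pre>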
<p>
The server implementation now
pads response codes less than 100 to three digits
@@ -832,6 +930,23 @@ so that <code>w.WriteHeader(5)</code> uses the HTTP response
status <code>005</code>, not just <code>5</code>.
</p>
<p>
The server implementation now correctly sends only one "Transfer-Encoding" header when "chunked"
is set explicitly, following <a href="https://tools.ietf.org/html/rfc7230#section-3.3.1">RFC 7230</a>.
</p>
<p>
The server implementation is now stricter about rejecting requests with invalid HTTP versions.
Invalid requests claiming to be HTTP/0.x are now rejected (HTTP/0.9 was never fully supported),
and plaintext HTTP/2 requests other than the "PRI * HTTP/2.0" upgrade request are now rejected as well.
The server continues to handle encrypted HTTP/2 requests.
</p>
<p>
In the server, a 200 status code is sent back by the timeout handler on an empty
response body, instead of sending back 0 as the status code.
</p>
<p>
In the client, the
<a href="/pkg/net/http/#Transport"><code>Transport</code></a> implementation passes the request context
@@ -880,8 +995,9 @@ this transparent decompression took place.
adds support for a few new audio and video content types.
</p>
</dd>
</dl>
<dl id="net/http/cgi"><a href="/pkg/net/http/cgi/">net/http/cgi</a></dl>
<dl id="net_http_cgi"><dt><a href="/pkg/net/http/cgi/">net/http/cgi</a></dt>
<dd>
<p>
@@ -894,8 +1010,9 @@ standard error away from the host process's
standard error.
</p>
</dd>
</dl>
<dl id="net/http/httptest"><a href="/pkg/net/http/httptest/">net/http/httptest</a></dl>
<dl id="net_http_httptest"><dt><a href="/pkg/net/http/httptest/">net/http/httptest</a></dt>
<dd>
<p>
@@ -919,8 +1036,9 @@ instead of accessing
<code>ResponseRecorder</code>'s <code>HeaderMap</code> directly.
</p>
</dd>
</dl>
<dl id="net/http/httputil"><a href="/pkg/net/http/httputil/">net/http/httputil</a></dl>
<dl id="net_http_httputil"><dt><a href="/pkg/net/http/httputil/">net/http/httputil</a></dt>
<dd>
<p>
@@ -943,8 +1061,9 @@ and
instead.
</p>
</dd>
</dl>
<dl id="net/http/pprof"><a href="/pkg/net/http/pprof/">net/http/pprof</a></dl>
<dl id="net_http_pprof"><dt><a href="/pkg/net/http/pprof/">net/http/pprof</a></dt>
<dd>
<p>
@@ -954,8 +1073,9 @@ allowing collection of traces for intervals smaller than one second.
This is especially useful on busy servers.
</p>
</dd>
</dl>
<dl><a href="/pkg/net/mail/">net/mail</a></dl>
<dl><dt><a href="/pkg/net/mail/">net/mail</a></dt>
<dd>
<p>
@@ -966,11 +1086,21 @@ For compatibility with older mail parsers,
the address encoder, namely
<a href="/pkg/net/mail/#Address"><code>Address</code></a>'s
<a href="/pkg/net/mail/#Address.String"><code>String</code></a> method,
continues to escape all UTF-8 text following <a href="https://tools.ietf.org/html/rfc5322">RFC 5322</a>,
continues to escape all UTF-8 text following <a href="https://tools.ietf.org/html/rfc5322">RFC 5322</a>.
</p>
<p>
The <a href="/pkg/net/mail/#ParseAddress"><code>ParseAddress</code></a>
function and
the <a href="/pkg/net/mail/#AddressParser.Parse"><code>AddressParser.Parse</code></a>
method are stricter.
They used to ignore any characters following an e-mail address, but
will now return an error for anything other than whitespace.
</p>
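<p>
A sketch of the stricter behavior (the address is illustrative):
</p>
<pre>
if _, err := mail.ParseAddress("gopher@example.com trailing junk"); err != nil {
	// Go 1.7 reaches here; earlier releases ignored the trailing text.
}
</pre>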
</dd>
</dl>
<dl id="net/url"><a href="/pkg/net/url/">net/url</a></dl>
<dl id="net_url"><dt><a href="/pkg/net/url/">net/url</a></dt>
<dd>
<p>
@@ -982,12 +1112,13 @@ in order to distinguish URLs without query strings (like <code>/search</code>)
from URLs with empty query strings (like <code>/search?</code>).
</p>
</dd>
</dl>
<dl id="os"><a href="/pkg/os/">os</a></dl>
<dl id="os"><dt><a href="/pkg/os/">os</a></dt>
<dd>
<p>
<a href="/pkg/os/#IsExists"><code>IsExists</code></a> now returns true for <code>syscall.ENOTEMPTY</code>,
<a href="/pkg/os/#IsExist"><code>IsExist</code></a> now returns true for <code>syscall.ENOTEMPTY</code>,
on systems where that error exists.
</p>
@@ -998,8 +1129,9 @@ making the implementation behave as on
non-Windows systems.
</p>
</dd>
</dl>
<dl id="os/exec"><a href="/pkg/os/exec/">os/exec</a></dl>
<dl id="os_exec"><dt><a href="/pkg/os/exec/">os/exec</a></dt>
<dd>
<p>
@@ -1010,8 +1142,9 @@ is like
<a href="/pkg/os/exec/#Command"><code>Command</code></a> but includes a context that can be used to cancel the command execution.
</p>
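<p>
A sketch (the command and timeout are illustrative):
</p>
<pre>
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
if err := exec.CommandContext(ctx, "sleep", "5").Run(); err != nil {
	// the process was killed when the context expired
}
</pre>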
</dd>
</dl>
<dl id="os/user"><a href="/pkg/os/user/">os/user</a></dl>
<dl id="os_user"><dt><a href="/pkg/os/user/">os/user</a></dt>
<dd>
<p>
@@ -1030,8 +1163,9 @@ and the new field <code>GroupIds</code> in the <code>User</code> struct,
provides access to system-specific user group information.
</p>
</dd>
</dl>
<dl id="reflect"><a href="/pkg/reflect/">reflect</a></dl>
<dl id="reflect"><dt><a href="/pkg/reflect/">reflect</a></dt>
<dd>
<p>
@@ -1078,8 +1212,9 @@ methods of
no longer return or count unexported methods.
</p>
</dd>
</dl>
<dl id="strings"><a href="/pkg/strings/">strings</a></dl>
<dl id="strings"><dt><a href="/pkg/strings/">strings</a></dt>
<dd>
<p>
@@ -1098,8 +1233,9 @@ The
<a href="/pkg/strings/#Reader.Reset"><code>Reset</code></a> to allow reuse of a <code>Reader</code>.
</p>
</dd>
</dl>
<dl id="time"><a href="/pkg/time/">time</a></dl>
<dl id="time"><dt><a href="/pkg/time/">time</a></dt>
<dd>
<p>
@@ -1122,8 +1258,9 @@ cannot be found, for example on Windows.
The Windows time zone abbreviation list has also been updated.
</p>
</dd>
</dl>
<dl id="syscall"><a href="/pkg/syscall/">syscall</a></dl>
<dl id="syscall"><dt><a href="/pkg/syscall/">syscall</a></dt>
<dd>
<p>
@@ -1140,3 +1277,16 @@ will call the
system call before executing the new program.
</p>
</dd>
</dl>
<dl id="unicode"><dt><a href="/pkg/unicode/">unicode</a></dt>
<dd>
<p>
The <a href="/pkg/unicode/"><code>unicode</code></a> package and associated
support throughout the system has been upgraded from version 8.0 to
<a href="http://www.unicode.org/versions/Unicode9.0.0/">Unicode 9.0</a>.
</p>
</dd>
</dl>

View File

@@ -33,7 +33,7 @@ compiler using the GCC back end, see
</p>
<p>
The Go compilers support six instruction sets.
The Go compilers support seven instruction sets.
There are important differences in the quality of the compilers for the different
architectures.
</p>
@@ -43,15 +43,17 @@ architectures.
<code>amd64</code> (also known as <code>x86-64</code>)
</dt>
<dd>
A mature implementation. The compiler has an effective
optimizer (registerizer) and generates good code (although
<code>gccgo</code> can do noticeably better sometimes).
A mature implementation. New in 1.7 is its SSA-based back end
that generates compact, efficient code.
</dd>
<dt>
<code>386</code> (<code>x86</code> or <code>x86-32</code>)
</dt>
<dd>
Comparable to the <code>amd64</code> port.
Comparable to the <code>amd64</code> port, but does
not yet use the SSA-based back end. It has an effective
optimizer (registerizer) and generates good code (although
<code>gccgo</code> can do noticeably better sometimes).
</dd>
<dt>
<code>arm</code> (<code>ARM</code>)
@@ -77,6 +79,12 @@ architectures.
<dd>
Supports Linux binaries. New in 1.6 and not as well exercised as other ports.
</dd>
<dt>
<code>s390x</code> (IBM System z)
</dt>
<dd>
Supports Linux binaries. New in 1.7 and not as well exercised as other ports.
</dd>
</dl>
<p>

View File

@@ -1,4 +1,4 @@
#!/bin/sh
#!/bin/bash
# Copyright 2012 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
@@ -8,8 +8,8 @@
# Consult http://www.iana.org/time-zones for the latest versions.
# Versions to use.
CODE=2016d
DATA=2016d
CODE=2016f
DATA=2016f
set -e
rm -rf work

Binary file not shown.

View File

@@ -91,11 +91,11 @@ func main() {
run("shell", "rm", "-rf", deviceGotmp) // Clean up.
output = output[strings.LastIndex(output, "\n")+1:]
if !strings.HasPrefix(output, exitstr) {
exitIdx := strings.LastIndex(output, exitstr)
if exitIdx == -1 {
log.Fatalf("no exit code: %q", output)
}
code, err := strconv.Atoi(output[len(exitstr):])
code, err := strconv.Atoi(output[exitIdx+len(exitstr):])
if err != nil {
log.Fatalf("bad exit code: %v", err)
}

View File

@@ -0,0 +1,26 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// cgo converts C void* to Go unsafe.Pointer, so despite appearances C
// void** is Go *unsafe.Pointer. This test verifies that we detect the
// problem at build time.
package main
// typedef void v;
// void F(v** p) {}
import "C"
import "unsafe"
type v [0]byte
func f(p **v) {
C.F((**C.v)(unsafe.Pointer(p))) // ERROR HERE
}
func main() {
var p *v
f(&p)
}

View File

@@ -0,0 +1,12 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
// void f(void *p, int x) {}
import "C"
func main() {
_ = C.f(1) // ERROR HERE
}

View File

@@ -314,6 +314,14 @@ var ptrTests = []ptrTest{
body: `i := 0; p := S{u:uintptr(unsafe.Pointer(&i))}; q := (*S)(C.malloc(C.size_t(unsafe.Sizeof(p)))); *q = p; C.f(unsafe.Pointer(q))`,
fail: false,
},
{
// Check deferred pointers when they are used, not
// when the defer statement is run.
name: "defer",
c: `typedef struct s { int *p; } s; void f(s *ps) {}`,
body: `p := &C.s{}; defer C.f(p); p.p = new(C.int)`,
fail: true,
},
}
func main() {

View File

@@ -18,16 +18,16 @@ expect() {
file=$1
shift
if go build $file >errs 2>&1; then
echo 1>&2 misc/cgo/errors/test.bash: BUG: expected cgo to fail but it succeeded
echo 1>&2 misc/cgo/errors/test.bash: BUG: expected cgo to fail on $file but it succeeded
exit 1
fi
if ! test -s errs; then
echo 1>&2 misc/cgo/errors/test.bash: BUG: expected error output but saw none
echo 1>&2 misc/cgo/errors/test.bash: BUG: expected error output for $file but saw none
exit 1
fi
for error; do
if ! fgrep $error errs >/dev/null 2>&1; then
echo 1>&2 misc/cgo/errors/test.bash: BUG: expected error output to contain \"$error\" but saw:
echo 1>&2 misc/cgo/errors/test.bash: BUG: expected error output for $file to contain \"$error\" but saw:
cat 1>&2 errs
exit 1
fi
@@ -44,6 +44,8 @@ check issue11097b.go
expect issue13129.go C.ushort
check issue13423.go
expect issue13635.go C.uchar C.schar C.ushort C.uint C.ulong C.longlong C.ulonglong C.complexfloat C.complexdouble
check issue13830.go
check issue16116.go
if ! go build issue14669.go; then
exit 1

View File

@@ -8,6 +8,7 @@ package cgotest
import "C"
import (
"runtime"
"sync"
"testing"
)
@@ -30,6 +31,9 @@ func Add(x int) {
}
func testCthread(t *testing.T) {
if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
t.Skip("the iOS exec wrapper is unable to properly handle the panic from Add")
}
sum.i = 0
C.doAdd(10, 6)

View File

@@ -103,7 +103,7 @@ func test7978(t *testing.T) {
if C.HAS_SYNC_FETCH_AND_ADD == 0 {
t.Skip("clang required for __sync_fetch_and_add support on darwin/arm")
}
if runtime.GOOS == "android" {
if runtime.GOOS == "android" || runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
t.Skip("GOTRACEBACK is not passed on to the exec wrapper")
}
if os.Getenv("GOTRACEBACK") != "2" {

View File

@@ -7,17 +7,18 @@
#include "textflag.h"
TEXT ·RewindAndSetgid(SB),NOSPLIT,$0-0
MOVL $·Baton(SB), BX
// Rewind stack pointer so anything that happens on the stack
// will clobber the test pattern created by the caller
ADDL $(1024 * 8), SP
// Ask signaller to setgid
MOVL $1, ·Baton(SB)
MOVL $1, (BX)
// Wait for setgid completion
loop:
PAUSE
MOVL ·Baton(SB), AX
MOVL (BX), AX
CMPL AX, $0
JNE loop

View File

@@ -111,61 +111,52 @@ if test "$tsan" = "yes"; then
rm -f ${TMPDIR}/testsanitizers$$*
fi
if test "$tsan" = "yes"; then
# Run a TSAN test.
# $1 test name
# $2 environment variables
# $3 go run args
testtsan() {
err=${TMPDIR}/tsanerr$$.out
if ! go run tsan.go 2>$err; then
if ! env $2 go run $3 $1 2>$err; then
cat $err
echo "FAIL: tsan"
echo "FAIL: $1"
status=1
elif grep -i warning $err >/dev/null 2>&1; then
cat $err
echo "FAIL: tsan"
echo "FAIL: $1"
status=1
fi
if ! go run tsan2.go 2>$err; then
cat $err
echo "FAIL: tsan2"
status=1
elif grep -i warning $err >/dev/null 2>&1; then
cat $err
echo "FAIL: tsan2"
status=1
fi
if ! go run tsan3.go 2>$err; then
cat $err
echo "FAIL: tsan3"
status=1
elif grep -i warning $err >/dev/null 2>&1; then
cat $err
echo "FAIL: tsan3"
status=1
fi
if ! go run tsan4.go 2>$err; then
cat $err
echo "FAIL: tsan4"
status=1
elif grep -i warning $err >/dev/null 2>&1; then
cat $err
echo "FAIL: tsan4"
status=1
fi
# This test requires rebuilding os/user with -fsanitize=thread.
if ! CGO_CFLAGS="-fsanitize=thread" CGO_LDFLAGS="-fsanitize=thread" go run -installsuffix=tsan tsan5.go 2>$err; then
cat $err
echo "FAIL: tsan5"
status=1
elif grep -i warning $err >/dev/null 2>&1; then
cat $err
echo "FAIL: tsan5"
status=1
fi
rm -f $err
}
if test "$tsan" = "yes"; then
testtsan tsan.go
testtsan tsan2.go
testtsan tsan3.go
testtsan tsan4.go
# These tests are only reliable using clang or GCC version 7 or later.
# Otherwise runtime/cgo/libcgo.h can't tell whether TSAN is in use.
ok=false
if ${CC} --version | grep clang >/dev/null 2>&1; then
ok=true
else
ver=$($CC -dumpversion)
major=$(echo $ver | sed -e 's/\([0-9]*\).*/\1/')
if test "$major" -lt 7; then
echo "skipping remaining TSAN tests: GCC version $major (older than 7)"
else
ok=true
fi
fi
if test "$ok" = "true"; then
# This test requires rebuilding os/user with -fsanitize=thread.
testtsan tsan5.go "CGO_CFLAGS=-fsanitize=thread CGO_LDFLAGS=-fsanitize=thread" "-installsuffix=tsan"
# This test requires rebuilding runtime/cgo with -fsanitize=thread.
testtsan tsan6.go "CGO_CFLAGS=-fsanitize=thread CGO_LDFLAGS=-fsanitize=thread" "-installsuffix=tsan"
fi
fi
exit $status

View File

@@ -0,0 +1,49 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
// Check that writes to Go allocated memory, with Go synchronization,
// do not look like a race.
/*
#cgo CFLAGS: -fsanitize=thread
#cgo LDFLAGS: -fsanitize=thread
void f(char *p) {
*p = 1;
}
*/
import "C"
import (
"runtime"
"sync"
)
func main() {
var wg sync.WaitGroup
var mu sync.Mutex
c := make(chan []C.char, 100)
for i := 0; i < 10; i++ {
wg.Add(2)
go func() {
defer wg.Done()
for i := 0; i < 100; i++ {
c <- make([]C.char, 4096)
runtime.Gosched()
}
}()
go func() {
defer wg.Done()
for i := 0; i < 100; i++ {
p := &(<-c)[0]
mu.Lock()
C.f(p)
mu.Unlock()
}
}()
}
wg.Wait()
}

View File

@@ -12,6 +12,11 @@ type Dep struct {
X int
}
func (d *Dep) Method() int {
return 10
}
func F() int {
defer func() {}()
return V
}

View File

@@ -3,5 +3,6 @@ package main
import "dep2"
func main() {
dep2.W = dep2.G() + 1
d := &dep2.Dep2{}
dep2.W = dep2.G() + 1 + d.Method()
}

View File

@@ -5281,7 +5281,7 @@ if(traces.length&&!this.hasEventDataDecoder_(importers)){throw new Error('Could
importers.sort(function(x,y){return x.importPriority-y.importPriority;});},this);lastTask=lastTask.timedAfter('TraceImport',function importClockSyncMarkers(task){importers.forEach(function(importer,index){task.subTask(Timing.wrapNamedFunction('TraceImport',importer.importerName,function runImportClockSyncMarkersOnOneImporter(){progressMeter.update('Importing clock sync markers '+(index+1)+' of '+
importers.length);importer.importClockSyncMarkers();}),this);},this);},this);lastTask=lastTask.timedAfter('TraceImport',function runImport(task){importers.forEach(function(importer,index){task.subTask(Timing.wrapNamedFunction('TraceImport',importer.importerName,function runImportEventsOnOneImporter(){progressMeter.update('Importing '+(index+1)+' of '+importers.length);importer.importEvents();}),this);},this);},this);if(this.importOptions_.customizeModelCallback){lastTask=lastTask.timedAfter('TraceImport',function runCustomizeCallbacks(task){this.importOptions_.customizeModelCallback(this.model_);},this);}
lastTask=lastTask.timedAfter('TraceImport',function importSampleData(task){importers.forEach(function(importer,index){progressMeter.update('Importing sample data '+(index+1)+'/'+importers.length);importer.importSampleData();},this);},this);lastTask=lastTask.timedAfter('TraceImport',function runAutoclosers(){progressMeter.update('Autoclosing open slices...');this.model_.autoCloseOpenSlices();this.model_.createSubSlices();},this);lastTask=lastTask.timedAfter('TraceImport',function finalizeImport(task){importers.forEach(function(importer,index){progressMeter.update('Finalizing import '+(index+1)+'/'+importers.length);importer.finalizeImport();},this);},this);lastTask=lastTask.timedAfter('TraceImport',function runPreinits(){progressMeter.update('Initializing objects (step 1/2)...');this.model_.preInitializeObjects();},this);if(this.importOptions_.pruneEmptyContainers){lastTask=lastTask.timedAfter('TraceImport',function runPruneEmptyContainers(){progressMeter.update('Pruning empty containers...');this.model_.pruneEmptyContainers();},this);}
lastTask=lastTask.timedAfter('TraceImport',function runMergeKernelWithuserland(){progressMeter.update('Merging kernel with userland...');this.model_.mergeKernelWithUserland();},this);var auditors=[];lastTask=lastTask.timedAfter('TraceImport',function createAuditorsAndRunAnnotate(){progressMeter.update('Adding arbitrary data to model...');auditors=this.importOptions_.auditorConstructors.map(function(auditorConstructor){return new auditorConstructor(this.model_);},this);auditors.forEach(function(auditor){auditor.runAnnotate();auditor.installUserFriendlyCategoryDriverIfNeeded();});},this);lastTask=lastTask.timedAfter('TraceImport',function computeWorldBounds(){progressMeter.update('Computing final world bounds...');this.model_.computeWorldBounds(this.importOptions_.shiftWorldToZero);},this);lastTask=lastTask.timedAfter('TraceImport',function buildFlowEventIntervalTree(){progressMeter.update('Building flow event map...');this.model_.buildFlowEventIntervalTree();},this);lastTask=lastTask.timedAfter('TraceImport',function joinRefs(){progressMeter.update('Joining object refs...');this.model_.joinRefs();},this);lastTask=lastTask.timedAfter('TraceImport',function cleanupUndeletedObjects(){progressMeter.update('Cleaning up undeleted objects...');this.model_.cleanupUndeletedObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function sortMemoryDumps(){progressMeter.update('Sorting memory dumps...');this.model_.sortMemoryDumps();},this);lastTask=lastTask.timedAfter('TraceImport',function finalizeMemoryGraphs(){progressMeter.update('Finalizing memory dump graphs...');this.model_.finalizeMemoryGraphs();},this);lastTask=lastTask.timedAfter('TraceImport',function initializeObjects(){progressMeter.update('Initializing objects (step 2/2)...');this.model_.initializeObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function buildEventIndices(){progressMeter.update('Building event indices...');this.model_.buildEventIndices();},this);lastTask=lastTask.timedAfter('TraceImport',function buildUserModel(){progressMeter.update('Building UserModel...');var userModelBuilder=new tr.importer.UserModelBuilder(this.model_);userModelBuilder.buildUserModel();},this);lastTask=lastTask.timedAfter('TraceImport',function sortExpectations(){progressMeter.update('Sorting user expectations...');this.model_.userModel.sortExpectations();},this);lastTask=lastTask.timedAfter('TraceImport',function runAudits(){progressMeter.update('Running auditors...');auditors.forEach(function(auditor){auditor.runAudit();});},this);lastTask=lastTask.timedAfter('TraceImport',function sortAlerts(){progressMeter.update('Updating alerts...');this.model_.sortAlerts();},this);lastTask=lastTask.timedAfter('TraceImport',function lastUpdateBounds(){progressMeter.update('Update bounds...');this.model_.updateBounds();},this);lastTask=lastTask.timedAfter('TraceImport',function addModelWarnings(){progressMeter.update('Looking for warnings...');if(!this.model_.isTimeHighResolution){this.model_.importWarning({type:'low_resolution_timer',message:'Trace time is low resolution, trace may be unusable.',showToUser:true});}},this);lastTask.after(function(){this.importing_=false;},this);return importTask;},createImporter_:function(eventData){var importerConstructor=tr.importer.Importer.findImporterFor(eventData);if(!importerConstructor){throw new Error('Couldn\'t create an importer for the provided '+'eventData.');}
lastTask=lastTask.timedAfter('TraceImport',function runMergeKernelWithuserland(){progressMeter.update('Merging kernel with userland...');this.model_.mergeKernelWithUserland();},this);var auditors=[];lastTask=lastTask.timedAfter('TraceImport',function createAuditorsAndRunAnnotate(){progressMeter.update('Adding arbitrary data to model...');auditors=this.importOptions_.auditorConstructors.map(function(auditorConstructor){return new auditorConstructor(this.model_);},this);auditors.forEach(function(auditor){auditor.runAnnotate();auditor.installUserFriendlyCategoryDriverIfNeeded();});},this);lastTask=lastTask.timedAfter('TraceImport',function computeWorldBounds(){progressMeter.update('Computing final world bounds...');this.model_.computeWorldBounds(this.importOptions_.shiftWorldToZero);},this);lastTask=lastTask.timedAfter('TraceImport',function buildFlowEventIntervalTree(){progressMeter.update('Building flow event map...');this.model_.buildFlowEventIntervalTree();},this);lastTask=lastTask.timedAfter('TraceImport',function joinRefs(){progressMeter.update('Joining object refs...');this.model_.joinRefs();},this);lastTask=lastTask.timedAfter('TraceImport',function cleanupUndeletedObjects(){progressMeter.update('Cleaning up undeleted objects...');this.model_.cleanupUndeletedObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function sortMemoryDumps(){progressMeter.update('Sorting memory dumps...');this.model_.sortMemoryDumps();},this);lastTask=lastTask.timedAfter('TraceImport',function finalizeMemoryGraphs(){progressMeter.update('Finalizing memory dump graphs...');this.model_.finalizeMemoryGraphs();},this);lastTask=lastTask.timedAfter('TraceImport',function initializeObjects(){progressMeter.update('Initializing objects (step 2/2)...');this.model_.initializeObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function buildEventIndices(){progressMeter.update('Building event indices...');this.model_.buildEventIndices();},this);lastTask=lastTask.timedAfter('TraceImport',function buildUserModel(){progressMeter.update('Building UserModel...');var userModelBuilder=new tr.importer.UserModelBuilder(this.model_);userModelBuilder.buildUserModel();},this);lastTask=lastTask.timedAfter('TraceImport',function sortExpectations(){progressMeter.update('Sorting user expectations...');this.model_.userModel.sortExpectations();},this);lastTask=lastTask.timedAfter('TraceImport',function runAudits(){progressMeter.update('Running auditors...');auditors.forEach(function(auditor){auditor.runAudit();});},this);lastTask=lastTask.timedAfter('TraceImport',function sortAlerts(){progressMeter.update('Updating alerts...');this.model_.sortAlerts();},this);lastTask=lastTask.timedAfter('TraceImport',function lastUpdateBounds(){progressMeter.update('Update bounds...');this.model_.updateBounds();},this);lastTask=lastTask.timedAfter('TraceImport',function addModelWarnings(){progressMeter.update('Looking for warnings...');if(!this.model_.isTimeHighResolution){this.model_.importWarning({type:'low_resolution_timer',message:'Trace time is low resolution, trace may be unusable.',showToUser:false});}},this);lastTask.after(function(){this.importing_=false;},this);return importTask;},createImporter_:function(eventData){var importerConstructor=tr.importer.Importer.findImporterFor(eventData);if(!importerConstructor){throw new Error('Couldn\'t create an importer for the provided '+'eventData.');}
return new importerConstructor(this.model_,eventData);},hasEventDataDecoder_:function(importers){for(var i=0;i<importers.length;++i){if(!importers[i].isTraceDataContainer())
return true;}
return false;}};return{ImportOptions:ImportOptions,Import:Import};});'use strict';tr.exportTo('tr.e.cc',function(){function PictureAsImageData(picture,errorOrImageData){this.picture_=picture;if(errorOrImageData instanceof ImageData){this.error_=undefined;this.imageData_=errorOrImageData;}else{this.error_=errorOrImageData;this.imageData_=undefined;}};PictureAsImageData.Pending=function(picture){return new PictureAsImageData(picture,undefined);};PictureAsImageData.prototype={get picture(){return this.picture_;},get error(){return this.error_;},get imageData(){return this.imageData_;},isPending:function(){return this.error_===undefined&&this.imageData_===undefined;},asCanvas:function(){if(!this.imageData_)


@@ -6,6 +6,7 @@ package bytes_test
import (
. "bytes"
"fmt"
"math/rand"
"reflect"
"testing"
@@ -357,167 +358,152 @@ func TestIndexRune(t *testing.T) {
var bmbuf []byte
func BenchmarkIndexByte10(b *testing.B) { bmIndexByte(b, IndexByte, 10) }
func BenchmarkIndexByte32(b *testing.B) { bmIndexByte(b, IndexByte, 32) }
func BenchmarkIndexByte4K(b *testing.B) { bmIndexByte(b, IndexByte, 4<<10) }
func BenchmarkIndexByte4M(b *testing.B) { bmIndexByte(b, IndexByte, 4<<20) }
func BenchmarkIndexByte64M(b *testing.B) { bmIndexByte(b, IndexByte, 64<<20) }
func BenchmarkIndexBytePortable10(b *testing.B) { bmIndexByte(b, IndexBytePortable, 10) }
func BenchmarkIndexBytePortable32(b *testing.B) { bmIndexByte(b, IndexBytePortable, 32) }
func BenchmarkIndexBytePortable4K(b *testing.B) { bmIndexByte(b, IndexBytePortable, 4<<10) }
func BenchmarkIndexBytePortable4M(b *testing.B) { bmIndexByte(b, IndexBytePortable, 4<<20) }
func BenchmarkIndexBytePortable64M(b *testing.B) { bmIndexByte(b, IndexBytePortable, 64<<20) }
func bmIndexByte(b *testing.B, index func([]byte, byte) int, n int) {
if len(bmbuf) < n {
bmbuf = make([]byte, n)
func valName(x int) string {
if s := x >> 20; s<<20 == x {
return fmt.Sprintf("%dM", s)
}
b.SetBytes(int64(n))
buf := bmbuf[0:n]
buf[n-1] = 'x'
for i := 0; i < b.N; i++ {
j := index(buf, 'x')
if j != n-1 {
b.Fatal("bad index", j)
}
if s := x >> 10; s<<10 == x {
return fmt.Sprintf("%dK", s)
}
buf[n-1] = '\x00'
return fmt.Sprint(x)
}
func BenchmarkEqual0(b *testing.B) {
var buf [4]byte
buf1 := buf[0:0]
buf2 := buf[1:1]
for i := 0; i < b.N; i++ {
eq := Equal(buf1, buf2)
if !eq {
b.Fatal("bad equal")
}
func benchBytes(b *testing.B, sizes []int, f func(b *testing.B, n int)) {
for _, n := range sizes {
b.Run(valName(n), func(b *testing.B) {
if len(bmbuf) < n {
bmbuf = make([]byte, n)
}
b.SetBytes(int64(n))
f(b, n)
})
}
}
func BenchmarkEqual1(b *testing.B) { bmEqual(b, Equal, 1) }
func BenchmarkEqual6(b *testing.B) { bmEqual(b, Equal, 6) }
func BenchmarkEqual9(b *testing.B) { bmEqual(b, Equal, 9) }
func BenchmarkEqual15(b *testing.B) { bmEqual(b, Equal, 15) }
func BenchmarkEqual16(b *testing.B) { bmEqual(b, Equal, 16) }
func BenchmarkEqual20(b *testing.B) { bmEqual(b, Equal, 20) }
func BenchmarkEqual32(b *testing.B) { bmEqual(b, Equal, 32) }
func BenchmarkEqual4K(b *testing.B) { bmEqual(b, Equal, 4<<10) }
func BenchmarkEqual4M(b *testing.B) { bmEqual(b, Equal, 4<<20) }
func BenchmarkEqual64M(b *testing.B) { bmEqual(b, Equal, 64<<20) }
func BenchmarkEqualPort1(b *testing.B) { bmEqual(b, EqualPortable, 1) }
func BenchmarkEqualPort6(b *testing.B) { bmEqual(b, EqualPortable, 6) }
func BenchmarkEqualPort32(b *testing.B) { bmEqual(b, EqualPortable, 32) }
func BenchmarkEqualPort4K(b *testing.B) { bmEqual(b, EqualPortable, 4<<10) }
func BenchmarkEqualPortable4M(b *testing.B) { bmEqual(b, EqualPortable, 4<<20) }
func BenchmarkEqualPortable64M(b *testing.B) { bmEqual(b, EqualPortable, 64<<20) }
var indexSizes = []int{10, 32, 4 << 10, 4 << 20, 64 << 20}
func bmEqual(b *testing.B, equal func([]byte, []byte) bool, n int) {
if len(bmbuf) < 2*n {
bmbuf = make([]byte, 2*n)
}
b.SetBytes(int64(n))
buf1 := bmbuf[0:n]
buf2 := bmbuf[n : 2*n]
buf1[n-1] = 'x'
buf2[n-1] = 'x'
for i := 0; i < b.N; i++ {
eq := equal(buf1, buf2)
if !eq {
b.Fatal("bad equal")
}
}
buf1[n-1] = '\x00'
buf2[n-1] = '\x00'
func BenchmarkIndexByte(b *testing.B) {
benchBytes(b, indexSizes, bmIndexByte(IndexByte))
}
func BenchmarkIndex32(b *testing.B) { bmIndex(b, Index, 32) }
func BenchmarkIndex4K(b *testing.B) { bmIndex(b, Index, 4<<10) }
func BenchmarkIndex4M(b *testing.B) { bmIndex(b, Index, 4<<20) }
func BenchmarkIndex64M(b *testing.B) { bmIndex(b, Index, 64<<20) }
func bmIndex(b *testing.B, index func([]byte, []byte) int, n int) {
if len(bmbuf) < n {
bmbuf = make([]byte, n)
}
b.SetBytes(int64(n))
buf := bmbuf[0:n]
buf[n-1] = 'x'
for i := 0; i < b.N; i++ {
j := index(buf, buf[n-7:])
if j != n-7 {
b.Fatal("bad index", j)
}
}
buf[n-1] = '\x00'
func BenchmarkIndexBytePortable(b *testing.B) {
benchBytes(b, indexSizes, bmIndexByte(IndexBytePortable))
}
func BenchmarkIndexEasy32(b *testing.B) { bmIndexEasy(b, Index, 32) }
func BenchmarkIndexEasy4K(b *testing.B) { bmIndexEasy(b, Index, 4<<10) }
func BenchmarkIndexEasy4M(b *testing.B) { bmIndexEasy(b, Index, 4<<20) }
func BenchmarkIndexEasy64M(b *testing.B) { bmIndexEasy(b, Index, 64<<20) }
func bmIndexEasy(b *testing.B, index func([]byte, []byte) int, n int) {
if len(bmbuf) < n {
bmbuf = make([]byte, n)
}
b.SetBytes(int64(n))
buf := bmbuf[0:n]
buf[n-1] = 'x'
buf[n-7] = 'x'
for i := 0; i < b.N; i++ {
j := index(buf, buf[n-7:])
if j != n-7 {
b.Fatal("bad index", j)
func bmIndexByte(index func([]byte, byte) int) func(b *testing.B, n int) {
return func(b *testing.B, n int) {
buf := bmbuf[0:n]
buf[n-1] = 'x'
for i := 0; i < b.N; i++ {
j := index(buf, 'x')
if j != n-1 {
b.Fatal("bad index", j)
}
}
buf[n-1] = '\x00'
}
buf[n-1] = '\x00'
buf[n-7] = '\x00'
}
func BenchmarkCount32(b *testing.B) { bmCount(b, Count, 32) }
func BenchmarkCount4K(b *testing.B) { bmCount(b, Count, 4<<10) }
func BenchmarkCount4M(b *testing.B) { bmCount(b, Count, 4<<20) }
func BenchmarkCount64M(b *testing.B) { bmCount(b, Count, 64<<20) }
func bmCount(b *testing.B, count func([]byte, []byte) int, n int) {
if len(bmbuf) < n {
bmbuf = make([]byte, n)
}
b.SetBytes(int64(n))
buf := bmbuf[0:n]
buf[n-1] = 'x'
for i := 0; i < b.N; i++ {
j := count(buf, buf[n-7:])
if j != 1 {
b.Fatal("bad count", j)
func BenchmarkEqual(b *testing.B) {
b.Run("0", func(b *testing.B) {
var buf [4]byte
buf1 := buf[0:0]
buf2 := buf[1:1]
for i := 0; i < b.N; i++ {
eq := Equal(buf1, buf2)
if !eq {
b.Fatal("bad equal")
}
}
}
buf[n-1] = '\x00'
})
sizes := []int{1, 6, 9, 15, 16, 20, 32, 4 << 10, 4 << 20, 64 << 20}
benchBytes(b, sizes, bmEqual(Equal))
}
func BenchmarkCountEasy32(b *testing.B) { bmCountEasy(b, Count, 32) }
func BenchmarkCountEasy4K(b *testing.B) { bmCountEasy(b, Count, 4<<10) }
func BenchmarkCountEasy4M(b *testing.B) { bmCountEasy(b, Count, 4<<20) }
func BenchmarkCountEasy64M(b *testing.B) { bmCountEasy(b, Count, 64<<20) }
func BenchmarkEqualPort(b *testing.B) {
sizes := []int{1, 6, 32, 4 << 10, 4 << 20, 64 << 20}
benchBytes(b, sizes, bmEqual(EqualPortable))
}
func bmCountEasy(b *testing.B, count func([]byte, []byte) int, n int) {
if len(bmbuf) < n {
bmbuf = make([]byte, n)
}
b.SetBytes(int64(n))
buf := bmbuf[0:n]
buf[n-1] = 'x'
buf[n-7] = 'x'
for i := 0; i < b.N; i++ {
j := count(buf, buf[n-7:])
if j != 1 {
b.Fatal("bad count", j)
func bmEqual(equal func([]byte, []byte) bool) func(b *testing.B, n int) {
return func(b *testing.B, n int) {
if len(bmbuf) < 2*n {
bmbuf = make([]byte, 2*n)
}
buf1 := bmbuf[0:n]
buf2 := bmbuf[n : 2*n]
buf1[n-1] = 'x'
buf2[n-1] = 'x'
for i := 0; i < b.N; i++ {
eq := equal(buf1, buf2)
if !eq {
b.Fatal("bad equal")
}
}
buf1[n-1] = '\x00'
buf2[n-1] = '\x00'
}
buf[n-1] = '\x00'
buf[n-7] = '\x00'
}
func BenchmarkIndex(b *testing.B) {
benchBytes(b, indexSizes, func(b *testing.B, n int) {
buf := bmbuf[0:n]
buf[n-1] = 'x'
for i := 0; i < b.N; i++ {
j := Index(buf, buf[n-7:])
if j != n-7 {
b.Fatal("bad index", j)
}
}
buf[n-1] = '\x00'
})
}
func BenchmarkIndexEasy(b *testing.B) {
benchBytes(b, indexSizes, func(b *testing.B, n int) {
buf := bmbuf[0:n]
buf[n-1] = 'x'
buf[n-7] = 'x'
for i := 0; i < b.N; i++ {
j := Index(buf, buf[n-7:])
if j != n-7 {
b.Fatal("bad index", j)
}
}
buf[n-1] = '\x00'
buf[n-7] = '\x00'
})
}
func BenchmarkCount(b *testing.B) {
benchBytes(b, indexSizes, func(b *testing.B, n int) {
buf := bmbuf[0:n]
buf[n-1] = 'x'
for i := 0; i < b.N; i++ {
j := Count(buf, buf[n-7:])
if j != 1 {
b.Fatal("bad count", j)
}
}
buf[n-1] = '\x00'
})
}
func BenchmarkCountEasy(b *testing.B) {
benchBytes(b, indexSizes, func(b *testing.B, n int) {
buf := bmbuf[0:n]
buf[n-1] = 'x'
buf[n-7] = 'x'
for i := 0; i < b.N; i++ {
j := Count(buf, buf[n-7:])
if j != 1 {
b.Fatal("bad count", j)
}
}
buf[n-1] = '\x00'
buf[n-7] = '\x00'
})
}
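// With the b.Run-based helpers above, each size becomes a named
// sub-benchmark, so a single case can be run in isolation, e.g.
// (sketch):
//	go test -bench='Index/4K' bytes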
type ExplodeTest struct {
@@ -1318,33 +1304,24 @@ func BenchmarkRepeat(b *testing.B) {
}
}
func benchmarkBytesCompare(b *testing.B, n int) {
var x = make([]byte, n)
var y = make([]byte, n)
func BenchmarkBytesCompare(b *testing.B) {
for n := 1; n <= 2048; n <<= 1 {
b.Run(fmt.Sprint(n), func(b *testing.B) {
var x = make([]byte, n)
var y = make([]byte, n)
for i := 0; i < n; i++ {
x[i] = 'a'
}
for i := 0; i < n; i++ {
x[i] = 'a'
}
for i := 0; i < n; i++ {
y[i] = 'a'
}
for i := 0; i < n; i++ {
y[i] = 'a'
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
Compare(x, y)
b.ResetTimer()
for i := 0; i < b.N; i++ {
Compare(x, y)
}
})
}
}
func BenchmarkBytesCompare1(b *testing.B) { benchmarkBytesCompare(b, 1) }
func BenchmarkBytesCompare2(b *testing.B) { benchmarkBytesCompare(b, 2) }
func BenchmarkBytesCompare4(b *testing.B) { benchmarkBytesCompare(b, 4) }
func BenchmarkBytesCompare8(b *testing.B) { benchmarkBytesCompare(b, 8) }
func BenchmarkBytesCompare16(b *testing.B) { benchmarkBytesCompare(b, 16) }
func BenchmarkBytesCompare32(b *testing.B) { benchmarkBytesCompare(b, 32) }
func BenchmarkBytesCompare64(b *testing.B) { benchmarkBytesCompare(b, 64) }
func BenchmarkBytesCompare128(b *testing.B) { benchmarkBytesCompare(b, 128) }
func BenchmarkBytesCompare256(b *testing.B) { benchmarkBytesCompare(b, 256) }
func BenchmarkBytesCompare512(b *testing.B) { benchmarkBytesCompare(b, 512) }
func BenchmarkBytesCompare1024(b *testing.B) { benchmarkBytesCompare(b, 1024) }
func BenchmarkBytesCompare2048(b *testing.B) { benchmarkBytesCompare(b, 2048) }


@@ -425,7 +425,7 @@ func (w *Walker) Import(name string) (*types.Package, error) {
w.imported[name] = &importing
root := w.root
if strings.HasPrefix(name, "golang.org/x/") {
if strings.HasPrefix(name, "golang_org/x/") {
root = filepath.Join(root, "vendor")
}


@@ -17,6 +17,7 @@ import (
func setArch(goarch string) (*arch.Arch, *obj.Link) {
os.Setenv("GOOS", "linux") // obj can handle this OS for all architectures.
os.Setenv("GOARCH", goarch)
architecture := arch.Set(goarch)
if architecture == nil {
panic("asm: unrecognized architecture " + goarch)


@@ -172,7 +172,7 @@ func (f *File) saveExprs(x interface{}, context string) {
f.saveRef(x, context)
}
case *ast.CallExpr:
f.saveCall(x)
f.saveCall(x, context)
}
}
@@ -220,7 +220,7 @@ func (f *File) saveRef(n *ast.Expr, context string) {
}
// Save calls to C.xxx for later processing.
func (f *File) saveCall(call *ast.CallExpr) {
func (f *File) saveCall(call *ast.CallExpr, context string) {
sel, ok := call.Fun.(*ast.SelectorExpr)
if !ok {
return
@@ -228,7 +228,8 @@ func (f *File) saveCall(call *ast.CallExpr) {
if l, ok := sel.X.(*ast.Ident); !ok || l.Name != "C" {
return
}
f.Calls = append(f.Calls, call)
c := &Call{Call: call, Deferred: context == "defer"}
f.Calls = append(f.Calls, c)
}
// If a function should be exported add it to ExpFunc.
@@ -401,7 +402,7 @@ func (f *File) walk(x interface{}, context string, visit func(*File, interface{}
case *ast.GoStmt:
f.walk(n.Call, "expr", visit)
case *ast.DeferStmt:
f.walk(n.Call, "expr", visit)
f.walk(n.Call, "defer", visit)
case *ast.ReturnStmt:
f.walk(n.Results, "expr", visit)
case *ast.BranchStmt:


@@ -581,7 +581,7 @@ func (p *Package) mangleName(n *Name) {
func (p *Package) rewriteCalls(f *File) {
for _, call := range f.Calls {
// This is a call to C.xxx; set goname to "xxx".
goname := call.Fun.(*ast.SelectorExpr).Sel.Name
goname := call.Call.Fun.(*ast.SelectorExpr).Sel.Name
if goname == "malloc" {
continue
}
@@ -596,37 +596,60 @@ func (p *Package) rewriteCalls(f *File) {
// rewriteCall rewrites one call to add pointer checks. We replace
// each pointer argument x with _cgoCheckPointer(x).(T).
func (p *Package) rewriteCall(f *File, call *ast.CallExpr, name *Name) {
func (p *Package) rewriteCall(f *File, call *Call, name *Name) {
// Avoid a crash if the number of arguments is
// less than the number of parameters.
// This will be caught when the generated file is compiled.
if len(call.Call.Args) < len(name.FuncType.Params) {
return
}
any := false
for i, param := range name.FuncType.Params {
if len(call.Args) <= i {
// Avoid a crash; this will be caught when the
// generated file is compiled.
return
if p.needsPointerCheck(f, param.Go, call.Call.Args[i]) {
any = true
break
}
}
if !any {
return
}
// We need to rewrite this call.
//
// We are going to rewrite C.f(p) to C.f(_cgoCheckPointer(p)).
// If the call to C.f is deferred, that will check p at the
// point of the defer statement, not when the function is called, so
// rewrite to func(_cgo0 ptype) { C.f(_cgoCheckPointer(_cgo0)) }(p)
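// For example (a sketch, with T standing for some pointer-typed
// parameter):
//	defer C.f(p)
// becomes
//	defer func(_cgo0 T) { C.f(_cgoCheckPointer(_cgo0).(T)) }(p)
// so the pointer is checked when the deferred call actually runs.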
var dargs []ast.Expr
if call.Deferred {
dargs = make([]ast.Expr, len(name.FuncType.Params))
}
for i, param := range name.FuncType.Params {
origArg := call.Call.Args[i]
darg := origArg
if call.Deferred {
dargs[i] = darg
darg = ast.NewIdent(fmt.Sprintf("_cgo%d", i))
call.Call.Args[i] = darg
}
// An untyped nil does not need a pointer check, and
// when _cgoCheckPointer returns the untyped nil the
// type assertion we are going to insert will fail.
// Easier to just skip nil arguments.
// TODO: Note that this fails if nil is shadowed.
if id, ok := call.Args[i].(*ast.Ident); ok && id.Name == "nil" {
continue
}
if !p.needsPointerCheck(f, param.Go) {
if !p.needsPointerCheck(f, param.Go, origArg) {
continue
}
c := &ast.CallExpr{
Fun: ast.NewIdent("_cgoCheckPointer"),
Args: []ast.Expr{
call.Args[i],
darg,
},
}
// Add optional additional arguments for an address
// expression.
c.Args = p.checkAddrArgs(f, c.Args, call.Args[i])
c.Args = p.checkAddrArgs(f, c.Args, origArg)
// _cgoCheckPointer returns interface{}.
// We need to type assert that to the type we want.
@@ -636,7 +659,7 @@ func (p *Package) rewriteCall(f *File, call *ast.CallExpr, name *Name) {
// Instead we use a local variant of _cgoCheckPointer.
var arg ast.Expr
if n := p.unsafeCheckPointerName(param.Go); n != "" {
if n := p.unsafeCheckPointerName(param.Go, call.Deferred); n != "" {
c.Fun = ast.NewIdent(n)
arg = c
} else {
@@ -664,14 +687,73 @@ func (p *Package) rewriteCall(f *File, call *ast.CallExpr, name *Name) {
}
}
call.Args[i] = arg
call.Call.Args[i] = arg
}
if call.Deferred {
params := make([]*ast.Field, len(name.FuncType.Params))
for i, param := range name.FuncType.Params {
ptype := param.Go
if p.hasUnsafePointer(ptype) {
// Avoid generating unsafe.Pointer by using
// interface{}. This works because we are
// going to call a _cgoCheckPointer function
// anyhow.
ptype = &ast.InterfaceType{
Methods: &ast.FieldList{},
}
}
params[i] = &ast.Field{
Names: []*ast.Ident{
ast.NewIdent(fmt.Sprintf("_cgo%d", i)),
},
Type: ptype,
}
}
dbody := &ast.CallExpr{
Fun: call.Call.Fun,
Args: call.Call.Args,
}
call.Call.Fun = &ast.FuncLit{
Type: &ast.FuncType{
Params: &ast.FieldList{
List: params,
},
},
Body: &ast.BlockStmt{
List: []ast.Stmt{
&ast.ExprStmt{
X: dbody,
},
},
},
}
call.Call.Args = dargs
call.Call.Lparen = token.NoPos
call.Call.Rparen = token.NoPos
// There is a Ref pointing to the old call.Call.Fun.
for _, ref := range f.Ref {
if ref.Expr == &call.Call.Fun {
ref.Expr = &dbody.Fun
}
}
}
}
// needsPointerCheck returns whether the type t needs a pointer check.
// This is true if t is a pointer and if the value to which it points
// might contain a pointer.
func (p *Package) needsPointerCheck(f *File, t ast.Expr) bool {
func (p *Package) needsPointerCheck(f *File, t ast.Expr, arg ast.Expr) bool {
// An untyped nil does not need a pointer check, and when
// _cgoCheckPointer returns the untyped nil the type assertion we
// are going to insert will fail. Easier to just skip nil arguments.
// TODO: Note that this fails if nil is shadowed.
if id, ok := arg.(*ast.Ident); ok && id.Name == "nil" {
return false
}
return p.hasPointer(f, t, true)
}
@@ -859,20 +941,31 @@ func (p *Package) isType(t ast.Expr) bool {
// assertion to unsafe.Pointer in our copy of user code. We return
// the name of the _cgoCheckPointer function we are going to build, or
// the empty string if the type does not use unsafe.Pointer.
func (p *Package) unsafeCheckPointerName(t ast.Expr) string {
//
// The deferred parameter is true if this check is for the argument of
// a deferred function. In that case we need to use an empty interface
// as the argument type, because the deferred function we introduce in
// rewriteCall will use an empty interface type, and we can't add a
// type assertion. This is handled by keeping a separate list, and
// writing out the lists separately in writeDefs.
func (p *Package) unsafeCheckPointerName(t ast.Expr, deferred bool) string {
if !p.hasUnsafePointer(t) {
return ""
}
var buf bytes.Buffer
conf.Fprint(&buf, fset, t)
s := buf.String()
for i, t := range p.CgoChecks {
checks := &p.CgoChecks
if deferred {
checks = &p.DeferredCgoChecks
}
for i, t := range *checks {
if s == t {
return p.unsafeCheckPointerNameIndex(i)
return p.unsafeCheckPointerNameIndex(i, deferred)
}
}
p.CgoChecks = append(p.CgoChecks, s)
return p.unsafeCheckPointerNameIndex(len(p.CgoChecks) - 1)
*checks = append(*checks, s)
return p.unsafeCheckPointerNameIndex(len(*checks)-1, deferred)
}
// hasUnsafePointer returns whether the Go type t uses unsafe.Pointer.
@@ -900,7 +993,10 @@ func (p *Package) hasUnsafePointer(t ast.Expr) bool {
// unsafeCheckPointerNameIndex returns the name to use for a
// _cgoCheckPointer variant based on the index in the CgoChecks slice.
func (p *Package) unsafeCheckPointerNameIndex(i int) string {
func (p *Package) unsafeCheckPointerNameIndex(i int, deferred bool) string {
if deferred {
return fmt.Sprintf("_cgoCheckPointerInDefer%d", i)
}
return fmt.Sprintf("_cgoCheckPointer%d", i)
}


@@ -42,7 +42,10 @@ type Package struct {
GoFiles []string // list of Go files
GccFiles []string // list of gcc output files
Preamble string // collected preamble for _cgo_export.h
CgoChecks []string // see unsafeCheckPointerName
// See unsafeCheckPointerName.
CgoChecks []string
DeferredCgoChecks []string
}
// A File collects information about a single Go input file.
@@ -52,7 +55,7 @@ type File struct {
Package string // Package name
Preamble string // C preamble (doc comment on import "C")
Ref []*Ref // all references to C.xxx in AST
Calls []*ast.CallExpr // all calls to C.xxx in AST
Calls []*Call // all calls to C.xxx in AST
ExpFunc []*ExpFunc // exported functions for this file
Name map[string]*Name // map from Go name to Name
}
@@ -66,6 +69,12 @@ func nameKeys(m map[string]*Name) []string {
return ks
}
// A Call refers to a call of a C.xxx function in the AST.
type Call struct {
Call *ast.CallExpr
Deferred bool
}
// A Ref refers to an expression of the form C.xxx in the AST.
type Ref struct {
Name *Name


@@ -112,7 +112,13 @@ func (p *Package) writeDefs() {
}
for i, t := range p.CgoChecks {
n := p.unsafeCheckPointerNameIndex(i)
n := p.unsafeCheckPointerNameIndex(i, false)
fmt.Fprintf(fgo2, "\nfunc %s(p %s, args ...interface{}) %s {\n", n, t, t)
fmt.Fprintf(fgo2, "\treturn _cgoCheckPointer(p, args...).(%s)\n", t)
fmt.Fprintf(fgo2, "}\n")
}
for i, t := range p.DeferredCgoChecks {
n := p.unsafeCheckPointerNameIndex(i, true)
fmt.Fprintf(fgo2, "\nfunc %s(p interface{}, args ...interface{}) %s {\n", n, t)
fmt.Fprintf(fgo2, "\treturn _cgoCheckPointer(p, args...).(%s)\n", t)
fmt.Fprintf(fgo2, "}\n")


@@ -156,6 +156,36 @@ func opregreg(op obj.As, dest, src int16) *obj.Prog {
return p
}
// DUFFZERO consists of repeated blocks of 4 MOVUPSs + ADD;
// see runtime/mkduff.go.
func duffStart(size int64) int64 {
x, _ := duff(size)
return x
}
func duffAdj(size int64) int64 {
_, x := duff(size)
return x
}
// duff returns the offset (from duffzero, in bytes) and pointer adjust (in bytes)
// required to use the duffzero mechanism for a block of the given size.
func duff(size int64) (int64, int64) {
if size < 32 || size > 1024 || size%dzClearStep != 0 {
panic("bad duffzero size")
}
steps := size / dzClearStep
blocks := steps / dzBlockLen
steps %= dzBlockLen
off := dzBlockSize * (dzBlocks - blocks)
var adj int64
if steps != 0 {
off -= dzAddSize
off -= dzMovSize * steps
adj -= dzClearStep * (dzBlockLen - steps)
}
return off, adj
}
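// Worked example (a sketch; dzClearStep = 16 and dzBlockLen = 4 match
// the "4 MOVUPSs + ADD" layout above, while dzBlocks, dzBlockSize,
// dzMovSize and dzAddSize are the block count and per-instruction byte
// sizes defined alongside in this package):
// for size = 96: steps = 6, blocks = 1, and steps %= 4 leaves 2, so
//	off = dzBlockSize*(dzBlocks-1) - dzAddSize - 2*dzMovSize
//	adj = -16 * (4 - 2) = -32
// i.e. enter duffzero at the last two MOVUPSs of a block, after backing
// DI up by 32 bytes so those trailing stores land at the buffer start.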
func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
s.SetLineno(v.Line)
switch v.Op {
@@ -209,89 +239,87 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
}
opregreg(v.Op.Asm(), r, gc.SSARegNum(v.Args[1]))
case ssa.OpAMD64DIVQ, ssa.OpAMD64DIVL, ssa.OpAMD64DIVW,
ssa.OpAMD64DIVQU, ssa.OpAMD64DIVLU, ssa.OpAMD64DIVWU,
ssa.OpAMD64MODQ, ssa.OpAMD64MODL, ssa.OpAMD64MODW,
ssa.OpAMD64MODQU, ssa.OpAMD64MODLU, ssa.OpAMD64MODWU:
case ssa.OpAMD64DIVQU, ssa.OpAMD64DIVLU, ssa.OpAMD64DIVWU:
// Arg[0] (the dividend) is in AX.
// Arg[1] (the divisor) can be in any other register.
// Result[0] (the quotient) is in AX.
// Result[1] (the remainder) is in DX.
r := gc.SSARegNum(v.Args[1])
// Arg[0] is already in AX as it's the only register we allow
// and AX is the only output
x := gc.SSARegNum(v.Args[1])
// CPU faults upon signed overflow, which occurs when most
// negative int is divided by -1.
var j *obj.Prog
if v.Op == ssa.OpAMD64DIVQ || v.Op == ssa.OpAMD64DIVL ||
v.Op == ssa.OpAMD64DIVW || v.Op == ssa.OpAMD64MODQ ||
v.Op == ssa.OpAMD64MODL || v.Op == ssa.OpAMD64MODW {
var c *obj.Prog
switch v.Op {
case ssa.OpAMD64DIVQ, ssa.OpAMD64MODQ:
c = gc.Prog(x86.ACMPQ)
j = gc.Prog(x86.AJEQ)
// go ahead and sign extend to save doing it later
gc.Prog(x86.ACQO)
case ssa.OpAMD64DIVL, ssa.OpAMD64MODL:
c = gc.Prog(x86.ACMPL)
j = gc.Prog(x86.AJEQ)
gc.Prog(x86.ACDQ)
case ssa.OpAMD64DIVW, ssa.OpAMD64MODW:
c = gc.Prog(x86.ACMPW)
j = gc.Prog(x86.AJEQ)
gc.Prog(x86.ACWD)
}
c.From.Type = obj.TYPE_REG
c.From.Reg = x
c.To.Type = obj.TYPE_CONST
c.To.Offset = -1
j.To.Type = obj.TYPE_BRANCH
}
// for unsigned ints, we sign extend by setting DX = 0
// signed ints were sign extended above
if v.Op == ssa.OpAMD64DIVQU || v.Op == ssa.OpAMD64MODQU ||
v.Op == ssa.OpAMD64DIVLU || v.Op == ssa.OpAMD64MODLU ||
v.Op == ssa.OpAMD64DIVWU || v.Op == ssa.OpAMD64MODWU {
c := gc.Prog(x86.AXORQ)
c.From.Type = obj.TYPE_REG
c.From.Reg = x86.REG_DX
c.To.Type = obj.TYPE_REG
c.To.Reg = x86.REG_DX
}
// Zero extend dividend.
c := gc.Prog(x86.AXORL)
c.From.Type = obj.TYPE_REG
c.From.Reg = x86.REG_DX
c.To.Type = obj.TYPE_REG
c.To.Reg = x86.REG_DX
// Issue divide.
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG
p.From.Reg = x
p.From.Reg = r
// signed division, rest of the check for -1 case
if j != nil {
j2 := gc.Prog(obj.AJMP)
j2.To.Type = obj.TYPE_BRANCH
case ssa.OpAMD64DIVQ, ssa.OpAMD64DIVL, ssa.OpAMD64DIVW:
// Arg[0] (the dividend) is in AX.
// Arg[1] (the divisor) can be in any other register.
// Result[0] (the quotient) is in AX.
// Result[1] (the remainder) is in DX.
r := gc.SSARegNum(v.Args[1])
var n *obj.Prog
if v.Op == ssa.OpAMD64DIVQ || v.Op == ssa.OpAMD64DIVL ||
v.Op == ssa.OpAMD64DIVW {
// n * -1 = -n
n = gc.Prog(x86.ANEGQ)
n.To.Type = obj.TYPE_REG
n.To.Reg = x86.REG_AX
} else {
// n % -1 == 0
n = gc.Prog(x86.AXORQ)
n.From.Type = obj.TYPE_REG
n.From.Reg = x86.REG_DX
n.To.Type = obj.TYPE_REG
n.To.Reg = x86.REG_DX
}
j.To.Val = n
j2.To.Val = s.Pc()
// CPU faults upon signed overflow, which occurs when the most
// negative int is divided by -1. Handle divide by -1 as a special case.
var c *obj.Prog
switch v.Op {
case ssa.OpAMD64DIVQ:
c = gc.Prog(x86.ACMPQ)
case ssa.OpAMD64DIVL:
c = gc.Prog(x86.ACMPL)
case ssa.OpAMD64DIVW:
c = gc.Prog(x86.ACMPW)
}
c.From.Type = obj.TYPE_REG
c.From.Reg = r
c.To.Type = obj.TYPE_CONST
c.To.Offset = -1
j1 := gc.Prog(x86.AJEQ)
j1.To.Type = obj.TYPE_BRANCH
// Sign extend dividend.
switch v.Op {
case ssa.OpAMD64DIVQ:
gc.Prog(x86.ACQO)
case ssa.OpAMD64DIVL:
gc.Prog(x86.ACDQ)
case ssa.OpAMD64DIVW:
gc.Prog(x86.ACWD)
}
// Issue divide.
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG
p.From.Reg = r
// Skip over -1 fixup code.
j2 := gc.Prog(obj.AJMP)
j2.To.Type = obj.TYPE_BRANCH
// Issue -1 fixup code.
// n / -1 = -n
n1 := gc.Prog(x86.ANEGQ)
n1.To.Type = obj.TYPE_REG
n1.To.Reg = x86.REG_AX
// n % -1 == 0
n2 := gc.Prog(x86.AXORL)
n2.From.Type = obj.TYPE_REG
n2.From.Reg = x86.REG_DX
n2.To.Type = obj.TYPE_REG
n2.To.Reg = x86.REG_DX
// TODO(khr): issue only the -1 fixup code we need.
// For instance, if only the quotient is used, no point in zeroing the remainder.
j1.To.Val = n1
j2.To.Val = s.Pc()
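// For DIVQ the emitted sequence is therefore, roughly (a sketch; Rx is
// the divisor register):
//	CMPQ Rx, $-1
//	JEQ  fixup
//	CQO           // sign-extend AX into DX
//	IDIVQ Rx      // quotient in AX, remainder in DX
//	JMP  done
// fixup:
//	NEGQ AX       // n / -1 = -n
//	XORL DX, DX   // n % -1 = 0
// done: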
case ssa.OpAMD64HMULQ, ssa.OpAMD64HMULL, ssa.OpAMD64HMULW, ssa.OpAMD64HMULB,
ssa.OpAMD64HMULQU, ssa.OpAMD64HMULLU, ssa.OpAMD64HMULWU, ssa.OpAMD64HMULBU:
@@ -470,8 +498,8 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
gc.AddAux(&p.From, v)
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpAMD64LEAQ:
p := gc.Prog(x86.ALEAQ)
case ssa.OpAMD64LEAQ, ssa.OpAMD64LEAL:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_MEM
p.From.Reg = gc.SSARegNum(v.Args[0])
gc.AddAux(&p.From, v)
@@ -649,10 +677,20 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
ssa.OpAMD64CVTSS2SD, ssa.OpAMD64CVTSD2SS:
opregreg(v.Op.Asm(), gc.SSARegNum(v), gc.SSARegNum(v.Args[0]))
case ssa.OpAMD64DUFFZERO:
p := gc.Prog(obj.ADUFFZERO)
off := duffStart(v.AuxInt)
adj := duffAdj(v.AuxInt)
var p *obj.Prog
if adj != 0 {
p = gc.Prog(x86.AADDQ)
p.From.Type = obj.TYPE_CONST
p.From.Offset = adj
p.To.Type = obj.TYPE_REG
p.To.Reg = x86.REG_DI
}
p = gc.Prog(obj.ADUFFZERO)
p.To.Type = obj.TYPE_ADDR
p.To.Sym = gc.Linksym(gc.Pkglookup("duffzero", gc.Runtimepkg))
p.To.Offset = v.AuxInt
p.To.Offset = off
case ssa.OpAMD64MOVOconst:
if v.AuxInt != 0 {
v.Unimplementedf("MOVOconst can only do constant=0")
@@ -665,7 +703,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
p.To.Sym = gc.Linksym(gc.Pkglookup("duffcopy", gc.Runtimepkg))
p.To.Offset = v.AuxInt
case ssa.OpCopy, ssa.OpAMD64MOVQconvert: // TODO: use MOVQreg for reg->reg copies instead of OpCopy?
case ssa.OpCopy, ssa.OpAMD64MOVQconvert, ssa.OpAMD64MOVLconvert: // TODO: use MOVQreg for reg->reg copies instead of OpCopy?
if v.Type.IsMemory() {
return
}
@@ -714,27 +752,14 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
p.To.Name = obj.NAME_AUTO
}
case ssa.OpPhi:
// just check to make sure regalloc and stackalloc did it right
if v.Type.IsMemory() {
return
}
f := v.Block.Func
loc := f.RegAlloc[v.ID]
for _, a := range v.Args {
if aloc := f.RegAlloc[a.ID]; aloc != loc { // TODO: .Equal() instead?
v.Fatalf("phi arg at different location than phi: %v @ %v, but arg %v @ %v\n%s\n", v, loc, a, aloc, v.Block.Func)
}
}
gc.CheckLoweredPhi(v)
case ssa.OpInitMem:
// memory arg needs no code
case ssa.OpArg:
// input args need no code
case ssa.OpAMD64LoweredGetClosurePtr:
// Output is hardwired to DX only,
// and DX contains the closure pointer on
// closure entry, and this "instruction"
// is scheduled to the very beginning
// of the entry block.
// Closure pointer is DX.
gc.CheckLoweredGetClosurePtr(v)
case ssa.OpAMD64LoweredGetG:
r := gc.SSARegNum(v)
// See the comments in cmd/internal/obj/x86/obj6.go
@@ -831,6 +856,8 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
p.To.Reg = gc.SSARegNum(v)
case ssa.OpSP, ssa.OpSB:
// nothing to do
case ssa.OpSelect0, ssa.OpSelect1:
// nothing to do
case ssa.OpAMD64SETEQ, ssa.OpAMD64SETNE,
ssa.OpAMD64SETL, ssa.OpAMD64SETLE,
ssa.OpAMD64SETG, ssa.OpAMD64SETGE,


@@ -79,6 +79,8 @@ var progtable = [arm.ALAST & obj.AMask]obj.ProgInfo{
arm.AMULF & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | RightRdwr},
arm.ASUBD & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | RightRdwr},
arm.ASUBF & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | RightRdwr},
arm.ANEGD & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | RightRdwr},
arm.ANEGF & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | RightRdwr},
arm.ASQRTD & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | RightRdwr},
// Conversions.

File diff suppressed because it is too large.


@@ -6,6 +6,7 @@ package arm64
import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/internal/obj/arm64"
)
@@ -61,6 +62,11 @@ func Main() {
gc.Thearch.Doregbits = doregbits
gc.Thearch.Regnames = regnames
gc.Thearch.SSARegToReg = ssaRegToReg
gc.Thearch.SSAMarkMoves = func(s *gc.SSAGenState, b *ssa.Block) {}
gc.Thearch.SSAGenValue = ssaGenValue
gc.Thearch.SSAGenBlock = ssaGenBlock
gc.Main()
gc.Exit(0)
}


@@ -44,24 +44,37 @@ var progtable = [arm64.ALAST & obj.AMask]obj.ProgInfo{
// Integer
arm64.AADD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ASUB & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ANEG & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ANEG & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite}, // why RegRead? revisit once the old backend is gone
arm64.AAND & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AORR & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AEOR & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ABIC & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AMVN & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RightWrite},
arm64.AMUL & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AMULW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ASMULL & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AUMULL & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ASMULH & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AUMULH & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ASMULH & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AUMULH & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ASDIV & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AUDIV & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ASDIVW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AUDIVW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AREM & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AUREM & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AREMW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AUREMW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ALSL & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ALSR & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AASR & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ACMP & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead},
arm64.ACMPW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead},
arm64.AADC & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite | gc.UseCarry},
arm64.AROR & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.ARORW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
arm64.AADDS & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite | gc.SetCarry},
arm64.ACSET & obj.AMask: {Flags: gc.SizeQ | gc.RightWrite},
arm64.ACSEL & obj.AMask: {Flags: gc.SizeQ | gc.RegRead | gc.RightWrite},
// Floating point.
arm64.AFADDD & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | gc.RegRead | gc.RightWrite},


@@ -0,0 +1,865 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package arm64
import (
"math"
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/internal/obj"
"cmd/internal/obj/arm64"
)
var ssaRegToReg = []int16{
arm64.REG_R0,
arm64.REG_R1,
arm64.REG_R2,
arm64.REG_R3,
arm64.REG_R4,
arm64.REG_R5,
arm64.REG_R6,
arm64.REG_R7,
arm64.REG_R8,
arm64.REG_R9,
arm64.REG_R10,
arm64.REG_R11,
arm64.REG_R12,
arm64.REG_R13,
arm64.REG_R14,
arm64.REG_R15,
arm64.REG_R16,
arm64.REG_R17,
arm64.REG_R18, // platform register, not used
arm64.REG_R19,
arm64.REG_R20,
arm64.REG_R21,
arm64.REG_R22,
arm64.REG_R23,
arm64.REG_R24,
arm64.REG_R25,
arm64.REG_R26,
// R27 = REGTMP not used in regalloc
arm64.REGG, // R28
arm64.REG_R29, // frame pointer, not used
// R30 = REGLINK not used in regalloc
arm64.REGSP, // R31
arm64.REG_F0,
arm64.REG_F1,
arm64.REG_F2,
arm64.REG_F3,
arm64.REG_F4,
arm64.REG_F5,
arm64.REG_F6,
arm64.REG_F7,
arm64.REG_F8,
arm64.REG_F9,
arm64.REG_F10,
arm64.REG_F11,
arm64.REG_F12,
arm64.REG_F13,
arm64.REG_F14,
arm64.REG_F15,
arm64.REG_F16,
arm64.REG_F17,
arm64.REG_F18,
arm64.REG_F19,
arm64.REG_F20,
arm64.REG_F21,
arm64.REG_F22,
arm64.REG_F23,
arm64.REG_F24,
arm64.REG_F25,
arm64.REG_F26,
arm64.REG_F27,
arm64.REG_F28,
arm64.REG_F29,
arm64.REG_F30,
arm64.REG_F31,
arm64.REG_NZCV, // flag
0, // SB isn't a real register. We fill an Addr.Reg field with 0 in this case.
}
// Smallest possible faulting page at address zero,
// see ../../../../runtime/mheap.go:/minPhysPageSize
const minZeroPage = 4096
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
return arm64.AFMOVS
case 8:
return arm64.AFMOVD
}
} else {
switch t.Size() {
case 1:
if t.IsSigned() {
return arm64.AMOVB
} else {
return arm64.AMOVBU
}
case 2:
if t.IsSigned() {
return arm64.AMOVH
} else {
return arm64.AMOVHU
}
case 4:
if t.IsSigned() {
return arm64.AMOVW
} else {
return arm64.AMOVWU
}
case 8:
return arm64.AMOVD
}
}
panic("bad load type")
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
return arm64.AFMOVS
case 8:
return arm64.AFMOVD
}
} else {
switch t.Size() {
case 1:
return arm64.AMOVB
case 2:
return arm64.AMOVH
case 4:
return arm64.AMOVW
case 8:
return arm64.AMOVD
}
}
panic("bad store type")
}
// makeshift encodes a register shifted by a constant, used as an Offset in Prog
func makeshift(reg int16, typ int64, s int64) int64 {
return int64(reg&31)<<16 | typ | (s&63)<<10
}
// genshift generates a Prog for r = r0 op (r1 shifted by s)
func genshift(as obj.As, r0, r1, r int16, typ int64, s int64) *obj.Prog {
p := gc.Prog(as)
p.From.Type = obj.TYPE_SHIFT
p.From.Offset = makeshift(r1, typ, s)
p.Reg = r0
if r != 0 {
p.To.Type = obj.TYPE_REG
p.To.Reg = r
}
return p
}
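// For example (sketch): genshift(arm64.AADD, r0, r1, r, arm64.SHIFT_LL, 3)
// emits an ADD whose second operand is r1 shifted left by 3 bits,
// i.e. r = r0 + (r1 << 3).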
func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
s.SetLineno(v.Line)
switch v.Op {
case ssa.OpInitMem:
// memory arg needs no code
case ssa.OpArg:
// input args need no code
case ssa.OpSP, ssa.OpSB, ssa.OpGetG:
// nothing to do
case ssa.OpCopy, ssa.OpARM64MOVDconvert, ssa.OpARM64MOVDreg:
if v.Type.IsMemory() {
return
}
x := gc.SSARegNum(v.Args[0])
y := gc.SSARegNum(v)
if x == y {
return
}
as := arm64.AMOVD
if v.Type.IsFloat() {
switch v.Type.Size() {
case 4:
as = arm64.AFMOVS
case 8:
as = arm64.AFMOVD
default:
panic("bad float size")
}
}
p := gc.Prog(as)
p.From.Type = obj.TYPE_REG
p.From.Reg = x
p.To.Type = obj.TYPE_REG
p.To.Reg = y
case ssa.OpARM64MOVDnop:
if gc.SSARegNum(v) != gc.SSARegNum(v.Args[0]) {
v.Fatalf("input[0] and output not in same register %s", v.LongString())
}
// nothing to do
case ssa.OpLoadReg:
if v.Type.IsFlags() {
v.Unimplementedf("load flags not implemented: %v", v.LongString())
return
}
p := gc.Prog(loadByType(v.Type))
n, off := gc.AutoVar(v.Args[0])
p.From.Type = obj.TYPE_MEM
p.From.Node = n
p.From.Sym = gc.Linksym(n.Sym)
p.From.Offset = off
if n.Class == gc.PPARAM || n.Class == gc.PPARAMOUT {
p.From.Name = obj.NAME_PARAM
p.From.Offset += n.Xoffset
} else {
p.From.Name = obj.NAME_AUTO
}
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpPhi:
gc.CheckLoweredPhi(v)
case ssa.OpStoreReg:
if v.Type.IsFlags() {
v.Unimplementedf("store flags not implemented: %v", v.LongString())
return
}
p := gc.Prog(storeByType(v.Type))
p.From.Type = obj.TYPE_REG
p.From.Reg = gc.SSARegNum(v.Args[0])
n, off := gc.AutoVar(v)
p.To.Type = obj.TYPE_MEM
p.To.Node = n
p.To.Sym = gc.Linksym(n.Sym)
p.To.Offset = off
if n.Class == gc.PPARAM || n.Class == gc.PPARAMOUT {
p.To.Name = obj.NAME_PARAM
p.To.Offset += n.Xoffset
} else {
p.To.Name = obj.NAME_AUTO
}
case ssa.OpARM64ADD,
ssa.OpARM64SUB,
ssa.OpARM64AND,
ssa.OpARM64OR,
ssa.OpARM64XOR,
ssa.OpARM64BIC,
ssa.OpARM64MUL,
ssa.OpARM64MULW,
ssa.OpARM64MULH,
ssa.OpARM64UMULH,
ssa.OpARM64MULL,
ssa.OpARM64UMULL,
ssa.OpARM64DIV,
ssa.OpARM64UDIV,
ssa.OpARM64DIVW,
ssa.OpARM64UDIVW,
ssa.OpARM64MOD,
ssa.OpARM64UMOD,
ssa.OpARM64MODW,
ssa.OpARM64UMODW,
ssa.OpARM64SLL,
ssa.OpARM64SRL,
ssa.OpARM64SRA,
ssa.OpARM64FADDS,
ssa.OpARM64FADDD,
ssa.OpARM64FSUBS,
ssa.OpARM64FSUBD,
ssa.OpARM64FMULS,
ssa.OpARM64FMULD,
ssa.OpARM64FDIVS,
ssa.OpARM64FDIVD:
r := gc.SSARegNum(v)
r1 := gc.SSARegNum(v.Args[0])
r2 := gc.SSARegNum(v.Args[1])
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG
p.From.Reg = r2
p.Reg = r1
p.To.Type = obj.TYPE_REG
p.To.Reg = r
case ssa.OpARM64ADDconst,
ssa.OpARM64SUBconst,
ssa.OpARM64ANDconst,
ssa.OpARM64ORconst,
ssa.OpARM64XORconst,
ssa.OpARM64BICconst,
ssa.OpARM64SLLconst,
ssa.OpARM64SRLconst,
ssa.OpARM64SRAconst,
ssa.OpARM64RORconst,
ssa.OpARM64RORWconst:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_CONST
p.From.Offset = v.AuxInt
p.Reg = gc.SSARegNum(v.Args[0])
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpARM64ADDshiftLL,
ssa.OpARM64SUBshiftLL,
ssa.OpARM64ANDshiftLL,
ssa.OpARM64ORshiftLL,
ssa.OpARM64XORshiftLL,
ssa.OpARM64BICshiftLL:
genshift(v.Op.Asm(), gc.SSARegNum(v.Args[0]), gc.SSARegNum(v.Args[1]), gc.SSARegNum(v), arm64.SHIFT_LL, v.AuxInt)
case ssa.OpARM64ADDshiftRL,
ssa.OpARM64SUBshiftRL,
ssa.OpARM64ANDshiftRL,
ssa.OpARM64ORshiftRL,
ssa.OpARM64XORshiftRL,
ssa.OpARM64BICshiftRL:
genshift(v.Op.Asm(), gc.SSARegNum(v.Args[0]), gc.SSARegNum(v.Args[1]), gc.SSARegNum(v), arm64.SHIFT_LR, v.AuxInt)
case ssa.OpARM64ADDshiftRA,
ssa.OpARM64SUBshiftRA,
ssa.OpARM64ANDshiftRA,
ssa.OpARM64ORshiftRA,
ssa.OpARM64XORshiftRA,
ssa.OpARM64BICshiftRA:
genshift(v.Op.Asm(), gc.SSARegNum(v.Args[0]), gc.SSARegNum(v.Args[1]), gc.SSARegNum(v), arm64.SHIFT_AR, v.AuxInt)
case ssa.OpARM64MOVDconst:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_CONST
p.From.Offset = v.AuxInt
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpARM64FMOVSconst,
ssa.OpARM64FMOVDconst:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_FCONST
p.From.Val = math.Float64frombits(uint64(v.AuxInt))
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpARM64CMP,
ssa.OpARM64CMPW,
ssa.OpARM64CMN,
ssa.OpARM64CMNW,
ssa.OpARM64FCMPS,
ssa.OpARM64FCMPD:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG
p.From.Reg = gc.SSARegNum(v.Args[1])
p.Reg = gc.SSARegNum(v.Args[0])
case ssa.OpARM64CMPconst,
ssa.OpARM64CMPWconst,
ssa.OpARM64CMNconst,
ssa.OpARM64CMNWconst:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_CONST
p.From.Offset = v.AuxInt
p.Reg = gc.SSARegNum(v.Args[0])
case ssa.OpARM64CMPshiftLL:
genshift(v.Op.Asm(), gc.SSARegNum(v.Args[0]), gc.SSARegNum(v.Args[1]), 0, arm64.SHIFT_LL, v.AuxInt)
case ssa.OpARM64CMPshiftRL:
genshift(v.Op.Asm(), gc.SSARegNum(v.Args[0]), gc.SSARegNum(v.Args[1]), 0, arm64.SHIFT_LR, v.AuxInt)
case ssa.OpARM64CMPshiftRA:
genshift(v.Op.Asm(), gc.SSARegNum(v.Args[0]), gc.SSARegNum(v.Args[1]), 0, arm64.SHIFT_AR, v.AuxInt)
case ssa.OpARM64MOVDaddr:
p := gc.Prog(arm64.AMOVD)
p.From.Type = obj.TYPE_ADDR
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
var wantreg string
// MOVD $sym+off(base), R
// the assembler expands it as the following:
// - base is SP: add constant offset to SP
//   when the constant is large, the tmp register may be used
// - base is SB: load external address from constant pool (use relocation)
switch v.Aux.(type) {
default:
v.Fatalf("aux is of unknown type %T", v.Aux)
case *ssa.ExternSymbol:
wantreg = "SB"
gc.AddAux(&p.From, v)
case *ssa.ArgSymbol, *ssa.AutoSymbol:
wantreg = "SP"
gc.AddAux(&p.From, v)
case nil:
// No sym, just MOVD $off(SP), R
wantreg = "SP"
p.From.Reg = arm64.REGSP
p.From.Offset = v.AuxInt
}
if reg := gc.SSAReg(v.Args[0]); reg.Name() != wantreg {
v.Fatalf("bad reg %s for symbol type %T, want %s", reg.Name(), v.Aux, wantreg)
}
case ssa.OpARM64MOVBload,
ssa.OpARM64MOVBUload,
ssa.OpARM64MOVHload,
ssa.OpARM64MOVHUload,
ssa.OpARM64MOVWload,
ssa.OpARM64MOVWUload,
ssa.OpARM64MOVDload,
ssa.OpARM64FMOVSload,
ssa.OpARM64FMOVDload:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_MEM
p.From.Reg = gc.SSARegNum(v.Args[0])
gc.AddAux(&p.From, v)
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpARM64MOVBstore,
ssa.OpARM64MOVHstore,
ssa.OpARM64MOVWstore,
ssa.OpARM64MOVDstore,
ssa.OpARM64FMOVSstore,
ssa.OpARM64FMOVDstore:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG
p.From.Reg = gc.SSARegNum(v.Args[1])
p.To.Type = obj.TYPE_MEM
p.To.Reg = gc.SSARegNum(v.Args[0])
gc.AddAux(&p.To, v)
case ssa.OpARM64MOVBstorezero,
ssa.OpARM64MOVHstorezero,
ssa.OpARM64MOVWstorezero,
ssa.OpARM64MOVDstorezero:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG
p.From.Reg = arm64.REGZERO
p.To.Type = obj.TYPE_MEM
p.To.Reg = gc.SSARegNum(v.Args[0])
gc.AddAux(&p.To, v)
case ssa.OpARM64MOVBreg,
ssa.OpARM64MOVBUreg,
ssa.OpARM64MOVHreg,
ssa.OpARM64MOVHUreg,
ssa.OpARM64MOVWreg,
ssa.OpARM64MOVWUreg:
a := v.Args[0]
for a.Op == ssa.OpCopy || a.Op == ssa.OpARM64MOVDreg {
a = a.Args[0]
}
if a.Op == ssa.OpLoadReg {
t := a.Type
switch {
case v.Op == ssa.OpARM64MOVBreg && t.Size() == 1 && t.IsSigned(),
v.Op == ssa.OpARM64MOVBUreg && t.Size() == 1 && !t.IsSigned(),
v.Op == ssa.OpARM64MOVHreg && t.Size() == 2 && t.IsSigned(),
v.Op == ssa.OpARM64MOVHUreg && t.Size() == 2 && !t.IsSigned(),
v.Op == ssa.OpARM64MOVWreg && t.Size() == 4 && t.IsSigned(),
v.Op == ssa.OpARM64MOVWUreg && t.Size() == 4 && !t.IsSigned():
// arg is a proper-typed load, already zero/sign-extended, don't extend again
if gc.SSARegNum(v) == gc.SSARegNum(v.Args[0]) {
return
}
p := gc.Prog(arm64.AMOVD)
p.From.Type = obj.TYPE_REG
p.From.Reg = gc.SSARegNum(v.Args[0])
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
return
default:
}
}
fallthrough
case ssa.OpARM64MVN,
ssa.OpARM64NEG,
ssa.OpARM64FNEGS,
ssa.OpARM64FNEGD,
ssa.OpARM64FSQRTD,
ssa.OpARM64FCVTZSSW,
ssa.OpARM64FCVTZSDW,
ssa.OpARM64FCVTZUSW,
ssa.OpARM64FCVTZUDW,
ssa.OpARM64FCVTZSS,
ssa.OpARM64FCVTZSD,
ssa.OpARM64FCVTZUS,
ssa.OpARM64FCVTZUD,
ssa.OpARM64SCVTFWS,
ssa.OpARM64SCVTFWD,
ssa.OpARM64SCVTFS,
ssa.OpARM64SCVTFD,
ssa.OpARM64UCVTFWS,
ssa.OpARM64UCVTFWD,
ssa.OpARM64UCVTFS,
ssa.OpARM64UCVTFD,
ssa.OpARM64FCVTSD,
ssa.OpARM64FCVTDS:
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG
p.From.Reg = gc.SSARegNum(v.Args[0])
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpARM64CSELULT,
ssa.OpARM64CSELULT0:
r1 := int16(arm64.REGZERO)
if v.Op == ssa.OpARM64CSELULT {
r1 = gc.SSARegNum(v.Args[1])
}
p := gc.Prog(v.Op.Asm())
p.From.Type = obj.TYPE_REG // assembler encodes conditional bits in Reg
p.From.Reg = arm64.COND_LO
p.Reg = gc.SSARegNum(v.Args[0])
p.From3 = &obj.Addr{Type: obj.TYPE_REG, Reg: r1}
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpARM64DUFFZERO:
// runtime.duffzero expects start address - 8 in R16
p := gc.Prog(arm64.ASUB)
p.From.Type = obj.TYPE_CONST
p.From.Offset = 8
p.Reg = gc.SSARegNum(v.Args[0])
p.To.Type = obj.TYPE_REG
p.To.Reg = arm64.REG_R16
p = gc.Prog(obj.ADUFFZERO)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
p.To.Sym = gc.Linksym(gc.Pkglookup("duffzero", gc.Runtimepkg))
p.To.Offset = v.AuxInt
case ssa.OpARM64LoweredZero:
// MOVD.P ZR, 8(R16)
// CMP Rarg1, R16
// BLE -2(PC)
// arg1 is the address of the last element to zero
// auxint is alignment
var sz int64
var mov obj.As
switch {
case v.AuxInt%8 == 0:
sz = 8
mov = arm64.AMOVD
case v.AuxInt%4 == 0:
sz = 4
mov = arm64.AMOVW
case v.AuxInt%2 == 0:
sz = 2
mov = arm64.AMOVH
default:
sz = 1
mov = arm64.AMOVB
}
p := gc.Prog(mov)
p.Scond = arm64.C_XPOST
p.From.Type = obj.TYPE_REG
p.From.Reg = arm64.REGZERO
p.To.Type = obj.TYPE_MEM
p.To.Reg = arm64.REG_R16
p.To.Offset = sz
p2 := gc.Prog(arm64.ACMP)
p2.From.Type = obj.TYPE_REG
p2.From.Reg = gc.SSARegNum(v.Args[1])
p2.Reg = arm64.REG_R16
p3 := gc.Prog(arm64.ABLE)
p3.To.Type = obj.TYPE_BRANCH
gc.Patch(p3, p)
case ssa.OpARM64LoweredMove:
// MOVD.P 8(R16), Rtmp
// MOVD.P Rtmp, 8(R17)
// CMP Rarg2, R16
// BLE -3(PC)
// arg2 is the address of the last element of src
// auxint is alignment
var sz int64
var mov obj.As
switch {
case v.AuxInt%8 == 0:
sz = 8
mov = arm64.AMOVD
case v.AuxInt%4 == 0:
sz = 4
mov = arm64.AMOVW
case v.AuxInt%2 == 0:
sz = 2
mov = arm64.AMOVH
default:
sz = 1
mov = arm64.AMOVB
}
p := gc.Prog(mov)
p.Scond = arm64.C_XPOST
p.From.Type = obj.TYPE_MEM
p.From.Reg = arm64.REG_R16
p.From.Offset = sz
p.To.Type = obj.TYPE_REG
p.To.Reg = arm64.REGTMP
p2 := gc.Prog(mov)
p2.Scond = arm64.C_XPOST
p2.From.Type = obj.TYPE_REG
p2.From.Reg = arm64.REGTMP
p2.To.Type = obj.TYPE_MEM
p2.To.Reg = arm64.REG_R17
p2.To.Offset = sz
p3 := gc.Prog(arm64.ACMP)
p3.From.Type = obj.TYPE_REG
p3.From.Reg = gc.SSARegNum(v.Args[2])
p3.Reg = arm64.REG_R16
p4 := gc.Prog(arm64.ABLE)
p4.To.Type = obj.TYPE_BRANCH
gc.Patch(p4, p)
case ssa.OpARM64CALLstatic:
if v.Aux.(*gc.Sym) == gc.Deferreturn.Sym {
// Deferred calls will appear to be returning to
// the CALL deferreturn(SB) that we are about to emit.
// However, the stack trace code will show the line
// of the instruction byte before the return PC.
// To avoid that being an unrelated instruction,
// insert an actual hardware NOP that will have the right line number.
// This is different from obj.ANOP, which is a virtual no-op
// that doesn't make it into the instruction stream.
ginsnop()
}
p := gc.Prog(obj.ACALL)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
p.To.Sym = gc.Linksym(v.Aux.(*gc.Sym))
if gc.Maxarg < v.AuxInt {
gc.Maxarg = v.AuxInt
}
case ssa.OpARM64CALLclosure:
p := gc.Prog(obj.ACALL)
p.To.Type = obj.TYPE_MEM
p.To.Offset = 0
p.To.Reg = gc.SSARegNum(v.Args[0])
if gc.Maxarg < v.AuxInt {
gc.Maxarg = v.AuxInt
}
case ssa.OpARM64CALLdefer:
p := gc.Prog(obj.ACALL)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
p.To.Sym = gc.Linksym(gc.Deferproc.Sym)
if gc.Maxarg < v.AuxInt {
gc.Maxarg = v.AuxInt
}
case ssa.OpARM64CALLgo:
p := gc.Prog(obj.ACALL)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
p.To.Sym = gc.Linksym(gc.Newproc.Sym)
if gc.Maxarg < v.AuxInt {
gc.Maxarg = v.AuxInt
}
case ssa.OpARM64CALLinter:
p := gc.Prog(obj.ACALL)
p.To.Type = obj.TYPE_MEM
p.To.Offset = 0
p.To.Reg = gc.SSARegNum(v.Args[0])
if gc.Maxarg < v.AuxInt {
gc.Maxarg = v.AuxInt
}
case ssa.OpARM64LoweredNilCheck:
// Optimization - if the subsequent block has a load or store
// at the same address, we don't need to issue this instruction.
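// For example (sketch): if the nil check of p is immediately followed
// by a load such as
//	MOVD 8(p), Rx
// then that load itself faults when p is nil (its offset is below
// minZeroPage), so the explicit probe below can be dropped.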
mem := v.Args[1]
for _, w := range v.Block.Succs[0].Block().Values {
if w.Op == ssa.OpPhi {
if w.Type.IsMemory() {
mem = w
}
continue
}
if len(w.Args) == 0 || !w.Args[len(w.Args)-1].Type.IsMemory() {
// w doesn't use a store - can't be a memory op.
continue
}
if w.Args[len(w.Args)-1] != mem {
v.Fatalf("wrong store after nilcheck v=%s w=%s", v, w)
}
switch w.Op {
case ssa.OpARM64MOVBload, ssa.OpARM64MOVBUload, ssa.OpARM64MOVHload, ssa.OpARM64MOVHUload,
ssa.OpARM64MOVWload, ssa.OpARM64MOVWUload, ssa.OpARM64MOVDload,
ssa.OpARM64FMOVSload, ssa.OpARM64FMOVDload,
ssa.OpARM64MOVBstore, ssa.OpARM64MOVHstore, ssa.OpARM64MOVWstore, ssa.OpARM64MOVDstore,
ssa.OpARM64FMOVSstore, ssa.OpARM64FMOVDstore:
// arg0 is ptr, auxint is offset
if w.Args[0] == v.Args[0] && w.Aux == nil && w.AuxInt >= 0 && w.AuxInt < minZeroPage {
if gc.Debug_checknil != 0 && int(v.Line) > 1 {
gc.Warnl(v.Line, "removed nil check")
}
return
}
case ssa.OpARM64DUFFZERO, ssa.OpARM64LoweredZero:
// arg0 is ptr
if w.Args[0] == v.Args[0] {
if gc.Debug_checknil != 0 && int(v.Line) > 1 {
gc.Warnl(v.Line, "removed nil check")
}
return
}
case ssa.OpARM64LoweredMove:
// arg0 is dst ptr, arg1 is src ptr
if w.Args[0] == v.Args[0] || w.Args[1] == v.Args[0] {
if gc.Debug_checknil != 0 && int(v.Line) > 1 {
gc.Warnl(v.Line, "removed nil check")
}
return
}
default:
}
if w.Type.IsMemory() {
if w.Op == ssa.OpVarDef || w.Op == ssa.OpVarKill || w.Op == ssa.OpVarLive {
// these ops are OK
mem = w
continue
}
// We can't delay the nil check past the next store.
break
}
}
// Issue a load which will fault if arg is nil.
p := gc.Prog(arm64.AMOVB)
p.From.Type = obj.TYPE_MEM
p.From.Reg = gc.SSARegNum(v.Args[0])
gc.AddAux(&p.From, v)
p.To.Type = obj.TYPE_REG
p.To.Reg = arm64.REGTMP
if gc.Debug_checknil != 0 && v.Line > 1 { // v.Line==1 in generated wrappers
gc.Warnl(v.Line, "generated nil check")
}
case ssa.OpVarDef:
gc.Gvardef(v.Aux.(*gc.Node))
case ssa.OpVarKill:
gc.Gvarkill(v.Aux.(*gc.Node))
case ssa.OpVarLive:
gc.Gvarlive(v.Aux.(*gc.Node))
case ssa.OpKeepAlive:
if !v.Args[0].Type.IsPtrShaped() {
v.Fatalf("keeping non-pointer alive %v", v.Args[0])
}
n, off := gc.AutoVar(v.Args[0])
if n == nil {
v.Fatalf("KeepLive with non-spilled value %s %s", v, v.Args[0])
}
if off != 0 {
v.Fatalf("KeepLive with non-zero offset spill location %s:%d", n, off)
}
gc.Gvarlive(n)
case ssa.OpARM64Equal,
ssa.OpARM64NotEqual,
ssa.OpARM64LessThan,
ssa.OpARM64LessEqual,
ssa.OpARM64GreaterThan,
ssa.OpARM64GreaterEqual,
ssa.OpARM64LessThanU,
ssa.OpARM64LessEqualU,
ssa.OpARM64GreaterThanU,
ssa.OpARM64GreaterEqualU:
// generate boolean values using CSET
p := gc.Prog(arm64.ACSET)
p.From.Type = obj.TYPE_REG // assembler encodes conditional bits in Reg
p.From.Reg = condBits[v.Op]
p.To.Type = obj.TYPE_REG
p.To.Reg = gc.SSARegNum(v)
case ssa.OpSelect0, ssa.OpSelect1:
// nothing to do
case ssa.OpARM64LoweredGetClosurePtr:
// Closure pointer is R26 (arm64.REGCTXT).
gc.CheckLoweredGetClosurePtr(v)
case ssa.OpARM64FlagEQ,
ssa.OpARM64FlagLT_ULT,
ssa.OpARM64FlagLT_UGT,
ssa.OpARM64FlagGT_ULT,
ssa.OpARM64FlagGT_UGT:
v.Fatalf("Flag* ops should never make it to codegen %v", v.LongString())
case ssa.OpARM64InvertFlags:
v.Fatalf("InvertFlags should never make it to codegen %v", v.LongString())
default:
v.Unimplementedf("genValue not implemented: %s", v.LongString())
}
}
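A minimal sketch of the elision above (function name assumed, not from the CL): when the successor block immediately dereferences the checked pointer, the faulting load itself acts as the nil check and the MOVB probe is skipped.
//go:noinline
func deref(p *int64) int64 {
	// The MOVD load below faults if p is nil, so the separate MOVB
	// nil-check probe is elided (the "removed nil check" case above).
	return *p
}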
var condBits = map[ssa.Op]int16{
ssa.OpARM64Equal: arm64.COND_EQ,
ssa.OpARM64NotEqual: arm64.COND_NE,
ssa.OpARM64LessThan: arm64.COND_LT,
ssa.OpARM64LessThanU: arm64.COND_LO,
ssa.OpARM64LessEqual: arm64.COND_LE,
ssa.OpARM64LessEqualU: arm64.COND_LS,
ssa.OpARM64GreaterThan: arm64.COND_GT,
ssa.OpARM64GreaterThanU: arm64.COND_HI,
ssa.OpARM64GreaterEqual: arm64.COND_GE,
ssa.OpARM64GreaterEqualU: arm64.COND_HS,
}
var blockJump = map[ssa.BlockKind]struct {
asm, invasm obj.As
}{
ssa.BlockARM64EQ: {arm64.ABEQ, arm64.ABNE},
ssa.BlockARM64NE: {arm64.ABNE, arm64.ABEQ},
ssa.BlockARM64LT: {arm64.ABLT, arm64.ABGE},
ssa.BlockARM64GE: {arm64.ABGE, arm64.ABLT},
ssa.BlockARM64LE: {arm64.ABLE, arm64.ABGT},
ssa.BlockARM64GT: {arm64.ABGT, arm64.ABLE},
ssa.BlockARM64ULT: {arm64.ABLO, arm64.ABHS},
ssa.BlockARM64UGE: {arm64.ABHS, arm64.ABLO},
ssa.BlockARM64UGT: {arm64.ABHI, arm64.ABLS},
ssa.BlockARM64ULE: {arm64.ABLS, arm64.ABHI},
}
func ssaGenBlock(s *gc.SSAGenState, b, next *ssa.Block) {
s.SetLineno(b.Line)
switch b.Kind {
case ssa.BlockPlain, ssa.BlockCall, ssa.BlockCheck:
if b.Succs[0].Block() != next {
p := gc.Prog(obj.AJMP)
p.To.Type = obj.TYPE_BRANCH
s.Branches = append(s.Branches, gc.Branch{P: p, B: b.Succs[0].Block()})
}
case ssa.BlockDefer:
// defer returns in R0:
// 0 if we should continue executing
// 1 if we should jump to deferreturn call
p := gc.Prog(arm64.ACMP)
p.From.Type = obj.TYPE_CONST
p.From.Offset = 0
p.Reg = arm64.REG_R0
p = gc.Prog(arm64.ABNE)
p.To.Type = obj.TYPE_BRANCH
s.Branches = append(s.Branches, gc.Branch{P: p, B: b.Succs[1].Block()})
if b.Succs[0].Block() != next {
p := gc.Prog(obj.AJMP)
p.To.Type = obj.TYPE_BRANCH
s.Branches = append(s.Branches, gc.Branch{P: p, B: b.Succs[0].Block()})
}
case ssa.BlockExit:
gc.Prog(obj.AUNDEF) // tell plive.go that we never reach here
case ssa.BlockRet:
gc.Prog(obj.ARET)
case ssa.BlockRetJmp:
p := gc.Prog(obj.ARET)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
p.To.Sym = gc.Linksym(b.Aux.(*gc.Sym))
case ssa.BlockARM64EQ, ssa.BlockARM64NE,
ssa.BlockARM64LT, ssa.BlockARM64GE,
ssa.BlockARM64LE, ssa.BlockARM64GT,
ssa.BlockARM64ULT, ssa.BlockARM64UGT,
ssa.BlockARM64ULE, ssa.BlockARM64UGE:
jmp := blockJump[b.Kind]
var p *obj.Prog
switch next {
case b.Succs[0].Block():
p = gc.Prog(jmp.invasm)
p.To.Type = obj.TYPE_BRANCH
s.Branches = append(s.Branches, gc.Branch{P: p, B: b.Succs[1].Block()})
case b.Succs[1].Block():
p = gc.Prog(jmp.asm)
p.To.Type = obj.TYPE_BRANCH
s.Branches = append(s.Branches, gc.Branch{P: p, B: b.Succs[0].Block()})
default:
p = gc.Prog(jmp.asm)
p.To.Type = obj.TYPE_BRANCH
s.Branches = append(s.Branches, gc.Branch{P: p, B: b.Succs[0].Block()})
q := gc.Prog(obj.AJMP)
q.To.Type = obj.TYPE_BRANCH
s.Branches = append(s.Branches, gc.Branch{P: q, B: b.Succs[1].Block()})
}
default:
b.Unimplementedf("branch not implemented: %s. Control: %s", b.LongString(), b.Control.LongString())
}
}
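A small sketch of the layout-driven branch choice above: when the first successor is the fallthrough block, the inverted branch (jmp.invasm) targets the second successor instead.
func pick(x int64) int64 {
	if x == 0 { // body laid out next: BlockARM64EQ emits BNE to the
		return 1 // else block and falls through into this body
	}
	return 2 // else
}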

View File

@@ -122,3 +122,38 @@ NextVar:
}
return out
}
// TestLineNumber checks that the generated assembly has line numbers.
// See issue #16214.
func TestLineNumber(t *testing.T) {
testenv.MustHaveGoBuild(t)
dir, err := ioutil.TempDir("", "TestLineNumber")
if err != nil {
t.Fatalf("could not create directory: %v", err)
}
defer os.RemoveAll(dir)
src := filepath.Join(dir, "x.go")
err = ioutil.WriteFile(src, []byte(issue16214src), 0644)
if err != nil {
t.Fatalf("could not write file: %v", err)
}
cmd := exec.Command("go", "tool", "compile", "-S", "-o", filepath.Join(dir, "out.o"), src)
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("fail to run go tool compile: %v", err)
}
if strings.Contains(string(out), "unknown line number") {
t.Errorf("line number missing in assembly:\n%s", out)
}
}
var issue16214src = `
package main
func Mod32(x uint32) uint32 {
return x % 3 // frontend rewrites it as HMUL with 2863311531, the LITERAL node has Lineno 0
}
`

View File

@@ -153,7 +153,13 @@ const debugFormat = false // default: false
// TODO(gri) disable and remove once there is only one export format again
const forceObjFileStability = true
const exportVersion = "v0"
// Supported export format versions.
// TODO(gri) Make this more systematic (issue #16244).
const (
exportVersion0 = "v0"
exportVersion1 = "v1"
exportVersion = exportVersion1
)
// exportInlined enables the export of inlined function bodies and related
// dependencies. The compiler should work w/o any loss of functionality with
@@ -727,6 +733,14 @@ func (p *exporter) typ(t *Type) {
p.paramList(sig.Params(), inlineable)
p.paramList(sig.Results(), inlineable)
// for issue #16243
// We make this conditional for 1.7 to avoid consistency problems
// with installed packages compiled with an older version.
// TODO(gri) Clean up after 1.7 is out (issue #16244)
if exportVersion == exportVersion1 {
p.bool(m.Nointerface)
}
var f *Func
if inlineable {
f = mfn.Func

View File

@@ -21,8 +21,9 @@ import (
// changes to bimport.go and bexport.go.
type importer struct {
in *bufio.Reader
buf []byte // reused for reading strings
in *bufio.Reader
buf []byte // reused for reading strings
version string
// object lists, in order of deserialization
strList []string
@@ -67,8 +68,9 @@ func Import(in *bufio.Reader) {
// --- generic export data ---
if v := p.string(); v != exportVersion {
Fatalf("importer: unknown export data version: %s", v)
p.version = p.string()
if p.version != exportVersion0 && p.version != exportVersion1 {
Fatalf("importer: unknown export data version: %s", p.version)
}
// populate typList with predeclared "known" types
@@ -239,14 +241,20 @@ func (p *importer) pkg() *Pkg {
Fatalf("importer: package path %q for pkg index %d", path, len(p.pkgList))
}
// see importimport (export.go)
pkg := importpkg
if path != "" {
pkg = mkpkg(path)
}
if pkg.Name == "" {
pkg.Name = name
numImport[name]++
} else if pkg.Name != name {
Fatalf("importer: conflicting package names %s and %s for path %q", pkg.Name, name, path)
Yyerror("importer: conflicting package names %s and %s for path %q", pkg.Name, name, path)
}
if incannedimport == 0 && myimportpath != "" && path == myimportpath {
Yyerror("import %q: package depends on %q (import cycle)", importpkg.Path, path)
errorexit()
}
p.pkgList = append(p.pkgList, pkg)
@@ -426,10 +434,15 @@ func (p *importer) typ() *Type {
params := p.paramList()
result := p.paramList()
nointerface := false
if p.version == exportVersion1 {
nointerface = p.bool()
}
n := methodname1(newname(sym), recv[0].Right)
n.Type = functype(recv[0], params, result)
checkwidth(n.Type)
addmethod(sym, n.Type, tsym.Pkg, false, false)
addmethod(sym, n.Type, tsym.Pkg, false, nointerface)
p.funcList = append(p.funcList, n)
importlist = append(importlist, n)

View File

@@ -3,7 +3,7 @@
package gc
const runtimeimport = "" +
"cn\x00\x03v0\x01\rruntime\x00\t\x11newobject\x00\x02\x17\"\vtyp·2\x00\x00" +
"cn\x00\x03v1\x01\rruntime\x00\t\x11newobject\x00\x02\x17\"\vtyp·2\x00\x00" +
"\x01\x17:\x00\t\x13panicindex\x00\x00\x00\t\x13panicslice\x00\x00\x00\t\x15pani" +
"cdivide\x00\x00\x00\t\x15throwreturn\x00\x00\x00\t\x11throwinit\x00\x00\x00" +
"\t\x11panicwrap\x00\x05 \x00 \x00 \x00\x00\t\rgopanic\x00\x01\x1b\x00\x00\x00\x00\t\x11go" +
@@ -95,16 +95,17 @@ const runtimeimport = "" +
"4div\x00\x03\n\x00\n\x00\x01\n\x00\t\x11uint64div\x00\x03\x14\x00\x14\x00\x01\x14\x00\t\x0fint64" +
"mod\x00\x03\n\x00\n\x00\x01\n\x00\t\x11uint64mod\x00\x03\x14\x00\x14\x00\x01\x14\x00\t\x1bfloat6" +
"4toint64\x00\x01\x1a\x00\x01\n\x00\t\x1dfloat64touint64\x00\x01\x1a\x00\x01\x14\x00\t" +
"\x1bint64tofloat64\x00\x01\n\x00\x01\x1a\x00\t\x1duint64tofloat64\x00" +
"\x01\x14\x00\x01\x1a\x00\t\x19complex128div\x00\x04\x1e\vnum·2\x00\x00\x1e\vden·" +
"3\x00\x00\x02\x1e\vquo·1\x00\x00\t\x19racefuncenter\x00\x01\x16d\x00\t\x17race" +
"funcexit\x00\x00\x00\t\x0fraceread\x00\x01\x16d\x00\t\x11racewrite\x00\x01\x16" +
"d\x00\t\x19racereadrange\x00\x04\x16\raddr·1\x00d\x16\rsize·2\x00" +
"d\x00\t\x1bracewriterange\x00\x04\x16\x94\x03\x00d\x16\x96\x03\x00d\x00\t\x0fmsanrea" +
"d\x00\x04\x16\x94\x03\x00d\x16\x96\x03\x00d\x00\t\x11msanwrite\x00\x04\x16\x94\x03\x00d\x16\x96\x03\x00d\x00\v\xf4" +
"\x01\x02\v\x00\x01\x00\n$$\n"
"\x1dfloat64touint32\x00\x01\x1a\x00\x01\x12\x00\t\x1bint64tofloat64\x00" +
"\x01\n\x00\x01\x1a\x00\t\x1duint64tofloat64\x00\x01\x14\x00\x01\x1a\x00\t\x1duint32to" +
"float64\x00\x01\x12\x00\x01\x1a\x00\t\x19complex128div\x00\x04\x1e\vnum·2\x00" +
"\x00\x1e\vden·3\x00\x00\x02\x1e\vquo·1\x00\x00\t\x19racefuncenter\x00\x01\x16" +
"d\x00\t\x17racefuncexit\x00\x00\x00\t\x0fraceread\x00\x01\x16d\x00\t\x11race" +
"write\x00\x01\x16d\x00\t\x19racereadrange\x00\x04\x16\raddr·1\x00d\x16\r" +
"size·2\x00d\x00\t\x1bracewriterange\x00\x04\x16\x98\x03\x00d\x16\x9a\x03\x00d\x00\t" +
"\x0fmsanread\x00\x04\x16\x98\x03\x00d\x16\x9a\x03\x00d\x00\t\x11msanwrite\x00\x04\x16\x98\x03\x00d" +
"\x16\x9a\x03\x00d\x00\v\xf8\x01\x02\v\x00\x01\x00\n$$\n"
const unsafeimport = "" +
"cn\x00\x03v0\x01\vunsafe\x00\x05\r\rPointer\x00\x16\x00\t\x0fOffsetof\x00\x01" +
"cn\x00\x03v1\x01\vunsafe\x00\x05\r\rPointer\x00\x16\x00\t\x0fOffsetof\x00\x01" +
":\x00\x01\x16\x00\t\vSizeof\x00\x01:\x00\x01\x16\x00\t\rAlignof\x00\x01:\x00\x01\x16\x00\v\b\x00\v" +
"\x00\x01\x00\n$$\n"

View File

@@ -150,8 +150,10 @@ func int64mod(int64, int64) int64
func uint64mod(uint64, uint64) uint64
func float64toint64(float64) int64
func float64touint64(float64) uint64
func float64touint32(float64) uint32
func int64tofloat64(int64) float64
func uint64tofloat64(uint64) float64
func uint32tofloat64(uint32) float64
func complex128div(num complex128, den complex128) (quo complex128)

View File

@@ -2855,11 +2855,6 @@ func cgen_append(n, res *Node) {
Dump("cgen_append-n", n)
Dump("cgen_append-res", res)
}
if res.Op != ONAME && !samesafeexpr(res, n.List.First()) {
Dump("cgen_append-n", n)
Dump("cgen_append-res", res)
Fatalf("append not lowered")
}
for _, n1 := range n.List.Slice() {
if n1.Ullman >= UINF {
Fatalf("append with function call arguments")

View File

@@ -166,7 +166,7 @@ func closurename(n *Node) *Sym {
prefix := ""
if n.Func.Outerfunc == nil {
// Global closure.
outer = "glob"
outer = "glob."
prefix = "func"
closurename_closgen++

View File

@@ -1551,10 +1551,12 @@ func esccall(e *EscState, n *Node, up *Node) {
}
var src *Node
note := ""
i := 0
lls := ll.Slice()
for t, it := IterFields(fntype.Params()); i < len(lls); i++ {
src = lls[i]
note = t.Note
if t.Isddd && !n.Isddd {
// Introduce ODDDARG node to represent ... allocation.
src = Nod(ODDDARG, nil, nil)
@@ -1566,7 +1568,7 @@ func esccall(e *EscState, n *Node, up *Node) {
}
if haspointers(t.Type) {
if escassignfromtag(e, t.Note, nE.Escretval, src) == EscNone && up.Op != ODEFER && up.Op != OPROC {
if escassignfromtag(e, note, nE.Escretval, src) == EscNone && up.Op != ODEFER && up.Op != OPROC {
a := src
for a.Op == OCONVNOP {
a = a.Left
@@ -1596,14 +1598,24 @@ func esccall(e *EscState, n *Node, up *Node) {
// This occurs when function parameter type Isddd and n not Isddd
break
}
if note == uintptrEscapesTag {
escassignSinkNilWhy(e, src, src, "escaping uintptr")
}
t = it.Next()
}
// Store arguments into slice for ... arg.
for ; i < len(lls); i++ {
if Debug['m'] > 3 {
fmt.Printf("%v::esccall:: ... <- %v\n", linestr(lineno), Nconv(lls[i], FmtShort))
}
escassignNilWhy(e, src, lls[i], "arg to ...") // args to slice
if note == uintptrEscapesTag {
escassignSinkNilWhy(e, src, lls[i], "arg to uintptrescapes ...")
} else {
escassignNilWhy(e, src, lls[i], "arg to ...")
}
}
}
@@ -1963,9 +1975,20 @@ recurse:
// lets us take the address below to get a *string.
var unsafeUintptrTag = "unsafe-uintptr"
// This special tag is applied to uintptr parameters of functions
// marked go:uintptrescapes.
const uintptrEscapesTag = "uintptr-escapes"
func esctag(e *EscState, func_ *Node) {
func_.Esc = EscFuncTagged
name := func(s *Sym, narg int) string {
if s != nil {
return s.Name
}
return fmt.Sprintf("arg#%d", narg)
}
// External functions are assumed unsafe,
// unless //go:noescape is given before the declaration.
if func_.Nbody.Len() == 0 {
@@ -1988,13 +2011,7 @@ func esctag(e *EscState, func_ *Node) {
narg++
if t.Type.Etype == TUINTPTR {
if Debug['m'] != 0 {
var name string
if t.Sym != nil {
name = t.Sym.Name
} else {
name = fmt.Sprintf("arg#%d", narg)
}
Warnl(func_.Lineno, "%v assuming %v is unsafe uintptr", funcSym(func_), name)
Warnl(func_.Lineno, "%v assuming %v is unsafe uintptr", funcSym(func_), name(t.Sym, narg))
}
t.Note = unsafeUintptrTag
}
@@ -2003,6 +2020,27 @@ func esctag(e *EscState, func_ *Node) {
return
}
if func_.Func.Pragma&UintptrEscapes != 0 {
narg := 0
for _, t := range func_.Type.Params().Fields().Slice() {
narg++
if t.Type.Etype == TUINTPTR {
if Debug['m'] != 0 {
Warnl(func_.Lineno, "%v marking %v as escaping uintptr", funcSym(func_), name(t.Sym, narg))
}
t.Note = uintptrEscapesTag
}
if t.Isddd && t.Type.Elem().Etype == TUINTPTR {
// final argument is ...uintptr.
if Debug['m'] != 0 {
Warnl(func_.Lineno, "%v marking %v as escaping ...uintptr", funcSym(func_), name(t.Sym, narg))
}
t.Note = uintptrEscapesTag
}
}
}
savefn := Curfn
Curfn = func_
@@ -2015,7 +2053,9 @@ func esctag(e *EscState, func_ *Node) {
case EscNone, // not touched by escflood
EscReturn:
if haspointers(ln.Type) { // don't bother tagging for scalars
ln.Name.Param.Field.Note = mktag(int(ln.Esc))
if ln.Name.Param.Field.Note != uintptrEscapesTag {
ln.Name.Param.Field.Note = mktag(int(ln.Esc))
}
}
case EscHeap, // touched by escflood, moved to heap

View File

@@ -479,6 +479,10 @@ func pkgtype(s *Sym) *Type {
return s.Def.Type
}
// numImport tracks how often a package with a given name is imported.
// It is used to provide a better error message (by using the package
// path to disambiguate) when a package that shares its name with
// other imported packages appears in an error message.
var numImport = make(map[string]int)
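A hypothetical helper (name invented, not part of the CL) showing how the count lets an error printer fall back to the import path:
func errPkgName(pkg *Pkg) string {
	if numImport[pkg.Name] > 1 {
		return pkg.Path // name is ambiguous; print the unique path
	}
	return pkg.Name
}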
func importimport(s *Sym, path string) {

View File

@@ -160,6 +160,14 @@ func moveToHeap(n *Node) {
if n.Class == PPARAM {
stackcopy.SetNotLiveAtEnd(true)
}
if n.Class == PPARAMOUT {
// Make sure the pointer to the heap copy is kept live throughout the function.
// The function could panic at any point, and then a defer could recover.
// Thus, we need the pointer to the heap copy always available so the
// post-deferreturn code can copy the return value back to the stack.
// See issue 16095.
heapaddr.setIsOutputParamHeapAddr(true)
}
n.Name.Param.Stackcopy = stackcopy
// Substitute the stackcopy into the function variable list so that

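The shape motivating this, sketched from the comment (assumed, per issue 16095): a recovered panic can still assign the named result, and the post-deferreturn code must be able to copy it back from the heap.
func f() (x int) {
	defer func() {
		recover()
		x = 42 // written through the heap copy; deferreturn copies it
		//        back to the stack slot, so the pointer to the heap
		//        copy must stay live for the whole function
	}()
	panic("boom")
}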
View File

@@ -156,6 +156,8 @@ var Debug_typeassert int
var localpkg *Pkg // package being compiled
var autopkg *Pkg // fake package for allocating auto variables
var importpkg *Pkg // package being imported
var itabpkg *Pkg // fake pkg for itab entries

View File

@@ -72,6 +72,7 @@ const (
Nowritebarrier // emit compiler error instead of write barrier
Nowritebarrierrec // error on write barrier in this or recursive callees
CgoUnsafeArgs // treat a pointer to one arg as a pointer to them all
UintptrEscapes // pointers converted to uintptr escape
)
type lexer struct {
@@ -930,6 +931,19 @@ func (l *lexer) getlinepragma() rune {
l.pragma |= Nowritebarrierrec | Nowritebarrier // implies Nowritebarrier
case "go:cgo_unsafe_args":
l.pragma |= CgoUnsafeArgs
case "go:uintptrescapes":
// For the next function declared in the file
// any uintptr arguments may be pointer values
// converted to uintptr. This directive
// ensures that the referenced allocated
// object, if any, is retained and not moved
// until the call completes, even though from
// the types alone it would appear that the
// object is no longer needed during the
// call. The conversion to uintptr must appear
// in the argument list.
// Used in syscall/dll_windows.go.
l.pragma |= UintptrEscapes
}
return c
}
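A hedged usage sketch of the new directive (the syscall package is its intended consumer; this helper is invented):
//go:uintptrescapes
//go:noinline
func keep(p uintptr) uintptr { return p }

// The allocation stays reachable until keep returns, even though only
// a uintptr crosses the call boundary:
//	keep(uintptr(unsafe.Pointer(new(int))))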

View File

@@ -108,6 +108,8 @@ func Main() {
localpkg = mkpkg("")
localpkg.Prefix = "\"\""
autopkg = mkpkg("")
autopkg.Prefix = "\"\""
// pseudo-package, for scoping
builtinpkg = mkpkg("go.builtin")

View File

@@ -373,7 +373,7 @@ func ordercall(n *Node, order *Order) {
if t == nil {
break
}
if t.Note == unsafeUintptrTag {
if t.Note == unsafeUintptrTag || t.Note == uintptrEscapesTag {
xp := n.List.Addr(i)
for (*xp).Op == OCONVNOP && !(*xp).Type.IsPtr() {
xp = &(*xp).Left
@@ -385,7 +385,11 @@ func ordercall(n *Node, order *Order) {
*xp = x
}
}
t = it.Next()
next := it.Next()
if next == nil && t.Isddd && t.Note == uintptrEscapesTag {
next = t
}
t = next
}
}
}

View File

@@ -2006,7 +2006,8 @@ func (p *parser) hidden_fndcl() *Node {
ss.Type = functype(s2[0], s6, s8)
checkwidth(ss.Type)
addmethod(s4, ss.Type, p.structpkg, false, false)
addmethod(s4, ss.Type, p.structpkg, false, p.pragma&Nointerface != 0)
p.pragma = 0
funchdr(ss)
// inl.C's inlnode in on a dotmeth node expects to find the inlineable body as

View File

@@ -577,6 +577,15 @@ func progeffects(prog *obj.Prog, vars []*Node, uevar bvec, varkill bvec, avarini
return
}
if prog.As == obj.AJMP && prog.To.Type == obj.TYPE_MEM && prog.To.Name == obj.NAME_EXTERN {
// This is a tail call. Ensure the arguments are still alive.
// See issue 16016.
for i, node := range vars {
if node.Class == PPARAM {
bvset(uevar, int32(i))
}
}
}
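// Hedged sketch of the shape behind issue 16016: a promoted-method
// wrapper compiled as a tail call, e.g.
//	func (w *W) M() { w.T.M() }   // emitted as: JMP (*T).M(SB)
// reuses the caller's argument frame, so w must be marked live
// (uevar) at the JMP rather than dead after its last explicit use.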
if prog.As == obj.ATEXT {
// A text instruction marks the entry point to a function and
@@ -1166,6 +1175,19 @@ func livenessepilogue(lv *Liveness) {
all := bvalloc(nvars)
ambig := bvalloc(localswords())
// Set ambig bit for the pointers to heap-allocated pparamout variables.
// These are implicitly read by post-deferreturn code and thus must be
// kept live throughout the function (if there is any defer that recovers).
if hasdefer {
for _, n := range lv.vars {
if n.IsOutputParamHeapAddr() {
n.Name.Needzero = true
xoffset := n.Xoffset + stkptrsize
onebitwalktype1(n.Type, &xoffset, ambig)
}
}
}
for _, bb := range lv.cfg {
// Compute avarinitany and avarinitall for entry to block.
// This duplicates information known during livenesssolve

View File

@@ -164,7 +164,7 @@ func instrumentnode(np **Node, init *Nodes, wr int, skip int) {
var outn Nodes
outn.Set(out)
instrumentnode(&ls[i], &outn, 0, 0)
if ls[i].Op != OAS || ls[i].Ninit.Len() == 0 {
if ls[i].Op != OAS && ls[i].Op != OASWB && ls[i].Op != OAS2FUNC || ls[i].Ninit.Len() == 0 {
out = append(outn.Slice(), ls[i])
} else {
// Splice outn onto end of ls[i].Ninit

View File

@@ -75,7 +75,7 @@ func uncommonSize(t *Type) int { // Sizeof(runtime.uncommontype{})
if t.Sym == nil && len(methods(t)) == 0 {
return 0
}
return 4 + 2 + 2
return 4 + 2 + 2 + 4 + 4
}
func makefield(name string, t *Type) *Field {
@@ -604,17 +604,19 @@ func dextratype(s *Sym, ot int, t *Type, dataAdd int) int {
ot = dgopkgpathOffLSym(Linksym(s), ot, typePkg(t))
dataAdd += 4 + 2 + 2
dataAdd += uncommonSize(t)
mcount := len(m)
if mcount != int(uint16(mcount)) {
Fatalf("too many methods on %s: %d", t, mcount)
}
if dataAdd != int(uint16(dataAdd)) {
if dataAdd != int(uint32(dataAdd)) {
Fatalf("methods are too far away on %s: %d", t, dataAdd)
}
ot = duint16(s, ot, uint16(mcount))
ot = duint16(s, ot, uint16(dataAdd))
ot = duint16(s, ot, 0)
ot = duint32(s, ot, uint32(dataAdd))
ot = duint32(s, ot, 0)
return ot
}
@@ -797,6 +799,7 @@ func typeptrdata(t *Type) int64 {
const (
tflagUncommon = 1 << 0
tflagExtraStar = 1 << 1
tflagNamed = 1 << 2
)
var dcommontype_algarray *Sym
@@ -818,14 +821,10 @@ func dcommontype(s *Sym, ot int, t *Type) int {
algsym = dalgsym(t)
}
var sptr *Sym
tptr := Ptrto(t)
if !t.IsPtr() && (t.Sym != nil || methods(tptr) != nil) {
sptr := dtypesym(tptr)
r := obj.Addrel(Linksym(s))
r.Off = 0
r.Siz = 0
r.Sym = sptr.Lsym
r.Type = obj.R_USETYPE
sptr = dtypesym(tptr)
}
gcsym, useGCProg, ptrdata := dgcsym(t)
@@ -843,7 +842,7 @@ func dcommontype(s *Sym, ot int, t *Type) int {
// alg *typeAlg
// gcdata *byte
// str nameOff
// _ int32
// ptrToThis typeOff
// }
ot = duintptr(s, ot, uint64(t.Width))
ot = duintptr(s, ot, uint64(ptrdata))
@@ -854,6 +853,9 @@ func dcommontype(s *Sym, ot int, t *Type) int {
if uncommonSize(t) != 0 {
tflag |= tflagUncommon
}
if t.Sym != nil && t.Sym.Name != "" {
tflag |= tflagNamed
}
exported := false
p := Tconv(t, FmtLeft|FmtUnsigned)
@@ -907,8 +909,12 @@ func dcommontype(s *Sym, ot int, t *Type) int {
ot = dsymptr(s, ot, gcsym, 0) // gcdata
nsym := dname(p, "", nil, exported)
ot = dsymptrOffLSym(Linksym(s), ot, nsym, 0)
ot = duint32(s, ot, 0)
ot = dsymptrOffLSym(Linksym(s), ot, nsym, 0) // str
if sptr == nil {
ot = duint32(s, ot, 0)
} else {
ot = dsymptrOffLSym(Linksym(s), ot, Linksym(sptr), 0) // ptrToThis
}
return ot
}

View File

@@ -153,10 +153,13 @@ func (s *state) locatePotentialPhiFunctions(fn *Node) *sparseDefState {
p := e.Block()
dm.Use(t, p) // always count phi pred as "use"; no-op except for loop edges, which matter.
x := t.stm.Find(p, ssa.AdjustAfter, helper) // Look for defs reaching or within predecessors.
if x == nil { // nil def from a predecessor means a backedge that will be visited soon.
continue
}
if defseen == nil {
defseen = x
}
if defseen != x || x == nil { // TODO: too conservative at loops, does better if x == nil -> continue
if defseen != x {
// Need to insert a phi function here because predecessors' definitions differ.
change = true
// Phi insertion is at AdjustBefore, visible with find in same block at AdjustWithin or AdjustAfter.

View File

@@ -26,6 +26,9 @@ func initssa() *ssa.Config {
ssaExp.mustImplement = true
if ssaConfig == nil {
ssaConfig = ssa.NewConfig(Thearch.LinkArch.Name, &ssaExp, Ctxt, Debug['N'] == 0)
if Thearch.LinkArch.Name == "386" {
ssaConfig.Set387(Thearch.Use387)
}
}
return ssaConfig
}
@@ -37,8 +40,8 @@ func shouldssa(fn *Node) bool {
if os.Getenv("SSATEST") == "" {
return false
}
case "amd64", "amd64p32", "arm", "386", "arm64":
// Generally available.
case "amd64":
}
if !ssaEnabled {
return false
@@ -384,6 +387,14 @@ func (s *state) endBlock() *ssa.Block {
// pushLine pushes a line number on the line number stack.
func (s *state) pushLine(line int32) {
if line == 0 {
// the frontend may emit a node with the line number missing;
// use the parent line number in that case.
line = s.peekLine()
if Debug['K'] != 0 {
Warn("buildssa: line 0")
}
}
s.line = append(s.line, line)
}
@@ -1138,6 +1149,7 @@ var opToSSA = map[opAndType]ssa.Op{
opAndType{OEQ, TFUNC}: ssa.OpEqPtr,
opAndType{OEQ, TMAP}: ssa.OpEqPtr,
opAndType{OEQ, TCHAN}: ssa.OpEqPtr,
opAndType{OEQ, TPTR32}: ssa.OpEqPtr,
opAndType{OEQ, TPTR64}: ssa.OpEqPtr,
opAndType{OEQ, TUINTPTR}: ssa.OpEqPtr,
opAndType{OEQ, TUNSAFEPTR}: ssa.OpEqPtr,
@@ -1158,6 +1170,7 @@ var opToSSA = map[opAndType]ssa.Op{
opAndType{ONE, TFUNC}: ssa.OpNeqPtr,
opAndType{ONE, TMAP}: ssa.OpNeqPtr,
opAndType{ONE, TCHAN}: ssa.OpNeqPtr,
opAndType{ONE, TPTR32}: ssa.OpNeqPtr,
opAndType{ONE, TPTR64}: ssa.OpNeqPtr,
opAndType{ONE, TUINTPTR}: ssa.OpNeqPtr,
opAndType{ONE, TUNSAFEPTR}: ssa.OpNeqPtr,
@@ -1322,6 +1335,15 @@ var fpConvOpToSSA = map[twoTypes]twoOpsAndType{
twoTypes{TFLOAT32, TFLOAT64}: twoOpsAndType{ssa.OpCvt32Fto64F, ssa.OpCopy, TFLOAT64},
}
// this map is used only on 32-bit archs, and only includes the cases
// that differ there: don't use int64<->float conversions for uint32
var fpConvOpToSSA32 = map[twoTypes]twoOpsAndType{
twoTypes{TUINT32, TFLOAT32}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt32Uto32F, TUINT32},
twoTypes{TUINT32, TFLOAT64}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt32Uto64F, TUINT32},
twoTypes{TFLOAT32, TUINT32}: twoOpsAndType{ssa.OpCvt32Fto32U, ssa.OpCopy, TUINT32},
twoTypes{TFLOAT64, TUINT32}: twoOpsAndType{ssa.OpCvt64Fto32U, ssa.OpCopy, TUINT32},
}
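For example (a sketch, not compiler output), on a 32-bit target this conversion lowers through OpCvt32Uto64F directly instead of widening through int64 first:
func toFloat(u uint32) float64 {
	return float64(u) // Cvt32Uto64F on 32-bit archs, per the map above
}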
var shiftOpToSSA = map[opAndTwoTypes]ssa.Op{
opAndTwoTypes{OLSH, TINT8, TUINT8}: ssa.OpLsh8x8,
opAndTwoTypes{OLSH, TUINT8, TUINT8}: ssa.OpLsh8x8,
@@ -1638,6 +1660,11 @@ func (s *state) expr(n *Node) *ssa.Value {
if ft.IsFloat() || tt.IsFloat() {
conv, ok := fpConvOpToSSA[twoTypes{s.concreteEtype(ft), s.concreteEtype(tt)}]
if s.config.IntSize == 4 && Thearch.LinkArch.Name != "amd64p32" {
if conv1, ok1 := fpConvOpToSSA32[twoTypes{s.concreteEtype(ft), s.concreteEtype(tt)}]; ok1 {
conv = conv1
}
}
if !ok {
s.Fatalf("weird float conversion %s -> %s", ft, tt)
}
@@ -1731,7 +1758,7 @@ func (s *state) expr(n *Node) *ssa.Value {
addop := ssa.OpAdd64F
subop := ssa.OpSub64F
pt := floatForComplex(n.Type) // Could be Float32 or Float64
wt := Types[TFLOAT64] // Compute in Float64 to minimize cancellation error
wt := Types[TFLOAT64] // Compute in Float64 to minimize cancelation error
areal := s.newValue1(ssa.OpComplexReal, pt, a)
breal := s.newValue1(ssa.OpComplexReal, pt, b)
@@ -1769,7 +1796,7 @@ func (s *state) expr(n *Node) *ssa.Value {
subop := ssa.OpSub64F
divop := ssa.OpDiv64F
pt := floatForComplex(n.Type) // Could be Float32 or Float64
wt := Types[TFLOAT64] // Compute in Float64 to minimize cancellation error
wt := Types[TFLOAT64] // Compute in Float64 to minimize cancelation error
areal := s.newValue1(ssa.OpComplexReal, pt, a)
breal := s.newValue1(ssa.OpComplexReal, pt, b)
@@ -1946,7 +1973,7 @@ func (s *state) expr(n *Node) *ssa.Value {
case n.Left.Type.IsString():
a := s.expr(n.Left)
i := s.expr(n.Right)
i = s.extendIndex(i)
i = s.extendIndex(i, Panicindex)
if !n.Bounded {
len := s.newValue1(ssa.OpStringLen, Types[TINT], a)
s.boundsCheck(i, len)
@@ -2035,13 +2062,13 @@ func (s *state) expr(n *Node) *ssa.Value {
var i, j, k *ssa.Value
low, high, max := n.SliceBounds()
if low != nil {
i = s.extendIndex(s.expr(low))
i = s.extendIndex(s.expr(low), panicslice)
}
if high != nil {
j = s.extendIndex(s.expr(high))
j = s.extendIndex(s.expr(high), panicslice)
}
if max != nil {
k = s.extendIndex(s.expr(max))
k = s.extendIndex(s.expr(max), panicslice)
}
p, l, c := s.slice(n.Left.Type, v, i, j, k)
return s.newValue3(ssa.OpSliceMake, n.Type, p, l, c)
@@ -2051,10 +2078,10 @@ func (s *state) expr(n *Node) *ssa.Value {
var i, j *ssa.Value
low, high, _ := n.SliceBounds()
if low != nil {
i = s.extendIndex(s.expr(low))
i = s.extendIndex(s.expr(low), panicslice)
}
if high != nil {
j = s.extendIndex(s.expr(high))
j = s.extendIndex(s.expr(high), panicslice)
}
p, l, _ := s.slice(n.Left.Type, v, i, j, nil)
return s.newValue2(ssa.OpStringMake, n.Type, p, l)
@@ -2238,7 +2265,7 @@ func (s *state) append(n *Node, inplace bool) *ssa.Value {
if haspointers(et) {
s.insertWBmove(et, addr, arg.v, n.Lineno, arg.isVolatile)
} else {
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, et.Size(), addr, arg.v, s.mem())
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, SizeAlignAuxInt(et), addr, arg.v, s.mem())
}
}
}
@@ -2371,14 +2398,14 @@ func (s *state) assign(left *Node, right *ssa.Value, wb, deref bool, line int32,
if deref {
// Treat as a mem->mem move.
if right == nil {
s.vars[&memVar] = s.newValue2I(ssa.OpZero, ssa.TypeMem, t.Size(), addr, s.mem())
s.vars[&memVar] = s.newValue2I(ssa.OpZero, ssa.TypeMem, SizeAlignAuxInt(t), addr, s.mem())
return
}
if wb {
s.insertWBmove(t, addr, right, line, rightIsVolatile)
return
}
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, t.Size(), addr, right, s.mem())
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, SizeAlignAuxInt(t), addr, right, s.mem())
return
}
// Treat as a store.
@@ -2573,8 +2600,11 @@ func (s *state) call(n *Node, k callKind) *ssa.Value {
}
i := s.expr(fn.Left)
itab := s.newValue1(ssa.OpITab, Types[TUINTPTR], i)
if k != callNormal {
s.nilCheck(itab)
}
itabidx := fn.Xoffset + 3*int64(Widthptr) + 8 // offset of fun field in runtime.itab
itab = s.newValue1I(ssa.OpOffPtr, Types[TUINTPTR], itabidx, itab)
itab = s.newValue1I(ssa.OpOffPtr, Ptrto(Types[TUINTPTR]), itabidx, itab)
if k == callNormal {
codeptr = s.newValue2(ssa.OpLoad, Types[TUINTPTR], itab, s.mem())
} else {
@@ -2597,16 +2627,18 @@ func (s *state) call(n *Node, k callKind) *ssa.Value {
if k != callNormal {
argStart += int64(2 * Widthptr)
}
addr := s.entryNewValue1I(ssa.OpOffPtr, Types[TUINTPTR], argStart, s.sp)
addr := s.entryNewValue1I(ssa.OpOffPtr, Ptrto(Types[TUINTPTR]), argStart, s.sp)
s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, int64(Widthptr), addr, rcvr, s.mem())
}
// Defer/go args
if k != callNormal {
// Write argsize and closure (args to Newproc/Deferproc).
argStart := Ctxt.FixedFrameSize()
argsize := s.constInt32(Types[TUINT32], int32(stksize))
s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, 4, s.sp, argsize, s.mem())
addr := s.entryNewValue1I(ssa.OpOffPtr, Ptrto(Types[TUINTPTR]), int64(Widthptr), s.sp)
addr := s.entryNewValue1I(ssa.OpOffPtr, Ptrto(Types[TUINT32]), argStart, s.sp)
s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, 4, addr, argsize, s.mem())
addr = s.entryNewValue1I(ssa.OpOffPtr, Ptrto(Types[TUINTPTR]), argStart+int64(Widthptr), s.sp)
s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, int64(Widthptr), addr, closure, s.mem())
stksize += 2 * int64(Widthptr)
}
@@ -2751,7 +2783,7 @@ func (s *state) addr(n *Node, bounded bool) (*ssa.Value, bool) {
if n.Left.Type.IsSlice() {
a := s.expr(n.Left)
i := s.expr(n.Right)
i = s.extendIndex(i)
i = s.extendIndex(i, Panicindex)
len := s.newValue1(ssa.OpSliceLen, Types[TINT], a)
if !n.Bounded {
s.boundsCheck(i, len)
@@ -2761,7 +2793,7 @@ func (s *state) addr(n *Node, bounded bool) (*ssa.Value, bool) {
} else { // array
a, isVolatile := s.addr(n.Left, bounded)
i := s.expr(n.Right)
i = s.extendIndex(i)
i = s.extendIndex(i, Panicindex)
len := s.constInt(Types[TINT], n.Left.Type.NumElem())
if !n.Bounded {
s.boundsCheck(i, len)
@@ -2902,12 +2934,11 @@ func (s *state) nilCheck(ptr *ssa.Value) {
// boundsCheck generates bounds checking code. Checks if 0 <= idx < len, branches to exit if not.
// Starts a new block on return.
// idx is already converted to full int width.
func (s *state) boundsCheck(idx, len *ssa.Value) {
if Debug['B'] != 0 {
return
}
// TODO: convert index to full width?
// TODO: if index is 64-bit and we're compiling to 32-bit, check that high 32 bits are zero.
// bounds check
cmp := s.newValue2(ssa.OpIsInBounds, Types[TBOOL], idx, len)
@@ -2916,19 +2947,18 @@ func (s *state) boundsCheck(idx, len *ssa.Value) {
// sliceBoundsCheck generates slice bounds checking code. Checks if 0 <= idx <= len, branches to exit if not.
// Starts a new block on return.
// idx and len are already converted to full int width.
func (s *state) sliceBoundsCheck(idx, len *ssa.Value) {
if Debug['B'] != 0 {
return
}
// TODO: convert index to full width?
// TODO: if index is 64-bit and we're compiling to 32-bit, check that high 32 bits are zero.
// bounds check
cmp := s.newValue2(ssa.OpIsSliceInBounds, Types[TBOOL], idx, len)
s.check(cmp, panicslice)
}
// If cmp (a bool) is true, panic using the given function.
// If cmp (a bool) is false, panic using the given function.
func (s *state) check(cmp *ssa.Value, fn *Node) {
b := s.endBlock()
b.Kind = ssa.BlockIf
@@ -2958,19 +2988,23 @@ func (s *state) check(cmp *ssa.Value, fn *Node) {
// is started to load the return values.
func (s *state) rtcall(fn *Node, returns bool, results []*Type, args ...*ssa.Value) []*ssa.Value {
// Write args to the stack
var off int64 // TODO: arch-dependent starting offset?
off := Ctxt.FixedFrameSize()
for _, arg := range args {
t := arg.Type
off = Rnd(off, t.Alignment())
ptr := s.sp
if off != 0 {
ptr = s.newValue1I(ssa.OpOffPtr, Types[TUINTPTR], off, s.sp)
ptr = s.newValue1I(ssa.OpOffPtr, t.PtrTo(), off, s.sp)
}
size := t.Size()
s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, size, ptr, arg, s.mem())
off += size
}
off = Rnd(off, int64(Widthptr))
if Thearch.LinkArch.Name == "amd64p32" {
// amd64p32 wants 8-byte alignment of the start of the return values.
off = Rnd(off, 8)
}
// Issue call
call := s.newValue1A(ssa.OpStaticCall, ssa.TypeMem, fn.Sym, s.mem())
@@ -2981,7 +3015,7 @@ func (s *state) rtcall(fn *Node, returns bool, results []*Type, args ...*ssa.Val
if !returns {
b.Kind = ssa.BlockExit
b.SetControl(call)
call.AuxInt = off
call.AuxInt = off - Ctxt.FixedFrameSize()
if len(results) > 0 {
Fatalf("panic call can't have results")
}
@@ -3004,7 +3038,7 @@ func (s *state) rtcall(fn *Node, returns bool, results []*Type, args ...*ssa.Val
off = Rnd(off, t.Alignment())
ptr := s.sp
if off != 0 {
ptr = s.newValue1I(ssa.OpOffPtr, Types[TUINTPTR], off, s.sp)
ptr = s.newValue1I(ssa.OpOffPtr, Ptrto(t), off, s.sp)
}
res[i] = s.newValue2(ssa.OpLoad, t, ptr, s.mem())
off += t.Size()
@@ -3038,10 +3072,9 @@ func (s *state) insertWBmove(t *Type, left, right *ssa.Value, line int32, rightI
aux := &ssa.ExternSymbol{Typ: Types[TBOOL], Sym: syslook("writeBarrier").Sym}
flagaddr := s.newValue1A(ssa.OpAddr, Ptrto(Types[TUINT32]), aux, s.sb)
// TODO: select the .enabled field. It is currently first, so not needed for now.
// Load word, test byte, avoiding partial register write from load byte.
// Load word, test word, avoiding partial register write from load byte.
flag := s.newValue2(ssa.OpLoad, Types[TUINT32], flagaddr, s.mem())
flag = s.newValue1(ssa.OpTrunc64to8, Types[TBOOL], flag)
flag = s.newValue2(ssa.OpNeq32, Types[TBOOL], flag, s.constInt32(Types[TUINT32], 0))
b := s.endBlock()
b.Kind = ssa.BlockIf
b.Likely = ssa.BranchUnlikely
@@ -3062,7 +3095,7 @@ func (s *state) insertWBmove(t *Type, left, right *ssa.Value, line int32, rightI
tmp := temp(t)
s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, ssa.TypeMem, tmp, s.mem())
tmpaddr, _ := s.addr(tmp, true)
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, t.Size(), tmpaddr, right, s.mem())
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, SizeAlignAuxInt(t), tmpaddr, right, s.mem())
// Issue typedmemmove call.
taddr := s.newValue1A(ssa.OpAddr, Types[TUINTPTR], &ssa.ExternSymbol{Typ: Types[TUINTPTR], Sym: typenamesym(t)}, s.sb)
s.rtcall(typedmemmove, true, nil, taddr, left, tmpaddr)
@@ -3072,7 +3105,7 @@ func (s *state) insertWBmove(t *Type, left, right *ssa.Value, line int32, rightI
s.endBlock().AddEdgeTo(bEnd)
s.startBlock(bElse)
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, t.Size(), left, right, s.mem())
s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, SizeAlignAuxInt(t), left, right, s.mem())
s.endBlock().AddEdgeTo(bEnd)
s.startBlock(bEnd)
@@ -3106,10 +3139,9 @@ func (s *state) insertWBstore(t *Type, left, right *ssa.Value, line int32, skip
aux := &ssa.ExternSymbol{Typ: Types[TBOOL], Sym: syslook("writeBarrier").Sym}
flagaddr := s.newValue1A(ssa.OpAddr, Ptrto(Types[TUINT32]), aux, s.sb)
// TODO: select the .enabled field. It is currently first, so not needed for now.
// Load word, test byte, avoiding partial register write from load byte.
// Load word, test word, avoiding partial register write from load byte.
flag := s.newValue2(ssa.OpLoad, Types[TUINT32], flagaddr, s.mem())
flag = s.newValue1(ssa.OpTrunc64to8, Types[TBOOL], flag)
flag = s.newValue2(ssa.OpNeq32, Types[TBOOL], flag, s.constInt32(Types[TUINT32], 0))
b := s.endBlock()
b.Kind = ssa.BlockIf
b.Likely = ssa.BranchUnlikely
@@ -3919,6 +3951,11 @@ type SSAGenState struct {
// bstart remembers where each block starts (indexed by block ID)
bstart []*obj.Prog
// 387 port: maps from SSE registers (REG_X?) to 387 registers (REG_F?)
SSEto387 map[int16]int16
// Some architectures require a 64-bit temporary for FP-related register shuffling. Examples include x86-387, PPC, and Sparc V8.
ScratchFpMem *Node
}
// Pc returns the current Prog.
@@ -3955,6 +3992,13 @@ func genssa(f *ssa.Func, ptxt *obj.Prog, gcargs, gclocals *Sym) {
blockProgs[Pc] = f.Blocks[0]
}
if Thearch.Use387 {
s.SSEto387 = map[int16]int16{}
}
if f.Config.NeedsFpScratch {
s.ScratchFpMem = temp(Types[TUINT64])
}
// Emit basic blocks
for i, b := range f.Blocks {
s.bstart[b.ID] = Pc
@@ -4117,12 +4161,25 @@ func SSAGenFPJump(s *SSAGenState, b, next *ssa.Block, jumps *[2][2]FloatingEQNEJ
}
}
func AuxOffset(v *ssa.Value) (offset int64) {
if v.Aux == nil {
return 0
}
switch sym := v.Aux.(type) {
case *ssa.AutoSymbol:
n := sym.Node.(*Node)
return n.Xoffset
}
return 0
}
// AddAux adds the offset in the aux fields (AuxInt and Aux) of v to a.
func AddAux(a *obj.Addr, v *ssa.Value) {
AddAux2(a, v, v.AuxInt)
}
func AddAux2(a *obj.Addr, v *ssa.Value, offset int64) {
if a.Type != obj.TYPE_MEM {
if a.Type != obj.TYPE_MEM && a.Type != obj.TYPE_ADDR {
v.Fatalf("bad AddAux addr %v", a)
}
// add integer offset
@@ -4160,17 +4217,27 @@ func AddAux2(a *obj.Addr, v *ssa.Value, offset int64) {
}
}
// SizeAlignAuxInt returns an AuxInt encoding the size and alignment of type t.
func SizeAlignAuxInt(t *Type) int64 {
return ssa.MakeSizeAndAlign(t.Size(), t.Alignment()).Int64()
}
// extendIndex extends v to a full int width.
func (s *state) extendIndex(v *ssa.Value) *ssa.Value {
// panic using the given function if v does not fit in an int (only on 32-bit archs).
func (s *state) extendIndex(v *ssa.Value, panicfn *Node) *ssa.Value {
size := v.Type.Size()
if size == s.config.IntSize {
return v
}
if size > s.config.IntSize {
// TODO: truncate 64-bit indexes on 32-bit pointer archs. We'd need to test
// the high word and branch to out-of-bounds failure if it is not 0.
s.Unimplementedf("64->32 index truncation not implemented")
return v
// truncate 64-bit indexes on 32-bit pointer archs. Test the
// high word and branch to out-of-bounds failure if it is not 0.
if Debug['B'] == 0 {
hi := s.newValue1(ssa.OpInt64Hi, Types[TUINT32], v)
cmp := s.newValue2(ssa.OpEq32, Types[TBOOL], hi, s.constInt32(Types[TUINT32], 0))
s.check(cmp, panicfn)
}
return s.newValue1(ssa.OpTrunc64to32, Types[TINT], v)
}
// Extend value to the required size
@@ -4209,17 +4276,74 @@ func (s *state) extendIndex(v *ssa.Value) *ssa.Value {
return s.newValue1(op, Types[TINT], v)
}
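A plain-Go sketch of the guard extendIndex now emits for a 64-bit index on a 32-bit target (the panic stands in for panicfn):
func index32(s []byte, i int64) byte {
	if uint32(i>>32) != 0 { // OpInt64Hi must be zero
		panic("index out of range") // panicfn: Panicindex or panicslice
	}
	return s[uint32(i)] // the OpTrunc64to32 result is the real index
}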
// SSARegNum returns the register (in cmd/internal/obj numbering) to
// which v has been allocated. Panics if v is not assigned to a
// register.
// TODO: Make this panic again once it stops happening routinely.
func SSARegNum(v *ssa.Value) int16 {
// SSAReg returns the register to which v has been allocated.
func SSAReg(v *ssa.Value) *ssa.Register {
reg := v.Block.Func.RegAlloc[v.ID]
if reg == nil {
v.Unimplementedf("nil regnum for value: %s\n%s\n", v.LongString(), v.Block.Func)
return 0
v.Fatalf("nil register for value: %s\n%s\n", v.LongString(), v.Block.Func)
}
return reg.(*ssa.Register)
}
// SSAReg0 returns the register to which the first output of v has been allocated.
func SSAReg0(v *ssa.Value) *ssa.Register {
reg := v.Block.Func.RegAlloc[v.ID].(ssa.LocPair)[0]
if reg == nil {
v.Fatalf("nil first register for value: %s\n%s\n", v.LongString(), v.Block.Func)
}
return reg.(*ssa.Register)
}
// SSAReg1 returns the register to which the second output of v has been allocated.
func SSAReg1(v *ssa.Value) *ssa.Register {
reg := v.Block.Func.RegAlloc[v.ID].(ssa.LocPair)[1]
if reg == nil {
v.Fatalf("nil second register for value: %s\n%s\n", v.LongString(), v.Block.Func)
}
return reg.(*ssa.Register)
}
// SSARegNum returns the register number (in cmd/internal/obj numbering) to which v has been allocated.
func SSARegNum(v *ssa.Value) int16 {
return Thearch.SSARegToReg[SSAReg(v).Num]
}
// SSARegNum0 returns the register number (in cmd/internal/obj numbering) to which the first output of v has been allocated.
func SSARegNum0(v *ssa.Value) int16 {
return Thearch.SSARegToReg[SSAReg0(v).Num]
}
// SSARegNum1 returns the register number (in cmd/internal/obj numbering) to which the second output of v has been allocated.
func SSARegNum1(v *ssa.Value) int16 {
return Thearch.SSARegToReg[SSAReg1(v).Num]
}
// CheckLoweredPhi checks that regalloc and stackalloc correctly handled phi values.
// Called during ssaGenValue.
func CheckLoweredPhi(v *ssa.Value) {
if v.Op != ssa.OpPhi {
v.Fatalf("CheckLoweredPhi called with non-phi value: %v", v.LongString())
}
if v.Type.IsMemory() {
return
}
f := v.Block.Func
loc := f.RegAlloc[v.ID]
for _, a := range v.Args {
if aloc := f.RegAlloc[a.ID]; aloc != loc { // TODO: .Equal() instead?
v.Fatalf("phi arg at different location than phi: %v @ %v, but arg %v @ %v\n%s\n", v, loc, a, aloc, v.Block.Func)
}
}
}
// CheckLoweredGetClosurePtr checks that v is the first instruction in the function's entry block.
// The output of LoweredGetClosurePtr is generally hardwired to the correct register.
// That register contains the closure pointer on closure entry.
func CheckLoweredGetClosurePtr(v *ssa.Value) {
entry := v.Block.Func.Entry
if entry != v.Block || entry.Values[0] != v {
Fatalf("in %s, badly placed LoweredGetClosurePtr: %v %v", v.Block.Func.Name, v.Block, v)
}
return Thearch.SSARegToReg[reg.(*ssa.Register).Num]
}
// AutoVar returns a *Node and int64 representing the auto variable and offset within it
@@ -4361,6 +4485,25 @@ func (e *ssaExport) SplitComplex(name ssa.LocalSlot) (ssa.LocalSlot, ssa.LocalSl
return ssa.LocalSlot{N: n, Type: t, Off: name.Off}, ssa.LocalSlot{N: n, Type: t, Off: name.Off + s}
}
func (e *ssaExport) SplitInt64(name ssa.LocalSlot) (ssa.LocalSlot, ssa.LocalSlot) {
n := name.N.(*Node)
var t *Type
if name.Type.IsSigned() {
t = Types[TINT32]
} else {
t = Types[TUINT32]
}
if n.Class == PAUTO && !n.Addrtaken {
// Split this int64 up into two separate variables.
h := e.namedAuto(n.Sym.Name+".hi", t)
l := e.namedAuto(n.Sym.Name+".lo", Types[TUINT32])
return ssa.LocalSlot{N: h, Type: t, Off: 0}, ssa.LocalSlot{N: l, Type: Types[TUINT32], Off: 0}
}
// Return the two parts of the larger variable.
// Assuming little endian (we don't support big-endian 32-bit architectures yet)
return ssa.LocalSlot{N: n, Type: t, Off: name.Off + 4}, ssa.LocalSlot{N: n, Type: Types[TUINT32], Off: name.Off}
}
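Conceptually (little endian, with the .hi/.lo naming used by namedAuto above):
func split(x int64) (hi int32, lo uint32) {
	return int32(x >> 32), uint32(x) // x.hi at offset 4, x.lo at offset 0
}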
func (e *ssaExport) SplitStruct(name ssa.LocalSlot, i int) ssa.LocalSlot {
n := name.N.(*Node)
st := name.Type
@@ -4378,7 +4521,7 @@ func (e *ssaExport) SplitStruct(name ssa.LocalSlot, i int) ssa.LocalSlot {
// namedAuto returns a new AUTO variable with the given name and type.
func (e *ssaExport) namedAuto(name string, typ ssa.Type) ssa.GCNode {
t := typ.(*Type)
s := Lookup(name)
s := &Sym{Name: name, Pkg: autopkg}
n := Nod(ONAME, nil, nil)
s.Def = n
s.Def.Used = true

View File

@@ -1860,7 +1860,13 @@ func genwrapper(rcvr *Type, method *Field, newnam *Sym, iface int) {
dot := adddot(NodSym(OXDOT, this.Left, method.Sym))
// generate call
if !instrumenting && rcvr.IsPtr() && methodrcvr.IsPtr() && method.Embedded != 0 && !isifacemethod(method.Type) {
// It's not possible to use a tail call when dynamic linking on ppc64le. The
// bad scenario is when a local call is made to the wrapper: the wrapper will
// call the implementation, which might be in a different module and so set
// the TOC to the appropriate value for that module. But if it returns
// directly to the wrapper's caller, nothing will reset it to the correct
// value for that function.
if !instrumenting && rcvr.IsPtr() && methodrcvr.IsPtr() && method.Embedded != 0 && !isifacemethod(method.Type) && !(Thearch.LinkArch.Name == "ppc64le" && Ctxt.Flag_dynlink) {
// generate tail call: adjust pointer receiver and jump to embedded method.
dot = dot.Left // skip final .M
// TODO(mdempsky): Remove dependency on dotlist.

View File

@@ -79,6 +79,7 @@ const (
hasBreak = 1 << iota
notLiveAtEnd
isClosureVar
isOutputParamHeapAddr
)
func (n *Node) HasBreak() bool {
@@ -112,6 +113,17 @@ func (n *Node) setIsClosureVar(b bool) {
}
}
func (n *Node) IsOutputParamHeapAddr() bool {
return n.flags&isOutputParamHeapAddr != 0
}
func (n *Node) setIsOutputParamHeapAddr(b bool) {
if b {
n.flags |= isOutputParamHeapAddr
} else {
n.flags &^= isOutputParamHeapAddr
}
}
// Val returns the Val for the node.
func (n *Node) Val() Val {
if n.hasVal != +1 {

View File

@@ -553,6 +553,445 @@ func testOrPhi() {
}
}
//go:noinline
func addshiftLL_ssa(a, b uint32) uint32 {
return a + b<<3
}
//go:noinline
func subshiftLL_ssa(a, b uint32) uint32 {
return a - b<<3
}
//go:noinline
func rsbshiftLL_ssa(a, b uint32) uint32 {
return a<<3 - b
}
//go:noinline
func andshiftLL_ssa(a, b uint32) uint32 {
return a & (b << 3)
}
//go:noinline
func orshiftLL_ssa(a, b uint32) uint32 {
return a | b<<3
}
//go:noinline
func xorshiftLL_ssa(a, b uint32) uint32 {
return a ^ b<<3
}
//go:noinline
func bicshiftLL_ssa(a, b uint32) uint32 {
return a &^ (b << 3)
}
//go:noinline
func notshiftLL_ssa(a uint32) uint32 {
return ^(a << 3)
}
//go:noinline
func addshiftRL_ssa(a, b uint32) uint32 {
return a + b>>3
}
//go:noinline
func subshiftRL_ssa(a, b uint32) uint32 {
return a - b>>3
}
//go:noinline
func rsbshiftRL_ssa(a, b uint32) uint32 {
return a>>3 - b
}
//go:noinline
func andshiftRL_ssa(a, b uint32) uint32 {
return a & (b >> 3)
}
//go:noinline
func orshiftRL_ssa(a, b uint32) uint32 {
return a | b>>3
}
//go:noinline
func xorshiftRL_ssa(a, b uint32) uint32 {
return a ^ b>>3
}
//go:noinline
func bicshiftRL_ssa(a, b uint32) uint32 {
return a &^ (b >> 3)
}
//go:noinline
func notshiftRL_ssa(a uint32) uint32 {
return ^(a >> 3)
}
//go:noinline
func addshiftRA_ssa(a, b int32) int32 {
return a + b>>3
}
//go:noinline
func subshiftRA_ssa(a, b int32) int32 {
return a - b>>3
}
//go:noinline
func rsbshiftRA_ssa(a, b int32) int32 {
return a>>3 - b
}
//go:noinline
func andshiftRA_ssa(a, b int32) int32 {
return a & (b >> 3)
}
//go:noinline
func orshiftRA_ssa(a, b int32) int32 {
return a | b>>3
}
//go:noinline
func xorshiftRA_ssa(a, b int32) int32 {
return a ^ b>>3
}
//go:noinline
func bicshiftRA_ssa(a, b int32) int32 {
return a &^ (b >> 3)
}
//go:noinline
func notshiftRA_ssa(a int32) int32 {
return ^(a >> 3)
}
//go:noinline
func addshiftLLreg_ssa(a, b uint32, s uint8) uint32 {
return a + b<<s
}
//go:noinline
func subshiftLLreg_ssa(a, b uint32, s uint8) uint32 {
return a - b<<s
}
//go:noinline
func rsbshiftLLreg_ssa(a, b uint32, s uint8) uint32 {
return a<<s - b
}
//go:noinline
func andshiftLLreg_ssa(a, b uint32, s uint8) uint32 {
return a & (b << s)
}
//go:noinline
func orshiftLLreg_ssa(a, b uint32, s uint8) uint32 {
return a | b<<s
}
//go:noinline
func xorshiftLLreg_ssa(a, b uint32, s uint8) uint32 {
return a ^ b<<s
}
//go:noinline
func bicshiftLLreg_ssa(a, b uint32, s uint8) uint32 {
return a &^ (b << s)
}
//go:noinline
func notshiftLLreg_ssa(a uint32, s uint8) uint32 {
return ^(a << s)
}
//go:noinline
func addshiftRLreg_ssa(a, b uint32, s uint8) uint32 {
return a + b>>s
}
//go:noinline
func subshiftRLreg_ssa(a, b uint32, s uint8) uint32 {
return a - b>>s
}
//go:noinline
func rsbshiftRLreg_ssa(a, b uint32, s uint8) uint32 {
return a>>s - b
}
//go:noinline
func andshiftRLreg_ssa(a, b uint32, s uint8) uint32 {
return a & (b >> s)
}
//go:noinline
func orshiftRLreg_ssa(a, b uint32, s uint8) uint32 {
return a | b>>s
}
//go:noinline
func xorshiftRLreg_ssa(a, b uint32, s uint8) uint32 {
return a ^ b>>s
}
//go:noinline
func bicshiftRLreg_ssa(a, b uint32, s uint8) uint32 {
return a &^ (b >> s)
}
//go:noinline
func notshiftRLreg_ssa(a uint32, s uint8) uint32 {
return ^(a >> s)
}
//go:noinline
func addshiftRAreg_ssa(a, b int32, s uint8) int32 {
return a + b>>s
}
//go:noinline
func subshiftRAreg_ssa(a, b int32, s uint8) int32 {
return a - b>>s
}
//go:noinline
func rsbshiftRAreg_ssa(a, b int32, s uint8) int32 {
return a>>s - b
}
//go:noinline
func andshiftRAreg_ssa(a, b int32, s uint8) int32 {
return a & (b >> s)
}
//go:noinline
func orshiftRAreg_ssa(a, b int32, s uint8) int32 {
return a | b>>s
}
//go:noinline
func xorshiftRAreg_ssa(a, b int32, s uint8) int32 {
return a ^ b>>s
}
//go:noinline
func bicshiftRAreg_ssa(a, b int32, s uint8) int32 {
return a &^ (b >> s)
}
//go:noinline
func notshiftRAreg_ssa(a int32, s uint8) int32 {
return ^(a >> s)
}
// test ARM shifted ops
func testShiftedOps() {
a, b := uint32(10), uint32(42)
if want, got := a+b<<3, addshiftLL_ssa(a, b); got != want {
println("addshiftLL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a-b<<3, subshiftLL_ssa(a, b); got != want {
println("subshiftLL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a<<3-b, rsbshiftLL_ssa(a, b); got != want {
println("rsbshiftLL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a&(b<<3), andshiftLL_ssa(a, b); got != want {
println("andshiftLL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a|b<<3, orshiftLL_ssa(a, b); got != want {
println("orshiftLL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a^b<<3, xorshiftLL_ssa(a, b); got != want {
println("xorshiftLL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a&^(b<<3), bicshiftLL_ssa(a, b); got != want {
println("bicshiftLL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := ^(a << 3), notshiftLL_ssa(a); got != want {
println("notshiftLL_ssa(10) =", got, " want ", want)
failed = true
}
if want, got := a+b>>3, addshiftRL_ssa(a, b); got != want {
println("addshiftRL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a-b>>3, subshiftRL_ssa(a, b); got != want {
println("subshiftRL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a>>3-b, rsbshiftRL_ssa(a, b); got != want {
println("rsbshiftRL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a&(b>>3), andshiftRL_ssa(a, b); got != want {
println("andshiftRL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a|b>>3, orshiftRL_ssa(a, b); got != want {
println("orshiftRL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a^b>>3, xorshiftRL_ssa(a, b); got != want {
println("xorshiftRL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := a&^(b>>3), bicshiftRL_ssa(a, b); got != want {
println("bicshiftRL_ssa(10, 42) =", got, " want ", want)
failed = true
}
if want, got := ^(a >> 3), notshiftRL_ssa(a); got != want {
println("notshiftRL_ssa(10) =", got, " want ", want)
failed = true
}
c, d := int32(10), int32(-42)
if want, got := c+d>>3, addshiftRA_ssa(c, d); got != want {
println("addshiftRA_ssa(10, -42) =", got, " want ", want)
failed = true
}
if want, got := c-d>>3, subshiftRA_ssa(c, d); got != want {
println("subshiftRA_ssa(10, -42) =", got, " want ", want)
failed = true
}
if want, got := c>>3-d, rsbshiftRA_ssa(c, d); got != want {
println("rsbshiftRA_ssa(10, -42) =", got, " want ", want)
failed = true
}
if want, got := c&(d>>3), andshiftRA_ssa(c, d); got != want {
println("andshiftRA_ssa(10, -42) =", got, " want ", want)
failed = true
}
if want, got := c|d>>3, orshiftRA_ssa(c, d); got != want {
println("orshiftRA_ssa(10, -42) =", got, " want ", want)
failed = true
}
if want, got := c^d>>3, xorshiftRA_ssa(c, d); got != want {
println("xorshiftRA_ssa(10, -42) =", got, " want ", want)
failed = true
}
if want, got := c&^(d>>3), bicshiftRA_ssa(c, d); got != want {
println("bicshiftRA_ssa(10, -42) =", got, " want ", want)
failed = true
}
if want, got := ^(d >> 3), notshiftRA_ssa(d); got != want {
println("notshiftRA_ssa(-42) =", got, " want ", want)
failed = true
}
s := uint8(3)
if want, got := a+b<<s, addshiftLLreg_ssa(a, b, s); got != want {
println("addshiftLLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a-b<<s, subshiftLLreg_ssa(a, b, s); got != want {
println("subshiftLLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a<<s-b, rsbshiftLLreg_ssa(a, b, s); got != want {
println("rsbshiftLLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a&(b<<s), andshiftLLreg_ssa(a, b, s); got != want {
println("andshiftLLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a|b<<s, orshiftLLreg_ssa(a, b, s); got != want {
println("orshiftLLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a^b<<s, xorshiftLLreg_ssa(a, b, s); got != want {
println("xorshiftLLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a&^(b<<s), bicshiftLLreg_ssa(a, b, s); got != want {
println("bicshiftLLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := ^(a << s), notshiftLLreg_ssa(a, s); got != want {
println("notshiftLLreg_ssa(10) =", got, " want ", want)
failed = true
}
if want, got := a+b>>s, addshiftRLreg_ssa(a, b, s); got != want {
println("addshiftRLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a-b>>s, subshiftRLreg_ssa(a, b, s); got != want {
println("subshiftRLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a>>s-b, rsbshiftRLreg_ssa(a, b, s); got != want {
println("rsbshiftRLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a&(b>>s), andshiftRLreg_ssa(a, b, s); got != want {
println("andshiftRLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a|b>>s, orshiftRLreg_ssa(a, b, s); got != want {
println("orshiftRLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a^b>>s, xorshiftRLreg_ssa(a, b, s); got != want {
println("xorshiftRLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := a&^(b>>s), bicshiftRLreg_ssa(a, b, s); got != want {
println("bicshiftRLreg_ssa(10, 42, 3) =", got, " want ", want)
failed = true
}
if want, got := ^(a >> s), notshiftRLreg_ssa(a, s); got != want {
println("notshiftRLreg_ssa(10) =", got, " want ", want)
failed = true
}
if want, got := c+d>>s, addshiftRAreg_ssa(c, d, s); got != want {
println("addshiftRAreg_ssa(10, -42, 3) =", got, " want ", want)
failed = true
}
if want, got := c-d>>s, subshiftRAreg_ssa(c, d, s); got != want {
println("subshiftRAreg_ssa(10, -42, 3) =", got, " want ", want)
failed = true
}
if want, got := c>>s-d, rsbshiftRAreg_ssa(c, d, s); got != want {
println("rsbshiftRAreg_ssa(10, -42, 3) =", got, " want ", want)
failed = true
}
if want, got := c&(d>>s), andshiftRAreg_ssa(c, d, s); got != want {
println("andshiftRAreg_ssa(10, -42, 3) =", got, " want ", want)
failed = true
}
if want, got := c|d>>s, orshiftRAreg_ssa(c, d, s); got != want {
println("orshiftRAreg_ssa(10, -42, 3) =", got, " want ", want)
failed = true
}
if want, got := c^d>>s, xorshiftRAreg_ssa(c, d, s); got != want {
println("xorshiftRAreg_ssa(10, -42, 3) =", got, " want ", want)
failed = true
}
if want, got := c&^(d>>s), bicshiftRAreg_ssa(c, d, s); got != want {
println("bicshiftRAreg_ssa(10, -42, 3) =", got, " want ", want)
failed = true
}
if want, got := ^(d >> s), notshiftRAreg_ssa(d, s); got != want {
println("notshiftRAreg_ssa(-42, 3) =", got, " want ", want)
failed = true
}
}
var failed = false
func main() {
@@ -573,6 +1012,7 @@ func main() {
testLoadCombine()
testLoadSymCombine()
testShiftRemoval()
testShiftedOps()
if failed {
panic("failed")

View File

@@ -110,6 +110,67 @@ func testSmallIndexType() {
}
}
//go:noinline
func testInt64Index_ssa(s string, i int64) byte {
return s[i]
}
//go:noinline
func testInt64Slice_ssa(s string, i, j int64) string {
return s[i:j]
}
func testInt64Index() {
tests := []struct {
i int64
j int64
b byte
s string
}{
{0, 5, 'B', "Below"},
{5, 10, 'E', "Exact"},
{10, 15, 'A', "Above"},
}
str := "BelowExactAbove"
for i, t := range tests {
if got := testInt64Index_ssa(str, t.i); got != t.b {
println("#", i, "got ", got, ", wanted", t.b)
failed = true
}
if got := testInt64Slice_ssa(str, t.i, t.j); got != t.s {
println("#", i, "got ", got, ", wanted", t.s)
failed = true
}
}
}
func testInt64IndexPanic() {
defer func() {
if r := recover(); r != nil {
println("paniced as expected")
}
}()
str := "foobar"
println("got ", testInt64Index_ssa(str, 1<<32+1))
println("expected to panic, but didn't")
failed = true
}
func testInt64SlicePanic() {
defer func() {
if r := recover(); r != nil {
println("paniced as expected")
}
}()
str := "foobar"
println("got ", testInt64Slice_ssa(str, 1<<32, 1<<32+1))
println("expected to panic, but didn't")
failed = true
}
//go:noinline
func testStringElem_ssa(s string, i int) byte {
return s[i]
@@ -153,6 +214,9 @@ func main() {
testSmallIndexType()
testStringElem()
testStringElemConst()
testInt64Index()
testInt64IndexPanic()
testInt64SlicePanic()
if failed {
panic("failed")

View File

@@ -1207,6 +1207,7 @@ func (t *Type) ChanDir() ChanDir {
func (t *Type) IsMemory() bool { return false }
func (t *Type) IsFlags() bool { return false }
func (t *Type) IsVoid() bool { return false }
func (t *Type) IsTuple() bool { return false }
// IsUntyped reports whether t is an untyped type.
func (t *Type) IsUntyped() bool {

View File

@@ -1094,12 +1094,45 @@ opswitch:
if n.Type.IsFloat() {
if n.Left.Type.Etype == TINT64 {
n = mkcall("int64tofloat64", n.Type, init, conv(n.Left, Types[TINT64]))
n = conv(mkcall("int64tofloat64", Types[TFLOAT64], init, conv(n.Left, Types[TINT64])), n.Type)
break
}
if n.Left.Type.Etype == TUINT64 {
n = mkcall("uint64tofloat64", n.Type, init, conv(n.Left, Types[TUINT64]))
n = conv(mkcall("uint64tofloat64", Types[TFLOAT64], init, conv(n.Left, Types[TUINT64])), n.Type)
break
}
}
}
if Thearch.LinkArch.Family == sys.I386 {
if n.Left.Type.IsFloat() {
if n.Type.Etype == TINT64 {
n = mkcall("float64toint64", n.Type, init, conv(n.Left, Types[TFLOAT64]))
break
}
if n.Type.Etype == TUINT64 {
n = mkcall("float64touint64", n.Type, init, conv(n.Left, Types[TFLOAT64]))
break
}
if n.Type.Etype == TUINT32 || n.Type.Etype == TUINTPTR {
n = mkcall("float64touint32", n.Type, init, conv(n.Left, Types[TFLOAT64]))
break
}
}
if n.Type.IsFloat() {
if n.Left.Type.Etype == TINT64 {
n = conv(mkcall("int64tofloat64", Types[TFLOAT64], init, conv(n.Left, Types[TINT64])), n.Type)
break
}
if n.Left.Type.Etype == TUINT64 {
n = conv(mkcall("uint64tofloat64", Types[TFLOAT64], init, conv(n.Left, Types[TUINT64])), n.Type)
break
}
if n.Left.Type.Etype == TUINT32 || n.Left.Type.Etype == TUINTPTR {
n = conv(mkcall("uint32tofloat64", Types[TFLOAT64], init, conv(n.Left, Types[TUINT32])), n.Type)
break
}
}
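Note: these conversions go through runtime helpers on 386, and the helpers traffic only in float64; that is why the mkcall results are now wrapped in conv back to n.Type. A sketch of the equivalence, with int64tofloat64 as a stand-in for the runtime helper:

// stand-in for the runtime conversion helper; it always yields float64
func int64tofloat64(i int64) float64 { return float64(i) }

// what float32(i) lowers to, in spirit: the helper call plus a narrowing conv
func int64ToFloat32(i int64) float32 {
	return float32(int64tofloat64(i))
}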
@@ -3303,6 +3336,7 @@ func samecheap(a *Node, b *Node) bool {
// The result of walkrotate MUST be assigned back to n, e.g.
// n.Left = walkrotate(n.Left)
func walkrotate(n *Node) *Node {
//TODO: enable LROT on ARM64 once the old backend is gone
if Thearch.LinkArch.InFamily(sys.MIPS64, sys.ARM64, sys.PPC64) {
return n
}
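Note: walkrotate recognizes the OR of complementary shifts and folds it into a rotate where the target supports it; the TODO keeps ARM64 on the slow path until the old backend is gone. The source-level shape it looks for is, roughly:

// collapses to a single 32-bit rotate-left-by-3 where rotates are enabled
func rol3(x uint32) uint32 {
	return x<<3 | x>>29 // shift counts sum to the operand width
}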
@@ -3496,16 +3530,6 @@ func walkdiv(n *Node, init *Nodes) *Node {
goto ret
}
// TODO(zhongwei) Testing shows that the "quick division" method for TUINT8, TINT8, TUINT16,
// and TINT16 on the current arm64 backend is slower than the hardware div instruction on ARM64
// due to unnecessary data movement between registers. It could be enabled once the generated code is good enough.
if Thearch.LinkArch.Family == sys.ARM64 {
switch Simtype[nl.Type.Etype] {
case TUINT8, TINT8, TUINT16, TINT16:
return n
}
}
switch Simtype[nl.Type.Etype] {
default:
return n
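Note: the deleted arm64 special case opted small integer types out of walkdiv's "quick division", which replaces a constant divisor with a widening multiply by a precomputed reciprocal plus a shift. A sketch of the transformation for one divisor (the standard magic-number technique, not the compiler's exact output):

// x/3 without a divide instruction, valid for all uint32 x:
// 0xAAAAAAAB is ceil(2^33 / 3), so product>>33 recovers the quotient.
func div3(x uint32) uint32 {
	return uint32(uint64(x) * 0xAAAAAAAB >> 33)
}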


@@ -66,6 +66,11 @@ func Main() {
gc.Thearch.Doregbits = doregbits
gc.Thearch.Regnames = regnames
gc.Thearch.SSARegToReg = ssaRegToReg
gc.Thearch.SSAMarkMoves = ssaMarkMoves
gc.Thearch.SSAGenValue = ssaGenValue
gc.Thearch.SSAGenBlock = ssaGenBlock
initvariants()
initproginfo()


@@ -441,9 +441,8 @@ func gmove(f *gc.Node, t *gc.Node) {
gc.Regfree(&r3)
return
//warn("gmove: convert int to float not implemented: %N -> %N\n", f, t);
/*
* signed integer to float
* integer to float
*/
case gc.TINT32<<16 | gc.TFLOAT32,
gc.TINT32<<16 | gc.TFLOAT64,
@@ -452,10 +451,42 @@ func gmove(f *gc.Node, t *gc.Node) {
gc.TINT16<<16 | gc.TFLOAT32,
gc.TINT16<<16 | gc.TFLOAT64,
gc.TINT8<<16 | gc.TFLOAT32,
gc.TINT8<<16 | gc.TFLOAT64:
gc.TINT8<<16 | gc.TFLOAT64,
gc.TUINT16<<16 | gc.TFLOAT32,
gc.TUINT16<<16 | gc.TFLOAT64,
gc.TUINT8<<16 | gc.TFLOAT32,
gc.TUINT8<<16 | gc.TFLOAT64,
gc.TUINT32<<16 | gc.TFLOAT32,
gc.TUINT32<<16 | gc.TFLOAT64,
gc.TUINT64<<16 | gc.TFLOAT32,
gc.TUINT64<<16 | gc.TFLOAT64:
bignodes()
// The algorithm is:
// if small enough, use native int64 -> float64 conversion,
// otherwise halve (x -> (x>>1)|(x&1)), convert, and double.
// Note: could use FCFIDU instead if target supports it.
var r1 gc.Node
gc.Regalloc(&r1, gc.Types[gc.TINT64], nil)
gmove(f, &r1)
if ft == gc.TUINT64 {
gc.Nodreg(&r2, gc.Types[gc.TUINT64], ppc64.REGTMP)
gmove(&bigi, &r2)
gins(ppc64.ACMPU, &r1, &r2)
p1 := gc.Gbranch(optoas(gc.OLT, gc.Types[gc.TUINT64]), nil, +1)
var r3 gc.Node
gc.Regalloc(&r3, gc.Types[gc.TUINT64], nil)
p2 := gins(ppc64.AANDCC, nil, &r3) // andi.
p2.Reg = r1.Reg
p2.From.Type = obj.TYPE_CONST
p2.From.Offset = 1
p3 := gins(ppc64.ASRD, nil, &r1)
p3.From.Type = obj.TYPE_CONST
p3.From.Offset = 1
gins(ppc64.AOR, &r3, &r1)
gc.Regfree(&r3)
gc.Patch(p1, gc.Pc)
}
gc.Regalloc(&r2, gc.Types[gc.TFLOAT64], t)
p1 := gins(ppc64.AMOVD, &r1, nil)
p1.To.Type = obj.TYPE_MEM
@@ -467,36 +498,12 @@ func gmove(f *gc.Node, t *gc.Node) {
p1.From.Offset = -8
gins(ppc64.AFCFID, &r2, &r2)
gc.Regfree(&r1)
gmove(&r2, t)
gc.Regfree(&r2)
return
/*
* unsigned integer to float
*/
case gc.TUINT16<<16 | gc.TFLOAT32,
gc.TUINT16<<16 | gc.TFLOAT64,
gc.TUINT8<<16 | gc.TFLOAT32,
gc.TUINT8<<16 | gc.TFLOAT64,
gc.TUINT32<<16 | gc.TFLOAT32,
gc.TUINT32<<16 | gc.TFLOAT64,
gc.TUINT64<<16 | gc.TFLOAT32,
gc.TUINT64<<16 | gc.TFLOAT64:
var r1 gc.Node
gc.Regalloc(&r1, gc.Types[gc.TUINT64], nil)
gmove(f, &r1)
gc.Regalloc(&r2, gc.Types[gc.TFLOAT64], t)
p1 := gins(ppc64.AMOVD, &r1, nil)
p1.To.Type = obj.TYPE_MEM
p1.To.Reg = ppc64.REGSP
p1.To.Offset = -8
p1 = gins(ppc64.AFMOVD, nil, &r2)
p1.From.Type = obj.TYPE_MEM
p1.From.Reg = ppc64.REGSP
p1.From.Offset = -8
gins(ppc64.AFCFIDU, &r2, &r2)
gc.Regfree(&r1)
if ft == gc.TUINT64 {
p1 := gc.Gbranch(optoas(gc.OLT, gc.Types[gc.TUINT64]), nil, +1) // use CR0 here again
gc.Nodreg(&r1, gc.Types[gc.TFLOAT64], ppc64.FREGTWO)
gins(ppc64.AFMUL, &r1, &r2)
gc.Patch(p1, gc.Pc)
}
gmove(&r2, t)
gc.Regfree(&r2)
return
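Note: the rewritten case above implements the comment's algorithm; in Go the same strategy reads (a sketch, not the emitted PPC64 code):

func uint64ToFloat64(x uint64) float64 {
	if x < 1<<63 {
		return float64(int64(x)) // fits in int64: one native FCFID conversion
	}
	// Halve while folding the low bit back in (so rounding is unchanged),
	// convert, then double (the FMUL by FREGTWO patched in above).
	return float64(int64(x>>1|x&1)) * 2
}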


@@ -42,22 +42,34 @@ var progtable = [ppc64.ALAST & obj.AMask]obj.ProgInfo{
// Integer
ppc64.AADD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AADDC & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ASUB & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AADDME & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ANEG & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AAND & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AANDN & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AOR & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AORN & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AXOR & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AEQV & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AMULLD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AMULLW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AMULHD & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AMULHDU & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ADIVD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ADIVDU & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ADIVW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ADIVWU & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ASLD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ASRD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ASRAD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ASLW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ASRW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ASRAW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.ACMP & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RightRead},
ppc64.ACMPU & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RightRead},
ppc64.ACMPW & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RightRead},
ppc64.ACMPWU & obj.AMask: {Flags: gc.SizeL | gc.LeftRead | gc.RightRead},
ppc64.ATD & obj.AMask: {Flags: gc.SizeQ | gc.RightRead},
// Floating point.
@@ -70,11 +82,13 @@ var progtable = [ppc64.ALAST & obj.AMask]obj.ProgInfo{
ppc64.AFDIV & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AFDIVS & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AFCTIDZ & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AFCTIWZ & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AFCFID & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AFCFIDU & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RegRead | gc.RightWrite},
ppc64.AFCMPU & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | gc.RightRead},
ppc64.AFRSP & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | gc.RightWrite | gc.Conv},
ppc64.AFSQRT & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | gc.RightWrite},
ppc64.AFNEG & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | gc.RightWrite},
// Moves
ppc64.AMOVB & obj.AMask: {Flags: gc.SizeB | gc.LeftRead | gc.RightWrite | gc.Move | gc.Conv},
@@ -91,6 +105,8 @@ var progtable = [ppc64.ALAST & obj.AMask]obj.ProgInfo{
ppc64.AMOVD & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RightWrite | gc.Move},
ppc64.AMOVDU & obj.AMask: {Flags: gc.SizeQ | gc.LeftRead | gc.RightWrite | gc.Move | gc.PostInc},
ppc64.AFMOVS & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RightWrite | gc.Move | gc.Conv},
ppc64.AFMOVSX & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RightWrite | gc.Move | gc.Conv},
ppc64.AFMOVSZ & obj.AMask: {Flags: gc.SizeF | gc.LeftRead | gc.RightWrite | gc.Move | gc.Conv},
ppc64.AFMOVD & obj.AMask: {Flags: gc.SizeD | gc.LeftRead | gc.RightWrite | gc.Move},
// Jumps

File diff suppressed because it is too large

@@ -270,6 +270,7 @@ var passes = [...]pass{
{name: "checkLower", fn: checkLower, required: true},
{name: "late phielim", fn: phielim},
{name: "late copyelim", fn: copyelim},
{name: "phi tighten", fn: phiTighten},
{name: "late deadcode", fn: deadcode},
{name: "critical", fn: critical, required: true}, // remove critical edges
{name: "likelyadjust", fn: likelyadjust},


@@ -20,11 +20,18 @@ type Config struct {
lowerBlock func(*Block) bool // lowering function
lowerValue func(*Value, *Config) bool // lowering function
registers []Register // machine registers
gpRegMask regMask // general purpose integer register mask
fpRegMask regMask // floating point register mask
FPReg int8 // register number of frame pointer, -1 if not used
hasGReg bool // has hardware g register
fe Frontend // callbacks into compiler frontend
HTML *HTMLWriter // html writer, for debugging
ctxt *obj.Link // Generic arch information
optimize bool // Do optimization
noDuffDevice bool // Don't use Duff's device
nacl bool // GOOS=nacl
use387 bool // GO386=387
NeedsFpScratch bool // No direct move between GP and FP register sets
sparsePhiCutoff uint64 // Sparse phi location algorithm used above this #blocks*#variables score
curFunc *Func
@@ -106,6 +113,7 @@ type Frontend interface {
SplitSlice(LocalSlot) (LocalSlot, LocalSlot, LocalSlot)
SplitComplex(LocalSlot) (LocalSlot, LocalSlot)
SplitStruct(LocalSlot, int) LocalSlot
SplitInt64(LocalSlot) (LocalSlot, LocalSlot) // returns (hi, lo)
// Line returns a string describing the given line number.
Line(int32) string
@@ -128,29 +136,87 @@ func NewConfig(arch string, fe Frontend, ctxt *obj.Link, optimize bool) *Config
c.lowerBlock = rewriteBlockAMD64
c.lowerValue = rewriteValueAMD64
c.registers = registersAMD64[:]
case "386":
c.gpRegMask = gpRegMaskAMD64
c.fpRegMask = fpRegMaskAMD64
c.FPReg = framepointerRegAMD64
c.hasGReg = false
case "amd64p32":
c.IntSize = 4
c.PtrSize = 4
c.lowerBlock = rewriteBlockAMD64
c.lowerValue = rewriteValueAMD64 // TODO(khr): full 32-bit support
c.lowerValue = rewriteValueAMD64
c.registers = registersAMD64[:]
c.gpRegMask = gpRegMaskAMD64
c.fpRegMask = fpRegMaskAMD64
c.FPReg = framepointerRegAMD64
c.hasGReg = false
c.noDuffDevice = true
case "386":
c.IntSize = 4
c.PtrSize = 4
c.lowerBlock = rewriteBlock386
c.lowerValue = rewriteValue386
c.registers = registers386[:]
c.gpRegMask = gpRegMask386
c.fpRegMask = fpRegMask386
c.FPReg = framepointerReg386
c.hasGReg = false
case "arm":
c.IntSize = 4
c.PtrSize = 4
c.lowerBlock = rewriteBlockARM
c.lowerValue = rewriteValueARM
c.registers = registersARM[:]
c.gpRegMask = gpRegMaskARM
c.fpRegMask = fpRegMaskARM
c.FPReg = framepointerRegARM
c.hasGReg = true
case "arm64":
c.IntSize = 8
c.PtrSize = 8
c.lowerBlock = rewriteBlockARM64
c.lowerValue = rewriteValueARM64
c.registers = registersARM64[:]
c.gpRegMask = gpRegMaskARM64
c.fpRegMask = fpRegMaskARM64
c.FPReg = framepointerRegARM64
c.hasGReg = true
case "ppc64le":
c.IntSize = 8
c.PtrSize = 8
c.lowerBlock = rewriteBlockPPC64
c.lowerValue = rewriteValuePPC64
c.registers = registersPPC64[:]
c.gpRegMask = gpRegMaskPPC64
c.fpRegMask = fpRegMaskPPC64
c.FPReg = framepointerRegPPC64
c.noDuffDevice = true // TODO: Resolve PPC64 DuffDevice (has zero, but not copy)
c.NeedsFpScratch = true
c.hasGReg = true
default:
fe.Unimplementedf(0, "arch %s not implemented", arch)
}
c.ctxt = ctxt
c.optimize = optimize
c.nacl = obj.Getgoos() == "nacl"
// Don't use Duff's device on Plan 9, because floating
// Don't use Duff's device on Plan 9 AMD64, because floating
// point operations are not allowed in note handler.
if obj.Getgoos() == "plan9" {
if obj.Getgoos() == "plan9" && arch == "amd64" {
c.noDuffDevice = true
}
if c.nacl {
c.noDuffDevice = true // Don't use Duff's device on NaCl
// ARM assembler rewrites DIV/MOD to runtime calls, which
// clobber R12 on nacl
opcodeTable[OpARMDIV].reg.clobbers |= 1 << 12 // R12
opcodeTable[OpARMDIVU].reg.clobbers |= 1 << 12 // R12
opcodeTable[OpARMMOD].reg.clobbers |= 1 << 12 // R12
opcodeTable[OpARMMODU].reg.clobbers |= 1 << 12 // R12
}
// Assign IDs to preallocated values/blocks.
for i := range c.values {
c.values[i].ID = ID(i)
@@ -180,6 +246,11 @@ func NewConfig(arch string, fe Frontend, ctxt *obj.Link, optimize bool) *Config
return c
}
func (c *Config) Set387(b bool) {
c.NeedsFpScratch = b
c.use387 = b
}
func (c *Config) Frontend() Frontend { return c.fe }
func (c *Config) SparsePhiCutoff() uint64 { return c.sparsePhiCutoff }


@@ -163,6 +163,29 @@ func cse(f *Func) {
}
}
// if we rewrite a tuple generator to a new one in a different block,
// copy its selectors to the new generator's block, so tuple generator
// and selectors stay together.
for _, b := range f.Blocks {
for _, v := range b.Values {
if rewrite[v.ID] != nil {
continue
}
if v.Op != OpSelect0 && v.Op != OpSelect1 {
continue
}
if !v.Args[0].Type.IsTuple() {
f.Fatalf("arg of tuple selector %s is not a tuple: %s", v.String(), v.Args[0].LongString())
}
t := rewrite[v.Args[0].ID]
if t != nil && t.Block != b {
// v.Args[0] is the tuple generator, CSE'd into a different block as t; v is left behind
c := v.copyInto(t.Block)
rewrite[v.ID] = c
}
}
}
rewrites := int64(0)
// Apply substitutions
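Note: the new loop above keeps tuple selectors next to their (possibly relocated) generators. At the Go level a tuple-producing op is one instruction with two results, e.g. the division ops that Select0/Select1 project later in this change:

// divmod mirrors what a DIVQ produces in one shot; Select0 picks the
// quotient, Select1 the remainder.
func divmod(a, b int64) (q, r int64) {
	return a / b, a % b
}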


@@ -89,7 +89,7 @@ func dse(f *Func) {
} else {
// zero addr mem
sz := v.Args[0].Type.ElemType().Size()
if v.AuxInt != sz {
if SizeAndAlign(v.AuxInt).Size() != sz {
f.Fatalf("mismatched zero/store sizes: %d and %d [%s]",
v.AuxInt, sz, v.LongString())
}
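Note: Zero's AuxInt now packs an alignment next to the size, so dse must unpack it with SizeAndAlign instead of comparing AuxInt directly. The accessor pattern looks like the following sketch; the bit layout shown (size low, alignment high) is an assumption for illustration, not necessarily the package's encoding:

// Hypothetical packing, for illustration only.
type SizeAndAlign int64

func MakeSizeAndAlign(size, align int64) SizeAndAlign {
	return SizeAndAlign(size | align<<48) // assumed: low 48 bits size, high bits alignment
}

func (x SizeAndAlign) Size() int64  { return int64(x) & (1<<48 - 1) }
func (x SizeAndAlign) Align() int64 { return int64(x) >> 48 }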


@@ -25,6 +25,22 @@ func decomposeBuiltIn(f *Func) {
for _, name := range f.Names {
t := name.Type
switch {
case t.IsInteger() && t.Size() == 8 && f.Config.IntSize == 4:
var elemType Type
if t.IsSigned() {
elemType = f.Config.fe.TypeInt32()
} else {
elemType = f.Config.fe.TypeUInt32()
}
hiName, loName := f.Config.fe.SplitInt64(name)
newNames = append(newNames, hiName, loName)
for _, v := range f.NamedValues[name] {
hi := v.Block.NewValue1(v.Line, OpInt64Hi, elemType, v)
lo := v.Block.NewValue1(v.Line, OpInt64Lo, f.Config.fe.TypeUInt32(), v)
f.NamedValues[hiName] = append(f.NamedValues[hiName], hi)
f.NamedValues[loName] = append(f.NamedValues[loName], lo)
}
delete(f.NamedValues, name)
case t.IsComplex():
var elemType Type
if t.Size() == 16 {
@@ -78,6 +94,8 @@ func decomposeBuiltIn(f *Func) {
f.NamedValues[dataName] = append(f.NamedValues[dataName], data)
}
delete(f.NamedValues, name)
case t.IsFloat():
// floats are never decomposed, even ones bigger than IntSize
case t.Size() > f.Config.IntSize:
f.Unimplementedf("undecomposed named type %s %s", name, t)
default:
@@ -88,8 +106,13 @@ func decomposeBuiltIn(f *Func) {
}
func decomposeBuiltInPhi(v *Value) {
// TODO: decompose 64-bit ops on 32-bit archs?
switch {
case v.Type.IsInteger() && v.Type.Size() == 8 && v.Block.Func.Config.IntSize == 4:
if v.Block.Func.Config.arch == "amd64p32" {
// Even though ints are 32 bits, we have 64-bit ops.
break
}
decomposeInt64Phi(v)
case v.Type.IsComplex():
decomposeComplexPhi(v)
case v.Type.IsString():
@@ -98,6 +121,8 @@ func decomposeBuiltInPhi(v *Value) {
decomposeSlicePhi(v)
case v.Type.IsInterface():
decomposeInterfacePhi(v)
case v.Type.IsFloat():
// floats are never decomposed, even ones bigger than IntSize
case v.Type.Size() > v.Block.Func.Config.IntSize:
v.Unimplementedf("undecomposed type %s", v.Type)
}
@@ -138,6 +163,26 @@ func decomposeSlicePhi(v *Value) {
v.AddArg(cap)
}
func decomposeInt64Phi(v *Value) {
fe := v.Block.Func.Config.fe
var partType Type
if v.Type.IsSigned() {
partType = fe.TypeInt32()
} else {
partType = fe.TypeUInt32()
}
hi := v.Block.NewValue0(v.Line, OpPhi, partType)
lo := v.Block.NewValue0(v.Line, OpPhi, fe.TypeUInt32())
for _, a := range v.Args {
hi.AddArg(a.Block.NewValue1(v.Line, OpInt64Hi, partType, a))
lo.AddArg(a.Block.NewValue1(v.Line, OpInt64Lo, fe.TypeUInt32(), a))
}
v.reset(OpInt64Make)
v.AddArg(hi)
v.AddArg(lo)
}
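Note: decomposeInt64Phi splits one 64-bit phi into a hi/lo pair of 32-bit phis joined by Int64Make. The value-level meaning of the three ops is simply:

func hi(x uint64) uint32          { return uint32(x >> 32) }
func lo(x uint64) uint32          { return uint32(x) }
func make64(hi, lo uint32) uint64 { return uint64(hi)<<32 | uint64(lo) }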
func decomposeComplexPhi(v *Value) {
fe := v.Block.Func.Config.fe
var partType Type


@@ -49,6 +49,12 @@ func (d DummyFrontend) SplitComplex(s LocalSlot) (LocalSlot, LocalSlot) {
}
return LocalSlot{s.N, d.TypeFloat32(), s.Off}, LocalSlot{s.N, d.TypeFloat32(), s.Off + 4}
}
func (d DummyFrontend) SplitInt64(s LocalSlot) (LocalSlot, LocalSlot) {
if s.Type.IsSigned() {
return LocalSlot{s.N, d.TypeInt32(), s.Off + 4}, LocalSlot{s.N, d.TypeUInt32(), s.Off}
}
return LocalSlot{s.N, d.TypeUInt32(), s.Off + 4}, LocalSlot{s.N, d.TypeUInt32(), s.Off}
}
func (d DummyFrontend) SplitStruct(s LocalSlot, i int) LocalSlot {
return LocalSlot{s.N, s.Type.FieldType(i), s.Off + s.Type.FieldOff(i)}
}


@@ -4,8 +4,6 @@
package ssa
const flagRegMask = regMask(1) << 33 // TODO: arch-specific
// flagalloc allocates the flag register among all the flag-generating
// instructions. Flag values are recomputed if they need to be
// spilled/restored.
@@ -33,7 +31,7 @@ func flagalloc(f *Func) {
if v == flag {
flag = nil
}
if opcodeTable[v.Op].reg.clobbers&flagRegMask != 0 {
if opcodeTable[v.Op].clobberFlags {
flag = nil
}
for _, a := range v.Args {
@@ -97,7 +95,7 @@ func flagalloc(f *Func) {
continue
}
// Recalculate a
c := a.copyInto(b)
c := copyFlags(a, b)
// Update v.
v.SetArg(i, c)
// Remember the most-recently computed flag value.
@@ -105,7 +103,7 @@ func flagalloc(f *Func) {
}
// Issue v.
b.Values = append(b.Values, v)
if opcodeTable[v.Op].reg.clobbers&flagRegMask != 0 {
if opcodeTable[v.Op].clobberFlags {
flag = nil
}
if v.Type.IsFlags() {
@@ -121,7 +119,7 @@ func flagalloc(f *Func) {
if v := end[b.ID]; v != nil && v != flag {
// Need to reissue flag generator for use by
// subsequent blocks.
_ = v.copyInto(b)
copyFlags(v, b)
// Note: this flag generator is not properly linked up
// with the flag users. This breaks the SSA representation.
// We could fix up the users with another pass, but for now
@@ -135,3 +133,19 @@ func flagalloc(f *Func) {
b.FlagsLiveAtEnd = end[b.ID] != nil
}
}
// copyFlags copies v (flag generator) into b, returns the copy.
// If v's arg is also flags, copy recursively.
func copyFlags(v *Value, b *Block) *Value {
flagsArgs := make(map[int]*Value)
for i, a := range v.Args {
if a.Type.IsFlags() || a.Type.IsTuple() {
flagsArgs[i] = copyFlags(a, b)
}
}
c := v.copyInto(b)
for i, a := range flagsArgs {
c.SetArg(i, a)
}
return c
}

File diff suppressed because it is too large

@@ -0,0 +1,508 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import "strings"
// Notes:
// - Integer types live in the low portion of registers. Upper portions are junk.
// - Boolean types use the low-order byte of a register. 0=false, 1=true.
// Upper bytes are junk.
// - Floating-point types live in the low natural slot of an sse2 register.
// Unused portions are junk.
// - We do not use AH,BH,CH,DH registers.
// - When doing sub-register operations, we try to write the whole
// destination register to avoid a partial-register write.
// - Unused portions of AuxInt (or the Val portion of ValAndOff) are
// filled by sign-extending the used portion. Users of AuxInt which interpret
// AuxInt as unsigned (e.g. shifts) must be careful.
// Suffixes encode the bit width of various instructions.
// L (long word) = 32 bit
// W (word) = 16 bit
// B (byte) = 8 bit
// copied from ../../x86/reg.go
var regNames386 = []string{
"AX",
"CX",
"DX",
"BX",
"SP",
"BP",
"SI",
"DI",
"X0",
"X1",
"X2",
"X3",
"X4",
"X5",
"X6",
"X7",
// pseudo-registers
"SB",
}
// Notes on 387 support.
// - The 387 has a weird stack-register setup for floating-point registers.
// We use these registers when SSE registers are not available (when GO386=387).
// - We use the same register names (X0-X7) but they refer to the 387
// floating-point registers. That way, most of the SSA backend is unchanged.
// - The instruction generation pass maintains an SSE->387 register mapping.
// This mapping is updated whenever the FP stack is pushed or popped so that
// we can always find a given SSE register even when the TOS pointer has changed.
// - To facilitate the mapping from SSE to 387, we enforce that
// every basic block starts and ends with an empty floating-point stack.
func init() {
// Make map from reg names to reg integers.
if len(regNames386) > 64 {
panic("too many registers")
}
num := map[string]int{}
for i, name := range regNames386 {
num[name] = i
}
buildReg := func(s string) regMask {
m := regMask(0)
for _, r := range strings.Split(s, " ") {
if n, ok := num[r]; ok {
m |= regMask(1) << uint(n)
continue
}
panic("register " + r + " not found")
}
return m
}
// Common individual register masks
var (
ax = buildReg("AX")
cx = buildReg("CX")
dx = buildReg("DX")
gp = buildReg("AX CX DX BX BP SI DI")
fp = buildReg("X0 X1 X2 X3 X4 X5 X6 X7")
x7 = buildReg("X7")
gpsp = gp | buildReg("SP")
gpspsb = gpsp | buildReg("SB")
callerSave = gp | fp
)
// Common slices of register masks
var (
gponly = []regMask{gp}
fponly = []regMask{fp}
)
// Common regInfo
var (
gp01 = regInfo{inputs: nil, outputs: gponly}
gp11 = regInfo{inputs: []regMask{gp}, outputs: gponly}
gp11sp = regInfo{inputs: []regMask{gpsp}, outputs: gponly}
gp11sb = regInfo{inputs: []regMask{gpspsb}, outputs: gponly}
gp21 = regInfo{inputs: []regMask{gp, gp}, outputs: gponly}
gp11carry = regInfo{inputs: []regMask{gp}, outputs: []regMask{0, gp}}
gp21carry = regInfo{inputs: []regMask{gp, gp}, outputs: []regMask{0, gp}}
gp1carry1 = regInfo{inputs: []regMask{gp}, outputs: gponly}
gp2carry1 = regInfo{inputs: []regMask{gp, gp}, outputs: gponly}
gp21sp = regInfo{inputs: []regMask{gpsp, gp}, outputs: gponly}
gp21sb = regInfo{inputs: []regMask{gpspsb, gpsp}, outputs: gponly}
gp21shift = regInfo{inputs: []regMask{gp, cx}, outputs: []regMask{gp}}
gp11div = regInfo{inputs: []regMask{ax, gpsp &^ dx}, outputs: []regMask{ax}, clobbers: dx}
gp21hmul = regInfo{inputs: []regMask{ax, gpsp}, outputs: []regMask{dx}, clobbers: ax}
gp11mod = regInfo{inputs: []regMask{ax, gpsp &^ dx}, outputs: []regMask{dx}, clobbers: ax}
gp21mul = regInfo{inputs: []regMask{ax, gpsp}, outputs: []regMask{dx, ax}}
gp2flags = regInfo{inputs: []regMask{gpsp, gpsp}}
gp1flags = regInfo{inputs: []regMask{gpsp}}
flagsgp = regInfo{inputs: nil, outputs: gponly}
readflags = regInfo{inputs: nil, outputs: gponly}
flagsgpax = regInfo{inputs: nil, clobbers: ax, outputs: []regMask{gp &^ ax}}
gpload = regInfo{inputs: []regMask{gpspsb, 0}, outputs: gponly}
gploadidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}, outputs: gponly}
gpstore = regInfo{inputs: []regMask{gpspsb, gpsp, 0}}
gpstoreconst = regInfo{inputs: []regMask{gpspsb, 0}}
gpstoreidx = regInfo{inputs: []regMask{gpspsb, gpsp, gpsp, 0}}
gpstoreconstidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}}
fp01 = regInfo{inputs: nil, outputs: fponly}
fp21 = regInfo{inputs: []regMask{fp, fp}, outputs: fponly}
fp21x7 = regInfo{inputs: []regMask{fp &^ x7, fp &^ x7},
clobbers: x7, outputs: []regMask{fp &^ x7}}
fpgp = regInfo{inputs: fponly, outputs: gponly}
gpfp = regInfo{inputs: gponly, outputs: fponly}
fp11 = regInfo{inputs: fponly, outputs: fponly}
fp2flags = regInfo{inputs: []regMask{fp, fp}}
fpload = regInfo{inputs: []regMask{gpspsb, 0}, outputs: fponly}
fploadidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}, outputs: fponly}
fpstore = regInfo{inputs: []regMask{gpspsb, fp, 0}}
fpstoreidx = regInfo{inputs: []regMask{gpspsb, gpsp, fp, 0}}
)
var _386ops = []opData{
// fp ops
{name: "ADDSS", argLength: 2, reg: fp21, asm: "ADDSS", commutative: true, resultInArg0: true}, // fp32 add
{name: "ADDSD", argLength: 2, reg: fp21, asm: "ADDSD", commutative: true, resultInArg0: true}, // fp64 add
{name: "SUBSS", argLength: 2, reg: fp21x7, asm: "SUBSS", resultInArg0: true}, // fp32 sub
{name: "SUBSD", argLength: 2, reg: fp21x7, asm: "SUBSD", resultInArg0: true}, // fp64 sub
{name: "MULSS", argLength: 2, reg: fp21, asm: "MULSS", commutative: true, resultInArg0: true}, // fp32 mul
{name: "MULSD", argLength: 2, reg: fp21, asm: "MULSD", commutative: true, resultInArg0: true}, // fp64 mul
{name: "DIVSS", argLength: 2, reg: fp21x7, asm: "DIVSS", resultInArg0: true}, // fp32 div
{name: "DIVSD", argLength: 2, reg: fp21x7, asm: "DIVSD", resultInArg0: true}, // fp64 div
{name: "MOVSSload", argLength: 2, reg: fpload, asm: "MOVSS", aux: "SymOff"}, // fp32 load
{name: "MOVSDload", argLength: 2, reg: fpload, asm: "MOVSD", aux: "SymOff"}, // fp64 load
{name: "MOVSSconst", reg: fp01, asm: "MOVSS", aux: "Float32", rematerializeable: true}, // fp32 constant
{name: "MOVSDconst", reg: fp01, asm: "MOVSD", aux: "Float64", rematerializeable: true}, // fp64 constant
{name: "MOVSSloadidx1", argLength: 3, reg: fploadidx, asm: "MOVSS", aux: "SymOff"}, // fp32 load indexed by i
{name: "MOVSSloadidx4", argLength: 3, reg: fploadidx, asm: "MOVSS", aux: "SymOff"}, // fp32 load indexed by 4*i
{name: "MOVSDloadidx1", argLength: 3, reg: fploadidx, asm: "MOVSD", aux: "SymOff"}, // fp64 load indexed by i
{name: "MOVSDloadidx8", argLength: 3, reg: fploadidx, asm: "MOVSD", aux: "SymOff"}, // fp64 load indexed by 8*i
{name: "MOVSSstore", argLength: 3, reg: fpstore, asm: "MOVSS", aux: "SymOff"}, // fp32 store
{name: "MOVSDstore", argLength: 3, reg: fpstore, asm: "MOVSD", aux: "SymOff"}, // fp64 store
{name: "MOVSSstoreidx1", argLength: 4, reg: fpstoreidx, asm: "MOVSS", aux: "SymOff"}, // fp32 indexed by i store
{name: "MOVSSstoreidx4", argLength: 4, reg: fpstoreidx, asm: "MOVSS", aux: "SymOff"}, // fp32 indexed by 4i store
{name: "MOVSDstoreidx1", argLength: 4, reg: fpstoreidx, asm: "MOVSD", aux: "SymOff"}, // fp64 indexed by i store
{name: "MOVSDstoreidx8", argLength: 4, reg: fpstoreidx, asm: "MOVSD", aux: "SymOff"}, // fp64 indexed by 8i store
// binary ops
{name: "ADDL", argLength: 2, reg: gp21sp, asm: "ADDL", commutative: true, clobberFlags: true}, // arg0 + arg1
{name: "ADDLconst", argLength: 1, reg: gp11sp, asm: "ADDL", aux: "Int32", typ: "UInt32", clobberFlags: true}, // arg0 + auxint
{name: "ADDLcarry", argLength: 2, reg: gp21carry, asm: "ADDL", commutative: true, resultInArg0: true}, // arg0 + arg1, generates <carry,result> pair
{name: "ADDLconstcarry", argLength: 1, reg: gp11carry, asm: "ADDL", aux: "Int32", resultInArg0: true}, // arg0 + auxint, generates <carry,result> pair
{name: "ADCL", argLength: 3, reg: gp2carry1, asm: "ADCL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0+arg1+carry(arg2), where arg2 is flags
{name: "ADCLconst", argLength: 2, reg: gp1carry1, asm: "ADCL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0+auxint+carry(arg1), where arg1 is flags
{name: "SUBL", argLength: 2, reg: gp21, asm: "SUBL", resultInArg0: true, clobberFlags: true}, // arg0 - arg1
{name: "SUBLconst", argLength: 1, reg: gp11, asm: "SUBL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 - auxint
{name: "SUBLcarry", argLength: 2, reg: gp21carry, asm: "SUBL", resultInArg0: true}, // arg0-arg1, generates <borrow,result> pair
{name: "SUBLconstcarry", argLength: 1, reg: gp11carry, asm: "SUBL", aux: "Int32", resultInArg0: true}, // arg0-auxint, generates <borrow,result> pair
{name: "SBBL", argLength: 3, reg: gp2carry1, asm: "SBBL", resultInArg0: true, clobberFlags: true}, // arg0-arg1-borrow(arg2), where arg2 is flags
{name: "SBBLconst", argLength: 2, reg: gp1carry1, asm: "SBBL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0-auxint-borrow(arg1), where arg1 is flags
{name: "MULL", argLength: 2, reg: gp21, asm: "IMULL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 * arg1
{name: "MULLconst", argLength: 1, reg: gp11, asm: "IMULL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 * auxint
{name: "HMULL", argLength: 2, reg: gp21hmul, asm: "IMULL", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULLU", argLength: 2, reg: gp21hmul, asm: "MULL", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULW", argLength: 2, reg: gp21hmul, asm: "IMULW", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULB", argLength: 2, reg: gp21hmul, asm: "IMULB", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULWU", argLength: 2, reg: gp21hmul, asm: "MULW", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULBU", argLength: 2, reg: gp21hmul, asm: "MULB", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "MULLQU", argLength: 2, reg: gp21mul, asm: "MULL", clobberFlags: true}, // arg0 * arg1, high 32 in result[0], low 32 in result[1]
{name: "DIVL", argLength: 2, reg: gp11div, asm: "IDIVL", clobberFlags: true}, // arg0 / arg1
{name: "DIVW", argLength: 2, reg: gp11div, asm: "IDIVW", clobberFlags: true}, // arg0 / arg1
{name: "DIVLU", argLength: 2, reg: gp11div, asm: "DIVL", clobberFlags: true}, // arg0 / arg1
{name: "DIVWU", argLength: 2, reg: gp11div, asm: "DIVW", clobberFlags: true}, // arg0 / arg1
{name: "MODL", argLength: 2, reg: gp11mod, asm: "IDIVL", clobberFlags: true}, // arg0 % arg1
{name: "MODW", argLength: 2, reg: gp11mod, asm: "IDIVW", clobberFlags: true}, // arg0 % arg1
{name: "MODLU", argLength: 2, reg: gp11mod, asm: "DIVL", clobberFlags: true}, // arg0 % arg1
{name: "MODWU", argLength: 2, reg: gp11mod, asm: "DIVW", clobberFlags: true}, // arg0 % arg1
{name: "ANDL", argLength: 2, reg: gp21, asm: "ANDL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 & arg1
{name: "ANDLconst", argLength: 1, reg: gp11, asm: "ANDL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 & auxint
{name: "ORL", argLength: 2, reg: gp21, asm: "ORL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 | arg1
{name: "ORLconst", argLength: 1, reg: gp11, asm: "ORL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 | auxint
{name: "XORL", argLength: 2, reg: gp21, asm: "XORL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 ^ arg1
{name: "XORLconst", argLength: 1, reg: gp11, asm: "XORL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 ^ auxint
{name: "CMPL", argLength: 2, reg: gp2flags, asm: "CMPL", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPW", argLength: 2, reg: gp2flags, asm: "CMPW", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPB", argLength: 2, reg: gp2flags, asm: "CMPB", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPLconst", argLength: 1, reg: gp1flags, asm: "CMPL", typ: "Flags", aux: "Int32"}, // arg0 compare to auxint
{name: "CMPWconst", argLength: 1, reg: gp1flags, asm: "CMPW", typ: "Flags", aux: "Int16"}, // arg0 compare to auxint
{name: "CMPBconst", argLength: 1, reg: gp1flags, asm: "CMPB", typ: "Flags", aux: "Int8"}, // arg0 compare to auxint
{name: "UCOMISS", argLength: 2, reg: fp2flags, asm: "UCOMISS", typ: "Flags"}, // arg0 compare to arg1, f32
{name: "UCOMISD", argLength: 2, reg: fp2flags, asm: "UCOMISD", typ: "Flags"}, // arg0 compare to arg1, f64
{name: "TESTL", argLength: 2, reg: gp2flags, asm: "TESTL", typ: "Flags"}, // (arg0 & arg1) compare to 0
{name: "TESTW", argLength: 2, reg: gp2flags, asm: "TESTW", typ: "Flags"}, // (arg0 & arg1) compare to 0
{name: "TESTB", argLength: 2, reg: gp2flags, asm: "TESTB", typ: "Flags"}, // (arg0 & arg1) compare to 0
{name: "TESTLconst", argLength: 1, reg: gp1flags, asm: "TESTL", typ: "Flags", aux: "Int32"}, // (arg0 & auxint) compare to 0
{name: "TESTWconst", argLength: 1, reg: gp1flags, asm: "TESTW", typ: "Flags", aux: "Int16"}, // (arg0 & auxint) compare to 0
{name: "TESTBconst", argLength: 1, reg: gp1flags, asm: "TESTB", typ: "Flags", aux: "Int8"}, // (arg0 & auxint) compare to 0
{name: "SHLL", argLength: 2, reg: gp21shift, asm: "SHLL", resultInArg0: true, clobberFlags: true}, // arg0 << arg1, shift amount is mod 32
{name: "SHLLconst", argLength: 1, reg: gp11, asm: "SHLL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 << auxint, shift amount 0-31
// Note: x86 is weird; the 16- and 8-bit shifts still use all 5 bits of shift amount!
{name: "SHRL", argLength: 2, reg: gp21shift, asm: "SHRL", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRW", argLength: 2, reg: gp21shift, asm: "SHRW", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRB", argLength: 2, reg: gp21shift, asm: "SHRB", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRLconst", argLength: 1, reg: gp11, asm: "SHRL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SHRWconst", argLength: 1, reg: gp11, asm: "SHRW", aux: "Int16", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SHRBconst", argLength: 1, reg: gp11, asm: "SHRB", aux: "Int8", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SARL", argLength: 2, reg: gp21shift, asm: "SARL", resultInArg0: true, clobberFlags: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARW", argLength: 2, reg: gp21shift, asm: "SARW", resultInArg0: true, clobberFlags: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARB", argLength: 2, reg: gp21shift, asm: "SARB", resultInArg0: true, clobberFlags: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARLconst", argLength: 1, reg: gp11, asm: "SARL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "SARWconst", argLength: 1, reg: gp11, asm: "SARW", aux: "Int16", resultInArg0: true, clobberFlags: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "SARBconst", argLength: 1, reg: gp11, asm: "SARB", aux: "Int8", resultInArg0: true, clobberFlags: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "ROLLconst", argLength: 1, reg: gp11, asm: "ROLL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 rotate left auxint, rotate amount 0-31
{name: "ROLWconst", argLength: 1, reg: gp11, asm: "ROLW", aux: "Int16", resultInArg0: true, clobberFlags: true}, // arg0 rotate left auxint, rotate amount 0-15
{name: "ROLBconst", argLength: 1, reg: gp11, asm: "ROLB", aux: "Int8", resultInArg0: true, clobberFlags: true}, // arg0 rotate left auxint, rotate amount 0-7
// unary ops
{name: "NEGL", argLength: 1, reg: gp11, asm: "NEGL", resultInArg0: true, clobberFlags: true}, // -arg0
{name: "NOTL", argLength: 1, reg: gp11, asm: "NOTL", resultInArg0: true, clobberFlags: true}, // ^arg0
{name: "BSFL", argLength: 1, reg: gp11, asm: "BSFL", clobberFlags: true}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSFW", argLength: 1, reg: gp11, asm: "BSFW", clobberFlags: true}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSRL", argLength: 1, reg: gp11, asm: "BSRL", clobberFlags: true}, // arg0 # of high-order zeroes ; undef if zero
{name: "BSRW", argLength: 1, reg: gp11, asm: "BSRW", clobberFlags: true}, // arg0 # of high-order zeroes ; undef if zero
{name: "BSWAPL", argLength: 1, reg: gp11, asm: "BSWAPL", resultInArg0: true, clobberFlags: true}, // arg0 swap bytes
{name: "SQRTSD", argLength: 1, reg: fp11, asm: "SQRTSD"}, // sqrt(arg0)
{name: "SBBLcarrymask", argLength: 1, reg: flagsgp, asm: "SBBL"}, // (int32)(-1) if carry is set, 0 if carry is clear.
// Note: SBBW and SBBB are subsumed by SBBL
{name: "SETEQ", argLength: 1, reg: readflags, asm: "SETEQ"}, // extract == condition from arg0
{name: "SETNE", argLength: 1, reg: readflags, asm: "SETNE"}, // extract != condition from arg0
{name: "SETL", argLength: 1, reg: readflags, asm: "SETLT"}, // extract signed < condition from arg0
{name: "SETLE", argLength: 1, reg: readflags, asm: "SETLE"}, // extract signed <= condition from arg0
{name: "SETG", argLength: 1, reg: readflags, asm: "SETGT"}, // extract signed > condition from arg0
{name: "SETGE", argLength: 1, reg: readflags, asm: "SETGE"}, // extract signed >= condition from arg0
{name: "SETB", argLength: 1, reg: readflags, asm: "SETCS"}, // extract unsigned < condition from arg0
{name: "SETBE", argLength: 1, reg: readflags, asm: "SETLS"}, // extract unsigned <= condition from arg0
{name: "SETA", argLength: 1, reg: readflags, asm: "SETHI"}, // extract unsigned > condition from arg0
{name: "SETAE", argLength: 1, reg: readflags, asm: "SETCC"}, // extract unsigned >= condition from arg0
// Need different opcodes for floating point conditions because
// any comparison involving a NaN is always FALSE and thus
// the patterns for inverting conditions cannot be used.
{name: "SETEQF", argLength: 1, reg: flagsgpax, asm: "SETEQ", clobberFlags: true}, // extract == condition from arg0
{name: "SETNEF", argLength: 1, reg: flagsgpax, asm: "SETNE", clobberFlags: true}, // extract != condition from arg0
{name: "SETORD", argLength: 1, reg: flagsgp, asm: "SETPC"}, // extract "ordered" (No Nan present) condition from arg0
{name: "SETNAN", argLength: 1, reg: flagsgp, asm: "SETPS"}, // extract "unordered" (Nan present) condition from arg0
{name: "SETGF", argLength: 1, reg: flagsgp, asm: "SETHI"}, // extract floating > condition from arg0
{name: "SETGEF", argLength: 1, reg: flagsgp, asm: "SETCC"}, // extract floating >= condition from arg0
{name: "MOVBLSX", argLength: 1, reg: gp11, asm: "MOVBLSX"}, // sign extend arg0 from int8 to int32
{name: "MOVBLZX", argLength: 1, reg: gp11, asm: "MOVBLZX"}, // zero extend arg0 from int8 to int32
{name: "MOVWLSX", argLength: 1, reg: gp11, asm: "MOVWLSX"}, // sign extend arg0 from int16 to int32
{name: "MOVWLZX", argLength: 1, reg: gp11, asm: "MOVWLZX"}, // zero extend arg0 from int16 to int32
{name: "MOVLconst", reg: gp01, asm: "MOVL", typ: "UInt32", aux: "Int32", rematerializeable: true}, // 32 low bits of auxint
{name: "CVTTSD2SL", argLength: 1, reg: fpgp, asm: "CVTTSD2SL"}, // convert float64 to int32
{name: "CVTTSS2SL", argLength: 1, reg: fpgp, asm: "CVTTSS2SL"}, // convert float32 to int32
{name: "CVTSL2SS", argLength: 1, reg: gpfp, asm: "CVTSL2SS"}, // convert int32 to float32
{name: "CVTSL2SD", argLength: 1, reg: gpfp, asm: "CVTSL2SD"}, // convert int32 to float64
{name: "CVTSD2SS", argLength: 1, reg: fp11, asm: "CVTSD2SS"}, // convert float64 to float32
{name: "CVTSS2SD", argLength: 1, reg: fp11, asm: "CVTSS2SD"}, // convert float32 to float64
{name: "PXOR", argLength: 2, reg: fp21, asm: "PXOR", commutative: true, resultInArg0: true}, // exclusive or, applied to X regs for float negation.
{name: "LEAL", argLength: 1, reg: gp11sb, aux: "SymOff", rematerializeable: true}, // arg0 + auxint + offset encoded in aux
{name: "LEAL1", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + arg1 + auxint + aux
{name: "LEAL2", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 2*arg1 + auxint + aux
{name: "LEAL4", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 4*arg1 + auxint + aux
{name: "LEAL8", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 8*arg1 + auxint + aux
// Note: LEAL{1,2,4,8} must not have OpSB as either argument.
// auxint+aux == add auxint and the offset of the symbol in aux (if any) to the effective address
{name: "MOVBload", argLength: 2, reg: gpload, asm: "MOVBLZX", aux: "SymOff", typ: "UInt8"}, // load byte from arg0+auxint+aux. arg1=mem. Zero extend.
{name: "MOVBLSXload", argLength: 2, reg: gpload, asm: "MOVBLSX", aux: "SymOff"}, // ditto, sign extend to int32
{name: "MOVWload", argLength: 2, reg: gpload, asm: "MOVWLZX", aux: "SymOff", typ: "UInt16"}, // load 2 bytes from arg0+auxint+aux. arg1=mem. Zero extend.
{name: "MOVWLSXload", argLength: 2, reg: gpload, asm: "MOVWLSX", aux: "SymOff"}, // ditto, sign extend to int32
{name: "MOVLload", argLength: 2, reg: gpload, asm: "MOVL", aux: "SymOff", typ: "UInt32"}, // load 4 bytes from arg0+auxint+aux. arg1=mem. Zero extend.
{name: "MOVBstore", argLength: 3, reg: gpstore, asm: "MOVB", aux: "SymOff", typ: "Mem"}, // store byte in arg1 to arg0+auxint+aux. arg2=mem
{name: "MOVWstore", argLength: 3, reg: gpstore, asm: "MOVW", aux: "SymOff", typ: "Mem"}, // store 2 bytes in arg1 to arg0+auxint+aux. arg2=mem
{name: "MOVLstore", argLength: 3, reg: gpstore, asm: "MOVL", aux: "SymOff", typ: "Mem"}, // store 4 bytes in arg1 to arg0+auxint+aux. arg2=mem
// indexed loads/stores
{name: "MOVBloadidx1", argLength: 3, reg: gploadidx, asm: "MOVBLZX", aux: "SymOff"}, // load a byte from arg0+arg1+auxint+aux. arg2=mem
{name: "MOVWloadidx1", argLength: 3, reg: gploadidx, asm: "MOVWLZX", aux: "SymOff"}, // load 2 bytes from arg0+arg1+auxint+aux. arg2=mem
{name: "MOVWloadidx2", argLength: 3, reg: gploadidx, asm: "MOVWLZX", aux: "SymOff"}, // load 2 bytes from arg0+2*arg1+auxint+aux. arg2=mem
{name: "MOVLloadidx1", argLength: 3, reg: gploadidx, asm: "MOVL", aux: "SymOff"}, // load 4 bytes from arg0+arg1+auxint+aux. arg2=mem
{name: "MOVLloadidx4", argLength: 3, reg: gploadidx, asm: "MOVL", aux: "SymOff"}, // load 4 bytes from arg0+4*arg1+auxint+aux. arg2=mem
// TODO: sign-extending indexed loads
{name: "MOVBstoreidx1", argLength: 4, reg: gpstoreidx, asm: "MOVB", aux: "SymOff"}, // store byte in arg2 to arg0+arg1+auxint+aux. arg3=mem
{name: "MOVWstoreidx1", argLength: 4, reg: gpstoreidx, asm: "MOVW", aux: "SymOff"}, // store 2 bytes in arg2 to arg0+arg1+auxint+aux. arg3=mem
{name: "MOVWstoreidx2", argLength: 4, reg: gpstoreidx, asm: "MOVW", aux: "SymOff"}, // store 2 bytes in arg2 to arg0+2*arg1+auxint+aux. arg3=mem
{name: "MOVLstoreidx1", argLength: 4, reg: gpstoreidx, asm: "MOVL", aux: "SymOff"}, // store 4 bytes in arg2 to arg0+arg1+auxint+aux. arg3=mem
{name: "MOVLstoreidx4", argLength: 4, reg: gpstoreidx, asm: "MOVL", aux: "SymOff"}, // store 4 bytes in arg2 to arg0+4*arg1+auxint+aux. arg3=mem
// TODO: add size-mismatched indexed loads, like MOVBstoreidx4.
// For storeconst ops, the AuxInt field encodes both
// the value to store and an address offset of the store.
// Cast AuxInt to a ValAndOff to extract Val and Off fields.
{name: "MOVBstoreconst", argLength: 2, reg: gpstoreconst, asm: "MOVB", aux: "SymValAndOff", typ: "Mem"}, // store low byte of ValAndOff(AuxInt).Val() to arg0+ValAndOff(AuxInt).Off()+aux. arg1=mem
{name: "MOVWstoreconst", argLength: 2, reg: gpstoreconst, asm: "MOVW", aux: "SymValAndOff", typ: "Mem"}, // store low 2 bytes of ...
{name: "MOVLstoreconst", argLength: 2, reg: gpstoreconst, asm: "MOVL", aux: "SymValAndOff", typ: "Mem"}, // store low 4 bytes of ...
{name: "MOVBstoreconstidx1", argLength: 3, reg: gpstoreconstidx, asm: "MOVB", aux: "SymValAndOff", typ: "Mem"}, // store low byte of ValAndOff(AuxInt).Val() to arg0+1*arg1+ValAndOff(AuxInt).Off()+aux. arg2=mem
{name: "MOVWstoreconstidx1", argLength: 3, reg: gpstoreconstidx, asm: "MOVW", aux: "SymValAndOff", typ: "Mem"}, // store low 2 bytes of ... arg1 ...
{name: "MOVWstoreconstidx2", argLength: 3, reg: gpstoreconstidx, asm: "MOVW", aux: "SymValAndOff", typ: "Mem"}, // store low 2 bytes of ... 2*arg1 ...
{name: "MOVLstoreconstidx1", argLength: 3, reg: gpstoreconstidx, asm: "MOVL", aux: "SymValAndOff", typ: "Mem"}, // store low 4 bytes of ... arg1 ...
{name: "MOVLstoreconstidx4", argLength: 3, reg: gpstoreconstidx, asm: "MOVL", aux: "SymValAndOff", typ: "Mem"}, // store low 4 bytes of ... 4*arg1 ...
// arg0 = pointer to start of memory to zero
// arg1 = value to store (will always be zero)
// arg2 = mem
// auxint = offset into duffzero code to start executing
// returns mem
{
name: "DUFFZERO",
aux: "Int64",
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("DI"), buildReg("AX")},
clobbers: buildReg("DI CX"),
// Note: CX is only clobbered when dynamically linking.
},
},
// arg0 = address of memory to zero
// arg1 = # of 4-byte words to zero
// arg2 = value to store (will always be zero)
// arg3 = mem
// returns mem
{
name: "REPSTOSL",
argLength: 4,
reg: regInfo{
inputs: []regMask{buildReg("DI"), buildReg("CX"), buildReg("AX")},
clobbers: buildReg("DI CX"),
},
},
{name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff", clobberFlags: true}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
{name: "CALLclosure", argLength: 3, reg: regInfo{inputs: []regMask{gpsp, buildReg("DX"), 0}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call function via closure. arg0=codeptr, arg1=closure, arg2=mem, auxint=argsize, returns mem
{name: "CALLdefer", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call deferproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLgo", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call newproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLinter", argLength: 2, reg: regInfo{inputs: []regMask{gp}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call fn by pointer. arg0=codeptr, arg1=mem, auxint=argsize, returns mem
// arg0 = destination pointer
// arg1 = source pointer
// arg2 = mem
// auxint = offset from duffcopy symbol to call
// returns memory
{
name: "DUFFCOPY",
aux: "Int64",
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("DI"), buildReg("SI")},
clobbers: buildReg("DI SI CX"), // uses CX as a temporary
},
clobberFlags: true,
},
// arg0 = destination pointer
// arg1 = source pointer
// arg2 = # of 8-byte words to copy
// arg3 = mem
// returns memory
{
name: "REPMOVSL",
argLength: 4,
reg: regInfo{
inputs: []regMask{buildReg("DI"), buildReg("SI"), buildReg("CX")},
clobbers: buildReg("DI SI CX"),
},
},
// (InvertFlags (CMPL a b)) == (CMPL b a)
// So if we want (SETL (CMPL a b)) but we can't do that because a is a constant,
// then we do (SETL (InvertFlags (CMPL b a))) instead.
// Rewrites will convert this to (SETG (CMPL b a)).
// InvertFlags is a pseudo-op which can't appear in assembly output.
{name: "InvertFlags", argLength: 1}, // reverse direction of arg0
// Pseudo-ops
{name: "LoweredGetG", argLength: 1, reg: gp01}, // arg0=mem
// Scheduler ensures LoweredGetClosurePtr occurs only in entry block,
// and sorts it to the very beginning of the block to prevent other
// use of DX (the closure pointer)
{name: "LoweredGetClosurePtr", reg: regInfo{outputs: []regMask{buildReg("DX")}}},
//arg0=ptr,arg1=mem, returns void. Faults if ptr is nil.
{name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gpsp}}, clobberFlags: true},
// MOVLconvert converts between pointers and integers.
// We have a special op for this so as to not confuse GC
// (particularly stack maps). It takes a memory arg so it
// gets correctly ordered with respect to GC safepoints.
// arg0=ptr/int arg1=mem, output=int/ptr
{name: "MOVLconvert", argLength: 2, reg: gp11, asm: "MOVL"},
// Constant flag values. For any comparison, there are 5 possible
// outcomes: the three from the signed total order (<,==,>) and the
// three from the unsigned total order. The == cases overlap.
// Note: there's a sixth "unordered" outcome for floating-point
// comparisons, but we don't use such a beast yet.
// These ops are for temporary use by rewrite rules. They
// cannot appear in the generated assembly.
{name: "FlagEQ"}, // equal
{name: "FlagLT_ULT"}, // signed < and unsigned <
{name: "FlagLT_UGT"}, // signed < and unsigned >
{name: "FlagGT_UGT"}, // signed > and unsigned <
{name: "FlagGT_ULT"}, // signed > and unsigned >
// Special op for -x on 387
{name: "FCHS", argLength: 1, reg: fp11},
// Special ops for PIC floating-point constants.
// MOVSXconst1 loads the address of the constant-pool entry into a register.
// MOVSXconst2 loads the constant from that address.
// MOVSXconst1 returns a pointer, but we type it as uint32 because it can never point to the Go heap.
{name: "MOVSSconst1", reg: gp01, typ: "UInt32", aux: "Float32"},
{name: "MOVSDconst1", reg: gp01, typ: "UInt32", aux: "Float64"},
{name: "MOVSSconst2", argLength: 1, reg: gpfp, asm: "MOVSS"},
{name: "MOVSDconst2", argLength: 1, reg: gpfp, asm: "MOVSD"},
}
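Note: the carry ops above are how 64-bit arithmetic survives on 386: ADDLcarry produces a <carry,result> pair and ADCL consumes it. In Go terms, the pair computes:

// one 64-bit add built from two 32-bit halves, as ADDLcarry/ADCL do
func add64via32(xhi, xlo, yhi, ylo uint32) (hi, lo uint32) {
	lo = xlo + ylo
	var carry uint32
	if lo < xlo { // unsigned overflow out of the low word
		carry = 1
	}
	hi = xhi + yhi + carry
	return
}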
var _386blocks = []blockData{
{name: "EQ"},
{name: "NE"},
{name: "LT"},
{name: "LE"},
{name: "GT"},
{name: "GE"},
{name: "ULT"},
{name: "ULE"},
{name: "UGT"},
{name: "UGE"},
{name: "EQF"},
{name: "NEF"},
{name: "ORD"}, // FP, ordered comparison (parity zero)
{name: "NAN"}, // FP, unordered comparison (parity one)
}
archs = append(archs, arch{
name: "386",
pkg: "cmd/internal/obj/x86",
genfile: "../../x86/ssa.go",
ops: _386ops,
blocks: _386blocks,
regnames: regNames386,
gpregmask: gp,
fpregmask: fp,
framepointerreg: int8(num["BP"]),
})
}


@@ -4,7 +4,8 @@
// Lowering arithmetic
(Add64 x y) -> (ADDQ x y)
(AddPtr x y) -> (ADDQ x y)
(AddPtr x y) && config.PtrSize == 8 -> (ADDQ x y)
(AddPtr x y) && config.PtrSize == 4 -> (ADDL x y)
(Add32 x y) -> (ADDL x y)
(Add16 x y) -> (ADDL x y)
(Add8 x y) -> (ADDL x y)
@@ -12,7 +13,8 @@
(Add64F x y) -> (ADDSD x y)
(Sub64 x y) -> (SUBQ x y)
(SubPtr x y) -> (SUBQ x y)
(SubPtr x y) && config.PtrSize == 8 -> (SUBQ x y)
(SubPtr x y) && config.PtrSize == 4 -> (SUBL x y)
(Sub32 x y) -> (SUBL x y)
(Sub16 x y) -> (SUBL x y)
(Sub8 x y) -> (SUBL x y)
@@ -29,14 +31,14 @@
(Div32F x y) -> (DIVSS x y)
(Div64F x y) -> (DIVSD x y)
(Div64 x y) -> (DIVQ x y)
(Div64u x y) -> (DIVQU x y)
(Div32 x y) -> (DIVL x y)
(Div32u x y) -> (DIVLU x y)
(Div16 x y) -> (DIVW x y)
(Div16u x y) -> (DIVWU x y)
(Div8 x y) -> (DIVW (SignExt8to16 x) (SignExt8to16 y))
(Div8u x y) -> (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y))
(Div64 x y) -> (Select0 (DIVQ x y))
(Div64u x y) -> (Select0 (DIVQU x y))
(Div32 x y) -> (Select0 (DIVL x y))
(Div32u x y) -> (Select0 (DIVLU x y))
(Div16 x y) -> (Select0 (DIVW x y))
(Div16u x y) -> (Select0 (DIVWU x y))
(Div8 x y) -> (Select0 (DIVW (SignExt8to16 x) (SignExt8to16 y)))
(Div8u x y) -> (Select0 (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y)))
(Hmul64 x y) -> (HMULQ x y)
(Hmul64u x y) -> (HMULQU x y)
@@ -49,14 +51,14 @@
(Avg64u x y) -> (AVGQU x y)
(Mod64 x y) -> (MODQ x y)
(Mod64u x y) -> (MODQU x y)
(Mod32 x y) -> (MODL x y)
(Mod32u x y) -> (MODLU x y)
(Mod16 x y) -> (MODW x y)
(Mod16u x y) -> (MODWU x y)
(Mod8 x y) -> (MODW (SignExt8to16 x) (SignExt8to16 y))
(Mod8u x y) -> (MODWU (ZeroExt8to16 x) (ZeroExt8to16 y))
(Mod64 x y) -> (Select1 (DIVQ x y))
(Mod64u x y) -> (Select1 (DIVQU x y))
(Mod32 x y) -> (Select1 (DIVL x y))
(Mod32u x y) -> (Select1 (DIVLU x y))
(Mod16 x y) -> (Select1 (DIVW x y))
(Mod16u x y) -> (Select1 (DIVWU x y))
(Mod8 x y) -> (Select1 (DIVW (SignExt8to16 x) (SignExt8to16 y)))
(Mod8u x y) -> (Select1 (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y)))
(And64 x y) -> (ANDQ x y)
(And32 x y) -> (ANDL x y)
@@ -91,8 +93,9 @@
(Not x) -> (XORLconst [1] x)
// Lowering pointer arithmetic
(OffPtr [off] ptr) && is32Bit(off) -> (ADDQconst [off] ptr)
(OffPtr [off] ptr) -> (ADDQ (MOVQconst [off]) ptr)
(OffPtr [off] ptr) && config.PtrSize == 8 && is32Bit(off) -> (ADDQconst [off] ptr)
(OffPtr [off] ptr) && config.PtrSize == 8 -> (ADDQ (MOVQconst [off]) ptr)
(OffPtr [off] ptr) && config.PtrSize == 4 -> (ADDLconst [off] ptr)
// Lowering other arithmetic
// TODO: CMPQconst 0 below is redundant because BSF sets Z but how to remove?
@@ -270,7 +273,8 @@
(Eq16 x y) -> (SETEQ (CMPW x y))
(Eq8 x y) -> (SETEQ (CMPB x y))
(EqB x y) -> (SETEQ (CMPB x y))
(EqPtr x y) -> (SETEQ (CMPQ x y))
(EqPtr x y) && config.PtrSize == 8 -> (SETEQ (CMPQ x y))
(EqPtr x y) && config.PtrSize == 4 -> (SETEQ (CMPL x y))
(Eq64F x y) -> (SETEQF (UCOMISD x y))
(Eq32F x y) -> (SETEQF (UCOMISS x y))
@@ -279,13 +283,16 @@
(Neq16 x y) -> (SETNE (CMPW x y))
(Neq8 x y) -> (SETNE (CMPB x y))
(NeqB x y) -> (SETNE (CMPB x y))
(NeqPtr x y) -> (SETNE (CMPQ x y))
(NeqPtr x y) && config.PtrSize == 8 -> (SETNE (CMPQ x y))
(NeqPtr x y) && config.PtrSize == 4 -> (SETNE (CMPL x y))
(Neq64F x y) -> (SETNEF (UCOMISD x y))
(Neq32F x y) -> (SETNEF (UCOMISS x y))
(Int64Hi x) -> (SHRQconst [32] x) // needed for amd64p32
// Lowering loads
(Load <t> ptr mem) && (is64BitInt(t) || isPtr(t)) -> (MOVQload ptr mem)
(Load <t> ptr mem) && is32BitInt(t) -> (MOVLload ptr mem)
(Load <t> ptr mem) && (is64BitInt(t) || isPtr(t) && config.PtrSize == 8) -> (MOVQload ptr mem)
(Load <t> ptr mem) && (is32BitInt(t) || isPtr(t) && config.PtrSize == 4) -> (MOVLload ptr mem)
(Load <t> ptr mem) && is16BitInt(t) -> (MOVWload ptr mem)
(Load <t> ptr mem) && (t.IsBoolean() || is8BitInt(t)) -> (MOVBload ptr mem)
(Load <t> ptr mem) && is32BitFloat(t) -> (MOVSSload ptr mem)
@@ -302,39 +309,47 @@
(Store [1] ptr val mem) -> (MOVBstore ptr val mem)
// Lowering moves
(Move [0] _ _ mem) -> mem
(Move [1] dst src mem) -> (MOVBstore dst (MOVBload src mem) mem)
(Move [2] dst src mem) -> (MOVWstore dst (MOVWload src mem) mem)
(Move [4] dst src mem) -> (MOVLstore dst (MOVLload src mem) mem)
(Move [8] dst src mem) -> (MOVQstore dst (MOVQload src mem) mem)
(Move [16] dst src mem) -> (MOVOstore dst (MOVOload src mem) mem)
(Move [3] dst src mem) ->
(Move [s] _ _ mem) && SizeAndAlign(s).Size() == 0 -> mem
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 1 -> (MOVBstore dst (MOVBload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 2 -> (MOVWstore dst (MOVWload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 4 -> (MOVLstore dst (MOVLload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 8 -> (MOVQstore dst (MOVQload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 16 -> (MOVOstore dst (MOVOload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 3 ->
(MOVBstore [2] dst (MOVBload [2] src mem)
(MOVWstore dst (MOVWload src mem) mem))
(Move [5] dst src mem) ->
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 5 ->
(MOVBstore [4] dst (MOVBload [4] src mem)
(MOVLstore dst (MOVLload src mem) mem))
(Move [6] dst src mem) ->
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 6 ->
(MOVWstore [4] dst (MOVWload [4] src mem)
(MOVLstore dst (MOVLload src mem) mem))
(Move [7] dst src mem) ->
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 7 ->
(MOVLstore [3] dst (MOVLload [3] src mem)
(MOVLstore dst (MOVLload src mem) mem))
(Move [size] dst src mem) && size > 8 && size < 16 ->
(MOVQstore [size-8] dst (MOVQload [size-8] src mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() > 8 && SizeAndAlign(s).Size() < 16 ->
(MOVQstore [SizeAndAlign(s).Size()-8] dst (MOVQload [SizeAndAlign(s).Size()-8] src mem)
(MOVQstore dst (MOVQload src mem) mem))
// Adjust moves to be a multiple of 16 bytes.
(Move [size] dst src mem) && size > 16 && size%16 != 0 && size%16 <= 8 ->
(Move [size-size%16] (ADDQconst <dst.Type> dst [size%16]) (ADDQconst <src.Type> src [size%16])
(Move [s] dst src mem)
&& SizeAndAlign(s).Size() > 16 && SizeAndAlign(s).Size()%16 != 0 && SizeAndAlign(s).Size()%16 <= 8 ->
(Move [SizeAndAlign(s).Size()-SizeAndAlign(s).Size()%16]
(OffPtr <dst.Type> dst [SizeAndAlign(s).Size()%16])
(OffPtr <src.Type> src [SizeAndAlign(s).Size()%16])
(MOVQstore dst (MOVQload src mem) mem))
(Move [size] dst src mem) && size > 16 && size%16 != 0 && size%16 > 8 ->
(Move [size-size%16] (ADDQconst <dst.Type> dst [size%16]) (ADDQconst <src.Type> src [size%16])
(Move [s] dst src mem)
&& SizeAndAlign(s).Size() > 16 && SizeAndAlign(s).Size()%16 != 0 && SizeAndAlign(s).Size()%16 > 8 ->
(Move [SizeAndAlign(s).Size()-SizeAndAlign(s).Size()%16]
(OffPtr <dst.Type> dst [SizeAndAlign(s).Size()%16])
(OffPtr <src.Type> src [SizeAndAlign(s).Size()%16])
(MOVOstore dst (MOVOload src mem) mem))
// Medium copying uses a duff device.
(Move [size] dst src mem) && size >= 32 && size <= 16*64 && size%16 == 0 && !config.noDuffDevice ->
(DUFFCOPY [14*(64-size/16)] dst src mem)
(Move [s] dst src mem)
&& SizeAndAlign(s).Size() >= 32 && SizeAndAlign(s).Size() <= 16*64 && SizeAndAlign(s).Size()%16 == 0
&& !config.noDuffDevice ->
(DUFFCOPY [14*(64-SizeAndAlign(s).Size()/16)] dst src mem)
// 14 and 64 are magic constants. 14 is the number of bytes to encode:
// MOVUPS (SI), X0
// ADDQ $16, SI
@@ -343,57 +358,62 @@
// and 64 is the number of such blocks. See src/runtime/duff_amd64.s:duffcopy.
// Large copying uses REP MOVSQ.
(Move [size] dst src mem) && (size > 16*64 || config.noDuffDevice) && size%8 == 0 ->
(REPMOVSQ dst src (MOVQconst [size/8]) mem)
(Move [s] dst src mem) && (SizeAndAlign(s).Size() > 16*64 || config.noDuffDevice) && SizeAndAlign(s).Size()%8 == 0 ->
(REPMOVSQ dst src (MOVQconst [SizeAndAlign(s).Size()/8]) mem)
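// Example: a 2048-byte move (beyond the duff limit of 16*64 = 1024 bytes)
// becomes REPMOVSQ with a count of 2048/8 = 256 quadwords in CX.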
// Lowering Zero instructions
(Zero [0] _ mem) -> mem
(Zero [1] destptr mem) -> (MOVBstoreconst [0] destptr mem)
(Zero [2] destptr mem) -> (MOVWstoreconst [0] destptr mem)
(Zero [4] destptr mem) -> (MOVLstoreconst [0] destptr mem)
(Zero [8] destptr mem) -> (MOVQstoreconst [0] destptr mem)
(Zero [s] _ mem) && SizeAndAlign(s).Size() == 0 -> mem
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 1 -> (MOVBstoreconst [0] destptr mem)
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 2 -> (MOVWstoreconst [0] destptr mem)
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 4 -> (MOVLstoreconst [0] destptr mem)
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 8 -> (MOVQstoreconst [0] destptr mem)
(Zero [3] destptr mem) ->
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 3 ->
(MOVBstoreconst [makeValAndOff(0,2)] destptr
(MOVWstoreconst [0] destptr mem))
(Zero [5] destptr mem) ->
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 5 ->
(MOVBstoreconst [makeValAndOff(0,4)] destptr
(MOVLstoreconst [0] destptr mem))
(Zero [6] destptr mem) ->
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 6 ->
(MOVWstoreconst [makeValAndOff(0,4)] destptr
(MOVLstoreconst [0] destptr mem))
(Zero [7] destptr mem) ->
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 7 ->
(MOVLstoreconst [makeValAndOff(0,3)] destptr
(MOVLstoreconst [0] destptr mem))
// Strip off any fractional word zeroing.
(Zero [size] destptr mem) && size%8 != 0 && size > 8 ->
(Zero [size-size%8] (ADDQconst destptr [size%8])
(Zero [s] destptr mem) && SizeAndAlign(s).Size()%8 != 0 && SizeAndAlign(s).Size() > 8 ->
(Zero [SizeAndAlign(s).Size()-SizeAndAlign(s).Size()%8] (OffPtr <destptr.Type> destptr [SizeAndAlign(s).Size()%8])
(MOVQstoreconst [0] destptr mem))
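// Example: an 11-byte zero stores 8 zero bytes at destptr, then recurses
// with an 8-byte Zero at destptr+3; the overlap is harmless because both
// stores write zeros.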
// Zero small numbers of words directly.
(Zero [16] destptr mem) ->
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 16 ->
(MOVQstoreconst [makeValAndOff(0,8)] destptr
(MOVQstoreconst [0] destptr mem))
(Zero [24] destptr mem) ->
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 24 ->
(MOVQstoreconst [makeValAndOff(0,16)] destptr
(MOVQstoreconst [makeValAndOff(0,8)] destptr
(MOVQstoreconst [0] destptr mem)))
(Zero [32] destptr mem) ->
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 32 ->
(MOVQstoreconst [makeValAndOff(0,24)] destptr
(MOVQstoreconst [makeValAndOff(0,16)] destptr
(MOVQstoreconst [makeValAndOff(0,8)] destptr
(MOVQstoreconst [0] destptr mem))))
// Medium zeroing uses a duff device.
(Zero [size] destptr mem) && size <= 1024 && size%8 == 0 && size%16 != 0 && !config.noDuffDevice ->
(Zero [size-8] (ADDQconst [8] destptr) (MOVQstore destptr (MOVQconst [0]) mem))
(Zero [size] destptr mem) && size <= 1024 && size%16 == 0 && !config.noDuffDevice ->
(DUFFZERO [duffStart(size)] (ADDQconst [duffAdj(size)] destptr) (MOVOconst [0]) mem)
(Zero [s] destptr mem)
&& SizeAndAlign(s).Size() <= 1024 && SizeAndAlign(s).Size()%8 == 0 && SizeAndAlign(s).Size()%16 != 0
&& !config.noDuffDevice ->
(Zero [SizeAndAlign(s).Size()-8] (OffPtr <destptr.Type> [8] destptr) (MOVQstore destptr (MOVQconst [0]) mem))
(Zero [s] destptr mem)
&& SizeAndAlign(s).Size() <= 1024 && SizeAndAlign(s).Size()%16 == 0 && !config.noDuffDevice ->
(DUFFZERO [SizeAndAlign(s).Size()] destptr (MOVOconst [0]) mem)
// Large zeroing uses REP STOSQ.
(Zero [size] destptr mem) && (size > 1024 || (config.noDuffDevice && size > 32)) && size%8 == 0 ->
(REPSTOSQ destptr (MOVQconst [size/8]) (MOVQconst [0]) mem)
(Zero [s] destptr mem)
&& (SizeAndAlign(s).Size() > 1024 || (config.noDuffDevice && SizeAndAlign(s).Size() > 32))
&& SizeAndAlign(s).Size()%8 == 0 ->
(REPSTOSQ destptr (MOVQconst [SizeAndAlign(s).Size()/8]) (MOVQconst [0]) mem)
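// Taken together these Zero rules form a size ladder: up to 8 bytes is a
// single store-constant, 16/24/32 are short MOVQstoreconst chains, multiples
// of 16 up to 1024 go through DUFFZERO (whose auxint is now a byte count),
// and everything larger (or with the duff device disabled) uses REP STOSQ.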
// Lowering constants
(Const8 [val]) -> (MOVLconst [val])
@@ -402,7 +422,8 @@
(Const64 [val]) -> (MOVQconst [val])
(Const32F [val]) -> (MOVSSconst [val])
(Const64F [val]) -> (MOVSDconst [val])
(ConstNil) -> (MOVQconst [0])
(ConstNil) && config.PtrSize == 8 -> (MOVQconst [0])
(ConstNil) && config.PtrSize == 4 -> (MOVLconst [0])
(ConstBool [b]) -> (MOVLconst [b])
// Lowering calls
@@ -413,15 +434,17 @@
(InterCall [argwid] entry mem) -> (CALLinter [argwid] entry mem)
// Miscellaneous
(Convert <t> x mem) -> (MOVQconvert <t> x mem)
(IsNonNil p) -> (SETNE (TESTQ p p))
(Convert <t> x mem) && config.PtrSize == 8 -> (MOVQconvert <t> x mem)
(Convert <t> x mem) && config.PtrSize == 4 -> (MOVLconvert <t> x mem)
(IsNonNil p) && config.PtrSize == 8 -> (SETNE (TESTQ p p))
(IsNonNil p) && config.PtrSize == 4 -> (SETNE (TESTL p p))
(IsInBounds idx len) -> (SETB (CMPQ idx len))
(IsSliceInBounds idx len) -> (SETBE (CMPQ idx len))
(NilCheck ptr mem) -> (LoweredNilCheck ptr mem)
(GetG mem) -> (LoweredGetG mem)
(GetClosurePtr) -> (LoweredGetClosurePtr)
(Addr {sym} base) -> (LEAQ {sym} base)
(ITab (Load ptr mem)) -> (MOVQload ptr mem)
(Addr {sym} base) && config.PtrSize == 8 -> (LEAQ {sym} base)
(Addr {sym} base) && config.PtrSize == 4 -> (LEAL {sym} base)
// block rewrites
(If (SETL cmp) yes no) -> (LT cmp yes no)
@@ -495,6 +518,12 @@
(ANDLconst [c] (ANDLconst [d] x)) -> (ANDLconst [c & d] x)
(ANDQconst [c] (ANDQconst [d] x)) -> (ANDQconst [c & d] x)
(XORLconst [c] (XORLconst [d] x)) -> (XORLconst [c ^ d] x)
(XORQconst [c] (XORQconst [d] x)) -> (XORQconst [c ^ d] x)
(MULLconst [c] (MULLconst [d] x)) -> (MULLconst [int64(int32(c * d))] x)
(MULQconst [c] (MULQconst [d] x)) -> (MULQconst [c * d] x)
(ORQ x (MOVQconst [c])) && is32Bit(c) -> (ORQconst [c] x)
(ORQ (MOVQconst [c]) x) && is32Bit(c) -> (ORQconst [c] x)
(ORL x (MOVLconst [c])) -> (ORLconst [c] x)
@@ -544,6 +573,16 @@
(SHRL x (ANDLconst [31] y)) -> (SHRL x y)
(SHRQ x (ANDQconst [63] y)) -> (SHRQ x y)
(ROLQconst [c] (ROLQconst [d] x)) -> (ROLQconst [(c+d)&63] x)
(ROLLconst [c] (ROLLconst [d] x)) -> (ROLLconst [(c+d)&31] x)
(ROLWconst [c] (ROLWconst [d] x)) -> (ROLWconst [(c+d)&15] x)
(ROLBconst [c] (ROLBconst [d] x)) -> (ROLBconst [(c+d)& 7] x)
(ROLQconst [0] x) -> x
(ROLLconst [0] x) -> x
(ROLWconst [0] x) -> x
(ROLBconst [0] x) -> x
// Note: the word and byte shifts keep the low 5 bits (not the low 4 or 3 bits)
// because the x86 instructions are defined to use all 5 bits of the shift even
// for the small shifts. I don't think we'll ever generate a weird shift (e.g.
@@ -1564,3 +1603,53 @@
&& x.Uses == 1
&& clobber(x)
-> (MOVQstoreidx1 [i-4] {s} p (SHLQconst <idx.Type> [2] idx) w0 mem)
// amd64p32 rules
// same as the rules above, but with 32 instead of 64 bit pointer arithmetic.
// LEAQ,ADDQ -> LEAL,ADDL
(ADDLconst [c] (LEAL [d] {s} x)) && is32Bit(c+d) -> (LEAL [c+d] {s} x)
(LEAL [c] {s} (ADDLconst [d] x)) && is32Bit(c+d) -> (LEAL [c+d] {s} x)
(MOVQload [off1] {sym1} (LEAL [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
(MOVQload [off1+off2] {mergeSym(sym1,sym2)} base mem)
(MOVLload [off1] {sym1} (LEAL [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
(MOVLload [off1+off2] {mergeSym(sym1,sym2)} base mem)
(MOVWload [off1] {sym1} (LEAL [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
(MOVWload [off1+off2] {mergeSym(sym1,sym2)} base mem)
(MOVBload [off1] {sym1} (LEAL [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
(MOVBload [off1+off2] {mergeSym(sym1,sym2)} base mem)
(MOVQstore [off1] {sym1} (LEAL [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
(MOVQstore [off1+off2] {mergeSym(sym1,sym2)} base val mem)
(MOVLstore [off1] {sym1} (LEAL [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
(MOVLstore [off1+off2] {mergeSym(sym1,sym2)} base val mem)
(MOVWstore [off1] {sym1} (LEAL [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
(MOVWstore [off1+off2] {mergeSym(sym1,sym2)} base val mem)
(MOVBstore [off1] {sym1} (LEAL [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
(MOVBstore [off1+off2] {mergeSym(sym1,sym2)} base val mem)
(MOVQstoreconst [sc] {sym1} (LEAL [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
(MOVQstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
(MOVLstoreconst [sc] {sym1} (LEAL [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
(MOVLstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
(MOVWstoreconst [sc] {sym1} (LEAL [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
(MOVWstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
(MOVBstoreconst [sc] {sym1} (LEAL [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
(MOVBstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
(MOVQload [off1] {sym} (ADDLconst [off2] ptr) mem) && is32Bit(off1+off2) -> (MOVQload [off1+off2] {sym} ptr mem)
(MOVLload [off1] {sym} (ADDLconst [off2] ptr) mem) && is32Bit(off1+off2) -> (MOVLload [off1+off2] {sym} ptr mem)
(MOVWload [off1] {sym} (ADDLconst [off2] ptr) mem) && is32Bit(off1+off2) -> (MOVWload [off1+off2] {sym} ptr mem)
(MOVBload [off1] {sym} (ADDLconst [off2] ptr) mem) && is32Bit(off1+off2) -> (MOVBload [off1+off2] {sym} ptr mem)
(MOVQstore [off1] {sym} (ADDLconst [off2] ptr) val mem) && is32Bit(off1+off2) -> (MOVQstore [off1+off2] {sym} ptr val mem)
(MOVLstore [off1] {sym} (ADDLconst [off2] ptr) val mem) && is32Bit(off1+off2) -> (MOVLstore [off1+off2] {sym} ptr val mem)
(MOVWstore [off1] {sym} (ADDLconst [off2] ptr) val mem) && is32Bit(off1+off2) -> (MOVWstore [off1+off2] {sym} ptr val mem)
(MOVBstore [off1] {sym} (ADDLconst [off2] ptr) val mem) && is32Bit(off1+off2) -> (MOVBstore [off1+off2] {sym} ptr val mem)
(MOVQstoreconst [sc] {s} (ADDLconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
(MOVQstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
(MOVLstoreconst [sc] {s} (ADDLconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
(MOVLstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
(MOVWstoreconst [sc] {s} (ADDLconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
(MOVWstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
(MOVBstoreconst [sc] {s} (ADDLconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
(MOVBstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
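// Example instantiation of the folding above: with a nil symbol on the LEAL,
// (MOVLload [4] {s} (LEAL [8] base) mem) becomes (MOVLload [12] {s} base mem);
// canMergeSym holds only when at least one of the two symbols is nil.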

@@ -64,7 +64,6 @@ var regNamesAMD64 = []string{
// pseudo-registers
"SB",
"FLAGS",
}
func init() {
@@ -98,43 +97,36 @@ func init() {
fp = buildReg("X0 X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15")
gpsp = gp | buildReg("SP")
gpspsb = gpsp | buildReg("SB")
flags = buildReg("FLAGS")
callerSave = gp | fp | flags
callerSave = gp | fp
)
// Common slices of register masks
var (
gponly = []regMask{gp}
fponly = []regMask{fp}
flagsonly = []regMask{flags}
gponly = []regMask{gp}
fponly = []regMask{fp}
)
// Common regInfo
var (
gp01 = regInfo{inputs: []regMask{}, outputs: gponly}
gp11 = regInfo{inputs: []regMask{gp}, outputs: gponly, clobbers: flags}
gp11sp = regInfo{inputs: []regMask{gpsp}, outputs: gponly, clobbers: flags}
gp11nf = regInfo{inputs: []regMask{gpsp}, outputs: gponly} // nf: no flags clobbered
gp01 = regInfo{inputs: nil, outputs: gponly}
gp11 = regInfo{inputs: []regMask{gp}, outputs: gponly}
gp11sp = regInfo{inputs: []regMask{gpsp}, outputs: gponly}
gp11sb = regInfo{inputs: []regMask{gpspsb}, outputs: gponly}
gp21 = regInfo{inputs: []regMask{gp, gp}, outputs: gponly, clobbers: flags}
gp21sp = regInfo{inputs: []regMask{gpsp, gp}, outputs: gponly, clobbers: flags}
gp21 = regInfo{inputs: []regMask{gp, gp}, outputs: gponly}
gp21sp = regInfo{inputs: []regMask{gpsp, gp}, outputs: gponly}
gp21sb = regInfo{inputs: []regMask{gpspsb, gpsp}, outputs: gponly}
gp21shift = regInfo{inputs: []regMask{gp, cx}, outputs: []regMask{gp}, clobbers: flags}
gp11div = regInfo{inputs: []regMask{ax, gpsp &^ dx}, outputs: []regMask{ax},
clobbers: dx | flags}
gp11hmul = regInfo{inputs: []regMask{ax, gpsp}, outputs: []regMask{dx},
clobbers: ax | flags}
gp11mod = regInfo{inputs: []regMask{ax, gpsp &^ dx}, outputs: []regMask{dx},
clobbers: ax | flags}
gp21shift = regInfo{inputs: []regMask{gp, cx}, outputs: []regMask{gp}}
gp11div = regInfo{inputs: []regMask{ax, gpsp &^ dx}, outputs: []regMask{ax, dx}}
gp21hmul = regInfo{inputs: []regMask{ax, gpsp}, outputs: []regMask{dx}, clobbers: ax}
gp2flags = regInfo{inputs: []regMask{gpsp, gpsp}, outputs: flagsonly}
gp1flags = regInfo{inputs: []regMask{gpsp}, outputs: flagsonly}
flagsgp = regInfo{inputs: flagsonly, outputs: gponly}
gp2flags = regInfo{inputs: []regMask{gpsp, gpsp}}
gp1flags = regInfo{inputs: []regMask{gpsp}}
flagsgp = regInfo{inputs: nil, outputs: gponly}
// for CMOVconst -- uses AX to hold constant temporary.
gp1flagsgp = regInfo{inputs: []regMask{gp &^ ax, flags}, clobbers: ax | flags, outputs: []regMask{gp &^ ax}}
gp1flagsgp = regInfo{inputs: []regMask{gp &^ ax}, clobbers: ax, outputs: []regMask{gp &^ ax}}
readflags = regInfo{inputs: flagsonly, outputs: gponly}
flagsgpax = regInfo{inputs: flagsonly, clobbers: ax | flags, outputs: []regMask{gp &^ ax}}
readflags = regInfo{inputs: nil, outputs: gponly}
flagsgpax = regInfo{inputs: nil, clobbers: ax, outputs: []regMask{gp &^ ax}}
gpload = regInfo{inputs: []regMask{gpspsb, 0}, outputs: gponly}
gploadidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}, outputs: gponly}
@@ -144,14 +136,14 @@ func init() {
gpstoreidx = regInfo{inputs: []regMask{gpspsb, gpsp, gpsp, 0}}
gpstoreconstidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}}
fp01 = regInfo{inputs: []regMask{}, outputs: fponly}
fp01 = regInfo{inputs: nil, outputs: fponly}
fp21 = regInfo{inputs: []regMask{fp, fp}, outputs: fponly}
fp21x15 = regInfo{inputs: []regMask{fp &^ x15, fp &^ x15},
clobbers: x15, outputs: []regMask{fp &^ x15}}
fpgp = regInfo{inputs: fponly, outputs: gponly}
gpfp = regInfo{inputs: gponly, outputs: fponly}
fp11 = regInfo{inputs: fponly, outputs: fponly}
fp2flags = regInfo{inputs: []regMask{fp, fp}, outputs: flagsonly}
fp2flags = regInfo{inputs: []regMask{fp, fp}}
fpload = regInfo{inputs: []regMask{gpspsb, 0}, outputs: fponly}
fploadidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}, outputs: fponly}
@@ -188,60 +180,53 @@ func init() {
{name: "MOVSDstoreidx8", argLength: 4, reg: fpstoreidx, asm: "MOVSD", aux: "SymOff"}, // fp64 indexed by 8i store
// binary ops
{name: "ADDQ", argLength: 2, reg: gp21sp, asm: "ADDQ", commutative: true}, // arg0 + arg1
{name: "ADDL", argLength: 2, reg: gp21sp, asm: "ADDL", commutative: true}, // arg0 + arg1
{name: "ADDQconst", argLength: 1, reg: gp11sp, asm: "ADDQ", aux: "Int64", typ: "UInt64"}, // arg0 + auxint
{name: "ADDLconst", argLength: 1, reg: gp11sp, asm: "ADDL", aux: "Int32"}, // arg0 + auxint
{name: "ADDQ", argLength: 2, reg: gp21sp, asm: "ADDQ", commutative: true, clobberFlags: true}, // arg0 + arg1
{name: "ADDL", argLength: 2, reg: gp21sp, asm: "ADDL", commutative: true, clobberFlags: true}, // arg0 + arg1
{name: "ADDQconst", argLength: 1, reg: gp11sp, asm: "ADDQ", aux: "Int64", typ: "UInt64", clobberFlags: true}, // arg0 + auxint
{name: "ADDLconst", argLength: 1, reg: gp11sp, asm: "ADDL", aux: "Int32", clobberFlags: true}, // arg0 + auxint
{name: "SUBQ", argLength: 2, reg: gp21, asm: "SUBQ", resultInArg0: true}, // arg0 - arg1
{name: "SUBL", argLength: 2, reg: gp21, asm: "SUBL", resultInArg0: true}, // arg0 - arg1
{name: "SUBQconst", argLength: 1, reg: gp11, asm: "SUBQ", aux: "Int64", resultInArg0: true}, // arg0 - auxint
{name: "SUBLconst", argLength: 1, reg: gp11, asm: "SUBL", aux: "Int32", resultInArg0: true}, // arg0 - auxint
{name: "SUBQ", argLength: 2, reg: gp21, asm: "SUBQ", resultInArg0: true, clobberFlags: true}, // arg0 - arg1
{name: "SUBL", argLength: 2, reg: gp21, asm: "SUBL", resultInArg0: true, clobberFlags: true}, // arg0 - arg1
{name: "SUBQconst", argLength: 1, reg: gp11, asm: "SUBQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // arg0 - auxint
{name: "SUBLconst", argLength: 1, reg: gp11, asm: "SUBL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 - auxint
{name: "MULQ", argLength: 2, reg: gp21, asm: "IMULQ", commutative: true, resultInArg0: true}, // arg0 * arg1
{name: "MULL", argLength: 2, reg: gp21, asm: "IMULL", commutative: true, resultInArg0: true}, // arg0 * arg1
{name: "MULQconst", argLength: 1, reg: gp11, asm: "IMULQ", aux: "Int64", resultInArg0: true}, // arg0 * auxint
{name: "MULLconst", argLength: 1, reg: gp11, asm: "IMULL", aux: "Int32", resultInArg0: true}, // arg0 * auxint
{name: "MULQ", argLength: 2, reg: gp21, asm: "IMULQ", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 * arg1
{name: "MULL", argLength: 2, reg: gp21, asm: "IMULL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 * arg1
{name: "MULQconst", argLength: 1, reg: gp11, asm: "IMULQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // arg0 * auxint
{name: "MULLconst", argLength: 1, reg: gp11, asm: "IMULL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 * auxint
{name: "HMULQ", argLength: 2, reg: gp11hmul, asm: "IMULQ"}, // (arg0 * arg1) >> width
{name: "HMULL", argLength: 2, reg: gp11hmul, asm: "IMULL"}, // (arg0 * arg1) >> width
{name: "HMULW", argLength: 2, reg: gp11hmul, asm: "IMULW"}, // (arg0 * arg1) >> width
{name: "HMULB", argLength: 2, reg: gp11hmul, asm: "IMULB"}, // (arg0 * arg1) >> width
{name: "HMULQU", argLength: 2, reg: gp11hmul, asm: "MULQ"}, // (arg0 * arg1) >> width
{name: "HMULLU", argLength: 2, reg: gp11hmul, asm: "MULL"}, // (arg0 * arg1) >> width
{name: "HMULWU", argLength: 2, reg: gp11hmul, asm: "MULW"}, // (arg0 * arg1) >> width
{name: "HMULBU", argLength: 2, reg: gp11hmul, asm: "MULB"}, // (arg0 * arg1) >> width
{name: "HMULQ", argLength: 2, reg: gp21hmul, asm: "IMULQ", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULL", argLength: 2, reg: gp21hmul, asm: "IMULL", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULW", argLength: 2, reg: gp21hmul, asm: "IMULW", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULB", argLength: 2, reg: gp21hmul, asm: "IMULB", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULQU", argLength: 2, reg: gp21hmul, asm: "MULQ", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULLU", argLength: 2, reg: gp21hmul, asm: "MULL", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULWU", argLength: 2, reg: gp21hmul, asm: "MULW", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "HMULBU", argLength: 2, reg: gp21hmul, asm: "MULB", clobberFlags: true}, // (arg0 * arg1) >> width
{name: "AVGQU", argLength: 2, reg: gp21, commutative: true, resultInArg0: true}, // (arg0 + arg1) / 2 as unsigned, all 64 result bits
{name: "AVGQU", argLength: 2, reg: gp21, commutative: true, resultInArg0: true, clobberFlags: true}, // (arg0 + arg1) / 2 as unsigned, all 64 result bits
{name: "DIVQ", argLength: 2, reg: gp11div, asm: "IDIVQ"}, // arg0 / arg1
{name: "DIVL", argLength: 2, reg: gp11div, asm: "IDIVL"}, // arg0 / arg1
{name: "DIVW", argLength: 2, reg: gp11div, asm: "IDIVW"}, // arg0 / arg1
{name: "DIVQU", argLength: 2, reg: gp11div, asm: "DIVQ"}, // arg0 / arg1
{name: "DIVLU", argLength: 2, reg: gp11div, asm: "DIVL"}, // arg0 / arg1
{name: "DIVWU", argLength: 2, reg: gp11div, asm: "DIVW"}, // arg0 / arg1
{name: "DIVQ", argLength: 2, reg: gp11div, typ: "(Int64,Int64)", asm: "IDIVQ", clobberFlags: true}, // [arg0 / arg1, arg0 % arg1]
{name: "DIVL", argLength: 2, reg: gp11div, typ: "(Int32,Int32)", asm: "IDIVL", clobberFlags: true}, // [arg0 / arg1, arg0 % arg1]
{name: "DIVW", argLength: 2, reg: gp11div, typ: "(Int16,Int16)", asm: "IDIVW", clobberFlags: true}, // [arg0 / arg1, arg0 % arg1]
{name: "DIVQU", argLength: 2, reg: gp11div, typ: "(UInt64,UInt64)", asm: "DIVQ", clobberFlags: true}, // [arg0 / arg1, arg0 % arg1]
{name: "DIVLU", argLength: 2, reg: gp11div, typ: "(UInt32,UInt32)", asm: "DIVL", clobberFlags: true}, // [arg0 / arg1, arg0 % arg1]
{name: "DIVWU", argLength: 2, reg: gp11div, typ: "(UInt16,UInt16)", asm: "DIVW", clobberFlags: true}, // [arg0 / arg1, arg0 % arg1]
{name: "MODQ", argLength: 2, reg: gp11mod, asm: "IDIVQ"}, // arg0 % arg1
{name: "MODL", argLength: 2, reg: gp11mod, asm: "IDIVL"}, // arg0 % arg1
{name: "MODW", argLength: 2, reg: gp11mod, asm: "IDIVW"}, // arg0 % arg1
{name: "MODQU", argLength: 2, reg: gp11mod, asm: "DIVQ"}, // arg0 % arg1
{name: "MODLU", argLength: 2, reg: gp11mod, asm: "DIVL"}, // arg0 % arg1
{name: "MODWU", argLength: 2, reg: gp11mod, asm: "DIVW"}, // arg0 % arg1
{name: "ANDQ", argLength: 2, reg: gp21, asm: "ANDQ", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 & arg1
{name: "ANDL", argLength: 2, reg: gp21, asm: "ANDL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 & arg1
{name: "ANDQconst", argLength: 1, reg: gp11, asm: "ANDQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // arg0 & auxint
{name: "ANDLconst", argLength: 1, reg: gp11, asm: "ANDL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 & auxint
{name: "ANDQ", argLength: 2, reg: gp21, asm: "ANDQ", commutative: true, resultInArg0: true}, // arg0 & arg1
{name: "ANDL", argLength: 2, reg: gp21, asm: "ANDL", commutative: true, resultInArg0: true}, // arg0 & arg1
{name: "ANDQconst", argLength: 1, reg: gp11, asm: "ANDQ", aux: "Int64", resultInArg0: true}, // arg0 & auxint
{name: "ANDLconst", argLength: 1, reg: gp11, asm: "ANDL", aux: "Int32", resultInArg0: true}, // arg0 & auxint
{name: "ORQ", argLength: 2, reg: gp21, asm: "ORQ", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 | arg1
{name: "ORL", argLength: 2, reg: gp21, asm: "ORL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 | arg1
{name: "ORQconst", argLength: 1, reg: gp11, asm: "ORQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // arg0 | auxint
{name: "ORLconst", argLength: 1, reg: gp11, asm: "ORL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 | auxint
{name: "ORQ", argLength: 2, reg: gp21, asm: "ORQ", commutative: true, resultInArg0: true}, // arg0 | arg1
{name: "ORL", argLength: 2, reg: gp21, asm: "ORL", commutative: true, resultInArg0: true}, // arg0 | arg1
{name: "ORQconst", argLength: 1, reg: gp11, asm: "ORQ", aux: "Int64", resultInArg0: true}, // arg0 | auxint
{name: "ORLconst", argLength: 1, reg: gp11, asm: "ORL", aux: "Int32", resultInArg0: true}, // arg0 | auxint
{name: "XORQ", argLength: 2, reg: gp21, asm: "XORQ", commutative: true, resultInArg0: true}, // arg0 ^ arg1
{name: "XORL", argLength: 2, reg: gp21, asm: "XORL", commutative: true, resultInArg0: true}, // arg0 ^ arg1
{name: "XORQconst", argLength: 1, reg: gp11, asm: "XORQ", aux: "Int64", resultInArg0: true}, // arg0 ^ auxint
{name: "XORLconst", argLength: 1, reg: gp11, asm: "XORL", aux: "Int32", resultInArg0: true}, // arg0 ^ auxint
{name: "XORQ", argLength: 2, reg: gp21, asm: "XORQ", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 ^ arg1
{name: "XORL", argLength: 2, reg: gp21, asm: "XORL", commutative: true, resultInArg0: true, clobberFlags: true}, // arg0 ^ arg1
{name: "XORQconst", argLength: 1, reg: gp11, asm: "XORQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // arg0 ^ auxint
{name: "XORLconst", argLength: 1, reg: gp11, asm: "XORL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 ^ auxint
{name: "CMPQ", argLength: 2, reg: gp2flags, asm: "CMPQ", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPL", argLength: 2, reg: gp2flags, asm: "CMPL", typ: "Flags"}, // arg0 compare to arg1
@@ -264,60 +249,60 @@ func init() {
{name: "TESTWconst", argLength: 1, reg: gp1flags, asm: "TESTW", typ: "Flags", aux: "Int16"}, // (arg0 & auxint) compare to 0
{name: "TESTBconst", argLength: 1, reg: gp1flags, asm: "TESTB", typ: "Flags", aux: "Int8"}, // (arg0 & auxint) compare to 0
{name: "SHLQ", argLength: 2, reg: gp21shift, asm: "SHLQ", resultInArg0: true}, // arg0 << arg1, shift amount is mod 64
{name: "SHLL", argLength: 2, reg: gp21shift, asm: "SHLL", resultInArg0: true}, // arg0 << arg1, shift amount is mod 32
{name: "SHLQconst", argLength: 1, reg: gp11, asm: "SHLQ", aux: "Int64", resultInArg0: true}, // arg0 << auxint, shift amount 0-63
{name: "SHLLconst", argLength: 1, reg: gp11, asm: "SHLL", aux: "Int32", resultInArg0: true}, // arg0 << auxint, shift amount 0-31
{name: "SHLQ", argLength: 2, reg: gp21shift, asm: "SHLQ", resultInArg0: true, clobberFlags: true}, // arg0 << arg1, shift amount is mod 64
{name: "SHLL", argLength: 2, reg: gp21shift, asm: "SHLL", resultInArg0: true, clobberFlags: true}, // arg0 << arg1, shift amount is mod 32
{name: "SHLQconst", argLength: 1, reg: gp11, asm: "SHLQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // arg0 << auxint, shift amount 0-63
{name: "SHLLconst", argLength: 1, reg: gp11, asm: "SHLL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 << auxint, shift amount 0-31
// Note: x86 is weird, the 16 and 8 byte shifts still use all 5 bits of shift amount!
{name: "SHRQ", argLength: 2, reg: gp21shift, asm: "SHRQ", resultInArg0: true}, // unsigned arg0 >> arg1, shift amount is mod 64
{name: "SHRL", argLength: 2, reg: gp21shift, asm: "SHRL", resultInArg0: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRW", argLength: 2, reg: gp21shift, asm: "SHRW", resultInArg0: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRB", argLength: 2, reg: gp21shift, asm: "SHRB", resultInArg0: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRQconst", argLength: 1, reg: gp11, asm: "SHRQ", aux: "Int64", resultInArg0: true}, // unsigned arg0 >> auxint, shift amount 0-63
{name: "SHRLconst", argLength: 1, reg: gp11, asm: "SHRL", aux: "Int32", resultInArg0: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SHRWconst", argLength: 1, reg: gp11, asm: "SHRW", aux: "Int16", resultInArg0: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SHRBconst", argLength: 1, reg: gp11, asm: "SHRB", aux: "Int8", resultInArg0: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SHRQ", argLength: 2, reg: gp21shift, asm: "SHRQ", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> arg1, shift amount is mod 64
{name: "SHRL", argLength: 2, reg: gp21shift, asm: "SHRL", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRW", argLength: 2, reg: gp21shift, asm: "SHRW", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRB", argLength: 2, reg: gp21shift, asm: "SHRB", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> arg1, shift amount is mod 32
{name: "SHRQconst", argLength: 1, reg: gp11, asm: "SHRQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> auxint, shift amount 0-63
{name: "SHRLconst", argLength: 1, reg: gp11, asm: "SHRL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SHRWconst", argLength: 1, reg: gp11, asm: "SHRW", aux: "Int16", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SHRBconst", argLength: 1, reg: gp11, asm: "SHRB", aux: "Int8", resultInArg0: true, clobberFlags: true}, // unsigned arg0 >> auxint, shift amount 0-31
{name: "SARQ", argLength: 2, reg: gp21shift, asm: "SARQ", resultInArg0: true}, // signed arg0 >> arg1, shift amount is mod 64
{name: "SARL", argLength: 2, reg: gp21shift, asm: "SARL", resultInArg0: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARW", argLength: 2, reg: gp21shift, asm: "SARW", resultInArg0: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARB", argLength: 2, reg: gp21shift, asm: "SARB", resultInArg0: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARQconst", argLength: 1, reg: gp11, asm: "SARQ", aux: "Int64", resultInArg0: true}, // signed arg0 >> auxint, shift amount 0-63
{name: "SARLconst", argLength: 1, reg: gp11, asm: "SARL", aux: "Int32", resultInArg0: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "SARWconst", argLength: 1, reg: gp11, asm: "SARW", aux: "Int16", resultInArg0: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "SARBconst", argLength: 1, reg: gp11, asm: "SARB", aux: "Int8", resultInArg0: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "SARQ", argLength: 2, reg: gp21shift, asm: "SARQ", resultInArg0: true, clobberFlags: true}, // signed arg0 >> arg1, shift amount is mod 64
{name: "SARL", argLength: 2, reg: gp21shift, asm: "SARL", resultInArg0: true, clobberFlags: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARW", argLength: 2, reg: gp21shift, asm: "SARW", resultInArg0: true, clobberFlags: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARB", argLength: 2, reg: gp21shift, asm: "SARB", resultInArg0: true, clobberFlags: true}, // signed arg0 >> arg1, shift amount is mod 32
{name: "SARQconst", argLength: 1, reg: gp11, asm: "SARQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // signed arg0 >> auxint, shift amount 0-63
{name: "SARLconst", argLength: 1, reg: gp11, asm: "SARL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "SARWconst", argLength: 1, reg: gp11, asm: "SARW", aux: "Int16", resultInArg0: true, clobberFlags: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "SARBconst", argLength: 1, reg: gp11, asm: "SARB", aux: "Int8", resultInArg0: true, clobberFlags: true}, // signed arg0 >> auxint, shift amount 0-31
{name: "ROLQconst", argLength: 1, reg: gp11, asm: "ROLQ", aux: "Int64", resultInArg0: true}, // arg0 rotate left auxint, rotate amount 0-63
{name: "ROLLconst", argLength: 1, reg: gp11, asm: "ROLL", aux: "Int32", resultInArg0: true}, // arg0 rotate left auxint, rotate amount 0-31
{name: "ROLWconst", argLength: 1, reg: gp11, asm: "ROLW", aux: "Int16", resultInArg0: true}, // arg0 rotate left auxint, rotate amount 0-15
{name: "ROLBconst", argLength: 1, reg: gp11, asm: "ROLB", aux: "Int8", resultInArg0: true}, // arg0 rotate left auxint, rotate amount 0-7
{name: "ROLQconst", argLength: 1, reg: gp11, asm: "ROLQ", aux: "Int64", resultInArg0: true, clobberFlags: true}, // arg0 rotate left auxint, rotate amount 0-63
{name: "ROLLconst", argLength: 1, reg: gp11, asm: "ROLL", aux: "Int32", resultInArg0: true, clobberFlags: true}, // arg0 rotate left auxint, rotate amount 0-31
{name: "ROLWconst", argLength: 1, reg: gp11, asm: "ROLW", aux: "Int16", resultInArg0: true, clobberFlags: true}, // arg0 rotate left auxint, rotate amount 0-15
{name: "ROLBconst", argLength: 1, reg: gp11, asm: "ROLB", aux: "Int8", resultInArg0: true, clobberFlags: true}, // arg0 rotate left auxint, rotate amount 0-7
// unary ops
{name: "NEGQ", argLength: 1, reg: gp11, asm: "NEGQ", resultInArg0: true}, // -arg0
{name: "NEGL", argLength: 1, reg: gp11, asm: "NEGL", resultInArg0: true}, // -arg0
{name: "NEGQ", argLength: 1, reg: gp11, asm: "NEGQ", resultInArg0: true, clobberFlags: true}, // -arg0
{name: "NEGL", argLength: 1, reg: gp11, asm: "NEGL", resultInArg0: true, clobberFlags: true}, // -arg0
{name: "NOTQ", argLength: 1, reg: gp11, asm: "NOTQ", resultInArg0: true}, // ^arg0
{name: "NOTL", argLength: 1, reg: gp11, asm: "NOTL", resultInArg0: true}, // ^arg0
{name: "NOTQ", argLength: 1, reg: gp11, asm: "NOTQ", resultInArg0: true, clobberFlags: true}, // ^arg0
{name: "NOTL", argLength: 1, reg: gp11, asm: "NOTL", resultInArg0: true, clobberFlags: true}, // ^arg0
{name: "BSFQ", argLength: 1, reg: gp11, asm: "BSFQ"}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSFL", argLength: 1, reg: gp11, asm: "BSFL"}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSFW", argLength: 1, reg: gp11, asm: "BSFW"}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSFQ", argLength: 1, reg: gp11, asm: "BSFQ", clobberFlags: true}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSFL", argLength: 1, reg: gp11, asm: "BSFL", clobberFlags: true}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSFW", argLength: 1, reg: gp11, asm: "BSFW", clobberFlags: true}, // arg0 # of low-order zeroes ; undef if zero
{name: "BSRQ", argLength: 1, reg: gp11, asm: "BSRQ"}, // arg0 # of high-order zeroes ; undef if zero
{name: "BSRL", argLength: 1, reg: gp11, asm: "BSRL"}, // arg0 # of high-order zeroes ; undef if zero
{name: "BSRW", argLength: 1, reg: gp11, asm: "BSRW"}, // arg0 # of high-order zeroes ; undef if zero
{name: "BSRQ", argLength: 1, reg: gp11, asm: "BSRQ", clobberFlags: true}, // arg0 # of high-order zeroes ; undef if zero
{name: "BSRL", argLength: 1, reg: gp11, asm: "BSRL", clobberFlags: true}, // arg0 # of high-order zeroes ; undef if zero
{name: "BSRW", argLength: 1, reg: gp11, asm: "BSRW", clobberFlags: true}, // arg0 # of high-order zeroes ; undef if zero
// Note ASM for ops moves whole register
{name: "CMOVQEQconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVQEQ", typ: "UInt64", aux: "Int64", resultInArg0: true}, // replace arg0 w/ constant if Z set
{name: "CMOVLEQconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLEQ", typ: "UInt32", aux: "Int32", resultInArg0: true}, // replace arg0 w/ constant if Z set
{name: "CMOVWEQconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLEQ", typ: "UInt16", aux: "Int16", resultInArg0: true}, // replace arg0 w/ constant if Z set
{name: "CMOVQNEconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVQNE", typ: "UInt64", aux: "Int64", resultInArg0: true}, // replace arg0 w/ constant if Z not set
{name: "CMOVLNEconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLNE", typ: "UInt32", aux: "Int32", resultInArg0: true}, // replace arg0 w/ constant if Z not set
{name: "CMOVWNEconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLNE", typ: "UInt16", aux: "Int16", resultInArg0: true}, // replace arg0 w/ constant if Z not set
{name: "CMOVQEQconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVQEQ", typ: "UInt64", aux: "Int64", resultInArg0: true, clobberFlags: true}, // replace arg0 w/ constant if Z set
{name: "CMOVLEQconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLEQ", typ: "UInt32", aux: "Int32", resultInArg0: true, clobberFlags: true}, // replace arg0 w/ constant if Z set
{name: "CMOVWEQconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLEQ", typ: "UInt16", aux: "Int16", resultInArg0: true, clobberFlags: true}, // replace arg0 w/ constant if Z set
{name: "CMOVQNEconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVQNE", typ: "UInt64", aux: "Int64", resultInArg0: true, clobberFlags: true}, // replace arg0 w/ constant if Z not set
{name: "CMOVLNEconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLNE", typ: "UInt32", aux: "Int32", resultInArg0: true, clobberFlags: true}, // replace arg0 w/ constant if Z not set
{name: "CMOVWNEconst", argLength: 2, reg: gp1flagsgp, asm: "CMOVLNE", typ: "UInt16", aux: "Int16", resultInArg0: true, clobberFlags: true}, // replace arg0 w/ constant if Z not set
{name: "BSWAPQ", argLength: 1, reg: gp11, asm: "BSWAPQ", resultInArg0: true}, // arg0 swap bytes
{name: "BSWAPL", argLength: 1, reg: gp11, asm: "BSWAPL", resultInArg0: true}, // arg0 swap bytes
{name: "BSWAPQ", argLength: 1, reg: gp11, asm: "BSWAPQ", resultInArg0: true, clobberFlags: true}, // arg0 swap bytes
{name: "BSWAPL", argLength: 1, reg: gp11, asm: "BSWAPL", resultInArg0: true, clobberFlags: true}, // arg0 swap bytes
{name: "SQRTSD", argLength: 1, reg: fp11, asm: "SQRTSD"}, // sqrt(arg0)
@@ -338,20 +323,20 @@ func init() {
// Need different opcodes for floating point conditions because
// any comparison involving a NaN is always FALSE and thus
// the patterns for inverting conditions cannot be used.
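// For example, with a NaN operand both x > y and x <= y are false, so
// "greater" cannot be derived by inverting "less than or equal".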
{name: "SETEQF", argLength: 1, reg: flagsgpax, asm: "SETEQ"}, // extract == condition from arg0
{name: "SETNEF", argLength: 1, reg: flagsgpax, asm: "SETNE"}, // extract != condition from arg0
{name: "SETORD", argLength: 1, reg: flagsgp, asm: "SETPC"}, // extract "ordered" (No Nan present) condition from arg0
{name: "SETNAN", argLength: 1, reg: flagsgp, asm: "SETPS"}, // extract "unordered" (Nan present) condition from arg0
{name: "SETEQF", argLength: 1, reg: flagsgpax, asm: "SETEQ", clobberFlags: true}, // extract == condition from arg0
{name: "SETNEF", argLength: 1, reg: flagsgpax, asm: "SETNE", clobberFlags: true}, // extract != condition from arg0
{name: "SETORD", argLength: 1, reg: flagsgp, asm: "SETPC"}, // extract "ordered" (No Nan present) condition from arg0
{name: "SETNAN", argLength: 1, reg: flagsgp, asm: "SETPS"}, // extract "unordered" (Nan present) condition from arg0
{name: "SETGF", argLength: 1, reg: flagsgp, asm: "SETHI"}, // extract floating > condition from arg0
{name: "SETGEF", argLength: 1, reg: flagsgp, asm: "SETCC"}, // extract floating >= condition from arg0
{name: "MOVBQSX", argLength: 1, reg: gp11nf, asm: "MOVBQSX"}, // sign extend arg0 from int8 to int64
{name: "MOVBQZX", argLength: 1, reg: gp11nf, asm: "MOVBQZX"}, // zero extend arg0 from int8 to int64
{name: "MOVWQSX", argLength: 1, reg: gp11nf, asm: "MOVWQSX"}, // sign extend arg0 from int16 to int64
{name: "MOVWQZX", argLength: 1, reg: gp11nf, asm: "MOVWQZX"}, // zero extend arg0 from int16 to int64
{name: "MOVLQSX", argLength: 1, reg: gp11nf, asm: "MOVLQSX"}, // sign extend arg0 from int32 to int64
{name: "MOVLQZX", argLength: 1, reg: gp11nf, asm: "MOVLQZX"}, // zero extend arg0 from int32 to int64
{name: "MOVBQSX", argLength: 1, reg: gp11, asm: "MOVBQSX"}, // sign extend arg0 from int8 to int64
{name: "MOVBQZX", argLength: 1, reg: gp11, asm: "MOVBQZX"}, // zero extend arg0 from int8 to int64
{name: "MOVWQSX", argLength: 1, reg: gp11, asm: "MOVWQSX"}, // sign extend arg0 from int16 to int64
{name: "MOVWQZX", argLength: 1, reg: gp11, asm: "MOVWQZX"}, // zero extend arg0 from int16 to int64
{name: "MOVLQSX", argLength: 1, reg: gp11, asm: "MOVLQSX"}, // sign extend arg0 from int32 to int64
{name: "MOVLQZX", argLength: 1, reg: gp11, asm: "MOVLQZX"}, // zero extend arg0 from int32 to int64
{name: "MOVLconst", reg: gp01, asm: "MOVL", typ: "UInt32", aux: "Int32", rematerializeable: true}, // 32 low bits of auxint
{name: "MOVQconst", reg: gp01, asm: "MOVQ", typ: "UInt64", aux: "Int64", rematerializeable: true}, // auxint
@@ -369,13 +354,15 @@ func init() {
{name: "PXOR", argLength: 2, reg: fp21, asm: "PXOR", commutative: true, resultInArg0: true}, // exclusive or, applied to X regs for float negation.
{name: "LEAQ", argLength: 1, reg: gp11sb, aux: "SymOff", rematerializeable: true}, // arg0 + auxint + offset encoded in aux
{name: "LEAQ1", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + arg1 + auxint + aux
{name: "LEAQ2", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 2*arg1 + auxint + aux
{name: "LEAQ4", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 4*arg1 + auxint + aux
{name: "LEAQ8", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 8*arg1 + auxint + aux
{name: "LEAQ", argLength: 1, reg: gp11sb, asm: "LEAQ", aux: "SymOff", rematerializeable: true}, // arg0 + auxint + offset encoded in aux
{name: "LEAQ1", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + arg1 + auxint + aux
{name: "LEAQ2", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 2*arg1 + auxint + aux
{name: "LEAQ4", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 4*arg1 + auxint + aux
{name: "LEAQ8", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 8*arg1 + auxint + aux
// Note: LEAQ{1,2,4,8} must not have OpSB as either argument.
{name: "LEAL", argLength: 1, reg: gp11sb, asm: "LEAL", aux: "SymOff", rematerializeable: true}, // arg0 + auxint + offset encoded in aux
// auxint+aux == add auxint and the offset of the symbol in aux (if any) to the effective address
{name: "MOVBload", argLength: 2, reg: gpload, asm: "MOVBLZX", aux: "SymOff", typ: "UInt8"}, // load byte from arg0+auxint+aux. arg1=mem. Zero extend.
{name: "MOVBQSXload", argLength: 2, reg: gpload, asm: "MOVBQSX", aux: "SymOff"}, // ditto, sign extend to int64
@@ -425,10 +412,10 @@ func init() {
{name: "MOVQstoreconstidx1", argLength: 3, reg: gpstoreconstidx, asm: "MOVQ", aux: "SymValAndOff", typ: "Mem"}, // store 8 bytes of ... arg1 ...
{name: "MOVQstoreconstidx8", argLength: 3, reg: gpstoreconstidx, asm: "MOVQ", aux: "SymValAndOff", typ: "Mem"}, // store 8 bytes of ... 8*arg1 ...
// arg0 = (duff-adjusted) pointer to start of memory to zero
// arg0 = pointer to start of memory to zero
// arg1 = value to store (will always be zero)
// arg2 = mem
// auxint = offset into duffzero code to start executing
// auxint = # of bytes to zero
// returns mem
{
name: "DUFFZERO",
@@ -436,8 +423,9 @@ func init() {
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("DI"), buildReg("X0")},
clobbers: buildReg("DI FLAGS"),
clobbers: buildReg("DI"),
},
clobberFlags: true,
},
{name: "MOVOconst", reg: regInfo{nil, 0, []regMask{fp}}, typ: "Int128", aux: "Int128", rematerializeable: true},
@@ -455,11 +443,11 @@ func init() {
},
},
{name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff"}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
{name: "CALLclosure", argLength: 3, reg: regInfo{[]regMask{gpsp, buildReg("DX"), 0}, callerSave, nil}, aux: "Int64"}, // call function via closure. arg0=codeptr, arg1=closure, arg2=mem, auxint=argsize, returns mem
{name: "CALLdefer", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64"}, // call deferproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLgo", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64"}, // call newproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLinter", argLength: 2, reg: regInfo{inputs: []regMask{gp}, clobbers: callerSave}, aux: "Int64"}, // call fn by pointer. arg0=codeptr, arg1=mem, auxint=argsize, returns mem
{name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff", clobberFlags: true}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
{name: "CALLclosure", argLength: 3, reg: regInfo{inputs: []regMask{gpsp, buildReg("DX"), 0}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call function via closure. arg0=codeptr, arg1=closure, arg2=mem, auxint=argsize, returns mem
{name: "CALLdefer", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call deferproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLgo", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call newproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLinter", argLength: 2, reg: regInfo{inputs: []regMask{gp}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call fn by pointer. arg0=codeptr, arg1=mem, auxint=argsize, returns mem
// arg0 = destination pointer
// arg1 = source pointer
@@ -472,8 +460,9 @@ func init() {
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("DI"), buildReg("SI")},
clobbers: buildReg("DI SI X0 FLAGS"), // uses X0 as a temporary
clobbers: buildReg("DI SI X0"), // uses X0 as a temporary
},
clobberFlags: true,
},
// arg0 = destination pointer
@@ -504,14 +493,15 @@ func init() {
// use of DX (the closure pointer)
{name: "LoweredGetClosurePtr", reg: regInfo{outputs: []regMask{buildReg("DX")}}},
//arg0=ptr,arg1=mem, returns void. Faults if ptr is nil.
{name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gpsp}, clobbers: flags}},
{name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gpsp}}, clobberFlags: true},
// MOVQconvert converts between pointers and integers.
// We have a special op for this so as to not confuse GC
// (particularly stack maps). It takes a memory arg so it
// gets correctly ordered with respect to GC safepoints.
// arg0=ptr/int arg1=mem, output=int/ptr
{name: "MOVQconvert", argLength: 2, reg: gp11nf, asm: "MOVQ"},
{name: "MOVQconvert", argLength: 2, reg: gp11, asm: "MOVQ"},
{name: "MOVLconvert", argLength: 2, reg: gp11, asm: "MOVL"}, // amd64p32 equivalent
// Constant flag values. For any comparison, there are 5 possible
// outcomes: the three from the signed total order (<,==,>) and the
@@ -545,11 +535,14 @@ func init() {
}
archs = append(archs, arch{
name: "AMD64",
pkg: "cmd/internal/obj/x86",
genfile: "../../amd64/ssa.go",
ops: AMD64ops,
blocks: AMD64blocks,
regnames: regNamesAMD64,
name: "AMD64",
pkg: "cmd/internal/obj/x86",
genfile: "../../amd64/ssa.go",
ops: AMD64ops,
blocks: AMD64blocks,
regnames: regNamesAMD64,
gpregmask: gp,
fpregmask: fp,
framepointerreg: int8(num["BP"]),
})
}

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,455 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import "strings"
// Notes:
// - Integer types live in the low portion of registers. Upper portions are junk.
// - Boolean types use the low-order byte of a register. 0=false, 1=true.
// Upper bytes are junk.
// - *const instructions may use a constant larger than the instruction can encode.
// In this case the assembler expands to multiple instructions and uses tmp
// register (R27).
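// For example, an ADD whose immediate does not fit the instruction's
// immediate field may be assembled as a constant load into R27 followed by
// a register-register ADD.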
// Suffixes encode the bit width of various instructions.
// D (double word) = 64 bit
// W (word) = 32 bit
// H (half word) = 16 bit
// HU = 16 bit unsigned
// B (byte) = 8 bit
// BU = 8 bit unsigned
// S (single) = 32 bit float
// D (double) = 64 bit float
// Note: registers not used in regalloc are not included in this list,
// so that regmask stays within int64
// Be careful when hand coding regmasks.
var regNamesARM64 = []string{
"R0",
"R1",
"R2",
"R3",
"R4",
"R5",
"R6",
"R7",
"R8",
"R9",
"R10",
"R11",
"R12",
"R13",
"R14",
"R15",
"R16",
"R17",
"R18", // platform register, not used
"R19",
"R20",
"R21",
"R22",
"R23",
"R24",
"R25",
"R26",
// R27 = REGTMP not used in regalloc
"g", // aka R28
"R29", // frame pointer, not used
// R30 = REGLINK not used in regalloc
"SP", // aka R31
"F0",
"F1",
"F2",
"F3",
"F4",
"F5",
"F6",
"F7",
"F8",
"F9",
"F10",
"F11",
"F12",
"F13",
"F14",
"F15",
"F16",
"F17",
"F18",
"F19",
"F20",
"F21",
"F22",
"F23",
"F24",
"F25",
"F26",
"F27",
"F28", // 0.0
"F29", // 0.5
"F30", // 1.0
"F31", // 2.0
// pseudo-registers
"SB",
}
func init() {
// Make map from reg names to reg integers.
if len(regNamesARM64) > 64 {
panic("too many registers")
}
num := map[string]int{}
for i, name := range regNamesARM64 {
num[name] = i
}
buildReg := func(s string) regMask {
m := regMask(0)
for _, r := range strings.Split(s, " ") {
if n, ok := num[r]; ok {
m |= regMask(1) << uint(n)
continue
}
panic("register " + r + " not found")
}
return m
}
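// For example, buildReg("R0 R1") yields a mask with bits 0 and 1 set, and
// the common masks below are built by OR-ing such calls together.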
// Common individual register masks
var (
gp = buildReg("R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11 R12 R13 R14 R15 R16 R17 R19 R20 R21 R22 R23 R24 R25 R26")
gpg = gp | buildReg("g")
gpsp = gp | buildReg("SP")
gpspg = gpg | buildReg("SP")
gpspsbg = gpspg | buildReg("SB")
fp = buildReg("F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16 F17 F18 F19 F20 F21 F22 F23 F24 F25 F26 F27")
callerSave = gp | fp | buildReg("g") // runtime.setg (and anything calling it) may clobber g
)
// Common regInfo
var (
gp01 = regInfo{inputs: nil, outputs: []regMask{gp}}
gp11 = regInfo{inputs: []regMask{gpg}, outputs: []regMask{gp}}
gp11sp = regInfo{inputs: []regMask{gpspg}, outputs: []regMask{gp}}
gp1flags = regInfo{inputs: []regMask{gpg}}
gp1flags1 = regInfo{inputs: []regMask{gpg}, outputs: []regMask{gp}}
gp21 = regInfo{inputs: []regMask{gpg, gpg}, outputs: []regMask{gp}}
gp2flags = regInfo{inputs: []regMask{gpg, gpg}}
gp2flags1 = regInfo{inputs: []regMask{gp, gp}, outputs: []regMask{gp}}
//gp22 = regInfo{inputs: []regMask{gpg, gpg}, outputs: []regMask{gp, gp}}
//gp31 = regInfo{inputs: []regMask{gp, gp, gp}, outputs: []regMask{gp}}
//gp3flags = regInfo{inputs: []regMask{gp, gp, gp}}
//gp3flags1 = regInfo{inputs: []regMask{gp, gp, gp}, outputs: []regMask{gp}}
gpload = regInfo{inputs: []regMask{gpspsbg}, outputs: []regMask{gp}}
gpstore = regInfo{inputs: []regMask{gpspsbg, gpg}}
gpstore0 = regInfo{inputs: []regMask{gpspsbg}}
//gp2load = regInfo{inputs: []regMask{gpspsbg, gpg}, outputs: []regMask{gp}}
//gp2store = regInfo{inputs: []regMask{gpspsbg, gpg, gpg}}
fp01 = regInfo{inputs: nil, outputs: []regMask{fp}}
fp11 = regInfo{inputs: []regMask{fp}, outputs: []regMask{fp}}
//fp1flags = regInfo{inputs: []regMask{fp}}
fpgp = regInfo{inputs: []regMask{fp}, outputs: []regMask{gp}}
gpfp = regInfo{inputs: []regMask{gp}, outputs: []regMask{fp}}
fp21 = regInfo{inputs: []regMask{fp, fp}, outputs: []regMask{fp}}
fp2flags = regInfo{inputs: []regMask{fp, fp}}
fpload = regInfo{inputs: []regMask{gpspsbg}, outputs: []regMask{fp}}
fpstore = regInfo{inputs: []regMask{gpspsbg, fp}}
readflags = regInfo{inputs: nil, outputs: []regMask{gp}}
)
ops := []opData{
// binary ops
{name: "ADD", argLength: 2, reg: gp21, asm: "ADD", commutative: true}, // arg0 + arg1
{name: "ADDconst", argLength: 1, reg: gp11sp, asm: "ADD", aux: "Int64"}, // arg0 + auxInt
{name: "SUB", argLength: 2, reg: gp21, asm: "SUB"}, // arg0 - arg1
{name: "SUBconst", argLength: 1, reg: gp11, asm: "SUB", aux: "Int64"}, // arg0 - auxInt
{name: "MUL", argLength: 2, reg: gp21, asm: "MUL", commutative: true}, // arg0 * arg1
{name: "MULW", argLength: 2, reg: gp21, asm: "MULW", commutative: true}, // arg0 * arg1, 32-bit
{name: "MULH", argLength: 2, reg: gp21, asm: "SMULH", commutative: true}, // (arg0 * arg1) >> 64, signed
{name: "UMULH", argLength: 2, reg: gp21, asm: "UMULH", commutative: true}, // (arg0 * arg1) >> 64, unsigned
{name: "MULL", argLength: 2, reg: gp21, asm: "SMULL", commutative: true}, // arg0 * arg1, signed, 32-bit mult results in 64-bit
{name: "UMULL", argLength: 2, reg: gp21, asm: "UMULL", commutative: true}, // arg0 * arg1, unsigned, 32-bit mult results in 64-bit
{name: "DIV", argLength: 2, reg: gp21, asm: "SDIV"}, // arg0 / arg1, signed
{name: "UDIV", argLength: 2, reg: gp21, asm: "UDIV"}, // arg0 / arg1, unsighed
{name: "DIVW", argLength: 2, reg: gp21, asm: "SDIVW"}, // arg0 / arg1, signed, 32 bit
{name: "UDIVW", argLength: 2, reg: gp21, asm: "UDIVW"}, // arg0 / arg1, unsighed, 32 bit
{name: "MOD", argLength: 2, reg: gp21, asm: "REM"}, // arg0 % arg1, signed
{name: "UMOD", argLength: 2, reg: gp21, asm: "UREM"}, // arg0 % arg1, unsigned
{name: "MODW", argLength: 2, reg: gp21, asm: "REMW"}, // arg0 % arg1, signed, 32 bit
{name: "UMODW", argLength: 2, reg: gp21, asm: "UREMW"}, // arg0 % arg1, unsigned, 32 bit
{name: "FADDS", argLength: 2, reg: fp21, asm: "FADDS", commutative: true}, // arg0 + arg1
{name: "FADDD", argLength: 2, reg: fp21, asm: "FADDD", commutative: true}, // arg0 + arg1
{name: "FSUBS", argLength: 2, reg: fp21, asm: "FSUBS"}, // arg0 - arg1
{name: "FSUBD", argLength: 2, reg: fp21, asm: "FSUBD"}, // arg0 - arg1
{name: "FMULS", argLength: 2, reg: fp21, asm: "FMULS", commutative: true}, // arg0 * arg1
{name: "FMULD", argLength: 2, reg: fp21, asm: "FMULD", commutative: true}, // arg0 * arg1
{name: "FDIVS", argLength: 2, reg: fp21, asm: "FDIVS"}, // arg0 / arg1
{name: "FDIVD", argLength: 2, reg: fp21, asm: "FDIVD"}, // arg0 / arg1
{name: "AND", argLength: 2, reg: gp21, asm: "AND", commutative: true}, // arg0 & arg1
{name: "ANDconst", argLength: 1, reg: gp11, asm: "AND", aux: "Int64"}, // arg0 & auxInt
{name: "OR", argLength: 2, reg: gp21, asm: "ORR", commutative: true}, // arg0 | arg1
{name: "ORconst", argLength: 1, reg: gp11, asm: "ORR", aux: "Int64"}, // arg0 | auxInt
{name: "XOR", argLength: 2, reg: gp21, asm: "EOR", commutative: true}, // arg0 ^ arg1
{name: "XORconst", argLength: 1, reg: gp11, asm: "EOR", aux: "Int64"}, // arg0 ^ auxInt
{name: "BIC", argLength: 2, reg: gp21, asm: "BIC"}, // arg0 &^ arg1
{name: "BICconst", argLength: 1, reg: gp11, asm: "BIC", aux: "Int64"}, // arg0 &^ auxInt
// unary ops
{name: "MVN", argLength: 1, reg: gp11, asm: "MVN"}, // ^arg0
{name: "NEG", argLength: 1, reg: gp11, asm: "NEG"}, // -arg0
{name: "FNEGS", argLength: 1, reg: fp11, asm: "FNEGS"}, // -arg0, float32
{name: "FNEGD", argLength: 1, reg: fp11, asm: "FNEGD"}, // -arg0, float64
{name: "FSQRTD", argLength: 1, reg: fp11, asm: "FSQRTD"}, // sqrt(arg0), float64
// shifts
{name: "SLL", argLength: 2, reg: gp21, asm: "LSL"}, // arg0 << arg1, shift amount is mod 64
{name: "SLLconst", argLength: 1, reg: gp11, asm: "LSL", aux: "Int64"}, // arg0 << auxInt
{name: "SRL", argLength: 2, reg: gp21, asm: "LSR"}, // arg0 >> arg1, unsigned, shift amount is mod 64
{name: "SRLconst", argLength: 1, reg: gp11, asm: "LSR", aux: "Int64"}, // arg0 >> auxInt, unsigned
{name: "SRA", argLength: 2, reg: gp21, asm: "ASR"}, // arg0 >> arg1, signed, shift amount is mod 64
{name: "SRAconst", argLength: 1, reg: gp11, asm: "ASR", aux: "Int64"}, // arg0 >> auxInt, signed
{name: "RORconst", argLength: 1, reg: gp11, asm: "ROR", aux: "Int64"}, // arg0 right rotate by auxInt bits
{name: "RORWconst", argLength: 1, reg: gp11, asm: "RORW", aux: "Int64"}, // uint32(arg0) right rotate by auxInt bits
// comparisons
{name: "CMP", argLength: 2, reg: gp2flags, asm: "CMP", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPconst", argLength: 1, reg: gp1flags, asm: "CMP", aux: "Int64", typ: "Flags"}, // arg0 compare to auxInt
{name: "CMPW", argLength: 2, reg: gp2flags, asm: "CMPW", typ: "Flags"}, // arg0 compare to arg1, 32 bit
{name: "CMPWconst", argLength: 1, reg: gp1flags, asm: "CMPW", aux: "Int32", typ: "Flags"}, // arg0 compare to auxInt, 32 bit
{name: "CMN", argLength: 2, reg: gp2flags, asm: "CMN", typ: "Flags"}, // arg0 compare to -arg1
{name: "CMNconst", argLength: 1, reg: gp1flags, asm: "CMN", aux: "Int64", typ: "Flags"}, // arg0 compare to -auxInt
{name: "CMNW", argLength: 2, reg: gp2flags, asm: "CMNW", typ: "Flags"}, // arg0 compare to -arg1, 32 bit
{name: "CMNWconst", argLength: 1, reg: gp1flags, asm: "CMNW", aux: "Int32", typ: "Flags"}, // arg0 compare to -auxInt, 32 bit
{name: "FCMPS", argLength: 2, reg: fp2flags, asm: "FCMPS", typ: "Flags"}, // arg0 compare to arg1, float32
{name: "FCMPD", argLength: 2, reg: fp2flags, asm: "FCMPD", typ: "Flags"}, // arg0 compare to arg1, float64
// shifted ops
{name: "ADDshiftLL", argLength: 2, reg: gp21, asm: "ADD", aux: "Int64"}, // arg0 + arg1<<auxInt
{name: "ADDshiftRL", argLength: 2, reg: gp21, asm: "ADD", aux: "Int64"}, // arg0 + arg1>>auxInt, unsigned shift
{name: "ADDshiftRA", argLength: 2, reg: gp21, asm: "ADD", aux: "Int64"}, // arg0 + arg1>>auxInt, signed shift
{name: "SUBshiftLL", argLength: 2, reg: gp21, asm: "SUB", aux: "Int64"}, // arg0 - arg1<<auxInt
{name: "SUBshiftRL", argLength: 2, reg: gp21, asm: "SUB", aux: "Int64"}, // arg0 - arg1>>auxInt, unsigned shift
{name: "SUBshiftRA", argLength: 2, reg: gp21, asm: "SUB", aux: "Int64"}, // arg0 - arg1>>auxInt, signed shift
{name: "ANDshiftLL", argLength: 2, reg: gp21, asm: "AND", aux: "Int64"}, // arg0 & (arg1<<auxInt)
{name: "ANDshiftRL", argLength: 2, reg: gp21, asm: "AND", aux: "Int64"}, // arg0 & (arg1>>auxInt), unsigned shift
{name: "ANDshiftRA", argLength: 2, reg: gp21, asm: "AND", aux: "Int64"}, // arg0 & (arg1>>auxInt), signed shift
{name: "ORshiftLL", argLength: 2, reg: gp21, asm: "ORR", aux: "Int64"}, // arg0 | arg1<<auxInt
{name: "ORshiftRL", argLength: 2, reg: gp21, asm: "ORR", aux: "Int64"}, // arg0 | arg1>>auxInt, unsigned shift
{name: "ORshiftRA", argLength: 2, reg: gp21, asm: "ORR", aux: "Int64"}, // arg0 | arg1>>auxInt, signed shift
{name: "XORshiftLL", argLength: 2, reg: gp21, asm: "EOR", aux: "Int64"}, // arg0 ^ arg1<<auxInt
{name: "XORshiftRL", argLength: 2, reg: gp21, asm: "EOR", aux: "Int64"}, // arg0 ^ arg1>>auxInt, unsigned shift
{name: "XORshiftRA", argLength: 2, reg: gp21, asm: "EOR", aux: "Int64"}, // arg0 ^ arg1>>auxInt, signed shift
{name: "BICshiftLL", argLength: 2, reg: gp21, asm: "BIC", aux: "Int64"}, // arg0 &^ (arg1<<auxInt)
{name: "BICshiftRL", argLength: 2, reg: gp21, asm: "BIC", aux: "Int64"}, // arg0 &^ (arg1>>auxInt), unsigned shift
{name: "BICshiftRA", argLength: 2, reg: gp21, asm: "BIC", aux: "Int64"}, // arg0 &^ (arg1>>auxInt), signed shift
{name: "CMPshiftLL", argLength: 2, reg: gp2flags, asm: "CMP", aux: "Int64", typ: "Flags"}, // arg0 compare to arg1<<auxInt
{name: "CMPshiftRL", argLength: 2, reg: gp2flags, asm: "CMP", aux: "Int64", typ: "Flags"}, // arg0 compare to arg1>>auxInt, unsigned shift
{name: "CMPshiftRA", argLength: 2, reg: gp2flags, asm: "CMP", aux: "Int64", typ: "Flags"}, // arg0 compare to arg1>>auxInt, signed shift
// moves
{name: "MOVDconst", argLength: 0, reg: gp01, aux: "Int64", asm: "MOVD", typ: "UInt64", rematerializeable: true}, // 32 low bits of auxint
{name: "FMOVSconst", argLength: 0, reg: fp01, aux: "Float64", asm: "FMOVS", typ: "Float32", rematerializeable: true}, // auxint as 64-bit float, convert to 32-bit float
{name: "FMOVDconst", argLength: 0, reg: fp01, aux: "Float64", asm: "FMOVD", typ: "Float64", rematerializeable: true}, // auxint as 64-bit float
{name: "MOVDaddr", argLength: 1, reg: regInfo{inputs: []regMask{buildReg("SP") | buildReg("SB")}, outputs: []regMask{gp}}, aux: "SymOff", asm: "MOVD", rematerializeable: true}, // arg0 + auxInt + aux.(*gc.Sym), arg0=SP/SB
{name: "MOVBload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVB", typ: "Int8"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVBUload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVBU", typ: "UInt8"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVHload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVH", typ: "Int16"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVHUload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVHU", typ: "UInt16"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVWload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVW", typ: "Int32"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVWUload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVWU", typ: "UInt32"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVDload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVD", typ: "UInt64"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "FMOVSload", argLength: 2, reg: fpload, aux: "SymOff", asm: "FMOVS", typ: "Float32"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "FMOVDload", argLength: 2, reg: fpload, aux: "SymOff", asm: "FMOVD", typ: "Float64"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVBstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVB", typ: "Mem"}, // store 1 byte of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVHstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVH", typ: "Mem"}, // store 2 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVWstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVW", typ: "Mem"}, // store 4 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVDstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVD", typ: "Mem"}, // store 8 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "FMOVSstore", argLength: 3, reg: fpstore, aux: "SymOff", asm: "FMOVS", typ: "Mem"}, // store 4 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "FMOVDstore", argLength: 3, reg: fpstore, aux: "SymOff", asm: "FMOVD", typ: "Mem"}, // store 8 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVBstorezero", argLength: 2, reg: gpstore0, aux: "SymOff", asm: "MOVB", typ: "Mem"}, // store 1 byte of zero to arg0 + auxInt + aux. arg1=mem.
{name: "MOVHstorezero", argLength: 2, reg: gpstore0, aux: "SymOff", asm: "MOVH", typ: "Mem"}, // store 2 bytes of zero to arg0 + auxInt + aux. arg1=mem.
{name: "MOVWstorezero", argLength: 2, reg: gpstore0, aux: "SymOff", asm: "MOVW", typ: "Mem"}, // store 4 bytes of zero to arg0 + auxInt + aux. arg1=mem.
{name: "MOVDstorezero", argLength: 2, reg: gpstore0, aux: "SymOff", asm: "MOVD", typ: "Mem"}, // store 8 bytes of zero to arg0 + auxInt + aux. ar12=mem.
// conversions
{name: "MOVBreg", argLength: 1, reg: gp11, asm: "MOVB"}, // move from arg0, sign-extended from byte
{name: "MOVBUreg", argLength: 1, reg: gp11, asm: "MOVBU"}, // move from arg0, unsign-extended from byte
{name: "MOVHreg", argLength: 1, reg: gp11, asm: "MOVH"}, // move from arg0, sign-extended from half
{name: "MOVHUreg", argLength: 1, reg: gp11, asm: "MOVHU"}, // move from arg0, unsign-extended from half
{name: "MOVWreg", argLength: 1, reg: gp11, asm: "MOVW"}, // move from arg0, sign-extended from word
{name: "MOVWUreg", argLength: 1, reg: gp11, asm: "MOVWU"}, // move from arg0, unsign-extended from word
{name: "MOVDreg", argLength: 1, reg: gp11, asm: "MOVD"}, // move from arg0
{name: "MOVDnop", argLength: 1, reg: regInfo{inputs: []regMask{gp}, outputs: []regMask{gp}}, resultInArg0: true}, // nop, return arg0 in same register
{name: "SCVTFWS", argLength: 1, reg: gpfp, asm: "SCVTFWS"}, // int32 -> float32
{name: "SCVTFWD", argLength: 1, reg: gpfp, asm: "SCVTFWD"}, // int32 -> float64
{name: "UCVTFWS", argLength: 1, reg: gpfp, asm: "UCVTFWS"}, // uint32 -> float32
{name: "UCVTFWD", argLength: 1, reg: gpfp, asm: "UCVTFWD"}, // uint32 -> float64
{name: "SCVTFS", argLength: 1, reg: gpfp, asm: "SCVTFS"}, // int64 -> float32
{name: "SCVTFD", argLength: 1, reg: gpfp, asm: "SCVTFD"}, // int64 -> float64
{name: "UCVTFS", argLength: 1, reg: gpfp, asm: "UCVTFS"}, // uint64 -> float32
{name: "UCVTFD", argLength: 1, reg: gpfp, asm: "UCVTFD"}, // uint64 -> float64
{name: "FCVTZSSW", argLength: 1, reg: fpgp, asm: "FCVTZSSW"}, // float32 -> int32
{name: "FCVTZSDW", argLength: 1, reg: fpgp, asm: "FCVTZSDW"}, // float64 -> int32
{name: "FCVTZUSW", argLength: 1, reg: fpgp, asm: "FCVTZUSW"}, // float32 -> uint32
{name: "FCVTZUDW", argLength: 1, reg: fpgp, asm: "FCVTZUDW"}, // float64 -> uint32
{name: "FCVTZSS", argLength: 1, reg: fpgp, asm: "FCVTZSS"}, // float32 -> int64
{name: "FCVTZSD", argLength: 1, reg: fpgp, asm: "FCVTZSD"}, // float64 -> int64
{name: "FCVTZUS", argLength: 1, reg: fpgp, asm: "FCVTZUS"}, // float32 -> uint64
{name: "FCVTZUD", argLength: 1, reg: fpgp, asm: "FCVTZUD"}, // float64 -> uint64
{name: "FCVTSD", argLength: 1, reg: fp11, asm: "FCVTSD"}, // float32 -> float64
{name: "FCVTDS", argLength: 1, reg: fp11, asm: "FCVTDS"}, // float64 -> float32
// conditional instructions
{name: "CSELULT", argLength: 3, reg: gp2flags1, asm: "CSEL"}, // returns arg0 if flags indicates unsigned LT, arg1 otherwise, arg2=flags
{name: "CSELULT0", argLength: 2, reg: gp1flags1, asm: "CSEL"}, // returns arg0 if flags indicates unsigned LT, 0 otherwise, arg1=flags
// function calls
{name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff", clobberFlags: true}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
{name: "CALLclosure", argLength: 3, reg: regInfo{inputs: []regMask{gpsp, buildReg("R26"), 0}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call function via closure. arg0=codeptr, arg1=closure, arg2=mem, auxint=argsize, returns mem
{name: "CALLdefer", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call deferproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLgo", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call newproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLinter", argLength: 2, reg: regInfo{inputs: []regMask{gp}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call fn by pointer. arg0=codeptr, arg1=mem, auxint=argsize, returns mem
// pseudo-ops
{name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gpg}}}, // panic if arg0 is nil. arg1=mem.
{name: "Equal", argLength: 1, reg: readflags}, // bool, true flags encode x==y false otherwise.
{name: "NotEqual", argLength: 1, reg: readflags}, // bool, true flags encode x!=y false otherwise.
{name: "LessThan", argLength: 1, reg: readflags}, // bool, true flags encode signed x<y false otherwise.
{name: "LessEqual", argLength: 1, reg: readflags}, // bool, true flags encode signed x<=y false otherwise.
{name: "GreaterThan", argLength: 1, reg: readflags}, // bool, true flags encode signed x>y false otherwise.
{name: "GreaterEqual", argLength: 1, reg: readflags}, // bool, true flags encode signed x>=y false otherwise.
{name: "LessThanU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x<y false otherwise.
{name: "LessEqualU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x<=y false otherwise.
{name: "GreaterThanU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x>y false otherwise.
{name: "GreaterEqualU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x>=y false otherwise.
// duffzero
// arg0 = address of memory to zero
// arg1 = mem
// auxint = offset into duffzero code to start executing
// returns mem
// R16 aka arm64.REGRT1 changed as side effect
{
name: "DUFFZERO",
aux: "Int64",
argLength: 2,
reg: regInfo{
inputs: []regMask{gp},
clobbers: buildReg("R16"),
},
},
// large zeroing
// arg0 = address of memory to zero (in R16 aka arm64.REGRT1, changed as side effect)
// arg1 = address of the last element to zero
// arg2 = mem
// auxint = alignment
// returns mem
// MOVD.P ZR, 8(R16)
// CMP Rarg1, R16
// BLE -2(PC)
// Note: the end of the zeroed memory may not be a valid pointer, so it is a problem if it is spilled.
// The end of the memory minus 8 is within the area to zero, so it is OK to spill.
{
name: "LoweredZero",
aux: "Int64",
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("R16"), gp},
clobbers: buildReg("R16"),
},
clobberFlags: true,
},
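// Illustrative sketch of the loop above (pseudo-Go; post-increment
// ordering elided):
//	for p := arg0; p <= arg1; p += 8 { store 8 zero bytes at p }
// so arg1, the address of the last element, bounds the loop.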
// large move
// arg0 = address of dst memory (in R17 aka arm64.REGRT2, changed as side effect)
// arg1 = address of src memory (in R16 aka arm64.REGRT1, changed as side effect)
// arg2 = address of the last element of src
// arg3 = mem
// auxint = alignment
// returns mem
// MOVD.P 8(R16), Rtmp
// MOVD.P Rtmp, 8(R17)
// CMP Rarg2, R16
// BLE -3(PC)
// Note: the end of src may not be a valid pointer, so it is a problem if it is spilled.
// The end of src minus 8 is within the area to copy, so it is OK to spill.
{
name: "LoweredMove",
aux: "Int64",
argLength: 4,
reg: regInfo{
inputs: []regMask{buildReg("R17"), buildReg("R16"), gp},
clobbers: buildReg("R16 R17"),
},
clobberFlags: true,
},
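// Illustrative sketch of the loop above (pseudo-Go):
//	for src, dst := arg1, arg0; src <= arg2; src, dst = src+8, dst+8 {
//		copy 8 bytes from src to dst
//	}
// with arg2, the address of the last source element, as the bound.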
// Scheduler ensures LoweredGetClosurePtr occurs only in entry block,
// and sorts it to the very beginning of the block to prevent other
// use of R26 (arm64.REGCTXT, the closure pointer)
{name: "LoweredGetClosurePtr", reg: regInfo{outputs: []regMask{buildReg("R26")}}},
// MOVDconvert converts between pointers and integers.
// We have a special op for this so as to not confuse GC
// (particularly stack maps). It takes a memory arg so it
// gets correctly ordered with respect to GC safepoints.
// arg0=ptr/int arg1=mem, output=int/ptr
{name: "MOVDconvert", argLength: 2, reg: gp11, asm: "MOVD"},
// Constant flag values. For any comparison, there are 5 possible
// outcomes: the three from the signed total order (<,==,>) and the
// three from the unsigned total order. The == cases overlap.
// Note: there's a sixth "unordered" outcome for floating-point
// comparisons, but we don't use such a beast yet.
// These ops are for temporary use by rewrite rules. They
// cannot appear in the generated assembly.
{name: "FlagEQ"}, // equal
{name: "FlagLT_ULT"}, // signed < and unsigned <
{name: "FlagLT_UGT"}, // signed < and unsigned >
{name: "FlagGT_UGT"}, // signed > and unsigned <
{name: "FlagGT_ULT"}, // signed > and unsigned >
// (InvertFlags (CMP a b)) == (CMP b a)
// InvertFlags is a pseudo-op which can't appear in assembly output.
{name: "InvertFlags", argLength: 1}, // reverse direction of arg0
}
blocks := []blockData{
{name: "EQ"},
{name: "NE"},
{name: "LT"},
{name: "LE"},
{name: "GT"},
{name: "GE"},
{name: "ULT"},
{name: "ULE"},
{name: "UGT"},
{name: "UGE"},
}
archs = append(archs, arch{
name: "ARM64",
pkg: "cmd/internal/obj/arm64",
genfile: "../../arm64/ssa.go",
ops: ops,
blocks: blocks,
regnames: regNamesARM64,
gpregmask: gp,
fpregmask: fp,
framepointerreg: -1, // not used
})
}


@@ -6,32 +6,481 @@
package main
import "strings"
// Notes:
// - Integer types live in the low portion of registers. Upper portions are junk.
// - Boolean types use the low-order byte of a register. 0=false, 1=true.
// Upper bytes are junk.
// - *const instructions may use a constant larger than the instruction can encode.
// In this case the assembler expands it to multiple instructions and uses the
// tmp register (R11).
// Suffixes encode the bit width of various instructions.
// W (word) = 32 bit
// H (half word) = 16 bit
// HU = 16 bit unsigned
// B (byte) = 8 bit
// BU = 8 bit unsigned
// F (float) = 32 bit float
// D (double) = 64 bit float
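// Example for the *const note above (illustrative): ADD $0x12345678, R2
// has no valid ARM immediate encoding, so the assembler expands it into
// a load of the constant into the tmp register R11 followed by
// ADD R11, R2.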
var regNamesARM = []string{
"R0",
"R1",
"R2",
"R3",
"R4",
"R5",
"R6",
"R7",
"R8",
"R9",
"g", // aka R10
"R11", // tmp
"R12",
"SP", // aka R13
"R14", // link
"R15", // pc
"F0",
"F1",
"F2",
"F3",
"F4",
"F5",
"F6",
"F7",
"F8",
"F9",
"F10",
"F11",
"F12",
"F13",
"F14",
"F15", // tmp
// pseudo-registers
"SB",
}
func init() {
// Make map from reg names to reg integers.
if len(regNamesARM) > 64 {
panic("too many registers")
}
num := map[string]int{}
for i, name := range regNamesARM {
num[name] = i
}
buildReg := func(s string) regMask {
m := regMask(0)
for _, r := range strings.Split(s, " ") {
if n, ok := num[r]; ok {
m |= regMask(1) << uint(n)
continue
}
panic("register " + r + " not found")
}
return m
}
// Common individual register masks
var (
gp = buildReg("R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R12")
gpg = gp | buildReg("g")
gpsp = gp | buildReg("SP")
gpspg = gpg | buildReg("SP")
gpspsbg = gpspg | buildReg("SB")
fp = buildReg("F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15")
callerSave = gp | fp | buildReg("g") // runtime.setg (and anything calling it) may clobber g
)
// Common regInfo
var (
gp01 = regInfo{inputs: nil, outputs: []regMask{gp}}
gp11 = regInfo{inputs: []regMask{gpg}, outputs: []regMask{gp}}
gp11carry = regInfo{inputs: []regMask{gpg}, outputs: []regMask{0, gp}}
gp11sp = regInfo{inputs: []regMask{gpspg}, outputs: []regMask{gp}}
gp1flags = regInfo{inputs: []regMask{gpg}}
gp1flags1 = regInfo{inputs: []regMask{gp}, outputs: []regMask{gp}}
gp21 = regInfo{inputs: []regMask{gpg, gpg}, outputs: []regMask{gp}}
gp21carry = regInfo{inputs: []regMask{gpg, gpg}, outputs: []regMask{0, gp}}
gp2flags = regInfo{inputs: []regMask{gpg, gpg}}
gp2flags1 = regInfo{inputs: []regMask{gp, gp}, outputs: []regMask{gp}}
gp22 = regInfo{inputs: []regMask{gpg, gpg}, outputs: []regMask{gp, gp}}
gp31 = regInfo{inputs: []regMask{gp, gp, gp}, outputs: []regMask{gp}}
gp31carry = regInfo{inputs: []regMask{gp, gp, gp}, outputs: []regMask{0, gp}}
gp3flags = regInfo{inputs: []regMask{gp, gp, gp}}
gp3flags1 = regInfo{inputs: []regMask{gp, gp, gp}, outputs: []regMask{gp}}
gpload = regInfo{inputs: []regMask{gpspsbg}, outputs: []regMask{gp}}
gpstore = regInfo{inputs: []regMask{gpspsbg, gpg}}
gp2load = regInfo{inputs: []regMask{gpspsbg, gpg}, outputs: []regMask{gp}}
gp2store = regInfo{inputs: []regMask{gpspsbg, gpg, gpg}}
fp01 = regInfo{inputs: nil, outputs: []regMask{fp}}
fp11 = regInfo{inputs: []regMask{fp}, outputs: []regMask{fp}}
fp1flags = regInfo{inputs: []regMask{fp}}
fpgp = regInfo{inputs: []regMask{fp}, outputs: []regMask{gp}}
gpfp = regInfo{inputs: []regMask{gp}, outputs: []regMask{fp}}
fp21 = regInfo{inputs: []regMask{fp, fp}, outputs: []regMask{fp}}
fp2flags = regInfo{inputs: []regMask{fp, fp}}
fpload = regInfo{inputs: []regMask{gpspsbg}, outputs: []regMask{fp}}
fpstore = regInfo{inputs: []regMask{gpspsbg, fp}}
readflags = regInfo{inputs: nil, outputs: []regMask{gp}}
)
ops := []opData{
{name: "ADD", argLength: 2, reg: gp21, asm: "ADD", commutative: true}, // arg0 + arg1
{name: "ADDconst", argLength: 1, reg: gp11, asm: "ADD", aux: "SymOff"}, // arg0 + auxInt + aux.(*gc.Sym)
// binary ops
{name: "ADD", argLength: 2, reg: gp21, asm: "ADD", commutative: true}, // arg0 + arg1
{name: "ADDconst", argLength: 1, reg: gp11sp, asm: "ADD", aux: "Int32"}, // arg0 + auxInt
{name: "SUB", argLength: 2, reg: gp21, asm: "SUB"}, // arg0 - arg1
{name: "SUBconst", argLength: 1, reg: gp11, asm: "SUB", aux: "Int32"}, // arg0 - auxInt
{name: "RSB", argLength: 2, reg: gp21, asm: "RSB"}, // arg1 - arg0
{name: "RSBconst", argLength: 1, reg: gp11, asm: "RSB", aux: "Int32"}, // auxInt - arg0
{name: "MUL", argLength: 2, reg: gp21, asm: "MUL", commutative: true}, // arg0 * arg1
{name: "HMUL", argLength: 2, reg: gp21, asm: "MULL", commutative: true}, // (arg0 * arg1) >> 32, signed
{name: "HMULU", argLength: 2, reg: gp21, asm: "MULLU", commutative: true}, // (arg0 * arg1) >> 32, unsigned
{name: "DIV", argLength: 2, reg: gp21, asm: "DIV", clobberFlags: true}, // arg0 / arg1, signed, soft div clobbers flags
{name: "DIVU", argLength: 2, reg: gp21, asm: "DIVU", clobberFlags: true}, // arg0 / arg1, unsighed
{name: "MOD", argLength: 2, reg: gp21, asm: "MOD", clobberFlags: true}, // arg0 % arg1, signed
{name: "MODU", argLength: 2, reg: gp21, asm: "MODU", clobberFlags: true}, // arg0 % arg1, unsigned
{name: "MOVWconst", argLength: 0, reg: gp01, aux: "Int32", asm: "MOVW", rematerializeable: true}, // 32 low bits of auxint
{name: "ADDS", argLength: 2, reg: gp21carry, asm: "ADD", commutative: true}, // arg0 + arg1, set carry flag
{name: "ADDSconst", argLength: 1, reg: gp11carry, asm: "ADD", aux: "Int32"}, // arg0 + auxInt, set carry flag
{name: "ADC", argLength: 3, reg: gp2flags1, asm: "ADC", commutative: true}, // arg0 + arg1 + carry, arg2=flags
{name: "ADCconst", argLength: 2, reg: gp1flags1, asm: "ADC", aux: "Int32"}, // arg0 + auxInt + carry, arg1=flags
{name: "SUBS", argLength: 2, reg: gp21carry, asm: "SUB"}, // arg0 - arg1, set carry flag
{name: "SUBSconst", argLength: 1, reg: gp11carry, asm: "SUB", aux: "Int32"}, // arg0 - auxInt, set carry flag
{name: "RSBSconst", argLength: 1, reg: gp11carry, asm: "RSB", aux: "Int32"}, // auxInt - arg0, set carry flag
{name: "SBC", argLength: 3, reg: gp2flags1, asm: "SBC"}, // arg0 - arg1 - carry, arg2=flags
{name: "SBCconst", argLength: 2, reg: gp1flags1, asm: "SBC", aux: "Int32"}, // arg0 - auxInt - carry, arg1=flags
{name: "RSCconst", argLength: 2, reg: gp1flags1, asm: "RSC", aux: "Int32"}, // auxInt - arg0 - carry, arg1=flags
{name: "CMP", argLength: 2, reg: gp2flags, asm: "CMP", typ: "Flags"}, // arg0 compare to arg1
{name: "MULLU", argLength: 2, reg: gp22, asm: "MULLU", commutative: true}, // arg0 * arg1, high 32 bits in out0, low 32 bits in out1
{name: "MULA", argLength: 3, reg: gp31, asm: "MULA"}, // arg0 * arg1 + arg2
{name: "MOVWload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVW"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVWstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVW"}, // store 4 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "ADDF", argLength: 2, reg: fp21, asm: "ADDF", commutative: true}, // arg0 + arg1
{name: "ADDD", argLength: 2, reg: fp21, asm: "ADDD", commutative: true}, // arg0 + arg1
{name: "SUBF", argLength: 2, reg: fp21, asm: "SUBF"}, // arg0 - arg1
{name: "SUBD", argLength: 2, reg: fp21, asm: "SUBD"}, // arg0 - arg1
{name: "MULF", argLength: 2, reg: fp21, asm: "MULF", commutative: true}, // arg0 * arg1
{name: "MULD", argLength: 2, reg: fp21, asm: "MULD", commutative: true}, // arg0 * arg1
{name: "DIVF", argLength: 2, reg: fp21, asm: "DIVF"}, // arg0 / arg1
{name: "DIVD", argLength: 2, reg: fp21, asm: "DIVD"}, // arg0 / arg1
{name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff"}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
{name: "AND", argLength: 2, reg: gp21, asm: "AND", commutative: true}, // arg0 & arg1
{name: "ANDconst", argLength: 1, reg: gp11, asm: "AND", aux: "Int32"}, // arg0 & auxInt
{name: "OR", argLength: 2, reg: gp21, asm: "ORR", commutative: true}, // arg0 | arg1
{name: "ORconst", argLength: 1, reg: gp11, asm: "ORR", aux: "Int32"}, // arg0 | auxInt
{name: "XOR", argLength: 2, reg: gp21, asm: "EOR", commutative: true}, // arg0 ^ arg1
{name: "XORconst", argLength: 1, reg: gp11, asm: "EOR", aux: "Int32"}, // arg0 ^ auxInt
{name: "BIC", argLength: 2, reg: gp21, asm: "BIC"}, // arg0 &^ arg1
{name: "BICconst", argLength: 1, reg: gp11, asm: "BIC", aux: "Int32"}, // arg0 &^ auxInt
// unary ops
{name: "MVN", argLength: 1, reg: gp11, asm: "MVN"}, // ^arg0
{name: "NEGF", argLength: 1, reg: fp11, asm: "NEGF"}, // -arg0, float32
{name: "NEGD", argLength: 1, reg: fp11, asm: "NEGD"}, // -arg0, float64
{name: "SQRTD", argLength: 1, reg: fp11, asm: "SQRTD"}, // sqrt(arg0), float64
// shifts
{name: "SLL", argLength: 2, reg: gp21, asm: "SLL"}, // arg0 << arg1, shift amount is mod 256
{name: "SLLconst", argLength: 1, reg: gp11, asm: "SLL", aux: "Int32"}, // arg0 << auxInt
{name: "SRL", argLength: 2, reg: gp21, asm: "SRL"}, // arg0 >> arg1, unsigned, shift amount is mod 256
{name: "SRLconst", argLength: 1, reg: gp11, asm: "SRL", aux: "Int32"}, // arg0 >> auxInt, unsigned
{name: "SRA", argLength: 2, reg: gp21, asm: "SRA"}, // arg0 >> arg1, signed, shift amount is mod 256
{name: "SRAconst", argLength: 1, reg: gp11, asm: "SRA", aux: "Int32"}, // arg0 >> auxInt, signed
{name: "SRRconst", argLength: 1, reg: gp11, aux: "Int32"}, // arg0 right rotate by auxInt bits
{name: "ADDshiftLL", argLength: 2, reg: gp21, asm: "ADD", aux: "Int32"}, // arg0 + arg1<<auxInt
{name: "ADDshiftRL", argLength: 2, reg: gp21, asm: "ADD", aux: "Int32"}, // arg0 + arg1>>auxInt, unsigned shift
{name: "ADDshiftRA", argLength: 2, reg: gp21, asm: "ADD", aux: "Int32"}, // arg0 + arg1>>auxInt, signed shift
{name: "SUBshiftLL", argLength: 2, reg: gp21, asm: "SUB", aux: "Int32"}, // arg0 - arg1<<auxInt
{name: "SUBshiftRL", argLength: 2, reg: gp21, asm: "SUB", aux: "Int32"}, // arg0 - arg1>>auxInt, unsigned shift
{name: "SUBshiftRA", argLength: 2, reg: gp21, asm: "SUB", aux: "Int32"}, // arg0 - arg1>>auxInt, signed shift
{name: "RSBshiftLL", argLength: 2, reg: gp21, asm: "RSB", aux: "Int32"}, // arg1<<auxInt - arg0
{name: "RSBshiftRL", argLength: 2, reg: gp21, asm: "RSB", aux: "Int32"}, // arg1>>auxInt - arg0, unsigned shift
{name: "RSBshiftRA", argLength: 2, reg: gp21, asm: "RSB", aux: "Int32"}, // arg1>>auxInt - arg0, signed shift
{name: "ANDshiftLL", argLength: 2, reg: gp21, asm: "AND", aux: "Int32"}, // arg0 & (arg1<<auxInt)
{name: "ANDshiftRL", argLength: 2, reg: gp21, asm: "AND", aux: "Int32"}, // arg0 & (arg1>>auxInt), unsigned shift
{name: "ANDshiftRA", argLength: 2, reg: gp21, asm: "AND", aux: "Int32"}, // arg0 & (arg1>>auxInt), signed shift
{name: "ORshiftLL", argLength: 2, reg: gp21, asm: "ORR", aux: "Int32"}, // arg0 | arg1<<auxInt
{name: "ORshiftRL", argLength: 2, reg: gp21, asm: "ORR", aux: "Int32"}, // arg0 | arg1>>auxInt, unsigned shift
{name: "ORshiftRA", argLength: 2, reg: gp21, asm: "ORR", aux: "Int32"}, // arg0 | arg1>>auxInt, signed shift
{name: "XORshiftLL", argLength: 2, reg: gp21, asm: "EOR", aux: "Int32"}, // arg0 ^ arg1<<auxInt
{name: "XORshiftRL", argLength: 2, reg: gp21, asm: "EOR", aux: "Int32"}, // arg0 ^ arg1>>auxInt, unsigned shift
{name: "XORshiftRA", argLength: 2, reg: gp21, asm: "EOR", aux: "Int32"}, // arg0 ^ arg1>>auxInt, signed shift
{name: "BICshiftLL", argLength: 2, reg: gp21, asm: "BIC", aux: "Int32"}, // arg0 &^ (arg1<<auxInt)
{name: "BICshiftRL", argLength: 2, reg: gp21, asm: "BIC", aux: "Int32"}, // arg0 &^ (arg1>>auxInt), unsigned shift
{name: "BICshiftRA", argLength: 2, reg: gp21, asm: "BIC", aux: "Int32"}, // arg0 &^ (arg1>>auxInt), signed shift
{name: "MVNshiftLL", argLength: 1, reg: gp11, asm: "MVN", aux: "Int32"}, // ^(arg0<<auxInt)
{name: "MVNshiftRL", argLength: 1, reg: gp11, asm: "MVN", aux: "Int32"}, // ^(arg0>>auxInt), unsigned shift
{name: "MVNshiftRA", argLength: 1, reg: gp11, asm: "MVN", aux: "Int32"}, // ^(arg0>>auxInt), signed shift
{name: "ADCshiftLL", argLength: 3, reg: gp2flags1, asm: "ADC", aux: "Int32"}, // arg0 + arg1<<auxInt + carry, arg2=flags
{name: "ADCshiftRL", argLength: 3, reg: gp2flags1, asm: "ADC", aux: "Int32"}, // arg0 + arg1>>auxInt + carry, unsigned shift, arg2=flags
{name: "ADCshiftRA", argLength: 3, reg: gp2flags1, asm: "ADC", aux: "Int32"}, // arg0 + arg1>>auxInt + carry, signed shift, arg2=flags
{name: "SBCshiftLL", argLength: 3, reg: gp2flags1, asm: "SBC", aux: "Int32"}, // arg0 - arg1<<auxInt - carry, arg2=flags
{name: "SBCshiftRL", argLength: 3, reg: gp2flags1, asm: "SBC", aux: "Int32"}, // arg0 - arg1>>auxInt - carry, unsigned shift, arg2=flags
{name: "SBCshiftRA", argLength: 3, reg: gp2flags1, asm: "SBC", aux: "Int32"}, // arg0 - arg1>>auxInt - carry, signed shift, arg2=flags
{name: "RSCshiftLL", argLength: 3, reg: gp2flags1, asm: "RSC", aux: "Int32"}, // arg1<<auxInt - arg0 - carry, arg2=flags
{name: "RSCshiftRL", argLength: 3, reg: gp2flags1, asm: "RSC", aux: "Int32"}, // arg1>>auxInt - arg0 - carry, unsigned shift, arg2=flags
{name: "RSCshiftRA", argLength: 3, reg: gp2flags1, asm: "RSC", aux: "Int32"}, // arg1>>auxInt - arg0 - carry, signed shift, arg2=flags
{name: "ADDSshiftLL", argLength: 2, reg: gp21carry, asm: "ADD", aux: "Int32"}, // arg0 + arg1<<auxInt, set carry flag
{name: "ADDSshiftRL", argLength: 2, reg: gp21carry, asm: "ADD", aux: "Int32"}, // arg0 + arg1>>auxInt, unsigned shift, set carry flag
{name: "ADDSshiftRA", argLength: 2, reg: gp21carry, asm: "ADD", aux: "Int32"}, // arg0 + arg1>>auxInt, signed shift, set carry flag
{name: "SUBSshiftLL", argLength: 2, reg: gp21carry, asm: "SUB", aux: "Int32"}, // arg0 - arg1<<auxInt, set carry flag
{name: "SUBSshiftRL", argLength: 2, reg: gp21carry, asm: "SUB", aux: "Int32"}, // arg0 - arg1>>auxInt, unsigned shift, set carry flag
{name: "SUBSshiftRA", argLength: 2, reg: gp21carry, asm: "SUB", aux: "Int32"}, // arg0 - arg1>>auxInt, signed shift, set carry flag
{name: "RSBSshiftLL", argLength: 2, reg: gp21carry, asm: "RSB", aux: "Int32"}, // arg1<<auxInt - arg0, set carry flag
{name: "RSBSshiftRL", argLength: 2, reg: gp21carry, asm: "RSB", aux: "Int32"}, // arg1>>auxInt - arg0, unsigned shift, set carry flag
{name: "RSBSshiftRA", argLength: 2, reg: gp21carry, asm: "RSB", aux: "Int32"}, // arg1>>auxInt - arg0, signed shift, set carry flag
{name: "ADDshiftLLreg", argLength: 3, reg: gp31, asm: "ADD"}, // arg0 + arg1<<arg2
{name: "ADDshiftRLreg", argLength: 3, reg: gp31, asm: "ADD"}, // arg0 + arg1>>arg2, unsigned shift
{name: "ADDshiftRAreg", argLength: 3, reg: gp31, asm: "ADD"}, // arg0 + arg1>>arg2, signed shift
{name: "SUBshiftLLreg", argLength: 3, reg: gp31, asm: "SUB"}, // arg0 - arg1<<arg2
{name: "SUBshiftRLreg", argLength: 3, reg: gp31, asm: "SUB"}, // arg0 - arg1>>arg2, unsigned shift
{name: "SUBshiftRAreg", argLength: 3, reg: gp31, asm: "SUB"}, // arg0 - arg1>>arg2, signed shift
{name: "RSBshiftLLreg", argLength: 3, reg: gp31, asm: "RSB"}, // arg1<<arg2 - arg0
{name: "RSBshiftRLreg", argLength: 3, reg: gp31, asm: "RSB"}, // arg1>>arg2 - arg0, unsigned shift
{name: "RSBshiftRAreg", argLength: 3, reg: gp31, asm: "RSB"}, // arg1>>arg2 - arg0, signed shift
{name: "ANDshiftLLreg", argLength: 3, reg: gp31, asm: "AND"}, // arg0 & (arg1<<arg2)
{name: "ANDshiftRLreg", argLength: 3, reg: gp31, asm: "AND"}, // arg0 & (arg1>>arg2), unsigned shift
{name: "ANDshiftRAreg", argLength: 3, reg: gp31, asm: "AND"}, // arg0 & (arg1>>arg2), signed shift
{name: "ORshiftLLreg", argLength: 3, reg: gp31, asm: "ORR"}, // arg0 | arg1<<arg2
{name: "ORshiftRLreg", argLength: 3, reg: gp31, asm: "ORR"}, // arg0 | arg1>>arg2, unsigned shift
{name: "ORshiftRAreg", argLength: 3, reg: gp31, asm: "ORR"}, // arg0 | arg1>>arg2, signed shift
{name: "XORshiftLLreg", argLength: 3, reg: gp31, asm: "EOR"}, // arg0 ^ arg1<<arg2
{name: "XORshiftRLreg", argLength: 3, reg: gp31, asm: "EOR"}, // arg0 ^ arg1>>arg2, unsigned shift
{name: "XORshiftRAreg", argLength: 3, reg: gp31, asm: "EOR"}, // arg0 ^ arg1>>arg2, signed shift
{name: "BICshiftLLreg", argLength: 3, reg: gp31, asm: "BIC"}, // arg0 &^ (arg1<<arg2)
{name: "BICshiftRLreg", argLength: 3, reg: gp31, asm: "BIC"}, // arg0 &^ (arg1>>arg2), unsigned shift
{name: "BICshiftRAreg", argLength: 3, reg: gp31, asm: "BIC"}, // arg0 &^ (arg1>>arg2), signed shift
{name: "MVNshiftLLreg", argLength: 2, reg: gp21, asm: "MVN"}, // ^(arg0<<arg1)
{name: "MVNshiftRLreg", argLength: 2, reg: gp21, asm: "MVN"}, // ^(arg0>>arg1), unsigned shift
{name: "MVNshiftRAreg", argLength: 2, reg: gp21, asm: "MVN"}, // ^(arg0>>arg1), signed shift
{name: "ADCshiftLLreg", argLength: 4, reg: gp3flags1, asm: "ADC"}, // arg0 + arg1<<arg2 + carry, arg3=flags
{name: "ADCshiftRLreg", argLength: 4, reg: gp3flags1, asm: "ADC"}, // arg0 + arg1>>arg2 + carry, unsigned shift, arg3=flags
{name: "ADCshiftRAreg", argLength: 4, reg: gp3flags1, asm: "ADC"}, // arg0 + arg1>>arg2 + carry, signed shift, arg3=flags
{name: "SBCshiftLLreg", argLength: 4, reg: gp3flags1, asm: "SBC"}, // arg0 - arg1<<arg2 - carry, arg3=flags
{name: "SBCshiftRLreg", argLength: 4, reg: gp3flags1, asm: "SBC"}, // arg0 - arg1>>arg2 - carry, unsigned shift, arg3=flags
{name: "SBCshiftRAreg", argLength: 4, reg: gp3flags1, asm: "SBC"}, // arg0 - arg1>>arg2 - carry, signed shift, arg3=flags
{name: "RSCshiftLLreg", argLength: 4, reg: gp3flags1, asm: "RSC"}, // arg1<<arg2 - arg0 - carry, arg3=flags
{name: "RSCshiftRLreg", argLength: 4, reg: gp3flags1, asm: "RSC"}, // arg1>>arg2 - arg0 - carry, unsigned shift, arg3=flags
{name: "RSCshiftRAreg", argLength: 4, reg: gp3flags1, asm: "RSC"}, // arg1>>arg2 - arg0 - carry, signed shift, arg3=flags
{name: "ADDSshiftLLreg", argLength: 3, reg: gp31carry, asm: "ADD"}, // arg0 + arg1<<arg2, set carry flag
{name: "ADDSshiftRLreg", argLength: 3, reg: gp31carry, asm: "ADD"}, // arg0 + arg1>>arg2, unsigned shift, set carry flag
{name: "ADDSshiftRAreg", argLength: 3, reg: gp31carry, asm: "ADD"}, // arg0 + arg1>>arg2, signed shift, set carry flag
{name: "SUBSshiftLLreg", argLength: 3, reg: gp31carry, asm: "SUB"}, // arg0 - arg1<<arg2, set carry flag
{name: "SUBSshiftRLreg", argLength: 3, reg: gp31carry, asm: "SUB"}, // arg0 - arg1>>arg2, unsigned shift, set carry flag
{name: "SUBSshiftRAreg", argLength: 3, reg: gp31carry, asm: "SUB"}, // arg0 - arg1>>arg2, signed shift, set carry flag
{name: "RSBSshiftLLreg", argLength: 3, reg: gp31carry, asm: "RSB"}, // arg1<<arg2 - arg0, set carry flag
{name: "RSBSshiftRLreg", argLength: 3, reg: gp31carry, asm: "RSB"}, // arg1>>arg2 - arg0, unsigned shift, set carry flag
{name: "RSBSshiftRAreg", argLength: 3, reg: gp31carry, asm: "RSB"}, // arg1>>arg2 - arg0, signed shift, set carry flag
// comparisons
{name: "CMP", argLength: 2, reg: gp2flags, asm: "CMP", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPconst", argLength: 1, reg: gp1flags, asm: "CMP", aux: "Int32", typ: "Flags"}, // arg0 compare to auxInt
{name: "CMN", argLength: 2, reg: gp2flags, asm: "CMN", typ: "Flags"}, // arg0 compare to -arg1
{name: "CMNconst", argLength: 1, reg: gp1flags, asm: "CMN", aux: "Int32", typ: "Flags"}, // arg0 compare to -auxInt
{name: "TST", argLength: 2, reg: gp2flags, asm: "TST", typ: "Flags", commutative: true}, // arg0 & arg1 compare to 0
{name: "TSTconst", argLength: 1, reg: gp1flags, asm: "TST", aux: "Int32", typ: "Flags"}, // arg0 & auxInt compare to 0
{name: "TEQ", argLength: 2, reg: gp2flags, asm: "TEQ", typ: "Flags", commutative: true}, // arg0 ^ arg1 compare to 0
{name: "TEQconst", argLength: 1, reg: gp1flags, asm: "TEQ", aux: "Int32", typ: "Flags"}, // arg0 ^ auxInt compare to 0
{name: "CMPF", argLength: 2, reg: fp2flags, asm: "CMPF", typ: "Flags"}, // arg0 compare to arg1, float32
{name: "CMPD", argLength: 2, reg: fp2flags, asm: "CMPD", typ: "Flags"}, // arg0 compare to arg1, float64
{name: "CMPshiftLL", argLength: 2, reg: gp2flags, asm: "CMP", aux: "Int32", typ: "Flags"}, // arg0 compare to arg1<<auxInt
{name: "CMPshiftRL", argLength: 2, reg: gp2flags, asm: "CMP", aux: "Int32", typ: "Flags"}, // arg0 compare to arg1>>auxInt, unsigned shift
{name: "CMPshiftRA", argLength: 2, reg: gp2flags, asm: "CMP", aux: "Int32", typ: "Flags"}, // arg0 compare to arg1>>auxInt, signed shift
{name: "CMPshiftLLreg", argLength: 3, reg: gp3flags, asm: "CMP", typ: "Flags"}, // arg0 compare to arg1<<arg2
{name: "CMPshiftRLreg", argLength: 3, reg: gp3flags, asm: "CMP", typ: "Flags"}, // arg0 compare to arg1>>arg2, unsigned shift
{name: "CMPshiftRAreg", argLength: 3, reg: gp3flags, asm: "CMP", typ: "Flags"}, // arg0 compare to arg1>>arg2, signed shift
{name: "CMPF0", argLength: 1, reg: fp1flags, asm: "CMPF", typ: "Flags"}, // arg0 compare to 0, float32
{name: "CMPD0", argLength: 1, reg: fp1flags, asm: "CMPD", typ: "Flags"}, // arg0 compare to 0, float64
// moves
{name: "MOVWconst", argLength: 0, reg: gp01, aux: "Int32", asm: "MOVW", typ: "UInt32", rematerializeable: true}, // 32 low bits of auxint
{name: "MOVFconst", argLength: 0, reg: fp01, aux: "Float64", asm: "MOVF", typ: "Float32", rematerializeable: true}, // auxint as 64-bit float, convert to 32-bit float
{name: "MOVDconst", argLength: 0, reg: fp01, aux: "Float64", asm: "MOVD", typ: "Float64", rematerializeable: true}, // auxint as 64-bit float
{name: "MOVWaddr", argLength: 1, reg: regInfo{inputs: []regMask{buildReg("SP") | buildReg("SB")}, outputs: []regMask{gp}}, aux: "SymOff", asm: "MOVW", rematerializeable: true}, // arg0 + auxInt + aux.(*gc.Sym), arg0=SP/SB
{name: "MOVBload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVB", typ: "Int8"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVBUload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVBU", typ: "UInt8"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVHload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVH", typ: "Int16"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVHUload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVHU", typ: "UInt16"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVWload", argLength: 2, reg: gpload, aux: "SymOff", asm: "MOVW", typ: "UInt32"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVFload", argLength: 2, reg: fpload, aux: "SymOff", asm: "MOVF", typ: "Float32"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVDload", argLength: 2, reg: fpload, aux: "SymOff", asm: "MOVD", typ: "Float64"}, // load from arg0 + auxInt + aux. arg1=mem.
{name: "MOVBstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVB", typ: "Mem"}, // store 1 byte of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVHstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVH", typ: "Mem"}, // store 2 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVWstore", argLength: 3, reg: gpstore, aux: "SymOff", asm: "MOVW", typ: "Mem"}, // store 4 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVFstore", argLength: 3, reg: fpstore, aux: "SymOff", asm: "MOVF", typ: "Mem"}, // store 4 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVDstore", argLength: 3, reg: fpstore, aux: "SymOff", asm: "MOVD", typ: "Mem"}, // store 8 bytes of arg1 to arg0 + auxInt + aux. arg2=mem.
{name: "MOVWloadidx", argLength: 3, reg: gp2load, asm: "MOVW"}, // load from arg0 + arg1. arg2=mem
{name: "MOVWloadshiftLL", argLength: 3, reg: gp2load, asm: "MOVW", aux: "Int32"}, // load from arg0 + arg1<<auxInt. arg2=mem
{name: "MOVWloadshiftRL", argLength: 3, reg: gp2load, asm: "MOVW", aux: "Int32"}, // load from arg0 + arg1>>auxInt, unsigned shift. arg2=mem
{name: "MOVWloadshiftRA", argLength: 3, reg: gp2load, asm: "MOVW", aux: "Int32"}, // load from arg0 + arg1>>auxInt, signed shift. arg2=mem
{name: "MOVWstoreidx", argLength: 4, reg: gp2store, asm: "MOVW"}, // store arg2 to arg0 + arg1. arg3=mem
{name: "MOVWstoreshiftLL", argLength: 4, reg: gp2store, asm: "MOVW", aux: "Int32"}, // store arg2 to arg0 + arg1<<auxInt. arg3=mem
{name: "MOVWstoreshiftRL", argLength: 4, reg: gp2store, asm: "MOVW", aux: "Int32"}, // store arg2 to arg0 + arg1>>auxInt, unsigned shift. arg3=mem
{name: "MOVWstoreshiftRA", argLength: 4, reg: gp2store, asm: "MOVW", aux: "Int32"}, // store arg2 to arg0 + arg1>>auxInt, signed shift. arg3=mem
{name: "MOVBreg", argLength: 1, reg: gp11, asm: "MOVBS"}, // move from arg0, sign-extended from byte
{name: "MOVBUreg", argLength: 1, reg: gp11, asm: "MOVBU"}, // move from arg0, unsign-extended from byte
{name: "MOVHreg", argLength: 1, reg: gp11, asm: "MOVHS"}, // move from arg0, sign-extended from half
{name: "MOVHUreg", argLength: 1, reg: gp11, asm: "MOVHU"}, // move from arg0, unsign-extended from half
{name: "MOVWreg", argLength: 1, reg: gp11, asm: "MOVW"}, // move from arg0
{name: "MOVWnop", argLength: 1, reg: regInfo{inputs: []regMask{gp}, outputs: []regMask{gp}}, resultInArg0: true}, // nop, return arg0 in same register
{name: "MOVWF", argLength: 1, reg: gpfp, asm: "MOVWF"}, // int32 -> float32
{name: "MOVWD", argLength: 1, reg: gpfp, asm: "MOVWD"}, // int32 -> float64
{name: "MOVWUF", argLength: 1, reg: gpfp, asm: "MOVWF"}, // uint32 -> float32, set U bit in the instruction
{name: "MOVWUD", argLength: 1, reg: gpfp, asm: "MOVWD"}, // uint32 -> float64, set U bit in the instruction
{name: "MOVFW", argLength: 1, reg: fpgp, asm: "MOVFW"}, // float32 -> int32
{name: "MOVDW", argLength: 1, reg: fpgp, asm: "MOVDW"}, // float64 -> int32
{name: "MOVFWU", argLength: 1, reg: fpgp, asm: "MOVFW"}, // float32 -> uint32, set U bit in the instruction
{name: "MOVDWU", argLength: 1, reg: fpgp, asm: "MOVDW"}, // float64 -> uint32, set U bit in the instruction
{name: "MOVFD", argLength: 1, reg: fp11, asm: "MOVFD"}, // float32 -> float64
{name: "MOVDF", argLength: 1, reg: fp11, asm: "MOVDF"}, // float64 -> float32
// conditional instructions, for lowering shifts
{name: "CMOVWHSconst", argLength: 2, reg: gp1flags1, asm: "MOVW", aux: "Int32", resultInArg0: true}, // replace arg0 w/ const if flags indicates HS, arg1=flags
{name: "CMOVWLSconst", argLength: 2, reg: gp1flags1, asm: "MOVW", aux: "Int32", resultInArg0: true}, // replace arg0 w/ const if flags indicates LS, arg1=flags
{name: "SRAcond", argLength: 3, reg: gp2flags1, asm: "SRA"}, // arg0 >> 31 if flags indicates HS, arg0 >> arg1 otherwise, signed shift, arg2=flags
// function calls
{name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff", clobberFlags: true}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
{name: "CALLclosure", argLength: 3, reg: regInfo{inputs: []regMask{gpsp, buildReg("R7"), 0}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call function via closure. arg0=codeptr, arg1=closure, arg2=mem, auxint=argsize, returns mem
{name: "CALLdefer", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call deferproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLgo", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call newproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLinter", argLength: 2, reg: regInfo{inputs: []regMask{gp}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call fn by pointer. arg0=codeptr, arg1=mem, auxint=argsize, returns mem
// pseudo-ops
{name: "LessThan", argLength: 1, reg: flagsgp}, // bool, 1 flags encode x<y 0 otherwise.
{name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gpg}}}, // panic if arg0 is nil. arg1=mem.
{name: "Equal", argLength: 1, reg: readflags}, // bool, true flags encode x==y false otherwise.
{name: "NotEqual", argLength: 1, reg: readflags}, // bool, true flags encode x!=y false otherwise.
{name: "LessThan", argLength: 1, reg: readflags}, // bool, true flags encode signed x<y false otherwise.
{name: "LessEqual", argLength: 1, reg: readflags}, // bool, true flags encode signed x<=y false otherwise.
{name: "GreaterThan", argLength: 1, reg: readflags}, // bool, true flags encode signed x>y false otherwise.
{name: "GreaterEqual", argLength: 1, reg: readflags}, // bool, true flags encode signed x>=y false otherwise.
{name: "LessThanU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x<y false otherwise.
{name: "LessEqualU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x<=y false otherwise.
{name: "GreaterThanU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x>y false otherwise.
{name: "GreaterEqualU", argLength: 1, reg: readflags}, // bool, true flags encode unsigned x>=y false otherwise.
// duffzero (must be 4-byte aligned)
// arg0 = address of memory to zero (in R1, changed as side effect)
// arg1 = value to store (always zero)
// arg2 = mem
// auxint = offset into duffzero code to start executing
// returns mem
{
name: "DUFFZERO",
aux: "Int64",
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("R1"), buildReg("R0")},
clobbers: buildReg("R1"),
},
},
// duffcopy (must be 4-byte aligned)
// arg0 = address of dst memory (in R2, changed as side effect)
// arg1 = address of src memory (in R1, changed as side effect)
// arg2 = mem
// auxint = offset into duffcopy code to start executing
// returns mem
{
name: "DUFFCOPY",
aux: "Int64",
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("R2"), buildReg("R1")},
clobbers: buildReg("R0 R1 R2"),
},
},
// large or unaligned zeroing
// arg0 = address of memory to zero (in R1, changed as side effect)
// arg1 = address of the last element to zero
// arg2 = value to store (always zero)
// arg3 = mem
// returns mem
// MOVW.P Rarg2, 4(R1)
// CMP R1, Rarg1
// BLE -2(PC)
{
name: "LoweredZero",
aux: "Int64",
argLength: 4,
reg: regInfo{
inputs: []regMask{buildReg("R1"), gp, gp},
clobbers: buildReg("R1"),
},
clobberFlags: true,
},
// large or unaligned move
// arg0 = address of dst memory (in R2, changed as side effect)
// arg1 = address of src memory (in R1, changed as side effect)
// arg2 = address of the last element of src
// arg3 = mem
// returns mem
// MOVW.P 4(R1), Rtmp
// MOVW.P Rtmp, 4(R2)
// CMP R1, Rarg2
// BLE -3(PC)
{
name: "LoweredMove",
aux: "Int64",
argLength: 4,
reg: regInfo{
inputs: []regMask{buildReg("R2"), buildReg("R1"), gp},
clobbers: buildReg("R1 R2"),
},
clobberFlags: true,
},
// Scheduler ensures LoweredGetClosurePtr occurs only in entry block,
// and sorts it to the very beginning of the block to prevent other
// use of R7 (arm.REGCTXT, the closure pointer)
{name: "LoweredGetClosurePtr", reg: regInfo{outputs: []regMask{buildReg("R7")}}},
// MOVWconvert converts between pointers and integers.
// We have a special op for this so as to not confuse GC
// (particularly stack maps). It takes a memory arg so it
// gets correctly ordered with respect to GC safepoints.
// arg0=ptr/int arg1=mem, output=int/ptr
{name: "MOVWconvert", argLength: 2, reg: gp11, asm: "MOVW"},
// Constant flag values. For any comparison, there are 5 possible
// outcomes: the three from the signed total order (<,==,>) and the
// three from the unsigned total order. The == cases overlap.
// Note: there's a sixth "unordered" outcome for floating-point
// comparisons, but we don't use such a beast yet.
// These ops are for temporary use by rewrite rules. They
// cannot appear in the generated assembly.
{name: "FlagEQ"}, // equal
{name: "FlagLT_ULT"}, // signed < and unsigned <
{name: "FlagLT_UGT"}, // signed < and unsigned >
{name: "FlagGT_UGT"}, // signed > and unsigned <
{name: "FlagGT_ULT"}, // signed > and unsigned >
// (InvertFlags (CMP a b)) == (CMP b a)
// InvertFlags is a pseudo-op which can't appear in assembly output.
{name: "InvertFlags", argLength: 1}, // reverse direction of arg0
}
blocks := []blockData{
@@ -47,22 +496,15 @@ func init() {
{name: "UGE"},
}
archs = append(archs, arch{
name: "ARM",
pkg: "cmd/internal/obj/arm",
genfile: "../../arm/ssa.go",
ops: ops,
blocks: blocks,
regnames: regNames,
name: "ARM",
pkg: "cmd/internal/obj/arm",
genfile: "../../arm/ssa.go",
ops: ops,
blocks: blocks,
regnames: regNamesARM,
gpregmask: gp,
fpregmask: fp,
framepointerreg: -1, // not used
})
}


@@ -0,0 +1,573 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Lowering arithmetic
(Add64 x y) -> (ADD x y)
(AddPtr x y) -> (ADD x y)
(Add32 x y) -> (ADD x y)
(Add16 x y) -> (ADD x y)
(Add8 x y) -> (ADD x y)
(Add64F x y) -> (FADD x y)
(Add32F x y) -> (FADDS x y)
(Sub64 x y) -> (SUB x y)
(SubPtr x y) -> (SUB x y)
(Sub32 x y) -> (SUB x y)
(Sub16 x y) -> (SUB x y)
(Sub8 x y) -> (SUB x y)
(Sub32F x y) -> (FSUBS x y)
(Sub64F x y) -> (FSUB x y)
(Mod16 x y) -> (Mod32 (SignExt16to32 x) (SignExt16to32 y))
(Mod16u x y) -> (Mod32u (ZeroExt16to32 x) (ZeroExt16to32 y))
(Mod8 x y) -> (Mod32 (SignExt8to32 x) (SignExt8to32 y))
(Mod8u x y) -> (Mod32u (ZeroExt8to32 x) (ZeroExt8to32 y))
(Mod64 x y) -> (SUB x (MULLD y (DIVD x y)))
(Mod64u x y) -> (SUB x (MULLD y (DIVDU x y)))
(Mod32 x y) -> (SUB x (MULLW y (DIVW x y)))
(Mod32u x y) -> (SUB x (MULLW y (DIVWU x y)))
(Avg64u <t> x y) -> (ADD (ADD <t> (SRD <t> x (MOVDconst <t> [1])) (SRD <t> y (MOVDconst <t> [1]))) (ANDconst <t> (AND <t> x y) [1]))
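// The Avg64u expansion computes (x>>1) + (y>>1) + (x&y&1), which equals
// (x+y)/2 for all uint64 inputs without overflowing the intermediate sum.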
(Mul64 x y) -> (MULLD x y)
(Mul32 x y) -> (MULLW x y)
(Mul16 x y) -> (MULLW x y)
(Mul8 x y) -> (MULLW x y)
(Div64 x y) -> (DIVD x y)
(Div64u x y) -> (DIVDU x y)
(Div32 x y) -> (DIVW x y)
(Div32u x y) -> (DIVWU x y)
(Div16 x y) -> (DIVW (SignExt16to32 x) (SignExt16to32 y))
(Div16u x y) -> (DIVWU (ZeroExt16to32 x) (ZeroExt16to32 y))
(Div8 x y) -> (DIVW (SignExt8to32 x) (SignExt8to32 y))
(Div8u x y) -> (DIVWU (ZeroExt8to32 x) (ZeroExt8to32 y))
(Hmul64 x y) -> (MULHD x y)
(Hmul64u x y) -> (MULHDU x y)
(Hmul32 x y) -> (MULHW x y)
(Hmul32u x y) -> (MULHWU x y)
(Hmul16 x y) -> (SRAWconst (MULLW <config.fe.TypeInt32()> (SignExt16to32 x) (SignExt16to32 y)) [16])
(Hmul16u x y) -> (SRWconst (MULLW <config.fe.TypeUInt32()> (ZeroExt16to32 x) (ZeroExt16to32 y)) [16])
(Hmul8 x y) -> (SRAWconst (MULLW <config.fe.TypeInt16()> (SignExt8to32 x) (SignExt8to32 y)) [8])
(Hmul8u x y) -> (SRWconst (MULLW <config.fe.TypeUInt16()> (ZeroExt8to32 x) (ZeroExt8to32 y)) [8])
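// For the 8- and 16-bit cases the widened product fits in 32 bits, so the
// high half is a full 32-bit multiply followed by a right shift by the
// operand width: the signed forms sign-extend and shift arithmetically,
// the unsigned forms zero-extend and shift logically.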
(Mul32F x y) -> (FMULS x y)
(Mul64F x y) -> (FMUL x y)
(Div32F x y) -> (FDIVS x y)
(Div64F x y) -> (FDIV x y)
// Lowering float <-> int
(Cvt32to32F x) -> (FRSP (FCFID (Xi2f64 (SignExt32to64 x))))
(Cvt32to64F x) -> (FCFID (Xi2f64 (SignExt32to64 x)))
(Cvt64to32F x) -> (FRSP (FCFID (Xi2f64 x)))
(Cvt64to64F x) -> (FCFID (Xi2f64 x))
(Cvt32Fto32 x) -> (Xf2i64 (FCTIWZ x))
(Cvt32Fto64 x) -> (Xf2i64 (FCTIDZ x))
(Cvt64Fto32 x) -> (Xf2i64 (FCTIWZ x))
(Cvt64Fto64 x) -> (Xf2i64 (FCTIDZ x))
(Cvt32Fto64F x) -> x // Note x will have the wrong type for patterns dependent on Float32/Float64
(Cvt64Fto32F x) -> (FRSP x)
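// Illustrative note: these conversions go through the FP registers.
// Xi2f64 moves the raw integer bits into an FP register, FCFID converts
// them to a double, and FRSP rounds a double result to single precision;
// FCTIWZ/FCTIDZ truncate toward zero on the way back.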
(Sqrt x) -> (FSQRT x)
(Rsh64x64 x y) -> (SRAD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
(Rsh64Ux64 x y) -> (SRD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
(Lsh64x64 x y) -> (SLD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
(Rsh32x64 x y) -> (SRAW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
(Rsh32Ux64 x y) -> (SRW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
(Lsh32x64 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
(Rsh16x64 x y) -> (SRAW (SignExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
(Rsh16Ux64 x y) -> (SRW (ZeroExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
(Lsh16x64 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
(Rsh8x64 x y) -> (SRAW (SignExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
(Rsh8Ux64 x y) -> (SRW (ZeroExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
(Lsh8x64 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
(Rsh64x32 x y) -> (SRAD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
(Rsh64Ux32 x y) -> (SRD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
(Lsh64x32 x y) -> (SLD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
(Rsh32x32 x y) -> (SRAW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
(Rsh32Ux32 x y) -> (SRW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
(Lsh32x32 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
(Rsh16x32 x y) -> (SRAW (SignExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
(Rsh16Ux32 x y) -> (SRW (ZeroExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
(Lsh16x32 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
(Rsh8x32 x y) -> (SRAW (SignExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
(Rsh8Ux32 x y) -> (SRW (ZeroExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
(Lsh8x32 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
(Rsh64x16 x y) -> (SRAD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
(Rsh64Ux16 x y) -> (SRD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
(Lsh64x16 x y) -> (SLD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
(Rsh32x16 x y) -> (SRAW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
(Rsh32Ux16 x y) -> (SRW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
(Lsh32x16 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
(Rsh16x16 x y) -> (SRAW (SignExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
(Rsh16Ux16 x y) -> (SRW (ZeroExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
(Lsh16x16 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
(Rsh8x16 x y) -> (SRAW (SignExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
(Rsh8Ux16 x y) -> (SRW (ZeroExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
(Lsh8x16 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
(Rsh64x8 x y) -> (SRAD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
(Rsh64Ux8 x y) -> (SRD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
(Lsh64x8 x y) -> (SLD x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
(Rsh32x8 x y) -> (SRAW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
(Rsh32Ux8 x y) -> (SRW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
(Lsh32x8 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
(Rsh16x8 x y) -> (SRAW (SignExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
(Rsh16Ux8 x y) -> (SRW (ZeroExt16to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
(Lsh16x8 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
(Rsh8x8 x y) -> (SRAW (SignExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
(Rsh8Ux8 x y) -> (SRW (ZeroExt8to32 x) (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
(Lsh8x8 x y) -> (SLW x (ORN y <config.fe.TypeInt64()> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
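// How the bounded-shift pattern above works (see the carry notes below):
// (ADDconstForCarry [-width] y) sets carry exactly when y >= width
// (unsigned), MaskIfNotCarry yields 0 (carry set) or -1 (carry clear),
// and (ORN y mask) = y | ^mask is therefore y for an in-range shift and
// all ones otherwise; the machine shift treats all ones as out of range,
// producing 0 (or all sign bits for SRAW/SRAD), matching Go's semantics.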
// Potentially useful optimizing rewrites.
// (ADDconstForCarry [k] c), k < 0 && (c < 0 || k+c >= 0) -> CarrySet
// (ADDconstForCarry [k] c), k < 0 && (c >= 0 && k+c < 0) -> CarryClear
// (MaskIfNotCarry CarrySet) -> 0
// (MaskIfNotCarry CarryClear) -> -1
// Lowering constants
(Const8 [val]) -> (MOVWconst [val])
(Const16 [val]) -> (MOVWconst [val])
(Const32 [val]) -> (MOVWconst [val])
(Const64 [val]) -> (MOVDconst [val])
(Const32F [val]) -> (FMOVSconst [val])
(Const64F [val]) -> (FMOVDconst [val])
(ConstNil) -> (MOVDconst [0])
(ConstBool [b]) -> (MOVWconst [b])
(Addr {sym} base) -> (MOVDaddr {sym} base)
// (Addr {sym} base) -> (ADDconst {sym} base)
(OffPtr [off] ptr) -> (ADD (MOVDconst <config.Frontend().TypeInt64()> [off]) ptr)
(And64 x y) -> (AND x y)
(And32 x y) -> (AND x y)
(And16 x y) -> (AND x y)
(And8 x y) -> (AND x y)
(Or64 x y) -> (OR x y)
(Or32 x y) -> (OR x y)
(Or16 x y) -> (OR x y)
(Or8 x y) -> (OR x y)
(Xor64 x y) -> (XOR x y)
(Xor32 x y) -> (XOR x y)
(Xor16 x y) -> (XOR x y)
(Xor8 x y) -> (XOR x y)
(Neg64F x) -> (FNEG x)
(Neg32F x) -> (FNEG x)
(Neg64 x) -> (NEG x)
(Neg32 x) -> (NEG x)
(Neg16 x) -> (NEG x)
(Neg8 x) -> (NEG x)
(Com64 x) -> (XORconst [-1] x)
(Com32 x) -> (XORconst [-1] x)
(Com16 x) -> (XORconst [-1] x)
(Com8 x) -> (XORconst [-1] x)
// Lowering boolean ops
(AndB x y) -> (AND x y)
(OrB x y) -> (OR x y)
(Not x) -> (XORconst [1] x)
// Lowering comparisons
(EqB x y) -> (ANDconst [1] (EQV x y))
(Eq8 x y) -> (Equal (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
(Eq16 x y) -> (Equal (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
(Eq32 x y) -> (Equal (CMPW x y))
(Eq64 x y) -> (Equal (CMP x y))
(Eq32F x y) -> (Equal (FCMPU x y))
(Eq64F x y) -> (Equal (FCMPU x y))
(EqPtr x y) -> (Equal (CMP x y))
(NeqB x y) -> (XOR x y)
(Neq8 x y) -> (NotEqual (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
(Neq16 x y) -> (NotEqual (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
(Neq32 x y) -> (NotEqual (CMPW x y))
(Neq64 x y) -> (NotEqual (CMP x y))
(Neq32F x y) -> (NotEqual (FCMPU x y))
(Neq64F x y) -> (NotEqual (FCMPU x y))
(NeqPtr x y) -> (NotEqual (CMP x y))
(Less8 x y) -> (LessThan (CMPW (SignExt8to32 x) (SignExt8to32 y)))
(Less16 x y) -> (LessThan (CMPW (SignExt16to32 x) (SignExt16to32 y)))
(Less32 x y) -> (LessThan (CMPW x y))
(Less64 x y) -> (LessThan (CMP x y))
(Less32F x y) -> (FLessThan (FCMPU x y))
(Less64F x y) -> (FLessThan (FCMPU x y))
(Less8U x y) -> (LessThan (CMPWU (ZeroExt8to32 x) (ZeroExt8to32 y)))
(Less16U x y) -> (LessThan (CMPWU (ZeroExt16to32 x) (ZeroExt16to32 y)))
(Less32U x y) -> (LessThan (CMPWU x y))
(Less64U x y) -> (LessThan (CMPU x y))
(Leq8 x y) -> (LessEqual (CMPW (SignExt8to32 x) (SignExt8to32 y)))
(Leq16 x y) -> (LessEqual (CMPW (SignExt16to32 x) (SignExt16to32 y)))
(Leq32 x y) -> (LessEqual (CMPW x y))
(Leq64 x y) -> (LessEqual (CMP x y))
(Leq32F x y) -> (FLessEqual (FCMPU x y))
(Leq64F x y) -> (FLessEqual (FCMPU x y))
(Leq8U x y) -> (LessEqual (CMPWU (ZeroExt8to32 x) (ZeroExt8to32 y)))
(Leq16U x y) -> (LessEqual (CMPWU (ZeroExt16to32 x) (ZeroExt16to32 y)))
(Leq32U x y) -> (LessEqual (CMPWU x y))
(Leq64U x y) -> (LessEqual (CMPU x y))
(Greater8 x y) -> (GreaterThan (CMPW (SignExt8to32 x) (SignExt8to32 y)))
(Greater16 x y) -> (GreaterThan (CMPW (SignExt16to32 x) (SignExt16to32 y)))
(Greater32 x y) -> (GreaterThan (CMPW x y))
(Greater64 x y) -> (GreaterThan (CMP x y))
(Greater32F x y) -> (FGreaterThan (FCMPU x y))
(Greater64F x y) -> (FGreaterThan (FCMPU x y))
(Greater8U x y) -> (GreaterThan (CMPWU (ZeroExt8to32 x) (ZeroExt8to32 y)))
(Greater16U x y) -> (GreaterThan (CMPWU (ZeroExt16to32 x) (ZeroExt16to32 y)))
(Greater32U x y) -> (GreaterThan (CMPWU x y))
(Greater64U x y) -> (GreaterThan (CMPU x y))
(Geq8 x y) -> (GreaterEqual (CMPW (SignExt8to32 x) (SignExt8to32 y)))
(Geq16 x y) -> (GreaterEqual (CMPW (SignExt16to32 x) (SignExt16to32 y)))
(Geq32 x y) -> (GreaterEqual (CMPW x y))
(Geq64 x y) -> (GreaterEqual (CMP x y))
(Geq32F x y) -> (FGreaterEqual (FCMPU x y))
(Geq64F x y) -> (FGreaterEqual (FCMPU x y))
(Geq8U x y) -> (GreaterEqual (CMPU (ZeroExt8to32 x) (ZeroExt8to32 y)))
(Geq16U x y) -> (GreaterEqual (CMPU (ZeroExt16to32 x) (ZeroExt16to32 y)))
(Geq32U x y) -> (GreaterEqual (CMPU x y))
(Geq64U x y) -> (GreaterEqual (CMPU x y))
// Absorb pseudo-ops into blocks.
(If (Equal cc) yes no) -> (EQ cc yes no)
(If (NotEqual cc) yes no) -> (NE cc yes no)
(If (LessThan cc) yes no) -> (LT cc yes no)
(If (LessEqual cc) yes no) -> (LE cc yes no)
(If (GreaterThan cc) yes no) -> (GT cc yes no)
(If (GreaterEqual cc) yes no) -> (GE cc yes no)
(If (FLessThan cc) yes no) -> (FLT cc yes no)
(If (FLessEqual cc) yes no) -> (FLE cc yes no)
(If (FGreaterThan cc) yes no) -> (FGT cc yes no)
(If (FGreaterEqual cc) yes no) -> (FGE cc yes no)
(If cond yes no) -> (NE (CMPWconst [0] cond) yes no)
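// Example (illustrative): if x < y { ... } yields
// (If (LessThan (CMP x y)) yes no), which the LessThan rule above turns
// into (LT (CMP x y) yes no), eliminating the materialized boolean.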
// Absorb boolean tests into block
(NE (CMPWconst [0] (Equal cc)) yes no) -> (EQ cc yes no)
(NE (CMPWconst [0] (NotEqual cc)) yes no) -> (NE cc yes no)
(NE (CMPWconst [0] (LessThan cc)) yes no) -> (LT cc yes no)
(NE (CMPWconst [0] (LessEqual cc)) yes no) -> (LE cc yes no)
(NE (CMPWconst [0] (GreaterThan cc)) yes no) -> (GT cc yes no)
(NE (CMPWconst [0] (GreaterEqual cc)) yes no) -> (GE cc yes no)
// (NE (CMPWconst [0] (FLessThan cc)) yes no) -> (FLT cc yes no)
// (NE (CMPWconst [0] (FLessEqual cc)) yes no) -> (FLE cc yes no)
// (NE (CMPWconst [0] (FGreaterThan cc)) yes no) -> (FGT cc yes no)
// (NE (CMPWconst [0] (FGreaterEqual cc)) yes no) -> (FGE cc yes no)
// absorb flag constants into branches
(EQ (FlagEQ) yes no) -> (First nil yes no)
(EQ (FlagLT) yes no) -> (First nil no yes)
(EQ (FlagGT) yes no) -> (First nil no yes)
(NE (FlagEQ) yes no) -> (First nil no yes)
(NE (FlagLT) yes no) -> (First nil yes no)
(NE (FlagGT) yes no) -> (First nil yes no)
(LT (FlagEQ) yes no) -> (First nil no yes)
(LT (FlagLT) yes no) -> (First nil yes no)
(LT (FlagGT) yes no) -> (First nil no yes)
(LE (FlagEQ) yes no) -> (First nil yes no)
(LE (FlagLT) yes no) -> (First nil yes no)
(LE (FlagGT) yes no) -> (First nil no yes)
(GT (FlagEQ) yes no) -> (First nil no yes)
(GT (FlagLT) yes no) -> (First nil no yes)
(GT (FlagGT) yes no) -> (First nil yes no)
(GE (FlagEQ) yes no) -> (First nil yes no)
(GE (FlagLT) yes no) -> (First nil no yes)
(GE (FlagGT) yes no) -> (First nil yes no)
// absorb InvertFlags into branches
(LT (InvertFlags cmp) yes no) -> (GT cmp yes no)
(GT (InvertFlags cmp) yes no) -> (LT cmp yes no)
(LE (InvertFlags cmp) yes no) -> (GE cmp yes no)
(GE (InvertFlags cmp) yes no) -> (LE cmp yes no)
(EQ (InvertFlags cmp) yes no) -> (EQ cmp yes no)
(NE (InvertFlags cmp) yes no) -> (NE cmp yes no)
// (FLT (InvertFlags cmp) yes no) -> (FGT cmp yes no)
// (FGT (InvertFlags cmp) yes no) -> (FLT cmp yes no)
// (FLE (InvertFlags cmp) yes no) -> (FGE cmp yes no)
// (FGE (InvertFlags cmp) yes no) -> (FLE cmp yes no)
// constant comparisons
(CMPWconst (MOVWconst [x]) [y]) && int32(x)==int32(y) -> (FlagEQ)
(CMPWconst (MOVWconst [x]) [y]) && int32(x)<int32(y) -> (FlagLT)
(CMPWconst (MOVWconst [x]) [y]) && int32(x)>int32(y) -> (FlagGT)
(CMPconst (MOVDconst [x]) [y]) && int64(x)==int64(y) -> (FlagEQ)
(CMPconst (MOVDconst [x]) [y]) && int64(x)<int64(y) -> (FlagLT)
(CMPconst (MOVDconst [x]) [y]) && int64(x)>int64(y) -> (FlagGT)
(CMPWUconst (MOVWconst [x]) [y]) && int32(x)==int32(y) -> (FlagEQ)
(CMPWUconst (MOVWconst [x]) [y]) && uint32(x)<uint32(y) -> (FlagLT)
(CMPWUconst (MOVWconst [x]) [y]) && uint32(x)>uint32(y) -> (FlagGT)
(CMPUconst (MOVDconst [x]) [y]) && int64(x)==int64(y) -> (FlagEQ)
(CMPUconst (MOVDconst [x]) [y]) && uint64(x)<uint64(y) -> (FlagLT)
(CMPUconst (MOVDconst [x]) [y]) && uint64(x)>uint64(y) -> (FlagGT)
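// A hedged Go model of these constant folds (the flag names are strings
// here purely for illustration):
//   func cmpWconst(x, y int32) string {
//       switch {
//       case x == y:
//           return "FlagEQ"
//       case x < y:
//           return "FlagLT"
//       default:
//           return "FlagGT"
//       }
//   }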
// other known comparisons
//(CMPconst (MOVBUreg _) [c]) && 0xff < c -> (FlagLT)
//(CMPconst (MOVHUreg _) [c]) && 0xffff < c -> (FlagLT)
//(CMPconst (ANDconst _ [m]) [n]) && 0 <= int32(m) && int32(m) < int32(n) -> (FlagLT)
//(CMPconst (SRLconst _ [c]) [n]) && 0 <= n && 0 < c && c <= 32 && (1<<uint32(32-c)) <= uint32(n) -> (FlagLT)
// absorb flag constants into boolean values
(Equal (FlagEQ)) -> (MOVWconst [1])
(Equal (FlagLT)) -> (MOVWconst [0])
(Equal (FlagGT)) -> (MOVWconst [0])
(NotEqual (FlagEQ)) -> (MOVWconst [0])
(NotEqual (FlagLT)) -> (MOVWconst [1])
(NotEqual (FlagGT)) -> (MOVWconst [1])
(LessThan (FlagEQ)) -> (MOVWconst [0])
(LessThan (FlagLT)) -> (MOVWconst [1])
(LessThan (FlagGT)) -> (MOVWconst [0])
(LessEqual (FlagEQ)) -> (MOVWconst [1])
(LessEqual (FlagLT)) -> (MOVWconst [1])
(LessEqual (FlagGT)) -> (MOVWconst [0])
(GreaterThan (FlagEQ)) -> (MOVWconst [0])
(GreaterThan (FlagLT)) -> (MOVWconst [0])
(GreaterThan (FlagGT)) -> (MOVWconst [1])
(GreaterEqual (FlagEQ)) -> (MOVWconst [1])
(GreaterEqual (FlagLT)) -> (MOVWconst [0])
(GreaterEqual (FlagGT)) -> (MOVWconst [1])
// absorb InvertFlags into boolean values
(Equal (InvertFlags x)) -> (Equal x)
(NotEqual (InvertFlags x)) -> (NotEqual x)
(LessThan (InvertFlags x)) -> (GreaterThan x)
(GreaterThan (InvertFlags x)) -> (LessThan x)
(LessEqual (InvertFlags x)) -> (GreaterEqual x)
(GreaterEqual (InvertFlags x)) -> (LessEqual x)
(FLessThan (InvertFlags x)) -> (FGreaterThan x)
(FGreaterThan (InvertFlags x)) -> (FLessThan x)
(FLessEqual (InvertFlags x)) -> (FGreaterEqual x)
(FGreaterEqual (InvertFlags x)) -> (FLessEqual x)
// Lowering loads
(Load <t> ptr mem) && (is64BitInt(t) || isPtr(t)) -> (MOVDload ptr mem)
(Load <t> ptr mem) && is32BitInt(t) && isSigned(t) -> (MOVWload ptr mem)
(Load <t> ptr mem) && is32BitInt(t) && !isSigned(t) -> (MOVWZload ptr mem)
(Load <t> ptr mem) && is16BitInt(t) && isSigned(t) -> (MOVHload ptr mem)
(Load <t> ptr mem) && is16BitInt(t) && !isSigned(t) -> (MOVHZload ptr mem)
(Load <t> ptr mem) && (t.IsBoolean() || (is8BitInt(t) && isSigned(t))) -> (MOVBload ptr mem)
(Load <t> ptr mem) && is8BitInt(t) && !isSigned(t) -> (MOVBZload ptr mem)
(Load <t> ptr mem) && is32BitFloat(t) -> (FMOVSload ptr mem)
(Load <t> ptr mem) && is64BitFloat(t) -> (FMOVDload ptr mem)
(Store [8] ptr val mem) && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
(Store [8] ptr val mem) && is32BitFloat(val.Type) -> (FMOVDstore ptr val mem) // glitch from (Cvt32Fto64F x) -> x -- type is wrong
(Store [4] ptr val mem) && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
(Store [8] ptr val mem) && (is64BitInt(val.Type) || isPtr(val.Type)) -> (MOVDstore ptr val mem)
(Store [4] ptr val mem) && is32BitInt(val.Type) -> (MOVWstore ptr val mem)
(Store [2] ptr val mem) -> (MOVHstore ptr val mem)
(Store [1] ptr val mem) -> (MOVBstore ptr val mem)
(Zero [s] _ mem) && SizeAndAlign(s).Size() == 0 -> mem
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 1 -> (MOVBstorezero destptr mem)
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 2 && SizeAndAlign(s).Align()%2 == 0 ->
(MOVHstorezero destptr mem)
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 2 ->
(MOVBstorezero [1] destptr
(MOVBstorezero [0] destptr mem))
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 4 && SizeAndAlign(s).Align()%4 == 0 ->
(MOVWstorezero destptr mem)
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 4 && SizeAndAlign(s).Align()%2 == 0 ->
(MOVHstorezero [2] destptr
(MOVHstorezero [0] destptr mem))
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 4 ->
(MOVBstorezero [3] destptr
(MOVBstorezero [2] destptr
(MOVBstorezero [1] destptr
(MOVBstorezero [0] destptr mem))))
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 8 && SizeAndAlign(s).Align()%8 == 0 ->
(MOVDstorezero [0] destptr mem)
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 8 && SizeAndAlign(s).Align()%4 == 0 ->
(MOVWstorezero [4] destptr
(MOVWstorezero [0] destptr mem))
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 8 && SizeAndAlign(s).Align()%2 == 0 ->
(MOVHstorezero [6] destptr
(MOVHstorezero [4] destptr
(MOVHstorezero [2] destptr
(MOVHstorezero [0] destptr mem))))
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 3 ->
(MOVBstorezero [2] destptr
(MOVBstorezero [1] destptr
(MOVBstorezero [0] destptr mem)))
// Zero small numbers of words directly.
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 16 && SizeAndAlign(s).Align()%8 == 0 ->
(MOVDstorezero [8] destptr
(MOVDstorezero [0] destptr mem))
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 24 && SizeAndAlign(s).Align()%8 == 0 ->
(MOVDstorezero [16] destptr
(MOVDstorezero [8] destptr
(MOVDstorezero [0] destptr mem)))
(Zero [s] destptr mem) && SizeAndAlign(s).Size() == 32 && SizeAndAlign(s).Align()%8 == 0 ->
(MOVDstorezero [24] destptr
(MOVDstorezero [16] destptr
(MOVDstorezero [8] destptr
(MOVDstorezero [0] destptr mem))))
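// For example, the aligned 16-byte case is just two doubleword stores; a
// rough Go analogue, with unsafe stores standing in for MOVDstorezero:
//   func zero16(p unsafe.Pointer) {
//       *(*uint64)(p) = 0                // MOVDstorezero [0] destptr (innermost, runs first)
//       *(*uint64)(unsafe.Add(p, 8)) = 0 // MOVDstorezero [8] destptr
//   }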
// Large zeroing uses a loop
(Zero [s] ptr mem)
&& (SizeAndAlign(s).Size() > 512 || config.noDuffDevice) || SizeAndAlign(s).Align()%8 != 0 ->
(LoweredZero [SizeAndAlign(s).Align()]
ptr
(ADDconst <ptr.Type> ptr [SizeAndAlign(s).Size()-moveSize(SizeAndAlign(s).Align(), config)])
mem)
// moves
(Move [s] _ _ mem) && SizeAndAlign(s).Size() == 0 -> mem
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 1 -> (MOVBstore dst (MOVBZload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 2 && SizeAndAlign(s).Align()%2 == 0 ->
(MOVHstore dst (MOVHZload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 2 ->
(MOVBstore [1] dst (MOVBZload [1] src mem)
(MOVBstore dst (MOVBZload src mem) mem))
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 4 && SizeAndAlign(s).Align()%4 == 0 ->
(MOVWstore dst (MOVWload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 4 && SizeAndAlign(s).Align()%2 == 0 ->
(MOVHstore [2] dst (MOVHZload [2] src mem)
(MOVHstore dst (MOVHZload src mem) mem))
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 4 ->
(MOVBstore [3] dst (MOVBZload [3] src mem)
(MOVBstore [2] dst (MOVBZload [2] src mem)
(MOVBstore [1] dst (MOVBZload [1] src mem)
(MOVBstore dst (MOVBZload src mem) mem))))
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 8 && SizeAndAlign(s).Align()%8 == 0 ->
(MOVDstore dst (MOVDload src mem) mem)
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 8 && SizeAndAlign(s).Align()%4 == 0 ->
(MOVWstore [4] dst (MOVWZload [4] src mem)
(MOVWstore dst (MOVWZload src mem) mem))
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 8 && SizeAndAlign(s).Align()%2 == 0->
(MOVHstore [6] dst (MOVHZload [6] src mem)
(MOVHstore [4] dst (MOVHZload [4] src mem)
(MOVHstore [2] dst (MOVHZload [2] src mem)
(MOVHstore dst (MOVHZload src mem) mem))))
(Move [s] dst src mem) && SizeAndAlign(s).Size() == 3 ->
(MOVBstore [2] dst (MOVBZload [2] src mem)
(MOVBstore [1] dst (MOVBZload [1] src mem)
(MOVBstore dst (MOVBZload src mem) mem)))
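// For example, the unaligned 4-byte case pairs byte loads with byte stores;
// a hedged Go analogue (the innermost store in the rule runs first):
//   func move4(dst, src []byte) {
//       dst[0] = src[0] // (MOVBstore dst (MOVBZload src mem) mem)
//       dst[1] = src[1]
//       dst[2] = src[2]
//       dst[3] = src[3] // outermost MOVBstore [3]
//   }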
// Large move uses a loop
(Move [s] dst src mem)
&& (SizeAndAlign(s).Size() > 512 || config.noDuffDevice) || SizeAndAlign(s).Align()%8 != 0 ->
(LoweredMove [SizeAndAlign(s).Align()]
dst
src
(ADDconst <src.Type> src [SizeAndAlign(s).Size()-moveSize(SizeAndAlign(s).Align(), config)])
mem)
// Lowering calls
(StaticCall [argwid] {target} mem) -> (CALLstatic [argwid] {target} mem)
(ClosureCall [argwid] entry closure mem) -> (CALLclosure [argwid] entry closure mem)
(DeferCall [argwid] mem) -> (CALLdefer [argwid] mem)
(GoCall [argwid] mem) -> (CALLgo [argwid] mem)
(InterCall [argwid] entry mem) -> (CALLinter [argwid] entry mem)
// Miscellaneous
(Convert <t> x mem) -> (MOVDconvert <t> x mem)
(GetClosurePtr) -> (LoweredGetClosurePtr)
(IsNonNil ptr) -> (NotEqual (CMPconst [0] ptr))
(IsInBounds idx len) -> (LessThan (CMPU idx len))
(IsSliceInBounds idx len) -> (LessEqual (CMPU idx len))
(NilCheck ptr mem) -> (LoweredNilCheck ptr mem)
// Optimizations
(ADD (MOVDconst [c]) x) && int64(int32(c)) == c -> (ADDconst [c] x)
(ADD x (MOVDconst [c])) && int64(int32(c)) == c -> (ADDconst [c] x)
// Fold offsets for stores.
(MOVDstore [off1] {sym} (ADDconst [off2] x) val mem) && is16Bit(off1+off2) -> (MOVDstore [off1+off2] {sym} x val mem)
(MOVWstore [off1] {sym} (ADDconst [off2] x) val mem) && is16Bit(off1+off2) -> (MOVWstore [off1+off2] {sym} x val mem)
(MOVHstore [off1] {sym} (ADDconst [off2] x) val mem) && is16Bit(off1+off2) -> (MOVHstore [off1+off2] {sym} x val mem)
(MOVBstore [off1] {sym} (ADDconst [off2] x) val mem) && is16Bit(off1+off2) -> (MOVBstore [off1+off2] {sym} x val mem)
// Store of zero -> storezero
(MOVDstore [off] {sym} ptr (MOVDconst [c]) mem) && c == 0 -> (MOVDstorezero [off] {sym} ptr mem)
(MOVWstore [off] {sym} ptr (MOVDconst [c]) mem) && c == 0 -> (MOVWstorezero [off] {sym} ptr mem)
(MOVHstore [off] {sym} ptr (MOVDconst [c]) mem) && c == 0 -> (MOVHstorezero [off] {sym} ptr mem)
(MOVBstore [off] {sym} ptr (MOVDconst [c]) mem) && c == 0 -> (MOVBstorezero [off] {sym} ptr mem)
// Fold offsets for storezero
(MOVDstorezero [off1] {sym} (ADDconst [off2] x) mem) && is16Bit(off1+off2) ->
(MOVDstorezero [off1+off2] {sym} x mem)
(MOVWstorezero [off1] {sym} (ADDconst [off2] x) mem) && is16Bit(off1+off2) ->
(MOVWstorezero [off1+off2] {sym} x mem)
(MOVHstorezero [off1] {sym} (ADDconst [off2] x) mem) && is16Bit(off1+off2) ->
(MOVHstorezero [off1+off2] {sym} x mem)
(MOVBstorezero [off1] {sym} (ADDconst [off2] x) mem) && is16Bit(off1+off2) ->
(MOVBstorezero [off1+off2] {sym} x mem)
// Lowering extension
// Note: we always extend to 64 bits even though some ops don't need that many result bits.
(SignExt8to16 x) -> (MOVBreg x)
(SignExt8to32 x) -> (MOVBreg x)
(SignExt8to64 x) -> (MOVBreg x)
(SignExt16to32 x) -> (MOVHreg x)
(SignExt16to64 x) -> (MOVHreg x)
(SignExt32to64 x) -> (MOVWreg x)
(ZeroExt8to16 x) -> (MOVBZreg x)
(ZeroExt8to32 x) -> (MOVBZreg x)
(ZeroExt8to64 x) -> (MOVBZreg x)
(ZeroExt16to32 x) -> (MOVHZreg x)
(ZeroExt16to64 x) -> (MOVHZreg x)
(ZeroExt32to64 x) -> (MOVWZreg x)
(Trunc16to8 x) -> (MOVBreg x)
(Trunc32to8 x) -> (MOVBreg x)
(Trunc32to16 x) -> (MOVHreg x)
(Trunc64to8 x) -> (MOVBreg x)
(Trunc64to16 x) -> (MOVHreg x)
(Trunc64to32 x) -> (MOVWreg x)


@@ -0,0 +1,395 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import "strings"
// Notes:
// - Less-than-64-bit integer types live in the low portion of registers.
// For now, the upper portion is junk; sign/zero-extension might be optimized in the future, but not yet.
// - Boolean types are zero or 1; stored in a byte, but loaded with AMOVBZ so the upper bytes of a register are zero.
// - *const instructions may use a constant larger than the instruction can encode.
// In this case the assembler expands to multiple instructions and uses the tmp
// register (R31).
var regNamesPPC64 = []string{
// "R0", // REGZERO
"SP", // REGSP
"SB", // REGSB
"R3",
"R4",
"R5",
"R6",
"R7",
"R8",
"R9",
"R10",
"R11", // REGCTXT for closures
"R12",
"R13", // REGTLS
"R14",
"R15",
"R16",
"R17",
"R18",
"R19",
"R20",
"R21",
"R22",
"R23",
"R24",
"R25",
"R26",
"R27",
"R28",
"R29",
"g", // REGG. Using name "g" and setting Config.hasGReg makes it "just happen".
"R31", // REGTMP
"F0",
"F1",
"F2",
"F3",
"F4",
"F5",
"F6",
"F7",
"F8",
"F9",
"F10",
"F11",
"F12",
"F13",
"F14",
"F15",
"F16",
"F17",
"F18",
"F19",
"F20",
"F21",
"F22",
"F23",
"F24",
"F25",
"F26",
// "F27", // reserved for "floating conversion constant"
// "F28", // 0.0
// "F29", // 0.5
// "F30", // 1.0
// "F31", // 2.0
// "CR0",
// "CR1",
// "CR2",
// "CR3",
// "CR4",
// "CR5",
// "CR6",
// "CR7",
// "CR",
// "XER",
// "LR",
// "CTR",
}
func init() {
// Make map from reg names to reg integers.
if len(regNamesPPC64) > 64 {
panic("too many registers")
}
num := map[string]int{}
for i, name := range regNamesPPC64 {
num[name] = i
}
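// buildReg converts a space-separated list of register names into a
// regMask with one bit set per named register; unknown names panic.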
buildReg := func(s string) regMask {
m := regMask(0)
for _, r := range strings.Split(s, " ") {
if n, ok := num[r]; ok {
m |= regMask(1) << uint(n)
continue
}
panic("register " + r + " not found")
}
return m
}
var (
gp = buildReg("R3 R4 R5 R6 R7 R8 R9 R10 R11 R12 R14 R15 R16 R17 R18 R19 R20 R21 R22 R23 R24 R25 R26 R27 R28 R29")
fp = buildReg("F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16 F17 F18 F19 F20 F21 F22 F23 F24 F25 F26")
sp = buildReg("SP")
sb = buildReg("SB")
// gr = buildReg("g")
// cr = buildReg("CR")
// ctr = buildReg("CTR")
// lr = buildReg("LR")
tmp = buildReg("R31")
ctxt = buildReg("R11")
// tls = buildReg("R13")
gp01 = regInfo{inputs: nil, outputs: []regMask{gp}}
gp11 = regInfo{inputs: []regMask{gp | sp | sb}, outputs: []regMask{gp}}
gp21 = regInfo{inputs: []regMask{gp | sp | sb, gp | sp | sb}, outputs: []regMask{gp}}
gp1cr = regInfo{inputs: []regMask{gp | sp | sb}}
gp2cr = regInfo{inputs: []regMask{gp | sp | sb, gp | sp | sb}}
crgp = regInfo{inputs: nil, outputs: []regMask{gp}}
gpload = regInfo{inputs: []regMask{gp | sp | sb}, outputs: []regMask{gp}}
gpstore = regInfo{inputs: []regMask{gp | sp | sb, gp | sp | sb}}
gpstorezero = regInfo{inputs: []regMask{gp | sp | sb}} // ppc64.REGZERO is reserved zero value
fp01 = regInfo{inputs: nil, outputs: []regMask{fp}}
fp11 = regInfo{inputs: []regMask{fp}, outputs: []regMask{fp}}
fpgp = regInfo{inputs: []regMask{fp}, outputs: []regMask{gp}}
gpfp = regInfo{inputs: []regMask{gp}, outputs: []regMask{fp}}
fp21 = regInfo{inputs: []regMask{fp, fp}, outputs: []regMask{fp}}
fp2cr = regInfo{inputs: []regMask{fp, fp}}
fpload = regInfo{inputs: []regMask{gp | sp | sb}, outputs: []regMask{fp}}
fpstore = regInfo{inputs: []regMask{gp | sp | sb, fp}}
callerSave = regMask(gp | fp)
)
ops := []opData{
{name: "ADD", argLength: 2, reg: gp21, asm: "ADD", commutative: true}, // arg0 + arg1
{name: "ADDconst", argLength: 1, reg: gp11, asm: "ADD", aux: "SymOff"}, // arg0 + auxInt + aux.(*gc.Sym)
{name: "FADD", argLength: 2, reg: fp21, asm: "FADD", commutative: true}, // arg0+arg1
{name: "FADDS", argLength: 2, reg: fp21, asm: "FADDS", commutative: true}, // arg0+arg1
{name: "SUB", argLength: 2, reg: gp21, asm: "SUB"}, // arg0-arg1
{name: "FSUB", argLength: 2, reg: fp21, asm: "FSUB"}, // arg0-arg1
{name: "FSUBS", argLength: 2, reg: fp21, asm: "FSUBS"}, // arg0-arg1
{name: "MULLD", argLength: 2, reg: gp21, asm: "MULLD", typ: "Int64", commutative: true}, // arg0*arg1 (signed 64-bit)
{name: "MULLW", argLength: 2, reg: gp21, asm: "MULLW", typ: "Int32", commutative: true}, // arg0*arg1 (signed 32-bit)
{name: "MULHD", argLength: 2, reg: gp21, asm: "MULHD", commutative: true}, // (arg0 * arg1) >> 64, signed
{name: "MULHW", argLength: 2, reg: gp21, asm: "MULHW", commutative: true}, // (arg0 * arg1) >> 32, signed
{name: "MULHDU", argLength: 2, reg: gp21, asm: "MULHDU", commutative: true}, // (arg0 * arg1) >> 64, unsigned
{name: "MULHWU", argLength: 2, reg: gp21, asm: "MULHWU", commutative: true}, // (arg0 * arg1) >> 32, unsigned
{name: "FMUL", argLength: 2, reg: fp21, asm: "FMUL", commutative: true}, // arg0*arg1
{name: "FMULS", argLength: 2, reg: fp21, asm: "FMULS", commutative: true}, // arg0*arg1
{name: "SRAD", argLength: 2, reg: gp21, asm: "SRAD"}, // arg0 >>a arg1, 64 bits (all sign if arg1 & 64 != 0)
{name: "SRAW", argLength: 2, reg: gp21, asm: "SRAW"}, // arg0 >>a arg1, 32 bits (all sign if arg1 & 32 != 0)
{name: "SRD", argLength: 2, reg: gp21, asm: "SRD"}, // arg0 >> arg1, 64 bits (0 if arg1 & 64 != 0)
{name: "SRW", argLength: 2, reg: gp21, asm: "SRW"}, // arg0 >> arg1, 32 bits (0 if arg1 & 32 != 0)
{name: "SLD", argLength: 2, reg: gp21, asm: "SLD"}, // arg0 << arg1, 64 bits (0 if arg1 & 64 != 0)
{name: "SLW", argLength: 2, reg: gp21, asm: "SLW"}, // arg0 << arg1, 32 bits (0 if arg1 & 32 != 0)
{name: "ADDconstForCarry", argLength: 1, reg: regInfo{inputs: []regMask{gp | sp | sb}, clobbers: tmp}, aux: "Int16", asm: "ADDC", typ: "Flags"}, // _, carry := arg0 + aux
{name: "MaskIfNotCarry", argLength: 1, reg: crgp, asm: "ADDME", typ: "Int64"}, // carry - 1 (if carry then 0 else -1)
{name: "SRADconst", argLength: 1, reg: gp11, asm: "SRAD", aux: "Int64"}, // arg0 >>a aux, 64 bits
{name: "SRAWconst", argLength: 1, reg: gp11, asm: "SRAW", aux: "Int64"}, // arg0 >>a aux, 32 bits
{name: "SRDconst", argLength: 1, reg: gp11, asm: "SRD", aux: "Int64"}, // arg0 >> aux, 64 bits
{name: "SRWconst", argLength: 1, reg: gp11, asm: "SRW", aux: "Int64"}, // arg0 >> aux, 32 bits
{name: "SLDconst", argLength: 1, reg: gp11, asm: "SLD", aux: "Int64"}, // arg0 << aux, 64 bits
{name: "SLWconst", argLength: 1, reg: gp11, asm: "SLW", aux: "Int64"}, // arg0 << aux, 32 bits
{name: "FDIV", argLength: 2, reg: fp21, asm: "FDIV"}, // arg0/arg1
{name: "FDIVS", argLength: 2, reg: fp21, asm: "FDIVS"}, // arg0/arg1
{name: "DIVD", argLength: 2, reg: gp21, asm: "DIVD", typ: "Int64"}, // arg0/arg1 (signed 64-bit)
{name: "DIVW", argLength: 2, reg: gp21, asm: "DIVW", typ: "Int32"}, // arg0/arg1 (signed 32-bit)
{name: "DIVDU", argLength: 2, reg: gp21, asm: "DIVDU", typ: "Int64"}, // arg0/arg1 (unsigned 64-bit)
{name: "DIVWU", argLength: 2, reg: gp21, asm: "DIVWU", typ: "Int32"}, // arg0/arg1 (unsigned 32-bit)
// MOD is implemented as rem := arg0 - (arg0/arg1) * arg1
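// A hedged Go rendering of that lowering (illustrative only):
//   func mod(a, b int64) int64 { return a - (a/b)*b } // DIVD, MULLD, SUB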
// Conversions are all float-to-float register operations. "Integer" refers to encoding in the FP register.
{name: "FCTIDZ", argLength: 1, reg: fp11, asm: "FCTIDZ", typ: "Float64"}, // convert float to 64-bit int round towards zero
{name: "FCTIWZ", argLength: 1, reg: fp11, asm: "FCTIWZ", typ: "Float64"}, // convert float to 32-bit int round towards zero
{name: "FCFID", argLength: 1, reg: fp11, asm: "FCFID", typ: "Float64"}, // convert 64-bit integer to float
{name: "FRSP", argLength: 1, reg: fp11, asm: "FRSP", typ: "Float64"}, // round float to 32-bit value
// Movement between float and integer registers with no change in bits; accomplished with stores+loads on PPC.
// Because the 32-bit load-literal-bits instructions have impoverished addressability, always widen the
data and use FMOVDload and FMOVDstore instead (this also dodges endianness issues).
// There are optimizations that should apply -- (Xi2f64 (MOVWload (not-ADD-ptr+offset))) could use
// the word-load instructions, and (Xi2f64 (MOVDload ptr)) can be (FMOVDload ptr).
{name: "Xf2i64", argLength: 1, reg: fpgp, typ: "Int64"}, // move 64 bits of F register into G register
{name: "Xi2f64", argLength: 1, reg: gpfp, typ: "Float64"}, // move 64 bits of G register into F register
{name: "AND", argLength: 2, reg: gp21, asm: "AND", commutative: true}, // arg0&arg1
{name: "ANDN", argLength: 2, reg: gp21, asm: "ANDN"}, // arg0&^arg1
{name: "OR", argLength: 2, reg: gp21, asm: "OR", commutative: true}, // arg0|arg1
{name: "ORN", argLength: 2, reg: gp21, asm: "ORN"}, // arg0|^arg1
{name: "XOR", argLength: 2, reg: gp21, asm: "XOR", typ: "Int64", commutative: true}, // arg0^arg1
{name: "EQV", argLength: 2, reg: gp21, asm: "EQV", typ: "Int64", commutative: true}, // arg0^^arg1
{name: "NEG", argLength: 1, reg: gp11, asm: "NEG"}, // -arg0 (integer)
{name: "FNEG", argLength: 1, reg: fp11, asm: "FNEG"}, // -arg0 (floating point)
{name: "FSQRT", argLength: 1, reg: fp11, asm: "FSQRT"}, // sqrt(arg0) (floating point)
{name: "FSQRTS", argLength: 1, reg: fp11, asm: "FSQRTS"}, // sqrt(arg0) (floating point, single precision)
{name: "ORconst", argLength: 1, reg: gp11, asm: "OR", aux: "Int64"}, // arg0|aux
{name: "XORconst", argLength: 1, reg: gp11, asm: "XOR", aux: "Int64"}, // arg0^aux
{name: "ANDconst", argLength: 1, reg: regInfo{inputs: []regMask{gp | sp | sb}, outputs: []regMask{gp}}, asm: "ANDCC", aux: "Int64", clobberFlags: true}, // arg0&aux // and-immediate sets CC on PPC, always.
{name: "MOVBreg", argLength: 1, reg: gp11, asm: "MOVB", typ: "Int64"}, // sign extend int8 to int64
{name: "MOVBZreg", argLength: 1, reg: gp11, asm: "MOVBZ", typ: "Int64"}, // zero extend uint8 to uint64
{name: "MOVHreg", argLength: 1, reg: gp11, asm: "MOVH", typ: "Int64"}, // sign extend int16 to int64
{name: "MOVHZreg", argLength: 1, reg: gp11, asm: "MOVHZ", typ: "Int64"}, // zero extend uint16 to uint64
{name: "MOVWreg", argLength: 1, reg: gp11, asm: "MOVW", typ: "Int64"}, // sign extend int32 to int64
{name: "MOVWZreg", argLength: 1, reg: gp11, asm: "MOVWZ", typ: "Int64"}, // zero extend uint32 to uint64
{name: "MOVBload", argLength: 2, reg: gpload, asm: "MOVB", aux: "SymOff", typ: "Int8"}, // sign extend int8 to int64
{name: "MOVBZload", argLength: 2, reg: gpload, asm: "MOVBZ", aux: "SymOff", typ: "UInt8"}, // zero extend uint8 to uint64
{name: "MOVHload", argLength: 2, reg: gpload, asm: "MOVH", aux: "SymOff", typ: "Int16"}, // sign extend int16 to int64
{name: "MOVHZload", argLength: 2, reg: gpload, asm: "MOVHZ", aux: "SymOff", typ: "UInt16"}, // zero extend uint16 to uint64
{name: "MOVWload", argLength: 2, reg: gpload, asm: "MOVW", aux: "SymOff", typ: "Int32"}, // sign extend int32 to int64
{name: "MOVWZload", argLength: 2, reg: gpload, asm: "MOVWZ", aux: "SymOff", typ: "UInt32"}, // zero extend uint32 to uint64
{name: "MOVDload", argLength: 2, reg: gpload, asm: "MOVD", aux: "SymOff", typ: "Int64"},
{name: "FMOVDload", argLength: 2, reg: fpload, asm: "FMOVD", typ: "Float64"},
{name: "FMOVSload", argLength: 2, reg: fpload, asm: "FMOVS", typ: "Float32"},
{name: "MOVBstore", argLength: 3, reg: gpstore, asm: "MOVB", aux: "SymOff", typ: "Mem"},
{name: "MOVHstore", argLength: 3, reg: gpstore, asm: "MOVH", aux: "SymOff", typ: "Mem"},
{name: "MOVWstore", argLength: 3, reg: gpstore, asm: "MOVW", aux: "SymOff", typ: "Mem"},
{name: "MOVDstore", argLength: 3, reg: gpstore, asm: "MOVD", aux: "SymOff", typ: "Mem"},
{name: "FMOVDstore", argLength: 3, reg: fpstore, asm: "FMOVD", aux: "SymOff", typ: "Mem"},
{name: "FMOVSstore", argLength: 3, reg: fpstore, asm: "FMOVS", aux: "SymOff", typ: "Mem"},
{name: "MOVBstorezero", argLength: 2, reg: gpstorezero, asm: "MOVB", aux: "SymOff", typ: "Mem"}, // store zero byte to arg0+aux. arg1=mem
{name: "MOVHstorezero", argLength: 2, reg: gpstorezero, asm: "MOVH", aux: "SymOff", typ: "Mem"}, // store zero 2 bytes to ...
{name: "MOVWstorezero", argLength: 2, reg: gpstorezero, asm: "MOVW", aux: "SymOff", typ: "Mem"}, // store zero 4 bytes to ...
{name: "MOVDstorezero", argLength: 2, reg: gpstorezero, asm: "MOVD", aux: "SymOff", typ: "Mem"}, // store zero 8 bytes to ...
{name: "MOVDaddr", argLength: 1, reg: regInfo{inputs: []regMask{sp | sb}, outputs: []regMask{gp}}, aux: "SymOff", asm: "MOVD", rematerializeable: true}, // arg0 + auxInt + aux.(*gc.Sym), arg0=SP/SB
{name: "MOVDconst", argLength: 0, reg: gp01, aux: "Int64", asm: "MOVD", rematerializeable: true}, //
{name: "MOVWconst", argLength: 0, reg: gp01, aux: "Int32", asm: "MOVW", rematerializeable: true}, // 32 low bits of auxint
{name: "FMOVDconst", argLength: 0, reg: fp01, aux: "Float64", asm: "FMOVD", rematerializeable: true}, //
{name: "FMOVSconst", argLength: 0, reg: fp01, aux: "Float32", asm: "FMOVS", rematerializeable: true}, //
{name: "FCMPU", argLength: 2, reg: fp2cr, asm: "FCMPU", typ: "Flags"},
{name: "CMP", argLength: 2, reg: gp2cr, asm: "CMP", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPU", argLength: 2, reg: gp2cr, asm: "CMPU", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPW", argLength: 2, reg: gp2cr, asm: "CMPW", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPWU", argLength: 2, reg: gp2cr, asm: "CMPWU", typ: "Flags"}, // arg0 compare to arg1
{name: "CMPconst", argLength: 1, reg: gp1cr, asm: "CMP", aux: "Int64", typ: "Flags"},
{name: "CMPUconst", argLength: 1, reg: gp1cr, asm: "CMPU", aux: "Int64", typ: "Flags"},
{name: "CMPWconst", argLength: 1, reg: gp1cr, asm: "CMPW", aux: "Int32", typ: "Flags"},
{name: "CMPWUconst", argLength: 1, reg: gp1cr, asm: "CMPWU", aux: "Int32", typ: "Flags"},
// pseudo-ops
{name: "Equal", argLength: 1, reg: crgp}, // bool, true flags encode x==y false otherwise.
{name: "NotEqual", argLength: 1, reg: crgp}, // bool, true flags encode x!=y false otherwise.
{name: "LessThan", argLength: 1, reg: crgp}, // bool, true flags encode x<y false otherwise.
{name: "FLessThan", argLength: 1, reg: crgp}, // bool, true flags encode x<y false otherwise.
{name: "LessEqual", argLength: 1, reg: crgp}, // bool, true flags encode x<=y false otherwise.
{name: "FLessEqual", argLength: 1, reg: crgp}, // bool, true flags encode x<=y false otherwise; PPC <= === !> which is wrong for NaN
{name: "GreaterThan", argLength: 1, reg: crgp}, // bool, true flags encode x>y false otherwise.
{name: "FGreaterThan", argLength: 1, reg: crgp}, // bool, true flags encode x>y false otherwise.
{name: "GreaterEqual", argLength: 1, reg: crgp}, // bool, true flags encode x>=y false otherwise.
{name: "FGreaterEqual", argLength: 1, reg: crgp}, // bool, true flags encode x>=y false otherwise.; PPC >= === !< which is wrong for NaN
// Scheduler ensures LoweredGetClosurePtr occurs only in entry block,
// and sorts it to the very beginning of the block to prevent other
// use of the closure pointer.
{name: "LoweredGetClosurePtr", reg: regInfo{outputs: []regMask{ctxt}}},
// arg0=ptr, arg1=mem, returns void. Faults if ptr is nil.
{name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gp | sp | sb}, clobbers: tmp}, clobberFlags: true},
// Convert pointer to integer, takes a memory operand for ordering.
{name: "MOVDconvert", argLength: 2, reg: gp11, asm: "MOVD"},
{name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff", clobberFlags: true}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
{name: "CALLclosure", argLength: 3, reg: regInfo{inputs: []regMask{gp | sp, ctxt, 0}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call function via closure. arg0=codeptr, arg1=closure, arg2=mem, auxint=argsize, returns mem
{name: "CALLdefer", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call deferproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLgo", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call newproc. arg0=mem, auxint=argsize, returns mem
{name: "CALLinter", argLength: 2, reg: regInfo{inputs: []regMask{gp}, clobbers: callerSave}, aux: "Int64", clobberFlags: true}, // call fn by pointer. arg0=codeptr, arg1=mem, auxint=argsize, returns mem
// large or unaligned zeroing
// arg0 = address of memory to zero (in R3, changed as side effect)
// arg1 = address of the last element to zero
// arg2 = mem
// returns mem
// ADD -8,R3,R3 // intermediate value not valid GC ptr, cannot expose to opt+GC
// MOVDU R0, 8(R3)
// CMP R3, Rarg1
// BLE -2(PC)
{
name: "LoweredZero",
aux: "Int64",
argLength: 3,
reg: regInfo{
inputs: []regMask{buildReg("R3"), gp},
clobbers: buildReg("R3"),
},
clobberFlags: true,
typ: "Mem",
},
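// A hedged Go model of the loop above, where store64 is a stand-in for the
// MOVDU zero store and p runs from the start address to the last element:
//   for p := start; p <= last; p += 8 {
//       store64(p, 0)
//   }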
// large or unaligned move
// arg0 = address of dst memory (in R3, changed as side effect)
// arg1 = address of src memory (in R4, changed as side effect)
// arg2 = address of the last element of src
// arg3 = mem
// returns mem
// ADD -8,R3,R3 // intermediate value not valid GC ptr, cannot expose to opt+GC
// ADD -8,R4,R4 // intermediate value not valid GC ptr, cannot expose to opt+GC
// MOVDU 8(R4), Rtmp
// MOVDU Rtmp, 8(R3)
// CMP R4, Rarg2
// BLT -3(PC)
{
name: "LoweredMove",
aux: "Int64",
argLength: 4,
reg: regInfo{
inputs: []regMask{buildReg("R3"), buildReg("R4"), gp},
clobbers: buildReg("R3 R4"),
},
clobberFlags: true,
typ: "Mem",
},
// (InvertFlags (CMP a b)) == (CMP b a)
// So if we want (LessThan (CMP a b)) but we can't do that because a is a constant,
// then we do (LessThan (InvertFlags (CMP b a))) instead.
// Rewrites will convert this to (GreaterThan (CMP b a)).
// InvertFlags is a pseudo-op which can't appear in assembly output.
{name: "InvertFlags", argLength: 1}, // reverse direction of arg0
// Constant flag values. For any comparison, there are 3 possible
// outcomes: either the three from the signed total order (<,==,>)
// or the three from the unsigned total order, depending on which
// comparison operation was used (CMP or CMPU -- PPC is different from
// the other architectures, which have a single comparison producing
// both signed and unsigned comparison results).
// These ops are for temporary use by rewrite rules. They
// cannot appear in the generated assembly.
{name: "FlagEQ"}, // equal
{name: "FlagLT"}, // signed < or unsigned <
{name: "FlagGT"}, // signed > or unsigned >
}
blocks := []blockData{
{name: "EQ"},
{name: "NE"},
{name: "LT"},
{name: "LE"},
{name: "GT"},
{name: "GE"},
{name: "FLT"},
{name: "FLE"},
{name: "FGT"},
{name: "FGE"},
}
archs = append(archs, arch{
name: "PPC64",
pkg: "cmd/internal/obj/ppc64",
genfile: "../../ppc64/ssa.go",
ops: ops,
blocks: blocks,
regnames: regNamesPPC64,
gpregmask: gp,
fpregmask: fp,
framepointerreg: int8(num["SP"]),
})
}


@@ -0,0 +1,407 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This file contains rules to decompose [u]int64 types on 32-bit
// architectures. These rules work together with the decomposeBuiltIn
// pass which handles phis of these types.
(Int64Hi (Int64Make hi _)) -> hi
(Int64Lo (Int64Make _ lo)) -> lo
// Assuming little-endian (we don't support big-endian 32-bit architectures yet)
(Load <t> ptr mem) && is64BitInt(t) && t.IsSigned() ->
(Int64Make
(Load <config.fe.TypeInt32()> (OffPtr <config.fe.TypeInt32().PtrTo()> [4] ptr) mem)
(Load <config.fe.TypeUInt32()> ptr mem))
(Load <t> ptr mem) && is64BitInt(t) && !t.IsSigned() ->
(Int64Make
(Load <config.fe.TypeUInt32()> (OffPtr <config.fe.TypeUInt32().PtrTo()> [4] ptr) mem)
(Load <config.fe.TypeUInt32()> ptr mem))
(Store [8] dst (Int64Make hi lo) mem) ->
(Store [4]
(OffPtr <hi.Type.PtrTo()> [4] dst)
hi
(Store [4] dst lo mem))
(Arg {n} [off]) && is64BitInt(v.Type) && v.Type.IsSigned() ->
(Int64Make
(Arg <config.fe.TypeInt32()> {n} [off+4])
(Arg <config.fe.TypeUInt32()> {n} [off]))
(Arg {n} [off]) && is64BitInt(v.Type) && !v.Type.IsSigned() ->
(Int64Make
(Arg <config.fe.TypeUInt32()> {n} [off+4])
(Arg <config.fe.TypeUInt32()> {n} [off]))
(Add64 x y) ->
(Int64Make
(Add32withcarry <config.fe.TypeInt32()>
(Int64Hi x)
(Int64Hi y)
(Select0 <TypeFlags> (Add32carry (Int64Lo x) (Int64Lo y))))
(Select1 <config.fe.TypeUInt32()> (Add32carry (Int64Lo x) (Int64Lo y))))
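// A hedged Go model of the Add64 decomposition above:
//   func add64(xhi, xlo, yhi, ylo uint32) (hi, lo uint32) {
//       lo = xlo + ylo
//       var carry uint32
//       if lo < xlo { // Select0 of Add32carry
//           carry = 1
//       }
//       hi = xhi + yhi + carry // Add32withcarry
//       return
//   }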
(Sub64 x y) ->
(Int64Make
(Sub32withcarry <config.fe.TypeInt32()>
(Int64Hi x)
(Int64Hi y)
(Select0 <TypeFlags> (Sub32carry (Int64Lo x) (Int64Lo y))))
(Select1 <config.fe.TypeUInt32()> (Sub32carry (Int64Lo x) (Int64Lo y))))
(Mul64 x y) ->
(Int64Make
(Add32 <config.fe.TypeUInt32()>
(Mul32 <config.fe.TypeUInt32()> (Int64Lo x) (Int64Hi y))
(Add32 <config.fe.TypeUInt32()>
(Mul32 <config.fe.TypeUInt32()> (Int64Hi x) (Int64Lo y))
(Select0 <config.fe.TypeUInt32()> (Mul32uhilo (Int64Lo x) (Int64Lo y)))))
(Select1 <config.fe.TypeUInt32()> (Mul32uhilo (Int64Lo x) (Int64Lo y))))
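// A hedged Go model of the Mul64 decomposition, with math/bits standing in
// for Mul32uhilo:
//   func mul64(xhi, xlo, yhi, ylo uint32) (hi, lo uint32) {
//       h, l := bits.Mul32(xlo, ylo)  // Mul32uhilo: full 32x32->64 product
//       hi = xlo*yhi + (xhi*ylo + h)  // low halves of the cross products
//       lo = l
//       return
//   }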
(And64 x y) ->
(Int64Make
(And32 <config.fe.TypeUInt32()> (Int64Hi x) (Int64Hi y))
(And32 <config.fe.TypeUInt32()> (Int64Lo x) (Int64Lo y)))
(Or64 x y) ->
(Int64Make
(Or32 <config.fe.TypeUInt32()> (Int64Hi x) (Int64Hi y))
(Or32 <config.fe.TypeUInt32()> (Int64Lo x) (Int64Lo y)))
(Xor64 x y) ->
(Int64Make
(Xor32 <config.fe.TypeUInt32()> (Int64Hi x) (Int64Hi y))
(Xor32 <config.fe.TypeUInt32()> (Int64Lo x) (Int64Lo y)))
(Neg64 <t> x) -> (Sub64 (Const64 <t> [0]) x)
(Com64 x) ->
(Int64Make
(Com32 <config.fe.TypeUInt32()> (Int64Hi x))
(Com32 <config.fe.TypeUInt32()> (Int64Lo x)))
(SignExt32to64 x) -> (Int64Make (Signmask x) x)
(SignExt16to64 x) -> (SignExt32to64 (SignExt16to32 x))
(SignExt8to64 x) -> (SignExt32to64 (SignExt8to32 x))
(ZeroExt32to64 x) -> (Int64Make (Const32 <config.fe.TypeUInt32()> [0]) x)
(ZeroExt16to64 x) -> (ZeroExt32to64 (ZeroExt16to32 x))
(ZeroExt8to64 x) -> (ZeroExt32to64 (ZeroExt8to32 x))
(Trunc64to32 (Int64Make _ lo)) -> lo
(Trunc64to16 (Int64Make _ lo)) -> (Trunc32to16 lo)
(Trunc64to8 (Int64Make _ lo)) -> (Trunc32to8 lo)
(Lsh32x64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const32 [0])
(Rsh32x64 x (Int64Make (Const32 [c]) _)) && c != 0 -> (Signmask x)
(Rsh32Ux64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const32 [0])
(Lsh16x64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const32 [0])
(Rsh16x64 x (Int64Make (Const32 [c]) _)) && c != 0 -> (Signmask (SignExt16to32 x))
(Rsh16Ux64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const32 [0])
(Lsh8x64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const32 [0])
(Rsh8x64 x (Int64Make (Const32 [c]) _)) && c != 0 -> (Signmask (SignExt8to32 x))
(Rsh8Ux64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const32 [0])
(Lsh32x64 x (Int64Make (Const32 [0]) lo)) -> (Lsh32x32 x lo)
(Rsh32x64 x (Int64Make (Const32 [0]) lo)) -> (Rsh32x32 x lo)
(Rsh32Ux64 x (Int64Make (Const32 [0]) lo)) -> (Rsh32Ux32 x lo)
(Lsh16x64 x (Int64Make (Const32 [0]) lo)) -> (Lsh16x32 x lo)
(Rsh16x64 x (Int64Make (Const32 [0]) lo)) -> (Rsh16x32 x lo)
(Rsh16Ux64 x (Int64Make (Const32 [0]) lo)) -> (Rsh16Ux32 x lo)
(Lsh8x64 x (Int64Make (Const32 [0]) lo)) -> (Lsh8x32 x lo)
(Rsh8x64 x (Int64Make (Const32 [0]) lo)) -> (Rsh8x32 x lo)
(Rsh8Ux64 x (Int64Make (Const32 [0]) lo)) -> (Rsh8Ux32 x lo)
(Lsh64x64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const64 [0])
(Rsh64x64 x (Int64Make (Const32 [c]) _)) && c != 0 -> (Int64Make (Signmask (Int64Hi x)) (Signmask (Int64Hi x)))
(Rsh64Ux64 _ (Int64Make (Const32 [c]) _)) && c != 0 -> (Const64 [0])
(Lsh64x64 x (Int64Make (Const32 [0]) lo)) -> (Lsh64x32 x lo)
(Rsh64x64 x (Int64Make (Const32 [0]) lo)) -> (Rsh64x32 x lo)
(Rsh64Ux64 x (Int64Make (Const32 [0]) lo)) -> (Rsh64Ux32 x lo)
// turn 64-bit non-constant shifts into 32-bit shifts
// if the high 32 bits of the shift count are nonzero, OR an all-ones mask (Zeromask hi)
// into the low count so the shift is treated as out of range
(Lsh64x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Lsh64x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh64x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh64x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh64Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh64Ux32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Lsh32x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Lsh32x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh32x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh32x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh32Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh32Ux32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Lsh16x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Lsh16x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh16x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh16x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh16Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh16Ux32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Lsh8x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Lsh8x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh8x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh8x32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
(Rsh8Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
(Rsh8Ux32 x (Or32 <config.fe.TypeUInt32()> (Zeromask hi) lo))
// 64x left shift
// result.hi = hi<<s | lo>>(32-s) | lo<<(s-32) // >> is unsigned; large shifts yield 0
// result.lo = lo<<s
(Lsh64x32 (Int64Make hi lo) s) ->
(Int64Make
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Lsh32x32 <config.fe.TypeUInt32()> hi s)
(Rsh32Ux32 <config.fe.TypeUInt32()>
lo
(Sub32 <config.fe.TypeUInt32()> (Const32 <config.fe.TypeUInt32()> [32]) s)))
(Lsh32x32 <config.fe.TypeUInt32()>
lo
(Sub32 <config.fe.TypeUInt32()> s (Const32 <config.fe.TypeUInt32()> [32]))))
(Lsh32x32 <config.fe.TypeUInt32()> lo s))
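// A hedged Go model of the formula above; Go, like the rule intends, defines
// unsigned shifts by counts >= width as 0, so the wrapped counts drop out:
//   func lsh64x32(hi, lo, s uint32) (rhi, rlo uint32) {
//       rhi = hi<<s | lo>>(32-s) | lo<<(s-32)
//       rlo = lo << s
//       return
//   }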
(Lsh64x16 (Int64Make hi lo) s) ->
(Int64Make
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Lsh32x16 <config.fe.TypeUInt32()> hi s)
(Rsh32Ux16 <config.fe.TypeUInt32()>
lo
(Sub16 <config.fe.TypeUInt16()> (Const16 <config.fe.TypeUInt16()> [32]) s)))
(Lsh32x16 <config.fe.TypeUInt32()>
lo
(Sub16 <config.fe.TypeUInt16()> s (Const16 <config.fe.TypeUInt16()> [32]))))
(Lsh32x16 <config.fe.TypeUInt32()> lo s))
(Lsh64x8 (Int64Make hi lo) s) ->
(Int64Make
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Lsh32x8 <config.fe.TypeUInt32()> hi s)
(Rsh32Ux8 <config.fe.TypeUInt32()>
lo
(Sub8 <config.fe.TypeUInt8()> (Const8 <config.fe.TypeUInt8()> [32]) s)))
(Lsh32x8 <config.fe.TypeUInt32()>
lo
(Sub8 <config.fe.TypeUInt8()> s (Const8 <config.fe.TypeUInt8()> [32]))))
(Lsh32x8 <config.fe.TypeUInt32()> lo s))
// 64x unsigned right shift
// result.hi = hi>>s
// result.lo = lo>>s | hi<<(32-s) | hi>>(s-32) // >> is unsigned; large shifts yield 0
(Rsh64Ux32 (Int64Make hi lo) s) ->
(Int64Make
(Rsh32Ux32 <config.fe.TypeUInt32()> hi s)
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Rsh32Ux32 <config.fe.TypeUInt32()> lo s)
(Lsh32x32 <config.fe.TypeUInt32()>
hi
(Sub32 <config.fe.TypeUInt32()> (Const32 <config.fe.TypeUInt32()> [32]) s)))
(Rsh32Ux32 <config.fe.TypeUInt32()>
hi
(Sub32 <config.fe.TypeUInt32()> s (Const32 <config.fe.TypeUInt32()> [32])))))
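// A hedged Go model of the unsigned right shift:
//   func rsh64Ux32(hi, lo, s uint32) (rhi, rlo uint32) {
//       rhi = hi >> s
//       rlo = lo>>s | hi<<(32-s) | hi>>(s-32)
//       return
//   }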
(Rsh64Ux16 (Int64Make hi lo) s) ->
(Int64Make
(Rsh32Ux16 <config.fe.TypeUInt32()> hi s)
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Rsh32Ux16 <config.fe.TypeUInt32()> lo s)
(Lsh32x16 <config.fe.TypeUInt32()>
hi
(Sub16 <config.fe.TypeUInt16()> (Const16 <config.fe.TypeUInt16()> [32]) s)))
(Rsh32Ux16 <config.fe.TypeUInt32()>
hi
(Sub16 <config.fe.TypeUInt16()> s (Const16 <config.fe.TypeUInt16()> [32])))))
(Rsh64Ux8 (Int64Make hi lo) s) ->
(Int64Make
(Rsh32Ux8 <config.fe.TypeUInt32()> hi s)
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Rsh32Ux8 <config.fe.TypeUInt32()> lo s)
(Lsh32x8 <config.fe.TypeUInt32()>
hi
(Sub8 <config.fe.TypeUInt8()> (Const8 <config.fe.TypeUInt8()> [32]) s)))
(Rsh32Ux8 <config.fe.TypeUInt32()>
hi
(Sub8 <config.fe.TypeUInt8()> s (Const8 <config.fe.TypeUInt8()> [32])))))
// 64x signed right shift
// result.hi = hi>>s
// result.lo = lo>>s | hi<<(32-s) | (hi>>(s-32))&zeromask(s>>5) // hi>>(s-32) is signed; large shifts yield 0/-1
(Rsh64x32 (Int64Make hi lo) s) ->
(Int64Make
(Rsh32x32 <config.fe.TypeUInt32()> hi s)
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Rsh32Ux32 <config.fe.TypeUInt32()> lo s)
(Lsh32x32 <config.fe.TypeUInt32()>
hi
(Sub32 <config.fe.TypeUInt32()> (Const32 <config.fe.TypeUInt32()> [32]) s)))
(And32 <config.fe.TypeUInt32()>
(Rsh32x32 <config.fe.TypeUInt32()>
hi
(Sub32 <config.fe.TypeUInt32()> s (Const32 <config.fe.TypeUInt32()> [32])))
(Zeromask
(Rsh32Ux32 <config.fe.TypeUInt32()> s (Const32 <config.fe.TypeUInt32()> [5]))))))
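// A hedged Go model of the signed right shift; the mask reproduces
// Zeromask(s>>5), killing the third term when s < 32:
//   func rsh64x32(hi int32, lo, s uint32) (int32, uint32) {
//       var mask uint32
//       if s >= 32 {
//           mask = 0xffffffff
//       }
//       rlo := lo>>s | uint32(hi)<<(32-s) | uint32(hi>>(s-32))&mask
//       return hi >> s, rlo
//   }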
(Rsh64x16 (Int64Make hi lo) s) ->
(Int64Make
(Rsh32x16 <config.fe.TypeUInt32()> hi s)
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Rsh32Ux16 <config.fe.TypeUInt32()> lo s)
(Lsh32x16 <config.fe.TypeUInt32()>
hi
(Sub16 <config.fe.TypeUInt16()> (Const16 <config.fe.TypeUInt16()> [32]) s)))
(And32 <config.fe.TypeUInt32()>
(Rsh32x16 <config.fe.TypeUInt32()>
hi
(Sub16 <config.fe.TypeUInt16()> s (Const16 <config.fe.TypeUInt16()> [32])))
(Zeromask
(ZeroExt16to32
(Rsh16Ux32 <config.fe.TypeUInt16()> s (Const32 <config.fe.TypeUInt32()> [5])))))))
(Rsh64x8 (Int64Make hi lo) s) ->
(Int64Make
(Rsh32x8 <config.fe.TypeUInt32()> hi s)
(Or32 <config.fe.TypeUInt32()>
(Or32 <config.fe.TypeUInt32()>
(Rsh32Ux8 <config.fe.TypeUInt32()> lo s)
(Lsh32x8 <config.fe.TypeUInt32()>
hi
(Sub8 <config.fe.TypeUInt8()> (Const8 <config.fe.TypeUInt8()> [32]) s)))
(And32 <config.fe.TypeUInt32()>
(Rsh32x8 <config.fe.TypeUInt32()>
hi
(Sub8 <config.fe.TypeUInt8()> s (Const8 <config.fe.TypeUInt8()> [32])))
(Zeromask
(ZeroExt8to32
(Rsh8Ux32 <config.fe.TypeUInt8()> s (Const32 <config.fe.TypeUInt32()> [5])))))))
// 64xConst32 shifts
// we probably do not need them -- lateopt may take care of them just fine
//(Lsh64x32 _ (Const32 [c])) && uint32(c) >= 64 -> (Const64 [0])
//(Rsh64x32 x (Const32 [c])) && uint32(c) >= 64 -> (Int64Make (Signmask (Int64Hi x)) (Signmask (Int64Hi x)))
//(Rsh64Ux32 _ (Const32 [c])) && uint32(c) >= 64 -> (Const64 [0])
//
//(Lsh64x32 x (Const32 [c])) && c < 64 && c > 32 ->
// (Int64Make
// (Lsh32x32 <config.fe.TypeUInt32()> (Int64Lo x) (Const32 <config.fe.TypeUInt32()> [c-32]))
// (Const32 <config.fe.TypeUInt32()> [0]))
//(Rsh64x32 x (Const32 [c])) && c < 64 && c > 32 ->
// (Int64Make
// (Signmask (Int64Hi x))
// (Rsh32x32 <config.fe.TypeInt32()> (Int64Hi x) (Const32 <config.fe.TypeUInt32()> [c-32])))
//(Rsh64Ux32 x (Const32 [c])) && c < 64 && c > 32 ->
// (Int64Make
// (Const32 <config.fe.TypeUInt32()> [0])
// (Rsh32Ux32 <config.fe.TypeUInt32()> (Int64Hi x) (Const32 <config.fe.TypeUInt32()> [c-32])))
//
//(Lsh64x32 x (Const32 [32])) -> (Int64Make (Int64Lo x) (Const32 <config.fe.TypeUInt32()> [0]))
//(Rsh64x32 x (Const32 [32])) -> (Int64Make (Signmask (Int64Hi x)) (Int64Hi x))
//(Rsh64Ux32 x (Const32 [32])) -> (Int64Make (Const32 <config.fe.TypeUInt32()> [0]) (Int64Hi x))
//
//(Lsh64x32 x (Const32 [c])) && c < 32 && c > 0 ->
// (Int64Make
// (Or32 <config.fe.TypeUInt32()>
// (Lsh32x32 <config.fe.TypeUInt32()> (Int64Hi x) (Const32 <config.fe.TypeUInt32()> [c]))
// (Rsh32Ux32 <config.fe.TypeUInt32()> (Int64Lo x) (Const32 <config.fe.TypeUInt32()> [32-c])))
// (Lsh32x32 <config.fe.TypeUInt32()> (Int64Lo x) (Const32 <config.fe.TypeUInt32()> [c])))
//(Rsh64x32 x (Const32 [c])) && c < 32 && c > 0 ->
// (Int64Make
// (Rsh32x32 <config.fe.TypeInt32()> (Int64Hi x) (Const32 <config.fe.TypeUInt32()> [c]))
// (Or32 <config.fe.TypeUInt32()>
// (Rsh32Ux32 <config.fe.TypeUInt32()> (Int64Lo x) (Const32 <config.fe.TypeUInt32()> [c]))
// (Lsh32x32 <config.fe.TypeUInt32()> (Int64Hi x) (Const32 <config.fe.TypeUInt32()> [32-c]))))
//(Rsh64Ux32 x (Const32 [c])) && c < 32 && c > 0 ->
// (Int64Make
// (Rsh32Ux32 <config.fe.TypeUInt32()> (Int64Hi x) (Const32 <config.fe.TypeUInt32()> [c]))
// (Or32 <config.fe.TypeUInt32()>
// (Rsh32Ux32 <config.fe.TypeUInt32()> (Int64Lo x) (Const32 <config.fe.TypeUInt32()> [c]))
// (Lsh32x32 <config.fe.TypeUInt32()> (Int64Hi x) (Const32 <config.fe.TypeUInt32()> [32-c]))))
//
//(Lsh64x32 x (Const32 [0])) -> x
//(Rsh64x32 x (Const32 [0])) -> x
//(Rsh64Ux32 x (Const32 [0])) -> x
(Lrot64 (Int64Make hi lo) [c]) && c <= 32 ->
(Int64Make
(Or32 <config.fe.TypeUInt32()>
(Lsh32x32 <config.fe.TypeUInt32()> hi (Const32 <config.fe.TypeUInt32()> [c]))
(Rsh32Ux32 <config.fe.TypeUInt32()> lo (Const32 <config.fe.TypeUInt32()> [32-c])))
(Or32 <config.fe.TypeUInt32()>
(Lsh32x32 <config.fe.TypeUInt32()> lo (Const32 <config.fe.TypeUInt32()> [c]))
(Rsh32Ux32 <config.fe.TypeUInt32()> hi (Const32 <config.fe.TypeUInt32()> [32-c]))))
(Lrot64 (Int64Make hi lo) [c]) && c > 32 -> (Lrot64 (Int64Make lo hi) [c-32])
(Const64 <t> [c]) && t.IsSigned() ->
(Int64Make (Const32 <config.fe.TypeInt32()> [c>>32]) (Const32 <config.fe.TypeUInt32()> [int64(int32(c))]))
(Const64 <t> [c]) && !t.IsSigned() ->
(Int64Make (Const32 <config.fe.TypeUInt32()> [c>>32]) (Const32 <config.fe.TypeUInt32()> [int64(int32(c))]))
(Eq64 x y) ->
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Eq32 (Int64Lo x) (Int64Lo y)))
(Neq64 x y) ->
(OrB
(Neq32 (Int64Hi x) (Int64Hi y))
(Neq32 (Int64Lo x) (Int64Lo y)))
(Less64U x y) ->
(OrB
(Less32U (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Less32U (Int64Lo x) (Int64Lo y))))
(Leq64U x y) ->
(OrB
(Less32U (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Leq32U (Int64Lo x) (Int64Lo y))))
(Greater64U x y) ->
(OrB
(Greater32U (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Greater32U (Int64Lo x) (Int64Lo y))))
(Geq64U x y) ->
(OrB
(Greater32U (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Geq32U (Int64Lo x) (Int64Lo y))))
(Less64 x y) ->
(OrB
(Less32 (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Less32U (Int64Lo x) (Int64Lo y))))
(Leq64 x y) ->
(OrB
(Less32 (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Leq32U (Int64Lo x) (Int64Lo y))))
(Greater64 x y) ->
(OrB
(Greater32 (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Greater32U (Int64Lo x) (Int64Lo y))))
(Geq64 x y) ->
(OrB
(Greater32 (Int64Hi x) (Int64Hi y))
(AndB
(Eq32 (Int64Hi x) (Int64Hi y))
(Geq32U (Int64Lo x) (Int64Lo y))))
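// A hedged Go model of the signed Less64 decomposition (the remaining
// comparisons follow the same compare-hi-then-lo pattern):
//   func less64(xhi, yhi int32, xlo, ylo uint32) bool {
//       return xhi < yhi || (xhi == yhi && xlo < ylo)
//   }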


@@ -0,0 +1,20 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
var dec64Ops = []opData{}
var dec64Blocks = []blockData{}
func init() {
archs = append(archs, arch{
name: "dec64",
ops: dec64Ops,
blocks: dec64Blocks,
generic: true,
})
}


@@ -67,6 +67,12 @@
(Const32F [f2i(float64(i2f32(c) * i2f32(d)))])
(Mul64F (Const64F [c]) (Const64F [d])) -> (Const64F [f2i(i2f(c) * i2f(d))])
// Convert x * -1 to -x. The front-end catches some but not all of these.
(Mul8 (Const8 [-1]) x) -> (Neg8 x)
(Mul16 (Const16 [-1]) x) -> (Neg16 x)
(Mul32 (Const32 [-1]) x) -> (Neg32 x)
(Mul64 (Const64 [-1]) x) -> (Neg64 x)
(Mod8 (Const8 [c]) (Const8 [d])) && d != 0 -> (Const8 [int64(int8(c % d))])
(Mod16 (Const16 [c]) (Const16 [d])) && d != 0 -> (Const16 [int64(int16(c % d))])
(Mod32 (Const32 [c]) (Const32 [d])) && d != 0 -> (Const32 [int64(int32(c % d))])
@@ -625,8 +631,10 @@
(Store [t.FieldType(0).Size()] dst f0 mem))))
// un-SSAable values use mem->mem copies
(Store [size] dst (Load <t> src mem) mem) && !config.fe.CanSSA(t) -> (Move [size] dst src mem)
(Store [size] dst (Load <t> src mem) (VarDef {x} mem)) && !config.fe.CanSSA(t) -> (Move [size] dst src (VarDef {x} mem))
(Store [size] dst (Load <t> src mem) mem) && !config.fe.CanSSA(t) ->
(Move [MakeSizeAndAlign(size, t.Alignment()).Int64()] dst src mem)
(Store [size] dst (Load <t> src mem) (VarDef {x} mem)) && !config.fe.CanSSA(t) ->
(Move [MakeSizeAndAlign(size, t.Alignment()).Int64()] dst src (VarDef {x} mem))
// string ops
// Decomposing StringMake and lowering of StringPtr and StringLen
@@ -832,3 +840,23 @@
-> (Sub64 x (Mul64 <t> (Div64 <t> x (Const64 <t> [c])) (Const64 <t> [c])))
(Mod64u <t> x (Const64 [c])) && x.Op != OpConst64 && umagic64ok(c)
-> (Sub64 x (Mul64 <t> (Div64u <t> x (Const64 <t> [c])) (Const64 <t> [c])))
// floating point optimizations
(Add32F x (Const32F [0])) -> x
(Add32F (Const32F [0]) x) -> x
(Add64F x (Const64F [0])) -> x
(Add64F (Const64F [0]) x) -> x
(Sub32F x (Const32F [0])) -> x
(Sub64F x (Const64F [0])) -> x
(Mul32F x (Const32F [f2i(1)])) -> x
(Mul32F (Const32F [f2i(1)]) x) -> x
(Mul64F x (Const64F [f2i(1)])) -> x
(Mul64F (Const64F [f2i(1)]) x) -> x
(Mul32F x (Const32F [f2i(-1)])) -> (Neg32F x)
(Mul32F (Const32F [f2i(-1)]) x) -> (Neg32F x)
(Mul64F x (Const64F [f2i(-1)])) -> (Neg64F x)
(Mul64F (Const64F [f2i(-1)]) x) -> (Neg64F x)
(Div32F x (Const32F [f2i(1)])) -> x
(Div64F x (Const64F [f2i(1)])) -> x
(Div32F x (Const32F [f2i(-1)])) -> (Neg32F x)
(Div64F x (Const64F [f2i(-1)])) -> (Neg64F x)


@@ -173,76 +173,76 @@ var genericOps = []opData{
{name: "Lrot64", argLength: 1, aux: "Int64"},
// 2-input comparisons
{name: "Eq8", argLength: 2, commutative: true}, // arg0 == arg1
{name: "Eq16", argLength: 2, commutative: true},
{name: "Eq32", argLength: 2, commutative: true},
{name: "Eq64", argLength: 2, commutative: true},
{name: "EqPtr", argLength: 2, commutative: true},
{name: "EqInter", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "EqSlice", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "Eq32F", argLength: 2},
{name: "Eq64F", argLength: 2},
{name: "Eq8", argLength: 2, commutative: true, typ: "Bool"}, // arg0 == arg1
{name: "Eq16", argLength: 2, commutative: true, typ: "Bool"},
{name: "Eq32", argLength: 2, commutative: true, typ: "Bool"},
{name: "Eq64", argLength: 2, commutative: true, typ: "Bool"},
{name: "EqPtr", argLength: 2, commutative: true, typ: "Bool"},
{name: "EqInter", argLength: 2, typ: "Bool"}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "EqSlice", argLength: 2, typ: "Bool"}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "Eq32F", argLength: 2, typ: "Bool"},
{name: "Eq64F", argLength: 2, typ: "Bool"},
{name: "Neq8", argLength: 2, commutative: true}, // arg0 != arg1
{name: "Neq16", argLength: 2, commutative: true},
{name: "Neq32", argLength: 2, commutative: true},
{name: "Neq64", argLength: 2, commutative: true},
{name: "NeqPtr", argLength: 2, commutative: true},
{name: "NeqInter", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "NeqSlice", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "Neq32F", argLength: 2},
{name: "Neq8", argLength: 2, commutative: true, typ: "Bool"}, // arg0 != arg1
{name: "Neq16", argLength: 2, commutative: true, typ: "Bool"},
{name: "Neq32", argLength: 2, commutative: true, typ: "Bool"},
{name: "Neq64", argLength: 2, commutative: true, typ: "Bool"},
{name: "NeqPtr", argLength: 2, commutative: true, typ: "Bool"},
{name: "NeqInter", argLength: 2, typ: "Bool"}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "NeqSlice", argLength: 2, typ: "Bool"}, // arg0 or arg1 is nil; other cases handled by frontend
{name: "Neq32F", argLength: 2, typ: "Bool"},
{name: "Neq64F", argLength: 2},
{name: "Less8", argLength: 2}, // arg0 < arg1, signed
{name: "Less8U", argLength: 2}, // arg0 < arg1, unsigned
{name: "Less16", argLength: 2},
{name: "Less16U", argLength: 2},
{name: "Less32", argLength: 2},
{name: "Less32U", argLength: 2},
{name: "Less64", argLength: 2},
{name: "Less64U", argLength: 2},
{name: "Less32F", argLength: 2},
{name: "Less64F", argLength: 2},
{name: "Less8", argLength: 2, typ: "Bool"}, // arg0 < arg1, signed
{name: "Less8U", argLength: 2, typ: "Bool"}, // arg0 < arg1, unsigned
{name: "Less16", argLength: 2, typ: "Bool"},
{name: "Less16U", argLength: 2, typ: "Bool"},
{name: "Less32", argLength: 2, typ: "Bool"},
{name: "Less32U", argLength: 2, typ: "Bool"},
{name: "Less64", argLength: 2, typ: "Bool"},
{name: "Less64U", argLength: 2, typ: "Bool"},
{name: "Less32F", argLength: 2, typ: "Bool"},
{name: "Less64F", argLength: 2, typ: "Bool"},
{name: "Leq8", argLength: 2}, // arg0 <= arg1, signed
{name: "Leq8U", argLength: 2}, // arg0 <= arg1, unsigned
{name: "Leq16", argLength: 2},
{name: "Leq16U", argLength: 2},
{name: "Leq32", argLength: 2},
{name: "Leq32U", argLength: 2},
{name: "Leq64", argLength: 2},
{name: "Leq64U", argLength: 2},
{name: "Leq32F", argLength: 2},
{name: "Leq64F", argLength: 2},
{name: "Leq8", argLength: 2, typ: "Bool"}, // arg0 <= arg1, signed
{name: "Leq8U", argLength: 2, typ: "Bool"}, // arg0 <= arg1, unsigned
{name: "Leq16", argLength: 2, typ: "Bool"},
{name: "Leq16U", argLength: 2, typ: "Bool"},
{name: "Leq32", argLength: 2, typ: "Bool"},
{name: "Leq32U", argLength: 2, typ: "Bool"},
{name: "Leq64", argLength: 2, typ: "Bool"},
{name: "Leq64U", argLength: 2, typ: "Bool"},
{name: "Leq32F", argLength: 2, typ: "Bool"},
{name: "Leq64F", argLength: 2, typ: "Bool"},
{name: "Greater8", argLength: 2}, // arg0 > arg1, signed
{name: "Greater8U", argLength: 2}, // arg0 > arg1, unsigned
{name: "Greater16", argLength: 2},
{name: "Greater16U", argLength: 2},
{name: "Greater32", argLength: 2},
{name: "Greater32U", argLength: 2},
{name: "Greater64", argLength: 2},
{name: "Greater64U", argLength: 2},
{name: "Greater32F", argLength: 2},
{name: "Greater64F", argLength: 2},
{name: "Greater8", argLength: 2, typ: "Bool"}, // arg0 > arg1, signed
{name: "Greater8U", argLength: 2, typ: "Bool"}, // arg0 > arg1, unsigned
{name: "Greater16", argLength: 2, typ: "Bool"},
{name: "Greater16U", argLength: 2, typ: "Bool"},
{name: "Greater32", argLength: 2, typ: "Bool"},
{name: "Greater32U", argLength: 2, typ: "Bool"},
{name: "Greater64", argLength: 2, typ: "Bool"},
{name: "Greater64U", argLength: 2, typ: "Bool"},
{name: "Greater32F", argLength: 2, typ: "Bool"},
{name: "Greater64F", argLength: 2, typ: "Bool"},
{name: "Geq8", argLength: 2}, // arg0 <= arg1, signed
{name: "Geq8U", argLength: 2}, // arg0 <= arg1, unsigned
{name: "Geq16", argLength: 2},
{name: "Geq16U", argLength: 2},
{name: "Geq32", argLength: 2},
{name: "Geq32U", argLength: 2},
{name: "Geq64", argLength: 2},
{name: "Geq64U", argLength: 2},
{name: "Geq32F", argLength: 2},
{name: "Geq64F", argLength: 2},
{name: "Geq8", argLength: 2, typ: "Bool"}, // arg0 <= arg1, signed
{name: "Geq8U", argLength: 2, typ: "Bool"}, // arg0 <= arg1, unsigned
{name: "Geq16", argLength: 2, typ: "Bool"},
{name: "Geq16U", argLength: 2, typ: "Bool"},
{name: "Geq32", argLength: 2, typ: "Bool"},
{name: "Geq32U", argLength: 2, typ: "Bool"},
{name: "Geq64", argLength: 2, typ: "Bool"},
{name: "Geq64U", argLength: 2, typ: "Bool"},
{name: "Geq32F", argLength: 2, typ: "Bool"},
{name: "Geq64F", argLength: 2, typ: "Bool"},
// boolean ops
{name: "AndB", argLength: 2}, // arg0 && arg1 (not shortcircuited)
{name: "OrB", argLength: 2}, // arg0 || arg1 (not shortcircuited)
{name: "EqB", argLength: 2}, // arg0 == arg1
{name: "NeqB", argLength: 2}, // arg0 != arg1
{name: "Not", argLength: 1}, // !arg0, boolean
{name: "AndB", argLength: 2, typ: "Bool"}, // arg0 && arg1 (not shortcircuited)
{name: "OrB", argLength: 2, typ: "Bool"}, // arg0 || arg1 (not shortcircuited)
{name: "EqB", argLength: 2, typ: "Bool"}, // arg0 == arg1
{name: "NeqB", argLength: 2, typ: "Bool"}, // arg0 != arg1
{name: "Not", argLength: 1, typ: "Bool"}, // !arg0, boolean
// 1-input ops
{name: "Neg8", argLength: 1}, // -arg0
@@ -312,8 +312,8 @@ var genericOps = []opData{
// Memory operations
{name: "Load", argLength: 2}, // Load from arg0. arg1=memory
{name: "Store", argLength: 3, typ: "Mem", aux: "Int64"}, // Store arg1 to arg0. arg2=memory, auxint=size. Returns memory.
{name: "Move", argLength: 3, aux: "Int64"}, // arg0=destptr, arg1=srcptr, arg2=mem, auxint=size. Returns memory.
{name: "Zero", argLength: 2, aux: "Int64"}, // arg0=destptr, arg1=mem, auxint=size. Returns memory.
{name: "Move", argLength: 3, typ: "Mem", aux: "Int64"}, // arg0=destptr, arg1=srcptr, arg2=mem, auxint=size. Returns memory.
{name: "Zero", argLength: 2, typ: "Mem", aux: "Int64"}, // arg0=destptr, arg1=mem, auxint=size. Returns memory.
// Function calls. Arguments to the call have already been written to the stack.
// Return values appear on the stack. The method receiver, if any, is treated
@@ -326,17 +326,17 @@ var genericOps = []opData{
// Conversions: signed extensions, zero (unsigned) extensions, truncations
{name: "SignExt8to16", argLength: 1, typ: "Int16"},
{name: "SignExt8to32", argLength: 1},
{name: "SignExt8to64", argLength: 1},
{name: "SignExt16to32", argLength: 1},
{name: "SignExt16to64", argLength: 1},
{name: "SignExt32to64", argLength: 1},
{name: "SignExt8to32", argLength: 1, typ: "Int32"},
{name: "SignExt8to64", argLength: 1, typ: "Int64"},
{name: "SignExt16to32", argLength: 1, typ: "Int32"},
{name: "SignExt16to64", argLength: 1, typ: "Int64"},
{name: "SignExt32to64", argLength: 1, typ: "Int64"},
{name: "ZeroExt8to16", argLength: 1, typ: "UInt16"},
{name: "ZeroExt8to32", argLength: 1},
{name: "ZeroExt8to64", argLength: 1},
{name: "ZeroExt16to32", argLength: 1},
{name: "ZeroExt16to64", argLength: 1},
{name: "ZeroExt32to64", argLength: 1},
{name: "ZeroExt8to32", argLength: 1, typ: "UInt32"},
{name: "ZeroExt8to64", argLength: 1, typ: "UInt64"},
{name: "ZeroExt16to32", argLength: 1, typ: "UInt32"},
{name: "ZeroExt16to64", argLength: 1, typ: "UInt64"},
{name: "ZeroExt32to64", argLength: 1, typ: "UInt64"},
{name: "Trunc16to8", argLength: 1},
{name: "Trunc32to8", argLength: 1},
{name: "Trunc32to16", argLength: 1},
@@ -416,6 +416,31 @@ var genericOps = []opData{
{name: "VarKill", argLength: 1, aux: "Sym"}, // aux is a *gc.Node of a variable that is known to be dead. arg0=mem, returns mem
{name: "VarLive", argLength: 1, aux: "Sym"}, // aux is a *gc.Node of a variable that must be kept live. arg0=mem, returns mem
{name: "KeepAlive", argLength: 2, typ: "Mem"}, // arg[0] is a value that must be kept alive until this mark. arg[1]=mem, returns mem
// Ops for breaking 64-bit operations on 32-bit architectures
{name: "Int64Make", argLength: 2, typ: "UInt64"}, // arg0=hi, arg1=lo
{name: "Int64Hi", argLength: 1, typ: "UInt32"}, // high 32-bit of arg0
{name: "Int64Lo", argLength: 1, typ: "UInt32"}, // low 32-bit of arg0
{name: "Add32carry", argLength: 2, commutative: true, typ: "(Flags,UInt32)"}, // arg0 + arg1, returns (carry, value)
{name: "Add32withcarry", argLength: 3, commutative: true}, // arg0 + arg1 + arg2, arg2=carry (0 or 1)
{name: "Sub32carry", argLength: 2, typ: "(Flags,UInt32)"}, // arg0 - arg1, returns (carry, value)
{name: "Sub32withcarry", argLength: 3}, // arg0 - arg1 - arg2, arg2=carry (0 or 1)
{name: "Mul32uhilo", argLength: 2, typ: "(UInt32,UInt32)"}, // arg0 * arg1, returns (hi, lo)
{name: "Signmask", argLength: 1, typ: "Int32"}, // 0 if arg0 >= 0, -1 if arg0 < 0
{name: "Zeromask", argLength: 1, typ: "UInt32"}, // 0 if arg0 == 0, 0xffffffff if arg0 != 0
{name: "Cvt32Uto32F", argLength: 1}, // uint32 -> float32, only used on 32-bit arch
{name: "Cvt32Uto64F", argLength: 1}, // uint32 -> float64, only used on 32-bit arch
{name: "Cvt32Fto32U", argLength: 1}, // float32 -> uint32, only used on 32-bit arch
{name: "Cvt64Fto32U", argLength: 1}, // float64 -> uint32, only used on 32-bit arch
// pseudo-ops for breaking Tuple
{name: "Select0", argLength: 1}, // the first component of a tuple
{name: "Select1", argLength: 1}, // the second component of a tuple
}
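The ops in the last block above exist so 32-bit targets can express 64-bit arithmetic; the dec64 pass rewrites each 64-bit op into a carry chain over 32-bit halves. A minimal standalone model of that chain, written as demo Go rather than actual rule output:

package main

import "fmt"

// Demo of the decomposition the ops above enable (not rule output):
// Add32carry yields the low word plus a carry; Add32withcarry folds
// the carry into the high word.
func add32carry(x, y uint32) (lo, carry uint32) {
	lo = x + y
	if lo < x {
		carry = 1
	}
	return
}

func add64via32(xHi, xLo, yHi, yLo uint32) (hi, lo uint32) {
	lo, c := add32carry(xLo, yLo)
	hi = xHi + yHi + c
	return hi, lo
}

func main() {
	hi, lo := add64via32(0, 0xffffffff, 0, 1)
	fmt.Printf("%#x %#x\n", hi, lo) // 0x1 0x0
}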
//    kind           control    successors       implicit exit


@@ -21,13 +21,16 @@ import (
)
type arch struct {
name string
pkg string // obj package to import for this arch.
genfile string // source file containing opcode code generation.
ops []opData
blocks []blockData
regnames []string
generic bool
name string
pkg string // obj package to import for this arch.
genfile string // source file containing opcode code generation.
ops []opData
blocks []blockData
regnames []string
gpregmask regMask
fpregmask regMask
framepointerreg int8
generic bool
}
type opData struct {
@@ -38,8 +41,9 @@ type opData struct {
aux string
rematerializeable bool
argLength int32 // number of arguments, if -1, then this operation has a variable number of arguments
commutative bool // this operation is commutative (e.g. addition)
resultInArg0 bool // v and v.Args[0] must be allocated to the same register
commutative bool // this operation is commutative on its first 2 arguments (e.g. addition)
resultInArg0 bool // last output of v and v.Args[0] must be allocated to the same register
clobberFlags bool // this op clobbers flags register
}
type blockData struct {
@@ -73,6 +77,7 @@ var archs []arch
func main() {
flag.Parse()
sort.Sort(ArchsByName(archs))
genOp()
genLower()
}
@@ -155,13 +160,16 @@ func genOp() {
}
if v.resultInArg0 {
fmt.Fprintln(w, "resultInArg0: true,")
if v.reg.inputs[0] != v.reg.outputs[0] {
log.Fatalf("input[0] and output registers must be equal for %s", v.name)
if v.reg.inputs[0] != v.reg.outputs[len(v.reg.outputs)-1] {
log.Fatalf("input[0] and last output register must be equal for %s", v.name)
}
if v.commutative && v.reg.inputs[1] != v.reg.outputs[0] {
log.Fatalf("input[1] and output registers must be equal for %s", v.name)
if v.commutative && v.reg.inputs[1] != v.reg.outputs[len(v.reg.outputs)-1] {
log.Fatalf("input[1] and last output register must be equal for %s", v.name)
}
}
if v.clobberFlags {
fmt.Fprintln(w, "clobberFlags: true,")
}
if a.name == "generic" {
fmt.Fprintln(w, "generic:true,")
fmt.Fprintln(w, "},") // close op
@@ -191,14 +199,22 @@ func genOp() {
}
fmt.Fprintln(w, "},")
}
if v.reg.clobbers > 0 {
fmt.Fprintf(w, "clobbers: %d,%s\n", v.reg.clobbers, a.regMaskComment(v.reg.clobbers))
}
// reg outputs
if len(v.reg.outputs) > 0 {
fmt.Fprintln(w, "outputs: []regMask{")
for _, r := range v.reg.outputs {
fmt.Fprintf(w, "%d,%s\n", r, a.regMaskComment(r))
s = s[:0]
for i, r := range v.reg.outputs {
s = append(s, intPair{countRegs(r), i})
}
if len(s) > 0 {
sort.Sort(byKey(s))
fmt.Fprintln(w, "outputs: []outputInfo{")
for _, p := range s {
r := v.reg.outputs[p.val]
fmt.Fprintf(w, "{%d,%d},%s\n", p.val, r, a.regMaskComment(r))
}
fmt.Fprintln(w, "},")
}
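The generated outputs list is sorted by countRegs, the population count of each allowed-register mask, so the most-constrained output is allocated first. A self-contained sketch of that ordering, with toy masks standing in for real regMask values:

package main

import (
	"fmt"
	"sort"
)

type intPair struct{ key, val int } // key = register count, val = output index

type byKey []intPair

func (a byKey) Len() int           { return len(a) }
func (a byKey) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
func (a byKey) Less(i, j int) bool { return a[i].key < a[j].key }

// countRegs is a plain population count, as in the generator.
func countRegs(m uint64) int {
	n := 0
	for ; m != 0; m >>= 1 {
		n += int(m & 1)
	}
	return n
}

func main() {
	outputs := []uint64{0xffff, 0x0001} // 16 GP registers vs. one fixed register
	var s byKey
	for i, m := range outputs {
		s = append(s, intPair{countRegs(m), i})
	}
	sort.Sort(s)
	fmt.Println(s) // [{1 1} {16 0}]: the single-register output is emitted first
}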
@@ -223,6 +239,9 @@ func genOp() {
fmt.Fprintf(w, " {%d, \"%s\"},\n", i, r)
}
fmt.Fprintln(w, "}")
fmt.Fprintf(w, "var gpRegMask%s = regMask(%d)\n", a.name, a.gpregmask)
fmt.Fprintf(w, "var fpRegMask%s = regMask(%d)\n", a.name, a.fpregmask)
fmt.Fprintf(w, "var framepointerReg%s = int8(%d)\n", a.name, a.framepointerreg)
}
// gofmt result
@@ -298,3 +317,9 @@ type byKey []intPair
func (a byKey) Len() int { return len(a) }
func (a byKey) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byKey) Less(i, j int) bool { return a[i].key < a[j].key }
type ArchsByName []arch
func (x ArchsByName) Len() int { return len(x) }
func (x ArchsByName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x ArchsByName) Less(i, j int) bool { return x[i].name < x[j].name }


@@ -117,15 +117,17 @@ func genRules(arch arch) {
if unbalanced(rule) {
continue
}
op := strings.Split(rule, " ")[0][1:]
if op[len(op)-1] == ')' {
op = op[:len(op)-1] // rule has only opcode, e.g. (ConstNil) -> ...
}
loc := fmt.Sprintf("%s.rules:%d", arch.name, ruleLineno)
if isBlock(op, arch) {
blockrules[op] = append(blockrules[op], Rule{rule: rule, loc: loc})
r := Rule{rule: rule, loc: loc}
if rawop := strings.Split(rule, " ")[0][1:]; isBlock(rawop, arch) {
blockrules[rawop] = append(blockrules[rawop], r)
} else {
oprules[op] = append(oprules[op], Rule{rule: rule, loc: loc})
// Do fancier value op matching.
match, _, _ := r.parse()
op, oparch, _, _, _, _ := parseValue(match, arch, loc)
opname := fmt.Sprintf("Op%s%s", oparch, op.name)
oprules[opname] = append(oprules[opname], r)
}
rule = ""
ruleLineno = 0
@@ -157,8 +159,8 @@ func genRules(arch arch) {
fmt.Fprintf(w, "func rewriteValue%s(v *Value, config *Config) bool {\n", arch.name)
fmt.Fprintf(w, "switch v.Op {\n")
for _, op := range ops {
fmt.Fprintf(w, "case %s:\n", opName(op, arch))
fmt.Fprintf(w, "return rewriteValue%s_%s(v, config)\n", arch.name, opName(op, arch))
fmt.Fprintf(w, "case %s:\n", op)
fmt.Fprintf(w, "return rewriteValue%s_%s(v, config)\n", arch.name, op)
}
fmt.Fprintf(w, "}\n")
fmt.Fprintf(w, "return false\n")
@@ -167,7 +169,7 @@ func genRules(arch arch) {
// Generate a routine per op. Note that we don't make one giant routine
// because it is too big for some compilers.
for _, op := range ops {
fmt.Fprintf(w, "func rewriteValue%s_%s(v *Value, config *Config) bool {\n", arch.name, opName(op, arch))
fmt.Fprintf(w, "func rewriteValue%s_%s(v *Value, config *Config) bool {\n", arch.name, op)
fmt.Fprintln(w, "b := v.Block")
fmt.Fprintln(w, "_ = b")
var canFail bool
@@ -334,141 +336,108 @@ func genMatch0(w io.Writer, arch arch, match, v string, m map[string]struct{}, t
}
canFail := false
// split body up into regions. Split by spaces/tabs, except those
// contained in () or {}.
s := split(match[1 : len(match)-1]) // remove parens, then split
// Find op record
var op opData
for _, x := range genericOps {
if x.name == s[0] {
op = x
break
}
}
for _, x := range arch.ops {
if x.name == s[0] {
op = x
break
}
}
if op.name == "" {
log.Fatalf("%s: unknown op %s", loc, s[0])
}
op, oparch, typ, auxint, aux, args := parseValue(match, arch, loc)
// check op
if !top {
fmt.Fprintf(w, "if %s.Op != %s {\nbreak\n}\n", v, opName(s[0], arch))
fmt.Fprintf(w, "if %s.Op != Op%s%s {\nbreak\n}\n", v, oparch, op.name)
canFail = true
}
// check type/aux/args
argnum := 0
for _, a := range s[1:] {
if a[0] == '<' {
// type restriction
t := a[1 : len(a)-1] // remove <>
if !isVariable(t) {
// code. We must match the results of this code.
fmt.Fprintf(w, "if %s.Type != %s {\nbreak\n}\n", v, t)
if typ != "" {
if !isVariable(typ) {
// code. We must match the results of this code.
fmt.Fprintf(w, "if %s.Type != %s {\nbreak\n}\n", v, typ)
canFail = true
} else {
// variable
if _, ok := m[typ]; ok {
// must match previous variable
fmt.Fprintf(w, "if %s.Type != %s {\nbreak\n}\n", v, typ)
canFail = true
} else {
// variable
if _, ok := m[t]; ok {
// must match previous variable
fmt.Fprintf(w, "if %s.Type != %s {\nbreak\n}\n", v, t)
canFail = true
} else {
m[t] = struct{}{}
fmt.Fprintf(w, "%s := %s.Type\n", t, v)
}
m[typ] = struct{}{}
fmt.Fprintf(w, "%s := %s.Type\n", typ, v)
}
} else if a[0] == '[' {
// auxint restriction
switch op.aux {
case "Bool", "Int8", "Int16", "Int32", "Int64", "Int128", "Float32", "Float64", "SymOff", "SymValAndOff", "SymInt32":
default:
log.Fatalf("%s: op %s %s can't have auxint", loc, op.name, op.aux)
}
x := a[1 : len(a)-1] // remove []
if !isVariable(x) {
// code
fmt.Fprintf(w, "if %s.AuxInt != %s {\nbreak\n}\n", v, x)
}
}
if auxint != "" {
if !isVariable(auxint) {
// code
fmt.Fprintf(w, "if %s.AuxInt != %s {\nbreak\n}\n", v, auxint)
canFail = true
} else {
// variable
if _, ok := m[auxint]; ok {
fmt.Fprintf(w, "if %s.AuxInt != %s {\nbreak\n}\n", v, auxint)
canFail = true
} else {
// variable
if _, ok := m[x]; ok {
fmt.Fprintf(w, "if %s.AuxInt != %s {\nbreak\n}\n", v, x)
canFail = true
} else {
m[x] = struct{}{}
fmt.Fprintf(w, "%s := %s.AuxInt\n", x, v)
}
m[auxint] = struct{}{}
fmt.Fprintf(w, "%s := %s.AuxInt\n", auxint, v)
}
} else if a[0] == '{' {
// aux restriction
switch op.aux {
case "String", "Sym", "SymOff", "SymValAndOff", "SymInt32":
default:
log.Fatalf("%s: op %s %s can't have aux", loc, op.name, op.aux)
}
x := a[1 : len(a)-1] // remove {}
if !isVariable(x) {
// code
fmt.Fprintf(w, "if %s.Aux != %s {\nbreak\n}\n", v, x)
}
}
if aux != "" {
if !isVariable(aux) {
// code
fmt.Fprintf(w, "if %s.Aux != %s {\nbreak\n}\n", v, aux)
canFail = true
} else {
// variable
if _, ok := m[aux]; ok {
fmt.Fprintf(w, "if %s.Aux != %s {\nbreak\n}\n", v, aux)
canFail = true
} else {
// variable
if _, ok := m[x]; ok {
fmt.Fprintf(w, "if %s.Aux != %s {\nbreak\n}\n", v, x)
canFail = true
} else {
m[x] = struct{}{}
fmt.Fprintf(w, "%s := %s.Aux\n", x, v)
}
m[aux] = struct{}{}
fmt.Fprintf(w, "%s := %s.Aux\n", aux, v)
}
} else if a == "_" {
argnum++
} else if !strings.Contains(a, "(") {
}
}
for i, arg := range args {
if arg == "_" {
continue
}
if !strings.Contains(arg, "(") {
// leaf variable
if _, ok := m[a]; ok {
if _, ok := m[arg]; ok {
// variable already has a definition. Check whether
// the old definition and the new definition match.
// For example, (add x x). Equality is just pointer equality
// on Values (so cse is important to do before lowering).
fmt.Fprintf(w, "if %s != %s.Args[%d] {\nbreak\n}\n", a, v, argnum)
fmt.Fprintf(w, "if %s != %s.Args[%d] {\nbreak\n}\n", arg, v, i)
canFail = true
} else {
// remember that this variable references the given value
m[a] = struct{}{}
fmt.Fprintf(w, "%s := %s.Args[%d]\n", a, v, argnum)
m[arg] = struct{}{}
fmt.Fprintf(w, "%s := %s.Args[%d]\n", arg, v, i)
}
argnum++
continue
}
// compound sexpr
var argname string
colon := strings.Index(arg, ":")
openparen := strings.Index(arg, "(")
if colon >= 0 && openparen >= 0 && colon < openparen {
// rule-specified name
argname = arg[:colon]
arg = arg[colon+1:]
} else {
// compound sexpr
var argname string
colon := strings.Index(a, ":")
openparen := strings.Index(a, "(")
if colon >= 0 && openparen >= 0 && colon < openparen {
// rule-specified name
argname = a[:colon]
a = a[colon+1:]
} else {
// autogenerated name
argname = fmt.Sprintf("%s_%d", v, argnum)
}
fmt.Fprintf(w, "%s := %s.Args[%d]\n", argname, v, argnum)
if genMatch0(w, arch, a, argname, m, false, loc) {
canFail = true
}
argnum++
// autogenerated name
argname = fmt.Sprintf("%s_%d", v, i)
}
fmt.Fprintf(w, "%s := %s.Args[%d]\n", argname, v, i)
if genMatch0(w, arch, arg, argname, m, false, loc) {
canFail = true
}
}
if op.argLength == -1 {
fmt.Fprintf(w, "if len(%s.Args) != %d {\nbreak\n}\n", v, argnum)
fmt.Fprintf(w, "if len(%s.Args) != %d {\nbreak\n}\n", v, len(args))
canFail = true
} else if int(op.argLength) != argnum {
log.Fatalf("%s: op %s should have %d args, has %d", loc, op.name, op.argLength, argnum)
}
return canFail
}
@@ -500,105 +469,44 @@ func genResult0(w io.Writer, arch arch, result string, alloc *int, top, move boo
return result
}
s := split(result[1 : len(result)-1]) // remove parens, then split
// Find op record
var op opData
for _, x := range genericOps {
if x.name == s[0] {
op = x
break
}
}
for _, x := range arch.ops {
if x.name == s[0] {
op = x
break
}
}
if op.name == "" {
log.Fatalf("%s: unknown op %s", loc, s[0])
}
op, oparch, typ, auxint, aux, args := parseValue(result, arch, loc)
// Find the type of the variable.
var opType string
var typeOverride bool
for _, a := range s[1:] {
if a[0] == '<' {
// type restriction
opType = a[1 : len(a)-1] // remove <>
typeOverride = true
break
}
}
if opType == "" {
// find default type, if any
for _, op := range arch.ops {
if op.name == s[0] && op.typ != "" {
opType = typeName(op.typ)
break
}
}
}
if opType == "" {
for _, op := range genericOps {
if op.name == s[0] && op.typ != "" {
opType = typeName(op.typ)
break
}
}
typeOverride := typ != ""
if typ == "" && op.typ != "" {
typ = typeName(op.typ)
}
var v string
if top && !move {
v = "v"
fmt.Fprintf(w, "v.reset(%s)\n", opName(s[0], arch))
fmt.Fprintf(w, "v.reset(Op%s%s)\n", oparch, op.name)
if typeOverride {
fmt.Fprintf(w, "v.Type = %s\n", opType)
fmt.Fprintf(w, "v.Type = %s\n", typ)
}
} else {
if opType == "" {
log.Fatalf("sub-expression %s (op=%s) must have a type", result, s[0])
if typ == "" {
log.Fatalf("sub-expression %s (op=Op%s%s) must have a type", result, oparch, op.name)
}
v = fmt.Sprintf("v%d", *alloc)
*alloc++
fmt.Fprintf(w, "%s := b.NewValue0(v.Line, %s, %s)\n", v, opName(s[0], arch), opType)
fmt.Fprintf(w, "%s := b.NewValue0(v.Line, Op%s%s, %s)\n", v, oparch, op.name, typ)
if move && top {
// Rewrite original into a copy
fmt.Fprintf(w, "v.reset(OpCopy)\n")
fmt.Fprintf(w, "v.AddArg(%s)\n", v)
}
}
argnum := 0
for _, a := range s[1:] {
if a[0] == '<' {
// type restriction, handled above
} else if a[0] == '[' {
// auxint restriction
switch op.aux {
case "Bool", "Int8", "Int16", "Int32", "Int64", "Int128", "Float32", "Float64", "SymOff", "SymValAndOff", "SymInt32":
default:
log.Fatalf("%s: op %s %s can't have auxint", loc, op.name, op.aux)
}
x := a[1 : len(a)-1] // remove []
fmt.Fprintf(w, "%s.AuxInt = %s\n", v, x)
} else if a[0] == '{' {
// aux restriction
switch op.aux {
case "String", "Sym", "SymOff", "SymValAndOff", "SymInt32":
default:
log.Fatalf("%s: op %s %s can't have aux", loc, op.name, op.aux)
}
x := a[1 : len(a)-1] // remove {}
fmt.Fprintf(w, "%s.Aux = %s\n", v, x)
} else {
// regular argument (sexpr or variable)
x := genResult0(w, arch, a, alloc, false, move, loc)
fmt.Fprintf(w, "%s.AddArg(%s)\n", v, x)
argnum++
}
if auxint != "" {
fmt.Fprintf(w, "%s.AuxInt = %s\n", v, auxint)
}
if op.argLength != -1 && int(op.argLength) != argnum {
log.Fatalf("%s: op %s should have %d args, has %d", loc, op.name, op.argLength, argnum)
if aux != "" {
fmt.Fprintf(w, "%s.Aux = %s\n", v, aux)
}
for _, arg := range args {
x := genResult0(w, arch, arg, alloc, false, move, loc)
fmt.Fprintf(w, "%s.AddArg(%s)\n", v, x)
}
return v
@@ -666,16 +574,102 @@ func isBlock(name string, arch arch) bool {
return false
}
// opName converts from an op name specified in a rule file to an Op enum.
// if the name matches a generic op, returns "Op" plus the specified name.
// Otherwise, returns "Op" plus arch name plus op name.
func opName(name string, arch arch) string {
for _, op := range genericOps {
if op.name == name {
return "Op" + name
// parseValue parses a parenthesized value from a rule.
// The value can be from the match or the result side.
// It returns the op and unparsed strings for typ, auxint, and aux restrictions and for all args.
// oparch is the architecture that op is located in, or "" for generic.
func parseValue(val string, arch arch, loc string) (op opData, oparch string, typ string, auxint string, aux string, args []string) {
val = val[1 : len(val)-1] // remove ()
// Split val up into regions.
// Split by spaces/tabs, except those contained in (), {}, [], or <>.
s := split(val)
// Extract restrictions and args.
for _, a := range s[1:] {
switch a[0] {
case '<':
typ = a[1 : len(a)-1] // remove <>
case '[':
auxint = a[1 : len(a)-1] // remove []
case '{':
aux = a[1 : len(a)-1] // remove {}
default:
args = append(args, a)
}
}
return "Op" + arch.name + name
// Resolve the op.
// match reports whether x is a good op to select.
// If strict is true, rule generation might succeed.
// If strict is false, rule generation has failed,
// but we're trying to generate a useful error.
// Doing strict=true then strict=false allows
// precise op matching while retaining good error messages.
match := func(x opData, strict bool, archname string) bool {
if x.name != s[0] {
return false
}
if x.argLength != -1 && int(x.argLength) != len(args) {
if strict {
return false
} else {
log.Printf("%s: op %s (%s) should have %d args, has %d", loc, s[0], archname, op.argLength, len(args))
}
}
return true
}
for _, x := range genericOps {
if match(x, true, "generic") {
op = x
break
}
}
if arch.name != "generic" {
for _, x := range arch.ops {
if match(x, true, arch.name) {
if op.name != "" {
log.Fatalf("%s: matches for op %s found in both generic and %s", loc, op.name, arch.name)
}
op = x
oparch = arch.name
break
}
}
}
if op.name == "" {
// Failed to find the op.
// Run through everything again with strict=false
// to generate useful diagnostic messages before failing.
for _, x := range genericOps {
match(x, false, "generic")
}
for _, x := range arch.ops {
match(x, false, arch.name)
}
log.Fatalf("%s: unknown op %s", loc, s)
}
// Sanity check aux, auxint.
if auxint != "" {
switch op.aux {
case "Bool", "Int8", "Int16", "Int32", "Int64", "Int128", "Float32", "Float64", "SymOff", "SymValAndOff", "SymInt32":
default:
log.Fatalf("%s: op %s %s can't have auxint", loc, op.name, op.aux)
}
}
if aux != "" {
switch op.aux {
case "String", "Sym", "SymOff", "SymValAndOff", "SymInt32":
default:
log.Fatalf("%s: op %s %s can't have aux", loc, op.name, op.aux)
}
}
return
}
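The restriction split at the top of parseValue classifies tokens purely by their leading delimiter. A toy, runnable version of just that step; strings.Fields stands in for the real split, which also respects nested (), {}, [], and <>:

package main

import (
	"fmt"
	"strings"
)

// classify mimics the delimiter switch in parseValue (toy version).
func classify(tokens []string) (typ, auxint, aux string, args []string) {
	for _, a := range tokens[1:] {
		switch a[0] {
		case '<':
			typ = a[1 : len(a)-1]
		case '[':
			auxint = a[1 : len(a)-1]
		case '{':
			aux = a[1 : len(a)-1]
		default:
			args = append(args, a)
		}
	}
	return
}

func main() {
	typ, auxint, aux, args := classify(strings.Fields("ADDLconst [c] x"))
	fmt.Printf("typ=%q auxint=%q aux=%q args=%v\n", typ, auxint, aux, args)
	// typ="" auxint="c" aux="" args=[x]
}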
func blockName(name string, arch arch) string {
@@ -689,6 +683,13 @@ func blockName(name string, arch arch) string {
// typeName returns the string to use to generate a type.
func typeName(typ string) string {
if typ[0] == '(' {
ts := strings.Split(typ[1:len(typ)-1], ",")
if len(ts) != 2 {
panic("Tuple expect 2 arguments")
}
return "MakeTuple(" + typeName(ts[0]) + ", " + typeName(ts[1]) + ")"
}
switch typ {
case "Flags", "Mem", "Void", "Int128":
return "Type" + typ


@@ -359,7 +359,7 @@ func (v *Value) LongHTML() string {
}
r := v.Block.Func.RegAlloc
if int(v.ID) < len(r) && r[v.ID] != nil {
s += " : " + r[v.ID].Name()
s += " : " + html.EscapeString(r[v.ID].Name())
}
s += "</span>"
return s


@@ -36,3 +36,16 @@ func (s LocalSlot) Name() string {
}
return fmt.Sprintf("%s+%d[%s]", s.N, s.Off, s.Type)
}
type LocPair [2]Location
func (t LocPair) Name() string {
n0, n1 := "nil", "nil"
if t[0] != nil {
n0 = t[0].Name()
}
if t[1] != nil {
n1 = t[1].Name()
}
return fmt.Sprintf("<%s,%s>", n0, n1)
}
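LocPair lets a tuple-producing value record one Location per result, printing nil for a slot that received no register. A small standalone illustration, with a stand-in Location implementation:

package main

import "fmt"

// Stand-ins for the ssa Location interface and the LocPair above.
type Location interface{ Name() string }

type reg string

func (r reg) Name() string { return string(r) }

type LocPair [2]Location

func (t LocPair) Name() string {
	n0, n1 := "nil", "nil"
	if t[0] != nil {
		n0 = t[0].Name()
	}
	if t[1] != nil {
		n1 = t[1].Name()
	}
	return fmt.Sprintf("<%s,%s>", n0, n1)
}

func main() {
	fmt.Println(LocPair{reg("FLAGS"), reg("AX")}.Name()) // <FLAGS,AX>
	fmt.Println(LocPair{nil, reg("DX")}.Name())          // <nil,DX>
}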


@@ -21,10 +21,15 @@ func checkLower(f *Func) {
continue // lowered
}
switch v.Op {
case OpSP, OpSB, OpInitMem, OpArg, OpPhi, OpVarDef, OpVarKill, OpVarLive, OpKeepAlive:
case OpSP, OpSB, OpInitMem, OpArg, OpPhi, OpVarDef, OpVarKill, OpVarLive, OpKeepAlive, OpSelect0, OpSelect1:
continue // ok not to lower
case OpGetG:
if f.Config.hasGReg {
// has hardware g register, regalloc takes care of it
continue // ok not to lower
}
}
s := "not lowered: " + v.Op.String() + " " + v.Type.SimpleString()
s := "not lowered: " + v.String() + ", " + v.Op.String() + " " + v.Type.SimpleString()
for _, a := range v.Args {
s += " " + a.Type.SimpleString()
}


@@ -26,7 +26,8 @@ type opInfo struct {
generic bool // this is a generic (arch-independent) opcode
rematerializeable bool // this op is rematerializeable
commutative bool // this operation is commutative (e.g. addition)
resultInArg0 bool // v and v.Args[0] must be allocated to the same register
resultInArg0 bool // last output of v and v.Args[0] must be allocated to the same register
clobberFlags bool // this op clobbers flags register
}
type inputInfo struct {
@@ -34,10 +35,15 @@ type inputInfo struct {
regs regMask // allowed input registers
}
type outputInfo struct {
idx int // index in output tuple
regs regMask // allowed output registers
}
type regInfo struct {
inputs []inputInfo // ordered in register allocation order
clobbers regMask
outputs []regMask // NOTE: values can only have 1 output for now.
outputs []outputInfo // ordered in register allocation order
}
type auxType int8
@@ -124,3 +130,31 @@ func (x ValAndOff) add(off int64) int64 {
}
return makeValAndOff(x.Val(), x.Off()+off)
}
// SizeAndAlign holds both the size and the alignment of a type,
// used in Zero and Move ops.
// The high 8 bits hold the alignment.
// The low 56 bits hold the size.
type SizeAndAlign int64
func (x SizeAndAlign) Size() int64 {
return int64(x) & (1<<56 - 1)
}
func (x SizeAndAlign) Align() int64 {
return int64(uint64(x) >> 56)
}
func (x SizeAndAlign) Int64() int64 {
return int64(x)
}
func (x SizeAndAlign) String() string {
return fmt.Sprintf("size=%d,align=%d", x.Size(), x.Align())
}
func MakeSizeAndAlign(size, align int64) SizeAndAlign {
if size&^(1<<56-1) != 0 {
panic("size too big in SizeAndAlign")
}
if align >= 1<<8 {
panic("alignment too big in SizeAndAlign")
}
return SizeAndAlign(size | align<<56)
}
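A quick standalone check of the packing arithmetic, copying the methods above into a demo program:

package main

import "fmt"

// Standalone copy of the packing scheme for a quick check.
type SizeAndAlign int64

func MakeSizeAndAlign(size, align int64) SizeAndAlign { return SizeAndAlign(size | align<<56) }
func (x SizeAndAlign) Size() int64                    { return int64(x) & (1<<56 - 1) }
func (x SizeAndAlign) Align() int64                   { return int64(uint64(x) >> 56) }

func main() {
	sa := MakeSizeAndAlign(24, 8)
	fmt.Println(sa.Size(), sa.Align()) // 24 8
	fmt.Println(int64(sa))             // 576460752303423512, i.e. 8<<56 | 24
}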

File diff suppressed because it is too large


@@ -11,4 +11,7 @@ func opt(f *Func) {
func dec(f *Func) {
applyRewrite(f, rewriteBlockdec, rewriteValuedec)
if f.Config.IntSize == 4 && f.Config.arch != "amd64p32" {
applyRewrite(f, rewriteBlockdec64, rewriteValuedec64)
}
}


@@ -206,6 +206,7 @@ type regAllocState struct {
numRegs register
SPReg register
SBReg register
GReg register
allocatable regMask
// for each block, its primary predecessor.
@@ -332,14 +333,14 @@ func (s *regAllocState) assignReg(r register, v *Value, c *Value) {
s.f.setHome(c, &s.registers[r])
}
// allocReg chooses a register for v from the set of registers in mask.
// allocReg chooses a register from the set of registers in mask.
// If there is no unused register, a Value will be kicked out of
// a register to make room.
func (s *regAllocState) allocReg(v *Value, mask regMask) register {
func (s *regAllocState) allocReg(mask regMask, v *Value) register {
mask &= s.allocatable
mask &^= s.nospill
if mask == 0 {
s.f.Fatalf("no register available")
s.f.Fatalf("no register available for %s", v)
}
// Pick an unused register if one is available.
@@ -400,7 +401,7 @@ func (s *regAllocState) allocValToReg(v *Value, mask regMask, nospill bool, line
}
// Allocate a register.
r := s.allocReg(v, mask)
r := s.allocReg(mask, v)
// Allocate v to the new register.
var c *Value
@@ -438,28 +439,76 @@ func (s *regAllocState) allocValToReg(v *Value, mask regMask, nospill bool, line
func (s *regAllocState) init(f *Func) {
s.f = f
s.registers = f.Config.registers
s.numRegs = register(len(s.registers))
if s.numRegs > noRegister || s.numRegs > register(unsafe.Sizeof(regMask(0))*8) {
panic("too many registers")
if nr := len(s.registers); nr == 0 || nr > int(noRegister) || nr > int(unsafe.Sizeof(regMask(0))*8) {
s.f.Fatalf("bad number of registers: %d", nr)
} else {
s.numRegs = register(nr)
}
// Locate SP, SB, and g registers.
s.SPReg = noRegister
s.SBReg = noRegister
s.GReg = noRegister
for r := register(0); r < s.numRegs; r++ {
if s.registers[r].Name() == "SP" {
switch s.registers[r].Name() {
case "SP":
s.SPReg = r
}
if s.registers[r].Name() == "SB" {
case "SB":
s.SBReg = r
case "g":
s.GReg = r
}
}
// Make sure we found all required registers.
switch noRegister {
case s.SPReg:
s.f.Fatalf("no SP register found")
case s.SBReg:
s.f.Fatalf("no SB register found")
case s.GReg:
if f.Config.hasGReg {
s.f.Fatalf("no g register found")
}
}
// Figure out which registers we're allowed to use.
s.allocatable = regMask(1)<<s.numRegs - 1
s.allocatable = s.f.Config.gpRegMask | s.f.Config.fpRegMask
s.allocatable &^= 1 << s.SPReg
s.allocatable &^= 1 << s.SBReg
if s.f.Config.ctxt.Framepointer_enabled {
s.allocatable &^= 1 << 5 // BP
if s.f.Config.hasGReg {
s.allocatable &^= 1 << s.GReg
}
if s.f.Config.ctxt.Framepointer_enabled && s.f.Config.FPReg >= 0 {
s.allocatable &^= 1 << uint(s.f.Config.FPReg)
}
if s.f.Config.ctxt.Flag_dynlink {
s.allocatable &^= 1 << 15 // R15
switch s.f.Config.arch {
case "amd64":
s.allocatable &^= 1 << 15 // R15
case "arm":
s.allocatable &^= 1 << 9 // R9
case "arm64":
// nothing to do?
case "386":
// nothing to do.
// Note that for Flag_shared (position independent code)
// we do need to be careful, but that carefulness is hidden
// in the rewrite rules so we always have a free register
// available for global load/stores. See gen/386.rules (search for Flag_shared).
default:
s.f.Config.fe.Unimplementedf(0, "arch %s not implemented", s.f.Config.arch)
}
}
if s.f.Config.nacl {
switch s.f.Config.arch {
case "arm":
s.allocatable &^= 1 << 9 // R9 is "thread pointer" on nacl/arm
case "amd64p32":
s.allocatable &^= 1 << 5 // BP - reserved for nacl
s.allocatable &^= 1 << 15 // R15 - reserved for nacl
}
}
if s.f.Config.use387 {
s.allocatable &^= 1 << 15 // X7 disallowed (one 387 register is used as scratch space during SSE->387 generation in ../x86/387.go)
}
s.regs = make([]regState, s.numRegs)
@@ -467,11 +516,13 @@ func (s *regAllocState) init(f *Func) {
s.orig = make([]*Value, f.NumValues())
for _, b := range f.Blocks {
for _, v := range b.Values {
if !v.Type.IsMemory() && !v.Type.IsVoid() && !v.Type.IsFlags() {
if !v.Type.IsMemory() && !v.Type.IsVoid() && !v.Type.IsFlags() && !v.Type.IsTuple() {
s.values[v.ID].needReg = true
s.values[v.ID].rematerializeable = v.rematerializeable()
s.orig[v.ID] = v
}
// Note: needReg is false for values returning Tuple types.
// Instead, we mark the corresponding Selects as needReg.
}
}
s.computeLive()
@@ -564,9 +615,9 @@ func (s *regAllocState) setState(regs []endReg) {
func (s *regAllocState) compatRegs(t Type) regMask {
var m regMask
if t.IsFloat() || t == TypeInt128 {
m = 0xffff << 16 // X0-X15
m = s.f.Config.fpRegMask
} else {
m = 0xffff << 0 // AX-R15
m = s.f.Config.gpRegMask
}
return m & s.allocatable
}
@@ -786,6 +837,9 @@ func (s *regAllocState) regalloc(f *Func) {
if phiRegs[i] != noRegister {
continue
}
if s.f.Config.use387 && v.Type.IsFloat() {
continue // 387 can't handle floats in registers between blocks
}
m := s.compatRegs(v.Type) &^ phiUsed &^ s.used
if m != 0 {
r := pickReg(m)
@@ -915,6 +969,7 @@ func (s *regAllocState) regalloc(f *Func) {
if s.f.pass.debug > regDebug {
fmt.Printf(" processing %s\n", v.LongString())
}
regspec := opcodeTable[v.Op].reg
if v.Op == OpPhi {
f.Fatalf("phi %s not at start of block", v)
}
@@ -930,6 +985,28 @@ func (s *regAllocState) regalloc(f *Func) {
s.advanceUses(v)
continue
}
if v.Op == OpSelect0 || v.Op == OpSelect1 {
if s.values[v.ID].needReg {
var i = 0
if v.Op == OpSelect1 {
i = 1
}
s.assignReg(register(s.f.getHome(v.Args[0].ID).(LocPair)[i].(*Register).Num), v, v)
}
b.Values = append(b.Values, v)
s.advanceUses(v)
goto issueSpill
}
if v.Op == OpGetG && s.f.Config.hasGReg {
// use hardware g register
if s.regs[s.GReg].v != nil {
s.freeReg(s.GReg) // kick out the old value
}
s.assignReg(s.GReg, v, v)
b.Values = append(b.Values, v)
s.advanceUses(v)
goto issueSpill
}
if v.Op == OpArg {
// Args are "pre-spilled" values. We don't allocate
// any register here. We just set up the spill pointer to
@@ -957,7 +1034,6 @@ func (s *regAllocState) regalloc(f *Func) {
b.Values = append(b.Values, v)
continue
}
regspec := opcodeTable[v.Op].reg
if len(regspec.inputs) == 0 && len(regspec.outputs) == 0 {
// No register allocation required (or none specified yet)
s.freeRegs(regspec.clobbers)
@@ -1002,10 +1078,6 @@ func (s *regAllocState) regalloc(f *Func) {
args = append(args[:0], v.Args...)
for _, i := range regspec.inputs {
mask := i.regs
if mask == flagRegMask {
// TODO: remove flag input from regspec.inputs.
continue
}
if mask&s.values[args[i.idx].ID].regs == 0 {
// Need a new register for the input.
mask &= s.allocatable
@@ -1115,49 +1187,73 @@ func (s *regAllocState) regalloc(f *Func) {
// Dump any registers which will be clobbered
s.freeRegs(regspec.clobbers)
// Pick register for output.
if s.values[v.ID].needReg {
mask := regspec.outputs[0] & s.allocatable
if opcodeTable[v.Op].resultInArg0 {
if !opcodeTable[v.Op].commutative {
// Output must use the same register as input 0.
r := register(s.f.getHome(args[0].ID).(*Register).Num)
mask = regMask(1) << r
} else {
// Output must use the same register as input 0 or 1.
r0 := register(s.f.getHome(args[0].ID).(*Register).Num)
r1 := register(s.f.getHome(args[1].ID).(*Register).Num)
// Check r0 and r1 for desired output register.
found := false
for _, r := range dinfo[idx].out {
if (r == r0 || r == r1) && (mask&^s.used)>>r&1 != 0 {
mask = regMask(1) << r
found = true
if r == r1 {
args[0], args[1] = args[1], args[0]
// Pick registers for outputs.
{
outRegs := [2]register{noRegister, noRegister}
var used regMask
for _, out := range regspec.outputs {
mask := out.regs & s.allocatable &^ used
if mask == 0 {
continue
}
if opcodeTable[v.Op].resultInArg0 && out.idx == len(regspec.outputs)-1 {
if !opcodeTable[v.Op].commutative {
// Output must use the same register as input 0.
r := register(s.f.getHome(args[0].ID).(*Register).Num)
mask = regMask(1) << r
} else {
// Output must use the same register as input 0 or 1.
r0 := register(s.f.getHome(args[0].ID).(*Register).Num)
r1 := register(s.f.getHome(args[1].ID).(*Register).Num)
// Check r0 and r1 for desired output register.
found := false
for _, r := range dinfo[idx].out {
if (r == r0 || r == r1) && (mask&^s.used)>>r&1 != 0 {
mask = regMask(1) << r
found = true
if r == r1 {
args[0], args[1] = args[1], args[0]
}
break
}
break
}
if !found {
// Neither are desired, pick r0.
mask = regMask(1) << r0
}
}
if !found {
// Neither are desired, pick r0.
mask = regMask(1) << r0
}
for _, r := range dinfo[idx].out {
if r != noRegister && (mask&^s.used)>>r&1 != 0 {
// Desired register is allowed and unused.
mask = regMask(1) << r
break
}
}
// Avoid registers we're saving for other values.
if mask&^desired.avoid != 0 {
mask &^= desired.avoid
}
r := s.allocReg(mask, v)
outRegs[out.idx] = r
used |= regMask(1) << r
}
for _, r := range dinfo[idx].out {
if r != noRegister && (mask&^s.used)>>r&1 != 0 {
// Desired register is allowed and unused.
mask = regMask(1) << r
break
// Record register choices
if v.Type.IsTuple() {
var outLocs LocPair
if r := outRegs[0]; r != noRegister {
outLocs[0] = &s.registers[r]
}
if r := outRegs[1]; r != noRegister {
outLocs[1] = &s.registers[r]
}
s.f.setHome(v, outLocs)
// Note that subsequent SelectX instructions will do the assignReg calls.
} else {
if r := outRegs[0]; r != noRegister {
s.assignReg(r, v, v)
}
}
// Avoid registers we're saving for other values.
if mask&^desired.avoid != 0 {
mask &^= desired.avoid
}
r := s.allocReg(v, mask)
s.assignReg(r, v, v)
}
// Issue the Value itself.
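Stripped of the resultInArg0 and desired-register logic, the output loop above walks regspec.outputs, masks out registers already taken by earlier outputs of the same value, and records each pick. A toy version of that core, with made-up masks and a simplified pickReg:

package main

import "fmt"

// pickReg returns the lowest set bit of a nonzero mask, like the
// allocator's pickReg.
func pickReg(mask uint64) uint {
	var r uint
	for mask&1 == 0 {
		mask >>= 1
		r++
	}
	return r
}

func main() {
	outputs := []uint64{1 << 0, 0xffff} // fixed single-register output, then a general one
	var used uint64
	regs := make([]uint, len(outputs))
	for i, mask := range outputs {
		mask &^= used // earlier outputs of this value block their registers
		r := pickReg(mask)
		regs[i] = r
		used |= 1 << r
	}
	fmt.Println(regs) // [0 1]: the second output steps around register 0
}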
@@ -1176,6 +1272,7 @@ func (s *regAllocState) regalloc(f *Func) {
// f()
// }
// It would be good to have both spill and restore inside the IF.
issueSpill:
if s.values[v.ID].needReg {
spill := b.NewValue1(v.Line, OpStoreReg, v.Type, v)
s.setOrig(spill, v)
@@ -1194,9 +1291,10 @@ func (s *regAllocState) regalloc(f *Func) {
if s.f.pass.debug > regDebug {
fmt.Printf(" processing control %s\n", v.LongString())
}
// TODO: regspec for block control values, instead of using
// register set from the control op's output.
s.allocValToReg(v, opcodeTable[v.Op].reg.outputs[0], false, b.Line)
// We assume that a control input can be passed in any
// type-compatible register. If this turns out not to be true,
// we'll need to introduce a regspec for a block's control value.
s.allocValToReg(v, s.compatRegs(v.Type), false, b.Line)
// Remove this use from the uses list.
vi := &s.values[v.ID]
u := vi.uses
@@ -1208,6 +1306,11 @@ func (s *regAllocState) regalloc(f *Func) {
s.freeUseRecords = u
}
// Spill any values that can't live across basic block boundaries.
if s.f.Config.use387 {
s.freeRegs(s.f.Config.fpRegMask)
}
// If we are approaching a merge point and we are the primary
// predecessor of it, find live values that we use soon after
// the merge point and promote them to registers now.
@@ -1231,6 +1334,9 @@ func (s *regAllocState) regalloc(f *Func) {
continue
}
v := s.orig[vid]
if s.f.Config.use387 && v.Type.IsFloat() {
continue // 387 can't handle floats in registers between blocks
}
m := s.compatRegs(v.Type) &^ s.used
if m&^desired.avoid != 0 {
m &^= desired.avoid
@@ -1769,6 +1875,9 @@ func (e *edgeState) processDest(loc Location, vid ID, splice **Value) bool {
(*splice).Uses--
*splice = occupant.c
occupant.c.Uses++
if occupant.c.Op == OpStoreReg {
e.s.lateSpillUse(vid)
}
}
// Note: if splice==nil then c will appear dead. This is
// non-SSA formed code, so be careful after this pass not to run
@@ -2010,6 +2119,8 @@ func (e *edgeState) findRegFor(typ Type) Location {
return nil
}
// rematerializeable reports whether the register allocator should recompute
// a value instead of spilling/restoring it.
func (v *Value) rematerializeable() bool {
if !opcodeTable[v.Op].rematerializeable {
return false


@@ -205,6 +205,11 @@ func is32Bit(n int64) bool {
return n == int64(int32(n))
}
// is16Bit reports whether n can be represented as a signed 16 bit integer.
func is16Bit(n int64) bool {
return n == int64(int16(n))
}
// b2i translates a boolean value to 0 or 1 for assigning to auxInt.
func b2i(b bool) int64 {
if b {
@@ -254,50 +259,17 @@ func isSamePtr(p1, p2 *Value) bool {
return false
}
// DUFFZERO consists of repeated blocks of 4 MOVUPSs + ADD,
// See runtime/mkduff.go.
const (
dzBlocks = 16 // number of MOV/ADD blocks
dzBlockLen = 4 // number of clears per block
dzBlockSize = 19 // size of instructions in a single block
dzMovSize = 4 // size of single MOV instruction w/ offset
dzAddSize = 4 // size of single ADD instruction
dzClearStep = 16 // number of bytes cleared by each MOV instruction
dzTailLen = 4 // number of final STOSQ instructions
dzTailSize = 2 // size of single STOSQ instruction
dzClearLen = dzClearStep * dzBlockLen // bytes cleared by one block
dzSize = dzBlocks * dzBlockSize
)
func duffStart(size int64) int64 {
x, _ := duff(size)
return x
}
func duffAdj(size int64) int64 {
_, x := duff(size)
return x
}
// duff returns the offset (from duffzero, in bytes) and pointer adjust (in bytes)
// required to use the duffzero mechanism for a block of the given size.
func duff(size int64) (int64, int64) {
if size < 32 || size > 1024 || size%dzClearStep != 0 {
panic("bad duffzero size")
// moveSize returns the number of bytes an aligned MOV instruction moves
func moveSize(align int64, c *Config) int64 {
switch {
case align%8 == 0 && c.IntSize == 8:
return 8
case align%4 == 0:
return 4
case align%2 == 0:
return 2
}
// TODO: arch-dependent
steps := size / dzClearStep
blocks := steps / dzBlockLen
steps %= dzBlockLen
off := dzBlockSize * (dzBlocks - blocks)
var adj int64
if steps != 0 {
off -= dzAddSize
off -= dzMovSize * steps
adj -= dzClearStep * (dzBlockLen - steps)
}
return off, adj
return 1
}
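moveSize drives Move/Zero lowering: the copy width is the largest power of two permitted by both the alignment and the target word size. A standalone restatement, with IntSize passed explicitly for the demo:

package main

import "fmt"

// moveSize restated with the target word size as an explicit argument.
func moveSize(align, intSize int64) int64 {
	switch {
	case align%8 == 0 && intSize == 8:
		return 8
	case align%4 == 0:
		return 4
	case align%2 == 0:
		return 2
	}
	return 1
}

func main() {
	fmt.Println(16 / moveSize(8, 8)) // 2: two 8-byte MOVs copy 16 aligned bytes
	fmt.Println(16 / moveSize(2, 8)) // 8: 2-byte alignment forces 2-byte MOVs
}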
// mergePoint finds a block among a's blocks which dominates b and is itself

File diff suppressed because it is too large

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff