// ryujinx-mirror/ARMeilleure/Instructions/InstEmitSimdMemory32.cs

using ARMeilleure.Decoders;
using ARMeilleure.IntermediateRepresentation;
using ARMeilleure.State;
using ARMeilleure.Translation;
using static ARMeilleure.Instructions.InstEmitHelper;
using static ARMeilleure.Instructions.InstEmitMemoryHelper;
using static ARMeilleure.IntermediateRepresentation.Operand.Factory;
namespace ARMeilleure.Instructions
{
static partial class InstEmit32
{
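// VLD1..VLD4 and VST1..VST4 all funnel into EmitVStoreOrLoadN; the second
// argument is the number of interleaved structure elements and the third
// selects load (true) or store (false).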
public static void Vld1(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 1, true);
}

public static void Vld2(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 2, true);
}

public static void Vld3(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 3, true);
}

public static void Vld4(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 4, true);
}

public static void Vst1(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 1, false);
}

public static void Vst2(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 2, false);
}

public static void Vst3(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 3, false);
}

public static void Vst4(ArmEmitterContext context)
{
EmitVStoreOrLoadN(context, 4, false);
}
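
// Emits a VLDn/VSTn access. OpCode32SimdMemSingle covers the single-element
// (and load-to-all-lanes) forms; OpCode32SimdMemPair covers the
// multiple-structures forms that transfer whole registers.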
public static void EmitVStoreOrLoadN(ArmEmitterContext context, int count, bool load)
{
if (context.CurrOp is OpCode32SimdMemSingle)
{
OpCode32SimdMemSingle op = (OpCode32SimdMemSingle)context.CurrOp;
int eBytes = 1 << op.Size;
Operand n = context.Copy(GetIntA32(context, op.Rn));
// TODO: Check alignment.
int offset = 0;
int d = op.Vd;
for (int i = 0; i < count; i++)
{
// Load or store one element of a double-word (D) SIMD register.
Operand address = context.Add(n, Const(offset));
if (eBytes == 8)
{
if (load)
{
EmitDVectorLoad(context, address, d);
}
else
{
EmitDVectorStore(context, address, d);
}
}
else
{
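// Each IR vector is a Q register holding two D registers: (d & 1) selects
// the low or high D half, shifted by log2(elements per D) = 3 - op.Size.
// E.g. with 16-bit elements (Size = 1), d = 3 maps to lanes 4..7 of Q1.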
int index = ((d & 1) << (3 - op.Size)) + op.Index;
if (load)
{
if (op.Replicate)
{
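// Replicate ("to all lanes") form: broadcast the loaded element to every
// lane of the destination register(s). Only VLD1 may broadcast into more
// than one register (op.Increment), so regs is forced to 1 when count > 1.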
int regs = (count > 1) ? 1 : op.Increment;
for (int reg = 0; reg < regs; reg++)
{
int dreg = reg + d;
int rIndex = ((dreg & 1) << (3 - op.Size));
int limit = rIndex + (1 << (3 - op.Size));
while (rIndex < limit)
{
EmitLoadSimd(context, address, GetVecA32(dreg >> 1), dreg >> 1, rIndex++, op.Size);
}
}
}
else
{
EmitLoadSimd(context, address, GetVecA32(d >> 1), d >> 1, index, op.Size);
}
}
else
{
EmitStoreSimd(context, address, d >> 1, index, op.Size);
}
}
offset += eBytes;
d += op.Increment;
}
if (op.WBack)
{
if (op.RegisterIndex)
{
Operand m = GetIntA32(context, op.Rm);
SetIntA32(context, op.Rn, context.Add(n, m));
}
else
{
SetIntA32(context, op.Rn, context.Add(n, Const(count * eBytes)));
}
}
}
else
{
OpCode32SimdMemPair op = (OpCode32SimdMemPair)context.CurrOp;
int eBytes = 1 << op.Size;
Operand n = context.Copy(GetIntA32(context, op.Rn));
int offset = 0;
int d = op.Vd;
for (int reg = 0; reg < op.Regs; reg++)
{
for (int elem = 0; elem < op.Elems; elem++)
{
int elemD = d + reg;
for (int i = 0; i < count; i++)
{
// Load or store one element per structure; the address advances by
// eBytes after each element.
Operand address = context.Add(n, Const(offset));
int index = ((elemD & 1) << (3 - op.Size)) + elem;
if (eBytes == 8)
{
if (load)
{
EmitDVectorLoad(context, address, elemD);
}
else
{
EmitDVectorStore(context, address, elemD);
}
}
else
{
if (load)
{
EmitLoadSimd(context, address, GetVecA32(elemD >> 1), elemD >> 1, index, op.Size);
}
else
{
EmitStoreSimd(context, address, elemD >> 1, index, op.Size);
}
}
offset += eBytes;
elemD += op.Increment;
}
}
}
if (op.WBack)
{
if (op.RegisterIndex)
{
Operand m = GetIntA32(context, op.Rm);
SetIntA32(context, op.Rn, context.Add(n, m));
}
else
{
SetIntA32(context, op.Rn, context.Add(n, Const(count * 8 * op.Regs)));
}
}
}
}
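
// VLDM: loads op.RegisterRange consecutive 32-bit words from [Rn + Offset]
// into successive S-register slots (a D register occupies two S slots).
// A non-zero PostOffset encodes base-register writeback.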
public static void Vldm(ArmEmitterContext context)
{
OpCode32SimdMemMult op = (OpCode32SimdMemMult)context.CurrOp;
Operand n = context.Copy(GetIntA32(context, op.Rn));
Operand baseAddress = context.Add(n, Const(op.Offset));
bool writeBack = op.PostOffset != 0;
if (writeBack)
{
SetIntA32(context, op.Rn, context.Add(n, Const(op.PostOffset)));
}
int range = op.RegisterRange;
int sReg = (op.DoubleWidth) ? (op.Vd << 1) : op.Vd;
int offset = 0;
int byteSize = 4;
for (int num = 0; num < range; num++, sReg++)
{
Operand address = context.Add(baseAddress, Const(offset));
Operand vec = GetVecA32(sReg >> 2);
EmitLoadSimd(context, address, vec, sReg >> 2, sReg & 3, WordSizeLog2);
offset += byteSize;
}
}
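
// VSTM: store counterpart of Vldm; writes the same register run to memory.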
public static void Vstm(ArmEmitterContext context)
{
OpCode32SimdMemMult op = (OpCode32SimdMemMult)context.CurrOp;
Operand n = context.Copy(GetIntA32(context, op.Rn));
Operand baseAddress = context.Add(n, Const(op.Offset));
bool writeBack = op.PostOffset != 0;
if (writeBack)
{
SetIntA32(context, op.Rn, context.Add(n, Const(op.PostOffset)));
}
int offset = 0;
int range = op.RegisterRange;
int sReg = (op.DoubleWidth) ? (op.Vd << 1) : op.Vd;
int byteSize = 4;
for (int num = 0; num < range; num++, sReg++)
{
Operand address = context.Add(baseAddress, Const(offset));
EmitStoreSimd(context, address, sReg >> 2, sReg & 3, WordSizeLog2);
offset += byteSize;
}
}
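
// VLDR and VSTR transfer a single S or D register at an immediate offset
// from Rn; both share EmitVLoadOrStore below.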
public static void Vldr(ArmEmitterContext context)
{
EmitVLoadOrStore(context, AccessType.Load);
}

public static void Vstr(ArmEmitterContext context)
{
EmitVLoadOrStore(context, AccessType.Store);
}
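
// Stores a D register as two consecutive 32-bit words, swapping the word
// order when PSTATE.E (big-endian data accesses) is set.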
private static void EmitDVectorStore(ArmEmitterContext context, Operand address, int vecD)
{
int vecQ = vecD >> 1;
int vecSElem = (vecD & 1) << 1;
Operand lblBigEndian = Label();
Operand lblEnd = Label();
context.BranchIfTrue(lblBigEndian, GetFlag(PState.EFlag));
EmitStoreSimd(context, address, vecQ, vecSElem, WordSizeLog2);
EmitStoreSimd(context, context.Add(address, Const(4)), vecQ, vecSElem | 1, WordSizeLog2);
context.Branch(lblEnd);
context.MarkLabel(lblBigEndian);
EmitStoreSimd(context, address, vecQ, vecSElem | 1, WordSizeLog2);
EmitStoreSimd(context, context.Add(address, Const(4)), vecQ, vecSElem, WordSizeLog2);
context.MarkLabel(lblEnd);
}
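
// Load counterpart of EmitDVectorStore, with the same endianness handling.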
private static void EmitDVectorLoad(ArmEmitterContext context, Operand address, int vecD)
{
int vecQ = vecD >> 1;
int vecSElem = (vecD & 1) << 1;
Operand vec = GetVecA32(vecQ);
Operand lblBigEndian = Label();
Operand lblEnd = Label();
context.BranchIfTrue(lblBigEndian, GetFlag(PState.EFlag));
EmitLoadSimd(context, address, vec, vecQ, vecSElem, WordSizeLog2);
EmitLoadSimd(context, context.Add(address, Const(4)), vec, vecQ, vecSElem | 1, WordSizeLog2);
context.Branch(lblEnd);
context.MarkLabel(lblBigEndian);
EmitLoadSimd(context, address, vec, vecQ, vecSElem | 1, WordSizeLog2);
EmitLoadSimd(context, context.Add(address, Const(4)), vec, vecQ, vecSElem, WordSizeLog2);
context.MarkLabel(lblEnd);
}
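
// Shared emitter for VLDR/VSTR: computes Rn +/- imm, then transfers either
// a whole D register or a single lane of the Q vector backing the S register.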
private static void EmitVLoadOrStore(ArmEmitterContext context, AccessType accType)
{
OpCode32SimdMemImm op = (OpCode32SimdMemImm)context.CurrOp;
Operand n = context.Copy(GetIntA32(context, op.Rn));
Operand m = GetMemM(context, setCarry: false);
Operand address = op.Add
? context.Add(n, m)
: context.Subtract(n, m);
int size = op.Size;
if ((accType & AccessType.Load) != 0)
{
if (size == DWordSizeLog2)
{
EmitDVectorLoad(context, address, op.Vd);
}
else
{
Operand vec = GetVecA32(op.Vd >> 2);
EmitLoadSimd(context, address, vec, op.Vd >> 2, (op.Vd & 3) << (2 - size), size);
}
}
else
{
if (size == DWordSizeLog2)
{
EmitDVectorStore(context, address, op.Vd);
}
else
{
EmitStoreSimd(context, address, op.Vd >> 2, (op.Vd & 3) << (2 - size), size);
}
}
}
}
}