MdePkg: Clean up source files

1. Do not use tab characters
2. No trailing white space at the end of lines
3. All files must end with CRLF

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
Liming Gao
2018-06-27 21:11:33 +08:00
parent d1102dba72
commit 9095d37b8f
729 changed files with 15683 additions and 15683 deletions
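
For reference, the three rules above are purely mechanical, so they can be checked automatically before a commit. Below is a minimal sketch of such a checker — a hypothetical standalone script, not part of this commit and not edk2's own PatchCheck tooling — that reads each file as raw bytes so the CRLF line endings can be inspected directly:

import sys

def check_file(path):
    """Report violations of the three clean-up rules: no tab characters,
    no trailing white space, and CRLF line endings throughout."""
    problems = []
    with open(path, "rb") as f:
        data = f.read()
    if data and not data.endswith(b"\r\n"):
        problems.append((0, "file does not end with CRLF"))
    for number, line in enumerate(data.split(b"\r\n"), start=1):
        if b"\n" in line:           # a bare LF here means this line was not CRLF-terminated
            problems.append((number, "line ending is not CRLF"))
        if b"\t" in line:
            problems.append((number, "tab character"))
        if line != line.rstrip():   # spaces, tabs, or a stray CR before the line break
            problems.append((number, "trailing white space"))
    return problems

if __name__ == "__main__":
    for name in sys.argv[1:]:
        for number, message in check_file(name):
            print(f"{name}:{number}: {message}")

Run against the tree (e.g. on every .c/.h/.S/.asm/.inf file), an empty report means all three rules hold.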

View File

@@ -1,8 +1,8 @@
#------------------------------------------------------------------------------
#
# CpuBreakpoint() for ARM
#
-# Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
@@ -32,5 +32,5 @@ GCC_ASM_EXPORT(CpuBreakpoint)
# );
#
ASM_PFX(CpuBreakpoint):
swi 0xdbdbdb
bx lr

View File

@@ -1,8 +1,8 @@
;------------------------------------------------------------------------------
;
; CpuBreakpoint() for ARM
;
-; Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License
@@ -37,5 +37,5 @@
CpuBreakpoint
swi 0xdbdbdb
bx lr
END

View File

@@ -1,8 +1,8 @@
;------------------------------------------------------------------------------
;
; CpuPause() for ARM
;
-; Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License

View File

@@ -1,8 +1,8 @@
#------------------------------------------------------------------------------
#
# DisableInterrupts() for ARM
#
-# Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License

View File

@@ -1,8 +1,8 @@
;------------------------------------------------------------------------------
;
; DisableInterrupts() for ARM
;
-; Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License
@@ -33,5 +33,5 @@ DisableInterrupts
ORR R0,R0,#0x80 ;Disable IRQ interrupts
MSR CPSR_c,R0
BX LR
END

View File

@@ -1,8 +1,8 @@
#------------------------------------------------------------------------------
#
# EnableInterrupts() for ARM
#
-# Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License

View File

@@ -1,8 +1,8 @@
;------------------------------------------------------------------------------
;
; EnableInterrupts() for ARM
;
-; Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License
@@ -33,5 +33,5 @@ EnableInterrupts
BIC R0,R0,#0x80 ;Enable IRQ interrupts
MSR CPSR_c,R0
BX LR
END

View File

@@ -1,8 +1,8 @@
#------------------------------------------------------------------------------
#
# GetInterruptState() function for ARM
#
-# Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License

View File

@@ -1,8 +1,8 @@
;------------------------------------------------------------------------------
;
; GetInterruptState() function for ARM
;
-; Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License
@@ -41,5 +41,5 @@ GetInterruptState
MOVEQ R0, #1
MOVNE R0, #0
BX LR
END

View File

@@ -1,7 +1,7 @@
/** @file
SwitchStack() function for ARM.
-Copyright (c) 2006 - 2010, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
@@ -39,7 +39,7 @@ InternalSwitchStackAsm (
IN VOID *NewStack
);
/**
Transfers control to a function starting with a new stack.

View File

@@ -1,8 +1,8 @@
#------------------------------------------------------------------------------
#
# Replacement for Math64.c that is coded to use older GCC intrinsics.
# Doing this reduces the number of intrinsics that are required when
# you port to a new version of gcc.
#
# Need to split this into multple files to size optimize the image.
#
@@ -17,253 +17,253 @@
#
#------------------------------------------------------------------------------
.text
.align 2
GCC_ASM_EXPORT(InternalMathLShiftU64)
ASM_PFX(InternalMathLShiftU64):
stmfd sp!, {r4, r5, r6}
mov r6, r1
rsb ip, r2, #32
mov r4, r6, asl r2
subs r1, r2, #32
orr r4, r4, r0, lsr ip
mov r3, r0, asl r2
movpl r4, r0, asl r1
mov r5, r0
mov r0, r3
mov r1, r4
ldmfd sp!, {r4, r5, r6}
bx lr
.align 2
GCC_ASM_EXPORT(InternalMathRShiftU64)
ASM_PFX(InternalMathRShiftU64):
stmfd sp!, {r4, r5, r6}
mov r5, r0
rsb ip, r2, #32
mov r3, r5, lsr r2
subs r0, r2, #32
orr r3, r3, r1, asl ip
mov r4, r1, lsr r2
movpl r3, r1, lsr r0
mov r6, r1
mov r0, r3
mov r1, r4
ldmfd sp!, {r4, r5, r6}
bx lr
.align 2
GCC_ASM_EXPORT(InternalMathARShiftU64)
ASM_PFX(InternalMathARShiftU64):
stmfd sp!, {r4, r5, r6}
mov r5, r0
rsb ip, r2, #32
mov r3, r5, lsr r2
subs r0, r2, #32
orr r3, r3, r1, asl ip
mov r4, r1, asr r2
movpl r3, r1, asr r0
mov r6, r1
mov r0, r3
mov r1, r4
ldmfd sp!, {r4, r5, r6}
bx lr
.align 2
GCC_ASM_EXPORT(InternalMathLRotU64)
ASM_PFX(InternalMathLRotU64):
stmfd sp!, {r4, r5, r6, r7, lr}
add r7, sp, #12
mov r6, r1
rsb ip, r2, #32
mov r4, r6, asl r2
rsb lr, r2, #64
subs r1, r2, #32
orr r4, r4, r0, lsr ip
mov r3, r0, asl r2
movpl r4, r0, asl r1
sub ip, r2, #32
mov r5, r0
mov r0, r0, lsr lr
rsbs r2, r2, #32
orr r0, r0, r6, asl ip
mov r1, r6, lsr lr
movpl r0, r6, lsr r2
orr r1, r1, r4
orr r0, r0, r3
ldmfd sp!, {r4, r5, r6, r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathRRotU64)
ASM_PFX(InternalMathRRotU64):
stmfd sp!, {r4, r5, r6, r7, lr}
add r7, sp, #12
mov r5, r0
rsb ip, r2, #32
mov r3, r5, lsr r2
rsb lr, r2, #64
subs r0, r2, #32
orr r3, r3, r1, asl ip
mov r4, r1, lsr r2
movpl r3, r1, lsr r0
sub ip, r2, #32
mov r6, r1
mov r1, r1, asl lr
rsbs r2, r2, #32
orr r1, r1, r5, lsr ip
mov r0, r5, asl lr
movpl r1, r5, asl r2
orr r0, r0, r3
orr r1, r1, r4
ldmfd sp!, {r4, r5, r6, r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathMultU64x32)
ASM_PFX(InternalMathMultU64x32):
stmfd sp!, {r7, lr}
add r7, sp, #0
mov r3, #0
mov ip, r0
mov lr, r1
umull r0, r1, ip, r2
mla r1, lr, r2, r1
mla r1, ip, r3, r1
ldmfd sp!, {r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathMultU64x64)
ASM_PFX(InternalMathMultU64x64):
stmfd sp!, {r7, lr}
add r7, sp, #0
mov ip, r0
mov lr, r1
umull r0, r1, ip, r2
mla r1, lr, r2, r1
mla r1, ip, r3, r1
ldmfd sp!, {r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathDivU64x32)
ASM_PFX(InternalMathDivU64x32):
stmfd sp!, {r7, lr}
add r7, sp, #0
mov r3, #0
bl ASM_PFX(__udivdi3)
ldmfd sp!, {r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathModU64x32)
ASM_PFX(InternalMathModU64x32):
stmfd sp!, {r7, lr}
add r7, sp, #0
mov r3, #0
bl ASM_PFX(__umoddi3)
ldmfd sp!, {r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathDivRemU64x32)
ASM_PFX(InternalMathDivRemU64x32):
stmfd sp!, {r4, r5, r6, r7, lr}
add r7, sp, #12
stmfd sp!, {r10, r11}
subs r6, r3, #0
mov r10, r0
mov r11, r1
moveq r4, r2
moveq r5, #0
beq L22
mov r4, r2
mov r5, #0
mov r3, #0
bl ASM_PFX(__umoddi3)
str r0, [r6, #0]
L22:
mov r0, r10
mov r1, r11
mov r2, r4
mov r3, r5
bl ASM_PFX(__udivdi3)
ldmfd sp!, {r10, r11}
ldmfd sp!, {r4, r5, r6, r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathDivRemU64x64)
ASM_PFX(InternalMathDivRemU64x64):
stmfd sp!, {r4, r5, r6, r7, lr}
add r7, sp, #12
stmfd sp!, {r10, r11}
ldr r6, [sp, #28]
mov r4, r0
cmp r6, #0
mov r5, r1
mov r10, r2
mov r11, r3
beq L26
bl ASM_PFX(__umoddi3)
stmia r6, {r0-r1}
L26:
mov r0, r4
mov r1, r5
mov r2, r10
mov r3, r11
bl ASM_PFX(__udivdi3)
ldmfd sp!, {r10, r11}
ldmfd sp!, {r4, r5, r6, r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathDivRemS64x64)
ASM_PFX(InternalMathDivRemS64x64):
stmfd sp!, {r4, r5, r6, r7, lr}
add r7, sp, #12
stmfd sp!, {r10, r11}
ldr r6, [sp, #28]
mov r4, r0
cmp r6, #0
mov r5, r1
mov r10, r2
mov r11, r3
beq L30
bl ASM_PFX(__moddi3)
stmia r6, {r0-r1}
L30:
mov r0, r4
mov r1, r5
mov r2, r10
mov r3, r11
bl ASM_PFX(__divdi3)
ldmfd sp!, {r10, r11}
ldmfd sp!, {r4, r5, r6, r7, pc}
.align 2
GCC_ASM_EXPORT(InternalMathSwapBytes64)
ASM_PFX(InternalMathSwapBytes64):
stmfd sp!, {r4, r5, r7, lr}
mov r5, r1
bl ASM_PFX(SwapBytes32)
mov r4, r0
mov r0, r5
bl ASM_PFX(SwapBytes32)
mov r1, r4
ldmfd sp!, {r4, r5, r7, pc}
ASM_FUNCTION_REMOVE_IF_UNREFERENCED

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License

View File

@@ -1,6 +1,6 @@
;------------------------------------------------------------------------------
;
-; Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License

View File

@@ -1,6 +1,6 @@
//------------------------------------------------------------------------------
//
-// Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+// Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
// Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
// Portions copyright (c) 2011, ARM Limited. All rights reserved.<BR>
// This program and the accompanying materials
@@ -12,13 +12,13 @@
// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
//
//------------------------------------------------------------------------------
.text
.align 5
GCC_ASM_EXPORT(InternalSwitchStackAsm)
GCC_ASM_EXPORT(CpuPause)
/**
//
// This allows the caller to switch the stack and goes to the new entry point

View File

@@ -1,6 +1,6 @@
;------------------------------------------------------------------------------
;
-; Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License
@@ -11,11 +11,11 @@
; WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
;
;------------------------------------------------------------------------------
EXPORT InternalSwitchStackAsm
AREA Switch_Stack, CODE, READONLY
;/**
; This allows the caller to switch the stack and goes to the new entry point
;

View File

@@ -1,9 +1,9 @@
/** @file
Unaligned access functions of BaseLib for ARM.
volatile was added to work around optimization issues.
-Copyright (c) 2006 - 2010, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License

View File

@@ -22,7 +22,7 @@
FILE_GUID = 27d67720-ea68-48ae-93da-a3a074c90e30
MODULE_TYPE = BASE
VERSION_STRING = 1.1
LIBRARY_CLASS = BaseLib
#
# VALID_ARCHITECTURES = IA32 X64 IPF EBC ARM AARCH64
@@ -69,94 +69,94 @@
[Sources.Ia32]
Ia32/WriteTr.nasm
Ia32/Wbinvd.c | MSFT
Ia32/WriteMm7.c | MSFT
Ia32/WriteMm6.c | MSFT
Ia32/WriteMm5.c | MSFT
Ia32/WriteMm4.c | MSFT
Ia32/WriteMm3.c | MSFT
Ia32/WriteMm2.c | MSFT
Ia32/WriteMm1.c | MSFT
Ia32/WriteMm0.c | MSFT
Ia32/WriteLdtr.c | MSFT
Ia32/WriteIdtr.c | MSFT
Ia32/WriteGdtr.c | MSFT
Ia32/WriteDr7.c | MSFT
Ia32/WriteDr6.c | MSFT
Ia32/WriteDr5.c | MSFT
Ia32/WriteDr4.c | MSFT
Ia32/WriteDr3.c | MSFT
Ia32/WriteDr2.c | MSFT
Ia32/WriteDr1.c | MSFT
Ia32/WriteDr0.c | MSFT
Ia32/WriteCr4.c | MSFT
Ia32/WriteCr3.c | MSFT
Ia32/WriteCr2.c | MSFT
Ia32/WriteCr0.c | MSFT
Ia32/WriteMsr64.c | MSFT
Ia32/SwapBytes64.c | MSFT
Ia32/SetJump.c | MSFT
Ia32/RRotU64.c | MSFT
Ia32/RShiftU64.c | MSFT
Ia32/ReadPmc.c | MSFT
Ia32/ReadTsc.c | MSFT
Ia32/ReadLdtr.c | MSFT
Ia32/ReadIdtr.c | MSFT
Ia32/ReadGdtr.c | MSFT
Ia32/ReadTr.c | MSFT
Ia32/ReadSs.c | MSFT
Ia32/ReadGs.c | MSFT
Ia32/ReadFs.c | MSFT
Ia32/ReadEs.c | MSFT
Ia32/ReadDs.c | MSFT
Ia32/ReadCs.c | MSFT
Ia32/ReadMsr64.c | MSFT
Ia32/ReadMm7.c | MSFT
Ia32/ReadMm6.c | MSFT
Ia32/ReadMm5.c | MSFT
Ia32/ReadMm4.c | MSFT
Ia32/ReadMm3.c | MSFT
Ia32/ReadMm2.c | MSFT
Ia32/ReadMm1.c | MSFT
Ia32/ReadMm0.c | MSFT
Ia32/ReadEflags.c | MSFT
Ia32/ReadDr7.c | MSFT
Ia32/ReadDr6.c | MSFT
Ia32/ReadDr5.c | MSFT
Ia32/ReadDr4.c | MSFT
Ia32/ReadDr3.c | MSFT
Ia32/ReadDr2.c | MSFT
Ia32/ReadDr1.c | MSFT
Ia32/ReadDr0.c | MSFT
Ia32/ReadCr4.c | MSFT
Ia32/ReadCr3.c | MSFT
Ia32/ReadCr2.c | MSFT
Ia32/ReadCr0.c | MSFT
Ia32/Mwait.c | MSFT
Ia32/Monitor.c | MSFT
Ia32/ModU64x32.c | MSFT
Ia32/MultU64x64.c | MSFT
Ia32/MultU64x32.c | MSFT
Ia32/LShiftU64.c | MSFT
Ia32/LRotU64.c | MSFT
Ia32/LongJump.c | MSFT
Ia32/Invd.c | MSFT
Ia32/FxRestore.c | MSFT
Ia32/FxSave.c | MSFT
Ia32/FlushCacheLine.c | MSFT
Ia32/EnablePaging32.c | MSFT
Ia32/EnableInterrupts.c | MSFT
Ia32/EnableDisableInterrupts.c | MSFT
Ia32/DivU64x64Remainder.nasm| MSFT
Ia32/DivU64x32Remainder.c | MSFT
Ia32/DivU64x32.c | MSFT
Ia32/DisablePaging32.c | MSFT
Ia32/DisableInterrupts.c | MSFT
Ia32/CpuPause.c | MSFT
Ia32/CpuIdEx.c | MSFT
Ia32/CpuId.c | MSFT
Ia32/CpuBreakpoint.c | MSFT
Ia32/ARShiftU64.c | MSFT
Ia32/Thunk16.nasm | MSFT
Ia32/EnablePaging64.nasm| MSFT
Ia32/EnableCache.c | MSFT
@@ -258,52 +258,52 @@
Ia32/RdRand.nasm| INTEL
Ia32/GccInline.c | GCC
Ia32/Thunk16.nasm | GCC
Ia32/Thunk16.S | XCODE
Ia32/EnableDisableInterrupts.nasm| GCC
Ia32/EnableDisableInterrupts.S | GCC
Ia32/EnablePaging64.nasm| GCC
Ia32/EnablePaging64.S | GCC
Ia32/DisablePaging32.nasm| GCC
Ia32/DisablePaging32.S | GCC
Ia32/EnablePaging32.nasm| GCC
Ia32/EnablePaging32.S | GCC
Ia32/Mwait.nasm| GCC
Ia32/Mwait.S | GCC
Ia32/Monitor.nasm| GCC
Ia32/Monitor.S | GCC
Ia32/CpuIdEx.nasm| GCC
Ia32/CpuIdEx.S | GCC
Ia32/CpuId.nasm| GCC
Ia32/CpuId.S | GCC
Ia32/LongJump.nasm| GCC
Ia32/LongJump.S | GCC
Ia32/SetJump.nasm| GCC
Ia32/SetJump.S | GCC
Ia32/SwapBytes64.nasm| GCC
Ia32/SwapBytes64.S | GCC
Ia32/DivU64x64Remainder.nasm| GCC
Ia32/DivU64x64Remainder.S | GCC
Ia32/DivU64x32Remainder.nasm| GCC
Ia32/DivU64x32Remainder.S | GCC
Ia32/ModU64x32.nasm| GCC
Ia32/ModU64x32.S | GCC
Ia32/DivU64x32.nasm| GCC
Ia32/DivU64x32.S | GCC
Ia32/MultU64x64.nasm| GCC
Ia32/MultU64x64.S | GCC
Ia32/MultU64x32.nasm| GCC
Ia32/MultU64x32.S | GCC
Ia32/RRotU64.nasm| GCC
Ia32/RRotU64.S | GCC
Ia32/LRotU64.nasm| GCC
Ia32/LRotU64.S | GCC
Ia32/ARShiftU64.nasm| GCC
Ia32/ARShiftU64.S | GCC
Ia32/RShiftU64.nasm| GCC
Ia32/RShiftU64.S | GCC
Ia32/LShiftU64.nasm| GCC
Ia32/LShiftU64.S | GCC
Ia32/EnableCache.nasm| GCC
Ia32/EnableCache.S | GCC
Ia32/DisableCache.nasm| GCC
@@ -347,9 +347,9 @@
X64/DisableCache.nasm
X64/WriteTr.nasm
X64/CpuBreakpoint.c | MSFT
X64/WriteMsr64.c | MSFT
X64/ReadMsr64.c | MSFT
X64/RdRand.nasm| MSFT
X64/CpuPause.nasm| MSFT
X64/EnableDisableInterrupts.nasm| MSFT
@@ -514,28 +514,28 @@
X86RdRand.c
X86PatchInstruction.c
X64/GccInline.c | GCC
X64/Thunk16.S | XCODE
X64/SwitchStack.nasm| GCC
X64/SwitchStack.S | GCC
X64/SetJump.nasm| GCC
X64/SetJump.S | GCC
X64/LongJump.nasm| GCC
X64/LongJump.S | GCC
X64/EnableDisableInterrupts.nasm| GCC
X64/EnableDisableInterrupts.S | GCC
X64/DisablePaging64.nasm| GCC
X64/DisablePaging64.S | GCC
X64/CpuId.nasm| GCC
X64/CpuId.S | GCC
X64/CpuIdEx.nasm| GCC
X64/CpuIdEx.S | GCC
X64/EnableCache.nasm| GCC
X64/EnableCache.S | GCC
X64/DisableCache.nasm| GCC
X64/DisableCache.S | GCC
X64/RdRand.nasm| GCC
X64/RdRand.S | GCC
ChkStkGcc.c | GCC
[Sources.IPF]
Ipf/AccessGp.s

View File

@@ -1,7 +1,7 @@
/** @file
Bit field functions of BaseLib.
-Copyright (c) 2006 - 2013, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -69,13 +69,13 @@ InternalBaseLibBitFieldOrUint (
)
{
//
// Higher bits in OrData those are not used must be zero.
//
// EndBit - StartBit + 1 might be 32 while the result right shifting 32 on a 32bit integer is undefined,
// So the logic is updated to right shift (EndBit - StartBit) bits and compare the last bit directly.
//
ASSERT ((OrData >> (EndBit - StartBit)) == ((OrData >> (EndBit - StartBit)) & 1));
//
// ~((UINTN)-2 << EndBit) is a mask in which bit[0] thru bit[EndBit]
// are 1's while bit[EndBit + 1] thru the most significant bit are 0's.
@@ -111,7 +111,7 @@ InternalBaseLibBitFieldAndUint (
)
{
//
// Higher bits in AndData those are not used must be zero.
//
// EndBit - StartBit + 1 might be 32 while the result right shifting 32 on a 32bit integer is undefined,
// So the logic is updated to right shift (EndBit - StartBit) bits and compare the last bit directly.
@@ -275,7 +275,7 @@ BitFieldAnd8 (
bitwise OR, and returns the result.
Performs a bitwise AND between the bit field specified by StartBit and EndBit
in Operand and the value specified by AndData, followed by a bitwise
OR with value specified by OrData. All other bits in Operand are
preserved. The new 8-bit value is returned.
@@ -467,7 +467,7 @@ BitFieldAnd16 (
bitwise OR, and returns the result.
Performs a bitwise AND between the bit field specified by StartBit and EndBit
in Operand and the value specified by AndData, followed by a bitwise
OR with value specified by OrData. All other bits in Operand are
preserved. The new 16-bit value is returned.
@@ -659,7 +659,7 @@ BitFieldAnd32 (
bitwise OR, and returns the result.
Performs a bitwise AND between the bit field specified by StartBit and EndBit
in Operand and the value specified by AndData, followed by a bitwise
OR with value specified by OrData. All other bits in Operand are
preserved. The new 32-bit value is returned.
@@ -809,7 +809,7 @@ BitFieldOr64 (
ASSERT (EndBit < 64);
ASSERT (StartBit <= EndBit);
//
// Higher bits in OrData those are not used must be zero.
//
// EndBit - StartBit + 1 might be 64 while the result right shifting 64 on RShiftU64() API is invalid,
// So the logic is updated to right shift (EndBit - StartBit) bits and compare the last bit directly.
@@ -857,11 +857,11 @@ BitFieldAnd64 (
{
UINT64 Value1;
UINT64 Value2;
ASSERT (EndBit < 64);
ASSERT (StartBit <= EndBit);
//
// Higher bits in AndData those are not used must be zero.
//
// EndBit - StartBit + 1 might be 64 while the right shifting 64 on RShiftU64() API is invalid,
// So the logic is updated to right shift (EndBit - StartBit) bits and compare the last bit directly.
@@ -879,7 +879,7 @@ BitFieldAnd64 (
bitwise OR, and returns the result.
Performs a bitwise AND between the bit field specified by StartBit and EndBit
in Operand and the value specified by AndData, followed by a bitwise
OR with value specified by OrData. All other bits in Operand are
preserved. The new 64-bit value is returned.

View File

@@ -2,7 +2,7 @@
Utility functions to generate checksum based on 2's complement
algorithm.
-Copyright (c) 2007 - 2010, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -49,7 +49,7 @@ CalculateSum8 (
for (Sum = 0, Count = 0; Count < Length; Count++) {
Sum = (UINT8) (Sum + *(Buffer + Count));
}
return Sum;
}
@@ -128,7 +128,7 @@ CalculateSum16 (
for (Sum = 0, Count = 0; Count < Total; Count++) {
Sum = (UINT16) (Sum + *(Buffer + Count));
}
return Sum;
}
@@ -210,7 +210,7 @@ CalculateSum32 (
for (Sum = 0, Count = 0; Count < Total; Count++) {
Sum = Sum + *(Buffer + Count);
}
return Sum;
}
@@ -292,7 +292,7 @@ CalculateSum64 (
for (Sum = 0, Count = 0; Count < Total; Count++) {
Sum = Sum + *(Buffer + Count);
}
return Sum;
}

View File

@@ -1,7 +1,7 @@
/** @file
Provides hack function for passng GCC build.
-Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -17,8 +17,8 @@
/**
Hack function for passing GCC build.
**/
VOID
__chkstk()
{
}

View File

@@ -1,7 +1,7 @@
/** @file
Math worker functions.
-Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -27,7 +27,7 @@
function returns the 64-bit signed quotient.
It is the caller's responsibility to not call this function with a Divisor of 0.
If Divisor is 0, then the quotient and remainder should be assumed to be
the largest negative integer.
If Divisor is 0, then ASSERT().

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -37,7 +37,7 @@ ASM_PFX(InternalMathARShiftU64):
jnz L0
movl %eax, %edx
movl 4(%esp), %eax
L0:
shrdl %cl, %edx, %eax
sar %cl, %edx
ret

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -57,22 +57,22 @@ L2:
shrl %ecx
jnz L2
divl %ebx
movl %eax, %ebx # ebx <- quotient
movl 28(%esp), %ecx # ecx <- high dword of divisor
mull 24(%esp) # edx:eax <- quotient * divisor[0..31]
imull %ebx, %ecx # ecx <- quotient * divisor[32..63]
addl %ecx, %edx # edx <- (quotient * divisor)[32..63]
mov 32(%esp), %ecx # ecx <- addr for Remainder
jc TooLarge # product > 2^64
cmpl %edx, %edi # compare high 32 bits
ja Correct
jb TooLarge # product > dividend
cmpl %eax, %esi
jae Correct # product <= dividend
TooLarge:
decl %ebx # adjust quotient by -1
jecxz Return # return if Remainder == NULL
sub 24(%esp), %eax
sbb 28(%esp), %edx # edx:eax <- (quotient - 1) * divisor
Correct:
jecxz Return
@@ -81,7 +81,7 @@ Correct:
movl %esi, (%ecx)
movl %edi, 4(%ecx)
Return:
movl %ebx, %eax # eax <- quotient
xorl %edx, %edx # quotient is 32 bits long
pop %edi
pop %esi

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -15,7 +15,7 @@
#
# Abstract:
#
# Flush all caches with a WBINVD instruction, clear the CD bit of CR0 to 0, and clear
# the NW bit of CR0 to 0
#
# Notes:

View File

@@ -1,7 +1,7 @@
/** @file
AsmFlushCacheLine function
-Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -37,7 +37,7 @@ AsmFlushCacheLine (
)
{
//
// If the CPU does not support CLFLUSH instruction,
// then promote flush range to flush entire cache.
//
_asm {
@@ -52,7 +52,7 @@ NoClflush:
wbinvd
Done:
}
return LinearAddress;
}

View File

@@ -1,7 +1,7 @@
/** @file
GCC inline implementation of BaseLib processor specific functions.
-Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
@@ -32,7 +32,7 @@ MemoryFence (
)
{
// This is a little bit of overkill and it is more about the compiler that it is
// actually processor synchronization. This is like the _ReadWriteBarrier
// Microsoft specific intrinsic
__asm__ __volatile__ ("":::"memory");
}
@@ -65,7 +65,7 @@ EFIAPI
DisableInterrupts (
VOID
)
{
__asm__ __volatile__ ("cli"::: "memory");
}
@@ -128,13 +128,13 @@ AsmReadMsr64 (
)
{
UINT64 Data;
__asm__ __volatile__ (
"rdmsr"
: "=A" (Data) // %0
: "c" (Index) // %1
);
return Data;
}
@@ -168,7 +168,7 @@ AsmWriteMsr64 (
: "c" (Index),
"A" (Value)
);
return Value;
}
@@ -191,13 +191,13 @@ AsmReadEflags (
)
{
UINTN Eflags;
__asm__ __volatile__ (
"pushfl \n\t"
"popl %0 "
: "=r" (Eflags)
);
return Eflags;
}
@@ -220,12 +220,12 @@ AsmReadCr0 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%cr0,%0"
"movl %%cr0,%0"
: "=a" (Data)
);
return Data;
}
@@ -247,12 +247,12 @@ AsmReadCr2 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%cr2, %0"
"movl %%cr2, %0"
: "=r" (Data)
);
return Data;
}
@@ -273,12 +273,12 @@ AsmReadCr3 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%cr3, %0"
: "=r" (Data)
);
return Data;
}
@@ -300,12 +300,12 @@ AsmReadCr4 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%cr4, %0"
: "=a" (Data)
);
return Data;
}
@@ -431,12 +431,12 @@ AsmReadDr0 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr0, %0"
: "=r" (Data)
);
return Data;
}
@@ -458,12 +458,12 @@ AsmReadDr1 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr1, %0"
: "=r" (Data)
);
return Data;
}
@@ -485,12 +485,12 @@ AsmReadDr2 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr2, %0"
: "=r" (Data)
);
return Data;
}
@@ -512,12 +512,12 @@ AsmReadDr3 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr3, %0"
: "=r" (Data)
);
return Data;
}
@@ -539,12 +539,12 @@ AsmReadDr4 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr4, %0"
: "=r" (Data)
);
return Data;
}
@@ -566,12 +566,12 @@ AsmReadDr5 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr5, %0"
: "=r" (Data)
);
return Data;
}
@@ -593,12 +593,12 @@ AsmReadDr6 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr6, %0"
: "=r" (Data)
);
return Data;
}
@@ -620,12 +620,12 @@ AsmReadDr7 (
)
{
UINTN Data;
__asm__ __volatile__ (
"movl %%dr7, %0"
: "=r" (Data)
);
return Data;
}
@@ -854,12 +854,12 @@ AsmReadCs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%cs, %0"
:"=a" (Data)
);
return Data;
}
@@ -880,12 +880,12 @@ AsmReadDs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%ds, %0"
:"=a" (Data)
);
return Data;
}
@@ -906,12 +906,12 @@ AsmReadEs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%es, %0"
:"=a" (Data)
);
return Data;
}
@@ -932,12 +932,12 @@ AsmReadFs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%fs, %0"
:"=a" (Data)
);
return Data;
}
@@ -958,12 +958,12 @@ AsmReadGs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%gs, %0"
:"=a" (Data)
);
return Data;
}
@@ -984,12 +984,12 @@ AsmReadSs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%ds, %0"
:"=a" (Data)
);
return Data;
}
@@ -1010,12 +1010,12 @@ AsmReadTr (
)
{
UINT16 Data;
__asm__ __volatile__ (
"str %0"
: "=a" (Data)
);
return Data;
}
@@ -1062,7 +1062,7 @@ InternalX86WriteGdtr (
:
: "m" (*Gdtr)
);
}
@@ -1127,12 +1127,12 @@ AsmReadLdtr (
)
{
UINT16 Data;
__asm__ __volatile__ (
"sldt %0"
: "=g" (Data) // %0
);
return Data;
}
@@ -1180,7 +1180,7 @@ InternalX86FxSave (
"fxsave %0"
:
: "m" (*Buffer) // %0
);
}
@@ -1233,7 +1233,7 @@ AsmReadMm0 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1263,7 +1263,7 @@ AsmReadMm1 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1293,7 +1293,7 @@ AsmReadMm2 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1323,7 +1323,7 @@ AsmReadMm3 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1353,7 +1353,7 @@ AsmReadMm4 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1383,7 +1383,7 @@ AsmReadMm5 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1413,7 +1413,7 @@ AsmReadMm6 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1443,7 +1443,7 @@ AsmReadMm7 (
"pop %%edx \n\t"
: "=A" (Data) // %0
);
return Data;
}
@@ -1465,7 +1465,7 @@ AsmWriteMm0 (
{
__asm__ __volatile__ (
"movq %0, %%mm0" // %0
:
: "m" (Value)
);
}
@@ -1488,7 +1488,7 @@ AsmWriteMm1 (
{
__asm__ __volatile__ (
"movq %0, %%mm1" // %0
:
: "m" (Value)
);
}
@@ -1511,7 +1511,7 @@ AsmWriteMm2 (
{
__asm__ __volatile__ (
"movq %0, %%mm2" // %0
:
: "m" (Value)
);
}
@@ -1534,7 +1534,7 @@ AsmWriteMm3 (
{
__asm__ __volatile__ (
"movq %0, %%mm3" // %0
:
: "m" (Value)
);
}
@@ -1557,7 +1557,7 @@ AsmWriteMm4 (
{
__asm__ __volatile__ (
"movq %0, %%mm4" // %0
:
: "m" (Value)
);
}
@@ -1580,7 +1580,7 @@ AsmWriteMm5 (
{
__asm__ __volatile__ (
"movq %0, %%mm5" // %0
:
: "m" (Value)
);
}
@@ -1603,7 +1603,7 @@ AsmWriteMm6 (
{
__asm__ __volatile__ (
"movq %0, %%mm6" // %0
:
: "m" (Value)
);
}
@@ -1626,7 +1626,7 @@ AsmWriteMm7 (
{
__asm__ __volatile__ (
"movq %0, %%mm7" // %0
:
: "m" (Value)
);
}
@@ -1648,13 +1648,13 @@ AsmReadTsc (
)
{
UINT64 Data;
__asm__ __volatile__ (
"rdtsc"
: "=A" (Data)
);
return Data;
}
@@ -1676,14 +1676,14 @@ AsmReadPmc (
)
{
UINT64 Data;
__asm__ __volatile__ (
"rdpmc"
: "=A" (Data)
: "c" (Index)
);
return Data;
}
@@ -1720,7 +1720,7 @@ AsmInvd (
)
{
__asm__ __volatile__ ("invd":::"memory");
}
@@ -1748,7 +1748,7 @@ AsmFlushCacheLine (
UINT32 RegEdx;
//
// If the CPU does not support CLFLUSH instruction,
// then promote flush range to flush entire cache.
//
AsmCpuid (0x01, NULL, NULL, NULL, &RegEdx);
@@ -1760,11 +1760,11 @@ AsmFlushCacheLine (
__asm__ __volatile__ (
"clflush (%0)"
: "+a" (LinearAddress)
:
: "+a" (LinearAddress)
:
: "memory"
);
return LinearAddress;
}

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# Portions copyright (c) 2011, Apple Inc. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
@@ -34,15 +34,15 @@ ASM_GLOBAL ASM_PFX(InternalSwitchStack)
#------------------------------------------------------------------------------
ASM_PFX(InternalSwitchStack):
pushl %ebp
movl %esp, %ebp
movl 20(%ebp), %esp # switch stack
subl $8, %esp
movl 16(%ebp), %eax
movl %eax, 4(%esp)
movl 12(%ebp), %eax
movl %eax, (%esp)
pushl $0 # keeps gdb from unwinding stack
jmp *8(%ebp) # call and never return

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -43,6 +43,6 @@ ASM_PFX(InternalMathLRotU64):
movl %eax, %ecx
movl %edx, %eax
movl %ecx, %edx
L0:
pop %ebx
ret

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -37,7 +37,7 @@ ASM_PFX(InternalMathLShiftU64):
jnz L0
movl %edx, %eax
movl 0x8(%esp), %edx
L0:
shld %cl, %eax, %edx
shl %cl, %eax
ret

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -30,15 +30,15 @@ ASM_GLOBAL ASM_PFX(InternalMathMultU64x64)
# );
#------------------------------------------------------------------------------
ASM_PFX(InternalMathMultU64x64):
push %ebx
movl 8(%esp), %ebx # ebx <- M1[0..31]
movl 16(%esp), %edx # edx <- M2[0..31]
movl %ebx, %ecx
movl %edx, %eax
imull 20(%esp), %ebx # ebx <- M1[0..31] * M2[32..63]
imull 12(%esp), %edx # edx <- M1[32..63] * M2[0..31]
addl %edx, %ebx # carries are abandoned
mull %ecx # edx:eax <- M1[0..31] * M2[0..31]
addl %ebx, %edx # carries are abandoned
pop %ebx
ret

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -43,6 +43,6 @@ ASM_PFX(InternalMathRRotU64):
movl %eax, %ecx # switch eax & edx if Count >= 32
movl %edx, %eax
movl %ecx, %edx
L0:
pop %ebx
ret

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
-# Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -40,7 +40,7 @@ ASM_PFX(InternalMathRShiftU64):
jnz L0
movl %eax, %edx
movl 0x4(%esp), %eax
L0:
shrdl %cl, %edx, %eax
shr %cl, %edx
ret

View File

@@ -1,8 +1,8 @@
/** @file
This module contains generic macros for an assembly writer.
-Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at

View File

@@ -1,7 +1,7 @@
/** @file
AsmFlushCacheRange() function for IPF.
-Copyright (c) 2009, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2009 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -19,12 +19,12 @@
Flush a range of cache lines in the cache coherency domain of the calling
CPU.
Flushes the cache lines specified by Address and Length. If Address is not aligned
on a cache line boundary, then entire cache line containing Address is flushed.
If Address + Length is not aligned on a cache line boundary, then the entire cache
line containing Address + Length - 1 is flushed. This function may choose to flush
the entire cache if that is more efficient than flushing the specified range. If
Length is 0, the no cache lines are flushed. Address is returned.
This function is only available on IPF.
If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().

View File

@@ -1,16 +1,16 @@
/** @file
Register Definition for IPF.
-Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
http://opensource.org/licenses/bsd-license.php.
THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
**/
#ifndef _IA64GEN_H_
#define _IA64GEN_H_

View File

@@ -1,7 +1,7 @@
/** @file
Linked List Library Functions.
-Copyright (c) 2006 - 2017, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -59,9 +59,9 @@
@retval TRUE if PcdVerifyNodeInList is FALSE
@retval TRUE if DoMembershipCheck is FALSE
@retval TRUE if PcdVerifyNodeInList is TRUE and DoMembershipCheck is TRUE
and Node is a member of List.
@retval FALSE if PcdVerifyNodeInList is TRUE and DoMembershipCheck is TRUE
and Node is in not a member of List.
**/
@@ -143,7 +143,7 @@ IsNodeInList (
Ptr = FirstEntry;
//
// Check to see if SecondEntry is a member of FirstEntry.
// Exit early if the number of nodes in List >= PcdMaximumLinkedListLength
//
do {
@@ -230,7 +230,7 @@ InsertHeadList (
// ASSERT List not too long and Entry is not one of the nodes of List
//
ASSERT_VERIFY_NODE_IN_VALID_LIST (ListHead, Entry, FALSE);
Entry->ForwardLink = ListHead->ForwardLink;
Entry->BackLink = ListHead;
Entry->ForwardLink->BackLink = Entry;
@@ -247,7 +247,7 @@ InsertHeadList (
If ListHead is NULL, then ASSERT().
If Entry is NULL, then ASSERT().
If ListHead was not initialized with INTIALIZE_LIST_HEAD_VARIABLE() or
InitializeListHead(), then ASSERT().
If PcdMaximumLinkedListLength is not zero, and prior to insertion the number
of nodes in ListHead, including the ListHead node, is greater than or
@@ -271,7 +271,7 @@ InsertTailList (
// ASSERT List not too long and Entry is not one of the nodes of List
//
ASSERT_VERIFY_NODE_IN_VALID_LIST (ListHead, Entry, FALSE);
Entry->ForwardLink = ListHead;
Entry->BackLink = ListHead->BackLink;
Entry->BackLink->ForwardLink = Entry;
@@ -282,12 +282,12 @@ InsertTailList (
/**
Retrieves the first node of a doubly-linked list.
Returns the first node of a doubly-linked list. List must have been
initialized with INTIALIZE_LIST_HEAD_VARIABLE() or InitializeListHead().
If List is empty, then List is returned.
If List is NULL, then ASSERT().
If List was not initialized with INTIALIZE_LIST_HEAD_VARIABLE() or
InitializeListHead(), then ASSERT().
If PcdMaximumLinkedListLength is not zero, and the number of nodes
in List, including the List node, is greater than or equal to
@@ -316,13 +316,13 @@ GetFirstNode (
/**
Retrieves the next node of a doubly-linked list.
Returns the node of a doubly-linked list that follows Node.
List must have been initialized with INTIALIZE_LIST_HEAD_VARIABLE()
or InitializeListHead(). If List is empty, then List is returned.
If List is NULL, then ASSERT().
If Node is NULL, then ASSERT().
If List was not initialized with INTIALIZE_LIST_HEAD_VARIABLE() or
InitializeListHead(), then ASSERT().
If PcdMaximumLinkedListLength is not zero, and List contains more than
PcdMaximumLinkedListLength nodes, then ASSERT().
@@ -351,24 +351,24 @@ GetNextNode (
/**
Retrieves the previous node of a doubly-linked list.
Returns the node of a doubly-linked list that precedes Node.
List must have been initialized with INTIALIZE_LIST_HEAD_VARIABLE()
or InitializeListHead(). If List is empty, then List is returned.
If List is NULL, then ASSERT().
If Node is NULL, then ASSERT().
If List was not initialized with INTIALIZE_LIST_HEAD_VARIABLE() or
InitializeListHead(), then ASSERT().
If PcdMaximumLinkedListLength is not zero, and List contains more than
PcdMaximumLinkedListLength nodes, then ASSERT().
If PcdVerifyNodeInList is TRUE and Node is not a node in List, then ASSERT().
@param List A pointer to the head node of a doubly-linked list.
@param Node A pointer to a node in the doubly-linked list.
@return A pointer to the previous node if one exists. Otherwise List is returned.
**/
LIST_ENTRY *
EFIAPI
@@ -381,7 +381,7 @@ GetPreviousNode (
// ASSERT List not too long and Node is one of the nodes of List
//
ASSERT_VERIFY_NODE_IN_VALID_LIST (List, Node, TRUE);
return Node->BackLink;
}
@@ -392,7 +392,7 @@ GetPreviousNode (
zero nodes, this function returns TRUE. Otherwise, it returns FALSE.
If ListHead is NULL, then ASSERT().
If ListHead was not initialized with INTIALIZE_LIST_HEAD_VARIABLE() or
InitializeListHead(), then ASSERT().
If PcdMaximumLinkedListLength is not zero, and the number of nodes
in List, including the List node, is greater than or equal to
@@ -414,7 +414,7 @@ IsListEmpty (
// ASSERT List not too long
//
ASSERT (InternalBaseLibIsListValid (ListHead));
return (BOOLEAN)(ListHead->ForwardLink == ListHead);
}
@@ -429,12 +429,12 @@ IsListEmpty (
If List is NULL, then ASSERT().
If Node is NULL, then ASSERT().
If List was not initialized with INTIALIZE_LIST_HEAD_VARIABLE() or InitializeListHead(),
then ASSERT().
If PcdMaximumLinkedListLength is not zero, and the number of nodes
in List, including the List node, is greater than or equal to
PcdMaximumLinkedListLength, then ASSERT().
If PcdVerifyNodeInList is TRUE and Node is not a node in List and Node is not
equal to List, then ASSERT().
@param List A pointer to the head node of a doubly-linked list.
@@ -455,7 +455,7 @@ IsNull (
// ASSERT List not too long and Node is one of the nodes of List
//
ASSERT_VERIFY_NODE_IN_VALID_LIST (List, Node, TRUE);
return (BOOLEAN)(Node == List);
}
@@ -493,7 +493,7 @@ IsNodeAtEnd (
// ASSERT List not too long and Node is one of the nodes of List
//
ASSERT_VERIFY_NODE_IN_VALID_LIST (List, Node, TRUE);
return (BOOLEAN)(!IsNull (List, Node) && List->BackLink == Node);
}
@@ -505,12 +505,12 @@ IsNodeAtEnd (
Otherwise, the location of the FirstEntry node is swapped with the location
of the SecondEntry node in a doubly-linked list. SecondEntry must be in the
same double linked list as FirstEntry and that double linked list must have
been initialized with INTIALIZE_LIST_HEAD_VARIABLE() or InitializeListHead().
SecondEntry is returned after the nodes are swapped.
If FirstEntry is NULL, then ASSERT().
If SecondEntry is NULL, then ASSERT().
If PcdVerifyNodeInList is TRUE and SecondEntry and FirstEntry are not in the
same linked list, then ASSERT().
If PcdMaximumLinkedListLength is not zero, and the number of nodes in the
linked list containing the FirstEntry and SecondEntry nodes, including
@@ -519,7 +519,7 @@ IsNodeAtEnd (
@param FirstEntry A pointer to a node in a linked list.
@param SecondEntry A pointer to another node in the same linked list.
@return SecondEntry.
**/
@@ -540,7 +540,7 @@ SwapListEntries (
// ASSERT Entry1 and Entry2 are in the same linked list
//
ASSERT_VERIFY_NODE_IN_VALID_LIST (FirstEntry, SecondEntry, TRUE);
//
// Ptr is the node pointed to by FirstEntry->ForwardLink
//
@@ -598,7 +598,7 @@ RemoveEntryList (
)
{
ASSERT (!IsListEmpty (Entry));
Entry->ForwardLink->BackLink = Entry->BackLink;
Entry->BackLink->ForwardLink = Entry->ForwardLink;
return Entry->ForwardLink;
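Because RemoveEntryList() returns Entry->ForwardLink, a caller can unlink nodes during traversal without keeping a separate next pointer. A sketch:

VOID
DrainList (
  IN LIST_ENTRY  *Head
  )
{
  LIST_ENTRY  *Node;

  Node = GetFirstNode (Head);
  while (!IsNull (Head, Node)) {
    //
    // The return value is the node after the one just unlinked, so
    // the walk continues without revisiting removed entries.
    //
    Node = RemoveEntryList (Node);
  }
}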

View File

@@ -1,7 +1,7 @@
/** @file
Unicode and ASCII string primitives.
Copyright (c) 2006 - 2017, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -73,7 +73,7 @@ StrCpy (
/**
[ATTENTION] This function will be deprecated for security reasons.
Copies up to a specified length from one Null-terminated Unicode string to
Copies up to a specified length from one Null-terminated Unicode string to
another Null-terminated Unicode string and returns the new Unicode string.
This function copies the contents of the Unicode string Source to the Unicode
@@ -89,7 +89,7 @@ StrCpy (
If Length > 0 and Source is NULL, then ASSERT().
If Length > 0 and Source is not aligned on a 16-bit boundary, then ASSERT().
If Source and Destination overlap, then ASSERT().
If PcdMaximumUnicodeStringLength is not zero, and Length is greater than
If PcdMaximumUnicodeStringLength is not zero, and Length is greater than
PcdMaximumUnicodeStringLength, then ASSERT().
If PcdMaximumUnicodeStringLength is not zero, and Source contains more than
PcdMaximumUnicodeStringLength Unicode characters, not including the Null-terminator,
@@ -188,7 +188,7 @@ StrLen (
Returns the size of a Null-terminated Unicode string in bytes, including the
Null terminator.
This function returns the size, in bytes, of the Null-terminated Unicode string
This function returns the size, in bytes, of the Null-terminated Unicode string
specified by String.
If String is NULL, then ASSERT().
@@ -262,7 +262,7 @@ StrCmp (
/**
Compares up to a specified length the contents of two Null-terminated Unicode strings,
and returns the difference between the first mismatched Unicode characters.
This function compares the Null-terminated Unicode string FirstString to the
Null-terminated Unicode string SecondString. At most, Length Unicode
characters will be compared. If Length is 0, then 0 is returned. If
@@ -382,8 +382,8 @@ StrCat (
/**
[ATTENTION] This function will be deprecated for security reasons.
Concatenates up to a specified length one Null-terminated Unicode to the end
of another Null-terminated Unicode string, and returns the concatenated
Concatenates up to a specified length one Null-terminated Unicode to the end
of another Null-terminated Unicode string, and returns the concatenated
Unicode string.
This function concatenates two Null-terminated Unicode strings. The contents
@@ -399,7 +399,7 @@ StrCat (
If Length > 0 and Source is NULL, then ASSERT().
If Length > 0 and Source is not aligned on a 16-bit boundary, then ASSERT().
If Source and Destination overlap, then ASSERT().
If PcdMaximumUnicodeStringLength is not zero, and Length is greater than
If PcdMaximumUnicodeStringLength is not zero, and Length is greater than
PcdMaximumUnicodeStringLength, then ASSERT().
If PcdMaximumUnicodeStringLength is not zero, and Destination contains more
than PcdMaximumUnicodeStringLength Unicode characters, not including the
@@ -492,13 +492,13 @@ StrStr (
while (*String != L'\0') {
SearchStringTmp = SearchString;
FirstMatch = String;
while ((*String == *SearchStringTmp)
while ((*String == *SearchStringTmp)
&& (*String != L'\0')) {
String++;
SearchStringTmp++;
}
}
if (*SearchStringTmp == L'\0') {
return (CHAR16 *) FirstMatch;
}
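From the caller's side the contract is simply a pointer to the first occurrence, or NULL. A usage sketch with arbitrary sample strings:

CONST CHAR16  *Text = L"PciRoot(0x0)/Pci(0x1F,0x2)";
CHAR16        *Hit;

Hit = StrStr (Text, L"Pci(");
if (Hit != NULL) {
  // Hit points at the first L"Pci(" inside Text
}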
@@ -516,7 +516,7 @@ StrStr (
/**
Check if a Unicode character is a decimal character.
This internal function checks if a Unicode character is a
This internal function checks if a Unicode character is a
decimal character. The valid decimal character is from
L'0' to L'9'.
@@ -536,7 +536,7 @@ InternalIsDecimalDigitCharacter (
}
/**
Convert a Unicode character to upper case only if
Convert a Unicode character to upper case only if
it maps to a valid lower-case ASCII character.
This internal function only deals with a Unicode character
@@ -568,7 +568,7 @@ InternalCharToUpper (
This internal function only deals with a Unicode character
which maps to a valid hexadecimal ASCII character, i.e.
L'0' to L'9', L'a' to L'f' or L'A' to L'F'. For other
L'0' to L'9', L'a' to L'f' or L'A' to L'F'. For other
Unicode characters, the value returned does not make sense.
@param Char The character to convert.
@@ -592,8 +592,8 @@ InternalHexCharToUintn (
/**
Check if a Unicode character is a hexadecimal character.
This internal function checks if a Unicode character is a
hexadecimal character. The valid hexadecimal character is
This internal function checks if a Unicode character is a
hexadecimal character. The valid hexadecimal character is
L'0' to L'9', L'a' to L'f', or L'A' to L'F'.
@@ -703,7 +703,7 @@ StrDecimalToUint64 (
)
{
UINT64 Result;
StrDecimalToUint64S (String, (CHAR16 **) NULL, &Result);
return Result;
}
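The wrapper above forwards to the safe variant and discards both the status and the end pointer; callers that need to detect overflow or locate the first non-digit can call StrDecimalToUint64S() directly. A sketch:

UINT64         Value;
CHAR16         *End;
RETURN_STATUS  Status;

Status = StrDecimalToUint64S (L"1234MB", &End, &Value);
if (!RETURN_ERROR (Status)) {
  // Value == 1234; End points at the L'M'
}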
@@ -806,7 +806,7 @@ StrHexToUint64 (
/**
Check if an ASCII character is a decimal character.
This internal function checks if an ASCII character is a
This internal function checks if an ASCII character is a
decimal character. The valid decimal character is from
'0' to '9'.
@@ -828,8 +828,8 @@ InternalAsciiIsDecimalDigitCharacter (
/**
Check if an ASCII character is a hexadecimal character.
This internal function checks if an ASCII character is a
hexadecimal character. The valid hexadecimal character is
This internal function checks if an ASCII character is a
hexadecimal character. The valid hexadecimal character is
'0' to '9', 'a' to 'f', or 'A' to 'F'.
@@ -915,7 +915,7 @@ UnicodeStrToAsciiStr (
ReturnValue = Destination;
while (*Source != '\0') {
//
// If any Unicode characters in Source contain
// If any Unicode characters in Source contain
// non-zero value in the upper 8 bits, then ASSERT().
//
ASSERT (*Source < 0x100);
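Since any character above 0xFF trips the ASSERT above, new code is generally better served by the bounds-checked UnicodeStrToAsciiStrS(), which takes an explicit destination size. A sketch:

CHAR8          Ascii[32];
RETURN_STATUS  Status;

//
// DestMax counts CHAR8 units, so sizeof (Ascii) covers the buffer.
//
Status = UnicodeStrToAsciiStrS (L"PciBus", Ascii, sizeof (Ascii));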
@@ -987,7 +987,7 @@ AsciiStrCpy (
/**
[ATTENTION] This function will be deprecated for security reasons.
Copies up to a specified length one Null-terminated ASCII string to another
Copies up to a specified length one Null-terminated ASCII string to another
Null-terminated ASCII string and returns the new ASCII string.
This function copies the contents of the ASCII string Source to the ASCII
@@ -1000,7 +1000,7 @@ AsciiStrCpy (
If Destination is NULL, then ASSERT().
If Source is NULL, then ASSERT().
If Source and Destination overlap, then ASSERT().
If PcdMaximumAsciiStringLength is not zero, and Length is greater than
If PcdMaximumAsciiStringLength is not zero, and Length is greater than
PcdMaximumAsciiStringLength, then ASSERT().
If PcdMaximumAsciiStringLength is not zero, and Source contains more than
PcdMaximumAsciiStringLength ASCII characters, not including the Null-terminator,
@@ -1176,7 +1176,7 @@ AsciiStrCmp (
@param Chr One ASCII character
@return The uppercase value of the ASCII character
@return The uppercase value of the ASCII character
**/
CHAR8
@@ -1193,7 +1193,7 @@ InternalBaseLibAsciiToUpper (
This internal function only deals with an ASCII character
which maps to a valid hexadecimal ASCII character, i.e.
'0' to '9', 'a' to 'f' or 'A' to 'F'. For other
'0' to '9', 'a' to 'f' or 'A' to 'F'. For other
ASCII characters, the value returned does not make sense.
@param Char The character to convert.
@@ -1285,7 +1285,7 @@ AsciiStriCmp (
If Length > 0 and FirstString is NULL, then ASSERT().
If Length > 0 and SecondString is NULL, then ASSERT().
If PcdMaximumAsciiStringLength is not zero, and Length is greater than
If PcdMaximumAsciiStringLength is not zero, and Length is greater than
PcdMaximumAsciiStringLength, then ASSERT().
If PcdMaximumAsciiStringLength is not zero, and FirstString contains more than
PcdMaximumAsciiStringLength ASCII characters, not including the Null-terminator,
@@ -1297,7 +1297,7 @@ AsciiStriCmp (
@param FirstString A pointer to a Null-terminated ASCII string.
@param SecondString A pointer to a Null-terminated ASCII string.
@param Length The maximum number of ASCII characters for compare.
@retval ==0 FirstString is identical to SecondString.
@retval !=0 FirstString is not identical to SecondString.
@@ -1386,8 +1386,8 @@ AsciiStrCat (
/**
[ATTENTION] This function will be deprecated for security reasons.
Concatenates up to a specified length one Null-terminated ASCII string to
the end of another Null-terminated ASCII string, and returns the
Concatenates up to a specified length one Null-terminated ASCII string to
the end of another Null-terminated ASCII string, and returns the
concatenated ASCII string.
This function concatenates two Null-terminated ASCII strings. The contents
@@ -1491,13 +1491,13 @@ AsciiStrStr (
while (*String != '\0') {
SearchStringTmp = SearchString;
FirstMatch = String;
while ((*String == *SearchStringTmp)
while ((*String == *SearchStringTmp)
&& (*String != '\0')) {
String++;
SearchStringTmp++;
}
}
if (*SearchStringTmp == '\0') {
return (CHAR8 *) FirstMatch;
}
@@ -1549,7 +1549,7 @@ AsciiStrDecimalToUintn (
)
{
UINTN Result;
AsciiStrDecimalToUintnS (String, (CHAR8 **) NULL, &Result);
return Result;
}
@@ -1592,7 +1592,7 @@ AsciiStrDecimalToUint64 (
)
{
UINT64 Result;
AsciiStrDecimalToUint64S (String, (CHAR8 **) NULL, &Result);
return Result;
}

View File

@@ -1,7 +1,7 @@
/** @file
Switch Stack functions.
Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -36,9 +36,9 @@
function.
@param NewStack A pointer to the new stack to use for the EntryPoint
function.
@param ... This variable argument list is ignored for IA32, x64, and EBC.
For IPF, this variable argument list is expected to contain
a single parameter of type VOID * that specifies the new backing
@param ... This variable argument list is ignored for IA32, x64, and EBC.
For IPF, this variable argument list is expected to contain
a single parameter of type VOID * that specifies the new backing
store pointer.
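A minimal sketch of handing control to a fresh stack: the entry point must never return, and SwitchStack() expects the initial stack pointer, i.e. the top of the region on IA32/x64. The stack allocation is left to the caller and the names here are assumptions:

VOID
EFIAPI
NewStackMain (
  IN VOID  *Context1  OPTIONAL,
  IN VOID  *Context2  OPTIONAL
  )
{
  //
  // Runs on the new stack; must not return to the caller.
  //
  for (;;) {
  }
}

VOID
Relaunch (
  IN VOID   *StackBase,    // base of a caller-allocated stack region
  IN UINTN  StackSize
  )
{
  SwitchStack (NewStackMain, NULL, NULL, (UINT8 *)StackBase + StackSize);
}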

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -44,19 +44,19 @@ ASM_PFX(AsmCpuidEx):
test %r10, %r10
jz L1
mov %ecx,(%r10)
L1:
L1:
mov %r8, %rcx
jrcxz L2
movl %eax,(%rcx)
L2:
L2:
mov %r9, %rcx
jrcxz L3
mov %ebx, (%rcx)
L3:
L3:
mov 0x40(%rsp), %rcx
jrcxz L4
mov %edx, (%rcx)
L4:
L4:
pop %rax # restore Index to rax as return value
pop %rbx
ret
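The jrcxz tests above are what make every output pointer optional from C. A usage sketch through AsmCpuidEx(); leaf 7, sub-leaf 0 is just an illustrative choice:

UINT32  Ebx;

//
// NULL is legal for outputs that are not needed; only EBX is kept.
//
AsmCpuidEx (7, 0, NULL, &Ebx, NULL, NULL);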

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
# Copyright (c) 2006 - 2009, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -21,7 +21,7 @@
#
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# VOID
@@ -37,29 +37,29 @@
ASM_GLOBAL ASM_PFX(InternalX86DisablePaging64)
ASM_PFX(InternalX86DisablePaging64):
cli
cli
lea L1(%rip), %rsi # rsi <- The start address of transition code
mov 0x28(%rsp), %edi # rdi <- New stack
lea _mTransitionEnd(%rip), %rax # rax <- end of transition code
sub %rsi, %rax # rax <- The size of transition piece code
add $4, %rax # round rax up to the next 4 byte boundary
and $0xfc, %al
sub %rax, %rdi # rdi <- use stack to hold transition code
sub %rax, %rdi # rdi <- use stack to hold transition code
mov %edi, %r10d # r10 <- The start address of transition code below 4G
push %rcx # save rcx to stack
mov %rax, %rcx # rcx <- The size of transition piece code
rep
movsb # copy transition code to (new stack - 64byte) below 4G
pop %rcx # restore rcx
mov %r8d, %esi
mov %r9d, %edi
mov %r8d, %esi
mov %r9d, %edi
mov %r10d, %eax
sub $4, %eax
push %rcx # push Cs to stack
push %r10 # push address of transition code on stack
push %r10 # push address of transition code on stack
.byte 0x48, 0xcb # retq: Use far return to load CS register from stack
# (Use raw byte code since some GNU assemblers generate incorrect code for "retq")
# (Use raw byte code since some GNU assemblers generate incorrect code for "retq")
L1:
mov %eax,%esp # set up new stack
mov %cr0,%rax
@@ -68,9 +68,9 @@ L1:
mov %edx,%ebx # save EntryPoint to ebx, for rdmsr will overwrite edx
mov $0xc0000080,%ecx
rdmsr
rdmsr
and $0xfe,%ah # clear LME
wrmsr
wrmsr
mov %cr4,%rax
and $0xdf,%al # clear PAE
mov %rax,%cr4

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -15,7 +15,7 @@
#
# Abstract:
#
# Flush all caches with a WBINVD instruction, clear the CD bit of CR0 to 0, and clear
# Flush all caches with a WBINVD instruction, clear the CD bit of CR0 to 0, and clear
# the NW bit of CR0 to 0
#
# Notes:

View File

@@ -1,8 +1,8 @@
/** @file
GCC inline implementation of BaseLib processor specific functions.
Copyright (c) 2006 - 2010, Intel Corporation. All rights reserved.<BR>
Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -33,7 +33,7 @@ MemoryFence (
)
{
// This is a little bit of overkill and it is more about the compiler than it is
// actually processor synchronization. This is like the _ReadWriteBarrier
// actually processor synchronization. This is like the _ReadWriteBarrier
// Microsoft specific intrinsic
__asm__ __volatile__ ("":::"memory");
}
@@ -66,7 +66,7 @@ EFIAPI
DisableInterrupts (
VOID
)
{
{
__asm__ __volatile__ ("cli"::: "memory");
}
@@ -130,14 +130,14 @@ AsmReadMsr64 (
{
UINT32 LowData;
UINT32 HighData;
__asm__ __volatile__ (
"rdmsr"
: "=a" (LowData), // %0
"=d" (HighData) // %1
: "c" (Index) // %2
);
return (((UINT64)HighData) << 32) | LowData;
}
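RDMSR leaves the value split across EDX:EAX; the shift-and-OR above is what presents it to C callers as a single UINT64. For example (IA32_APIC_BASE is architectural MSR 0x1B, with the global-enable flag in bit 11):

UINT64  ApicBase;

ApicBase = AsmReadMsr64 (0x1B);      // IA32_APIC_BASE
if ((ApicBase & BIT11) != 0) {
  // local APIC is globally enabled
}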
@@ -170,7 +170,7 @@ AsmWriteMsr64 (
LowData = (UINT32)(Value);
HighData = (UINT32)(Value >> 32);
__asm__ __volatile__ (
"wrmsr"
:
@@ -178,7 +178,7 @@ AsmWriteMsr64 (
"a" (LowData),
"d" (HighData)
);
return Value;
}
@@ -201,13 +201,13 @@ AsmReadEflags (
)
{
UINTN Eflags;
__asm__ __volatile__ (
"pushfq \n\t"
"pop %0 "
: "=r" (Eflags) // %0
);
return Eflags;
}
@@ -230,12 +230,12 @@ AsmReadCr0 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%cr0,%0"
"mov %%cr0,%0"
: "=r" (Data) // %0
);
return Data;
}
@@ -257,12 +257,12 @@ AsmReadCr2 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%cr2, %0"
"mov %%cr2, %0"
: "=r" (Data) // %0
);
return Data;
}
@@ -283,12 +283,12 @@ AsmReadCr3 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%cr3, %0"
"mov %%cr3, %0"
: "=r" (Data) // %0
);
return Data;
}
@@ -310,12 +310,12 @@ AsmReadCr4 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%cr4, %0"
"mov %%cr4, %0"
: "=r" (Data) // %0
);
return Data;
}
@@ -441,12 +441,12 @@ AsmReadDr0 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr0, %0"
: "=r" (Data)
);
return Data;
}
@@ -468,12 +468,12 @@ AsmReadDr1 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr1, %0"
: "=r" (Data)
);
return Data;
}
@@ -495,12 +495,12 @@ AsmReadDr2 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr2, %0"
: "=r" (Data)
);
return Data;
}
@@ -522,12 +522,12 @@ AsmReadDr3 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr3, %0"
: "=r" (Data)
);
return Data;
}
@@ -549,12 +549,12 @@ AsmReadDr4 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr4, %0"
: "=r" (Data)
);
return Data;
}
@@ -576,12 +576,12 @@ AsmReadDr5 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr5, %0"
: "=r" (Data)
);
return Data;
}
@@ -603,12 +603,12 @@ AsmReadDr6 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr6, %0"
: "=r" (Data)
);
return Data;
}
@@ -630,12 +630,12 @@ AsmReadDr7 (
)
{
UINTN Data;
__asm__ __volatile__ (
"mov %%dr7, %0"
: "=r" (Data)
);
return Data;
}
@@ -864,12 +864,12 @@ AsmReadCs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%cs, %0"
:"=a" (Data)
);
return Data;
}
@@ -890,12 +890,12 @@ AsmReadDs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%ds, %0"
:"=a" (Data)
);
return Data;
}
@@ -916,12 +916,12 @@ AsmReadEs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%es, %0"
:"=a" (Data)
);
return Data;
}
@@ -942,12 +942,12 @@ AsmReadFs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%fs, %0"
:"=a" (Data)
);
return Data;
}
@@ -968,12 +968,12 @@ AsmReadGs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%gs, %0"
:"=a" (Data)
);
return Data;
}
@@ -994,12 +994,12 @@ AsmReadSs (
)
{
UINT16 Data;
__asm__ __volatile__ (
"mov %%ds, %0"
:"=a" (Data)
);
return Data;
}
@@ -1020,12 +1020,12 @@ AsmReadTr (
)
{
UINT16 Data;
__asm__ __volatile__ (
"str %0"
: "=r" (Data)
);
return Data;
}
@@ -1072,7 +1072,7 @@ InternalX86WriteGdtr (
:
: "m" (*Gdtr)
);
}
@@ -1137,12 +1137,12 @@ AsmReadLdtr (
)
{
UINT16 Data;
__asm__ __volatile__ (
"sldt %0"
: "=g" (Data) // %0
);
return Data;
}
@@ -1190,7 +1190,7 @@ InternalX86FxSave (
"fxsave %0"
:
: "m" (*Buffer) // %0
);
);
}
@@ -1239,7 +1239,7 @@ AsmReadMm0 (
"movd %%mm0, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1265,7 +1265,7 @@ AsmReadMm1 (
"movd %%mm1, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1291,7 +1291,7 @@ AsmReadMm2 (
"movd %%mm2, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1317,7 +1317,7 @@ AsmReadMm3 (
"movd %%mm3, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1343,7 +1343,7 @@ AsmReadMm4 (
"movd %%mm4, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1369,7 +1369,7 @@ AsmReadMm5 (
"movd %%mm5, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1395,7 +1395,7 @@ AsmReadMm6 (
"movd %%mm6, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1421,7 +1421,7 @@ AsmReadMm7 (
"movd %%mm7, %0 \n\t"
: "=r" (Data) // %0
);
return Data;
}
@@ -1443,7 +1443,7 @@ AsmWriteMm0 (
{
__asm__ __volatile__ (
"movd %0, %%mm0" // %0
:
:
: "m" (Value)
);
}
@@ -1466,7 +1466,7 @@ AsmWriteMm1 (
{
__asm__ __volatile__ (
"movd %0, %%mm1" // %0
:
:
: "m" (Value)
);
}
@@ -1489,7 +1489,7 @@ AsmWriteMm2 (
{
__asm__ __volatile__ (
"movd %0, %%mm2" // %0
:
:
: "m" (Value)
);
}
@@ -1512,7 +1512,7 @@ AsmWriteMm3 (
{
__asm__ __volatile__ (
"movd %0, %%mm3" // %0
:
:
: "m" (Value)
);
}
@@ -1535,7 +1535,7 @@ AsmWriteMm4 (
{
__asm__ __volatile__ (
"movd %0, %%mm4" // %0
:
:
: "m" (Value)
);
}
@@ -1558,7 +1558,7 @@ AsmWriteMm5 (
{
__asm__ __volatile__ (
"movd %0, %%mm5" // %0
:
:
: "m" (Value)
);
}
@@ -1581,7 +1581,7 @@ AsmWriteMm6 (
{
__asm__ __volatile__ (
"movd %0, %%mm6" // %0
:
:
: "m" (Value)
);
}
@@ -1604,7 +1604,7 @@ AsmWriteMm7 (
{
__asm__ __volatile__ (
"movd %0, %%mm7" // %0
:
:
: "m" (Value)
);
}
@@ -1627,14 +1627,14 @@ AsmReadTsc (
{
UINT32 LowData;
UINT32 HiData;
__asm__ __volatile__ (
"rdtsc"
: "=a" (LowData),
"=d" (HiData)
);
return (((UINT64)HiData) << 32) | LowData;
return (((UINT64)HiData) << 32) | LowData;
}
@@ -1657,15 +1657,15 @@ AsmReadPmc (
{
UINT32 LowData;
UINT32 HiData;
__asm__ __volatile__ (
"rdpmc"
: "=a" (LowData),
"=d" (HiData)
: "c" (Index)
);
return (((UINT64)HiData) << 32) | LowData;
return (((UINT64)HiData) << 32) | LowData;
}
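RDTSC and RDPMC both return EDX:EAX pairs recombined the same way. A rough elapsed-cycle sketch; it assumes an invariant TSC, which not every processor guarantees:

UINT64  Start;
UINT64  Cycles;

Start  = AsmReadTsc ();
// ... code being measured ...
Cycles = AsmReadTsc () - Start;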
@@ -1700,7 +1700,7 @@ AsmMonitor (
"c" (Ecx),
"d" (Edx)
);
return Eax;
}
@@ -1728,12 +1728,12 @@ AsmMwait (
{
__asm__ __volatile__ (
"mwait"
:
:
: "a" (Eax),
"c" (Ecx)
);
return Eax;
return Eax;
}
@@ -1768,7 +1768,7 @@ AsmInvd (
)
{
__asm__ __volatile__ ("invd":::"memory");
}
@@ -1796,10 +1796,10 @@ AsmFlushCacheLine (
__asm__ __volatile__ (
"clflush (%0)"
:
: "r" (LinearAddress)
: "r" (LinearAddress)
: "memory"
);
return LinearAddress;
}

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -49,6 +49,6 @@ ASM_PFX(InternalLongJump):
movdqu 0xB8(%rcx), %xmm12
movdqu 0xC8(%rcx), %xmm13
movdqu 0xD8(%rcx), %xmm14
movdqu 0xE8(%rcx), %xmm15
movdqu 0xE8(%rcx), %xmm15
mov %rdx, %rax # set return value
jmp *0x48(%rcx)

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -39,7 +39,7 @@ ASM_PFX(SetJump):
mov %rdx,0x48(%rcx)
# save non-volatile fp registers
stmxcsr 0x50(%rcx)
movdqu %xmm6, 0x58(%rcx)
movdqu %xmm6, 0x58(%rcx)
movdqu %xmm7, 0x68(%rcx)
movdqu %xmm8, 0x78(%rcx)
movdqu %xmm9, 0x88(%rcx)
@@ -48,6 +48,6 @@ ASM_PFX(SetJump):
movdqu %xmm12, 0xB8(%rcx)
movdqu %xmm13, 0xC8(%rcx)
movdqu %xmm14, 0xD8(%rcx)
movdqu %xmm15, 0xE8(%rcx)
movdqu %xmm15, 0xE8(%rcx)
xor %rax,%rax
jmpq *%rdx
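Together with InternalLongJump above, this backs the C-visible SetJump()/LongJump() pair. The usual pattern is sketched below; DoWork() is a hypothetical placeholder, and the Value handed to LongJump() must be nonzero:

BASE_LIBRARY_JUMP_BUFFER  JumpBuffer;

if (SetJump (&JumpBuffer) == 0) {
  //
  // Direct return from SetJump(): run the code that may bail out.
  //
  DoWork ();
  LongJump (&JumpBuffer, 1);   // never returns
} else {
  //
  // Re-entered here when LongJump() fires; SetJump() returned 1.
  //
}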

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
# Copyright (c) 2006 - 2008, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -37,9 +37,9 @@
#------------------------------------------------------------------------------
ASM_GLOBAL ASM_PFX(InternalSwitchStack)
ASM_PFX(InternalSwitchStack):
pushq %rbp
movq %rsp, %rbp
pushq %rbp
movq %rsp, %rbp
mov %rcx, %rax // Shift registers for new call
mov %rdx, %rcx
mov %r8, %rdx

View File

@@ -1,6 +1,6 @@
#------------------------------------------------------------------------------
#
# Copyright (c) 2006 - 2013, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -49,7 +49,7 @@ ASM_GLOBAL ASM_PFX(InternalAsmThunk16)
.set IA32_REGS_SIZE, 56
.data
.set Lm16Size, ASM_PFX(InternalAsmThunk16) - ASM_PFX(m16Start)
ASM_PFX(m16Size): .word Lm16Size
.set LmThunk16Attr, L_ThunkAttr - ASM_PFX(m16Start)
@@ -85,7 +85,7 @@ ASM_PFX(BackFromUserCode):
.byte 0xe # push cs
.byte 0x66
call L_Base # push eip
L_Base:
L_Base:
.byte 0x66
pushq $0 # reserved high order 32 bits of EFlags
.byte 0x66, 0x9c # pushfd actually
@@ -102,13 +102,13 @@ L_ThunkAttr: .space 4
movl $0x15cd2401,%eax # mov ax, 2401h & int 15h
cli # disable interrupts
jnc L_2
L_1:
L_1:
testb $THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL, %dl
jz L_2
inb $0x92,%al
orb $2,%al
outb %al, $0x92 # deactivate A20M#
L_2:
L_2:
xorw %ax, %ax # xor eax, eax
movl %ss, %eax # mov ax, ss
lea IA32_REGS_SIZE(%esp), %bp
@@ -180,13 +180,13 @@ ASM_PFX(ToUserCode):
movw %bx,%sp # set up 16-bit stack pointer
.byte 0x66 # make the following call 32-bit
call L_Base1 # push eip
L_Base1:
L_Base1:
popw %bp # ebp <- address of L_Base1
pushq (IA32_REGS_SIZE + 2)(%esp)
lea 0x0c(%rsi), %eax
pushq %rax
lret # execution begins at next instruction
L_RealMode:
L_RealMode:
.byte 0x66,0x2e # CS and operand size override
lidt (_16Idtr - L_Base1)(%rsi)
.byte 0x66,0x61 # popad
@@ -243,7 +243,7 @@ ASM_PFX(InternalAsmThunk16):
pushq %rbx
pushq %rsi
pushq %rdi
movl %ds, %ebx
pushq %rbx # Save ds segment register on the stack
movl %es, %ebx
@@ -257,7 +257,7 @@ ASM_PFX(InternalAsmThunk16):
movzwl _SS(%rsi), %r8d
movl _ESP(%rsi), %edi
lea -(IA32_REGS_SIZE + 4)(%edi), %rdi
imul $16, %r8d, %eax
imul $16, %r8d, %eax
movl %edi,%ebx # ebx <- stack for 16-bit code
pushq $(IA32_REGS_SIZE / 4)
addl %eax,%edi # edi <- linear address of 16-bit stack
@@ -268,26 +268,26 @@ ASM_PFX(InternalAsmThunk16):
movl %edx,%eax # eax <- transition code address
andl $0xf,%edx
shll $12,%eax # segment address in high order 16 bits
.set LBackFromUserCodeDelta, ASM_PFX(BackFromUserCode) - ASM_PFX(m16Start)
.set LBackFromUserCodeDelta, ASM_PFX(BackFromUserCode) - ASM_PFX(m16Start)
lea (LBackFromUserCodeDelta)(%rdx), %ax
stosl # [edi] <- return address of user code
sgdt 0x60(%rsp) # save GDT stack in argument space
movzwq 0x60(%rsp), %r10 # r10 <- GDT limit
lea ((ASM_PFX(InternalAsmThunk16) - L_SavedCr4) + 0xf)(%rcx), %r11
andq $0xfffffffffffffff0, %r11 # r11 <- 16-byte aligned shadowed GDT table in real mode buffer
movzwq 0x60(%rsp), %r10 # r10 <- GDT limit
lea ((ASM_PFX(InternalAsmThunk16) - L_SavedCr4) + 0xf)(%rcx), %r11
andq $0xfffffffffffffff0, %r11 # r11 <- 16-byte aligned shadowed GDT table in real mode buffer
movw %r10w, (SavedGdt - L_SavedCr4)(%rcx) # save the limit of shadowed GDT table
movq %r11, (SavedGdt - L_SavedCr4 + 0x2)(%rcx) # save the base address of shadowed GDT table
movq 0x62(%rsp) ,%rsi # rsi <- the original GDT base address
xchg %r10, %rcx # save rcx to r10 and initialize rcx to be the limit of GDT table
xchg %r10, %rcx # save rcx to r10 and initialize rcx to be the limit of GDT table
incq %rcx # rcx <- the size of memory to copy
xchg %r11, %rdi # save rdi to r11 and initialize rdi to the base address of shadowed GDT table
rep
movsb # perform memory copy to shadow GDT table
movq %r10, %rcx # restore the original rcx before memory copy
movq %r11, %rdi # restore the original rdi before memory copy
sidt 0x50(%rsp)
movq %cr0, %rax
.set LSavedCrDelta, L_SavedCr0 - L_SavedCr4
@@ -311,21 +311,21 @@ ASM_PFX(InternalAsmThunk16):
.byte 0xff, 0x69 # jmp (_EntryPoint - L_SavedCr4)(%rcx)
.set Ltemp1, _EntryPoint - L_SavedCr4
.byte Ltemp1
L_RetFromRealMode:
L_RetFromRealMode:
popfq
lgdt 0x60(%rsp) # restore protected mode GDTR
lidt 0x50(%rsp) # restore protected mode IDTR
lea -IA32_REGS_SIZE(%rbp), %eax
.byte 0x0f, 0xa9 # pop gs
.byte 0x0f, 0xa1 # pop fs
popq %rbx
movl %ebx, %ss
popq %rbx
movl %ebx, %es
popq %rbx
movl %ebx, %ds
popq %rdi
popq %rsi
popq %rbx

View File

@@ -3,7 +3,7 @@
;------------------------------------------------------------------------------
;
; Copyright (c) 2006 - 2013, Intel Corporation. All rights reserved.<BR>
; Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
; This program and the accompanying materials
; are licensed and made available under the terms and conditions of the BSD License
; which accompanies this distribution. The full text of the license may be found at
@@ -240,14 +240,14 @@ BITS 64
push rbx
push rsi
push rdi
mov ebx, ds
push rbx ; Save ds segment register on the stack
mov ebx, es
push rbx ; Save es segment register on the stack
mov ebx, ss
push rbx ; Save ss segment register on the stack
push fs
push gs
mov rsi, rcx
@@ -266,15 +266,15 @@ BITS 64
shl eax, 12 ; segment address in high order 16 bits
lea ax, [rdx + (_BackFromUserCode - ASM_PFX(m16Start))] ; offset address
stosd ; [edi] <- return address of user code
sgdt [rsp + 60h] ; save GDT stack in argument space
movzx r10, word [rsp + 60h] ; r10 <- GDT limit
movzx r10, word [rsp + 60h] ; r10 <- GDT limit
lea r11, [rcx + (ASM_PFX(InternalAsmThunk16) - _BackFromUserCode.SavedCr4End) + 0xf]
and r11, ~0xf ; r11 <- 16-byte aligned shadowed GDT table in real mode buffer
mov [rcx + (SavedGdt - _BackFromUserCode.SavedCr4End)], r10w ; save the limit of shadowed GDT table
mov [rcx + (SavedGdt - _BackFromUserCode.SavedCr4End) + 2], r11 ; save the base address of shadowed GDT table
mov rsi, [rsp + 62h] ; rsi <- the original GDT base address
xchg rcx, r10 ; save rcx to r10 and initialize rcx to be the limit of GDT table
inc rcx ; rcx <- the size of memory to copy
@@ -282,7 +282,7 @@ BITS 64
rep movsb ; perform memory copy to shadow GDT table
mov rcx, r10 ; restore the original rcx before memory copy
mov rdi, r11 ; restore the original rdi before memory copy
sidt [rsp + 50h] ; save IDT stack in argument space
mov rax, cr0
mov [rcx + (_BackFromUserCode.SavedCr0End - 4 - _BackFromUserCode.SavedCr4End)], eax

View File

@@ -1,7 +1,7 @@
/** @file
IA-32/x64 MSR functions.
Copyright (c) 2006 - 2012, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -196,8 +196,8 @@ AsmMsrBitFieldRead32 (
Writes Value to a bit field in the lower 32-bits of a 64-bit MSR. The bit
field is specified by the StartBit and the EndBit. All other bits in the
destination MSR are preserved. The lower 32-bits of the MSR written is
returned. The caller must either guarantee that Index and the data written
is valid, or the caller must set up exception handlers to catch the exceptions.
returned. The caller must either guarantee that Index and the data written
is valid, or the caller must set up exception handlers to catch the exceptions.
This function is only available on IA-32 and x64.
If StartBit is greater than 31, then ASSERT().
@@ -420,7 +420,7 @@ AsmMsrAnd64 (
}
/**
Reads a 64-bit MSR, performs a bitwise AND followed by a bitwise
Reads a 64-bit MSR, performs a bitwise AND followed by a bitwise
OR, and writes the result back to the 64-bit MSR.
Reads the 64-bit MSR specified by Index, performs a bitwise AND between read
@@ -489,8 +489,8 @@ AsmMsrBitFieldRead64 (
Writes Value to a bit field in a 64-bit MSR. The bit field is specified by
the StartBit and the EndBit. All other bits in the destination MSR are
preserved. The MSR written is returned. The caller must either guarantee
that Index and the data written is valid, or the caller must set up exception
preserved. The MSR written is returned. The caller must either guarantee
that Index and the data written is valid, or the caller must set up exception
handlers to catch the exceptions. This function is only available on IA-32 and x64.
If StartBit is greater than 63, then ASSERT().

View File

@@ -1,7 +1,7 @@
/** @file
Real Mode Thunk Functions for IA32 and x64.
Copyright (c) 2006 - 2012, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -86,7 +86,7 @@ AsmGetThunk16Properties (
Prepares all structures a code required to use AsmThunk16().
Prepares all structures and code required to use AsmThunk16().
This interface is limited to be used in either physical mode or virtual modes with paging enabled where the
virtual to physical mappings for ThunkContext.RealModeBuffer is mapped 1:1.
@@ -168,48 +168,48 @@ AsmPrepareThunk16 (
AsmPrepareThunk16() must be called with ThunkContext before this function is used.
This function must be called with interrupts disabled.
The register state from the RealModeState field of ThunkContext is restored just prior
to calling the 16-bit real mode entry point. This includes the EFLAGS field of RealModeState,
The register state from the RealModeState field of ThunkContext is restored just prior
to calling the 16-bit real mode entry point. This includes the EFLAGS field of RealModeState,
which is used to set the interrupt state when a 16-bit real mode entry point is called.
Control is transferred to the 16-bit real mode entry point specified by the CS and Eip fields of RealModeState.
The stack is initialized to the SS and ESP fields of RealModeState. Any parameters passed to
the 16-bit real mode code must be populated by the caller at SS:ESP prior to calling this function.
The stack is initialized to the SS and ESP fields of RealModeState. Any parameters passed to
the 16-bit real mode code must be populated by the caller at SS:ESP prior to calling this function.
The 16-bit real mode entry point is invoked with a 16-bit CALL FAR instruction,
so when accessing stack contents, the 16-bit real mode code must account for the 16-bit segment
and 16-bit offset of the return address that were pushed onto the stack. The 16-bit real mode entry
point must exit with a RETF instruction. The register state is captured into RealModeState immediately
so when accessing stack contents, the 16-bit real mode code must account for the 16-bit segment
and 16-bit offset of the return address that were pushed onto the stack. The 16-bit real mode entry
point must exit with a RETF instruction. The register state is captured into RealModeState immediately
after the RETF instruction is executed.
If EFLAGS specifies interrupts enabled, or any of the 16-bit real mode code enables interrupts,
or any of the 16-bit real mode code makes a SW interrupt, then the caller is responsible for making sure
the IDT at address 0 is initialized to handle any HW or SW interrupts that may occur while in 16-bit real mode.
If EFLAGS specifies interrupts enabled, or any of the 16-bit real mode code enables interrupts,
then the caller is responsible for making sure the 8259 PIC is in a state compatible with 16-bit real mode.
If EFLAGS specifies interrupts enabled, or any of the 16-bit real mode code enables interrupts,
or any of the 16-bit real mode code makes a SW interrupt, then the caller is responsible for making sure
the IDT at address 0 is initialized to handle any HW or SW interrupts that may occur while in 16-bit real mode.
If EFLAGS specifies interrupts enabled, or any of the 16-bit real mode code enables interrupts,
then the caller is responsible for making sure the 8259 PIC is in a state compatible with 16-bit real mode.
This includes the base vectors, the interrupt masks, and the edge/level trigger mode.
If THUNK_ATTRIBUTE_BIG_REAL_MODE is set in the ThunkAttributes field of ThunkContext, then the user code
If THUNK_ATTRIBUTE_BIG_REAL_MODE is set in the ThunkAttributes field of ThunkContext, then the user code
is invoked in big real mode. Otherwise, the user code is invoked in 16-bit real mode with 64KB segment limits.
If neither THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 nor THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL are set in
ThunkAttributes, then it is assumed that the user code did not enable the A20 mask, and no attempt is made to
If neither THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 nor THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL are set in
ThunkAttributes, then it is assumed that the user code did not enable the A20 mask, and no attempt is made to
disable the A20 mask.
If THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 is set and THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL is clear in
ThunkAttributes, then attempt to use the INT 15 service to disable the A20 mask. If this INT 15 call fails,
If THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 is set and THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL is clear in
ThunkAttributes, then attempt to use the INT 15 service to disable the A20 mask. If this INT 15 call fails,
then attempt to disable the A20 mask by directly accessing the 8042 keyboard controller I/O ports.
If THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 is clear and THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL is set in
If THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 is clear and THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL is set in
ThunkAttributes, then attempt to disable the A20 mask by directly accessing the 8042 keyboard controller I/O ports.
If ThunkContext is NULL, then ASSERT().
If AsmPrepareThunk16() was not previously called with ThunkContext, then ASSERT().
If both THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 and THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL are set in
If both THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 and THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL are set in
ThunkAttributes, then ASSERT().
This interface is limited to be used in either physical mode or virtual modes with paging enabled where the
virtual to physical mappings for ThunkContext.RealModeBuffer is mapped 1:1.
@param ThunkContext A pointer to the context structure that describes the
16-bit real mode code to call.
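The usual call sequence is: obtain a buffer below 1 MB, call AsmPrepareThunk16(), fill in RealModeState, then invoke AsmThunk16(). A hedged outline; the buffer names are assumptions, the register fill is abbreviated, and ZeroMem() comes from BaseMemoryLib:

IA32_REGISTER_SET  Regs;
THUNK_CONTEXT      Ctx;

ZeroMem (&Regs, sizeof (Regs));
ZeroMem (&Ctx, sizeof (Ctx));

Ctx.RealModeState      = &Regs;
Ctx.RealModeBuffer     = RealModeBufferBelow1Mb;   // caller-provided, below 1 MB
Ctx.RealModeBufferSize = RealModeBufferSize;
Ctx.ThunkAttributes    = THUNK_ATTRIBUTE_BIG_REAL_MODE;

AsmPrepareThunk16 (&Ctx);

//
// Fill Regs (CS/Eip, SS/Esp, EFLAGS, ...) to describe the 16-bit
// entry point and its stack, then call with interrupts disabled.
//
AsmThunk16 (&Ctx);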
@@ -228,7 +228,7 @@ AsmThunk16 (
ASSERT ((UINTN)ThunkContext->RealModeBuffer + m16Size <= 0x100000);
ASSERT (((ThunkContext->ThunkAttributes & (THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 | THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL)) != \
(THUNK_ATTRIBUTE_DISABLE_A20_MASK_INT_15 | THUNK_ATTRIBUTE_DISABLE_A20_MASK_KBD_CTRL)));
UpdatedRegs = InternalAsmThunk16 (
ThunkContext->RealModeState,
ThunkContext->RealModeBuffer
@@ -250,7 +250,7 @@ AsmThunk16 (
This interface is limited to be used in either physical mode or virtual modes with paging enabled where the
virtual to physical mappings for ThunkContext.RealModeBuffer is mapped 1:1.
See AsmPrepareThunk16() and AsmThunk16() for the detailed description and ASSERT() conditions.
@param ThunkContext A pointer to the context structure that describes the