| Commit message | Author | Age | Files | Lines |
|
for having kernel text non-writable, because we still need to
apply relocations. On top of that, the PBVM page table has all
pages marked as RWX, so it's an inconsistency to begin with.
|
after loading the kernel's text segment. The kernel will do the
same for loaded modules, so don't worry about that.
|
boundaries. For good measure, align all other objects to cache
line boundaries.
Use the new arch_loadseg interface to keep track of kernel text
and data so that we can wire as much of it as possible. It is
the responsibility of the kernel to link critical (read: IVT-related)
code and data at the front of the respective segment so that it's
covered by TRs before the kernel has a chance to add more
translations.
Use a better way of determining whether we're loading a legacy
kernel or not. We can't check for the presence of the PBVM page
table, because we may have unloaded that kernel and loaded an
older (legacy) kernel after that. Simply use the latest load
address for it.
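
A minimal sketch of what such a per-segment hook can look like; the
signature, the field names, and the two-entry tracking array are
illustrative assumptions, not the loader's actual arch_loadseg
interface:

    /*
     * Hypothetical per-segment hook invoked by the ELF loader for every
     * PT_LOAD program header it copies in.  The ia64 loader could use it
     * to remember where kernel text and data landed, so that as much of
     * each segment as possible can later be wired with translation
     * registers (TRs) before control passes to the kernel.
     */
    #include <elf.h>        /* Elf64_Phdr, PT_LOAD (assumed available) */
    #include <stdint.h>

    struct tracked_seg {
        uint64_t ts_vaddr;      /* virtual start of the segment */
        uint64_t ts_size;       /* size in bytes */
        uint32_t ts_flags;      /* PF_X/PF_W/PF_R */
    };

    static struct tracked_seg tracked[2];   /* text and data */
    static int ntracked;

    void
    arch_loadseg(void *ehdr, void *phdr, uint64_t delta)
    {
        Elf64_Phdr *ph = phdr;

        (void)ehdr;
        if (ph->p_type != PT_LOAD || ntracked == 2)
            return;
        tracked[ntracked].ts_vaddr = ph->p_vaddr + delta;
        tracked[ntracked].ts_size = ph->p_memsz;
        tracked[ntracked].ts_flags = (uint32_t)ph->p_flags;
        ntracked++;
        /* Wiring with itr/dtr inserts happens later, at kernel entry. */
    }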
|
Add support for Pre-Boot Virtual Memory (PBVM) to the loader.
PBVM allows us to link the kernel at a fixed virtual address without
having to make any assumptions about the physical memory layout. On
the SGI Altix 350, for example, there's no usable physical memory
below 192GB. PBVM also gives us better control over where we
physically load the kernel and its modules, so that we can make
sure we load the kernel in memory that's close to the BSP.
The PBVM is managed by a simple page table. The minimum size of the
page table is 4KB (EFI page size) and the maximum is currently set
to 1MB. A page in the PBVM is 64KB, as that's the maximum alignment
one can specify in a linker script. The bottom line is that PBVM is
between 64KB and 8GB in size.
The loader maps the PBVM page table at a fixed virtual address,
using a single translation. The PBVM itself is also mapped with a
single translation, covering a maximum of 32MB.
While here, increase the heap in the EFI loader from 512KB to 2MB
and set the stage for supporting relocatable modules.
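
The stated limits follow from that arithmetic, assuming 8-byte PTEs
in the PBVM page table (which is what makes a 1MB table cover 8GB).
A small standalone sketch, with illustrative constant names:

    #include <stdint.h>
    #include <stdio.h>

    #define PBVM_PAGE_SIZE  (64UL * 1024)           /* 64KB PBVM pages */
    #define PBVM_PGTBL_MIN  (4UL * 1024)            /* one 4KB EFI page */
    #define PBVM_PGTBL_MAX  (1UL * 1024 * 1024)     /* current 1MB cap */

    int
    main(void)
    {
        /* Assumption: each page-table entry is a 64-bit (8-byte) PTE. */
        uint64_t min_pages = PBVM_PGTBL_MIN / sizeof(uint64_t); /* 512  */
        uint64_t max_pages = PBVM_PGTBL_MAX / sizeof(uint64_t); /* 128K */

        printf("smallest PBVM (one page):  %ju KB\n",
            (uintmax_t)(PBVM_PAGE_SIZE / 1024));                /* 64KB */
        printf("4KB page table maps up to: %ju MB\n",
            (uintmax_t)(min_pages * PBVM_PAGE_SIZE >> 20));     /* 32MB */
        printf("1MB page table maps up to: %ju GB\n",
            (uintmax_t)(max_pages * PBVM_PAGE_SIZE >> 30));     /* 8GB  */
        return (0);
    }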
|
us to link the kernel at different addresses without needing to build
a corresponding loader.
|
speculative loads. This at least makes control-speculative loads
work. In the future we should analyze which faults/exceptions
we want to handle rather than defer, to avoid having to call the
recovery code when it's not strictly necessary.
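
For context, which faults a control-speculative load (ld.s) traps on
versus defers is governed by the defer bits in the ia64 default
control register (cr.dcr). A rough sketch of the kind of
read-modify-write involved; the accessors are assumed inline-asm
helpers and the bit mask is left as a placeholder:

    #include <stdint.h>

    /*
     * Placeholder for the OR of the cr.dcr defer bits (dm, dp, dk, dx,
     * dr, da, dd); the exact positions come from the Itanium
     * architecture manual and are not reproduced here.
     */
    #define DCR_DEFER_BITS  0UL

    static inline uint64_t
    read_dcr(void)
    {
        uint64_t dcr;

        __asm __volatile("mov %0 = cr.dcr" : "=r"(dcr));
        return (dcr);
    }

    static inline void
    write_dcr(uint64_t dcr)
    {
        /* Real code serializes (srlz.d) after writing a control reg. */
        __asm __volatile("mov cr.dcr = %0" : : "r"(dcr) : "memory");
    }

    /*
     * Defer the selected faults so a speculative load that would trap
     * merely NaTs its target and lets chk.s/recovery code handle it.
     */
    static void
    defer_speculation_faults(void)
    {
        write_dcr(read_dcr() | DCR_DEFER_BITS);
    }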
|
1. Make libefi portable by removing ia64-specific code and building
it on i386 and amd64 by default to prevent regressions. These
changes include fixes and improvements over previous code to
establish or improve APIs where none existed or where the amount
of kludging was unacceptably high.
2. Increase the amount of sharing between the efi and ski loaders
to improve maintainability of the loaders and simplify making
changes to the loader-kernel handshaking in the future.
The versions of the efi and ski loaders have both been bumped to 1.2,
as user-visible improvements and changes have been made.
|
bit-fields. Unify the PTE defines accordingly and update all
uses.
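
For illustration, flat mask-style defines along the lines of the
architected Itanium short-format PTE; the names here are made up and
need not match the commit's actual identifiers:

    #include <stdint.h>

    typedef uint64_t pt_entry_t;

    #define PTE_PRESENT   0x0000000000000001UL  /* p:  present */
    #define PTE_MA_MASK   0x000000000000001cUL  /* ma: memory attribute */
    #define PTE_ACCESSED  0x0000000000000020UL  /* a:  accessed */
    #define PTE_DIRTY     0x0000000000000040UL  /* d:  dirty */
    #define PTE_PL_MASK   0x0000000000000180UL  /* pl: privilege level */
    #define PTE_AR_MASK   0x0000000000000e00UL  /* ar: access rights */
    #define PTE_PPN_MASK  0x0003fffffffff000UL  /* ppn: physical page */

    /* Mask tests replace the old bit-field member accesses. */
    static inline int
    pte_present(pt_entry_t pte)
    {
        return ((pte & PTE_PRESENT) != 0);
    }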
|
per letter dated July 22, 1999.
Approved by: core
|
things over floppy size limits, I can exclude it for release builds or
something like that. Most of the changes are to get the load_elf.c file
into a separate elf32_ or elf64_ namespace so that you can have two
ELF loaders present at once. Note that for 64-bit kernels, it actually
starts up the kernel already in 64-bit mode with paging enabled. This
is really easy because we have a known minimum feature set.
Of note is that for amd64, we have to pass in the BIOS int 15 0xe820
memory map because once in long mode, you absolutely cannot make VM86
calls. amd64 does not use 'struct bootinfo' at all. It is a pure loader
metadata startup, just like sparc64 and powerpc. Much of the
infrastructure to support this was adapted from sparc64.
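
A rough sketch of how one load_elf.c can end up in two namespaces:
compile it twice with a different ELF word size and paste that size
into every external name. The macro names below are an assumption
about the mechanism, reduced to the essentials:

    #include <stdint.h>

    /*
     * Compile once with -D__ELF_WORD_SIZE=32 and once with
     * -D__ELF_WORD_SIZE=64 so that elf32_loadfile() and
     * elf64_loadfile() can coexist in the same loader binary.
     */
    #ifndef __ELF_WORD_SIZE
    #define __ELF_WORD_SIZE 64
    #endif

    #define ELFN_PASTE(n, x)   elf ## n ## _ ## x
    #define ELFN_EXPAND(n, x)  ELFN_PASTE(n, x)
    #define __elfN(x)          ELFN_EXPAND(__ELF_WORD_SIZE, x)

    /* External symbols get wrapped this way, e.g.: */
    int __elfN(loadfile)(const char *fname, uint64_t dest, void **result);
    /* ...which declares elf64_loadfile (or elf32_loadfile). */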
|
Previous kernels unintentionally depended on this mapping, but as
of version 1.123 of src/sys/ia64/ia64/machdep.c this dependency
has been removed. Consequently, one has to update the kernel
before updating the loader. The documented/recommended upgrade
will suffice in this case.
Due to a visible (from the kernel's point of view) change in
behaviour, bump the loader version number from 0.3 to 1.0.
Approved by: re (carte blanche)
|
pages are 4KB.
o As a second-order fix, don't assume we have enough space
left in a page after the bootinfo block to hold the memory
map.
o A third-order fix is that we removed the assumption that a
bootinfo block fits in a single 8KB page.
PR: ia64/39415
Submitted by: Espen Skoglund <esk@ira.uka.de>
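
The sizing logic this implies amounts to rounding the bootinfo block
plus the memory map up to whole 4KB EFI pages instead of hoping both
fit in whatever is left of one page. A small sketch with a stand-in
structure and illustrative names:

    #include <stddef.h>
    #include <stdint.h>

    #define EFI_PAGE_SHIFT  12
    #define EFI_PAGE_SIZE   (1UL << EFI_PAGE_SHIFT)        /* 4KB */

    /* Stand-in for the bootinfo block handed to the kernel. */
    struct bootinfo_stub {
        uint64_t bi_magic;
        uint64_t bi_memmap;             /* address of the EFI memory map */
        uint64_t bi_memmap_size;        /* its size in bytes */
        /* ... */
    };

    /* EFI pages needed for bootinfo followed by the memory map. */
    static uint64_t
    bootinfo_pages(size_t memmap_size)
    {
        size_t total;

        total = sizeof(struct bootinfo_stub) + memmap_size;
        return ((total + EFI_PAGE_SIZE - 1) >> EFI_PAGE_SHIFT);
    }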
|
- Don't include ia64_cpu.h and cpu.h
- Guard definitions by _NO_NAMESPACE_POLLUTION
- Move definition of KERNBASE to vmparam.h
o Move definitions of IA64_RR_{BASE|MASK} to vmparam.h
o Move definitions of IA64_PHYS_TO_RR{6|7} to vmparam.h
o While here, remove some left-over Alpha references.
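
For context, these macros encode ia64's use of the top three
virtual-address bits as a region number, with regions 6 and 7
conventionally holding the uncacheable and cacheable direct mappings
of physical memory. The usual shape of the definitions (check the
real vmparam.h for the authoritative versions):

    #include <stdint.h>

    /* Bits 63:61 of a virtual address select one of eight regions. */
    #define IA64_RR_BASE(n)         ((uint64_t)(n) << 61)
    #define IA64_RR_MASK(va)        ((va) & ((1UL << 61) - 1))

    /* Direct mappings of physical memory in regions 6 and 7. */
    #define IA64_PHYS_TO_RR6(pa)    ((pa) | IA64_RR_BASE(6))
    #define IA64_PHYS_TO_RR7(pa)    ((pa) | IA64_RR_BASE(7))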
|
hardwiring the location.
|
register r8. We continue to write the bootinfo block at the same
hardwired address, because the kernel still expects it there.
It is expected that future kernels use register r8 to get to the
bootinfo block and no longer depend on the hardwired address.
Bump the loader version once again due to the interface change.
|
the VM registers. This ought to make things slightly more reliable here.
|
we start changing translation registers. Also, call ExitBootServices
before we jump into the kernel.
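
For reference, the handoff follows the standard EFI pattern: fetch
the memory map to obtain a current map key, then call
ExitBootServices with that key. A trimmed sketch (no retry loop for
a stale map key; not the loader's actual code):

    #include <efi.h>

    /*
     * Grab a current memory map (and thus a valid map key), then leave
     * boot services.  After this returns successfully the loader owns
     * the machine and may rewrite translation registers before jumping
     * into the kernel.
     */
    static EFI_STATUS
    leave_boot_services(EFI_HANDLE image, EFI_BOOT_SERVICES *bs)
    {
        EFI_MEMORY_DESCRIPTOR *map = NULL;
        UINTN map_size = 0, map_key, desc_size;
        UINT32 desc_version;
        EFI_STATUS status;

        /* A first call with a zero-sized buffer reports the size needed. */
        status = bs->GetMemoryMap(&map_size, map, &map_key, &desc_size,
            &desc_version);
        if (status != EFI_BUFFER_TOO_SMALL)
            return (status);
        map_size += 2 * desc_size;      /* slack in case the map grows */
        status = bs->AllocatePool(EfiLoaderData, map_size, (void **)&map);
        if (EFI_ERROR(status))
            return (status);
        status = bs->GetMemoryMap(&map_size, map, &map_key, &desc_size,
            &desc_version);
        if (EFI_ERROR(status))
            return (status);
        /* The map key proves our view of the memory map is current. */
        return (bs->ExitBootServices(image, map_key));
    }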
|
* Add EFI network support.
|