| |
handling. For statically linked apps this uses the __exidx_start/end
symbols set up by the linker. For dynamically linked apps it finds the
shared object that contains the given address and returns the location and
size of the exidx section in that shared object.
The dl_unwind_find_exidx() name is used by other BSD projects and Android,
and is mentioned in clang 3.5 comments as "the BSD interface" for finding
exidx data. GCC (in libgcc_s) expects the exact same API and functionality
to be provided by a function named __gnu_Unwind_Find_exidx(), so we provide
that with an alias ("strong reference").
Reviewed by: kib@
MFC after: 1 week
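Below is a minimal sketch of the static-link case described above; the prototype shape (pc/count pair) and the use of FreeBSD's __strong_reference() macro are assumptions for illustration, not the committed code.

    #include <sys/cdefs.h>

    /* Linker-provided bounds of the .ARM.exidx section in a static binary. */
    extern char __exidx_start[], __exidx_end[];

    void *
    dl_unwind_find_exidx(const void *pc __unused, int *pcount)
    {
        /* Each exidx entry is two 32-bit words. */
        *pcount = (__exidx_end - __exidx_start) / 8;
        return (__exidx_start);
    }

    /* libgcc_s expects the same functionality under this name. */
    __strong_reference(dl_unwind_find_exidx, __gnu_Unwind_Find_exidx);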
| |
Fix the code used on a Raspberry Pi.
Reviewed by: markm@
| |
Since PJ4Bv7 uses the armv7_ CPU functions, only the pj4b_config
function is necessary. Remove the obsolete routines.
| |
memory ordering model allows writes to different devices to complete out
of order, leading to a situation where the write that clears an interrupt
source at a device can complete after a write that unmasks and EOIs the
interrupt at the interrupt controller, leading to a spurious re-interrupt.
This adds a generic barrier function specific to the needs of interrupt
controllers, and calls that function from the GIC and TI AINTC controllers.
There may still be other soc-specific controllers that need to make the call.
Reviewed by: cognet, Svatopluk Kraus <onwahe@gmail.com>
MFC after: 3 days
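As an illustration only (names assumed, not the committed code), such a barrier boils down to draining the CPU and L2 write buffers before the interrupt controller registers are written:

    #include <machine/cpufunc.h>

    static __inline void
    irq_memory_barrier(void)
    {
        /* Drain the CPU's buffered writes to Device memory. */
        __asm __volatile("dsb" : : : "memory");
        /* Ask the L2 controller (e.g. a PL310) to drain its store buffer too. */
        cpu_l2cache_drain_writebuf();
    }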
| |
the same platform methods.
| |
platform code, it is expected these will be merged in the future when the
ARM code is more complete.
Until more boards can be tested, only use this with the Raspberry Pi and
rename the functions on the other SoCs.
Reviewed by: ian@
| |
Here, "suitably endowed" means that the System Control Coprocessor
(#15) has Performance Monitoring Registers, including a CCNT (Cycle
Count) register.
The CCNT register is used in a way similar to the TSC register in
x86 processors by the get_cyclecount(9) function. The entropy-harvesting
thread is a heavy user of this function, and will benefit from not
having to call binuptime(9) instead.
One problem with the CCNT register is that it is only 32 bits wide, so
the upper 32 bits of the returned number are always 0. The entropy
harvester does not care, but in case anyone else does, follow-up
work may include an interrupt trap to increment an upper-32-bit
counter on CCNT overflow.
Another problem is that the CCNT register is not readable in user-mode
code; it can be made readable by userland, but then it is also
writable, and so is a good chunk of the PMU system. For that reason,
the CCNT is not enabled for user-mode access in this commit.
Like the x86, there is one CCNT per core, so they don't all run in
perfect sync.
Reviewed by: ian@ (an earlier version)
Tested by: ian@ (same earlier version)
Committed from: WANDBOARD-QUAD
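For illustration, reading the CCNT looks roughly like this (ARMv7 PMU register encoding; a sketch, not the committed get_cyclecount() implementation):

    #include <sys/types.h>

    static __inline uint64_t
    ccnt_cyclecount(void)
    {
        uint32_t ccnt;

        /* PMCCNTR (cycle count register): MRC p15, 0, <Rt>, c9, c13, 0 */
        __asm __volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (ccnt));
        /* Only 32 bits wide, so the upper half of the result is always 0. */
        return ((uint64_t)ccnt);
    }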
| |
On modern ARM SoCs the L2 cache controller sits between the CPU and the
AXI bus, and most on-chip memory-mapped devices are on the AXI bus. We
map the device registers using the 'Device' memory attribute, which means
the memory is not cached, but writes to it are buffered. Ensuring that a
write has made it all the way to a device may require that the L2
controller take some action.
There is currently only one implementation of the new function, for the
PL310 cache controller. It invokes an operation that the controller
manual calls "cache sync", but it actually has nothing to do with the
cache at all: it triggers a drain of all pending store-buffer writes and
blocks until they complete.
The sheeva and xscale L2 controllers (which predate the concept of Device
memory) don't seem to have a corresponding function. It appears that the
standard armv5 drain_writebuf function includes draining all the way
through the L2 controller.
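A sketch of the PL310 operation described above (register offset per the PL310 manual; the function name and access style here are illustrative):

    #include <sys/types.h>

    #define PL310_CACHE_SYNC    0x730   /* "Cache Sync" register offset */

    static void
    pl310_drain_writebuf(volatile uint32_t *pl310_base)
    {
        /* Any write starts a drain of all pending store-buffer writes. */
        pl310_base[PL310_CACHE_SYNC / 4] = 0;
        /* Bit 0 reads as 1 while the drain is still in progress. */
        while ((pl310_base[PL310_CACHE_SYNC / 4] & 1) != 0)
            continue;
    }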
| |
and armv5 as well.
| |
called by platform init routines to fine-tune cache performance.
| |
This should have been part of r265444.
| |
map them both to the same interrupt number like other arches do.
| |
This was added ca. 2004 for the purpose of ensuring the caches were in the
right state after the debugger set a breakpoint. kdb_cpu_sync_icache()
was added in 2007 to handle that situation, and now the wbinv_all is
actually harmful because the operation isn't broadcast to other cores.
| |
using armv7_idcache_wbinv_all, because wbinv_all doesn't broadcast the
operation to other cores. In elf_cpu_load_file() use icache_sync_all()
and explain why it's needed (and why other sync operations aren't).
As part of doing this, all callers of cpu_icache_sync_all() were
inspected to ensure they weren't relying on the old side effect of
doing a wbinv_all along with the icache work.
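For reference, a broadcast icache sync on ARMv7 is built from the inner-shareable maintenance operations; this sketch is illustrative, not the committed routine:

    static __inline void
    icache_sync_all_broadcast(void)
    {
        __asm __volatile(
            "dsb                        \n"  /* make the new instructions visible in memory */
            "mcr p15, 0, %0, c7, c1, 0  \n"  /* ICIALLUIS: invalidate all icaches, inner shareable */
            "mcr p15, 0, %0, c7, c1, 6  \n"  /* BPIALLIS: invalidate branch predictors, inner shareable */
            "dsb                        \n"
            "isb                        \n"
            : : "r" (0) : "memory");
    }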
| |
and flushing the entire icache is needlessly expensive.
| |
of memory to support ISA addressing limitations.
| |
-fms-extensions.
MFC after: 2 weeks
| |
* Save the required VFP registers on context switch. If the exception bit
is set we need to save and restore the FPINST register, and if the fp2v
bit is also set we need to save and restore FPINST2.
* Move saving and restoring the floating point control registers to C.
* Clear the fpexc exception and fp2v flags on a floating-point exception.
* Signal a SIGFPE if the fpexc exception flag is set on an undefined
instruction. This is how the ARM core signals to software there is a
floating-point exception.
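A sketch of the FPEXC-dependent save logic from the first item above (bit positions per the ARM VFP documentation; structure and names are illustrative, not the committed code):

    #include <sys/types.h>

    #define VFPEXC_EX    (1U << 31)    /* exception bit */
    #define VFPEXC_FP2V  (1U << 28)    /* FPINST2 valid bit */

    struct vfp_exc_state {
        uint32_t fpexc;
        uint32_t fpinst;
        uint32_t fpinst2;
    };

    static void
    vfp_save_exc_state(struct vfp_exc_state *st)
    {
        __asm __volatile("vmrs %0, fpexc" : "=r" (st->fpexc));
        if (st->fpexc & VFPEXC_EX) {
            __asm __volatile("vmrs %0, fpinst" : "=r" (st->fpinst));
            if (st->fpexc & VFPEXC_FP2V)
                __asm __volatile("vmrs %0, fpinst2" : "=r" (st->fpinst2));
        }
    }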
| |
identical, this allows us to build for both v6 and v7 together.
| |
however only arm, armeb, armv6, and soon armv6hf will be used.
| |
which was added by cognet in 2012, so remove the no-longer-applicable
license stuff that referred to all the old contents, and put in a
standard 2-clause BSD license (to cover the 6 lines of useful code left
in here).
| |
swi_exit code in exception.S instead of having its own inline expansion
of the DO_AST and PULLFRAME macros. That means that now all references
to the PUSH/PULLFRAME and DO_AST macros are localized to exception.S,
so move the macros themselves into there and remove them from asmacros.h.
| |
using it doesn't have to have an "AST_LOCALS" macro somewhere in the file.
| |
never actually ran on these chips (other than using SA1 support in an
emulator to do the early porting to FreeBSD long long ago). The clutter
and complexity of some of this code keeps getting in the way of other
maintenance, so it's time to go.
| |
enabled. In vfp_discard(), if the state in the VFP hardware belongs to
the thread which is dying, NULL out pcpu fpcurthread to indicate the
state currently in the hardware belongs to nobody.
Submitted by: Juergen Weiss
Pointy hat to: me
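In rough outline (a sketch using the MI pcpu accessors, not the exact committed code), the fix amounts to:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/pcpu.h>

    void
    vfp_discard(struct thread *td)
    {
        /*
         * If the state in the VFP hardware belongs to the dying thread,
         * it now belongs to nobody.
         */
        if (PCPU_GET(fpcurthread) == td)
            PCPU_SET(fpcurthread, NULL);
    }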
| |
a leftover from the days when a low-level debugger had hooks in the
undefined exception vector and needed stack space to function. These days
it effectively isn't used because we switch immediately to the svc32 mode
stack on exception entry. For that, the single undef mode stack per core
that gets set up at init time works fine.
The stack wasn't necessary but it was harmful, because the space for it
was carved out of the normal per-thread svc32 stack, in effect cutting
that 8K stack in half. If svc32 mode used more than 4k of stack space it
wandered down into the undef mode stack, and then an undef exception would
overwrite a couple of words on the stack while switching to svc32 mode,
corrupting the svc32 stack. Having another stack abut the bottom of the
svc32 stack also effectively mooted the guard page below the stack.
This work is based on analysis and patches submitted by Juergen Weiss.
| |
The old code was full of complexity that would only matter if the
kernel itself used the VFP hardware. Now that's reduced to either killing
the userland process or panicking the kernel on an illegal VFP instruction.
This removes most of the complexity from the assembler code, reducing it
to just calling the save code if the outgoing thread used the VFP.
The routine that stores the VFP state now takes a flag that indicates
whether the hardware should be disabled after saving state. Right now it
always is, but this makes the code ready to be used by get/set_mcontext()
(doing so will be addressed in a future commit).
Remove the arm-specific pc_vfpcthread from struct pcpu and use the MI
field pc_fpcurthread instead.
Reviewed by: cognet
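A sketch of what such a save routine looks like with the disable flag (the names and the asm helper are assumptions, not the committed interface):

    #include <sys/types.h>

    #define VFPEXC_EN    (1U << 30)    /* VFP enable bit in FPEXC */

    struct vfp_state;                              /* d0-d31, fpscr, ... */
    void vfp_store_registers(struct vfp_state *);  /* assumed asm helper */

    void
    vfp_save_state(struct vfp_state *vfp, int disable)
    {
        uint32_t fpexc;

        vfp_store_registers(vfp);
        if (disable) {
            /* Turn the unit off so the next VFP use traps and reloads state. */
            __asm __volatile("vmrs %0, fpexc" : "=r" (fpexc));
            fpexc &= ~VFPEXC_EN;
            __asm __volatile("vmsr fpexc, %0" : : "r" (fpexc));
        }
    }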
| |
we've been using was actually just spinning due to ARM having redefined
the old 'wait for interrupt' operation via the system coprocessor as a nop
and replacing it with a WFI instruction.
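For illustration, the difference is a single instruction (a sketch; the __ARM_ARCH test is just one way to select it):

    static __inline void
    cpu_wait_for_interrupt(void)
    {
    #if __ARM_ARCH >= 7
        __asm __volatile("wfi");                        /* architectural WFI */
    #else
        __asm __volatile("mcr p15, 0, %0, c7, c0, 4"    /* old CP15 wait-for-interrupt */
            : : "r" (0));
    #endif
    }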
| |
implementation in arm/machdep.c. Most arm platforms either don't need to
do anything, or just need to call the standard eventtimer init routines.
A generic implementation that does that is now provided via weak linkage.
Any platform that needs to do something different can provide its own
implementation to override the generic one.
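The weak-linkage arrangement looks roughly like this (function names here are illustrative, and cpu_initclocks_bsp() is assumed to be the standard eventtimer startup hook):

    #include <sys/cdefs.h>

    void cpu_initclocks_bsp(void);      /* standard eventtimer init (assumed declaration) */

    /* Generic implementation; suits platforms with nothing special to do. */
    void
    arm_generic_initclocks(void)
    {
        cpu_initclocks_bsp();
    }
    __weak_reference(arm_generic_initclocks, cpu_initclocks);

    /*
     * A platform needing different behavior provides its own (strong)
     * cpu_initclocks(), which overrides the weak reference above.
     */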
| |
implementations for each of the chips we support. Most chips up through
armv6 can use the armv4 implementation which has a single coprocessor
opcode for this operation. The rather more complex armv7 implementation
comes from NetBSD.
| |
it into a bunch of different .c files. Remove declarations for the unused
mptramp() function from everywhere except ArmadaXP (and I think it's
really not used there either, because the code that references it appears
to be insanely does-nothing in nature).
| |
of them is provided anywhere. (gcc was nice enough to warn about this,
clang didn't for some reason.)
| |
Invalidate the L1 PTE regardless of the existence of the corresponding
l2_bucket. This is relevant when a superpage is entered via
pmap_enter_object(), and fixes a crash when entering a page in place
of a superpage that was not properly removed.
| |
necessary header file which has the new FAULT_WNR symbol defined in it.
| |
in as required.
| |
Pointed out by: alc
| |
communicate the kernel's physical load address from where it's known in
initarm() into cpu_mp_start() which is called from non-arm code and
takes no parameters.
This adds the global variable and ensures that all the various copies
of initarm() set it. It uses the variable in cpu_mp_start(), eliminating
the last uses of KERNPHYSADDR outside of locore.S (where we can now
calculate it instead of relying on the constant).
| |
a new physmem.c file. The new code provides helper routines that can be
used by legacy SoCs and newer FDT-based systems. There are routines to
add one or more regions of physically contiguous ram, and exclude one or
more physically contiguous regions of ram. Ram can be excluded from crash
dumps, from being given over to the vm system for allocation management,
or both. After all the included and excluded regions have been added,
arm_physmem_init_kernel_globals() processes the regions into the global
dump_avail and phys_avail arrays and realmem and physmem variables that
communicate memory configuration to the rest of the kernel.
Convert all existing SoCs to use the new helper code.
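Typical use from a platform's early init might look like this (a sketch; the addresses are made up and the flag/function details should be checked against physmem.h):

    #include <sys/types.h>
    #include <machine/physmem.h>

    static void
    example_setup_physmem(void)
    {
        /* One physically contiguous region of ram starting at 256MB. */
        arm_physmem_hardware_region(0x10000000, 256 * 1024 * 1024);

        /* Keep a firmware-reserved chunk away from the vm system. */
        arm_physmem_exclude_region(0x10000000, 1024 * 1024, EXFLAG_NOALLOC);

        /* Build dump_avail/phys_avail and set realmem/physmem. */
        arm_physmem_init_kernel_globals();
    }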
| |
This was an optimization used only by a few xscale platforms. Part of
the optimization was to create a direct map for all physical pages, and
that resulted in making multiple mappings of pages in a way that bypassed
the logic in pmap.c to handle VIVT cache aliasing. It also just generally
made the code more complex and hard to maintain for all SoCs.
Reviewed by: cognet
| |
remove the need to load the kernel at a fixed address.