Obtained from: Juniper Networks, Semihalf
pmap.
Submitted by: Mark Tinguely
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm:
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the references to the
md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.
In pmap_clearbit(), we can also skip over tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in the pv_flag.
PVF_WRITE will turn caching back on and remove the PVF_MOD bit.
In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) has been moved
to the pmap_fix_cache().
We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at points where it may be possible to do so
(a kernel mapping has been removed, a write has been removed, or a read
has been removed and we know the mapping does not have multiple write
mappings to a page).
In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.
In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the cache fixing routine
(whichever is in use, pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.
In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically, if more than PMAP_REMOVE_CLEAN_LIST_SIZE
mappings are removed, just flush the entire cache.
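The pmap_remove() heuristic above can be sketched in a few lines. This is an illustrative stand-in, not the actual kernel code: the threshold value is made up, and the returned count merely represents how many cache operations would be issued (one cpu_idcache_wbinv_all() past the threshold, one per-range clean/invalidate below it).

```c
#include <stddef.h>

/*
 * Hypothetical sketch of the pmap_remove() cache-cleaning heuristic:
 * clean/invalidate per mapping while the removal count stays small,
 * and fall back to a single whole-cache flush once it exceeds
 * PMAP_REMOVE_CLEAN_LIST_SIZE.  The threshold here is illustrative.
 */
#define PMAP_REMOVE_CLEAN_LIST_SIZE 3

static int
cache_ops_for_removal(size_t nmappings)
{
    if (nmappings > PMAP_REMOVE_CLEAN_LIST_SIZE)
        return 1;                /* one cpu_idcache_wbinv_all() */
    return (int)nmappings;       /* one range operation per mapping */
}
```

On a VIVT cache a whole-cache flush has a fixed cost, so past a small number of mappings it beats walking each range.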
Make sure we cache entries in the L2 cache.
Approved by: re (blanket)
the kernel pmap.
Document the behavior of the xscale core 3 a bit more.
This should be a no-op, and it is needed for xscale core 3 supersections
support, as supersections are always part of domain 0.
- Add a default parent dma tag, similar to what has been done for sparc64.
- Before invalidating the dcache in POSTREAD, save the bits which are in the
same cache lines as our buffers, but not part of them, and restore them after
the invalidation.
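The POSTREAD fix above can be illustrated with a user-space simulation. This is a sketch under stated assumptions, not the actual busdma code: the 32-byte line size, the function name, and the memset() standing in for dcache_inv_range() are all illustrative.

```c
#include <stdint.h>
#include <string.h>

/*
 * Sketch of the POSTREAD partial-cache-line fix: before invalidating
 * the D-cache over a buffer whose start and end are not cache-line
 * aligned, save the bytes that share cache lines with the buffer but
 * sit outside it, and copy them back after the invalidation, so a
 * neighbour's data cannot be wiped.  memset() stands in for the
 * actual dcache_inv_range() here.
 */
#define CACHE_LINE 32

static void
invalidate_range(uint8_t *mem, size_t start, size_t len)
{
    uint8_t head[CACHE_LINE], tail[CACHE_LINE];
    size_t first = start & ~(size_t)(CACHE_LINE - 1);
    size_t last = (start + len + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
    size_t nhead = start - first;            /* bytes before the buffer */
    size_t ntail = last - (start + len);     /* bytes after the buffer  */

    memcpy(head, mem + first, nhead);        /* save the shared bits */
    memcpy(tail, mem + start + len, ntail);
    memset(mem + first, 0, last - first);    /* "invalidate" whole lines */
    memcpy(mem + first, head, nhead);        /* restore the shared bits */
    memcpy(mem + start + len, tail, ntail);
}
```

Only bytes inside the buffer end up clobbered; the partial lines at either edge keep their original contents.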
whole the physical memory, cached, using 1MB section mappings. This reduces
the address space available for user processes a bit, but given the amount of
memory a typical arm machine has, it is not (yet) a big issue.
It then provides a uma_small_alloc() that works as it does for architectures
which have a direct mapping.
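The reason a direct map of all of RAM makes uma_small_alloc() trivial is that translating a physical address to a kernel VA becomes pure arithmetic. The sketch below assumes hypothetical base addresses and macro names; none of them are the actual FreeBSD/arm constants.

```c
#include <stdint.h>

/*
 * Sketch of the direct-map arithmetic: if all of RAM is mapped,
 * cached, with 1MB section mappings starting at a fixed kernel VA,
 * then a physical address maps to a VA at a constant offset.
 * Base addresses below are illustrative, not the real layout.
 */
#define SECTION_SIZE 0x100000u        /* 1MB ARM section mapping */
#define DMAP_BASE_VA 0xc0000000u      /* hypothetical map base VA */
#define DMAP_BASE_PA 0x10000000u      /* hypothetical RAM start PA */

static uint32_t
arm_phys_to_dmap(uint32_t pa)
{
    /* Each 1MB of RAM lands in the matching 1MB section of the map. */
    return DMAP_BASE_VA + (pa - DMAP_BASE_PA);
}
```

With this in place, uma_small_alloc() can hand out a page by allocating a physical page and computing its VA, with no KVA allocation or pmap_enter() call.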
Eliminate the unused allpmaps list.
Tested by: cognet@
to appease -Wundef.
Add a new option, SKYEYE_WORKAROUNDS, which as the name suggests adds
workarounds for things skyeye doesn't simulate. Specifically:
- Use USART0 instead of DBGU as the console, make it not use DMA, and
manually provoke an interrupt once we're done in the transmit function.
- Skyeye maintains an internal clock counter, but apparently there's
no way to access it, so hack the timecounter code to return a value which
is increased at every clock interrupt. This is gross, but I didn't find a
better way to implement timecounters without hacking Skyeye to get the
counter value.
- Force the write-back of PTEs once we're done writing them, even if they
are supposed to be write-through. I don't know why I have to do that.
apparently only needed because skyeye has bugs in its cache emulation.
with malloc() or contigmalloc() as usual, but try to re-map the allocated
memory into a VA outside the KVA, non-cached, thus making the calls to
bus_dmamap_sync() for these buffers unnecessary.
even if the pte is supposed to be cached in write-through mode (might be a
skyeye bug, I'll have to check).
Move what can be moved (UMA zones creation, pv_entry_* initialization) from
pmap_init2() to pmap_init().
Create a new function, pmap_postinit(), called from cpu_startup(), to do the
L1 tables allocation.
pmap_init2() is now empty for arm as well.
declaration out of the #ifdef.
dumped.
For iq31244_machdep.c, attempt to recognize hints provided by the elf
trampoline.
an implementation of uma_small_alloc() which tries to preallocate memory
in 1MB chunks, mapping each chunk with a section mapping.
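The allocation strategy above can be sketched as a simple bump allocator. This is a user-space illustration, not the kernel code: aligned_alloc() stands in for the physical-memory allocator, the mapping step is only a comment, and the function name is made up.

```c
#include <stddef.h>
#include <stdlib.h>

/*
 * Illustrative sketch of a uma_small_alloc() that grabs memory 1MB at
 * a time (so each chunk can be entered as one section mapping) and
 * hands it out page by page.  aligned_alloc() stands in for the real
 * physical-page allocator; none of these names are kernel interfaces.
 */
#define CHUNK_SIZE (1024 * 1024)     /* one 1MB section */
#define PAGE_SIZE  4096

static char *chunk;                  /* current 1MB chunk */
static size_t chunk_off;             /* bytes already handed out */

static void *
small_alloc_page(void)
{
    if (chunk == NULL || chunk_off == CHUNK_SIZE) {
        chunk = aligned_alloc(CHUNK_SIZE, CHUNK_SIZE);
        if (chunk == NULL)
            return NULL;
        chunk_off = 0;               /* real code would map the chunk here */
    }
    void *p = chunk + chunk_off;
    chunk_off += PAGE_SIZE;
    return p;
}
```

Carving pages out of a section-mapped chunk means each small allocation costs one pointer bump instead of a page-table update, and the section mapping keeps TLB pressure low.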
- Garbage-collect pmap_update(); it had become quite useless.
ARM_TP_ADDRESS, where the tp will be stored. On CPUs that support it, a cache
line will be allocated and locked for this address, so that it will never go
to RAM. On CPUs that do not, a page is allocated for it (it will be a bit
slower, and is wrong for SMP, but should be fine for UP).
The tp is still stored in the mdthread struct, and at each context switch,
ARM_TP_ADDRESS gets updated.
Suggested by: davidxu
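The scheme above can be sketched as follows. A plain global stands in for the magic ARM_TP_ADDRESS (which the real kernel backs with a locked cache line or a dedicated page); the struct layout and function names are illustrative, not the actual FreeBSD interfaces.

```c
#include <stdint.h>

/*
 * Sketch of the ARM_TP_ADDRESS idea: userland TLS code always reads
 * the thread pointer from one fixed, well-known address, and the
 * context switch path rewrites that address with the incoming
 * thread's tp.  A global stands in for the fixed address here.
 */
static uintptr_t arm_tp_address;     /* stand-in for *ARM_TP_ADDRESS */

struct mdthread {
    uintptr_t md_tp;                 /* per-thread tp, kept in mdthread */
};

static void
cpu_switch_to(struct mdthread *newtd)
{
    /* On each context switch, publish the new thread's tp. */
    arm_tp_address = newtd->md_tp;
}

static uintptr_t
read_tp(void)
{
    /* What __aeabi_read_tp-style userland code would load. */
    return arm_tp_address;
}
```

Because the address never changes, TLS access needs no system call, and on CPUs with cache-line locking the read never touches RAM.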
While I'm there, fix style.
While I'm there, clean up pmap.h a bit.
Remove the cache state logic: right now, it causes more problems than it
solves.
Add helper functions for mapping devices while bootstrapping.
Reorganize the code a bit, and remove dead code.
Obtained from: NetBSD (partially)
<machine/pcb.h> before including <machine/pmap.h>.
It only supports sa1110 (on simics) right now, but xscale support should come
soon.
Some of the initial work was provided by:
Stephane Potvin <sepotvin at videotron.ca>
Most of this comes from NetBSD.