author     imp <imp@FreeBSD.org>    2010-01-10 20:29:20 +0000
committer  imp <imp@FreeBSD.org>    2010-01-10 20:29:20 +0000
commit     90552e474a0c3a30d9265060f018c784ca048739 (patch)
tree       0a284d72f39d660a815b54c2ef5e526d7d40b13c /sys/mips
parent     4eb0a79f481aeebae1406c7ff153dab8577003ad (diff)
Merge from projects/mips to head by hand:
Sorry for the huge firehose on this commit; it would be too tedious to do it file by file.

r201881 | imp | 2010-01-08 20:08:22 -0700
  Rename mips_pcpu_init to mips_pcpu0_init since it applies only to the BSP.
  Provide a missing prototype.
r201880 | neel | 2010-01-08 19:17:14 -0700
  Compute the target of the jump in the 'J' and 'JAL' instructions correctly.
  The 256MB segment is formed by taking the top 4 bits of the address of the
  instruction in the "branch delay" slot, as opposed to the 'J' or 'JAL'
  instruction itself. (A short C sketch of this calculation follows the log.)
r201845 | imp | 2010-01-08 15:48:21 -0700
  Centralize initialization of pcpu, and set curthread early...
r201770 | neel | 2010-01-07 22:53:11 -0700
  Add a DDB command "show pcb" to dump out the contents of a thread's PCB.
r201631 | neel | 2010-01-05 23:42:08 -0700
  Remove all CFE-specific code from locore.S. The CFE entrypoint
  initialization is now done in platform-specific code.
r201563 | neel | 2010-01-04 23:58:54 -0700
  Increase the size of the kernel stack for thread0 from PAGE_SIZE to
  (2 * PAGE_SIZE). It depends on the memory allocated by pmap_steal_memory()
  being aligned to a PAGE_SIZE boundary.
r200656 | imp | 2009-12-17 16:55:49 -0700
  Placeholder ptrace mips module. Not entirely sure what's required here yet,
  so I've not connected it to the build. I think we'll need to move something
  into the processor-specific part of the mips port by requiring
  mips_cpu_ptrace or platform_cpu_ptrace to be provided by the ports to
  get/set processor-specific registers, a la SSE registers on x86.
r200342 | imp | 2009-12-09 18:42:44 -0700
  app_descriptor_addr is unused (I know it is still referenced), and it is
  unnecessary since we pass a3 unmodified to platform_start. Eliminate it
  from here and kill one more TARGET_OCTEON in the process.
r199760 | imp | 2009-11-24 10:15:22 -0700
  Add in Cavium's CID. Report what the unknown CID is.
r199755 | imp | 2009-11-24 09:53:58 -0700
  Looks like there's more to this patch than just this one file. I'll leave
  it to neel@ to get all the relevant pieces into the tree.
r199754 | imp | 2009-11-24 09:32:31 -0700
  Include opt_cputype.h for all .c and .S files referencing TARGET_OCTEON.
  Spell the ld script name right.
  # For the most part, we need to enhance the infrastructure to obviate the
  # need for such an intrusive option.
r199753 | imp | 2009-11-24 09:30:29 -0700
  Remove a bogus comment. Include opt_cputype.h since TARGET_OCTEON moved
  there.
r199752 | imp | 2009-11-24 09:29:23 -0700
  Make sure kstack0 is page aligned.
  # This may have been from neel@ for the sibyte stuff.
r199742 | imp | 2009-11-24 01:35:11 -0700
  Move the hard-wiring of the dcache on octeon outside of the if statement.
  When no-caches support was added, it looks like TARGET_OCTEON was bogusly
  moved inside the if. Also, include opt_cputype.h to make TARGET_OCTEON
  actually active.
  # Now we die in pmap init somewhere... most likely because 32MB of RAM is
  # too tight given the load address we're using.
r199741 | imp | 2009-11-24 01:21:48 -0700
  TARGET_OCTEON requires opt_cputype.h.
r199736 | imp | 2009-11-24 00:40:38 -0700
  Prefer ANSI spellings of uintXX_t, etc.
r199598 | imp | 2009-11-20 09:30:35 -0700
  Horrible kludge to make octeon32 work. I think a better way is to move the
  generic code into the config files...
r199597 | imp | 2009-11-20 09:27:50 -0700
  Cast vaddr to uintptr_t before casting it to a bus_space_handle_t.
  # I'm sure this indicates a problem, but I'm not sure what...
r199496 | gonzo | 2009-11-18 15:52:05 -0700
  - Add a cpu_init_interrupts function that prepares what is required for
    spinning out interrupts later.
  - Add an API for managing the intrcnt/intrnames arrays.
  - Some minor style(9) fixes.
r199246 | neel | 2009-11-13 02:24:09 -0700
  Make pmap_copy_page() L2-cache friendly by doing the copy through the
  cacheable window on physical memory (KSEG0). On the Sibyte processor, going
  through the uncacheable window (KSEG1) bypasses both L1 and L2 caches, so
  we may end up with stale contents in the L2 cache. This also makes it
  consistent with the rest of the function, which uses cacheable mappings to
  copy pages.
  Approved by: imp (mentor)
r198842 | gonzo | 2009-11-02 23:42:55 -0700
  - Handle errors when adding children to nexus. This situation might occur
    when there is a duplicate of a child's entry in the hints.
r198669 | rrs | 2009-10-30 02:53:11 -0600
  With this commit our friend RMI will now compile. I have not tested it and
  the chances of it running yet are about ZERO.. but it will now compile. The
  hard part now begins, making it run ;-)
r198569 | neel | 2009-10-28 23:18:02 -0600
  Deal with overflow of the COUNT register correctly. The 'cycles_per_hz' has
  nothing to do with the rollover.
r198550 | imp | 2009-10-28 11:03:20 -0600
  Remove a useless for statement ('i' isn't used after it). Remove needless
  braces.
r198534 | gonzo | 2009-10-27 21:34:05 -0600
  - Fix busdma sync: dcache invalidation operates on cache-line-aligned
    addresses and could modify areas of memory that share the same cache line
    at the beginning and the end of the buffer. To prevent data loss we save
    these chunks in a temporary buffer before invalidation and restore them
    after it.
  Idea suggested by: cognet
r198531 | gonzo | 2009-10-27 18:01:20 -0600
  - Remove a bunch of declared but never defined cache-related variables.
  - Add the mips_picache_linesize and mips_pdcache_linesize variables.
r198530 | gonzo | 2009-10-27 17:45:48 -0600
  - Replace stubs with actual cache info.
  - Minor style(9) fix.
r198355 | neel | 2009-10-21 22:35:32 -0600
  Remove redundant instructions from tlb.S.
  The "_MTC0 v0, COP_0_TLB_HI" is actually incorrect because v0 has not been
  initialized at that point. It worked correctly because we subsequently did
  the right thing and initialized TLB_HI correctly.
  The "li v0, MIPS_KSEG0_START" is redundant because we do exactly the same
  thing two instructions down.
r198354 | neel | 2009-10-21 20:51:31 -0600
  Get rid of the hardcoded constants used to define cacheable memory:
  SDRAM_ADDR_START, SDRAM_ADDR_END and SDRAM_MEM_SIZE. Instead we now keep a
  copy of the memory regions enumerated by platform-specific code and use
  that to determine whether an address is cacheable or not.
r198310 | gonzo | 2009-10-20 17:13:08 -0600
  - Commit the missing part of the "bt" fix: store the PC register in the
    pcb_context struct in cpu_switch and use it in the stack_trace function
    later. pcb_regs contains the state of the process stored by the exception
    handler and is therefore not valid for sleeping processes.
r198264 | neel | 2009-10-19 22:36:08 -0600
  Fix a bug where we would think that the L1 instruction and data caches are
  present even though the line size field in the CP0 Config1 register is 0.
r198208 | imp | 2009-10-18 09:21:48 -0600
  Get the PC from the trap frame, since it isn't saved as part of the pcb
  regs.
r198205 | imp | 2009-10-18 08:55:55 -0600
  Use the correct signature for MipsEmulateBranch. The other one doesn't work
  for 64-bit compiles.
r198182 | gonzo | 2009-10-16 18:22:07 -0600
  - Use PC/RA/SP values as arguments for stacktrace_subr instead of a
    trapframe. Context info can be obtained from other sources (see below),
    not only from the td_pcb field.
  - Do not show a0..a3 values unless they're obtained from the stack; these
    are the only confirmed values.
  - Fix the bt command in DDB. The previous implementation used the thread's
    trapframe structure as the source of info for trace unwinding, but this
    structure is filled in only when an exception occurs. Valid register
    values for sleeping processes are in the pcb_context array. For
    curthread, use pc/sp/ra for the current frame.
r198181 | gonzo | 2009-10-16 16:52:18 -0600
  - Get rid of label_t. It came from NetBSD and was used in only one place.
r198066 | gonzo | 2009-10-13 19:43:53 -0600
  - Move the stack tracing function to db_trace.c.
  - Axe unused extern MipsXXX declarations.
  - Move all declarations for functions in exceptions.S/swtch.S from trap.c
    to the respective headers.
r197796 | gonzo | 2009-10-05 17:19:51 -0600
  - Revert part of r197685 because this change leads to wrong data in the
    cache.
r197685 | gonzo | 2009-10-01 14:05:36 -0600
  - Sync caches properly when dealing with sf_buf.
r197014 | imp | 2009-09-08 21:57:10 -0600
  Ugly hack to get this to compile. I'm sure there's a better way...
r197013 | imp | 2009-09-08 21:54:55 -0600
  First half of making this 64-bit clean: fix prototypes.
r196988 | gonzo | 2009-09-08 13:15:29 -0600
  - MFC from head@196987.
r196313 | imp | 2009-08-17 06:14:40 -0600
  suword64 and csuword64. Needed by ELF64 stuff...
r196266 | imp | 2009-08-15 16:51:11 -0600
  (1) Fix a few 32/64-bit bugs.
  (2) Also, always allocate 2 pages for the stack to optimize TLB usage.
  Submitted by: neel@ (2)
r196265 | imp | 2009-08-15 16:48:09 -0600
  Various 32/64-bit confusion cleanups.
r196264 | imp | 2009-08-15 16:45:46 -0600
  (1) Some CPUs have a range to map I/O cycles on the PCI bus, so allow them
      to work by allowing the nexus to assign ports.
  (2) Remove some Octeon junk that shouldn't be necessary.
  Submitted by: neel@ (#1) for the SB1 port.
r196061 | gonzo | 2009-08-04 11:32:55 -0600
  - Use register_t for register values.
r195984 | gonzo | 2009-07-30 17:48:29 -0600
  - Properly unwind the stack for functions with the __noreturn__ attribute.
  Submitted by: Neelkanth Natu <neelnatu@yahoo.com>
r195983 | gonzo | 2009-07-30 17:29:59 -0600
  - Mark the map as coherent if requested by the flags.
  - Explicitly set the memory allocation method in the map flags instead of
    duplicating the conditions for malloc/contigmalloc.
r195584 | imp | 2009-07-10 13:09:34 -0600
  Use the PTR_* macros for pointers, and not potentially mips64-unsafe
  operations.
r195583 | imp | 2009-07-10 13:08:48 -0600
  Use the PTR_* macros to deal with pointers.
r195579 | imp | 2009-07-10 13:04:32 -0600
  Use ta0-ta3 rather than t4-t7 for n32/n64 goodness.
r195511 | gonzo | 2009-07-09 13:02:17 -0600
  - Oops, this debug code wasn't supposed to get into the final commit. My
    apologies.
r195478 | gonzo | 2009-07-08 16:28:36 -0600
  - Port the busdma code from FreeBSD/arm. This is a more mature version that
    takes into account all limitations on DMA memory (boundaries, alignment)
    and implements bounce pages.
  - Add a BUS_DMASYNC_POSTREAD case to bus_dmamap_sync_buf.
r195438 | imp | 2009-07-08 00:00:18 -0600
  Turns out this code was right; revert the last change.
r195429 | gonzo | 2009-07-07 13:55:09 -0600
  - Move dpcpu initialization to mips_proc0_init; it's a more appropriate
    place for it. Besides, dpcpu_init requires the pmap module to be
    initialized, and calling it in pmap.c hangs the system.
r195399 | imp | 2009-07-06 01:49:24 -0600
  Prefer a uintptr_t to an int cast here.
r195398 | imp | 2009-07-06 01:48:31 -0600
  Better types for 64-bit compatibility: use %p and cast to void *, and
  prefer uintptr_t to other int-type casts.
r195397 | imp | 2009-07-06 01:47:39 -0600
  No need to force mips32 here.
r195396 | imp | 2009-07-06 01:46:13 -0600
  Pass in the uint64 value, rather than a pointer to it; that's what the
  function expects...
r195395 | imp | 2009-07-06 01:45:02 -0600
  Use ta0 instead of t4 and ta1 instead of t5. These map to the same
  registers on O32 builds, but t4 and t5 don't exist on N32 or N64.
r195394 | imp | 2009-07-06 01:43:50 -0600
  Use better casts for passing the small integer as a pointer here.
  Basically, replace int with uintptr_t.
r195393 | imp | 2009-07-06 01:42:54 -0600
  (1) Improvements for SB1: only allow real memory to be accessed.
  (2) Make this compile on n64 by using more proper casts.
  Submitted by: Neelkanth Natu (1)
r195373 | imp | 2009-07-05 09:23:54 -0600
  (1) Use PTR_LA rather than a bare la for N64 goodness (it is dla there).
  (2) SB1 needs the COHERENT policy, not cached, for the config register.
  Submitted by: (2) Neelkanth Natu
r195372 | imp | 2009-07-05 09:22:22 -0600
  Use "PTR_LA" in preference to a bare la so it translates to dla on 64-bit
  ABIs.
r195371 | imp | 2009-07-05 09:21:35 -0600
  Now that we define atomic_{load,store}_64 inline in atomic.h, we don't need
  to define them here for the !N64 case. We now define atomic_readandclear_64
  in atomic.h, so there is no need to repeat it here.
r195364 | imp | 2009-07-05 09:10:07 -0600
  Use %p in preference to 0x%08x for printing register_t values, casting them
  to void * first. This neatly solves the "how do I print a register_t"
  problem because sizeof(void *) is always the same as sizeof(register_t),
  afaik.
r195353 | imp | 2009-07-05 00:46:54 -0600
  Publish PAGE_SHIFT to the assembler.
  # We should likely phase out PGSHIFT.
  Submitted by: Neelkanth Natu
r195350 | imp | 2009-07-05 00:39:37 -0600
  Switch to the ABI-agnostic ta0-ta3. Provide defs for this in the right
  places. Provide n32/n64 register name definitions. This should have no
  effect for the O32 builds that everybody else uses, but should help make
  N64 builds possible (lots of other changes are needed for that).
  Obtained from: NetBSD (for the regdef.h changes)
r195334 | imp | 2009-07-03 21:22:34 -0600
  Move away from the lame invalid address I chose when trying to get Octeon
  going... Turns out that you get tlb shutdowns with this... Use PGSHIFT
  instead of PAGE_SHIFT.
  Submitted by: Neelkanth Natu
r195147 | gonzo | 2009-06-28 15:01:00 -0600
  - Replace the casuword and casuword32 stubs with a proper implementation.
r195128 | gonzo | 2009-06-27 17:27:41 -0600
  - Add support for handling the TLS area address in kernel space. From the
    userland point of view, get/set operations are performed using the
    sysarch(2) call.
r195127 | gonzo | 2009-06-27 17:01:35 -0600
  - Make cpu_set_upcall_kse conform to the MIPS ABI: T9 should be the same as
    PC at a subroutine entry point.
  - Preserve the interrupt mask.
r194938 | gonzo | 2009-06-24 20:15:04 -0600
  - Invalidate the cache in pmap_qenter. Fixes corruption of data that comes
    through a pipe (there may be other bugs).
r194505 | gonzo | 2009-06-19 13:02:40 -0600
  - Keep the interrupt mask intact in RESTORE_CPU in MipsKernGenException.
    The trap() function re-enables interrupts if the exception happened with
    interrupts enabled, and therefore the status register might be modified
    by interrupt filters.
r194277 | gonzo | 2009-06-15 20:36:21 -0600
  - Remove debug printfs.
r194275 | gonzo | 2009-06-15 19:43:33 -0600
  - Handle KSEG0/KSEG1 addresses for /dev/mem as well; netstat requires it.
r193491 | gonzo | 2009-06-05 03:21:03 -0600
  - The Status register should be set last in RESTORE_CPU in order to prevent
    a race over the k0, k1 registers.
  - Update the interrupt mask in the saved Status register for MipsUserIntr
    and MipsUserGenException. It might be modified by an interrupt filter or
    ithread.
r192864 | gonzo | 2009-05-26 16:40:12 -0600
  - Replace the CPU_NOFPU and SOFTFLOAT options with CPU_FPU. By default we
    assume that there is no FPU, because the majority of SoCs do not have
    one.
r192794 | gonzo | 2009-05-26 00:20:50 -0600
  - Preserve the INT_MASK fields in the Status register across context
    switches. They should be modified only by interrupt setup/teardown and
    the pre_ithread/post_ithread functions.
r192793 | gonzo | 2009-05-26 00:02:38 -0600
  - Remove an erroneous "break" instruction; it was meant for debugging.
r192792 | gonzo | 2009-05-26 00:01:17 -0600
  - Remove the now unused NetBSDism intr.h.
r192791 | gonzo | 2009-05-25 23:59:05 -0600
  - Provide proper pre_ithread/post_ithread functions for both hard and soft
    interrupts.
  - Do not handle masked interrupts.
  - Do not write the Cause register: most bits are read-only, writing the
    same value back to the RW fields is pointless, and in the case of a
    software interrupt it is utterly wrong.
r192664 | gonzo | 2009-05-23 13:42:23 -0600
  - cpu_establish_hardintr modifies the INT_MASK of the Status register, so
    we should use disableintr/restoreintr, which modify only the IE bit.
r192655 | gonzo | 2009-05-23 12:00:20 -0600
  - Remove stale comments.
  - Replace a1 with k1 while restoring context. a1 was there by mistake;
    interrupts are disabled at this point and it's safe to use k0, k1. This
    code was never reached because the current Status register handling
    prevented interrupts from user mode.
r192496 | gonzo | 2009-05-20 17:07:10 -0600
  - Invalidate caches for the respective area in KSEG0 in order to prevent
    later overwriting of KSEG1 data by writeback.
r192364 | gonzo | 2009-05-18 20:43:21 -0600
  - Clean up the ticker initialization code. On some MIPS CPUs the Count
    register increments only every second cycle, and the only timing
    reference we have is the Count value, so it's better to convert the
    related frequencies and use those. Besides the cleanup, this commit fixes
    the problem of sleeping twice as long as requested.
r192176 | gonzo | 2009-05-15 20:34:03 -0600
  - Add an informational title for the cache info lines to separate them from
    the environment variables dump.
r192119 | gonzo | 2009-05-14 15:26:07 -0600
  - Fix an off-by-one check: check that the last address in the region fits
    in KSEG1.
r191841 | gonzo | 2009-05-05 20:55:43 -0600
  - Use index ops in order to avoid TLBMiss exceptions when flushing caches
    on mapping removal.
  - Write back all VAs for the page being copied in pmap_copy_page to
    guarantee up-to-date data in SDRAM.
r191613 | gonzo | 2009-04-27 20:59:18 -0600
  - When destroying a va -> pa mapping, write back all caches or we may end
    up with partial page content in SDRAM.
  - style(9) fix.
r191583 | gonzo | 2009-04-27 12:46:57 -0600
  - Use the new spacebus.
  - Be a bit more verbose on failures.
  - style(9) fixes.
  - Use a default rid value of 0 instead of MIPS_MEM_RID (0x20).
r191577 | gonzo | 2009-04-27 12:29:59 -0600
  - Use the same naming convention as the MIPS spec: eliminate the _sel1
    suffix and just use the selector number, e.g. mips_rd_config_sel1 ->
    mips_rd_config1.
  - Add WatchHi/WatchLo accessors for selectors 1..3 (for debug purposes).
r191453 | gonzo | 2009-04-23 23:28:44 -0600
  Fix cut'n'paste code: cfg3 should get the value of selector 3.
  Spotted by: thompa@
r191452 | gonzo | 2009-04-23 22:18:16 -0600
  - Print supported CPU capabilities during startup.
r191448 | gonzo | 2009-04-23 21:38:51 -0600
  - Fix whitespace to conform to style(9).
r191282 | gonzo | 2009-04-19 16:02:14 -0600
  - Make mips_bus_space_generic be of type bus_space_tag_t instead of struct
    bus_space and update all relevant places.
r191084 | gonzo | 2009-04-14 20:28:26 -0600
  Use the FreeBSD/arm approach for handling bus space access: the space tag
  is a pointer to a bus_space structure that defines the access methods, so
  every bus can define its own accessors. The default space is
  mips_bus_space_generic, a simple interface to physical memory; values are
  read in host system byte order.
r191083 | gonzo | 2009-04-14 19:47:52 -0600
  - Clean out a stale #ifdef'ed chunk of code.
  - Fix whitespace.
  - Explicitly undefine the NEXUS_DEBUG flag.
r191079 | gonzo | 2009-04-14 16:53:22 -0600
  - Revert changes accidentally killed by the merge operation.
r187512 | gonzo | 2009-01-20 22:49:30 -0700
  - Check whether the maddr/msize hints are there before setting hinted
    resources on the device.
  - Check for the irq hint too.
r187418 | gonzo | 2009-01-18 19:37:10 -0700
  - Add trampoline stuff for bootloaders that do not support ELF.
  - Replace the arm-ish KERNPHYSADDR/KERNVIRTADDR with
    KERNLOADADDR/TRAMPLOADADDR and clean up the configs.
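For reference, the branch-target rule described in r201880 can be written in a few lines of C. This is only a sketch for 32-bit addresses with an illustrative function name; it is not the actual MipsEmulateBranch() code:

    #include <stdint.h>

    /*
     * J/JAL target: the 26-bit instr_index field of the instruction is
     * shifted left by two and combined with the top 4 bits of the address of
     * the instruction in the branch delay slot (branch PC + 4), not of the
     * J/JAL instruction itself.
     */
    static uint32_t
    jump_target(uint32_t branch_pc, uint32_t insn)
    {
            uint32_t delay_slot_pc = branch_pc + 4;
            uint32_t instr_index = (insn & 0x03ffffffu) << 2;

            return ((delay_slot_pc & 0xf0000000u) | instr_index);
    }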
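Similarly, the Count register overflow handling mentioned in r198569 comes down to unsigned 32-bit arithmetic. A minimal sketch, assuming <stdint.h> types and hypothetical names rather than the actual tick.c code:

    /*
     * CP0 Count is a free-running 32-bit counter, so the number of ticks
     * between two reads is simply the unsigned 32-bit difference; wraparound
     * takes care of itself, and cycles_per_hz plays no part in the rollover.
     */
    static uint32_t
    count_delta(uint32_t prev, uint32_t now)
    {
            return (now - prev);    /* correct even if 'now' wrapped past zero */
    }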
Diffstat (limited to 'sys/mips')
-rw-r--r--  sys/mips/mips/busdma_machdep.c | 945
-rw-r--r--  sys/mips/mips/cache.c | 10
-rw-r--r--  sys/mips/mips/cache_mipsNN.c | 15
-rw-r--r--  sys/mips/mips/copystr.S | 34
-rw-r--r--  sys/mips/mips/cpu.c | 274
-rw-r--r--  sys/mips/mips/db_trace.c | 384
-rw-r--r--  sys/mips/mips/elf_machdep.c | 59
-rw-r--r--  sys/mips/mips/exception.S | 251
-rw-r--r--  sys/mips/mips/fp.S | 734
-rw-r--r--  sys/mips/mips/gdb_machdep.c | 24
-rw-r--r--  sys/mips/mips/genassym.c | 3
-rw-r--r--  sys/mips/mips/in_cksum.c | 2
-rw-r--r--  sys/mips/mips/intr_machdep.c | 121
-rw-r--r--  sys/mips/mips/locore.S | 39
-rw-r--r--  sys/mips/mips/machdep.c | 89
-rw-r--r--  sys/mips/mips/mainbus.c | 3
-rw-r--r--  sys/mips/mips/mem.c | 56
-rw-r--r--  sys/mips/mips/nexus.c | 126
-rw-r--r--  sys/mips/mips/pm_machdep.c | 29
-rw-r--r--  sys/mips/mips/pmap.c | 123
-rw-r--r--  sys/mips/mips/psraccess.S | 2
-rw-r--r--  sys/mips/mips/support.S | 99
-rw-r--r--  sys/mips/mips/swtch.S | 29
-rw-r--r--  sys/mips/mips/tick.c | 32
-rw-r--r--  sys/mips/mips/tlb.S | 50
-rw-r--r--  sys/mips/mips/trap.c | 475
-rw-r--r--  sys/mips/mips/vm_machdep.c | 193
27 files changed, 2687 insertions, 1514 deletions
diff --git a/sys/mips/mips/busdma_machdep.c b/sys/mips/mips/busdma_machdep.c
index 13c45b8..5271148 100644
--- a/sys/mips/mips/busdma_machdep.c
+++ b/sys/mips/mips/busdma_machdep.c
@@ -23,50 +23,16 @@
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
+ * From i386/busdma_machdep.c,v 1.26 2002/04/19 22:58:09 alfred
*/
-/*-
- * Copyright (c) 1997, 1998, 2001 The NetBSD Foundation, Inc.
- * All rights reserved.
- *
- * This code is derived from software contributed to The NetBSD Foundation
- * by Jason R. Thorpe of the Numerical Aerospace Simulation Facility,
- * NASA Ames Research Center.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * 3. All advertising materials mentioning features or use of this software
- * must display the following acknowledgement:
- * This product includes software developed by the NetBSD
- * Foundation, Inc. and its contributors.
- * 4. Neither the name of The NetBSD Foundation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
- * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
- * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- */
-
-/* $NetBSD: bus_dma.c,v 1.17 2006/03/01 12:38:11 yamt Exp $ */
-
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
+/*
+ * MIPS bus dma support routines
+ */
+
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
@@ -79,6 +45,7 @@ __FBSDID("$FreeBSD$");
#include <sys/uio.h>
#include <sys/ktr.h>
#include <sys/kernel.h>
+#include <sys/sysctl.h>
#include <vm/vm.h>
#include <vm/vm_page.h>
@@ -88,6 +55,13 @@ __FBSDID("$FreeBSD$");
#include <machine/bus.h>
#include <machine/cache.h>
#include <machine/cpufunc.h>
+#include <machine/md_var.h>
+
+#define MAX_BPAGES 64
+#define BUS_DMA_COULD_BOUNCE BUS_DMA_BUS3
+#define BUS_DMA_MIN_ALLOC_COMP BUS_DMA_BUS4
+
+struct bounce_zone;
struct bus_dma_tag {
bus_dma_tag_t parent;
@@ -105,19 +79,59 @@ struct bus_dma_tag {
int map_count;
bus_dma_lock_t *lockfunc;
void *lockfuncarg;
- /* XXX: machine-dependent fields */
- vm_offset_t _physbase;
- vm_offset_t _wbase;
- vm_offset_t _wsize;
+ struct bounce_zone *bounce_zone;
};
+struct bounce_page {
+ vm_offset_t vaddr; /* kva of bounce buffer */
+ vm_offset_t vaddr_nocache; /* kva of bounce buffer uncached */
+ bus_addr_t busaddr; /* Physical address */
+ vm_offset_t datavaddr; /* kva of client data */
+ bus_size_t datacount; /* client data count */
+ STAILQ_ENTRY(bounce_page) links;
+};
+
+int busdma_swi_pending;
+
+struct bounce_zone {
+ STAILQ_ENTRY(bounce_zone) links;
+ STAILQ_HEAD(bp_list, bounce_page) bounce_page_list;
+ int total_bpages;
+ int free_bpages;
+ int reserved_bpages;
+ int active_bpages;
+ int total_bounced;
+ int total_deferred;
+ int map_count;
+ bus_size_t alignment;
+ bus_addr_t lowaddr;
+ char zoneid[8];
+ char lowaddrid[20];
+ struct sysctl_ctx_list sysctl_tree;
+ struct sysctl_oid *sysctl_tree_top;
+};
+
+static struct mtx bounce_lock;
+static int total_bpages;
+static int busdma_zonecount;
+static STAILQ_HEAD(, bounce_zone) bounce_zone_list;
+
+SYSCTL_NODE(_hw, OID_AUTO, busdma, CTLFLAG_RD, 0, "Busdma parameters");
+SYSCTL_INT(_hw_busdma, OID_AUTO, total_bpages, CTLFLAG_RD, &total_bpages, 0,
+ "Total bounce pages");
+
#define DMAMAP_LINEAR 0x1
#define DMAMAP_MBUF 0x2
#define DMAMAP_UIO 0x4
-#define DMAMAP_ALLOCATED 0x10
#define DMAMAP_TYPE_MASK (DMAMAP_LINEAR|DMAMAP_MBUF|DMAMAP_UIO)
#define DMAMAP_COHERENT 0x8
+#define DMAMAP_ALLOCATED 0x10
+#define DMAMAP_MALLOCUSED 0x20
+
struct bus_dmamap {
+ struct bp_list bpages;
+ int pagesneeded;
+ int pagesreserved;
bus_dma_tag_t dmat;
int flags;
void *buffer;
@@ -125,8 +139,15 @@ struct bus_dmamap {
void *allocbuffer;
TAILQ_ENTRY(bus_dmamap) freelist;
int len;
+ STAILQ_ENTRY(bus_dmamap) links;
+ bus_dmamap_callback_t *callback;
+ void *callback_arg;
+
};
+static STAILQ_HEAD(, bus_dmamap) bounce_map_waitinglist;
+static STAILQ_HEAD(, bus_dmamap) bounce_map_callbacklist;
+
static TAILQ_HEAD(,bus_dmamap) dmamap_freelist =
TAILQ_HEAD_INITIALIZER(dmamap_freelist);
@@ -137,6 +158,45 @@ static struct mtx busdma_mtx;
MTX_SYSINIT(busdma_mtx, &busdma_mtx, "busdma lock", MTX_DEF);
+static void init_bounce_pages(void *dummy);
+static int alloc_bounce_zone(bus_dma_tag_t dmat);
+static int alloc_bounce_pages(bus_dma_tag_t dmat, u_int numpages);
+static int reserve_bounce_pages(bus_dma_tag_t dmat, bus_dmamap_t map,
+ int commit);
+static bus_addr_t add_bounce_page(bus_dma_tag_t dmat, bus_dmamap_t map,
+ vm_offset_t vaddr, bus_size_t size);
+static void free_bounce_page(bus_dma_tag_t dmat, struct bounce_page *bpage);
+
+/* Default tag, as most drivers provide no parent tag. */
+bus_dma_tag_t mips_root_dma_tag;
+
+/*
+ * Return true if a match is made.
+ *
+ * To find a match walk the chain of bus_dma_tag_t's looking for 'paddr'.
+ *
+ * If paddr is within the bounds of the dma tag then call the filter callback
+ * to check for a match, if there is no filter callback then assume a match.
+ */
+static int
+run_filter(bus_dma_tag_t dmat, bus_addr_t paddr)
+{
+ int retval;
+
+ retval = 0;
+
+ do {
+ if (((paddr > dmat->lowaddr && paddr <= dmat->highaddr)
+ || ((paddr & (dmat->alignment - 1)) != 0))
+ && (dmat->filter == NULL
+ || (*dmat->filter)(dmat->filterarg, paddr) != 0))
+ retval = 1;
+
+ dmat = dmat->parent;
+ } while (retval == 0 && dmat != NULL);
+ return (retval);
+}
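/*
 * Worked example with a hypothetical tag (illustrative values, not from this
 * change): lowaddr = 0x0fffffff, highaddr = BUS_SPACE_MAXADDR, alignment = 4,
 * filter = NULL.
 *
 *   run_filter(tag, 0x00100000) == 0   below lowaddr and 4-byte aligned
 *   run_filter(tag, 0x10000000) == 1   above lowaddr: must bounce
 *   run_filter(tag, 0x00100002) == 1   violates the 4-byte alignment
 *
 * Parent tags, if any, are walked and checked the same way.
 */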
+
static void
mips_dmamap_freelist_init(void *dummy)
{
@@ -157,6 +217,19 @@ bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_dma_segment_t *segs,
bus_dmamap_t map, void *buf, bus_size_t buflen, struct pmap *pmap,
int flags, vm_offset_t *lastaddrp, int *segp);
+static __inline int
+_bus_dma_can_bounce(vm_offset_t lowaddr, vm_offset_t highaddr)
+{
+ int i;
+ for (i = 0; phys_avail[i] && phys_avail[i + 1]; i += 2) {
+ if ((lowaddr >= phys_avail[i] && lowaddr <= phys_avail[i + 1])
+ || (lowaddr < phys_avail[i] &&
+ highaddr > phys_avail[i]))
+ return (1);
+ }
+ return (0);
+}
+
/*
* Convenience function for manipulating driver locks from busdma (during
* busdma_swi, for example). Drivers that don't provide their own locks
@@ -213,6 +286,7 @@ _busdma_alloc_dmamap(void)
map->flags = DMAMAP_ALLOCATED;
} else
map->flags = 0;
+ STAILQ_INIT(&map->bpages);
return (map);
}
@@ -228,6 +302,11 @@ _busdma_free_dmamap(bus_dmamap_t map)
}
}
+/*
+ * Allocate a device specific dma_tag.
+ */
+#define SEG_NB 1024
+
int
bus_dma_tag_create(bus_dma_tag_t parent, bus_size_t alignment,
bus_size_t boundary, bus_addr_t lowaddr,
@@ -238,16 +317,12 @@ bus_dma_tag_create(bus_dma_tag_t parent, bus_size_t alignment,
{
bus_dma_tag_t newtag;
int error = 0;
-
- /* Basic sanity checking */
- if (boundary != 0 && boundary < maxsegsz)
- maxsegsz = boundary;
-
/* Return a NULL tag on failure */
*dmat = NULL;
+ if (!parent)
+ parent = mips_root_dma_tag;
- newtag = (bus_dma_tag_t)malloc(sizeof(*newtag), M_DEVBUF,
- M_ZERO | M_NOWAIT);
+ newtag = (bus_dma_tag_t)malloc(sizeof(*newtag), M_DEVBUF, M_NOWAIT);
if (newtag == NULL) {
CTR4(KTR_BUSDMA, "%s returned tag %p tag flags 0x%x error %d",
__func__, newtag, 0, error);
@@ -257,21 +332,16 @@ bus_dma_tag_create(bus_dma_tag_t parent, bus_size_t alignment,
newtag->parent = parent;
newtag->alignment = alignment;
newtag->boundary = boundary;
- newtag->lowaddr = trunc_page((vm_paddr_t)lowaddr) + (PAGE_SIZE - 1);
- newtag->highaddr = trunc_page((vm_paddr_t)highaddr) +
- (PAGE_SIZE - 1);
+ newtag->lowaddr = trunc_page((vm_offset_t)lowaddr) + (PAGE_SIZE - 1);
+ newtag->highaddr = trunc_page((vm_offset_t)highaddr) + (PAGE_SIZE - 1);
newtag->filter = filter;
newtag->filterarg = filterarg;
- newtag->maxsize = maxsize;
- newtag->nsegments = nsegments;
+ newtag->maxsize = maxsize;
+ newtag->nsegments = nsegments;
newtag->maxsegsz = maxsegsz;
newtag->flags = flags;
newtag->ref_count = 1; /* Count ourself */
newtag->map_count = 0;
- newtag->_wbase = 0;
- newtag->_physbase = 0;
- /* XXXMIPS: Should we limit window size to amount of physical memory */
- newtag->_wsize = MIPS_KSEG1_START - MIPS_KSEG0_START;
if (lockfunc != NULL) {
newtag->lockfunc = lockfunc;
newtag->lockfuncarg = lockfuncarg;
@@ -279,36 +349,68 @@ bus_dma_tag_create(bus_dma_tag_t parent, bus_size_t alignment,
newtag->lockfunc = dflt_lock;
newtag->lockfuncarg = NULL;
}
-
- /* Take into account any restrictions imposed by our parent tag */
- if (parent != NULL) {
- newtag->lowaddr = MIN(parent->lowaddr, newtag->lowaddr);
- newtag->highaddr = MAX(parent->highaddr, newtag->highaddr);
+ /*
+ * Take into account any restrictions imposed by our parent tag
+ */
+ if (parent != NULL) {
+ newtag->lowaddr = min(parent->lowaddr, newtag->lowaddr);
+ newtag->highaddr = max(parent->highaddr, newtag->highaddr);
if (newtag->boundary == 0)
newtag->boundary = parent->boundary;
else if (parent->boundary != 0)
- newtag->boundary = MIN(parent->boundary,
+ newtag->boundary = min(parent->boundary,
newtag->boundary);
- if (newtag->filter == NULL) {
- /*
- * Short circuit looking at our parent directly
- * since we have encapsulated all of its information
- */
- newtag->filter = parent->filter;
- newtag->filterarg = parent->filterarg;
- newtag->parent = parent->parent;
+ if ((newtag->filter != NULL) ||
+ ((parent->flags & BUS_DMA_COULD_BOUNCE) != 0))
+ newtag->flags |= BUS_DMA_COULD_BOUNCE;
+ if (newtag->filter == NULL) {
+ /*
+ * Short circuit looking at our parent directly
+ * since we have encapsulated all of its information
+ */
+ newtag->filter = parent->filter;
+ newtag->filterarg = parent->filterarg;
+ newtag->parent = parent->parent;
}
if (newtag->parent != NULL)
atomic_add_int(&parent->ref_count, 1);
}
+ if (_bus_dma_can_bounce(newtag->lowaddr, newtag->highaddr)
+ || newtag->alignment > 1)
+ newtag->flags |= BUS_DMA_COULD_BOUNCE;
+
+ if (((newtag->flags & BUS_DMA_COULD_BOUNCE) != 0) &&
+ (flags & BUS_DMA_ALLOCNOW) != 0) {
+ struct bounce_zone *bz;
+
+ /* Must bounce */
+
+ if ((error = alloc_bounce_zone(newtag)) != 0) {
+ free(newtag, M_DEVBUF);
+ return (error);
+ }
+ bz = newtag->bounce_zone;
+
+ if (ptoa(bz->total_bpages) < maxsize) {
+ int pages;
+
+ pages = atop(maxsize) - bz->total_bpages;
- if (error != 0) {
+ /* Add pages to our bounce pool */
+ if (alloc_bounce_pages(newtag, pages) < pages)
+ error = ENOMEM;
+ }
+ /* Performed initial allocation */
+ newtag->flags |= BUS_DMA_MIN_ALLOC_COMP;
+ } else
+ newtag->bounce_zone = NULL;
+ if (error != 0)
free(newtag, M_DEVBUF);
- } else {
+ else
*dmat = newtag;
- }
CTR4(KTR_BUSDMA, "%s returned tag %p tag flags 0x%x error %d",
__func__, newtag, (newtag != NULL ? newtag->flags : 0), error);
+
return (error);
}
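/*
 * Minimal usage sketch with hypothetical driver values (not part of this
 * commit): because the requested alignment is greater than 1, the code above
 * marks the tag BUS_DMA_COULD_BOUNCE, and BUS_DMA_ALLOCNOW makes
 * bus_dma_tag_create() pre-populate the bounce zone before the first load.
 */
static int
example_create_tag(bus_dma_tag_t *tagp)
{

	return (bus_dma_tag_create(
	    NULL,			/* parent; defaults to mips_root_dma_tag */
	    4,				/* alignment > 1, so bouncing is possible */
	    0,				/* boundary: no restriction */
	    BUS_SPACE_MAXADDR_32BIT,	/* lowaddr */
	    BUS_SPACE_MAXADDR,		/* highaddr */
	    NULL, NULL,			/* filter, filterarg */
	    65536, 1, 65536,		/* maxsize, nsegments, maxsegsz */
	    BUS_DMA_ALLOCNOW,		/* flags */
	    NULL, NULL,			/* lockfunc, lockfuncarg: dflt_lock */
	    tagp));
}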
@@ -346,6 +448,7 @@ bus_dma_tag_destroy(bus_dma_tag_t dmat)
return (0);
}
+#include <sys/kdb.h>
/*
* Allocate a handle for mapping from kva/uva/physical
* address space into bus device space.
@@ -354,9 +457,7 @@ int
bus_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp)
{
bus_dmamap_t newmap;
-#ifdef KTR
int error = 0;
-#endif
newmap = _busdma_alloc_dmamap();
if (newmap == NULL) {
@@ -365,13 +466,64 @@ bus_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp)
}
*mapp = newmap;
newmap->dmat = dmat;
+ newmap->allocbuffer = NULL;
dmat->map_count++;
+ /*
+ * Bouncing might be required if the driver asks for an active
+ * exclusion region, a data alignment that is stricter than 1, and/or
+ * an active address boundary.
+ */
+ if (dmat->flags & BUS_DMA_COULD_BOUNCE) {
+
+ /* Must bounce */
+ struct bounce_zone *bz;
+ int maxpages;
+
+ if (dmat->bounce_zone == NULL) {
+ if ((error = alloc_bounce_zone(dmat)) != 0) {
+ _busdma_free_dmamap(newmap);
+ *mapp = NULL;
+ return (error);
+ }
+ }
+ bz = dmat->bounce_zone;
+
+ /* Initialize the new map */
+ STAILQ_INIT(&((*mapp)->bpages));
+
+ /*
+ * Attempt to add pages to our pool on a per-instance
+ * basis up to a sane limit.
+ */
+ maxpages = MAX_BPAGES;
+ if ((dmat->flags & BUS_DMA_MIN_ALLOC_COMP) == 0
+ || (bz->map_count > 0 && bz->total_bpages < maxpages)) {
+ int pages;
+
+ pages = MAX(atop(dmat->maxsize), 1);
+ pages = MIN(maxpages - bz->total_bpages, pages);
+ pages = MAX(pages, 1);
+ if (alloc_bounce_pages(dmat, pages) < pages)
+ error = ENOMEM;
+
+ if ((dmat->flags & BUS_DMA_MIN_ALLOC_COMP) == 0) {
+ if (error == 0)
+ dmat->flags |= BUS_DMA_MIN_ALLOC_COMP;
+ } else {
+ error = 0;
+ }
+ }
+ bz->map_count++;
+ }
+
+ if (flags & BUS_DMA_COHERENT)
+ newmap->flags |= DMAMAP_COHERENT;
+
CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d",
__func__, dmat, dmat->flags, error);
return (0);
-
}
/*
@@ -381,7 +533,15 @@ bus_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp)
int
bus_dmamap_destroy(bus_dma_tag_t dmat, bus_dmamap_t map)
{
+
_busdma_free_dmamap(map);
+ if (STAILQ_FIRST(&map->bpages) != NULL) {
+ CTR3(KTR_BUSDMA, "%s: tag %p error %d",
+ __func__, dmat, EBUSY);
+ return (EBUSY);
+ }
+ if (dmat->bounce_zone)
+ dmat->bounce_zone->map_count--;
dmat->map_count--;
CTR2(KTR_BUSDMA, "%s: tag %p error 0", __func__, dmat);
return (0);
@@ -416,9 +576,16 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags,
dmat->map_count++;
*mapp = newmap;
newmap->dmat = dmat;
+
+ if (flags & BUS_DMA_COHERENT)
+ newmap->flags |= DMAMAP_COHERENT;
- if (dmat->maxsize <= PAGE_SIZE) {
+ if (dmat->maxsize <= PAGE_SIZE &&
+ (dmat->alignment < dmat->maxsize) &&
+ !_bus_dma_can_bounce(dmat->lowaddr, dmat->highaddr) &&
+ !(flags & BUS_DMA_COHERENT)) {
*vaddr = malloc(dmat->maxsize, M_DEVBUF, mflags);
+ newmap->flags |= DMAMAP_MALLOCUSED;
} else {
/*
* XXX Use Contigmalloc until it is merged into this facility
@@ -440,7 +607,7 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags,
maxphys = dmat->lowaddr;
}
*vaddr = contigmalloc(dmat->maxsize, M_DEVBUF, mflags,
- 0ul, maxphys, dmat->alignment? dmat->alignment : 1ul,
+ 0ul, dmat->lowaddr, dmat->alignment? dmat->alignment : 1ul,
dmat->boundary);
}
if (*vaddr == NULL) {
@@ -451,6 +618,7 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags,
*mapp = NULL;
return (ENOMEM);
}
+
if (flags & BUS_DMA_COHERENT) {
void *tmpaddr = (void *)*vaddr;
@@ -463,10 +631,10 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags,
*vaddr = tmpaddr;
} else
newmap->origbuffer = newmap->allocbuffer = NULL;
- } else
+ } else
newmap->origbuffer = newmap->allocbuffer = NULL;
- return (0);
+ return (0);
}
/*
@@ -481,15 +649,69 @@ bus_dmamem_free(bus_dma_tag_t dmat, void *vaddr, bus_dmamap_t map)
("Trying to freeing the wrong DMA buffer"));
vaddr = map->origbuffer;
}
- if (dmat->maxsize <= PAGE_SIZE)
+
+ if (map->flags & DMAMAP_MALLOCUSED)
free(vaddr, M_DEVBUF);
- else {
+ else
contigfree(vaddr, dmat->maxsize, M_DEVBUF);
- }
+
dmat->map_count--;
_busdma_free_dmamap(map);
CTR3(KTR_BUSDMA, "%s: tag %p flags 0x%x", __func__, dmat, dmat->flags);
+}
+
+static int
+_bus_dmamap_count_pages(bus_dma_tag_t dmat, bus_dmamap_t map, pmap_t pmap,
+ void *buf, bus_size_t buflen, int flags)
+{
+ vm_offset_t vaddr;
+ vm_offset_t vendaddr;
+ bus_addr_t paddr;
+
+ if ((map->pagesneeded == 0)) {
+ CTR3(KTR_BUSDMA, "lowaddr= %d, boundary= %d, alignment= %d",
+ dmat->lowaddr, dmat->boundary, dmat->alignment);
+ CTR2(KTR_BUSDMA, "map= %p, pagesneeded= %d",
+ map, map->pagesneeded);
+ /*
+ * Count the number of bounce pages
+ * needed in order to complete this transfer
+ */
+ vaddr = trunc_page((vm_offset_t)buf);
+ vendaddr = (vm_offset_t)buf + buflen;
+
+ while (vaddr < vendaddr) {
+ KASSERT(kernel_pmap == pmap, ("pmap is not kernel pmap"));
+ paddr = pmap_kextract(vaddr);
+ if (((dmat->flags & BUS_DMA_COULD_BOUNCE) != 0) &&
+ run_filter(dmat, paddr) != 0)
+ map->pagesneeded++;
+ vaddr += PAGE_SIZE;
+ }
+ CTR1(KTR_BUSDMA, "pagesneeded= %d\n", map->pagesneeded);
+ }
+
+ /* Reserve Necessary Bounce Pages */
+ if (map->pagesneeded != 0) {
+ mtx_lock(&bounce_lock);
+ if (flags & BUS_DMA_NOWAIT) {
+ if (reserve_bounce_pages(dmat, map, 0) != 0) {
+ mtx_unlock(&bounce_lock);
+ return (ENOMEM);
+ }
+ } else {
+ if (reserve_bounce_pages(dmat, map, 1) != 0) {
+ /* Queue us for resources */
+ STAILQ_INSERT_TAIL(&bounce_map_waitinglist,
+ map, links);
+ mtx_unlock(&bounce_lock);
+ return (EINPROGRESS);
+ }
+ }
+ mtx_unlock(&bounce_lock);
+ }
+ return (0);
}
/*
@@ -504,8 +726,7 @@ bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_dma_segment_t *segs,
int flags, vm_offset_t *lastaddrp, int *segp)
{
bus_size_t sgsize;
- bus_size_t bmask;
- vm_offset_t curaddr, lastaddr;
+ bus_addr_t curaddr, lastaddr, baddr, bmask;
vm_offset_t vaddr = (vm_offset_t)buf;
int seg;
int error = 0;
@@ -513,36 +734,48 @@ bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_dma_segment_t *segs,
lastaddr = *lastaddrp;
bmask = ~(dmat->boundary - 1);
+ if ((dmat->flags & BUS_DMA_COULD_BOUNCE) != 0) {
+ error = _bus_dmamap_count_pages(dmat, map, pmap, buf, buflen,
+ flags);
+ if (error)
+ return (error);
+ }
+ CTR3(KTR_BUSDMA, "lowaddr= %d boundary= %d, "
+ "alignment= %d", dmat->lowaddr, dmat->boundary, dmat->alignment);
+
for (seg = *segp; buflen > 0 ; ) {
/*
* Get the physical address for this segment.
+ *
+ * XXX Don't support checking for coherent mappings
+ * XXX in user address space.
*/
KASSERT(kernel_pmap == pmap, ("pmap is not kernel pmap"));
curaddr = pmap_kextract(vaddr);
/*
- * If we're beyond the current DMA window, indicate
- * that and try to fall back onto something else.
- */
- if (curaddr < dmat->_physbase ||
- curaddr >= (dmat->_physbase + dmat->_wsize))
- return (EINVAL);
-
- /*
- * In a valid DMA range. Translate the physical
- * memory address to an address in the DMA window.
- */
- curaddr = (curaddr - dmat->_physbase) + dmat->_wbase;
-
-
- /*
* Compute the segment size, and adjust counts.
*/
sgsize = PAGE_SIZE - ((u_long)curaddr & PAGE_MASK);
+ if (sgsize > dmat->maxsegsz)
+ sgsize = dmat->maxsegsz;
if (buflen < sgsize)
sgsize = buflen;
/*
+ * Make sure we don't cross any boundaries.
+ */
+ if (dmat->boundary > 0) {
+ baddr = (curaddr + dmat->boundary) & bmask;
+ if (sgsize > (baddr - curaddr))
+ sgsize = (baddr - curaddr);
+ }
+ if (((dmat->flags & BUS_DMA_COULD_BOUNCE) != 0) &&
+ map->pagesneeded != 0 && run_filter(dmat, curaddr)) {
+ curaddr = add_bounce_page(dmat, map, vaddr, sgsize);
+ }
+
+ /*
* Insert chunk into a segment, coalescing with
* the previous segment if possible.
*/
@@ -574,9 +807,8 @@ segdone:
* Did we fit?
*/
if (buflen != 0)
- error = EFBIG;
-
- return error;
+ error = EFBIG; /* XXX better return value here? */
+ return (error);
}
/*
@@ -597,14 +829,17 @@ bus_dmamap_load(bus_dma_tag_t dmat, bus_dmamap_t map, void *buf,
KASSERT(dmat != NULL, ("dmatag is NULL"));
KASSERT(map != NULL, ("dmamap is NULL"));
+ map->callback = callback;
+ map->callback_arg = callback_arg;
map->flags &= ~DMAMAP_TYPE_MASK;
- map->flags |= DMAMAP_LINEAR|DMAMAP_COHERENT;
+ map->flags |= DMAMAP_LINEAR;
map->buffer = buf;
map->len = buflen;
error = bus_dmamap_load_buffer(dmat,
dm_segments, map, buf, buflen, kernel_pmap,
flags, &lastaddr, &nsegs);
-
+ if (error == EINPROGRESS)
+ return (error);
if (error)
(*callback)(callback_arg, NULL, 0, error);
else
@@ -613,8 +848,7 @@ bus_dmamap_load(bus_dma_tag_t dmat, bus_dmamap_t map, void *buf,
CTR5(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d nsegs %d",
__func__, dmat, dmat->flags, nsegs + 1, error);
- return (0);
-
+ return (error);
}
/*
@@ -635,10 +869,9 @@ bus_dmamap_load_mbuf(bus_dma_tag_t dmat, bus_dmamap_t map, struct mbuf *m0,
M_ASSERTPKTHDR(m0);
map->flags &= ~DMAMAP_TYPE_MASK;
- map->flags |= DMAMAP_MBUF | DMAMAP_COHERENT;
+ map->flags |= DMAMAP_MBUF;
map->buffer = m0;
map->len = 0;
-
if (m0->m_pkthdr.len <= dmat->maxsize) {
vm_offset_t lastaddr = 0;
struct mbuf *m;
@@ -676,16 +909,14 @@ bus_dmamap_load_mbuf_sg(bus_dma_tag_t dmat, bus_dmamap_t map,
int flags)
{
int error = 0;
-
M_ASSERTPKTHDR(m0);
flags |= BUS_DMA_NOWAIT;
*nsegs = -1;
map->flags &= ~DMAMAP_TYPE_MASK;
- map->flags |= DMAMAP_MBUF | DMAMAP_COHERENT;
- map->buffer = m0;
+ map->flags |= DMAMAP_MBUF;
+ map->buffer = m0;
map->len = 0;
-
if (m0->m_pkthdr.len <= dmat->maxsize) {
vm_offset_t lastaddr = 0;
struct mbuf *m;
@@ -693,8 +924,9 @@ bus_dmamap_load_mbuf_sg(bus_dma_tag_t dmat, bus_dmamap_t map,
for (m = m0; m != NULL && error == 0; m = m->m_next) {
if (m->m_len > 0) {
error = bus_dmamap_load_buffer(dmat, segs, map,
- m->m_data, m->m_len,
- kernel_pmap, flags, &lastaddr, nsegs);
+ m->m_data, m->m_len,
+ kernel_pmap, flags, &lastaddr,
+ nsegs);
map->len += m->m_len;
}
}
@@ -702,12 +934,11 @@ bus_dmamap_load_mbuf_sg(bus_dma_tag_t dmat, bus_dmamap_t map,
error = EINVAL;
}
+ /* XXX FIXME: Having to increment nsegs is really annoying */
++*nsegs;
CTR5(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d nsegs %d",
__func__, dmat, dmat->flags, error, *nsegs);
-
return (error);
-
}
/*
@@ -718,9 +949,65 @@ bus_dmamap_load_uio(bus_dma_tag_t dmat, bus_dmamap_t map, struct uio *uio,
bus_dmamap_callback2_t *callback, void *callback_arg,
int flags)
{
+ vm_offset_t lastaddr = 0;
+#ifdef __CC_SUPPORTS_DYNAMIC_ARRAY_INIT
+ bus_dma_segment_t dm_segments[dmat->nsegments];
+#else
+ bus_dma_segment_t dm_segments[BUS_DMAMAP_NSEGS];
+#endif
+ int nsegs, i, error;
+ bus_size_t resid;
+ struct iovec *iov;
+ struct pmap *pmap;
- panic("Unimplemented %s at %s:%d\n", __func__, __FILE__, __LINE__);
- return (0);
+ resid = uio->uio_resid;
+ iov = uio->uio_iov;
+ map->flags &= ~DMAMAP_TYPE_MASK;
+ map->flags |= DMAMAP_UIO;
+ map->buffer = uio;
+ map->len = 0;
+
+ if (uio->uio_segflg == UIO_USERSPACE) {
+ KASSERT(uio->uio_td != NULL,
+ ("bus_dmamap_load_uio: USERSPACE but no proc"));
+ /* XXX: pmap = vmspace_pmap(uio->uio_td->td_proc->p_vmspace); */
+ panic("can't do it yet");
+ } else
+ pmap = kernel_pmap;
+
+ error = 0;
+ nsegs = -1;
+ for (i = 0; i < uio->uio_iovcnt && resid != 0 && !error; i++) {
+ /*
+ * Now at the first iovec to load. Load each iovec
+ * until we have exhausted the residual count.
+ */
+ bus_size_t minlen =
+ resid < iov[i].iov_len ? resid : iov[i].iov_len;
+ caddr_t addr = (caddr_t) iov[i].iov_base;
+
+ if (minlen > 0) {
+ error = bus_dmamap_load_buffer(dmat, dm_segments, map,
+ addr, minlen, pmap, flags, &lastaddr, &nsegs);
+
+ map->len += minlen;
+ resid -= minlen;
+ }
+ }
+
+ if (error) {
+ /*
+ * force "no valid mappings" on error in callback.
+ */
+ (*callback)(callback_arg, dm_segments, 0, 0, error);
+ } else {
+ (*callback)(callback_arg, dm_segments, nsegs+1,
+ uio->uio_resid, error);
+ }
+
+ CTR5(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d nsegs %d",
+ __func__, dmat, dmat->flags, error, nsegs + 1);
+ return (error);
}
/*
@@ -729,25 +1016,78 @@ bus_dmamap_load_uio(bus_dma_tag_t dmat, bus_dmamap_t map, struct uio *uio,
void
_bus_dmamap_unload(bus_dma_tag_t dmat, bus_dmamap_t map)
{
+ struct bounce_page *bpage;
+ map->flags &= ~DMAMAP_TYPE_MASK;
+ while ((bpage = STAILQ_FIRST(&map->bpages)) != NULL) {
+ STAILQ_REMOVE_HEAD(&map->bpages, links);
+ free_bounce_page(dmat, bpage);
+ }
return;
}
-static __inline void
+static void
bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
{
+ char tmp_cl[mips_pdcache_linesize], tmp_clend[mips_pdcache_linesize];
+ vm_offset_t buf_cl, buf_clend;
+ vm_size_t size_cl, size_clend;
+ int cache_linesize_mask = mips_pdcache_linesize - 1;
+
+ /*
+ * dcache invalidation operates on cache line aligned addresses
+ * and could modify areas of memory that share the same cache line
+ * at the beginning and the ending of the buffer. In order to
+ * prevent a data loss we save these chunks in temporary buffer
+ * before invalidation and restore them afer it
+ */
+ buf_cl = (vm_offset_t)buf & ~cache_linesize_mask;
+ size_cl = (vm_offset_t)buf & cache_linesize_mask;
+ buf_clend = (vm_offset_t)buf + len;
+ size_clend = (mips_pdcache_linesize -
+ (buf_clend & cache_linesize_mask)) & cache_linesize_mask;
switch (op) {
+ case BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE:
+ case BUS_DMASYNC_POSTREAD:
+
+ /*
+ * Save buffers that might be modified by invalidation
+ */
+ if (size_cl)
+ memcpy (tmp_cl, (void*)buf_cl, size_cl);
+ if (size_clend)
+ memcpy (tmp_clend, (void*)buf_clend, size_clend);
+ mips_dcache_inv_range((vm_offset_t)buf, len);
+ /*
+ * Restore them
+ */
+ if (size_cl)
+ memcpy ((void*)buf_cl, tmp_cl, size_cl);
+ if (size_clend)
+ memcpy ((void*)buf_clend, tmp_clend, size_clend);
+ break;
+
case BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE:
mips_dcache_wbinv_range((vm_offset_t)buf, len);
break;
case BUS_DMASYNC_PREREAD:
-#if 1
- mips_dcache_wbinv_range((vm_offset_t)buf, len);
-#else
+ /*
+ * Save buffers that might be modified by invalidation
+ */
+ if (size_cl)
+ memcpy (tmp_cl, (void *)buf_cl, size_cl);
+ if (size_clend)
+ memcpy (tmp_clend, (void *)buf_clend, size_clend);
mips_dcache_inv_range((vm_offset_t)buf, len);
-#endif
+ /*
+ * Restore them
+ */
+ if (size_cl)
+ memcpy ((void *)buf_cl, tmp_cl, size_cl);
+ if (size_clend)
+ memcpy ((void *)buf_clend, tmp_clend, size_clend);
break;
case BUS_DMASYNC_PREWRITE:
@@ -756,6 +1096,51 @@ bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
}
}
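/*
 * Worked example of the cache-line trimming in bus_dmamap_sync_buf() above
 * (hypothetical numbers, assuming mips_pdcache_linesize == 32):
 *
 *   buf        = 0x80100f34, len = 0x100
 *   buf_cl     = 0x80100f34 & ~31 = 0x80100f20
 *   size_cl    = 0x80100f34 &  31 = 20    bytes before buf on its first line
 *   buf_clend  = 0x80100f34 + 0x100 = 0x80101034
 *   size_clend = (32 - (0x80101034 & 31)) & 31 = 12
 *                                          bytes after buf+len on its last line
 *
 * Those 20 + 12 bytes are copied into tmp_cl/tmp_clend before
 * mips_dcache_inv_range() and copied back afterwards, so data that merely
 * shares a cache line with the buffer survives the invalidation.
 */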
+static void
+_bus_dmamap_sync_bp(bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op)
+{
+ struct bounce_page *bpage;
+
+ STAILQ_FOREACH(bpage, &map->bpages, links) {
+ if (op & BUS_DMASYNC_PREWRITE) {
+ bcopy((void *)bpage->datavaddr,
+ (void *)(bpage->vaddr_nocache != 0 ?
+ bpage->vaddr_nocache : bpage->vaddr),
+ bpage->datacount);
+ if (bpage->vaddr_nocache == 0) {
+ mips_dcache_wb_range(bpage->vaddr,
+ bpage->datacount);
+ }
+ dmat->bounce_zone->total_bounced++;
+ }
+ if (op & BUS_DMASYNC_POSTREAD) {
+ if (bpage->vaddr_nocache == 0) {
+ mips_dcache_inv_range(bpage->vaddr,
+ bpage->datacount);
+ }
+ bcopy((void *)(bpage->vaddr_nocache != 0 ?
+ bpage->vaddr_nocache : bpage->vaddr),
+ (void *)bpage->datavaddr, bpage->datacount);
+ dmat->bounce_zone->total_bounced++;
+ }
+ }
+}
+
+static __inline int
+_bus_dma_buf_is_in_bp(bus_dmamap_t map, void *buf, int len)
+{
+ struct bounce_page *bpage;
+
+ STAILQ_FOREACH(bpage, &map->bpages, links) {
+ if ((vm_offset_t)buf >= bpage->datavaddr &&
+ (vm_offset_t)buf + len <= bpage->datavaddr +
+ bpage->datacount)
+ return (1);
+ }
+ return (0);
+
+}
+
void
_bus_dmamap_sync(bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op)
{
@@ -764,51 +1149,23 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op)
int resid;
struct iovec *iov;
-
- /*
- * Mixing PRE and POST operations is not allowed.
- */
- if ((op & (BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE)) != 0 &&
- (op & (BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE)) != 0)
- panic("_bus_dmamap_sync: mix PRE and POST");
-
- /*
- * Since we're dealing with a virtually-indexed, write-back
- * cache, we need to do the following things:
- *
- * PREREAD -- Invalidate D-cache. Note we might have
- * to also write-back here if we have to use an Index
- * op, or if the buffer start/end is not cache-line aligned.
- *
- * PREWRITE -- Write-back the D-cache. If we have to use
- * an Index op, we also have to invalidate. Note that if
- * we are doing PREREAD|PREWRITE, we can collapse everything
- * into a single op.
- *
- * POSTREAD -- Nothing.
- *
- * POSTWRITE -- Nothing.
- */
-
- /*
- * Flush the write buffer.
- * XXX Is this always necessary?
- */
- mips_wbflush();
-
- op &= (BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
- if (op == 0)
+ if (op == BUS_DMASYNC_POSTWRITE)
+ return;
+ if (STAILQ_FIRST(&map->bpages))
+ _bus_dmamap_sync_bp(dmat, map, op);
+ if (map->flags & DMAMAP_COHERENT)
return;
-
CTR3(KTR_BUSDMA, "%s: op %x flags %x", __func__, op, map->flags);
switch(map->flags & DMAMAP_TYPE_MASK) {
case DMAMAP_LINEAR:
- bus_dmamap_sync_buf(map->buffer, map->len, op);
+ if (!(_bus_dma_buf_is_in_bp(map, map->buffer, map->len)))
+ bus_dmamap_sync_buf(map->buffer, map->len, op);
break;
case DMAMAP_MBUF:
m = map->buffer;
while (m) {
- if (m->m_len > 0)
+ if (m->m_len > 0 &&
+ !(_bus_dma_buf_is_in_bp(map, m->m_data, m->m_len)))
bus_dmamap_sync_buf(m->m_data, m->m_len, op);
m = m->m_next;
}
@@ -821,7 +1178,10 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op)
bus_size_t minlen = resid < iov[i].iov_len ? resid :
iov[i].iov_len;
if (minlen > 0) {
- bus_dmamap_sync_buf(iov[i].iov_base, minlen, op);
+ if (!_bus_dma_buf_is_in_bp(map, iov[i].iov_base,
+ minlen))
+ bus_dmamap_sync_buf(iov[i].iov_base,
+ minlen, op);
resid -= minlen;
}
}
@@ -830,3 +1190,256 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op)
break;
}
}
+
+static void
+init_bounce_pages(void *dummy __unused)
+{
+
+ total_bpages = 0;
+ STAILQ_INIT(&bounce_zone_list);
+ STAILQ_INIT(&bounce_map_waitinglist);
+ STAILQ_INIT(&bounce_map_callbacklist);
+ mtx_init(&bounce_lock, "bounce pages lock", NULL, MTX_DEF);
+}
+SYSINIT(bpages, SI_SUB_LOCK, SI_ORDER_ANY, init_bounce_pages, NULL);
+
+static struct sysctl_ctx_list *
+busdma_sysctl_tree(struct bounce_zone *bz)
+{
+ return (&bz->sysctl_tree);
+}
+
+static struct sysctl_oid *
+busdma_sysctl_tree_top(struct bounce_zone *bz)
+{
+ return (bz->sysctl_tree_top);
+}
+
+static int
+alloc_bounce_zone(bus_dma_tag_t dmat)
+{
+ struct bounce_zone *bz;
+
+ /* Check to see if we already have a suitable zone */
+ STAILQ_FOREACH(bz, &bounce_zone_list, links) {
+ if ((dmat->alignment <= bz->alignment)
+ && (dmat->lowaddr >= bz->lowaddr)) {
+ dmat->bounce_zone = bz;
+ return (0);
+ }
+ }
+
+ if ((bz = (struct bounce_zone *)malloc(sizeof(*bz), M_DEVBUF,
+ M_NOWAIT | M_ZERO)) == NULL)
+ return (ENOMEM);
+
+ STAILQ_INIT(&bz->bounce_page_list);
+ bz->free_bpages = 0;
+ bz->reserved_bpages = 0;
+ bz->active_bpages = 0;
+ bz->lowaddr = dmat->lowaddr;
+ bz->alignment = MAX(dmat->alignment, PAGE_SIZE);
+ bz->map_count = 0;
+ snprintf(bz->zoneid, 8, "zone%d", busdma_zonecount);
+ busdma_zonecount++;
+ snprintf(bz->lowaddrid, 18, "%#jx", (uintmax_t)bz->lowaddr);
+ STAILQ_INSERT_TAIL(&bounce_zone_list, bz, links);
+ dmat->bounce_zone = bz;
+
+ sysctl_ctx_init(&bz->sysctl_tree);
+ bz->sysctl_tree_top = SYSCTL_ADD_NODE(&bz->sysctl_tree,
+ SYSCTL_STATIC_CHILDREN(_hw_busdma), OID_AUTO, bz->zoneid,
+ CTLFLAG_RD, 0, "");
+ if (bz->sysctl_tree_top == NULL) {
+ sysctl_ctx_free(&bz->sysctl_tree);
+ return (0); /* XXX error code? */
+ }
+
+ SYSCTL_ADD_INT(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "total_bpages", CTLFLAG_RD, &bz->total_bpages, 0,
+ "Total bounce pages");
+ SYSCTL_ADD_INT(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "free_bpages", CTLFLAG_RD, &bz->free_bpages, 0,
+ "Free bounce pages");
+ SYSCTL_ADD_INT(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "reserved_bpages", CTLFLAG_RD, &bz->reserved_bpages, 0,
+ "Reserved bounce pages");
+ SYSCTL_ADD_INT(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "active_bpages", CTLFLAG_RD, &bz->active_bpages, 0,
+ "Active bounce pages");
+ SYSCTL_ADD_INT(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "total_bounced", CTLFLAG_RD, &bz->total_bounced, 0,
+ "Total bounce requests");
+ SYSCTL_ADD_INT(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "total_deferred", CTLFLAG_RD, &bz->total_deferred, 0,
+ "Total bounce requests that were deferred");
+ SYSCTL_ADD_STRING(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "lowaddr", CTLFLAG_RD, bz->lowaddrid, 0, "");
+ SYSCTL_ADD_INT(busdma_sysctl_tree(bz),
+ SYSCTL_CHILDREN(busdma_sysctl_tree_top(bz)), OID_AUTO,
+ "alignment", CTLFLAG_RD, &bz->alignment, 0, "");
+
+ return (0);
+}
+
+static int
+alloc_bounce_pages(bus_dma_tag_t dmat, u_int numpages)
+{
+ struct bounce_zone *bz;
+ int count;
+
+ bz = dmat->bounce_zone;
+ count = 0;
+ while (numpages > 0) {
+ struct bounce_page *bpage;
+
+ bpage = (struct bounce_page *)malloc(sizeof(*bpage), M_DEVBUF,
+ M_NOWAIT | M_ZERO);
+
+ if (bpage == NULL)
+ break;
+ bpage->vaddr = (vm_offset_t)contigmalloc(PAGE_SIZE, M_DEVBUF,
+ M_NOWAIT, 0ul,
+ bz->lowaddr,
+ PAGE_SIZE,
+ 0);
+ if (bpage->vaddr == 0) {
+ free(bpage, M_DEVBUF);
+ break;
+ }
+ bpage->busaddr = pmap_kextract(bpage->vaddr);
+ bpage->vaddr_nocache =
+ (vm_offset_t)MIPS_PHYS_TO_KSEG1(bpage->busaddr);
+ mtx_lock(&bounce_lock);
+ STAILQ_INSERT_TAIL(&bz->bounce_page_list, bpage, links);
+ total_bpages++;
+ bz->total_bpages++;
+ bz->free_bpages++;
+ mtx_unlock(&bounce_lock);
+ count++;
+ numpages--;
+ }
+ return (count);
+}
+
+static int
+reserve_bounce_pages(bus_dma_tag_t dmat, bus_dmamap_t map, int commit)
+{
+ struct bounce_zone *bz;
+ int pages;
+
+ mtx_assert(&bounce_lock, MA_OWNED);
+ bz = dmat->bounce_zone;
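+ /*
+ * Reserve as many of the still-needed pages as the zone has free.
+ * Without 'commit', return the shortfall instead of a partial reserve.
+ */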
+ pages = MIN(bz->free_bpages, map->pagesneeded - map->pagesreserved);
+ if (commit == 0 && map->pagesneeded > (map->pagesreserved + pages))
+ return (map->pagesneeded - (map->pagesreserved + pages));
+ bz->free_bpages -= pages;
+ bz->reserved_bpages += pages;
+ map->pagesreserved += pages;
+ pages = map->pagesneeded - map->pagesreserved;
+
+ return (pages);
+}
+
+static bus_addr_t
+add_bounce_page(bus_dma_tag_t dmat, bus_dmamap_t map, vm_offset_t vaddr,
+ bus_size_t size)
+{
+ struct bounce_zone *bz;
+ struct bounce_page *bpage;
+
+ KASSERT(dmat->bounce_zone != NULL, ("no bounce zone in dma tag"));
+ KASSERT(map != NULL, ("add_bounce_page: bad map %p", map));
+
+ bz = dmat->bounce_zone;
+ if (map->pagesneeded == 0)
+ panic("add_bounce_page: map doesn't need any pages");
+ map->pagesneeded--;
+
+ if (map->pagesreserved == 0)
+ panic("add_bounce_page: map doesn't need any pages");
+ map->pagesreserved--;
+
+ mtx_lock(&bounce_lock);
+ bpage = STAILQ_FIRST(&bz->bounce_page_list);
+ if (bpage == NULL)
+ panic("add_bounce_page: free page list is empty");
+
+ STAILQ_REMOVE_HEAD(&bz->bounce_page_list, links);
+ bz->reserved_bpages--;
+ bz->active_bpages++;
+ mtx_unlock(&bounce_lock);
+
+ if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
+ /* Page offset needs to be preserved. */
+ bpage->vaddr |= vaddr & PAGE_MASK;
+ bpage->busaddr |= vaddr & PAGE_MASK;
+ }
+ bpage->datavaddr = vaddr;
+ bpage->datacount = size;
+ STAILQ_INSERT_TAIL(&(map->bpages), bpage, links);
+ return (bpage->busaddr);
+}
+
+static void
+free_bounce_page(bus_dma_tag_t dmat, struct bounce_page *bpage)
+{
+ struct bus_dmamap *map;
+ struct bounce_zone *bz;
+
+ bz = dmat->bounce_zone;
+ bpage->datavaddr = 0;
+ bpage->datacount = 0;
+ if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
+ /*
+ * Reset the bounce page to start at offset 0. Other uses
+ * of this bounce page may need to store a full page of
+ * data and/or assume it starts on a page boundary.
+ */
+ bpage->vaddr &= ~PAGE_MASK;
+ bpage->busaddr &= ~PAGE_MASK;
+ }
+
+ mtx_lock(&bounce_lock);
+ STAILQ_INSERT_HEAD(&bz->bounce_page_list, bpage, links);
+ bz->free_bpages++;
+ bz->active_bpages--;
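+ /*
+ * If a waiting map can now reserve all of its pages, hand it to
+ * busdma_swi() for a deferred load.
+ */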
+ if ((map = STAILQ_FIRST(&bounce_map_waitinglist)) != NULL) {
+ if (reserve_bounce_pages(map->dmat, map, 1) == 0) {
+ STAILQ_REMOVE_HEAD(&bounce_map_waitinglist, links);
+ STAILQ_INSERT_TAIL(&bounce_map_callbacklist,
+ map, links);
+ busdma_swi_pending = 1;
+ bz->total_deferred++;
+ swi_sched(vm_ih, 0);
+ }
+ }
+ mtx_unlock(&bounce_lock);
+}
+
+void
+busdma_swi(void)
+{
+ bus_dma_tag_t dmat;
+ struct bus_dmamap *map;
+
+ mtx_lock(&bounce_lock);
+ while ((map = STAILQ_FIRST(&bounce_map_callbacklist)) != NULL) {
+ STAILQ_REMOVE_HEAD(&bounce_map_callbacklist, links);
+ mtx_unlock(&bounce_lock);
+ dmat = map->dmat;
+ (dmat->lockfunc)(dmat->lockfuncarg, BUS_DMA_LOCK);
+ bus_dmamap_load(map->dmat, map, map->buffer, map->len,
+ map->callback, map->callback_arg, /*flags*/0);
+ (dmat->lockfunc)(dmat->lockfuncarg, BUS_DMA_UNLOCK);
+ mtx_lock(&bounce_lock);
+ }
+ mtx_unlock(&bounce_lock);
+}
diff --git a/sys/mips/mips/cache.c b/sys/mips/mips/cache.c
index 64e03c9..087e22c 100644
--- a/sys/mips/mips/cache.c
+++ b/sys/mips/mips/cache.c
@@ -73,6 +73,8 @@ __FBSDID("$FreeBSD$");
#include <sys/types.h>
#include <sys/systm.h>
+#include "opt_cputype.h"
+
#include <machine/cpuinfo.h>
#include <machine/cache.h>
@@ -81,6 +83,7 @@ struct mips_cache_ops mips_cache_ops;
void
mips_config_cache(struct mips_cpuinfo * cpuinfo)
{
+
switch (cpuinfo->l1.ic_linesize) {
case 16:
mips_cache_ops.mco_icache_sync_all = mipsNN_icache_sync_all_16;
@@ -223,7 +226,9 @@ mips_config_cache(struct mips_cpuinfo * cpuinfo)
#endif
/* Check that all cache ops are set up. */
- if (mips_picache_size || 1) { /* XXX- must have primary Icache */
+ /* must have primary Icache */
+ if (cpuinfo->l1.ic_size) {
+
if (!mips_cache_ops.mco_icache_sync_all)
panic("no icache_sync_all cache op");
if (!mips_cache_ops.mco_icache_sync_range)
@@ -231,7 +236,8 @@ mips_config_cache(struct mips_cpuinfo * cpuinfo)
if (!mips_cache_ops.mco_icache_sync_range_index)
panic("no icache_sync_range_index cache op");
}
- if (mips_pdcache_size || 1) { /* XXX- must have primary Icache */
+ /* must have primary Dcache */
+ if (cpuinfo->l1.dc_size) {
if (!mips_cache_ops.mco_pdcache_wbinv_all)
panic("no pdcache_wbinv_all");
if (!mips_cache_ops.mco_pdcache_wbinv_range)
diff --git a/sys/mips/mips/cache_mipsNN.c b/sys/mips/mips/cache_mipsNN.c
index 4037885..e2d3ffb 100644
--- a/sys/mips/mips/cache_mipsNN.c
+++ b/sys/mips/mips/cache_mipsNN.c
@@ -38,6 +38,8 @@
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
+#include "opt_cputype.h"
+
#include <sys/types.h>
#include <sys/systm.h>
#include <sys/param.h>
@@ -65,8 +67,11 @@ __FBSDID("$FreeBSD$");
#define SYNCI
#endif
-
-__asm(".set mips32");
+/*
+ * Exported variables for consumers like bus_dma code
+ */
+int mips_picache_linesize;
+int mips_pdcache_linesize;
static int picache_size;
static int picache_stride;
@@ -109,12 +114,18 @@ mipsNN_cache_init(struct mips_cpuinfo * cpuinfo)
pdcache_loopcount = (cpuinfo->l1.dc_nsets * cpuinfo->l1.dc_linesize / PAGE_SIZE) *
cpuinfo->l1.dc_nways;
}
+
+ mips_picache_linesize = cpuinfo->l1.ic_linesize;
+ mips_pdcache_linesize = cpuinfo->l1.dc_linesize;
+
picache_size = cpuinfo->l1.ic_size;
picache_way_mask = cpuinfo->l1.ic_nways - 1;
pdcache_size = cpuinfo->l1.dc_size;
pdcache_way_mask = cpuinfo->l1.dc_nways - 1;
+
#define CACHE_DEBUG
#ifdef CACHE_DEBUG
+ printf("Cache info:\n");
if (cpuinfo->icache_virtual)
printf(" icache is virtual\n");
printf(" picache_stride = %d\n", picache_stride);
diff --git a/sys/mips/mips/copystr.S b/sys/mips/mips/copystr.S
index 4d5d921..35e7905 100644
--- a/sys/mips/mips/copystr.S
+++ b/sys/mips/mips/copystr.S
@@ -67,13 +67,13 @@ ENTRY(copystr)
move v0, zero
beqz a2, 2f
move t1, zero
-1: subu a2, 1
+1: subu a2, 1 /*XXX mips64 unsafe -- long */
lbu t0, 0(a0)
- addu a0, 1
+ PTR_ADDU a0, 1
sb t0, 0(a1)
- addu a1, 1
+ PTR_ADDU a1, 1
beqz t0, 3f /* NULL - end of string*/
- addu t1, 1
+ addu t1, 1 /*XXX mips64 unsafe -- long */
bnez a2, 1b
nop
2: /* ENAMETOOLONG */
@@ -81,7 +81,7 @@ ENTRY(copystr)
3: /* done != NULL -> how many bytes were copied */
beqz a3, 4f
nop
- sw t1, 0(a3)
+ sw t1, 0(a3) /*XXX mips64 unsafe -- long */
4: jr ra
nop
.set reorder
@@ -100,25 +100,25 @@ LEAF(copyinstr)
.set noat
lw t2, pcpup
lw v1, PC_CURPCB(t2)
- la v0, _C_LABEL(copystrerr)
+ PTR_LA v0, _C_LABEL(copystrerr)
blt a0, zero, _C_LABEL(copystrerr)
sw v0, PCB_ONFAULT(v1)
move t0, a2
beq a2, zero, 4f
1:
lbu v0, 0(a0)
- subu a2, a2, 1
+ subu a2, a2, 1 /*xxx mips64 unsafe -- long */
beq v0, zero, 2f
sb v0, 0(a1)
- addu a0, a0, 1
+ PTR_ADDU a0, a0, 1
bne a2, zero, 1b
- addu a1, a1, 1
+ PTR_ADDU a1, a1, 1
4:
li v0, ENAMETOOLONG
2:
beq a3, zero, 3f
- subu a2, t0, a2
- sw a2, 0(a3)
+ subu a2, t0, a2 /*xxx mips64 unsafe -- long */
+ sw a2, 0(a3) /*xxx mips64 unsafe -- long */
3:
j ra # v0 is 0 or ENAMETOOLONG
sw zero, PCB_ONFAULT(v1)
@@ -138,25 +138,25 @@ LEAF(copyoutstr)
.set noat
lw t2, pcpup
lw v1, PC_CURPCB(t2)
- la v0, _C_LABEL(copystrerr)
+ PTR_LA v0, _C_LABEL(copystrerr)
blt a1, zero, _C_LABEL(copystrerr)
sw v0, PCB_ONFAULT(v1)
move t0, a2
beq a2, zero, 4f
1:
lbu v0, 0(a0)
- subu a2, a2, 1
+ subu a2, a2, 1 /*xxx mips64 unsafe -- long */
beq v0, zero, 2f
sb v0, 0(a1)
- addu a0, a0, 1
+ PTR_ADDU a0, a0, 1
bne a2, zero, 1b
- addu a1, a1, 1
+ PTR_ADDU a1, a1, 1
4:
li v0, ENAMETOOLONG
2:
beq a3, zero, 3f
- subu a2, t0, a2
- sw a2, 0(a3)
+ subu a2, t0, a2 /*xxx mips64 unsafe -- long */
+ sw a2, 0(a3) /*xxx mips64 unsafe -- long */
3:
j ra # v0 is 0 or ENAMETOOLONG
sw zero, PCB_ONFAULT(v1)
diff --git a/sys/mips/mips/cpu.c b/sys/mips/mips/cpu.c
index f9596e2..1b490e6 100644
--- a/sys/mips/mips/cpu.c
+++ b/sys/mips/mips/cpu.c
@@ -27,6 +27,8 @@
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
+#include "opt_cputype.h"
+
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/module.h>
@@ -47,6 +49,7 @@ __FBSDID("$FreeBSD$");
#include <machine/intr_machdep.h>
#include <machine/locore.h>
#include <machine/pte.h>
+#include <machine/hwfunc.h>
static struct mips_cpuinfo cpuinfo;
@@ -64,66 +67,72 @@ union cpuprid fpu_id;
static void
mips_get_identity(struct mips_cpuinfo *cpuinfo)
{
- u_int32_t prid;
- u_int32_t cfg0;
- u_int32_t cfg1;
- u_int32_t tmp;
-
- memset(cpuinfo, 0, sizeof(struct mips_cpuinfo));
-
- /* Read and store the PrID ID for CPU identification. */
- prid = mips_rd_prid();
- cpuinfo->cpu_vendor = MIPS_PRID_CID(prid);
- cpuinfo->cpu_rev = MIPS_PRID_REV(prid);
- cpuinfo->cpu_impl = MIPS_PRID_IMPL(prid);
-
- /* Read config register selection 0 to learn TLB type. */
- cfg0 = mips_rd_config();
-
- cpuinfo->tlb_type = ((cfg0 & MIPS_CONFIG0_MT_MASK) >> MIPS_CONFIG0_MT_SHIFT);
- cpuinfo->icache_virtual = cfg0 & MIPS_CONFIG0_VI;
-
- /* If config register selection 1 does not exist, exit. */
- if (!(cfg0 & MIPS3_CONFIG_CM))
- return;
-
- /* Learn TLB size and L1 cache geometry. */
- cfg1 = mips_rd_config_sel1();
- cpuinfo->tlb_nentries = ((cfg1 & MIPS_CONFIG1_TLBSZ_MASK) >> MIPS_CONFIG1_TLBSZ_SHIFT) + 1;
-
- /* L1 instruction cache. */
- tmp = 1 << (((cfg1 & MIPS_CONFIG1_IL_MASK) >> MIPS_CONFIG1_IL_SHIFT) + 1);
- if (tmp != 0) {
- cpuinfo->l1.ic_linesize = tmp;
- cpuinfo->l1.ic_nways = (((cfg1 & MIPS_CONFIG1_IA_MASK) >> MIPS_CONFIG1_IA_SHIFT)) + 1;
- cpuinfo->l1.ic_nsets = 1 << (((cfg1 & MIPS_CONFIG1_IS_MASK) >> MIPS_CONFIG1_IS_SHIFT) + 6);
- cpuinfo->l1.ic_size = cpuinfo->l1.ic_linesize * cpuinfo->l1.ic_nsets
- * cpuinfo->l1.ic_nways;
- }
-
- /* L1 data cache. */
- tmp = 1 << (((cfg1 & MIPS_CONFIG1_DL_MASK) >> MIPS_CONFIG1_DL_SHIFT) + 1);
- if (tmp != 0) {
- cpuinfo->l1.dc_linesize = tmp;
- cpuinfo->l1.dc_nways = (((cfg1 & MIPS_CONFIG1_DA_MASK) >> MIPS_CONFIG1_DA_SHIFT)) + 1;
- cpuinfo->l1.dc_nsets = 1 << (((cfg1 & MIPS_CONFIG1_DS_MASK) >> MIPS_CONFIG1_DS_SHIFT) + 6);
+ u_int32_t prid;
+ u_int32_t cfg0;
+ u_int32_t cfg1;
+ u_int32_t tmp;
+
+ memset(cpuinfo, 0, sizeof(struct mips_cpuinfo));
+
+ /* Read and store the PrID ID for CPU identification. */
+ prid = mips_rd_prid();
+ cpuinfo->cpu_vendor = MIPS_PRID_CID(prid);
+ cpuinfo->cpu_rev = MIPS_PRID_REV(prid);
+ cpuinfo->cpu_impl = MIPS_PRID_IMPL(prid);
+
+ /* Read config register selection 0 to learn TLB type. */
+ cfg0 = mips_rd_config();
+
+ cpuinfo->tlb_type =
+ ((cfg0 & MIPS_CONFIG0_MT_MASK) >> MIPS_CONFIG0_MT_SHIFT);
+ cpuinfo->icache_virtual = cfg0 & MIPS_CONFIG0_VI;
+
+ /* If config register selection 1 does not exist, exit. */
+ if (!(cfg0 & MIPS3_CONFIG_CM))
+ return;
+
+ /* Learn TLB size and L1 cache geometry. */
+ cfg1 = mips_rd_config1();
+ cpuinfo->tlb_nentries =
+ ((cfg1 & MIPS_CONFIG1_TLBSZ_MASK) >> MIPS_CONFIG1_TLBSZ_SHIFT) + 1;
+
+ /* L1 instruction cache. */
+ tmp = (cfg1 & MIPS_CONFIG1_IL_MASK) >> MIPS_CONFIG1_IL_SHIFT;
+ if (tmp != 0) {
+ cpuinfo->l1.ic_linesize = 1 << (tmp + 1);
+ cpuinfo->l1.ic_nways = (((cfg1 & MIPS_CONFIG1_IA_MASK) >> MIPS_CONFIG1_IA_SHIFT)) + 1;
+ cpuinfo->l1.ic_nsets =
+ 1 << (((cfg1 & MIPS_CONFIG1_IS_MASK) >> MIPS_CONFIG1_IS_SHIFT) + 6);
+ cpuinfo->l1.ic_size =
+ cpuinfo->l1.ic_linesize * cpuinfo->l1.ic_nsets * cpuinfo->l1.ic_nways;
+ }
+
+ /* L1 data cache. */
+ tmp = (cfg1 & MIPS_CONFIG1_DL_MASK) >> MIPS_CONFIG1_DL_SHIFT;
+ if (tmp != 0) {
+ cpuinfo->l1.dc_linesize = 1 << (tmp + 1);
+ cpuinfo->l1.dc_nways =
+ (((cfg1 & MIPS_CONFIG1_DA_MASK) >> MIPS_CONFIG1_DA_SHIFT)) + 1;
+ cpuinfo->l1.dc_nsets =
+ 1 << (((cfg1 & MIPS_CONFIG1_DS_MASK) >> MIPS_CONFIG1_DS_SHIFT) + 6);
+ }
#ifdef TARGET_OCTEON
- /*
- * Octeon does 128 byte line-size. But Config-Sel1 doesn't show
- * 128 line-size, 1 Set, 64 ways.
- */
+ /*
+ * Octeon does 128 byte line-size. But Config-Sel1 doesn't show
+ * 128 line-size, 1 Set, 64 ways.
+ */
cpuinfo->l1.dc_linesize = 128;
cpuinfo->l1.dc_nsets = 1;
cpuinfo->l1.dc_nways = 64;
#endif
- cpuinfo->l1.dc_size = cpuinfo->l1.dc_linesize * cpuinfo->l1.dc_nsets
- * cpuinfo->l1.dc_nways;
- }
+ cpuinfo->l1.dc_size = cpuinfo->l1.dc_linesize
+ * cpuinfo->l1.dc_nsets * cpuinfo->l1.dc_nways;
}
void
mips_cpu_init(void)
{
+ platform_cpu_init();
mips_get_identity(&cpuinfo);
num_tlbentries = cpuinfo.tlb_nentries;
Mips_SetWIRED(0);
@@ -141,77 +150,110 @@ mips_cpu_init(void)
void
cpu_identify(void)
{
- printf("cpu%d: ", 0); /* XXX per-cpu */
- switch (cpuinfo.cpu_vendor) {
- case MIPS_PRID_CID_MTI:
- printf("MIPS Technologies");
- break;
- case MIPS_PRID_CID_BROADCOM:
- case MIPS_PRID_CID_SIBYTE:
- printf("Broadcom");
- break;
- case MIPS_PRID_CID_ALCHEMY:
- printf("AMD");
- break;
- case MIPS_PRID_CID_SANDCRAFT:
- printf("Sandcraft");
- break;
- case MIPS_PRID_CID_PHILIPS:
- printf("Philips");
- break;
- case MIPS_PRID_CID_TOSHIBA:
- printf("Toshiba");
- break;
- case MIPS_PRID_CID_LSI:
- printf("LSI");
- break;
- case MIPS_PRID_CID_LEXRA:
- printf("Lexra");
- break;
- case MIPS_PRID_CID_PREHISTORIC:
- default:
- printf("Unknown");
- break;
- }
- printf(" processor v%d.%d\n", cpuinfo.cpu_rev, cpuinfo.cpu_impl);
-
- printf(" MMU: ");
- if (cpuinfo.tlb_type == MIPS_MMU_NONE) {
- printf("none present\n");
- } else {
- if (cpuinfo.tlb_type == MIPS_MMU_TLB) {
- printf("Standard TLB");
- } else if (cpuinfo.tlb_type == MIPS_MMU_BAT) {
- printf("Standard BAT");
- } else if (cpuinfo.tlb_type == MIPS_MMU_FIXED) {
- printf("Fixed mapping");
+ uint32_t cfg0, cfg1, cfg2, cfg3;
+ printf("cpu%d: ", 0); /* XXX per-cpu */
+ switch (cpuinfo.cpu_vendor) {
+ case MIPS_PRID_CID_MTI:
+ printf("MIPS Technologies");
+ break;
+ case MIPS_PRID_CID_BROADCOM:
+ case MIPS_PRID_CID_SIBYTE:
+ printf("Broadcom");
+ break;
+ case MIPS_PRID_CID_ALCHEMY:
+ printf("AMD");
+ break;
+ case MIPS_PRID_CID_SANDCRAFT:
+ printf("Sandcraft");
+ break;
+ case MIPS_PRID_CID_PHILIPS:
+ printf("Philips");
+ break;
+ case MIPS_PRID_CID_TOSHIBA:
+ printf("Toshiba");
+ break;
+ case MIPS_PRID_CID_LSI:
+ printf("LSI");
+ break;
+ case MIPS_PRID_CID_LEXRA:
+ printf("Lexra");
+ break;
+ case MIPS_PRID_CID_CAVIUM:
+ printf("Cavium");
+ break;
+ case MIPS_PRID_CID_PREHISTORIC:
+ default:
+ printf("Unknown cid %#x", cpuinfo.cpu_vendor);
+ break;
+ }
+ printf(" processor v%d.%d\n", cpuinfo.cpu_rev, cpuinfo.cpu_impl);
+
+ printf(" MMU: ");
+ if (cpuinfo.tlb_type == MIPS_MMU_NONE) {
+ printf("none present\n");
+ } else {
+ if (cpuinfo.tlb_type == MIPS_MMU_TLB) {
+ printf("Standard TLB");
+ } else if (cpuinfo.tlb_type == MIPS_MMU_BAT) {
+ printf("Standard BAT");
+ } else if (cpuinfo.tlb_type == MIPS_MMU_FIXED) {
+ printf("Fixed mapping");
+ }
+ printf(", %d entries\n", cpuinfo.tlb_nentries);
}
- printf(", %d entries\n", cpuinfo.tlb_nentries);
- }
-
- printf(" L1 i-cache: ");
- if (cpuinfo.l1.ic_linesize == 0) {
- printf("disabled");
- } else {
- if (cpuinfo.l1.ic_nways == 1) {
- printf("direct-mapped with");
+
+ printf(" L1 i-cache: ");
+ if (cpuinfo.l1.ic_linesize == 0) {
+ printf("disabled");
} else {
- printf ("%d ways of", cpuinfo.l1.ic_nways);
+ if (cpuinfo.l1.ic_nways == 1) {
+ printf("direct-mapped with");
+ } else {
+ printf ("%d ways of", cpuinfo.l1.ic_nways);
+ }
+ printf(" %d sets, %d bytes per line\n",
+ cpuinfo.l1.ic_nsets, cpuinfo.l1.ic_linesize);
}
- printf(" %d sets, %d bytes per line\n", cpuinfo.l1.ic_nsets, cpuinfo.l1.ic_linesize);
- }
-
- printf(" L1 d-cache: ");
- if (cpuinfo.l1.dc_linesize == 0) {
- printf("disabled");
- } else {
- if (cpuinfo.l1.dc_nways == 1) {
- printf("direct-mapped with");
+
+ printf(" L1 d-cache: ");
+ if (cpuinfo.l1.dc_linesize == 0) {
+ printf("disabled");
} else {
- printf ("%d ways of", cpuinfo.l1.dc_nways);
+ if (cpuinfo.l1.dc_nways == 1) {
+ printf("direct-mapped with");
+ } else {
+ printf ("%d ways of", cpuinfo.l1.dc_nways);
+ }
+ printf(" %d sets, %d bytes per line\n",
+ cpuinfo.l1.dc_nsets, cpuinfo.l1.dc_linesize);
}
- printf(" %d sets, %d bytes per line\n", cpuinfo.l1.dc_nsets, cpuinfo.l1.dc_linesize);
- }
+
+ cfg0 = mips_rd_config();
+ /* If config register selection 1 does not exist, exit. */
+ if (!(cfg0 & MIPS3_CONFIG_CM))
+ return;
+
+ cfg1 = mips_rd_config1();
+ printf(" Config1=0x%b\n", cfg1,
+ "\20\7COP2\6MDMX\5PerfCount\4WatchRegs\3MIPS16\2EJTAG\1FPU");
+
+ /* If config register selection 2 does not exist, exit. */
+ if (!(cfg1 & MIPS3_CONFIG_CM))
+ return;
+ cfg2 = mips_rd_config2();
+ /*
+ * Config2 contains no useful information other than the Config3
+ * existence flag.
+ */
+
+ /* If config register selection 3 does not exist, exit. */
+ if (!(cfg2 & MIPS3_CONFIG_CM))
+ return;
+ cfg3 = mips_rd_config3();
+
+ /* Print Config3 if it contains any useful info */
+ if (cfg3 & ~(0x80000000))
+ printf(" Config3=0x%b\n", cfg3, "\20\2SmartMIPS\1TraceLogic");
}
static struct rman cpu_hardirq_rman;
diff --git a/sys/mips/mips/db_trace.c b/sys/mips/mips/db_trace.c
index fe2aa6e..d18d6aa 100644
--- a/sys/mips/mips/db_trace.c
+++ b/sys/mips/mips/db_trace.c
@@ -17,9 +17,356 @@ __FBSDID("$FreeBSD$");
#include <machine/db_machdep.h>
#include <machine/md_var.h>
+#include <machine/mips_opcode.h>
#include <machine/pcb.h>
+#include <machine/trap.h>
#include <ddb/ddb.h>
+#include <ddb/db_sym.h>
+
+extern char _locore[];
+extern char _locoreEnd[];
+extern char edata[];
+
+/*
+ * A function using a stack frame has the following instruction as the first
+ * one: addiu sp,sp,-<frame_size>
+ *
+ * We make use of this to detect starting address of a function. This works
+ * better than using 'j ra' instruction to signify end of the previous
+ * function (for e.g. functions like boot() or panic() do not actually
+ * emit a 'j ra' instruction).
+ *
+ * XXX the abi does not require that the addiu instruction be the first one.
+ */
+#define MIPS_START_OF_FUNCTION(ins) (((ins) & 0xffff8000) == 0x27bd8000)
+
+/*
+ * MIPS ABI 3.0 requires that all functions return using the 'j ra' instruction
+ *
+ * XXX gcc doesn't do this for functions with __noreturn__ attribute.
+ */
+#define MIPS_END_OF_FUNCTION(ins) ((ins) == 0x03e00008)
+
+/*
+ * kdbpeekD(addr) - skip one word starting at 'addr', then read the second word
+ */
+#define kdbpeekD(addr) kdbpeek(((int *)(addr)) + 1)
+
+/*
+ * Functions ``special'' enough to print by name
+ */
+#ifdef __STDC__
+#define Name(_fn) { (void*)_fn, # _fn }
+#else
+#define Name(_fn) { _fn, "_fn"}
+#endif
+static struct {
+ void *addr;
+ char *name;
+} names[] = {
+
+ Name(trap),
+ Name(MipsKernGenException),
+ Name(MipsUserGenException),
+ Name(MipsKernIntr),
+ Name(MipsUserIntr),
+ Name(cpu_switch),
+ {
+ 0, 0
+ }
+};
+
+/*
+ * Map a function address to a string name, if known; or a hex string.
+ */
+static char *
+fn_name(uintptr_t addr)
+{
+ static char buf[17];
+ int i = 0;
+
+ db_expr_t diff;
+ c_db_sym_t sym;
+ char *symname;
+
+ diff = 0;
+ symname = NULL;
+ sym = db_search_symbol((db_addr_t)addr, DB_STGY_ANY, &diff);
+ db_symbol_values(sym, (const char **)&symname, (db_expr_t *)0);
+ if (symname && diff == 0)
+ return (symname);
+
+ for (i = 0; names[i].name; i++)
+ if (names[i].addr == (void *)addr)
+ return (names[i].name);
+ sprintf(buf, "%jx", (uintmax_t)addr);
+ return (buf);
+}
+
+void
+stacktrace_subr(register_t pc, register_t sp, register_t ra,
+ int (*printfn) (const char *,...))
+{
+ InstFmt i;
+ /*
+ * Arrays for the a0..a3 registers, plus flags indicating whether the
+ * content of each register is valid, e.g. obtained from the stack
+ */
+ int valid_args[4];
+ uintptr_t args[4];
+ uintptr_t va, subr;
+ unsigned instr, mask;
+ unsigned int frames = 0;
+ int more, stksize, j;
+
+/* Jump here when done with a frame, to start a new one */
+loop:
+
+ /*
+ * Invalidate argument values
+ */
+ valid_args[0] = 0;
+ valid_args[1] = 0;
+ valid_args[2] = 0;
+ valid_args[3] = 0;
+/* Jump here after a nonstandard (interrupt handler) frame */
+ stksize = 0;
+ subr = 0;
+ if (frames++ > 100) {
+ (*printfn) ("\nstackframe count exceeded\n");
+ /* return breaks stackframe-size heuristics with gcc -O2 */
+ goto finish; /* XXX */
+ }
+ /* check for bad SP: could foul up next frame */
+ /*XXX MIPS64 bad: this hard-coded SP is lame */
+ if (sp & 3 || sp < 0x80000000) {
+ (*printfn) ("SP 0x%x: not in kernel\n", sp);
+ ra = 0;
+ subr = 0;
+ goto done;
+ }
+#define Between(x, y, z) \
+ ( ((x) <= (y)) && ((y) < (z)) )
+#define pcBetween(a,b) \
+ Between((uintptr_t)a, pc, (uintptr_t)b)
+
+ /*
+ * Check for current PC in exception handler code that doesn't have a
+ * preceding "j ra" at the tail of the preceding function. Depends
+ * on relative ordering of functions in exception.S, swtch.S.
+ */
+ if (pcBetween(MipsKernGenException, MipsUserGenException))
+ subr = (uintptr_t)MipsKernGenException;
+ else if (pcBetween(MipsUserGenException, MipsKernIntr))
+ subr = (uintptr_t)MipsUserGenException;
+ else if (pcBetween(MipsKernIntr, MipsUserIntr))
+ subr = (uintptr_t)MipsKernIntr;
+ else if (pcBetween(MipsUserIntr, MipsTLBInvalidException))
+ subr = (uintptr_t)MipsUserIntr;
+ else if (pcBetween(MipsTLBInvalidException,
+ MipsKernTLBInvalidException))
+ subr = (uintptr_t)MipsTLBInvalidException;
+ else if (pcBetween(MipsKernTLBInvalidException,
+ MipsUserTLBInvalidException))
+ subr = (uintptr_t)MipsKernTLBInvalidException;
+ else if (pcBetween(MipsUserTLBInvalidException, MipsTLBMissException))
+ subr = (uintptr_t)MipsUserTLBInvalidException;
+ else if (pcBetween(cpu_switch, MipsSwitchFPState))
+ subr = (uintptr_t)cpu_switch;
+ else if (pcBetween(_locore, _locoreEnd)) {
+ subr = (uintptr_t)_locore;
+ ra = 0;
+ goto done;
+ }
+ /* check for bad PC */
+ /*XXX MIPS64 bad: These hard coded constants are lame */
+ if (pc & 3 || pc < (uintptr_t)0x80000000 || pc >= (uintptr_t)edata) {
+ (*printfn) ("PC 0x%x: not in kernel\n", pc);
+ ra = 0;
+ goto done;
+ }
+ /*
+ * Find the beginning of the current subroutine by scanning
+ * backwards from the current PC for the end of the previous
+ * subroutine.
+ */
+ if (!subr) {
+ va = pc - sizeof(int);
+ while (1) {
+ instr = kdbpeek((int *)va);
+
+ if (MIPS_START_OF_FUNCTION(instr))
+ break;
+
+ if (MIPS_END_OF_FUNCTION(instr)) {
+ /* skip over branch-delay slot instruction */
+ va += 2 * sizeof(int);
+ break;
+ }
+
+ va -= sizeof(int);
+ }
+
+ /* skip over nulls which might separate .o files */
+ while ((instr = kdbpeek((int *)va)) == 0)
+ va += sizeof(int);
+ subr = va;
+ }
+ /* scan forwards to find stack size and any saved registers */
+ stksize = 0;
+ more = 3;
+ mask = 0;
+ for (va = subr; more; va += sizeof(int),
+ more = (more == 3) ? 3 : more - 1) {
+ /* stop if hit our current position */
+ if (va >= pc)
+ break;
+ instr = kdbpeek((int *)va);
+ i.word = instr;
+ switch (i.JType.op) {
+ case OP_SPECIAL:
+ switch (i.RType.func) {
+ case OP_JR:
+ case OP_JALR:
+ more = 2; /* stop after next instruction */
+ break;
+
+ case OP_SYSCALL:
+ case OP_BREAK:
+ more = 1; /* stop now */
+ };
+ break;
+
+ case OP_BCOND:
+ case OP_J:
+ case OP_JAL:
+ case OP_BEQ:
+ case OP_BNE:
+ case OP_BLEZ:
+ case OP_BGTZ:
+ more = 2; /* stop after next instruction */
+ break;
+
+ case OP_COP0:
+ case OP_COP1:
+ case OP_COP2:
+ case OP_COP3:
+ switch (i.RType.rs) {
+ case OP_BCx:
+ case OP_BCy:
+ more = 2; /* stop after next instruction */
+ };
+ break;
+
+ case OP_SW:
+ /* look for saved registers on the stack */
+ if (i.IType.rs != 29)
+ break;
+ /* only restore the first one */
+ if (mask & (1 << i.IType.rt))
+ break;
+ mask |= (1 << i.IType.rt);
+ switch (i.IType.rt) {
+ case 4:/* a0 */
+ args[0] = kdbpeek((int *)(sp + (short)i.IType.imm));
+ valid_args[0] = 1;
+ break;
+
+ case 5:/* a1 */
+ args[1] = kdbpeek((int *)(sp + (short)i.IType.imm));
+ valid_args[1] = 1;
+ break;
+
+ case 6:/* a2 */
+ args[2] = kdbpeek((int *)(sp + (short)i.IType.imm));
+ valid_args[2] = 1;
+ break;
+
+ case 7:/* a3 */
+ args[3] = kdbpeek((int *)(sp + (short)i.IType.imm));
+ valid_args[3] = 1;
+ break;
+
+ case 31: /* ra */
+ ra = kdbpeek((int *)(sp + (short)i.IType.imm));
+ }
+ break;
+
+ case OP_SD:
+ /* look for saved registers on the stack */
+ if (i.IType.rs != 29)
+ break;
+ /* only restore the first one */
+ if (mask & (1 << i.IType.rt))
+ break;
+ mask |= (1 << i.IType.rt);
+ switch (i.IType.rt) {
+ case 4:/* a0 */
+ args[0] = kdbpeekD((int *)(sp + (short)i.IType.imm));
+ valid_args[0] = 1;
+ break;
+
+ case 5:/* a1 */
+ args[1] = kdbpeekD((int *)(sp + (short)i.IType.imm));
+ valid_args[1] = 1;
+ break;
+
+ case 6:/* a2 */
+ args[2] = kdbpeekD((int *)(sp + (short)i.IType.imm));
+ valid_args[2] = 1;
+ break;
+
+ case 7:/* a3 */
+ args[3] = kdbpeekD((int *)(sp + (short)i.IType.imm));
+ valid_args[3] = 1;
+ break;
+
+ case 31: /* ra */
+ ra = kdbpeekD((int *)(sp + (short)i.IType.imm));
+ }
+ break;
+
+ case OP_ADDI:
+ case OP_ADDIU:
+ /* look for stack pointer adjustment */
+ if (i.IType.rs != 29 || i.IType.rt != 29)
+ break;
+ stksize = -((short)i.IType.imm);
+ }
+ }
+
+done:
+ (*printfn) ("%s+%x (", fn_name(subr), pc - subr);
+ for (j = 0; j < 4; j ++) {
+ if (j > 0)
+ (*printfn)(",");
+ if (valid_args[j])
+ (*printfn)("%x", args[j]);
+ else
+ (*printfn)("?");
+ }
+
+ (*printfn) (") ra %x sz %d\n", ra, stksize);
+
+ if (ra) {
+ if (pc == ra && stksize == 0)
+ (*printfn) ("stacktrace: loop!\n");
+ else {
+ pc = ra;
+ sp += stksize;
+ ra = 0;
+ goto loop;
+ }
+ } else {
+finish:
+ if (curproc)
+ (*printfn) ("pid %d\n", curproc->p_pid);
+ else
+ (*printfn) ("curproc NULL\n");
+ }
+}
+
int
db_md_set_watchpoint(db_expr_t addr, db_expr_t size)
@@ -42,14 +389,6 @@ db_md_list_watchpoints()
{
}
-static int
-db_backtrace(struct thread *td, db_addr_t frame, int count)
-{
- stacktrace_subr((struct trapframe *)frame,
- (int (*) (const char *, ...))db_printf);
- return (0);
-}
-
void
db_trace_self(void)
{
@@ -60,10 +399,35 @@ db_trace_self(void)
int
db_trace_thread(struct thread *thr, int count)
{
+ register_t pc, ra, sp;
struct pcb *ctx;
- ctx = kdb_thr_ctx(thr);
- return (db_backtrace(thr, (db_addr_t) &ctx->pcb_regs, count));
+ if (thr == curthread) {
+ sp = (register_t)__builtin_frame_address(0);
+ ra = (register_t)__builtin_return_address(0);
+
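+ /*
+ * Capture the current PC: the jal below leaves the address of
+ * label 99 in $31 (ra), which is copied out before ra is restored.
+ */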
+ __asm __volatile(
+ "jal 99f\n"
+ "nop\n"
+ "99:\n"
+ "move %0, $31\n" /* get ra */
+ "move $31, %1\n" /* restore ra */
+ : "=r" (pc)
+ : "r" (ra));
+
+ } else {
+ ctx = thr->td_pcb;
+ sp = (register_t)ctx->pcb_context[PREG_SP];
+ pc = (register_t)ctx->pcb_context[PREG_PC];
+ ra = (register_t)ctx->pcb_context[PREG_RA];
+ }
+
+ stacktrace_subr(pc, sp, ra,
+ (int (*) (const char *, ...))db_printf);
+
+ return (0);
}
void
diff --git a/sys/mips/mips/elf_machdep.c b/sys/mips/mips/elf_machdep.c
index f9b363f..54c31e5 100644
--- a/sys/mips/mips/elf_machdep.c
+++ b/sys/mips/mips/elf_machdep.c
@@ -47,6 +47,59 @@ __FBSDID("$FreeBSD$");
#include <machine/elf.h>
#include <machine/md_var.h>
+#ifdef __mips_n64
+struct sysentvec elf64_freebsd_sysvec = {
+ .sv_size = SYS_MAXSYSCALL,
+ .sv_table = sysent,
+ .sv_mask = 0,
+ .sv_sigsize = 0,
+ .sv_sigtbl = NULL,
+ .sv_errsize = 0,
+ .sv_errtbl = NULL,
+ .sv_transtrap = NULL,
+ .sv_fixup = __elfN(freebsd_fixup),
+ .sv_sendsig = sendsig,
+ .sv_sigcode = sigcode,
+ .sv_szsigcode = &szsigcode,
+ .sv_prepsyscall = NULL,
+ .sv_name = "FreeBSD ELF64",
+ .sv_coredump = __elfN(coredump),
+ .sv_imgact_try = NULL,
+ .sv_minsigstksz = MINSIGSTKSZ,
+ .sv_pagesize = PAGE_SIZE,
+ .sv_minuser = VM_MIN_ADDRESS,
+ .sv_maxuser = VM_MAXUSER_ADDRESS,
+ .sv_usrstack = USRSTACK,
+ .sv_psstrings = PS_STRINGS,
+ .sv_stackprot = VM_PROT_ALL,
+ .sv_copyout_strings = exec_copyout_strings,
+ .sv_setregs = exec_setregs,
+ .sv_fixlimit = NULL,
+ .sv_maxssiz = NULL,
+ .sv_flags = SV_ABI_FREEBSD | SV_LP64
+};
+
+static Elf64_Brandinfo freebsd_brand_info = {
+ .brand = ELFOSABI_FREEBSD,
+ .machine = EM_MIPS,
+ .compat_3_brand = "FreeBSD",
+ .emul_path = NULL,
+ .interp_path = "/libexec/ld-elf.so.1",
+ .sysvec = &elf64_freebsd_sysvec,
+ .interp_newpath = NULL,
+ .flags = 0
+};
+
+SYSINIT(elf64, SI_SUB_EXEC, SI_ORDER_ANY,
+ (sysinit_cfunc_t) elf64_insert_brand_entry,
+ &freebsd_brand_info);
+
+void
+elf64_dump_thread(struct thread *td __unused, void *dst __unused,
+ size_t *off __unused)
+{
+}
+#else
struct sysentvec elf32_freebsd_sysvec = {
.sv_size = SYS_MAXSYSCALL,
.sv_table = sysent,
@@ -86,8 +139,7 @@ static Elf32_Brandinfo freebsd_brand_info = {
.interp_path = "/libexec/ld-elf.so.1",
.sysvec = &elf32_freebsd_sysvec,
.interp_newpath = NULL,
- .brand_note = &elf32_freebsd_brandnote,
- .flags = BI_BRAND_NOTE
+ .flags = 0
};
SYSINIT(elf32, SI_SUB_EXEC, SI_ORDER_FIRST,
@@ -99,13 +151,14 @@ elf32_dump_thread(struct thread *td __unused, void *dst __unused,
size_t *off __unused)
{
}
+#endif
/* Process one elf relocation with addend. */
static int
elf_reloc_internal(linker_file_t lf, Elf_Addr relocbase, const void *data,
int type, int local, elf_lookup_fn lookup)
{
- Elf_Addr *where = (Elf_Addr *)NULL;
+ Elf_Addr *where = (Elf_Addr *)NULL;
Elf_Addr addr;
Elf_Addr addend = (Elf_Addr)0;
Elf_Word rtype = (Elf_Word)0, symidx;
diff --git a/sys/mips/mips/exception.S b/sys/mips/mips/exception.S
index fb7614d..1271658 100644
--- a/sys/mips/mips/exception.S
+++ b/sys/mips/mips/exception.S
@@ -62,6 +62,8 @@
#include <machine/cpuregs.h>
#include <machine/pte.h>
+#include "opt_cputype.h"
+
#include "assym.s"
#if defined(ISA_MIPS32)
@@ -97,6 +99,11 @@
#endif
/*
+ * Reasonable limit
+ */
+#define INTRCNT_COUNT 128
+
+/*
* Assume that w alaways need nops to escape CP0 hazard
* TODO: Make hazard delays configurable. Stuck with 5 cycles on the moment
* For more info on CP0 hazards see Chapter 7 (p.99) of "MIPS32 Architecture
@@ -139,12 +146,14 @@ VECTOR_END(MipsTLBMiss)
*----------------------------------------------------------------------------
*/
MipsDoTLBMiss:
+#xxx mips64 unsafe?
#ifndef SMP
lui k1, %hi(_C_LABEL(pcpup))
#endif
#k0 already has BadVA
bltz k0, 1f #02: k0<0 -> 1f (kernel fault)
srl k0, k0, SEGSHIFT - 2 #03: k0=seg offset (almost)
+#xxx mips64 unsafe?
#ifdef SMP
GET_CPU_PCPU(k1)
#else
@@ -153,6 +162,7 @@ MipsDoTLBMiss:
lw k1, PC_SEGBASE(k1)
beqz k1, 2f #05: make sure segbase is not null
andi k0, k0, 0x7fc #06: k0=seg offset (mask 0x3)
+#xxx mips64 unsafe?
addu k1, k0, k1 #07: k1=seg entry address
lw k1, 0(k1) #08: k1=seg entry
mfc0 k0, COP_0_BAD_VADDR #09: k0=bad address (again)
@@ -160,6 +170,7 @@ MipsDoTLBMiss:
srl k0, PGSHIFT - 2 #0b: k0=VPN (aka va>>10)
andi k0, k0, ((NPTEPG/2) - 1) << 3 #0c: k0=page tab offset
+#xxx mips64 unsafe?
addu k1, k1, k0 #0d: k1=pte address
lw k0, 0(k1) #0e: k0=lo0 pte
lw k1, 4(k1) #0f: k1=lo1 pte
@@ -199,8 +210,8 @@ VECTOR(MipsException, unknown)
and k1, k1, CR_EXC_CODE # Mask out the cause bits.
or k1, k1, k0 # change index to user table
1:
- la k0, _C_LABEL(machExceptionTable) # get base of the jump table
- addu k0, k0, k1 # Get the address of the
+ PTR_LA k0, _C_LABEL(machExceptionTable) # get base of the jump table
+ PTR_ADDU k0, k0, k1 # Get the address of the
# function entry. Note that
# the cause is already
# shifted left by 2 bits so
@@ -272,7 +283,7 @@ SlowFault:
and a0, a0, a2 ; \
mtc0 a0, COP_0_STATUS_REG
#endif
-
+
#define SAVE_CPU \
SAVE_REG(AT, AST, sp) ;\
.set at ; \
@@ -286,10 +297,10 @@ SlowFault:
SAVE_REG(t1, T1, sp) ;\
SAVE_REG(t2, T2, sp) ;\
SAVE_REG(t3, T3, sp) ;\
- SAVE_REG(t4, T4, sp) ;\
- SAVE_REG(t5, T5, sp) ;\
- SAVE_REG(t6, T6, sp) ;\
- SAVE_REG(t7, T7, sp) ;\
+ SAVE_REG(ta0, TA0, sp) ;\
+ SAVE_REG(ta1, TA1, sp) ;\
+ SAVE_REG(ta2, TA2, sp) ;\
+ SAVE_REG(ta3, TA3, sp) ;\
SAVE_REG(t8, T8, sp) ;\
SAVE_REG(t9, T9, sp) ;\
SAVE_REG(gp, GP, sp) ;\
@@ -315,10 +326,10 @@ SlowFault:
SAVE_REG(ra, RA, sp) ;\
SAVE_REG(a2, BADVADDR, sp) ;\
SAVE_REG(a3, PC, sp) ;\
- addu v0, sp, KERN_EXC_FRAME_SIZE ;\
+ PTR_ADDU v0, sp, KERN_EXC_FRAME_SIZE ;\
SAVE_REG(v0, SP, sp) ;\
CLEAR_STATUS ;\
- addu a0, sp, STAND_ARG_SIZE ;\
+ PTR_ADDU a0, sp, STAND_ARG_SIZE ;\
ITLBNOPFIX
#define RESTORE_REG(reg, offs, base) \
@@ -326,14 +337,13 @@ SlowFault:
#define RESTORE_CPU \
mtc0 zero,COP_0_STATUS_REG ;\
- RESTORE_REG(a0, SR, sp) ;\
+ RESTORE_REG(k0, SR, sp) ;\
RESTORE_REG(t0, MULLO, sp) ;\
RESTORE_REG(t1, MULHI, sp) ;\
- mtc0 a0, COP_0_STATUS_REG ;\
mtlo t0 ;\
mthi t1 ;\
_MTC0 v0, COP_0_EXC_PC ;\
- .set noat ; \
+ .set noat ;\
RESTORE_REG(AT, AST, sp) ;\
RESTORE_REG(v0, V0, sp) ;\
RESTORE_REG(v1, V1, sp) ;\
@@ -345,10 +355,10 @@ SlowFault:
RESTORE_REG(t1, T1, sp) ;\
RESTORE_REG(t2, T2, sp) ;\
RESTORE_REG(t3, T3, sp) ;\
- RESTORE_REG(t4, T4, sp) ;\
- RESTORE_REG(t5, T5, sp) ;\
- RESTORE_REG(t6, T6, sp) ;\
- RESTORE_REG(t7, T7, sp) ;\
+ RESTORE_REG(ta0, TA0, sp) ;\
+ RESTORE_REG(ta1, TA1, sp) ;\
+ RESTORE_REG(ta2, TA2, sp) ;\
+ RESTORE_REG(ta3, TA3, sp) ;\
RESTORE_REG(t8, T8, sp) ;\
RESTORE_REG(t9, T9, sp) ;\
RESTORE_REG(s0, S0, sp) ;\
@@ -362,7 +372,8 @@ SlowFault:
RESTORE_REG(s8, S8, sp) ;\
RESTORE_REG(gp, GP, sp) ;\
RESTORE_REG(ra, RA, sp) ;\
- addu sp, sp, KERN_EXC_FRAME_SIZE
+ PTR_ADDU sp, sp, KERN_EXC_FRAME_SIZE;\
+ mtc0 k0, COP_0_STATUS_REG
/*
@@ -384,11 +395,24 @@ NNON_LEAF(MipsKernGenException, KERN_EXC_FRAME_SIZE, ra)
/*
* Call the exception handler. a0 points at the saved frame.
*/
- la gp, _C_LABEL(_gp)
- la k0, _C_LABEL(trap)
+ PTR_LA gp, _C_LABEL(_gp)
+ PTR_LA k0, _C_LABEL(trap)
jalr k0
sw a3, STAND_RA_OFFSET + KERN_REG_SIZE(sp) # for debugging
+ /*
+ * Update the interrupt mask in the saved status register.
+ * Some interrupts may have been disabled by interrupt
+ * filters if interrupts were enabled later in the
+ * trap handler.
+ */
+ mfc0 a0, COP_0_STATUS_REG
+ mtc0 zero, COP_0_STATUS_REG
+ and a0, a0, SR_INT_MASK
+ RESTORE_REG(a1, SR, sp)
+ and a1, a1, ~SR_INT_MASK
+ or a1, a1, a0
+ SAVE_REG(a1, SR, sp)
RESTORE_CPU # v0 contains the return address.
sync
eret
@@ -438,11 +462,11 @@ NNON_LEAF(MipsUserGenException, STAND_FRAME_SIZE, ra)
SAVE_U_PCB_REG(t1, T1, k1)
SAVE_U_PCB_REG(t2, T2, k1)
SAVE_U_PCB_REG(t3, T3, k1)
- SAVE_U_PCB_REG(t4, T4, k1)
+ SAVE_U_PCB_REG(ta0, TA0, k1)
mfc0 a0, COP_0_STATUS_REG # First arg is the status reg.
- SAVE_U_PCB_REG(t5, T5, k1)
- SAVE_U_PCB_REG(t6, T6, k1)
- SAVE_U_PCB_REG(t7, T7, k1)
+ SAVE_U_PCB_REG(ta1, TA1, k1)
+ SAVE_U_PCB_REG(ta2, TA2, k1)
+ SAVE_U_PCB_REG(ta3, TA3, k1)
SAVE_U_PCB_REG(s0, S0, k1)
mfc0 a1, COP_0_CAUSE_REG # Second arg is the cause reg.
SAVE_U_PCB_REG(s1, S1, k1)
@@ -468,32 +492,35 @@ NNON_LEAF(MipsUserGenException, STAND_FRAME_SIZE, ra)
SAVE_U_PCB_REG(a2, BADVADDR, k1)
SAVE_U_PCB_REG(a3, PC, k1)
sw a3, STAND_RA_OFFSET(sp) # for debugging
- la gp, _C_LABEL(_gp) # switch to kernel GP
+ PTR_LA gp, _C_LABEL(_gp) # switch to kernel GP
# Turn off fpu and enter kernel mode
and t0, a0, ~(SR_COP_1_BIT | SR_EXL | SR_KSU_MASK | SR_INT_ENAB)
#ifdef TARGET_OCTEON
or t0, t0, (MIPS_SR_KX | MIPS_SR_SX | MIPS_SR_UX)
#endif
mtc0 t0, COP_0_STATUS_REG
- addu a0, k1, U_PCB_REGS
+ PTR_ADDU a0, k1, U_PCB_REGS
ITLBNOPFIX
/*
* Call the exception handler.
*/
- la k0, _C_LABEL(trap)
+ PTR_LA k0, _C_LABEL(trap)
jalr k0
nop
+
/*
* Restore user registers and return.
* First disable interrupts and set exeption level.
*/
DO_AST
- mtc0 zero, COP_0_STATUS_REG # disable int
+ mfc0 t0, COP_0_STATUS_REG # disable int
+ and t0, t0, ~(MIPS_SR_INT_IE)
+ mtc0 t0, COP_0_STATUS_REG
ITLBNOPFIX
- li v0, SR_EXL
- mtc0 v0, COP_0_STATUS_REG # set exeption level
+ or t0, t0, SR_EXL
+ mtc0 t0, COP_0_STATUS_REG # set exception level
ITLBNOPFIX
/*
@@ -504,6 +531,18 @@ NNON_LEAF(MipsUserGenException, STAND_FRAME_SIZE, ra)
GET_CPU_PCPU(k1)
lw k1, PC_CURPCB(k1)
+ /*
+ * Update the interrupt mask in the saved status register.
+ * Some interrupts may have been enabled by an ithread
+ * scheduled by ast().
+ */
+ mfc0 a0, COP_0_STATUS_REG
+ and a0, a0, SR_INT_MASK
+ RESTORE_U_PCB_REG(a1, SR, k1)
+ and a1, a1, ~SR_INT_MASK
+ or a1, a1, a0
+ SAVE_U_PCB_REG(a1, SR, k1)
+
RESTORE_U_PCB_REG(t0, MULLO, k1)
RESTORE_U_PCB_REG(t1, MULHI, k1)
mtlo t0
@@ -520,10 +559,10 @@ NNON_LEAF(MipsUserGenException, STAND_FRAME_SIZE, ra)
RESTORE_U_PCB_REG(t1, T1, k1)
RESTORE_U_PCB_REG(t2, T2, k1)
RESTORE_U_PCB_REG(t3, T3, k1)
- RESTORE_U_PCB_REG(t4, T4, k1)
- RESTORE_U_PCB_REG(t5, T5, k1)
- RESTORE_U_PCB_REG(t6, T6, k1)
- RESTORE_U_PCB_REG(t7, T7, k1)
+ RESTORE_U_PCB_REG(ta0, TA0, k1)
+ RESTORE_U_PCB_REG(ta1, TA1, k1)
+ RESTORE_U_PCB_REG(ta2, TA2, k1)
+ RESTORE_U_PCB_REG(ta3, TA3, k1)
RESTORE_U_PCB_REG(s0, S0, k1)
RESTORE_U_PCB_REG(s1, S1, k1)
RESTORE_U_PCB_REG(s2, S2, k1)
@@ -587,12 +626,25 @@ NNON_LEAF(MipsKernIntr, KERN_EXC_FRAME_SIZE, ra)
/*
* Call the interrupt handler.
*/
- la gp, _C_LABEL(_gp)
- addu a0, sp, STAND_ARG_SIZE
- la k0, _C_LABEL(cpu_intr)
+ PTR_LA gp, _C_LABEL(_gp)
+ PTR_ADDU a0, sp, STAND_ARG_SIZE
+ PTR_LA k0, _C_LABEL(cpu_intr)
jalr k0
sw a3, STAND_RA_OFFSET + KERN_REG_SIZE(sp)
/* Why no AST processing here? */
+
+ /*
+ * Update the interrupt mask in the saved status register.
+ * Some interrupts may have been disabled by interrupt
+ * filters.
+ */
+ mfc0 a0, COP_0_STATUS_REG
+ and a0, a0, SR_INT_MASK
+ RESTORE_REG(a1, SR, sp)
+ and a1, a1, ~SR_INT_MASK
+ or a1, a1, a0
+ SAVE_REG(a1, SR, sp)
+
/*
* Restore registers and return from the interrupt.
*/
@@ -643,10 +695,10 @@ NNON_LEAF(MipsUserIntr, STAND_FRAME_SIZE, ra)
SAVE_U_PCB_REG(t1, T1, k1)
SAVE_U_PCB_REG(t2, T2, k1)
SAVE_U_PCB_REG(t3, T3, k1)
- SAVE_U_PCB_REG(t4, T4, k1)
- SAVE_U_PCB_REG(t5, T5, k1)
- SAVE_U_PCB_REG(t6, T6, k1)
- SAVE_U_PCB_REG(t7, T7, k1)
+ SAVE_U_PCB_REG(ta0, TA0, k1)
+ SAVE_U_PCB_REG(ta1, TA1, k1)
+ SAVE_U_PCB_REG(ta2, TA2, k1)
+ SAVE_U_PCB_REG(ta3, TA3, k1)
SAVE_U_PCB_REG(t8, T8, k1)
SAVE_U_PCB_REG(t9, T9, k1)
SAVE_U_PCB_REG(gp, GP, k1)
@@ -676,7 +728,7 @@ NNON_LEAF(MipsUserIntr, STAND_FRAME_SIZE, ra)
SAVE_U_PCB_REG(a1, CAUSE, k1)
SAVE_U_PCB_REG(a3, PC, k1) # PC in a3, note used later!
subu sp, k1, STAND_FRAME_SIZE # switch to kernel SP
- la gp, _C_LABEL(_gp) # switch to kernel GP
+ PTR_LA gp, _C_LABEL(_gp) # switch to kernel GP
# Turn off fpu, disable interrupts, set kernel mode kernel mode, clear exception level.
and t0, a0, ~(SR_COP_1_BIT | SR_EXL | SR_INT_ENAB | SR_KSU_MASK)
@@ -685,40 +737,45 @@ NNON_LEAF(MipsUserIntr, STAND_FRAME_SIZE, ra)
#endif
mtc0 t0, COP_0_STATUS_REG
ITLBNOPFIX
- addu a0, k1, U_PCB_REGS
+ PTR_ADDU a0, k1, U_PCB_REGS
/*
* Call the interrupt handler.
*/
- la k0, _C_LABEL(cpu_intr)
+ PTR_LA k0, _C_LABEL(cpu_intr)
jalr k0
sw a3, STAND_RA_OFFSET(sp) # for debugging
+
/*
- * Since interrupts are enabled at this point, we use a1 instead of
- * k0 or k1 to store the PCB pointer. This is because k0 and k1
- * are not preserved across interrupts. ** RRS - And how did the
- * get enabled? cpu_intr clears the cause register but it does
- * not touch the sr as far as I can see thus intr are still
- * disabled.
+ * DO_AST enables interrupts
*/
DO_AST
/*
- * Restore user registers and return. NOTE: interrupts are enabled.
+ * Restore user registers and return.
*/
-
-/*
- * Since interrupts are enabled at this point, we use a1 instead of
- * k0 or k1 to store the PCB pointer. This is because k0 and k1
- * are not preserved across interrupts.
- */
- mtc0 zero, COP_0_STATUS_REG
+ mfc0 t0, COP_0_STATUS_REG # disable int
+ and t0, t0, ~(MIPS_SR_INT_IE)
+ mtc0 t0, COP_0_STATUS_REG
ITLBNOPFIX
- li v0, SR_EXL
- mtc0 v0, COP_0_STATUS_REG # set exeption level bit.
+ or t0, t0, SR_EXL
+ mtc0 t0, COP_0_STATUS_REG # set exception level
ITLBNOPFIX
GET_CPU_PCPU(k1)
- lw a1, PC_CURPCB(k1)
+ lw k1, PC_CURPCB(k1)
+
+ /*
+ * Update the interrupt mask in the saved status register.
+ * Some interrupts may have been disabled by interrupt
+ * filters.
+ */
+ mfc0 a0, COP_0_STATUS_REG
+ and a0, a0, SR_INT_MASK
+ RESTORE_U_PCB_REG(a1, SR, k1)
+ and a1, a1, ~SR_INT_MASK
+ or a1, a1, a0
+ SAVE_U_PCB_REG(a1, SR, k1)
+
RESTORE_U_PCB_REG(s0, S0, k1)
RESTORE_U_PCB_REG(s1, S1, k1)
RESTORE_U_PCB_REG(s2, S2, k1)
@@ -744,10 +801,10 @@ NNON_LEAF(MipsUserIntr, STAND_FRAME_SIZE, ra)
RESTORE_U_PCB_REG(t1, T1, k1)
RESTORE_U_PCB_REG(t2, T2, k1)
RESTORE_U_PCB_REG(t3, T3, k1)
- RESTORE_U_PCB_REG(t4, T4, k1)
- RESTORE_U_PCB_REG(t5, T5, k1)
- RESTORE_U_PCB_REG(t6, T6, k1)
- RESTORE_U_PCB_REG(t7, T7, k1)
+ RESTORE_U_PCB_REG(ta0, TA0, k1)
+ RESTORE_U_PCB_REG(ta1, TA1, k1)
+ RESTORE_U_PCB_REG(ta2, TA2, k1)
+ RESTORE_U_PCB_REG(ta3, TA3, k1)
RESTORE_U_PCB_REG(t8, T8, k1)
RESTORE_U_PCB_REG(t9, T9, k1)
RESTORE_U_PCB_REG(gp, GP, k1)
@@ -833,6 +890,7 @@ NLEAF(MipsKernTLBInvalidException)
2:
srl k0, 20 # k0=seg offset (almost)
andi k0, k0, 0xffc # k0=seg offset (mask 0x3)
+#xxx mips64 unsafe?
addu k1, k0, k1 # k1=seg entry address
lw k1, 0(k1) # k1=seg entry
mfc0 k0, COP_0_BAD_VADDR # k0=bad address (again)
@@ -840,6 +898,7 @@ NLEAF(MipsKernTLBInvalidException)
srl k0, k0, PGSHIFT-2
andi k0, k0, 0xffc # compute offset from index
tlbp # Probe the invalid entry
+#xxx mips64 unsafe?
addu k1, k1, k0
and k0, k0, 4 # check even/odd page
nop # required for QED 5230
@@ -905,6 +964,7 @@ NLEAF(MipsUserTLBInvalidException)
sltu k1, k0, k1
beqz k1, _C_LABEL(MipsUserGenException)
nop
+#xxx mips64 unsafe?
#ifdef SMP
GET_CPU_PCPU(k1)
#else
@@ -917,6 +977,7 @@ NLEAF(MipsUserTLBInvalidException)
2:
srl k0, 20 # k0=seg offset (almost)
andi k0, k0, 0xffc # k0=seg offset (mask 0x3)
+#xxx mips64 unsafe?
addu k1, k0, k1 # k1=seg entry address
lw k1, 0(k1) # k1=seg entry
mfc0 k0, COP_0_BAD_VADDR # k0=bad address (again)
@@ -924,6 +985,7 @@ NLEAF(MipsUserTLBInvalidException)
srl k0, k0, PGSHIFT-2
andi k0, k0, 0xffc # compute offset from index
tlbp # Probe the invalid entry
+#xxx mips64 unsafe?
addu k1, k1, k0
and k0, k0, 4 # check even/odd page
nop # required for QED 5230
@@ -1007,12 +1069,14 @@ NLEAF(MipsTLBMissException)
lw k1, %lo(_C_LABEL(kernel_segmap))(k1) # k1=segment tab base
beq k1, zero, _C_LABEL(MipsKernGenException) # ==0 -- no seg tab
andi k0, k0, 0xffc # k0=seg offset (mask 0x3)
+#xxx mips64 unsafe
addu k1, k0, k1 # k1=seg entry address
lw k1, 0(k1) # k1=seg entry
mfc0 k0, COP_0_BAD_VADDR # k0=bad address (again)
beq k1, zero, _C_LABEL(MipsKernGenException) # ==0 -- no page table
srl k0, 10 # k0=VPN (aka va>>10)
andi k0, k0, 0xff8 # k0=page tab offset
+#xxx mips64 unsafe
addu k1, k1, k0 # k1=pte address
lw k0, 0(k1) # k0=lo0 pte
lw k1, 4(k1) # k1=lo1 pte
@@ -1037,31 +1101,31 @@ sys_stk_chk:
nop
# stack overflow
- la a0, _C_LABEL(_start) - START_FRAME - 8 # set sp to a valid place
+ PTR_LA a0, _C_LABEL(_start) - START_FRAME - 8 # set sp to a valid place
sw sp, 24(a0)
move sp, a0
- la a0, 1f
+ PTR_LA a0, 1f
mfc0 a2, COP_0_STATUS_REG
mfc0 a3, COP_0_CAUSE_REG
_MFC0 a1, COP_0_EXC_PC
sw a2, 16(sp)
sw a3, 20(sp)
move a2, ra
- la k0, _C_LABEL(printf)
+ PTR_LA k0, _C_LABEL(printf)
jalr k0
mfc0 a3, COP_0_BAD_VADDR
- la sp, _C_LABEL(_start) - START_FRAME # set sp to a valid place
+ PTR_LA sp, _C_LABEL(_start) - START_FRAME # set sp to a valid place
#if !defined(SMP) && defined(DDB)
- la a0, 2f
- la k0, _C_LABEL(trapDump)
+ PTR_LA a0, 2f
+ PTR_LA k0, _C_LABEL(trapDump)
jalr k0
nop
li a0, 0
lw a1, _C_LABEL(num_tlbentries)
- la k0, _C_LABEL(db_dump_tlb)
+ PTR_LA k0, _C_LABEL(db_dump_tlb)
jalr k0
addu a1, -1
@@ -1138,11 +1202,12 @@ NON_LEAF(MipsFPTrap, STAND_FRAME_SIZE, ra)
*/
sw a2, STAND_FRAME_SIZE + 8(sp)
GET_CPU_PCPU(a0)
+#mips64 unsafe?
lw a0, PC_CURPCB(a0)
- addu a0, a0, U_PCB_REGS # first arg is ptr to CPU registers
+ PTR_ADDU a0, a0, U_PCB_REGS # first arg is ptr to CPU registers
move a1, a2 # second arg is instruction PC
move a2, t1 # third arg is floating point CSR
- la t3, _C_LABEL(MipsEmulateBranch) # compute PC after branch
+ PTR_LA t3, _C_LABEL(MipsEmulateBranch) # compute PC after branch
jalr t3 # compute PC after branch
move a3, zero # fourth arg is FALSE
/*
@@ -1158,6 +1223,7 @@ NON_LEAF(MipsFPTrap, STAND_FRAME_SIZE, ra)
*/
1:
lw a0, 0(a2) # a0 = coproc instruction
+#xxx mips64 unsafe?
addu v0, a2, 4 # v0 = next pc
2:
GET_CPU_PCPU(t2)
@@ -1178,7 +1244,7 @@ NON_LEAF(MipsFPTrap, STAND_FRAME_SIZE, ra)
lw a0, PC_CURTHREAD(a0) # get current thread
cfc1 a2, FPC_CSR # code = FP execptions
ctc1 zero, FPC_CSR # Clear exceptions
- la t3, _C_LABEL(trapsignal)
+ PTR_LA t3, _C_LABEL(trapsignal)
jalr t3
li a1, SIGFPE
b FPReturn
@@ -1188,7 +1254,7 @@ NON_LEAF(MipsFPTrap, STAND_FRAME_SIZE, ra)
* Finally, we can call MipsEmulateFP() where a0 is the instruction to emulate.
*/
4:
- la t3, _C_LABEL(MipsEmulateFP)
+ PTR_LA t3, _C_LABEL(MipsEmulateFP)
jalr t3
nop
@@ -1202,26 +1268,9 @@ FPReturn:
mtc0 t0, COP_0_STATUS_REG
ITLBNOPFIX
j ra
- addu sp, sp, STAND_FRAME_SIZE
+ PTR_ADDU sp, sp, STAND_FRAME_SIZE
END(MipsFPTrap)
-
-#if 0
-/*
- * Atomic ipending update
- */
-LEAF(set_sint)
- la v1, ipending
-1:
- ll v0, 0(v1)
- or v0, a0
- sc v0, 0(v1)
- beqz v0, 1b
- j ra
- nop
-END(set_sint)
-#endif
-
/*
* Interrupt counters for vmstat.
*/
@@ -1231,15 +1280,11 @@ END(set_sint)
.globl intrnames
.globl eintrnames
intrnames:
- .asciiz "clock"
- .asciiz "rtc"
- .asciiz "sio"
- .asciiz "pe"
- .asciiz "pic-nic"
+ .space INTRCNT_COUNT * (MAXCOMLEN + 1) * 2
eintrnames:
- .align 2
+ .align 4
intrcnt:
- .word 0,0,0,0,0
+ .space INTRCNT_COUNT * 4 * 2
eintrcnt:
@@ -1248,7 +1293,7 @@ eintrcnt:
*/
.text
VECTOR(MipsCache, unknown)
- la k0, _C_LABEL(MipsCacheException)
+ PTR_LA k0, _C_LABEL(MipsCacheException)
li k1, MIPS_PHYS_MASK
and k0, k1
li k1, MIPS_UNCACHED_MEMORY_ADDR
@@ -1267,8 +1312,8 @@ VECTOR_END(MipsCache)
NESTED_NOPROFILE(MipsCacheException, KERN_EXC_FRAME_SIZE, ra)
.set noat
.mask 0x80000000, -4
- la k0, _C_LABEL(panic) # return to panic
- la a0, 9f # panicstr
+ PTR_LA k0, _C_LABEL(panic) # return to panic
+ PTR_LA a0, 9f # panicstr
_MFC0 a1, COP_0_ERROR_PC
mfc0 a2, COP_0_CACHE_ERR # 3rd arg cache error
diff --git a/sys/mips/mips/fp.S b/sys/mips/mips/fp.S
index b211c12..afe9f03 100644
--- a/sys/mips/mips/fp.S
+++ b/sys/mips/mips/fp.S
@@ -634,7 +634,7 @@ func_fmt_tbl:
*/
sub_s:
jal get_ft_fs_s
- xor t4, t4, 1 # negate FT sign bit
+ xor ta0, ta0, 1 # negate FT sign bit
b add_sub_s
/*
* Single precision add.
@@ -643,38 +643,38 @@ add_s:
jal get_ft_fs_s
add_sub_s:
bne t1, SEXP_INF, 1f # is FS an infinity?
- bne t5, SEXP_INF, result_fs_s # if FT is not inf, result=FS
+ bne ta1, SEXP_INF, result_fs_s # if FT is not inf, result=FS
bne t2, zero, result_fs_s # if FS is NAN, result is FS
- bne t6, zero, result_ft_s # if FT is NAN, result is FT
- bne t0, t4, invalid_s # both infinities same sign?
+ bne ta2, zero, result_ft_s # if FT is NAN, result is FT
+ bne t0, ta0, invalid_s # both infinities same sign?
b result_fs_s # result is in FS
1:
- beq t5, SEXP_INF, result_ft_s # if FT is inf, result=FT
+ beq ta1, SEXP_INF, result_ft_s # if FT is inf, result=FT
bne t1, zero, 4f # is FS a denormalized num?
beq t2, zero, 3f # is FS zero?
- bne t5, zero, 2f # is FT a denormalized num?
- beq t6, zero, result_fs_s # FT is zero, result=FS
+ bne ta1, zero, 2f # is FT a denormalized num?
+ beq ta2, zero, result_fs_s # FT is zero, result=FS
jal renorm_fs_s
jal renorm_ft_s
b 5f
2:
jal renorm_fs_s
- subu t5, t5, SEXP_BIAS # unbias FT exponent
- or t6, t6, SIMPL_ONE # set implied one bit
+ subu ta1, ta1, SEXP_BIAS # unbias FT exponent
+ or ta2, ta2, SIMPL_ONE # set implied one bit
b 5f
3:
- bne t5, zero, result_ft_s # if FT != 0, result=FT
- bne t6, zero, result_ft_s
+ bne ta1, zero, result_ft_s # if FT != 0, result=FT
+ bne ta2, zero, result_ft_s
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
bne v0, FPC_ROUND_RM, 1f # round to -infinity?
- or t0, t0, t4 # compute result sign
+ or t0, t0, ta0 # compute result sign
b result_fs_s
1:
- and t0, t0, t4 # compute result sign
+ and t0, t0, ta0 # compute result sign
b result_fs_s
4:
- bne t5, zero, 2f # is FT a denormalized num?
- beq t6, zero, result_fs_s # FT is zero, result=FS
+ bne ta1, zero, 2f # is FT a denormalized num?
+ beq ta2, zero, result_fs_s # FT is zero, result=FS
subu t1, t1, SEXP_BIAS # unbias FS exponent
or t2, t2, SIMPL_ONE # set implied one bit
jal renorm_ft_s
@@ -682,15 +682,15 @@ add_sub_s:
2:
subu t1, t1, SEXP_BIAS # unbias FS exponent
or t2, t2, SIMPL_ONE # set implied one bit
- subu t5, t5, SEXP_BIAS # unbias FT exponent
- or t6, t6, SIMPL_ONE # set implied one bit
+ subu ta1, ta1, SEXP_BIAS # unbias FT exponent
+ or ta2, ta2, SIMPL_ONE # set implied one bit
/*
* Perform the addition.
*/
5:
move t8, zero # no shifted bits (sticky reg)
- beq t1, t5, 4f # no shift needed
- subu v0, t1, t5 # v0 = difference of exponents
+ beq t1, ta1, 4f # no shift needed
+ subu v0, t1, ta1 # v0 = difference of exponents
move v1, v0 # v1 = abs(difference)
bge v0, zero, 1f
negu v1
@@ -698,50 +698,50 @@ add_sub_s:
ble v1, SFRAC_BITS+2, 2f # is difference too great?
li t8, STICKYBIT # set the sticky bit
bge v0, zero, 1f # check which exp is larger
- move t1, t5 # result exp is FTs
+ move t1, ta1 # result exp is FTs
move t2, zero # FSs fraction shifted is zero
b 4f
1:
- move t6, zero # FTs fraction shifted is zero
+ move ta2, zero # FTs fraction shifted is zero
b 4f
2:
li t9, 32 # compute 32 - abs(exp diff)
subu t9, t9, v1
bgt v0, zero, 3f # if FS > FT, shift FTs frac
- move t1, t5 # FT > FS, result exp is FTs
+ move t1, ta1 # FT > FS, result exp is FTs
sll t8, t2, t9 # save bits shifted out
srl t2, t2, v1 # shift FSs fraction
b 4f
3:
- sll t8, t6, t9 # save bits shifted out
- srl t6, t6, v1 # shift FTs fraction
+ sll t8, ta2, t9 # save bits shifted out
+ srl ta2, ta2, v1 # shift FTs fraction
4:
- bne t0, t4, 1f # if signs differ, subtract
- addu t2, t2, t6 # add fractions
+ bne t0, ta0, 1f # if signs differ, subtract
+ addu t2, t2, ta2 # add fractions
b norm_s
1:
- blt t2, t6, 3f # subtract larger from smaller
- bne t2, t6, 2f # if same, result=0
+ blt t2, ta2, 3f # subtract larger from smaller
+ bne t2, ta2, 2f # if same, result=0
move t1, zero # result=0
move t2, zero
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
- bne v0, FPC_ROUND_RM, 1f # round to -infinity?
- or t0, t0, t4 # compute result sign
+ bne v0, FPC_ROUND_RM, 1f # round to -infinity?
+ or t0, t0, ta0 # compute result sign
b result_fs_s
1:
- and t0, t0, t4 # compute result sign
+ and t0, t0, ta0 # compute result sign
b result_fs_s
2:
- sltu t9, zero, t8 # compute t2:zero - t6:t8
+ sltu t9, zero, t8 # compute t2:zero - ta2:t8
subu t8, zero, t8
- subu t2, t2, t6 # subtract fractions
+ subu t2, t2, ta2 # subtract fractions
subu t2, t2, t9 # subtract barrow
b norm_s
3:
- move t0, t4 # sign of result = FTs
- sltu t9, zero, t8 # compute t6:zero - t2:t8
+ move t0, ta0 # sign of result = FTs
+ sltu t9, zero, t8 # compute ta2:zero - t2:t8
subu t8, zero, t8
- subu t2, t6, t2 # subtract fractions
+ subu t2, ta2, t2 # subtract fractions
subu t2, t2, t9 # subtract barrow
b norm_s
@@ -750,7 +750,7 @@ add_sub_s:
*/
sub_d:
jal get_ft_fs_d
- xor t4, t4, 1 # negate sign bit
+ xor ta0, ta0, 1 # negate sign bit
b add_sub_d
/*
* Double precision add.
@@ -759,46 +759,46 @@ add_d:
jal get_ft_fs_d
add_sub_d:
bne t1, DEXP_INF, 1f # is FS an infinity?
- bne t5, DEXP_INF, result_fs_d # if FT is not inf, result=FS
+ bne ta1, DEXP_INF, result_fs_d # if FT is not inf, result=FS
bne t2, zero, result_fs_d # if FS is NAN, result is FS
bne t3, zero, result_fs_d
- bne t6, zero, result_ft_d # if FT is NAN, result is FT
- bne t7, zero, result_ft_d
- bne t0, t4, invalid_d # both infinities same sign?
+ bne ta2, zero, result_ft_d # if FT is NAN, result is FT
+ bne ta3, zero, result_ft_d
+ bne t0, ta0, invalid_d # both infinities same sign?
b result_fs_d # result is in FS
1:
- beq t5, DEXP_INF, result_ft_d # if FT is inf, result=FT
+ beq ta1, DEXP_INF, result_ft_d # if FT is inf, result=FT
bne t1, zero, 4f # is FS a denormalized num?
bne t2, zero, 1f # is FS zero?
beq t3, zero, 3f
1:
- bne t5, zero, 2f # is FT a denormalized num?
- bne t6, zero, 1f
- beq t7, zero, result_fs_d # FT is zero, result=FS
+ bne ta1, zero, 2f # is FT a denormalized num?
+ bne ta2, zero, 1f
+ beq ta3, zero, result_fs_d # FT is zero, result=FS
1:
jal renorm_fs_d
jal renorm_ft_d
b 5f
2:
jal renorm_fs_d
- subu t5, t5, DEXP_BIAS # unbias FT exponent
- or t6, t6, DIMPL_ONE # set implied one bit
+ subu ta1, ta1, DEXP_BIAS # unbias FT exponent
+ or ta2, ta2, DIMPL_ONE # set implied one bit
b 5f
3:
- bne t5, zero, result_ft_d # if FT != 0, result=FT
- bne t6, zero, result_ft_d
- bne t7, zero, result_ft_d
+ bne ta1, zero, result_ft_d # if FT != 0, result=FT
+ bne ta2, zero, result_ft_d
+ bne ta3, zero, result_ft_d
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
- bne v0, FPC_ROUND_RM, 1f # round to -infinity?
- or t0, t0, t4 # compute result sign
+ bne v0, FPC_ROUND_RM, 1f # round to -infinity?
+ or t0, t0, ta0 # compute result sign
b result_fs_d
1:
- and t0, t0, t4 # compute result sign
+ and t0, t0, ta0 # compute result sign
b result_fs_d
4:
- bne t5, zero, 2f # is FT a denormalized num?
- bne t6, zero, 1f
- beq t7, zero, result_fs_d # FT is zero, result=FS
+ bne ta1, zero, 2f # is FT a denormalized num?
+ bne ta2, zero, 1f
+ beq ta3, zero, result_fs_d # FT is zero, result=FS
1:
subu t1, t1, DEXP_BIAS # unbias FS exponent
or t2, t2, DIMPL_ONE # set implied one bit
@@ -807,15 +807,15 @@ add_sub_d:
2:
subu t1, t1, DEXP_BIAS # unbias FS exponent
or t2, t2, DIMPL_ONE # set implied one bit
- subu t5, t5, DEXP_BIAS # unbias FT exponent
- or t6, t6, DIMPL_ONE # set implied one bit
+ subu ta1, ta1, DEXP_BIAS # unbias FT exponent
+ or ta2, ta2, DIMPL_ONE # set implied one bit
/*
* Perform the addition.
*/
5:
move t8, zero # no shifted bits (sticky reg)
- beq t1, t5, 4f # no shift needed
- subu v0, t1, t5 # v0 = difference of exponents
+ beq t1, ta1, 4f # no shift needed
+ subu v0, t1, ta1 # v0 = difference of exponents
move v1, v0 # v1 = abs(difference)
bge v0, zero, 1f
negu v1
@@ -823,18 +823,18 @@ add_sub_d:
ble v1, DFRAC_BITS+2, 2f # is difference too great?
li t8, STICKYBIT # set the sticky bit
bge v0, zero, 1f # check which exp is larger
- move t1, t5 # result exp is FTs
+ move t1, ta1 # result exp is FTs
move t2, zero # FSs fraction shifted is zero
move t3, zero
b 4f
1:
- move t6, zero # FTs fraction shifted is zero
- move t7, zero
+ move ta2, zero # FTs fraction shifted is zero
+ move ta3, zero
b 4f
2:
li t9, 32
bge v0, zero, 3f # if FS > FT, shift FTs frac
- move t1, t5 # FT > FS, result exp is FTs
+ move t1, ta1 # FT > FS, result exp is FTs
blt v1, t9, 1f # shift right by < 32?
subu v1, v1, t9
subu t9, t9, v1
@@ -856,62 +856,62 @@ add_sub_d:
blt v1, t9, 1f # shift right by < 32?
subu v1, v1, t9
subu t9, t9, v1
- sll t8, t6, t9 # save bits shifted out
- srl t7, t6, v1 # shift FTs fraction
- move t6, zero
+ sll t8, ta2, t9 # save bits shifted out
+ srl ta3, ta2, v1 # shift FTs fraction
+ move ta2, zero
b 4f
1:
subu t9, t9, v1
- sll t8, t7, t9 # save bits shifted out
- srl t7, t7, v1 # shift FTs fraction
- sll t9, t6, t9 # save bits shifted out of t2
- or t7, t7, t9 # and put into t3
- srl t6, t6, v1
+ sll t8, ta3, t9 # save bits shifted out
+ srl ta3, ta3, v1 # shift FTs fraction
+ sll t9, ta2, t9 # save bits shifted out of t2
+ or ta3, ta3, t9 # and put into t3
+ srl ta2, ta2, v1
4:
- bne t0, t4, 1f # if signs differ, subtract
- addu t3, t3, t7 # add fractions
- sltu t9, t3, t7 # compute carry
- addu t2, t2, t6 # add fractions
+ bne t0, ta0, 1f # if signs differ, subtract
+ addu t3, t3, ta3 # add fractions
+ sltu t9, t3, ta3 # compute carry
+ addu t2, t2, ta2 # add fractions
addu t2, t2, t9 # add carry
b norm_d
1:
- blt t2, t6, 3f # subtract larger from smaller
- bne t2, t6, 2f
- bltu t3, t7, 3f
- bne t3, t7, 2f # if same, result=0
+ blt t2, ta2, 3f # subtract larger from smaller
+ bne t2, ta2, 2f
+ bltu t3, ta3, 3f
+ bne t3, ta3, 2f # if same, result=0
move t1, zero # result=0
move t2, zero
move t3, zero
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
bne v0, FPC_ROUND_RM, 1f # round to -infinity?
- or t0, t0, t4 # compute result sign
+ or t0, t0, ta0 # compute result sign
b result_fs_d
1:
- and t0, t0, t4 # compute result sign
+ and t0, t0, ta0 # compute result sign
b result_fs_d
2:
- beq t8, zero, 1f # compute t2:t3:zero - t6:t7:t8
+ beq t8, zero, 1f # compute t2:t3:zero - ta2:ta3:t8
subu t8, zero, t8
sltu v0, t3, 1 # compute barrow out
subu t3, t3, 1 # subtract barrow
subu t2, t2, v0
1:
- sltu v0, t3, t7
- subu t3, t3, t7 # subtract fractions
- subu t2, t2, t6 # subtract fractions
+ sltu v0, t3, ta3
+ subu t3, t3, ta3 # subtract fractions
+ subu t2, t2, ta2 # subtract fractions
subu t2, t2, v0 # subtract barrow
b norm_d
3:
- move t0, t4 # sign of result = FTs
- beq t8, zero, 1f # compute t6:t7:zero - t2:t3:t8
+ move t0, ta0 # sign of result = FTs
+ beq t8, zero, 1f # compute ta2:ta3:zero - t2:t3:t8
subu t8, zero, t8
- sltu v0, t7, 1 # compute barrow out
- subu t7, t7, 1 # subtract barrow
- subu t6, t6, v0
+ sltu v0, ta3, 1 # compute barrow out
+ subu ta3, ta3, 1 # subtract barrow
+ subu ta2, ta2, v0
1:
- sltu v0, t7, t3
- subu t3, t7, t3 # subtract fractions
- subu t2, t6, t2 # subtract fractions
+ sltu v0, ta3, t3
+ subu t3, ta3, t3 # subtract fractions
+ subu t2, ta2, t2 # subtract fractions
subu t2, t2, v0 # subtract barrow
b norm_d
@@ -920,22 +920,22 @@ add_sub_d:
*/
mul_s:
jal get_ft_fs_s
- xor t0, t0, t4 # compute sign of result
- move t4, t0
+ xor t0, t0, ta0 # compute sign of result
+ move ta0, t0
bne t1, SEXP_INF, 2f # is FS an infinity?
bne t2, zero, result_fs_s # if FS is a NAN, result=FS
- bne t5, SEXP_INF, 1f # FS is inf, is FT an infinity?
- bne t6, zero, result_ft_s # if FT is a NAN, result=FT
+ bne ta1, SEXP_INF, 1f # FS is inf, is FT an infinity?
+ bne ta2, zero, result_ft_s # if FT is a NAN, result=FT
b result_fs_s # result is infinity
1:
- bne t5, zero, result_fs_s # inf * zero? if no, result=FS
- bne t6, zero, result_fs_s
+ bne ta1, zero, result_fs_s # inf * zero? if no, result=FS
+ bne ta2, zero, result_fs_s
b invalid_s # infinity * zero is invalid
2:
- bne t5, SEXP_INF, 1f # FS != inf, is FT an infinity?
+ bne ta1, SEXP_INF, 1f # FS != inf, is FT an infinity?
bne t1, zero, result_ft_s # zero * inf? if no, result=FT
bne t2, zero, result_ft_s
- bne t6, zero, result_ft_s # if FT is a NAN, result=FT
+ bne ta2, zero, result_ft_s # if FT is a NAN, result=FT
b invalid_s # zero * infinity is invalid
1:
bne t1, zero, 1f # is FS zero?
@@ -946,17 +946,17 @@ mul_s:
subu t1, t1, SEXP_BIAS # unbias FS exponent
or t2, t2, SIMPL_ONE # set implied one bit
2:
- bne t5, zero, 1f # is FT zero?
- beq t6, zero, result_ft_s # result is zero
+ bne ta1, zero, 1f # is FT zero?
+ beq ta2, zero, result_ft_s # result is zero
jal renorm_ft_s
b 2f
1:
- subu t5, t5, SEXP_BIAS # unbias FT exponent
- or t6, t6, SIMPL_ONE # set implied one bit
+ subu ta1, ta1, SEXP_BIAS # unbias FT exponent
+ or ta2, ta2, SIMPL_ONE # set implied one bit
2:
- addu t1, t1, t5 # compute result exponent
+ addu t1, t1, ta1 # compute result exponent
addu t1, t1, 9 # account for binary point
- multu t2, t6 # multiply fractions
+ multu t2, ta2 # multiply fractions
mflo t8
mfhi t2
b norm_s
@@ -966,27 +966,27 @@ mul_s:
*/
mul_d:
jal get_ft_fs_d
- xor t0, t0, t4 # compute sign of result
- move t4, t0
+ xor t0, t0, ta0 # compute sign of result
+ move ta0, t0
bne t1, DEXP_INF, 2f # is FS an infinity?
bne t2, zero, result_fs_d # if FS is a NAN, result=FS
bne t3, zero, result_fs_d
- bne t5, DEXP_INF, 1f # FS is inf, is FT an infinity?
- bne t6, zero, result_ft_d # if FT is a NAN, result=FT
- bne t7, zero, result_ft_d
+ bne ta1, DEXP_INF, 1f # FS is inf, is FT an infinity?
+ bne ta2, zero, result_ft_d # if FT is a NAN, result=FT
+ bne ta3, zero, result_ft_d
b result_fs_d # result is infinity
1:
- bne t5, zero, result_fs_d # inf * zero? if no, result=FS
- bne t6, zero, result_fs_d
- bne t7, zero, result_fs_d
+ bne ta1, zero, result_fs_d # inf * zero? if no, result=FS
+ bne ta2, zero, result_fs_d
+ bne ta3, zero, result_fs_d
b invalid_d # infinity * zero is invalid
2:
- bne t5, DEXP_INF, 1f # FS != inf, is FT an infinity?
+ bne ta1, DEXP_INF, 1f # FS != inf, is FT an infinity?
bne t1, zero, result_ft_d # zero * inf? if no, result=FT
bne t2, zero, result_ft_d # if FS is a NAN, result=FS
bne t3, zero, result_ft_d
- bne t6, zero, result_ft_d # if FT is a NAN, result=FT
- bne t7, zero, result_ft_d
+ bne ta2, zero, result_ft_d # if FT is a NAN, result=FT
+ bne ta3, zero, result_ft_d
b invalid_d # zero * infinity is invalid
1:
bne t1, zero, 2f # is FS zero?
@@ -999,37 +999,37 @@ mul_d:
subu t1, t1, DEXP_BIAS # unbias FS exponent
or t2, t2, DIMPL_ONE # set implied one bit
3:
- bne t5, zero, 2f # is FT zero?
- bne t6, zero, 1f
- beq t7, zero, result_ft_d # result is zero
+ bne ta1, zero, 2f # is FT zero?
+ bne ta2, zero, 1f
+ beq ta3, zero, result_ft_d # result is zero
1:
jal renorm_ft_d
b 3f
2:
- subu t5, t5, DEXP_BIAS # unbias FT exponent
- or t6, t6, DIMPL_ONE # set implied one bit
+ subu ta1, ta1, DEXP_BIAS # unbias FT exponent
+ or ta2, ta2, DIMPL_ONE # set implied one bit
3:
- addu t1, t1, t5 # compute result exponent
+ addu t1, t1, ta1 # compute result exponent
addu t1, t1, 12 # ???
- multu t3, t7 # multiply fractions (low * low)
- move t4, t2 # free up t2,t3 for result
- move t5, t3
+ multu t3, ta3 # multiply fractions (low * low)
+ move ta0, t2 # free up t2,t3 for result
+ move ta1, t3
mflo a3 # save low order bits
mfhi t8
not v0, t8
- multu t4, t7 # multiply FS(high) * FT(low)
+ multu ta0, ta3 # multiply FS(high) * FT(low)
mflo v1
mfhi t3 # init low result
sltu v0, v0, v1 # compute carry
addu t8, v1
- multu t5, t6 # multiply FS(low) * FT(high)
+ multu ta1, ta2 # multiply FS(low) * FT(high)
addu t3, t3, v0 # add carry
not v0, t8
mflo v1
mfhi t2
sltu v0, v0, v1
addu t8, v1
- multu t4, t6 # multiply FS(high) * FT(high)
+ multu ta0, ta2 # multiply FS(high) * FT(high)
addu t3, v0
not v1, t3
sltu v1, v1, t2
@@ -1050,24 +1050,24 @@ mul_d:
*/
div_s:
jal get_ft_fs_s
- xor t0, t0, t4 # compute sign of result
- move t4, t0
+ xor t0, t0, ta0 # compute sign of result
+ move ta0, t0
bne t1, SEXP_INF, 1f # is FS an infinity?
bne t2, zero, result_fs_s # if FS is NAN, result is FS
- bne t5, SEXP_INF, result_fs_s # is FT an infinity?
- bne t6, zero, result_ft_s # if FT is NAN, result is FT
+ bne ta1, SEXP_INF, result_fs_s # is FT an infinity?
+ bne ta2, zero, result_ft_s # if FT is NAN, result is FT
b invalid_s # infinity/infinity is invalid
1:
- bne t5, SEXP_INF, 1f # is FT an infinity?
- bne t6, zero, result_ft_s # if FT is NAN, result is FT
+ bne ta1, SEXP_INF, 1f # is FT an infinity?
+ bne ta2, zero, result_ft_s # if FT is NAN, result is FT
move t1, zero # x / infinity is zero
move t2, zero
b result_fs_s
1:
bne t1, zero, 2f # is FS zero?
bne t2, zero, 1f
- bne t5, zero, result_fs_s # FS=zero, is FT zero?
- beq t6, zero, invalid_s # 0 / 0
+ bne ta1, zero, result_fs_s # FS=zero, is FT zero?
+ beq ta2, zero, invalid_s # 0 / 0
b result_fs_s # result = zero
1:
jal renorm_fs_s
@@ -1076,8 +1076,8 @@ div_s:
subu t1, t1, SEXP_BIAS # unbias FS exponent
or t2, t2, SIMPL_ONE # set implied one bit
3:
- bne t5, zero, 2f # is FT zero?
- bne t6, zero, 1f
+ bne ta1, zero, 2f # is FT zero?
+ bne ta2, zero, 1f
or a1, a1, FPC_EXCEPTION_DIV0 | FPC_STICKY_DIV0
and v0, a1, FPC_ENABLE_DIV0 # trap enabled?
bne v0, zero, fpe_trap
@@ -1089,18 +1089,18 @@ div_s:
jal renorm_ft_s
b 3f
2:
- subu t5, t5, SEXP_BIAS # unbias FT exponent
- or t6, t6, SIMPL_ONE # set implied one bit
+ subu ta1, ta1, SEXP_BIAS # unbias FT exponent
+ or ta2, ta2, SIMPL_ONE # set implied one bit
3:
- subu t1, t1, t5 # compute exponent
+ subu t1, t1, ta1 # compute exponent
subu t1, t1, 3 # compensate for result position
li v0, SFRAC_BITS+3 # number of bits to divide
move t8, t2 # init dividend
move t2, zero # init result
1:
- bltu t8, t6, 3f # is dividend >= divisor?
+ bltu t8, ta2, 3f # is dividend >= divisor?
2:
- subu t8, t8, t6 # subtract divisor from dividend
+ subu t8, t8, ta2 # subtract divisor from dividend
or t2, t2, 1 # remember that we did
bne t8, zero, 3f # if not done, continue
sll t2, t2, v0 # shift result to final position
@@ -1117,19 +1117,19 @@ div_s:
*/
div_d:
jal get_ft_fs_d
- xor t0, t0, t4 # compute sign of result
- move t4, t0
+ xor t0, t0, ta0 # compute sign of result
+ move ta0, t0
bne t1, DEXP_INF, 1f # is FS an infinity?
bne t2, zero, result_fs_d # if FS is NAN, result is FS
bne t3, zero, result_fs_d
- bne t5, DEXP_INF, result_fs_d # is FT an infinity?
- bne t6, zero, result_ft_d # if FT is NAN, result is FT
- bne t7, zero, result_ft_d
+ bne ta1, DEXP_INF, result_fs_d # is FT an infinity?
+ bne ta2, zero, result_ft_d # if FT is NAN, result is FT
+ bne ta3, zero, result_ft_d
b invalid_d # infinity/infinity is invalid
1:
- bne t5, DEXP_INF, 1f # is FT an infinity?
- bne t6, zero, result_ft_d # if FT is NAN, result is FT
- bne t7, zero, result_ft_d
+ bne ta1, DEXP_INF, 1f # is FT an infinity?
+ bne ta2, zero, result_ft_d # if FT is NAN, result is FT
+ bne ta3, zero, result_ft_d
move t1, zero # x / infinity is zero
move t2, zero
move t3, zero
@@ -1138,9 +1138,9 @@ div_d:
bne t1, zero, 2f # is FS zero?
bne t2, zero, 1f
bne t3, zero, 1f
- bne t5, zero, result_fs_d # FS=zero, is FT zero?
- bne t6, zero, result_fs_d
- beq t7, zero, invalid_d # 0 / 0
+ bne ta1, zero, result_fs_d # FS=zero, is FT zero?
+ bne ta2, zero, result_fs_d
+ beq ta3, zero, invalid_d # 0 / 0
b result_fs_d # result = zero
1:
jal renorm_fs_d
@@ -1149,9 +1149,9 @@ div_d:
subu t1, t1, DEXP_BIAS # unbias FS exponent
or t2, t2, DIMPL_ONE # set implied one bit
3:
- bne t5, zero, 2f # is FT zero?
- bne t6, zero, 1f
- bne t7, zero, 1f
+ bne ta1, zero, 2f # is FT zero?
+ bne ta2, zero, 1f
+ bne ta3, zero, 1f
or a1, a1, FPC_EXCEPTION_DIV0 | FPC_STICKY_DIV0
and v0, a1, FPC_ENABLE_DIV0 # trap enabled?
bne v0, zero, fpe_trap
@@ -1164,10 +1164,10 @@ div_d:
jal renorm_ft_d
b 3f
2:
- subu t5, t5, DEXP_BIAS # unbias FT exponent
- or t6, t6, DIMPL_ONE # set implied one bit
+ subu ta1, ta1, DEXP_BIAS # unbias FT exponent
+ or ta2, ta2, DIMPL_ONE # set implied one bit
3:
- subu t1, t1, t5 # compute exponent
+ subu t1, t1, ta1 # compute exponent
subu t1, t1, 3 # compensate for result position
li v0, DFRAC_BITS+3 # number of bits to divide
move t8, t2 # init dividend
@@ -1175,13 +1175,13 @@ div_d:
move t2, zero # init result
move t3, zero
1:
- bltu t8, t6, 3f # is dividend >= divisor?
- bne t8, t6, 2f
- bltu t9, t7, 3f
+ bltu t8, ta2, 3f # is dividend >= divisor?
+ bne t8, ta2, 2f
+ bltu t9, ta3, 3f
2:
- sltu v1, t9, t7 # subtract divisor from dividend
- subu t9, t9, t7
- subu t8, t8, t6
+ sltu v1, t9, ta3 # subtract divisor from dividend
+ subu t9, t9, ta3
+ subu t8, t8, ta2
subu t8, t8, v1
or t3, t3, 1 # remember that we did
bne t8, zero, 3f # if not done, continue
@@ -1576,23 +1576,23 @@ cmp_s:
bne t1, SEXP_INF, 1f # is FS an infinity?
bne t2, zero, unordered # FS is a NAN
1:
- bne t5, SEXP_INF, 2f # is FT an infinity?
- bne t6, zero, unordered # FT is a NAN
+ bne ta1, SEXP_INF, 2f # is FT an infinity?
+ bne ta2, zero, unordered # FT is a NAN
2:
sll t1, t1, 23 # reassemble exp & frac
or t1, t1, t2
- sll t5, t5, 23 # reassemble exp & frac
- or t5, t5, t6
+ sll ta1, ta1, 23 # reassemble exp & frac
+ or ta1, ta1, ta2
beq t0, zero, 1f # is FS positive?
negu t1
1:
- beq t4, zero, 1f # is FT positive?
- negu t5
+ beq ta0, zero, 1f # is FT positive?
+ negu ta1
1:
li v0, COND_LESS
- blt t1, t5, test_cond # is FS < FT?
+ blt t1, ta1, test_cond # is FS < FT?
li v0, COND_EQUAL
- beq t1, t5, test_cond # is FS == FT?
+ beq t1, ta1, test_cond # is FS == FT?
move v0, zero # FS > FT
b test_cond
@@ -1605,14 +1605,14 @@ cmp_d:
bne t2, zero, unordered
bne t3, zero, unordered # FS is a NAN
1:
- bne t5, DEXP_INF, 2f # is FT an infinity?
- bne t6, zero, unordered
- bne t7, zero, unordered # FT is a NAN
+ bne ta1, DEXP_INF, 2f # is FT an infinity?
+ bne ta2, zero, unordered
+ bne ta3, zero, unordered # FT is a NAN
2:
sll t1, t1, 20 # reassemble exp & frac
or t1, t1, t2
- sll t5, t5, 20 # reassemble exp & frac
- or t5, t5, t6
+ sll ta1, ta1, 20 # reassemble exp & frac
+ or ta1, ta1, ta2
beq t0, zero, 1f # is FS positive?
not t3 # negate t1,t3
not t1
@@ -1620,21 +1620,21 @@ cmp_d:
seq v0, t3, zero # compute carry
addu t1, t1, v0
1:
- beq t4, zero, 1f # is FT positive?
- not t7 # negate t5,t7
- not t5
- addu t7, t7, 1
- seq v0, t7, zero # compute carry
- addu t5, t5, v0
+ beq ta0, zero, 1f # is FT positive?
+ not ta3 # negate ta1,ta3
+ not ta1
+ addu ta3, ta3, 1
+ seq v0, ta3, zero # compute carry
+ addu ta1, ta1, v0
1:
li v0, COND_LESS
- blt t1, t5, test_cond # is FS(MSW) < FT(MSW)?
+ blt t1, ta1, test_cond # is FS(MSW) < FT(MSW)?
move v0, zero
- bne t1, t5, test_cond # is FS(MSW) > FT(MSW)?
+ bne t1, ta1, test_cond # is FS(MSW) > FT(MSW)?
li v0, COND_LESS
- bltu t3, t7, test_cond # is FS(LSW) < FT(LSW)?
+ bltu t3, ta3, test_cond # is FS(LSW) < FT(LSW)?
li v0, COND_EQUAL
- beq t3, t7, test_cond # is FS(LSW) == FT(LSW)?
+ beq t3, ta3, test_cond # is FS(LSW) == FT(LSW)?
move v0, zero # FS > FT
test_cond:
and v0, v0, a0 # condition match instruction?
@@ -1725,8 +1725,8 @@ norm_s:
or t8, t8, v0
srl t2, t2, t9
norm_noshift_s:
- move t5, t1 # save unrounded exponent
- move t6, t2 # save unrounded fraction
+ move ta1, t1 # save unrounded exponent
+ move ta2, t2 # save unrounded fraction
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
beq v0, FPC_ROUND_RN, 3f # round to nearest
beq v0, FPC_ROUND_RZ, 5f # round to zero (truncate)
@@ -1826,8 +1826,8 @@ underflow_s:
* signal inexact result (if it is) and trap (if enabled).
*/
1:
- move t1, t5 # get unrounded exponent
- move t2, t6 # get unrounded fraction
+ move t1, ta1 # get unrounded exponent
+ move t2, ta2 # get unrounded fraction
li t9, SEXP_MIN # compute shift amount
subu t9, t9, t1 # shift t2,t8 right by t9
blt t9, SFRAC_BITS+2, 3f # shift all the bits out?
@@ -1841,7 +1841,7 @@ underflow_s:
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
beq v0, FPC_ROUND_RN, inexact_nobias_s # round to nearest
beq v0, FPC_ROUND_RZ, inexact_nobias_s # round to zero
- beq v0, FPC_ROUND_RP, 1f # round to +infinity
+ beq v0, FPC_ROUND_RP, 1f # round to +infinity
beq t0, zero, inexact_nobias_s # if sign is positive, truncate
b 2f
1:
@@ -1970,9 +1970,9 @@ norm_d:
or t3, t3, v0
srl t2, t2, t9
norm_noshift_d:
- move t5, t1 # save unrounded exponent
- move t6, t2 # save unrounded fraction (MS)
- move t7, t3 # save unrounded fraction (LS)
+ move ta1, t1 # save unrounded exponent
+ move ta2, t2 # save unrounded fraction (MS)
+ move ta3, t3 # save unrounded fraction (LS)
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
beq v0, FPC_ROUND_RN, 3f # round to nearest
beq v0, FPC_ROUND_RZ, 5f # round to zero (truncate)
@@ -2078,9 +2078,9 @@ underflow_d:
* signal inexact result (if it is) and trap (if enabled).
*/
1:
- move t1, t5 # get unrounded exponent
- move t2, t6 # get unrounded fraction (MS)
- move t3, t7 # get unrounded fraction (LS)
+ move t1, ta1 # get unrounded exponent
+ move t2, ta2 # get unrounded fraction (MS)
+ move t3, ta3 # get unrounded fraction (LS)
li t9, DEXP_MIN # compute shift amount
subu t9, t9, t1 # shift t2,t8 right by t9
blt t9, DFRAC_BITS+2, 3f # shift all the bits out?
@@ -2095,7 +2095,7 @@ underflow_d:
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
beq v0, FPC_ROUND_RN, inexact_nobias_d # round to nearest
beq v0, FPC_ROUND_RZ, inexact_nobias_d # round to zero
- beq v0, FPC_ROUND_RP, 1f # round to +infinity
+ beq v0, FPC_ROUND_RP, 1f # round to +infinity
beq t0, zero, inexact_nobias_d # if sign is positive, truncate
b 2f
1:
@@ -2128,9 +2128,9 @@ underflow_d:
*/
2:
and v0, a1, FPC_ROUNDING_BITS # get rounding mode
- beq v0, FPC_ROUND_RN, 3f # round to nearest
- beq v0, FPC_ROUND_RZ, 5f # round to zero (truncate)
- beq v0, FPC_ROUND_RP, 1f # round to +infinity
+ beq v0, FPC_ROUND_RN, 3f # round to nearest
+ beq v0, FPC_ROUND_RZ, 5f # round to zero (truncate)
+ beq v0, FPC_ROUND_RP, 1f # round to +infinity
beq t0, zero, 5f # if sign is positive, truncate
b 2f
1:
@@ -2227,9 +2227,9 @@ ill:
break 0
result_ft_s:
- move t0, t4 # result is FT
- move t1, t5
- move t2, t6
+ move t0, ta0 # result is FT
+ move t1, ta1
+ move t2, ta2
result_fs_s: # result is FS
jal set_fd_s # save result (in t0,t1,t2)
b done
@@ -2239,10 +2239,10 @@ result_fs_w:
b done
result_ft_d:
- move t0, t4 # result is FT
- move t1, t5
- move t2, t6
- move t3, t7
+ move t0, ta0 # result is FT
+ move t1, ta1
+ move t2, ta2
+ move t3, ta3
result_fs_d: # result is FS
jal set_fd_d # save result (in t0,t1,t2,t3)
@@ -2356,9 +2356,9 @@ END(get_fs_int)
* t0 contains the FS sign
* t1 contains the FS (biased) exponent
* t2 contains the FS fraction
- * t4 contains the FT sign
- * t5 contains the FT (biased) exponent
- * t6 contains the FT fraction
+ * ta0 contains the FT sign
+ * ta1 contains the FT (biased) exponent
+ * ta2 contains the FT fraction
*
*----------------------------------------------------------------------------
*/
@@ -2389,59 +2389,59 @@ get_ft_s_tbl:
.text
get_ft_s_f0:
- mfc1 t4, $f0
+ mfc1 ta0, $f0
b get_ft_s_done
get_ft_s_f2:
- mfc1 t4, $f2
+ mfc1 ta0, $f2
b get_ft_s_done
get_ft_s_f4:
- mfc1 t4, $f4
+ mfc1 ta0, $f4
b get_ft_s_done
get_ft_s_f6:
- mfc1 t4, $f6
+ mfc1 ta0, $f6
b get_ft_s_done
get_ft_s_f8:
- mfc1 t4, $f8
+ mfc1 ta0, $f8
b get_ft_s_done
get_ft_s_f10:
- mfc1 t4, $f10
+ mfc1 ta0, $f10
b get_ft_s_done
get_ft_s_f12:
- mfc1 t4, $f12
+ mfc1 ta0, $f12
b get_ft_s_done
get_ft_s_f14:
- mfc1 t4, $f14
+ mfc1 ta0, $f14
b get_ft_s_done
get_ft_s_f16:
- mfc1 t4, $f16
+ mfc1 ta0, $f16
b get_ft_s_done
get_ft_s_f18:
- mfc1 t4, $f18
+ mfc1 ta0, $f18
b get_ft_s_done
get_ft_s_f20:
- mfc1 t4, $f20
+ mfc1 ta0, $f20
b get_ft_s_done
get_ft_s_f22:
- mfc1 t4, $f22
+ mfc1 ta0, $f22
b get_ft_s_done
get_ft_s_f24:
- mfc1 t4, $f24
+ mfc1 ta0, $f24
b get_ft_s_done
get_ft_s_f26:
- mfc1 t4, $f26
+ mfc1 ta0, $f26
b get_ft_s_done
get_ft_s_f28:
- mfc1 t4, $f28
+ mfc1 ta0, $f28
b get_ft_s_done
get_ft_s_f30:
- mfc1 t4, $f30
+ mfc1 ta0, $f30
get_ft_s_done:
- srl t5, t4, 23 # get exponent
- and t5, t5, 0xFF
- and t6, t4, 0x7FFFFF # get fraction
- srl t4, t4, 31 # get sign
- bne t5, SEXP_INF, 1f # is it a signaling NAN?
- and v0, t6, SSIGNAL_NAN
+ srl ta1, ta0, 23 # get exponent
+ and ta1, ta1, 0xFF
+ and ta2, ta0, 0x7FFFFF # get fraction
+ srl ta0, ta0, 31 # get sign
+ bne ta1, SEXP_INF, 1f # is it a signaling NAN?
+ and v0, ta2, SSIGNAL_NAN
bne v0, zero, invalid_s
1:
/* fall through to get FS */
@@ -2557,10 +2557,10 @@ END(get_ft_fs_s)
* t1 contains the FS (biased) exponent
* t2 contains the FS fraction
* t3 contains the FS remaining fraction
- * t4 contains the FT sign
- * t5 contains the FT (biased) exponent
- * t6 contains the FT fraction
- * t7 contains the FT remaining fraction
+ * ta0 contains the FT sign
+ * ta1 contains the FT (biased) exponent
+ * ta2 contains the FT fraction
+ * ta3 contains the FT remaining fraction
*
*----------------------------------------------------------------------------
*/
@@ -2591,75 +2591,75 @@ get_ft_d_tbl:
.text
get_ft_d_f0:
- mfc1 t7, $f0
- mfc1 t4, $f1
+ mfc1 ta3, $f0
+ mfc1 ta0, $f1
b get_ft_d_done
get_ft_d_f2:
- mfc1 t7, $f2
- mfc1 t4, $f3
+ mfc1 ta3, $f2
+ mfc1 ta0, $f3
b get_ft_d_done
get_ft_d_f4:
- mfc1 t7, $f4
- mfc1 t4, $f5
+ mfc1 ta3, $f4
+ mfc1 ta0, $f5
b get_ft_d_done
get_ft_d_f6:
- mfc1 t7, $f6
- mfc1 t4, $f7
+ mfc1 ta3, $f6
+ mfc1 ta0, $f7
b get_ft_d_done
get_ft_d_f8:
- mfc1 t7, $f8
- mfc1 t4, $f9
+ mfc1 ta3, $f8
+ mfc1 ta0, $f9
b get_ft_d_done
get_ft_d_f10:
- mfc1 t7, $f10
- mfc1 t4, $f11
+ mfc1 ta3, $f10
+ mfc1 ta0, $f11
b get_ft_d_done
get_ft_d_f12:
- mfc1 t7, $f12
- mfc1 t4, $f13
+ mfc1 ta3, $f12
+ mfc1 ta0, $f13
b get_ft_d_done
get_ft_d_f14:
- mfc1 t7, $f14
- mfc1 t4, $f15
+ mfc1 ta3, $f14
+ mfc1 ta0, $f15
b get_ft_d_done
get_ft_d_f16:
- mfc1 t7, $f16
- mfc1 t4, $f17
+ mfc1 ta3, $f16
+ mfc1 ta0, $f17
b get_ft_d_done
get_ft_d_f18:
- mfc1 t7, $f18
- mfc1 t4, $f19
+ mfc1 ta3, $f18
+ mfc1 ta0, $f19
b get_ft_d_done
get_ft_d_f20:
- mfc1 t7, $f20
- mfc1 t4, $f21
+ mfc1 ta3, $f20
+ mfc1 ta0, $f21
b get_ft_d_done
get_ft_d_f22:
- mfc1 t7, $f22
- mfc1 t4, $f23
+ mfc1 ta3, $f22
+ mfc1 ta0, $f23
b get_ft_d_done
get_ft_d_f24:
- mfc1 t7, $f24
- mfc1 t4, $f25
+ mfc1 ta3, $f24
+ mfc1 ta0, $f25
b get_ft_d_done
get_ft_d_f26:
- mfc1 t7, $f26
- mfc1 t4, $f27
+ mfc1 ta3, $f26
+ mfc1 ta0, $f27
b get_ft_d_done
get_ft_d_f28:
- mfc1 t7, $f28
- mfc1 t4, $f29
+ mfc1 ta3, $f28
+ mfc1 ta0, $f29
b get_ft_d_done
get_ft_d_f30:
- mfc1 t7, $f30
- mfc1 t4, $f31
+ mfc1 ta3, $f30
+ mfc1 ta0, $f31
get_ft_d_done:
- srl t5, t4, 20 # get exponent
- and t5, t5, 0x7FF
- and t6, t4, 0xFFFFF # get fraction
- srl t4, t4, 31 # get sign
- bne t5, DEXP_INF, 1f # is it a signaling NAN?
- and v0, t6, DSIGNAL_NAN
+ srl ta1, ta0, 20 # get exponent
+ and ta1, ta1, 0x7FF
+ and ta2, ta0, 0xFFFFF # get fraction
+ srl ta0, ta0, 31 # get sign
+ bne ta1, DEXP_INF, 1f # is it a signaling NAN?
+ and v0, ta2, DSIGNAL_NAN
bne v0, zero, invalid_d
1:
/* fall through to get FS */
@@ -2791,9 +2791,9 @@ END(get_ft_fs_d)
* t0 contains the sign
* t1 contains the (biased) exponent
* t2 contains the fraction
- * t4 contains the sign
- * t5 contains the (biased) exponent
- * t6 contains the fraction
+ * ta0 contains the sign
+ * ta1 contains the (biased) exponent
+ * ta2 contains the fraction
*
*----------------------------------------------------------------------------
*/
@@ -2902,57 +2902,57 @@ cmp_ft_s_tbl:
.text
cmp_ft_s_f0:
- mfc1 t4, $f0
+ mfc1 ta0, $f0
b cmp_ft_s_done
cmp_ft_s_f2:
- mfc1 t4, $f2
+ mfc1 ta0, $f2
b cmp_ft_s_done
cmp_ft_s_f4:
- mfc1 t4, $f4
+ mfc1 ta0, $f4
b cmp_ft_s_done
cmp_ft_s_f6:
- mfc1 t4, $f6
+ mfc1 ta0, $f6
b cmp_ft_s_done
cmp_ft_s_f8:
- mfc1 t4, $f8
+ mfc1 ta0, $f8
b cmp_ft_s_done
cmp_ft_s_f10:
- mfc1 t4, $f10
+ mfc1 ta0, $f10
b cmp_ft_s_done
cmp_ft_s_f12:
- mfc1 t4, $f12
+ mfc1 ta0, $f12
b cmp_ft_s_done
cmp_ft_s_f14:
- mfc1 t4, $f14
+ mfc1 ta0, $f14
b cmp_ft_s_done
cmp_ft_s_f16:
- mfc1 t4, $f16
+ mfc1 ta0, $f16
b cmp_ft_s_done
cmp_ft_s_f18:
- mfc1 t4, $f18
+ mfc1 ta0, $f18
b cmp_ft_s_done
cmp_ft_s_f20:
- mfc1 t4, $f20
+ mfc1 ta0, $f20
b cmp_ft_s_done
cmp_ft_s_f22:
- mfc1 t4, $f22
+ mfc1 ta0, $f22
b cmp_ft_s_done
cmp_ft_s_f24:
- mfc1 t4, $f24
+ mfc1 ta0, $f24
b cmp_ft_s_done
cmp_ft_s_f26:
- mfc1 t4, $f26
+ mfc1 ta0, $f26
b cmp_ft_s_done
cmp_ft_s_f28:
- mfc1 t4, $f28
+ mfc1 ta0, $f28
b cmp_ft_s_done
cmp_ft_s_f30:
- mfc1 t4, $f30
+ mfc1 ta0, $f30
cmp_ft_s_done:
- srl t5, t4, 23 # get exponent
- and t5, t5, 0xFF
- and t6, t4, 0x7FFFFF # get fraction
- srl t4, t4, 31 # get sign
+ srl ta1, ta0, 23 # get exponent
+ and ta1, ta1, 0xFF
+ and ta2, ta0, 0x7FFFFF # get fraction
+ srl ta0, ta0, 31 # get sign
j ra
END(get_cmp_s)
@@ -2968,10 +2968,10 @@ END(get_cmp_s)
* t1 contains the (biased) exponent
* t2 contains the fraction
* t3 contains the remaining fraction
- * t4 contains the sign
- * t5 contains the (biased) exponent
- * t6 contains the fraction
- * t7 contains the remaining fraction
+ * ta0 contains the sign
+ * ta1 contains the (biased) exponent
+ * ta2 contains the fraction
+ * ta3 contains the remaining fraction
*
*----------------------------------------------------------------------------
*/
@@ -3096,73 +3096,73 @@ cmp_ft_d_tbl:
.text
cmp_ft_d_f0:
- mfc1 t7, $f0
- mfc1 t4, $f1
+ mfc1 ta3, $f0
+ mfc1 ta0, $f1
b cmp_ft_d_done
cmp_ft_d_f2:
- mfc1 t7, $f2
- mfc1 t4, $f3
+ mfc1 ta3, $f2
+ mfc1 ta0, $f3
b cmp_ft_d_done
cmp_ft_d_f4:
- mfc1 t7, $f4
- mfc1 t4, $f5
+ mfc1 ta3, $f4
+ mfc1 ta0, $f5
b cmp_ft_d_done
cmp_ft_d_f6:
- mfc1 t7, $f6
- mfc1 t4, $f7
+ mfc1 ta3, $f6
+ mfc1 ta0, $f7
b cmp_ft_d_done
cmp_ft_d_f8:
- mfc1 t7, $f8
- mfc1 t4, $f9
+ mfc1 ta3, $f8
+ mfc1 ta0, $f9
b cmp_ft_d_done
cmp_ft_d_f10:
- mfc1 t7, $f10
- mfc1 t4, $f11
+ mfc1 ta3, $f10
+ mfc1 ta0, $f11
b cmp_ft_d_done
cmp_ft_d_f12:
- mfc1 t7, $f12
- mfc1 t4, $f13
+ mfc1 ta3, $f12
+ mfc1 ta0, $f13
b cmp_ft_d_done
cmp_ft_d_f14:
- mfc1 t7, $f14
- mfc1 t4, $f15
+ mfc1 ta3, $f14
+ mfc1 ta0, $f15
b cmp_ft_d_done
cmp_ft_d_f16:
- mfc1 t7, $f16
- mfc1 t4, $f17
+ mfc1 ta3, $f16
+ mfc1 ta0, $f17
b cmp_ft_d_done
cmp_ft_d_f18:
- mfc1 t7, $f18
- mfc1 t4, $f19
+ mfc1 ta3, $f18
+ mfc1 ta0, $f19
b cmp_ft_d_done
cmp_ft_d_f20:
- mfc1 t7, $f20
- mfc1 t4, $f21
+ mfc1 ta3, $f20
+ mfc1 ta0, $f21
b cmp_ft_d_done
cmp_ft_d_f22:
- mfc1 t7, $f22
- mfc1 t4, $f23
+ mfc1 ta3, $f22
+ mfc1 ta0, $f23
b cmp_ft_d_done
cmp_ft_d_f24:
- mfc1 t7, $f24
- mfc1 t4, $f25
+ mfc1 ta3, $f24
+ mfc1 ta0, $f25
b cmp_ft_d_done
cmp_ft_d_f26:
- mfc1 t7, $f26
- mfc1 t4, $f27
+ mfc1 ta3, $f26
+ mfc1 ta0, $f27
b cmp_ft_d_done
cmp_ft_d_f28:
- mfc1 t7, $f28
- mfc1 t4, $f29
+ mfc1 ta3, $f28
+ mfc1 ta0, $f29
b cmp_ft_d_done
cmp_ft_d_f30:
- mfc1 t7, $f30
- mfc1 t4, $f31
+ mfc1 ta3, $f30
+ mfc1 ta0, $f31
cmp_ft_d_done:
- srl t5, t4, 20 # get exponent
- and t5, t5, 0x7FF
- and t6, t4, 0xFFFFF # get fraction
- srl t4, t4, 31 # get sign
+ srl ta1, ta0, 20 # get exponent
+ and ta1, ta1, 0x7FF
+ and ta2, ta0, 0xFFFFF # get fraction
+ srl ta0, ta0, 31 # get sign
j ra
END(get_cmp_d)
@@ -3498,16 +3498,16 @@ END(renorm_fs_d)
* renorm_ft_s --
*
* Results:
- * t5 unbiased exponent
- * t6 normalized fraction
+ * ta1 unbiased exponent
+ * ta2 normalized fraction
*
*----------------------------------------------------------------------------
*/
LEAF(renorm_ft_s)
/*
- * Find out how many leading zero bits are in t6 and put in t9.
+ * Find out how many leading zero bits are in ta2 and put in t9.
*/
- move v0, t6
+ move v0, ta2
move t9, zero
srl v1, v0, 16
bne v1, zero, 1f
@@ -3533,13 +3533,13 @@ LEAF(renorm_ft_s)
bne v1, zero, 1f
addu t9, 1
/*
- * Now shift t6 the correct number of bits.
+ * Now shift ta2 the correct number of bits.
*/
1:
subu t9, t9, SLEAD_ZEROS # dont count normal leading zeros
- li t5, SEXP_MIN
- subu t5, t5, t9 # adjust exponent
- sll t6, t6, t9
+ li ta1, SEXP_MIN
+ subu ta1, ta1, t9 # adjust exponent
+ sll ta2, ta2, t9
j ra
END(renorm_ft_s)
@@ -3547,19 +3547,19 @@ END(renorm_ft_s)
* renorm_ft_d --
*
* Results:
- * t5 unbiased exponent
- * t6,t7 normalized fraction
+ * ta1 unbiased exponent
+ * ta2,ta3 normalized fraction
*
*----------------------------------------------------------------------------
*/
LEAF(renorm_ft_d)
/*
- * Find out how many leading zero bits are in t6,t7 and put in t9.
+ * Find out how many leading zero bits are in ta2,ta3 and put in t9.
*/
- move v0, t6
+ move v0, ta2
move t9, zero
- bne t6, zero, 1f
- move v0, t7
+ bne ta2, zero, 1f
+ move v0, ta3
addu t9, 32
1:
srl v1, v0, 16
@@ -3586,23 +3586,23 @@ LEAF(renorm_ft_d)
bne v1, zero, 1f
addu t9, 1
/*
- * Now shift t6,t7 the correct number of bits.
+ * Now shift ta2,ta3 the correct number of bits.
*/
1:
subu t9, t9, DLEAD_ZEROS # dont count normal leading zeros
- li t5, DEXP_MIN
- subu t5, t5, t9 # adjust exponent
+ li ta1, DEXP_MIN
+ subu ta1, ta1, t9 # adjust exponent
li v0, 32
blt t9, v0, 1f
subu t9, t9, v0 # shift fraction left >= 32 bits
- sll t6, t7, t9
- move t7, zero
+ sll ta2, ta3, t9
+ move ta3, zero
j ra
1:
subu v0, v0, t9 # shift fraction left < 32 bits
- sll t6, t6, t9
- srl v1, t7, v0
- or t6, t6, v1
- sll t7, t7, t9
+ sll ta2, ta2, t9
+ srl v1, ta3, v0
+ or ta2, ta2, v1
+ sll ta3, ta3, t9
j ra
END(renorm_ft_d)
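The fp.S hunks above only rename the emulator's FT scratch registers from t4-t7 to the ABI-neutral aliases ta0-ta3; no instruction sequence changes. A minimal sketch of the alias mapping this rename assumes, in the style of a regdef.h fragment (the exact header contents are not part of this diff and are an assumption here):

	#if defined(__mips_n32) || defined(__mips_n64)
	#define	ta0	$8	/* resolves to a4 under n32/n64 */
	#define	ta1	$9	/* a5 */
	#define	ta2	$10	/* a6 */
	#define	ta3	$11	/* a7 */
	#else
	#define	ta0	$12	/* same register as the old t4 under o32 */
	#define	ta1	$13	/* t5 */
	#define	ta2	$14	/* t6 */
	#define	ta3	$15	/* t7 */
	#endif

Under o32 the generated code is therefore unchanged, while n32/n64 builds get registers that exist under those ABIs instead of the t4-t7 names they no longer define.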
diff --git a/sys/mips/mips/gdb_machdep.c b/sys/mips/mips/gdb_machdep.c
index ae77e6b..0f5b5ed 100644
--- a/sys/mips/mips/gdb_machdep.c
+++ b/sys/mips/mips/gdb_machdep.c
@@ -126,17 +126,17 @@ gdb_cpu_getreg(int regnum, size_t *regsz)
}
}
switch (regnum) {
- case 16: return (&kdb_thrctx->pcb_context.val[0]);
- case 17: return (&kdb_thrctx->pcb_context.val[1]);
- case 18: return (&kdb_thrctx->pcb_context.val[2]);
- case 19: return (&kdb_thrctx->pcb_context.val[3]);
- case 20: return (&kdb_thrctx->pcb_context.val[4]);
- case 21: return (&kdb_thrctx->pcb_context.val[5]);
- case 22: return (&kdb_thrctx->pcb_context.val[6]);
- case 23: return (&kdb_thrctx->pcb_context.val[7]);
- case 29: return (&kdb_thrctx->pcb_context.val[8]);
- case 30: return (&kdb_thrctx->pcb_context.val[9]);
- case 31: return (&kdb_thrctx->pcb_context.val[10]);
+ case 16: return (&kdb_thrctx->pcb_context[0]);
+ case 17: return (&kdb_thrctx->pcb_context[1]);
+ case 18: return (&kdb_thrctx->pcb_context[2]);
+ case 19: return (&kdb_thrctx->pcb_context[3]);
+ case 20: return (&kdb_thrctx->pcb_context[4]);
+ case 21: return (&kdb_thrctx->pcb_context[5]);
+ case 22: return (&kdb_thrctx->pcb_context[6]);
+ case 23: return (&kdb_thrctx->pcb_context[7]);
+ case 29: return (&kdb_thrctx->pcb_context[8]);
+ case 30: return (&kdb_thrctx->pcb_context[9]);
+ case 31: return (&kdb_thrctx->pcb_context[10]);
}
return (NULL);
}
@@ -146,7 +146,7 @@ gdb_cpu_setreg(int regnum, void *val)
{
switch (regnum) {
case GDB_REG_PC:
- kdb_thrctx->pcb_context.val[10] = *(register_t *)val;
+ kdb_thrctx->pcb_context[10] = *(register_t *)val;
if (kdb_thread == PCPU_GET(curthread))
kdb_frame->pc = *(register_t *)val;
}
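The gdb_machdep.c hunks only track the change of pcb_context from a structure wrapping a val[] array to a plain array; the slot assignments are unchanged. As a hedged reading aid, this sketch mirrors the same regnum-to-slot mapping (the register names follow standard MIPS GPR numbering and are not spelled out in the diff itself):

	/* Mirrors the switch above: GDB regnum -> pcb_context[] slot. */
	static int
	gdb_regnum_to_pcb_slot(int regnum)
	{
		if (regnum >= 16 && regnum <= 23)	/* s0-s7 */
			return (regnum - 16);
		if (regnum == 29)			/* sp */
			return (8);
		if (regnum == 30)			/* s8/fp */
			return (9);
		if (regnum == 31)			/* ra */
			return (10);
		return (-1);				/* not held in the PCB */
	}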
diff --git a/sys/mips/mips/genassym.c b/sys/mips/mips/genassym.c
index 90974c7..81ed9df 100644
--- a/sys/mips/mips/genassym.c
+++ b/sys/mips/mips/genassym.c
@@ -69,6 +69,7 @@ ASSYM(TD_REALKSTACK, offsetof(struct thread, td_md.md_realstack));
ASSYM(TD_FLAGS, offsetof(struct thread, td_flags));
ASSYM(TD_LOCK, offsetof(struct thread, td_lock));
ASSYM(TD_FRAME, offsetof(struct thread, td_frame));
+ASSYM(TD_TLS, offsetof(struct thread, td_md.md_tls));
ASSYM(TF_REG_SR, offsetof(struct trapframe, sr));
@@ -90,6 +91,7 @@ ASSYM(VM_MAXUSER_ADDRESS, VM_MAXUSER_ADDRESS);
ASSYM(VM_KERNEL_ALLOC_OFFSET, VM_KERNEL_ALLOC_OFFSET);
ASSYM(SIGF_UC, offsetof(struct sigframe, sf_uc));
ASSYM(SIGFPE, SIGFPE);
+ASSYM(PAGE_SHIFT, PAGE_SHIFT);
ASSYM(PGSHIFT, PGSHIFT);
ASSYM(NBPG, NBPG);
ASSYM(SEGSHIFT, SEGSHIFT);
@@ -97,3 +99,4 @@ ASSYM(NPTEPG, NPTEPG);
ASSYM(TDF_NEEDRESCHED, TDF_NEEDRESCHED);
ASSYM(TDF_ASTPENDING, TDF_ASTPENDING);
ASSYM(PCPU_SIZE, sizeof(struct pcpu));
+ASSYM(MAXCOMLEN, MAXCOMLEN);
diff --git a/sys/mips/mips/in_cksum.c b/sys/mips/mips/in_cksum.c
index f0c95d9..31bcd3e 100644
--- a/sys/mips/mips/in_cksum.c
+++ b/sys/mips/mips/in_cksum.c
@@ -226,7 +226,7 @@ skip_start:
if (len < mlen)
mlen = len;
- if ((clen ^ (int) addr) & 1)
+ if ((clen ^ (uintptr_t) addr) & 1)
sum += in_cksumdata(addr, mlen) << 8;
else
sum += in_cksumdata(addr, mlen);
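The in_cksum.c change replaces an (int) cast of a pointer with (uintptr_t) before testing its low bit; on a 64-bit build the int cast truncates the pointer and draws warnings, while uintptr_t preserves the value. A minimal, self-contained illustration of the idiom:

	#include <stdint.h>

	/* Non-zero when the buffer starts on an odd address; the checksum
	 * loop performs this kind of parity test on (clen ^ addr) to decide
	 * whether the partial sum must be byte-swapped. */
	static int
	addr_is_odd(const void *p)
	{
		return (int)((uintptr_t)p & 1);
	}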
diff --git a/sys/mips/mips/intr_machdep.c b/sys/mips/mips/intr_machdep.c
index cca27ba..530cc08 100644
--- a/sys/mips/mips/intr_machdep.c
+++ b/sys/mips/mips/intr_machdep.c
@@ -46,23 +46,85 @@ __FBSDID("$FreeBSD$");
static struct intr_event *hardintr_events[NHARD_IRQS];
static struct intr_event *softintr_events[NSOFT_IRQS];
+static mips_intrcnt_t mips_intr_counters[NSOFT_IRQS + NHARD_IRQS];
-#ifdef notyet
-static int intrcnt_tab[NHARD_IRQS + NSOFT_IRQS];
-static int intrcnt_index = 0;
-static int last_printed = 0;
-#endif
+static int intrcnt_index;
+
+mips_intrcnt_t
+mips_intrcnt_create(const char* name)
+{
+ mips_intrcnt_t counter = &intrcnt[intrcnt_index++];
+
+ mips_intrcnt_setname(counter, name);
+ return counter;
+}
void
-mips_mask_irq(void)
+mips_intrcnt_setname(mips_intrcnt_t counter, const char *name)
+{
+ int idx = counter - intrcnt;
+
+ KASSERT(counter != NULL, ("mips_intrcnt_setname: NULL counter"));
+
+ snprintf(intrnames + (MAXCOMLEN + 1) * idx,
+ MAXCOMLEN + 1, "%-*s", MAXCOMLEN, name);
+}
+
+static void
+mips_mask_hard_irq(void *source)
+{
+ uintptr_t irq = (uintptr_t)source;
+
+ mips_wr_status(mips_rd_status() & ~(((1 << irq) << 8) << 2));
+}
+
+static void
+mips_unmask_hard_irq(void *source)
+{
+ uintptr_t irq = (uintptr_t)source;
+
+ mips_wr_status(mips_rd_status() | (((1 << irq) << 8) << 2));
+}
+
+static void
+mips_mask_soft_irq(void *source)
{
+ uintptr_t irq = (uintptr_t)source;
+ mips_wr_status(mips_rd_status() & ~((1 << irq) << 8));
}
+static void
+mips_unmask_soft_irq(void *source)
+{
+ uintptr_t irq = (uintptr_t)source;
+
+ mips_wr_status(mips_rd_status() | ((1 << irq) << 8));
+}
+
+/*
+ * Perform initialization of interrupts prior to setting up
+ * the interrupt handlers.
+ */
void
-mips_unmask_irq(void)
+cpu_init_interrupts()
{
+ int i;
+ char name[MAXCOMLEN + 1];
+ /*
+ * Initialize all available vectors so that spare IRQs
+ * show up in the systat output
+ */
+ for (i = 0; i < NSOFT_IRQS; i++) {
+ snprintf(name, MAXCOMLEN + 1, "sint%d:", i);
+ mips_intr_counters[i] = mips_intrcnt_create(name);
+ }
+
+ for (i = 0; i < NHARD_IRQS; i++) {
+ snprintf(name, MAXCOMLEN + 1, "int%d:", i);
+ mips_intr_counters[NSOFT_IRQS + i] = mips_intrcnt_create(name);
+ }
}
void
@@ -72,8 +134,10 @@ cpu_establish_hardintr(const char *name, driver_filter_t *filt,
struct intr_event *event;
int error;
+#if 0
printf("Establish HARD IRQ %d: filt %p handler %p arg %p\n",
irq, filt, handler, arg);
+#endif
/*
* We have 6 levels, but that's 0 - 5 (not including 6)
*/
@@ -82,26 +146,20 @@ cpu_establish_hardintr(const char *name, driver_filter_t *filt,
event = hardintr_events[irq];
if (event == NULL) {
- error = intr_event_create(&event, (void *)irq, 0, irq,
- (mask_fn)mips_mask_irq, (mask_fn)mips_unmask_irq,
- NULL, NULL, "hard intr%d:", irq);
+ error = intr_event_create(&event, (void *)(uintptr_t)irq, 0,
+ irq, mips_mask_hard_irq, mips_unmask_hard_irq,
+ NULL, NULL, "int%d", irq);
if (error)
return;
hardintr_events[irq] = event;
-#ifdef notyet
- last_printed += snprintf(intrnames + last_printed,
- MAXCOMLEN + 1, "hard irq%d: %s", irq, name);
- last_printed++;
- intrcnt_tab[irq] = intrcnt_index;
- intrcnt_index++;
-#endif
-
}
intr_event_add_handler(event, name, filt, handler, arg,
intr_priority(flags), flags, cookiep);
- mips_wr_status(mips_rd_status() | (((1 << irq) << 8) << 2));
+ mips_intrcnt_setname(mips_intr_counters[NSOFT_IRQS + irq], event->ie_fullname);
+
+ mips_unmask_hard_irq((void*)(uintptr_t)irq);
}
void
@@ -112,16 +170,18 @@ cpu_establish_softintr(const char *name, driver_filter_t *filt,
struct intr_event *event;
int error;
+#if 0
printf("Establish SOFT IRQ %d: filt %p handler %p arg %p\n",
irq, filt, handler, arg);
+#endif
if (irq < 0 || irq > NSOFT_IRQS)
panic("%s called for unknown hard intr %d", __func__, irq);
event = softintr_events[irq];
if (event == NULL) {
- error = intr_event_create(&event, (void *)irq, 0, irq,
- (mask_fn)mips_mask_irq, (mask_fn)mips_unmask_irq,
- NULL, NULL, "intr%d:", irq);
+ error = intr_event_create(&event, (void *)(uintptr_t)irq, 0,
+ irq, mips_mask_soft_irq, mips_unmask_soft_irq,
+ NULL, NULL, "sint%d:", irq);
if (error)
return;
softintr_events[irq] = event;
@@ -130,22 +190,29 @@ cpu_establish_softintr(const char *name, driver_filter_t *filt,
intr_event_add_handler(event, name, filt, handler, arg,
intr_priority(flags), flags, cookiep);
- mips_wr_status(mips_rd_status() | (((1<< irq) << 8)));
+ mips_intrcnt_setname(mips_intr_counters[irq], event->ie_fullname);
+
+ mips_unmask_soft_irq((void*)(uintptr_t)irq);
}
void
cpu_intr(struct trapframe *tf)
{
struct intr_event *event;
- register_t cause;
+ register_t cause, status;
int hard, i, intr;
critical_enter();
cause = mips_rd_cause();
+ status = mips_rd_status();
intr = (cause & MIPS_INT_MASK) >> 8;
- cause &= ~MIPS_INT_MASK;
- mips_wr_cause(cause);
+ /*
+ * Do not handle masked interrupts. They were masked by
+ * the pre_ithread function (mips_mask_{hard,soft}_irq) and will be
+ * unmasked once the ithread is through with the handler
+ */
+ intr &= (status & MIPS_INT_MASK) >> 8;
while ((i = fls(intr)) != 0) {
intr &= ~(1 << (i - 1));
switch (i) {
@@ -154,6 +221,7 @@ cpu_intr(struct trapframe *tf)
i--; /* Get a 0-offset interrupt. */
hard = 0;
event = softintr_events[i];
+ mips_intrcnt_inc(mips_intr_counters[i]);
break;
default:
/* Hardware interrupt. */
@@ -161,6 +229,7 @@ cpu_intr(struct trapframe *tf)
i--; /* Get a 0-offset interrupt. */
hard = 1;
event = hardintr_events[i];
+ mips_intrcnt_inc(mips_intr_counters[NSOFT_IRQS + i]);
break;
}
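The new mask/unmask helpers in intr_machdep.c encode the MIPS convention that the CP0 Status interrupt-mask field starts at bit 8, with the two software interrupts in IM0-IM1 and the hardware interrupts in IM2-IM7; that is where the extra << 2 for hardware IRQs comes from. A minimal sketch of the same arithmetic:

	#include <stdint.h>

	static inline uint32_t
	soft_irq_bit(unsigned irq)	/* irq in 0..NSOFT_IRQS-1 */
	{
		return ((1u << irq) << 8);		/* IM0, IM1 */
	}

	static inline uint32_t
	hard_irq_bit(unsigned irq)	/* irq in 0..NHARD_IRQS-1 */
	{
		return (((1u << irq) << 8) << 2);	/* IM2..IM7 */
	}

cpu_intr() relies on the same layout when it filters the pending bits with (status & MIPS_INT_MASK) >> 8, so interrupts already masked by the pre_ithread hook are not redispatched.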
diff --git a/sys/mips/mips/locore.S b/sys/mips/mips/locore.S
index afcabd4..11d9cdc 100644
--- a/sys/mips/mips/locore.S
+++ b/sys/mips/mips/locore.S
@@ -77,16 +77,6 @@
GLOBAL(fenvp)
.space 4 # Assumes mips32? Is that OK?
#endif
-#ifdef CFE /* Assumes MIPS32, bad? */
-GLOBAL(cfe_handle)
- .space 4
-GLOBAL(cfe_vector)
- .space 4
-#endif
-#if defined(TARGET_OCTEON)
-GLOBAL(app_descriptor_addr)
- .space 8
-#endif
GLOBAL(stackspace)
.space NBPG /* Smaller than it should be since it's temp. */
.align 8
@@ -138,14 +128,18 @@ VECTOR(_locore, unknown)
mtc0 t2, COP_0_STATUS_REG
COP0_SYNC
/* Make sure KSEG0 is cached */
+#ifdef CPU_SB1
+ li t0, CFG_K0_COHERENT
+#else
li t0, CFG_K0_CACHED
+#endif
mtc0 t0, MIPS_COP_0_CONFIG
COP0_SYNC
/* Read and store the PrID FPU ID for CPU identification, if any. */
mfc0 t2, COP_0_STATUS_REG
mfc0 t0, MIPS_COP_0_PRID
-#ifndef CPU_NOFPU
+#ifdef CPU_HAVEFPU
and t2, MIPS_SR_COP_1_BIT
beqz t2, 1f
move t1, zero
@@ -164,8 +158,8 @@ VECTOR(_locore, unknown)
/*
* Initialize stack and call machine startup.
*/
- la sp, _C_LABEL(topstack) - START_FRAME
- la gp, _C_LABEL(_gp)
+ PTR_LA sp, _C_LABEL(topstack) - START_FRAME
+ PTR_LA gp, _C_LABEL(_gp)
sw zero, START_FRAME - 4(sp) # Zero out old ra for debugger
/*xxximp
@@ -176,20 +170,6 @@ VECTOR(_locore, unknown)
/* Save YAMON boot environment pointer */
sw a2, _C_LABEL(fenvp)
#endif
-#ifdef CFE
- /*
- * Save the CFE context passed to us by the loader.
- */
- li t1, 0x43464531
- bne a3, t1, no_cfe /* Check for "CFE1" signature */
- sw a0, _C_LABEL(cfe_handle)/* Firmware data segment */
- sw a2, _C_LABEL(cfe_vector)/* Firmware entry vector */
-no_cfe:
-#endif
-#if defined(TARGET_OCTEON)
- la a0, app_descriptor_addr
- sw a3, 0(a0) /* Store app descriptor ptr */
-#endif
/*
* The following needs to be done differently for each platform and
@@ -199,6 +179,7 @@ no_cfe:
/*
* Block all the slave CPUs
*/
+ /* XXX a0, a1, a2 shouldn't be used here */
/*
* Read the cpu id from the cp0 config register
* cpuid[9:4], thrid[3: 0]
@@ -232,7 +213,7 @@ no_cfe:
nop
#ifdef SMP
- la t0, _C_LABEL(__pcpu)
+ PTR_LA t0, _C_LABEL(__pcpu)
SET_CPU_PCPU(t0)
/* If not master cpu, jump... */
/*XXX this assumes the above #if 0'd code runs */
@@ -244,7 +225,7 @@ no_cfe:
jal _C_LABEL(platform_start)
sw zero, START_FRAME - 8(sp) # Zero out old fp for debugger
- la sp, _C_LABEL(thread0)
+ PTR_LA sp, _C_LABEL(thread0)
lw a0, TD_PCB(sp)
li t0, ~7
and a0, a0, t0
diff --git a/sys/mips/mips/machdep.c b/sys/mips/mips/machdep.c
index 9898ab7..65b4cfb 100644
--- a/sys/mips/mips/machdep.c
+++ b/sys/mips/mips/machdep.c
@@ -42,6 +42,7 @@
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
+#include "opt_cputype.h"
#include "opt_ddb.h"
#include "opt_md.h"
#include "opt_msgbuf.h"
@@ -75,17 +76,17 @@ __FBSDID("$FreeBSD$");
#include <sys/socket.h>
#include <sys/user.h>
+#include <sys/interrupt.h>
#include <sys/cons.h>
#include <sys/syslog.h>
-#include <machine/cache.h>
-#include <machine/cpu.h>
-#include <machine/pltfm.h>
-#include <net/netisr.h>
-#include <machine/md_var.h>
-#include <machine/clock.h>
#include <machine/asm.h>
#include <machine/bootinfo.h>
+#include <machine/cache.h>
+#include <machine/clock.h>
+#include <machine/cpu.h>
#include <machine/hwfunc.h>
+#include <machine/intr_machdep.h>
+#include <machine/md_var.h>
#ifdef DDB
#include <sys/kdb.h>
#include <ddb/ddb.h>
@@ -104,6 +105,7 @@ SYSCTL_STRING(_hw, HW_MODEL, model, CTLFLAG_RD, cpu_model, 0, "Machine model");
int cold = 1;
long realmem = 0;
+long Maxmem = 0;
int cpu_clock = MIPS_DEFAULT_HZ;
SYSCTL_INT(_hw, OID_AUTO, clockrate, CTLFLAG_RD,
&cpu_clock, 0, "CPU instruction clock rate");
@@ -112,14 +114,16 @@ int clocks_running = 0;
vm_offset_t kstack0;
#ifdef SMP
-struct pcpu __pcpu[32];
-char pcpu_boot_stack[KSTACK_PAGES * PAGE_SIZE * (MAXCPU-1)];
+struct pcpu __pcpu[MAXCPU];
+char pcpu_boot_stack[KSTACK_PAGES * PAGE_SIZE * MAXCPU];
#else
struct pcpu pcpu;
struct pcpu *pcpup = &pcpu;
#endif
-vm_offset_t phys_avail[10];
+vm_offset_t phys_avail[PHYS_AVAIL_ENTRIES + 2];
+vm_offset_t physmem_desc[PHYS_AVAIL_ENTRIES + 2];
+
#ifdef UNIMPLEMENTED
struct platform platform;
#endif
@@ -150,7 +154,6 @@ extern char edata[], end[];
u_int32_t bootdev;
struct bootinfo bootinfo;
-
static void
cpu_startup(void *dummy)
{
@@ -170,11 +173,13 @@ cpu_startup(void *dummy)
printf("Physical memory chunk(s):\n");
for (indx = 0; phys_avail[indx + 1] != 0; indx += 2) {
- int size1 = phys_avail[indx + 1] - phys_avail[indx];
+ uintptr_t size1 = phys_avail[indx + 1] - phys_avail[indx];
- printf("0x%08x - 0x%08x, %u bytes (%u pages)\n",
- phys_avail[indx], phys_avail[indx + 1] - 1, size1,
- size1 / PAGE_SIZE);
+ printf("0x%08llx - 0x%08llx, %llu bytes (%llu pages)\n",
+ (unsigned long long)phys_avail[indx],
+ (unsigned long long)phys_avail[indx + 1] - 1,
+ (unsigned long long)size1,
+ (unsigned long long)size1 / PAGE_SIZE);
}
}
@@ -182,6 +187,7 @@ cpu_startup(void *dummy)
printf("avail memory = %lu (%luMB)\n", ptoa(cnt.v_free_count),
ptoa(cnt.v_free_count) / 1048576);
+ cpu_init_interrupts();
/*
* Set up buffers, so they can be used to read disk labels.
@@ -247,24 +253,34 @@ SYSCTL_INT(_machdep, CPU_WALLCLOCK, wall_cmos_clock, CTLFLAG_RW,
#endif /* PORT_TO_JMIPS */
/*
- * Initialize mips and configure to run kernel
+ * Initialize per-cpu data structures, including curthread.
*/
void
-mips_proc0_init(void)
+mips_pcpu0_init()
{
- proc_linkup(&proc0, &thread0);
- thread0.td_kstack = kstack0;
- thread0.td_kstack_pages = KSTACK_PAGES - 1;
- if (thread0.td_kstack & (1 << PAGE_SHIFT))
- thread0.td_md.md_realstack = thread0.td_kstack + PAGE_SIZE;
- else
- thread0.td_md.md_realstack = thread0.td_kstack;
/* Initialize pcpu info of cpu-zero */
#ifdef SMP
pcpu_init(&__pcpu[0], 0, sizeof(struct pcpu));
#else
pcpu_init(pcpup, 0, sizeof(struct pcpu));
#endif
+ PCPU_SET(curthread, &thread0);
+}
+
+/*
+ * Initialize mips and configure to run kernel
+ */
+void
+mips_proc0_init(void)
+{
+ proc_linkup0(&proc0, &thread0);
+
+ KASSERT((kstack0 & PAGE_MASK) == 0,
+ ("kstack0 is not aligned on a page boundary: 0x%0lx",
+ (long)kstack0));
+ thread0.td_kstack = kstack0;
+ thread0.td_kstack_pages = KSTACK_PAGES;
+ thread0.td_md.md_realstack = roundup2(thread0.td_kstack, PAGE_SIZE * 2);
/*
* Do not use cpu_thread_alloc to initialize these fields
* thread0 is the only thread that has kstack located in KSEG0
@@ -277,14 +293,18 @@ mips_proc0_init(void)
/* Steal memory for the dynamic per-cpu area. */
dpcpu_init((void *)pmap_steal_memory(DPCPU_SIZE), 0);
+ PCPU_SET(curpcb, thread0.td_pcb);
/*
* There is no need to initialize md_upte array for thread0 as it's
* located in .bss section and should be explicitly zeroed during
* kernel initialization.
*/
+}
- PCPU_SET(curthread, &thread0);
- PCPU_SET(curpcb, thread0.td_pcb);
+void
+cpu_initclocks(void)
+{
+ platform_initclocks();
}
struct msgbuf *msgbufp=0;
@@ -353,12 +373,6 @@ cpu_pcpu_init(struct pcpu *pcpu, int cpuid, size_t size)
}
int
-sysarch(struct thread *td, register struct sysarch_args *uap)
-{
- return (ENOSYS);
-}
-
-int
fill_dbregs(struct thread *td, struct dbreg *dbregs)
{
@@ -432,3 +446,16 @@ cpu_idle_wakeup(int cpu)
return (0);
}
+
+int
+is_physical_memory(vm_offset_t addr)
+{
+ int i;
+
+ for (i = 0; physmem_desc[i + 1] != 0; i += 2) {
+ if (addr >= physmem_desc[i] && addr < physmem_desc[i + 1])
+ return (1);
+ }
+
+ return (0);
+}
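The is_physical_memory() routine added above walks physmem_desc[] as (start, end) pairs terminated by a zero end address, the layout that pmap_bootstrap() copies out of phys_avail[] later in this diff. A self-contained sketch of the same walk (uintptr_t stands in for vm_offset_t here):

	#include <stdint.h>

	/* Returns non-zero when addr falls inside one of the (start, end)
	 * pairs; the list ends at the first zero end address. */
	static int
	range_list_contains(const uintptr_t *desc, uintptr_t addr)
	{
		int i;

		for (i = 0; desc[i + 1] != 0; i += 2)
			if (addr >= desc[i] && addr < desc[i + 1])
				return (1);
		return (0);
	}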
diff --git a/sys/mips/mips/mainbus.c b/sys/mips/mips/mainbus.c
index d1571e4cd..abee72b 100644
--- a/sys/mips/mips/mainbus.c
+++ b/sys/mips/mips/mainbus.c
@@ -44,6 +44,8 @@
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
+#include "opt_cputype.h"
+
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
@@ -265,7 +267,6 @@ mainbus_activate_resource(device_t bus, device_t child, int type, int rid,
+ poffs;
}
rman_set_virtual(r, vaddr);
- /* IBM-PC: the type of bus_space_handle_t is u_int */
#ifdef TARGET_OCTEON
temp = 0x0000000000000000;
temp |= (uint32_t)vaddr;
diff --git a/sys/mips/mips/mem.c b/sys/mips/mips/mem.c
index bebeded..c0e88e0 100644
--- a/sys/mips/mips/mem.c
+++ b/sys/mips/mips/mem.c
@@ -65,7 +65,6 @@ __FBSDID("$FreeBSD$");
#include <machine/cpu.h>
#include <machine/md_var.h>
#include <machine/atomic.h>
-#include <machine/pltfm.h>
#include <machine/memdev.h>
@@ -101,10 +100,8 @@ memrw(dev, uio, flags)
vm_paddr_t pa;
register int o;
- if (v + c > (SDRAM_ADDR_START + ctob(physmem)))
- return (EFAULT);
-
- if (is_cacheable_mem(v) && is_cacheable_mem(v + c)) {
+ if (is_cacheable_mem(v) &&
+ is_cacheable_mem(v + c - 1)) {
struct fpage *fp;
struct sysmaps *sysmaps;
@@ -117,7 +114,7 @@ memrw(dev, uio, flags)
va = pmap_map_fpage(pa, fp, FALSE);
o = (int)uio->uio_offset & PAGE_MASK;
c = (u_int)(PAGE_SIZE -
- ((int)iov->iov_base & PAGE_MASK));
+ ((uintptr_t)iov->iov_base & PAGE_MASK));
c = min(c, (u_int)(PAGE_SIZE - o));
c = min(c, (u_int)iov->iov_len);
error = uiomove((caddr_t)(va + o), (int)c, uio);
@@ -133,6 +130,7 @@ memrw(dev, uio, flags)
else if (dev2unit(dev) == CDEV_MINOR_KMEM) {
v = uio->uio_offset;
c = min(iov->iov_len, MAXPHYS);
+
vm_offset_t addr, eaddr;
vm_offset_t wired_tlb_virtmem_end;
@@ -143,25 +141,37 @@ memrw(dev, uio, flags)
addr = trunc_page(uio->uio_offset);
eaddr = round_page(uio->uio_offset + c);
- if (addr < (vm_offset_t) VM_MIN_KERNEL_ADDRESS)
- return EFAULT;
-
- wired_tlb_virtmem_end = VM_MIN_KERNEL_ADDRESS +
- VM_KERNEL_ALLOC_OFFSET;
- if ((addr < wired_tlb_virtmem_end) &&
- (eaddr >= wired_tlb_virtmem_end))
- addr = wired_tlb_virtmem_end;
-
- if (addr >= wired_tlb_virtmem_end) {
- for (; addr < eaddr; addr += PAGE_SIZE)
- if (pmap_extract(kernel_pmap,addr) == 0)
- return EFAULT;
-
- if (!kernacc((caddr_t)(int)uio->uio_offset, c,
- uio->uio_rw == UIO_READ ?
- VM_PROT_READ : VM_PROT_WRITE))
+ if (addr > (vm_offset_t) VM_MIN_KERNEL_ADDRESS) {
+ wired_tlb_virtmem_end = VM_MIN_KERNEL_ADDRESS +
+ VM_KERNEL_ALLOC_OFFSET;
+ if ((addr < wired_tlb_virtmem_end) &&
+ (eaddr >= wired_tlb_virtmem_end))
+ addr = wired_tlb_virtmem_end;
+
+ if (addr >= wired_tlb_virtmem_end) {
+ for (; addr < eaddr; addr += PAGE_SIZE)
+ if (pmap_extract(kernel_pmap,
+ addr) == 0)
+ return EFAULT;
+
+ if (!kernacc(
+ (caddr_t)(uintptr_t)uio->uio_offset, c,
+ uio->uio_rw == UIO_READ ?
+ VM_PROT_READ : VM_PROT_WRITE))
+ return (EFAULT);
+ }
+ }
+ else if (MIPS_IS_KSEG0_ADDR(v)) {
+ if (MIPS_KSEG0_TO_PHYS(v + c) >= ctob(physmem))
+ return (EFAULT);
+ }
+ else if (MIPS_IS_KSEG1_ADDR(v)) {
+ if (MIPS_KSEG1_TO_PHYS(v + c) >= ctob(physmem))
return (EFAULT);
}
+ else
+ return (EFAULT);
+
error = uiomove((caddr_t)v, c, uio);
continue;
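The reworked /dev/kmem bounds check accepts KSEG0/KSEG1 direct-mapped addresses by converting them back to physical addresses and comparing against ctob(physmem). A rough illustration under the usual 32-bit MIPS segment layout (the KSEG base addresses are standard architectural values, not taken from this diff):

	#include <stdint.h>

	#define KSEG0_BASE	0x80000000u	/* cached direct map of phys 0..512M */
	#define KSEG1_BASE	0xa0000000u	/* uncached direct map of phys 0..512M */

	/* Non-zero when [v, v + len) stays inside physical memory, assuming
	 * v lies in KSEG0 or KSEG1 and phys_bytes = ctob(physmem). */
	static int
	kseg_range_ok(uint32_t v, uint32_t len, uint32_t phys_bytes)
	{
		uint32_t phys = v & 0x1fffffffu;	/* strip the KSEG prefix */

		return (phys + len <= phys_bytes);
	}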
diff --git a/sys/mips/mips/nexus.c b/sys/mips/mips/nexus.c
index 64cba66..a325f3e 100644
--- a/sys/mips/mips/nexus.c
+++ b/sys/mips/mips/nexus.c
@@ -58,6 +58,7 @@ __FBSDID("$FreeBSD$");
#include <machine/resource.h>
#include <machine/vmparam.h>
+#undef NEXUS_DEBUG
#ifdef NEXUS_DEBUG
#define dprintf printf
#else
@@ -76,20 +77,7 @@ struct nexus_device {
static struct rman irq_rman;
static struct rman mem_rman;
-
-#ifdef notyet
-/*
- * XXX: TODO: Implement bus space barrier functions.
- * Currently tag and handle are set when memory resources
- * are activated.
- */
-struct bus_space_tag nexus_bustag = {
- NULL, /* cookie */
- NULL, /* parent bus tag */
- NEXUS_BUS_SPACE, /* type */
- nexus_bus_barrier, /* bus_space_barrier */
-};
-#endif
+static struct rman port_rman;
static struct resource *
nexus_alloc_resource(device_t, device_t, int, int *, u_long,
@@ -173,6 +161,21 @@ nexus_probe(device_t dev)
panic("%s: mem_rman", __func__);
}
+ /*
+ * MIPS has no concept of the x86 I/O address space but some cpus
+ * provide a memory mapped window to access the PCI I/O BARs.
+ */
+ port_rman.rm_start = 0;
+#ifdef PCI_IOSPACE_SIZE
+ port_rman.rm_end = PCI_IOSPACE_SIZE - 1;
+#endif
+ port_rman.rm_type = RMAN_ARRAY;
+ port_rman.rm_descr = "I/O ports";
+ if (rman_init(&port_rman) != 0 ||
+ rman_manage_region(&port_rman, 0, port_rman.rm_end) != 0)
+ panic("%s: port_rman", __func__);
+
+
return (0);
}
@@ -182,14 +185,14 @@ nexus_setup_intr(device_t dev, device_t child, struct resource *res, int flags,
{
int irq;
- register_t sr = intr_disable();
+ intrmask_t s = disableintr();
irq = rman_get_start(res);
if (irq >= NUM_MIPS_IRQS)
return (0);
cpu_establish_hardintr(device_get_nameunit(child), filt, intr, arg,
irq, flags, cookiep);
- intr_restore(sr);
+ restoreintr(s);
return (0);
}
@@ -238,6 +241,7 @@ nexus_print_all_resources(device_t dev)
retval += resource_list_print_type(rl, "mem", SYS_RES_MEMORY, "%#lx");
retval += resource_list_print_type(rl, "irq", SYS_RES_IRQ, "%ld");
+ retval += resource_list_print_type(rl, "port", SYS_RES_IOPORT, "%#lx");
return (retval);
}
@@ -249,24 +253,46 @@ nexus_hinted_child(device_t bus, const char *dname, int dunit)
long maddr;
int msize;
int result;
+ int irq;
+ int mem_hints_count;
child = BUS_ADD_CHILD(bus, 0, dname, dunit);
+ if (child == NULL)
+ return;
/*
* Set hard-wired resources for hinted child using
* specific RIDs.
*/
- resource_long_value(dname, dunit, "maddr", &maddr);
- resource_int_value(dname, dunit, "msize", &msize);
-
- dprintf("%s: discovered hinted child %s at maddr %p(%d)\n",
- __func__, device_get_nameunit(child),
- (void *)(intptr_t)maddr, msize);
+ mem_hints_count = 0;
+ if (resource_long_value(dname, dunit, "maddr", &maddr) == 0)
+ mem_hints_count++;
+ if (resource_int_value(dname, dunit, "msize", &msize) == 0)
+ mem_hints_count++;
+
+ /* check if all info for mem resource has been provided */
+ if ((mem_hints_count > 0) && (mem_hints_count < 2)) {
+ printf("Either maddr or msize hint is missing for %s%d\n",
+ dname, dunit);
+ }
+ else if (mem_hints_count) {
+ dprintf("%s: discovered hinted child %s at maddr %p(%d)\n",
+ __func__, device_get_nameunit(child),
+ (void *)(intptr_t)maddr, msize);
+
+ result = bus_set_resource(child, SYS_RES_MEMORY, 0, maddr,
+ msize);
+ if (result != 0) {
+ device_printf(bus,
+ "warning: bus_set_resource() failed\n");
+ }
+ }
- result = bus_set_resource(child, SYS_RES_MEMORY, MIPS_MEM_RID,
- maddr, msize);
- if (result != 0) {
- device_printf(bus, "warning: bus_set_resource() failed\n");
+ if (resource_int_value(dname, dunit, "irq", &irq) == 0) {
+ result = bus_set_resource(child, SYS_RES_IRQ, 0, irq, 1);
+ if (result != 0)
+ device_printf(bus,
+ "warning: bus_set_resource() failed\n");
}
}
@@ -282,6 +308,10 @@ nexus_add_child(device_t bus, int order, const char *name, int unit)
resource_list_init(&ndev->nx_resources);
child = device_add_child_ordered(bus, order, name, unit);
+ if (child == NULL) {
+ device_printf(bus, "failed to add child: %s%d\n", name, unit);
+ return (0);
+ }
/* should we free this in nexus_child_detached? */
device_set_ivars(child, ndev);
@@ -338,6 +368,9 @@ nexus_alloc_resource(device_t bus, device_t child, int type, int *rid,
case SYS_RES_MEMORY:
rm = &mem_rman;
break;
+ case SYS_RES_IOPORT:
+ rm = &port_rman;
+ break;
default:
printf("%s: unknown resource type %d\n", __func__, type);
return (0);
@@ -345,7 +378,8 @@ nexus_alloc_resource(device_t bus, device_t child, int type, int *rid,
rv = rman_reserve_resource(rm, start, end, count, flags, child);
if (rv == 0) {
- printf("%s: could not reserve resource\n", __func__);
+ printf("%s: could not reserve resource for %s\n", __func__,
+ device_get_nameunit(child));
return (0);
}
@@ -366,33 +400,25 @@ static int
nexus_activate_resource(device_t bus, device_t child, int type, int rid,
struct resource *r)
{
-#ifdef TARGET_OCTEON
- uint64_t temp;
-#endif
/*
* If this is a memory resource, track the direct mapping
* in the uncached MIPS KSEG1 segment.
*/
+ /* XXX we shouldn't be supporting sys_res_ioport here */
if ((type == SYS_RES_MEMORY) || (type == SYS_RES_IOPORT)) {
- caddr_t vaddr = 0;
- u_int32_t paddr;
- u_int32_t psize;
- u_int32_t poffs;
-
- paddr = rman_get_start(r);
- psize = rman_get_size(r);
- poffs = paddr - trunc_page(paddr);
- vaddr = (caddr_t) pmap_mapdev(paddr-poffs, psize+poffs) + poffs;
+ caddr_t vaddr = 0;
+ u_int32_t paddr;
+ u_int32_t psize;
+ u_int32_t poffs;
+
+ paddr = rman_get_start(r);
+ psize = rman_get_size(r);
+ poffs = paddr - trunc_page(paddr);
+ vaddr = (caddr_t) pmap_mapdev(paddr-poffs, psize+poffs) + poffs;
rman_set_virtual(r, vaddr);
- rman_set_bustag(r, MIPS_BUS_SPACE_MEM);
-#ifdef TARGET_OCTEON
- temp = 0x0000000000000000;
- temp |= (uint32_t)vaddr;
- rman_set_bushandle(r, (bus_space_handle_t)temp);
-#else
- rman_set_bushandle(r, (bus_space_handle_t)vaddr);
-#endif
+ rman_set_bustag(r, mips_bus_space_generic);
+ rman_set_bushandle(r, (bus_space_handle_t)(uintptr_t)vaddr);
}
return (rman_activate_resource(r));
@@ -473,6 +499,12 @@ static int
nexus_deactivate_resource(device_t bus, device_t child, int type, int rid,
struct resource *r)
{
+ vm_offset_t va;
+
+ if (type == SYS_RES_MEMORY || type == SYS_RES_IOPORT) {
+ va = (vm_offset_t)rman_get_virtual(r);
+ pmap_unmapdev(va, rman_get_size(r));
+ }
return (rman_deactivate_resource(r));
}
diff --git a/sys/mips/mips/pm_machdep.c b/sys/mips/mips/pm_machdep.c
index dc30f4c..712763b 100644
--- a/sys/mips/mips/pm_machdep.c
+++ b/sys/mips/mips/pm_machdep.c
@@ -39,6 +39,8 @@
__FBSDID("$FreeBSD$");
#include "opt_compat.h"
+#include "opt_cputype.h"
+
#include <sys/types.h>
#include <sys/param.h>
#include <sys/systm.h>
@@ -228,13 +230,13 @@ sigreturn(struct thread *td, struct sigreturn_args *uap)
/* #ifdef DEBUG */
if (ucp->uc_mcontext.mc_regs[ZERO] != UCONTEXT_MAGIC) {
printf("sigreturn: pid %d, ucp %p\n", td->td_proc->p_pid, ucp);
- printf(" old sp %x ra %x pc %x\n",
- regs->sp, regs->ra, regs->pc);
- printf(" new sp %x ra %x pc %x z %x\n",
- ucp->uc_mcontext.mc_regs[SP],
- ucp->uc_mcontext.mc_regs[RA],
- ucp->uc_mcontext.mc_regs[PC],
- ucp->uc_mcontext.mc_regs[ZERO]);
+ printf(" old sp %p ra %p pc %p\n",
+ (void *)regs->sp, (void *)regs->ra, (void *)regs->pc);
+ printf(" new sp %p ra %p pc %p z %p\n",
+ (void *)ucp->uc_mcontext.mc_regs[SP],
+ (void *)ucp->uc_mcontext.mc_regs[RA],
+ (void *)ucp->uc_mcontext.mc_regs[PC],
+ (void *)ucp->uc_mcontext.mc_regs[ZERO]);
return EINVAL;
}
/* #endif */
@@ -322,7 +324,7 @@ ptrace_single_step(struct thread *td)
/* compute next address after current location */
if(curinstr != 0) {
va = MipsEmulateBranch(locr0, locr0->pc, locr0->fsr,
- (u_int)&curinstr);
+ (uintptr_t)&curinstr);
} else {
va = locr0->pc + 4;
}
@@ -408,9 +410,16 @@ get_mcontext(struct thread *td, mcontext_t *mcp, int flags)
bcopy((void *)&td->td_frame->f0, (void *)&mcp->mc_fpregs,
sizeof(mcp->mc_fpregs));
}
+ if (flags & GET_MC_CLEAR_RET) {
+ mcp->mc_regs[V0] = 0;
+ mcp->mc_regs[V1] = 0;
+ mcp->mc_regs[A3] = 0;
+ }
+
mcp->mc_pc = td->td_frame->pc;
mcp->mullo = td->td_frame->mullo;
mcp->mulhi = td->td_frame->mulhi;
+ mcp->mc_tls = td->td_md.md_tls;
return (0);
}
@@ -431,6 +440,7 @@ set_mcontext(struct thread *td, const mcontext_t *mcp)
td->td_frame->pc = mcp->mc_pc;
td->td_frame->mullo = mcp->mullo;
td->td_frame->mulhi = mcp->mulhi;
+ td->td_md.md_tls = mcp->mc_tls;
/* Don't let the user set any bits in the Status and Cause registers */
return (0);
@@ -477,7 +487,8 @@ exec_setregs(struct thread *td, u_long entry, u_long stack, u_long ps_strings)
// td->td_frame->sr = SR_KSU_USER | SR_EXL | SR_INT_ENAB;
//? td->td_frame->sr |= idle_mask & ALL_INT_MASK;
#else
- td->td_frame->sr = SR_KSU_USER | SR_EXL;// mips2 also did COP_0_BIT
+ td->td_frame->sr = SR_KSU_USER | SR_EXL | SR_INT_ENAB |
+ (mips_rd_status() & ALL_INT_MASK);
#endif
#ifdef TARGET_OCTEON
td->td_frame->sr |= MIPS_SR_COP_2_BIT | MIPS32_SR_PX | MIPS_SR_UX |
diff --git a/sys/mips/mips/pmap.c b/sys/mips/mips/pmap.c
index de14439..671413b 100644
--- a/sys/mips/mips/pmap.c
+++ b/sys/mips/mips/pmap.c
@@ -96,7 +96,6 @@ __FBSDID("$FreeBSD$");
#endif
#include <machine/cache.h>
-#include <machine/pltfm.h>
#include <machine/md_var.h>
#if defined(DIAGNOSTIC)
@@ -292,9 +291,14 @@ pmap_bootstrap(void)
/* Sort. */
again:
for (i = 0; phys_avail[i + 1] != 0; i += 2) {
- if (phys_avail[i + 1] >= MIPS_KSEG0_LARGEST_PHYS) {
+ /*
+ * Keep the memory aligned on page boundary.
+ */
+ phys_avail[i] = round_page(phys_avail[i]);
+ phys_avail[i + 1] = trunc_page(phys_avail[i + 1]);
+
+ if (phys_avail[i + 1] >= MIPS_KSEG0_LARGEST_PHYS)
memory_larger_than_512meg++;
- }
if (i < 2)
continue;
if (phys_avail[i - 2] > phys_avail[i]) {
@@ -313,6 +317,16 @@ again:
}
}
+ /*
+ * Copy the phys_avail[] array before we start stealing memory from it.
+ */
+ for (i = 0; phys_avail[i + 1] != 0; i += 2) {
+ physmem_desc[i] = phys_avail[i];
+ physmem_desc[i + 1] = phys_avail[i + 1];
+ }
+
+ Maxmem = atop(phys_avail[i - 1]);
+
if (bootverbose) {
printf("Physical memory chunk(s):\n");
for (i = 0; phys_avail[i + 1] != 0; i += 2) {
@@ -324,6 +338,7 @@ again:
(uintmax_t) phys_avail[i + 1] - 1,
(uintmax_t) size, (uintmax_t) size / PAGE_SIZE);
}
+ printf("Maxmem is 0x%0lx\n", ptoa(Maxmem));
}
/*
* Steal the message buffer from the beginning of memory.
@@ -397,22 +412,13 @@ again:
for (i = 0, pte = pgtab; i < (nkpt * NPTEPG); i++, pte++)
*pte = PTE_G;
- printf("Va=0x%x Ve=%x\n", virtual_avail, virtual_end);
/*
* The segment table contains the KVA of the pages in the second
* level page table.
*/
- printf("init kernel_segmap va >> = %d nkpt:%d\n",
- (virtual_avail >> SEGSHIFT),
- nkpt);
for (i = 0, j = (virtual_avail >> SEGSHIFT); i < nkpt; i++, j++)
kernel_segmap[j] = (pd_entry_t)(pgtab + (i * NPTEPG));
- for (i = 0; phys_avail[i + 2]; i += 2)
- continue;
- printf("avail_start:0x%x avail_end:0x%x\n",
- phys_avail[0], phys_avail[i + 1]);
-
/*
* The kernel's pmap is statically allocated so we don't have to use
* pmap_create, which is unlikely to work correctly at this part of
@@ -694,6 +700,11 @@ pmap_kremove(vm_offset_t va)
{
register pt_entry_t *pte;
+ /*
+ * Write back all caches from the page being destroyed
+ */
+ mips_dcache_wbinv_range_index(va, NBPG);
+
pte = pmap_pte(kernel_pmap, va);
*pte = PTE_G;
pmap_invalidate_page(kernel_pmap, va);
@@ -738,11 +749,15 @@ void
pmap_qenter(vm_offset_t va, vm_page_t *m, int count)
{
int i;
+ vm_offset_t origva = va;
for (i = 0; i < count; i++) {
+ pmap_flush_pvcache(m[i]);
pmap_kenter(va, VM_PAGE_TO_PHYS(m[i]));
va += PAGE_SIZE;
}
+
+ mips_dcache_wbinv_range_index(origva, PAGE_SIZE*count);
}
/*
@@ -752,6 +767,11 @@ pmap_qenter(vm_offset_t va, vm_page_t *m, int count)
void
pmap_qremove(vm_offset_t va, int count)
{
+ /*
+ * No need to wb/inv caches here,
+ * pmap_kremove will do it for us
+ */
+
while (count-- > 0) {
pmap_kremove(va);
va += PAGE_SIZE;
@@ -1530,6 +1550,12 @@ pmap_remove_page(struct pmap *pmap, vm_offset_t va)
if (!ptq || !pmap_pte_v(ptq)) {
return;
}
+
+ /*
+ * Write back all caches from the page being destroyed
+ */
+ mips_dcache_wbinv_range_index(va, NBPG);
+
/*
* get a local va for mappings for this pmap.
*/
@@ -1609,6 +1635,14 @@ pmap_remove_all(vm_page_t m)
while ((pv = TAILQ_FIRST(&m->md.pv_list)) != NULL) {
PMAP_LOCK(pv->pv_pmap);
+
+ /*
+ * If it's the last mapping, write back all caches from
+ * the page being destroyed
+ */
+ if (m->md.pv_list_count == 1)
+ mips_dcache_wbinv_range_index(pv->pv_va, NBPG);
+
pv->pv_pmap->pm_stats.resident_count--;
pte = pmap_pte(pv->pv_pmap, pv->pv_va);
@@ -1765,8 +1799,8 @@ pmap_enter(pmap_t pmap, vm_offset_t va, vm_prot_t access, vm_page_t m,
* Page Directory table entry not valid, we need a new PT page
*/
if (pte == NULL) {
- panic("pmap_enter: invalid page directory, pdir=%p, va=0x%x\n",
- (void *)pmap->pm_segtab, va);
+ panic("pmap_enter: invalid page directory, pdir=%p, va=%p\n",
+ (void *)pmap->pm_segtab, (void *)va);
}
pa = VM_PAGE_TO_PHYS(m);
om = NULL;
@@ -1827,7 +1861,7 @@ pmap_enter(pmap_t pmap, vm_offset_t va, vm_prot_t access, vm_page_t m,
mpte->wire_count--;
KASSERT(mpte->wire_count > 0,
("pmap_enter: missing reference to page table page,"
- " va: 0x%x", va));
+ " va: %p", (void *)va));
}
} else
pmap->pm_stats.resident_count++;
@@ -1889,7 +1923,7 @@ validate:
if (origpte & PTE_M) {
KASSERT((origpte & PTE_RW),
("pmap_enter: modified page not writable:"
- " va: 0x%x, pte: 0x%lx", va, origpte));
+ " va: %p, pte: 0x%lx", (void *)va, origpte));
if (page_is_managed(opa))
vm_page_dirty(om);
}
@@ -2226,7 +2260,7 @@ pmap_zero_page(vm_page_t m)
#endif
if (phys < MIPS_KSEG0_LARGEST_PHYS) {
- va = MIPS_PHYS_TO_UNCACHED(phys);
+ va = MIPS_PHYS_TO_CACHED(phys);
bzero((caddr_t)va, PAGE_SIZE);
mips_dcache_wbinv_range(va, PAGE_SIZE);
@@ -2282,7 +2316,7 @@ pmap_zero_page_area(vm_page_t m, int off, int size)
} else
#endif
if (phys < MIPS_KSEG0_LARGEST_PHYS) {
- va = MIPS_PHYS_TO_UNCACHED(phys);
+ va = MIPS_PHYS_TO_CACHED(phys);
bzero((char *)(caddr_t)va + off, size);
mips_dcache_wbinv_range(va + off, size);
} else {
@@ -2321,7 +2355,7 @@ pmap_zero_page_idle(vm_page_t m)
} else
#endif
if (phys < MIPS_KSEG0_LARGEST_PHYS) {
- va = MIPS_PHYS_TO_UNCACHED(phys);
+ va = MIPS_PHYS_TO_CACHED(phys);
bzero((caddr_t)va, PAGE_SIZE);
mips_dcache_wbinv_range(va, PAGE_SIZE);
} else {
@@ -2358,7 +2392,6 @@ pmap_copy_page(vm_page_t src, vm_page_t dst)
vm_paddr_t phy_src = VM_PAGE_TO_PHYS(src);
vm_paddr_t phy_dst = VM_PAGE_TO_PHYS(dst);
-
#ifdef VM_ALLOC_WIRED_TLB_PG_POOL
if (need_wired_tlb_page_pool) {
struct fpage *fp1, *fp2;
@@ -2389,9 +2422,17 @@ pmap_copy_page(vm_page_t src, vm_page_t dst)
{
if ((phy_src < MIPS_KSEG0_LARGEST_PHYS) && (phy_dst < MIPS_KSEG0_LARGEST_PHYS)) {
/* easy case, all can be accessed via KSEG0 */
+ /*
+		 * Flush all cache lines for VAs that are mapped to this page
+		 * to make sure that the data in SDRAM is up to date
+ */
+ pmap_flush_pvcache(src);
+ mips_dcache_wbinv_range_index(
+ MIPS_PHYS_TO_CACHED(phy_dst), NBPG);
va_src = MIPS_PHYS_TO_CACHED(phy_src);
va_dst = MIPS_PHYS_TO_CACHED(phy_dst);
bcopy((caddr_t)va_src, (caddr_t)va_dst, PAGE_SIZE);
+ mips_dcache_wbinv_range(va_dst, PAGE_SIZE);
} else {
int cpu;
struct local_sysmaps *sysm;
@@ -2492,9 +2533,7 @@ pmap_remove_pages(pmap_t pmap)
PMAP_LOCK(pmap);
sched_pin();
//XXX need to be TAILQ_FOREACH_SAFE ?
- for (pv = TAILQ_FIRST(&pmap->pm_pvlist);
- pv;
- pv = npv) {
+ for (pv = TAILQ_FIRST(&pmap->pm_pvlist); pv; pv = npv) {
pte = pmap_pte(pv->pv_pmap, pv->pv_va);
if (!pmap_pte_v(pte))
@@ -2799,15 +2838,16 @@ pmap_mapdev(vm_offset_t pa, vm_size_t size)
* KSEG1 maps only first 512M of phys address space. For
* pa > 0x20000000 we should make proper mapping * using pmap_kenter.
*/
- if (pa + size < MIPS_KSEG0_LARGEST_PHYS)
+ if ((pa + size - 1) < MIPS_KSEG0_LARGEST_PHYS)
return (void *)MIPS_PHYS_TO_KSEG1(pa);
else {
offset = pa & PAGE_MASK;
- size = roundup(size, PAGE_SIZE);
+ size = roundup(size + offset, PAGE_SIZE);
va = kmem_alloc_nofault(kernel_map, size);
if (!va)
panic("pmap_mapdev: Couldn't alloc kernel virtual memory");
+ pa = trunc_page(pa);
for (tmpva = va; size > 0;) {
pmap_kenter(tmpva, pa);
size -= PAGE_SIZE;
@@ -2822,6 +2862,18 @@ pmap_mapdev(vm_offset_t pa, vm_size_t size)
void
pmap_unmapdev(vm_offset_t va, vm_size_t size)
{
+ vm_offset_t base, offset, tmpva;
+
+ /* If the address is within KSEG1 then there is nothing to do */
+ if (va >= MIPS_KSEG1_START && va <= MIPS_KSEG1_END)
+ return;
+
+ base = trunc_page(va);
+ offset = va & PAGE_MASK;
+ size = roundup(size + offset, PAGE_SIZE);
+ for (tmpva = base; tmpva < base + size; tmpva += PAGE_SIZE)
+ pmap_kremove(tmpva);
+ kmem_free(kernel_map, base, size);
}
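A short worked sketch of the device-mapping arithmetic that the pmap_mapdev()/pmap_unmapdev() hunks above now share; the helper name and the example values are illustrative:

	/*
	 * The page offset of 'pa' must be folded into the size before
	 * rounding, otherwise a region that straddles a page boundary loses
	 * its tail: pa = 0x1f000ffc with size = 8 needs two pages, not one.
	 */
	static vm_size_t
	mapdev_span(vm_offset_t pa, vm_size_t size)
	{
		vm_offset_t offset = pa & PAGE_MASK;

		return (roundup(size + offset, PAGE_SIZE));
	}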
/*
@@ -2896,6 +2948,7 @@ pmap_activate(struct thread *td)
PCPU_SET(segbase, pmap->pm_segtab);
MachSetPID(pmap->pm_asid[PCPU_GET(cpuid)].asid);
}
+
PCPU_SET(curpmap, pmap);
critical_exit();
}
@@ -2962,7 +3015,7 @@ pmap_pid_dump(int pid)
pde = &pmap->pm_segtab[i];
if (pde && pmap_pde_v(pde)) {
for (j = 0; j < 1024; j++) {
- unsigned va = base +
+ vm_offset_t va = base +
(j << PAGE_SHIFT);
pte = pmap_pte(pmap, va);
@@ -2972,8 +3025,9 @@ pmap_pid_dump(int pid)
pa = mips_tlbpfn_to_paddr(*pte);
m = PHYS_TO_VM_PAGE(pa);
- printf("va: 0x%x, pt: 0x%x, h: %d, w: %d, f: 0x%x",
- va, pa,
+ printf("va: %p, pt: %p, h: %d, w: %d, f: 0x%x",
+ (void *)va,
+ (void *)pa,
m->hold_count,
m->wire_count,
m->flags);
@@ -3273,3 +3327,16 @@ pmap_kextract(vm_offset_t va)
}
return pa;
}
+
+void
+pmap_flush_pvcache(vm_page_t m)
+{
+ pv_entry_t pv;
+
+ if (m != NULL) {
+ for (pv = TAILQ_FIRST(&m->md.pv_list); pv;
+ pv = TAILQ_NEXT(pv, pv_list)) {
+ mips_dcache_wbinv_range_index(pv->pv_va, NBPG);
+ }
+ }
+}
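The new pmap_flush_pvcache() is intended to be paired with a cache operation on the kernel-visible window, as the pmap_qenter(), pmap_copy_page() and sf_buf changes elsewhere in this commit do. A hedged sketch of that caller pattern; the wrapper name is illustrative:

	/*
	 * Write back any existing virtual aliases of the page, install the
	 * kernel mapping, then invalidate the new window so stale lines are
	 * not read through it.
	 */
	static void
	kenter_coherent(vm_offset_t kva, vm_page_t m)
	{
		pmap_flush_pvcache(m);			/* flush user aliases */
		pmap_kenter(kva, VM_PAGE_TO_PHYS(m));
		mips_dcache_inv_range(kva, PAGE_SIZE);	/* drop stale lines   */
	}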
diff --git a/sys/mips/mips/psraccess.S b/sys/mips/mips/psraccess.S
index 003c1d5..0bcb04d 100644
--- a/sys/mips/mips/psraccess.S
+++ b/sys/mips/mips/psraccess.S
@@ -41,6 +41,8 @@
#include <machine/cpu.h>
#include <machine/regnum.h>
+#include "opt_cputype.h"
+
#include "assym.s"
/*
diff --git a/sys/mips/mips/support.S b/sys/mips/mips/support.S
index d821361..526f957 100644
--- a/sys/mips/mips/support.S
+++ b/sys/mips/mips/support.S
@@ -55,6 +55,7 @@
* assembly language support routines.
*/
+#include "opt_cputype.h"
#include "opt_ddb.h"
#include <sys/errno.h>
#include <machine/asm.h>
@@ -460,8 +461,10 @@ ALEAF(fuibyte)
sw zero, U_PCB_ONFAULT(v1)
END(fubyte)
-LEAF(suword)
-XLEAF(suword32)
+LEAF(suword32)
+#ifndef __mips_n64
+XLEAF(suword)
+#endif
blt a0, zero, fswberr # make sure address is in user space
li v0, FSWBERR
GET_CPU_PCPU(v1)
@@ -471,31 +474,86 @@ XLEAF(suword32)
sw zero, U_PCB_ONFAULT(v1)
j ra
move v0, zero
-END(suword)
+END(suword32)
+
+#ifdef __mips_n64
+LEAF(suword64)
+XLEAF(suword)
+ blt a0, zero, fswberr # make sure address is in user space
+ li v0, FSWBERR
+ GET_CPU_PCPU(v1)
+ lw v1, PC_CURPCB(v1)
+ sw v0, U_PCB_ONFAULT(v1)
+ sd a1, 0(a0) # store word
+ sw zero, U_PCB_ONFAULT(v1)
+ j ra
+ move v0, zero
+END(suword64)
+#endif
/*
* casuword(9)
* <v0>u_long casuword(<a0>u_long *p, <a1>u_long oldval, <a2>u_long newval)
*/
-ENTRY(casuword)
- break
- li v0, -1
- jr ra
- nop
-END(casuword)
-
/*
* casuword32(9)
* <v0>uint32_t casuword(<a0>uint32_t *p, <a1>uint32_t oldval,
* <a2>uint32_t newval)
*/
-ENTRY(casuword32)
- break
+LEAF(casuword32)
+#ifndef __mips_n64
+XLEAF(casuword)
+#endif
+ blt a0, zero, fswberr # make sure address is in user space
+ li v0, FSWBERR
+ GET_CPU_PCPU(v1)
+ lw v1, PC_CURPCB(v1)
+ sw v0, U_PCB_ONFAULT(v1)
+1:
+ move t0, a2
+ ll v0, 0(a0)
+ bne a1, v0, 2f
+ nop
+ sc t0, 0(a0) # store word
+ beqz t0, 1b
+ nop
+ j 3f
+ nop
+2:
li v0, -1
+3:
+ sw zero, U_PCB_ONFAULT(v1)
jr ra
nop
END(casuword32)
+#ifdef __mips_n64
+LEAF(casuword64)
+XLEAF(casuword)
+ blt a0, zero, fswberr # make sure address is in user space
+ li v0, FSWBERR
+ GET_CPU_PCPU(v1)
+ lw v1, PC_CURPCB(v1)
+ sw v0, U_PCB_ONFAULT(v1)
+1:
+ move t0, a2
+ lld v0, 0(a0)
+ bne a1, v0, 2f
+ nop
+ scd t0, 0(a0) # store double word
+ beqz t0, 1b
+ nop
+ j 3f
+ nop
+2:
+ li v0, -1
+3:
+ sw zero, U_PCB_ONFAULT(v1)
+ jr ra
+ nop
+END(casuword64)
+#endif
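For reference, a C-level model of what the new casuword32 ll/sc loop implements, leaving out the onfault protection that the assembly sets up; the function name and the compiler builtin are illustrative, not part of the commit:

	#include <stdint.h>

	/*
	 * Return the value observed at *p, or (uint32_t)-1 if it did not
	 * match 'oldval'; retry when the store-conditional is beaten by
	 * another writer (the branch back to 1b above).
	 */
	static uint32_t
	casuword32_model(volatile uint32_t *p, uint32_t oldval, uint32_t newval)
	{
		uint32_t observed;

		for (;;) {
			observed = *p;				/* ll           */
			if (observed != oldval)
				return ((uint32_t)-1);		/* branch to 2f */
			if (__sync_bool_compare_and_swap(p, oldval, newval))
				return (observed);		/* sc succeeded */
			/* sc failed: another writer intervened, retry */
		}
	}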
+
#if 0
/* unused in FreeBSD */
/*
@@ -1280,17 +1338,11 @@ END(atomic_subtract_8)
.set mips3
#endif
-LEAF(atomic_readandclear_64)
-1:
- lld v0, 0(a0)
- li t0, 0
- scd t0, 0(a0)
- beqz t0, 1b
- nop
- j ra
- nop
-END(atomic_readandclear_64)
-
+#if !defined(__mips_n64) && !defined(__mips_n32)
+ /*
+	 * I don't know if these routines have the right number of
+	 * NOPs in them for all processors.  XXX
+ */
LEAF(atomic_store_64)
mfc0 t1, COP_0_STATUS_REG
and t2, t1, ~SR_INT_ENAB
@@ -1336,6 +1388,7 @@ LEAF(atomic_load_64)
j ra
nop
END(atomic_load_64)
+#endif
#if defined(DDB) || defined(DEBUG)
diff --git a/sys/mips/mips/swtch.S b/sys/mips/mips/swtch.S
index 84585cb..6ccf0a1 100644
--- a/sys/mips/mips/swtch.S
+++ b/sys/mips/mips/swtch.S
@@ -81,14 +81,12 @@
#define _MFC0 dmfc0
#define _MTC0 dmtc0
#define WIRED_SHIFT 34
-#define PAGE_SHIFT 34
#else
#define _SLL sll
#define _SRL srl
#define _MFC0 mfc0
#define _MTC0 mtc0
#define WIRED_SHIFT 2
-#define PAGE_SHIFT 2
#endif
.set noreorder # Noreorder is default style!
#if defined(ISA_MIPS32)
@@ -205,10 +203,10 @@ LEAF(fork_trampoline)
RESTORE_U_PCB_REG(t1, T1, k1)
RESTORE_U_PCB_REG(t2, T2, k1)
RESTORE_U_PCB_REG(t3, T3, k1)
- RESTORE_U_PCB_REG(t4, T4, k1)
- RESTORE_U_PCB_REG(t5, T5, k1)
- RESTORE_U_PCB_REG(t6, T6, k1)
- RESTORE_U_PCB_REG(t7, T7, k1)
+ RESTORE_U_PCB_REG(ta0, TA0, k1)
+ RESTORE_U_PCB_REG(ta1, TA1, k1)
+ RESTORE_U_PCB_REG(ta2, TA2, k1)
+ RESTORE_U_PCB_REG(ta3, TA3, k1)
RESTORE_U_PCB_REG(s0, S0, k1)
RESTORE_U_PCB_REG(s1, S1, k1)
RESTORE_U_PCB_REG(s2, S2, k1)
@@ -224,6 +222,11 @@ LEAF(fork_trampoline)
RESTORE_U_PCB_REG(s8, S8, k1)
RESTORE_U_PCB_REG(ra, RA, k1)
RESTORE_U_PCB_REG(sp, SP, k1)
+ li k1, ~SR_INT_MASK
+ and k0, k0, k1
+ mfc0 k1, COP_0_STATUS_REG
+ and k1, k1, SR_INT_MASK
+ or k0, k0, k1
mtc0 k0, COP_0_STATUS_REG # switch to user mode (when eret...)
HAZARD_DELAY
sync
@@ -310,6 +313,10 @@ NON_LEAF(cpu_switch, STAND_FRAME_SIZE, ra)
SAVE_U_PCB_CONTEXT(ra, PREG_RA, a0) # save return address
SAVE_U_PCB_CONTEXT(t0, PREG_SR, a0) # save status register
SAVE_U_PCB_CONTEXT(gp, PREG_GP, a0)
+ jal getpc
+ nop
+getpc:
+ SAVE_U_PCB_CONTEXT(ra, PREG_PC, a0) # save return address
/*
* FREEBSD_DEVELOPERS_FIXME:
* In case there are CPU-specific registers that need
@@ -320,7 +327,7 @@ NON_LEAF(cpu_switch, STAND_FRAME_SIZE, ra)
mips_sw1:
#if defined(SMP) && defined(SCHED_ULE)
- la t0, _C_LABEL(blocked_lock)
+ PTR_LA t0, _C_LABEL(blocked_lock)
blocked_loop:
lw t1, TD_LOCK(a1)
beq t0, t1, blocked_loop
@@ -361,7 +368,7 @@ entry0:
nop
pgm:
bltz s0, entry0set
- li t1, MIPS_KSEG0_START + 0x0fff0000 # invalidate tlb entry
+ li t1, MIPS_KSEG0_START # invalidate tlb entry
sll s0, PAGE_SHIFT + 1
addu t1, s0
mtc0 t1, COP_0_TLB_HI
@@ -385,7 +392,7 @@ entry0set:
* Now running on new u struct.
*/
sw2:
- la t1, _C_LABEL(pmap_activate) # s7 = new proc pointer
+ PTR_LA t1, _C_LABEL(pmap_activate) # s7 = new proc pointer
jalr t1 # s7 = new proc pointer
move a0, s7 # BDSLOT
/*
@@ -410,6 +417,10 @@ sw2:
* In case there are CPU-specific registers that need
* to be restored with the other registers do so here.
*/
+ mfc0 t0, COP_0_STATUS_REG
+ and t0, t0, SR_INT_MASK
+ and v0, v0, ~SR_INT_MASK
+ or v0, v0, t0
mtc0 v0, COP_0_STATUS_REG
ITLBNOPFIX
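Both the fork_trampoline and cpu_switch hunks above splice the live interrupt mask into the status word they are about to load, so a mask programmed while the thread was off the CPU is not clobbered. A hedged C equivalent of that bit manipulation; the helper name is illustrative, SR_INT_MASK is the existing kernel macro:

	/*
	 * Keep everything from the saved status register except the
	 * interrupt mask, which is taken from the live status register.
	 */
	static uint32_t
	merge_intr_mask(uint32_t saved_sr, uint32_t live_sr)
	{
		return ((saved_sr & ~SR_INT_MASK) | (live_sr & SR_INT_MASK));
	}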
diff --git a/sys/mips/mips/tick.c b/sys/mips/mips/tick.c
index faae90e..b03c4d4 100644
--- a/sys/mips/mips/tick.c
+++ b/sys/mips/mips/tick.c
@@ -33,6 +33,8 @@
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
+#include "opt_cputype.h"
+
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/sysctl.h>
@@ -59,6 +61,8 @@ u_int32_t counter_upper = 0;
u_int32_t counter_lower_last = 0;
int tick_started = 0;
+void platform_initclocks(void);
+
struct clk_ticks
{
u_long hard_ticks;
@@ -97,9 +101,8 @@ mips_timer_early_init(uint64_t clock_hz)
}
void
-cpu_initclocks(void)
+platform_initclocks(void)
{
-
if (!tick_started) {
tc_init(&counter_timecounter);
tick_started++;
@@ -138,25 +141,19 @@ mips_timer_init_params(uint64_t platform_counter_freq, int double_count)
* function should be called before cninit.
*/
counter_freq = platform_counter_freq;
+ /*
+ * XXX: Some MIPS32 cores update the Count register only every two
+ * pipeline cycles.
+ */
+ if (double_count != 0)
+ counter_freq /= 2;
+
cycles_per_tick = counter_freq / 1000;
- if (double_count)
- cycles_per_tick *= 2;
cycles_per_hz = counter_freq / hz;
cycles_per_usec = counter_freq / (1 * 1000 * 1000);
cycles_per_sec = counter_freq ;
counter_timecounter.tc_frequency = counter_freq;
- /*
- * XXX: Some MIPS32 cores update the Count register only every two
- * pipeline cycles.
- * XXX2: We can read this from the hardware register on some
- * systems. Need to investigate.
- */
- if (double_count != 0) {
- cycles_per_hz /= 2;
- cycles_per_usec /= 2;
- cycles_per_sec /= 2;
- }
printf("hz=%d cyl_per_hz:%jd cyl_per_usec:%jd freq:%jd cyl_per_hz:%jd cyl_per_sec:%jd\n",
hz,
cycles_per_tick,
@@ -229,9 +226,9 @@ DELAY(int n)
/* Check to see if the timer has wrapped around. */
if (cur < last)
- delta += (cur + (cycles_per_hz - last));
+ delta += cur + (0xffffffff - last) + 1;
else
- delta += (cur - last);
+ delta += cur - last;
last = cur;
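The DELAY() hunk above switches the wrap-around math to the full 32-bit range of the Count register instead of cycles_per_hz. A minimal sketch of the corrected delta computation; the function name is illustrative:

	#include <stdint.h>

	/*
	 * Cycles elapsed between two reads of a free-running 32-bit counter,
	 * allowing for a single wrap between 'last' and 'cur'.
	 */
	static uint32_t
	count_delta(uint32_t last, uint32_t cur)
	{
		if (cur < last)
			return (cur + (0xffffffffu - last) + 1);	/* wrapped */
		return (cur - last);
	}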
@@ -346,6 +343,7 @@ clock_attach(device_t dev)
device_printf(dev, "bus_setup_intr returned %d\n", error);
return (error);
}
+
mips_wr_compare(mips_rd_count() + counter_freq / hz);
return (0);
}
diff --git a/sys/mips/mips/tlb.S b/sys/mips/mips/tlb.S
index 28636b1..46a15f8 100644
--- a/sys/mips/mips/tlb.S
+++ b/sys/mips/mips/tlb.S
@@ -81,14 +81,12 @@
#define _MFC0 dmfc0
#define _MTC0 dmtc0
#define WIRED_SHIFT 34
-#define PAGE_SHIFT 34
#else
#define _SLL sll
#define _SRL srl
#define _MFC0 mfc0
#define _MTC0 mtc0
#define WIRED_SHIFT 2
-#define PAGE_SHIFT 2
#endif
.set noreorder # Noreorder is default style!
#if defined(ISA_MIPS32)
@@ -232,28 +230,33 @@ LEAF(Mips_TLBFlush)
mtc0 zero, COP_0_STATUS_REG # Disable interrupts
ITLBNOPFIX
mfc0 t1, COP_0_TLB_WIRED
- li v0, MIPS_KSEG3_START + 0x0fff0000 # invalid address
_MFC0 t0, COP_0_TLB_HI # Save the PID
-
- _MTC0 v0, COP_0_TLB_HI # Mark entry high as invalid
_MTC0 zero, COP_0_TLB_LO0 # Zero out low entry0.
_MTC0 zero, COP_0_TLB_LO1 # Zero out low entry1.
mtc0 zero, COP_0_TLB_PG_MASK # Zero out mask entry.
+ #
+	# Load an invalid entry; each TLB entry should have its own bogus
+	# address calculated by the following expression:
+	# MIPS_KSEG0_START + 2 * i * PAGE_SIZE;
+	# Using one bogus value for every TLB entry might cause an MCHECK exception
+ #
+ sll t3, t1, PGSHIFT + 1
+ li v0, MIPS_KSEG0_START # invalid address
+ addu v0, t3
/*
* Align the starting value (t1) and the upper bound (a0).
*/
1:
mtc0 t1, COP_0_TLB_INDEX # Set the index register.
ITLBNOPFIX
- _MTC0 t0, COP_0_TLB_HI # Restore the PID
+ _MTC0 v0, COP_0_TLB_HI # Mark entry high as invalid
addu t1, t1, 1 # Increment index.
- addu t0, t0, 8 * 1024
+ addu v0, v0, 8 * 1024
MIPS_CPU_NOP_DELAY
tlbwi # Write the TLB entry.
MIPS_CPU_NOP_DELAY
bne t1, a0, 1b
nop
-
_MTC0 t0, COP_0_TLB_HI # Restore the PID
mtc0 v1, COP_0_STATUS_REG # Restore the status register
ITLBNOPFIX
@@ -289,14 +292,14 @@ LEAF(Mips_TLBFlushAddr)
tlbp # Probe for the entry.
MIPS_CPU_NOP_DELAY
mfc0 v0, COP_0_TLB_INDEX # See what we got
- li t1, MIPS_KSEG0_START + 0x0fff0000
+ li t1, MIPS_KSEG0_START
bltz v0, 1f # index < 0 => !found
nop
# Load invalid entry, each TLB entry should have it's own bogus
# address calculated by following expression:
- # MIPS_KSEG0_START + 0x0fff0000 + 2 * i * PAGE_SIZE;
+ # MIPS_KSEG0_START + 2 * i * PAGE_SIZE;
# One bogus value for every TLB entry might cause MCHECK exception
- sll v0, PAGE_SHIFT + 1
+ sll v0, PGSHIFT + 1
addu t1, v0
_MTC0 t1, COP_0_TLB_HI # Mark entry high as invalid
@@ -424,17 +427,17 @@ LEAF(Mips_TLBRead)
MIPS_CPU_NOP_DELAY
mfc0 t2, COP_0_TLB_PG_MASK # fetch the hi entry
_MFC0 t3, COP_0_TLB_HI # fetch the hi entry
- _MFC0 t4, COP_0_TLB_LO0 # See what we got
- _MFC0 t5, COP_0_TLB_LO1 # See what we got
+ _MFC0 ta0, COP_0_TLB_LO0 # See what we got
+ _MFC0 ta1, COP_0_TLB_LO1 # See what we got
_MTC0 t0, COP_0_TLB_HI # restore PID
MIPS_CPU_NOP_DELAY
mtc0 v1, COP_0_STATUS_REG # Restore the status register
ITLBNOPFIX
sw t2, 0(a1)
sw t3, 4(a1)
- sw t4, 8(a1)
+ sw ta0, 8(a1)
j ra
- sw t5, 12(a1)
+ sw ta1, 12(a1)
END(Mips_TLBRead)
/*--------------------------------------------------------------------------
@@ -470,10 +473,19 @@ LEAF(mips_TBIAP)
mfc0 v1, COP_0_STATUS_REG # save status register
mtc0 zero, COP_0_STATUS_REG # disable interrupts
- _MFC0 t4, COP_0_TLB_HI # Get current PID
+ _MFC0 ta0, COP_0_TLB_HI # Get current PID
move t2, a0
mfc0 t1, COP_0_TLB_WIRED
- li v0, MIPS_KSEG0_START + 0x0fff0000 # invalid address
+ #
+	# Load an invalid entry; each TLB entry should have its own bogus
+	# address calculated by the following expression:
+	# MIPS_KSEG0_START + 2 * i * PAGE_SIZE;
+	# Using one bogus value for every TLB entry might cause an MCHECK exception
+ #
+ sll t3, t1, PGSHIFT + 1
+ li v0, MIPS_KSEG0_START # invalid address
+ addu v0, t3
+
mfc0 t3, COP_0_TLB_PG_MASK # save current pgMask
# do {} while (t1 < t2)
@@ -495,11 +507,11 @@ LEAF(mips_TBIAP)
tlbwi # invalidate the TLB entry
2:
addu t1, t1, 1
- addu v0, 1 << (PAGE_SHIFT + 1)
+ addu v0, 1 << (PGSHIFT + 1)
bne t1, t2, 1b
nop
- _MTC0 t4, COP_0_TLB_HI # restore PID
+ _MTC0 ta0, COP_0_TLB_HI # restore PID
mtc0 t3, COP_0_TLB_PG_MASK # restore pgMask
MIPS_CPU_NOP_DELAY
mtc0 v1, COP_0_STATUS_REG # restore status register
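The flush loops above now give every TLB slot its own bogus EntryHi value instead of one shared address; duplicate VPN2 values across slots can raise a machine check on some cores. A minimal sketch of the per-entry address, assuming 4K pages (the macro names and values below are stand-ins for the kernel's own):

	#include <stdint.h>

	#define KSEG0_BASE	0x80000000u	/* MIPS_KSEG0_START */
	#define PAGE_SZ		4096u		/* assumed 4K pages */

	/*
	 * Unique invalid address for TLB slot 'idx': each entry maps a pair
	 * of pages, hence the factor of two (the sll ... PGSHIFT + 1 above).
	 */
	static uint32_t
	tlb_invalid_entryhi(uint32_t idx)
	{
		return (KSEG0_BASE + 2u * idx * PAGE_SZ);
	}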
diff --git a/sys/mips/mips/trap.c b/sys/mips/mips/trap.c
index 3be4dcc..6f5b2ae 100644
--- a/sys/mips/mips/trap.c
+++ b/sys/mips/mips/trap.c
@@ -76,9 +76,9 @@ __FBSDID("$FreeBSD$");
#include <machine/trap.h>
#include <machine/psl.h>
#include <machine/cpu.h>
-#include <machine/intr.h>
#include <machine/pte.h>
#include <machine/pmap.h>
+#include <machine/md_var.h>
#include <machine/mips_opcode.h>
#include <machine/frame.h>
#include <machine/regnum.h>
@@ -99,28 +99,17 @@ __FBSDID("$FreeBSD$");
#ifdef TRAP_DEBUG
int trap_debug = 1;
-
#endif
extern unsigned onfault_table[];
-extern void MipsKernGenException(void);
-extern void MipsUserGenException(void);
-extern void MipsKernIntr(void);
-extern void MipsUserIntr(void);
-extern void MipsTLBInvalidException(void);
-extern void MipsKernTLBInvalidException(void);
-extern void MipsUserTLBInvalidException(void);
-extern void MipsTLBMissException(void);
static void log_bad_page_fault(char *, struct trapframe *, int);
static void log_frame_dump(struct trapframe *frame);
static void get_mapping_info(vm_offset_t, pd_entry_t **, pt_entry_t **);
#ifdef TRAP_DEBUG
static void trap_frame_dump(struct trapframe *frame);
-
#endif
-extern char edata[];
void (*machExceptionTable[]) (void)= {
/*
@@ -232,38 +221,17 @@ char *trap_type[] = {
#if !defined(SMP) && (defined(DDB) || defined(DEBUG))
struct trapdebug trapdebug[TRAPSIZE], *trp = trapdebug;
-
#endif
#if defined(DDB) || defined(DEBUG)
void stacktrace(struct trapframe *);
void logstacktrace(struct trapframe *);
-int kdbpeek(int *);
-
-/* extern functions printed by name in stack backtraces */
-extern void MipsTLBMiss(void);
-extern void MipsUserSyscallException(void);
-extern char _locore[];
-extern char _locoreEnd[];
-
-#endif /* DDB || DEBUG */
-
-extern void MipsSwitchFPState(struct thread *, struct trapframe *);
-extern void MipsFPTrap(u_int, u_int, u_int);
-
-u_int trap(struct trapframe *);
-u_int MipsEmulateBranch(struct trapframe *, int, int, u_int);
+#endif
#define KERNLAND(x) ((int)(x) < 0)
#define DELAYBRANCH(x) ((int)(x) < 0)
/*
- * kdbpeekD(addr) - skip one word starting at 'addr', then read the second word
- */
-#define kdbpeekD(addr) kdbpeek(((int *)(addr)) + 1)
-int rrs_debug = 0;
-
-/*
* MIPS load/store access type
*/
enum {
@@ -306,8 +274,7 @@ extern char *syscallnames[];
* p->p_addr->u_pcb.pcb_onfault is set, otherwise, return old pc.
*/
u_int
-trap(trapframe)
- struct trapframe *trapframe;
+trap(struct trapframe *trapframe)
{
int type, usermode;
int i = 0;
@@ -365,7 +332,7 @@ trap(trapframe)
printf("cpuid = %d\n", PCPU_GET(cpuid));
#endif
MachTLBGetPID(pid);
- printf("badaddr = %p, pc = %p, ra = %p, sp = %p, sr = 0x%x, pid = %d, ASID = 0x%x\n",
+ printf("badaddr = 0x%0x, pc = 0x%0x, ra = 0x%0x, sp = 0x%0x, sr = 0x%x, pid = %d, ASID = 0x%x\n",
trapframe->badvaddr, trapframe->pc, trapframe->ra,
trapframe->sp, trapframe->sr,
(curproc ? curproc->p_pid : -1), pid);
@@ -389,7 +356,7 @@ trap(trapframe)
((type & ~T_USER) != T_SYSCALL)) {
if (++count == 3) {
trap_frame_dump(trapframe);
- panic("too many faults at %p\n", last_badvaddr);
+ panic("too many faults at %x\n", last_badvaddr);
}
} else {
last_badvaddr = this_badvaddr;
@@ -564,7 +531,7 @@ dofault:
--p->p_lock;
PROC_UNLOCK(p);
#ifdef VMFAULT_TRACE
- printf("vm_fault(%x (pmap %x), %x (%x), %x, %d) -> %x at pc %x\n",
+ printf("vm_fault(%p (pmap %p), %x (%x), %x, %d) -> %x at pc %x\n",
map, &vm->vm_pmap, va, trapframe->badvaddr, ftype, VM_FAULT_NORMAL,
rv, trapframe->pc);
#endif
@@ -805,7 +772,7 @@ dofault:
case T_BREAK + T_USER:
{
- unsigned int va, instr;
+ uintptr_t va, instr;
/* compute address of break instruction */
va = trapframe->pc;
@@ -838,13 +805,13 @@ dofault:
case T_IWATCH + T_USER:
case T_DWATCH + T_USER:
{
- unsigned int va;
+ uintptr_t va;
/* compute address of trapped instruction */
va = trapframe->pc;
if (DELAYBRANCH(trapframe->cause))
va += sizeof(int);
- printf("watch exception @ 0x%x\n", va);
+ printf("watch exception @ %p\n", (void *)va);
i = SIGTRAP;
addr = va;
break;
@@ -852,7 +819,7 @@ dofault:
case T_TRAP + T_USER:
{
- unsigned int va, instr;
+ uintptr_t va, instr;
struct trapframe *locr0 = td->td_frame;
/* compute address of trap instruction */
@@ -885,7 +852,7 @@ dofault:
goto err;
break;
case T_COP_UNUSABLE + T_USER:
-#if defined(SOFTFLOAT)
+#if !defined(CPU_HAVEFPU)
/* FP (COP1) instruction */
if ((trapframe->cause & CR_COP_ERR) == 0x10000000) {
i = SIGILL;
@@ -1046,27 +1013,27 @@ trapDump(char *msg)
/*
* Return the resulting PC as if the branch was executed.
*/
-u_int
-MipsEmulateBranch(struct trapframe *framePtr, int instPC, int fpcCSR,
- u_int instptr)
+uintptr_t
+MipsEmulateBranch(struct trapframe *framePtr, uintptr_t instPC, int fpcCSR,
+ uintptr_t instptr)
{
InstFmt inst;
register_t *regsPtr = (register_t *) framePtr;
- unsigned retAddr = 0;
+ uintptr_t retAddr = 0;
int condition;
#define GetBranchDest(InstPtr, inst) \
- ((unsigned)InstPtr + 4 + ((short)inst.IType.imm << 2))
+ (InstPtr + 4 + ((short)inst.IType.imm << 2))
if (instptr) {
if (instptr < MIPS_KSEG0_START)
- inst.word = fuword((void *)instptr);
+ inst.word = fuword32((void *)instptr);
else
inst = *(InstFmt *) instptr;
} else {
if ((vm_offset_t)instPC < MIPS_KSEG0_START)
- inst.word = fuword((void *)instPC);
+ inst.word = fuword32((void *)instPC);
else
inst = *(InstFmt *) instPC;
}
@@ -1124,7 +1091,7 @@ MipsEmulateBranch(struct trapframe *framePtr, int instPC, int fpcCSR,
case OP_J:
case OP_JAL:
retAddr = (inst.JType.target << 2) |
- ((unsigned)instPC & 0xF0000000);
+ ((unsigned)(instPC + 4) & 0xF0000000);
break;
case OP_BEQ:
@@ -1186,348 +1153,53 @@ MipsEmulateBranch(struct trapframe *framePtr, int instPC, int fpcCSR,
#if defined(DDB) || defined(DEBUG)
-#define MIPS_JR_RA 0x03e00008 /* instruction code for jr ra */
-
-/* forward */
-char *fn_name(unsigned addr);
-
/*
* Print a stack backtrace.
*/
void
stacktrace(struct trapframe *regs)
{
- stacktrace_subr(regs, printf);
-}
-
-void
-stacktrace_subr(struct trapframe *regs, int (*printfn) (const char *,...))
-{
- InstFmt i;
- unsigned a0, a1, a2, a3, pc, sp, fp, ra, va, subr;
- unsigned instr, mask;
- unsigned int frames = 0;
- int more, stksize;
-
- /* get initial values from the exception frame */
- sp = regs->sp;
- pc = regs->pc;
- fp = regs->s8;
- ra = regs->ra; /* May be a 'leaf' function */
- a0 = regs->a0;
- a1 = regs->a1;
- a2 = regs->a2;
- a3 = regs->a3;
-
-/* Jump here when done with a frame, to start a new one */
-loop:
-
-/* Jump here after a nonstandard (interrupt handler) frame */
- stksize = 0;
- subr = 0;
- if (frames++ > 100) {
- (*printfn) ("\nstackframe count exceeded\n");
- /* return breaks stackframe-size heuristics with gcc -O2 */
- goto finish; /* XXX */
- }
- /* check for bad SP: could foul up next frame */
- if (sp & 3 || sp < 0x80000000) {
- (*printfn) ("SP 0x%x: not in kernel\n", sp);
- ra = 0;
- subr = 0;
- goto done;
- }
-#define Between(x, y, z) \
- ( ((x) <= (y)) && ((y) < (z)) )
-#define pcBetween(a,b) \
- Between((unsigned)a, pc, (unsigned)b)
-
- /*
- * Check for current PC in exception handler code that don't have a
- * preceding "j ra" at the tail of the preceding function. Depends
- * on relative ordering of functions in exception.S, swtch.S.
- */
- if (pcBetween(MipsKernGenException, MipsUserGenException))
- subr = (unsigned)MipsKernGenException;
- else if (pcBetween(MipsUserGenException, MipsKernIntr))
- subr = (unsigned)MipsUserGenException;
- else if (pcBetween(MipsKernIntr, MipsUserIntr))
- subr = (unsigned)MipsKernIntr;
- else if (pcBetween(MipsUserIntr, MipsTLBInvalidException))
- subr = (unsigned)MipsUserIntr;
- else if (pcBetween(MipsTLBInvalidException,
- MipsKernTLBInvalidException))
- subr = (unsigned)MipsTLBInvalidException;
- else if (pcBetween(MipsKernTLBInvalidException,
- MipsUserTLBInvalidException))
- subr = (unsigned)MipsKernTLBInvalidException;
- else if (pcBetween(MipsUserTLBInvalidException, MipsTLBMissException))
- subr = (unsigned)MipsUserTLBInvalidException;
- else if (pcBetween(cpu_switch, MipsSwitchFPState))
- subr = (unsigned)cpu_switch;
- else if (pcBetween(_locore, _locoreEnd)) {
- subr = (unsigned)_locore;
- ra = 0;
- goto done;
- }
- /* check for bad PC */
- if (pc & 3 || pc < (unsigned)0x80000000 || pc >= (unsigned)edata) {
- (*printfn) ("PC 0x%x: not in kernel\n", pc);
- ra = 0;
- goto done;
- }
- /*
- * Find the beginning of the current subroutine by scanning
- * backwards from the current PC for the end of the previous
- * subroutine.
- */
- if (!subr) {
- va = pc - sizeof(int);
- while ((instr = kdbpeek((int *)va)) != MIPS_JR_RA)
- va -= sizeof(int);
- va += 2 * sizeof(int); /* skip back over branch & delay slot */
- /* skip over nulls which might separate .o files */
- while ((instr = kdbpeek((int *)va)) == 0)
- va += sizeof(int);
- subr = va;
- }
- /* scan forwards to find stack size and any saved registers */
- stksize = 0;
- more = 3;
- mask = 0;
- for (va = subr; more; va += sizeof(int),
- more = (more == 3) ? 3 : more - 1) {
- /* stop if hit our current position */
- if (va >= pc)
- break;
- instr = kdbpeek((int *)va);
- i.word = instr;
- switch (i.JType.op) {
- case OP_SPECIAL:
- switch (i.RType.func) {
- case OP_JR:
- case OP_JALR:
- more = 2; /* stop after next instruction */
- break;
-
- case OP_SYSCALL:
- case OP_BREAK:
- more = 1; /* stop now */
- };
- break;
-
- case OP_BCOND:
- case OP_J:
- case OP_JAL:
- case OP_BEQ:
- case OP_BNE:
- case OP_BLEZ:
- case OP_BGTZ:
- more = 2; /* stop after next instruction */
- break;
-
- case OP_COP0:
- case OP_COP1:
- case OP_COP2:
- case OP_COP3:
- switch (i.RType.rs) {
- case OP_BCx:
- case OP_BCy:
- more = 2; /* stop after next instruction */
- };
- break;
-
- case OP_SW:
- /* look for saved registers on the stack */
- if (i.IType.rs != 29)
- break;
- /* only restore the first one */
- if (mask & (1 << i.IType.rt))
- break;
- mask |= (1 << i.IType.rt);
- switch (i.IType.rt) {
- case 4:/* a0 */
- a0 = kdbpeek((int *)(sp + (short)i.IType.imm));
- break;
-
- case 5:/* a1 */
- a1 = kdbpeek((int *)(sp + (short)i.IType.imm));
- break;
-
- case 6:/* a2 */
- a2 = kdbpeek((int *)(sp + (short)i.IType.imm));
- break;
-
- case 7:/* a3 */
- a3 = kdbpeek((int *)(sp + (short)i.IType.imm));
- break;
-
- case 30: /* fp */
- fp = kdbpeek((int *)(sp + (short)i.IType.imm));
- break;
-
- case 31: /* ra */
- ra = kdbpeek((int *)(sp + (short)i.IType.imm));
- }
- break;
-
- case OP_SD:
- /* look for saved registers on the stack */
- if (i.IType.rs != 29)
- break;
- /* only restore the first one */
- if (mask & (1 << i.IType.rt))
- break;
- mask |= (1 << i.IType.rt);
- switch (i.IType.rt) {
- case 4:/* a0 */
- a0 = kdbpeekD((int *)(sp + (short)i.IType.imm));
- break;
-
- case 5:/* a1 */
- a1 = kdbpeekD((int *)(sp + (short)i.IType.imm));
- break;
-
- case 6:/* a2 */
- a2 = kdbpeekD((int *)(sp + (short)i.IType.imm));
- break;
-
- case 7:/* a3 */
- a3 = kdbpeekD((int *)(sp + (short)i.IType.imm));
- break;
-
- case 30: /* fp */
- fp = kdbpeekD((int *)(sp + (short)i.IType.imm));
- break;
-
- case 31: /* ra */
- ra = kdbpeekD((int *)(sp + (short)i.IType.imm));
- }
- break;
-
- case OP_ADDI:
- case OP_ADDIU:
- /* look for stack pointer adjustment */
- if (i.IType.rs != 29 || i.IType.rt != 29)
- break;
- stksize = -((short)i.IType.imm);
- }
- }
-
-done:
- (*printfn) ("%s+%x (%x,%x,%x,%x) ra %x sz %d\n",
- fn_name(subr), pc - subr, a0, a1, a2, a3, ra, stksize);
-
- if (ra) {
- if (pc == ra && stksize == 0)
- (*printfn) ("stacktrace: loop!\n");
- else {
- pc = ra;
- sp += stksize;
- ra = 0;
- goto loop;
- }
- } else {
-finish:
- if (curproc)
- (*printfn) ("pid %d\n", curproc->p_pid);
- else
- (*printfn) ("curproc NULL\n");
- }
+ stacktrace_subr(regs->pc, regs->sp, regs->ra, printf);
}
-
-/*
- * Functions ``special'' enough to print by name
- */
-#ifdef __STDC__
-#define Name(_fn) { (void*)_fn, # _fn }
-#else
-#define Name(_fn) { _fn, "_fn"}
#endif
-static struct {
- void *addr;
- char *name;
-} names[] = {
-
- Name(trap),
- Name(MipsKernGenException),
- Name(MipsUserGenException),
- Name(MipsKernIntr),
- Name(MipsUserIntr),
- Name(cpu_switch),
- {
- 0, 0
- }
-};
-
-/*
- * Map a function address to a string name, if known; or a hex string.
- */
-char *
-fn_name(unsigned addr)
-{
- static char buf[17];
- int i = 0;
-
-#ifdef DDB
- db_expr_t diff;
- c_db_sym_t sym;
- char *symname;
-
- diff = 0;
- symname = NULL;
- sym = db_search_symbol((db_addr_t)addr, DB_STGY_ANY, &diff);
- db_symbol_values(sym, (const char **)&symname, (db_expr_t *)0);
- if (symname && diff == 0)
- return (symname);
-#endif
-
- for (i = 0; names[i].name; i++)
- if (names[i].addr == (void *)addr)
- return (names[i].name);
- sprintf(buf, "%x", addr);
- return (buf);
-}
-
-#endif /* DDB */
static void
log_frame_dump(struct trapframe *frame)
{
log(LOG_ERR, "Trapframe Register Dump:\n");
- log(LOG_ERR, "\tzero: %08x\tat: %08x\tv0: %08x\tv1: %08x\n",
- 0, frame->ast, frame->v0, frame->v1);
+ log(LOG_ERR, "\tzero: %p\tat: %p\tv0: %p\tv1: %p\n",
+ (void *)0, (void *)frame->ast, (void *)frame->v0, (void *)frame->v1);
- log(LOG_ERR, "\ta0: %08x\ta1: %08x\ta2: %08x\ta3: %08x\n",
- frame->a0, frame->a1, frame->a2, frame->a3);
+ log(LOG_ERR, "\ta0: %p\ta1: %p\ta2: %p\ta3: %p\n",
+ (void *)frame->a0, (void *)frame->a1, (void *)frame->a2, (void *)frame->a3);
- log(LOG_ERR, "\tt0: %08x\tt1: %08x\tt2: %08x\tt3: %08x\n",
- frame->t0, frame->t1, frame->t2, frame->t3);
+ log(LOG_ERR, "\tt0: %p\tt1: %p\tt2: %p\tt3: %p\n",
+ (void *)frame->t0, (void *)frame->t1, (void *)frame->t2, (void *)frame->t3);
- log(LOG_ERR, "\tt4: %08x\tt5: %08x\tt6: %08x\tt7: %08x\n",
- frame->t4, frame->t5, frame->t6, frame->t7);
+ log(LOG_ERR, "\tt4: %p\tt5: %p\tt6: %p\tt7: %p\n",
+ (void *)frame->t4, (void *)frame->t5, (void *)frame->t6, (void *)frame->t7);
- log(LOG_ERR, "\tt8: %08x\tt9: %08x\ts0: %08x\ts1: %08x\n",
- frame->t8, frame->t9, frame->s0, frame->s1);
+ log(LOG_ERR, "\tt8: %p\tt9: %p\ts0: %p\ts1: %p\n",
+ (void *)frame->t8, (void *)frame->t9, (void *)frame->s0, (void *)frame->s1);
- log(LOG_ERR, "\ts2: %08x\ts3: %08x\ts4: %08x\ts5: %08x\n",
- frame->s2, frame->s3, frame->s4, frame->s5);
+ log(LOG_ERR, "\ts2: %p\ts3: %p\ts4: %p\ts5: %p\n",
+ (void *)frame->s2, (void *)frame->s3, (void *)frame->s4, (void *)frame->s5);
- log(LOG_ERR, "\ts6: %08x\ts7: %08x\tk0: %08x\tk1: %08x\n",
- frame->s6, frame->s7, frame->k0, frame->k1);
+ log(LOG_ERR, "\ts6: %p\ts7: %p\tk0: %p\tk1: %p\n",
+ (void *)frame->s6, (void *)frame->s7, (void *)frame->k0, (void *)frame->k1);
- log(LOG_ERR, "\tgp: %08x\tsp: %08x\ts8: %08x\tra: %08x\n",
- frame->gp, frame->sp, frame->s8, frame->ra);
+ log(LOG_ERR, "\tgp: %p\tsp: %p\ts8: %p\tra: %p\n",
+ (void *)frame->gp, (void *)frame->sp, (void *)frame->s8, (void *)frame->ra);
- log(LOG_ERR, "\tsr: %08x\tmullo: %08x\tmulhi: %08x\tbadvaddr: %08x\n",
- frame->sr, frame->mullo, frame->mulhi, frame->badvaddr);
+ log(LOG_ERR, "\tsr: %p\tmullo: %p\tmulhi: %p\tbadvaddr: %p\n",
+ (void *)frame->sr, (void *)frame->mullo, (void *)frame->mulhi, (void *)frame->badvaddr);
#ifdef IC_REG
- log(LOG_ERR, "\tcause: %08x\tpc: %08x\tic: %08x\n",
- frame->cause, frame->pc, frame->ic);
+ log(LOG_ERR, "\tcause: %p\tpc: %p\tic: %p\n",
+ (void *)frame->cause, (void *)frame->pc, (void *)frame->ic);
#else
- log(LOG_ERR, "\tcause: %08x\tpc: %08x\n",
- frame->cause, frame->pc);
+ log(LOG_ERR, "\tcause: %p\tpc: %p\n",
+ (void *)frame->cause, (void *)frame->pc);
#endif
}
@@ -1536,39 +1208,39 @@ static void
trap_frame_dump(struct trapframe *frame)
{
printf("Trapframe Register Dump:\n");
- printf("\tzero: %08x\tat: %08x\tv0: %08x\tv1: %08x\n",
- 0, frame->ast, frame->v0, frame->v1);
+ printf("\tzero: %p\tat: %p\tv0: %p\tv1: %p\n",
+ (void *)0, (void *)frame->ast, (void *)frame->v0, (void *)frame->v1);
- printf("\ta0: %08x\ta1: %08x\ta2: %08x\ta3: %08x\n",
- frame->a0, frame->a1, frame->a2, frame->a3);
+ printf("\ta0: %p\ta1: %p\ta2: %p\ta3: %p\n",
+ (void *)frame->a0, (void *)frame->a1, (void *)frame->a2, (void *)frame->a3);
- printf("\tt0: %08x\tt1: %08x\tt2: %08x\tt3: %08x\n",
- frame->t0, frame->t1, frame->t2, frame->t3);
+ printf("\tt0: %p\tt1: %p\tt2: %p\tt3: %p\n",
+ (void *)frame->t0, (void *)frame->t1, (void *)frame->t2, (void *)frame->t3);
- printf("\tt4: %08x\tt5: %08x\tt6: %08x\tt7: %08x\n",
- frame->t4, frame->t5, frame->t6, frame->t7);
+ printf("\tt4: %p\tt5: %p\tt6: %p\tt7: %p\n",
+ (void *)frame->t4, (void *)frame->t5, (void *)frame->t6, (void *)frame->t7);
- printf("\tt8: %08x\tt9: %08x\ts0: %08x\ts1: %08x\n",
- frame->t8, frame->t9, frame->s0, frame->s1);
+ printf("\tt8: %p\tt9: %p\ts0: %p\ts1: %p\n",
+ (void *)frame->t8, (void *)frame->t9, (void *)frame->s0, (void *)frame->s1);
- printf("\ts2: %08x\ts3: %08x\ts4: %08x\ts5: %08x\n",
- frame->s2, frame->s3, frame->s4, frame->s5);
+ printf("\ts2: %p\ts3: %p\ts4: %p\ts5: %p\n",
+ (void *)frame->s2, (void *)frame->s3, (void *)frame->s4, (void *)frame->s5);
- printf("\ts6: %08x\ts7: %08x\tk0: %08x\tk1: %08x\n",
- frame->s6, frame->s7, frame->k0, frame->k1);
+ printf("\ts6: %p\ts7: %p\tk0: %p\tk1: %p\n",
+ (void *)frame->s6, (void *)frame->s7, (void *)frame->k0, (void *)frame->k1);
- printf("\tgp: %08x\tsp: %08x\ts8: %08x\tra: %08x\n",
- frame->gp, frame->sp, frame->s8, frame->ra);
+ printf("\tgp: %p\tsp: %p\ts8: %p\tra: %p\n",
+ (void *)frame->gp, (void *)frame->sp, (void *)frame->s8, (void *)frame->ra);
- printf("\tsr: %08x\tmullo: %08x\tmulhi: %08x\tbadvaddr: %08x\n",
- frame->sr, frame->mullo, frame->mulhi, frame->badvaddr);
+ printf("\tsr: %p\tmullo: %p\tmulhi: %p\tbadvaddr: %p\n",
+ (void *)frame->sr, (void *)frame->mullo, (void *)frame->mulhi, (void *)frame->badvaddr);
#ifdef IC_REG
- printf("\tcause: %08x\tpc: %08x\tic: %08x\n",
- frame->cause, frame->pc, frame->ic);
+ printf("\tcause: %p\tpc: %p\tic: %p\n",
+ (void *)frame->cause, (void *)frame->pc, (void *)frame->ic);
#else
- printf("\tcause: %08x\tpc: %08x\n",
- frame->cause, frame->pc);
+ printf("\tcause: %p\tpc: %p\n",
+ (void *)frame->cause, (void *)frame->pc);
#endif
}
@@ -1623,12 +1295,12 @@ log_bad_page_fault(char *msg, struct trapframe *frame, int trap_type)
}
pc = frame->pc + (DELAYBRANCH(frame->cause) ? 4 : 0);
- log(LOG_ERR, "%s: pid %d (%s), uid %d: pc 0x%x got a %s fault at 0x%x\n",
+ log(LOG_ERR, "%s: pid %d (%s), uid %d: pc %p got a %s fault at %p\n",
msg, p->p_pid, p->p_comm,
p->p_ucred ? p->p_ucred->cr_uid : -1,
- pc,
+ (void *)pc,
read_or_write,
- frame->badvaddr);
+ (void *)frame->badvaddr);
/* log registers in trap frame */
log_frame_dump(frame);
@@ -1643,8 +1315,8 @@ log_bad_page_fault(char *msg, struct trapframe *frame, int trap_type)
(trap_type != T_BUS_ERR_IFETCH) &&
useracc((caddr_t)pc, sizeof(int) * 4, VM_PROT_READ)) {
/* dump page table entry for faulting instruction */
- log(LOG_ERR, "Page table info for pc address 0x%x: pde = %p, pte = 0x%lx\n",
- pc, *pdep, ptep ? *ptep : 0);
+ log(LOG_ERR, "Page table info for pc address %p: pde = %p, pte = 0x%lx\n",
+ (void *)pc, *pdep, ptep ? *ptep : 0);
addr = (unsigned int *)pc;
log(LOG_ERR, "Dumping 4 words starting at pc address %p: \n",
@@ -1652,8 +1324,8 @@ log_bad_page_fault(char *msg, struct trapframe *frame, int trap_type)
log(LOG_ERR, "%08x %08x %08x %08x\n",
addr[0], addr[1], addr[2], addr[3]);
} else {
- log(LOG_ERR, "pc address 0x%x is inaccessible, pde = 0x%p, pte = 0x%lx\n",
- pc, *pdep, ptep ? *ptep : 0);
+ log(LOG_ERR, "pc address %p is inaccessible, pde = 0x%p, pte = 0x%lx\n",
+ (void *)pc, *pdep, ptep ? *ptep : 0);
}
/* panic("Bad trap");*/
}
@@ -1762,8 +1434,9 @@ emulate_unaligned_access(struct trapframe *frame)
else
frame->pc += 4;
- log(LOG_INFO, "Unaligned %s: pc=0x%x, badvaddr=0x%x\n",
- access_name[access_type - 1], pc, frame->badvaddr);
+ log(LOG_INFO, "Unaligned %s: pc=%p, badvaddr=%p\n",
+ access_name[access_type - 1], (void *)pc,
+ (void *)frame->badvaddr);
}
}
return access_type;
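Earlier in this file's diff, MipsEmulateBranch() is changed to form the J/JAL target from the address of the delay-slot instruction (pc + 4) rather than from the jump itself. A small sketch of that computation; the function name is illustrative:

	#include <stdint.h>

	/*
	 * J/JAL target: the 26-bit index field shifted left by two, merged
	 * with the top four bits of the delay-slot address (pc + 4), i.e.
	 * the 256MB segment the delay slot lives in.
	 */
	static uint32_t
	jump_target(uint32_t pc, uint32_t insn)
	{
		uint32_t index = (insn & 0x03ffffffu) << 2;

		return (index | ((pc + 4) & 0xf0000000u));
	}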
diff --git a/sys/mips/mips/vm_machdep.c b/sys/mips/mips/vm_machdep.c
index 918c5f0..57fe355 100644
--- a/sys/mips/mips/vm_machdep.c
+++ b/sys/mips/mips/vm_machdep.c
@@ -41,6 +41,9 @@
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
+#include "opt_cputype.h"
+#include "opt_ddb.h"
+
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
@@ -53,11 +56,11 @@ __FBSDID("$FreeBSD$");
#include <sys/sysctl.h>
#include <sys/unistd.h>
+#include <machine/cache.h>
#include <machine/clock.h>
#include <machine/cpu.h>
#include <machine/md_var.h>
#include <machine/pcb.h>
-#include <machine/pltfm.h>
#include <vm/vm.h>
#include <vm/vm_param.h>
@@ -142,14 +145,14 @@ cpu_fork(register struct thread *td1,register struct proc *p2,
if (td1 == PCPU_GET(fpcurthread))
MipsSaveCurFPState(td1);
- pcb2->pcb_context.val[PCB_REG_RA] = (register_t)fork_trampoline;
+ pcb2->pcb_context[PCB_REG_RA] = (register_t)fork_trampoline;
/* Make sp 64-bit aligned */
- pcb2->pcb_context.val[PCB_REG_SP] = (register_t)(((vm_offset_t)td2->td_pcb &
+ pcb2->pcb_context[PCB_REG_SP] = (register_t)(((vm_offset_t)td2->td_pcb &
~(sizeof(__int64_t) - 1)) - STAND_FRAME_SIZE);
- pcb2->pcb_context.val[PCB_REG_S0] = (register_t)fork_return;
- pcb2->pcb_context.val[PCB_REG_S1] = (register_t)td2;
- pcb2->pcb_context.val[PCB_REG_S2] = (register_t)td2->td_frame;
- pcb2->pcb_context.val[PCB_REG_SR] = SR_INT_MASK;
+ pcb2->pcb_context[PCB_REG_S0] = (register_t)fork_return;
+ pcb2->pcb_context[PCB_REG_S1] = (register_t)td2;
+ pcb2->pcb_context[PCB_REG_S2] = (register_t)td2->td_frame;
+ pcb2->pcb_context[PCB_REG_SR] = SR_INT_MASK & mips_rd_status();
/*
* FREEBSD_DEVELOPERS_FIXME:
* Setup any other CPU-Specific registers (Not MIPS Standard)
@@ -157,10 +160,11 @@ cpu_fork(register struct thread *td1,register struct proc *p2,
* that are needed.
*/
+ td2->td_md.md_tls = td1->td_md.md_tls;
td2->td_md.md_saved_intr = MIPS_SR_INT_IE;
td2->td_md.md_spinlock_count = 1;
#ifdef TARGET_OCTEON
- pcb2->pcb_context.val[PCB_REG_SR] |= MIPS_SR_COP_2_BIT | MIPS32_SR_PX | MIPS_SR_UX | MIPS_SR_KX | MIPS_SR_SX;
+ pcb2->pcb_context[PCB_REG_SR] |= MIPS_SR_COP_2_BIT | MIPS32_SR_PX | MIPS_SR_UX | MIPS_SR_KX | MIPS_SR_SX;
#endif
}
@@ -178,8 +182,8 @@ cpu_set_fork_handler(struct thread *td, void (*func) __P((void *)), void *arg)
* Note that the trap frame follows the args, so the function
* is really called like this: func(arg, frame);
*/
- td->td_pcb->pcb_context.val[PCB_REG_S0] = (register_t) func;
- td->td_pcb->pcb_context.val[PCB_REG_S1] = (register_t) arg;
+ td->td_pcb->pcb_context[PCB_REG_S0] = (register_t) func;
+ td->td_pcb->pcb_context[PCB_REG_S1] = (register_t) arg;
}
void
@@ -348,20 +352,18 @@ cpu_set_upcall(struct thread *td, struct thread *td0)
* Set registers for trampoline to user mode.
*/
- pcb2->pcb_context.val[PCB_REG_RA] = (register_t)fork_trampoline;
+ pcb2->pcb_context[PCB_REG_RA] = (register_t)fork_trampoline;
/* Make sp 64-bit aligned */
- pcb2->pcb_context.val[PCB_REG_SP] = (register_t)(((vm_offset_t)td->td_pcb &
+ pcb2->pcb_context[PCB_REG_SP] = (register_t)(((vm_offset_t)td->td_pcb &
~(sizeof(__int64_t) - 1)) - STAND_FRAME_SIZE);
- pcb2->pcb_context.val[PCB_REG_S0] = (register_t)fork_return;
- pcb2->pcb_context.val[PCB_REG_S1] = (register_t)td;
- pcb2->pcb_context.val[PCB_REG_S2] = (register_t)td->td_frame;
-
-
+ pcb2->pcb_context[PCB_REG_S0] = (register_t)fork_return;
+ pcb2->pcb_context[PCB_REG_S1] = (register_t)td;
+ pcb2->pcb_context[PCB_REG_S2] = (register_t)td->td_frame;
/* Dont set IE bit in SR. sched lock release will take care of it */
-/* idle_mask is jmips pcb2->pcb_context.val[11] = (ALL_INT_MASK & idle_mask); */
- pcb2->pcb_context.val[PCB_REG_SR] = SR_INT_MASK;
+ pcb2->pcb_context[PCB_REG_SR] = SR_INT_MASK & mips_rd_status();
+
#ifdef TARGET_OCTEON
- pcb2->pcb_context.val[PCB_REG_SR] |= MIPS_SR_COP_2_BIT | MIPS_SR_COP_0_BIT |
+ pcb2->pcb_context[PCB_REG_SR] |= MIPS_SR_COP_2_BIT | MIPS_SR_COP_0_BIT |
MIPS32_SR_PX | MIPS_SR_UX | MIPS_SR_KX | MIPS_SR_SX;
#endif
@@ -392,14 +394,14 @@ cpu_set_upcall_kse(struct thread *td, void (*entry)(void *), void *arg,
stack_t *stack)
{
struct trapframe *tf;
- u_int32_t sp;
+ register_t sp;
/*
* At the point where a function is called, sp must be 8
* byte aligned[for compatibility with 64-bit CPUs]
* in ``See MIPS Run'' by D. Sweetman, p. 269
* align stack */
- sp = ((uint32_t)(stack->ss_sp + stack->ss_size) & ~0x7) -
+ sp = ((register_t)(stack->ss_sp + stack->ss_size) & ~0x7) -
STAND_FRAME_SIZE;
/*
@@ -410,9 +412,18 @@ cpu_set_upcall_kse(struct thread *td, void (*entry)(void *), void *arg,
bzero(tf, sizeof(struct trapframe));
tf->sp = (register_t)sp;
tf->pc = (register_t)entry;
+ /*
+	 * The MIPS ABI requires T9 to be the same as the PC
+	 * at a subroutine's entry point
+ */
+ tf->t9 = (register_t)entry;
tf->a0 = (register_t)arg;
- tf->sr = SR_KSU_USER | SR_EXL;
+ /*
+ * Keep interrupt mask
+ */
+ tf->sr = SR_KSU_USER | SR_EXL | (SR_INT_MASK & mips_rd_status()) |
+ MIPS_SR_INT_IE;
#ifdef TARGET_OCTEON
tf->sr |= MIPS_SR_INT_IE | MIPS_SR_COP_0_BIT | MIPS_SR_UX |
MIPS_SR_KX;
@@ -448,34 +459,6 @@ kvtop(void *addr)
#define ZIDLE_HI(v) ((v) * 4 / 5)
/*
- * Tell whether this address is in some physical memory region.
- * Currently used by the kernel coredump code in order to avoid
- * dumping non-memory physical address space.
- */
-int
-is_physical_memory(vm_offset_t addr)
-{
- if (addr >= SDRAM_ADDR_START && addr <= SDRAM_ADDR_END)
- return 1;
- else
- return 0;
-}
-
-int
-is_cacheable_mem(vm_offset_t pa)
-{
- if ((pa >= SDRAM_ADDR_START && pa <= SDRAM_ADDR_END) ||
-#ifdef FLASH_ADDR_START
- (pa >= FLASH_ADDR_START && pa <= FLASH_ADDR_END))
-#else
- 0)
-#endif
- return 1;
- else
- return 0;
-}
-
-/*
* Allocate a pool of sf_bufs (sendfile(2) or "super-fast" if you prefer. :-))
*/
static void
@@ -523,6 +506,12 @@ sf_buf_alloc(struct vm_page *m, int flags)
nsfbufsused++;
nsfbufspeak = imax(nsfbufspeak, nsfbufsused);
}
+ /*
+	 * Flush all mappings in order to have up-to-date
+	 * physical memory
+ */
+ pmap_flush_pvcache(sf->m);
+ mips_dcache_inv_range(sf->kva, PAGE_SIZE);
goto done;
}
}
@@ -564,6 +553,10 @@ sf_buf_free(struct sf_buf *sf)
{
mtx_lock(&sf_buf_lock);
sf->ref_count--;
+ /*
+ * Make sure all changes in KVA end up in physical memory
+ */
+ mips_dcache_wbinv_range(sf->kva, PAGE_SIZE);
if (sf->ref_count == 0) {
TAILQ_INSERT_TAIL(&sf_buf_freelist, sf, free_entry);
nsfbufsused--;
@@ -585,7 +578,7 @@ int
cpu_set_user_tls(struct thread *td, void *tls_base)
{
- /* TBD */
+ td->td_md.md_tls = tls_base;
return (0);
}
@@ -596,3 +589,99 @@ cpu_throw(struct thread *old, struct thread *new)
func_2args_asmmacro(&mips_cpu_throw, old, new);
panic("mips_cpu_throw() returned");
}
+
+#ifdef DDB
+#include <ddb/ddb.h>
+
+#define DB_PRINT_REG(ptr, regname) \
+ db_printf(" %-12s 0x%lx\n", #regname, (long)((ptr)->regname))
+
+#define DB_PRINT_REG_ARRAY(ptr, arrname, regname) \
+ db_printf(" %-12s 0x%lx\n", #regname, (long)((ptr)->arrname[regname]))
+
+DB_SHOW_COMMAND(pcb, ddb_dump_pcb)
+{
+ struct thread *td;
+ struct pcb *pcb;
+ struct trapframe *trapframe;
+
+ /* Determine which thread to examine. */
+ if (have_addr)
+ td = db_lookup_thread(addr, FALSE);
+ else
+ td = curthread;
+
+ pcb = td->td_pcb;
+
+ db_printf("Thread %d at %p\n", td->td_tid, td);
+
+ db_printf("PCB at %p\n", pcb);
+
+ trapframe = &pcb->pcb_regs;
+ db_printf("Trapframe at %p\n", trapframe);
+ DB_PRINT_REG(trapframe, zero);
+ DB_PRINT_REG(trapframe, ast);
+ DB_PRINT_REG(trapframe, v0);
+ DB_PRINT_REG(trapframe, v1);
+ DB_PRINT_REG(trapframe, a0);
+ DB_PRINT_REG(trapframe, a1);
+ DB_PRINT_REG(trapframe, a2);
+ DB_PRINT_REG(trapframe, a3);
+ DB_PRINT_REG(trapframe, t0);
+ DB_PRINT_REG(trapframe, t1);
+ DB_PRINT_REG(trapframe, t2);
+ DB_PRINT_REG(trapframe, t3);
+ DB_PRINT_REG(trapframe, t4);
+ DB_PRINT_REG(trapframe, t5);
+ DB_PRINT_REG(trapframe, t6);
+ DB_PRINT_REG(trapframe, t7);
+ DB_PRINT_REG(trapframe, s0);
+ DB_PRINT_REG(trapframe, s1);
+ DB_PRINT_REG(trapframe, s2);
+ DB_PRINT_REG(trapframe, s3);
+ DB_PRINT_REG(trapframe, s4);
+ DB_PRINT_REG(trapframe, s5);
+ DB_PRINT_REG(trapframe, s6);
+ DB_PRINT_REG(trapframe, s7);
+ DB_PRINT_REG(trapframe, t8);
+ DB_PRINT_REG(trapframe, t9);
+ DB_PRINT_REG(trapframe, k0);
+ DB_PRINT_REG(trapframe, k1);
+ DB_PRINT_REG(trapframe, gp);
+ DB_PRINT_REG(trapframe, sp);
+ DB_PRINT_REG(trapframe, s8);
+ DB_PRINT_REG(trapframe, ra);
+ DB_PRINT_REG(trapframe, sr);
+ DB_PRINT_REG(trapframe, mullo);
+ DB_PRINT_REG(trapframe, mulhi);
+ DB_PRINT_REG(trapframe, badvaddr);
+ DB_PRINT_REG(trapframe, cause);
+ DB_PRINT_REG(trapframe, pc);
+
+ db_printf("PCB Context:\n");
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S0);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S1);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S2);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S3);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S4);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S5);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S6);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S7);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_SP);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_S8);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_RA);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_SR);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_GP);
+ DB_PRINT_REG_ARRAY(pcb, pcb_context, PCB_REG_PC);
+
+ db_printf("PCB onfault = %d\n", pcb->pcb_onfault);
+ db_printf("md_saved_intr = 0x%0lx\n", (long)td->td_md.md_saved_intr);
+ db_printf("md_spinlock_count = %d\n", td->td_md.md_spinlock_count);
+
+ if (td->td_frame != trapframe) {
+ db_printf("td->td_frame %p is not the same as pcb_regs %p\n",
+ td->td_frame, trapframe);
+ }
+}
+
+#endif /* DDB */
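The new DDB command can be run from the kernel debugger prompt either for the current thread or, via db_lookup_thread(), for an explicit thread address; the address below is only a placeholder:

	db> show pcb
	db> show pcb 0xc7a1e000

The output is the trapframe register dump, the saved PCB context and the onfault/spinlock state printed by the db_printf() calls above.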