| |
While this is generally good, it brings along a series of problems,
such as clocks drifting out of sync and, with SW_WATCHDOG enabled,
watchdogs firing without a good reason (missed hardclock wdog tick updates).
Fix the latter by kicking the watchdog just before re-enabling interrupts.
Also, while here, do not rely on users to stop the watchdog manually when
entering DDB; instead, stop it when entering the KDB context.
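A minimal sketch of the intended ordering when leaving the debugger,
assuming the wdog_kern_pat()/WD_LASTVAL interface from <sys/watchdog.h>
(the surrounding code is illustrative, not the exact kdb_trap() body):

    #ifdef SW_WATCHDOG
            wdog_kern_pat(WD_LASTVAL);      /* kick the watchdog first ... */
    #endif
            intr_restore(intr);             /* ... then re-enable interrupts */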
Sponsored by: Sandvine Incorporated
Reviewed by: emaste, rstone
Approved by: re (kib)
MFC after: 1 week
| |
accessible:
(1) Always compile in support for breaking into the debugger if options
KDB is present in the kernel.
(2) Disable both by default, but allow them to be enabled via tunables
and sysctls debug.kdb.break_to_debugger and
debug.kdb.alt_break_to_debugger.
(3) options BREAK_TO_DEBUGGER and options ALT_BREAK_TO_DEBUGGER continue
to behave as before -- only now instead of compiling in
break-to-debugger support, they change the default values of the
above sysctls to enable those features by default. Current kernel
configurations should, therefore, continue to behave as expected.
(4) Migrate alternative break-to-debugger state machine logic out of
individual device drivers into centralised KDB code. This has a
number of upsides, but also one downside: it's now tricky to release
sio spin locks when entering the debugger, so we don't. However,
similar logic does not exist in other device drivers, including uart.
(5) dcons requires some special handling; unlike other console types, it
allows overriding KDB's own debugger selection, so we need a new
interface to KDB to allow that to work.
GENERIC kernels in -CURRENT will now support break-to-debugger as long as
appropriate boot/run-time options are set, which should improve the
debuggability of BETA kernels significantly.
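Roughly, items (2) and (3) map a kernel option onto a tunable/sysctl
default. A sketch of that pattern (the declarations are illustrative; the
actual ones in subr_kdb.c may differ):

    #ifdef BREAK_TO_DEBUGGER
    #define KDB_BREAK_TO_DEBUGGER   1
    #else
    #define KDB_BREAK_TO_DEBUGGER   0
    #endif

    static int kdb_break_to_debugger = KDB_BREAK_TO_DEBUGGER;
    TUNABLE_INT("debug.kdb.break_to_debugger", &kdb_break_to_debugger);
    SYSCTL_INT(_debug_kdb, OID_AUTO, break_to_debugger, CTLFLAG_RW,
        &kdb_break_to_debugger, 0, "Enable break-to-debugger");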
MFC after: 3 weeks
Reviewed by: kib, nwhitehorn
Approved by: re (bz)
| |
mask of CPUs, pc_other_cpus and pc_cpumask become highly inefficient.
Remove them and replace their usage with custom pc_cpuid logic (at the
moment, pc_cpumask can easily be represented by (1 << pc_cpuid) and
pc_other_cpus by (all_cpus & ~(1 << pc_cpuid))).
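For illustration, a call site that used the cached per-CPU masks might
now derive them from pc_cpuid instead (sketch only, using the standard
<sys/cpuset.h> macros):

    cpuset_t mask, other;

    CPU_ZERO(&mask);
    CPU_SET(PCPU_GET(cpuid), &mask);        /* was PCPU_GET(cpumask) */
    other = all_cpus;
    CPU_CLR(PCPU_GET(cpuid), &other);       /* was PCPU_GET(other_cpus) */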
This change is not targeted for MFC because it removes struct pcpu
members and depends on the cpumask_t retirement.
MD review by: marcel, marius, alc
Tested by: pluknet
MD testing by: marcel, marius, gonzo, andreast
| |
... and thus retire debug.kdb.stop_cpus tunable/sysctl.
The knob was there to work around CPU stopping issues, which have since
been either fixed or greatly reduced. kdb should really operate in a
special environment, with the scheduler stopped and interrupts disabled,
to provide deterministic debugging.
Discussed with: attilio, rwatson
X-MFC after: 2 months or never
| |
Modify the "alternate break sequence" detection state machine so that
only a contiguous invocation of the break sequence is accepted. The old
implementation did not reset the state machine when it detected an
unexpected character.
While here, use an enum for the states of the machine instead of magic
numbers.
Submitted by:
Sponsored by: Spectra Logic Corporation
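The shape of the change, as a rough sketch (state and function names are
illustrative, not the actual identifiers in subr_kdb.c):

    enum alt_break_state {
            ALT_BREAK_SEEN_NONE = 0,        /* waiting for CR */
            ALT_BREAK_SEEN_CR,              /* CR seen, waiting for '~' */
            ALT_BREAK_SEEN_CR_TILDE         /* "CR ~" seen, waiting for ctrl-B */
    };

    static int
    alt_break(int ch, enum alt_break_state *state)
    {
            switch (*state) {
            case ALT_BREAK_SEEN_NONE:
                    if (ch == '\r' || ch == '\n')
                            *state = ALT_BREAK_SEEN_CR;
                    return (0);
            case ALT_BREAK_SEEN_CR:
                    if (ch == '~') {
                            *state = ALT_BREAK_SEEN_CR_TILDE;
                            return (0);
                    }
                    break;
            case ALT_BREAK_SEEN_CR_TILDE:
                    if (ch == 0x02) {               /* ctrl-B */
                            *state = ALT_BREAK_SEEN_NONE;
                            return (1);             /* enter the debugger */
                    }
                    break;
            }
            *state = ALT_BREAK_SEEN_NONE;   /* unexpected character: reset */
            return (0);
    }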
| |
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and then support for a number of CPUs greater than 32 (the
current limit).
Right now, cpumask_t is an int, 32 bits on all our supported architectures.
cpuset_t, on the other hand, is implemented as an array of longs, and is
easily extensible by definition.
The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN
while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.
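The flavor of the conversion, as a sketch: integer bit operations on
cpumask_t become macro-based operations on an opaque cpuset_t
(illustrative code, not taken from the patch; do_something() is a
placeholder):

    /* Before: plain integer mask. */
    cpumask_t mask = 0;

    mask |= (1 << cpu);
    if (mask & (1 << cpu))
            do_something();

    /* After: opaque set manipulated only through the CPU_*() macros
     * from <sys/cpuset.h>. */
    cpuset_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (CPU_ISSET(cpu, &set))
            do_something();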
Some technical notes:
- This commit may be considered an ABI no-op for all architectures other
  than amd64 and ia64 (and sparc64 in the future)
- per-cpu members which are now converted to cpuset_t need to be accessed
  while avoiding migration, because the size of cpuset_t should be
  considered unknown
- the size of cpuset_t objects differs between kernel and userland (this
  is primarily done in order to leave some more space in userland to cope
  with KBI extensions). If you need to access a kernel cpuset_t from
  userland, please refer to the examples in this patch on how to do that
  correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now
The patch has been tested by sbruno and Nicholas Esborn on a 4 x 12
Opteron machine. More testing on big SMP systems is expected to come
soon. pluknet tested the patch on his 8-way machines on both amd64 and
i386.
Tested by: pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by: jeff, jhb, sbruno
| |
the debugger back-end has changed. This means that switching from ddb
to gdb no longer requires a "step" which can be dangerous on an
already-crashed kernel.
Also add a capability to get from the gdb back-end back to ddb, by
typing ^C in the console window.
While here, simplify kdb_sysctl_available() by using
sbuf_new_for_sysctl(), and use strlcpy() instead of strncpy(), since the
strlcpy() semantics are desired.
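For reference, the semantic difference being relied on (plain
illustration, not code from this commit):

    char buf[16];

    /* strncpy() may leave buf without a terminating NUL if src is long. */
    strncpy(buf, src, sizeof(buf));

    /* strlcpy() always NUL-terminates, truncating if necessary. */
    strlcpy(buf, src, sizeof(buf));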
MFC after: 1 month
| |
MFC after: 1 week
| |
This is a follow-up to r212964.
The stack_print() call chain obtains the linker sx lock and thus may
potentially lead to a deadlock, depending on the kind of panic.
stack_print_ddb() doesn't acquire any locks and doesn't use any
facilities of the ddb backend.
Using stack_print_ddb() outside of the DDB ifdef required taking a
number of helper functions out from under it as well.
It would be a good idea to rename the linker_ddb_* and stack_*_ddb
functions to have an 'unlocked' component in their name instead of
'ddb', because those functions do not use any DDB services; instead they
provide unlocked access to linker symbol information. The latter was
previously needed only for DDB, hence the 'ddb' name component.
An alternative is to ditch the unlocked versions altogether after
implementing proper panic handling:
1. stop other cpus upon a panic
2. make all non-spinlock lock operations (mutex, sx, rwlock) no-ops
   when panicstr != NULL (see the sketch below)
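A rough sketch of what option 2 could look like (illustrative only, not
an actual kern_mutex.c change):

    void
    _mtx_lock_flags(struct mtx *m, int opts, const char *file, int line)
    {

            /* After a panic, all regular lock operations become no-ops. */
            if (panicstr != NULL)
                    return;
            /* ... normal acquisition path ... */
    }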
Suggested by: mdf
Discussed with: attilio
MFC after: 2 weeks
| |
The idea is to add the KDB and KDB_TRACE options to GENERIC kernels on
stable branches, so that at least minimal information is produced for
non-specific panics like traps on page faults.
The GENERICs in stable branches seem to already include the STACK option.
Reviewed by: attilio
MFC after: 2 weeks
| |
via %s
Most of the cases looked harmless, but this is done for the sake of
correctness. In one case it even allowed dropping an intermediate buffer.
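The pattern being fixed, in miniature (msg stands for any non-constant
string):

    /* Risky: if msg ever contains '%', it is interpreted as a format. */
    printf(msg);

    /* Correct: pass the string through a constant "%s" format. */
    printf("%s", msg);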
Found by: clang
MFC after: 2 weeks
| |
has proven to have a good effect when entering KDB by using an NMI,
but it completely violates all the good rules about keeping interrupts
disabled while holding a spinlock on other occasions. This can cause
deadlocks in situations where a normal IPI_STOP is expected.
* Add a new IPI called IPI_STOP_HARD on all the supported architectures.
  This IPI is responsible for sending a stop message among CPUs using a
  privileged channel when available. Otherwise it just behaves like a
  normal IPI_STOP.
  Right now the IPI_STOP_HARD functionality uses an NMI on ia32 and
  amd64, while on the other architectures it has the same effect as a
  normal IPI_STOP. It is the responsibility of maintainers to eventually
  implement a hard stop where necessary and possible.
* Use the new IPI facility to implement a new SMP kernel function called
  stop_cpus_hard(). It mirrors stop_cpus() but uses the privileged
  channel for the stop (see the sketch below).
* Let KDB use the newly introduced stop_cpus_hard() and leave stop_cpus()
  for all the other cases.
* Disable interrupts on CPU0 when starting the process of suspending the
  APs.
* Style cleanups and added comments.
This patch should fix the reboot/shutdown deadlocks many users have been
constantly reporting on mailing lists.
Please don't forget to remove the STOP_NMI option from your kernel
config file.
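A sketch of how the two entry points relate (prototypes approximate for
the cpumask_t era; the KDB call site is illustrative):

    /* Regular stop: delivered as a normal, maskable IPI_STOP. */
    int     stop_cpus(cpumask_t map);

    /* Hard stop: uses the privileged channel (an NMI on ia32/amd64) when
     * available, falling back to a plain IPI_STOP elsewhere. */
    int     stop_cpus_hard(cpumask_t map);

    /* KDB then stops the other CPUs with the hard variant, e.g.: */
    stop_cpus_hard(PCPU_GET(other_cpus));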
Reviewed by: jhb
Tested by: pho, bz, rink
Approved by: re (kib)
| |
parameters. Mark two items as static that aren't used elsewhere...
| |
ALT_BREAK_TO_DEBUGGER. In addition to "Enter ~ ctrl-B" (to enter the
debugger), there is now "Enter ~ ctrl-P" (force panic) and
"Enter ~ ctrl-R" (request clean reboot, a la ctrl-alt-del on syscons).
We've used variations of this at work. The force panic sequence is
best used with KDB_UNATTENDED for when you just want it to dump and
get on with it.
The reboot request is a safer way of getting into single-user mode than
a power cycle, e.g. when you've hosed the ability to log in (pam, rtld,
etc.). It gives init the reboot signal, which causes an orderly reboot.
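An illustrative mapping of the three sequences to their actions (the real
dispatch lives in the KDB alternate-break handler; the helper calls shown
are the usual kernel interfaces, not necessarily the exact ones used):

    switch (ch) {           /* character following "Enter ~" */
    case 0x02:              /* ctrl-B: break into the debugger */
            kdb_enter(KDB_WHY_BREAK, "Break to debugger");
            break;
    case 0x10:              /* ctrl-P: force a panic (pairs well with
                             * KDB_UNATTENDED to just dump and reboot) */
            panic("Force panic");
            break;
    case 0x12:              /* ctrl-R: ask init for an orderly reboot */
            shutdown_nice(0);
            break;
    }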
I've taken my best guess at what the !x86 and non-sio code changes
should be.
This also makes sio release its spinlock before calling KDB/DDB.
| |
for that argument. This will allow DDB to detect the broad category of
reason why the debugger has been entered, which it can use for the
purposes of deciding which DDB script to run.
Assign approximate why values to all current consumers of the
kdb_enter() interface.
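For example, a consumer that used to pass only a message now passes a
broad reason as well (constant names as in <sys/kdb.h>; the call site is
illustrative):

    /* Old form: just a message. */
    kdb_enter("panic");

    /* New form: a "why" classification plus the message, which DDB can
     * later use to pick a matching script. */
    kdb_enter(KDB_WHY_PANIC, "panic");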
| |
- p_sflag was mostly protected by PROC_LOCK rather than the PROC_SLOCK or
previously the sched_lock. These bugs have existed for some time.
- Allow swapout to try each thread in a process individually and then
swapin the whole process if any of these fail. This allows us to move
most scheduler related swap flags into td_flags.
- Keep ki_sflag for backwards compatibility but change all tools in the
  source tree to use the new and more correct location of P_INMEM.
Reported by: pho
Reviewed by: attilio, kib
Approved by: re (kensmith)
| |
It is similar to debug.kdb.trap, except for it tries to cause a page fault
via a call to an invalid pointer. This can highlight differences between
a fault on data access vs. a fault on code call some CPUs might have.
This appeared as a test for a work \
Sponsored by: RiNet (Cronyx Plus LLC)
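The mechanism in miniature (the address is arbitrary and illustrative):

    /* Fault on instruction fetch: call through an invalid pointer ... */
    void (*bad_func)(void) = (void (*)(void))0x4;

    bad_func();
    /* ... as opposed to faulting on a data access through a bad pointer. */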
| |
kdb_active before we restart them. This avoids false positives on
restarted CPUs when they test for kdb_active while kdb_trap() is
still finishing up.
| |
PCB in which the context of stopped CPUs is stored. To access this
PCB from KDB, we introduce a new define, called KDB_STOPPEDPCB. The
definition, when present, lives in <machine/kdb.h> and abstracts
where MD code saves the context. Define KDB_STOPPEDPCB on i386,
amd64, alpha and sparc64 in accordance with the previous code.
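Conceptually, the define just points KDB at the MD storage for a stopped
CPU's context. A hypothetical definition and use (illustrative only; the
actual macro in <machine/kdb.h> may differ):

    /* <machine/kdb.h>, hypothetical */
    #define KDB_STOPPEDPCB(pc)      (&stoppedpcbs[(pc)->pc_cpuid])

    /* MI KDB code can then do: */
    #ifdef KDB_STOPPEDPCB
            pcb = KDB_STOPPEDPCB(pc);
    #endif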
| |
to register_t, as intr_disable() returns the latter and register_t
may be wider than int.
Pointed out by: marius@
| |
intr_disable() and intr_restore(), respectively. Previously, critical
regions would have interrupts disabled, but that was changed.
Consequently, the debugger could run with interrupts enabled.
This could cause problems for the low-level console code, where a
received character would trigger an interrupt that caused the interrupt
handler, rather than cngetc(), to read the character.
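The pattern, for reference (this is the standard MD interface; the exact
placement in kdb_trap() is illustrative):

    register_t intr;

    intr = intr_disable();  /* run the debugger with interrupts off,
                             * so cngetc() polls for characters */
    /* ... debugger loop ... */
    intr_restore(intr);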
| |
current context in the IPI_STOP handler so that we can get accurate stack
traces of threads on other CPUs on these two archs like we do now on i386
and amd64.
Tested on: alpha, sparc64
| |
debug.kdb.panic and debug.kdb.trap alongside the existing debug.kdb.enter
sysctl. 'panic' causes a panic, and 'trap' causes a page fault. We used
these to ensure that crash dumps succeed from those two common failure
modes. This avoids the need for creating a 'panic' kld module.
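A sketch of what such a knob's handler can look like (illustrative, not
the exact subr_kdb.c code):

    static int
    kdb_sysctl_panic(SYSCTL_HANDLER_ARGS)
    {
            int error, i;

            i = 0;
            error = sysctl_handle_int(oidp, &i, 0, req);
            if (error == 0 && req->newptr != NULL && i != 0)
                    panic("kdb_sysctl_panic");
            return (error);
    }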
| |
IPI_STOP IPIs.
- Change the i386 and amd64 MD IPI code to send an NMI, if STOP_NMI is
  enabled, when an attempt is made to send an IPI_STOP IPI. If the kernel
option is enabled, there is also a sysctl to change the behavior at
runtime (debug.stop_cpus_with_nmi which defaults to enabled). This
includes removing stop_cpus_nmi() and making ipi_nmi_selected() a
private function for i386 and amd64.
- Fix ipi_all(), ipi_all_but_self(), and ipi_self() on i386 and amd64 to
properly handle bitmapped IPIs as well as IPI_STOP IPIs when STOP_NMI is
enabled.
- Fix ipi_nmi_handler() to execute the restart function on the first CPU
  that is restarted, making use of atomic_readandclear() rather than
  assuming that the BSP is always included in the set of restarted CPUs
  (see the sketch below). Also, the NMI handler didn't clear the function
  pointer, meaning that subsequent stops and restarts could execute the
  function again.
- Define a new macro HAVE_STOPPEDPCBS on i386 and amd64 to control the use
of stoppedpcbs[] and always enable it for i386 and amd64 instead of
being dependent on KDB_STOP_NMI. It works fine in both the NMI and
non-NMI cases.
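The run-once idea from the ipi_nmi_handler() fix, sketched (variable and
type names approximate, not the literal change):

    void (*restartfunc)(void);

    /* Atomically claim the restart hook so only one resuming CPU runs it. */
    restartfunc = (void (*)(void))atomic_readandclear_ptr(
        (volatile uintptr_t *)&cpustop_restartfunc);
    if (restartfunc != NULL)
            (*restartfunc)();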
| |
- Use PCPU_GET(cpumask) in preference to 1 << PCPU_GET(cpuid) in a few
places.
| |
debug.kdb.stop_cpus_with_nmi to 1 rather than 0.
MFC after: 3 days
| |
Approved by: re
| |
a regular IPI vector, but this vector is blocked when interrupts are disabled.
With "options KDB_STOP_NMI" and debug.kdb.stop_cpus_with_nmi set, KDB will
send an NMI to each CPU instead. The code also has a context-stuffing
feature which helps ddb extract the state of processes running on the
stopped CPUs.
KDB_STOP_NMI is only useful with SMP and complains if SMP is not defined.
This feature only applies to i386 and amd64 at the moment, but could be
used on other architectures with the appropriate MD bits.
Submitted by: ups
| |
Approved by: sam (mentor)
MFC after: 1 week
| |
the trapframe via kdb_frame, but kdb_frame was not initialized until
after the call to kdb_cpu_trap(). Ergo: kdb_cpu_trap() was moved too
far up.
Pointy hat: marcel
| |
corrections made to the trapframe. This is more logical.
| |
to help debug early nasty hangs.
| |
attempt to IPI other cpus when entering the debugger in order to stop
them while in the debugger. The default remains to issue the stop;
however, that can result in a hang if another cpu has interrupts disabled
and is spinning, since the IPI won't be received and KDB will wait
indefinitely. We probably need to add a timeout, but this is a useful
stopgap in the meantime.
Reviewed by: marcel
| |
in the process. This is useful when working from or with a process.
| |
changing the backend from outside the KDB frontend, for example from
within a backend. Rewrite kdb_sysctl_current to make use of this
function as well.
| |
name in the debug.kdb.current sysctl. All other dereferences are
properly guarded, but this one was overlooked.
Reported by: Morten Rodal (morten at rodal dot no)
| |
in which multiple (presumably different) debugger backends can be
configured and which provides basic services to those backends.
Besides providing services to backends, it also serves as the single
point of contact for any and all code that wants to make use of the
debugger functions, such as entering the debugger or handling of the
alternate break sequence. For this purpose, the frontend has been
made non-optional.
All debugger requests are forwarded or handed over to the current
backend, if applicable. Selection of the current backend is done by
the debug.kdb.current sysctl. A list of configured backends can be
obtained with the debug.kdb.available sysctl. One can enter the
debugger by writing to the debug.kdb.enter sysctl.
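Conceptually, each backend hands the frontend a small operations vector;
an illustrative (not literal) shape of what a backend provides:

    struct kdb_backend_sketch {
            const char      *name;          /* "ddb", "gdb", ... */
            void            (*init)(void);
            void            (*trace)(void);               /* backtrace curthread */
            int             (*trap)(int type, int code);  /* 0 if unhandled */
    };

The frontend then resolves debug.kdb.current against the names in the set
of configured backends and forwards debugger entry and trap handling to
the selected one.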
|