path: root/kernel/livepatch
Commit message  (Author, Age, Files, Lines)
* livepatch: Allow to call a custom callback when freeing shadow variables  (Petr Mladek, 2018-04-17, 1 file, -8/+18)
    We might need to perform some actions before a shadow variable is
    freed. For example, we might need to remove it from a list or free
    some data that it points to.

    This is already possible now: the user can get the shadow variable
    by klp_shadow_get(), do the necessary actions, and then call
    klp_shadow_free(). This patch allows doing it in a more elegant
    way: the user can implement the needed actions in a callback that
    is passed to klp_shadow_free() as a parameter. The callback usually
    performs the reverse of the constructor callback that can be called
    by klp_shadow_*alloc().

    It is especially useful for klp_shadow_free_all(), where these
    extra actions need to be done for each found shadow variable with
    the given ID.

    Note that the memory used by the shadow variable itself is still
    released later by an RCU callback. This is needed to protect the
    internal structures that keep all shadow variables. The destructor,
    however, is called immediately. The shadow variable must not be
    accessed after klp_shadow_free() is called; the user is responsible
    for protecting against this in any suitable way.

    Be aware that the destructor is called under klp_shadow_lock, the
    same as the constructor in klp_shadow_alloc().

    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
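    A minimal sketch of such a destructor: the shadow ID (SV_LOCK_ID)
    and the shadow structure that links itself into a list are made-up
    names, but the destructor signature matches what this commit passes
    to klp_shadow_free():

        /* Sketch only: SV_LOCK_ID and struct sv_lock are hypothetical. */
        static void sv_lock_dtor(void *obj, void *shadow_data)
        {
                struct sv_lock *sv = shadow_data;

                /* Reverse of the constructor: unlink before the memory
                 * is released later by the RCU callback. */
                list_del(&sv->node);
        }

        /* Caller side: free one shadow variable, or all with this ID. */
        klp_shadow_free(obj, SV_LOCK_ID, sv_lock_dtor);
        klp_shadow_free_all(SV_LOCK_ID, sv_lock_dtor);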
* livepatch: Initialize shadow variables safely by a custom callback  (Petr Mladek, 2018-04-17, 1 file, -29/+53)
    The existing API allows passing sample data used to initialize the
    shadow data. It works well when the data are position independent.
    But it fails miserably when we need to set a pointer to the shadow
    structure itself.

    Unfortunately, we might need to initialize such a pointer
    surprisingly often because of struct list_head. It is even worse
    because the list might be hidden in other common structures, for
    example struct mutex or struct wait_queue_head. This was needed,
    for instance, to fix races in the ALSA sequencer, which required
    adding a mutex into struct snd_seq_client. See commit
    b3defb791b26ea06 ("ALSA: seq: Make ioctls race-free") and commit
    d15d662e89fc667b9 ("ALSA: seq: Fix racy pool initializations").

    This patch makes the API safer. A custom constructor function and
    data are passed to the klp_shadow_*alloc() functions instead of the
    sample data. Note that ctor_data is no longer a template for
    shadow->data; it might point to any data needed when the
    constructor is called.

    Also note that the constructor is called under klp_shadow_lock, an
    internal spinlock that synchronizes alloc() vs. get() operations;
    see klp_shadow_get_or_alloc(). On one hand, this adds a risk of
    ABBA deadlocks. On the other hand, it allows some operations to be
    done safely. For example, we could add the new structure into an
    existing list, which must be done only once, when the structure is
    allocated.

    Reported-by: Nicolai Stange <nstange@suse.de>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
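    A minimal sketch of such a constructor; struct sv_client and
    SV_CLIENT_ID are made-up names, and the argument order follows the
    reworked klp_shadow_alloc() described above:

        /* Sketch only: the shadow struct and ID are hypothetical. */
        struct sv_client {
                struct list_head list;  /* must point at the shadow itself */
                struct mutex lock;
        };

        static int sv_client_ctor(void *obj, void *shadow_data, void *ctor_data)
        {
                struct sv_client *sv = shadow_data;

                INIT_LIST_HEAD(&sv->list);  /* position-dependent init */
                mutex_init(&sv->lock);
                return 0;
        }

        sv = klp_shadow_alloc(obj, SV_CLIENT_ID, sizeof(*sv), GFP_KERNEL,
                              sv_client_ctor, NULL);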
* Merge branch 'for-4.16/remove-immediate' into for-linus  (Jiri Kosina, 2018-01-31, 2 files, -55/+6)
    Pull 'immediate' feature removal from Miroslav Benes.
| * livepatch: Remove immediate feature  (Miroslav Benes, 2018-01-11, 2 files, -55/+6)
    The immediate flag has been used to disable per-task consistency
    and patch all tasks immediately. It could be useful if the patch
    does not change any function or data semantics. However, it causes
    problems of its own.

    The consistency model is currently broken with respect to immediate
    patches:

        func     a
        patches  1i
                 2i
                 3

    When patch 3 is applied, only function 2i is checked (by the stack
    checking facility). There might be a task sleeping in 1i, though.
    Such a task is migrated to 3, because we do not check 1i in
    klp_check_stack_func() at all.

    The coming atomic replace feature will be easier to implement and
    more reliable without immediate. Thus, remove the immediate feature
    completely and save ourselves from these problems.

    Note that the force feature has a similar problem. However, it is
    considered a last resort. If used, the administrator should not
    apply any new live patches and should plan to reboot into an
    updated kernel.

    Architectures now need to provide HAVE_RELIABLE_STACKTRACE to fully
    support livepatch.

    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | livepatch: add locking to force and signal functions  (Miroslav Benes, 2018-01-11, 1 file, -24/+28)
    klp_send_signals() and klp_force_transition() do not acquire
    klp_mutex, because it seemed to be superfluous. A potential race in
    klp_send_signals() was harmless, and there was nothing in
    klp_force_transition() which needed to be synchronized. That
    changed with the addition of the klp_forced variable during the
    review process.

    There is now a small window in which klp_complete_transition() does
    not see klp_forced set to true while all tasks have already been
    transitioned to the target state. module_put() is called and the
    module can be removed.

    Acquire klp_mutex in the sysfs callback to prevent this. Do the
    same for the signal sending just to be sure. There is no real
    downside to that.

    Fixes: c99a2be790b07 ("livepatch: force transition to finish")
    Fixes: 43347d56c8d9d ("livepatch: send a fake signal to all blocking tasks")
    Reported-by: Jason Baron <jbaron@akamai.com>
    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* livepatch: force transition to finish  (Miroslav Benes, 2017-12-07, 3 files, -2/+65)
    If a task sleeps uninterruptedly in a set of patched functions, it
    can block the whole transition indefinitely. Thus it may be useful
    to clear its TIF_PATCH_PENDING to allow the process to finish.

    The admin can now do that by writing to the force sysfs attribute
    in the livepatch sysfs directory. TIF_PATCH_PENDING is then cleared
    for all tasks and the transition can finish successfully.

    Important note! The administrator should not use this feature
    without clearance from the patch distributor. It must be verified
    that the consistency model guarantees are not violated by doing so.
    Removal (rmmod) of patch modules is permanently disabled when the
    feature is used, because it cannot be guaranteed that there is no
    task sleeping in such a module.

    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* livepatch: send a fake signal to all blocking tasks  (Miroslav Benes, 2017-12-04, 3 files, -0/+72)
    The live patching consistency model is of the LEAVE_PATCHED_SET and
    SWITCH_THREAD kind. This means that all tasks in the system have to
    be marked, one by one, as safe to call a new patched function.
    "Safe" means that the task is not (sleeping) in the set of patched
    functions; that is, no patched function is on the task's stack.
    Another clearly safe place is the boundary between kernel and
    userspace. The patching waits for all tasks to get outside of the
    patched set or to cross the boundary. The transition is completed
    afterwards.

    The problem is that a task can block the transition for quite a
    long time, if not forever. It could sleep in a set of patched
    functions, for example.

    Luckily, we can force such a task to leave the set by sending it a
    fake signal: a signal with no data in the signal pending structures
    (no handler, no sign of a proper signal being delivered).
    Suspend/freezer use this to freeze tasks as well. The task gets
    TIF_SIGPENDING set and is woken up (if it was sleeping in the
    kernel) or kicked by a rescheduling IPI (if it was running on
    another CPU). This causes the task to go to the kernel/userspace
    boundary, where the signal would be handled and the task would be
    marked as safe in terms of live patching.

    There are tasks which are not affected by this technique, though.
    The fake signal is not sent to kthreads; they have to be handled
    differently. They can be woken up so they leave the patched set,
    and their TIF_PATCH_PENDING can be cleared thanks to stack
    checking.

    For the sake of completeness: if a task is in TASK_RUNNING state
    but not currently running on some CPU, it does not get the IPI, but
    it would eventually handle the signal anyway. Second, if a task
    runs in the kernel (in TASK_RUNNING state), it gets the IPI, but
    the signal is not handled on return from the interrupt. It would be
    handled on a future return to userspace, when the fake signal is
    sent again. Stack checking deals with these cases in a better way.

    If a task was sleeping in a syscall, it is woken by the fake
    signal, checks whether TIF_SIGPENDING is set (by calling the
    signal_pending() predicate) and returns ERESTART* or EINTR.
    Syscalls with ERESTART* return values are restarted in case of the
    fake signal (see do_signal()). EINTR is propagated back to the
    userspace program. This could disturb the program, but...

    * each process dealing with signals should react accordingly to
      EINTR return values;
    * syscalls returning EINTR are quite a common situation in the
      system even when no fake signal is sent;
    * the freezer sends the fake signal and does not deal with EINTR
      in any way; thus EINTR values are returned when the system is
      resumed anyway.

    The actual safe marking is done in the architectures' "entry" code
    on syscall and interrupt/exception exit paths, and in the stack
    checking functions of livepatch. TIF_PATCH_PENDING is cleared and
    the next recalc_sigpending() drops TIF_SIGPENDING. In connection
    with this, klp_update_patch_state() is also called before
    do_signal(), so that recalc_sigpending() in dequeue_signal() can
    clear TIF_PATCH_PENDING immediately and thus prevent a double call
    of do_signal().

    Note that the fake signal is not sent to stopped/traced tasks. Such
    a task prevents the patching from finishing until it continues
    again (is no longer traced).

    Last, sending the fake signal is not automatic. It is done only
    when the admin requests it by writing 1 to the signal sysfs
    attribute in the livepatch sysfs directory.

    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: x86@kernel.org
    Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
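    A hedged sketch of the wake-up logic described above; the real code
    is klp_send_signals() in kernel/livepatch/transition.c, and this
    loop is a simplified approximation of it:

        for_each_process_thread(g, task) {
                if (!klp_patch_pending(task))
                        continue;
                if (task->flags & PF_KTHREAD) {
                        /* kthreads never handle signals; just wake
                         * them so stack checking can migrate them */
                        wake_up_state(task, TASK_INTERRUPTIBLE);
                } else {
                        /* sets TIF_SIGPENDING and wakes the task, or
                         * kicks it with a rescheduling IPI if it is
                         * running on another CPU */
                        spin_lock_irq(&task->sighand->siglock);
                        signal_wake_up(task, 0);
                        spin_unlock_irq(&task->sighand->siglock);
                }
        }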
* Merge branch 'for-linus' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching  (Linus Torvalds, 2017-11-15, 6 files, -18/+399)
    Pull livepatching updates from Jiri Kosina:

    - shadow variables support, allowing livepatches to associate new
      "shadow" fields to existing data structures, from Joe Lawrence

    - pre/post patch callbacks API, allowing livepatch writers to
      register callbacks to be called before and after patch
      application, from Joe Lawrence

    * 'for-linus' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
      livepatch: __klp_disable_patch() should never be called for disabled patches
      livepatch: Correctly call klp_post_unpatch_callback() in error paths
      livepatch: add transition notices
      livepatch: move transition "complete" notice into klp_complete_transition()
      livepatch: add (un)patch callbacks
      livepatch: Small shadow variable documentation fixes
      livepatch: __klp_shadow_get_or_alloc() is local to shadow.c
      livepatch: introduce shadow variable API
| * Merge branch 'for-4.15/callbacks' into for-linus  (Jiri Kosina, 2017-11-15, 4 files, -17/+121)
    This pulls in an infrastructure/API that allows livepatch writers
    to register pre-patch and post-patch callbacks, which run glue code
    needed to finalize the patching.

    Conflicts:
        kernel/livepatch/core.c
    - trivial conflict between adding a callback call into the module
      going notifier vs. moving that code block to
      klp_cleanup_module_patches_limited()

    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| | * livepatch: __klp_disable_patch() should never be called for disabled patches  (Petr Mladek, 2017-10-26, 1 file, -1/+4)
    __klp_disable_patch() should never be called when the patch is not
    enabled. Let's add the same warning that we have in
    __klp_enable_patch().

    This allows removing the check when calling
    klp_pre_unpatch_callback(). That check was strange anyway, because
    it repeatedly tested a per-patch flag for each patched object.

    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| | * livepatch: Correctly call klp_post_unpatch_callback() in error paths  (Petr Mladek, 2017-10-26, 2 files, -6/+6)
    The post_unpatch_enabled flag in struct klp_callbacks is set when a
    pre-patch callback successfully executes, indicating that we need
    to call a corresponding post-unpatch callback when the patch is
    reverted. This is true for ordinary patch disable as well as the
    error paths of klp_patch_object() callers.

    As currently coded, we inadvertently execute the post-patch
    callback twice in klp_module_coming() when klp_patch_object()
    fails:

    - we explicitly call klp_post_unpatch_callback() for the failed
      object
    - we call it again for the same object (and all the others) via
      klp_cleanup_module_patches_limited()

    We should clear the flag in klp_post_unpatch_callback() to make
    sure that the callback is not called twice. It makes the API safer.

    (We could have removed the callback from the former error path, as
    it would be covered by the latter call, but it is cleaner to clear
    post_unpatch_enabled right after the callback is invoked. For
    example, someone might later decide to call the callback only when
    the obj->patched flag is set.)

    There is another mistake in the error path of klp_module_coming(),
    where it skips the post-unpatch callback for the
    klp_transition_patch. However, the pre-patch callback was called
    even for this patch, so be sure to make the corresponding callbacks
    for all patches.

    Finally, I used this opportunity to make klp_pre_patch_callback()
    more readable.

    [jkosina@suse.cz: incorporate changelog wording changes proposed by Joe Lawrence]
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
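    A hedged sketch of the flag-clearing described above; the callback
    wrappers live in kernel/livepatch/core.h, and this is an
    approximation rather than the exact final code:

        static inline void klp_post_unpatch_callback(struct klp_object *obj)
        {
                if (obj->callbacks.post_unpatch_enabled &&
                    obj->callbacks.post_unpatch)
                        (*obj->callbacks.post_unpatch)(obj);

                /* guarantee the callback cannot run a second time */
                obj->callbacks.post_unpatch_enabled = false;
        }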
| | * livepatch: add transition notices  (Joe Lawrence, 2017-10-19, 1 file, -1/+17)
    Log a few kernel debug messages at the beginning of the following
    livepatch transition functions:

        klp_complete_transition()
        klp_cancel_transition()
        klp_init_transition()
        klp_reverse_transition()

    Also update the log notice message in klp_start_transition() to use
    similar verbiage to the above messages.

    Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| | * livepatch: move transition "complete" notice into klp_complete_transition()  (Joe Lawrence, 2017-10-19, 1 file, -3/+3)
    klp_complete_transition() performs a bit of housework before a
    transition to KLP_PATCHED or KLP_UNPATCHED is actually completed
    (including post-(un)patch callbacks). To be consistent, move the
    transition "complete" kernel log notice out of
    klp_try_complete_transition() and into klp_complete_transition().

    Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| | * livepatch: add (un)patch callbacks  (Joe Lawrence, 2017-10-19, 4 files, -13/+98)
    Provide livepatch modules a klp_object (un)patching notification
    mechanism. Pre- and post-(un)patch callbacks allow livepatch
    modules to set up or synchronize changes that would be difficult to
    support in only patched-or-unpatched code contexts.

    Callbacks can be registered for target module or vmlinux
    klp_objects, but each implementation is klp_object specific.

    - Pre-(un)patch callbacks run before any (un)patching transition
      starts.
    - Post-(un)patch callbacks run once an object has been (un)patched
      and the klp_patch has fully transitioned to its target state.

    Example use cases include modification of global data and
    registration of newly available services/handlers.

    See Documentation/livepatch/callbacks.txt for details and
    samples/livepatch/ for examples.

    Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
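    A minimal sketch of wiring callbacks into a klp_object; the
    callback names and the patched-function array (funcs) are made up,
    but the .callbacks field matches the API added here:

        static int demo_pre_patch(struct klp_object *obj)
        {
                pr_info("pre-patch for %s\n",
                        klp_is_module(obj) ? obj->name : "vmlinux");
                return 0;   /* non-zero aborts the patching operation */
        }

        static void demo_post_unpatch(struct klp_object *obj)
        {
                pr_info("post-unpatch cleanup\n");
        }

        static struct klp_object objs[] = {
                {
                        .name = NULL,   /* vmlinux */
                        .funcs = funcs, /* hypothetical klp_func array */
                        .callbacks = {
                                .pre_patch = demo_pre_patch,
                                .post_unpatch = demo_post_unpatch,
                        },
                },
                { }
        };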
| * | Merge branch 'for-4.15/shadow-variables' into for-linus  (Jiri Kosina, 2017-11-15, 2 files, -1/+278)
    Shadow variables allow callers to associate new shadow fields to
    existing data structures. This is intended to be used by livepatch
    modules seeking to emulate additions to data structure definitions.
| | * livepatch: __klp_shadow_get_or_alloc() is local to shadow.c  (Jiri Kosina, 2017-09-15, 1 file, -1/+1)
    ... therefore make it static.

    Fixes: 439e7271dc2 ("livepatch: introduce shadow variable API")
    Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| | * livepatch: introduce shadow variable API  (Joe Lawrence, 2017-09-14, 2 files, -1/+278)
    Add an exported API for livepatch modules:

        klp_shadow_get()
        klp_shadow_alloc()
        klp_shadow_get_or_alloc()
        klp_shadow_free()
        klp_shadow_free_all()

    that implements "shadow" variables, which allow callers to
    associate new shadow fields to existing data structures. This is
    intended to be used by livepatch modules seeking to emulate
    additions to data structure definitions.

    See Documentation/livepatch/shadow-vars.txt for a summary of the
    new shadow variable API, including a few common use cases. See
    samples/livepatch/livepatch-shadow-* for example modules that
    demonstrate shadow variables.

    [jkosina@suse.cz: fix __klp_shadow_get_or_alloc() comment as spotted by Josh]
    Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
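    A minimal usage sketch of the API as introduced here (before the
    2018 constructor/destructor rework further up this log); the shadow
    ID and the attached counter are made-up names:

        #define SV_LEAK_ID 1   /* hypothetical shadow variable ID */

        /* Attach an int counter to an existing object; at this point
         * in the API the third argument is template data that is
         * copied into the newly allocated shadow field. */
        int count = 0;
        int *leak = klp_shadow_get_or_alloc(obj, SV_LEAK_ID, &count,
                                            sizeof(count), GFP_KERNEL);
        if (leak)
                (*leak)++;

        /* Later lookups and teardown: */
        leak = klp_shadow_get(obj, SV_LEAK_ID);
        klp_shadow_free(obj, SV_LEAK_ID);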
* | | License cleanup: add SPDX GPL-2.0 license identifier to files with no license  (Greg Kroah-Hartman, 2017-11-02, 3 files, -0/+3)
    Many source files in the tree are missing licensing information,
    which makes it harder for compliance tools to determine the correct
    license.

    By default, all files without license information are under the
    default license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the
    'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally
    binding shorthand, which can be used instead of the full
    boilerplate text.

    This patch is based on work done by Thomas Gleixner and Kate
    Stewart and Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a
    subset of the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up
    cases where non-standard license headers were used, and references
    to licenses had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to apply to
    a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing
    SPDX tag:value files created by Philippe Ombredanne. Philippe
    prepared the base worksheet, and did an initial spot review of a
    few 1000 files.

    The 4.13 kernel was the starting point of the analysis, with 60,537
    files assessed. Kate Stewart did a file-by-file comparison of the
    scanner results in the spreadsheet to determine which SPDX license
    identifier(s) to apply to each file. She confirmed any
    determination that was not immediately clear with lawyers working
    with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they
      contained >5 lines of source.
    - File already had some variant of a license header in it (even if
      <5 lines).
    All documentation files were explicitly excluded.

    The following heuristics were used to determine which SPDX license
    identifiers to apply.

    - When both scanners couldn't find any license traces, the file was
      considered to have no license information in it, and the top
      level COPYING file license applied. For non-*/uapi/* files that
      summary was:

        SPDX license identifier                        # files
        -----------------------------------------------|-------
        GPL-2.0                                           11139

      and resulted in the first patch in this series. If such a file
      was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note",
      otherwise it was "GPL-2.0". Results of that was:

        SPDX license identifier                        # files
        -----------------------------------------------|-------
        GPL-2.0 WITH Linux-syscall-note                     930

      and resulted in the second patch in this series.

    - If a file had some form of licensing information in it, and was
      one of the */uapi/* ones, it was denoted with the
      Linux-syscall-note if any GPL family license was found in the
      file or it had no licensing in it (per prior point). Results
      summary:

        SPDX license identifier                               # files
        ------------------------------------------------------|------
        GPL-2.0 WITH Linux-syscall-note                           270
        GPL-2.0+ WITH Linux-syscall-note                          169
        ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)        21
        ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)        17
        LGPL-2.1+ WITH Linux-syscall-note                          15
        GPL-1.0+ WITH Linux-syscall-note                           14
        ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)        5
        LGPL-2.0+ WITH Linux-syscall-note                           4
        LGPL-2.1 WITH Linux-syscall-note                            3
        ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                  3
        ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                 1

      and that resulted in the third patch in this series.

    - When the two scanners agreed on the detected license(s), that
      became the concluded license(s).

    - When there was disagreement between the two scanners (one
      detected a license but the other didn't, or they both detected
      different licenses), a manual inspection of the file occurred.

    - In most cases a manual inspection of the information in the file
      resulted in a clear resolution of the license that should apply
      (and which scanner probably needed to revisit its heuristics).

    - When it was not immediately clear, the license identifier was
      confirmed with lawyers working with the Linux Foundation.

    - If there was any question as to the appropriate license
      identifier, the file was flagged for further research and to be
      revisited later in time.

    In total, over 70 hours of logged manual review was done on the
    spreadsheet to determine the SPDX license identifiers to apply to
    the source files by Kate, Philippe, Thomas and, in some cases,
    confirmation by lawyers working with the Linux Foundation.

    Kate also obtained a third independent scan of the 4.13 code base
    from FOSSology, and compared selected files where the other two
    scanners disagreed against that SPDX file, to see if there were new
    insights. The Windriver scanner is based in part on an older
    version of FOSSology, so they are related.

    Thomas did random spot checks in about 500 files from the
    spreadsheets for the uapi headers and agreed with the SPDX license
    identifier in the files he inspected. For the non-uapi files Thomas
    did random spot checks in about 15000 files.

    In the initial set of patches against 4.14-rc6, 3 files were found
    to have copy/paste license identifier errors, and have been fixed
    to reflect the correct identifier.

    Additionally Philippe spent 10 hours this week doing a detailed
    manual inspection and review of the 12,461 patched files from the
    initial patch version, with:
    - a full scancode scan run, collecting the matched texts, detected
      license ids and scores,
    - a review of anything where a license was detected (about 500+
      files) to ensure that the applied SPDX license was correct,
    - a review of anything where there was no detection but the patch
      license was not GPL-2.0 WITH Linux-syscall-note, to ensure that
      the applied SPDX license was correct.

    This produced a worksheet with 20 files needing minor correction.
    The worksheet was then exported into 3 different .csv files for the
    different types of files to be modified. These .csv files were then
    reviewed by Greg. Thomas wrote a script to parse the csv files and
    add the proper SPDX tag to each file, in the format that the file
    expected. This script was further refined by Greg based on the
    output to detect more types of files automatically and to
    distinguish between header and source .c files (which need
    different comment types). Finally Greg ran the script using the
    .csv files to generate the patches.

    Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
    Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
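    The resulting tags are one-liners at the top of each file; as the
    message notes, .c sources and headers need different comment
    styles:

        // SPDX-License-Identifier: GPL-2.0        (first line of a .c file)

        /* SPDX-License-Identifier: GPL-2.0 */     (first line of a .h file)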
* | livepatch: unpatch all klp_objects if klp_module_coming fails  (Joe Lawrence, 2017-10-11, 1 file, -23/+37)
    When an incoming module is considered for livepatching by
    klp_module_coming(), it iterates over multiple patches and multiple
    kernel objects in this order:

        list_for_each_entry(patch, &klp_patches, list) {
                klp_for_each_object(patch, obj) {

    which means that if one of the kernel objects fails to patch,
    klp_module_coming()'s error path needs to unpatch and clean up any
    kernel objects that were already patched by a previous patch.

    Reported-by: Miroslav Benes <mbenes@suse.cz>
    Suggested-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* livepatch: Fix stacking of patches with respect to RCU  (Petr Mladek, 2017-06-20, 2 files, -7/+37)
    rcu_read_(un)lock(), list_*_rcu(), and synchronize_rcu() are used
    for safe access and manipulation of the list of patches that modify
    the same function. In particular, it is the variable func_stack
    that is accessible from the ftrace handler via struct ftrace_ops
    and klp_ops. Of course, it also synchronizes some state of the
    patch on the top of the stack, e.g. func->transition in
    klp_ftrace_handler(). At the same time, this mechanism also guards
    the manipulation of task->patch_state, which is modified according
    to the state of the transition and the state of the process.

    Now, all this works well as long as RCU works well. Sadly,
    livepatching might hit corner cases where this is not true. For
    example, RCU is not watching when rcu_read_lock() is taken in idle
    threads, because they might sleep and prevent reaching the grace
    period for too long.

    There are ways to make RCU watch even in idle threads, see
    rcu_irq_enter(). But there is a small location inside the RCU
    infrastructure where even this does not work. This small
    problematic location can be detected either before calling
    rcu_irq_enter(), by rcu_irq_enter_disabled(), or later by
    rcu_is_watching(). Sadly, there is no safe way to handle it. Once
    we detect that RCU was not watching, we might see an inconsistent
    state of the function stack and the related variables in
    klp_ftrace_handler(). Then we could make a wrong decision, use an
    incompatible implementation of the function, and break the
    consistency of the system. We could warn, but we could not avoid
    the damage.

    Fortunately, ftrace has similar problems and they seem to be solved
    well there. It uses a heavyweight implementation of some RCU
    operations. In particular, it replaces:

        rcu_read_lock()    ->  preempt_disable_notrace()
        rcu_read_unlock()  ->  preempt_enable_notrace()
        synchronize_rcu()  ->  schedule_on_each_cpu(sync_work)

    My understanding is that this is an RCU implementation from a stone
    age. It meets the core RCU requirements, but it is rather
    inefficient; in particular, it does not allow batching or speeding
    up the synchronize calls. On the other hand, it is very trivial,
    and it makes it possible to safely trace and/or livepatch even the
    RCU core infrastructure. The inefficiency is not a big issue,
    because using ftrace or livepatches on production systems is a rare
    operation. Safety is much more important than a negligible extra
    load.

    Note that the alternative implementation follows the RCU
    principles. Therefore, we could, and actually must, use the
    list_*_rcu() variants when manipulating the func_stack. These
    functions allow accessing the pointers in the right order and with
    the right barriers. But they do not use any other information that
    would be set only by rcu_read_lock().

    Also note that there are actually two problems solved in ftrace:

    First, it cares about the consistency of RCU read sections. That is
    being solved the way described and used in this patch.

    Second, ftrace needs to make sure that nobody is inside the dynamic
    trampoline when it is being freed. For this, it also calls
    synchronize_rcu_tasks() in a preemptive kernel in
    ftrace_shutdown().

    Livepatch has a similar problem, but it is solved by ftrace for
    free. klp_ftrace_handler() is a good guy and never sleeps. In
    addition, it is registered with FTRACE_OPS_FL_DYNAMIC. This causes
    unregister_ftrace_function() to call:

        * schedule_on_each_cpu(ftrace_sync)  - always
        * synchronize_rcu_tasks()            - in a preemptive kernel

    The effect is that nobody is inside the dynamic trampoline, nor
    inside the ftrace handler, after unregister_ftrace_function()
    returns.

    [jkosina@suse.cz: reformat changelog, fix comment]
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
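    A hedged sketch of what the handler side looks like with the
    heavyweight primitives, heavily simplified from
    klp_ftrace_handler() in kernel/livepatch/patch.c (the real handler
    also deals with transition state):

        static void notrace klp_ftrace_handler(unsigned long ip,
                                               unsigned long parent_ip,
                                               struct ftrace_ops *fops,
                                               struct pt_regs *regs)
        {
                struct klp_ops *ops = container_of(fops, struct klp_ops, fops);
                struct klp_func *func;

                /* instead of rcu_read_lock() */
                preempt_disable_notrace();

                func = list_first_or_null_rcu(&ops->func_stack,
                                              struct klp_func, stack_node);
                if (func)
                        klp_arch_set_pc(regs, (unsigned long)func->new_func);

                /* instead of rcu_read_unlock() */
                preempt_enable_notrace();
        }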
* livepatch: Make livepatch dependent on !TRIM_UNUSED_KSYMS  (Miroslav Benes, 2017-05-27, 1 file, -0/+1)
    If TRIM_UNUSED_KSYMS is enabled, all unneeded exported symbols are
    made unexported. A two-pass build of the kernel is done to find out
    which symbols are needed based on a configuration. This effectively
    complicates things for out-of-tree modules.

    Livepatch exports functions to (un)register and enable/disable a
    live patch. The only in-tree module which uses these functions is
    the sample in samples/livepatch/. If the sample is disabled, the
    functions are trimmed and out-of-tree live patches cannot be built.

    Note that live patches are intended to be built out-of-tree.

    Suggested-by: Michal Marek <mmarek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Jessica Yu <jeyu@redhat.com>
    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* Merge branches 'for-4.12/upstream' and 'for-4.12/klp-hybrid-consistency-model' into for-linus  (Jiri Kosina, 2017-05-01, 7 files, -274/+1043)
| * livepatch: Cancel transition a safe way for immediate patches  (Petr Mladek, 2017-04-11, 1 file, -20/+0)
    klp_init_transition() does not set func->transition for immediate
    patches. Then klp_ftrace_handler() could use the new code
    immediately. As a result, it is not safe to put the livepatch
    module in klp_cancel_transition().

    This patch reverts most of the last-minute changes in
    klp_cancel_transition(). It keeps the warning about a misuse
    because it still makes sense.

    Fixes: 3ec24776bfd0 ("livepatch: allow removal of a disabled patch")
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * livepatch: make klp_mutex proper part of API  (Jiri Kosina, 2017-03-08, 3 files, -2/+8)
    klp_mutex is shared between core.c and transition.c, and as such is
    better located in a header, so that we don't have to play 'extern'
    games from .c sources.

    This also silences a sparse warning (wrongly) suggesting that
    klp_mutex should be defined static.

    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * livepatch: allow removal of a disabled patch  (Josh Poimboeuf, 2017-03-08, 2 files, -33/+84)
    Currently we do not allow a patch module to unload, since there is
    no method to determine whether a task is still running in the
    patched code.

    The consistency model gives us a way, because when the unpatching
    finishes we know that all tasks were marked as safe to call an
    original function. Thus every new call to the function calls the
    original code, and at the same time no task can be somewhere in the
    patched code, because it had to leave that code to be marked as
    safe. We can safely let the patch module go after that.

    Completion is used for synchronization between module removal and
    the sysfs infrastructure, in a similar way to commit 942e443127e9
    ("module: Fix mod->mkobj.kobj potentially freed too early").

    Note that we still do not allow removal for the immediate model,
    that is, no consistency model. The module refcount may increase in
    this case if somebody disables and enables the patch several times.
    This should not cause any harm.

    With this change, a call to try_module_get() is moved to
    __klp_enable_patch() from klp_register_patch() to make module
    reference counting symmetric (module_put() is in the patch disable
    path) and to allow taking a new reference to a disabled module when
    it is being enabled.

    Finally, we need to be very careful about possible races between
    klp_unregister_patch(), the kobject_put() functions, and operations
    on the related sysfs files. kobject_put(&patch->kobj) must be
    called without klp_mutex; otherwise, it might be blocked by
    enabled_store(), which needs the mutex as well. In addition,
    enabled_store() must check whether the patch was unregistered in
    the meantime. There is no need to do the same for the other
    kobject_put() call sites at the moment; their sysfs operations
    neither take the lock nor access any data that might be freed in
    the meantime.

    There was an attempt to use kobjects the right way and prevent
    these races by design. But it made the patch definition more
    complicated and opened another can of worms. See
    https://lkml.kernel.org/r/1464018848-4303-1-git-send-email-pmladek@suse.com

    [Thanks to Petr Mladek for improving the commit message.]

    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * livepatch: change to a per-task consistency model  (Josh Poimboeuf, 2017-03-08, 6 files, -25/+699)
    Change livepatch to use a basic per-task consistency model. This is
    the foundation which will eventually enable us to patch those ~10%
    of security patches which change function or data semantics. This
    is the biggest remaining piece needed to make livepatch more
    generally useful.

    This code stems from the design proposal made by Vojtech [1] in
    November 2014. It's a hybrid of kGraft and kpatch: it uses kGraft's
    per-task consistency and syscall barrier switching combined with
    kpatch's stack trace switching. There are also a number of fallback
    options which make it quite flexible.

    Patches are applied on a per-task basis, when the task is deemed
    safe to switch over. When a patch is enabled, livepatch enters a
    transition state where tasks are converging to the patched state.
    Usually this transition state can complete in a few seconds. The
    same sequence occurs when a patch is disabled, except the tasks
    converge from the patched state to the unpatched state.

    An interrupt handler inherits the patched state of the task it
    interrupts. The same is true for forked tasks: the child inherits
    the patched state of the parent.

    Livepatch uses several complementary approaches to determine when
    it's safe to patch tasks:

    1. The first and most effective approach is stack checking of
       sleeping tasks. If no affected functions are on the stack of a
       given task, the task is patched. In most cases this will patch
       most or all of the tasks on the first try. Otherwise it'll keep
       trying periodically. This option is only available if the
       architecture has reliable stacks (HAVE_RELIABLE_STACKTRACE).

    2. The second approach, if needed, is kernel exit switching. A task
       is switched when it returns to user space from a system call, a
       user space IRQ, or a signal. It's useful in the following cases:

       a) Patching I/O-bound user tasks which are sleeping on an
          affected function. In this case you have to send SIGSTOP and
          SIGCONT to force the task to exit the kernel and be patched.
       b) Patching CPU-bound user tasks. If the task is highly
          CPU-bound then it will get patched the next time it gets
          interrupted by an IRQ.
       c) In the future it could be useful for applying patches for
          architectures which don't yet have HAVE_RELIABLE_STACKTRACE.
          In this case you would have to signal most of the tasks on
          the system. However this isn't supported yet because there's
          currently no way to patch kthreads without
          HAVE_RELIABLE_STACKTRACE.

    3. For idle "swapper" tasks, since they don't ever exit the kernel,
       they instead have a klp_update_patch_state() call in the idle
       loop which allows them to be patched before the CPU enters the
       idle state. (Note there's not yet such an approach for
       kthreads.)

    All the above approaches may be skipped by setting the 'immediate'
    flag in the 'klp_patch' struct, which will disable per-task
    consistency and patch all tasks immediately. This can be useful if
    the patch doesn't change any function or data semantics. Note that,
    even with this flag set, it's possible that some tasks may still be
    running with an old version of the function, until that function
    returns.

    There's also an 'immediate' flag in the 'klp_func' struct which
    allows you to specify that certain functions in the patch can be
    applied without per-task consistency. This might be useful if you
    want to patch a common function like schedule(), and the function
    change doesn't need consistency but the rest of the patch does.

    For architectures which don't have HAVE_RELIABLE_STACKTRACE, the
    user must set patch->immediate, which causes all tasks to be
    patched immediately. This option should be used with care, only
    when the patch doesn't change any function or data semantics.

    In the future, architectures which don't have
    HAVE_RELIABLE_STACKTRACE may be allowed to use per-task consistency
    if we can come up with another way to patch kthreads.

    The /sys/kernel/livepatch/<patch>/transition file shows whether a
    patch is in transition. Only a single patch (the topmost patch on
    the stack) can be in transition at a given time. A patch can remain
    in transition indefinitely, if any of the tasks are stuck in the
    initial patch state.

    A transition can be reversed and effectively canceled by writing
    the opposite value to the /sys/kernel/livepatch/<patch>/enabled
    file while the transition is in progress. Then all the tasks will
    attempt to converge back to the original patch state.

    [1] https://lkml.kernel.org/r/20141107140458.GA21774@suse.cz

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Ingo Molnar <mingo@kernel.org> # for the scheduler changes
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * livepatch: store function sizes  (Josh Poimboeuf, 2017-03-08, 1 file, -0/+16)
    For the consistency model we'll need to know the sizes of the old
    and new functions to determine if they're on the stacks of any
    tasks.

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
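    A hedged sketch of how such sizes can be looked up at patch init
    time; the old_size/new_size destination fields are an assumption
    about where the commit stores them:

        unsigned long old_size = 0, new_size = 0;

        /* kallsyms reports a symbol's size for a resolved address;
         * it returns 0 on failure */
        if (!kallsyms_lookup_size_offset((unsigned long)func->old_addr,
                                         &old_size, NULL))
                return -ENOENT;
        kallsyms_lookup_size_offset((unsigned long)func->new_func,
                                    &new_size, NULL);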
| * livepatch: use kstrtobool() in enabled_store()  (Josh Poimboeuf, 2017-03-08, 1 file, -8/+5)
    The sysfs enabled value is a boolean, so kstrtobool() is a better
    fit for parsing the input string, since it does the range checking
    for us.

    Suggested-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
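    The parsing pattern in question, roughly as it would appear in
    enabled_store() (a sketch, not the exact diff):

        bool enabled;
        int ret;

        ret = kstrtobool(buf, &enabled);   /* accepts 0/1, y/n, etc. */
        if (ret)
                return ret;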
| * livepatch: move patching functions into patch.c  (Josh Poimboeuf, 2017-03-08, 4 files, -202/+247)
    Move functions related to the actual patching of functions and
    objects into a new patch.c file.

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * livepatch: remove unnecessary object loaded check  (Josh Poimboeuf, 2017-03-08, 1 file, -3/+0)
    klp_patch_object()'s callers already ensure that the object is
    loaded, so its call to klp_is_object_loaded() is unnecessary. This
    will also make it possible to move the patching code into a
    separate file.

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * livepatch: separate enabled and patched states  (Josh Poimboeuf, 2017-03-08, 1 file, -36/+36)
    Once we have a consistency model, patches and their objects will be
    enabled and disabled at different times. For example, when a patch
    is disabled, its loaded objects' funcs can remain registered with
    ftrace indefinitely until the unpatching operation is complete and
    they're no longer in use.

    It's less confusing if we give them different names: patches can be
    enabled or disabled; objects (and their funcs) can be patched or
    unpatched:

    - Enabled means that a patch is logically enabled (but not
      necessarily fully applied).
    - Patched means that an object's funcs are registered with ftrace
      and added to the klp_ops func stack.

    Also, since these states are binary, represent them with booleans
    instead of ints.

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * livepatch: create temporary klp_update_patch_state() stub  (Josh Poimboeuf, 2017-03-08, 1 file, -0/+3)
    Create temporary stubs for klp_update_patch_state() so we can add
    TIF_PATCH_PENDING to different architectures in separate patches
    without breaking build bisectability.

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | livepatch: add missing printk newlines  (Josh Poimboeuf, 2017-04-16, 1 file, -4/+4)
    Add missing newlines to some pr_err() strings.

    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Jessica Yu <jeyu@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | livepatch: Reduce the time of finding module symbols  (Zhou Chengming, 2017-03-30, 1 file, -1/+4)
    It was reported that the time to insmod a klp.ko for one of our
    out-of-tree modules is too long:

        ~ time sudo insmod klp.ko
        real    0m23.799s
        user    0m0.036s
        sys     0m21.256s

    Then we found the reason: our out-of-tree module used a lot of
    static local variables, so klp.ko has a lot of relocation records
    which reference the module. For each such entry,
    klp_find_object_symbol() is called to resolve it, but this function
    uses the interface kallsyms_on_each_symbol() even for finding
    module symbols, so it wastes a lot of time walking through the
    vmlinux kallsyms table many times.

    This patch changes it to use module_kallsyms_on_each_symbol() for
    module symbols. After we apply this patch, the sys time is reduced
    dramatically:

        ~ time sudo insmod klp.ko
        real    0m1.007s
        user    0m0.032s
        sys     0m0.924s

    Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Jessica Yu <jeyu@redhat.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
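    The resulting dispatch is roughly the following (a sketch of the
    relevant hunk in klp_find_object_symbol(); args is the kallsyms
    walk context used there):

        if (objname)
                module_kallsyms_on_each_symbol(klp_find_callback, &args);
        else
                kallsyms_on_each_symbol(klp_find_callback, &args);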
* livepatch/module: make TAINT_LIVEPATCH module-specific  (Josh Poimboeuf, 2016-08-26, 1 file, -3/+0)
    There's no reliable way to determine which module tainted the
    kernel with TAINT_LIVEPATCH. For example,
    /sys/module/<klp module>/taint doesn't report it, and neither does
    the "mod -t" command in the crash tool.

    Make it crystal clear who the guilty party is by associating
    TAINT_LIVEPATCH with any module which sets the "livepatch" modinfo
    attribute. The flag will still get set in the kernel like before,
    but now it also sets the same flag in mod->taints.

    Note that the taint flag now gets set when the module is loaded
    rather than when it's enabled.

    Also rename find_livepatch_modinfo() to check_modinfo_livepatch()
    to better reflect its purpose: it's basically a livepatch-specific
    sub-function of check_modinfo().

    Reported-by: Chunyu Hu <chuhu@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Jessica Yu <jeyu@redhat.com>
    Acked-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
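    The modinfo attribute in question is the one every livepatch module
    declares in its source:

        /* marks the module as a livepatch module for the loader */
        MODULE_INFO(livepatch, "Y");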
* livepatch: use arch_klp_init_object_loaded() to finish arch-specific tasks  (Jessica Yu, 2016-08-18, 1 file, -3/+13)
    Introduce arch_klp_init_object_loaded() to complete any additional
    arch-specific tasks during patching. Architecture code may override
    this function.

    Signed-off-by: Jessica Yu <jeyu@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
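    A sketch of the hook: the weak default is a no-op that architecture
    code can override with its own definition (the exact body is an
    approximation):

        /* in kernel/livepatch/core.c */
        int __weak arch_klp_init_object_loaded(struct klp_patch *patch,
                                               struct klp_object *obj)
        {
                return 0;   /* nothing to do by default */
        }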
* modules: add ro_after_init support  (Jessica Yu, 2016-08-04, 1 file, -1/+1)
    Add ro_after_init support for modules by adding a new page-aligned
    section in the module layout (after rodata) for ro_after_init data
    and enabling RO protection for that section after module init runs.

    Signed-off-by: Jessica Yu <jeyu@redhat.com>
    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
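    From module code the feature is used via the existing annotation:
    the variable is writable during module init and read-only
    afterwards (the module and table below are hypothetical):

        static unsigned long lookup_table[16] __ro_after_init;

        static int __init demo_init(void)
        {
                lookup_table[0] = 42;   /* still allowed here */
                return 0;
        }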
*-. Merge branches 'for-4.7/core', 'for-4.7/livepatching-doc' and 'for-4.7/livepatching-ppc64' into for-linus  (Jiri Kosina, 2016-05-17, 1 file, -60/+128)
| | * Merge branch 'topic/livepatch' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux into for-4.7/livepatching-ppc64le  (Jiri Kosina, 2016-04-15, 1 file, -3/+31)
    Pull livepatching support for ppc64 architecture from Michael
    Ellerman.

    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| | | * livepatch: Allow architectures to specify an alternate ftrace location  (Michael Ellerman, 2016-04-14, 1 file, -3/+31)
    When livepatch tries to patch a function, it takes the function
    address and asks ftrace to install the livepatch handler at that
    location. ftrace will look for an mcount call site at that exact
    address.

    On powerpc the mcount location is not the first instruction of the
    function, and in fact it's not at a constant offset from the start
    of the function. To accommodate this, add a hook which arch code
    can override to customise the behaviour.

    Signed-off-by: Torsten Duwe <duwe@suse.de>
    Signed-off-by: Balbir Singh <bsingharora@gmail.com>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
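    A hedged sketch of what the powerpc override looks like (adapted
    from memory of arch/powerpc/include/asm/livepatch.h; the 16-byte
    window reflects the -mprofile-kernel convention there):

        static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
        {
                /*
                 * Livepatch works only with -mprofile-kernel on ppc, where
                 * the mcount site is within the first 16 bytes of the
                 * function rather than at its first instruction.
                 */
                return ftrace_location_range(faddr, faddr + 16);
        }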
| | * | livepatch: robustify klp_register_patch() API error checking  (Jiri Kosina, 2016-04-07, 1 file, -3/+3)
    Commit 425595a7fc20 ("livepatch: reuse module loader code to write
    relocations") adds a possibility of dereferencing pointers supplied
    by the consumer of the livepatch API before sanity (NULL) checking
    them (patch and patch->mod).

    Spotted by the smatch tool.

    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Jessica Yu <jeyu@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| | * | livepatch: reuse module loader code to write relocations  (Jessica Yu, 2016-04-01, 1 file, -54/+94)
    Reuse module loader code to write relocations, thereby eliminating
    the need for architecture-specific relocation code in livepatch.
    Specifically, reuse the apply_relocate_add() function in the module
    loader to write relocations instead of duplicating functionality in
    livepatch's arch-dependent klp_write_module_reloc() function.

    In order to accomplish this, livepatch modules manage their own
    relocation sections (marked with the SHF_RELA_LIVEPATCH section
    flag) and livepatch-specific symbols (marked with the SHN_LIVEPATCH
    symbol section index). To apply livepatch relocation sections, the
    livepatch symbols referenced by relocs are resolved and then
    apply_relocate_add() is called to apply those relocations.

    In addition, remove the x86 livepatch relocation code and the s390
    klp_write_module_reloc() function stub. They are no longer needed,
    since the relocation work has been offloaded to the module loader.

    Lastly, mark the module as a livepatch module so that the module
    loader can appropriately identify and initialize it.

    Signed-off-by: Jessica Yu <jeyu@redhat.com>
    Reviewed-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # for s390 changes
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | | livepatch: make object/func-walking helpers more robust  (Miroslav Benes, 2016-04-30, 1 file, -0/+3)
    The current object-walking helper checks the presence of obj->funcs
    to determine the end of the objs array in the klp_object structure.
    This is somewhat fragile, because one can easily forget the funcs
    definition during livepatch creation. In such a case the livepatch
    module is successfully loaded and all objects after the incorrect
    one are omitted. This is very confusing.

    Let's make the helper more robust and check also for the other
    external member, name. Thus the helper correctly stops on an empty
    item of the array; see the sketch after this entry for what those
    terminating entries look like. We need a check for obj->funcs in
    klp_init_object() to make it work. The same applies to the
    func-walking helper.

    As a benefit, we'll now check for the new_func member definition
    during livepatch initialization. There is no such check anywhere in
    the code today.

    [jkosina@suse.cz: fix shortlog]
    Signed-off-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Jessica Yu <jeyu@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
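    The empty terminating entries in question, in the usual livepatch
    module layout (function names borrowed from samples/livepatch/):

        static struct klp_func funcs[] = {
                {
                        .old_name = "cmdline_proc_show",
                        .new_func = livepatch_cmdline_proc_show,
                },
                { }     /* terminator: old_name and new_func empty */
        };

        static struct klp_object objs[] = {
                {
                        /* .name = NULL means vmlinux */
                        .funcs = funcs,
                },
                { }     /* terminator: name and funcs empty */
        };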
* | livepatch/module: remove livepatch module notifier  (Jessica Yu, 2016-03-17, 1 file, -76/+71)
    Remove the livepatch module notifier in favor of directly enabling
    and disabling patches to modules in the module loader. Hard-coding
    the function calls ensures that ftrace_module_enable() is run
    before klp_module_coming() during module load, and that
    klp_module_going() is run before ftrace_release_mod() during module
    unload. This way, ftrace and livepatch code is run in the correct
    order during the module load/unload sequence without dependence on
    the module notifier call chain.

    Signed-off-by: Jessica Yu <jeyu@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | livepatch: Fix the error message about unresolvable ambiguity  (Petr Mladek, 2016-03-09, 1 file, -2/+2)
    klp_find_callback() stops the search when sympos is not defined and
    a second symbol of the same name is found. This means that the
    current error message about the unresolvable ambiguity always
    prints "(2 matches)".

    Let's remove this information. The total number of occurrences is
    not very helpful; the author of the patch must still put
    non-trivial effort into finding the right position in the object
    file.

    [jkosina@suse.cz: fixed grammar as suggested by Josh]
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Chris J Arges <chris.j.arges@canonical.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* livepatch: Cleanup module page permission changes  (Josh Poimboeuf, 2015-12-04, 1 file, -5/+11)
    Calling set_memory_rw() and set_memory_ro() for every iteration of
    the loop in klp_write_object_relocations() is messy, inefficient,
    and error-prone. Change all the read-only pages to read-write
    before the loop and convert them back to read-only again
    afterwards.

    Suggested-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* livepatch: function,sympos scheme in livepatch sysfs directory  (Chris J Arges, 2015-12-03, 1 file, -2/+8)
    The following directory structure will allow for cases when the
    same function name exists in a single object:

        /sys/kernel/livepatch/<patch>/<object>/<function,sympos>

    The sympos number corresponds to the nth occurrence of the symbol
    name in kallsyms for the patched object.

    An example of patching multiple symbols can be found here:
    https://github.com/dynup/kpatch/issues/493

    Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* livepatch: add sympos as disambiguator field to klp_reloc  (Chris J Arges, 2015-12-03, 1 file, -65/+19)
    In cases of duplicate symbols, sympos will be used to disambiguate
    instead of val. By default sympos will be 0, and patching will only
    succeed if the symbol is unique. Specifying a positive value will
    ensure that the given occurrence of the symbol in kallsyms for the
    patched object will be used for patching, if it is valid. For
    external relocations sympos is not supported.

    Remove klp_verify_callback, klp_verify_args and
    klp_verify_vmlinux_symbol, as they are no longer used. From the
    klp_reloc structure remove val, as it can be refactored as a local
    variable in klp_write_object_relocations().

    Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* livepatch: add old_sympos as disambiguator field to klp_func  (Chris J Arges, 2015-12-03, 1 file, -41/+31)
    Currently, patching objects with duplicate symbol names fails
    because the creation of the sysfs function directory collides with
    the previous attempt. Appending old_addr to the function name is
    problematic, as it reveals the address of the function being
    patched to a normal user. Using the symbol's occurrence in kallsyms
    to postfix the function name in the sysfs directory solves the
    issue of having consistent unique names and ensures that the
    address is not exposed to a normal user.

    In addition, using the symbol position as the user's method to
    disambiguate symbols, instead of addr, allows for disambiguating
    symbols in modules as well, both for function addresses and for
    relocs. This also simplifies much of the code. Special handling for
    kASLR is no longer needed and can be removed. The
    klp_find_verify_func_addr() function can be replaced by
    klp_find_object_symbol(), and klp_verify_vmlinux_symbol() and its
    callback can be removed completely.

    In cases of duplicate symbols, old_sympos will be used to
    disambiguate instead of old_addr. By default old_sympos will be 0,
    and patching will only succeed if the symbol is unique. Specifying
    a positive value will ensure that the given occurrence of the
    symbol in kallsyms for the patched object is used for patching, if
    it is valid. In addition, make old_addr an internal structure field
    not to be specified by the user.

    Support for symbol position disambiguation for relocations is added
    in the next patch in this series.

    Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
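    A minimal sketch of the new field in use; the symbol and
    replacement function names are made up:

        static struct klp_func funcs[] = {
                {
                        .old_name = "do_work",   /* appears twice in the object */
                        .new_func = livepatch_do_work,
                        .old_sympos = 2,         /* patch the 2nd kallsyms occurrence */
                },
                { }
        };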
* livepatch: x86: fix relocation computation with kASLR  (Zhou Chengming, 2015-11-11, 1 file, -0/+6)
    With kASLR enabled, old_addr provided by the patch module is
    shifted accordingly so that the symbol lookups work. To have module
    relocations handled properly as well, the same transformation needs
    to be performed on the relocation address information.

    [jkosina@suse.cz: extended / reworded changelog a bit]
    Reported-by: Cyril B. <cbay@alwaysdata.com>
    Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>