path: root/sbin/hastd/control.c
Commit message · Author · Age · Files · Lines
* MFC r257155, r257582, r259191, r259192, r259193, r259194, r259195, r259196:  (trociny, 2013-12-28, 1 file, -0/+11)

    r257155: Make the hastctl list command output current queue sizes.
    Reviewed by: pjd

    r257582 (pjd): Correct alignment.

    r259191: For memsync replication, hio_countdown was used not only as an
    indication of when a request can be moved to the done queue, but also for
    detecting the current state of a memsync request. This approach had
    problems, e.g. leaking a request if the memsync ack from the secondary
    failed, or racy usage of write_complete(), which should be called only
    once per write request but for memsync could be entered by
    local_send_thread and ggate_send_thread simultaneously. So the following
    approach is implemented instead:

    1) Use hio_countdown only for counting the components we are waiting on
       to complete, i.e. initially it is always 2 for any replication mode.
    2) To distinguish between "memsync ack" and "memsync fin" responses from
       the secondary, add and use the hio_memsyncacked field.
    3) write_complete() in component threads is called only before releasing
       hio_countdown (i.e. before the hio may be returned to the done queue).
    4) Add and use the hio_writecount refcounter to detect when
       write_complete() can be called in the memsync case.

    Reported by: Pete French petefrench ingresso.co.uk
    Tested by: Pete French petefrench ingresso.co.uk

    r259192: Add some macros to make the code more readable (no functional
    changes).

    r259193: Fix compiler warnings.

    r259194: In remote_send_thread, if sending a request fails, don't take
    the request back from the receive queue -- it might already have been
    processed by remote_recv_thread, which led to crashes like the one below:

        (primary) Unable to receive reply header: Connection reset by peer.
        (primary) Unable to send request (Connection reset by peer): WRITE(954662912, 131072).
        (primary) Disconnected from kopusha:7772.
        (primary) Increasing localcnt to 1.
        (primary) Assertion failed: (old > 0), function refcnt_release, file refcnt.h, line 62.

    Taking the request back was not necessary (it would be properly processed
    by remote_recv_thread) and only complicated things.

    r259195: Send wakeup to threads waiting on an empty queue before
    releasing the lock, to decrease spurious wakeups.
    Submitted by: davidxu

    r259196: Check the remote protocol version only for the first connection
    (when it is actually sent by the remote node). Otherwise it generated
    confusing "Negotiated protocol version 1" debug messages when processing
    the second connection.
* Make hastctl(1) ('list' command) output a worker pid.  (trociny, 2013-07-01, 1 file, -0/+1)

    Reviewed by: pjd
    MFC after: 3 days
* Add i/o error counters to hastd(8) and make hastctl(8) display them.  (trociny, 2013-02-25, 1 file, -0/+18)

    This may be useful for detecting problems with HAST disks.

    Discussed with and reviewed by: pjd
    MFC after: 1 week
* For functions that return -1 on failure check exactly for -1 and not for any negative number.  (pjd, 2012-01-10, 1 file, -8/+8)

    MFC after: 3 days
* Remove redundant setting of the error variable.  (pjd, 2011-12-15, 1 file, -2/+0)

    Found by: Clang Static Analyzer
    MFC after: 1 week
* Prefer PJDLOG_ASSERT() and PJDLOG_ABORT() over assert() and abort().  (pjd, 2011-09-27, 1 file, -10/+9)

    The pjdlog versions will log the problem to syslog when the application
    is running in the background.

    MFC after: 3 days
* Remove useless initialization.  (trociny, 2011-07-05, 1 file, -2/+1)

    Approved by: pjd (mentor)
    MFC after: 3 days
* Keep statistics on number of BIO_READ, BIO_WRITE, BIO_DELETE and BIO_FLUSH requests as well as number of activemap updates.  (pjd, 2011-05-23, 1 file, -0/+17)

    The number of BIO_WRITEs and the number of activemap updates are
    especially interesting: if those two are too close to each other, it
    means that your workload needs a bigger number of dirty extents. The
    activemap should be updated as rarely as possible.

    MFC after: 1 week
* Rename HASTCTL_ defines, which are used for conversation between the main hastd process and workers.  (trociny, 2011-04-26, 1 file, -3/+3)

    Remove the unused one and set a different range of numbers. This is done
    in order not to confuse them with the HASTCTL_CMD defines, used for
    conversation between hastctl and hastd, and to avoid bugs like the one
    fixed in r221075.

    Approved by: pjd (mentor)
    MFC after: 1 week
* For conversation between hastctl and hastd we should use the HASTCTL_CMD defines.  (trociny, 2011-04-26, 1 file, -5/+5)

    Approved by: pjd (mentor)
    MFC after: 1 week
* Remove stale comment. Yes, it is valid to set role back to init.  (pjd, 2011-03-21, 1 file, -1/+1)

    MFC after: 1 week
* In hast.conf we define the other node's address in the 'remote' variable.  (pjd, 2011-03-21, 1 file, -0/+2)

    This way we know how to connect to the secondary node when we are
    primary. The same variable is used by the secondary node - it only
    accepts connections from the address stored in the 'remote' variable.

    In cluster configurations it is common that each node has its individual
    IP address and there is one additional shared IP address which is
    assigned to the primary node. It seems possible that if the shared IP
    address is from the same network as the individual IP address, it might
    be chosen by the kernel as the source address for the connection to the
    secondary node. Such a connection will be rejected by the secondary, as
    it doesn't come from the primary node's individual IP.

    Add a 'source' variable that allows specifying the source IP address we
    want to bind to before connecting to the secondary node.

    MFC after: 1 week
* Allow to compress on-the-wire data using two algorithms:  (pjd, 2011-03-06, 1 file, -0/+3)

    - HOLE - simply turns all-zero blocks into a few-byte header; it is
      extremely fast, so it is turned on by default; it is mostly intended
      to speed up initial synchronization, where we expect many zeros;
    - LZF - a very fast algorithm by Marc Alexander Lehmann, which shows a
      very decent compression ratio and has a BSD license.

    MFC after: 2 weeks
* Allow to checksum on-the-wire data using either CRC32 or SHA256.  (pjd, 2011-03-06, 1 file, -0/+3)

    MFC after: 2 weeks
* Set up another socketpair between parent and child, so that the primary sandboxed worker can ask the main privileged process to connect on the worker's behalf.  (pjd, 2011-02-03, 1 file, -0/+4)

    We can then migrate the descriptor to the worker using this socketpair.
    This is not really needed now, but will be needed once we start to use
    capsicum for sandboxing.

    MFC after: 1 week
* Remember the created control connection so on fork(2) we can close it in the child.  (pjd, 2011-01-27, 1 file, -0/+2)

    Found with: procstat(1)
    MFC after: 1 week
* Don't open the configuration file from the worker process.  (pjd, 2011-01-24, 1 file, -1/+12)

    Handle SIGHUP in the master process only and pass changes to the worker
    processes over the control socket. This removes access to the global
    namespace in preparation for capsicum sandboxing.

    MFC after: 2 weeks
* Add missing logs.  (pjd, 2011-01-22, 1 file, -4/+5)

    MFC after: 1 week
* Use int16 for error.  (pjd, 2011-01-22, 1 file, -1/+1)

    MFC after: 1 week
* We close the event socketpair early in the main loop to prevent spamming with error messages, so when we clean up after the child process, we have to check if the event socketpair is still there.  (pjd, 2010-10-08, 1 file, -2/+4)

    Submitted by: Mikolaj Golub <to.my.trociny@gmail.com>
    MFC after: 3 days
* Fix descriptor leaks: when the child exits, we have to close the control and event socket pairs.  (pjd, 2010-09-22, 1 file, -1/+13)

    We did that only in one case out of three.

    MFC after: 3 days
* If we are unable to receive a control message, it is most likely because the main process died.  (pjd, 2010-09-22, 1 file, -1/+2)

    Instead of entering an infinite loop, terminate.

    MFC after: 3 days
* Sort includes.  (pjd, 2010-09-22, 1 file, -1/+1)

    MFC after: 3 days
* Call hook on role change; document the new event.  (pjd, 2010-08-29, 1 file, -0/+5)

    MFC after: 2 weeks
    Obtained from: Wheel Systems Sp. z o.o. http://www.wheelsystems.com
* Make control_set_role() more public. We will need it soon.  (pjd, 2010-08-05, 1 file, -10/+17)

    MFC after: 1 month
* Please welcome HAST - Highly Available Storage.  (pjd, 2010-02-18, 1 file, -0/+426)

    HAST allows one to transparently store data on two physically separated
    machines connected over a TCP/IP network. HAST works in a
    Primary-Secondary (Master-Backup, Master-Slave) configuration, which
    means that only one of the cluster nodes can be active at any given
    time. Only the Primary node is able to handle I/O requests to
    HAST-managed devices. Currently HAST is limited to two cluster nodes in
    total.

    HAST operates on the block level - it provides disk-like devices in the
    /dev/hast/ directory for use by file systems and/or applications.
    Working on the block level makes it transparent to file systems and
    applications. There is no difference between using a HAST-provided
    device and a raw disk, partition, etc. All of them are just regular GEOM
    providers in FreeBSD.

    For more information please consult the hastd(8), hastctl(8) and
    hast.conf(5) manual pages, as well as http://wiki.FreeBSD.org/HAST.

    Sponsored by: FreeBSD Foundation
    Sponsored by: OMCnet Internet Service GmbH
    Sponsored by: TransIP BV