OpenBSD CVS

CVS log for src/sys/sys/clockintr.h





Default branch: MAIN


Revision 1.29, Sun Feb 25 19:15:50 2024 UTC (3 months ago) by cheloha
Branch: MAIN
CVS Tags: OPENBSD_7_5_BASE, OPENBSD_7_5, HEAD
Changes since 1.28: +6 -6 lines

clockintr: rename "struct clockintr_queue" to "struct clockqueue"

The code has outgrown the original name for this struct.  Both the
external and internal APIs have used the "clockqueue" namespace for
some time when operating on it, and that name is eyeball-consistent
with "clockintr" and "clockrequest", so "clockqueue" it is.

Revision 1.28, Sun Feb 25 18:29:26 2024 UTC (3 months ago) by cheloha
Branch: MAIN
Changes since 1.27: +2 -2 lines

sys/clockintr.h: consolidate forward declarations

Revision 1.27, Sun Feb 25 18:17:11 2024 UTC (3 months ago) by cheloha
Branch: MAIN
Changes since 1.26: +2 -2 lines

clockintr.h, kern_clockintr.c: add 2023, 2024 to copyright range

Revision 1.26, Fri Feb 9 16:52:58 2024 UTC (3 months, 3 weeks ago) by cheloha
Branch: MAIN
Changes since 1.25: +7 -2 lines

clockintr: add clockintr_unbind()

The clockintr_unbind() function cancels any pending execution of the
given clock interrupt object's callback and severs the binding between
the object and its host CPU.  Upon return from clockintr_unbind(), the
clock interrupt object may be rebound with a call to clockintr_bind().

The optional CL_BARRIER flag tells clockintr_unbind() to block if the
clockintr's callback function is executing at the moment of the call.
This is useful when the clockintr's arg is a shared reference and the
caller needs to be certain the reference is inactive.

Now that clockintrs can be bound and unbound repeatedly, there is more
room for error.  To help catch programmer errors, clockintr_unbind()
sets cl_queue to NULL.  Calls to other API functions after a clockintr
is unbound will then fault on a NULL dereference.  clockintr_bind()
also KASSERTs that cl_queue is NULL to ensure the clockintr is not
already bound.  These checks are not perfect, but they do catch some
common errors.

With input from mpi@.

Thread: https://marc.info/?l=openbsd-tech&m=170629367121800&w=2

ok mpi@

Revision 1.25, Wed Jan 24 19:23:38 2024 UTC (4 months, 1 week ago) by cheloha
Branch: MAIN
Changes since 1.24: +4 -4 lines

clockintr: switch from callee- to caller-allocated clockintr structs

Currently, clockintr_establish() calls malloc(9) to allocate a
clockintr struct on behalf of the caller.  mpi@ says this behavior is
incompatible with dt(4).  In particular, calling malloc(9) during the
initialization of a PCB outside of dt_pcb_alloc() is (a) awkward and
(b) may conflict with future changes/optimizations to PCB allocation.

To side-step the problem, this patch changes the clockintr subsystem
to use caller-allocated clockintr structs instead of callee-allocated
structs.

clockintr_establish() is named after softintr_establish(), which uses
malloc(9) internally to create softintr objects.  The clockintr subsystem
is no longer using malloc(9), so the "establish" naming is no longer apt.
To avoid confusion, this patch also renames "clockintr_establish" to
"clockintr_bind".

Requested by mpi@.  Tweaked by mpi@.

Thread: https://marc.info/?l=openbsd-tech&m=170597126103504&w=2

ok claudio@ mlarkin@ mpi@

Revision 1.24, Mon Jan 15 01:15:37 2024 UTC (4 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.23: +4 -4 lines

clockintr: move CLST_IGNORE_REQUESTS from cl_flags to cq_flags

In the near future, we will add support for destroying clockintr
objects.  When this happens, it will no longer be safe to dereference
the pointer to the expired clockintr during the dispatch loop in
clockintr_dispatch() after reentering cq_mtx.  This means we will not
be able to safely check for the CLST_IGNORE_REQUESTS flag.

So replace the CLST_IGNORE_REQUESTS flag in cl_flags with the
CQ_IGNORE_REQUESTS flag in cq_flags.  The semantics are the same.
Both cl_flags and cq_flags are protected by cq_mtx.

Note that we cannot move the CLST_IGNORE_REQUESTS flag to cr_flags in
struct clockrequest: that member is owned by the dispatching CPU and
is not mutated with atomic operations.

Revision 1.23, Tue Oct 17 00:04:02 2023 UTC (7 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.22: +24 -7 lines

clockintr: move callback-specific API behaviors to "clockrequest" namespace

The API's behavior when invoked from a callback function is impossible
to document.  Move the special behavior into a distinct namespace,
"clockrequest".

- Add a 'struct clockrequest'.  Basically a stripped-down 'struct clockintr'
  for exclusive use during clockintr_dispatch().
- In clockintr_queue, replace the "cq_shadow" clockintr with a "cq_request"
  clockrequest.  They serve the same purpose.
- CLST_SHADOW_PENDING -> CR_RESCHEDULE; different namespace, same meaning.
- CLST_IGNORE_SHADOW -> CLST_IGNORE_REQUEST; same meaning.
- Move shadow branch in clockintr_advance() to clockrequest_advance().
- clockintr_request_random() becomes clockrequest_advance_random().
- Delete dead shadow branches in clockintr_cancel(), clockintr_schedule().
- Callback functions now get a clockrequest pointer instead of a special
  clockintr pointer: update all prototypes, callers.

No functional change intended.

Revision 1.22, Wed Oct 11 15:07:04 2023 UTC (7 months, 3 weeks ago) by cheloha
Branch: MAIN
Changes since 1.21: +2 -1 lines

clockintr: move clockintr_schedule() into public API

Prototype clockintr_schedule() in <sys/clockintr.h>.

Revision 1.21, Sun Oct 8 21:08:00 2023 UTC (7 months, 3 weeks ago) by cheloha
Branch: MAIN
Changes since 1.20: +1 -13 lines

clockintr: move intrclock wrappers from sys/clockintr.h to kern_clockintr.c

intrclock_rearm() and intrclock_trigger() are not part of the public
API, so there's no reason to implement them in sys/clockintr.h.  Move
them to kern_clockintr.c.

Revision 1.20, Sun Sep 17 15:24:35 2023 UTC (8 months, 2 weeks ago) by cheloha
Branch: MAIN
CVS Tags: OPENBSD_7_4_BASE, OPENBSD_7_4
Changes since 1.19: +4 -2 lines

clockintr.h: forward-declare "struct cpu_info" for clockintr_establish()

With input from claudio@ and deraadt@.

Revision 1.19, Sun Sep 17 15:05:44 2023 UTC (8 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.18: +3 -3 lines

struct clockintr_queue: rename "cq_est" to "cq_all"

"cq_all" is a more obvious name than "cq_est".  It's the list of all
established clockintrs.  Duh.

Revision 1.18, Sun Sep 17 14:50:50 2023 UTC (8 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.17: +1 -9 lines

clockintr: remove clockintr_init(), clockintr_flags

All the state initialization once done in clockintr_init() has been
moved to other parts of the kernel.  It's a dead function.  Remove it.

Likewise, the clockintr_flags variable no longer sports any meaningful
flags.  Remove it.  This frees up the CL_* flag namespace, which might
be useful to the clockintr frontend if we ever need to add behavior
flags to any of those functions.

Revision 1.17, Fri Sep 15 11:48:48 2023 UTC (8 months, 2 weeks ago) by deraadt
Branch: MAIN
Changes since 1.16: +2 -2 lines

work around cpu.h not coming into early scope on all arch

Revision 1.16, Thu Sep 14 22:07:11 2023 UTC (8 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.15: +1 -2 lines

clockintr, scheduler: move statclock handle from clockintr_queue to schedstate_percpu

Move the statclock handle from clockintr_queue.cq_statclock to
schedstate_percpu.spc_statclock.  Establish spc_statclock during
sched_init_cpu() alongside the other scheduler clock interrupts.

Thread: https://marc.info/?l=openbsd-tech&m=169428749720476&w=2

Revision 1.15, Thu Sep 14 19:51:18 2023 UTC (8 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.14: +2 -1 lines

clockintr: move clockintr_advance_random() prototype into sys/clockintr.h

statclock() is going to need this.  Move the prototype into the public API.

Thread: https://marc.info/?l=openbsd-tech&m=169428749720476&w=2

Revision 1.14, Thu Sep 14 19:39:47 2023 UTC (8 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.13: +2 -3 lines

clockintr: replace CL_RNDSTAT with global variable statclock_is_randomized

In order to separate the statclock from the clock interrupt subsystem
we need to move all statclock state out into the broader kernel.

Start by replacing the CL_RNDSTAT flag with a new global variable,
"statclock_is_randomized", in kern_clock.c.  Update all clockintr_init()
callers to set the boolean instead of passing the flag.

Thread: https://marc.info/?l=openbsd-tech&m=169428749720476&w=2

Revision 1.13, Sun Sep 10 03:08:05 2023 UTC (8 months, 3 weeks ago) by cheloha
Branch: MAIN
Changes since 1.12: +4 -3 lines

clockintr: support an arbitrary callback function argument

Callers can now provide an argument pointer to clockintr_establish().
The pointer is kept in a new struct clockintr member, cl_arg.  The
pointer is passed as the third parameter to clockintr.cl_func when it
is executed during clockintr_dispatch().  Like the callback function,
the callback argument is immutable after the clockintr is established.

At present, nothing uses this.  All current clockintr_establish()
callers pass a NULL arg pointer.  However, I am confident that dt(4)'s
profile provider will need this in the near future.

Requested by dlg@ back in March.

Revision 1.12, Wed Sep 6 02:33:18 2023 UTC (8 months, 3 weeks ago) by cheloha
Branch: MAIN
Changes since 1.11: +7 -7 lines

clockintr: replace u_int with standard types

The clockintr code already uses uint64_t everywhere, so we may as well
be consistent: replace u_int with uint32_t everywhere it is trivial to
do so; leave the sysctl(2) hook and ddb(4) code alone for now.

Suggested by mpi@.

ok mpi@

Revision 1.11, Wed Sep 6 02:09:58 2023 UTC (8 months, 3 weeks ago) by cheloha
Branch: MAIN
Changes since 1.10: +2 -2 lines

clockintr: clockintr_establish: change first argument to a cpu_info pointer

Each CPU controls a single clockintr_queue.  clockintr_establish()
callers don't need to know about the underlying clockintr_queue.
Accepting a cpu_info pointer as argument simplifies the API.

From mpi@.

ok mpi@

Revision 1.10, Mon Aug 21 17:22:04 2023 UTC (9 months, 1 week ago) by cheloha
Branch: MAIN
Changes since 1.9: +1 -2 lines

clockintr: remove support for independent schedclock()

Remove the scaffolding for an independent schedclock().  With the
removal of the independent schedclock() from alpha, schedhz is zero on
all platforms and this schedclock-specific code is now unused.

It is possible that schedclock() will be repurposed for use in the
future.  Even if this happens, the schedclock handle will not live in
struct clockintr_queue.

Revision 1.9, Tue Jul 25 18:16:19 2023 UTC (10 months, 1 week ago) by cheloha
Branch: MAIN
Changes since 1.8: +7 -4 lines

statclock: move profil(2), GPROF code to profclock(), gmonclock()

This patch isolates profil(2) and GPROF from statclock().  Currently,
statclock() implements both profil(2) and GPROF through a complex
mechanism involving both platform code (setstatclockrate) and the
scheduler (pscnt, psdiv, and psratio).  We have a machine-independent
interface to the clock interrupt hardware now, so we no longer need to
do it this way.

- Move profil(2)-specific code from statclock() to a new clock
  interrupt callback, profclock(), in subr_prof.c.  Each
  schedstate_percpu has its own profclock handle.  The profclock is
  enabled/disabled for a given CPU when it is needed by the running
  thread during mi_switch() and sched_exit().

- Move GPROF-specific code from statclock() to a new clock interrupt
  callback, gmonclock(), in subr_prof.c.  Where available, each cpu_info
  has its own gmonclock handle.  The gmonclock is enabled/disabled for
  a given CPU via sysctl(2) in prof_state_toggle().

- Both profclock() and gmonclock() have a fixed period, profclock_period,
  that is initialized during initclocks().

- Export clockintr_advance(), clockintr_cancel(), clockintr_establish(),
  and clockintr_stagger() via <sys/clockintr.h>.  They have external
  callers now.

- Delete pscnt, psdiv, psratio.  From schedstate_percpu, also delete
  spc_pscnt and spc_psdiv.  The statclock frequency is not dynamic
  anymore so these variables are now useless.

- Delete code/state related to the dynamic statclock frequency from
  kern_clockintr.c.  The statclock frequency can still be pseudo-random,
  so move the contents of clockintr_statvar_init() into clockintr_init().

With input from miod@, deraadt@, and claudio@.  Early revisions
cleaned up by claudio.  Early revisions tested by claudio@.  Tested by
cheloha@ on amd64, arm64, macppc, octeon, and sparc64 (sun4v).
Compile- and boot- tested on i386 by mlarkin@.  riscv64 compilation
bugs found by mlarkin@.  Tested on riscv64 by jca@.  Tested on
powerpc64 by gkoehler@.

Revision 1.8, Thu Jun 15 22:18:06 2023 UTC (11 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.7: +2 -1 lines

all platforms, main(): call clockqueue_init() just before sched_init_cpu()

Move the clockqueue_init() call out of clockintr_cpu_init() and up
just before the sched_init_cpu() call for a given CPU.

This will allow sched_init_cpu() to allocate clockintr handles for a
given CPU's scheduler in a later patch.

Link: https://marc.info/?l=openbsd-tech&m=168661507607622&w=2

ok kettenis@, claudio@

Revision 1.7, Thu Apr 20 14:51:28 2023 UTC (13 months, 1 week ago) by cheloha
Branch: MAIN
Changes since 1.6: +2 -3 lines

clockintr: eliminate CL_SCHEDCLOCK flag

The CL_SCHEDCLOCK flag is set when schedhz is non-zero.  It's
redundant.  We can just check the value of schedhz directly.

Revision 1.6, Wed Apr 19 14:30:35 2023 UTC (13 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.5: +6 -7 lines

clockintr: rename CL_CPU_* flags to CQ_* flags

The CL_CPU_* flags were originally so-named because they were set from
clockintr_cpu_init(), which was itself named before I had named the
clockintr_queue struct.  It makes more sense for the flag namespace to match
the struct namespace, so CQ_* is a better flag prefix than CL_CPU_*.

While we're at it, move the CQ_* flag definitions up so they
immediately follow the clockintr_queue structure definition in
sys/clockintr.h.

Revision 1.5, Sun Apr 16 21:19:26 2023 UTC (13 months, 2 weeks ago) by cheloha
Branch: MAIN
Changes since 1.4: +5 -2 lines

clockintr: add shadow copy of running clock interrupt to clockintr_queue

cq_shadow is a private copy of the running clock interrupt passed to
cl_func() during the dispatch loop.  It resembles the real clockintr
object, though the two are distinct (hence "shadow").  A private copy
is useful for two reasons:

1. Scheduling operations performed on cq_shadow (advance, cancel,
   schedule) are recorded as requests with the CLST_SHADOW_PENDING
   flag and are normally performed on the real clockintr when cl_func()
   returns.  However, if an outside thread performs a scheduling
   operation on the real clockintr while cl_func() is running, the
   CLST_IGNORE_SHADOW flag is set and any scheduling operations
   requested by the running clock interrupt are ignored.

   The upshot of this arrangement is that outside scheduling operations
   have priority over those requested by the running clock interrupt.
   Because there is no race, periodic clock interrupts can now be safely
   stopped without employing the serialization mechanisms needed to safely
   stop periodic timeouts or tasks.

2. &cq->cq_shadow is a unique address, so most clockintr_* API calls
   made while cl_func() is running now don't need to enter/leave
   cq_mtx: the API can recognize when it is being called in the midst
   of clockintr_dispatch().

Tested by mlarkin@.  With input from dlg@.

In particular, dlg@ expressed some design concerns but then stopped
responding.  I have changes planned to address some of the concerns.
I think if we hit a wall with the current clockintr design we could
change the allocation scheme without too much suffering.  I don't
anticipate there being more than ~20 distinct clock interrupts.

Revision 1.4, Mon Apr 3 00:20:24 2023 UTC (14 months ago) by cheloha
Branch: MAIN
Changes since 1.3: +13 -9 lines

clockintr: protect struct clockintr_queue with a mutex

Add a mutex (cq_mtx) to struct clockintr_queue so that arbitrary CPUs
can manipulate clock interrupts established on arbitrary CPU queues.

Refactor the bulk of clockintr_schedule() into clockintr_schedule_locked()
so we can reuse it from within the mutex.

Tested by mlarkin@.  Neat bug found by mlarkin@.  With tweaks from
kettenis@.

ok kettenis@

Revision 1.3, Thu Mar 9 03:50:38 2023 UTC (14 months, 3 weeks ago) by cheloha
Branch: MAIN
CVS Tags: OPENBSD_7_3_BASE, OPENBSD_7_3
Changes since 1.2: +12 -2 lines

clockintr: add a priority queue

- Add cq_pend to struct clockintr_queue.  cq_pend is the list of clock
  interrupts pending to run, sorted in ascending order by cl_expiration
  (earliest deadline first; EDF).  If the cl_expiration of two
  clockintrs is equal, the first clock interrupt scheduled has priority
  (FIFO).

  We may need to switch to an RB tree or a min-heap in the future.
  For now, there are only three clock interrupts, so a linked list
  is fine.

- Add cl_flags to struct clockintr.  We have one flag, CLST_PENDING.
  It indicates whether the given clockintr is enqueued on cq_pend.

- Rewrite clockintr_dispatch() to operate on cq_pend.  Clock
  interrupts are run in EDF order until the most imminent clockintr
  expires in the future.

- Add code to clockintr_establish(), clockintr_advance() and
  clockintr_schedule() to enqueue/dequeue the given clockintr
  on cq_est and cq_pend as needed.

- Add cq_est to struct clockintr_queue.  cq_est is the list of all
  clockintrs established on a clockintr_queue.

- Add a new counter, cs_spurious, to clockintr_stat.  A dispatch is
  "spurious" if no clockintrs are on cq_pend when we call
  clockintr_dispatch().

With input from aisha@.  Heavily tested by mlarkin@.  Shared with
hackers@.

ok aisha@ mlarkin@

Revision 1.2, Sun Feb 26 23:00:42 2023 UTC (15 months ago) by cheloha
Branch: MAIN
Changes since 1.1: +20 -4 lines

clockintr: add a kernel-facing API

We need an API for creating, scheduling, and rescheduling clock
interrupts.

- Add struct clockintr, a schedulable clock interrupt callback.

- Add clockintr_establish().  Allocates a new struct clockintr and
  binds it to the given clockintr_queue.

- Add clockintr_expiration().  Returns the clockintr's absolute
  expiration uptime.

- Add clockintr_nsecuptime().  Returns the clockintr's parent queue's
  cached uptime.  Using a cached timestamp is cheaper than calling
  nsecuptime(9) repeatedly when we don't absolutely need to.

- Add clockintr_schedule().  Schedules the clock interrupt to run at
  or after the given absolute uptime.

- Add clockintr_advance().  Reschedules the clock interrupt in the
  future on the given period relative to the parent queue's cached
  uptime.

With the above pieces in place we can push most of the scheduling
code for hardclock()/statclock()/schedclock() from clockintr_dispatch()
into the wrapper functions clockintr_hardclock(), clockintr_statclock(),
and clockintr_schedclock().  These wrappers are temporary.  I don't
want to muck up the wrapped functions while things are still moving
around.

For the moment these interfaces are internal to kern_clockintr.c.  In
a later patch we will move the prototypes into <sys/clockintr.h> so
anyone can use them.  We first need to add a data structure for
sorting the clockintr structs.  We also need to add a mutex to
clockintr_queue to allow arbitrary threads to safely manipulate clock
interrupts established on other CPUs.

Shown on hackers@.  Tweaked by mlarkin@.

ok mlarkin@, "no objections" kettenis@

Revision 1.1, Sat Nov 5 19:29:46 2022 UTC (18 months, 3 weeks ago) by cheloha
Branch: MAIN

clockintr(9): initial commit

clockintr(9) is a machine-independent clock interrupt scheduler.  It
emulates most of what the machine-dependent clock interrupt code is
doing on every platform.  Every CPU has a work schedule based on the
system uptime clock.  For now, every CPU has a hardclock(9) and a
statclock().  If schedhz is set, every CPU has a schedclock(), too.

This commit only contains the MI pieces.  All code is conditionally
compiled with __HAVE_CLOCKINTR.  This commit changes no behavior yet.

At a high level, clockintr(9) is configured and used as follows:

1. During boot, the primary CPU calls clockintr_init(9).  Global state
   is initialized.
2. Primary CPU calls clockintr_cpu_init(9).  Local, per-CPU state is
   initialized.  An "intrclock" struct may be installed, too.
3. Secondary CPUs call clockintr_cpu_init(9) to initialize their
   local state.
4. All CPUs repeatedly call clockintr_dispatch(9) from the MD clock
   interrupt handler.  The CPUs complete work and rearm their local
   interrupt clock, if any, during the dispatch.
5. Repeat step (4) until the system shuts down, suspends, or hibernates.
6. During resume, the primary CPU calls inittodr(9) and advances the
   system uptime.
7. Go to step (2).  This time around, clockintr_cpu_init(9) also
   advances the work schedule on the calling CPU to skip events that
   expired during suspend.  This prevents a "thundering herd" of
   useless work during the first clock interrupt.

In the long term, we need an MI clock interrupt scheduler in order to
(1) provide control over the clock interrupt to MI subsystems like
timeout(9) and dt(4) to improve their accuracy, (2) provide drivers
like acpicpu(4) a means for slowing or stopping the clock interrupt on
idle CPUs to conserve power, and (3) reduce the amount of duplicated
code in the MD clock interrupt code.

Before we can do any of that, though, we need to switch every platform
over to using clockintr(9) and do some cleanup.

Prompted by "the vmm(4) time bug," among other problems, and a
discussion at a2k19 on the subject.  Lots of design input from
kettenis@.  Early versions reviewed by kettenis@ and mlarkin@.
Platform-specific help and testing from kettenis@, gkoehler@,
mlarkin@, miod@, aoyama@, visa@, and dv@.  Babysitting and spiritual
guidance from mlarkin@ and kettenis@.

Link: https://marc.info/?l=openbsd-tech&m=166697497302283&w=2

ok kettenis@ mlarkin@
