A few words up front
This article was written by a total rookie. Questions and corrections are welcome.
You can contact me at: 1160712160@
GitHub: /WindDevil (nothing there so far).
Focus of this section
The beginning of this chapter explains the difference between multiprogramming and cooperative scheduling.
Recall that in the previous section we noted that if we never call yield, the behavior is the same as the implementation of the previous section (multiprogram loading): as long as no task releases the CPU voluntarily, the tasks still run one after another.
But not every programmer writes code with other tasks in mind. For example, when I write microcontroller code that talks over I2C, I will happily busy-wait in a while() loop for the ACK signal; I am not going to give up my CPU for anyone.
This is where the use of preemptive scheduling becomes necessary.
The official document puts it this way: we need to further expand and extend the concept of a task:
- Time-sharing multitasking: the operating system manages the applications so that each one occupies the processor for one time slice at a time.
- Time-slice round-robin scheduling: after a program uses up its time slice, the operating system preempts it and schedules the next program to run, over and over again, forming task-level round-robin scheduling of the applications.
There are a few more concepts in the documentation:
- As technology develops, more and more interactive applications (Interactive Application) appear. One of their key goals is to improve the responsiveness of user operations (for both application users and developers) and to reduce latency (Latency), so as to optimize the application experience and the development experience. For these applications, even when they need to wait for a peripheral or some event, they are not inclined to yield the CPU voluntarily, because doing so may introduce unacceptable latency.
- Preemptive scheduling (Preemptive Scheduling) means that an application may be switched out by the kernel at any moment.
Modern task scheduling algorithms are basically preemptive: each application may only execute continuously for a bounded amount of time before the kernel forcibly switches it out. The time slice (Time Slice) is usually used as the unit for measuring how long an application runs continuously; each time slice may be on the order of milliseconds.
The scheduling algorithm has to decide how many time slices an application gets before it is switched out, and which application to switch in next. Scheduling algorithms are evaluated along two dimensions: performance (mainly throughput and latency) and fairness (Fairness), which requires that the shares of time slices allocated to different applications should not differ too much.
As an aside, this reminds me of writing RT-Thread code and handing each application a 500ms slot, though I am not sure I really knew what I was doing. (Of course, now that I think about it, RT-Thread should have a more advanced scheduling algorithm, so take my rambling with a grain of salt.)
This is where scheduling algorithms come in (no wonder there is a whole post dedicated to them).
Time-slice round-robin scheduling
In this project, the scheduling algorithm used is the time-slice round-robin algorithm (RR, Round-Robin).
In this chapter we only need the most primitive RR algorithm. In words: maintain a task queue, take the application at the head of the queue and run it for one time slice, then move it to the tail of the queue, take the next application from the head, and so on until all applications have finished. A minimal sketch of this idea follows.
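The sketch below is plain hosted Rust, just to make the queue manipulation concrete; the task names and the run_for_one_time_slice stub are made up for illustration and are not part of the kernel code.
// A toy round-robin loop over a task queue.
use std::collections::VecDeque;

fn run_for_one_time_slice(task: &str) -> bool {
    // stand-in for "run the task for one time slice"; pretend it finishes
    println!("running {task} for one time slice");
    true
}

fn round_robin(mut queue: VecDeque<&str>) {
    // take from the head, run for a slice, and requeue at the tail if unfinished
    while let Some(task) = queue.pop_front() {
        if !run_for_one_time_slice(task) {
            queue.push_back(task);
        }
    }
}

fn main() {
    round_robin(VecDeque::from(["power_3", "power_5", "power_7"]));
}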
Time Slice Implementation
At the beginning of this chapter we guessed that the time slice would be implemented with a timer plus interrupts, and so far that seems to be exactly the case.
Interrupts in RISC-V
The official document's explanation of interrupts is as follows; it is very insightful, especially the comparison between synchrony and asynchrony.
In the RISC-V architecture, an interrupt (Interrupt), like the exceptions introduced in Chapter 2 (including those caused by program errors or by executing Trap-class instructions such as the ecall used for system calls), is a kind of Trap, but it is triggered for a different reason. For a given processor core, exceptions are synchronous (Synchronous) with the CPU's instruction execution: the cause of an exception can always be traced back to the execution of some instruction. Interrupts, on the other hand, are asynchronous (Asynchronous) with respect to the instruction currently being executed: which peripheral an interrupt comes from and how it is triggered are completely independent of the instruction the processor is currently executing.
Regarding interrupts, here is a key point: the processor checks for interrupts after executing each instruction.
An interrupt can be understood as being initiated by a set of circuits that has nothing to do with the processor's instruction execution (for a clock interrupt this can be as simple as a counter and a comparator), and this circuitry is connected to the processor by, in effect, a single wire. When an interrupt is triggered, the external circuit drives a high level or a rising edge onto this wire; the processor checks the wire after each instruction and decides whether to continue with the next instruction or to enter the interrupt handling flow. In other words, in most cases the hardware units involved in instruction execution and the circuits that may raise interrupts operate completely in parallel (Parallel), connected only by that single wire. A toy software sketch of this picture follows.
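The model below is purely illustrative (ToyCpu and its fields are invented for this sketch); it has nothing to do with real hardware or with the kernel code in this chapter, it just mimics the "check the wire after every instruction" behavior described above.
// A toy model of a core that samples the interrupt line after every instruction.
struct ToyCpu {
    pc: usize,
    interrupt_line: bool,    // the single wire driven by the external timer circuit
    interrupts_enabled: bool,
}

impl ToyCpu {
    fn step(&mut self) {
        self.execute_one_instruction();
        // after *every* instruction, sample the wire and decide whether to
        // continue or divert into trap handling
        if self.interrupt_line && self.interrupts_enabled {
            self.enter_trap_handling();
        }
    }

    fn execute_one_instruction(&mut self) {
        self.pc += 4; // pretend every instruction is 4 bytes long
    }

    fn enter_trap_handling(&mut self) {
        self.interrupt_line = false; // acknowledged; real hardware is more involved
        // ... save state, jump to the handler, etc.
    }
}

fn main() {
    let mut cpu = ToyCpu { pc: 0x8020_0000, interrupt_line: false, interrupts_enabled: true };
    cpu.interrupt_line = true; // the external circuit raises the line
    cpu.step();                // the interrupt is noticed only after the instruction completes
    println!("pc = {:#x}, line cleared = {}", cpu.pc, !cpu.interrupt_line);
}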
RISC-V also has interrupt types and priority rules, which are worth reading carefully in the official document.
Only one of the more important rules is mentioned here: once an interrupt is being handled, other interrupts at or below its priority level are masked.
Timer interrupts
It is mentioned here that the RISC-V architecture requires the processor to have a built-in clock for timekeeping.
This reminds me of the SysTick timer built into the Cortex-M3, which we learned about before, as well as the little-noticed DWT module that is also built into the Cortex-M3.
(At this point you could also go back and do the delay program from the third question of the Chapter 1 exercises. Remember the riscv::register crate we used earlier to access the sepc register? Now we can call it directly to implement that.)
In the RISC-V 64 architecture, this clock is exposed through the mtime register.
This register cannot be accessed directly from S privilege level; recall the OS architecture layers we learned about earlier.
Create the corresponding file under os/src/.
We only need to go through the interface provided by the layer above us at M privilege level (the SBI), and RustSBI provides exactly such an interface, so we can implement it like this:
// os/src/
use riscv::register::time;
/// read the `mtime` counter: the current time measured in ticks
pub fn get_time() -> usize {
    time::read()
}
In order to force application switching at regular intervals, we need timer interrupt support.
RustSBI can do this for us: in os/src/ we can make an SBI call to set a timer. The underlying principle is to set the value of mtimecmp; when the mtime counter reaches it, an interrupt is triggered:
// os/src/
/// program the timer via the SBI `set_timer` call
pub fn set_timer(time: usize) {
    sbi_rt::set_timer(time as _);
}
Note: the code written in the manual is an older version; with the sbi-rt version configured in Chapter 0 (0.0.2), the code above is what should be used.
Now wrap this in os/src/: compute the increment corresponding to 10ms, add it to the current value of mtime, and use the sum as the value of mtimecmp:
// os/src/
use crate::config::CLOCK_FREQ;
const TICKS_PER_SEC: usize = 100;

/// program the next timer interrupt one time slice (1/100 s = 10 ms) from now
pub fn set_next_trigger() {
    set_timer(get_time() + CLOCK_FREQ / TICKS_PER_SEC);
}
Note that CLOCK_FREQ here is a constant defined in os/src/. In the source code you can see both the K210 clock frequency and the QEMU default clock frequency:
/*
#[cfg(feature = "board_k210")]
pub const CLOCK_FREQ: usize = 403000000 / 62;
#[cfg(feature = "board_qemu")]
pub const CLOCK_FREQ: usize = 12500000;
*/
pub use crate::board::CLOCK_FREQ;
This board module needs to be implemented under src/board/:
//! Constants used in rCore for qemu
pub const CLOCK_FREQ: usize = 12500000;
Finally, both the board module and the timer module need to be declared in os/src/:
...
mod timer;
#[path = "boards/"]
mod board;
...
#[path = "boards/"] is a built-in attribute that tells the Rust compiler where the module's actual source file is located, namely under boards/. A tiny standalone illustration follows.
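The example below is hypothetical and unrelated to the project layout above: it assumes a file src/boards/some_board.rs containing only `pub const CLOCK_FREQ: usize = 12_500_000;`, and shows how #[path] redirects the module lookup.
// hypothetical file: src/boards/some_board.rs
//   pub const CLOCK_FREQ: usize = 12_500_000;

// hypothetical file: src/main.rs
#[path = "boards/some_board.rs"] // the module body is read from this file...
mod board;                       // ...but the module is still referred to as `board`

fn main() {
    println!("CLOCK_FREQ = {}", board::CLOCK_FREQ);
}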
It is also useful to implement a function that returns the current time in a concrete unit; it can be used, for example, to measure how long an application runs.
// os/src/
const MICRO_PER_SEC: usize = 1_000_000;
/// read the current time, converted to microseconds
pub fn get_time_us() -> usize {
    time::read() / (CLOCK_FREQ / MICRO_PER_SEC)
}
Here the official documentation says to implement a system call that applications can use to get the current time; that is what the get_time_us function above is for.
Based on the experience of the previous section, we only need to write the implementation at the user level and at the kernel level separately.
First, the user level:
// user/src/
const SYSCALL_GET_TIME: usize = 169;
pub fn sys_get_time() -> isize {
    syscall(SYSCALL_GET_TIME, [0, 0, 0])
}
Then wrap it one more level:
// user/src/
pub fn get_time() -> isize {
    sys_get_time()
}
Then it needs to be implemented at the kernel level; here is the syscall dispatch:
// os/src/syscall/
...
const SYSCALL_GET_TIME: usize = 169;
/// handle syscall exception with `syscall_id` and other arguments
pub fn syscall(syscall_id: usize, args: [usize; 3]) -> isize {
    match syscall_id {
        SYSCALL_WRITE => sys_write(args[0], args[1] as *const u8, args[2]),
        SYSCALL_EXIT => sys_exit(args[0] as i32),
        SYSCALL_YIELD => sys_yield(),
        SYSCALL_GET_TIME => sys_get_time(),
        _ => panic!("Unsupported syscall_id: {}", syscall_id),
    }
}
...
We also need to provide the actual implementation of sys_get_time:
// os/src/syscall/
use crate::timer::get_time_us;
pub fn sys_get_time() -> isize {
    get_time_us() as isize
}
Of course we cannot just return the raw mtime counter value; we want a meaningful unit. So the implementation of get_time_us is obvious: use the system clock frequency to convert the counter value into a duration in microseconds (a quick numeric sanity check follows the code).
// os/src/
const MICRO_PER_SEC: usize = 1_000_000;
pub fn get_time_us() -> usize {
    time::read() / (CLOCK_FREQ / MICRO_PER_SEC)
}
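As a sanity check on this conversion, here is a plain hosted-Rust snippet (not kernel code) using the QEMU CLOCK_FREQ value shown earlier:
// with CLOCK_FREQ = 12_500_000 and MICRO_PER_SEC = 1_000_000 the divisor is
// 12_500_000 / 1_000_000 = 12 ticks per microsecond (12.5 truncated to 12),
// so the result is a slightly coarse approximation
const CLOCK_FREQ: usize = 12_500_000;
const MICRO_PER_SEC: usize = 1_000_000;

fn main() {
    assert_eq!(CLOCK_FREQ / MICRO_PER_SEC, 12);
    // one full second of ticks converts to 1_041_666 "microseconds",
    // about 4% more than the true 1_000_000, because 12.5 was rounded down
    assert_eq!(CLOCK_FREQ / (CLOCK_FREQ / MICRO_PER_SEC), 1_041_666);
    println!("conversion checks passed");
}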
preemptive scheduling
Recall here that an interrupt is also a trap.
Implementing preemptive scheduling is then easy: when a timer interrupt arrives, set the next timer, then call suspend_current_and_run_next to suspend the current application and run the next one.
So we just need to modify the trap_handler function and add the corresponding handling logic:
// os/src/trap/
match scause.cause() {
    ...
    Trap::Interrupt(Interrupt::SupervisorTimer) => {
        // program the next time slice, then switch to the next task
        set_next_trigger();
        suspend_current_and_run_next();
    }
    ...
}
So where does the first time slice come from? The answer: we set the first timer ourselves with timer::set_next_trigger(). And to keep S privilege level timer interrupts from being masked, we also need to enable the timer interrupt with trap::enable_timer_interrupt():
// os/src/
#[no_mangle]
pub fn rust_main() -> ! {
    clear_bss();
    println!("[kernel] Hello, world!");
    trap::init();
    loader::load_apps();
    trap::enable_timer_interrupt();
    timer::set_next_trigger();
    task::run_first_task();
    panic!("Unreachable in rust_main!");
}
trap::enable_timer_interrupt() actually sets the stie bit of the sie register, so that S privilege level timer interrupts are no longer masked:
// os/src/trap/
use riscv::register::sie;

/// enable the S-mode timer interrupt by setting `sie.stie`
pub fn enable_timer_interrupt() {
    unsafe {
        sie::set_stimer();
    }
}
There is a special handling mechanism to keep an eye on here, although at my level I could not dig into exactly how it works.
Careful readers may notice that we did not set the SPIE bit of sstatus to 1 in the application's initial Trap context. This means that while the CPU executes the application in user mode, the SIE bit of sstatus is 0, which by definition means the CPU masks all S-mode interrupts, naturally including the S privilege level timer interrupt. However, we can observe that our applications are still interrupted normally after exhausting a time slice. This is because when the CPU receives an S-mode timer interrupt while in U-mode, it is preempted and Trap handling occurs regardless of whether the SIE bit is set.
Note that the mechanism for voluntarily yielding the CPU is still preserved. For example, this application:
// user/src/bin/
#![no_std]
#![no_main]

#[macro_use]
extern crate user_lib;
use user_lib::{get_time, yield_};

#[no_mangle]
fn main() -> i32 {
    let current_timer = get_time();
    let wait_for = current_timer + 3000;
    while get_time() < wait_for {
        yield_();
    }
    println!("Test sleep OK!");
    0
}
If yield_() were not called here, the loop would spin until the next timer interrupt, burning the whole 10ms time slice and simply wasting CPU time.
Concrete implementation
The main thing still missing is the set of test applications for the time-sharing multitasking system with preemptive scheduling.
All we have to do is find their implementations in the source code and add them.
Here is some analysis of them, starting with the first:
// user\src\bin\00power_3.rs
#![no_std]
#![no_main]
#[macro_use]
extern crate user_lib;
const LEN: usize = 100;
#[no_mangle]
fn main() -> i32 {
    let p = 3u64;
    let m = 998244353u64;
    let iter: usize = 200000;
    let mut s = [0u64; LEN];
    let mut cur = 0usize;
    s[cur] = 1;
    for i in 1..=iter {
        let next = if cur + 1 == LEN { 0 } else { cur + 1 };
        s[next] = s[cur] * p % m;
        cur = next;
        if i % 10000 == 0 {
            println!("power_3 [{}/{}]", i, iter);
        }
    }
    println!("{}^{} = {}(MOD {})", p, iter, s[cur], m);
    println!("Test power_3 OK!");
    0
}
This application computes the value of $p^{iter} \mod m$.
- Multiply the current element s[cur] by p, take the result modulo m, and store it in the next position next.
- Update cur to next. If cur has reached the end of the array (LEN), wrap it back to 0 so the array is reused in a rolling fashion.
- Every time the iteration count reaches a multiple of 10000, print the current progress.
- After all iterations complete, print the final result and the test-passed message.
The remaining two apps, user\src\bin\01power_5.rs and user\src\bin\02power_7.rs, just change the value of p, so I skip them here.
The last application is the one in user\src\bin\ that we mentioned at the end of the previous section, so it will not be elaborated on either.
Once everything is in place, build and run the apps as usual:
cd user
make build
cd ../os
make run
The final run output is:
[rustsbi] RustSBI version 0.3.1, adapting to RISC-V SBI v1.0.0
.______ __ __ _______.___________. _______..______ __
| _ \ | | | | / | | / || _ \ | |
| |_) | | | | | | (----`---| |----`| (----`| |_) || |
| / | | | | \ \ | | \ \ | _ < | |
| |\ \----.| `--' |.----) | | | .----) | | |_) || |
| _| `._____| \______/ |_______/ |__| |_______/ |______/ |__|
[rustsbi] Implementation : RustSBI-QEMU Version 0.2.0-alpha.2
[rustsbi] Platform Name : riscv-virtio,qemu
[rustsbi] Platform SMP : 1
[rustsbi] Platform Memory : 0x80000000..0x88000000
[rustsbi] Boot HART : 0
[rustsbi] Device Tree Region : 0x87000000..0x87000f02
[rustsbi] Firmware Address : 0x80000000
[rustsbi] Supervisor Address : 0x80200000
[rustsbi] pmp01: 0x00000000..0x80000000 (-wr)
[rustsbi] pmp02: 0x80000000..0x80200000 (---)
[rustsbi] pmp03: 0x80200000..0x88000000 (xwr)
[rustsbi] pmp04: 0x88000000..0x00000000 (-wr)
[kernel] Hello, world!
[kernel] trap init end
power_3 [10000/200000]
power_3 [20000/200000]
power_3 [30000/200000]
power_3 [40000/200000]
power_3 [50000/200000]
power_3 [60000/200000]
power_3 [70000/200000]
power_3 [80000/200000]
power_3 [90000/200000]
power_3 [100000/200000]
power_3 [110000/200000]
power_3 [120000/200000]
power_3 [130000/200000]
power_3 [140000/200000]
power_3 [150000/200000]
power_3 [160000/200000]
power_3 [170000/200000]
power_3 [180000/200000]
power_3 [190000/200000]
power_3 [200000/200000]
3^200000 = 871008973(MOD 998244353)
Test power_3 OK!
[kernel] Application exited with code 0
power_7 [10000/160000]
power_7 [20000/160000]
power_7 [30000/160000]
power_7 [40000/160000]
power_7 [50000/160000]
power_7 [60000/160000]
power_7 [70000/160000]
power_7 [80000/160000]
power_7 [90000/160000]
power_7 [100000/160000]
power_7 [110000/160000]
power_7 [120000/160000]
power_7 [130000/160000]
power_7 [140000/160000]
power_7 [150000/160000]
power_7 [160000/160000]
7^160000 = 667897727(MOD 998244353)
Test power_7 OK!
[kernel] Application exited with code 0
Test sleep OK!
[kernel] Application exited with code 0
All applications completed!