A few words up front
These notes were written by a very green rookie; please feel free to ask questions or point out mistakes.
Contact: 1160712160@
GitHub: /WindDevil (nothing there so far)
Focus of this section
This section mainly expands and extends the concept of a task, introducing:
- Task run status: the different states a task passes through from start to finish: uninitialized, ready to run, running, exited.
- Task control block: manages the task context of a running program, used to control the program's execution and suspension.
- Task-related system calls: the interface between the application and the operating system for a program-initiated pause (sys_yield) and a voluntary exit (sys_exit).
The point for me is to see the concrete implementation of these concepts: I used them back when I learned an RTOS, but how they were actually implemented was never clear to me.
Multiprogramming Background and the yield System Call
Even though the CPU can now run applications all the time, there is still room for its utilization to increase.
As application requirements grow more complex, there are times when peripherals are accessed under the supervision of the kernel; these are another very important part of the computer system, namely Input/Output (I/O).
The CPU passes an I/O request to the peripheral, and after the peripheral finishes processing it, the CPU can read the result of the I/O request back from the peripheral.
For now, assume the CPU can only poll unidirectionally: it learns whether the peripheral has completed the I/O by reading register information that the peripheral provides.
The idea of multiprogramming is:
- The kernel manages multiple applications at the same time. If a peripheral takes long enough to process an I/O request, we can switch tasks and run another application first.
- After switching back, the application reads the device registers again, finds that the I/O request has been processed, and continues on with the returned I/O result.
In that case, as long as enough applications are present at the same time, the latency of I/O processing can, to some extent, be hidden from the CPU: the CPU hardly ever wastes time waiting on a peripheral and is almost always computing.
This task switch is accomplished by the application voluntarily calling the sys_yield system call, which means the application actively gives up the CPU to other applications.
What this paragraph describes is basically polling spread across tasks; in my head, external interrupts still feel better than this kind of multitasking. How to reasonably use external interrupts to improve real-time behavior is a separate problem, though.
As for actively calling sys_yield, it is simply hard to count on. That's why this is called cooperative: the performance of the system depends on programmers releasing the CPU when they design their apps. (If it were me, I'd want to hog the whole CPU; who cares whether anyone else lives or dies.)
A typical case of multiprogramming execution is shown here:
The figure explains it very well:
- The horizontal axis is the timeline.
- The vertical axis is the running entity (tasks and the I/O hardware).
- There are three running entities:
  - I/O Device: the I/O hardware itself.
  - I/O Task: the task that issues requests to the I/O hardware.
  - Other Task: another task that does not touch the I/O hardware.
- At the very beginning, the I/O Task is running.
- At the moment labeled I/O Start, yield, the I/O Task issues a request to the I/O hardware and then releases the CPU.
- The Other Task takes over the CPU while the I/O Device keeps working on the request.
- Execution continues until the start of the period labeled Not Complete, yield again: the Other Task finishes its slice and releases the CPU.
- The I/O Task takes over, checks the status of the I/O hardware, and finds the request still isn't done.
- At the end of the Not Complete, yield again period, the I/O Task releases the CPU.
- The Other Task takes over the CPU again, while the I/O Device keeps working.
- During the Other Task's run, I/O Complete occurs; the software doesn't notice it at that instant.
- At the Continue moment, the Other Task finishes and releases the CPU.
- The I/O Task takes over, checks the status of the I/O hardware, finds the request has been processed, and continues executing.
Above, we introduced sys_yield with "avoiding unnecessary waits on peripherals to improve CPU utilization" as the entry point. But calling sys_yield is not necessarily tied to peripherals: as the kernel's functionality gradually becomes more complex, we will run into other events worth waiting for, and in all of those cases we can call sys_yield right away to avoid the waste caused by the waiting process.
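To make that concrete, here is a hypothetical user-side sketch of such a wait loop (not from the tutorial). It assumes the user library exposes the yield_ wrapper introduced later in this section; device_ready is an invented stand-in for reading a peripheral status register.

```rust
// Hypothetical user-program sketch (my own, not tutorial code).
// Assumes the user library crate is `user_lib` and exposes `yield_`;
// `device_ready` is an invented placeholder for polling a status register.
use user_lib::yield_;

fn device_ready() -> bool {
    // placeholder: a real implementation would read the device's status register
    false
}

fn wait_for_io() {
    while !device_ready() {
        // I/O not finished yet: hand the CPU to other tasks instead of spinning,
        // and check again the next time we are scheduled.
        yield_();
    }
    // once the device reports completion, read the result and continue
}
```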
Disadvantages of sys_yield
This section is somewhat related to my initial thoughts on real-time issues.
After an application calls it to voluntarily surrender CPU usage, the point at which it is next allowed to use the CPU again is related to the kernel's scheduling policy and the current overall application execution, and is likely to be much later than the point at which the event the application is waiting for (e.g., a peripheral completing a request) is reached. This can result in erratic or very long response latencies for that application. For example, imagine hitting the keyboard and seeing the characters on the screen several minutes later - that's more than a human can bear. But don't worry, we'll have a more elegant solution later.
Standard interfaces for sys_yield
Recall the two syscalls we mentioned earlier.
On the kernel side they were implemented as:
//os/syscall/mod
const SYSCALL_WRITE: usize = 64;
const SYSCALL_EXIT: usize = 93;
mod fs;
mod process;
use fs::*;
use process::*;
/// handle syscall exception with `syscall_id` and other arguments
pub fn syscall(syscall_id: usize, args: [usize; 3]) -> isize {
match syscall_id {
SYSCALL_WRITE => sys_write(args[0], args[1] as *const u8, args[2]),
SYSCALL_EXIT => sys_exit(args[0] as i32),
_ => panic!("Unsupported syscall_id: {}", syscall_id),
}
}
On the user side:
//user/syscall
use core::arch::asm;
const SYSCALL_WRITE: usize = 64;
const SYSCALL_EXIT: usize = 93;
fn syscall(id: usize, args: [usize; 3]) -> isize {
let mut ret: isize;
unsafe {
asm!(
"ecall",
inlateout("x10") args[0] => ret,
in("x11") args[1],
in("x12") args[2],
in("x17") id
);
}
ret
}
pub fn sys_write(fd: usize, buffer: &[u8]) -> isize {
syscall(SYSCALL_WRITE, [fd, buffer.as_ptr() as usize, buffer.len()])
}
pub fn sys_exit(exit_code: i32) -> isize {
syscall(SYSCALL_EXIT, [exit_code as usize, 0, 0])
}
It's worth understanding here that the syscall, sys_write, and sys_exit on the user side share their names with the kernel-side functions but are not the same functions. Understood.
Now let's keep going and implement a new system call, sys_yield.
First, implement the interface at the user level:
// user/src/
pub fn sys_yield() -> isize {
syscall(SYSCALL_YIELD, [0, 0, 0])
}
// user/src/
pub fn yield_() -> isize { sys_yield() }
SYSCALL_YIELD is, likewise, a constant that needs to be defined (a sketch of it follows below).
There's one small wrinkle: yield is a Rust keyword, so the user-level wrapper function name gets a trailing _ appended.
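For reference, a sketch of how I expect that constant to look; 124 is the riscv64 Linux number for sched_yield, which the tutorial reuses, but treat the exact value as an assumption on my part.

```rust
// Needed on both the user side and the kernel side.
// 124 is the riscv64 Linux syscall number for sched_yield (assumed here).
const SYSCALL_YIELD: usize = 124;
```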
So the kernel-side syscall dispatcher also needs a new match arm. For now I can only write it as pseudocode, because I don't yet know exactly how to fill in the arguments (a completed sketch follows the code).
pub fn syscall(syscall_id: usize, args: [usize; 3]) -> isize {
match syscall_id {
// Here is the pseudo-code
SYSCALL_YIELD => sys_yield(...)
// Here is the pseudo-code
SYSCALL_WRITE => sys_write(args[0], args[1] as *const u8, args[2]),
SYSCALL_EXIT => sys_exit(args[0] as i32),
_ => panic!("Unsupported syscall_id: {}", syscall_id),
}
}
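Since the kernel-side sys_yield shown later in this section turns out to take no arguments, the pseudocode arm can be filled in. A sketch of how I expect the dispatcher to end up:

```rust
// filled-in version of the pseudocode above, assuming the kernel-side
// sys_yield (implemented later in this section) takes no arguments
pub fn syscall(syscall_id: usize, args: [usize; 3]) -> isize {
    match syscall_id {
        SYSCALL_WRITE => sys_write(args[0], args[1] as *const u8, args[2]),
        SYSCALL_EXIT => sys_exit(args[0] as i32),
        SYSCALL_YIELD => sys_yield(),
        _ => panic!("Unsupported syscall_id: {}", syscall_id),
    }
}
```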
Task Control Block and Task Run Status
Recall the AppManager implemented in the previous chapter. It consisted of three parts:
- the number of applications,
- the currently running application,
- the entry address of each application.
But once we consider the state each task is in, things can't stay as simple as the two tasks shown above; there will be more tasks and more complex situations.
Recall that at the beginning of this section we said we would create the concept of a task run status, classifying a task into the following states:
- uninitialized
- ready to run
- running
- exited
So we can construct such a type in Rust:
// os/src/task/
#[derive(Copy, Clone, PartialEq)]
pub enum TaskStatus {
    UnInit,  // uninitialized
    Ready,   // ready to run
    Running, // running
    Exited,  // exited
}
#[derive]
This attribute is a bit like annotations in Kotlin: it lets the compiler automatically implement certain traits for you (a tiny demo follows this list).
- Implementing the Clone trait lets you call the clone function to make a copy;
- Implementing the PartialEq trait lets you compare two instances of the type with the == operator; logically only two equal task run states should compare equal, and that is indeed the case;
- Copy is a marker trait that decides whether the type uses move semantics or copy semantics when passed or assigned by value.
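A tiny illustration of what these derives buy us (my own example, not tutorial code):

```rust
// Small demo of the derived traits on TaskStatus.
fn derive_demo() {
    let a = TaskStatus::Ready;
    let b = a;                        // Copy: `a` stays usable after this assignment
    assert!(a == b);                  // PartialEq: `==` compares the two states
    assert!(a != TaskStatus::Exited); // `!=` comes along with PartialEq
    let c = a.clone();                // Clone: an explicit copy also works
    assert!(c == TaskStatus::Ready);
}
```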
Recall the TaskContext mentioned in the previous section; with it, the two parts our task control block needs to hold are known:
- TaskContext, saving the task context;
- TaskStatus, saving the task status.
So construct a structure like this in Rust:
// os/src/task/
#[derive(Copy, Clone)]
pub struct TaskControlBlock {
pub task_status: TaskStatus,
pub task_cx: TaskContext,
}
task manager
With TaskControlBlock in hand, we can implement a task manager.
The task manager needs to manage multiple tasks, so it needs to know:
- the number of apps,
- the current task,
- the control block of each task:
  - the task's status,
  - the task's context.
Here we implement it by separating the constant part from the variable part:
// os/src/task/
pub struct TaskManager {
num_app: usize,
inner: UPSafeCell<TaskManagerInner>,
}
struct TaskManagerInner {
tasks: [TaskControlBlock; MAX_APP_NUM],
current_task: usize,
}
That's because num_app is constant and never needs to change, while inner is mutable and therefore needs to be wrapped in UPSafeCell to provide interior mutability and safe borrowing on a single core (a sketch of UPSafeCell follows below).
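For reference, UPSafeCell was introduced in the previous chapter: roughly, it wraps a RefCell and unsafely claims Sync on the promise that we only ever run on a single core. A sketch of its shape, paraphrased from memory, so details may differ from the real code:

```rust
use core::cell::{RefCell, RefMut};

/// Uniprocessor-only interior mutability, as introduced in the previous chapter
/// (paraphrased from memory; treat the exact details as approximate).
pub struct UPSafeCell<T> {
    inner: RefCell<T>,
}

// Safe only because we promise the kernel runs on a single hart.
unsafe impl<T> Sync for UPSafeCell<T> {}

impl<T> UPSafeCell<T> {
    /// unsafe: the caller promises single-core use
    pub unsafe fn new(value: T) -> Self {
        Self { inner: RefCell::new(value) }
    }
    /// Exclusively borrow the inner value; panics on nested borrows.
    pub fn exclusive_access(&self) -> RefMut<'_, T> {
        self.inner.borrow_mut()
    }
}
```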
The official documentation mentions here that:
- in chapter two, AppManager could infer the previous/next application from current_app;
- but in TaskManager's inner TaskManagerInner, current_task can only tell us which task is the current one.
Create a global instance TASK_MANAGER of TaskManager, still using the lazy initialization approach:
// os/src/task/
lazy_static! {
pub static ref TASK_MANAGER: TaskManager = {
let num_app = get_num_app();
let mut tasks = [
TaskControlBlock {
task_cx: TaskContext::zero_init(),
task_status: TaskStatus::UnInit
};
MAX_APP_NUM
];
for i in 0..num_app {
tasks[i].task_cx = TaskContext::goto_restore(init_app_cx(i));
tasks[i].task_status = TaskStatus::Ready;
}
TaskManager {
num_app,
inner: unsafe { UPSafeCell::new(TaskManagerInner {
tasks,
current_task: 0,
})},
}
};
}
The initialization sequence is:
- Use get_num_app, implemented in the previous section (sketched after this list), to get the number of tasks.
- Create an array of TaskControlBlock with size MAX_APP_NUM.
- Then use init_app_cx, implemented in the previous section, to get the context of each task already loaded into memory.
- Initialize all tasks to the Ready status.
- Finally, build an anonymous TaskManagerInner from tasks and a current_task initialized to 0, wrap it in UPSafeCell, and together with num_app assemble a TaskManager, which is assigned to TASK_MANAGER.
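For reference, a sketch of get_num_app from the previous section's loader. It reads the app count that link_app.S places at the _num_app symbol; reproduced from memory, so treat it as an approximation.

```rust
// Sketch of get_num_app (loader) as I remember it from the previous section.
pub fn get_num_app() -> usize {
    extern "C" {
        fn _num_app();
    }
    // the first word at _num_app is the number of linked-in apps
    unsafe { (_num_app as usize as *const usize).read_volatile() }
}
```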
Implement the sys_yield and sys_exit system calls.
As in the previous chapter, the kernel-side syscall function dispatches to a concrete function based on the syscall number.
One thing to keep straight:
- the application-level syscall function merely uses ecall to trigger a Trap;
- the kernel-level syscall function is where the real implementation lives.
Here is the concrete kernel-level implementation that the syscall dispatcher calls as one of its branches:
// os/src/syscall/
use crate::task::suspend_current_and_run_next;
pub fn sys_yield() -> isize {
suspend_current_and_run_next();
0
}
This is sys_yield, used to suspend the current application and switch to the next one.
Notice that the concrete work is abstracted into the suspend_current_and_run_next interface, keeping the interface name consistent with what it does.
Now is a good time to revisit the sys_exit we implemented in the previous chapter:
//! App management syscalls
use crate::loader::run_next_app;
use crate::println;
/// task exits and submit an exit code
pub fn sys_exit(exit_code: i32) -> ! {
println!("[kernel] Application exited with code {}", exit_code);
run_next_app()
}
After printing a log line, it uses run_next_app to switch to the next app.
Considering that run_next_app no longer fits now that we have a task scheduling system, the concrete implementation of sys_exit also has to change:
// os/src/syscall/
use crate::task::exit_current_and_run_next;
pub fn sys_exit(exit_code: i32) -> ! {
println!("[kernel] Application exited with code {}", exit_code);
exit_current_and_run_next();
panic!("Unreachable in sys_exit!");
}
You can see that the concrete work is now abstracted into the exit_current_and_run_next interface, again keeping the interface name consistent with what it does.
Next we just need to actually implement the two interfaces just mentioned:
// os/src/task/
pub fn suspend_current_and_run_next() {
mark_current_suspended();
run_next_task();
}
pub fn exit_current_and_run_next() {
mark_current_exited();
run_next_task();
}
Only an excerpt is shown here; three functions in it still remain to be implemented:
- mark_current_suspended
- mark_current_exited
- run_next_task
Their implementation contrasts with the previous chapter and section:
- previous chapter: load the application, then modify the program counter and start running it directly;
- previous section: directly modify the program counter and start running.
This chapter does it differently: the problem is solved by modifying the task's status.
// os/src/task/
fn mark_current_suspended() {
    TASK_MANAGER.mark_current_suspended();
}
fn mark_current_exited() {
    TASK_MANAGER.mark_current_exited();
}

impl TaskManager {
    fn mark_current_suspended(&self) {
        // exclusively borrow the mutable inner part
        let mut inner = self.inner.exclusive_access();
        let current = inner.current_task;
        inner.tasks[current].task_status = TaskStatus::Ready;
    }
    fn mark_current_exited(&self) {
        let mut inner = self.inner.exclusive_access();
        let current = inner.current_task;
        inner.tasks[current].task_status = TaskStatus::Exited;
    }
}
Then run_next_task decides, based on the task statuses, which task to run next (can we call that scheduling? ... sort of; yes and no).
// os/src/task/
fn run_next_task() {
    TASK_MANAGER.run_next_task();
}

impl TaskManager {
    fn run_next_task(&self) {
        if let Some(next) = self.find_next_task() {
            let mut inner = self.inner.exclusive_access();
            let current = inner.current_task;
            inner.tasks[next].task_status = TaskStatus::Running;
            inner.current_task = next;
            let current_task_cx_ptr = &mut inner.tasks[current].task_cx as *mut TaskContext;
            let next_task_cx_ptr = &inner.tasks[next].task_cx as *const TaskContext;
            drop(inner);
            // before this, we should drop local variables that must be dropped manually
            unsafe {
                __switch(
                    current_task_cx_ptr,
                    next_task_cx_ptr,
                );
            }
            // go back to user mode
        } else {
            panic!("All applications completed!");
        }
    }
}
Here too, it splits into two parts:
- run_next_task is a wrapper around TASK_MANAGER.run_next_task();.
- The run_next_task method implemented on the TaskManager struct:
  - First, the if let pattern-matching syntax; I hadn't picked up this bit of Rust yet, so at first I didn't understand it.
    - Time to check the Rust book on if let:
      - this form can be used when only one pattern needs to be matched;
      - pattern matching handles cases a simple == can't, namely matching against complex types;
      - using if let rather than match is the shorthand for when only the two cases, Some and None, matter (see the small example after this list).
    - And the Rust book on Some:
      - The Option enum has two possibilities:
        - Some carries a value; whatever Some wraps is that value.
          - When defining the enum, the variant is written Some(T), where T is a type; Some(i32) means it can hold a value of type i32.
          - When instantiating, Some(T) can be instantiated as, say, Some(3), meaning the value exists and is 3.
        - None represents the absence of a value.
    - So this passage means:
      - if self.find_next_task() does not return None, the corresponding return value is Some(next);
      - in the logic that follows, next is the value wrapped by the returned Some(); it stands for the task number of the next task.
  - Next, acquire the single-core mutable borrow of inner.
  - Get the current task from it.
  - Set the next task's status to Running.
  - Change the current task number to the next task number just obtained.
  - Get the contexts of the current task and the next task.
  - Actively drop the acquired inner.
    - Because if it isn't released voluntarily, it can only be accessed again after the function returns, and __switch needs to operate on the task_cx contents held inside inner.
  - Use the __switch implemented in the previous section to complete the task stack switch; go back and review it if you've forgotten.
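To make the if let / Option part concrete, here is a minimal standalone example of the same pattern (my own example, not tutorial code):

```rust
// Minimal illustration of `Option` plus `if let`, mirroring how run_next_task
// consumes the result of find_next_task.
fn pick_even(candidates: &[usize]) -> Option<usize> {
    candidates.iter().copied().find(|&x| x % 2 == 0)
}

fn demo() {
    // The match form spells out both cases explicitly...
    let picked = match pick_even(&[1, 3, 4]) {
        Some(next) => next, // `next` is the value wrapped by Some
        None => usize::MAX, // nothing matched
    };
    assert_eq!(picked, 4);

    // ...while `if let` is the shorthand when we mainly care about the Some case.
    if let Some(next) = pick_even(&[1, 3, 4]) {
        assert_eq!(next, 4);
    } else {
        unreachable!("nothing runnable");
    }
}
```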
As you can see, find_next_task is the key method; it's implemented like this:
// os/src/task/
impl TaskManager {
    fn find_next_task(&self) -> Option<usize> {
        let inner = self.inner.exclusive_access();
        let current = inner.current_task;
        (current + 1..current + self.num_app + 1)
            .map(|id| id % self.num_app)
            .find(|id| {
                inner.tasks[*id].task_status == TaskStatus::Ready
            })
    }
}
After acquiring the single-core borrow and reading current_task, it starts from the task right after the current one, treats the whole array as a circular queue, and queries statuses one by one until it finds the first task in the Ready state.
In Rust, every time I hit something I don't know, it pays not just to muddle through it but also to understand the concepts one layer above it.
What's used here is knowledge of closures and iterators (a small combined example follows this list).
- Iterators are quite similar to for loops: both traverse a collection, but there are real differences, not least whether the collection is accessed through an index.
  - The Iterator trait's map method: iterators in Rust (Iterator) have a map method that takes a closure and passes each element of the iterator to that closure; map produces a new iterator whose elements are whatever the closure returns.
  - Iterators also have a find method, which takes a closure as an argument; the closure defines the condition to look for, and when an element satisfies it, find returns an Option containing the first match found, or None if no element satisfies the condition.
- Closures are anonymous functions that can be stored in a variable or passed as arguments to other functions; unlike ordinary functions, a closure can capture values from the caller's scope.
  - It feels a bit like a C function-like macro wrapping a do...while block, which is how it can "steal" variables from the surrounding scope and use them.
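Putting the iterator pieces together, here is a small self-contained imitation of find_next_task's circular search (my own example; plain bool flags stand in for TaskStatus::Ready):

```rust
// Self-contained imitation of the circular search in find_next_task.
fn find_next(ready: &[bool], current: usize) -> Option<usize> {
    let n = ready.len();
    (current + 1..current + n + 1) // start after `current`, wrap around once
        .map(|id| id % n)          // map indices back into 0..n
        .find(|&id| ready[id])     // first index whose task is ready
}

fn demo() {
    let ready = [false, true, false, true];
    assert_eq!(find_next(&ready, 1), Some(3)); // searches 2, 3, ... and finds 3
    assert_eq!(find_next(&ready, 3), Some(1)); // wraps around and finds 1
    assert_eq!(find_next(&[false, false], 0), None); // nothing is ready
}
```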
This is a great picture :)
First time in userland
Recall that in the previous chapter, run_next_app called __restore, and __restore used sret to return to user mode.
Right now, entering user mode for the first time should likewise require an sret.
But think about the __switch we studied in the previous section: it stays in privileged mode the whole time.
So the first entry into user mode still has to rely on __restore.
To use __restore we first need to build the Trap context; the init_app_cx used in the previous section is moved here (the push_context it relies on is sketched after the code):
// os/src/
pub fn init_app_cx(app_id: usize) -> usize {
KERNEL_STACK[app_id].push_context(
TrapContext::app_init_context(get_base_i(app_id), USER_STACK[app_id].get_sp()),
)
}
Then give TaskContext a method that constructs the context for a task's first execution:
// os/src/task/
impl TaskContext {
pub fn goto_restore(kstack_ptr: usize) -> Self {
extern "C" { fn __restore(); }
Self {
ra: __restore as usize,
sp: kstack_ptr,
s: [0; 12],
}
}
}
What this does:
- A kernel stack pointer is passed in.
- A TaskContext is built from it as follows:
  - the kernel stack pointer becomes the task context's stack pointer;
  - the address of the __restore function becomes the return address after the function call; in other words, after __switch executes ret, __restore runs;
  - s0~s11 are zeroed.
Note that the implementation of __restore needs to change: it no longer needs the mv sp, a0 at the start, because after __switch, sp already points to the address of the Trap context we need.
Then, when the global instance TASK_MANAGER of TaskManager is created, the context of each task is initialized to a TaskContext consisting of:
- the task's kernel stack pointer, determined by where the task sits in memory, as the stack pointer;
- __restore as the return address after the function call;
- zeroed s0~s11.
Then build a method on TaskManager that runs the first task:
impl TaskManager {
    fn run_first_task(&self) -> ! {
        let mut inner = self.inner.exclusive_access();
        let task0 = &mut inner.tasks[0];
        task0.task_status = TaskStatus::Running;
        let next_task_cx_ptr = &task0.task_cx as *const TaskContext;
        drop(inner);
        let mut _unused = TaskContext::zero_init();
        // before this, we should drop local variables that must be dropped manually
        unsafe {
            __switch(
                &mut _unused as *mut TaskContext,
                next_task_cx_ptr,
            );
        }
        panic!("unreachable in run_first_task!");
    }
}
This code reads like this:
- Acquire the single-core exclusive borrow.
- Get a mutable reference to the first task's control block.
- Set that task to the Running state.
- Get that task's context.
- Since __switch is used afterwards, the borrow has to be released voluntarily with drop(inner).
- Call __switch:
  - from an all-zero context constructed by zero_init (just a throwaway place for the boot flow's registers, since we never switch back to it),
  - to the first task's context.
(The module-level wrapper that actually kicks this off is sketched below.)
At this point the order of execution is getting a bit tangled in my head; I'll try to draw a flowchart.
First, the TaskManager structure implemented in this chapter.

The initialization process is:

Initializing TASK_MANAGER:

What happens after calling run_first_task:

Now consider what happens when an app suspends: