Layered design in software engineering

From Processes to Threads to Goroutines, Part 1

A lot of the ideas behind concurrent execution came from operating systems. The job of an operating system is really to provide concurrency.
Take your machine as an example: it runs this browser, but at the same time it is doing other things in the background, and the operating system allows them all to run concurrently.

A lot of concurrency starts with the idea of processes.

A process is basically an instance of a running program. There are things every process has that are unique to it, starting with a big chunk of memory. So, if you've got multiple processes running on a machine, every process has its own memory, its own virtual address space.

Every process is going to have its own code.
It's going to have its own stack, a region of memory that mostly handles function calls.
It has its own heap, another region of memory used for dynamic memory allocation, things like that.
Shared libraries, on the other hand, are not unique to a process; they're shared by definition.

So each process is going to have its own stack, its own code, its own virtual address space, and a bunch of other things.
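To make that isolation concrete, here is a minimal Go sketch (Go is where this series is heading) that starts a child process and prints both process IDs. It assumes a POSIX-style system where a sleep command exists; the point is only that parent and child are separate processes with separate virtual address spaces, so neither can read the other's variables directly.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Each process gets its own PID and its own virtual address space.
	fmt.Println("parent PID:", os.Getpid())

	// Start a child process. "sleep" is just a convenient stand-in;
	// any program would do (assumes a POSIX-style system).
	cmd := exec.Command("sleep", "1")
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("child PID:", cmd.Process.Pid)

	// Nothing in the parent's memory is visible to the child, and vice versa;
	// the two would have to communicate through pipes, files, sockets, etc.
	cmd.Wait()
}
```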

Also, a process is going to have registers unique to it.
Registers, in case you don't know, just store values inside the machine. They're like tiny, super fast memories that each hold one value, one word, say. One example is the program counter.

The program counter is the register that tells you which instruction you're executing right now, or really the next instruction you're going to execute.

There are also data registers, and the stack pointer, another register that tells you where you are on the stack, things like that.
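You can actually peek at a program counter from Go, which may make the idea less abstract. The sketch below uses runtime.Caller, which reports the program counter of the current call site; treating that as "the instruction we're executing right now" is a simplification, but it shows the register holds an ordinary value you can look at.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.Caller(0) reports the program counter of the current call site,
	// plus the file and line it corresponds to.
	pc, file, line, ok := runtime.Caller(0)
	if !ok {
		fmt.Println("could not read caller info")
		return
	}
	fn := runtime.FuncForPC(pc)
	fmt.Printf("pc=%#x in %s (%s:%d)\n", pc, fn.Name(), file, line)
}
```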

So, every process has this unique state, which is typically called its context.

The context is a bunch of memory and a bunch of register values that are unique to the process, all of which are needed to execute the program correctly.

One of the main jobs of an operating system is deciding which process runs at which time. That task is called scheduling, and operating systems do it for you.

The user gets the impression of parallelism even when nothing is actually running in parallel. Operating systems can apply real parallelism by mapping a process to a different core, but for now we're talking about a single-core system.
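As a rough analogy in Go terms, the sketch below pins the Go runtime to a single OS thread with runtime.GOMAXPROCS(1), so the two goroutines cannot run in parallel, yet their output still interleaves because the scheduler switches between them. This is the goroutine-level version of the single-core illusion described above, not a demonstration of the OS scheduler itself.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Only one goroutine can execute at a time, so there is no parallelism,
	// only concurrency: the scheduler interleaves the two goroutines.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	for _, name := range []string{"A", "B"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			for i := 0; i < 3; i++ {
				fmt.Println(name, i)
				runtime.Gosched() // yield so the other goroutine gets a turn
			}
		}(name)
	}
	wg.Wait()
}
```

Running it prints the A and B lines interleaved, even though only one thing is ever executing at any instant.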

The operating system also needs to give fair access to resources. By resources I mean things in the system other than the processor itself. For the processor, maybe this process gets a 20-millisecond slice, then that process gets a 20-millisecond slice. But there are other resources too: memory, where one process gets to use this region and another gets to use that region; I/O devices, where one gets to use the screen now and another gets its turn later; and so on.

So, the operating system is basically managing a pile of processes, making sure they don't interfere with each other and that they get fair use of the resources, so they can all complete in a timely manner.



