The kernel can just use raw threads; pretty much the only thing a
process provides is syscalls, which kernel threads of course don't
need.
Also this makes the init process have pid 1 :D
All block functions now take an optional mutex parameter that is
atomically unlocked, instead of having the user unlock it beforehand.
This prevents a ton of race conditions everywhere in the code!
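For illustration, this is the same pattern a userspace condition
variable uses: the wait call releases the lock and blocks as one atomic
step, so a wakeup can't slip in between the unlock and the block. The
class below is a hypothetical userspace sketch, not the kernel's actual
API.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Hypothetical consumer illustrating the pattern: the blocking call takes
// the lock and releases it atomically while going to sleep, so no wakeup
// can be lost between "unlock" and "block".
template<typename T>
class BlockingQueue
{
public:
    void push(T value)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(value));
        }
        m_condvar.notify_one();
    }

    T pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        // wait() unlocks m_mutex and blocks as one atomic step,
        // then re-locks before returning.
        m_condvar.wait(lock, [&] { return !m_queue.empty(); });
        T value = std::move(m_queue.front());
        m_queue.pop();
        return value;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condvar;
    std::queue<T> m_queue;
};
```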
If a connection on a unix socket was closed and the other end tried to
recvfrom, the thread would end up in a broken state where it still held
the socket's spinlock when returning to userspace.
- Fix a race condition when adding a packet to the send buffer before
  the other end has acknowledged it
- Allow sending multiple packets before receiving ACKs for the previous
  ones
This implementation is built on top of inodes instead of fds like Linux
does it. If I start finding ports/software that rely on epoll allowing
duplicate inodes, I will do what Linux does.
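A userspace sketch of where the difference can show up. The inode-keyed
behavior in the comments is my reading of this commit, not verified; on
Linux, epoll keys interest-list entries on the (fd, open file
description) pair, so two fds created with dup() can be registered
separately.

```cpp
#include <sys/epoll.h>
#include <unistd.h>

int main()
{
    int epfd = epoll_create1(0);

    int fd1 = STDIN_FILENO;
    int fd2 = dup(fd1); // same open file, same inode, different fd

    epoll_event ev {};
    ev.events = EPOLLIN;

    ev.data.fd = fd1;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd1, &ev);

    ev.data.fd = fd2;
    // Linux: adds a second, independent interest-list entry.
    // Inode-keyed epoll: this refers to the same inode as fd1.
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd2, &ev);

    close(fd2);
    close(epfd);
    return 0;
}
```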
I'm probably missing multiple epoll_notify's which may cause hangs but
the system seems to work fine :dd:
This patch adds clearing of *Interrupt Cause Registers*, which allows
older QEMU versions to send new interrupts. Apparently this is not
needed on newer releases.
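A minimal sketch of the idea, assuming an e1000-style NIC where the
Interrupt Cause Read register lives at offset 0xC0 and is cleared by
reading it. The register name and MMIO helpers below are illustrative,
not the driver's real code.

```cpp
#include <cstdint>

// Hypothetical MMIO accessor; must point at the device's mapped registers.
static volatile uint32_t* g_e1000_mmio;

static constexpr uint32_t REG_ICR = 0x00C0; // Interrupt Cause Read

static inline uint32_t e1000_read32(uint32_t reg)
{
    return g_e1000_mmio[reg / sizeof(uint32_t)];
}

void e1000_irq_handler()
{
    // Reading ICR both reports and clears the pending interrupt causes.
    // Without this, older QEMU versions never deassert the cause and thus
    // never deliver another interrupt.
    uint32_t icr = e1000_read32(REG_ICR);
    if (icr == 0)
        return; // nothing pending, not our interrupt

    // ... handle receive/transmit/link-status causes based on icr bits ...
}
```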
I had not understood how MSIs work and was routing them through the
IOAPIC. This is not necessary and should not be done :D
Also MSIs were reserving interrupts that the IOAPIC was capable of
generating. Now the IOAPIC and MSIs use different sets of interrupts,
so the IOAPIC can use more interrupts if needed.
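A hypothetical sketch of the vector split (the ranges and names are made
up for illustration): an MSI writes its message straight to the LAPIC,
so it only needs a vector, not an IOAPIC redirection entry, and it can
be allocated from a range the IOAPIC never touches.

```cpp
#include <cstdint>
#include <optional>

namespace InterruptVectors
{
    // Disjoint ranges: MSI allocations no longer eat into the vectors
    // the IOAPIC could otherwise use.
    constexpr uint8_t IRQ_VECTOR_BASE = 0x20; // 0x20-0x4F for IOAPIC IRQs
    constexpr uint8_t MSI_VECTOR_BASE = 0x50; // 0x50-0x7F for MSIs
    constexpr uint8_t MSI_VECTOR_COUNT = 0x30;

    static bool s_msi_used[MSI_VECTOR_COUNT];

    inline uint8_t irq_to_vector(uint8_t irq)
    {
        return IRQ_VECTOR_BASE + irq;
    }

    inline std::optional<uint8_t> allocate_msi_vector()
    {
        for (uint8_t i = 0; i < MSI_VECTOR_COUNT; i++)
        {
            if (!s_msi_used[i])
            {
                s_msi_used[i] = true;
                return static_cast<uint8_t>(MSI_VECTOR_BASE + i);
            }
        }
        return std::nullopt;
    }
}
```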
Change Semaphore -> ThreadBlocker
This was not a semaphore; I just named it one because I didn't know
what a semaphore was. I have been meaning to change this for a while,
but it was in no way urgent :D
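For anyone wondering about the distinction, here is a hypothetical
userspace analogue of the two concepts (not the kernel's actual
classes): a semaphore has a count, so a post is remembered even when
nobody is waiting yet, while a thread blocker only ever wakes whoever
happens to be blocked at that moment.

```cpp
#include <condition_variable>
#include <mutex>

class CountingSemaphore
{
public:
    // A post is remembered in the count even if nobody is waiting yet.
    void post() { std::lock_guard<std::mutex> l(m); count++; cv.notify_one(); }
    void wait() { std::unique_lock<std::mutex> l(m); cv.wait(l, [&]{ return count > 0; }); count--; }
private:
    std::mutex m;
    std::condition_variable cv;
    unsigned count = 0;
};

class ThreadBlocker
{
public:
    // Wakeups are not remembered: unblock() only affects threads that
    // are already blocked (spurious wakeups ignored for brevity).
    void unblock() { cv.notify_all(); }
    void block()   { std::unique_lock<std::mutex> l(m); cv.wait(l); }
private:
    std::mutex m;
    std::condition_variable cv;
};
```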
Implement SMP events. Processors can now be sent SMP events through
IPIs. SMP events can be sent either to a single processor or broadcast
to every processor.
PageTable::{map_page,map_range,unmap_page,unmap_range}() now send an
SMP event to invalidate TLB caches for the changed pages.
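A rough sketch of the shootdown flow, with made-up names (the real
PageTable code and SMP event plumbing obviously differ):

```cpp
#include <cstdint>

// Made-up SMP event type and sender; in the real kernel the broadcast
// would be backed by an IPI to every other processor.
enum class SMPEvent { InvalidateTLB };

static void send_smp_event_broadcast(SMPEvent, uintptr_t /* vaddr */)
{
    // stub: would write the LAPIC ICR to send IPIs to other processors
}

static inline void invlpg(uintptr_t vaddr)
{
    asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
}

void unmap_page(uintptr_t vaddr)
{
    // ... clear the page table entry for vaddr ...

    invlpg(vaddr);                                            // local TLB
    send_smp_event_broadcast(SMPEvent::InvalidateTLB, vaddr); // remote TLBs
}
```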
The scheduler no longer uses a global run queue. Each processor has its
own scheduler that keeps track of the load on that processor. Once
every second the schedulers do load balancing. Schedulers have no
access to other processors' schedulers; they just see approximate
loads. If a scheduler decides that it has too much load, it will send a
thread to another processor through an SMP event.
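A hypothetical sketch of the once-per-second balancing decision (names,
thresholds and the load metric are made up): each scheduler only reads
the other processors' published approximate loads, and migration
happens by pushing one of its own threads out through an SMP event.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

constexpr size_t MAX_PROCESSORS = 16;

struct Thread;

// Stubs standing in for the real mechanisms.
static Thread* pop_local_thread() { return nullptr; }              // take a thread off our own queue
static void send_thread_to_processor(Thread*, size_t /* cpu */) {} // delivered via SMP event

// Each scheduler publishes only an approximate load figure (e.g. percent busy).
static std::atomic<uint32_t> g_approx_load[MAX_PROCESSORS];

void balance_load(size_t this_cpu)
{
    const uint32_t my_load = g_approx_load[this_cpu].load();

    // Find the least loaded processor without touching its scheduler.
    size_t target = this_cpu;
    uint32_t target_load = my_load;
    for (size_t cpu = 0; cpu < MAX_PROCESSORS; cpu++)
    {
        const uint32_t load = g_approx_load[cpu].load();
        if (load < target_load)
        {
            target_load = load;
            target = cpu;
        }
    }

    // Only migrate if we are clearly the busier one.
    if (target != this_cpu && my_load > target_load + 20)
        if (Thread* thread = pop_local_thread())
            send_thread_to_processor(thread, target);
}
```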
Schedulers are currently run using the timer interrupt on the BSP. This
should not be the case; each processor should use its LAPIC timer for
scheduling interrupts. There is no reason to broadcast an SMP event to
all processors when the BSP gets a timer interrupt.
The old scheduler only got idle load down to about 20% on QEMU, which
was probably due to a very inefficient implementation. The new
scheduler seems to average around 1% idle load, which is much closer to
what I would expect. On my own laptop the idle load seems to be only
around 0.5% on each processor.
When a unix domain socket that has a connection to another socket is
closed, it will make the other socket readable and recv will return 0.
This allows detecting when the socket has been closed.
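From userspace this is the standard POSIX idiom: a readable socket
whose recv() returns 0 means the peer has closed its end.

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

void handle_connection(int fd)
{
    char buffer[128];
    for (;;)
    {
        ssize_t nread = recv(fd, buffer, sizeof(buffer), 0);
        if (nread < 0)
        {
            perror("recv");
            break;
        }
        if (nread == 0)
        {
            // peer closed the connection
            printf("connection closed\n");
            break;
        }
        // ... handle nread bytes ...
    }
    close(fd);
}
```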
Remove a race condition when two ACKs are to be sent one after another.
Always unblock the semaphore once the TCP thread has done something;
this gives TCP sends a better chance of succeeding.
There are multiple places in the networking code that need to enter
blocking mode in a thread-safe way. I should add an API for this so
that a lot of race conditions could be removed.
I had written the ICR register check backwards, which led to interrupts
being handled only when it was not needed and not handled when it was
needed. This somehow still worked, just much slower, often requiring
TCP resends from the server.
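In pseudocode, the bug was roughly the following (a hypothetical
reconstruction, not the actual driver code):

```cpp
#include <cstdint>

// The check was effectively inverted: the handler bailed out exactly when
// ICR had pending cause bits set, and did its work when there was nothing
// to do, so progress only happened via later interrupts and TCP resends.
void handle_interrupt(uint32_t icr)
{
    // Before (backwards): if (icr != 0) return;
    if (icr == 0)
        return; // nothing pending, not our interrupt

    // ... process the receive/transmit causes indicated by icr ...
}
```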