Compare commits


126 Commits

Author SHA1 Message Date
DcraftBg
d2c5242267 Kernel: Fix ByteRingBuffer->back() 2026-05-02 20:48:02 +03:00
96ae432bcf fixup 2026-05-02 20:05:13 +03:00
d08f7b1dee Kernel: Cleanup inline assembly accessing cpu specific data 2026-05-02 18:29:07 +03:00
23a0226f1b LibC: Mark exit as noreturn 2026-05-02 18:12:20 +03:00
33ea0f07b7 Kernel: Calculate internet checksum in host endian
There is no need to swap the bytes of every 16-bit word in the packet;
we can do a single swap at the return
2026-05-02 18:11:29 +03:00
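The trick described in this commit can be sketched as follows. This is an illustrative reconstruction, not the kernel's actual code; it assumes a little-endian host and an even-length buffer for brevity. Ones'-complement addition is byte-order independent, so the 16-bit words can be summed in host order with a single byte swap at the end.

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// Illustrative sketch: accumulate host-order 16-bit words, fold the carries,
// complement, then byte-swap once at the return (little-endian host assumed).
uint16_t internet_checksum(const uint8_t* data, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2) {
        uint16_t word;
        std::memcpy(&word, data + i, 2); // host-order load, no per-word swap
        sum += word;
    }
    while (sum >> 16) // fold carries back into the low 16 bits
        sum = (sum & 0xFFFF) + (sum >> 16);
    return __builtin_bswap16(static_cast<uint16_t>(~sum)); // single swap at return
}
```

This produces the same value as the textbook big-endian accumulation, because swapping the bytes of a ones'-complement sum equals the ones'-complement sum of the swapped words.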
3874e0ed1e Kernel: Pass current cpu index as a GDT limit
I had no idea LSL was an instruction. This greatly cleans up the code
for getting the current cpu and does not require extra segment usage :D
2026-05-02 18:10:10 +03:00
d49d260a09 BAN: Fix HashSet insertion and general cleanup
When inserting, look through the full hash chain before adding a new
entry; this fixes double insertion of the same key when there was a
hash collision

Instead of storing used and removed bits, store a 2-bit state. This makes
the code cleaner and mistakes easier to avoid
2026-05-02 16:01:06 +03:00
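The two fixes above can be sketched with a minimal open-addressing set (illustrative names; the real BAN::HashSet differs). Each slot carries a single three-valued state instead of separate "used"/"removed" flags, and insert walks the whole probe chain before claiming a slot, so a key sitting past a removed slot cannot be inserted twice.

```cpp
#include <cstdint>
#include <cstddef>
#include <array>
#include <functional>

enum class SlotState : uint8_t { Free = 0, Used = 1, Removed = 2 };

template<typename T, size_t N>
class FlatHashSet
{
public:
    bool insert(const T& value)
    {
        size_t first_reusable = N;
        size_t idx = std::hash<T>{}(value) % N;
        for (size_t probe = 0; probe < N; probe++, idx = (idx + 1) % N) {
            if (m_states[idx] == SlotState::Used && m_slots[idx] == value)
                return false; // key found later in the chain: no double insert
            if (m_states[idx] != SlotState::Used && first_reusable == N)
                first_reusable = idx; // remember, but keep scanning the chain
            if (m_states[idx] == SlotState::Free)
                break; // end of chain
        }
        if (first_reusable == N)
            return false; // set is full
        m_slots[first_reusable] = value;
        m_states[first_reusable] = SlotState::Used;
        m_size++;
        return true;
    }

    bool contains(const T& value) const
    {
        size_t idx = std::hash<T>{}(value) % N;
        for (size_t probe = 0; probe < N; probe++, idx = (idx + 1) % N) {
            if (m_states[idx] == SlotState::Free)
                return false;
            if (m_states[idx] == SlotState::Used && m_slots[idx] == value)
                return true;
        }
        return false;
    }

    void remove(const T& value)
    {
        size_t idx = std::hash<T>{}(value) % N;
        for (size_t probe = 0; probe < N; probe++, idx = (idx + 1) % N) {
            if (m_states[idx] == SlotState::Free)
                return;
            if (m_states[idx] == SlotState::Used && m_slots[idx] == value) {
                m_states[idx] = SlotState::Removed; // tombstone keeps chain intact
                m_size--;
                return;
            }
        }
    }

    size_t size() const { return m_size; }

private:
    std::array<T, N> m_slots {};
    std::array<SlotState, N> m_states {}; // zero-initialized => Free
    size_t m_size = 0;
};
```

Without the full-chain scan, re-inserting a key whose probe path crosses a tombstone would claim the tombstone and duplicate the key.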
73b03860f4 Kernel: Use empty string instead of nullptr for non-existing proc name 2026-05-02 15:54:37 +03:00
b9754859b2 Kernel: Remove kmalloc_vaddr_of
This is no longer needed. It was only used for x86_64 paging and AP
stack initialization
2026-05-02 15:53:29 +03:00
da50b654ab Kernel: Wrap syscall macro value in parentheses 2026-05-02 15:52:18 +03:00
8869cc7b8c Kernel: Stop stacktrace dump on null bp
This makes stack traces not crash before IDT is initialized
2026-05-02 15:51:12 +03:00
d2b9b49cb0 Kernel: Rewrite paging and AP initialization
Initial step of paging now just prepares fast page for heap, actual page
table initialization happens after heap is initialized which allows
x86_64 to never depend on kmalloc for pages.

Processors' stacks are now allocated with the PMM/VMM instead of
kmalloc identity mapped memory.
2026-05-02 15:45:08 +03:00
21a2e7fd51 BAN: Fix HashSet 2026-05-02 13:12:22 +03:00
1602b195c5 ports: Rework ssl certificates
ca-certificates:
 - update to 2026.03.19
 - install to /etc/cacert
 - extract individual certificates from the bundle

openssl:
 - depend on ca-certificates
 - install hashed symlinks to individual certs

curl:
 - don't depend on ca-certificates; openssl handles this
 - set both ca-bundle and ca-path
2026-04-28 02:23:46 +03:00
1486ad7aa5 Kernel: Don't map NIC buffers as uncached
There is no need for them to be uncached. Having them uncached killed
the networking performance: over 90% of time was spent in the kernel, of
which 80% was in checksum calculation and memcpy, half each (measured in
qemu with e1000e)
2026-04-27 19:45:16 +03:00
ab8bcbec3e Kernel: Allow mapping dma regions as not uncached 2026-04-27 19:36:32 +03:00
0e00b72df6 BAN: Cleanup HashMap
Add a concept for HashMapFindable instead of manually specifying the
requires-expression everywhere.

Allow HashMapFindable also for `remove`, `contains`, `operator[]`, `at`

Make Entry have a const Key. This allows iterator's operator* and
operator-> return values to have const keys.
2026-04-25 22:10:01 +03:00
a63818ec33 BAN: Cleanup HashSet
Add a concept for HashSetFindable instead of manually specifying the
requires-expression everywhere.

Allow HashSetFindable also for `remove` and `contains`.

Remove non-const return values from iterator; you should never modify
the hashed value in place.

Don't require the key to be move assignable; move construction is enough.
2026-04-25 22:10:01 +03:00
cf2e8ffaff Kernel: Remove unnecessary custom RefPtr hashes
RefPtr now exposes its own default hash
2026-04-25 22:10:01 +03:00
b5647ff258 BAN: Rewrite RefPtr,UniqPtr const semantics
const RefPtr<T> now allows accessing T as non-const. Also add default
hash function for RefPtr and UniqPtr based on the pointer value
2026-04-25 22:10:01 +03:00
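The const semantics described here can be sketched as pointer-style ("shallow") constness: a const RefPtr<T> means the pointer itself cannot be reseated, but the pointee stays mutable, mirroring `T* const`. A default hash over the raw pointer value then lets smart pointers serve as hash keys without custom hashers. This is an illustrative sketch with reference counting omitted; BAN's real RefPtr differs.

```cpp
#include <cstddef>
#include <functional>

template<typename T>
class RefPtr
{
public:
    explicit RefPtr(T* ptr) : m_ptr(ptr) {}
    // Accessors are const yet yield non-const T: constness is shallow.
    T* operator->() const { return m_ptr; }
    T& operator*() const { return *m_ptr; }
    T* ptr() const { return m_ptr; }
private:
    T* m_ptr = nullptr;
};

// Default hash based on the pointer value, usable for RefPtr and UniqPtr alike.
template<typename T>
struct RefPtrHash {
    size_t operator()(const RefPtr<T>& ptr) const
    {
        return std::hash<const void*>{}(ptr.ptr());
    }
};
```

With this, `const RefPtr<T>&` parameters still allow calling non-const methods on the pointee, which is what the commit's "accessing T as non-const" means.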
bef53c726b Userspace: Install libdl, libm, libpthread as symlinks to libc
Having dummy libraries was just unnecessary
2026-04-21 21:46:00 +03:00
6b43cadf3a Kernel: Implement stack trace dump with safe memcpy
This fixes kernel panic if the stack trace cannot be read. Manually
validating pointers is definitely not safe
2026-04-21 21:20:52 +03:00
b74812d669 Kernel: Remove unused features from VirtualRange
On-demand paging has not been used ever since I made userspace stack be
a normal MemoryRegion.
2026-04-21 19:58:09 +03:00
eea0154f18 LibC: Cleanup RNG and properly initialize to srand(1) at startup 2026-04-21 00:26:41 +03:00
ea4c34fc0b LibC: Fix printf thread safety
I don't know why I was using a static buffer for value conversions :D
2026-04-21 00:25:56 +03:00
558ed8fd44 LibC: Define pthread_{equal,self} as macros
These really should get inlined :D
2026-04-21 00:25:16 +03:00
8665195350 Kernel: Allow main thread to call pthread_exit
Apparently this is allowed. Also when the last thread calls pthread_exit
the process should also exit
2026-04-21 00:23:35 +03:00
72a24a0d38 Kernel: Reorder FUTEX_WAIT value and timeout check
Return EAGAIN rather than ETIMEDOUT if value does not match at futex
entry
2026-04-21 00:20:37 +03:00
0ee50032f3 Kernel: Fix 32 bit signal trampoline offset 2026-04-21 00:19:33 +03:00
40f3546aca BAN: Rewrite HashMap as a wrapper around HashSet
There is really no need to have two implementations of the same thing.
The only difference now is that HashMap's value type has to be movable,
but this wasn't an issue
2026-04-21 00:18:18 +03:00
a8e496310b Kernel: Use HashMap for fd->epoll_event mapping
I don't know why it was a static array :D
2026-04-21 00:14:24 +03:00
fe613e4274 BAN: Rewrite HashSet
Instead of representing the set as a vector of linked lists, which
required an allocation for every insertion and a deallocation for every
removal, we now store a single big contiguous block of memory and use
hash chains to handle collisions. This intuitively feels much better,
although I did not run any benchmarks.
2026-04-21 00:14:24 +03:00
3264dcee44 BAN: Add Math::round_up_to_power_of_two 2026-04-21 00:14:24 +03:00
71649ffe09 Kernel: Rework ThreadBlocker
I don't know why I thought the block chain had to be stored fully in the
ThreadBlocker; that did not even fix the problem I was trying to fix
when I last rewrote it. Roll back to a doubly linked list of the block
chain and now just check that the node is contained within the
ThreadBlocker before removing and after acquiring the ThreadBlocker's
lock. Also there is no need to have a separate lock for the node's
blocker field. We can just perform atomic reads and writes on it. We can
still get a blocker that the node is no longer part of, but this can be
resolved with a simple check. This patch reduces ThreadBlocker's size
from over 200 bytes to just 12 bytes + 4 bytes of padding
2026-04-20 12:50:09 +03:00
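The scheme above can be sketched as follows (single-threaded, illustrative names; the kernel's real ThreadBlocker differs): the node's blocker field is a plain atomic pointer rather than a lock-protected field, and since a waker may read a stale blocker, membership is re-checked under the blocker's lock before removal.

```cpp
#include <algorithm>
#include <atomic>
#include <list>
#include <mutex>

struct ThreadBlocker;

struct BlockNode {
    // Atomic reads/writes replace a dedicated per-node lock.
    std::atomic<ThreadBlocker*> blocker { nullptr };
};

struct ThreadBlocker {
    std::mutex lock;
    std::list<BlockNode*> nodes; // doubly linked chain of blocked threads

    void add(BlockNode& node)
    {
        std::lock_guard guard(lock);
        nodes.push_back(&node);
        node.blocker.store(&*this);
    }

    // Returns true only if the node was actually still blocked on us; a stale
    // blocker pointer is resolved by this simple membership check.
    bool remove_if_member(BlockNode& node)
    {
        std::lock_guard guard(lock);
        auto it = std::find(nodes.begin(), nodes.end(), &node);
        if (it == nodes.end())
            return false;
        nodes.erase(it);
        node.blocker.store(nullptr);
        return true;
    }
};
```

Because the blocker no longer embeds the whole chain, its footprint is just the lock plus the list head, matching the size reduction the commit describes.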
5b7b2d7ac3 Kernel: Fix memory leak when cleaning up inodes shared page cache 2026-04-19 17:52:21 +03:00
8e543195b1 Kernel: Check for null in pthread_join 2026-04-19 17:52:21 +03:00
a24ec0da2b LibC: Don't complain about stack size until 8192 bytes
It was unnecessary as userspace stack is way bigger than that...
2026-04-19 17:52:21 +03:00
e6284c3cf3 LibC/Kernel: Bump PATH_MAX to 4096 2026-04-19 17:52:21 +03:00
DcraftBg
f527aca9d0 LibC: add strcasestr to string.h 2026-04-19 17:52:21 +03:00
DcraftBg
5fce3b64cc LibC: implemented strptime (partially) 2026-04-17 21:05:58 +03:00
c04ad65f7f LibC: Add mbsinit and wcsrtombs stubs 2026-04-17 18:40:18 +03:00
af17b29414 ports: Add xz port 2026-04-17 18:39:18 +03:00
7badcf80cf ports: Add libarchive port 2026-04-17 18:37:41 +03:00
7f122d9e89 ports: Add bzip2 port 2026-04-17 18:37:30 +03:00
984c7c0a89 LibGUI: Fix packet sending and cleanup receiving 2026-04-15 21:52:51 +03:00
ce318c7930 LibGUI: Cleanup packet {,de}serialization 2026-04-15 21:52:13 +03:00
eff6c79e9e ports/xbanan: Update to a working version
Also don't depend on mesa or Xlib. We only need the protocol headers to
compile the project
2026-04-15 19:25:54 +03:00
aaade52146 LibC: Use __builtin_thread_pointer for _get_uthread()
This generates much nicer assembly as it does not have to read the
thread pointer for every access to the TCB (errno, cancel_state,
cancelled); instead it can read it once and use the same value for all
accesses
2026-04-15 17:32:43 +03:00
1bf5e6a051 WindowServer: Fix xbanan access check 2026-04-15 16:40:30 +03:00
394719a909 userspace: Fix some includes found when compiling to linux 2026-04-15 16:39:36 +03:00
3ebadc5c74 LibDEFLATE: Optimize decompression
Instead of calculating crc32 bit by bit, we now calculate a lookup table
at compile time. The old crc32 calculation was taking almost 50% of
the decompression time.

Also handle multiple symbols at once without outputting to the user. It
is much more efficient to output many bytes at once instead of the up to
258 bytes that a single symbol can decode to :^)
2026-04-14 01:50:30 +03:00
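The compile-time table mentioned above can be sketched with the standard reflected CRC-32 polynomial 0xEDB88320 used by gzip/zlib (illustrative code, not LibDEFLATE's actual implementation). The bit-by-bit loop runs once inside a constexpr function, so the runtime path is one table lookup per byte.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Build the 256-entry CRC-32 table at compile time.
constexpr std::array<uint32_t, 256> make_crc32_table()
{
    std::array<uint32_t, 256> table {};
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t crc = i;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0u);
        table[i] = crc;
    }
    return table;
}

constexpr auto crc32_table = make_crc32_table();

// Runtime path: one table lookup per input byte.
uint32_t crc32(const uint8_t* data, size_t len, uint32_t crc = 0)
{
    crc = ~crc;
    for (size_t i = 0; i < len; i++)
        crc = (crc >> 8) ^ crc32_table[(crc ^ data[i]) & 0xFF];
    return ~crc;
}
```

The standard check value for this CRC variant is 0xCBF43926 over the ASCII string "123456789", which makes it easy to validate the table.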
d471bbf856 Kernel: Cleanup bootloader headers
Also add custom load addresses for x86_64 target. This allows qemu to
load the kernel with -kernel argument. Without these addresses qemu
would refuse to load as it only supports 32 bit ELFs, but as our kernel
starts in 32 bit mode anyway, we can just load it!
2026-04-13 16:48:57 +03:00
c849293f3d Kernel: Add support for loading gzip compressed initrd 2026-04-13 16:48:57 +03:00
0156d06cdc LibDEFLATE: Support decompressing to/from partial buffer
We no longer require the user to pass full compressed data in one go,
instead the decompressor reports to the user if it needs more input or
output space.
2026-04-13 03:04:55 +03:00
ad12bf3e1d LibC: Cleanup environment variable code 2026-04-13 00:36:13 +03:00
42964ad0b4 Kernel: Remove concept of OpenFile
This was just RefPtr<OpenFileDescription> and descriptor flags.
Descriptor flags only define O_CLOEXEC, so we can just store fd's
cloexec status in a bitmap rather than separate fields. This cuts down
the size of OpenFileDescriptorSet to basically half!
2026-04-12 04:42:08 +03:00
87979b1627 LibImage: Don't allocate zlib stream to a contiguous buffer
We can now pass multiple buffers to the decoder!
2026-04-11 19:48:46 +03:00
fed9dbefdf LibDEFLATE: Allow decompression from multiple byte spans
Before, we required the compressed data to live in a single contiguous
chunk of memory.
2026-04-11 19:47:44 +03:00
2984927be5 WindowServer: Block without timeout when there are no damaged regions 2026-04-11 08:41:21 +03:00
2e654b53fa WindowServer: Use rectangular framebuffer syncs 2026-04-11 08:30:15 +03:00
ac6e6f3ec1 Kernel: Add ioctl to sync rectangular areas in framebuffer
msync is not really the best API for framebuffer synchronization
2026-04-11 08:29:10 +03:00
2b97587e9f WindowServer: Rewrite damaged region tracking
Instead of immediately re-rendering client data and syncing at 60 Hz, we
now only keep track of the damaged regions and do the re-render step at
60 Hz as well.
2026-04-11 08:26:22 +03:00
4bde088b28 WindowServer: Store rectangles as min and max bounds
This makes some of the math easier than with x,y and w,h
2026-04-11 06:35:45 +03:00
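The min/max representation pays off in operations like intersection: with inclusive-exclusive [min, max) bounds it is pure component-wise min/max, with none of the +width/+height arithmetic an x,y,w,h rectangle needs. A minimal sketch (illustrative names, not WindowServer's actual Rect):

```cpp
#include <algorithm>

struct Rect {
    int min_x, min_y, max_x, max_y; // [min, max) bounds

    bool valid() const { return min_x < max_x && min_y < max_y; }

    // Intersection is just max of the mins and min of the maxes; an empty
    // result simply comes out as an invalid (inverted) rectangle.
    Rect intersection(const Rect& other) const
    {
        return Rect {
            std::max(min_x, other.min_x), std::max(min_y, other.min_y),
            std::min(max_x, other.max_x), std::min(max_y, other.max_y),
        };
    }
};
```

Union-of-bounds and containment tests simplify the same way, which is presumably the "some math" the commit refers to.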
2a9dad2dd8 LibC: Add SSE2 non-temporal memset and memcpy
Also clean up other assembly by using local labels so they are omitted
from the assembled program.
2026-04-11 03:30:52 +03:00
d11160d2f7 Kernel: Fix si_addr reporting
The meaning of this field is signal specific and it is not the instruction pointer
2026-04-11 03:30:52 +03:00
7333008f40 LibC: Use IP instead of si_addr for faulting instruction
si_addr only means faulting instruction for SIGILL. For SIGSEGV it is
the faulting memory address.
2026-04-11 03:30:52 +03:00
cd7d309fd1 Kernel: Push missing IP and SP to mcontext in signal handler
I was missing these two registers, messing up the whole siginfo_t
structure. This fixes libc's stack trace dump crashing :D
2026-04-11 03:30:52 +03:00
a4ba1da65a LibGUI/WindowServer: Rework packet serialization
Instead of sending while serializing (what even was that), we serialize
the whole packet into a buffer which can be sent in one go. First of all
this reduces the number of sends by a lot. This also fixes WindowServer
ending up sending partial packets when client is not responsive.
Previously we would just try sending once; if any send failed, the send
was aborted while a partial packet was already transmitted. This led to
the packet stream going out of sync, leading to the client killing itself.
Now we allow 64 KiB outgoing buffer per client. If this buffer ever fills
up, we will not send partial packets.
2026-04-11 03:30:52 +03:00
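The all-or-nothing queueing described above can be sketched like this (illustrative; the real WindowServer code differs): a packet is serialized fully into a temporary buffer first, and it is appended to the client's bounded outgoing buffer only if the whole packet fits, so a slow client can never observe a prefix of a packet and desynchronize the stream.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

class PacketQueue
{
public:
    explicit PacketQueue(size_t capacity) : m_capacity(capacity) {}

    // Append a fully serialized packet, or nothing at all.
    bool enqueue(const std::vector<uint8_t>& packet)
    {
        if (m_buffer.size() + packet.size() > m_capacity)
            return false; // drop the whole packet, never a prefix of it
        m_buffer.insert(m_buffer.end(), packet.begin(), packet.end());
        return true;
    }

    size_t pending() const { return m_buffer.size(); }

private:
    size_t m_capacity;
    std::vector<uint8_t> m_buffer; // flushed with a single send() when writable
};
```

With a 64 KiB cap per client, as the commit describes, an unresponsive client costs bounded memory and dropped packets instead of a corrupted stream.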
2f9b8b6fc9 Kernel/LibC: Rework userspace syscall interface
Kernel syscall API no longer zeros all unused argument registers and
libc now uses inlined syscall macro internally. This significantly
cleans up generated code for basic syscall wrapper functions.
2026-04-11 03:30:52 +03:00
279ac6b2b6 BAN: Implement some macro utilities
This contains stuff to count arguments, stringify, concatenate, for_each
2026-04-11 03:30:52 +03:00
9084d9305c Kernel: Change preemption condition
Instead of keeping track of the current time and rescheduling when
interval has passed, keep track of the next expected reschedule time.
This prevents theoretically missing every second preemption when the
scheduler's timer is interrupting at the same rate as the interval.
2026-04-11 03:30:52 +03:00
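The change above can be sketched in a few lines (illustrative names, not the kernel's actual scheduler): rather than comparing elapsed time against the interval, which can alias with the timer tick and skip every other preemption, track the absolute next deadline and advance it by the interval on each reschedule.

```cpp
#include <cstdint>

struct Preemption {
    uint64_t interval_ns;
    uint64_t next_deadline_ns;

    bool should_reschedule(uint64_t now_ns)
    {
        if (now_ns < next_deadline_ns)
            return false;
        // Advance from the previous deadline, not from `now`, so no interval
        // is silently stretched by timer jitter.
        next_deadline_ns += interval_ns;
        return true;
    }
};
```

Advancing from the deadline keeps the long-run reschedule rate exactly one per interval even when each timer interrupt lands slightly late.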
80c4213501 LibC: Make errno macro directly access uthread
This allows inlining errno usages

This breaks libc ABI and requires toolchain rebuild
2026-04-11 03:30:32 +03:00
e0af23a924 LibC: Move uthread definition to its own header
Use `__asm__` instead of `asm` to allow compilation with --std=c99 and
before
2026-04-11 03:30:32 +03:00
7e907b70f6 Kernel: Store memory region size as uint64_t
On 32 bit target, we were storing 32 bit physical region sizes which
would truncate regions > 4 GiB
2026-04-07 03:41:25 +03:00
7fb27b16e8 LibC: Fix pthread cancellation
Install SIGCANCEL handler for all threads.

Remove unneeded atomic stores and loads. States are only changed within
the thread itself.

Define pthread_testcancel as a macro so it gets inlined inside
cancellation points
2026-04-07 03:41:25 +03:00
3fb903d991 LibGUI: Optimize invalidate and set alpha channel
If the window does not have an alpha channel, we now set every pixel's
alpha to 0xFF. This is needed by the WindowServer when it does alpha
blending, there used to be some weird stuff happening on overlapping
windows.

Also when we are invalidating a region with width of the whole window,
we can do a single memcpy instead of a memcpy for each row separately.
2026-04-06 19:29:34 +03:00
2a4a688c2d WindowServer: Optimize rendering
We now use SSE2 to do alpha blending on 4 pixels at a time where
possible and use memcpy instead of manual loops for non blended regions.
2026-04-06 19:29:34 +03:00
1487c86262 Kernel: Resolve \\_S5 package elements on poweroff 2026-04-06 19:29:34 +03:00
4d3751028b LibInput: Honor chroot and credentials when loading keymap 2026-04-06 19:29:34 +03:00
e4c6539964 Kernel: Be more clever with physical memory
Initially allocate all physical memory except kernel memory and boot
modules. Before, we just skipped all memory before the kernel boot
modules. Also release the memory used by boot modules after the kernel
is up and running. Once the boot modules are loaded, there is no need
to keep them in memory.
2026-04-06 19:29:34 +03:00
34b59f062b LibC: Implement blocking pthread_rwlock
pthread_rwlock now uses a mutex and condition variable internally so it
doesn't need to yield while waiting!
2026-04-06 19:29:34 +03:00
ec4aa8d0b6 LibC: Fix shared pthread_barrier init
Initialize internal lock and cond as shared when the barrier is shared
2026-04-05 12:06:18 +03:00
1eebe85071 LibC: Fix pthread_cond_timedwait
If timeout occurred, I was not removing the entry from block list
2026-04-05 11:31:16 +03:00
db0507e670 LibC: Mark pthread_exit noreturn 2026-04-05 11:30:45 +03:00
1e3ca7dc18 Kernel: Fix signal related syscalls
There were missing locks, out of order sigprocmask, incorrect signal
masking...
2026-04-05 02:31:30 +03:00
8ca3c5d778 Kernel: Clean up signal handling
We now respect sa_mask and SA_NODEFER and change the signal mask for
the duration of the signal handler. This is done by making a sigprocmask
syscall at the end of the signal handler. Back-to-back signals will
still grow the stack as the original registers are popped AFTER the
block mask is updated. I guess this is why linux has sigreturn(?).
2026-04-05 02:25:59 +03:00
df257755f7 Kernel: If userspace sets fs or gs, don't overwrite it
The current cpu index is stored in either segment. If userspace sets that
segment, the kernel will not overwrite it on every reschedule. This is fine
as long as user program does not use anything that relies on it :)
2026-04-04 23:48:43 +03:00
d7e292a9f8 Kernel: Drop 32 bit userspace stack to 4 MiB
32 bit userspace only has 256 MiB reserved for stacks, so with 32 MiB
stacks it only allowed a total of 7 threads. Now we can have up to 62
threads
2026-04-04 23:48:43 +03:00
9fce114e8e Kernel: Don't clone entire kernel stack on fork
We only need to copy area between [ret_sp, stack_end]. This range is
always very small compared to the whole stack (64 KiB).
2026-04-04 23:48:43 +03:00
9d83424346 Kernel: Remove unnecessary stack pointer loading
Any time I started a thread I was loading the stack pointer which is
already correctly passed :D
2026-04-04 23:48:43 +03:00
a29681a524 Kernel: Fix signal generation
We need to have interrupts enabled when signal kills the process as
process does mutex locking. Also signals are now only checked when
returning to userspace in the same place where userspace segments are
loaded.
2026-04-04 23:48:43 +03:00
47d85eb281 Kernel: Pass the actual vaddr range to reserve pages 2026-04-04 23:48:43 +03:00
85f676c30a DynamicLoader: Calculate max loaded file count based on dtv size
dtv should be dynamic but I don't care right now :)
2026-04-04 23:48:43 +03:00
8c5fa1c0b8 DynamicLoader: Fix R_386_PC32 relocation
I was not accounting for the elf base in the offset
2026-04-04 23:48:43 +03:00
c7690053ae LibC: Don't crash on 32 bit pthread_create 2026-04-04 23:48:43 +03:00
3f55be638d Kernel: Allow reserve_free_page{,s} to fail
Apparently I was asserting here before :D
2026-04-04 23:48:43 +03:00
664c824bc0 Kernel: Keep fast page always reserved
There was a bug where 32 bit target's reserve_free_page was allocating
the fast page address
2026-04-04 23:48:43 +03:00
e239d9ca55 ports/SDL2: Use 48 kHz floats instead of 44.1 kHz PCM16 2026-04-03 16:17:16 +03:00
bf1d9662d7 LibAudio: Use floats instead of doubles for samples 2026-04-03 16:15:02 +03:00
675c215e6a Kernel: Add CoW support to MemoryBackedRegion
This speeds up fork by A LOT. Forking WindowServer took ~90 ms before
this and now it's ~5 ms.
2026-04-03 01:54:59 +03:00
c09bca56f9 Kernel: Add fast write perm remove to page tables 2026-04-03 01:54:22 +03:00
7d8f7753d5 Kernel: Cleanup and fix page tables and better TLB shootdown 2026-04-03 01:53:30 +03:00
f77aa65dc5 Kernel: Cleanup accessing userspace memory
Instead of doing page validation and loading manually, we just do a
simple memcpy and handle the possible page faults
2026-04-02 16:36:33 +03:00
9589b5984d Kernel: Move USERSPACE_END to lower half
This allows calculating distance to USERSPACE_END from lower half
address
2026-04-02 16:34:47 +03:00
32806a5af3 LibC: Allow "t" in stdio mode 2026-04-02 15:44:50 +03:00
876fbe3d7c LibC: Fix sem_{,timed}wait 2026-04-02 15:43:34 +03:00
c1b8f5e475 LibC: Add and cleanup network definitions 2026-04-02 15:42:00 +03:00
cf31ea9cbe LibC: Add _SC_PHYS_PAGES and _SC_AVPHYS_PAGES 2026-04-02 15:41:26 +03:00
7e6b8c93b4 LibC: Implement strsep 2026-04-02 15:40:23 +03:00
dd2bbe4588 LibC: Implement sched_getcpu 2026-04-02 15:39:36 +03:00
e01e35713b LibC: Allow including assert.h multiple times
Some shit seems to depend on this
2026-04-02 15:38:06 +03:00
82d5d9ba58 LibC: Write memchr, memcmp and strlen with sse 2026-04-02 15:35:03 +03:00
d168492462 WindowServer: bind volume up/down to volume control 2026-04-02 15:24:02 +03:00
6f2e8320a9 TaskBar: Show current volume level 2026-04-02 15:22:42 +03:00
bf4831f468 AudioServer: Add support for volume control 2026-04-02 15:21:38 +03:00
5647cf24d2 Kernel: Implement volume control to audio drivers 2026-04-02 15:14:27 +03:00
85f61aded5 BAN: Use builtins for math overflow 2026-04-02 14:49:12 +03:00
21639071c2 kill: Allow killing with process name 2026-04-02 05:02:05 +03:00
68506a789a Kernel: Add support for volume control keys 2026-04-02 05:02:05 +03:00
d9ca25b796 LibC: Add FNM_CASEFOLD and FNM_IGNORECASE
These are part of POSIX issue 8
2026-03-25 04:27:00 +02:00
e9c81477d7 BAN/LibC: Implement remainder
This is basically just fmod but with fprem1 instead of fprem
2026-03-25 01:06:45 +02:00
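The fmod/remainder distinction mentioned here is easy to see in code: fmod truncates the implicit quotient toward zero, while IEEE 754 remainder (the fprem1-style operation) rounds it to the nearest integer, so its result lies in [-|y|/2, |y|/2]. A minimal demonstration using the standard C++ library:

```cpp
#include <cmath>

// fmod: x - trunc(x/y) * y     -> result has the sign of x
// remainder: x - nearbyint(x/y) * y -> result is the smallest-magnitude residue
double demo_fmod(double x, double y) { return std::fmod(x, y); }
double demo_remainder(double x, double y) { return std::remainder(x, y); }
```

For example, with x = 5.5 and y = 2: fmod gives 1.5 (quotient truncated to 2), while remainder gives -0.5 (quotient rounded to 3).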
5c20d5e291 Kernel: HDAudio hide unusable pins and cleanup path finding 2026-03-24 01:16:47 +02:00
f89d690716 Kernel: HDAudio only probe codecs in STATESTS
This removes unnecessary probing that led to timeouts. Also cap the
codec address at 14 instead of 15. My test laptop was duplicating codec
0 at address 15, leading to duplicate devices.
2026-03-24 00:49:47 +02:00
c563efcd1c AudioServer: Query pins of the asked device and not the current one 2026-03-23 22:57:49 +02:00
dedeebbfbe Kernel: Use ByteRingBuffer with audio buffers 2026-03-23 22:12:40 +02:00
35e2a70de0 AudioServer: Handle client data before disconnecting clients 2026-03-23 20:41:13 +02:00
170 changed files with 6109 additions and 3956 deletions


@@ -1,49 +1,154 @@
#pragma once
#include <BAN/Hash.h>
#include <BAN/LinkedList.h>
#include <BAN/Vector.h>
#include <BAN/HashSet.h>
namespace BAN
{
template<typename Key, typename T, typename HASH = BAN::hash<Key>>
template<typename HashSetIt, typename HashMap, typename Entry>
class HashMapIterator
{
public:
HashMapIterator() = default;
Entry& operator*()
{
return const_cast<Entry&>(m_iterator.operator*());
}
const Entry& operator*() const
{
return m_iterator.operator*();
}
Entry* operator->()
{
return const_cast<Entry*>(m_iterator.operator->());
}
const Entry* operator->() const
{
return m_iterator.operator->();
}
HashMapIterator& operator++()
{
++m_iterator;
return *this;
}
HashMapIterator operator++(int)
{
auto temp = *this;
++(*this);
return temp;
}
bool operator==(HashMapIterator other) const
{
return m_iterator == other.m_iterator;
}
bool operator!=(HashMapIterator other) const
{
return m_iterator != other.m_iterator;
}
private:
explicit HashMapIterator(HashSetIt it)
: m_iterator(it)
{ }
private:
HashSetIt m_iterator;
friend HashMap;
};
namespace detail
{
template<typename T, typename Key, typename HASH, typename COMP>
concept HashMapFindable = requires(const Key& a, const T& b) { COMP()(a, b); HASH()(b); };
}
template<typename Key, typename T, typename HASH = BAN::hash<Key>, typename COMP = BAN::equal<Key>>
class HashMap
{
public:
struct Entry
{
template<typename... Args>
Entry(const Key& key, Args&&... args) requires is_constructible_v<T, Args...>
: key(key)
, value(forward<Args>(args)...)
{}
template<typename... Args>
Entry(Key&& key, Args&&... args) requires is_constructible_v<T, Args...>
: key(BAN::move(key))
, value(forward<Args>(args)...)
{}
Key key;
const Key key;
T value;
Entry() = delete;
Entry& operator=(const Entry&) = delete;
Entry& operator=(Entry&&) = delete;
Entry(const Entry& other)
: key(other.key)
, value(other.value)
{ }
Entry(Entry&& other)
: key(BAN::move(const_cast<Key&>(other.key)))
, value(BAN::move(other.value))
{ }
template<typename... Args>
Entry(Key&& key, Args&&... args)
: key(BAN::move(key))
, value(BAN::forward<Args>(args)...)
{ }
};
struct EntryHash
{
constexpr auto operator()(const Key& a)
{
return HASH()(a);
}
constexpr auto operator()(const Entry& a)
{
return HASH()(a.key);
}
};
struct EntryComp
{
constexpr bool operator()(const Entry& a, const Key& b)
{
return COMP()(a.key, b);
}
constexpr bool operator()(const Entry& a, const Entry& b)
{
return COMP()(a.key, b.key);
}
};
public:
using size_type = size_t;
using key_type = Key;
using value_type = T;
using iterator = IteratorDouble<Entry, Vector, LinkedList, HashMap>;
using const_iterator = ConstIteratorDouble<Entry, Vector, LinkedList, HashMap>;
using size_type = size_t;
using key_type = Key;
using value_type = T;
using iterator = HashMapIterator<typename HashSet<Entry, EntryHash, EntryComp>::iterator, HashMap, Entry>;
using const_iterator = HashMapIterator<typename HashSet<Entry, EntryHash, EntryComp>::const_iterator, HashMap, const Entry>;
public:
HashMap() = default;
HashMap(const HashMap<Key, T, HASH>&);
HashMap(HashMap<Key, T, HASH>&&);
~HashMap();
~HashMap() { clear(); }
HashMap<Key, T, HASH>& operator=(const HashMap<Key, T, HASH>&);
HashMap<Key, T, HASH>& operator=(HashMap<Key, T, HASH>&&);
HashMap(const HashMap& other) { *this = other; }
HashMap& operator=(const HashMap& other)
{
m_hash_set = other.m_hash_set;
return *this;
}
HashMap(HashMap&& other) { *this = BAN::move(other); }
HashMap& operator=(HashMap&& other)
{
m_hash_set = BAN::move(other.m_hash_set);
return *this;
}
iterator begin() { return iterator(m_hash_set.begin()); }
iterator end() { return iterator(m_hash_set.end()); }
const_iterator begin() const { return const_iterator(m_hash_set.begin()); }
const_iterator end() const { return const_iterator(m_hash_set.end()); }
ErrorOr<iterator> insert(const Key& key, const T& value) { return emplace(key, value); }
ErrorOr<iterator> insert(const Key& key, T&& value) { return emplace(key, move(value)); }
@@ -57,263 +162,100 @@ namespace BAN
template<typename... Args>
ErrorOr<iterator> emplace(const Key& key, Args&&... args) requires is_constructible_v<T, Args...>
{ return emplace(Key(key), forward<Args>(args)...); }
{ return emplace(Key(key), BAN::forward<Args>(args)...); }
template<typename... Args>
ErrorOr<iterator> emplace(Key&&, Args&&...) requires is_constructible_v<T, Args...>;
ErrorOr<iterator> emplace(Key&& key, Args&&... args) requires is_constructible_v<T, Args...>
{
ASSERT(!contains(key));
auto it = TRY(m_hash_set.insert(Entry { BAN::move(key), T(BAN::forward<Args>(args)...) }));
return iterator(it);
}
template<typename... Args>
ErrorOr<iterator> emplace_or_assign(const Key& key, Args&&... args) requires is_constructible_v<T, Args...>
{ return emplace_or_assign(Key(key), forward<Args>(args)...); }
{ return emplace_or_assign(Key(key), BAN::forward<Args>(args)...); }
template<typename... Args>
ErrorOr<iterator> emplace_or_assign(Key&&, Args&&...) requires is_constructible_v<T, Args...>;
ErrorOr<iterator> emplace_or_assign(Key&& key, Args&&... args) requires is_constructible_v<T, Args...>
{
if (auto it = m_hash_set.find(key); it != m_hash_set.end())
{
const_cast<T&>(it->value) = T(BAN::forward<Args>(args)...);
return iterator(it);
}
iterator begin() { return iterator(m_buckets.end(), m_buckets.begin()); }
iterator end() { return iterator(m_buckets.end(), m_buckets.end()); }
const_iterator begin() const { return const_iterator(m_buckets.end(), m_buckets.begin()); }
const_iterator end() const { return const_iterator(m_buckets.end(), m_buckets.end()); }
auto it = TRY(m_hash_set.insert(Entry { BAN::move(key), T(BAN::forward<Args>(args)...) }));
return iterator(it);
}
ErrorOr<void> reserve(size_type);
template<detail::HashMapFindable<Key, HASH, COMP> U>
void remove(const U& key)
{
if (auto it = find(key); it != end())
remove(it);
}
void remove(const Key&);
void remove(iterator it);
void clear();
iterator remove(iterator it)
{
return iterator(m_hash_set.remove(it.m_iterator));
}
T& operator[](const Key&);
const T& operator[](const Key&) const;
template<detail::HashMapFindable<Key, HASH, COMP> U>
iterator find(const U& key)
{
return iterator(m_hash_set.find(key));
}
iterator find(const Key& key);
const_iterator find(const Key& key) const;
bool contains(const Key&) const;
template<detail::HashMapFindable<Key, HASH, COMP> U>
const_iterator find(const U& key) const
{
return const_iterator(m_hash_set.find(key));
}
bool empty() const;
size_type size() const;
void clear()
{
m_hash_set.clear();
}
ErrorOr<void> reserve(size_type size)
{
return m_hash_set.reserve(size);
}
template<detail::HashMapFindable<Key, HASH, COMP> U>
T& operator[](const U& key)
{
return find(key)->value;
}
template<detail::HashMapFindable<Key, HASH, COMP> U>
const T& operator[](const U& key) const
{
return find(key)->value;
}
template<detail::HashMapFindable<Key, HASH, COMP> U>
bool contains(const U& key) const
{
return find(key) != end();
}
size_type capacity() const
{
return m_hash_set.capacity();
}
size_type size() const
{
return m_hash_set.size();
}
bool empty() const
{
return m_hash_set.empty();
}
private:
ErrorOr<void> rebucket(size_type);
LinkedList<Entry>& get_bucket(const Key&);
const LinkedList<Entry>& get_bucket(const Key&) const;
Vector<LinkedList<Entry>>::iterator get_bucket_iterator(const Key&);
Vector<LinkedList<Entry>>::const_iterator get_bucket_iterator(const Key&) const;
private:
Vector<LinkedList<Entry>> m_buckets;
size_type m_size = 0;
friend iterator;
HashSet<Entry, EntryHash, EntryComp> m_hash_set;
};
template<typename Key, typename T, typename HASH>
HashMap<Key, T, HASH>::HashMap(const HashMap<Key, T, HASH>& other)
{
*this = other;
}
template<typename Key, typename T, typename HASH>
HashMap<Key, T, HASH>::HashMap(HashMap<Key, T, HASH>&& other)
{
*this = move(other);
}
template<typename Key, typename T, typename HASH>
HashMap<Key, T, HASH>::~HashMap()
{
clear();
}
template<typename Key, typename T, typename HASH>
HashMap<Key, T, HASH>& HashMap<Key, T, HASH>::operator=(const HashMap<Key, T, HASH>& other)
{
clear();
m_buckets = other.m_buckets;
m_size = other.m_size;
return *this;
}
template<typename Key, typename T, typename HASH>
HashMap<Key, T, HASH>& HashMap<Key, T, HASH>::operator=(HashMap<Key, T, HASH>&& other)
{
clear();
m_buckets = move(other.m_buckets);
m_size = other.m_size;
other.m_size = 0;
return *this;
}
template<typename Key, typename T, typename HASH>
template<typename... Args>
ErrorOr<typename HashMap<Key, T, HASH>::iterator> HashMap<Key, T, HASH>::emplace(Key&& key, Args&&... args) requires is_constructible_v<T, Args...>
{
ASSERT(!contains(key));
TRY(rebucket(m_size + 1));
auto bucket_it = get_bucket_iterator(key);
TRY(bucket_it->emplace_back(move(key), forward<Args>(args)...));
m_size++;
return iterator(m_buckets.end(), bucket_it, prev(bucket_it->end(), 1));
}
template<typename Key, typename T, typename HASH>
template<typename... Args>
ErrorOr<typename HashMap<Key, T, HASH>::iterator> HashMap<Key, T, HASH>::emplace_or_assign(Key&& key, Args&&... args) requires is_constructible_v<T, Args...>
{
if (empty())
return emplace(move(key), forward<Args>(args)...);
auto bucket_it = get_bucket_iterator(key);
for (auto entry_it = bucket_it->begin(); entry_it != bucket_it->end(); entry_it++)
{
if (entry_it->key != key)
continue;
entry_it->value = T(forward<Args>(args)...);
return iterator(m_buckets.end(), bucket_it, entry_it);
}
return emplace(move(key), forward<Args>(args)...);
}
template<typename Key, typename T, typename HASH>
ErrorOr<void> HashMap<Key, T, HASH>::reserve(size_type size)
{
TRY(rebucket(size));
return {};
}
template<typename Key, typename T, typename HASH>
void HashMap<Key, T, HASH>::remove(const Key& key)
{
auto it = find(key);
if (it != end())
remove(it);
}
template<typename Key, typename T, typename HASH>
void HashMap<Key, T, HASH>::remove(iterator it)
{
it.outer_current()->remove(it.inner_current());
m_size--;
}
template<typename Key, typename T, typename HASH>
void HashMap<Key, T, HASH>::clear()
{
m_buckets.clear();
m_size = 0;
}
template<typename Key, typename T, typename HASH>
T& HashMap<Key, T, HASH>::operator[](const Key& key)
{
ASSERT(!empty());
auto& bucket = get_bucket(key);
for (Entry& entry : bucket)
if (entry.key == key)
return entry.value;
ASSERT_NOT_REACHED();
}
template<typename Key, typename T, typename HASH>
const T& HashMap<Key, T, HASH>::operator[](const Key& key) const
{
ASSERT(!empty());
const auto& bucket = get_bucket(key);
for (const Entry& entry : bucket)
if (entry.key == key)
return entry.value;
ASSERT_NOT_REACHED();
}
template<typename Key, typename T, typename HASH>
typename HashMap<Key, T, HASH>::iterator HashMap<Key, T, HASH>::find(const Key& key)
{
if (empty())
return end();
auto bucket_it = get_bucket_iterator(key);
for (auto it = bucket_it->begin(); it != bucket_it->end(); it++)
if (it->key == key)
return iterator(m_buckets.end(), bucket_it, it);
return end();
}
template<typename Key, typename T, typename HASH>
typename HashMap<Key, T, HASH>::const_iterator HashMap<Key, T, HASH>::find(const Key& key) const
{
if (empty())
return end();
auto bucket_it = get_bucket_iterator(key);
for (auto it = bucket_it->begin(); it != bucket_it->end(); it++)
if (it->key == key)
return const_iterator(m_buckets.end(), bucket_it, it);
return end();
}
template<typename Key, typename T, typename HASH>
bool HashMap<Key, T, HASH>::contains(const Key& key) const
{
return find(key) != end();
}
template<typename Key, typename T, typename HASH>
bool HashMap<Key, T, HASH>::empty() const
{
return m_size == 0;
}
template<typename Key, typename T, typename HASH>
typename HashMap<Key, T, HASH>::size_type HashMap<Key, T, HASH>::size() const
{
return m_size;
}
template<typename Key, typename T, typename HASH>
ErrorOr<void> HashMap<Key, T, HASH>::rebucket(size_type bucket_count)
{
if (m_buckets.size() >= bucket_count)
return {};
size_type new_bucket_count = BAN::Math::max<size_type>(bucket_count, m_buckets.size() * 2);
Vector<LinkedList<Entry>> new_buckets;
TRY(new_buckets.resize(new_bucket_count));
for (auto& bucket : m_buckets)
{
for (auto it = bucket.begin(); it != bucket.end();)
{
size_type new_bucket_index = HASH()(it->key) % new_buckets.size();
it = bucket.move_element_to_other_linked_list(new_buckets[new_bucket_index], new_buckets[new_bucket_index].end(), it);
}
}
m_buckets = move(new_buckets);
return {};
}
template<typename Key, typename T, typename HASH>
LinkedList<typename HashMap<Key, T, HASH>::Entry>& HashMap<Key, T, HASH>::get_bucket(const Key& key)
{
return *get_bucket_iterator(key);
}
template<typename Key, typename T, typename HASH>
const LinkedList<typename HashMap<Key, T, HASH>::Entry>& HashMap<Key, T, HASH>::get_bucket(const Key& key) const
{
return *get_bucket_iterator(key);
}
template<typename Key, typename T, typename HASH>
Vector<LinkedList<typename HashMap<Key, T, HASH>::Entry>>::iterator HashMap<Key, T, HASH>::get_bucket_iterator(const Key& key)
{
ASSERT(!m_buckets.empty());
auto index = HASH()(key) % m_buckets.size();
return next(m_buckets.begin(), index);
}
template<typename Key, typename T, typename HASH>
Vector<LinkedList<typename HashMap<Key, T, HASH>::Entry>>::const_iterator HashMap<Key, T, HASH>::get_bucket_iterator(const Key& key) const
{
ASSERT(!m_buckets.empty());
auto index = HASH()(key) % m_buckets.size();
return next(m_buckets.begin(), index);
}
}
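The `rebucket` scheme above (never shrink; grow to at least double the current bucket count, then re-chain every entry by hash) can be sketched standalone. This is a simplified illustration using standard-library containers, not the BAN ones; the real implementation splices linked-list nodes instead of moving values:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <list>
#include <vector>

// Simplified sketch of HashMap::rebucket: never shrink, grow to at least
// double the current bucket count, and redistribute every entry by
// re-hashing its key against the new bucket count.
template<typename Key>
std::vector<std::list<Key>> rebucket(std::vector<std::list<Key>> buckets, std::size_t bucket_count)
{
    if (buckets.size() >= bucket_count)
        return buckets;
    const std::size_t new_count = std::max(bucket_count, buckets.size() * 2);
    std::vector<std::list<Key>> new_buckets(new_count);
    for (auto& bucket : buckets)
        for (auto& key : bucket)
            new_buckets[std::hash<Key>()(key) % new_count].push_back(std::move(key));
    return new_buckets;
}
```

Every key stays reachable under the new modulus, which is all `find` and `get_bucket_iterator` need.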


@@ -2,198 +2,358 @@
#include <BAN/Errors.h>
#include <BAN/Hash.h>
#include <BAN/Iterators.h>
#include <BAN/LinkedList.h>
#include <BAN/Math.h>
#include <BAN/Move.h>
#include <BAN/Vector.h>
#include <BAN/New.h>
namespace BAN
{
template<typename T, typename HASH = hash<T>>
class HashSet
template<typename HashSet, typename Bucket, typename T>
class HashSetIterator
{
public:
using value_type = T;
using size_type = size_t;
using iterator = IteratorDouble<T, Vector, LinkedList, HashSet>;
using const_iterator = ConstIteratorDouble<T, Vector, LinkedList, HashSet>;
HashSetIterator() = default;
const T& operator*() const
{
ASSERT(m_bucket);
return *m_bucket->element();
}
const T* operator->() const
{
ASSERT(m_bucket);
return m_bucket->element();
}
HashSetIterator& operator++()
{
ASSERT(m_bucket);
m_bucket++;
skip_to_valid_bucket();
return *this;
}
HashSetIterator operator++(int)
{
auto temp = *this;
++(*this);
return temp;
}
bool operator==(HashSetIterator other) const
{
return m_bucket == other.m_bucket;
}
bool operator!=(HashSetIterator other) const
{
return m_bucket != other.m_bucket;
}
private:
explicit HashSetIterator(Bucket* bucket)
: m_bucket(bucket)
{
if (m_bucket != nullptr)
skip_to_valid_bucket();
}
void skip_to_valid_bucket()
{
while (m_bucket->state != Bucket::USED && !m_bucket->end)
m_bucket++;
if (m_bucket->end)
m_bucket = nullptr;
}
private:
Bucket* m_bucket { nullptr };
friend HashSet;
};
namespace detail
{
template<typename T, typename U, typename HASH, typename COMP>
concept HashSetFindable = requires(const U& a, const T& b) { COMP()(a, b); HASH()(b); };
}
template<typename T, typename HASH = BAN::hash<T>, typename COMP = BAN::equal<T>>
class HashSet
{
private:
struct Bucket
{
static constexpr uint8_t UNUSED = 0;
static constexpr uint8_t USED = 1;
static constexpr uint8_t REMOVED = 2;
alignas(T) uint8_t storage[sizeof(T)];
hash_t hash;
uint8_t state : 2;
uint8_t chain_start : 1;
uint8_t end : 1;
T* element() { return reinterpret_cast<T*>(storage); }
const T* element() const { return reinterpret_cast<const T*>(storage); }
};
public:
using value_type = T;
using size_type = size_t;
using iterator = HashSetIterator<HashSet, Bucket, T>;
using const_iterator = HashSetIterator<HashSet, const Bucket, const T>;
public:
HashSet() = default;
HashSet(const HashSet&);
HashSet(HashSet&&);
~HashSet() { clear(); }
HashSet& operator=(const HashSet&);
HashSet& operator=(HashSet&&);
HashSet(const HashSet& other) { *this = other; }
HashSet& operator=(const HashSet& other)
{
clear();
ErrorOr<void> insert(const T&);
ErrorOr<void> insert(T&&);
void remove(const T&);
void clear();
MUST(reserve(other.size()));
for (auto& bucket : other)
MUST(insert(bucket));
ErrorOr<void> reserve(size_type);
return *this;
}
iterator begin() { return iterator(m_buckets.end(), m_buckets.begin()); }
iterator end() { return iterator(m_buckets.end(), m_buckets.end()); }
const_iterator begin() const { return const_iterator(m_buckets.end(), m_buckets.begin()); }
const_iterator end() const { return const_iterator(m_buckets.end(), m_buckets.end()); }
HashSet(HashSet&& other) { *this = BAN::move(other); }
HashSet& operator=(HashSet&& other)
{
clear();
bool contains(const T&) const;
m_buckets = other.m_buckets;
m_capacity = other.m_capacity;
m_size = other.m_size;
m_removed = other.m_removed;
size_type size() const;
bool empty() const;
other.m_buckets = nullptr;
other.m_capacity = 0;
other.m_size = 0;
other.m_removed = 0;
return *this;
}
iterator begin() { return iterator(m_buckets); }
iterator end() { return iterator(nullptr); }
const_iterator begin() const { return const_iterator(m_buckets); }
const_iterator end() const { return const_iterator(nullptr); }
ErrorOr<iterator> insert(const T& value)
{
return insert(T(value));
}
ErrorOr<iterator> insert(T&& value)
{
if (should_rehash_with_size(m_size + 1))
TRY(rehash(m_size * 2));
return insert_impl(BAN::move(value), HASH()(value));
}
template<detail::HashSetFindable<T, HASH, COMP> U>
void remove(const U& value)
{
if (auto it = find(value); it != end())
remove(it);
}
iterator remove(iterator it)
{
auto& bucket = *it.m_bucket;
bucket.element()->~T();
bucket.state = Bucket::REMOVED;
m_size--;
m_removed++;
return iterator(&bucket);
}
template<detail::HashSetFindable<T, HASH, COMP> U>
iterator find(const U& value)
{
return iterator(const_cast<Bucket*>(find_impl(value).m_bucket));
}
template<detail::HashSetFindable<T, HASH, COMP> U>
const_iterator find(const U& value) const
{
return find_impl(value);
}
void clear()
{
if (m_buckets == nullptr)
return;
for (size_type i = 0; i < m_capacity; i++)
if (m_buckets[i].state == Bucket::USED)
m_buckets[i].element()->~T();
BAN::deallocator(m_buckets);
m_buckets = nullptr;
m_capacity = 0;
m_size = 0;
m_removed = 0;
}
ErrorOr<void> reserve(size_type size)
{
if (should_rehash_with_size(size))
TRY(rehash(size * 2));
return {};
}
template<detail::HashSetFindable<T, HASH, COMP> U>
bool contains(const U& value) const
{
return find(value) != end();
}
size_type capacity() const
{
return m_capacity;
}
size_type size() const
{
return m_size;
}
bool empty() const
{
return m_size == 0;
}
private:
ErrorOr<void> rebucket(size_type);
LinkedList<T>& get_bucket(const T&);
const LinkedList<T>& get_bucket(const T&) const;
ErrorOr<void> rehash(size_type new_capacity)
{
new_capacity = BAN::Math::max<size_t>(16, BAN::Math::max(new_capacity, m_size + 1));
new_capacity = BAN::Math::round_up_to_power_of_two(new_capacity);
void* new_buckets = BAN::allocator((new_capacity + 1) * sizeof(Bucket));
if (new_buckets == nullptr)
return BAN::Error::from_errno(ENOMEM);
memset(new_buckets, 0, (new_capacity + 1) * sizeof(Bucket));
Bucket* old_buckets = m_buckets;
const size_type old_capacity = m_capacity;
m_buckets = static_cast<Bucket*>(new_buckets);
m_capacity = new_capacity;
m_size = 0;
m_removed = 0;
for (size_type i = 0; i < old_capacity; i++)
{
auto& old_bucket = old_buckets[i];
if (old_bucket.state != Bucket::USED)
continue;
insert_impl(BAN::move(*old_bucket.element()), old_bucket.hash);
old_bucket.element()->~T();
}
m_buckets[m_capacity].end = true;
BAN::deallocator(old_buckets);
return {};
}
template<detail::HashSetFindable<T, HASH, COMP> U>
const_iterator find_impl(const U& value) const
{
if (m_capacity == 0)
return end();
bool first = true;
const hash_t orig_hash = HASH()(value);
for (auto hash = orig_hash;; hash = get_next_hash_in_chain(hash, orig_hash), first = false)
{
auto& bucket = m_buckets[hash & (m_capacity - 1)];
if (bucket.state == Bucket::USED && bucket.hash == orig_hash && COMP()(*bucket.element(), value))
return const_iterator(&bucket);
if (bucket.state == Bucket::UNUSED)
return end();
if (!first && bucket.chain_start)
return end();
}
}
iterator insert_impl(T&& value, hash_t orig_hash)
{
ASSERT(!should_rehash_with_size(m_size + 1));
Bucket* target = nullptr;
bool first = true;
for (auto hash = orig_hash;; hash = get_next_hash_in_chain(hash, orig_hash), first = false)
{
auto& bucket = m_buckets[hash & (m_capacity - 1)];
if (!first)
bucket.chain_start = false;
if (bucket.state == Bucket::USED)
{
if (bucket.hash != orig_hash || !COMP()(*bucket.element(), value))
continue;
target = &bucket;
break;
}
if (target == nullptr)
target = &bucket;
if (bucket.state == Bucket::UNUSED)
break;
}
switch (target->state)
{
case Bucket::USED:
target->element()->~T();
break;
case Bucket::REMOVED:
m_removed--;
[[fallthrough]];
case Bucket::UNUSED:
m_size++;
break;
}
target->chain_start = first && target->state == Bucket::UNUSED;
target->hash = orig_hash;
target->state = Bucket::USED;
new (target->element()) T(BAN::move(value));
return iterator(target);
}
bool should_rehash_with_size(size_type size) const
{
if (m_capacity < 16)
return true;
if (size + m_removed > m_capacity / 4 * 3)
return true;
return false;
}
hash_t get_next_hash_in_chain(hash_t prev_hash, hash_t orig_hash) const
{
// TODO: does this even provide better performance than `return prev_hash + 1`
//       when using "good" hash functions?
return prev_hash * 1103515245 + (orig_hash | 1);
}
private:
Vector<LinkedList<T>> m_buckets;
size_type m_size = 0;
Bucket* m_buckets { nullptr };
size_type m_capacity { 0 };
size_type m_size { 0 };
size_type m_removed { 0 };
};
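The probe sequence produced by `get_next_hash_in_chain` can be illustrated in isolation. A sketch, assuming a 32-bit `hash_t` and a power-of-two capacity as the code above requires:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using hash_t = uint32_t; // assumption: hash_t is a 32-bit hash

// Sketch of the open-addressing probe step used by HashSet above: an
// LCG-style multiply plus a per-key odd increment, with the low bits
// masked by a power-of-two capacity to get the bucket index.
inline hash_t next_hash_in_chain(hash_t prev, hash_t orig)
{
    return prev * 1103515245u + (orig | 1u);
}

// First `count` bucket indices probed for a key with hash `orig`.
std::vector<uint32_t> probe_indices(hash_t orig, uint32_t capacity, std::size_t count)
{
    std::vector<uint32_t> out;
    hash_t h = orig;
    for (std::size_t i = 0; i < count; i++)
    {
        out.push_back(h & (capacity - 1)); // capacity must be a power of two
        h = next_hash_in_chain(h, orig);
    }
    return out;
}
```

Because the sequence is fully determined by the original hash, `find_impl` can retrace exactly the chain `insert_impl` walked.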
template<typename T, typename HASH>
HashSet<T, HASH>::HashSet(const HashSet& other)
: m_buckets(other.m_buckets)
, m_size(other.m_size)
{
}
template<typename T, typename HASH>
HashSet<T, HASH>::HashSet(HashSet&& other)
: m_buckets(move(other.m_buckets))
, m_size(other.m_size)
{
other.clear();
}
template<typename T, typename HASH>
HashSet<T, HASH>& HashSet<T, HASH>::operator=(const HashSet& other)
{
clear();
m_buckets = other.m_buckets;
m_size = other.m_size;
return *this;
}
template<typename T, typename HASH>
HashSet<T, HASH>& HashSet<T, HASH>::operator=(HashSet&& other)
{
clear();
m_buckets = move(other.m_buckets);
m_size = other.m_size;
other.clear();
return *this;
}
template<typename T, typename HASH>
ErrorOr<void> HashSet<T, HASH>::insert(const T& key)
{
return insert(move(T(key)));
}
template<typename T, typename HASH>
ErrorOr<void> HashSet<T, HASH>::insert(T&& key)
{
if (!empty() && get_bucket(key).contains(key))
return {};
TRY(rebucket(m_size + 1));
TRY(get_bucket(key).push_back(move(key)));
m_size++;
return {};
}
template<typename T, typename HASH>
void HashSet<T, HASH>::remove(const T& key)
{
if (empty()) return;
auto& bucket = get_bucket(key);
for (auto it = bucket.begin(); it != bucket.end(); it++)
{
if (*it == key)
{
bucket.remove(it);
m_size--;
break;
}
}
}
template<typename T, typename HASH>
void HashSet<T, HASH>::clear()
{
m_buckets.clear();
m_size = 0;
}
template<typename T, typename HASH>
ErrorOr<void> HashSet<T, HASH>::reserve(size_type size)
{
TRY(rebucket(size));
return {};
}
template<typename T, typename HASH>
bool HashSet<T, HASH>::contains(const T& key) const
{
if (empty()) return false;
return get_bucket(key).contains(key);
}
template<typename T, typename HASH>
typename HashSet<T, HASH>::size_type HashSet<T, HASH>::size() const
{
return m_size;
}
template<typename T, typename HASH>
bool HashSet<T, HASH>::empty() const
{
return m_size == 0;
}
template<typename T, typename HASH>
ErrorOr<void> HashSet<T, HASH>::rebucket(size_type bucket_count)
{
if (m_buckets.size() >= bucket_count)
return {};
size_type new_bucket_count = Math::max<size_type>(bucket_count, m_buckets.size() * 2);
Vector<LinkedList<T>> new_buckets;
if (new_buckets.resize(new_bucket_count).is_error())
return Error::from_errno(ENOMEM);
for (auto& bucket : m_buckets)
{
for (auto it = bucket.begin(); it != bucket.end();)
{
size_type new_bucket_index = HASH()(*it) % new_buckets.size();
it = bucket.move_element_to_other_linked_list(new_buckets[new_bucket_index], new_buckets[new_bucket_index].end(), it);
}
}
m_buckets = move(new_buckets);
return {};
}
template<typename T, typename HASH>
LinkedList<T>& HashSet<T, HASH>::get_bucket(const T& key)
{
ASSERT(!m_buckets.empty());
size_type index = HASH()(key) % m_buckets.size();
return m_buckets[index];
}
template<typename T, typename HASH>
const LinkedList<T>& HashSet<T, HASH>::get_bucket(const T& key) const
{
ASSERT(!m_buckets.empty());
size_type index = HASH()(key) % m_buckets.size();
return m_buckets[index];
}
}


@@ -0,0 +1,46 @@
#pragma once
#define _ban_count_args_impl(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, ...) _9
#define _ban_count_args(...) _ban_count_args_impl(__VA_ARGS__ __VA_OPT__(,) 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
#define _ban_concat_impl(a, b) a##b
#define _ban_concat(a, b) _ban_concat_impl(a, b)
#define _ban_stringify_impl(x) #x
#define _ban_stringify(x) _ban_stringify_impl(x)
#define _ban_fe_0(f)
#define _ban_fe_1(f, _0) f(0, _0)
#define _ban_fe_2(f, _0, _1) f(0, _0) f(1, _1)
#define _ban_fe_3(f, _0, _1, _2) f(0, _0) f(1, _1) f(2, _2)
#define _ban_fe_4(f, _0, _1, _2, _3) f(0, _0) f(1, _1) f(2, _2) f(3, _3)
#define _ban_fe_5(f, _0, _1, _2, _3, _4) f(0, _0) f(1, _1) f(2, _2) f(3, _3) f(4, _4)
#define _ban_fe_6(f, _0, _1, _2, _3, _4, _5) f(0, _0) f(1, _1) f(2, _2) f(3, _3) f(4, _4) f(5, _5)
#define _ban_fe_7(f, _0, _1, _2, _3, _4, _5, _6) f(0, _0) f(1, _1) f(2, _2) f(3, _3) f(4, _4) f(5, _5) f(6, _6)
#define _ban_fe_8(f, _0, _1, _2, _3, _4, _5, _6, _7) f(0, _0) f(1, _1) f(2, _2) f(3, _3) f(4, _4) f(5, _5) f(6, _6) f(7, _7)
#define _ban_fe_9(f, _0, _1, _2, _3, _4, _5, _6, _7, _8) f(0, _0) f(1, _1) f(2, _2) f(3, _3) f(4, _4) f(5, _5) f(6, _6) f(7, _7) f(8, _8)
#define _ban_for_each(f, ...) _ban_concat(_ban_fe_, _ban_count_args(__VA_ARGS__))(f __VA_OPT__(,) __VA_ARGS__)
#define _ban_fe_comma_0(f)
#define _ban_fe_comma_1(f, _0) f(0, _0)
#define _ban_fe_comma_2(f, _0, _1) f(0, _0), f(1, _1)
#define _ban_fe_comma_3(f, _0, _1, _2) f(0, _0), f(1, _1), f(2, _2)
#define _ban_fe_comma_4(f, _0, _1, _2, _3) f(0, _0), f(1, _1), f(2, _2), f(3, _3)
#define _ban_fe_comma_5(f, _0, _1, _2, _3, _4) f(0, _0), f(1, _1), f(2, _2), f(3, _3), f(4, _4)
#define _ban_fe_comma_6(f, _0, _1, _2, _3, _4, _5) f(0, _0), f(1, _1), f(2, _2), f(3, _3), f(4, _4), f(5, _5)
#define _ban_fe_comma_7(f, _0, _1, _2, _3, _4, _5, _6) f(0, _0), f(1, _1), f(2, _2), f(3, _3), f(4, _4), f(5, _5), f(6, _6)
#define _ban_fe_comma_8(f, _0, _1, _2, _3, _4, _5, _6, _7) f(0, _0), f(1, _1), f(2, _2), f(3, _3), f(4, _4), f(5, _5), f(6, _6), f(7, _7)
#define _ban_fe_comma_9(f, _0, _1, _2, _3, _4, _5, _6, _7, _8) f(0, _0), f(1, _1), f(2, _2), f(3, _3), f(4, _4), f(5, _5), f(6, _6), f(7, _7), f(8, _8)
#define _ban_for_each_comma(f, ...) _ban_concat(_ban_fe_comma_, _ban_count_args(__VA_ARGS__))(f __VA_OPT__(,) __VA_ARGS__)
#define _ban_get_0(a0, ...) a0
#define _ban_get_1(a0, a1, ...) a1
#define _ban_get_2(a0, a1, a2, ...) a2
#define _ban_get_3(a0, a1, a2, a3, ...) a3
#define _ban_get_4(a0, a1, a2, a3, a4, ...) a4
#define _ban_get_5(a0, a1, a2, a3, a4, a5, ...) a5
#define _ban_get_6(a0, a1, a2, a3, a4, a5, a6, ...) a6
#define _ban_get_7(a0, a1, a2, a3, a4, a5, a6, a7, ...) a7
#define _ban_get_8(a0, a1, a2, a3, a4, a5, a6, a7, a8, ...) a8
#define _ban_get_9(a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, ...) a9
#define _ban_get(n, ...) _ban_concat(_ban_get_, n)(__VA_ARGS__)
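A small usage sketch of the helpers above (reduced self-contained copies of the macros, so the expansion can be followed; requires `__VA_OPT__`, i.e. C++20):

```cpp
// Reduced copies of the argument-counting and for-each helpers above.
#define count_impl(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, ...) _9
#define count(...) count_impl(__VA_ARGS__ __VA_OPT__(,) 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)

#define fe_1(f, _0) f(0, _0)
#define fe_2(f, _0, _1) f(0, _0) f(1, _1)
#define fe_3(f, _0, _1, _2) f(0, _0) f(1, _1) f(2, _2)
#define concat_impl(a, b) a##b
#define concat(a, b) concat_impl(a, b)
#define for_each(f, ...) concat(fe_, count(__VA_ARGS__))(f __VA_OPT__(,) __VA_ARGS__)

// f receives (index, argument); here each invocation emits one array element.
#define as_pair(i, x) { i, x },
struct Pair { int index; int value; };
static const Pair pairs[] = { for_each(as_pair, 10, 20, 30) };
static const int arg_count = count(10, 20, 30);
```

`count(10, 20, 30)` pushes the literal arguments in front of the descending `9…0` list so that `_9` lands on `3`; `for_each` then pastes that count onto `fe_` to pick the right expansion.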


@@ -6,19 +6,6 @@
#include <float.h>
// This is ugly but my clangd does not like including
// intrinsic headers at all
#if !defined(__SSE__) || !defined(__SSE2__)
#pragma GCC push_options
#ifndef __SSE__
#pragma GCC target("sse")
#endif
#ifndef __SSE2__
#pragma GCC target("sse2")
#endif
#define BAN_MATH_POP_OPTIONS
#endif
namespace BAN::Math
{
@@ -49,12 +36,11 @@ namespace BAN::Math
template<integral T>
inline constexpr T gcd(T a, T b)
{
T t;
while (b)
{
t = b;
T temp = b;
b = a % b;
a = t;
a = temp;
}
return a;
}
@@ -79,25 +65,36 @@ namespace BAN::Math
return (x & (x - 1)) == 0;
}
template<BAN::integral T>
static constexpr bool will_multiplication_overflow(T a, T b)
template<integral T> requires(sizeof(T) <= 8)
inline constexpr T round_up_to_power_of_two(T x)
{
if (a == 0 || b == 0)
return false;
if ((a > 0) == (b > 0))
return a > BAN::numeric_limits<T>::max() / b;
else
return a < BAN::numeric_limits<T>::min() / b;
x--;
x |= x >> 1;
x |= x >> 2;
x |= x >> 4;
if constexpr(sizeof(T) >= 2)
x |= x >> 8;
if constexpr(sizeof(T) >= 4)
x |= x >> 16;
if constexpr(sizeof(T) >= 8)
x |= x >> 32;
return x + 1;
}
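The function above is the classic bit-smearing round-up; a standalone 64-bit copy for illustration (note the edge cases: 0 wraps back to 0, and inputs above 2^63 wrap as well):

```cpp
#include <cstdint>

// Propagate the highest set bit of (x - 1) into every lower position,
// then add one. round_up(1): x-- gives 0, smearing leaves 0, +1 gives 1.
constexpr uint64_t round_up_to_power_of_two(uint64_t x)
{
    x--;
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    x |= x >> 32;
    return x + 1;
}
```

The `if constexpr` guards in the template version above simply drop the shifts that would exceed the width of smaller integer types.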
template<BAN::integral T>
static constexpr bool will_addition_overflow(T a, T b)
template<integral T>
__attribute__((always_inline))
inline constexpr bool will_multiplication_overflow(T a, T b)
{
if (a > 0 && b > 0)
return a > BAN::numeric_limits<T>::max() - b;
if (a < 0 && b < 0)
return a < BAN::numeric_limits<T>::min() - b;
return false;
T dummy;
return __builtin_mul_overflow(a, b, &dummy);
}
template<integral T>
__attribute__((always_inline))
inline constexpr bool will_addition_overflow(T a, T b)
{
T dummy;
return __builtin_add_overflow(a, b, &dummy);
}
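The builtin-based checks above can be exercised on their own. A standalone copy: the `__builtin_*_overflow` family is a GCC/Clang extension that computes the result in infinite precision and returns whether it fits the destination type:

```cpp
#include <cstdint>

// __builtin_mul_overflow / __builtin_add_overflow write the wrapped result
// into the destination and return true iff the mathematical result does
// not fit in T (GCC/Clang builtins).
template<typename T>
bool will_multiplication_overflow(T a, T b)
{
    T dummy;
    return __builtin_mul_overflow(a, b, &dummy);
}

template<typename T>
bool will_addition_overflow(T a, T b)
{
    T dummy;
    return __builtin_add_overflow(a, b, &dummy);
}
```

This replaces the sign-case analysis of the removed versions with a single instruction pair (the CPU's overflow flag) on typical targets.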
template<typename T>
@@ -111,6 +108,19 @@ namespace BAN::Math
return sizeof(T) * 8 - __builtin_clzll(x) - 1;
}
// This is ugly but my clangd does not like including
// intrinsic headers at all
#if !defined(__SSE__) || !defined(__SSE2__)
#pragma GCC push_options
#ifndef __SSE__
#pragma GCC target("sse")
#endif
#ifndef __SSE2__
#pragma GCC target("sse2")
#endif
#define BAN_MATH_POP_OPTIONS
#endif
template<floating_point T>
inline constexpr T floor(T x)
{
@@ -172,7 +182,23 @@ namespace BAN::Math
"jne 1b;"
: "+t"(a)
: "u"(b)
: "ax"
: "ax", "cc"
);
return a;
}
template<floating_point T>
inline constexpr T remainder(T a, T b)
{
asm(
"1:"
"fprem1;"
"fnstsw %%ax;"
"testb $4, %%ah;"
"jne 1b;"
: "+t"(a)
: "u"(b)
: "ax", "cc"
);
return a;
}
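The only difference between the two x87 loops above is the rounding of the implied quotient: `fprem` truncates (fmod semantics) while `fprem1` rounds to nearest (IEEE remainder). The standard-library equivalents show the distinction:

```cpp
#include <cmath>

// fmod truncates the quotient toward zero; remainder rounds it to the
// nearest integer, so its result can be negative for positive inputs.
// 5.5 / 2.0 = 2.75: truncated quotient 2 -> 5.5 - 4.0 =  1.5,
//                   nearest quotient 3  -> 5.5 - 6.0 = -0.5.
inline double fmod_demo()      { return std::fmod(5.5, 2.0); }
inline double remainder_demo() { return std::remainder(5.5, 2.0); }
```

Both loops spin until the FPU status word's C2 bit clears, i.e. until the partial reduction is complete.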
@@ -447,9 +473,9 @@ namespace BAN::Math
return sqrt<T>(x * x + y * y);
}
}
#ifdef BAN_MATH_POP_OPTIONS
#undef BAN_MATH_POP_OPTIONS
#pragma GCC pop_options
#endif
}


@@ -2,9 +2,9 @@
#include <BAN/Atomic.h>
#include <BAN/Errors.h>
#include <BAN/Hash.h>
#include <BAN/Move.h>
#include <BAN/NoCopyMove.h>
#include <stdint.h>
namespace BAN
{
@@ -129,14 +129,9 @@ namespace BAN
return *this;
}
T* ptr() { ASSERT(!empty()); return m_pointer; }
const T* ptr() const { ASSERT(!empty()); return m_pointer; }
T& operator*() { return *ptr(); }
const T& operator*() const { return *ptr(); }
T* operator->() { return ptr(); }
const T* operator->() const { return ptr(); }
T* ptr() const { return m_pointer; }
T& operator*() const { ASSERT(!empty()); return *ptr(); }
T* operator->() const { ASSERT(!empty()); return ptr(); }
bool operator==(RefPtr other) const { return m_pointer == other.m_pointer; }
bool operator!=(RefPtr other) const { return m_pointer != other.m_pointer; }
@@ -158,4 +153,13 @@ namespace BAN
friend class RefPtr;
};
template<typename T>
struct hash<RefPtr<T>>
{
constexpr hash_t operator()(const RefPtr<T>& ptr) const
{
return hash<T*>()(ptr.ptr());
}
};
}


@@ -1,6 +1,7 @@
#pragma once
#include <BAN/Errors.h>
#include <BAN/Hash.h>
#include <BAN/NoCopyMove.h>
namespace BAN
@@ -53,32 +54,12 @@ namespace BAN
return *this;
}
T& operator*()
{
ASSERT(m_pointer);
return *m_pointer;
}
T* ptr() const { return m_pointer; }
T& operator*() const { ASSERT(!empty()); return *ptr(); }
T* operator->() const { ASSERT(!empty()); return ptr(); }
const T& operator*() const
{
ASSERT(m_pointer);
return *m_pointer;
}
T* operator->()
{
ASSERT(m_pointer);
return m_pointer;
}
const T* operator->() const
{
ASSERT(m_pointer);
return m_pointer;
}
T* ptr() { return m_pointer; }
const T* ptr() const { return m_pointer; }
bool empty() const { return m_pointer == nullptr; }
explicit operator bool() const { return m_pointer; }
void clear()
{
@@ -87,8 +68,6 @@ namespace BAN
m_pointer = nullptr;
}
operator bool() const { return m_pointer != nullptr; }
private:
T* m_pointer = nullptr;
@@ -96,4 +75,13 @@ namespace BAN
friend class UniqPtr;
};
template<typename T>
struct hash<UniqPtr<T>>
{
constexpr hash_t operator()(const UniqPtr<T>& ptr) const
{
return hash<T*>()(ptr.ptr());
}
};
}

Binary file not shown.


@@ -139,6 +139,7 @@ if("${BANAN_ARCH}" STREQUAL "x86_64")
arch/x86_64/Signal.S
arch/x86_64/Syscall.S
arch/x86_64/Thread.S
arch/x86_64/User.S
arch/x86_64/Yield.S
)
elseif("${BANAN_ARCH}" STREQUAL "i686")
@@ -150,6 +151,7 @@ elseif("${BANAN_ARCH}" STREQUAL "i686")
arch/i686/Signal.S
arch/i686/Syscall.S
arch/i686/Thread.S
arch/i686/User.S
arch/i686/Yield.S
)
else()
@@ -166,10 +168,7 @@ set(BAN_SOURCES
set(KLIBC_SOURCES
klibc/ctype.cpp
klibc/string.cpp
# Ehhh don't do this but for now libc uses the same stuff kernel can use
# This won't work after libc starts using sse implementations tho
../userspace/libraries/LibC/arch/${BANAN_ARCH}/string.S
klibc/arch/${BANAN_ARCH}/string.S
)
set(LIBDEFLATE_SOURCE


@@ -16,11 +16,18 @@ extern uint8_t g_kernel_writable_end[];
extern uint8_t g_userspace_start[];
extern uint8_t g_userspace_end[];
extern uint64_t g_boot_fast_page_pt[];
namespace Kernel
{
SpinLock PageTable::s_fast_page_lock;
constexpr uint64_t s_page_flag_mask = 0x8000000000000FFF;
constexpr uint64_t s_page_addr_mask = ~s_page_flag_mask;
static bool s_is_initialized = false;
static PageTable* s_kernel = nullptr;
static bool s_has_nxe = false;
static bool s_has_pge = false;
@@ -28,6 +35,28 @@ namespace Kernel
static paddr_t s_global_pdpte = 0;
static uint64_t* s_fast_page_pt { nullptr };
static uint64_t* allocate_zeroed_page_aligned_page()
{
void* page = kmalloc(PAGE_SIZE, PAGE_SIZE, true);
ASSERT(page);
memset(page, 0, PAGE_SIZE);
return (uint64_t*)page;
}
template<typename T>
static paddr_t V2P(const T vaddr)
{
return (vaddr_t)vaddr - KERNEL_OFFSET + g_boot_info.kernel_paddr;
}
template<typename T>
static uint64_t* P2V(const T paddr)
{
return reinterpret_cast<uint64_t*>(reinterpret_cast<paddr_t>(paddr) - g_boot_info.kernel_paddr + KERNEL_OFFSET);
}
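`V2P`/`P2V` above rely on the kernel image being mapped as one contiguous block at a fixed virtual offset, so translation is pure arithmetic. A sketch with assumed constants (`KERNEL_OFFSET` matches the `static_assert` later in this file; the physical load address here is a made-up example, the real value comes from `g_boot_info`):

```cpp
#include <cstdint>

// Assumed values for illustration only.
constexpr uint64_t KERNEL_OFFSET = 0xC0000000; // virtual base of the kernel
constexpr uint64_t kernel_paddr  = 0x00100000; // hypothetical physical load address

// With a linear mapping, virtual <-> physical is a constant shift.
constexpr uint64_t v2p(uint64_t vaddr) { return vaddr - KERNEL_OFFSET + kernel_paddr; }
constexpr uint64_t p2v(uint64_t paddr) { return paddr - kernel_paddr + KERNEL_OFFSET; }
```

The two helpers are exact inverses, which is what lets the paging code walk physical table pointers through their virtual aliases.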
static inline PageTable::flags_t parse_flags(uint64_t entry)
{
using Flags = PageTable::Flags;
@@ -46,31 +75,22 @@ namespace Kernel
return result;
}
void PageTable::initialize_pre_heap()
void PageTable::initialize_fast_page()
{
s_fast_page_pt = g_boot_fast_page_pt;
}
static void detect_cpu_features()
{
if (CPUID::has_nxe())
s_has_nxe = true;
if (CPUID::has_pge())
s_has_pge = true;
if (CPUID::has_pat())
s_has_pat = true;
ASSERT(s_kernel == nullptr);
s_kernel = new PageTable();
ASSERT(s_kernel);
s_kernel->initialize_kernel();
s_kernel->initial_load();
}
void PageTable::initialize_post_heap()
{
// NOTE: this is no-op as our 32 bit target does not use hhdm
}
void PageTable::initial_load()
void PageTable::enable_cpu_features()
{
if (s_has_nxe)
{
@@ -111,8 +131,56 @@ namespace Kernel
"movl %%eax, %%cr0;"
::: "rax"
);
}
load();
void PageTable::initialize_and_load()
{
detect_cpu_features();
enable_cpu_features();
ASSERT(s_kernel == nullptr);
s_kernel = new PageTable();
ASSERT(s_kernel);
auto* pdpt = allocate_zeroed_page_aligned_page();
ASSERT(pdpt);
s_kernel->m_highest_paging_struct = V2P(pdpt);
s_kernel->map_kernel_memory();
PageTable::with_fast_page(s_kernel->m_highest_paging_struct, [] {
s_global_pdpte = PageTable::fast_page_as_sized<paddr_t>(3);
});
// update fast page pt
{
constexpr vaddr_t vaddr = fast_page();
constexpr uint16_t pdpte = (vaddr >> 30) & 0x1FF;
constexpr uint16_t pde = (vaddr >> 21) & 0x1FF;
const auto get_or_allocate_entry =
[](paddr_t table_paddr, uint16_t entry, uint64_t flags)
{
uint64_t* table = P2V(table_paddr);
if (!(table[entry] & Flags::Present))
{
auto* vaddr = allocate_zeroed_page_aligned_page();
ASSERT(vaddr);
table[entry] = V2P(vaddr);
}
table[entry] |= flags;
return table[entry] & s_page_addr_mask;
};
const paddr_t pdpt = s_kernel->m_highest_paging_struct;
const paddr_t pd = get_or_allocate_entry(pdpt, pdpte, Flags::Present);
s_fast_page_pt = P2V(get_or_allocate_entry(pd, pde, Flags::ReadWrite | Flags::Present));
}
s_kernel->load();
}
PageTable& PageTable::kernel()
@@ -126,40 +194,12 @@ namespace Kernel
return true;
}
static uint64_t* allocate_zeroed_page_aligned_page()
void PageTable::map_kernel_memory()
{
void* page = kmalloc(PAGE_SIZE, PAGE_SIZE, true);
ASSERT(page);
memset(page, 0, PAGE_SIZE);
return (uint64_t*)page;
}
template<typename T>
static paddr_t V2P(const T vaddr)
{
return (vaddr_t)vaddr - KERNEL_OFFSET + g_boot_info.kernel_paddr;
}
template<typename T>
static vaddr_t P2V(const T paddr)
{
return (paddr_t)paddr - g_boot_info.kernel_paddr + KERNEL_OFFSET;
}
void PageTable::initialize_kernel()
{
ASSERT(s_global_pdpte == 0);
s_global_pdpte = V2P(allocate_zeroed_page_aligned_page());
map_kernel_memory();
prepare_fast_page();
// Map (phys_kernel_start -> phys_kernel_end) to (virt_kernel_start -> virt_kernel_end)
ASSERT((vaddr_t)g_kernel_start % PAGE_SIZE == 0);
map_range_at(
V2P(g_kernel_start),
(vaddr_t)g_kernel_start,
reinterpret_cast<vaddr_t>(g_kernel_start),
g_kernel_end - g_kernel_start,
Flags::Present
);
@@ -167,7 +207,7 @@ namespace Kernel
// Map executable kernel memory as executable
map_range_at(
V2P(g_kernel_execute_start),
(vaddr_t)g_kernel_execute_start,
reinterpret_cast<vaddr_t>(g_kernel_execute_start),
g_kernel_execute_end - g_kernel_execute_start,
Flags::Execute | Flags::Present
);
@@ -175,7 +215,7 @@ namespace Kernel
// Map writable kernel memory as writable
map_range_at(
V2P(g_kernel_writable_start),
(vaddr_t)g_kernel_writable_start,
reinterpret_cast<vaddr_t>(g_kernel_writable_start),
g_kernel_writable_end - g_kernel_writable_start,
Flags::ReadWrite | Flags::Present
);
@@ -183,65 +223,34 @@ namespace Kernel
// Map userspace memory
map_range_at(
V2P(g_userspace_start),
(vaddr_t)g_userspace_start,
reinterpret_cast<vaddr_t>(g_userspace_start),
g_userspace_end - g_userspace_start,
Flags::Execute | Flags::UserSupervisor | Flags::Present
);
}
void PageTable::prepare_fast_page()
{
constexpr uint64_t pdpte = (fast_page() >> 30) & 0x1FF;
constexpr uint64_t pde = (fast_page() >> 21) & 0x1FF;
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(m_highest_paging_struct));
ASSERT(pdpt[pdpte] & Flags::Present);
uint64_t* pd = reinterpret_cast<uint64_t*>(P2V(pdpt[pdpte]) & PAGE_ADDR_MASK);
ASSERT(!(pd[pde] & Flags::Present));
pd[pde] = V2P(allocate_zeroed_page_aligned_page()) | Flags::ReadWrite | Flags::Present;
}
void PageTable::map_fast_page(paddr_t paddr)
{
ASSERT(s_kernel);
ASSERT(paddr);
ASSERT(paddr % PAGE_SIZE == 0);
ASSERT(paddr && paddr % PAGE_SIZE == 0);
ASSERT(s_fast_page_pt);
ASSERT(s_fast_page_lock.current_processor_has_lock());
constexpr uint64_t pdpte = (fast_page() >> 30) & 0x1FF;
constexpr uint64_t pde = (fast_page() >> 21) & 0x1FF;
constexpr uint64_t pte = (fast_page() >> 12) & 0x1FF;
ASSERT(!(*s_fast_page_pt & Flags::Present));
s_fast_page_pt[0] = paddr | Flags::ReadWrite | Flags::Present;
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(s_kernel->m_highest_paging_struct));
uint64_t* pd = reinterpret_cast<uint64_t*>(P2V(pdpt[pdpte] & PAGE_ADDR_MASK));
uint64_t* pt = reinterpret_cast<uint64_t*>(P2V(pd[pde] & PAGE_ADDR_MASK));
ASSERT(!(pt[pte] & Flags::Present));
pt[pte] = paddr | Flags::ReadWrite | Flags::Present;
asm volatile("invlpg (%0)" :: "r"(fast_page()) : "memory");
asm volatile("invlpg (%0)" :: "r"(fast_page()));
}
void PageTable::unmap_fast_page()
{
ASSERT(s_kernel);
ASSERT(s_fast_page_pt);
ASSERT(s_fast_page_lock.current_processor_has_lock());
constexpr uint64_t pdpte = (fast_page() >> 30) & 0x1FF;
constexpr uint64_t pde = (fast_page() >> 21) & 0x1FF;
constexpr uint64_t pte = (fast_page() >> 12) & 0x1FF;
ASSERT((*s_fast_page_pt & Flags::Present));
s_fast_page_pt[0] = 0;
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(s_kernel->m_highest_paging_struct));
uint64_t* pd = reinterpret_cast<uint64_t*>(P2V(pdpt[pdpte] & PAGE_ADDR_MASK));
uint64_t* pt = reinterpret_cast<uint64_t*>(P2V(pd[pde] & PAGE_ADDR_MASK));
ASSERT(pt[pte] & Flags::Present);
pt[pte] = 0;
asm volatile("invlpg (%0)" :: "r"(fast_page()) : "memory");
asm volatile("invlpg (%0)" :: "r"(fast_page()));
}
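The `pdpte`/`pde`/`pte` constants above all follow the same 9-bit field extraction; a standalone sketch of the split (PAE/long-mode style layout: 512 entries per table, 4 KiB pages):

```cpp
#include <cstdint>

// Bits 30..38 select the PDPT entry, 21..29 the PD entry, 12..20 the PT
// entry; the low 12 bits are the offset within the 4 KiB page.
struct PageIndices { uint16_t pdpte, pde, pte; };

constexpr PageIndices split_vaddr(uint64_t vaddr)
{
    return PageIndices {
        static_cast<uint16_t>((vaddr >> 30) & 0x1FF),
        static_cast<uint16_t>((vaddr >> 21) & 0x1FF),
        static_cast<uint16_t>((vaddr >> 12) & 0x1FF),
    };
}
```

Because `fast_page()` is a compile-time constant, all three indices fold to constants in the functions above.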
BAN::ErrorOr<PageTable*> PageTable::create_userspace()
@@ -250,25 +259,23 @@ namespace Kernel
PageTable* page_table = new PageTable;
if (page_table == nullptr)
return BAN::Error::from_errno(ENOMEM);
page_table->map_kernel_memory();
return page_table;
}
void PageTable::map_kernel_memory()
{
ASSERT(s_kernel);
ASSERT(s_global_pdpte);
uint64_t* pdpt = allocate_zeroed_page_aligned_page();
if (pdpt == nullptr)
{
delete page_table;
return BAN::Error::from_errno(ENOMEM);
}
ASSERT(m_highest_paging_struct == 0);
m_highest_paging_struct = V2P(kmalloc(32, 32, true));
ASSERT(m_highest_paging_struct);
page_table->m_highest_paging_struct = V2P(pdpt);
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(m_highest_paging_struct));
pdpt[0] = 0;
pdpt[1] = 0;
pdpt[2] = 0;
pdpt[3] = s_global_pdpte | Flags::Present;
static_assert(KERNEL_OFFSET == 0xC0000000);
return page_table;
}
PageTable::~PageTable()
@@ -276,18 +283,17 @@ namespace Kernel
if (m_highest_paging_struct == 0)
return;
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(m_highest_paging_struct));
uint64_t* pdpt = P2V(m_highest_paging_struct);
for (uint32_t pdpte = 0; pdpte < 3; pdpte++)
{
if (!(pdpt[pdpte] & Flags::Present))
continue;
uint64_t* pd = reinterpret_cast<uint64_t*>(P2V(pdpt[pdpte] & PAGE_ADDR_MASK));
uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
for (uint32_t pde = 0; pde < 512; pde++)
{
if (!(pd[pde] & Flags::Present))
continue;
kfree(reinterpret_cast<uint64_t*>(P2V(pd[pde] & PAGE_ADDR_MASK)));
kfree(P2V(pd[pde] & s_page_addr_mask));
}
kfree(pd);
}
@@ -298,15 +304,43 @@ namespace Kernel
{
SpinLockGuard _(m_lock);
ASSERT(m_highest_paging_struct < 0x100000000);
const uint32_t pdpt_lo = m_highest_paging_struct;
asm volatile("movl %0, %%cr3" :: "r"(pdpt_lo));
asm volatile("movl %0, %%cr3" :: "r"(static_cast<uint32_t>(m_highest_paging_struct)));
Processor::set_current_page_table(this);
}
void PageTable::invalidate(vaddr_t vaddr, bool send_smp_message)
void PageTable::invalidate_range(vaddr_t vaddr, size_t pages, bool send_smp_message)
{
ASSERT(vaddr % PAGE_SIZE == 0);
asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
const bool is_userspace = (vaddr < KERNEL_OFFSET);
if (is_userspace && this != &PageTable::current())
;
else if (pages <= 32 || !s_is_initialized)
{
for (size_t i = 0; i < pages; i++, vaddr += PAGE_SIZE)
asm volatile("invlpg (%0)" :: "r"(vaddr));
}
else if (is_userspace || !s_has_pge)
{
asm volatile("movl %0, %%cr3" :: "r"(static_cast<uint32_t>(m_highest_paging_struct)));
}
else
{
asm volatile(
"movl %%cr4, %%eax;"
"andl $~0x80, %%eax;"
"movl %%eax, %%cr4;"
"movl %0, %%cr3;"
"orl $0x80, %%eax;"
"movl %%eax, %%cr4;"
:
: "r"(static_cast<uint32_t>(m_highest_paging_struct))
: "eax"
);
}
if (send_smp_message)
{
@@ -314,14 +348,14 @@ namespace Kernel
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = 1,
.page_count = pages,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
}
});
}
}
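The rewritten `invalidate_range` above picks one of three TLB-flush strategies depending on how many pages are affected and whether global pages (PGE) are in use. The decision logic can be sketched as a pure function — names, the `32`-page threshold semantics, and the enum are illustrative assumptions, not the kernel's API; the real code issues `invlpg`, reloads CR3, or toggles CR4.PGE instead of returning a value:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical model of the strategy selection in invalidate_range().
enum class FlushStrategy { None, PerPageInvlpg, ReloadCr3, TogglePge };

FlushStrategy pick_flush_strategy(bool is_userspace, bool is_current_table,
                                  size_t pages, bool initialized, bool has_pge)
{
    // Userspace mappings of a table not loaded on this CPU: nothing to
    // flush locally (the SMP message handles other processors).
    if (is_userspace && !is_current_table)
        return FlushStrategy::None;
    // Few pages (or early boot): per-page invlpg is cheapest.
    if (pages <= 32 || !initialized)
        return FlushStrategy::PerPageInvlpg;
    // A CR3 reload drops only non-global entries, which suffices for
    // userspace ranges, or everywhere when PGE is unsupported.
    if (is_userspace || !has_pge)
        return FlushStrategy::ReloadCr3;
    // Kernel range with PGE enabled: toggle CR4.PGE so the reload also
    // evicts global entries.
    return FlushStrategy::TogglePge;
}
```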
void PageTable::unmap_page(vaddr_t vaddr, bool send_smp_message)
void PageTable::unmap_page(vaddr_t vaddr, bool invalidate)
{
ASSERT(vaddr);
ASSERT(vaddr % PAGE_SIZE == 0);
@@ -340,16 +374,16 @@ namespace Kernel
if (is_page_free(vaddr))
Kernel::panic("trying to unmap unmapped page 0x{H}", vaddr);
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(m_highest_paging_struct));
uint64_t* pd = reinterpret_cast<uint64_t*>(P2V(pdpt[pdpte] & PAGE_ADDR_MASK));
uint64_t* pt = reinterpret_cast<uint64_t*>(P2V(pd[pde] & PAGE_ADDR_MASK));
uint64_t* pdpt = P2V(m_highest_paging_struct);
uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
const paddr_t old_paddr = pt[pte] & PAGE_ADDR_MASK;
const paddr_t old_paddr = pt[pte] & s_page_addr_mask;
pt[pte] = 0;
if (old_paddr != 0)
invalidate(vaddr, send_smp_message);
if (invalidate && old_paddr != 0)
invalidate_page(vaddr, true);
}
void PageTable::unmap_range(vaddr_t vaddr, size_t size)
@@ -361,18 +395,10 @@ namespace Kernel
SpinLockGuard _(m_lock);
for (vaddr_t page = 0; page < page_count; page++)
unmap_page(vaddr + page * PAGE_SIZE, false);
Processor::broadcast_smp_message({
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = page_count,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
}
});
invalidate_range(vaddr, page_count, true);
}
void PageTable::map_page_at(paddr_t paddr, vaddr_t vaddr, flags_t flags, MemoryType memory_type, bool send_smp_message)
void PageTable::map_page_at(paddr_t paddr, vaddr_t vaddr, flags_t flags, MemoryType memory_type, bool invalidate)
{
ASSERT(vaddr);
ASSERT(vaddr != fast_page());
@@ -407,11 +433,11 @@ namespace Kernel
SpinLockGuard _(m_lock);
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(m_highest_paging_struct));
uint64_t* pdpt = P2V(m_highest_paging_struct);
if (!(pdpt[pdpte] & Flags::Present))
pdpt[pdpte] = V2P(allocate_zeroed_page_aligned_page()) | Flags::Present;
uint64_t* pd = reinterpret_cast<uint64_t*>(P2V(pdpt[pdpte] & PAGE_ADDR_MASK));
uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
if ((pd[pde] & uwr_flags) != uwr_flags)
{
if (!(pd[pde] & Flags::Present))
@@ -422,14 +448,14 @@ namespace Kernel
if (!(flags & Flags::Present))
uwr_flags &= ~Flags::Present;
uint64_t* pt = reinterpret_cast<uint64_t*>(P2V(pd[pde] & PAGE_ADDR_MASK));
uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
const paddr_t old_paddr = pt[pte] & PAGE_ADDR_MASK;
const paddr_t old_paddr = pt[pte] & s_page_addr_mask;
pt[pte] = paddr | uwr_flags | extra_flags;
if (old_paddr != 0)
invalidate(vaddr, send_smp_message);
if (invalidate && old_paddr != 0)
invalidate_page(vaddr, true);
}
void PageTable::map_range_at(paddr_t paddr, vaddr_t vaddr, size_t size, flags_t flags, MemoryType memory_type)
@@ -443,15 +469,49 @@ namespace Kernel
SpinLockGuard _(m_lock);
for (size_t page = 0; page < page_count; page++)
map_page_at(paddr + page * PAGE_SIZE, vaddr + page * PAGE_SIZE, flags, memory_type, false);
invalidate_range(vaddr, page_count, true);
}
Processor::broadcast_smp_message({
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = page_count,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
void PageTable::remove_writable_from_range(vaddr_t vaddr, size_t size)
{
ASSERT(vaddr);
ASSERT(vaddr % PAGE_SIZE == 0);
uint32_t pdpte = (vaddr >> 30) & 0x1FF;
uint32_t pde = (vaddr >> 21) & 0x1FF;
uint32_t pte = (vaddr >> 12) & 0x1FF;
const uint32_t e_pdpte = ((vaddr + size - 1) >> 30) & 0x1FF;
const uint32_t e_pde = ((vaddr + size - 1) >> 21) & 0x1FF;
const uint32_t e_pte = ((vaddr + size - 1) >> 12) & 0x1FF;
SpinLockGuard _(m_lock);
const uint64_t* pdpt = P2V(m_highest_paging_struct);
for (; pdpte <= e_pdpte; pdpte++)
{
if (!(pdpt[pdpte] & Flags::Present))
continue;
const uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
for (; pde < 512; pde++)
{
if (pdpte == e_pdpte && pde > e_pde)
break;
if (!(pd[pde] & Flags::ReadWrite))
continue;
uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
for (; pte < 512; pte++)
{
if (pdpte == e_pdpte && pde == e_pde && pte > e_pte)
break;
pt[pte] &= ~static_cast<uint64_t>(Flags::ReadWrite);
}
pte = 0;
}
});
pde = 0;
}
invalidate_range(vaddr, size / PAGE_SIZE, true);
}
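The shift-and-mask expressions used throughout `remove_writable_from_range` split a virtual address into its paging-structure indices. A minimal standalone sketch of the same arithmetic (the struct and function names are illustrative, not kernel symbols):

```cpp
#include <cassert>
#include <cstdint>

// Index extraction matching the shifts above: with PAE paging each PDPT
// entry covers 1 GiB, each PD entry 2 MiB, each PT entry 4 KiB.
struct PageIndices { uint32_t pdpte, pde, pte; };

PageIndices split_vaddr(uint32_t vaddr)
{
    return PageIndices {
        (vaddr >> 30) & 0x1FF, // page-directory-pointer-table index
        (vaddr >> 21) & 0x1FF, // page-directory index
        (vaddr >> 12) & 0x1FF, // page-table index
    };
}
```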
uint64_t PageTable::get_page_data(vaddr_t vaddr) const
@@ -464,15 +524,15 @@ namespace Kernel
SpinLockGuard _(m_lock);
uint64_t* pdpt = (uint64_t*)P2V(m_highest_paging_struct);
const uint64_t* pdpt = P2V(m_highest_paging_struct);
if (!(pdpt[pdpte] & Flags::Present))
return 0;
uint64_t* pd = (uint64_t*)P2V(pdpt[pdpte] & PAGE_ADDR_MASK);
const uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
if (!(pd[pde] & Flags::Present))
return 0;
uint64_t* pt = (uint64_t*)P2V(pd[pde] & PAGE_ADDR_MASK);
const uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
if (!(pt[pte] & Flags::Used))
return 0;
@@ -486,8 +546,7 @@ namespace Kernel
paddr_t PageTable::physical_address_of(vaddr_t vaddr) const
{
uint64_t page_data = get_page_data(vaddr);
return (page_data & PAGE_ADDR_MASK) & ~(1ull << 63);
return get_page_data(vaddr) & s_page_addr_mask;
}
bool PageTable::is_page_free(vaddr_t vaddr) const
@@ -529,14 +588,8 @@ namespace Kernel
return false;
for (size_t offset = 0; offset < bytes; offset += PAGE_SIZE)
reserve_page(vaddr + offset, true, false);
Processor::broadcast_smp_message({
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = bytes / PAGE_SIZE,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
}
});
invalidate_range(vaddr, bytes / PAGE_SIZE, true);
return true;
}
@@ -549,48 +602,47 @@ namespace Kernel
if (size_t rem = last_address % PAGE_SIZE)
last_address -= rem;
const uint32_t s_pdpte = (first_address >> 30) & 0x1FF;
const uint32_t s_pde = (first_address >> 21) & 0x1FF;
const uint32_t s_pte = (first_address >> 12) & 0x1FF;
uint32_t pdpte = (first_address >> 30) & 0x1FF;
uint32_t pde = (first_address >> 21) & 0x1FF;
uint32_t pte = (first_address >> 12) & 0x1FF;
const uint32_t e_pdpte = (last_address >> 30) & 0x1FF;
const uint32_t e_pde = (last_address >> 21) & 0x1FF;
const uint32_t e_pte = (last_address >> 12) & 0x1FF;
const uint32_t e_pdpte = ((last_address - 1) >> 30) & 0x1FF;
const uint32_t e_pde = ((last_address - 1) >> 21) & 0x1FF;
const uint32_t e_pte = ((last_address - 1) >> 12) & 0x1FF;
SpinLockGuard _(m_lock);
// Try to find free page that can be mapped without
// allocations (page table with unused entries)
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(m_highest_paging_struct));
for (uint32_t pdpte = s_pdpte; pdpte < 4; pdpte++)
const uint64_t* pdpt = P2V(m_highest_paging_struct);
for (; pdpte <= e_pdpte; pdpte++)
{
if (pdpte > e_pdpte)
break;
if (!(pdpt[pdpte] & Flags::Present))
continue;
uint64_t* pd = reinterpret_cast<uint64_t*>(P2V(pdpt[pdpte] & PAGE_ADDR_MASK));
for (uint32_t pde = s_pde; pde < 512; pde++)
const uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
for (; pde < 512; pde++)
{
if (pdpte == e_pdpte && pde > e_pde)
break;
if (!(pd[pde] & Flags::Present))
continue;
uint64_t* pt = (uint64_t*)P2V(pd[pde] & PAGE_ADDR_MASK);
for (uint32_t pte = s_pte; pte < 512; pte++)
const uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
for (; pte < 512; pte++)
{
if (pdpte == e_pdpte && pde == e_pde && pte >= e_pte)
if (pdpte == e_pdpte && pde == e_pde && pte > e_pte)
break;
if (!(pt[pte] & Flags::Used))
{
vaddr_t vaddr = 0;
vaddr |= (vaddr_t)pdpte << 30;
vaddr |= (vaddr_t)pde << 21;
vaddr |= (vaddr_t)pte << 12;
ASSERT(reserve_page(vaddr));
return vaddr;
}
if (pt[pte] & Flags::Used)
continue;
vaddr_t vaddr = 0;
vaddr |= (vaddr_t)pdpte << 30;
vaddr |= (vaddr_t)pde << 21;
vaddr |= (vaddr_t)pte << 12;
ASSERT(reserve_page(vaddr));
return vaddr;
}
pte = 0;
}
pde = 0;
}
// Find any free page
@@ -603,7 +655,7 @@ namespace Kernel
}
}
ASSERT_NOT_REACHED();
return 0;
}
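The change from `last_address` to `last_address - 1` when computing `e_pdpte`/`e_pde`/`e_pte` above matters because `last_address` is an exclusive bound: the last page inside the range starts at `last_address - PAGE_SIZE`, so its indices must come from `last_address - 1`. A small check of the corrected formula (function name is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Inclusive end index for a page-aligned exclusive bound. Using
// last_address directly would point one page past the range.
uint32_t end_pte_index(uint32_t last_address)
{
    return ((last_address - 1) >> 12) & 0x1FF;
}
```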
vaddr_t PageTable::reserve_free_contiguous_pages(size_t page_count, vaddr_t first_address, vaddr_t last_address)
@@ -636,7 +688,7 @@ namespace Kernel
}
}
ASSERT_NOT_REACHED();
return 0;
}
static void dump_range(vaddr_t start, vaddr_t end, PageTable::flags_t flags)
@@ -659,7 +711,7 @@ namespace Kernel
flags_t flags = 0;
vaddr_t start = 0;
uint64_t* pdpt = reinterpret_cast<uint64_t*>(P2V(m_highest_paging_struct));
const uint64_t* pdpt = P2V(m_highest_paging_struct);
for (uint32_t pdpte = 0; pdpte < 4; pdpte++)
{
if (!(pdpt[pdpte] & Flags::Present))
@@ -668,7 +720,7 @@ namespace Kernel
start = 0;
continue;
}
uint64_t* pd = (uint64_t*)P2V(pdpt[pdpte] & PAGE_ADDR_MASK);
const uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
for (uint64_t pde = 0; pde < 512; pde++)
{
if (!(pd[pde] & Flags::Present))
@@ -677,7 +729,7 @@ namespace Kernel
start = 0;
continue;
}
uint64_t* pt = (uint64_t*)P2V(pd[pde] & PAGE_ADDR_MASK);
const uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
for (uint64_t pte = 0; pte < 512; pte++)
{
if (parse_flags(pt[pte]) != flags)

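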

@@ -1,12 +1,13 @@
.section .userspace, "ax"
// stack contains
// return address
// return stack
// return rflags
// siginfo_t
// signal number
// signal handler
// (4 bytes) return address (on return stack)
// (4 bytes) return stack
// (4 bytes) return rflags
// (8 bytes) restore sigmask
// (36 bytes) siginfo_t
// (4 bytes) signal number
// (4 bytes) signal handler
.global signal_trampoline
signal_trampoline:
@@ -18,6 +19,10 @@ signal_trampoline:
pushl %eax
pushl %ebp
movl 84(%esp), %eax
pushl %eax; addl $4, (%esp)
pushl (%eax)
// FIXME: populate these
xorl %eax, %eax
pushl %eax // stack
@@ -28,9 +33,9 @@ signal_trampoline:
pushl %eax // link
movl %esp, %edx // ucontext
leal 60(%esp), %esi // siginfo
movl 56(%esp), %edi // signal number
movl 52(%esp), %eax // handlers
leal 68(%esp), %esi // siginfo
movl 64(%esp), %edi // signal number
movl 60(%esp), %eax // handlers
// align stack to 16 bytes
movl %esp, %ebp
@@ -53,7 +58,15 @@ signal_trampoline:
movl %ebp, %esp
addl $24, %esp
// restore sigmask
movl $83, %eax // SYS_SIGPROCMASK
movl $3, %ebx // SIG_SETMASK
leal 72(%esp), %ecx // set
xorl %edx, %edx // oset
int $0xF0
// restore registers
addl $8, %esp
popl %ebp
popl %eax
popl %ebx
@@ -62,8 +75,8 @@ signal_trampoline:
popl %edi
popl %esi
// skip handler, number, siginfo_t
addl $44, %esp
// skip handler, number, siginfo_t, sigmask
addl $52, %esp
// restore flags
popf


@@ -63,7 +63,7 @@ sys_fork_trampoline:
call read_ip
testl %eax, %eax
jz .reload_stack
jz .done
movl %esp, %ebx
@@ -79,9 +79,3 @@ sys_fork_trampoline:
popl %ebx
popl %ebp
ret
.reload_stack:
call get_thread_start_sp
movl %eax, %esp
xorl %eax, %eax
jmp .done


@@ -7,9 +7,6 @@ read_ip:
# void start_kernel_thread()
.global start_kernel_thread
start_kernel_thread:
call get_thread_start_sp
movl %eax, %esp
# STACK LAYOUT
# on_exit arg
# on_exit func
@@ -34,9 +31,6 @@ start_kernel_thread:
.global start_userspace_thread
start_userspace_thread:
call get_thread_start_sp
movl %eax, %esp
movw $(0x20 | 3), %bx
movw %bx, %ds
movw %bx, %es

kernel/arch/i686/User.S (new file, 54 lines)

@@ -0,0 +1,54 @@
# bool safe_user_memcpy(void*, const void*, size_t)
.global safe_user_memcpy
.global safe_user_memcpy_end
.global safe_user_memcpy_fault
safe_user_memcpy:
xorl %eax, %eax
xchgl 4(%esp), %edi
xchgl 8(%esp), %esi
movl 12(%esp), %ecx
movl %edi, %edx
rep movsb
movl 4(%esp), %edi
movl 8(%esp), %esi
incl %eax
safe_user_memcpy_fault:
ret
safe_user_memcpy_end:
# bool safe_user_strncpy(void*, const void*, size_t)
.global safe_user_strncpy
.global safe_user_strncpy_end
.global safe_user_strncpy_fault
safe_user_strncpy:
xchgl 4(%esp), %edi
xchgl 8(%esp), %esi
movl 12(%esp), %ecx
testl %ecx, %ecx
jz safe_user_strncpy_fault
.safe_user_strncpy_loop:
movb (%esi), %al
movb %al, (%edi)
testb %al, %al
jz .safe_user_strncpy_done
incl %edi
incl %esi
decl %ecx
jnz .safe_user_strncpy_loop
safe_user_strncpy_fault:
xorl %eax, %eax
jmp .safe_user_strncpy_return
.safe_user_strncpy_done:
movl $1, %eax
.safe_user_strncpy_return:
movl 4(%esp), %edi
movl 8(%esp), %esi
ret
safe_user_strncpy_end:
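The `safe_user_strncpy` routine above copies up to `n` bytes and succeeds only if the NUL terminator fit; hitting the byte limit (or `n == 0`) reports failure. A userspace model of that contract — the real routine additionally relies on the page-fault handler redirecting execution to the `*_fault` label on a bad user pointer, which this sketch cannot reproduce:

```cpp
#include <cassert>
#include <cstddef>

// Model of safe_user_strncpy's return-value semantics (name is
// illustrative): true only if the terminator was copied within n bytes.
bool strncpy_reports_truncation(char* dst, const char* src, size_t n)
{
    for (size_t i = 0; i < n; i++)
    {
        dst[i] = src[i];
        if (src[i] == '\0')
            return true; // terminator copied: success
    }
    return false; // count exhausted (or n == 0): truncated
}
```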


@@ -11,13 +11,14 @@
.code32
# multiboot2 header
// video mode info, page align modules
.set multiboot_flags, (1 << 2) | (1 << 0)
.section .multiboot, "aw"
.align 8
multiboot_start:
.long 0x1BADB002
.long (1 << 2) # page align modules
.long -(0x1BADB002 + (1 << 2))
.long multiboot_flags
.long -(0x1BADB002 + multiboot_flags)
.long 0
.long 0
@@ -30,7 +31,8 @@ multiboot_start:
.long FB_HEIGHT
.long FB_BPP
multiboot_end:
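The `-(0x1BADB002 + multiboot_flags)` expression in the header above computes the Multiboot 1 checksum, which is defined so that magic, flags, and checksum sum to zero modulo 2^32:

```cpp
#include <cassert>
#include <cstdint>

// Multiboot 1 checksum: chosen so magic + flags + checksum wraps to 0.
uint32_t multiboot_checksum(uint32_t flags)
{
    constexpr uint32_t magic = 0x1BADB002;
    return 0u - (magic + flags); // well-defined unsigned wraparound
}
```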
.align 8
.section .multiboot2, "aw"
multiboot2_start:
.long 0xE85250D6
.long 0
@@ -66,7 +68,6 @@ multiboot2_start:
multiboot2_end:
.section .bananboot, "aw"
.align 8
bananboot_start:
.long 0xBABAB007
.long -(0xBABAB007 + FB_WIDTH + FB_HEIGHT + FB_BPP)
@@ -97,8 +98,7 @@ bananboot_end:
boot_pdpt:
.long V2P(boot_pd) + (PG_PRESENT)
.long 0
.quad 0
.quad 0
.skip 2 * 8
.long V2P(boot_pd) + (PG_PRESENT)
.long 0
.align 4096
@@ -111,13 +111,16 @@ boot_pd:
.endr
boot_pts:
.set i, 0
.rept 512
.rept 511
.rept 512
.long i + (PG_READ_WRITE | PG_PRESENT)
.long 0
.set i, i + 0x1000
.endr
.endr
.global g_boot_fast_page_pt
g_boot_fast_page_pt:
.skip 512 * 8
boot_gdt:
.quad 0x0000000000000000 # null descriptor
@@ -273,7 +276,7 @@ system_halt:
jmp 1b
#define AP_V2P(vaddr) ((vaddr) - ap_trampoline + 0xF000)
#define AP_REL(vaddr) ((vaddr) - ap_trampoline + 0xF000)
.section .ap_init, "ax"
@@ -283,21 +286,27 @@ ap_trampoline:
jmp 1f
.align 8
ap_stack_ptr:
ap_stack_paddr:
.skip 4
ap_stack_vaddr:
.skip 4
ap_prepare_paging:
.skip 4
ap_page_table:
.skip 4
ap_ready:
.skip 4
ap_stack_loaded:
.skip 1
1: cli; cld
ljmpl $0x00, $AP_V2P(ap_cs_clear)
ljmpl $0x00, $AP_REL(ap_cs_clear)
ap_cs_clear:
# load ap gdt and enter protected mode
lgdt AP_V2P(ap_gdtr)
lgdt AP_REL(ap_gdtr)
movl %cr0, %eax
orb $1, %al
movl %eax, %cr0
ljmpl $0x08, $AP_V2P(ap_protected_mode)
ljmpl $0x08, $AP_REL(ap_protected_mode)
.code32
ap_protected_mode:
@@ -306,8 +315,7 @@ ap_protected_mode:
movw %ax, %ss
movw %ax, %es
movl AP_V2P(ap_stack_ptr), %esp
movb $1, AP_V2P(ap_stack_loaded)
movl AP_REL(ap_stack_paddr), %esp
leal V2P(enable_sse), %ecx; call *%ecx
leal V2P(enable_tsc), %ecx; call *%ecx
@@ -315,24 +323,28 @@ ap_protected_mode:
# load boot gdt and enter long mode
lgdt V2P(boot_gdtr)
ljmpl $0x08, $AP_V2P(ap_flush_gdt)
ljmpl $0x08, $AP_REL(ap_flush_gdt)
ap_flush_gdt:
# move stack pointer to higher half
movl %esp, %esp
addl $KERNEL_OFFSET, %esp
# jump to higher half
leal ap_higher_half, %ecx
movl $ap_higher_half, %ecx
jmp *%ecx
ap_higher_half:
movl AP_REL(ap_prepare_paging), %eax
call *%eax
# load AP's initial values
movl AP_REL(ap_stack_vaddr), %esp
movl AP_REL(ap_page_table), %eax
movl $1, AP_REL(ap_ready)
movl %eax, %cr3
# clear rbp for stacktrace
xorl %ebp, %ebp
1: pause
cmpb $0, g_ap_startup_done
jz 1b
je 1b
lock incb g_ap_running_count


@@ -1,20 +1,20 @@
.macro maybe_load_kernel_segments, n
testb $3, \n(%esp)
jz 1f; jnp 1f
.macro intr_header, n
pushal
testb $3, \n+8*4(%esp)
jz 1f
movw $0x10, %ax
movw %ax, %ds
movw %ax, %es
movw %ax, %fs
movw $0x28, %ax
movw %ax, %gs
1:
1: cld
.endm
.macro maybe_load_userspace_segments, n
testb $3, \n(%esp)
jz 1f; jnp 1f
.macro intr_footer, n
testb $3, \n+8*4(%esp)
jz 1f
call cpp_check_signal
movw $(0x20 | 3), %bx
movw %bx, %ds
movw %bx, %es
@@ -22,14 +22,11 @@
movw %bx, %fs
movw $(0x38 | 3), %bx
movw %bx, %gs
1:
1: popal
.endm
isr_stub:
pushal
maybe_load_kernel_segments 44
cld
intr_header 12
movl %cr0, %eax; pushl %eax
movl %cr2, %eax; pushl %eax
movl %cr3, %eax; pushl %eax
@@ -57,15 +54,12 @@ isr_stub:
movl %ebp, %esp
addl $24, %esp
maybe_load_userspace_segments 44
popal
intr_footer 12
addl $8, %esp
iret
irq_stub:
pushal
maybe_load_kernel_segments 44
cld
intr_header 12
movl 32(%esp), %edi # interrupt number
@@ -78,16 +72,13 @@ irq_stub:
movl %ebp, %esp
maybe_load_userspace_segments 44
popal
intr_footer 12
addl $8, %esp
iret
.global asm_ipi_handler
asm_ipi_handler:
pushal
maybe_load_kernel_segments 36
cld
intr_header 4
movl %esp, %ebp
andl $-16, %esp
@@ -96,15 +87,12 @@ asm_ipi_handler:
movl %ebp, %esp
maybe_load_userspace_segments 36
popal
intr_footer 4
iret
.global asm_timer_handler
asm_timer_handler:
pushal
maybe_load_kernel_segments 36
cld
intr_header 4
movl %esp, %ebp
andl $-16, %esp
@@ -113,8 +101,7 @@ asm_timer_handler:
movl %ebp, %esp
maybe_load_userspace_segments 36
popal
intr_footer 4
iret
.macro isr n


@@ -11,20 +11,21 @@ SECTIONS
{
g_kernel_execute_start = .;
*(.multiboot)
*(.multiboot2)
*(.bananboot)
*(.text.*)
}
.ap_init ALIGN(4K) : AT(ADDR(.ap_init) - KERNEL_OFFSET)
{
g_ap_init_addr = .;
*(.ap_init)
g_kernel_execute_end = .;
}
.userspace ALIGN(4K) : AT(ADDR(.userspace) - KERNEL_OFFSET)
{
g_userspace_start = .;
*(.userspace)
g_userspace_end = .;
g_kernel_execute_end = .;
}
.ap_init ALIGN(4K) : AT(ADDR(.ap_init) - KERNEL_OFFSET)
{
g_ap_init_addr = .;
*(.ap_init)
}
.rodata ALIGN(4K) : AT(ADDR(.rodata) - KERNEL_OFFSET)
{


@@ -2,7 +2,6 @@
#include <kernel/CPUID.h>
#include <kernel/Lock/SpinLock.h>
#include <kernel/Memory/Heap.h>
#include <kernel/Memory/kmalloc.h>
#include <kernel/Memory/PageTable.h>
extern uint8_t g_kernel_start[];
@@ -17,13 +16,15 @@ extern uint8_t g_kernel_writable_end[];
extern uint8_t g_userspace_start[];
extern uint8_t g_userspace_end[];
extern uint64_t g_boot_fast_page_pt[];
namespace Kernel
{
SpinLock PageTable::s_fast_page_lock;
static constexpr vaddr_t s_hhdm_offset = 0xFFFF800000000000;
static bool s_is_hddm_initialized = false;
static bool s_is_initialized = false;
constexpr uint64_t s_page_flag_mask = 0x8000000000000FFF;
constexpr uint64_t s_page_addr_mask = ~s_page_flag_mask;
@@ -35,6 +36,8 @@ namespace Kernel
static paddr_t s_global_pml4_entries[512] { 0 };
static uint64_t* s_fast_page_pt { nullptr };
static constexpr inline bool is_canonical(uintptr_t addr)
{
constexpr uintptr_t mask = 0xFFFF800000000000;
@@ -54,68 +57,27 @@ namespace Kernel
return addr;
}
struct FuncsKmalloc
static paddr_t allocate_zeroed_page_aligned_page()
{
static paddr_t allocate_zeroed_page_aligned_page()
{
void* page = kmalloc(PAGE_SIZE, PAGE_SIZE, true);
ASSERT(page);
memset(page, 0, PAGE_SIZE);
return kmalloc_paddr_of(reinterpret_cast<vaddr_t>(page)).value();
}
const paddr_t paddr = Heap::get().take_free_page();
ASSERT(paddr);
memset(reinterpret_cast<void*>(paddr + s_hhdm_offset), 0, PAGE_SIZE);
return paddr;
}
static void unallocate_page(paddr_t paddr)
{
kfree(reinterpret_cast<void*>(kmalloc_vaddr_of(paddr).value()));
}
static paddr_t V2P(vaddr_t vaddr)
{
return vaddr - KERNEL_OFFSET + g_boot_info.kernel_paddr;
}
static uint64_t* P2V(paddr_t paddr)
{
return reinterpret_cast<uint64_t*>(paddr - g_boot_info.kernel_paddr + KERNEL_OFFSET);
}
};
struct FuncsHHDM
static void unallocate_page(paddr_t paddr)
{
static paddr_t allocate_zeroed_page_aligned_page()
{
const paddr_t paddr = Heap::get().take_free_page();
ASSERT(paddr);
memset(reinterpret_cast<void*>(paddr + s_hhdm_offset), 0, PAGE_SIZE);
return paddr;
}
Heap::get().release_page(paddr);
}
static void unallocate_page(paddr_t paddr)
{
Heap::get().release_page(paddr);
}
static uint64_t* P2V(paddr_t paddr)
{
ASSERT(paddr != 0);
ASSERT(!BAN::Math::will_addition_overflow(paddr, s_hhdm_offset));
return reinterpret_cast<uint64_t*>(paddr + s_hhdm_offset);
}
static paddr_t V2P(vaddr_t vaddr)
{
ASSERT(vaddr >= s_hhdm_offset);
ASSERT(vaddr < KERNEL_OFFSET);
return vaddr - s_hhdm_offset;
}
static uint64_t* P2V(paddr_t paddr)
{
ASSERT(paddr != 0);
ASSERT(!BAN::Math::will_addition_overflow(paddr, s_hhdm_offset));
return reinterpret_cast<uint64_t*>(paddr + s_hhdm_offset);
}
};
static paddr_t (*allocate_zeroed_page_aligned_page)() = &FuncsKmalloc::allocate_zeroed_page_aligned_page;
static void (*unallocate_page)(paddr_t) = &FuncsKmalloc::unallocate_page;
static paddr_t (*V2P)(vaddr_t) = &FuncsKmalloc::V2P;
static uint64_t* (*P2V)(paddr_t) = &FuncsKmalloc::P2V;
static inline PageTable::flags_t parse_flags(uint64_t entry)
static PageTable::flags_t parse_flags(uint64_t entry)
{
using Flags = PageTable::Flags;
@@ -137,7 +99,7 @@ namespace Kernel
// 0: 4 KiB
// 1: 2 MiB
// 2: 1 GiB
static void init_map_hhdm_page(paddr_t pml4, paddr_t paddr, uint8_t page_size)
static void map_hhdm_page(paddr_t pml4, paddr_t paddr, uint8_t page_size)
{
ASSERT(0 <= page_size && page_size <= 2);
@@ -184,7 +146,7 @@ namespace Kernel
const uint64_t noexec_flag = s_has_nxe ? (static_cast<uint64_t>(1) << 63) : 0;
const paddr_t pdpt = get_or_allocate_entry(pml4, pml4e, noexec_flag);
s_global_pml4_entries[pml4e] = pdpt | hhdm_flags;
s_global_pml4_entries[pml4e] = pdpt | hhdm_flags | noexec_flag;
paddr_t lowest_paddr = pdpt;
uint16_t lowest_entry = pdpte;
@@ -207,23 +169,11 @@ namespace Kernel
});
}
static void init_map_hhdm(paddr_t pml4)
static void initialize_hhdm(paddr_t pml4)
{
for (const auto& entry : g_boot_info.memory_map_entries)
{
bool should_map = false;
switch (entry.type)
{
case MemoryMapEntry::Type::Available:
should_map = true;
break;
case MemoryMapEntry::Type::ACPIReclaim:
case MemoryMapEntry::Type::ACPINVS:
case MemoryMapEntry::Type::Reserved:
should_map = false;
break;
}
if (!should_map)
if (entry.type != MemoryMapEntry::Type::Available)
continue;
constexpr size_t one_gib = 1024 * 1024 * 1024;
@@ -235,156 +185,39 @@ namespace Kernel
{
if (s_has_gib && paddr % one_gib == 0 && paddr + one_gib <= entry_end)
{
init_map_hhdm_page(pml4, paddr, 2);
map_hhdm_page(pml4, paddr, 2);
paddr += one_gib;
}
else if (paddr % two_mib == 0 && paddr + two_mib <= entry_end)
{
init_map_hhdm_page(pml4, paddr, 1);
map_hhdm_page(pml4, paddr, 1);
paddr += two_mib;
}
else
{
init_map_hhdm_page(pml4, paddr, 0);
map_hhdm_page(pml4, paddr, 0);
paddr += PAGE_SIZE;
}
}
}
}
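The loop in `initialize_hhdm` above maps each physical range with the largest page size that is both aligned and fits. The selection can be sketched as a pure function returning the planned page sizes — the name and return type are illustrative; the real code calls `map_hhdm_page()` as it goes:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Greedy largest-page-fits plan: 0 = 4 KiB, 1 = 2 MiB, 2 = 1 GiB,
// matching the page_size encoding used by map_hhdm_page().
std::vector<uint8_t> plan_hhdm_pages(uint64_t paddr, uint64_t end, bool has_gib)
{
    constexpr uint64_t one_gib = 1024ull * 1024 * 1024;
    constexpr uint64_t two_mib = 2ull * 1024 * 1024;
    constexpr uint64_t four_kib = 4096;

    std::vector<uint8_t> sizes;
    while (paddr < end)
    {
        if (has_gib && paddr % one_gib == 0 && paddr + one_gib <= end)
            { sizes.push_back(2); paddr += one_gib; }
        else if (paddr % two_mib == 0 && paddr + two_mib <= end)
            { sizes.push_back(1); paddr += two_mib; }
        else
            { sizes.push_back(0); paddr += four_kib; }
    }
    return sizes;
}
```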
static paddr_t copy_page_from_kmalloc_to_heap(paddr_t kmalloc_paddr)
void PageTable::initialize_fast_page()
{
const paddr_t heap_paddr = Heap::get().take_free_page();
ASSERT(heap_paddr);
const vaddr_t kmalloc_vaddr = kmalloc_vaddr_of(kmalloc_paddr).value();
PageTable::with_fast_page(heap_paddr, [kmalloc_vaddr] {
memcpy(PageTable::fast_page_as_ptr(), reinterpret_cast<void*>(kmalloc_vaddr), PAGE_SIZE);
});
return heap_paddr;
s_fast_page_pt = g_boot_fast_page_pt;
}
static void copy_paging_structure_to_heap(uint64_t* old_table, uint64_t* new_table, int depth)
{
if (depth == 0)
return;
constexpr uint64_t page_flag_mask = 0x8000000000000FFF;
constexpr uint64_t page_addr_mask = ~page_flag_mask;
for (uint16_t index = 0; index < 512; index++)
{
const uint64_t old_entry = old_table[index];
if (old_entry == 0)
{
new_table[index] = 0;
continue;
}
const paddr_t old_paddr = old_entry & page_addr_mask;
const paddr_t new_paddr = copy_page_from_kmalloc_to_heap(old_paddr);
new_table[index] = new_paddr | (old_entry & page_flag_mask);
uint64_t* next_old_table = reinterpret_cast<uint64_t*>(old_paddr + s_hhdm_offset);
uint64_t* next_new_table = reinterpret_cast<uint64_t*>(new_paddr + s_hhdm_offset);
copy_paging_structure_to_heap(next_old_table, next_new_table, depth - 1);
}
}
static void free_kmalloc_paging_structure(uint64_t* table, int depth)
{
if (depth == 0)
return;
constexpr uint64_t page_flag_mask = 0x8000000000000FFF;
constexpr uint64_t page_addr_mask = ~page_flag_mask;
for (uint16_t index = 0; index < 512; index++)
{
const uint64_t entry = table[index];
if (entry == 0)
continue;
const paddr_t paddr = entry & page_addr_mask;
uint64_t* next_table = reinterpret_cast<uint64_t*>(paddr + s_hhdm_offset);
free_kmalloc_paging_structure(next_table, depth - 1);
kfree(reinterpret_cast<void*>(kmalloc_vaddr_of(paddr).value()));
}
}
void PageTable::initialize_pre_heap()
static void detect_cpu_features()
{
if (CPUID::has_nxe())
s_has_nxe = true;
if (CPUID::has_pge())
s_has_pge = true;
if (CPUID::has_1gib_pages())
s_has_gib = true;
ASSERT(s_kernel == nullptr);
s_kernel = new PageTable();
ASSERT(s_kernel);
s_kernel->m_highest_paging_struct = allocate_zeroed_page_aligned_page();
s_kernel->prepare_fast_page();
s_kernel->initialize_kernel();
for (auto pml4e : s_global_pml4_entries)
ASSERT(pml4e == 0);
const uint64_t* pml4 = P2V(s_kernel->m_highest_paging_struct);
s_global_pml4_entries[511] = pml4[511];
}
void PageTable::initialize_post_heap()
{
ASSERT(s_kernel);
init_map_hhdm(s_kernel->m_highest_paging_struct);
const paddr_t old_pml4_paddr = s_kernel->m_highest_paging_struct;
const paddr_t new_pml4_paddr = copy_page_from_kmalloc_to_heap(old_pml4_paddr);
uint64_t* old_pml4 = reinterpret_cast<uint64_t*>(kmalloc_vaddr_of(old_pml4_paddr).value());
uint64_t* new_pml4 = reinterpret_cast<uint64_t*>(new_pml4_paddr + s_hhdm_offset);
const paddr_t old_pdpt_paddr = old_pml4[511] & s_page_addr_mask;
const paddr_t new_pdpt_paddr = Heap::get().take_free_page();
ASSERT(new_pdpt_paddr);
uint64_t* old_pdpt = reinterpret_cast<uint64_t*>(old_pdpt_paddr + s_hhdm_offset);
uint64_t* new_pdpt = reinterpret_cast<uint64_t*>(new_pdpt_paddr + s_hhdm_offset);
copy_paging_structure_to_heap(old_pdpt, new_pdpt, 2);
new_pml4[511] = new_pdpt_paddr | (old_pml4[511] & s_page_flag_mask);
s_global_pml4_entries[511] = new_pml4[511];
s_kernel->m_highest_paging_struct = new_pml4_paddr;
s_kernel->load();
free_kmalloc_paging_structure(old_pdpt, 2);
kfree(reinterpret_cast<void*>(kmalloc_vaddr_of(old_pdpt_paddr).value()));
kfree(reinterpret_cast<void*>(kmalloc_vaddr_of(old_pml4_paddr).value()));
allocate_zeroed_page_aligned_page = &FuncsHHDM::allocate_zeroed_page_aligned_page;
unallocate_page = &FuncsHHDM::unallocate_page;
V2P = &FuncsHHDM::V2P;
P2V = &FuncsHHDM::P2V;
s_is_hddm_initialized = true;
// This is a hack to unmap fast page. fast page pt is copied
// while it is mapped, so we need to manually unmap it
SpinLockGuard _(s_fast_page_lock);
unmap_fast_page();
}
void PageTable::initial_load()
void PageTable::enable_cpu_features()
{
if (s_has_nxe)
{
@@ -423,8 +256,63 @@ namespace Kernel
"movq %%rax, %%cr0;"
::: "rax"
);
}
load();
void PageTable::initialize_and_load()
{
detect_cpu_features();
enable_cpu_features();
const paddr_t boot_pml4_paddr = ({
paddr_t paddr;
asm volatile("movq %%cr3, %0" : "=r"(paddr));
paddr;
});
initialize_hhdm(boot_pml4_paddr);
ASSERT(s_kernel == nullptr);
s_kernel = new PageTable();
ASSERT(s_kernel != nullptr);
s_kernel->m_highest_paging_struct = allocate_zeroed_page_aligned_page();
ASSERT(s_kernel->m_highest_paging_struct);
uint64_t* pml4 = P2V(s_kernel->m_highest_paging_struct);
memcpy(pml4, s_global_pml4_entries, sizeof(s_global_pml4_entries));
s_kernel->map_kernel_memory();
s_global_pml4_entries[511] = pml4[511];
// update fast page pt
{
constexpr vaddr_t uc_vaddr = uncanonicalize(fast_page());
constexpr uint16_t pml4e = (uc_vaddr >> 39) & 0x1FF;
constexpr uint16_t pdpte = (uc_vaddr >> 30) & 0x1FF;
constexpr uint16_t pde = (uc_vaddr >> 21) & 0x1FF;
const auto get_or_allocate_entry =
[](paddr_t table_paddr, uint16_t entry, uint64_t flags)
{
uint64_t* table = P2V(table_paddr);
if (!(table[entry] & Flags::Present))
{
table[entry] = allocate_zeroed_page_aligned_page();
ASSERT(table[entry]);
}
table[entry] |= flags;
return table[entry] & s_page_addr_mask;
};
const paddr_t pml4 = s_kernel->m_highest_paging_struct;
const paddr_t pdpt = get_or_allocate_entry(pml4, pml4e, Flags::ReadWrite | Flags::Present);
const paddr_t pd = get_or_allocate_entry(pdpt, pdpte, Flags::ReadWrite | Flags::Present);
s_fast_page_pt = P2V(get_or_allocate_entry(pd, pde, Flags::ReadWrite | Flags::Present));
}
s_kernel->load();
}
PageTable& PageTable::kernel()
@@ -440,12 +328,12 @@ namespace Kernel
return true;
}
void PageTable::initialize_kernel()
void PageTable::map_kernel_memory()
{
// Map (phys_kernel_start -> phys_kernel_end) to (virt_kernel_start -> virt_kernel_end)
const vaddr_t kernel_start = reinterpret_cast<vaddr_t>(g_kernel_start);
map_range_at(
V2P(kernel_start),
kernel_start - KERNEL_OFFSET + g_boot_info.kernel_paddr,
kernel_start,
g_kernel_end - g_kernel_start,
Flags::Present
@@ -454,7 +342,7 @@ namespace Kernel
// Map executable kernel memory as executable
const vaddr_t kernel_execute_start = reinterpret_cast<vaddr_t>(g_kernel_execute_start);
map_range_at(
V2P(kernel_execute_start),
kernel_execute_start - KERNEL_OFFSET + g_boot_info.kernel_paddr,
kernel_execute_start,
g_kernel_execute_end - g_kernel_execute_start,
Flags::Execute | Flags::Present
@@ -463,7 +351,7 @@ namespace Kernel
// Map writable kernel memory as writable
const vaddr_t kernel_writable_start = reinterpret_cast<vaddr_t>(g_kernel_writable_start);
map_range_at(
V2P(kernel_writable_start),
kernel_writable_start - KERNEL_OFFSET + g_boot_info.kernel_paddr,
kernel_writable_start,
g_kernel_writable_end - g_kernel_writable_start,
Flags::ReadWrite | Flags::Present
@@ -472,109 +360,58 @@ namespace Kernel
// Map userspace memory
const vaddr_t userspace_start = reinterpret_cast<vaddr_t>(g_userspace_start);
map_range_at(
V2P(userspace_start),
userspace_start - KERNEL_OFFSET + g_boot_info.kernel_paddr,
userspace_start,
g_userspace_end - g_userspace_start,
Flags::Execute | Flags::UserSupervisor | Flags::Present
);
}
void PageTable::prepare_fast_page()
{
constexpr vaddr_t uc_vaddr = uncanonicalize(fast_page());
constexpr uint64_t pml4e = (uc_vaddr >> 39) & 0x1FF;
constexpr uint64_t pdpte = (uc_vaddr >> 30) & 0x1FF;
constexpr uint64_t pde = (uc_vaddr >> 21) & 0x1FF;
uint64_t* pml4 = P2V(m_highest_paging_struct);
ASSERT(!(pml4[pml4e] & Flags::Present));
pml4[pml4e] = allocate_zeroed_page_aligned_page() | Flags::ReadWrite | Flags::Present;
uint64_t* pdpt = P2V(pml4[pml4e] & s_page_addr_mask);
ASSERT(!(pdpt[pdpte] & Flags::Present));
pdpt[pdpte] = allocate_zeroed_page_aligned_page() | Flags::ReadWrite | Flags::Present;
uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
ASSERT(!(pd[pde] & Flags::Present));
pd[pde] = allocate_zeroed_page_aligned_page() | Flags::ReadWrite | Flags::Present;
}
void PageTable::map_fast_page(paddr_t paddr)
{
ASSERT(s_kernel);
ASSERT(paddr);
ASSERT(paddr % PAGE_SIZE == 0);
ASSERT(paddr && paddr % PAGE_SIZE == 0);
ASSERT(s_fast_page_pt);
ASSERT(s_fast_page_lock.current_processor_has_lock());
constexpr vaddr_t uc_vaddr = uncanonicalize(fast_page());
constexpr uint64_t pml4e = (uc_vaddr >> 39) & 0x1FF;
constexpr uint64_t pdpte = (uc_vaddr >> 30) & 0x1FF;
constexpr uint64_t pde = (uc_vaddr >> 21) & 0x1FF;
constexpr uint64_t pte = (uc_vaddr >> 12) & 0x1FF;
ASSERT(!(*s_fast_page_pt & Flags::Present));
s_fast_page_pt[0] = paddr | Flags::ReadWrite | Flags::Present;
const uint64_t* pml4 = P2V(s_kernel->m_highest_paging_struct);
const uint64_t* pdpt = P2V(pml4[pml4e] & s_page_addr_mask);
const uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
ASSERT(!(pt[pte] & Flags::Present));
pt[pte] = paddr | Flags::ReadWrite | Flags::Present;
asm volatile("invlpg (%0)" :: "r"(fast_page()) : "memory");
asm volatile("invlpg (%0)" :: "r"(fast_page()));
}
void PageTable::unmap_fast_page()
{
ASSERT(s_kernel);
ASSERT(s_fast_page_pt);
ASSERT(s_fast_page_lock.current_processor_has_lock());
constexpr vaddr_t uc_vaddr = uncanonicalize(fast_page());
constexpr uint64_t pml4e = (uc_vaddr >> 39) & 0x1FF;
constexpr uint64_t pdpte = (uc_vaddr >> 30) & 0x1FF;
constexpr uint64_t pde = (uc_vaddr >> 21) & 0x1FF;
constexpr uint64_t pte = (uc_vaddr >> 12) & 0x1FF;
ASSERT((*s_fast_page_pt & Flags::Present));
s_fast_page_pt[0] = 0;
const uint64_t* pml4 = P2V(s_kernel->m_highest_paging_struct);
const uint64_t* pdpt = P2V(pml4[pml4e] & s_page_addr_mask);
const uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
ASSERT(pt[pte] & Flags::Present);
pt[pte] = 0;
asm volatile("invlpg (%0)" :: "r"(fast_page()) : "memory");
asm volatile("invlpg (%0)" :: "r"(fast_page()));
}
BAN::ErrorOr<PageTable*> PageTable::create_userspace()
{
SpinLockGuard _(s_kernel->m_lock);
PageTable* page_table = new PageTable;
if (page_table == nullptr)
return BAN::Error::from_errno(ENOMEM);
page_table->map_kernel_memory();
page_table->m_highest_paging_struct = allocate_zeroed_page_aligned_page();
if (page_table->m_highest_paging_struct == 0)
{
delete page_table;
return BAN::Error::from_errno(ENOMEM);
}
uint64_t* pml4 = P2V(page_table->m_highest_paging_struct);
memcpy(pml4, s_global_pml4_entries, sizeof(s_global_pml4_entries));
return page_table;
}
void PageTable::map_kernel_memory()
{
ASSERT(s_kernel);
ASSERT(s_global_pml4_entries[511]);
ASSERT(m_highest_paging_struct == 0);
m_highest_paging_struct = allocate_zeroed_page_aligned_page();
PageTable::with_fast_page(m_highest_paging_struct, [] {
for (size_t i = 0; i < 512; i++)
{
if (s_global_pml4_entries[i] == 0)
continue;
ASSERT(i >= 256);
PageTable::fast_page_as_sized<uint64_t>(i) = s_global_pml4_entries[i];
}
});
}
PageTable::~PageTable()
{
if (m_highest_paging_struct == 0)
@@ -612,10 +449,39 @@ namespace Kernel
Processor::set_current_page_table(this);
}
void PageTable::invalidate(vaddr_t vaddr, bool send_smp_message)
void PageTable::invalidate_range(vaddr_t vaddr, size_t pages, bool send_smp_message)
{
ASSERT(vaddr % PAGE_SIZE == 0);
asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
const bool is_userspace = (vaddr < KERNEL_OFFSET);
if (is_userspace && this != &PageTable::current())
;
else if (pages <= 32 || !s_is_initialized)
{
for (size_t i = 0; i < pages; i++, vaddr += PAGE_SIZE)
asm volatile("invlpg (%0)" :: "r"(vaddr));
}
else if (is_userspace || !s_has_pge)
{
asm volatile("movq %0, %%cr3" :: "r"(m_highest_paging_struct));
}
else
{
asm volatile(
"movq %%cr4, %%rax;"
"andq $~0x80, %%rax;"
"movq %%rax, %%cr4;"
"movq %0, %%cr3;"
"orq $0x80, %%rax;"
"movq %%rax, %%cr4;"
:
: "r"(m_highest_paging_struct)
: "rax"
);
}
if (send_smp_message)
{
@@ -623,14 +489,14 @@ namespace Kernel
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = 1,
.page_count = pages,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
}
});
}
}
void PageTable::unmap_page(vaddr_t vaddr, bool send_smp_message)
void PageTable::unmap_page(vaddr_t vaddr, bool invalidate)
{
ASSERT(vaddr);
ASSERT(vaddr != fast_page());
@@ -663,31 +529,23 @@ namespace Kernel
pt[pte] = 0;
if (old_paddr != 0)
invalidate(vaddr, send_smp_message);
if (invalidate && old_paddr != 0)
invalidate_page(vaddr, true);
}
void PageTable::unmap_range(vaddr_t vaddr, size_t size)
{
ASSERT(vaddr % PAGE_SIZE == 0);
size_t page_count = range_page_count(vaddr, size);
const size_t page_count = range_page_count(vaddr, size);
SpinLockGuard _(m_lock);
for (vaddr_t page = 0; page < page_count; page++)
unmap_page(vaddr + page * PAGE_SIZE, false);
Processor::broadcast_smp_message({
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = page_count,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
}
});
invalidate_range(vaddr, page_count, true);
}
void PageTable::map_page_at(paddr_t paddr, vaddr_t vaddr, flags_t flags, MemoryType memory_type, bool send_smp_message)
void PageTable::map_page_at(paddr_t paddr, vaddr_t vaddr, flags_t flags, MemoryType memory_type, bool invalidate)
{
ASSERT(vaddr);
ASSERT(vaddr != fast_page());
@@ -752,8 +610,8 @@ namespace Kernel
pt[pte] = paddr | uwr_flags | extra_flags;
if (old_paddr != 0)
invalidate(vaddr, send_smp_message);
if (invalidate && old_paddr != 0)
invalidate_page(vaddr, true);
}
void PageTable::map_range_at(paddr_t paddr, vaddr_t vaddr, size_t size, flags_t flags, MemoryType memory_type)
@@ -769,15 +627,66 @@ namespace Kernel
SpinLockGuard _(m_lock);
for (size_t page = 0; page < page_count; page++)
map_page_at(paddr + page * PAGE_SIZE, vaddr + page * PAGE_SIZE, flags, memory_type, false);
invalidate_range(vaddr, page_count, true);
}
Processor::broadcast_smp_message({
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = page_count,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
void PageTable::remove_writable_from_range(vaddr_t vaddr, size_t size)
{
ASSERT(vaddr);
ASSERT(vaddr % PAGE_SIZE == 0);
ASSERT(is_canonical(vaddr));
ASSERT(is_canonical(vaddr + size - 1));
const vaddr_t uc_vaddr_start = uncanonicalize(vaddr);
const vaddr_t uc_vaddr_end = uncanonicalize(vaddr + size - 1);
uint16_t pml4e = (uc_vaddr_start >> 39) & 0x1FF;
uint16_t pdpte = (uc_vaddr_start >> 30) & 0x1FF;
uint16_t pde = (uc_vaddr_start >> 21) & 0x1FF;
uint16_t pte = (uc_vaddr_start >> 12) & 0x1FF;
const uint16_t e_pml4e = (uc_vaddr_end >> 39) & 0x1FF;
const uint16_t e_pdpte = (uc_vaddr_end >> 30) & 0x1FF;
const uint16_t e_pde = (uc_vaddr_end >> 21) & 0x1FF;
const uint16_t e_pte = (uc_vaddr_end >> 12) & 0x1FF;
SpinLockGuard _(m_lock);
const uint64_t* pml4 = P2V(m_highest_paging_struct);
for (; pml4e <= e_pml4e; pml4e++)
{
if (!(pml4[pml4e] & Flags::ReadWrite))
continue;
const uint64_t* pdpt = P2V(pml4[pml4e] & s_page_addr_mask);
for (; pdpte < 512; pdpte++)
{
if (pml4e == e_pml4e && pdpte > e_pdpte)
break;
if (!(pdpt[pdpte] & Flags::ReadWrite))
continue;
const uint64_t* pd = P2V(pdpt[pdpte] & s_page_addr_mask);
for (; pde < 512; pde++)
{
if (pml4e == e_pml4e && pdpte == e_pdpte && pde > e_pde)
break;
if (!(pd[pde] & Flags::ReadWrite))
continue;
uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
for (; pte < 512; pte++)
{
if (pml4e == e_pml4e && pdpte == e_pdpte && pde == e_pde && pte > e_pte)
break;
pt[pte] &= ~static_cast<uint64_t>(Flags::ReadWrite);
}
pte = 0;
}
pde = 0;
}
});
pdpte = 0;
}
invalidate_range(vaddr, size / PAGE_SIZE, true);
}
uint64_t PageTable::get_page_data(vaddr_t vaddr) const
@@ -824,13 +733,13 @@ namespace Kernel
return page_data & s_page_addr_mask;
}
bool PageTable::reserve_page(vaddr_t vaddr, bool only_free, bool send_smp_message)
bool PageTable::reserve_page(vaddr_t vaddr, bool only_free, bool invalidate)
{
SpinLockGuard _(m_lock);
ASSERT(vaddr % PAGE_SIZE == 0);
if (only_free && !is_page_free(vaddr))
return false;
map_page_at(0, vaddr, Flags::Reserved, MemoryType::Normal, send_smp_message);
map_page_at(0, vaddr, Flags::Reserved, MemoryType::Normal, invalidate);
return true;
}
@@ -845,14 +754,7 @@ namespace Kernel
return false;
for (size_t offset = 0; offset < bytes; offset += PAGE_SIZE)
reserve_page(vaddr + offset, true, false);
Processor::broadcast_smp_message({
.type = Processor::SMPMessage::Type::FlushTLB,
.flush_tlb = {
.vaddr = vaddr,
.page_count = bytes / PAGE_SIZE,
.page_table = vaddr < KERNEL_OFFSET ? this : nullptr,
}
});
invalidate_range(vaddr, bytes / PAGE_SIZE, true);
return true;
}
@@ -866,9 +768,9 @@ namespace Kernel
last_address -= rem;
ASSERT(is_canonical(first_address));
ASSERT(is_canonical(last_address));
ASSERT(is_canonical(last_address - 1));
const vaddr_t uc_vaddr_start = uncanonicalize(first_address);
const vaddr_t uc_vaddr_end = uncanonicalize(last_address);
const vaddr_t uc_vaddr_end = uncanonicalize(last_address - 1);
uint16_t pml4e = (uc_vaddr_start >> 39) & 0x1FF;
uint16_t pdpte = (uc_vaddr_start >> 30) & 0x1FF;
@@ -885,10 +787,8 @@ namespace Kernel
// Try to find free page that can be mapped without
// allocations (page table with unused entries)
const uint64_t* pml4 = P2V(m_highest_paging_struct);
for (; pml4e < 512; pml4e++)
for (; pml4e <= e_pml4e; pml4e++)
{
if (pml4e > e_pml4e)
break;
if (!(pml4[pml4e] & Flags::Present))
continue;
const uint64_t* pdpt = P2V(pml4[pml4e] & s_page_addr_mask);
@@ -908,22 +808,24 @@ namespace Kernel
const uint64_t* pt = P2V(pd[pde] & s_page_addr_mask);
for (; pte < 512; pte++)
{
if (pml4e == e_pml4e && pdpte == e_pdpte && pde == e_pde && pte >= e_pte)
if (pml4e == e_pml4e && pdpte == e_pdpte && pde == e_pde && pte > e_pte)
break;
if (!(pt[pte] & Flags::Used))
{
vaddr_t vaddr = 0;
vaddr |= static_cast<uint64_t>(pml4e) << 39;
vaddr |= static_cast<uint64_t>(pdpte) << 30;
vaddr |= static_cast<uint64_t>(pde) << 21;
vaddr |= static_cast<uint64_t>(pte) << 12;
vaddr = canonicalize(vaddr);
ASSERT(reserve_page(vaddr));
return vaddr;
}
if (pt[pte] & Flags::Used)
continue;
vaddr_t vaddr = 0;
vaddr |= static_cast<uint64_t>(pml4e) << 39;
vaddr |= static_cast<uint64_t>(pdpte) << 30;
vaddr |= static_cast<uint64_t>(pde) << 21;
vaddr |= static_cast<uint64_t>(pte) << 12;
vaddr = canonicalize(vaddr);
ASSERT(reserve_page(vaddr));
return vaddr;
}
pte = 0;
}
pde = 0;
}
pdpte = 0;
}
for (vaddr_t uc_vaddr = uc_vaddr_start; uc_vaddr < uc_vaddr_end; uc_vaddr += PAGE_SIZE)
@@ -935,7 +837,7 @@ namespace Kernel
}
}
ASSERT_NOT_REACHED();
return 0;
}
vaddr_t PageTable::reserve_free_contiguous_pages(size_t page_count, vaddr_t first_address, vaddr_t last_address)
@@ -948,7 +850,7 @@ namespace Kernel
last_address -= rem;
ASSERT(is_canonical(first_address));
ASSERT(is_canonical(last_address));
ASSERT(is_canonical(last_address - 1));
SpinLockGuard _(m_lock);
@@ -977,7 +879,7 @@ namespace Kernel
}
}
ASSERT_NOT_REACHED();
return 0;
}
bool PageTable::is_page_free(vaddr_t page) const


@@ -1,12 +1,13 @@
.section .userspace, "ax"
// stack contains
// return address
// return stack
// return rflags
// siginfo_t
// signal number
// signal handler
// (8 bytes) return address (on return stack)
// (8 bytes) return stack
// (8 bytes) return rflags
// (8 bytes) restore sigmask
// (56 bytes) siginfo_t
// (8 bytes) signal number
// (8 bytes) signal handler
.global signal_trampoline
signal_trampoline:
@@ -26,6 +27,10 @@ signal_trampoline:
pushq %rax
pushq %rbp
movq 208(%rsp), %rax
pushq %rax; addq $(128 + 8), (%rsp)
pushq (%rax)
// FIXME: populate these
xorq %rax, %rax
pushq %rax // stack
@@ -35,9 +40,9 @@ signal_trampoline:
pushq %rax // link
movq %rsp, %rdx // ucontext
leaq 176(%rsp), %rsi // siginfo
movq 168(%rsp), %rdi // signal number
movq 160(%rsp), %rax // handler
leaq 192(%rsp), %rsi // siginfo
movq 184(%rsp), %rdi // signal number
movq 176(%rsp), %rax // handler
// align stack to 16 bytes
movq %rsp, %rbp
@@ -55,7 +60,15 @@ signal_trampoline:
movq %rbp, %rsp
addq $40, %rsp
// restore sigmask
movq $83, %rdi // SYS_SIGPROCMASK
movq $3, %rsi // SIG_SETMASK
leaq 192(%rsp), %rdx // set
xorq %r10, %r10 // oset
syscall
// restore registers
addq $16, %rsp
popq %rbp
popq %rax
popq %rbx
@@ -72,13 +85,13 @@ signal_trampoline:
popq %r14
popq %r15
// skip handler, number, siginfo_t
addq $72, %rsp
// skip handler, number, siginfo_t, sigmask
addq $80, %rsp
// restore flags
popfq
movq (%rsp), %rsp
// return over red-zone and siginfo_t
// return over red-zone
ret $128


@@ -33,7 +33,7 @@ sys_fork_trampoline:
call read_ip
testq %rax, %rax
je .done
jz .done
movq %rax, %rsi
movq %rsp, %rdi


@@ -7,9 +7,6 @@ read_ip:
# void start_kernel_thread()
.global start_kernel_thread
start_kernel_thread:
call get_thread_start_sp
movq %rax, %rsp
# STACK LAYOUT
# on_exit arg
# on_exit func
@@ -27,9 +24,5 @@ start_kernel_thread:
.global start_userspace_thread
start_userspace_thread:
call get_thread_start_sp
movq %rax, %rsp
swapgs
iretq

kernel/arch/x86_64/User.S Normal file

@@ -0,0 +1,87 @@
# bool safe_user_memcpy(void*, const void*, size_t)
.global safe_user_memcpy
.global safe_user_memcpy_end
.global safe_user_memcpy_fault
safe_user_memcpy:
xorq %rax, %rax
movq %rdx, %rcx
rep movsb
incq %rax
safe_user_memcpy_fault:
ret
safe_user_memcpy_end:
# bool safe_user_strncpy(void*, const void*, size_t)
.global safe_user_strncpy
.global safe_user_strncpy_end
.global safe_user_strncpy_fault
safe_user_strncpy:
movq %rdx, %rcx
testq %rcx, %rcx
jz safe_user_strncpy_fault
.safe_user_strncpy_align_loop:
testb $0x7, %sil
jz .safe_user_strncpy_align_done
movb (%rsi), %al
movb %al, (%rdi)
testb %al, %al
jz .safe_user_strncpy_done
incq %rdi
incq %rsi
decq %rcx
jnz .safe_user_strncpy_align_loop
jmp safe_user_strncpy_fault
.safe_user_strncpy_align_done:
movq $0x0101010101010101, %r8
movq $0x8080808080808080, %r9
.safe_user_strncpy_qword_loop:
cmpq $8, %rcx
jb .safe_user_strncpy_qword_done
movq (%rsi), %rax
movq %rax, %r10
movq %rax, %r11
# https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord
subq %r8, %r10
notq %r11
andq %r11, %r10
andq %r9, %r10
jnz .safe_user_strncpy_byte_loop
movq %rax, (%rdi)
addq $8, %rdi
addq $8, %rsi
subq $8, %rcx
jnz .safe_user_strncpy_qword_loop
jmp safe_user_strncpy_fault
.safe_user_strncpy_qword_done:
testq %rcx, %rcx
jz safe_user_strncpy_fault
.safe_user_strncpy_byte_loop:
movb (%rsi), %al
movb %al, (%rdi)
testb %al, %al
jz .safe_user_strncpy_done
incq %rdi
incq %rsi
decq %rcx
jnz .safe_user_strncpy_byte_loop
safe_user_strncpy_fault:
xorq %rax, %rax
ret
.safe_user_strncpy_done:
movb $1, %al
ret
safe_user_strncpy_end:


@@ -11,26 +11,28 @@
.code32
# multiboot2 header
// custom addresses, video mode info, page align modules
.set multiboot_flags, (1 << 16) | (1 << 2) | (1 << 0)
.section .multiboot, "aw"
.align 8
multiboot_start:
.long 0x1BADB002
.long (1 << 2) # page align modules
.long -(0x1BADB002 + (1 << 2))
.long multiboot_flags
.long -(0x1BADB002 + multiboot_flags)
.long 0
.long 0
.long 0
.long 0
.long 0
.long V2P(multiboot_start)
.long V2P(g_kernel_start)
.long V2P(g_kernel_bss_start)
.long V2P(g_kernel_end)
.long V2P(_start)
.long 0
.long FB_WIDTH
.long FB_HEIGHT
.long FB_BPP
multiboot_end:
.align 8
.section .multiboot2, "aw"
multiboot2_start:
.long 0xE85250D6
.long 0
@@ -66,7 +68,6 @@ multiboot2_start:
multiboot2_end:
.section .bananboot, "aw"
.align 8
bananboot_start:
.long 0xBABAB007
.long -(0xBABAB007 + FB_WIDTH + FB_HEIGHT + FB_BPP)
@@ -96,27 +97,25 @@ bananboot_end:
.align 4096
boot_pml4:
.quad V2P(boot_pdpt_lo) + (PG_READ_WRITE | PG_PRESENT)
.rept 510
.quad 0
.endr
.skip 510 * 8
.quad V2P(boot_pdpt_hi) + (PG_READ_WRITE | PG_PRESENT)
boot_pdpt_lo:
.quad V2P(boot_pd) + (PG_READ_WRITE | PG_PRESENT)
.rept 511
.quad 0
.endr
.skip 511 * 8
boot_pdpt_hi:
.rept 510
.quad 0
.endr
.skip 510 * 8
.quad V2P(boot_pd) + (PG_READ_WRITE | PG_PRESENT)
.quad 0
.skip 8
boot_pd:
.set i, 0
.rept 512
.rept 511
.quad i + (PG_PAGE_SIZE | PG_READ_WRITE | PG_PRESENT)
.set i, i + 0x200000
.endr
.quad V2P(g_boot_fast_page_pt) + (PG_READ_WRITE | PG_PRESENT)
.global g_boot_fast_page_pt
g_boot_fast_page_pt:
.skip 512 * 8
boot_gdt:
.quad 0x0000000000000000 # null descriptor
@@ -272,7 +271,7 @@ system_halt:
jmp 1b
#define AP_V2P(vaddr) ((vaddr) - ap_trampoline + 0xF000)
#define AP_REL(vaddr) ((vaddr) - ap_trampoline + 0xF000)
.section .ap_init, "ax"
@@ -282,21 +281,27 @@ ap_trampoline:
jmp 1f
.align 8
ap_stack_ptr:
.skip 4
ap_stack_loaded:
.skip 1
ap_stack_paddr:
.skip 8
ap_stack_vaddr:
.skip 8
ap_prepare_paging:
.skip 8
ap_page_table:
.skip 8
ap_ready:
.skip 8
1: cli; cld
ljmpl $0x00, $AP_V2P(ap_cs_clear)
ljmpl $0x00, $AP_REL(ap_cs_clear)
ap_cs_clear:
# load ap gdt and enter protected mode
lgdt AP_V2P(ap_gdtr)
lgdt AP_REL(ap_gdtr)
movl %cr0, %eax
orb $1, %al
movl %eax, %cr0
ljmpl $0x08, $AP_V2P(ap_protected_mode)
ljmpl $0x08, $AP_REL(ap_protected_mode)
.code32
ap_protected_mode:
@@ -305,8 +310,7 @@ ap_protected_mode:
movw %ax, %ss
movw %ax, %es
movl AP_V2P(ap_stack_ptr), %esp
movb $1, AP_V2P(ap_stack_loaded)
movl AP_REL(ap_stack_paddr), %esp
leal V2P(enable_sse), %ecx; call *%ecx
leal V2P(enable_tsc), %ecx; call *%ecx
@@ -314,28 +318,34 @@ ap_protected_mode:
# load boot gdt and enter long mode
lgdt V2P(boot_gdtr)
ljmpl $0x08, $AP_V2P(ap_long_mode)
ljmpl $0x08, $AP_REL(ap_long_mode)
.code64
ap_long_mode:
# move stack pointer to higher half
movl %esp, %esp
addq $KERNEL_OFFSET, %rsp
movq $ap_higher_half, %rax
jmp *%rax
ap_higher_half:
movq AP_REL(ap_prepare_paging), %rax
call *%rax
# load AP's initial values
movq AP_REL(ap_stack_vaddr), %rsp
movq AP_REL(ap_page_table), %rax
movq $1, AP_REL(ap_ready)
movq %rax, %cr3
# clear rbp for stacktrace
xorq %rbp, %rbp
xorb %al, %al
1: pause
cmpb %al, g_ap_startup_done
jz 1b
cmpb $0, g_ap_startup_done
je 1b
lock incb g_ap_running_count
# jump to ap_main in higher half
movabsq $ap_main, %rcx
call *%rcx
jmp V2P(system_halt)
call ap_main
jmp system_halt
ap_gdt:
.quad 0x0000000000000000 # null descriptor


@@ -1,12 +1,4 @@
.macro swapgs_if_necessary, n
testb $3, \n(%rsp)
jz 1f; jnp 1f
swapgs
1:
.endm
.macro pushaq, n
swapgs_if_necessary \n
.macro intr_header, n
pushq %rax
pushq %rcx
pushq %rdx
@@ -22,10 +14,18 @@
pushq %r13
pushq %r14
pushq %r15
testb $3, \n+15*8(%rsp)
jz 1f
swapgs
1: cld
.endm
.macro popaq, n
popq %r15
.macro intr_footer, n
testb $3, \n+15*8(%rsp)
jz 1f
call cpp_check_signal
swapgs
1: popq %r15
popq %r14
popq %r13
popq %r12
@@ -40,12 +40,10 @@
popq %rdx
popq %rcx
popq %rax
swapgs_if_necessary \n
.endm
isr_stub:
pushaq 24
cld
intr_header 24
movq %cr0, %rax; pushq %rax
movq %cr2, %rax; pushq %rax
movq %cr3, %rax; pushq %rax
@@ -58,33 +56,33 @@ isr_stub:
call cpp_isr_handler
addq $32, %rsp
popaq 24
intr_footer 24
addq $16, %rsp
iretq
irq_stub:
pushaq 24
cld
intr_header 24
xorq %rbp, %rbp
movq 120(%rsp), %rdi # irq number
call cpp_irq_handler
popaq 24
intr_footer 24
addq $16, %rsp
iretq
.global asm_ipi_handler
asm_ipi_handler:
pushaq 8
cld
intr_header 8
xorq %rbp, %rbp
call cpp_ipi_handler
popaq 8
intr_footer 8
iretq
.global asm_timer_handler
asm_timer_handler:
pushaq 8
cld
intr_header 8
xorq %rbp, %rbp
call cpp_timer_handler
popaq 8
intr_footer 8
iretq
.macro isr n


@@ -11,20 +11,21 @@ SECTIONS
{
g_kernel_execute_start = .;
*(.multiboot)
*(.multiboot2)
*(.bananboot)
*(.text.*)
}
.ap_init ALIGN(4K) : AT(ADDR(.ap_init) - KERNEL_OFFSET)
{
g_ap_init_addr = .;
*(.ap_init)
g_kernel_execute_end = .;
}
.userspace ALIGN(4K) : AT(ADDR(.userspace) - KERNEL_OFFSET)
{
g_userspace_start = .;
*(.userspace)
g_userspace_end = .;
g_kernel_execute_end = .;
}
.ap_init ALIGN(4K) : AT(ADDR(.ap_init) - KERNEL_OFFSET)
{
g_ap_init_addr = .;
*(.ap_init)
}
.rodata ALIGN(4K) : AT(ADDR(.rodata) - KERNEL_OFFSET)
{
@@ -43,6 +44,7 @@ SECTIONS
}
.bss ALIGN(4K) : AT(ADDR(.bss) - KERNEL_OFFSET)
{
g_kernel_bss_start = .;
*(COMMON)
*(.bss)
g_kernel_writable_end = .;


@@ -12,7 +12,7 @@ namespace Kernel::API
struct SharedPage
{
uint8_t __sequence[0x100];
uint16_t gdt_cpu_offset;
uint32_t features;


@@ -1,44 +1,34 @@
#pragma once
#include <kernel/Attributes.h>
#include <kernel/IDT.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <BAN/MacroUtils.h>
namespace Kernel
{
ALWAYS_INLINE long syscall(int syscall, uintptr_t arg1 = 0, uintptr_t arg2 = 0, uintptr_t arg3 = 0, uintptr_t arg4 = 0, uintptr_t arg5 = 0)
{
long ret;
#if ARCH(x86_64)
register uintptr_t r10 asm("r10") = arg3;
register uintptr_t r8 asm( "r8") = arg4;
register uintptr_t r9 asm( "r9") = arg5;
asm volatile(
"syscall"
: "=a"(ret)
, "+D"(syscall)
, "+S"(arg1)
, "+d"(arg2)
, "+r"(r10)
, "+r"(r8)
, "+r"(r9)
:: "rcx", "r11", "memory");
#elif ARCH(i686)
asm volatile(
"int %[irq]"
: "=a"(ret)
: [irq]"i"(static_cast<int>(IRQ_SYSCALL)) // WTF GCC 15
, "a"(syscall)
, "b"(arg1)
, "c"(arg2)
, "d"(arg3)
, "S"(arg4)
, "D"(arg5)
: "memory");
#if defined(__x86_64__)
#define _kas_instruction "syscall"
#define _kas_result rax
#define _kas_arguments rdi, rsi, rdx, r10, r8, r9
#define _kas_globbers rcx, rdx, rdi, rsi, r8, r9, r10, r11
#elif defined(__i686__)
#define _kas_instruction "int $0xF0"
#define _kas_result eax
#define _kas_arguments eax, ebx, ecx, edx, esi, edi
#define _kas_globbers
#endif
return ret;
}
}
#define _kas_argument_var(index, value) register long _kas_a##index asm(_ban_stringify(_ban_get(index, _kas_arguments))) = (long)(value);
#define _kas_dummy_var(index, value) register long _kas_d##index asm(#value);
#define _kas_input(index, _) "r"(_kas_a##index)
#define _kas_output(index, _) , "=r"(_kas_d##index)
#define _kas_globber(_, value) #value
#define _kas_syscall(...) ({ \
register long _kas_ret asm(_ban_stringify(_kas_result)); \
_ban_for_each(_kas_argument_var, __VA_ARGS__) \
_ban_for_each(_kas_dummy_var, _kas_globbers) \
asm volatile( \
_kas_instruction \
: "=r"(_kas_ret) _ban_for_each(_kas_output, _kas_globbers) \
: _ban_for_each_comma(_kas_input, __VA_ARGS__) \
: "cc", "memory"); \
(void)_kas_a0; /* require 1 argument */ \
_kas_ret; \
})


@@ -6,7 +6,7 @@
namespace Kernel
{
class AC97AudioController : public AudioController, public Interruptable
class AC97AudioController final : public AudioController, public Interruptable
{
public:
static BAN::ErrorOr<void> create(PCI::Device& pci_device);
@@ -23,11 +23,15 @@ namespace Kernel
uint32_t get_current_pin() const override { return 0; }
BAN::ErrorOr<void> set_current_pin(uint32_t pin) override { if (pin != 0) return BAN::Error::from_errno(EINVAL); return {}; }
BAN::ErrorOr<void> set_volume_mdB(int32_t) override;
private:
AC97AudioController(PCI::Device& pci_device)
: m_pci_device(pci_device)
{ }
uint32_t get_volume_data() const;
BAN::ErrorOr<void> initialize();
BAN::ErrorOr<void> initialize_bld();
BAN::ErrorOr<void> initialize_interrupts();


@@ -1,8 +1,11 @@
#pragma once
#include <kernel/Device/Device.h>
#include <kernel/Memory/ByteRingBuffer.h>
#include <kernel/PCI.h>
#include <sys/ioctl.h>
namespace Kernel
{
@@ -16,6 +19,7 @@ namespace Kernel
protected:
AudioController();
BAN::ErrorOr<void> initialize();
virtual void handle_new_data() = 0;
@@ -26,8 +30,10 @@ namespace Kernel
virtual uint32_t get_current_pin() const = 0;
virtual BAN::ErrorOr<void> set_current_pin(uint32_t) = 0;
virtual BAN::ErrorOr<void> set_volume_mdB(int32_t) = 0;
bool can_read_impl() const override { return false; }
bool can_write_impl() const override { SpinLockGuard _(m_spinlock); return m_sample_data_size < m_sample_data_capacity; }
bool can_write_impl() const override { SpinLockGuard _(m_spinlock); return !m_sample_data->full(); }
bool has_error_impl() const override { return false; }
bool has_hungup_impl() const override { return false; }
@@ -40,9 +46,9 @@ namespace Kernel
mutable SpinLock m_spinlock;
static constexpr size_t m_sample_data_capacity = 1 << 20;
uint8_t m_sample_data[m_sample_data_capacity];
size_t m_sample_data_head { 0 };
size_t m_sample_data_size { 0 };
BAN::UniqPtr<ByteRingBuffer> m_sample_data;
snd_volume_info m_volume_info {};
private:
const dev_t m_rdev;


@@ -8,7 +8,7 @@ namespace Kernel
class HDAudioController;
class HDAudioFunctionGroup : public AudioController
class HDAudioFunctionGroup final : public AudioController
{
public:
static BAN::ErrorOr<BAN::RefPtr<HDAudioFunctionGroup>> create(BAN::RefPtr<HDAudioController>, uint8_t cid, HDAudio::AFGNode&&);
@@ -24,6 +24,8 @@ namespace Kernel
uint32_t get_current_pin() const override;
BAN::ErrorOr<void> set_current_pin(uint32_t) override;
BAN::ErrorOr<void> set_volume_mdB(int32_t) override;
void handle_new_data() override;
private:
@@ -46,15 +48,12 @@ namespace Kernel
BAN::ErrorOr<void> recurse_output_paths(const HDAudio::AFGWidget& widget, BAN::Vector<const HDAudio::AFGWidget*>& path);
uint16_t get_format_data() const;
uint16_t get_volume_data() const;
size_t bdl_offset() const;
void queue_bdl_data();
private:
static constexpr size_t m_max_path_length = 16;
// use 6 512 sample BDL entries
// each entry is ~10.7 ms at 48 kHz
// -> total buffered audio is 64 ms
@@ -66,6 +65,7 @@ namespace Kernel
const uint8_t m_cid;
BAN::Vector<BAN::Vector<const HDAudio::AFGWidget*>> m_output_paths;
BAN::Vector<const HDAudio::AFGWidget*> m_output_pins;
size_t m_output_path_index { SIZE_MAX };
uint8_t m_stream_id { 0xFF };


@@ -50,9 +50,21 @@ namespace Kernel::HDAudio
{
bool input;
bool output;
bool display; // HDMI or DP
uint32_t config;
} pin_complex;
};
struct Amplifier
{
uint8_t offset;
uint8_t num_steps;
uint8_t step_size;
bool mute;
};
BAN::Optional<Amplifier> output_amplifier;
BAN::Vector<uint16_t> connections;
};


@@ -11,6 +11,7 @@ namespace Kernel::HDAudio
VMIN = 0x02,
VMAJ = 0x03,
GCTL = 0x08,
STATESTS = 0x0E,
INTCTL = 0x20,
INTSTS = 0x24,


@@ -44,7 +44,7 @@ namespace Kernel
struct BootModule
{
paddr_t start;
size_t size;
uint64_t size;
};
struct BootInfo


@@ -37,6 +37,8 @@ namespace Kernel
virtual BAN::ErrorOr<size_t> read_impl(off_t, BAN::ByteSpan) override;
virtual BAN::ErrorOr<size_t> write_impl(off_t, BAN::ConstByteSpan) override;
BAN::ErrorOr<long> ioctl_impl(int cmd, void* arg) override;
virtual bool can_read_impl() const override { return true; }
virtual bool can_write_impl() const override { return true; }
virtual bool has_error_impl() const override { return false; }


@@ -1,12 +1,8 @@
#pragma once
#include <BAN/Array.h>
#include <BAN/CircularQueue.h>
#include <BAN/HashMap.h>
#include <BAN/HashSet.h>
#include <kernel/FS/Inode.h>
#include <limits.h>
#include <sys/epoll.h>
namespace Kernel
@@ -53,57 +49,52 @@ namespace Kernel
BAN::ErrorOr<void> fsync_impl() override { return {}; }
private:
struct InodeRefPtrHash
{
BAN::hash_t operator()(const BAN::RefPtr<Inode>& inode)
{
return BAN::hash<const Inode*>()(inode.ptr());
}
};
struct ListenEventList
{
BAN::Array<epoll_event, OPEN_MAX> events;
uint32_t bitmap[(OPEN_MAX + 31) / 32] {};
ListenEventList() = default;
ListenEventList(const ListenEventList&) = delete;
ListenEventList& operator=(const ListenEventList&) = delete;
ListenEventList(ListenEventList&& other)
: events(BAN::move(other.events))
{}
ListenEventList& operator=(ListenEventList&& other)
{
events = BAN::move(other.events);
return *this;
}
BAN::HashMap<int, epoll_event> events;
bool has_fd(int fd) const
{
// For some reason having (fd < 0 || ...) makes GCC 15.1.0
// think bitmap access can be out of bounds...
if (static_cast<size_t>(fd) >= events.size())
return false;
return bitmap[fd / 32] & (1u << (fd % 32));
return events.contains(fd);
}
bool empty() const
{
for (auto val : bitmap)
if (val != 0)
return false;
return true;
return events.empty();
}
void add_fd(int fd, epoll_event event)
BAN::ErrorOr<void> add_fd(int fd, epoll_event event)
{
ASSERT(!has_fd(fd));
bitmap[fd / 32] |= (1u << (fd % 32));
events[fd] = event;
TRY(events.insert(fd, event));
return {};
}
void remove_fd(int fd)
{
ASSERT(has_fd(fd));
bitmap[fd / 32] &= ~(1u << (fd % 32));
events[fd] = {};
events.remove(fd);
}
};
private:
ThreadBlocker m_thread_blocker;
SpinLock m_ready_lock;
BAN::HashMap<BAN::RefPtr<Inode>, uint32_t, InodeRefPtrHash> m_ready_events;
BAN::HashMap<BAN::RefPtr<Inode>, uint32_t, InodeRefPtrHash> m_processing_events;
BAN::HashMap<BAN::RefPtr<Inode>, ListenEventList, InodeRefPtrHash> m_listening_events;
BAN::HashMap<BAN::RefPtr<Inode>, uint32_t> m_ready_events;
BAN::HashMap<BAN::RefPtr<Inode>, uint32_t> m_processing_events;
BAN::HashMap<BAN::RefPtr<Inode>, ListenEventList> m_listening_events;
};
}


@@ -1,12 +1,11 @@
#pragma once
#include <kernel/BootInfo.h>
#include <kernel/FS/FileSystem.h>
#include <kernel/FS/Inode.h>
namespace Kernel
{
bool is_ustar_boot_module(const BootModule&);
BAN::ErrorOr<void> unpack_boot_module_into_filesystem(BAN::RefPtr<FileSystem>, const BootModule&);
BAN::ErrorOr<bool> unpack_boot_module_into_directory(BAN::RefPtr<Inode>, const BootModule&);
}


@@ -133,6 +133,12 @@ namespace Kernel
void set_gsbase(uintptr_t addr);
#endif
static uint16_t cpu_index_offset() { return m_cpu_index_offset; }
void set_cpu_index(uint8_t index)
{
write_entry(m_cpu_index_offset, 0, index, 0xF2, 0x4);
}
private:
GDT() = default;
@@ -151,11 +157,13 @@ namespace Kernel
private:
#if ARCH(x86_64)
BAN::Array<SegmentDescriptor, 8> m_gdt; // null, kernel code, kernel data, user code (32 bit), user data, user code (64 bit), tss low, tss high
static constexpr uint16_t m_tss_offset = 0x30;
BAN::Array<SegmentDescriptor, 9> m_gdt; // null, kernel code, kernel data, user code (32 bit), user data, user code (64 bit), cpu-index, tss low, tss high
static constexpr uint16_t m_cpu_index_offset = 0x30;
static constexpr uint16_t m_tss_offset = 0x38;
#elif ARCH(i686)
BAN::Array<SegmentDescriptor, 9> m_gdt; // null, kernel code, kernel data, user code, user data, processor data, fsbase, gsbase, tss
static constexpr uint16_t m_tss_offset = 0x40;
BAN::Array<SegmentDescriptor, 10> m_gdt; // null, kernel code, kernel data, user code, user data, processor data, fsbase, gsbase, cpu-index, tss
static constexpr uint16_t m_cpu_index_offset = 0x40;
static constexpr uint16_t m_tss_offset = 0x48;
#endif
TaskStateSegment m_tss;
const GDTR m_gdtr {


@@ -20,7 +20,7 @@ namespace Kernel
constexpr uint8_t IRQ_MSI_BASE = 0x80;
constexpr uint8_t IRQ_MSI_END = 0xF0;
#if ARCH(i686)
constexpr uint8_t IRQ_SYSCALL = 0xF0;
constexpr uint8_t IRQ_SYSCALL = 0xF0; // hard coded in kernel/API/Syscall.h
#endif
constexpr uint8_t IRQ_IPI = 0xF1;
constexpr uint8_t IRQ_TIMER = 0xF2;


@@ -51,7 +51,7 @@ namespace Kernel
uint8_t back() const
{
ASSERT(!empty());
return reinterpret_cast<const uint8_t*>(m_vaddr)[m_tail + m_size];
return reinterpret_cast<const uint8_t*>(m_vaddr)[m_tail + m_size - 1];
}
bool empty() const { return m_size == 0; }


@@ -8,7 +8,7 @@ namespace Kernel
class DMARegion
{
public:
static BAN::ErrorOr<BAN::UniqPtr<DMARegion>> create(size_t size);
static BAN::ErrorOr<BAN::UniqPtr<DMARegion>> create(size_t size, PageTable::MemoryType type = PageTable::MemoryType::Uncached);
~DMARegion();
size_t size() const { return m_size; }


@@ -18,6 +18,8 @@ namespace Kernel
static void initialize();
static Heap& get();
void release_boot_modules();
paddr_t take_free_page();
void release_page(paddr_t);


@@ -28,6 +28,20 @@ namespace Kernel
private:
MemoryBackedRegion(PageTable&, size_t size, Type, PageTable::flags_t, int status_flags);
private:
struct PhysicalPage
{
PhysicalPage(paddr_t paddr)
: paddr(paddr)
{ }
~PhysicalPage();
BAN::Atomic<uint32_t> ref_count { 1 };
const paddr_t paddr;
};
BAN::Vector<PhysicalPage*> m_physical_pages;
Mutex m_mutex;
};
}


@@ -9,12 +9,6 @@
namespace Kernel
{
struct AddressRange
{
vaddr_t start;
vaddr_t end;
};
class MemoryRegion
{
BAN_NON_COPYABLE(MemoryRegion);


@@ -46,13 +46,22 @@ namespace Kernel
};
public:
static void initialize_pre_heap();
static void initialize_post_heap();
static void initialize_fast_page();
static void initialize_and_load();
static void enable_cpu_features();
static PageTable& kernel();
static PageTable& current() { return *reinterpret_cast<PageTable*>(Processor::get_current_page_table()); }
static constexpr vaddr_t fast_page() { return KERNEL_OFFSET; }
static constexpr vaddr_t fast_page()
{
#if ARCH(x86_64)
return 0xffffffffbfe00000;
#elif ARCH(i686)
return 0xffe00000;
#endif
}
template<with_fast_page_callback F>
static void with_fast_page(paddr_t paddr, F callback)
@@ -100,40 +109,42 @@ namespace Kernel
static BAN::ErrorOr<PageTable*> create_userspace();
~PageTable();
void unmap_page(vaddr_t, bool send_smp_message = true);
void unmap_page(vaddr_t, bool invalidate = true);
void unmap_range(vaddr_t, size_t bytes);
void map_page_at(paddr_t, vaddr_t, flags_t, MemoryType = MemoryType::Normal, bool send_smp_message = true);
void map_page_at(paddr_t, vaddr_t, flags_t, MemoryType = MemoryType::Normal, bool invalidate = true);
void map_range_at(paddr_t, vaddr_t, size_t bytes, flags_t, MemoryType = MemoryType::Normal);
void remove_writable_from_range(vaddr_t, size_t);
paddr_t physical_address_of(vaddr_t) const;
flags_t get_page_flags(vaddr_t) const;
bool is_page_free(vaddr_t) const;
bool is_range_free(vaddr_t, size_t bytes) const;
bool reserve_page(vaddr_t, bool only_free = true, bool send_smp_message = true);
bool reserve_page(vaddr_t, bool only_free = true, bool invalidate = true);
bool reserve_range(vaddr_t, size_t bytes, bool only_free = true);
vaddr_t reserve_free_page(vaddr_t first_address, vaddr_t last_address = UINTPTR_MAX);
vaddr_t reserve_free_contiguous_pages(size_t page_count, vaddr_t first_address, vaddr_t last_address = UINTPTR_MAX);
void load();
void initial_load();
void invalidate_page(vaddr_t addr, bool send_smp_message) { invalidate_range(addr, 1, send_smp_message); }
void invalidate_range(vaddr_t addr, size_t pages, bool send_smp_message);
InterruptState lock() const { return m_lock.lock(); }
void unlock(InterruptState state) const { m_lock.unlock(state); }
paddr_t paddr() const { return m_highest_paging_struct; }
void debug_dump();
private:
PageTable() = default;
uint64_t get_page_data(vaddr_t) const;
void initialize_kernel();
void map_kernel_memory();
void prepare_fast_page();
void invalidate(vaddr_t, bool send_smp_message);
static void map_fast_page(paddr_t);
static void unmap_fast_page();


@@ -10,7 +10,7 @@ namespace Kernel
class PhysicalRange
{
public:
PhysicalRange(paddr_t, size_t);
PhysicalRange(paddr_t, uint64_t);
paddr_t reserve_page();
void release_page(paddr_t);

View File

@@ -4,7 +4,7 @@
#if ARCH(x86_64)
#define KERNEL_OFFSET 0xFFFFFFFF80000000
#define USERSPACE_END 0xFFFF800000000000
#define USERSPACE_END 0x800000000000
#elif ARCH(i686)
#define KERNEL_OFFSET 0xC0000000
#define USERSPACE_END 0xC0000000
@@ -23,4 +23,10 @@ namespace Kernel
using vaddr_t = uintptr_t;
using paddr_t = uint64_t;
struct AddressRange
{
vaddr_t start;
vaddr_t end;
};
}

View File

@@ -14,44 +14,27 @@ namespace Kernel
BAN_NON_MOVABLE(VirtualRange);
public:
// Create virtual range to fixed virtual address
static BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> create_to_vaddr(PageTable&, vaddr_t, size_t, PageTable::flags_t flags, bool preallocate_pages, bool add_guard_pages);
// Create virtual range to virtual address range
static BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> create_to_vaddr_range(PageTable&, vaddr_t vaddr_start, vaddr_t vaddr_end, size_t, PageTable::flags_t flags, bool preallocate_pages, bool add_guard_pages);
static BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> create_to_vaddr_range(PageTable&, AddressRange address_range, size_t, PageTable::flags_t flags, bool add_guard_pages);
~VirtualRange();
BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> clone(PageTable&);
vaddr_t vaddr() const { return m_vaddr + (m_has_guard_pages ? PAGE_SIZE : 0); }
size_t size() const { return m_size - (m_has_guard_pages ? 2 * PAGE_SIZE : 0); }
PageTable::flags_t flags() const { return m_flags; }
paddr_t paddr_of(vaddr_t vaddr) const
{
ASSERT(vaddr % PAGE_SIZE == 0);
const size_t index = (vaddr - this->vaddr()) / PAGE_SIZE;
ASSERT(index < m_paddrs.size());
const paddr_t paddr = m_paddrs[index];
ASSERT(paddr);
return paddr;
}
paddr_t paddr_of(vaddr_t vaddr) const { return m_page_table.physical_address_of(vaddr & PAGE_ADDR_MASK); }
bool contains(vaddr_t address) const { return vaddr() <= address && address < vaddr() + size(); }
BAN::ErrorOr<bool> allocate_page_for_demand_paging(vaddr_t address);
private:
VirtualRange(PageTable&, bool preallocated, bool has_guard_pages, vaddr_t, size_t, PageTable::flags_t);
VirtualRange(PageTable&, bool has_guard_pages, vaddr_t, size_t, PageTable::flags_t);
BAN::ErrorOr<void> initialize();
private:
PageTable& m_page_table;
const bool m_preallocated;
const bool m_has_guard_pages;
const vaddr_t m_vaddr;
const size_t m_size;
const PageTable::flags_t m_flags;
BAN::Vector<paddr_t> m_paddrs;
SpinLock m_lock;
friend class BAN::UniqPtr<VirtualRange>;

View File

@@ -13,4 +13,3 @@ void* kmalloc(size_t size, size_t align, bool force_identity_map = false);
void kfree(void*);
BAN::Optional<Kernel::paddr_t> kmalloc_paddr_of(Kernel::vaddr_t);
BAN::Optional<Kernel::vaddr_t> kmalloc_vaddr_of(Kernel::paddr_t);

View File

@@ -44,6 +44,10 @@ namespace Kernel
void close_all();
void close_cloexec();
bool is_cloexec(int fd);
void add_cloexec(int fd);
void remove_cloexec(int fd);
BAN::ErrorOr<void> flock(int fd, int op);
BAN::ErrorOr<size_t> read(int fd, BAN::ByteSpan);
@@ -84,27 +88,6 @@ namespace Kernel
friend class BAN::RefPtr<OpenFileDescription>;
};
struct OpenFile
{
OpenFile() = default;
OpenFile(BAN::RefPtr<OpenFileDescription> description, int descriptor_flags)
: description(BAN::move(description))
, descriptor_flags(descriptor_flags)
{ }
BAN::RefPtr<Inode> inode() const { ASSERT(description); return description->file.inode; }
BAN::StringView path() const { ASSERT(description); return description->file.canonical_path.sv(); }
int& status_flags() { ASSERT(description); return description->status_flags; }
const int& status_flags() const { ASSERT(description); return description->status_flags; }
off_t& offset() { ASSERT(description); return description->offset; }
const off_t& offset() const { ASSERT(description); return description->offset; }
BAN::RefPtr<OpenFileDescription> description;
int descriptor_flags { 0 };
};
BAN::ErrorOr<void> validate_fd(int) const;
BAN::ErrorOr<int> get_free_fd() const;
BAN::ErrorOr<void> get_free_fd_pair(int fds[2]) const;
@@ -139,7 +122,8 @@ namespace Kernel
const Credentials& m_credentials;
mutable Mutex m_mutex;
BAN::Array<OpenFile, OPEN_MAX> m_open_files;
BAN::Array<BAN::RefPtr<OpenFileDescription>, OPEN_MAX> m_open_files;
BAN::Array<uint32_t, (OPEN_MAX + 31) / 32> m_cloexec_files {};
};
}

View File

@@ -2,6 +2,7 @@
#include <BAN/Atomic.h>
#include <BAN/ForwardList.h>
#include <BAN/Math.h>
#include <kernel/API/SharedPage.h>
#include <kernel/Arch.h>
@@ -58,6 +59,8 @@ namespace Kernel
static Processor& create(ProcessorID id);
static Processor& initialize();
void allocate_stack();
static ProcessorID current_id() { return read_gs_sized<ProcessorID>(offsetof(Processor, m_id)); }
static uint8_t current_index() { return read_gs_sized<uint8_t>(offsetof(Processor, m_index)); }
static ProcessorID id_from_index(size_t index);
@@ -100,11 +103,8 @@ namespace Kernel
handle_smp_messages();
}
static uintptr_t current_stack_bottom() { return read_gs_sized<uintptr_t>(offsetof(Processor, m_stack)); }
static uintptr_t current_stack_top() { return current_stack_bottom() + s_stack_size; }
uintptr_t stack_bottom() const { return reinterpret_cast<uintptr_t>(m_stack); }
uintptr_t stack_top() const { return stack_bottom() + s_stack_size; }
vaddr_t stack_top_vaddr() const { return m_stack_vaddr + s_stack_size; }
paddr_t stack_top_paddr() const { return m_stack_paddr + s_stack_size; }
static void set_thread_syscall_stack(vaddr_t vaddr) { write_gs_sized<vaddr_t>(offsetof(Processor, m_thread_syscall_stack), vaddr); }
@@ -140,11 +140,7 @@ namespace Kernel
static void disable_sse()
{
uintptr_t dummy;
#if ARCH(x86_64)
asm volatile("movq %%cr0, %0; orq $0x08, %0; movq %0, %%cr0" : "=r"(dummy));
#elif ARCH(i686)
asm volatile("movl %%cr0, %0; orl $0x08, %0; movl %0, %%cr0" : "=r"(dummy));
#endif
asm volatile("mov %%cr0, %0; or $0x08, %0; mov %0, %%cr0" : "=r"(dummy));
}
static void enable_sse()
@@ -169,35 +165,17 @@ namespace Kernel
}
template<typename T>
static T read_gs_sized(uintptr_t offset) requires(sizeof(T) <= 8)
static T read_gs_sized(uintptr_t offset) requires(sizeof(T) <= 8 && BAN::Math::is_power_of_two(sizeof(T)))
{
#define __ASM_INPUT(operation) asm volatile(operation " %%gs:%a[offset], %[result]" : [result]"=r"(result) : [offset]"ir"(offset))
T result;
if constexpr(sizeof(T) == 8)
__ASM_INPUT("movq");
if constexpr(sizeof(T) == 4)
__ASM_INPUT("movl");
if constexpr(sizeof(T) == 2)
__ASM_INPUT("movw");
if constexpr(sizeof(T) == 1)
__ASM_INPUT("movb");
return result;
#undef __ASM_INPUT
T value;
asm volatile("mov %%gs:%a[offset], %[value]" : [value]"=r"(value) : [offset]"ir"(offset));
return value;
}
template<typename T>
static void write_gs_sized(uintptr_t offset, T value) requires(sizeof(T) <= 8)
static void write_gs_sized(uintptr_t offset, T value) requires(sizeof(T) <= 8 && BAN::Math::is_power_of_two(sizeof(T)))
{
#define __ASM_INPUT(operation) asm volatile(operation " %[value], %%gs:%a[offset]" :: [value]"r"(value), [offset]"ir"(offset) : "memory")
if constexpr(sizeof(T) == 8)
__ASM_INPUT("movq");
if constexpr(sizeof(T) == 4)
__ASM_INPUT("movl");
if constexpr(sizeof(T) == 2)
__ASM_INPUT("movw");
if constexpr(sizeof(T) == 1)
__ASM_INPUT("movb");
#undef __ASM_INPUT
asm volatile("mov %[value], %%gs:%a[offset]" :: [value]"r"(value), [offset]"ir"(offset) : "memory");
}
private:
@@ -215,8 +193,9 @@ namespace Kernel
Thread* m_sse_thread { nullptr };
static constexpr size_t s_stack_size { 4096 };
void* m_stack { nullptr };
static constexpr size_t s_stack_size { PAGE_SIZE };
vaddr_t m_stack_vaddr { 0 };
paddr_t m_stack_paddr { 0 };
GDT* m_gdt { nullptr };
IDT* m_idt { nullptr };

View File

@@ -101,7 +101,7 @@ namespace Kernel
InterruptStack* m_interrupt_stack { nullptr };
InterruptRegisters* m_interrupt_registers { nullptr };
uint64_t m_last_reschedule_ns { 0 };
uint64_t m_next_reschedule_ns { 0 };
uint64_t m_last_load_balance_ns { 0 };
struct ThreadInfo

View File

@@ -22,8 +22,9 @@ namespace Kernel
uint64_t wake_time_ns { static_cast<uint64_t>(-1) };
SpinLock blocker_lock;
ThreadBlocker* blocker { nullptr };
BAN::Atomic<ThreadBlocker*> blocker { nullptr };
SchedulerQueueNode* block_chain_prev { nullptr };
SchedulerQueueNode* block_chain_next { nullptr };
ProcessorID processor_id { PROCESSOR_NONE };
bool blocked { false };

View File

@@ -38,8 +38,12 @@ namespace Kernel
// stack overflows occurred on some machines with an 8 page stack
static constexpr size_t kernel_stack_size { PAGE_SIZE * 16 };
// TODO: userspace stack is hard limited to 32 MiB, maybe make this dynamic?
// TODO: userspace stack size is hard limited, maybe make this dynamic?
#if ARCH(x86_64)
static constexpr size_t userspace_stack_size { 32 << 20 };
#elif ARCH(i686)
static constexpr size_t userspace_stack_size { 4 << 20 };
#endif
public:
static BAN::ErrorOr<Thread*> create_kernel(entry_t, void*);
@@ -56,12 +60,10 @@ namespace Kernel
// Returns true if the thread is going to trigger a signal
bool is_interrupted_by_signal(bool skip_stop_and_cont = false) const;
// Returns true if pending signal can be added to thread
bool can_add_signal_to_execute() const;
bool will_execute_signal() const;
// Returns true if handled signal had SA_RESTART
bool handle_signal(int signal = 0, const siginfo_t& signal_info = {});
void add_signal(int signal, const siginfo_t& info);
bool handle_signal_if_interrupted();
bool handle_signal(int signal, const siginfo_t&);
void add_signal(int signal, const siginfo_t&);
void set_suspend_signal_mask(uint64_t sigmask);
static bool is_stopping_signal(int signal);
@@ -153,6 +155,16 @@ namespace Kernel
bool currently_on_alternate_stack() const;
struct signal_handle_info_t
{
vaddr_t handler;
vaddr_t stack_top;
uint64_t restore_sigmask;
bool has_sa_restart;
};
signal_handle_info_t remove_signal_and_get_info(int signal);
void handle_signal_impl(int signal, const siginfo_t&, const signal_handle_info_t&);
private:
// NOTE: this is the first member to force it to be destructed last
// {kernel,userspace}_stack has to be destroyed before the page table

View File

@@ -15,7 +15,6 @@ namespace Kernel
void block_with_wake_time_ns(uint64_t wake_time_ns, BaseMutex*);
void unblock();
void block_with_timeout_ms(uint64_t timeout_ms, BaseMutex* mutex)
{
ASSERT(!BAN::Math::will_multiplication_overflow<uint64_t>(timeout_ms, 1'000'000));
@@ -29,14 +28,12 @@ namespace Kernel
private:
void add_thread_to_block_queue(SchedulerQueue::Node*);
void remove_blocked_thread(SchedulerQueue::Node*);
void remove_thread_from_block_queue(SchedulerQueue::Node*);
private:
SchedulerQueue::Node* m_block_chain { nullptr };
SpinLock m_lock;
SchedulerQueue::Node* m_block_chain[32] {};
size_t m_block_chain_length { 0 };
friend class Scheduler;
};

View File

@@ -531,11 +531,8 @@ acpi_release_global_lock:
return BAN::Error::from_errno(EFAULT);
}
if (!s5_node.as.package->elements[0].resolved || !s5_node.as.package->elements[1].resolved)
{
dwarnln("TODO: lazy evaluate package \\_S5 elements");
return BAN::Error::from_errno(ENOTSUP);
}
TRY(AML::resolve_package_element(s5_node.as.package->elements[0], true));
TRY(AML::resolve_package_element(s5_node.as.package->elements[1], true));
auto slp_typa_node = TRY(AML::convert_node(TRY(s5_node.as.package->elements[0].value.node->copy()), AML::ConvInteger, sizeof(uint64_t)));
auto slp_typb_node = TRY(AML::convert_node(TRY(s5_node.as.package->elements[1].value.node->copy()), AML::ConvInteger, sizeof(uint64_t)));

View File

@@ -293,9 +293,25 @@ namespace Kernel
dprintln("Trying to enable processor (lapic id {})", processor.apic_id);
auto& proc = Kernel::Processor::create(ProcessorID(processor.apic_id));
proc.allocate_stack();
struct ap_init_info_t
{
uintptr_t stack_paddr;
uintptr_t stack_vaddr;
uintptr_t prepare_paging;
uintptr_t page_table;
uintptr_t ready;
};
PageTable::with_fast_page(ap_init_paddr, [&] {
PageTable::fast_page_as_sized<uint32_t>(2) = kmalloc_paddr_of(proc.stack_top()).value();
PageTable::fast_page_as_sized<uint8_t>(13) = 0;
PageTable::fast_page_as<ap_init_info_t>(8) = {
.stack_paddr = static_cast<uintptr_t>(proc.stack_top_paddr()),
.stack_vaddr = proc.stack_top_vaddr(),
.prepare_paging = reinterpret_cast<uintptr_t>(&PageTable::enable_cpu_features),
.page_table = static_cast<uintptr_t>(PageTable::kernel().paddr()),
.ready = 0,
};
});
write_to_local_apic(LAPIC_ERROR_REG, 0x00);
@@ -334,12 +350,9 @@ namespace Kernel
// give the processor up to 100 * 100 us + 200 us to boot
PageTable::with_fast_page(ap_init_paddr, [&] {
for (int i = 0; i < 100; i++)
{
if (__atomic_load_n(&PageTable::fast_page_as_sized<uint8_t>(13), __ATOMIC_SEQ_CST))
for (int i = 0; i < 100; i++, udelay(100))
if (__atomic_load_n(&PageTable::fast_page_as<ap_init_info_t>(8).ready, __ATOMIC_SEQ_CST))
break;
udelay(100);
}
});
initialized_aps++;

View File

@@ -118,6 +118,8 @@ namespace Kernel
BAN::ErrorOr<void> AC97AudioController::initialize()
{
TRY(AudioController::initialize());
m_pci_device.enable_bus_mastering();
m_mixer = TRY(m_pci_device.allocate_bar_region(0));
@@ -135,8 +137,27 @@ namespace Kernel
// Reset mixer to default values
m_mixer->write16(AudioMixerRegister::Reset, 0);
// Master volume 100%, no mute
m_mixer->write16(AudioMixerRegister::MasterVolume, 0x0000);
// Master volumes
m_mixer->write16(AudioMixerRegister::MasterVolume, 0x2020);
if (m_mixer->read16(AudioMixerRegister::MasterVolume) == 0x2020)
{
m_volume_info = {
.min_mdB = -94500,
.max_mdB = 0,
.step_mdB = 1500,
.mdB = 0,
};
}
else
{
m_volume_info = {
.min_mdB = -46500,
.max_mdB = 0,
.step_mdB = 1500,
.mdB = 0,
};
}
m_mixer->write16(AudioMixerRegister::MasterVolume, get_volume_data());
// PCM output volume left/right +0 db, no mute
m_mixer->write16(AudioMixerRegister::PCMOutVolume, 0x0808);
@@ -185,6 +206,19 @@ namespace Kernel
return {};
}
uint32_t AC97AudioController::get_volume_data() const
{
const uint32_t steps = (-m_volume_info.mdB + m_volume_info.step_mdB / 2) / m_volume_info.step_mdB;
return (steps << 8) | steps;
}
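The step calculation above maps a non-positive attenuation in millidecibels onto the AC'97 attenuation field, rounding to the nearest hardware step and duplicating the value into the left and right channel fields. The same arithmetic as a standalone sketch (the function name is illustrative, not from the driver):

```cpp
#include <cstdint>

// Convert a non-positive attenuation in millidecibels into an AC'97
// master-volume register value: the step count goes into both the
// left (bits 8+) and right (bits 0+) channel fields.
// step_mdB is the size of one hardware step (e.g. 1500 mdB = 1.5 dB).
uint32_t volume_register_from_mdB(int32_t mdB, int32_t step_mdB)
{
    // mdB is <= 0 (attenuation); adding step_mdB/2 rounds to the nearest step
    const uint32_t steps = (-mdB + step_mdB / 2) / step_mdB;
    return (steps << 8) | steps;
}
```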
BAN::ErrorOr<void> AC97AudioController::set_volume_mdB(int32_t mdB)
{
m_volume_info.mdB = BAN::Math::clamp(mdB, m_volume_info.min_mdB, m_volume_info.max_mdB);
m_mixer->write16(AudioMixerRegister::MasterVolume, get_volume_data());
return {};
}
void AC97AudioController::handle_new_data()
{
ASSERT(m_spinlock.current_processor_has_lock());
@@ -203,23 +237,22 @@ namespace Kernel
if (next_bld_head == m_bdl_tail)
break;
const size_t sample_data_tail = (m_sample_data_head + m_sample_data_capacity - m_sample_data_size) % m_sample_data_capacity;
const size_t max_memcpy = BAN::Math::min(m_sample_data_size, m_sample_data_capacity - sample_data_tail);
const size_t samples = BAN::Math::min(max_memcpy / 2, m_samples_per_entry);
if (samples == 0)
const size_t sample_frames = BAN::Math::min(m_sample_data->size() / get_channels() / sizeof(uint16_t), m_samples_per_entry / get_channels());
if (sample_frames == 0)
break;
const size_t copy_total_bytes = sample_frames * get_channels() * sizeof(uint16_t);
auto& entry = reinterpret_cast<AC97::BufferDescriptorListEntry*>(m_bdl_region->vaddr())[m_bdl_head];
entry.samples = samples;
entry.samples = sample_frames * get_channels();
entry.flags = (1 << 15);
memcpy(
reinterpret_cast<void*>(m_bdl_region->paddr_to_vaddr(entry.address)),
&m_sample_data[sample_data_tail],
samples * 2
m_sample_data->get_data().data(),
copy_total_bytes
);
m_sample_data_size -= samples * 2;
m_sample_data->pop(copy_total_bytes);
lvi = m_bdl_head;
m_bdl_head = next_bld_head;

View File

@@ -53,34 +53,28 @@ namespace Kernel
return {};
}
BAN::ErrorOr<void> AudioController::initialize()
{
m_sample_data = TRY(ByteRingBuffer::create(m_sample_data_capacity));
return {};
}
BAN::ErrorOr<size_t> AudioController::write_impl(off_t, BAN::ConstByteSpan buffer)
{
SpinLockGuard lock_guard(m_spinlock);
while (m_sample_data_size >= m_sample_data_capacity)
while (m_sample_data->full())
{
SpinLockGuardAsMutex smutex(lock_guard);
TRY(Thread::current().block_or_eintr_indefinite(m_sample_data_blocker, &smutex));
}
size_t nwritten = 0;
while (nwritten < buffer.size())
{
if (m_sample_data_size >= m_sample_data_capacity)
break;
const size_t max_memcpy = BAN::Math::min(m_sample_data_capacity - m_sample_data_size, m_sample_data_capacity - m_sample_data_head);
const size_t to_copy = BAN::Math::min(buffer.size() - nwritten, max_memcpy);
memcpy(m_sample_data + m_sample_data_head, buffer.data() + nwritten, to_copy);
nwritten += to_copy;
m_sample_data_head = (m_sample_data_head + to_copy) % m_sample_data_capacity;
m_sample_data_size += to_copy;
}
const size_t to_copy = BAN::Math::min(buffer.size(), m_sample_data->free());
m_sample_data->push(buffer.slice(0, to_copy));
handle_new_data();
return nwritten;
return to_copy;
}
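The rewritten `write_impl` delegates all index bookkeeping to a byte ring buffer with `full()`/`free()`/`size()`/`push()`/`pop()` operations. A minimal stand-in for such a buffer (a simplified sketch, not the kernel's `ByteRingBuffer`):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Fixed-capacity byte ring buffer: push appends at the tail,
// pop discards from the head.
class ByteRing
{
public:
    explicit ByteRing(size_t capacity) : m_data(capacity) {}
    size_t size() const { return m_size; }
    size_t free() const { return m_data.size() - m_size; }
    bool full() const { return m_size == m_data.size(); }
    // Copy at most free() bytes in; returns how many were accepted.
    size_t push(const uint8_t* src, size_t len)
    {
        const size_t to_copy = len < free() ? len : free();
        for (size_t i = 0; i < to_copy; i++)
            m_data[(m_head + m_size + i) % m_data.size()] = src[i];
        m_size += to_copy;
        return to_copy;
    }
    // Drop len bytes from the front (len must not exceed size()).
    void pop(size_t len)
    {
        m_head = (m_head + len) % m_data.size();
        m_size -= len;
    }
private:
    std::vector<uint8_t> m_data;
    size_t m_head { 0 };
    size_t m_size { 0 };
};
```

With this interface, the producer side reduces to `push(buffer, min(len, free()))` exactly as in the write path above.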
BAN::ErrorOr<long> AudioController::ioctl_impl(int cmd, void* arg)
@@ -97,9 +91,9 @@ namespace Kernel
case SND_GET_BUFFERSZ:
{
SpinLockGuard _(m_spinlock);
*static_cast<uint32_t*>(arg) = m_sample_data_size;
*static_cast<uint32_t*>(arg) = m_sample_data->size();
if (cmd == SND_RESET_BUFFER)
m_sample_data_size = 0;
m_sample_data->pop(m_sample_data->size());
return 0;
}
case SND_GET_TOTAL_PINS:
@@ -111,6 +105,12 @@ namespace Kernel
case SND_SET_PIN:
TRY(set_current_pin(*static_cast<uint32_t*>(arg)));
return 0;
case SND_GET_VOLUME_INFO:
*static_cast<snd_volume_info*>(arg) = m_volume_info;
return 0;
case SND_SET_VOLUME_MDB:
TRY(set_volume_mdB(*static_cast<int32_t*>(arg)));
return 0;
}
return CharacterDevice::ioctl_impl(cmd, arg);

View File

@@ -2,6 +2,8 @@
#include <kernel/Audio/HDAudio/Registers.h>
#include <kernel/FS/DevFS/FileSystem.h>
#include <BAN/Sort.h>
namespace Kernel
{
@@ -25,6 +27,8 @@ namespace Kernel
BAN::ErrorOr<void> HDAudioFunctionGroup::initialize()
{
TRY(AudioController::initialize());
if constexpr(DEBUG_HDAUDIO)
{
const auto widget_to_string =
@@ -50,19 +54,13 @@ namespace Kernel
{
if (widget.type == HDAudio::AFGWidget::Type::PinComplex)
{
const uint32_t config = TRY(m_controller->send_command({
.data = 0x00,
.command = 0xF1C,
.node_index = widget.id,
.codec_address = m_cid,
}));
dprintln(" widget {}: {} ({}, {}), {32b}",
dprintln(" widget {}: {} ({}, {}, {}), {32b}",
widget.id,
widget_to_string(widget.type),
(int)widget.pin_complex.output,
(int)widget.pin_complex.input,
config
(int)widget.pin_complex.display,
widget.pin_complex.config
);
}
else
@@ -79,59 +77,49 @@ namespace Kernel
}
TRY(initialize_stream());
TRY(initialize_output());
if (auto ret = initialize_output(); ret.is_error())
{
// No usable pins, not really an error
if (ret.error().get_error_code() == ENODEV)
return {};
return ret.release_error();
}
DevFileSystem::get().add_device(this);
return {};
}
uint32_t HDAudioFunctionGroup::get_total_pins() const
{
uint32_t count = 0;
for (const auto& widget : m_afg_node.widgets)
if (widget.type == HDAudio::AFGWidget::Type::PinComplex && widget.pin_complex.output)
count++;
return count;
return m_output_pins.size();
}
uint32_t HDAudioFunctionGroup::get_current_pin() const
{
const auto current_id = m_output_paths[m_output_path_index].front()->id;
uint32_t pin = 0;
for (const auto& widget : m_afg_node.widgets)
{
if (widget.type != HDAudio::AFGWidget::Type::PinComplex || !widget.pin_complex.output)
continue;
if (widget.id == current_id)
return pin;
pin++;
}
for (size_t i = 0; i < m_output_pins.size(); i++)
if (m_output_pins[i]->id == current_id)
return i;
ASSERT_NOT_REACHED();
}
BAN::ErrorOr<void> HDAudioFunctionGroup::set_current_pin(uint32_t pin)
{
uint32_t pin_id = 0;
for (const auto& widget : m_afg_node.widgets)
{
if (widget.type != HDAudio::AFGWidget::Type::PinComplex || !widget.pin_complex.output)
continue;
if (pin-- > 0)
continue;
pin_id = widget.id;
break;
}
if (pin >= m_output_pins.size())
return BAN::Error::from_errno(EINVAL);
if (auto ret = disable_output_path(m_output_path_index); ret.is_error())
dwarnln("failed to disable old output path {}", ret.error());
const uint32_t pin_id = m_output_pins[pin]->id;
for (size_t i = 0; i < m_output_paths.size(); i++)
{
if (m_output_paths[i].front()->id != pin_id)
continue;
if (auto ret = enable_output_path(i); !ret.is_error())
if (auto ret = enable_output_path(i); ret.is_error())
{
if (ret.error().get_error_code() == ENOTSUP)
continue;
@@ -139,7 +127,6 @@ namespace Kernel
return ret.release_error();
}
dprintln("set output widget to {}", pin_id);
m_output_path_index = i;
return {};
}
@@ -149,6 +136,37 @@ namespace Kernel
return BAN::Error::from_errno(ENOTSUP);
}
BAN::ErrorOr<void> HDAudioFunctionGroup::set_volume_mdB(int32_t mdB)
{
mdB = BAN::Math::clamp(mdB, m_volume_info.min_mdB, m_volume_info.max_mdB);
const auto& path = m_output_paths[m_output_path_index];
for (size_t i = 0; i < path.size(); i++)
{
if (!path[i]->output_amplifier.has_value())
continue;
const int32_t step_round = (mdB >= 0)
? +m_volume_info.step_mdB / 2
: -m_volume_info.step_mdB / 2;
const uint32_t step = (mdB + step_round) / m_volume_info.step_mdB + path[i]->output_amplifier->offset;
const uint32_t volume = 0b1'0'1'1'0000'0'0000000 | step;
TRY(m_controller->send_command({
.data = static_cast<uint8_t>(volume & 0xFF),
.command = static_cast<uint16_t>(0x300 | (volume >> 8)),
.node_index = path[i]->id,
.codec_address = m_cid,
}));
break;
}
m_volume_info.mdB = mdB;
return {};
}
size_t HDAudioFunctionGroup::bdl_offset() const
{
const size_t bdl_entry_bytes = m_bdl_entry_sample_frames * get_channels() * sizeof(uint16_t);
@@ -230,18 +248,39 @@ namespace Kernel
BAN::ErrorOr<void> HDAudioFunctionGroup::initialize_output()
{
BAN::Vector<const HDAudio::AFGWidget*> path;
TRY(path.reserve(m_max_path_length));
for (const auto& widget : m_afg_node.widgets)
{
if (widget.type != HDAudio::AFGWidget::Type::PinComplex || !widget.pin_complex.output)
continue;
// no physical connection
if ((widget.pin_complex.config >> 30) == 0b01)
continue;
// needs a GPU
if (widget.pin_complex.display)
continue;
BAN::Vector<const HDAudio::AFGWidget*> path;
TRY(path.push_back(&widget));
TRY(recurse_output_paths(widget, path));
path.pop_back();
if (!m_output_paths.empty() && m_output_paths.back().front()->id == widget.id)
TRY(m_output_pins.push_back(&widget));
}
if (m_output_pins.empty())
return BAN::Error::from_errno(ENODEV);
// prefer short paths
BAN::sort::sort(m_output_paths.begin(), m_output_paths.end(),
[](const auto& a, const auto& b) {
if (a.front()->id != b.front()->id)
return a.front()->id < b.front()->id;
return a.size() < b.size();
}
);
dprintln_if(DEBUG_HDAUDIO, "found {} paths from output to DAC", m_output_paths.size());
// select first supported path
@@ -283,13 +322,6 @@ namespace Kernel
return 0b0'0'000'000'0'001'0001;
}
uint16_t HDAudioFunctionGroup::get_volume_data() const
{
// TODO: don't hardcode this
// left and right output, no mute, max gain
return 0b1'0'1'1'0000'0'1111111;
}
BAN::ErrorOr<void> HDAudioFunctionGroup::enable_output_path(uint8_t index)
{
ASSERT(index < m_output_paths.size());
@@ -310,7 +342,6 @@ namespace Kernel
}
const auto format = get_format_data();
const auto volume = get_volume_data();
for (size_t i = 0; i < path.size(); i++)
{
@@ -339,13 +370,17 @@ namespace Kernel
}));
}
// set volume
TRY(m_controller->send_command({
.data = static_cast<uint8_t>(volume & 0xFF),
.command = static_cast<uint16_t>(0x300 | (volume >> 8)),
.node_index = path[i]->id,
.codec_address = m_cid,
}));
// set volume to 0 dB, no mute
if (path[i]->output_amplifier.has_value())
{
const uint32_t volume = 0b1'0'1'1'0000'0'0000000 | path[i]->output_amplifier->offset;
TRY(m_controller->send_command({
.data = static_cast<uint8_t>(volume & 0xFF),
.command = static_cast<uint16_t>(0x300 | (volume >> 8)),
.node_index = path[i]->id,
.codec_address = m_cid,
}));
}
switch (path[i]->type)
{
@@ -390,6 +425,41 @@ namespace Kernel
}
}
// update volume info to this path
m_volume_info.min_mdB = 0;
m_volume_info.max_mdB = 0;
m_volume_info.step_mdB = 0;
for (size_t i = 0; i < path.size(); i++)
{
if (!path[i]->output_amplifier.has_value())
continue;
const auto& amp = path[i]->output_amplifier.value();
const int32_t step_mdB = amp.step_size * 250;
m_volume_info.step_mdB = step_mdB;
m_volume_info.min_mdB = -amp.offset * step_mdB;
m_volume_info.max_mdB = (amp.num_steps - amp.offset) * step_mdB;
m_volume_info.mdB = BAN::Math::clamp(m_volume_info.mdB, m_volume_info.min_mdB, m_volume_info.max_mdB);
const int32_t step_round = (m_volume_info.mdB >= 0)
? +step_mdB / 2
: -step_mdB / 2;
const uint32_t step = (m_volume_info.mdB + step_round) / step_mdB + amp.offset;
const uint32_t volume = 0b1'0'1'1'0000'0'0000000 | step;
TRY(m_controller->send_command({
.data = static_cast<uint8_t>(volume & 0xFF),
.command = static_cast<uint16_t>(0x300 | (volume >> 8)),
.node_index = path[i]->id,
.codec_address = m_cid,
}));
break;
}
if (m_volume_info.min_mdB == 0 && m_volume_info.max_mdB == 0)
m_volume_info.mdB = 0;
return {};
}
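The range computation above derives the exposed millidecibel limits from the path's amplifier capabilities: the driver treats one hardware step as `step_size * 250` mdB, the minimum sits `offset` steps below 0 dB, and the maximum is the remaining `num_steps - offset` steps above it. The same derivation as a standalone sketch (helper names are illustrative):

```cpp
#include <cstdint>

struct VolumeRange { int32_t min_mdB, max_mdB, step_mdB; };

// Derive the exposed volume range from HD Audio amplifier capability
// fields, following the driver's convention of one step = step_size * 250 mdB.
VolumeRange volume_range(uint8_t offset, uint8_t num_steps, uint8_t step_size)
{
    const int32_t step_mdB = step_size * 250;
    return VolumeRange {
        .min_mdB = -offset * step_mdB,           // offset steps of attenuation
        .max_mdB = (num_steps - offset) * step_mdB, // remaining steps of gain
        .step_mdB = step_mdB,
    };
}
```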
@@ -442,10 +512,6 @@ namespace Kernel
BAN::ErrorOr<void> HDAudioFunctionGroup::recurse_output_paths(const HDAudio::AFGWidget& widget, BAN::Vector<const HDAudio::AFGWidget*>& path)
{
// cycle "detection"
if (path.size() >= m_max_path_length)
return {};
// we've reached a DAC
if (widget.type == HDAudio::AFGWidget::Type::OutputConverter)
{
@@ -462,9 +528,18 @@ namespace Kernel
{
if (!widget.connections.contains(connection.id))
continue;
// cycle detection
for (const auto* w : path)
if (w == &connection)
goto already_visited;
TRY(path.push_back(&connection));
TRY(recurse_output_paths(connection, path));
path.pop_back();
already_visited:
continue;
}
return {};
@@ -483,30 +558,18 @@ namespace Kernel
while ((m_bdl_head + 1) % m_bdl_entry_count != m_bdl_tail)
{
const size_t sample_data_tail = (m_sample_data_head + m_sample_data_capacity - m_sample_data_size) % m_sample_data_capacity;
const size_t sample_frames = BAN::Math::min(m_sample_data_size / get_channels() / sizeof(uint16_t), m_bdl_entry_sample_frames);
const size_t sample_frames = BAN::Math::min(m_sample_data->size() / get_channels() / sizeof(uint16_t), m_bdl_entry_sample_frames);
if (sample_frames == 0)
break;
const size_t copy_total_bytes = sample_frames * get_channels() * sizeof(uint16_t);
const size_t copy_before_wrap = BAN::Math::min(copy_total_bytes, m_sample_data_capacity - sample_data_tail);
memcpy(
reinterpret_cast<void*>(m_bdl_region->vaddr() + m_bdl_head * bdl_entry_bytes),
&m_sample_data[sample_data_tail],
copy_before_wrap
m_sample_data->get_data().data(),
copy_total_bytes
);
if (copy_before_wrap < copy_total_bytes)
{
memcpy(
reinterpret_cast<void*>(m_bdl_region->vaddr() + m_bdl_head * bdl_entry_bytes + copy_before_wrap),
&m_sample_data[0],
copy_total_bytes - copy_before_wrap
);
}
if (copy_total_bytes < bdl_entry_bytes)
{
memset(
@@ -516,8 +579,7 @@ namespace Kernel
);
}
m_sample_data_size -= copy_total_bytes;
m_sample_data->pop(copy_total_bytes);
m_bdl_head = (m_bdl_head + 1) % m_bdl_entry_count;
}

View File

@@ -65,8 +65,12 @@ namespace Kernel
m_pci_device.enable_interrupt(0, *this);
m_bar0->write32(Regs::INTCTL, UINT32_MAX);
for (uint8_t codec_id = 0; codec_id < 0x10; codec_id++)
const uint16_t state_sts = m_bar0->read16(Regs::STATESTS);
for (uint8_t codec_id = 0; codec_id < 15; codec_id++)
{
if (!(state_sts & (1 << codec_id)))
continue;
auto codec_or_error = initialize_codec(codec_id);
if (codec_or_error.is_error())
continue;
@@ -307,8 +311,26 @@ namespace Kernel
if (result.type == AFGWidget::Type::PinComplex)
{
const uint32_t cap = send_command_or_zero(0xF00, 0x0C);
result.pin_complex.output = !!(cap & (1 << 4));
result.pin_complex.input = !!(cap & (1 << 5));
result.pin_complex = {
.input = !!(cap & (1 << 5)),
.output = !!(cap & (1 << 4)),
.display = !!(cap & ((1 << 7) | (1 << 24))),
.config = send_command_or_zero(0xF1C, 0x00),
};
}
if (const uint32_t out_amp_cap = send_command_or_zero(0xF00, 0x12))
{
const uint8_t offset = (out_amp_cap >> 0) & 0x7F;
const uint8_t num_steps = (out_amp_cap >> 8) & 0x7F;
const uint8_t step_size = (out_amp_cap >> 16) & 0x7F;
const bool mute = (out_amp_cap >> 31);
result.output_amplifier = HDAudio::AFGWidget::Amplifier {
.offset = offset,
.num_steps = num_steps,
.step_size = step_size,
.mute = mute,
};
}
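The Amplifier Capabilities response (parameter 0x12) packs four fields into one dword, extracted above with shifts and 7-bit masks. A standalone sketch of the same decoding (struct and function names are illustrative):

```cpp
#include <cstdint>

struct AmpCaps
{
    uint8_t offset;     // step index corresponding to 0 dB
    uint8_t num_steps;  // number of gain steps
    uint8_t step_size;  // step granularity field
    bool mute;          // amplifier supports muting
};

// Decode an HD Audio amplifier-capabilities dword into its fields.
AmpCaps decode_amp_caps(uint32_t cap)
{
    return AmpCaps {
        .offset    = static_cast<uint8_t>((cap >> 0) & 0x7F),
        .num_steps = static_cast<uint8_t>((cap >> 8) & 0x7F),
        .step_size = static_cast<uint8_t>((cap >> 16) & 0x7F),
        .mute      = static_cast<bool>(cap >> 31),
    };
}
```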
const uint8_t connection_info = send_command_or_zero(0xF00, 0x0E);

View File

@@ -7,6 +7,8 @@
#include <kernel/Terminal/TTY.h>
#include <kernel/Timer/Timer.h>
#include <BAN/ScopeGuard.h>
#include <LibDEFLATE/Compressor.h>
#include <LibQR/QRCode.h>
@@ -14,6 +16,13 @@
bool g_disable_debug = false;
extern "C" bool safe_user_memcpy(void*, const void*, size_t);
namespace Kernel
{
extern bool g_safe_user_alloc_nonexisting;
}
namespace Debug
{
@@ -45,48 +54,25 @@ namespace Debug
SpinLockGuard _(s_debug_lock);
const stackframe* frame = reinterpret_cast<const stackframe*>(bp);
void* first_ip = frame->ip;
void* last_ip = 0;
bool first = true;
const bool temp = g_safe_user_alloc_nonexisting;
g_safe_user_alloc_nonexisting = false;
BAN::ScopeGuard alloc_restore([temp] { g_safe_user_alloc_nonexisting = temp; });
BAN::Formatter::print(Debug::putchar, "\e[36mStack trace:\r\n");
if (ip != 0)
BAN::Formatter::print(Debug::putchar, " {}\r\n", reinterpret_cast<void*>(ip));
while (frame)
stackframe frame;
if (!safe_user_memcpy(&frame, reinterpret_cast<void*>(bp), sizeof(stackframe)))
;
else for (size_t depth = 0; depth < 64; depth++)
{
if (!PageTable::is_valid_pointer((vaddr_t)frame))
{
derrorln("invalid pointer {H}", (vaddr_t)frame);
BAN::Formatter::print(Debug::putchar, " {}\r\n", reinterpret_cast<void*>(frame.ip));
if (frame.bp == nullptr || !safe_user_memcpy(&frame, frame.bp, sizeof(stackframe)))
break;
}
if (PageTable::current().is_page_free((vaddr_t)frame & PAGE_ADDR_MASK))
{
derrorln(" {} not mapped", frame);
break;
}
BAN::Formatter::print(Debug::putchar, " {}\r\n", (void*)frame->ip);
if (!first && frame->ip == first_ip)
{
derrorln("looping kernel panic :(");
break;
}
else if (!first && frame->ip == last_ip)
{
derrorln("repeating stack trace");
break;
}
last_ip = frame->ip;
frame = frame->bp;
first = false;
}
BAN::Formatter::print(Debug::putchar, "\e[m");
}
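The rewritten dump copies each frame with `safe_user_memcpy` and bounds the walk at 64 frames, so a null or looping `bp` can no longer crash or spin the tracer. The core walk, minus the safe-copy plumbing, looks like this sketch (names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct StackFrame
{
    StackFrame* bp; // saved caller frame pointer
    uintptr_t ip;   // saved return address
};

// Walk a frame-pointer chain, stopping on a null bp or after max_depth
// frames so a corrupted (e.g. looping) chain cannot run forever.
std::vector<uintptr_t> walk_stack(const StackFrame* frame, size_t max_depth)
{
    std::vector<uintptr_t> ips;
    for (size_t depth = 0; frame && depth < max_depth; depth++)
    {
        ips.push_back(frame->ip);
        frame = frame->bp;
    }
    return ips;
}
```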

View File

@@ -6,6 +6,7 @@
#include <kernel/Terminal/FramebufferTerminal.h>
#include <sys/framebuffer.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/sysmacros.h>
@@ -83,10 +84,10 @@ namespace Kernel
m_video_buffer = TRY(VirtualRange::create_to_vaddr_range(
PageTable::kernel(),
KERNEL_OFFSET, UINTPTR_MAX,
{ KERNEL_OFFSET, UINTPTR_MAX },
BAN::Math::div_round_up<size_t>(m_width * m_height * (BANAN_FB_BPP / 8), PAGE_SIZE) * PAGE_SIZE,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true, false
false
));
return {};
@@ -133,6 +134,26 @@ namespace Kernel
return bytes_to_copy;
}
BAN::ErrorOr<long> FramebufferDevice::ioctl_impl(int cmd, void* arg)
{
switch (cmd)
{
case FB_MSYNC_RECTANGLE:
{
auto& rectangle = *static_cast<fb_msync_region*>(arg);
sync_pixels_rectangle(
rectangle.min_x,
rectangle.min_y,
rectangle.max_x - rectangle.min_x,
rectangle.max_y - rectangle.min_y
);
return 0;
}
}
return CharacterDevice::ioctl(cmd, arg);
}
uint32_t FramebufferDevice::get_pixel(uint32_t x, uint32_t y) const
{
ASSERT(x < m_width && y < m_height);

View File

@@ -16,7 +16,7 @@ namespace Kernel
Epoll::~Epoll()
{
for (auto [inode, _] : m_listening_events)
for (auto& [inode, _] : m_listening_events)
inode->del_epoll(this);
}
@@ -44,7 +44,7 @@ namespace Kernel
if (!contains_inode)
TRY(inode->add_epoll(this));
it->value.add_fd(fd, event);
TRY(it->value.add_fd(fd, event));
SpinLockGuard _(m_ready_lock);
auto ready_it = m_ready_events.find(inode);
@@ -144,9 +144,8 @@ namespace Kernel
{
uint32_t listen_mask = EPOLLHUP | EPOLLERR;
for (size_t fd = 0; fd < listen.events.size(); fd++)
if (listen.has_fd(fd))
listen_mask |= listen.events[fd].events;
for (const auto& [_, events] : listen.events)
listen_mask |= events.events;
events &= listen_mask;
}
@@ -171,11 +170,10 @@ namespace Kernel
#undef REMOVE_IT
for (size_t fd = 0; fd < listen.events.size() && event_count < event_span.size(); fd++)
for (auto& [_, listen_event] : listen.events)
{
if (!listen.has_fd(fd))
continue;
auto& listen_event = listen.events[fd];
if (event_count >= event_span.size())
break;
const auto new_events = (listen_event.events | EPOLLHUP | EPOLLERR) & events;
if (new_events == 0)

View File

@@ -1,94 +1,262 @@
#include <BAN/ScopeGuard.h>
#include <kernel/FS/USTARModule.h>
#include <kernel/Timer/Timer.h>
#include <LibDEFLATE/Decompressor.h>
#include <tar.h>
namespace Kernel
{
bool is_ustar_boot_module(const BootModule& module)
class DataSource
{
if (module.start % PAGE_SIZE)
public:
DataSource() = default;
virtual ~DataSource() = default;
size_t data_size() const
{
dprintln("ignoring non-page-aligned module");
return m_data_size;
}
BAN::ConstByteSpan data()
{
return { m_data_buffer, m_data_size };
}
void pop_data(size_t size)
{
ASSERT(size <= m_data_size);
if (size > 0 && size < m_data_size)
memmove(m_data_buffer, m_data_buffer + size, m_data_size - size);
m_data_size -= size;
m_bytes_produced += size;
}
virtual BAN::ErrorOr<bool> produce_data() = 0;
uint64_t bytes_produced() const
{
return m_bytes_produced;
}
virtual uint64_t bytes_consumed() const = 0;
protected:
uint8_t m_data_buffer[4096];
size_t m_data_size { 0 };
private:
uint64_t m_bytes_produced { 0 };
};
class DataSourceRaw final : public DataSource
{
public:
DataSourceRaw(const BootModule& module)
: m_module(module)
{ }
BAN::ErrorOr<bool> produce_data() override
{
if (m_offset >= m_module.size || m_data_size >= sizeof(m_data_buffer))
return false;
while (m_offset < m_module.size && m_data_size < sizeof(m_data_buffer))
{
const size_t to_copy = BAN::Math::min(
sizeof(m_data_buffer) - m_data_size,
PAGE_SIZE - (m_offset % PAGE_SIZE)
);
PageTable::with_fast_page((m_module.start + m_offset) & PAGE_ADDR_MASK, [&] {
memcpy(m_data_buffer + m_data_size, PageTable::fast_page_as_ptr(m_offset % PAGE_SIZE), to_copy);
});
m_data_size += to_copy;
m_offset += to_copy;
}
return true;
}
uint64_t bytes_consumed() const override
{
return bytes_produced();
}
private:
const BootModule& m_module;
size_t m_offset { 0 };
};
class DataSourceGZip final : public DataSource
{
public:
DataSourceGZip(BAN::UniqPtr<DataSource>&& data_source)
: m_data_source(BAN::move(data_source))
, m_decompressor(LibDEFLATE::StreamType::GZip)
{ }
BAN::ErrorOr<bool> produce_data() override
{
if (m_is_done)
return false;
bool did_produce_data { false };
for (;;)
{
TRY(m_data_source->produce_data());
size_t input_consumed, output_produced;
const auto status = TRY(m_decompressor.decompress(
m_data_source->data(),
input_consumed,
{ m_data_buffer + m_data_size, sizeof(m_data_buffer) - m_data_size },
output_produced
));
m_data_source->pop_data(input_consumed);
m_data_size += output_produced;
if (output_produced)
did_produce_data = true;
switch (status)
{
using DecompStatus = LibDEFLATE::Decompressor::Status;
case DecompStatus::Done:
m_is_done = true;
return did_produce_data;
case DecompStatus::NeedMoreInput:
break;
case DecompStatus::NeedMoreOutput:
return did_produce_data;
}
}
}
uint64_t bytes_consumed() const override
{
return m_data_source->bytes_consumed();
}
private:
BAN::UniqPtr<DataSource> m_data_source;
LibDEFLATE::Decompressor m_decompressor;
bool m_is_done { false };
};
static BAN::ErrorOr<void> unpack_boot_module_into_directory(BAN::RefPtr<Inode> root_inode, DataSource& data_source);
BAN::ErrorOr<bool> unpack_boot_module_into_directory(BAN::RefPtr<Inode> root_inode, const BootModule& module)
{
ASSERT(root_inode->mode().ifdir());
BAN::UniqPtr<DataSource> data_source = TRY(BAN::UniqPtr<DataSourceRaw>::create(module));
bool is_compressed = false;
TRY(data_source->produce_data());
if (data_source->data_size() >= 2 && memcmp(&data_source->data()[0], "\x1F\x8B", 2) == 0)
{
data_source = TRY(BAN::UniqPtr<DataSourceGZip>::create(BAN::move(data_source)));
is_compressed = true;
}
TRY(data_source->produce_data());
if (data_source->data_size() < 512 || memcmp(&data_source->data()[257], "ustar", 5) != 0)
{
dwarnln("Unrecognized initrd format");
return false;
}
if (module.size < 512)
return false;
const auto module_size_kib = module.size / 1024;
dprintln("unpacking {}.{3} MiB{} initrd",
module_size_kib / 1024, (module_size_kib % 1024) * 1000 / 1024,
is_compressed ? " compressed" : ""
);
bool has_ustar_signature;
PageTable::with_fast_page(module.start, [&] {
has_ustar_signature = memcmp(PageTable::fast_page_as_ptr(257), "ustar", 5) == 0;
});
const auto unpack_ms1 = SystemTimer::get().ms_since_boot();
TRY(unpack_boot_module_into_directory(root_inode, *data_source));
const auto unpack_ms2 = SystemTimer::get().ms_since_boot();
return has_ustar_signature;
const auto duration_ms = unpack_ms2 - unpack_ms1;
dprintln("unpacking {}.{3} MiB{} initrd took {}.{3} s",
module_size_kib / 1024, (module_size_kib % 1024) * 1000 / 1024,
is_compressed ? " compressed" : "",
duration_ms / 1000, duration_ms % 1000
);
if (is_compressed)
{
const auto uncompressed_kib = data_source->bytes_produced() / 1024;
dprintln("uncompressed size {}.{3} MiB",
uncompressed_kib / 1024, (uncompressed_kib % 1024) * 1000 / 1024
);
}
return true;
}
BAN::ErrorOr<void> unpack_boot_module_into_filesystem(BAN::RefPtr<FileSystem> filesystem, const BootModule& module)
BAN::ErrorOr<void> unpack_boot_module_into_directory(BAN::RefPtr<Inode> root_inode, DataSource& data_source)
{
ASSERT(is_ustar_boot_module(module));
auto root_inode = filesystem->root_inode();
uint8_t* temp_page = static_cast<uint8_t*>(kmalloc(PAGE_SIZE));
if (temp_page == nullptr)
return BAN::Error::from_errno(ENOMEM);
BAN::ScopeGuard _([temp_page] { kfree(temp_page); });
BAN::String next_file_name;
BAN::String next_link_name;
size_t offset = 0;
while (offset + 512 <= module.size)
constexpr uint32_t print_interval_ms = 1000;
auto next_print_ms = SystemTimer::get().ms_since_boot() + print_interval_ms;
while (TRY(data_source.produce_data()), data_source.data_size() >= 512)
{
size_t file_size = 0;
mode_t file_mode = 0;
uid_t file_uid = 0;
gid_t file_gid = 0;
uint8_t file_type = 0;
char file_path[100 + 1 + 155 + 1] {};
PageTable::with_fast_page((module.start + offset) & PAGE_ADDR_MASK, [&] {
const size_t page_off = offset % PAGE_SIZE;
const auto parse_octal =
[page_off](size_t offset, size_t length) -> size_t
{
size_t result = 0;
for (size_t i = 0; i < length; i++)
{
const char ch = PageTable::fast_page_as<char>(page_off + offset + i);
if (ch == '\0')
break;
result = (result * 8) + (ch - '0');
}
return result;
};
if (memcmp(PageTable::fast_page_as_ptr(page_off + 257), "ustar", 5)) {
file_size = SIZE_MAX;
return;
if (SystemTimer::get().ms_since_boot() >= next_print_ms)
{
const auto kib_consumed = data_source.bytes_consumed() / 1024;
const auto kib_produced = data_source.bytes_produced() / 1024;
if (kib_consumed == kib_produced)
{
dprintln(" ... {}.{3} MiB",
kib_consumed / 1024, (kib_consumed % 1024) * 1000 / 1024
);
}
else
{
dprintln(" ... {}.{3} MiB ({}.{3} MiB)",
kib_consumed / 1024, (kib_consumed % 1024) * 1000 / 1024,
kib_produced / 1024, (kib_produced % 1024) * 1000 / 1024
);
}
next_print_ms = SystemTimer::get().ms_since_boot() + print_interval_ms;
}
memcpy(file_path, PageTable::fast_page_as_ptr(page_off + 345), 155);
const size_t prefix_len = strlen(file_path);
file_path[prefix_len] = '/';
memcpy(file_path + prefix_len + 1, PageTable::fast_page_as_ptr(page_off), 100);
const auto parse_octal =
[&data_source](size_t offset, size_t length) -> size_t
{
size_t result = 0;
for (size_t i = 0; i < length; i++)
{
const char ch = data_source.data()[offset + i];
if (ch == '\0')
break;
result = (result * 8) + (ch - '0');
}
return result;
};
file_mode = parse_octal(100, 8);
file_uid = parse_octal(108, 8);
file_gid = parse_octal(116, 8);
file_size = parse_octal(124, 12);
file_type = PageTable::fast_page_as<char>(page_off + 156);
});
if (file_size == SIZE_MAX)
break;
if (offset + 512 + file_size > module.size)
if (memcmp(&data_source.data()[257], "ustar", 5) != 0)
break;
auto parent_inode = filesystem->root_inode();
char file_path[100 + 1 + 155 + 1];
memcpy(file_path, &data_source.data()[345], 155);
const size_t prefix_len = strlen(file_path);
file_path[prefix_len] = '/';
memcpy(file_path + prefix_len + 1, &data_source.data()[0], 100);
mode_t file_mode = parse_octal(100, 8);
const uid_t file_uid = parse_octal(108, 8);
const gid_t file_gid = parse_octal(116, 8);
const size_t file_size = parse_octal(124, 12);
const uint8_t file_type = data_source.data()[156];
auto parent_inode = root_inode;
auto file_path_parts = TRY(BAN::StringView(next_file_name.empty() ? file_path : next_file_name.sv()).split('/'));
for (size_t i = 0; i < file_path_parts.size() - 1; i++)
@@ -111,27 +279,33 @@ namespace Kernel
auto file_name_sv = file_path_parts.back();
bool did_consume_data = false;
if (file_type == 'L' || file_type == 'K')
{
auto& target = (file_type == 'L') ? next_file_name : next_link_name;
TRY(target.resize(file_size));
auto& target_str = (file_type == 'L') ? next_file_name : next_link_name;
TRY(target_str.resize(file_size));
data_source.pop_data(512);
size_t nwritten = 0;
while (nwritten < file_size)
{
const paddr_t paddr = module.start + offset + 512 + nwritten;
PageTable::with_fast_page(paddr & PAGE_ADDR_MASK, [&] {
memcpy(temp_page, PageTable::fast_page_as_ptr(), PAGE_SIZE);
});
TRY(data_source.produce_data());
if (data_source.data_size() == 0)
return {};
const size_t page_off = paddr % PAGE_SIZE;
const size_t to_write = BAN::Math::min(file_size - nwritten, PAGE_SIZE - page_off);
memcpy(target.data() + nwritten, temp_page + page_off, to_write);
nwritten += to_write;
const size_t to_copy = BAN::Math::min(data_source.data_size(), file_size - nwritten);
memcpy(target_str.data() + nwritten, data_source.data().data(), to_copy);
nwritten += to_copy;
data_source.pop_data(to_copy);
}
while (!target.empty() && target.back() == '\0')
target.pop_back();
did_consume_data = true;
while (!target_str.empty() && target_str.back() == '\0')
target_str.pop_back();
}
else if (file_type == DIRTYPE)
{
@@ -149,14 +323,11 @@ namespace Kernel
link_name = next_link_name.sv();
else
{
const paddr_t paddr = module.start + offset;
PageTable::with_fast_page(paddr & PAGE_ADDR_MASK, [&] {
memcpy(link_buffer, PageTable::fast_page_as_ptr((paddr % PAGE_SIZE) + 157), 100);
});
memcpy(link_buffer, &data_source.data()[157], 100);
link_name = link_buffer;
}
auto target_inode = filesystem->root_inode();
auto target_inode = root_inode;
auto link_path_parts = TRY(link_name.split('/'));
for (const auto part : link_path_parts)
@@ -188,10 +359,7 @@ namespace Kernel
link_name = next_link_name.sv();
else
{
const paddr_t paddr = module.start + offset;
PageTable::with_fast_page(paddr & PAGE_ADDR_MASK, [&] {
memcpy(link_buffer, PageTable::fast_page_as_ptr((paddr % PAGE_SIZE) + 157), 100);
});
memcpy(link_buffer, &data_source.data()[157], 100);
link_name = link_buffer;
}
@@ -203,26 +371,26 @@ namespace Kernel
{
if (auto ret = parent_inode->create_file(file_name_sv, file_mode, file_uid, file_gid); ret.is_error())
dwarnln("failed to create file '{}': {}", file_name_sv, ret.error());
else
else if (file_size)
{
if (file_size)
auto inode = TRY(parent_inode->find_inode(file_name_sv));
data_source.pop_data(512);
size_t nwritten = 0;
while (nwritten < file_size)
{
auto inode = TRY(parent_inode->find_inode(file_name_sv));
TRY(data_source.produce_data());
ASSERT(data_source.data_size() > 0); // what to do?
size_t nwritten = 0;
while (nwritten < file_size)
{
const paddr_t paddr = module.start + offset + 512 + nwritten;
PageTable::with_fast_page(paddr & PAGE_ADDR_MASK, [&] {
memcpy(temp_page, PageTable::fast_page_as_ptr(), PAGE_SIZE);
});
const size_t to_write = BAN::Math::min(file_size - nwritten, data_source.data_size());
TRY(inode->write(nwritten, data_source.data().slice(0, to_write)));
nwritten += to_write;
const size_t page_off = paddr % PAGE_SIZE;
const size_t to_write = BAN::Math::min(file_size - nwritten, PAGE_SIZE - page_off);
TRY(inode->write(nwritten, { temp_page + page_off, to_write }));
nwritten += to_write;
}
data_source.pop_data(to_write);
}
did_consume_data = true;
}
}
@@ -232,9 +400,27 @@ namespace Kernel
next_link_name.clear();
}
offset += 512 + file_size;
if (auto rem = offset % 512)
offset += 512 - rem;
if (!did_consume_data)
{
data_source.pop_data(512);
size_t consumed = 0;
while (consumed < file_size)
{
TRY(data_source.produce_data());
if (data_source.data_size() == 0)
return {};
data_source.pop_data(BAN::Math::min(file_size - consumed, data_source.data_size()));
}
}
if (const auto rem = file_size % 512)
{
TRY(data_source.produce_data());
if (data_source.data_size() < rem)
return {};
data_source.pop_data(512 - rem);
}
}
return {};


@@ -61,18 +61,22 @@ namespace Kernel
if (filesystem_or_error.is_error())
panic("Failed to create fallback filesystem: {}", filesystem_or_error.error());
dprintln("Loading fallback filesystem from {} modules", g_boot_info.modules.size());
dprintln("Trying to load fallback filesystem from {} modules", g_boot_info.modules.size());
auto filesystem = BAN::RefPtr<FileSystem>::adopt(filesystem_or_error.release_value());
bool loaded_initrd = false;
for (const auto& module : g_boot_info.modules)
{
if (!is_ustar_boot_module(module))
continue;
if (auto ret = unpack_boot_module_into_filesystem(filesystem, module); ret.is_error())
if (auto ret = unpack_boot_module_into_directory(filesystem->root_inode(), module); ret.is_error())
dwarnln("Failed to unpack boot module: {}", ret.error());
else
loaded_initrd |= ret.value();
}
if (!loaded_initrd)
panic("Could not load initrd from any boot module :(");
return filesystem;
}


@@ -32,6 +32,8 @@ namespace Kernel
gdt->write_entry(0x38, 0x00000000, 0x00000, 0xF2, 0xC); // gsbase
#endif
gdt->write_entry(m_cpu_index_offset, 0, 0, 0, 0);
gdt->write_tss();
return gdt;


@@ -164,6 +164,35 @@ namespace Kernel
"Unknown Exception 0x1F",
};
extern "C" uint8_t safe_user_memcpy[];
extern "C" uint8_t safe_user_memcpy_end[];
extern "C" uint8_t safe_user_memcpy_fault[];
extern "C" uint8_t safe_user_strncpy[];
extern "C" uint8_t safe_user_strncpy_end[];
extern "C" uint8_t safe_user_strncpy_fault[];
struct safe_user_page_fault
{
const uint8_t* ip_start;
const uint8_t* ip_end;
const uint8_t* ip_fault;
};
static constexpr safe_user_page_fault s_safe_user_page_faults[] {
{
.ip_start = safe_user_memcpy,
.ip_end = safe_user_memcpy_end,
.ip_fault = safe_user_memcpy_fault,
},
{
.ip_start = safe_user_strncpy,
.ip_end = safe_user_strncpy_end,
.ip_fault = safe_user_strncpy_fault,
},
};
bool g_safe_user_alloc_nonexisting { true };
extern "C" void cpp_isr_handler(uint32_t isr, uint32_t error, InterruptStack* interrupt_stack, const Registers* regs)
{
if (g_paniced)
@@ -187,6 +216,16 @@ namespace Kernel
PageFaultError page_fault_error;
page_fault_error.raw = error;
const uint8_t* ip = reinterpret_cast<const uint8_t*>(interrupt_stack->ip);
if (!g_safe_user_alloc_nonexisting) for (const auto& safe_user : s_safe_user_page_faults)
{
if (ip < safe_user.ip_start || ip >= safe_user.ip_end)
continue;
interrupt_stack->ip = reinterpret_cast<vaddr_t>(safe_user.ip_fault);
return;
}
Processor::set_interrupt_state(InterruptState::Enabled);
auto result = Process::current().allocate_page_for_demand_paging(regs->cr2, page_fault_error.write, page_fault_error.instruction);
Processor::set_interrupt_state(InterruptState::Disabled);
@@ -194,13 +233,27 @@ namespace Kernel
if (result.is_error())
{
dwarnln("Demand paging: {}", result.error());
// TODO: this is too strict, we should maybe do SIGBUS and
// SIGKILL only on recursive exceptions
Processor::set_interrupt_state(InterruptState::Enabled);
Thread::current().handle_signal(SIGKILL, {});
Processor::set_interrupt_state(InterruptState::Disabled);
return;
}
if (result.value())
return;
if (g_safe_user_alloc_nonexisting) for (const auto& safe_user : s_safe_user_page_faults)
{
if (ip < safe_user.ip_start || ip >= safe_user.ip_end)
continue;
interrupt_stack->ip = reinterpret_cast<vaddr_t>(safe_user.ip_fault);
return;
}
break;
}
case ISR::DeviceNotAvailable:
@@ -240,7 +293,7 @@ namespace Kernel
const char* process_name = (tid && Thread::current().has_process())
? Process::current().name()
: nullptr;
: "";
#if ARCH(x86_64)
dwarnln(
@@ -285,7 +338,7 @@ namespace Kernel
Debug::s_debug_lock.unlock(InterruptState::Disabled);
if (tid && Thread::current().is_userspace())
if (tid && GDT::is_user_segment(interrupt_stack->cs))
{
// TODO: Confirm and fix the exception to signal mappings
@@ -316,6 +369,7 @@ namespace Kernel
case ISR::InvalidOpcode:
signal_info.si_signo = SIGILL;
signal_info.si_code = ILL_ILLOPC;
signal_info.si_addr = reinterpret_cast<void*>(interrupt_stack->ip);
break;
case ISR::PageFault:
signal_info.si_signo = SIGSEGV;
@@ -323,6 +377,7 @@ namespace Kernel
signal_info.si_code = SEGV_ACCERR;
else
signal_info.si_code = SEGV_MAPERR;
signal_info.si_addr = reinterpret_cast<void*>(regs->cr2);
break;
default:
dwarnln("Unhandled exception");
@@ -330,7 +385,9 @@ namespace Kernel
break;
}
Processor::set_interrupt_state(InterruptState::Enabled);
Thread::current().handle_signal(signal_info.si_signo, signal_info);
Processor::set_interrupt_state(InterruptState::Disabled);
}
else
{
@@ -365,10 +422,6 @@ namespace Kernel
Process::update_alarm_queue();
Processor::scheduler().timer_interrupt();
auto& current_thread = Thread::current();
if (current_thread.can_add_signal_to_execute())
current_thread.handle_signal();
}
extern "C" void cpp_irq_handler(uint32_t irq)
@@ -392,15 +445,18 @@ namespace Kernel
else
dprintln("no handler for irq 0x{2H}", irq);
auto& current_thread = Thread::current();
if (current_thread.can_add_signal_to_execute())
current_thread.handle_signal();
Processor::scheduler().reschedule_if_idle();
ASSERT(Thread::current().state() != Thread::State::Terminated);
}
extern "C" void cpp_check_signal()
{
Processor::set_interrupt_state(InterruptState::Enabled);
Thread::current().handle_signal_if_interrupted();
Processor::set_interrupt_state(InterruptState::Disabled);
}
void IDT::register_interrupt_handler(uint8_t index, void (*handler)(), uint8_t ist)
{
auto& desc = m_idt[index];
@@ -440,7 +496,6 @@ namespace Kernel
IRQ_LIST_X
#undef X
extern "C" void asm_yield_handler();
extern "C" void asm_ipi_handler();
extern "C" void asm_timer_handler();
#if ARCH(i686)


@@ -136,6 +136,9 @@ namespace Kernel::Input
m_scancode_to_keycode_extended[0x49] = keycode_function(18);
m_scancode_to_keycode_extended[0x51] = keycode_function(19);
m_scancode_to_keycode_normal[0x46] = keycode_function(20);
m_scancode_to_keycode_extended[0x20] = keycode_function(21);
m_scancode_to_keycode_extended[0x2E] = keycode_function(22);
m_scancode_to_keycode_extended[0x30] = keycode_function(23);
// Arrow keys
m_scancode_to_keycode_extended[0x48] = keycode_normal(5, 0);
@@ -246,6 +249,9 @@ namespace Kernel::Input
m_scancode_to_keycode_extended[0x7D] = keycode_function(18);
m_scancode_to_keycode_extended[0x7A] = keycode_function(19);
m_scancode_to_keycode_normal[0x7E] = keycode_function(20);
m_scancode_to_keycode_extended[0x23] = keycode_function(21);
m_scancode_to_keycode_extended[0x21] = keycode_function(22);
m_scancode_to_keycode_extended[0x32] = keycode_function(23);
// Arrow keys
m_scancode_to_keycode_extended[0x75] = keycode_normal(5, 0);
@@ -356,6 +362,9 @@ namespace Kernel::Input
m_scancode_to_keycode_normal[0x6F] = keycode_function(18);
m_scancode_to_keycode_normal[0x6D] = keycode_function(19);
m_scancode_to_keycode_normal[0x5F] = keycode_function(20);
m_scancode_to_keycode_normal[0x9C] = keycode_function(21);
m_scancode_to_keycode_normal[0x9D] = keycode_function(22);
m_scancode_to_keycode_normal[0x95] = keycode_function(23);
// Arrow keys
m_scancode_to_keycode_normal[0x63] = keycode_normal(5, 0);


@@ -5,7 +5,7 @@
namespace Kernel
{
BAN::ErrorOr<BAN::UniqPtr<DMARegion>> DMARegion::create(size_t size)
BAN::ErrorOr<BAN::UniqPtr<DMARegion>> DMARegion::create(size_t size, PageTable::MemoryType type)
{
size_t needed_pages = BAN::Math::div_round_up<size_t>(size, PAGE_SIZE);
@@ -26,7 +26,7 @@ namespace Kernel
vaddr_guard.disable();
paddr_guard.disable();
PageTable::kernel().map_range_at(paddr, vaddr, size, PageTable::Flags::ReadWrite | PageTable::Flags::Present, PageTable::MemoryType::Uncached);
PageTable::kernel().map_range_at(paddr, vaddr, size, PageTable::Flags::ReadWrite | PageTable::Flags::Present, type);
return BAN::UniqPtr<DMARegion>::adopt(region_ptr);
}


@@ -67,8 +67,12 @@ namespace Kernel
ASSERT(success);
for (size_t i = 0; i < pages.size(); i++)
if (pages[i])
sync(i);
{
if (pages[i] == 0)
continue;
sync(i);
Heap::get().release_page(pages[i]);
}
mutex.unlock();
}


@@ -2,11 +2,62 @@
#include <kernel/Memory/Heap.h>
#include <kernel/Memory/PageTable.h>
#include <BAN/Sort.h>
extern uint8_t g_kernel_end[];
namespace Kernel
{
struct ReservedRegion
{
paddr_t paddr;
uint64_t size;
};
static BAN::Vector<ReservedRegion> get_reserved_regions()
{
BAN::Vector<ReservedRegion> reserved_regions;
MUST(reserved_regions.reserve(2 + g_boot_info.modules.size()));
MUST(reserved_regions.emplace_back(0, 0x100000));
MUST(reserved_regions.emplace_back(g_boot_info.kernel_paddr, reinterpret_cast<size_t>(g_kernel_end - KERNEL_OFFSET)));
for (const auto& module : g_boot_info.modules)
MUST(reserved_regions.emplace_back(module.start, module.size));
// page align regions
for (auto& region : reserved_regions)
{
const auto rem = region.paddr % PAGE_SIZE;
region.paddr -= rem;
region.size += rem;
if (const auto rem = region.size % PAGE_SIZE)
region.size += PAGE_SIZE - rem;
}
// sort regions
BAN::sort::sort(reserved_regions.begin(), reserved_regions.end(),
[](const auto& lhs, const auto& rhs) -> bool {
if (lhs.paddr == rhs.paddr)
return lhs.size < rhs.size;
return lhs.paddr < rhs.paddr;
}
);
// combine overlapping regions
for (size_t i = 1; i < reserved_regions.size(); i++)
{
auto& prev = reserved_regions[i - 1];
auto& curr = reserved_regions[i - 0];
if (prev.paddr > curr.paddr + curr.size || curr.paddr > prev.paddr + prev.size)
continue;
prev.size = BAN::Math::max(prev.size, curr.paddr + curr.size - prev.paddr);
reserved_regions.remove(i--);
}
return reserved_regions;
}
static Heap* s_instance = nullptr;
void Heap::initialize()
@@ -28,6 +79,7 @@ namespace Kernel
if (g_boot_info.memory_map_entries.empty())
panic("Bootloader did not provide a memory map");
auto reserved_regions = get_reserved_regions();
for (const auto& entry : g_boot_info.memory_map_entries)
{
const char* entry_type_string = nullptr;
@@ -58,33 +110,71 @@ namespace Kernel
if (entry.type != MemoryMapEntry::Type::Available)
continue;
// FIXME: only reserve kernel area and modules, not everything from 0 -> kernel end
paddr_t start = entry.address;
if (start < (vaddr_t)g_kernel_end - KERNEL_OFFSET + g_boot_info.kernel_paddr)
start = (vaddr_t)g_kernel_end - KERNEL_OFFSET + g_boot_info.kernel_paddr;
for (const auto& module : g_boot_info.modules)
if (start < module.start + module.size)
start = module.start + module.size;
paddr_t e_start = entry.address;
if (auto rem = e_start % PAGE_SIZE)
e_start = PAGE_SIZE - rem;
paddr_t e_end = entry.address + entry.length;
if (auto rem = e_end % PAGE_SIZE)
e_end -= rem;
for (const auto& reserved_region : reserved_regions)
{
const paddr_t r_start = reserved_region.paddr;
const paddr_t r_end = reserved_region.paddr + reserved_region.size;
if (r_end < e_start)
continue;
if (r_start > e_end)
break;
const paddr_t end = BAN::Math::max(e_start, r_start);
if (e_start + 2 * PAGE_SIZE <= end)
MUST(m_physical_ranges.emplace_back(e_start, end - e_start));
e_start = BAN::Math::max(e_start, BAN::Math::min(e_end, r_end));
}
if (e_start + 2 * PAGE_SIZE <= e_end)
MUST(m_physical_ranges.emplace_back(e_start, e_end - e_start));
}
uint64_t total_kibi_bytes = 0;
for (auto& range : m_physical_ranges)
{
const uint64_t kibi_bytes = range.usable_memory() / 1024;
dprintln("RAM {8H}->{8H} ({}.{3} MiB)", range.start(), range.end(), kibi_bytes / 1024, kibi_bytes % 1024);
total_kibi_bytes += kibi_bytes;
}
dprintln("Total RAM {}.{3} MiB", total_kibi_bytes / 1024, total_kibi_bytes % 1024);
}
void Heap::release_boot_modules()
{
const auto modules = BAN::move(g_boot_info.modules);
uint64_t kibi_bytes = 0;
for (const auto& module : modules)
{
vaddr_t start = module.start;
if (auto rem = start % PAGE_SIZE)
start += PAGE_SIZE - rem;
paddr_t end = entry.address + entry.length;
vaddr_t end = module.start + module.size;
if (auto rem = end % PAGE_SIZE)
end -= rem;
// Physical ranges need at least 2 pages
if (end > start + PAGE_SIZE)
MUST(m_physical_ranges.emplace_back(start, end - start));
const size_t size = end - start;
if (size < 2 * PAGE_SIZE)
continue;
SpinLockGuard _(m_lock);
MUST(m_physical_ranges.emplace_back(start, size));
kibi_bytes += m_physical_ranges.back().usable_memory() / 1024;
}
size_t total = 0;
for (auto& range : m_physical_ranges)
{
size_t bytes = range.usable_memory();
dprintln("RAM {8H}->{8H} ({}.{} MB)", range.start(), range.end(), bytes / (1 << 20), bytes % (1 << 20) * 1000 / (1 << 20));
total += bytes;
}
dprintln("Total RAM {}.{} MB", total / (1 << 20), total % (1 << 20) * 1000 / (1 << 20));
if (kibi_bytes)
dprintln("Released {}.{3} MiB of RAM from boot modules", kibi_bytes / 1024, kibi_bytes % 1024);
}
paddr_t Heap::take_free_page()


@@ -1,5 +1,6 @@
#include <kernel/Memory/Heap.h>
#include <kernel/Memory/MemoryBackedRegion.h>
#include <kernel/Lock/LockGuard.h>
namespace Kernel
{
@@ -14,6 +15,9 @@ namespace Kernel
return BAN::Error::from_errno(ENOMEM);
auto region = BAN::UniqPtr<MemoryBackedRegion>::adopt(region_ptr);
const size_t page_count = (size + PAGE_SIZE - 1) / PAGE_SIZE;
TRY(region->m_physical_pages.resize(page_count, nullptr));
TRY(region->initialize(address_range));
return region;
@@ -28,38 +32,75 @@ namespace Kernel
{
ASSERT(m_type == Type::PRIVATE);
size_t needed_pages = BAN::Math::div_round_up<size_t>(m_size, PAGE_SIZE);
for (size_t i = 0; i < needed_pages; i++)
{
paddr_t paddr = m_page_table.physical_address_of(m_vaddr + i * PAGE_SIZE);
if (paddr != 0)
Heap::get().release_page(paddr);
}
for (auto* page : m_physical_pages)
if (page && --page->ref_count == 0)
delete page;
}
MemoryBackedRegion::PhysicalPage::~PhysicalPage()
{
Heap::get().release_page(paddr);
}
BAN::ErrorOr<bool> MemoryBackedRegion::allocate_page_containing_impl(vaddr_t address, bool wants_write)
{
ASSERT(m_type == Type::PRIVATE);
ASSERT(contains(address));
(void)wants_write;
// Check if address is already mapped
vaddr_t vaddr = address & PAGE_ADDR_MASK;
if (m_page_table.physical_address_of(vaddr) != 0)
return false;
const vaddr_t vaddr = address & PAGE_ADDR_MASK;
// Map new physical page to address
paddr_t paddr = Heap::get().take_free_page();
LockGuard _(m_mutex);
auto& physical_page = m_physical_pages[(vaddr - m_vaddr) / PAGE_SIZE];
if (physical_page == nullptr)
{
const paddr_t paddr = Heap::get().take_free_page();
if (paddr == 0)
return BAN::Error::from_errno(ENOMEM);
physical_page = new PhysicalPage(paddr);
if (physical_page == nullptr)
return BAN::Error::from_errno(ENOMEM);
m_page_table.map_page_at(paddr, vaddr, m_flags);
PageTable::with_fast_page(paddr, [] {
memset(PageTable::fast_page_as_ptr(), 0x00, PAGE_SIZE);
});
return true;
}
if (auto is_only_ref = (physical_page->ref_count == 1); is_only_ref || !wants_write)
{
auto flags = m_flags;
if (!is_only_ref)
flags &= ~PageTable::ReadWrite;
m_page_table.map_page_at(physical_page->paddr, vaddr, flags);
return true;
}
const paddr_t paddr = Heap::get().take_free_page();
if (paddr == 0)
return BAN::Error::from_errno(ENOMEM);
auto* new_physical_page = new PhysicalPage(paddr);
if (new_physical_page == nullptr)
return BAN::Error::from_errno(ENOMEM);
m_page_table.map_page_at(paddr, vaddr, m_flags);
// Zero out the new page
PageTable::with_fast_page(paddr, [&] {
memset(PageTable::fast_page_as_ptr(), 0x00, PAGE_SIZE);
ASSERT(&m_page_table == &PageTable::current());
PageTable::with_fast_page(physical_page->paddr, [vaddr] {
memcpy(reinterpret_cast<void*>(vaddr), PageTable::fast_page_as_ptr(), PAGE_SIZE);
});
if (--physical_page->ref_count == 0)
delete physical_page;
physical_page = new_physical_page;
return true;
}
@@ -67,16 +108,20 @@ namespace Kernel
{
ASSERT(&PageTable::current() == &m_page_table);
LockGuard _(m_mutex);
const size_t aligned_size = (m_size + PAGE_SIZE - 1) & PAGE_ADDR_MASK;
auto result = TRY(MemoryBackedRegion::create(new_page_table, m_size, { .start = m_vaddr, .end = m_vaddr + aligned_size }, m_type, m_flags, m_status_flags));
for (size_t offset = 0; offset < m_size; offset += PAGE_SIZE)
if (writable())
m_page_table.remove_writable_from_range(m_vaddr, m_size);
for (size_t i = 0; i < m_physical_pages.size(); i++)
{
paddr_t paddr = m_page_table.physical_address_of(m_vaddr + offset);
if (paddr == 0)
if (m_physical_pages[i] == nullptr)
continue;
const size_t to_copy = BAN::Math::min<size_t>(PAGE_SIZE, m_size - offset);
TRY(result->copy_data_to_region(offset, (const uint8_t*)(m_vaddr + offset), to_copy));
result->m_physical_pages[i] = m_physical_pages[i];
result->m_physical_pages[i]->ref_count++;
}
return BAN::UniqPtr<MemoryRegion>(BAN::move(result));
@@ -87,20 +132,35 @@ namespace Kernel
ASSERT(offset && offset < m_size);
ASSERT(offset % PAGE_SIZE == 0);
auto* new_region = new MemoryBackedRegion(m_page_table, m_size - offset, m_type, m_flags, m_status_flags);
if (new_region == nullptr)
LockGuard _(m_mutex);
auto* new_region_ptr = new MemoryBackedRegion(m_page_table, m_size - offset, m_type, m_flags, m_status_flags);
if (new_region_ptr == nullptr)
return BAN::Error::from_errno(ENOMEM);
auto new_region = BAN::UniqPtr<MemoryBackedRegion>::adopt(new_region_ptr);
new_region->m_vaddr = m_vaddr + offset;
const size_t moved_pages = (m_size - offset + PAGE_SIZE - 1) / PAGE_SIZE;
TRY(new_region->m_physical_pages.resize(moved_pages));
const size_t remaining_pages = m_physical_pages.size() - moved_pages;
for (size_t i = 0; i < moved_pages; i++)
new_region->m_physical_pages[i] = m_physical_pages[remaining_pages + i];
MUST(m_physical_pages.resize(remaining_pages));
m_size = offset;
return BAN::UniqPtr<MemoryRegion>::adopt(new_region);
return BAN::UniqPtr<MemoryRegion>(BAN::move(new_region));
}
BAN::ErrorOr<void> MemoryBackedRegion::copy_data_to_region(size_t offset_into_region, const uint8_t* buffer, size_t buffer_size)
{
ASSERT(offset_into_region + buffer_size <= m_size);
LockGuard _(m_mutex);
size_t written = 0;
while (written < buffer_size)
{
@@ -108,18 +168,18 @@ namespace Kernel
vaddr_t page_offset = write_vaddr % PAGE_SIZE;
size_t bytes = BAN::Math::min<size_t>(buffer_size - written, PAGE_SIZE - page_offset);
paddr_t paddr = m_page_table.physical_address_of(write_vaddr & PAGE_ADDR_MASK);
if (paddr == 0)
if (!(m_page_table.get_page_flags(write_vaddr & PAGE_ADDR_MASK) & PageTable::ReadWrite))
{
if (!TRY(allocate_page_containing(write_vaddr, false)))
if (!TRY(allocate_page_containing(write_vaddr, true)))
{
dwarnln("Could not allocate page for data copying");
return BAN::Error::from_errno(EFAULT);
}
paddr = m_page_table.physical_address_of(write_vaddr & PAGE_ADDR_MASK);
ASSERT(paddr);
}
const paddr_t paddr = m_page_table.physical_address_of(write_vaddr & PAGE_ADDR_MASK);
ASSERT(paddr);
PageTable::with_fast_page(paddr, [&] {
memcpy(PageTable::fast_page_as_ptr(page_offset), (void*)(buffer + written), bytes);
});


@@ -22,8 +22,10 @@ namespace Kernel
BAN::ErrorOr<void> MemoryRegion::initialize(AddressRange address_range)
{
size_t needed_pages = BAN::Math::div_round_up<size_t>(m_size, PAGE_SIZE);
m_vaddr = m_page_table.reserve_free_contiguous_pages(needed_pages, address_range.start);
if (auto rem = address_range.end % PAGE_SIZE)
address_range.end += PAGE_SIZE - rem;
const size_t needed_pages = BAN::Math::div_round_up<size_t>(m_size, PAGE_SIZE);
m_vaddr = m_page_table.reserve_free_contiguous_pages(needed_pages, address_range.start, address_range.end);
if (m_vaddr == 0)
return BAN::Error::from_errno(ENOMEM);
if (m_vaddr + m_size > address_range.end)


@@ -10,7 +10,7 @@ namespace Kernel
static constexpr size_t bits_per_page = PAGE_SIZE * 8;
PhysicalRange::PhysicalRange(paddr_t paddr, size_t size)
PhysicalRange::PhysicalRange(paddr_t paddr, uint64_t size)
: m_paddr(paddr)
, m_page_count(size / PAGE_SIZE)
, m_free_pages(m_page_count)


@@ -1,67 +1,47 @@
#include <BAN/ScopeGuard.h>
#include <kernel/Memory/Heap.h>
#include <kernel/Memory/VirtualRange.h>
namespace Kernel
{
BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> VirtualRange::create_to_vaddr(PageTable& page_table, vaddr_t vaddr, size_t size, PageTable::flags_t flags, bool preallocate_pages, bool add_guard_pages)
{
ASSERT(size % PAGE_SIZE == 0);
ASSERT(vaddr % PAGE_SIZE == 0);
ASSERT(vaddr > 0);
if (add_guard_pages)
{
vaddr -= PAGE_SIZE;
size += 2 * PAGE_SIZE;
}
auto result = TRY(BAN::UniqPtr<VirtualRange>::create(page_table, preallocate_pages, add_guard_pages, vaddr, size, flags));
ASSERT(page_table.reserve_range(vaddr, size));
TRY(result->initialize());
return result;
}
BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> VirtualRange::create_to_vaddr_range(PageTable& page_table, vaddr_t vaddr_start, vaddr_t vaddr_end, size_t size, PageTable::flags_t flags, bool preallocate_pages, bool add_guard_pages)
BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> VirtualRange::create_to_vaddr_range(PageTable& page_table, AddressRange address_range, size_t size, PageTable::flags_t flags, bool add_guard_pages)
{
if (add_guard_pages)
size += 2 * PAGE_SIZE;
ASSERT(size % PAGE_SIZE == 0);
ASSERT(vaddr_start > 0);
ASSERT(vaddr_start + size <= vaddr_end);
// Align vaddr range to page boundaries
if (size_t rem = vaddr_start % PAGE_SIZE)
vaddr_start += PAGE_SIZE - rem;
if (size_t rem = vaddr_end % PAGE_SIZE)
vaddr_end -= rem;
ASSERT(vaddr_start < vaddr_end);
ASSERT(vaddr_end - vaddr_start + 1 >= size / PAGE_SIZE);
if (const size_t rem = address_range.start % PAGE_SIZE)
address_range.start += PAGE_SIZE - rem;
if (const size_t rem = address_range.end % PAGE_SIZE)
address_range.end -= rem;
const vaddr_t vaddr = page_table.reserve_free_contiguous_pages(size / PAGE_SIZE, vaddr_start, vaddr_end);
const vaddr_t vaddr = page_table.reserve_free_contiguous_pages(size / PAGE_SIZE, address_range.start, address_range.end);
if (vaddr == 0)
return BAN::Error::from_errno(ENOMEM);
ASSERT(vaddr >= vaddr_start);
ASSERT(vaddr + size <= vaddr_end);
auto result_or_error = BAN::UniqPtr<VirtualRange>::create(page_table, preallocate_pages, add_guard_pages, vaddr, size, flags);
if (result_or_error.is_error())
{
BAN::ScopeGuard vaddr_cleaner([&page_table, vaddr, size] {
page_table.unmap_range(vaddr, size);
return result_or_error.release_error();
}
});
auto result = result_or_error.release_value();
auto result = TRY(BAN::UniqPtr<VirtualRange>::create(
page_table,
add_guard_pages,
vaddr,
size,
flags)
);
TRY(result->initialize());
vaddr_cleaner.disable();
return result;
}
VirtualRange::VirtualRange(PageTable& page_table, bool preallocated, bool has_guard_pages, vaddr_t vaddr, size_t size, PageTable::flags_t flags)
VirtualRange::VirtualRange(PageTable& page_table, bool has_guard_pages, vaddr_t vaddr, size_t size, PageTable::flags_t flags)
: m_page_table(page_table)
, m_preallocated(preallocated)
, m_has_guard_pages(has_guard_pages)
, m_vaddr(vaddr)
, m_size(size)
@@ -71,100 +51,26 @@ namespace Kernel
VirtualRange::~VirtualRange()
{
ASSERT(m_vaddr);
m_page_table.unmap_range(m_vaddr, m_size);
for (paddr_t paddr : m_paddrs)
if (paddr != 0)
for (size_t off = 0; off < size(); off += PAGE_SIZE)
if (const auto paddr = m_page_table.physical_address_of(vaddr() + off))
Heap::get().release_page(paddr);
m_page_table.unmap_range(m_vaddr, m_size);
}
BAN::ErrorOr<void> VirtualRange::initialize()
{
TRY(m_paddrs.resize(size() / PAGE_SIZE, 0));
if (!m_preallocated)
return {};
const size_t page_count = size() / PAGE_SIZE;
for (size_t i = 0; i < page_count; i++)
{
m_paddrs[i] = Heap::get().take_free_page();
if (m_paddrs[i] == 0)
const auto paddr = Heap::get().take_free_page();
if (paddr == 0)
return BAN::Error::from_errno(ENOMEM);
m_page_table.map_page_at(m_paddrs[i], vaddr() + i * PAGE_SIZE, m_flags);
PageTable::with_fast_page(paddr, [] {
memset(PageTable::fast_page_as_ptr(), 0, PAGE_SIZE);
});
m_page_table.map_page_at(paddr, vaddr() + i * PAGE_SIZE, m_flags);
}
if (&PageTable::current() == &m_page_table || &PageTable::kernel() == &m_page_table)
memset(reinterpret_cast<void*>(vaddr()), 0, size());
else
{
const size_t page_count = size() / PAGE_SIZE;
for (size_t i = 0; i < page_count; i++)
{
PageTable::with_fast_page(m_paddrs[i], [&] {
memset(PageTable::fast_page_as_ptr(), 0, PAGE_SIZE);
});
}
}
return {};
}
BAN::ErrorOr<BAN::UniqPtr<VirtualRange>> VirtualRange::clone(PageTable& page_table)
{
ASSERT(&PageTable::current() == &m_page_table);
ASSERT(&m_page_table != &page_table);
SpinLockGuard _(m_lock);
auto result = TRY(create_to_vaddr(page_table, vaddr(), size(), m_flags, m_preallocated, m_has_guard_pages));
const size_t page_count = size() / PAGE_SIZE;
for (size_t i = 0; i < page_count; i++)
{
if (m_paddrs[i] == 0)
continue;
if (!result->m_preallocated)
{
result->m_paddrs[i] = Heap::get().take_free_page();
if (result->m_paddrs[i] == 0)
return BAN::Error::from_errno(ENOMEM);
result->m_page_table.map_page_at(result->m_paddrs[i], vaddr() + i * PAGE_SIZE, m_flags);
}
PageTable::with_fast_page(result->m_paddrs[i], [&] {
memcpy(PageTable::fast_page_as_ptr(), reinterpret_cast<void*>(vaddr() + i * PAGE_SIZE), PAGE_SIZE);
});
}
return result;
}
BAN::ErrorOr<bool> VirtualRange::allocate_page_for_demand_paging(vaddr_t vaddr)
{
ASSERT(contains(vaddr));
vaddr &= PAGE_ADDR_MASK;
if (m_preallocated)
return false;
SpinLockGuard _(m_lock);
const size_t index = (vaddr - this->vaddr()) / PAGE_SIZE;
if (m_paddrs[index])
return false;
m_paddrs[index] = Heap::get().take_free_page();
if (m_paddrs[index] == 0)
return BAN::Error::from_errno(ENOMEM);
PageTable::with_fast_page(m_paddrs[index], []{
memset(PageTable::fast_page_as_ptr(), 0x00, PAGE_SIZE);
});
m_page_table.map_page_at(m_paddrs[index], vaddr, m_flags);
return true;
}
}

View File

@@ -436,12 +436,3 @@ BAN::Optional<Kernel::paddr_t> kmalloc_paddr_of(Kernel::vaddr_t vaddr)
return {};
return vaddr - KERNEL_OFFSET + g_boot_info.kernel_paddr;
}
BAN::Optional<Kernel::vaddr_t> kmalloc_vaddr_of(Kernel::paddr_t paddr)
{
using namespace Kernel;
const vaddr_t vaddr = paddr + KERNEL_OFFSET - g_boot_info.kernel_paddr;
if (!is_kmalloc_vaddr(vaddr))
return {};
return vaddr;
}

View File

@@ -173,7 +173,7 @@ namespace Kernel
BAN::ErrorOr<void> E1000::initialize_rx()
{
m_rx_buffer_region = TRY(DMARegion::create(E1000_RX_BUFFER_SIZE * E1000_RX_DESCRIPTOR_COUNT));
m_rx_buffer_region = TRY(DMARegion::create(E1000_RX_BUFFER_SIZE * E1000_RX_DESCRIPTOR_COUNT, PageTable::MemoryType::Normal));
m_rx_descriptor_region = TRY(DMARegion::create(sizeof(e1000_rx_desc) * E1000_RX_DESCRIPTOR_COUNT));
auto* rx_descriptors = reinterpret_cast<volatile e1000_rx_desc*>(m_rx_descriptor_region->vaddr());
@@ -207,7 +207,7 @@ namespace Kernel
BAN::ErrorOr<void> E1000::initialize_tx()
{
m_tx_buffer_region = TRY(DMARegion::create(E1000_TX_BUFFER_SIZE * E1000_TX_DESCRIPTOR_COUNT));
m_tx_buffer_region = TRY(DMARegion::create(E1000_TX_BUFFER_SIZE * E1000_TX_DESCRIPTOR_COUNT, PageTable::MemoryType::Normal));
m_tx_descriptor_region = TRY(DMARegion::create(sizeof(e1000_tx_desc) * E1000_TX_DESCRIPTOR_COUNT));
auto* tx_descriptors = reinterpret_cast<volatile e1000_tx_desc*>(m_tx_descriptor_region->vaddr());
@@ -304,7 +304,7 @@ namespace Kernel
// FIXME: there isn't really any reason to wait for transmission
write32(REG_TDT, (tx_current + 1) % E1000_TX_DESCRIPTOR_COUNT);
while (descriptor.status == 0)
continue;
Processor::pause();
dprintln_if(DEBUG_E1000, "sent {} bytes", packet_size);

View File

@@ -14,11 +14,10 @@ namespace Kernel
loopback->m_buffer = TRY(VirtualRange::create_to_vaddr_range(
PageTable::kernel(),
KERNEL_OFFSET,
BAN::numeric_limits<vaddr_t>::max(),
{ KERNEL_OFFSET, UINTPTR_MAX },
buffer_size * buffer_count,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true, false
false
));
auto* thread = TRY(Thread::create_kernel([](void* loopback_ptr) {

View File

@@ -18,16 +18,20 @@ namespace Kernel
const uint16_t* buffer_u16 = reinterpret_cast<const uint16_t*>(buffer.data());
for (size_t j = 0; j < buffer.size() / 2; j++)
checksum += BAN::host_to_network_endian(buffer_u16[j]);
if (buffer.size() % 2 == 0)
continue;
ASSERT(i == buffers.size() - 1);
checksum += buffer[buffer.size() - 1] << 8;
checksum += buffer_u16[j];
if (buffer.size() % 2)
{
// NOTE: we only allow the last buffer to be odd-length
ASSERT(i == buffers.size() - 1);
checksum += buffer[buffer.size() - 1];
}
}
while (checksum >> 16)
checksum = (checksum >> 16) + (checksum & 0xFFFF);
return ~(uint16_t)checksum;
checksum = (checksum & 0xFFFF) + (checksum >> 16);
return BAN::host_to_network_endian<uint16_t>(~checksum);
}
}

View File

@@ -107,7 +107,7 @@ namespace Kernel
BAN::ErrorOr<void> RTL8169::initialize_rx()
{
m_rx_buffer_region = TRY(DMARegion::create(m_rx_descriptor_count * s_buffer_size));
m_rx_buffer_region = TRY(DMARegion::create(m_rx_descriptor_count * s_buffer_size, PageTable::MemoryType::Normal));
m_rx_descriptor_region = TRY(DMARegion::create(m_rx_descriptor_count * sizeof(RTL8169Descriptor)));
for (size_t i = 0; i < m_rx_descriptor_count; i++)
@@ -144,7 +144,7 @@ namespace Kernel
BAN::ErrorOr<void> RTL8169::initialize_tx()
{
m_tx_buffer_region = TRY(DMARegion::create(m_tx_descriptor_count * s_buffer_size));
m_tx_buffer_region = TRY(DMARegion::create(m_tx_descriptor_count * s_buffer_size, PageTable::MemoryType::Normal));
m_tx_descriptor_region = TRY(DMARegion::create(m_tx_descriptor_count * sizeof(RTL8169Descriptor)));
for (size_t i = 0; i < m_tx_descriptor_count; i++)

View File

@@ -14,11 +14,10 @@ namespace Kernel
auto socket = TRY(BAN::RefPtr<UDPSocket>::create(network_layer, info));
socket->m_packet_buffer = TRY(VirtualRange::create_to_vaddr_range(
PageTable::kernel(),
KERNEL_OFFSET,
~(uintptr_t)0,
{ KERNEL_OFFSET, UINTPTR_MAX },
packet_buffer_size,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true, false
false
));
return socket;
}

View File

@@ -14,15 +14,7 @@
namespace Kernel
{
struct UnixSocketHash
{
BAN::hash_t operator()(const BAN::RefPtr<Inode>& socket)
{
return BAN::hash<const Inode*>{}(socket.ptr());
}
};
static BAN::HashMap<BAN::RefPtr<Inode>, BAN::WeakPtr<UnixDomainSocket>, UnixSocketHash> s_bound_sockets;
static BAN::HashMap<BAN::RefPtr<Inode>, BAN::WeakPtr<UnixDomainSocket>> s_bound_sockets;
static Mutex s_bound_socket_lock;
static constexpr size_t s_packet_buffer_size = 0x10000;
@@ -50,11 +42,10 @@ namespace Kernel
auto socket = TRY(BAN::RefPtr<UnixDomainSocket>::create(socket_type, info));
socket->m_packet_buffer = TRY(VirtualRange::create_to_vaddr_range(
PageTable::kernel(),
KERNEL_OFFSET,
~(uintptr_t)0,
{ KERNEL_OFFSET, UINTPTR_MAX },
s_packet_buffer_size,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true, false
false
));
return socket;
}

View File

@@ -51,17 +51,19 @@ namespace Kernel
close_all();
for (int fd = 0; fd < (int)other.m_open_files.size(); fd++)
for (int fd = 0; fd < static_cast<int>(other.m_open_files.size()); fd++)
{
if (other.validate_fd(fd).is_error())
continue;
auto& open_file = m_open_files[fd];
open_file.description = other.m_open_files[fd].description;
open_file.descriptor_flags = other.m_open_files[fd].descriptor_flags;
open_file.inode()->on_clone(open_file.status_flags());
open_file = other.m_open_files[fd];
open_file->file.inode->on_clone(open_file->status_flags);
}
for (size_t i = 0; i < m_cloexec_files.size(); i++)
m_cloexec_files[i] = other.m_cloexec_files[i];
return {};
}
@@ -81,11 +83,15 @@ namespace Kernel
if ((flags & O_TRUNC) && (flags & O_WRONLY) && file.inode->mode().ifreg())
TRY(file.inode->truncate(0));
LockGuard _(m_mutex);
constexpr int status_mask = O_APPEND | O_DSYNC | O_NONBLOCK | O_RSYNC | O_SYNC | O_ACCMODE;
int fd = TRY(get_free_fd());
m_open_files[fd].description = TRY(BAN::RefPtr<OpenFileDescription>::create(BAN::move(file), 0, flags & status_mask));
m_open_files[fd].descriptor_flags = flags & O_CLOEXEC;
LockGuard _(m_mutex);
const int fd = TRY(get_free_fd());
m_open_files[fd] = TRY(BAN::RefPtr<OpenFileDescription>::create(BAN::move(file), 0, flags & status_mask));
if (flags & O_CLOEXEC)
add_cloexec(fd);
return fd;
}
@@ -99,7 +105,7 @@ namespace Kernel
Socket::Domain domain;
Socket::Type type;
int status_flags;
int descriptor_flags;
bool cloexec;
};
static BAN::ErrorOr<SocketInfo> parse_socket_info(int domain, int type, int protocol)
@@ -124,11 +130,11 @@ namespace Kernel
}
info.status_flags = 0;
info.descriptor_flags = 0;
info.cloexec = false;
if (type & SOCK_NONBLOCK)
info.status_flags |= O_NONBLOCK;
if (type & SOCK_CLOEXEC)
info.descriptor_flags |= O_CLOEXEC;
info.cloexec = true;
type &= ~(SOCK_NONBLOCK | SOCK_CLOEXEC);
switch (type)
@@ -171,9 +177,12 @@ namespace Kernel
socket_sv = "<udp socket>";
LockGuard _(m_mutex);
int fd = TRY(get_free_fd());
m_open_files[fd].description = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(socket, socket_sv), 0, O_RDWR | sock_info.status_flags));
m_open_files[fd].descriptor_flags = sock_info.descriptor_flags;
const int fd = TRY(get_free_fd());
m_open_files[fd] = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(socket, socket_sv), 0, O_RDWR | sock_info.status_flags));
if (sock_info.cloexec)
add_cloexec(fd);
return fd;
}
@@ -188,10 +197,14 @@ namespace Kernel
LockGuard _(m_mutex);
TRY(get_free_fd_pair(socket_vector));
m_open_files[socket_vector[0]].description = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(socket1, "<socketpair>"_sv), 0, O_RDWR | sock_info.status_flags));
m_open_files[socket_vector[0]].descriptor_flags = sock_info.descriptor_flags;
m_open_files[socket_vector[1]].description = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(socket2, "<socketpair>"_sv), 0, O_RDWR | sock_info.status_flags));
m_open_files[socket_vector[1]].descriptor_flags = sock_info.descriptor_flags;
m_open_files[socket_vector[0]] = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(socket1, "<socketpair>"_sv), 0, O_RDWR | sock_info.status_flags));
m_open_files[socket_vector[1]] = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(socket2, "<socketpair>"_sv), 0, O_RDWR | sock_info.status_flags));
if (sock_info.cloexec)
{
add_cloexec(socket_vector[0]);
add_cloexec(socket_vector[1]);
}
return {};
}
@@ -202,10 +215,11 @@ namespace Kernel
TRY(get_free_fd_pair(fds));
auto pipe = TRY(Pipe::create(m_credentials));
m_open_files[fds[0]].description = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(pipe, "<pipe rd>"_sv), 0, O_RDONLY));
m_open_files[fds[0]].descriptor_flags = 0;
m_open_files[fds[1]].description = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(pipe, "<pipe wr>"_sv), 0, O_WRONLY));
m_open_files[fds[1]].descriptor_flags = 0;
m_open_files[fds[0]] = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(pipe, "<pipe rd>"_sv), 0, O_RDONLY));
m_open_files[fds[1]] = TRY(BAN::RefPtr<OpenFileDescription>::create(VirtualFileSystem::File(pipe, "<pipe wr>"_sv), 0, O_WRONLY));
ASSERT(!is_cloexec(fds[0]));
ASSERT(!is_cloexec(fds[1]));
return {};
}
@@ -224,9 +238,10 @@ namespace Kernel
(void)close(fildes2);
auto& open_file = m_open_files[fildes2];
open_file.description = m_open_files[fildes].description;
open_file.descriptor_flags = 0;
open_file.inode()->on_clone(open_file.status_flags());
open_file = m_open_files[fildes];
open_file->file.inode->on_clone(open_file->status_flags);
ASSERT(!is_cloexec(fildes2));
return fildes2;
}
@@ -246,18 +261,16 @@ namespace Kernel
TRY(validate_fd(fd));
const auto& open_file = m_open_files[fd];
inode = open_file.inode();
inode = m_open_files[fd]->file.inode;
switch (flock.l_whence)
{
case SEEK_SET:
break;
case SEEK_CUR:
if (BAN::Math::will_addition_overflow(flock.l_start, open_file.offset()))
if (BAN::Math::will_addition_overflow(flock.l_start, m_open_files[fd]->offset))
return BAN::Error::from_errno(EOVERFLOW);
flock.l_start += m_open_files[fd].offset();
flock.l_start += m_open_files[fd]->offset;
break;
case SEEK_END:
if (BAN::Math::will_addition_overflow(flock.l_start, inode->size()))
@@ -307,26 +320,27 @@ namespace Kernel
const int new_fd = TRY(get_free_fd());
auto& open_file = m_open_files[new_fd];
open_file.description = m_open_files[fd].description;
open_file.descriptor_flags = (cmd == F_DUPFD_CLOEXEC) ? O_CLOEXEC : 0;
open_file.inode()->on_clone(open_file.status_flags());
open_file = m_open_files[fd];
open_file->file.inode->on_clone(open_file->status_flags);
if (cmd == F_DUPFD_CLOEXEC)
add_cloexec(new_fd);
return new_fd;
}
case F_GETFD:
return m_open_files[fd].descriptor_flags;
return is_cloexec(fd) ? O_CLOEXEC : 0;
case F_SETFD:
if (extra & FD_CLOEXEC)
m_open_files[fd].descriptor_flags |= O_CLOEXEC;
add_cloexec(fd);
else
m_open_files[fd].descriptor_flags &= ~O_CLOEXEC;
remove_cloexec(fd);
return 0;
case F_GETFL:
return m_open_files[fd].status_flags();
return m_open_files[fd]->status_flags;
case F_SETFL:
extra &= O_APPEND | O_DSYNC | O_NONBLOCK | O_RSYNC | O_SYNC;
m_open_files[fd].status_flags() &= O_ACCMODE;
m_open_files[fd].status_flags() |= extra;
m_open_files[fd]->status_flags &= O_ACCMODE;
m_open_files[fd]->status_flags |= extra;
return 0;
default:
break;
@@ -349,10 +363,10 @@ namespace Kernel
base_offset = 0;
break;
case SEEK_CUR:
base_offset = m_open_files[fd].offset();
base_offset = m_open_files[fd]->offset;
break;
case SEEK_END:
base_offset = m_open_files[fd].inode()->size();
base_offset = m_open_files[fd]->file.inode->size();
break;
default:
return BAN::Error::from_errno(EINVAL);
@@ -362,7 +376,7 @@ namespace Kernel
if (new_offset < 0)
return BAN::Error::from_errno(EINVAL);
m_open_files[fd].offset() = new_offset;
m_open_files[fd]->offset = new_offset;
return new_offset;
}
@@ -371,14 +385,14 @@ namespace Kernel
{
LockGuard _(m_mutex);
TRY(validate_fd(fd));
return m_open_files[fd].offset();
return m_open_files[fd]->offset;
}
BAN::ErrorOr<void> OpenFileDescriptorSet::truncate(int fd, off_t length)
{
LockGuard _(m_mutex);
TRY(validate_fd(fd));
return m_open_files[fd].inode()->truncate(length);
return m_open_files[fd]->file.inode->truncate(length);
}
BAN::ErrorOr<void> OpenFileDescriptorSet::close(int fd)
@@ -389,7 +403,7 @@ namespace Kernel
auto& open_file = m_open_files[fd];
if (auto& flock = open_file.description->flock; Thread::current().has_process() && flock.lockers.contains(Process::current().pid()))
if (auto& flock = open_file->flock; Thread::current().has_process() && flock.lockers.contains(Process::current().pid()))
{
flock.lockers.remove(Process::current().pid());
if (flock.lockers.empty())
@@ -401,7 +415,7 @@ namespace Kernel
{
LockGuard _(s_fcntl_lock_mutex);
if (auto it = s_fcntl_locks.find(open_file.inode()); it != s_fcntl_locks.end())
if (auto it = s_fcntl_locks.find(open_file->file.inode); it != s_fcntl_locks.end())
{
auto& flocks = it->value;
for (size_t i = 0; i < flocks.size(); i++)
@@ -411,9 +425,9 @@ namespace Kernel
}
}
open_file.inode()->on_close(open_file.status_flags());
open_file.description.clear();
open_file.descriptor_flags = 0;
open_file->file.inode->on_close(open_file->status_flags);
open_file = {};
remove_cloexec(fd);
return {};
}
@@ -421,20 +435,31 @@ namespace Kernel
void OpenFileDescriptorSet::close_all()
{
LockGuard _(m_mutex);
for (int fd = 0; fd < (int)m_open_files.size(); fd++)
for (int fd = 0; fd < static_cast<int>(m_open_files.size()); fd++)
(void)close(fd);
}
void OpenFileDescriptorSet::close_cloexec()
{
LockGuard _(m_mutex);
for (int fd = 0; fd < (int)m_open_files.size(); fd++)
{
if (validate_fd(fd).is_error())
continue;
if (m_open_files[fd].descriptor_flags & O_CLOEXEC)
for (int fd = 0; fd < static_cast<int>(m_open_files.size()); fd++)
if (is_cloexec(fd))
(void)close(fd);
}
}
bool OpenFileDescriptorSet::is_cloexec(int fd)
{
return m_cloexec_files[fd / 32] & (1u << (fd % 32));
}
void OpenFileDescriptorSet::add_cloexec(int fd)
{
m_cloexec_files[fd / 32] |= 1u << (fd % 32);
}
void OpenFileDescriptorSet::remove_cloexec(int fd)
{
m_cloexec_files[fd / 32] &= ~(1u << (fd % 32));
}
static BAN::ErrorOr<void> fcntl_lock(BAN::RefPtr<Inode> inode, int cmd, struct flock& flock)
@@ -588,7 +613,7 @@ namespace Kernel
{
TRY(validate_fd(fd));
auto& flock = m_open_files[fd].description->flock;
auto& flock = m_open_files[fd]->flock;
switch (op & ~LOCK_NB)
{
case LOCK_UN:
@@ -645,11 +670,11 @@ namespace Kernel
LockGuard _(m_mutex);
TRY(validate_fd(fd));
auto& open_file = m_open_files[fd];
if (!(open_file.status_flags() & O_RDONLY))
if (!(open_file->status_flags & O_RDONLY))
return BAN::Error::from_errno(EBADF);
inode = open_file.inode();
is_nonblock = !!(open_file.status_flags() & O_NONBLOCK);
offset = open_file.offset();
inode = open_file->file.inode;
is_nonblock = !!(open_file->status_flags & O_NONBLOCK);
offset = open_file->offset;
}
if (inode->mode().ifsock())
@@ -685,7 +710,7 @@ namespace Kernel
LockGuard _(m_mutex);
// NOTE: race condition with offset, it's UB per POSIX
if (!validate_fd(fd).is_error())
m_open_files[fd].offset() = offset + nread;
m_open_files[fd]->offset = offset + nread;
return nread;
}
@@ -699,11 +724,11 @@ namespace Kernel
LockGuard _(m_mutex);
TRY(validate_fd(fd));
auto& open_file = m_open_files[fd];
if (!(open_file.status_flags() & O_WRONLY))
if (!(open_file->status_flags & O_WRONLY))
return BAN::Error::from_errno(EBADF);
inode = open_file.inode();
is_nonblock = !!(open_file.status_flags() & O_NONBLOCK);
offset = (open_file.status_flags() & O_APPEND) ? inode->size() : open_file.offset();
inode = open_file->file.inode;
is_nonblock = !!(open_file->status_flags & O_NONBLOCK);
offset = (open_file->status_flags & O_APPEND) ? inode->size() : open_file->offset;
}
if (inode->mode().ifsock())
@@ -742,7 +767,7 @@ namespace Kernel
LockGuard _(m_mutex);
// NOTE: race condition with offset, it's UB per POSIX
if (!validate_fd(fd).is_error())
m_open_files[fd].offset() = offset + nwrite;
m_open_files[fd]->offset = offset + nwrite;
return nwrite;
}
@@ -755,10 +780,10 @@ namespace Kernel
LockGuard _(m_mutex);
TRY(validate_fd(fd));
auto& open_file = m_open_files[fd];
if (!(open_file.status_flags() & O_RDONLY))
if (!(open_file->status_flags & O_RDONLY))
return BAN::Error::from_errno(EACCES);
inode = open_file.inode();
offset = open_file.offset();
inode = open_file->file.inode;
offset = open_file->offset;
}
for (;;)
@@ -773,7 +798,7 @@ namespace Kernel
LockGuard _(m_mutex);
// NOTE: race condition with offset, its UB per POSIX
if (!validate_fd(fd).is_error())
m_open_files[fd].offset() = offset;
m_open_files[fd]->offset = offset;
return ret;
}
}
@@ -787,10 +812,10 @@ namespace Kernel
LockGuard _(m_mutex);
TRY(validate_fd(fd));
auto& open_file = m_open_files[fd];
if (!open_file.inode()->mode().ifsock())
if (!open_file->file.inode->mode().ifsock())
return BAN::Error::from_errno(ENOTSOCK);
inode = open_file.inode();
is_nonblock = !!(open_file.status_flags() & O_NONBLOCK);
inode = open_file->file.inode;
is_nonblock = !!(open_file->status_flags & O_NONBLOCK);
}
LockGuard _(inode->m_mutex);
@@ -808,10 +833,10 @@ namespace Kernel
LockGuard _(m_mutex);
TRY(validate_fd(fd));
auto& open_file = m_open_files[fd];
if (!open_file.inode()->mode().ifsock())
if (!open_file->file.inode->mode().ifsock())
return BAN::Error::from_errno(ENOTSOCK);
inode = open_file.inode();
is_nonblock = !!(open_file.status_flags() & O_NONBLOCK);
inode = open_file->file.inode;
is_nonblock = !!(open_file->status_flags & O_NONBLOCK);
}
LockGuard _(inode->m_mutex);
@@ -830,7 +855,7 @@ namespace Kernel
{
LockGuard _(m_mutex);
TRY(validate_fd(fd));
return TRY(m_open_files[fd].description->file.clone());
return TRY(m_open_files[fd]->file.clone());
}
BAN::ErrorOr<BAN::String> OpenFileDescriptorSet::path_of(int fd) const
@@ -838,7 +863,7 @@ namespace Kernel
LockGuard _(m_mutex);
TRY(validate_fd(fd));
BAN::String path;
TRY(path.append(m_open_files[fd].path()));
TRY(path.append(m_open_files[fd]->file.canonical_path));
return path;
}
@@ -846,14 +871,14 @@ namespace Kernel
{
LockGuard _(m_mutex);
TRY(validate_fd(fd));
return m_open_files[fd].inode();
return m_open_files[fd]->file.inode;
}
BAN::ErrorOr<int> OpenFileDescriptorSet::status_flags_of(int fd) const
{
LockGuard _(m_mutex);
TRY(validate_fd(fd));
return m_open_files[fd].status_flags();
return m_open_files[fd]->status_flags;
}
BAN::ErrorOr<void> OpenFileDescriptorSet::validate_fd(int fd) const
@@ -861,7 +886,7 @@ namespace Kernel
LockGuard _(m_mutex);
if (fd < 0 || fd >= (int)m_open_files.size())
return BAN::Error::from_errno(EBADF);
if (!m_open_files[fd].description)
if (!m_open_files[fd])
return BAN::Error::from_errno(EBADF);
return {};
}
@@ -870,7 +895,7 @@ namespace Kernel
{
LockGuard _(m_mutex);
for (int fd = 0; fd < (int)m_open_files.size(); fd++)
if (!m_open_files[fd].description)
if (!m_open_files[fd])
return fd;
return BAN::Error::from_errno(EMFILE);
}
@@ -881,7 +906,7 @@ namespace Kernel
size_t found = 0;
for (int fd = 0; fd < (int)m_open_files.size(); fd++)
{
if (!m_open_files[fd].description)
if (!m_open_files[fd])
fds[found++] = fd;
if (found == 2)
return {};
@@ -929,7 +954,7 @@ namespace Kernel
{
LockGuard _(m_mutex);
TRY(validate_fd(fd));
return FDWrapper { m_open_files[fd].description };
return FDWrapper { m_open_files[fd] };
}
size_t OpenFileDescriptorSet::open_all_fd_wrappers(BAN::Span<FDWrapper> fd_wrappers)
@@ -943,8 +968,7 @@ namespace Kernel
return i;
const int fd = fd_or_error.release_value();
m_open_files[fd].description = BAN::move(fd_wrappers[i].m_description);
m_open_files[fd].descriptor_flags = 0;
m_open_files[fd] = BAN::move(fd_wrappers[i].m_description);
fd_wrappers[i].m_fd = fd;
}

View File

@@ -34,6 +34,8 @@
#include <sys/sysmacros.h>
#include <sys/wait.h>
#pragma GCC diagnostic ignored "-Wstack-usage="
namespace Kernel
{
@@ -303,6 +305,8 @@ namespace Kernel
void Process::exit(int status, int signal)
{
ASSERT(Processor::get_interrupt_state() == InterruptState::Enabled);
bool expected = false;
if (!m_is_exiting.compare_exchange(expected, true))
{
@@ -310,9 +314,6 @@ namespace Kernel
ASSERT_NOT_REACHED();
}
const auto state = Processor::get_interrupt_state();
Processor::set_interrupt_state(InterruptState::Enabled);
if (m_parent)
{
Process* parent_process = nullptr;
@@ -369,8 +370,6 @@ namespace Kernel
while (m_threads.size() > 1)
Processor::yield();
Processor::set_interrupt_state(state);
Thread::current().on_exit();
ASSERT_NOT_REACHED();
@@ -2951,7 +2950,7 @@ namespace Kernel
for (;;)
{
while (Thread::current().will_exit_because_of_signal())
Thread::current().handle_signal();
Thread::current().handle_signal_if_interrupted();
SpinLockGuard guard(m_signal_lock);
if (!m_stopped)
@@ -3050,7 +3049,7 @@ namespace Kernel
TRY(read_from_user(user_act, &new_act, sizeof(struct sigaction)));
{
SpinLockGuard signal_lock_guard(m_signal_lock);
SpinLockGuard _(m_signal_lock);
old_act = m_signal_handlers[signal];
if (user_act != nullptr)
m_signal_handlers[signal] = new_act;
@@ -3064,7 +3063,14 @@ namespace Kernel
BAN::ErrorOr<long> Process::sys_sigpending(sigset_t* user_sigset)
{
const sigset_t sigset = (signal_pending_mask() | Thread::current().m_signal_pending_mask) & Thread::current().m_signal_block_mask;
sigset_t sigset;
{
auto& thread = Thread::current();
SpinLockGuard _(thread.m_signal_lock);
sigset = (signal_pending_mask() | thread.m_signal_pending_mask) & thread.m_signal_block_mask;
}
TRY(write_to_user(user_sigset, &sigset, sizeof(sigset_t)));
return 0;
}
@@ -3073,34 +3079,36 @@ namespace Kernel
{
LockGuard _(m_process_lock);
if (user_oset != nullptr)
{
const sigset_t current = Thread::current().m_signal_block_mask;
TRY(write_to_user(user_oset, &current, sizeof(sigset_t)));
}
auto& thread = Thread::current();
const sigset_t old_sigset = thread.m_signal_block_mask;
if (user_set != nullptr)
{
sigset_t set;
TRY(read_from_user(user_set, &set, sizeof(sigset_t)));
sigset_t mask;
TRY(read_from_user(user_set, &mask, sizeof(sigset_t)));
mask &= ~((1ull << SIGKILL) | (1ull << SIGSTOP));
const sigset_t mask = set & ~(SIGKILL | SIGSTOP);
SpinLockGuard _(thread.m_signal_lock);
switch (how)
{
case SIG_BLOCK:
Thread::current().m_signal_block_mask |= mask;
break;
case SIG_SETMASK:
Thread::current().m_signal_block_mask = mask;
thread.m_signal_block_mask |= mask;
break;
case SIG_UNBLOCK:
Thread::current().m_signal_block_mask &= ~mask;
thread.m_signal_block_mask &= ~mask;
break;
case SIG_SETMASK:
thread.m_signal_block_mask = mask;
break;
default:
return BAN::Error::from_errno(EINVAL);
}
}
if (user_oset != nullptr)
TRY(write_to_user(user_oset, &old_sigset, sizeof(sigset_t)));
return 0;
}
@@ -3112,7 +3120,7 @@ namespace Kernel
LockGuard _(m_process_lock);
auto& thread = Thread::current();
thread.set_suspend_signal_mask(set & ~(SIGKILL | SIGSTOP));
thread.set_suspend_signal_mask(set & ~((1ull << SIGKILL) | (1ull << SIGSTOP)));
// FIXME: I *think* there is a race condition here as kill doesn't hold the process lock
while (!thread.is_interrupted_by_signal())
@@ -3207,6 +3215,9 @@ namespace Kernel
return BAN::Error::from_errno(ENOSYS);
}
if (op == FUTEX_WAIT && BAN::atomic_load(*addr) != val)
return BAN::Error::from_errno(EAGAIN);
uint64_t wake_time_ns = BAN::numeric_limits<uint64_t>::max();
if (user_abstime != nullptr)
@@ -3228,9 +3239,6 @@ namespace Kernel
}
}
if (op == FUTEX_WAIT && BAN::atomic_load(*addr) != val)
return BAN::Error::from_errno(EAGAIN);
futex_t* futex;
{
@@ -3305,7 +3313,8 @@ namespace Kernel
BAN::ErrorOr<long> Process::sys_set_fsbase(void* addr)
{
Thread::current().set_fsbase(reinterpret_cast<vaddr_t>(addr));
auto& thread = Thread::current();
thread.set_fsbase(reinterpret_cast<vaddr_t>(addr));
Processor::load_fsbase();
return 0;
}
@@ -3317,7 +3326,8 @@ namespace Kernel
BAN::ErrorOr<long> Process::sys_set_gsbase(void* addr)
{
Thread::current().set_gsbase(reinterpret_cast<vaddr_t>(addr));
auto& thread = Thread::current();
thread.set_gsbase(reinterpret_cast<vaddr_t>(addr));
Processor::load_gsbase();
return 0;
}
@@ -3345,9 +3355,8 @@ namespace Kernel
{
LockGuard _(m_process_lock);
// main thread cannot call pthread_exit
if (&Thread::current() == m_threads.front())
return BAN::Error::from_errno(EINVAL);
if (m_threads.size() == 1)
return sys_exit(0);
TRY(m_exited_pthreads.emplace_back(Thread::current().tid(), value));
@@ -3387,7 +3396,8 @@ namespace Kernel
{
if (auto ret = check_thread(); ret.has_value())
{
TRY(write_to_user(user_value, &ret.value(), sizeof(void*)));
if (user_value != nullptr)
TRY(write_to_user(user_value, &ret.value(), sizeof(void*)));
return 0;
}
@@ -3887,237 +3897,45 @@ namespace Kernel
return region->allocate_page_containing(address, wants_write);
}
// TODO: The following 3 functions could be simplified into one generic helper function
extern "C" bool safe_user_memcpy(void*, const void*, size_t);
extern "C" bool safe_user_strncpy(void*, const void*, size_t);
static inline bool is_valid_user_address(const void* user_addr, size_t size)
{
const vaddr_t user_vaddr = reinterpret_cast<vaddr_t>(user_addr);
if (BAN::Math::will_addition_overflow<vaddr_t>(user_vaddr, size))
return false;
if (user_vaddr + size > USERSPACE_END)
return false;
return true;
}
BAN::ErrorOr<void> Process::read_from_user(const void* user_addr, void* out, size_t size)
{
const vaddr_t user_vaddr = reinterpret_cast<vaddr_t>(user_addr);
auto* out_u8 = static_cast<uint8_t*>(out);
size_t ncopied = 0;
{
RWLockRDGuard _(m_memory_region_lock);
const size_t first_index = find_mapped_region(user_vaddr);
for (size_t i = first_index; ncopied < size && i < m_mapped_regions.size(); i++)
{
auto& region = m_mapped_regions[i];
if (!region->contains(user_vaddr + ncopied))
return BAN::Error::from_errno(EFAULT);
const size_t ncopy = BAN::Math::min<size_t>(
(region->vaddr() + region->size()) - (user_vaddr + ncopied),
size - ncopied
);
const size_t page_count = range_page_count(user_vaddr + ncopied, ncopy);
const vaddr_t page_base = (user_vaddr + ncopied) & PAGE_ADDR_MASK;
for (size_t p = 0; p < page_count; p++)
{
const auto flags = PageTable::UserSupervisor | PageTable::Present;
if ((m_page_table->get_page_flags(page_base + p * PAGE_SIZE) & flags) != flags)
goto read_from_user_with_allocation;
}
memcpy(out_u8 + ncopied, reinterpret_cast<void*>(user_vaddr + ncopied), ncopy);
ncopied += ncopy;
}
if (ncopied >= size)
return {};
if (ncopied > 0)
return BAN::Error::from_errno(EFAULT);
}
read_from_user_with_allocation:
RWLockWRGuard _(m_memory_region_lock);
const size_t first_index = find_mapped_region(user_vaddr + ncopied);
for (size_t i = first_index; ncopied < size && i < m_mapped_regions.size(); i++)
{
auto& region = m_mapped_regions[i];
if (!region->contains(user_vaddr + ncopied))
return BAN::Error::from_errno(EFAULT);
const size_t ncopy = BAN::Math::min<size_t>(
(region->vaddr() + region->size()) - (user_vaddr + ncopied),
size - ncopied
);
const size_t page_count = range_page_count(user_vaddr + ncopied, ncopy);
const vaddr_t page_base = (user_vaddr + ncopied) & PAGE_ADDR_MASK;
for (size_t p = 0; p < page_count; p++)
{
const auto flags = PageTable::UserSupervisor | PageTable::Present;
if ((m_page_table->get_page_flags(page_base + p * PAGE_SIZE) & flags) == flags)
continue;
if (!TRY(region->allocate_page_containing(page_base + p * PAGE_SIZE, false)))
return BAN::Error::from_errno(EFAULT);
}
memcpy(out_u8 + ncopied, reinterpret_cast<void*>(user_vaddr + ncopied), ncopy);
ncopied += ncopy;
}
if (ncopied >= size)
return {};
return BAN::Error::from_errno(EFAULT);
if (!is_valid_user_address(user_addr, size))
return BAN::Error::from_errno(EFAULT);
if (!safe_user_memcpy(out, user_addr, size))
return BAN::Error::from_errno(EFAULT);
return {};
}
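The copy loops in `read_from_user` rely on a `range_page_count` helper to walk the pages a byte range touches. Its definition is not part of this diff; a sketch of the presumed semantics (round the start down and the end up to page boundaries, count inclusive pages — names and the 4 KiB page size are assumptions here):

```cpp
#include <cstddef>
#include <cstdint>

// Number of pages touched by [addr, addr + size); assumes 4 KiB pages.
// A zero-sized range touches no pages.
size_t range_page_count(uintptr_t addr, size_t size)
{
    constexpr size_t page_size = 4096;
    if (size == 0)
        return 0;
    const uintptr_t first = addr / page_size;
    const uintptr_t last  = (addr + size - 1) / page_size;
    return last - first + 1;
}
```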
BAN::ErrorOr<void> Process::read_string_from_user(const char* user_addr, char* out, size_t max_size)
{
const vaddr_t user_vaddr = reinterpret_cast<vaddr_t>(user_addr);
size_t ncopied = 0;
{
RWLockRDGuard _(m_memory_region_lock);
const size_t first_index = find_mapped_region(user_vaddr);
for (size_t i = first_index; ncopied < max_size && i < m_mapped_regions.size(); i++)
{
auto& region = m_mapped_regions[i];
if (!region->contains(user_vaddr + ncopied))
return BAN::Error::from_errno(EFAULT);
vaddr_t last_page = 0;
for (; ncopied < max_size; ncopied++)
{
const vaddr_t curr_page = (user_vaddr + ncopied) & PAGE_ADDR_MASK;
if (curr_page != last_page)
{
const auto flags = PageTable::UserSupervisor | PageTable::Present;
if ((m_page_table->get_page_flags(curr_page) & flags) != flags)
goto read_string_from_user_with_allocation;
}
out[ncopied] = user_addr[ncopied];
if (out[ncopied] == '\0')
return {};
last_page = curr_page;
}
}
if (ncopied >= max_size)
return BAN::Error::from_errno(ENAMETOOLONG);
if (ncopied > 0)
return BAN::Error::from_errno(EFAULT);
}
read_string_from_user_with_allocation:
RWLockWRGuard _(m_memory_region_lock);
const size_t first_index = find_mapped_region(user_vaddr + ncopied);
for (size_t i = first_index; ncopied < max_size && i < m_mapped_regions.size(); i++)
{
auto& region = m_mapped_regions[i];
if (!region->contains(user_vaddr + ncopied))
return BAN::Error::from_errno(EFAULT);
vaddr_t last_page = 0;
for (; ncopied < max_size; ncopied++)
{
const vaddr_t curr_page = (user_vaddr + ncopied) & PAGE_ADDR_MASK;
if (curr_page != last_page)
{
const auto flags = PageTable::UserSupervisor | PageTable::Present;
if ((m_page_table->get_page_flags(curr_page) & flags) == flags)
;
else if (!TRY(region->allocate_page_containing(curr_page, false)))
return BAN::Error::from_errno(EFAULT);
}
out[ncopied] = user_addr[ncopied];
if (out[ncopied] == '\0')
return {};
last_page = curr_page;
}
}
if (ncopied >= max_size)
return BAN::Error::from_errno(ENAMETOOLONG);
return BAN::Error::from_errno(EFAULT);
max_size = BAN::Math::min<size_t>(max_size, USERSPACE_END - reinterpret_cast<vaddr_t>(user_addr));
if (!is_valid_user_address(user_addr, max_size))
return BAN::Error::from_errno(EFAULT);
if (!safe_user_strncpy(out, user_addr, max_size))
return BAN::Error::from_errno(EFAULT);
return {};
}
BAN::ErrorOr<void> Process::write_to_user(void* user_addr, const void* in, size_t size)
{
const vaddr_t user_vaddr = reinterpret_cast<vaddr_t>(user_addr);
const auto* in_u8 = static_cast<const uint8_t*>(in);
size_t ncopied = 0;
{
RWLockRDGuard _(m_memory_region_lock);
const size_t first_index = find_mapped_region(user_vaddr);
for (size_t i = first_index; ncopied < size && i < m_mapped_regions.size(); i++)
{
auto& region = m_mapped_regions[i];
if (!region->contains(user_vaddr + ncopied))
return BAN::Error::from_errno(EFAULT);
const size_t ncopy = BAN::Math::min<size_t>(
(region->vaddr() + region->size()) - (user_vaddr + ncopied),
size - ncopied
);
const size_t page_count = range_page_count(user_vaddr + ncopied, ncopy);
const vaddr_t page_base = (user_vaddr + ncopied) & PAGE_ADDR_MASK;
for (size_t i = 0; i < page_count; i++)
{
const auto flags = PageTable::UserSupervisor | PageTable::ReadWrite | PageTable::Present;
if ((m_page_table->get_page_flags(page_base + i * PAGE_SIZE) & flags) != flags)
goto write_to_user_with_allocation;
}
memcpy(reinterpret_cast<void*>(user_vaddr + ncopied), in_u8 + ncopied, ncopy);
ncopied += ncopy;
}
if (ncopied >= size)
return {};
if (ncopied > 0)
return BAN::Error::from_errno(EFAULT);
}
write_to_user_with_allocation:
RWLockWRGuard _(m_memory_region_lock);
const size_t first_index = find_mapped_region(user_vaddr + ncopied);
for (size_t i = first_index; ncopied < size && i < m_mapped_regions.size(); i++)
{
auto& region = m_mapped_regions[i];
if (!region->contains(user_vaddr + ncopied))
return BAN::Error::from_errno(EFAULT);
const size_t ncopy = BAN::Math::min<size_t>(
(region->vaddr() + region->size()) - (user_vaddr + ncopied),
size - ncopied
);
const size_t page_count = range_page_count(user_vaddr + ncopied, ncopy);
const vaddr_t page_base = (user_vaddr + ncopied) & PAGE_ADDR_MASK;
for (size_t p = 0; p < page_count; p++)
{
const auto flags = PageTable::UserSupervisor | PageTable::ReadWrite | PageTable::Present;
if ((m_page_table->get_page_flags(page_base + p * PAGE_SIZE) & flags) == flags)
continue;
if (!TRY(region->allocate_page_containing(page_base + p * PAGE_SIZE, true)))
return BAN::Error::from_errno(EFAULT);
}
memcpy(reinterpret_cast<void*>(user_vaddr + ncopied), in_u8 + ncopied, ncopy);
ncopied += ncopy;
}
if (ncopied >= size)
return {};
return BAN::Error::from_errno(EFAULT);
if (!is_valid_user_address(user_addr, size))
return BAN::Error::from_errno(EFAULT);
if (!safe_user_memcpy(user_addr, in, size))
return BAN::Error::from_errno(EFAULT);
return {};
}
BAN::ErrorOr<MemoryRegion*> Process::validate_and_pin_pointer_access(const void* ptr, size_t size, bool needs_write)
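The new fast path in these functions reduces to a range check (`is_valid_user_address`) followed by a fault-safe copy (`safe_user_memcpy` / `safe_user_strncpy`). The range check can be sketched standalone; the boundary constant here is an assumption (the canonical lower-half limit on x86_64), not taken from the kernel's headers:

```cpp
#include <cstddef>
#include <cstdint>

// Assumed stand-in for the kernel's USERSPACE_END.
constexpr uintptr_t kUserspaceEnd = uintptr_t(1) << 47;

// Mirrors is_valid_user_address(): reject ranges whose end computation
// would wrap around, then reject ranges reaching past the boundary.
bool is_valid_user_range(uintptr_t addr, size_t size)
{
    if (size > UINTPTR_MAX - addr) // addition would overflow
        return false;
    return addr + size <= kUserspaceEnd;
}
```

With the range validated up front, the remaining EFAULT case is a page that faults during the copy, which is exactly what the `safe_user_*` routines report via their boolean return.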

View File

@@ -73,9 +73,6 @@ namespace Kernel
ASSERT(processor.m_id == PROCESSOR_NONE);
processor.m_id = id;
processor.m_stack = kmalloc(s_stack_size, 4096, true);
ASSERT(processor.m_stack);
processor.m_gdt = GDT::create(&processor);
ASSERT(processor.m_gdt);
@@ -157,6 +154,21 @@ namespace Kernel
return processor;
}
// NOTE: I don't like this being a separate function but we need heap and page tables for this :)
void Processor::allocate_stack()
{
ASSERT(m_stack_paddr == 0);
ASSERT(m_stack_vaddr == 0);
m_stack_paddr = Heap::get().take_free_page();
ASSERT(m_stack_paddr);
m_stack_vaddr = PageTable::kernel().reserve_free_page(KERNEL_OFFSET);
ASSERT(m_stack_vaddr);
PageTable::kernel().map_page_at(m_stack_paddr, m_stack_vaddr, PageTable::ReadWrite | PageTable::Present);
}
void Processor::initialize_smp()
{
const auto processor_id = current_id();
@@ -205,8 +217,7 @@ namespace Kernel
memset(reinterpret_cast<void*>(s_shared_page_vaddr), 0, PAGE_SIZE);
auto& shared_page = *reinterpret_cast<volatile API::SharedPage*>(s_shared_page_vaddr);
for (size_t i = 0; i <= 0xFF; i++)
shared_page.__sequence[i] = i;
shared_page.gdt_cpu_offset = GDT::cpu_index_offset();
shared_page.features = 0;
ASSERT(Processor::count() + sizeof(Kernel::API::SharedPage) <= PAGE_SIZE);
@@ -377,8 +388,7 @@ namespace Kernel
case SMPMessage::Type::FlushTLB:
if (message->flush_tlb.page_table && message->flush_tlb.page_table != processor.m_current_page_table)
break;
for (size_t i = 0; i < message->flush_tlb.page_count; i++)
asm volatile("invlpg (%0)" :: "r"(message->flush_tlb.vaddr + i * PAGE_SIZE) : "memory");
PageTable::current().invalidate_range(message->flush_tlb.vaddr, message->flush_tlb.page_count, false);
break;
case SMPMessage::Type::NewThread:
processor.m_scheduler->add_thread(message->new_thread);
@@ -566,7 +576,7 @@ namespace Kernel
if (!scheduler().is_idle())
Thread::current().set_cpu_time_stop();
asm_yield_trampoline(Processor::current_stack_top());
asm_yield_trampoline(processor_info.stack_top_vaddr());
processor_info.m_start_ns = SystemTimer::get().ns_since_boot();

View File

@@ -139,6 +139,8 @@ namespace Kernel
if (Processor::count() > 1)
Processor::set_smp_enabled();
m_next_reschedule_ns = SystemTimer::get().ns_since_boot();
return {};
}
@@ -305,15 +307,14 @@ namespace Kernel
void Scheduler::wake_up_sleeping_threads()
{
ASSERT(Processor::get_interrupt_state() == InterruptState::Disabled);
const uint64_t current_ns = SystemTimer::get().ns_since_boot();
while (!m_block_queue.empty() && current_ns >= m_block_queue.front()->wake_time_ns)
{
auto* node = m_block_queue.pop_front();
{
SpinLockGuard _(node->blocker_lock);
if (node->blocker)
node->blocker->remove_blocked_thread(node);
}
if (auto* blocker = node->blocker.load())
blocker->remove_thread_from_block_queue(node);
node->blocked = false;
update_most_loaded_node_queue(node, &m_run_queue);
m_run_queue.add_thread_to_back(node);
@@ -348,13 +349,10 @@ namespace Kernel
wake_up_sleeping_threads();
if (SystemTimer::get().ns_since_boot() >= m_next_reschedule_ns)
{
const uint64_t current_ns = SystemTimer::get().ns_since_boot();
if (current_ns >= m_last_reschedule_ns + s_reschedule_interval_ns)
{
m_last_reschedule_ns = current_ns;
Processor::yield();
}
m_next_reschedule_ns += s_reschedule_interval_ns;
Processor::yield();
}
}
@@ -369,11 +367,8 @@ namespace Kernel
return;
if (node != m_current)
m_block_queue.remove_node(node);
{
SpinLockGuard _(node->blocker_lock);
if (node->blocker)
node->blocker->remove_blocked_thread(node);
}
if (auto* blocker = node->blocker.load())
blocker->remove_thread_from_block_queue(node);
node->blocked = false;
if (node != m_current)
m_run_queue.add_thread_to_back(node);
@@ -666,11 +661,8 @@ namespace Kernel
m_current->blocked = true;
m_current->wake_time_ns = wake_time_ns;
{
SpinLockGuard _(m_current->blocker_lock);
if (blocker)
blocker->add_thread_to_block_queue(m_current);
}
if (blocker)
blocker->add_thread_to_block_queue(m_current);
update_most_loaded_node_queue(m_current, &m_block_queue);

View File

@@ -7,6 +7,7 @@
#include <kernel/Timer/Timer.h>
#include <termios.h>
#include <sys/syscall.h>
#define DUMP_ALL_SYSCALLS 0
#define DUMP_LONG_SYSCALLS 0
@@ -94,13 +95,11 @@ namespace Kernel
Process::current().wait_while_stopped();
Processor::set_interrupt_state(InterruptState::Disabled);
if (Thread::current().handle_signal_if_interrupted())
if (ret.is_error() && ret.error().get_error_code() == EINTR && is_restartable_syscall(syscall))
ret = BAN::Error::from_errno(ERESTART);
auto& current_thread = Thread::current();
if (current_thread.can_add_signal_to_execute())
if (current_thread.handle_signal())
if (ret.is_error() && ret.error().get_error_code() == EINTR && is_restartable_syscall(syscall))
ret = BAN::Error::from_errno(ERESTART);
Processor::set_interrupt_state(InterruptState::Disabled);
ASSERT(Kernel::Thread::current().state() == Kernel::Thread::State::Executing);

View File

@@ -42,10 +42,10 @@ namespace Kernel
auto pts_master_buffer = TRY(VirtualRange::create_to_vaddr_range(
PageTable::kernel(),
KERNEL_OFFSET, static_cast<vaddr_t>(-1),
{ KERNEL_OFFSET, UINTPTR_MAX },
16 * PAGE_SIZE,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true, false
false
));
auto pts_master = TRY(BAN::RefPtr<PseudoTerminalMaster>::create(BAN::move(pts_master_buffer), mode, uid, gid));
DevFileSystem::get().remove_from_cache(pts_master);

View File

@@ -13,6 +13,12 @@
namespace Kernel
{
#if ARCH(x86_64)
static constexpr vaddr_t s_user_stack_addr_start = 0x0000700000000000;
#elif ARCH(i686)
static constexpr vaddr_t s_user_stack_addr_start = 0xB0000000;
#endif
extern "C" [[noreturn]] void start_kernel_thread();
extern "C" [[noreturn]] void start_userspace_thread();
@@ -25,11 +31,6 @@ namespace Kernel
*(uintptr_t*)rsp = (uintptr_t)value;
}
extern "C" uintptr_t get_thread_start_sp()
{
return Thread::current().yield_registers().sp;
}
static pid_t s_next_tid = 1;
alignas(16) static uint8_t s_default_sse_storage[512];
@@ -170,11 +171,10 @@ namespace Kernel
// Initialize stack and registers
thread->m_kernel_stack = TRY(VirtualRange::create_to_vaddr_range(
PageTable::kernel(),
KERNEL_OFFSET,
~(uintptr_t)0,
{ KERNEL_OFFSET, UINTPTR_MAX },
kernel_stack_size,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true, true
true
));
// Initialize stack for returning
@@ -205,24 +205,18 @@ namespace Kernel
thread->m_is_userspace = true;
#if ARCH(x86_64)
static constexpr vaddr_t stack_addr_start = 0x0000700000000000;
#elif ARCH(i686)
static constexpr vaddr_t stack_addr_start = 0xB0000000;
#endif
thread->m_kernel_stack = TRY(VirtualRange::create_to_vaddr_range(
page_table,
stack_addr_start, USERSPACE_END,
{ s_user_stack_addr_start, USERSPACE_END },
kernel_stack_size,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true, true
true
));
auto userspace_stack = TRY(MemoryBackedRegion::create(
page_table,
userspace_stack_size,
{ stack_addr_start, USERSPACE_END },
{ s_user_stack_addr_start, USERSPACE_END },
MemoryRegion::Type::PRIVATE,
PageTable::Flags::UserSupervisor | PageTable::Flags::ReadWrite | PageTable::Flags::Present,
O_RDWR
@@ -309,14 +303,7 @@ namespace Kernel
{
if (!is_userspace() || !has_process())
return;
const vaddr_t vaddr = process().shared_page_vaddr() + Processor::current_index();
#if ARCH(x86_64)
set_gsbase(vaddr);
#elif ARCH(i686)
set_fsbase(vaddr);
#endif
Processor::gdt().set_cpu_index(Processor::current_index());
}
BAN::ErrorOr<Thread*> Thread::pthread_create(entry_t entry, void* arg)
@@ -351,7 +338,24 @@ namespace Kernel
thread->m_is_userspace = true;
thread->m_kernel_stack = TRY(m_kernel_stack->clone(new_process->page_table()));
thread->m_kernel_stack = TRY(VirtualRange::create_to_vaddr_range(
new_process->page_table(),
{ s_user_stack_addr_start, USERSPACE_END },
kernel_stack_size,
PageTable::Flags::ReadWrite | PageTable::Flags::Present,
true
));
// NOTE: copy [sp, stack_end] so fork return works
PageTable::with_fast_page(thread->m_kernel_stack->paddr_of(thread->kernel_stack_top() - PAGE_SIZE), [&] {
const size_t ncopy = kernel_stack_top() - sp;
ASSERT(ncopy <= PAGE_SIZE);
memcpy(
PageTable::fast_page_as_ptr(PAGE_SIZE - ncopy),
reinterpret_cast<void*>(sp),
ncopy
);
});
const auto stack_index = new_process->find_mapped_region(m_userspace_stack->vaddr());
thread->m_userspace_stack = static_cast<MemoryBackedRegion*>(new_process->m_mapped_regions[stack_index].ptr());
@@ -572,19 +576,6 @@ namespace Kernel
return false;
}
bool Thread::can_add_signal_to_execute() const
{
return is_interrupted_by_signal() && m_mutex_count == 0;
}
bool Thread::will_execute_signal() const
{
if (!is_userspace() || m_state != State::Executing)
return false;
auto& interrupt_stack = *reinterpret_cast<InterruptStack*>(kernel_stack_top() - sizeof(InterruptStack));
return interrupt_stack.ip == (uintptr_t)signal_trampoline;
}
bool Thread::will_exit_because_of_signal() const
{
const uint64_t full_pending_mask = m_signal_pending_mask | process().signal_pending_mask();
@@ -596,21 +587,17 @@ namespace Kernel
return false;
}
bool Thread::handle_signal(int signal, const siginfo_t& _signal_info)
bool Thread::handle_signal_if_interrupted()
{
ASSERT(&Thread::current() == this);
ASSERT(is_userspace());
int signal;
siginfo_t signal_info;
signal_handle_info_t handle_info;
auto state = m_signal_lock.lock();
auto& interrupt_stack = *reinterpret_cast<InterruptStack*>(kernel_stack_top() - sizeof(InterruptStack));
ASSERT(GDT::is_user_segment(interrupt_stack.cs));
auto signal_info = _signal_info;
if (signal == 0)
{
const uint64_t process_signal_pending_mask = process().signal_pending_mask();
SpinLockGuard _1(m_signal_lock);
SpinLockGuard _2(m_process->m_signal_lock);
const uint64_t process_signal_pending_mask = m_process->m_signal_pending_mask;
const uint64_t full_pending_mask = m_signal_pending_mask | process_signal_pending_mask;
for (signal = _SIGMIN; signal <= _SIGMAX; signal++)
{
@@ -618,42 +605,90 @@ namespace Kernel
if ((full_pending_mask & mask) && !(m_signal_block_mask & mask))
break;
}
ASSERT(signal <= _SIGMAX);
if (signal > _SIGMAX)
return false;
if (process_signal_pending_mask & (1ull << signal))
signal_info = process().m_signal_infos[signal];
signal_info = m_process->m_signal_infos[signal];
else
signal_info = m_signal_infos[signal];
}
else
{
ASSERT(signal >= _SIGMIN);
ASSERT(signal <= _SIGMAX);
signal_info.si_signo = signal;
handle_info = remove_signal_and_get_info(signal);
}
vaddr_t signal_handler;
bool has_sa_restart;
handle_signal_impl(signal, signal_info, handle_info);
return handle_info.has_sa_restart;
}
bool Thread::handle_signal(int signal, const siginfo_t& signal_info)
{
ASSERT(&Thread::current() == this);
ASSERT(is_userspace());
ASSERT(signal >= _SIGMIN);
ASSERT(signal <= _SIGMAX);
signal_handle_info_t handle_info;
// If this signal is blocked or ignored, terminate the process
bool terminate_process = false;
{
SpinLockGuard _1(m_signal_lock);
SpinLockGuard _2(m_process->m_signal_lock);
if (m_signal_block_mask & (1ull << signal))
terminate_process = true;
handle_info = remove_signal_and_get_info(signal);
if (handle_info.handler == reinterpret_cast<vaddr_t>(SIG_IGN))
terminate_process = true;
}
if (terminate_process)
{
process().exit(128 + signal, signal | 0x80);
ASSERT_NOT_REACHED();
}
handle_signal_impl(signal, signal_info, handle_info);
return handle_info.has_sa_restart;
}
Thread::signal_handle_info_t Thread::remove_signal_and_get_info(int signal)
{
ASSERT(m_signal_lock.current_processor_has_lock());
ASSERT(m_process->m_signal_lock.current_processor_has_lock());
ASSERT(signal >= _SIGMIN);
ASSERT(signal <= _SIGMAX);
const uint64_t restore_sigmask = m_signal_block_mask;
auto& handler = m_process->m_signal_handlers[signal];
const vaddr_t signal_handler = (handler.sa_flags & SA_SIGINFO)
? reinterpret_cast<vaddr_t>(handler.sa_sigaction)
: reinterpret_cast<vaddr_t>(handler.sa_handler);
const bool has_sa_restart = !!(handler.sa_flags & SA_RESTART);
vaddr_t signal_stack_top = 0;
if (m_signal_alt_stack.ss_flags != SS_DISABLE && (handler.sa_flags & SA_ONSTACK) && !currently_on_alternate_stack())
signal_stack_top = reinterpret_cast<vaddr_t>(m_signal_alt_stack.ss_sp) + m_signal_alt_stack.ss_size;
{
SpinLockGuard _(m_process->m_signal_lock);
m_signal_block_mask |= handler.sa_mask;
if (!(handler.sa_flags & SA_NODEFER))
m_signal_block_mask |= 1ull << signal;
auto& handler = m_process->m_signal_handlers[signal];
signal_handler = (handler.sa_flags & SA_SIGINFO)
? reinterpret_cast<vaddr_t>(handler.sa_sigaction)
: reinterpret_cast<vaddr_t>(handler.sa_handler);
has_sa_restart = !!(handler.sa_flags & SA_RESTART);
const auto& alt_stack = m_signal_alt_stack;
if (alt_stack.ss_flags != SS_DISABLE && (handler.sa_flags & SA_ONSTACK) && !currently_on_alternate_stack())
signal_stack_top = reinterpret_cast<vaddr_t>(alt_stack.ss_sp) + alt_stack.ss_size;
if (handler.sa_flags & SA_RESETHAND)
handler = { .sa_handler = SIG_DFL, .sa_mask = 0, .sa_flags = 0 };
}
if (handler.sa_flags & SA_RESETHAND)
handler = { .sa_handler = SIG_DFL, .sa_mask = 0, .sa_flags = 0 };
m_signal_pending_mask &= ~(1ull << signal);
process().remove_pending_signal(signal);
m_process->m_signal_pending_mask &= ~(1ull << signal);
if (m_signal_suspend_mask.has_value())
{
@@ -661,66 +696,54 @@ namespace Kernel
m_signal_suspend_mask.clear();
}
m_signal_lock.unlock(state);
return {
.handler = signal_handler,
.stack_top = signal_stack_top,
.restore_sigmask = restore_sigmask,
.has_sa_restart = has_sa_restart,
};
}
if (signal_handler == (vaddr_t)SIG_IGN)
void Thread::handle_signal_impl(int signal, const siginfo_t& signal_info, const signal_handle_info_t& handle_info)
{
ASSERT(this == &Thread::current());
ASSERT(is_userspace());
ASSERT(Processor::get_interrupt_state() == InterruptState::Enabled);
auto& interrupt_stack = *reinterpret_cast<InterruptStack*>(kernel_stack_top() - sizeof(InterruptStack));
ASSERT(GDT::is_user_segment(interrupt_stack.cs));
if (handle_info.handler == reinterpret_cast<vaddr_t>(SIG_IGN))
;
else if (signal_handler != (vaddr_t)SIG_DFL)
else if (handle_info.handler != reinterpret_cast<vaddr_t>(SIG_DFL))
{
// call userspace signal handlers
#if ARCH(x86_64)
interrupt_stack.sp -= 128; // skip possible red-zone
#endif
{
// Make sure stack is allocated
vaddr_t pages[3] {};
size_t page_count { 0 };
if (signal_stack_top == 0)
const auto write_to_stack =
[&]<typename T>(uintptr_t& sp, const T& value)
{
pages[0] = (interrupt_stack.sp - 1 * sizeof(uintptr_t) ) & PAGE_ADDR_MASK;
pages[1] = (interrupt_stack.sp - 5 * sizeof(uintptr_t) - sizeof(siginfo_t)) & PAGE_ADDR_MASK;
page_count = 2;
}
else
{
pages[0] = (interrupt_stack.sp - 1 * sizeof(uintptr_t) ) & PAGE_ADDR_MASK;
pages[2] = (signal_stack_top - 1 * sizeof(uintptr_t) ) & PAGE_ADDR_MASK;
pages[1] = (signal_stack_top - 4 * sizeof(uintptr_t) - sizeof(siginfo_t)) & PAGE_ADDR_MASK;
page_count = 3;
}
for (size_t i = 0; i < page_count; i++)
{
if (m_process->page_table().get_page_flags(pages[i]) & PageTable::Flags::Present)
continue;
Processor::set_interrupt_state(InterruptState::Enabled);
if (auto ret = m_process->allocate_page_for_demand_paging(pages[i], true, false); ret.is_error() || !ret.value())
m_process->exit(128 + SIGSEGV, SIGSEGV);
Processor::set_interrupt_state(InterruptState::Disabled);
}
}
static_assert(sizeof(T) >= sizeof(uintptr_t));
sp -= sizeof(T);
if (m_process->write_to_user(reinterpret_cast<void*>(sp), &value, sizeof(T)).is_error())
m_process->exit(128 + SIGSEGV, SIGSEGV | 0x80);
};
write_to_stack(interrupt_stack.sp, interrupt_stack.ip);
const vaddr_t old_stack = interrupt_stack.sp;
if (signal_stack_top)
interrupt_stack.sp = signal_stack_top;
if (handle_info.stack_top)
interrupt_stack.sp = handle_info.stack_top;
write_to_stack(interrupt_stack.sp, old_stack);
write_to_stack(interrupt_stack.sp, interrupt_stack.flags);
write_to_stack(interrupt_stack.sp, handle_info.restore_sigmask);
{
signal_info.si_signo = signal;
signal_info.si_addr = reinterpret_cast<void*>(interrupt_stack.ip);
ASSERT(signal_info.si_signo == signal);
write_to_stack(interrupt_stack.sp, signal_info);
interrupt_stack.sp -= sizeof(siginfo_t);
*reinterpret_cast<siginfo_t*>(interrupt_stack.sp) = signal_info;
static_assert(sizeof(siginfo_t) % sizeof(uintptr_t) == 0);
}
write_to_stack(interrupt_stack.sp, signal);
write_to_stack(interrupt_stack.sp, signal_handler);
write_to_stack(interrupt_stack.sp, static_cast<uintptr_t>(signal));
write_to_stack(interrupt_stack.sp, handle_info.handler);
interrupt_stack.ip = (uintptr_t)signal_trampoline;
}
else
@@ -752,8 +775,6 @@ namespace Kernel
panic("Executing unhandled signal {}", signal);
}
}
return has_sa_restart;
}
void Thread::add_signal(int signal, const siginfo_t& info)
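The `write_to_stack` lambda in `handle_signal_impl` builds the signal trampoline frame by decrementing the stack pointer and storing each value at the new top. The core move can be sketched against a local buffer (the buffer stands in for the userspace stack; the kernel's version additionally routes the store through `write_to_user` so a faulting page kills the process):

```cpp
#include <cstdint>
#include <cstring>

// Push a value onto a downward-growing stack image: move sp down by
// sizeof(T), then store the value at the new stack top.
template<typename T>
void push_to_stack(uintptr_t& sp, const T& value)
{
    sp -= sizeof(T);
    std::memcpy(reinterpret_cast<void*>(sp), &value, sizeof(T));
}
```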

View File

@@ -23,60 +23,64 @@ namespace Kernel
void ThreadBlocker::unblock()
{
decltype(m_block_chain) temp_block_chain;
size_t temp_block_chain_length { 0 };
SpinLockGuard _(m_lock);
for (auto* node = m_block_chain; node;)
{
SpinLockGuard _(m_lock);
for (size_t i = 0; i < m_block_chain_length; i++)
temp_block_chain[i] = m_block_chain[i];
temp_block_chain_length = m_block_chain_length;
m_block_chain_length = 0;
auto* next = node->block_chain_next;
ASSERT(node->blocked);
ASSERT(node->blocker == this);
node->blocker.store(nullptr);
node->block_chain_prev = nullptr;
node->block_chain_next = nullptr;
Processor::scheduler().unblock_thread(node);
node = next;
}
for (size_t i = 0; i < temp_block_chain_length; i++)
Processor::scheduler().unblock_thread(temp_block_chain[i]);
m_block_chain = nullptr;
}
void ThreadBlocker::add_thread_to_block_queue(SchedulerQueue::Node* node)
{
ASSERT(node->blocker_lock.current_processor_has_lock());
SpinLockGuard _(m_lock);
ASSERT(m_block_chain_length < sizeof(m_block_chain) / sizeof(m_block_chain[0]));
ASSERT(node);
ASSERT(node->blocked);
ASSERT(node->blocker == nullptr);
for (size_t i = 0 ; i < m_block_chain_length; i++)
ASSERT(m_block_chain[i] != node);
m_block_chain[m_block_chain_length++] = node;
node->blocker.store(this);
node->block_chain_prev = nullptr;
node->block_chain_next = m_block_chain;
node->blocker = this;
if (m_block_chain)
m_block_chain->block_chain_prev = node;
m_block_chain = node;
}
void ThreadBlocker::remove_blocked_thread(SchedulerQueue::Node* node)
void ThreadBlocker::remove_thread_from_block_queue(SchedulerQueue::Node* node)
{
ASSERT(node->blocker_lock.current_processor_has_lock());
SpinLockGuard _(m_lock);
ASSERT(node);
// NOTE: this is possible if we got here while another
// core was doing an unblock on this blocker
if (node->blocker.load() != this)
return;
ASSERT(node->blocked);
ASSERT(node->blocker == this);
for (size_t i = 0 ; i < m_block_chain_length; i++)
{
if (m_block_chain[i] != node)
continue;
for (size_t j = i + 1; j < m_block_chain_length; j++)
m_block_chain[j - 1] = m_block_chain[j];
m_block_chain_length--;
}
if (node->block_chain_prev)
node->block_chain_prev->block_chain_next = node->block_chain_next;
if (node->block_chain_next)
node->block_chain_next->block_chain_prev = node->block_chain_prev;
node->blocker = nullptr;
if (node == m_block_chain)
m_block_chain = node->block_chain_next;
node->blocker.store(nullptr);
node->block_chain_prev = nullptr;
node->block_chain_next = nullptr;
}
}
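`ThreadBlocker` now tracks blocked threads with an intrusive doubly-linked chain instead of a fixed array, so removal is O(1) pointer surgery rather than an array shift. The unlink step can be sketched with a simplified `Node` (no locking or atomics here):

```cpp
struct Node { Node* prev = nullptr; Node* next = nullptr; };

// Detach node from the chain: fix both neighbours' links, move the
// head if the node was first, then clear the node's own links.
void unlink(Node*& head, Node* node)
{
    if (node->prev)
        node->prev->next = node->next;
    if (node->next)
        node->next->prev = node->prev;
    if (head == node)
        head = node->next;
    node->prev = node->next = nullptr;
}
```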

View File

@@ -342,6 +342,9 @@ namespace Kernel
s_scancode_to_keycode[0x4B] = keycode_function(18);
s_scancode_to_keycode[0x4E] = keycode_function(19);
s_scancode_to_keycode[0x47] = keycode_function(20);
s_scancode_to_keycode[0x7F] = keycode_function(21);
s_scancode_to_keycode[0x81] = keycode_function(22);
s_scancode_to_keycode[0x80] = keycode_function(23);
s_scancode_to_keycode[0x53] = keycode_numpad(0, 0);
s_scancode_to_keycode[0x54] = keycode_numpad(0, 1);

View File

@@ -126,19 +126,20 @@ extern "C" void kernel_main(uint32_t boot_magic, uint32_t boot_info)
parse_boot_info(boot_magic, boot_info);
dprintln("boot info parsed");
Processor::create(PROCESSOR_NONE);
auto& processor = Processor::create(PROCESSOR_NONE);
Processor::initialize();
dprintln("BSP initialized");
PageTable::initialize_pre_heap();
PageTable::kernel().initial_load();
dprintln("PageTable stage1 initialized");
PageTable::initialize_fast_page();
dprintln("fast page initialized");
Heap::initialize();
dprintln("Heap initialzed");
dprintln("Heap initialized");
PageTable::initialize_post_heap();
dprintln("PageTable stage2 initialized");
PageTable::initialize_and_load();
dprintln("PageTable initialized");
processor.allocate_stack();
parse_command_line();
dprintln("command line parsed, root='{}', console='{}'", cmdline.root, cmdline.console);
@@ -255,8 +256,8 @@ static void init2(void*)
VirtualFileSystem::initialize(cmdline.root);
dprintln("VFS initialized");
// FIXME: release memory used by modules. If modules are used
// they are already loaded in here
// NOTE: All modules should be loaded
Heap::get().release_boot_modules();
TTY::initialize_devices();
@@ -270,7 +271,7 @@ extern "C" void ap_main()
using namespace Kernel;
Processor::initialize();
PageTable::kernel().initial_load();
InterruptController::get().enable();
Processor::wait_until_processors_ready();

View File

@@ -0,0 +1,47 @@
.align 16
.global memcpy
memcpy:
xchgl 4(%esp), %edi
xchgl 8(%esp), %esi
movl 12(%esp), %ecx
movl %edi, %edx
rep movsb
movl 4(%esp), %edi
movl 8(%esp), %esi
movl %edx, %eax
ret
.align 16
.global memmove
memmove:
xchgl 4(%esp), %edi
xchgl 8(%esp), %esi
movl 12(%esp), %ecx
movl %edi, %edx
cmpl %edi, %esi
jb .memmove_slow
rep movsb
.memmove_done:
movl 4(%esp), %edi
movl 8(%esp), %esi
movl %edx, %eax
ret
.memmove_slow:
leal -1(%edi, %ecx), %edi
leal -1(%esi, %ecx), %esi
std
rep movsb
cld
jmp .memmove_done
.align 16
.global memset
memset:
xchgl 4(%esp), %edi
movl 8(%esp), %eax
movl 12(%esp), %ecx
movl %edi, %edx
rep stosb
movl 4(%esp), %edi
movl %edx, %eax
ret

View File

@@ -0,0 +1,31 @@
.align 16
.global memcpy
memcpy:
movq %rdi, %rax
movq %rdx, %rcx
rep movsb
ret
.align 16
.global memmove
memmove:
cmpq %rdi, %rsi
jae memcpy
movq %rdi, %rax
leaq -1(%rdi, %rdx), %rdi
leaq -1(%rsi, %rdx), %rsi
movq %rdx, %rcx
std
rep movsb
cld
ret
.align 16
.global memset
memset:
movq %rdi, %r8
movb %sil, %al
movq %rdx, %rcx
rep stosb
movq %r8, %rax
ret

View File

@@ -1,6 +1,6 @@
diff -ruN SDL2-2.32.8/cmake/sdlplatform.cmake SDL2-2.32.8-banan_os/cmake/sdlplatform.cmake
--- SDL2-2.32.8/cmake/sdlplatform.cmake 2024-08-14 13:35:43.000000000 +0300
+++ SDL2-2.32.8-banan_os/cmake/sdlplatform.cmake 2026-01-07 19:04:34.332166371 +0200
+++ SDL2-2.32.8-banan_os/cmake/sdlplatform.cmake 2026-04-03 04:34:27.256800208 +0300
@@ -28,6 +28,8 @@
set(SDL_CMAKE_PLATFORM AIX)
elseif(CMAKE_SYSTEM_NAME MATCHES "Minix.*")
@@ -12,7 +12,7 @@ diff -ruN SDL2-2.32.8/cmake/sdlplatform.cmake SDL2-2.32.8-banan_os/cmake/sdlplat
endif()
diff -ruN SDL2-2.32.8/CMakeLists.txt SDL2-2.32.8-banan_os/CMakeLists.txt
--- SDL2-2.32.8/CMakeLists.txt 2025-06-03 02:00:39.000000000 +0300
+++ SDL2-2.32.8-banan_os/CMakeLists.txt 2026-01-07 19:04:34.343116295 +0200
+++ SDL2-2.32.8-banan_os/CMakeLists.txt 2026-04-03 04:34:27.257159543 +0300
@@ -14,7 +14,7 @@
set(SDL2_SUBPROJECT ON)
endif()
@@ -98,7 +98,7 @@ diff -ruN SDL2-2.32.8/CMakeLists.txt SDL2-2.32.8-banan_os/CMakeLists.txt
file(GLOB MISC_SOURCES ${SDL2_SOURCE_DIR}/src/misc/riscos/*.c)
diff -ruN SDL2-2.32.8/include/SDL_config.h.cmake SDL2-2.32.8-banan_os/include/SDL_config.h.cmake
--- SDL2-2.32.8/include/SDL_config.h.cmake 2025-01-01 17:47:53.000000000 +0200
+++ SDL2-2.32.8-banan_os/include/SDL_config.h.cmake 2026-01-07 19:04:34.358682129 +0200
+++ SDL2-2.32.8-banan_os/include/SDL_config.h.cmake 2026-04-03 04:34:27.257563019 +0300
@@ -307,6 +307,7 @@
#cmakedefine SDL_AUDIO_DRIVER_FUSIONSOUND @SDL_AUDIO_DRIVER_FUSIONSOUND@
#cmakedefine SDL_AUDIO_DRIVER_FUSIONSOUND_DYNAMIC @SDL_AUDIO_DRIVER_FUSIONSOUND_DYNAMIC@
@@ -125,7 +125,7 @@ diff -ruN SDL2-2.32.8/include/SDL_config.h.cmake SDL2-2.32.8-banan_os/include/SD
#cmakedefine SDL_VIDEO_DRIVER_DIRECTFB @SDL_VIDEO_DRIVER_DIRECTFB@
diff -ruN SDL2-2.32.8/include/SDL_platform.h SDL2-2.32.8-banan_os/include/SDL_platform.h
--- SDL2-2.32.8/include/SDL_platform.h 2025-01-01 17:47:53.000000000 +0200
+++ SDL2-2.32.8-banan_os/include/SDL_platform.h 2026-01-07 19:04:34.370086235 +0200
+++ SDL2-2.32.8-banan_os/include/SDL_platform.h 2026-04-03 04:34:27.257711782 +0300
@@ -36,6 +36,10 @@
#undef __HAIKU__
#define __HAIKU__ 1
@@ -139,8 +139,8 @@ diff -ruN SDL2-2.32.8/include/SDL_platform.h SDL2-2.32.8-banan_os/include/SDL_pl
#define __BSDI__ 1
diff -ruN SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.cpp SDL2-2.32.8-banan_os/src/audio/banan_os/SDL_banan_os_audio.cpp
--- SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.cpp 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/audio/banan_os/SDL_banan_os_audio.cpp 2026-01-07 19:04:34.370691623 +0200
@@ -0,0 +1,150 @@
+++ SDL2-2.32.8-banan_os/src/audio/banan_os/SDL_banan_os_audio.cpp 2026-04-03 16:06:27.541819001 +0300
@@ -0,0 +1,134 @@
+/*
+ Simple DirectMedia Layer
+ Copyright (C) 1997-2019 Sam Lantinga <slouken@libsdl.org>
@@ -154,11 +154,11 @@ diff -ruN SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.cpp SDL2-2.32.8-bana
+ freely, subject to the following restrictions:
+
+ 1. The origin of this software must not be misrepresented; you must not
+ claim that you wrote the original software. If you use this software
+ in a product, an acknowledgment in the product documentation would be
+ appreciated but is not required.
+ claim that you wrote the original software. If you use this software
+ in a product, an acknowledgment in the product documentation would be
+ appreciated but is not required.
+ 2. Altered source versions must be plainly marked as such, and must not be
+ misrepresented as being the original software.
+ misrepresented as being the original software.
+ 3. This notice may not be removed or altered from any source distribution.
+*/
+#include "../../SDL_internal.h"
@@ -201,8 +201,8 @@ diff -ruN SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.cpp SDL2-2.32.8-bana
+ DUMP_FUNCTION();
+
+ // TODO: try to accept already existing spec
+ _this->spec.freq = 44100;
+ _this->spec.format = AUDIO_S16LSB;
+ _this->spec.freq = 48000;
+ _this->spec.format = AUDIO_F32LSB;
+ _this->spec.channels = 2;
+ _this->spec.samples = 2048;
+ SDL_CalculateAudioSpec(&_this->spec);
@@ -229,38 +229,22 @@ diff -ruN SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.cpp SDL2-2.32.8-bana
+
+ const bool should_play = SDL_AtomicGet(&_this->enabled) && !SDL_AtomicGet(&_this->paused);
+ _this->hidden->audio.set_paused(!should_play);
+ if (!should_play)
+ if (!should_play)
+ {
+ usleep(100);
+ return;
+ }
+
+ static_assert(BAN::is_same_v<LibAudio::AudioBuffer::sample_t, double>);
+
+ const auto convert_sample = [](int16_t input) {
+ return (static_cast<double>(input) - BAN::numeric_limits<int16_t>::min()) / BAN::numeric_limits<int16_t>::max() * 2.0 - 1.0;
+ };
+
+ const size_t input_samples = _this->spec.size / sizeof(int16_t);
+ size_t samples_queued = 0;
+
+ const int16_t* mixbuf = static_cast<const int16_t*>(_this->hidden->mixbuf);
+ while (samples_queued < input_samples)
+ auto sample_span = BAN::Span(
+ static_cast<const float*>(_this->hidden->mixbuf),
+ _this->spec.size / sizeof(float)
+ );
+ while (!sample_span.empty())
+ {
+ const size_t to_convert = BAN::Math::min(_this->hidden->conversion.size(), input_samples - samples_queued);
+ for (size_t i = 0; i < to_convert; i++)
+ _this->hidden->conversion[i] = convert_sample(mixbuf[samples_queued + i]);
+
+ auto sample_span = _this->hidden->conversion.span();
+ while (!sample_span.empty())
+ {
+ const size_t queued = _this->hidden->audio.queue_samples(sample_span);
+ if (queued == 0)
+ usleep(100);
+ sample_span = sample_span.slice(queued);
+ }
+
+ samples_queued += to_convert;
+ const size_t queued = _this->hidden->audio.queue_samples(sample_span);
+ if (queued == 0)
+ usleep(100);
+ sample_span = sample_span.slice(queued);
+ }
+}
+
@@ -268,33 +252,33 @@ diff -ruN SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.cpp SDL2-2.32.8-bana
+{
+ DUMP_FUNCTION();
+
+ return static_cast<Uint8*>(_this->hidden->mixbuf);
+ return static_cast<Uint8*>(_this->hidden->mixbuf);
+}
+
+static SDL_bool BANANOS_Init(SDL_AudioDriverImpl* impl)
+{
+ impl->OpenDevice = BANANOS_OpenDevice;
+ impl->CloseDevice = BANANOS_CloseDevice;
+ impl->PlayDevice = BANANOS_PlayDevice;
+ impl->OpenDevice = BANANOS_OpenDevice;
+ impl->CloseDevice = BANANOS_CloseDevice;
+ impl->PlayDevice = BANANOS_PlayDevice;
+ impl->GetDeviceBuf = BANANOS_GetDeviceBuf;
+
+ impl->ProvidesOwnCallbackThread = SDL_FALSE;
+ impl->HasCaptureSupport = SDL_FALSE;
+ impl->OnlyHasDefaultOutputDevice = SDL_TRUE;
+ impl->SupportsNonPow2Samples = SDL_TRUE;
+ impl->ProvidesOwnCallbackThread = SDL_FALSE;
+ impl->HasCaptureSupport = SDL_FALSE;
+ impl->OnlyHasDefaultOutputDevice = SDL_TRUE;
+ impl->SupportsNonPow2Samples = SDL_TRUE;
+
+ return SDL_TRUE;
+ return SDL_TRUE;
+}
+
+AudioBootStrap BANANOSAUDIO_bootstrap = {
+ "banan-os", "banan-os AudioServer", BANANOS_Init, SDL_FALSE
+ "banan-os", "banan-os AudioServer", BANANOS_Init, SDL_FALSE
+};
+
+#endif
diff -ruN SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.h SDL2-2.32.8-banan_os/src/audio/banan_os/SDL_banan_os_audio.h
--- SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.h 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/audio/banan_os/SDL_banan_os_audio.h 2026-01-07 19:04:34.370883199 +0200
@@ -0,0 +1,34 @@
+++ SDL2-2.32.8-banan_os/src/audio/banan_os/SDL_banan_os_audio.h 2026-04-03 15:56:58.603415021 +0300
@@ -0,0 +1,33 @@
+/*
+ Simple DirectMedia Layer
+ Copyright (C) 1997-2019 Sam Lantinga <slouken@libsdl.org>
@@ -327,11 +311,10 @@ diff -ruN SDL2-2.32.8/src/audio/banan_os/SDL_banan_os_audio.h SDL2-2.32.8-banan_
+struct SDL_PrivateAudioData {
+ LibAudio::Audio audio;
+ void* mixbuf { nullptr };
+ BAN::Array<LibAudio::AudioBuffer::sample_t, 4096> conversion;
+};
diff -ruN SDL2-2.32.8/src/audio/SDL_audio.c SDL2-2.32.8-banan_os/src/audio/SDL_audio.c
--- SDL2-2.32.8/src/audio/SDL_audio.c 2025-01-01 17:47:53.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/audio/SDL_audio.c 2026-01-07 19:04:34.371410923 +0200
+++ SDL2-2.32.8-banan_os/src/audio/SDL_audio.c 2026-04-03 04:34:27.258080476 +0300
@@ -87,6 +87,9 @@
#ifdef SDL_AUDIO_DRIVER_HAIKU
&HAIKUAUDIO_bootstrap,
@@ -344,7 +327,7 @@ diff -ruN SDL2-2.32.8/src/audio/SDL_audio.c SDL2-2.32.8-banan_os/src/audio/SDL_a
#endif
diff -ruN SDL2-2.32.8/src/audio/SDL_sysaudio.h SDL2-2.32.8-banan_os/src/audio/SDL_sysaudio.h
--- SDL2-2.32.8/src/audio/SDL_sysaudio.h 2025-01-01 17:47:53.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/audio/SDL_sysaudio.h 2026-01-07 19:04:34.372150756 +0200
+++ SDL2-2.32.8-banan_os/src/audio/SDL_sysaudio.h 2026-04-03 04:34:27.258278128 +0300
@@ -196,6 +196,7 @@
extern AudioBootStrap WINMM_bootstrap;
extern AudioBootStrap PAUDIO_bootstrap;
@@ -355,7 +338,7 @@ diff -ruN SDL2-2.32.8/src/audio/SDL_sysaudio.h SDL2-2.32.8-banan_os/src/audio/SD
extern AudioBootStrap DUMMYAUDIO_bootstrap;
diff -ruN SDL2-2.32.8/src/joystick/banan_os/SDL_banan_os_joystick.cpp SDL2-2.32.8-banan_os/src/joystick/banan_os/SDL_banan_os_joystick.cpp
--- SDL2-2.32.8/src/joystick/banan_os/SDL_banan_os_joystick.cpp 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/joystick/banan_os/SDL_banan_os_joystick.cpp 2026-01-07 19:07:12.677617077 +0200
+++ SDL2-2.32.8-banan_os/src/joystick/banan_os/SDL_banan_os_joystick.cpp 2026-04-03 04:34:27.258447075 +0300
@@ -0,0 +1,296 @@
+/*
+Simple DirectMedia Layer
@@ -655,7 +638,7 @@ diff -ruN SDL2-2.32.8/src/joystick/banan_os/SDL_banan_os_joystick.cpp SDL2-2.32.
+/* vi: set ts=4 sw=4 expandtab: */
diff -ruN SDL2-2.32.8/src/joystick/SDL_joystick.c SDL2-2.32.8-banan_os/src/joystick/SDL_joystick.c
--- SDL2-2.32.8/src/joystick/SDL_joystick.c 2025-01-01 17:47:53.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/joystick/SDL_joystick.c 2026-01-07 19:04:34.373890653 +0200
+++ SDL2-2.32.8-banan_os/src/joystick/SDL_joystick.c 2026-04-03 04:34:27.258656321 +0300
@@ -85,6 +85,9 @@
#ifdef SDL_JOYSTICK_HAIKU
&SDL_HAIKU_JoystickDriver,
@@ -668,7 +651,7 @@ diff -ruN SDL2-2.32.8/src/joystick/SDL_joystick.c SDL2-2.32.8-banan_os/src/joyst
#endif
diff -ruN SDL2-2.32.8/src/joystick/SDL_sysjoystick.h SDL2-2.32.8-banan_os/src/joystick/SDL_sysjoystick.h
--- SDL2-2.32.8/src/joystick/SDL_sysjoystick.h 2025-01-01 17:47:53.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/joystick/SDL_sysjoystick.h 2026-01-07 19:04:34.374337431 +0200
+++ SDL2-2.32.8-banan_os/src/joystick/SDL_sysjoystick.h 2026-04-03 04:34:27.259001479 +0300
@@ -235,6 +235,7 @@
/* The available joystick drivers */
@@ -679,7 +662,7 @@ diff -ruN SDL2-2.32.8/src/joystick/SDL_sysjoystick.h SDL2-2.32.8-banan_os/src/jo
extern SDL_JoystickDriver SDL_DUMMY_JoystickDriver;
diff -ruN SDL2-2.32.8/src/misc/banan_os/SDL_sysurl.cpp SDL2-2.32.8-banan_os/src/misc/banan_os/SDL_sysurl.cpp
--- SDL2-2.32.8/src/misc/banan_os/SDL_sysurl.cpp 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/misc/banan_os/SDL_sysurl.cpp 2026-01-07 19:04:34.379748697 +0200
+++ SDL2-2.32.8-banan_os/src/misc/banan_os/SDL_sysurl.cpp 2026-04-03 04:34:27.259173778 +0300
@@ -0,0 +1,30 @@
+/*
+ Simple DirectMedia Layer
@@ -713,7 +696,7 @@ diff -ruN SDL2-2.32.8/src/misc/banan_os/SDL_sysurl.cpp SDL2-2.32.8-banan_os/src/
+
diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_clipboard.cpp SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_clipboard.cpp
--- SDL2-2.32.8/src/video/banan_os/SDL_banan_os_clipboard.cpp 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_clipboard.cpp 2026-01-07 19:04:34.379995308 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_clipboard.cpp 2026-04-03 04:34:27.259271557 +0300
@@ -0,0 +1,51 @@
+/*
+ Simple DirectMedia Layer
@@ -768,7 +751,7 @@ diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_clipboard.cpp SDL2-2.32.8-
+/* vi: set ts=4 sw=4 expandtab: */
diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_clipboard.h SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_clipboard.h
--- SDL2-2.32.8/src/video/banan_os/SDL_banan_os_clipboard.h 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_clipboard.h 2026-01-07 19:04:34.380137576 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_clipboard.h 2026-04-03 04:34:27.259318700 +0300
@@ -0,0 +1,43 @@
+/*
+ Simple DirectMedia Layer
@@ -815,7 +798,7 @@ diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_clipboard.h SDL2-2.32.8-ba
+/* vi: set ts=4 sw=4 expandtab: */
diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_message_box.cpp SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_message_box.cpp
--- SDL2-2.32.8/src/video/banan_os/SDL_banan_os_message_box.cpp 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_message_box.cpp 2026-01-07 19:04:34.380308339 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_message_box.cpp 2026-04-03 04:34:27.259375481 +0300
@@ -0,0 +1,60 @@
+/*
+ Simple DirectMedia Layer
@@ -879,7 +862,7 @@ diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_message_box.cpp SDL2-2.32.
+/* vi: set ts=4 sw=4 expandtab: */
diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_message_box.h SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_message_box.h
--- SDL2-2.32.8/src/video/banan_os/SDL_banan_os_message_box.h 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_message_box.h 2026-01-07 19:04:34.380550899 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_message_box.h 2026-04-03 04:34:27.259437082 +0300
@@ -0,0 +1,45 @@
+/*
+ Simple DirectMedia Layer
@@ -928,7 +911,7 @@ diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_message_box.h SDL2-2.32.8-
+/* vi: set ts=4 sw=4 expandtab: */
diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_video.cpp SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_video.cpp
--- SDL2-2.32.8/src/video/banan_os/SDL_banan_os_video.cpp 1970-01-01 02:00:00.000000000 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_video.cpp 2026-01-07 19:04:34.380720824 +0200
+++ SDL2-2.32.8-banan_os/src/video/banan_os/SDL_banan_os_video.cpp 2026-04-03 04:34:27.259503082 +0300
@@ -0,0 +1,729 @@
+/*
+ Simple DirectMedia Layer
@@ -1661,7 +1644,7 @@ diff -ruN SDL2-2.32.8/src/video/banan_os/SDL_banan_os_video.cpp SDL2-2.32.8-bana
+/* vi: set ts=4 sw=4 expandtab: */
diff -ruN SDL2-2.32.8/src/video/SDL_sysvideo.h SDL2-2.32.8-banan_os/src/video/SDL_sysvideo.h
--- SDL2-2.32.8/src/video/SDL_sysvideo.h 2025-05-20 00:24:41.000000000 +0300
+++ SDL2-2.32.8-banan_os/src/video/SDL_sysvideo.h 2026-01-07 19:04:34.381316574 +0200
+++ SDL2-2.32.8-banan_os/src/video/SDL_sysvideo.h 2026-04-03 04:34:27.259743826 +0300
@@ -462,6 +462,7 @@
extern VideoBootStrap WINDOWS_bootstrap;
extern VideoBootStrap WINRT_bootstrap;
@@ -1672,7 +1655,7 @@ diff -ruN SDL2-2.32.8/src/video/SDL_sysvideo.h SDL2-2.32.8-banan_os/src/video/SD
extern VideoBootStrap Android_bootstrap;
diff -ruN SDL2-2.32.8/src/video/SDL_video.c SDL2-2.32.8-banan_os/src/video/SDL_video.c
--- SDL2-2.32.8/src/video/SDL_video.c 2025-05-20 00:24:41.000000000 +0300
+++ SDL2-2.32.8-banan_os/src/video/SDL_video.c 2026-01-07 19:04:34.398132645 +0200
+++ SDL2-2.32.8-banan_os/src/video/SDL_video.c 2026-04-03 04:34:27.260110007 +0300
@@ -96,6 +96,9 @@
#ifdef SDL_VIDEO_DRIVER_HAIKU
&HAIKU_bootstrap,

ports/bzip2/build.sh Executable file

@@ -0,0 +1,36 @@
#!/bin/bash ../install.sh
NAME='bzip2'
VERSION='1.0.8'
DOWNLOAD_URL="https://sourceware.org/pub/bzip2/bzip2-$VERSION.tar.gz#ab5a03176ee106d3f0fa90e381da478ddae405918153cca248e682cd0c4a2269"
configure() {
:
}
build() {
make -j$(nproc) -f Makefile-libbz2_so CC="$CC" || exit 1
}
install() {
cp -v libbz2.so.$VERSION $BANAN_SYSROOT/usr/lib/ || exit 1
ln -svf libbz2.so.$VERSION $BANAN_SYSROOT/usr/lib/libbz2.so || exit 1
ln -svf libbz2.so.$VERSION $BANAN_SYSROOT/usr/lib/libbz2.so.1 || exit 1
ln -svf libbz2.so.$VERSION $BANAN_SYSROOT/usr/lib/libbz2.so.1.0 || exit 1
cp -v bzlib.h $BANAN_SYSROOT/usr/include/ || exit 1
cat > $BANAN_SYSROOT/usr/lib/pkgconfig/bzip2.pc << EOF
prefix=/usr
exec_prefix=\${prefix}
bindir=\${exec_prefix}/bin
libdir=\${exec_prefix}/lib
includedir=\${prefix}/include
Name: bzip2
Description: A file compression library
Version: $VERSION
Libs: -L\${libdir} -lbz2
Cflags: -I\${includedir}
EOF
}


@@ -1,8 +1,8 @@
#!/bin/bash ../install.sh
NAME='ca-certificates'
VERSION='2025-12-02'
DOWNLOAD_URL="https://curl.se/ca/cacert-$VERSION.pem#f1407d974c5ed87d544bd931a278232e13925177e239fca370619aba63c757b4"
VERSION='2026.03.19'
DOWNLOAD_URL="https://curl.se/ca/cacert-${VERSION//./-}.pem#b6e66569cc3d438dd5abe514d0df50005d570bfc96c14dca8f768d020cb96171"
configure() {
:
@@ -13,7 +13,10 @@ build() {
}
install() {
mkdir -p "$BANAN_SYSROOT/etc/ssl/certs"
cp -v "../cacert-$VERSION.pem" "$BANAN_SYSROOT/etc/ssl/certs/ca-certificates.crt"
ln -svf "certs/ca-certificates.crt" "$BANAN_SYSROOT/etc/ssl/cert.pem"
rm -rf "$BANAN_SYSROOT/etc/cacert/extracted"
mkdir -p "$BANAN_SYSROOT/etc/cacert/extracted"
cp -vf "../cacert-${VERSION//./-}.pem" "$BANAN_SYSROOT/etc/cacert/cacert.pem"
awk '/-----BEGIN CERTIFICATE-----/ {c=1;n++} c {print > sprintf("cert%03d.pem", n)} /-----END CERTIFICATE-----/ {c=0}' "../cacert-${VERSION//./-}.pem"
mv cert*.pem "$BANAN_SYSROOT/etc/cacert/extracted/"
}
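
The `awk` one-liner in the install step above splits a concatenated PEM bundle into one numbered file per certificate. A minimal standalone sketch of the same technique, run against a throwaway two-certificate bundle (file names and certificate bodies here are illustrative, not part of the port):

```shell
#!/bin/sh
# Build a throwaway bundle containing two fake PEM certificates.
printf '%s\n' \
    '-----BEGIN CERTIFICATE-----' 'cert-one-body' '-----END CERTIFICATE-----' \
    '-----BEGIN CERTIFICATE-----' 'cert-two-body' '-----END CERTIFICATE-----' \
    > bundle.pem

# Same splitting logic as the port: start copying at BEGIN (bumping the
# counter), stop at END, write each certificate to cert001.pem, cert002.pem, ...
awk '/-----BEGIN CERTIFICATE-----/ {c=1;n++}
     c {print > sprintf("cert%03d.pem", n)}
     /-----END CERTIFICATE-----/ {c=0}' bundle.pem

ls cert*.pem   # lists cert001.pem and cert002.pem
```

Splitting into per-certificate files is what makes the later `openssl rehash` step in the openssl port work, since rehash expects one certificate per file.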


@@ -3,7 +3,7 @@
NAME='curl'
VERSION='8.17.0'
DOWNLOAD_URL="https://curl.se/download/curl-$VERSION.tar.xz#955f6e729ad6b3566260e8fef68620e76ba3c31acf0a18524416a185acf77992"
DEPENDENCIES=('ca-certificates' 'openssl' 'zlib' 'zstd')
DEPENDENCIES=('openssl' 'zlib' 'zstd')
CONFIG_SUB=('config.sub')
CONFIGURE_OPTIONS=(
'--disable-threaded-resolver'
@@ -16,6 +16,6 @@ CONFIGURE_OPTIONS=(
'--with-zlib'
'--with-zstd'
'--without-libpsl'
'--with-ca-bundle=/etc/ssl/certs/ca-certificates.crt'
'--without-ca-path'
'--with-ca-path=/etc/ssl/certs'
'--with-ca-bundle=/etc/ssl/certs/ca-bundle.crt'
)

ports/libarchive/build.sh Executable file

@@ -0,0 +1,23 @@
#!/bin/bash ../install.sh
NAME='libarchive'
VERSION='3.8.6'
DOWNLOAD_URL="https://github.com/libarchive/libarchive/releases/download/v$VERSION/libarchive-$VERSION.tar.xz#8ac57c1f5e99550948d1fe755c806d26026e71827da228f36bef24527e372e6f"
DEPENDENCIES=('zlib' 'zstd' 'bzip2' 'xz')
configure() {
cmake --fresh -B build -S . -G Ninja \
--toolchain="$BANAN_TOOLCHAIN_DIR/Toolchain.txt" \
-DCMAKE_INSTALL_PREFIX=/usr \
-DCMAKE_BUILD_TYPE=Release \
-DENABLE_TEST=OFF \
|| exit 1
}
build() {
cmake --build build || exit 1
}
install() {
cmake --install build || exit 1
}


@@ -3,9 +3,24 @@
NAME='openssl'
VERSION='3.6.0'
DOWNLOAD_URL="https://github.com/openssl/openssl/releases/download/openssl-$VERSION/openssl-$VERSION.tar.gz#b6a5f44b7eb69e3fa35dbf15524405b44837a481d43d81daddde3ff21fcbb8e9"
DEPENDENCIES=('zlib')
DEPENDENCIES=('ca-certificates' 'zlib')
MAKE_INSTALL_TARGETS=('install_sw' 'install_ssldirs')
configure() {
./Configure --prefix=/usr --openssldir=/etc/ssl -DOPENSSL_USE_IPV6=0 no-asm no-tests banan_os-generic threads zlib
}
post_install() {
rm -f "$BANAN_SYSROOT/etc/ssl/certs"/*
ln -svf "../cacert/cacert.pem" "$BANAN_SYSROOT/etc/ssl/cert.pem"
ln -svf "../../cacert/cacert.pem" "$BANAN_SYSROOT/etc/ssl/certs/ca-certificates.crt"
ln -svf "../../cacert/cacert.pem" "$BANAN_SYSROOT/etc/ssl/certs/ca-bundle.crt"
openssl rehash "$BANAN_SYSROOT/etc/cacert/extracted"
find "$BANAN_SYSROOT/etc/cacert/extracted" -type l -print0 |
while IFS= read -r -d '' link; do
ln -s "../../cacert/extracted/$(readlink "$link")" "$BANAN_SYSROOT/etc/ssl/certs/${link##*/}"
rm "$link"
done
}


@@ -2,8 +2,8 @@
NAME='xbanan'
VERSION='git'
DOWNLOAD_URL="https://git.bananymous.com/Bananymous/xbanan.git#b228ef13c41adff2738acaeda5db804ebf493bfd"
DEPENDENCIES=('mesa' 'libX11' 'xorgproto')
DOWNLOAD_URL="https://git.bananymous.com/Bananymous/xbanan.git#b2c642f03d2e498e9d6acd55cc89a5e76c220811"
DEPENDENCIES=('xorgproto')
configure() {
cmake --fresh -B build -S . -G Ninja \

Some files were not shown because too many files have changed in this diff.