How have I ever built anything for Blender without being able to turn the viewport around properly! But then again, looking back at how I even began to write code, it's a miracle in itself. Oh! You wanna hear that story? Well, I was a character artist for half a decade (I still am... in parts), became a senior character artist, and then my employers at the time decided I was expendable and should become a tech artist within a few months, alongside my regular duties, with no extra pay!! Yaay!!
But at least it got me into this, and it's good work, so, ummm a win in the end?
Ex-mountaineer; looking back now, I should have used a rope while climbing in the Himalayas, and probably wouldn't have been "ex" then. I like to drive sometimes and then call myself a petrolhead; how does a carburetor work again??
Typically, if you build a mixed reality game, you can only see occluded objects correctly up to about 8 meters. I modified some SDK files to get unlimited range with working occlusion through clever masking tricks. I also solved occlusion glitching out when there are glass windows around the area, by adding another mask to the depth texture. Running a RenderDoc profile on the build, I was then able to optimise the SDK further, saving a bit of extra performance on top of the changes.
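The core of the glass fix can be sketched as a per-pixel decision: a virtual pixel is hidden when the real-world depth is closer, except in regions flagged as glass, where the environment depth is unreliable. This is an illustrative sketch in Python/NumPy, not the actual SDK shader code; the function and mask names are hypothetical.

```python
import numpy as np

def occlusion_mask(virtual_depth, real_depth, glass_mask):
    """A virtual pixel is occluded when real geometry is closer,
    except where the depth came from glass (unreliable), per the
    extra mask folded into the depth texture."""
    occluded = real_depth < virtual_depth   # real surface sits in front
    occluded &= ~glass_mask                 # glass regions never occlude
    return occluded

# Toy 2x2 example, depths in meters: top-right pixel is behind glass,
# so it stays visible even though the sensed depth is closer.
virtual = np.array([[1.0, 1.0], [5.0, 5.0]])
real    = np.array([[0.5, 0.5], [9.0, 2.0]])
glass   = np.array([[False, True], [False, False]])
mask = occlusion_mask(virtual, real, glass)
```

In the real build this comparison happens in the occlusion shader against the depth texture; the sketch only shows the masking logic.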
Most implementations of SVT (sparse virtual texturing) in Unity (and there aren't a lot, especially none I could find for mobile) are for terrains or very large meshes. For a mixed reality build that doesn't really work, which means you can't really cut down on sampler calls, because large atlases will hit memory limits. I moved most of the streaming-data calculations from the CPU to the GPU, avoiding the CPU-bound performance hit they could have had on a VR headset. It streams the texture in chunks together with the right mip levels, saving the huge memory cost large atlases would have incurred. Combined with the previous solution, this means fewer sampler calls and non-destructive atlases that can be streamed without resolution or size limits on memory.
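The two core calculations the streamer needs are: which mip level a surface actually requires on screen, and which pages (chunks) of that mip cover the visible UV range. A minimal sketch of both, with an assumed page size of 128 texels; the real version runs these on the GPU, and the function names here are hypothetical.

```python
import math

PAGE = 128  # texels per virtual-texture page (assumed chunk size)

def required_mip(texel_footprint):
    """Pick the mip whose density matches the screen footprint:
    N texels covering one pixel means mip floor(log2(N)) suffices."""
    return max(0, int(math.floor(math.log2(max(texel_footprint, 1.0)))))

def pages_to_stream(u0, v0, u1, v1, tex_size, mip):
    """Pages covering a visible UV rect at a given mip, in page coords.
    Only these chunks get streamed in, instead of a whole atlas."""
    size = tex_size >> mip                      # mip dimensions in texels
    x0, y0 = int(u0 * size) // PAGE, int(v0 * size) // PAGE
    x1, y1 = int(u1 * size) // PAGE, int(v1 * size) // PAGE
    return [(x, y, mip) for y in range(y0, y1 + 1)
                        for x in range(x0, x1 + 1)]

# 4 texels per pixel -> mip 2; a quarter of a 4K texture at mip 2
# needs only a 5x5 block of 128px pages, not the full texture.
mip = required_mip(4.0)
pages = pages_to_stream(0.0, 0.0, 0.5, 0.5, 4096, mip)
```

The memory win falls out of this directly: distant surfaces request high mips, so only tiny pages ever reach the cache.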
Converts imported meshes (marked as colliders) from a DCC app like Maya or Blender into Unity's primitive box colliders, instead of the default mesh colliders they would have been. Primitive colliders are more performant than mesh colliders, but making them in Unity's editor is a really painful way to spend your time. This solves that issue.
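The essential step is fitting a box to the imported collider mesh: take the min/max of its vertices and derive the center and size a `BoxCollider` needs. A local-space sketch (the tool itself runs as C# in the Unity editor; this Python version just shows the fit):

```python
def fit_box(vertices):
    """Axis-aligned (center, size) box enclosing a collider mesh's
    vertices -- the two values a Unity BoxCollider is defined by."""
    xs, ys, zs = zip(*vertices)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    center = tuple((a + b) / 2 for a, b in zip(lo, hi))
    size = tuple(b - a for a, b in zip(lo, hi))
    return center, size

# Unit cube shifted +1 on x: box center lands at (1.5, 0.5, 0.5).
verts = [(1, 0, 0), (2, 0, 0), (1, 1, 0), (2, 1, 0),
         (1, 0, 1), (2, 0, 1), (1, 1, 1), (2, 1, 1)]
center, size = fit_box(verts)
```

A box fitted this way is axis-aligned in the mesh's local space, which matches how DCC collider proxies are usually authored; rotated proxies would need the mesh's transform applied first.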
The usual method to collect shaders in Unity is to turn on logging, play, and read the log to see which variants were used. It's just not very efficient. The editor tool I created lets you do that with a user-friendly UI (because, let's be honest, no one wants to use the built-in shader variant inspector), but more importantly it automates collecting runtime variants with a few clicks.
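The automation boils down to scraping the player log for shader compilation lines and deduplicating the (shader, keywords) pairs. A sketch of that parsing step; the regex below assumes a log line shape like `Compiled shader: X, pass: Y, stage: vertex, keywords A B`, which can vary between Unity versions, so treat the pattern as an assumption rather than the tool's exact code.

```python
import re

# Assumed player-log line format (varies by Unity version):
LINE = re.compile(
    r"Compiled shader: (?P<shader>.+?), pass: (?P<pass>.+?), "
    r"stage: (?P<stage>\w+), keywords (?P<keywords>.*)"
)

def collect_variants(log_text):
    """Deduplicated (shader, keyword-set) pairs seen at runtime.
    Keywords are sorted so vertex/fragment stages of the same
    variant collapse into one entry."""
    variants = set()
    for m in LINE.finditer(log_text):
        kw = " ".join(sorted(m.group("keywords").split()))
        variants.add((m.group("shader"), kw))
    return variants

log = (
    "Compiled shader: Standard, pass: FORWARD, stage: vertex, "
    "keywords DIRECTIONAL SHADOWS_SCREEN\n"
    "Compiled shader: Standard, pass: FORWARD, stage: fragment, "
    "keywords SHADOWS_SCREEN DIRECTIONAL\n"
)
variants = collect_variants(log)
```

The collected set maps straight onto entries in a ShaderVariantCollection asset, which is what you'd feed back into the build for warmup.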
Just like the reference warning you get in Unreal when you delete an asset that might break something else. It's not an easy thing to do, especially when working across multiple git branches, and Unity's GUID system, instead of relying on relative paths for references, makes it a terrible thing to track. The logic behind the tool uses a kernel-level daemon (Windows only for now) to watch file changes, then fetches known patterns from those files and stores the references in a sqlite3 database.
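The indexing side of that pipeline is small: when the watcher reports a changed file, scan it for Unity's `guid: <32 hex chars>` pattern (how serialized assets reference each other) and record path/GUID pairs, so a delete can be answered with "these files still point at it". A stripped-down sketch with hypothetical function names:

```python
import re
import sqlite3

# How Unity YAML serializes a cross-asset reference:
GUID = re.compile(r"guid: ([0-9a-f]{32})")

def index_file(db, path, text):
    """Record every asset GUID a changed file references."""
    db.execute("CREATE TABLE IF NOT EXISTS refs "
               "(path TEXT, guid TEXT, UNIQUE(path, guid))")
    for guid in GUID.findall(text):
        db.execute("INSERT OR IGNORE INTO refs VALUES (?, ?)", (path, guid))
    db.commit()

def referencing(db, guid):
    """Files that would break if the asset with this GUID is deleted --
    the list the warning dialog shows."""
    return [row[0] for row in
            db.execute("SELECT path FROM refs WHERE guid = ?", (guid,))]

db = sqlite3.connect(":memory:")
index_file(db, "Scenes/Main.unity",
           "m_Script: {fileID: 11500000, "
           "guid: 0123456789abcdef0123456789abcdef, type: 3}")
hits = referencing(db, "0123456789abcdef0123456789abcdef")
```

Keeping this in sqlite3 rather than memory is what makes it survive branch switches: the watcher only re-indexes files that actually changed on checkout.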
Useful for running TUI apps that crash without TTY support and don't offer a headless mode (e.g. Windows-native Claude Code in interactive mode). It is truly headless: most "headless" terminal apps I have seen still spawn a console if run without the no-console flag, or directly from the executable binary. Bidirectional termination prevents orphaned processes. It supports any CLI app, including TUI apps and GUI apps, and keeps track of multiple child, grandchild, etc. processes. It can be run from the system tray or truly headless; if you choose the tray, it can be used to interact with an otherwise headless CLI/TUI app whenever you want.
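One way to get the "no orphans, whole tree" behaviour is to put the child in its own process group (session on POSIX, process group on Windows) so children and grandchildren can all be torn down with one signal. This is a POSIX-flavoured sketch of that one idea, not the actual tool, which also handles the reverse direction (child outliving a dead parent) and Windows job objects.

```python
import os
import signal
import subprocess
import sys

def spawn_tracked(cmd):
    """Start a child in its own session/process group so its whole
    tree (children, grandchildren, ...) shares one group id."""
    # POSIX: start_new_session calls setsid() in the child.
    # Windows would use creationflags=subprocess.CREATE_NEW_PROCESS_GROUP.
    return subprocess.Popen(cmd, start_new_session=True)

def kill_tree(proc):
    """Terminate the child and everything it spawned in one signal."""
    try:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    except ProcessLookupError:
        pass  # already exited: nothing left to orphan
```

A tiny usage example: spawn a sleeping child, then tear the tree down and confirm it died from the group signal.

```python
p = spawn_tracked([sys.executable, "-c", "import time; time.sleep(30)"])
kill_tree(p)
p.wait()
```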
- 99.9% accuracy at a 40% threshold, including covered, blurry, or side faces, as well as low-light scenarios
- Path exclusion/inclusion via explorer or wildcards; hiding people or specific photos of someone; changing thumbnails
- Auto rename conflict resolution
- Sorting by count or name
- Hiding unknown people
- Filtering out people in the background with a single toggle
- Jumping to names
- Dynamic auto-scan with four modes
- Recalibration does not require a rescan
- Thumbnail size controllable with a slider
Great for everyday users who want to sort all their photos, hundreds of thousands of them, locally, without spending money on online storage. It differs from the competition in that it's really easy to use, with a focus on everyday users: install it, point it at the folder where your photos are, and just let it do its thing. It's accurate, even for blurry or side faces, without micromanagement such as manual tagging to get better results. I wanted to keep the UI simple instead of packing in pro features; there are already photo managers out there for professionals, but not an accurate facial recognition organiser for someone who doesn't want to deal with a complex UI.
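The threshold mentioned in the feature list maps onto a standard recognition pattern: compare a face embedding against each known person and only accept the best match if its similarity clears the cut-off, otherwise treat it as a new person. This is an illustrative sketch of that decision, with hypothetical names and cosine similarity assumed as the metric; the app's actual model and scoring may differ.

```python
import math

MATCH_THRESHOLD = 0.40  # the user-facing 40% cut-off (metric assumed)

def cosine(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def assign_person(embedding, people):
    """Attach a face to the closest known person, or return None
    (a new person) if nothing clears the threshold -- which is why
    recalibrating the threshold never requires a rescan: embeddings
    are kept and only this comparison reruns."""
    best, best_score = None, MATCH_THRESHOLD
    for name, centroid in people.items():
        score = cosine(embedding, centroid)
        if score > best_score:
            best, best_score = name, score
    return best

people = {"Alice": [1.0, 0.0], "Bob": [0.0, 1.0]}
match = assign_person([0.9, 0.1], people)      # close to Alice
no_match = assign_person([-1.0, 0.0], people)  # similar to no one
```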



