Current Async Work
• eBPF (https://ebpf.io/): looking into small implementations for better observability of distributed systems and cloud-native workloads (a minimal tracing sketch is after this list).
• Lately, I've also been hooked on all the new AI research that's popping up and transforming every known domain and modality (depth maps, video generation, etc.). It's inspiring and ties into my love for research/rabbit holes.
• Exploring inference optimization and distributed training architectures for these large models: diving into the itty-bitty details of the work by NousResearch, and experimenting with test-time compute-heavy RAG/CoT algorithms on local hardware (think LocalLLaMA subreddit discussions!). A sketch of the self-consistency idea is after this list.
• HPC enthusiast: I've got a history of being a lunatic for high-performance computing and can't seem to get enough of it. I love nerding out on papers and benchmarks about GPUs, LPUs, TPUs (any hardware accelerators) and cool stuff like InfiniBand networks and optics. Looking into network protocols like RoCE and planning to do a cert on it soon (https://www.nvidia.com/en-us/learn/certification/infiniband-professional/, and to finish every course at https://learn.nvidia.com/). I did some work in that space at Nokia on Mellanox and SR-IOV/DPDK workloads, but didn't spend extensive time on it. I've also been messing around with different job schedulers lately, like Slurm and bsub (LSF), to get a feel for more than just the traditional Kubernetes/Linux schedulers (a side-by-side submission sketch is after this list).
- Moore's law might be ending, but architectural improvements, chiplet packaging tech (3D-stacked HBM memory), fast optical interconnects, and material-science breakthroughs like LK-99 (superconductivity) or some other elements can't be counted out until quantum computing breakthroughs become mainstream.
- Understanding chip design (ARM/RISC-V) down to TSMC yields at different wafer scales can help, but less data is available online to understand the economics and energy efficiencies behind those decisions (although lots of YouTube content creators do help make sense of the hardware trends and perf gains). I guess it's better to stick to strengths: writing quality code, performance optimization, benchmarking at datacenter scale, and iteration speed lol
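On the eBPF front, here's a minimal sketch of the kind of small observability probe I mean, using the BCC Python bindings. It assumes bcc is installed and you're running as root; the map and function names are just mine:

```python
import time
from bcc import BPF

# Tiny eBPF program: count clone() syscalls per PID in a kernel-side hash map,
# so the counting happens in-kernel with near-zero overhead.
prog = r"""
BPF_HASH(clone_count, u32, u64);

int trace_clone(void *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    clone_count.increment(pid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="trace_clone")

print("Counting clone() syscalls for 10s...")
time.sleep(10)
for pid, count in b["clone_count"].items():
    print(f"pid {pid.value}: {count.value}")
```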
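For the test-time-compute experiments, the gist is spending more inference FLOPs per query. Here's a minimal sketch of self-consistency-style majority voting over sampled CoT completions; `generate` is a hypothetical wrapper around whatever local model you run (llama.cpp, vLLM, etc.):

```python
import re
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical call into a local LLM; wire up your own backend here."""
    raise NotImplementedError

def self_consistent_answer(question: str, n_samples: int = 16) -> str:
    """Sample N chain-of-thought completions and majority-vote the final answer.
    More samples = more test-time compute = (usually) better accuracy."""
    prompt = f"Q: {question}\nThink step by step, then end with 'Answer: <x>'.\nA:"
    answers = []
    for _ in range(n_samples):
        completion = generate(prompt, temperature=0.8)
        match = re.search(r"Answer:\s*(.+)", completion)
        if match:
            answers.append(match.group(1).strip())
    # The mode of the sampled answers is the self-consistent prediction.
    return Counter(answers).most_common(1)[0][0] if answers else ""
```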
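And on the scheduler tinkering: the same hypothetical 4-GPU job expressed for both Slurm and LSF. The flags are standard sbatch/bsub options, but the partition/queue names and resource shape are made up; it prints the commands instead of executing them so it runs anywhere:

```python
import shlex

# One hypothetical job, two schedulers. On a real cluster you'd pass
# these lists to subprocess.run() instead of printing them.
job = "python train.py --epochs 10"

slurm_cmd = ["sbatch", "--partition=gpu", "--gres=gpu:4",
             "--time=04:00:00", "--job-name=train", f"--wrap={job}"]

lsf_cmd = ["bsub", "-q", "gpu", "-gpu", "num=4",
           "-W", "4:00", "-J", "train", job]

print("Slurm:", shlex.join(slurm_cmd))
print("LSF:  ", shlex.join(lsf_cmd))
```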
Reflecting a lot on programming-language trade-offs at a lower level: compiler toolchains like LLVM and synchronization primitives, through the lens of formal language theory and the mathematical limits of Gödel's theorems, and exploring features like Rust's ownership model. Pondering whether it's possible to safely move away from pointer/memory issues, or if that inevitably comes with performance trade-offs like garbage collection and reference counting. Also considering the balance between language complexity and user complexity. With advancements in AI tooling and LLM-assisted coding/copilots, I believe they can now reduce user complexity, removing the need for the trade-offs we previously had to make between language sophistication and ease of use (not worrying about moving the complexity from the language to the user (+LLM)).
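CPython is a handy playground for that GC-vs-refcounting trade-off, since it uses reference counting plus a cycle-detecting collector. A minimal sketch of where pure refcounting falls short:

```python
import gc
import sys

a = []
print(sys.getrefcount(a))   # 2: the name 'a' plus getrefcount's own temporary ref

b = a                        # another reference bumps the count
print(sys.getrefcount(a))   # 3

# Refcounting alone can never free a cycle: the list references itself,
# so its count never drops to zero even after both names are gone.
a.append(a)
del a, b

# The cycle-detecting GC has to step in; collect() reports objects reclaimed.
print(gc.collect() > 0)     # True: the orphaned cycle was found and freed
```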
Recently I came across a paper by DeepMind where diffusion models are used as real-time game engines. This piqued my interest because it reminded me of an undergrad case study I did: it explored cloud gaming as a green solution for massive multiplayer online games, tying into a green computing class I took, where I focused on energy efficiency, bandwidth bottlenecks between the GPU and the monitor, and connecting the network interface card directly to the monitor, similar to what services like Google Stadia aimed to do before giving up.

In that case study, I looked at the entire cloud gaming framework, including graphics rendering pipelines, video compression, and network delivery modules, analyzing the numbers and exploring what performance improvements needed to happen. I did envelope math (we'd need roughly 10x more bandwidth to do away with local rigs/consoles); I have to redo it with Gaussian-noise workloads now. With many optimizations being made using Gaussian splats and compression techniques, I think it could become a reality in the near future.

One key idea is to explore the math behind using Gaussian splats instead of traditional rasterized workloads. Rather than sticking with rasterized rendering, you can layer ray tracing on top, and beyond that, you could train a neural net on top of the Gaussian splats to incorporate ray tracing. I'm currently looking into how that pipeline might evolve and trying to understand what the future could hold for video game graphics rendering pipelines. Taking the quote from Jensen a little too seriously: "Every pixel will be generated instead of rendered."
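A rough redo of that envelope math, with my own assumed numbers (4K 120 Hz, 24-bit color, a ballpark streaming bitrate), just to show the scale of the gap between an uncompressed local link and a cloud stream:

```python
# Back-of-envelope: uncompressed 4K120 vs. a typical cloud-gaming stream.
# All inputs are assumptions; swap in your own panel/codec numbers.
width, height = 3840, 2160        # 4K UHD
fps = 120                          # high-refresh gaming
bits_per_pixel = 24                # 8-bit RGB, no chroma subsampling

raw_gbps = width * height * fps * bits_per_pixel / 1e9
stream_mbps = 35                   # rough H.265/AV1 cloud-gaming bitrate

print(f"uncompressed link: {raw_gbps:.1f} Gbit/s")   # ~23.9 Gbit/s
print(f"cloud stream:      {stream_mbps} Mbit/s")
print(f"compression gap:   ~{raw_gbps * 1000 / stream_mbps:.0f}x")
```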
In my free time, when not indulging in meme warfare, I'm also a pixel enthusiast and often nerd out on metrics like blooming, color gamut, color space, calibration, contrast ratio, motion blur, HDR formats, and flickering artifacts. I spend time benchmarking these on TestUFO and diving into in-depth metrics like uniformity over at RTINGS, and I often recreate these benchmarks at home for fun.
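For the home recreations, the core numbers are simple ratios. A tiny sketch of two of them, contrast ratio from luminance readings and MPRT-style motion blur from refresh rate, using made-up meter measurements:

```python
# Hypothetical meter readings from a checkerboard test pattern.
white_nits = 520.0   # luminance of the white squares (cd/m^2)
black_nits = 0.42    # luminance of the black squares (cd/m^2)

contrast_ratio = white_nits / black_nits
print(f"contrast: {contrast_ratio:.0f}:1")   # ~1238:1, typical IPS territory

# On a sample-and-hold display, perceived motion blur (MPRT) is roughly the
# frame persistence time: 1000 ms / refresh rate.
for hz in (60, 120, 240):
    print(f"{hz} Hz -> ~{1000 / hz:.1f} ms persistence")
```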
Regularly in touch with kernel.org code and the CNCF landscape to stay on top of bug fixes, new features, and low-level kernel operations.
Some fringe interests and things I zone out and ponder a lot on: getting lost in the world of quantum probabilities... needing certainty! Let's start with something sane: I'm into etymology (who's not?), cymatics, Mandelbrot fractals, hyperbolic geometry (non-Euclidean space), Penrose tilings, and space-time symmetries/conservation laws... solipsism too (ikr!). Not to freak you out... UFO folklore, especially the defense-tech aspects (e.g., the UAP Disclosure Act, S.Amdt.2610 to the NDAA bill S.4638, 118th Congress, and the Project Blue Book era) and the hidden science behind some black-budget projects (e.g., https://www.aaro.mil/, https://www.cia.gov/readingroom/docs/CIA-RDP96-00788R001700210016-5.pdf), from the early days of tech discoveries like lasers, transistors, and information theory to the evolution of internet protocols. Sometimes I feel like I'm channeling Hari Seldon's psychohistory hobby from Foundation! Which kinda makes sense for someone who's obsessed with normal distributions/Gaussian bell curves and all that Boltzmann-brain low-energy-state goodness: there's something oddly satisfying about the statistical nature of thermodynamic equilibrium and entropy, and how it affects the unidirectional arrow of time.
Back to Timeline