delve into the making of this site

This website was created from scratch over a weekend (with the Cursor AI IDE) without any dedicated backend server or database (aside from Vercel hosting), using only HTML, CSS, and JavaScript. No webpack or other build packaging was used, and no frameworks were employed (at least locally, if CDN imports don't count). The static files are rendered client-side in real time, with everything happening in the browser.

Deets on some of the tooling explored:

Tools: Vercel + v0 + unpkg + Fly.io + Cloudflare + three.js (modified already-implemented examples) + Cursor IDE + lumalabs.ai + Postshot + Polycam + Mixamo + fal.ai + meshy.ai + Blender, Maya (native file formats) + FBX2glTF + glTF2FBX + dnsmap.io + dnschecker.org + countapi.xyz + nerfstudio + Sketchfab (sample 3D models from creators) + https://niujinshuchong.github.io/mip-splatting-demo/ + splat-converter.glitch.me + AI text-to-3D tools (Genie) and image/video-to-3D tools + Namecheap + Talktastic (voice to text)

delve pt 2

Explored the current state-of-the-art open-source methods in 3D AI pipelines to get a sense of the efficiencies and numbers. Draco (geometry) and KTX2 (texture) compression helped reduce a 4.78 MB 3D asset to around 1.74 MB, a saving of about 64% in size. This compression significantly improved load times for web-based 3D applications, especially for larger models and animation-pipeline assets.
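As a quick sanity check on those numbers, here is a tiny helper that reports the savings from a compression pass. The function name and MB-based inputs are my own, not from the site's code; the actual compression was done with tools like CesiumGS's gltf-pipeline (whose `-d` flag enables Draco geometry compression).

```javascript
// Sketch: report the size savings from a compression pass.
// Names and MB-based inputs are hypothetical, not from the site's code.
function compressionReport(originalMB, compressedMB) {
  const savedMB = originalMB - compressedMB;
  const pct = (savedMB / originalMB) * 100;
  return { savedMB: +savedMB.toFixed(2), pct: +pct.toFixed(1) };
}

// The asset mentioned above: 4.78 MB down to 1.74 MB.
console.log(compressionReport(4.78, 1.74)); // { savedMB: 3.04, pct: 63.6 }
```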

While Mixamo's auto-rigging failed for the default animations, I experimented with tools like meshy.ai and fal.ai for mesh conversions. I also tested various photogrammetry tools, including downloading 360-degree MP4 videos from Polycam and uploading them to Luma Labs AI for capture.

While working with splat rendering, I explored sites like the mip-splatting demo, three.js Gaussian splat demos, and splat-converter.glitch.me, though some conversions produced artifacts. I adjusted initial camera positions in scenes (e.g., [-1, -4, 6]) and played with programmatic rotation changes in XYZ coordinates for different splat scenes.
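Those rotation tweaks boil down to turning XYZ Euler angles into a quaternion for the splat viewer. A sketch of that conversion (the function name is my own; it mirrors three.js's 'XYZ'-order math), with a comment showing roughly how mkkellogg's GaussianSplats3D library wires it up per its README — the file name is a placeholder:

```javascript
// Sketch: three.js-style 'XYZ'-order Euler angles (radians) -> quaternion [x, y, z, w].
function eulerXYZToQuaternion(x, y, z) {
  const c1 = Math.cos(x / 2), s1 = Math.sin(x / 2);
  const c2 = Math.cos(y / 2), s2 = Math.sin(y / 2);
  const c3 = Math.cos(z / 2), s3 = Math.sin(z / 2);
  return [
    s1 * c2 * c3 + c1 * s2 * s3,
    c1 * s2 * c3 - s1 * c2 * s3,
    c1 * c2 * s3 + s1 * s2 * c3,
    c1 * c2 * c3 - s1 * s2 * s3,
  ];
}

// Roughly how it plugs into GaussianSplats3D in the browser (per its README):
// const viewer = new GaussianSplats3D.Viewer({ initialCameraPosition: [-1, -4, 6] });
// viewer.addSplatScene('scene.ksplat', { rotation: eulerXYZToQuaternion(0, Math.PI / 2, 0) })
//       .then(() => viewer.start());

console.log(eulerXYZToQuaternion(0, 0, 0)); // [0, 0, 0, 1]
```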

Tested the .ply viewer and managed to compress files from 280 MB to 20 MB by converting to .ksplat. However, converting between glTF and FBX formats (hoping to reduce size with less quality loss) was a challenge: tools like FBX2glTF (for the FBX/OBJ files from Mixamo) are compiled only for x64 and don't run on Apple Silicon MacBooks, so I spun up an EC2 Ubuntu instance to handle the conversion and fetched the files back locally.

For DNS management, I used tools like dnsmap.io and dnschecker.org to track propagation of A, CNAME, and MX records. I pointed the Namecheap-bought domain at Vercel's DNS servers and managed the records in Vercel's clean UI. The unpkg CDN (serving three.js and friends) runs on Fly.io, which in turn depends on Cloudflare infrastructure; I also opted into Vercel's CDN for its better geographic routing of cached static assets like 3D models and images.

Cursor IDE, along with Vercel's integrations and some rusty JS knowledge, let me build the entire client-side 3D code over a weekend. I used countapi.xyz for simple backend counter operations, essentially "Integer as a Service" (IaaS), then switched to kvdb.io for the counter since it was more reliable and faster (countapi.xyz had CORS issues).
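A minimal sketch of such a counter against kvdb.io's HTTP key-value API — the bucket id is a placeholder, and this read-then-write flow is a non-atomic simplification of whatever the site actually ships (kvdb.io reads a key with GET and writes with a plain POST body, per its public docs):

```javascript
// Sketch: simple visit counter on kvdb.io's key-value HTTP API.
// `fetchFn` is injectable so the flow can be exercised without a network.
async function bumpCounter(bucket, key, fetchFn = fetch) {
  const url = `https://kvdb.io/${bucket}/${key}`;
  const res = await fetchFn(url);                              // read current count
  const current = parseInt(await res.text(), 10) || 0;
  const next = current + 1;
  await fetchFn(url, { method: "POST", body: String(next) });  // write it back (not atomic)
  return next;
}

// Browser usage (placeholder bucket id):
// bumpCounter("my-bucket-id", "visits").then((n) => console.log(`visits: ${n}`));
```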

Gaussian splats were generated with popular tools like Postshot, Polycam, Luma AI, KIRI, and nerfstudio. Gaussian splats can represent both extended scene environments and individual objects.

unpkg (hosted on Fly.io's CDN network, backed by Cloudflare) was employed for loading npm modules like three.js, avoiding the build packaging process entirely. The site is hosted on Vercel, allowing easy point edits via a full-stack solution. I used this as an opportunity to work with a tech stack I don't typically get to use in my daily work, focusing on aesthetics and hands-on experience.
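The no-build setup boils down to an import map pointing bare module specifiers at unpkg; a minimal sketch of what that looks like (the pinned version number is illustrative, not necessarily what the site uses):

```html
<script type="importmap">
  {
    "imports": {
      "three": "https://unpkg.com/three@0.160.0/build/three.module.js",
      "three/addons/": "https://unpkg.com/three@0.160.0/examples/jsm/"
    }
  }
</script>
<script type="module">
  import * as THREE from "three";
  const scene = new THREE.Scene(); // everything renders client-side from here
</script>
```

With this in place the browser resolves `import ... from "three"` straight from the CDN, which is what makes the zero-webpack setup work.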

In my exploration of file formats (e.g., OpenUSD), I tested a local .ply polygon file from a Luma app capture of a custom environment on mobile (initial size ~250 MB) and explored techniques for streaming that data more efficiently. Embedding directly via iframes was much faster thanks to Luma's native streaming; inspecting the iframe in the browser showed it loaded only ~15 MB. I also dug into the differences between .ply and .splat formats (including Luma's custom splat format), focusing on compression and adaptive level-of-detail streaming, where the splat data loads gradually like Gaussian noise resolving into the scene.

A large part of the motivation for this site was to dive into the Three.js and WebGL/WebGL2/WebGPU/WebXR ecosystems, while also experimenting with 3D modeling, Gaussian splatting, and NeRFs (Neural Radiance Fields). With WebXR I ran into limitations in Apple's Vision Pro ecosystem, which felt too early in its current form: despite the impressive eye-tracking and spatial video features, it was a bit heavy for my liking, so I'm holding off for future XR devices that offer better FOV and microLED pixel density (pixels per degree, PPD) for a more lifelike experience (I had to settle for a Quest from 2017). I wanted to explore how load times for various 3D formats and models compare, and how best to integrate splats. Cursor made development significantly easier.

Tested a few glTF viewers during this process, analyzing load times at different compression levels and checking for artifacts and compatibility across file formats (FBX, GLB, glTF) with the three.js loader functions. I made custom splats in the Luma app and streamed them onto the web canvas via Luma's iframe; loading locally would have meant waiting on a ~200 MB file, so the splat has to stream efficiently, with low-level detail loading gradually like Gaussian noise.
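For the Draco-compressed GLBs specifically, three.js needs the decoder wired into its glTF loader before those files will load. A browser-module wiring sketch (the decoder path and file name are placeholders, and it assumes an existing `THREE.Scene` named `scene`):

```javascript
// Sketch: three.js GLTFLoader with Draco decoding enabled (browser module context).
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";
import { DRACOLoader } from "three/addons/loaders/DRACOLoader.js";

const dracoLoader = new DRACOLoader();
// The decoder binaries can be served from a CDN; this path is illustrative.
dracoLoader.setDecoderPath("https://unpkg.com/three@0.160.0/examples/jsm/libs/draco/");

const loader = new GLTFLoader();
loader.setDRACOLoader(dracoLoader);

loader.load("model-draco.glb", (gltf) => {
  scene.add(gltf.scene); // assumes a THREE.Scene named `scene` already exists
});
```

Without the `setDRACOLoader` call, a Draco-compressed GLB fails to parse, which is easy to mistake for a broken export.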

Refs: Vercel's data analytics are good; origin-to-edge and edge-to-user data usage is tracked well, with edge network DC locations (https://www.vercel-status.com/) and dashboards in the account profile.

Links:

https://github.khronos.org/glTF-Sample-Viewer-Release/

https://github.com/CesiumGS/gltf-pipeline

https://github.com/mkkellogg/GaussianSplats3D/blob/main/README.md

https://www.jsdelivr.com/package/npm/@mkkellogg/gaussian-splats-3d

https://r3f.docs.pmnd.rs/getting-started/introduction

Luma nerfs capture best practices: https://docs.lumalabs.ai/MCrGAEukR4orR9

Back to Timeline