# crazy-file-server

*A heavy-duty web file browser for CRAZY files.*

The whole schtick of this program is that it caches the directory and file structures so that the server doesn't have to re-read the disk on every request. By doing the processing upfront when the server starts, along with some background scans to keep the cache fresh, we can keep requests snappy and responsive (a minimal sketch of this idea appears at the end of this README).

I needed to serve a very large dataset full of small files publicly over the internet in an easy-to-browse website. The existing solutions were subpar, and I found myself having to create confusing Openresty scripts and complex CDN caching to keep things responsive and server load low. I gave up and decided to create my own solution.

You will likely need to store your data on an SSD for this. With an SSD, my server was able to crawl over 6 million files stored in a very complicated directory tree in just 5 minutes.

## Features

- Automated cache management. Fill the cache when the server starts, or as requests come in.
- File browsing API.
- Download API.
- Restrict certain files and directories from the download API to prevent users from downloading your entire 100GB+ dataset (see the second sketch at the end of this README).
- Frontend-agnostic design.
- Basic searching or Elasticsearch integration.

## Install

1. Install Go.
2. Download the binary, or do `cd src && go mod tidy && go build`.

## Use

1. Edit `config.yml`. It's well commented.
2. `./crazyfs --config /path/to/config.yml`. You can use `-d` for debug mode to see what it's doing.

By default, it looks for your config in the same directory as the executable: `./config.yml` or `./config.yaml`.

If you're using the initial cache and have tons of files to scan, you'll need at least 5GB of RAM and will have to wait 10 or so minutes for it to traverse the directory structure. CrazyFS is heavily threaded, so you'll want at least an 8-core machine.

You'll need something like Nginx in front of it if you want SSL/HTTPS. Also, CrazyFS works great with an HTTP cache in front of it.
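
To make the caching design concrete, here is a minimal, hypothetical Go sketch of the core idea described above: walk the tree once at startup (and again from background rescans) into an in-memory map, then answer requests from the map instead of re-reading the disk. This is not CrazyFS's actual code; every name here (`cacheItem`, `crawl`) is illustrative.

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"sync"
)

// cacheItem is a hypothetical cached view of one file or directory.
type cacheItem struct {
	Path  string
	IsDir bool
	Size  int64
}

var (
	mu    sync.RWMutex
	cache = map[string]cacheItem{} // path -> cached metadata
)

// crawl walks the tree once and fills the in-memory cache. Run it at startup
// and again from a background goroutine to keep the cache fresh; request
// handlers then read from the map instead of hitting the disk.
func crawl(root string) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		mu.Lock()
		cache[path] = cacheItem{Path: path, IsDir: d.IsDir(), Size: info.Size()}
		mu.Unlock()
		return nil
	})
}

func main() {
	if err := crawl("."); err != nil {
		panic(err)
	}
	mu.RLock()
	fmt.Printf("cached %d entries\n", len(cache))
	mu.RUnlock()
}
```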
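
And here is one hypothetical way the download-restriction feature could work: normalize the requested path, then reject anything under a blocked prefix before serving the file. Again, this is an illustration rather than CrazyFS's implementation; the actual restricted paths would presumably be set in `config.yml`, and the prefixes below are made up.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// restrictedPrefixes is a hypothetical blocklist; a real server would load
// this from its configuration.
var restrictedPrefixes = []string{"/data/private", "/data/raw-dumps"}

// isRestricted reports whether a requested path falls under a restricted tree.
func isRestricted(requested string) bool {
	clean := filepath.Clean("/" + requested) // normalize to defeat ../ tricks
	for _, p := range restrictedPrefixes {
		if clean == p || strings.HasPrefix(clean, p+"/") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isRestricted("data/private/secret.bin"))          // true
	fmt.Println(isRestricted("/data/public/../private/x"))        // true (cleaned)
	fmt.Println(isRestricted("/data/public/readme.txt"))          // false
}
```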