update readme

This commit is contained in:
Cyberes 2024-03-23 17:37:16 -06:00
parent 90fd31bf3a
commit fdd7f08b12
1 changed file with 26 additions and 28 deletions

# crazy-file-server
_A heavy-duty web file browser for cRaZy files._

The whole schtick of this program is that it caches the file structure of your dataset so that the server doesn't have
to do I/O operations on every request. By doing the processing upfront when the server starts, we can keep requests
snappy and responsive.
I needed to serve a very large dataset full of small files publicly over the internet in an easy-to-browse website. The
existing solutions were subpar and I found myself having to create confusing Openresty configs with complex CDN caching
to keep things responsive and server load low. I gave up and decided to build my own solution from the ground up.
## System Setup
This was designed to run on a Linux machine. Not sure if this works on Windows.

You'll need at least 5GB of RAM. CrazyFS is heavily threaded, so you'll want at least an 8-core machine.

You absolutely need an SSD for this. With two SSDs in a RAID1 ZPOOL, my server was able to crawl over 7 million files
stored in a very complicated directory tree in under 3 minutes.

It's also possible to use a very fast SSD as swap in case your dataset needs more memory than you have in RAM. The
Samsung 980 PRO NVMe drive worked very well for me as a swap drive.
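If you go that route, turning a spare NVMe drive into swap on Linux takes only a couple of commands. The device name
below is a placeholder; double-check it before running anything, since `mkswap` wipes the drive:

```sh
# WARNING: mkswap destroys whatever is on the device.
# /dev/nvme0n1 is a placeholder -- substitute your actual spare SSD.
sudo mkswap /dev/nvme0n1
sudo swapon /dev/nvme0n1
swapon --show   # confirm the new swap device is active
```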
You'll need something like Nginx if you want SSL. Also, CrazyFS works best with an HTTP cache in front of it. I set my
CDN to cache responses for 2 hours.
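As a rough sketch of what that reverse proxy might look like (the backend port, domain, and certificate paths are
placeholders, and the cache window just mirrors the 2-hour figure above):

```nginx
# Minimal sketch: SSL termination plus a local response cache in front of CrazyFS.
proxy_cache_path /var/cache/nginx/crazyfs keys_zone=crazyfs:10m max_size=10g;

server {
    listen 443 ssl;
    server_name files.example.com;              # placeholder domain
    ssl_certificate     /etc/ssl/fullchain.pem; # placeholder cert paths
    ssl_certificate_key /etc/ssl/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # assumed CrazyFS listen address
        proxy_cache crazyfs;
        proxy_cache_valid 200 2h;          # match the 2-hour cache window above
    }
}
```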
## Features
- Automated cache management. Fill the cache when the server starts, or as requests come in.
- Front end agnostic design.
- Elasticsearch integration.
- File browsing API (see the sketch below).
- Download API.
- Admin API.
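To give a feel for how a front end might drive the browsing and download APIs, here is a hypothetical session. The
routes, parameters, and port are placeholders for illustration, not CrazyFS's actual endpoints; the source defines the
real API:

```sh
# Hypothetical routes and port -- illustration only; see the source for the real API.
curl 'http://localhost:8080/api/list?path=/datasets/photos'                 # browse a directory
curl -O 'http://localhost:8080/api/download?path=/datasets/photos/cat.jpg'  # fetch one file
```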
## Install
Download the binary or install Go and build it via `./build.sh`.
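For example, a first build-and-run might look like this (assuming you've already cloned the repo and Go is on your
PATH):

```sh
./build.sh                          # produces the crazyfs binary
./crazyfs --config ./config.yml -d  # -d enables debug output so you can watch the initial scan
```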
## Use
1. Edit `config.yml`. It's well commented.
2. `./crazyfs --config /path/to/config.yml`. You can use `-d` for debug mode to see what it's doing.

By default, it looks for your config in the same directory as the executable: `./config.yml` or `./config.yaml`.
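The config is YAML. As a purely illustrative sketch of the kind of settings involved (these key names are made up; the
well-commented `config.yml` shipped with the project is the source of truth):

```yaml
# Hypothetical keys for illustration only -- see the comments in the real config.yml.
root_dir: /srv/files        # the dataset to index and serve
http_port: 8080             # where the API listens
initial_crawl: true         # fill the cache at startup instead of lazily
elasticsearch:
  enable: false
  url: http://localhost:9200
```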
## To Do
- [ ] Remove symlink support.

Native searching is included with the server, but it doesn't work very well and is slow. You'll probably want to set up
Elasticsearch.
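If you do stand up Elasticsearch, a quick sanity check that the node is reachable before pointing CrazyFS at it (the
default local URL is assumed here):

```sh
curl http://localhost:9200   # a healthy node answers with its name and version info
```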