update readme
parent 90fd31bf3a · commit fdd7f08b12 · README.md

# crazy-file-server

_A heavy-duty web file browser for cRaZy files._

The whole schtick of this program is that it caches the file structure of your dataset so that the server doesn't have to do I/O operations on every request. By doing the processing upfront when the server starts, we can keep requests snappy and responsive.

I needed to serve a very large dataset full of small files publicly over the internet in an easy-to-browse website. The existing solutions were subpar and I found myself having to create confusing Openresty configs with complex CDN caching to keep things responsive and server load low. I gave up and decided to build my own solution from the ground up.

## System Setup

You'll need at least 5GB of RAM. CrazyFS is heavily threaded, so you'll want at least an 8-core machine.

You absolutely need an SSD for this. With two SSDs in a RAID1 ZPOOL, my server was able to crawl over 7 million files stored in a very complicated directory tree in under 3 minutes.

It's also possible to use a very fast SSD as swap in case your dataset needs more memory than you have in RAM. The Samsung 980PRO NVMe drive worked very well for me as a swap drive.
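
If you go that route, here's a minimal sketch of dedicating a drive to swap on Linux. The device name `/dev/nvme1n1` is an assumption and will differ on your machine:

```sh
# Assumes the spare NVMe drive is /dev/nvme1n1 and holds no data you care about.
sudo mkswap /dev/nvme1n1                 # write a swap signature to the drive
sudo swapon --priority 100 /dev/nvme1n1  # enable it, preferring it over slower swap
swapon --show                            # confirm the new swap device is active
```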

You'll need something like Nginx if you want SSL. Also, CrazyFS works best with an HTTP cache in front of it. I set my CDN to cache responses for 2 hours.
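
As a sketch only: an Nginx server block along these lines terminates SSL and adds a response cache in front of CrazyFS. The domain, certificate paths, upstream port, and cache sizing are all assumptions to adapt to your deployment:

```nginx
# Hypothetical reverse-proxy config -- adjust names, paths, and ports for your setup.
proxy_cache_path /var/cache/nginx/crazyfs keys_zone=crazyfs:10m max_size=1g;

server {
    listen 443 ssl;
    server_name files.example.com;
    ssl_certificate     /etc/ssl/files.example.com.crt;
    ssl_certificate_key /etc/ssl/files.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # wherever crazyfs is listening
        proxy_cache crazyfs;
        proxy_cache_valid 200 2h;          # match the 2-hour CDN cache mentioned above
    }
}
```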

This program was designed to run on a Linux machine. Not sure if this works on Windows.

## Features

- Automated cache management. Fill the cache when the server starts, or as requests come in.
- Front-end-agnostic design.
- Elasticsearch integration.
- File browsing API (see the example below).
- Download API.
- Admin API.
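
The routes and port below are invented for illustration, not taken from the source; they only sketch what calls against the browsing and download APIs could look like once you know the real paths:

```sh
# Hypothetical endpoints -- check the source for the real routes.
curl "http://localhost:8080/api/list?path=/some/directory"                # browse a directory
curl -o file.txt "http://localhost:8080/api/download?path=/some/file.txt" # fetch a file
```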

## Install

Download the binary or install Go and build it via `./build.sh`.
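
If you build from source, earlier revisions of this README had the script under `src/`, so something like this should do it (assuming a Go toolchain is installed):

```sh
# Build from source; assumes Go is installed and the build script lives in src/.
cd src && ./build.sh
```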

## Use

1. Edit `config.yml`. It's well commented.
2. `./crazyfs --config /path/to/config.yml`. You can use `-d` for debug mode to see what it's doing.

By default, it looks for your config in the same directory as the executable: `./config.yml` or `./config.yaml`.
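
There's no sample config inline here, so the keys below are pure guesses at what such a file covers; the comments in the real `config.yml` are the authority:

```yaml
# Hypothetical sketch -- every key name here is invented; see the real config.yml.
root_dir: /srv/dataset  # the directory tree CrazyFS should index and serve
http_port: 8080         # port the HTTP API listens on
initial_crawl: true     # crawl everything at startup instead of lazily on request
```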

Native searching is included with the server, but it doesn't work very well and is slow. You'll probably want to set up Elasticsearch.
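
If you don't have a cluster handy, a throwaway single-node Elasticsearch is easy to run with Docker; the image tag below is an assumption, so use whatever release CrazyFS targets:

```sh
# Single-node Elasticsearch for local testing; the version tag is an assumption.
docker run -d --name crazyfs-es -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:8.11.1
```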

## To Do

- [ ] Remove symlink support.