Graylog is a very scalable log collection and log management solution, able to deal with over 100,000 messages per second. It comes in an open-source and an enterprise version, so you can get vendor support if needed.
Why choose Graylog over the Elasticsearch-Logstash-Kibana (ELK) stack, Loki (from Grafana Labs) or Splunk? Read on.
Graylog vs Splunk
This comes down to price. Historically, Splunk licensing was strict and based on the processing rate of logs. That means one guy turns on debugging on a device, the log rate shoots up, tipping you over the licence limit, and, you guessed it, Splunk would stop processing the excess logs. You would lose insight into your environment, and this is bad.
Luckily, Splunk has realised this and amended their model; it is now more like a true-up after the event for how many logs you have processed. Compared to FOSS, the expensive software is hard to justify, but they have tried, by having all the features and letting you "do more with Splunk".
There are FOSS options that come close to the functionality of paid-for solutions. Yes, FOSS has hidden costs, but all systems have hidden costs; it is about how much you have and how much you want to invest, and that makes the choice. If you have followed any type of life cycle for the product or solution you are providing, you have already costed for the operation and monitoring! YES!!!

Graylog vs ELK
They both use Elasticsearch for the storage of logs, and they both have graphing elements: ELK has Kibana, Graylog has its own web interface.
Kibana is more versatile when graphing data, but Graylog has made big improvements on this in the newer versions. Worst-case scenario, you could just graph directly off Elasticsearch via Grafana. Below is a comparison of what makes up each vendor's stack.

As you can see they are very similar, so I have to choose based on time and resources. If you are more familiar with ELK and with tuning the aspects that make it run well, feel free; I find I need more resources and have to invest more to get that solution running. If I had to build one with no time deadlines, ELK would be my first choice every day. Saying that, here is why I use Graylog:
– Time to implement is low, and I know it can scale easily.
– There is professional support if needed, and you don't feel like a little fish in a big pond.
– It's middleware: you get to play with the fun stuff without needing full engineer-level knowledge of how Elasticsearch ticks.
Graylog vs Loki
Loki is the new kid on the block. Loki is described as "Prometheus, but for logs"; this doesn't mean it scrapes logs, but it does use Prometheus-style syntax when running queries.
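To illustrate, a Loki query (LogQL) selects a log stream with Prometheus-style label matchers and then filters the lines; the label names here are made-up examples:

```
{host="web01", job="syslog"} |= "error"
```

The `{…}` part picks streams by label, just like a Prometheus metric selector, and `|= "error"` keeps only lines containing that text.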
Where does Loki fit? Have you thought about how much it would cost to run an ELK stack, Graylog or Splunk in the cloud? WOW, it gets expensive fast, really expensive. These systems have been made for the big-data use case: information = power, and the more insight and logs you have, the more you can do with it (almost correct).
They also index all the words in the log message; this makes the index a memory hog by design and very large (a 50 GB index is not unheard of). Loki doesn't index the whole message, only pre-defined labels like hostname. This limits the index size, meaning you can run it on a smaller instance = saving money.
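A toy sketch of why that matters: a full-text index (ELK/Graylog style) gets one key per distinct word across all messages, while a label-only index (Loki style) gets one key per label value. The log lines and labels below are invented for illustration, not real output from either system:

```python
# Hypothetical log entries: each has Loki-style labels plus the raw line.
logs = [
    {"labels": {"host": "web01"}, "line": "GET /index.html 200 fast response"},
    {"labels": {"host": "web01"}, "line": "GET /login 500 slow backend error"},
    {"labels": {"host": "db01"},  "line": "checkpoint complete in 42 ms"},
]

# Full-text inverted index: every distinct token becomes a key.
full_text_index = {}
for i, entry in enumerate(logs):
    for token in entry["line"].lower().split():
        full_text_index.setdefault(token, set()).add(i)

# Label index: only the pre-defined label values become keys.
label_index = {}
for i, entry in enumerate(logs):
    for key, value in entry["labels"].items():
        label_index.setdefault((key, value), set()).add(i)

print(len(full_text_index))  # 15 keys: one per distinct word
print(len(label_index))      # 2 keys: one per label value
```

Scale that to millions of messages and the full-text index keeps growing with vocabulary, while the label index only grows with the number of distinct label values — hence the smaller instance.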

Graylog Architecture
When getting started with Graylog, the easiest way is to download a pre-built image from Graylog for your favourite virtualiser, e.g. OVA or VMDK. This one-server-does-all setup is good for a play and to see whether you want to use it, a proof of concept (PoC).
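If you would rather play with containers than a virtual appliance, Graylog also publishes Docker images. A minimal single-node sketch might look like the Compose file below; the image tags are assumptions, the secrets are placeholders you must replace, and the password hash shown is just SHA-256 of "admin":

```yaml
version: "3"
services:
  mongodb:
    image: mongo:4.2
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - discovery.type=single-node
  graylog:
    image: graylog/graylog:4.3
    environment:
      # At least 16 characters; generate your own secret.
      - GRAYLOG_PASSWORD_SECRET=replacemewithsecret
      # SHA-256 of the admin password ("admin" here; change it).
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      - "9000:9000"        # web interface / REST API
      - "1514:1514/udp"    # syslog input (created in the UI)
      - "12201:12201/udp"  # GELF input
```

Same idea as the OVA: everything on one box, fine for a PoC, not how you would run it in production.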
To get this running in a medium to large organisation, we need to split out the different functions; this also means we are able to scale components individually when they are being saturated.

That is it for today's intro to Graylog; next time we will look at installing it from scratch.
UPDATE: yes, it uses MongoDB. You don't have to tell me, just google "MongoDB sucks".