Rails logging details


#1

Hello there,

Can you explain how you capture Rails logs?

My specific question is actually a little bit different: I’ve added the logstasher gem and, assuming that in the Docker world everything should just go to STDOUT, I set logstasher up that way.

The problem is that every line in the logs is prepended with App #{pid} stdout: and I’m sure Kibana won’t parse them right.

Here is an example:

2017-09-05T20:54:45.857Z [nex-backend-dev-web 54e485089573]: App 315 stdout: {"method":"HEAD","path":"/","format":"json","controller":"welcome","action":"index","status":200,"duration":4.9,"view":4.44,"db":0.0,"ip":"54.159.126.21","route":"welcome#index","request_id":"5f695a02-0ab8-4108-adaf-45b018f63352","source":"nexhealth.staging.rails","tags":["request"],"@timestamp":"2017-09-05T20:54:45.283Z","@version":"1"}

Should I just log to file and expect logs to get shipped to our ELK instance?

Also, adding some technical info to the support pages (I’m talking about https://www.aptible.com/documentation/enclave/tutorials/logging-setup/elk-stack.html) would be very helpful.

Thanks!


#2

Hi,

Can you explain how you capture Rails logs?

Enclave captures whatever your container is writing to stdout and stderr.
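As a minimal sketch of what that means for your app (the helper and field names here are illustrative, not logstasher’s actual API): all your container has to do is write one self-contained JSON object per line to STDOUT, like the example line in the question.

```ruby
require "json"
require "time"

# Hypothetical helper: format one request event as a single JSON line,
# roughly the shape logstasher emits in the example above.
def json_log_line(event, time = Time.now.utc)
  JSON.generate(event.merge("@timestamp" => time.iso8601))
end

# Writing lines like this to STDOUT is all the container needs to do;
# the platform captures them as-is.
puts json_log_line({ "method" => "GET", "path" => "/", "status" => 200 })
```

Anything that ends up on stdout or stderr in this format gets captured; anything written elsewhere does not.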

The problem is that every line in the logs is prepended with App #{pid} stdout: and I’m sure Kibana won’t parse them right.

This prefix isn’t coming from Enclave.

A quick Google search suggests you might be using Phusion Passenger, which seems to be generating those prefixes.
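To illustrate the assumed Passenger behavior: each line your app writes appears to be re-emitted as App &lt;pid&gt; stdout: followed by the original line, so stripping that prefix (here with a hypothetical regex, e.g. in a log-processing step) recovers the JSON payload your app actually wrote:

```ruby
# Hypothetical pattern matching the prefix seen in the example log line.
PASSENGER_PREFIX = /\AApp \d+ (?:stdout|stderr): /

line = 'App 315 stdout: {"method":"HEAD","path":"/","status":200}'
payload = line.sub(PASSENGER_PREFIX, "")
# payload is now the raw JSON string the app originally wrote,
# which Elasticsearch could index as structured fields.
```

That said, the cleaner fix is to stop Passenger from adding the prefix in the first place, rather than stripping it downstream.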

Should I just log to file and expect logs to get shipped to our ELK instance?

No.

This is documented here: we capture stdout and stderr, and that’s it. So, if you log to a file, we won’t be able to capture those logs.

Also, adding some technical info to the support pages would be very helpful.

Sure, and thanks for the suggestion; see the link above.

Unfortunately, we can only duplicate that information in so many places :slight_smile:


As an aside, note that Kibana does not actually parse logs. Instead, Kibana simply makes queries to Elasticsearch and displays the results.

The JSON parsing can happen in two places:

When you’re deployed on Enclave and set up an Elasticsearch Log Drain, by default we send your logs directly to Elasticsearch. This means Kibana will likely show you the JSON as text.

To fix this, we can enable an Ingest Pipeline if you’d like, where you can perform the JSON parsing. This is support-request-only at this time, so feel free to contact support if you’d like to set that up.

(of course, you could also run Logstash yourself and route your logs there, but that’s more brittle, difficult, and expensive)
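Either way, the parsing step itself does the same thing. Here is a rough Ruby sketch (the function and field names are illustrative, not the actual Elasticsearch or Logstash API) of what a JSON-parsing step does to each event: the raw line arrives as a "message" string, and parsing it promotes the JSON keys to top-level fields Kibana can filter and aggregate on.

```ruby
require "json"

# Illustrative stand-in for an ingest pipeline's json processor or a
# Logstash json filter: merge the parsed "message" field into the event.
def parse_json_message(event)
  event.merge(JSON.parse(event["message"]))
rescue JSON::ParserError
  event # leave non-JSON lines untouched
end

event = parse_json_message({ "message" => '{"method":"GET","status":200}' })
event["status"] # => 200
```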


#3

Thomas, thanks a lot for your thorough answer. It looks like I got confused because Rails app servers such as Unicorn or Puma usually write logs to the log/ directory, not to STDOUT. Actually, even after looking carefully at https://github.com/phusion/passenger-docker, I still fail to see where they redirect logging to STDOUT, and I had the idea that the image was made by Aptible and you just tweak some logging settings…

Thanks a lot for clearing things up!

Also, thanks for the remark about log parsing, that’s something I totally missed. You know, when you don’t know how things work, you tend to expect them to work automagically :slight_smile:

So, I will have to deal with Passenger first, and when the logs look fine, we can return to that Ingest Pipeline subject.

Cheers!