Hello Raphael and thanks for the excellent questions! My answers below align with the order of your questions:
1. I set the timestamp in the structured log itself so that the structured logging implementation is not coupled to Docker or any other kind of log shipper. With the timestamp being set by the application, you can ship the logs with anything, and whether or not the shipper supports attaching timestamps is a non-issue.
2. Like most things in engineering...it depends. I 100% agree that you should be capturing metrics (via Prometheus/StatsD/etc.) on the performance of your database and application. If you look at my previous articles on Prometheus, I lay out how to use the Postgres Exporter and how to tie into the Ecto Telemetry events to get Ecto metrics into Prometheus. This time-series data will be essential in setting up alerts and tracking performance over time. Where logging the queries can be beneficial is when you have many dynamic queries (i.e., queries that are composed as a result of user input) and it is necessary to correlate system slowdowns with what was actually running. For example, in the AutoFinder.UsedCars module that I wrote, the database query is dynamically constructed based on user input. It may be useful to see in the logs what was actually run in case fields are missing an index in the DB. In other words, the metrics would tell you that there is a performance problem, while the logs would provide the specifics to figure out why the slowdown is occurring. Depending on your application, capturing these logs may be excessive, in which case you can perhaps omit logs that are below a certain time threshold, lower log retention, etc. It really depends on your needs.
3. There are many ways to approach this one. I usually prefer structuring the logs in the application, as it is less error-prone (in my experience). Having your log shipper do it can be fragile if your RegEx is slightly off or if newlines across separate requests are being interleaved. What I have done in the past is use Logstash (which is a log shipper like Vector) to filter out log messages that I didn't care for. For example, I wanted to drop log entries for /metrics, since having Prometheus poll every 5 seconds was polluting the logs. With the logs being structured from the get-go, Logstash was able to just access a particular field in the structured log and reject the log message if the path == /metrics.
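To make point 1 concrete, here is a minimal sketch of wiring LoggerJSON in as the Logger backend. The option and module names follow the Nebo15 LoggerJSON README at the time of writing and are an assumption, not the article's exact config; the key idea is that the formatter stamps each JSON line with its own timestamp field, so the shipper never has to attach one:

```elixir
# config/config.exs -- minimal sketch; names assumed from the LoggerJSON README
use Mix.Config

# Replace Logger's default console backend with the JSON backend
config :logger, backends: [LoggerJSON]

# Each log line becomes one JSON object, including an application-set
# timestamp, so any log shipper can forward it verbatim.
config :logger_json, :backend,
  metadata: :all,
  formatter: LoggerJSON.Formatters.GoogleCloudLogger
```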
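The /metrics filtering described in point 3 can be sketched as a Logstash filter. Assuming events arrive already parsed as JSON with a `path` field (the field name here is an assumption matching the structured log), dropping the Prometheus scrape noise needs no RegEx at all:

```conf
# logstash.conf (sketch) -- assumes events are already structured JSON
filter {
  # Access the structured field directly instead of pattern matching raw text
  if [path] == "/metrics" {
    drop { }
  }
}
```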
OK; my previous cry for help was marked as spam; Disqus is living down to my expectations.
I'm completely new to LoggerJSON and Grafana, and just working on my first Elixir app (coming from a Ruby background).
I've gone through, literally copying and pasting most files except for the very short snippets, and everything works except the curl commands. I get
$ curl 'http://localhost:4000/api/used_cars?make=ferrari'
curl: (52) Empty reply from server
$
When I look at the docker-compose logs, two things jump out at me:
First, the very first line in the log, from auto_finder_app_1, reads WARNING: no logs are available with the 'loki' log driver.
Second, after Grafana finishes loading, and Loki creates its (numerous) tables, Loki spits out an easy-to-miss 'info'-level error:
loki_1 | level=info ts=2020-03-13T15:46:00.598118388Z caller=worker.go:72 msg="no address specified, not starting worker"
I'm using
* docker-compose 1.25.4
* Docker version 19.03.5 on macOS Mojave 10.14.6
* Elixir 1.10.2 on Erlang/OTP 22
Any help would be greatly appreciated. As I said, I'm new to most of this; I've deployed Ruby (Hanami and Roda) apps to production in Docker containers, and I'm highly motivated to come up to speed on Elixir and surrounding ecosystem, particularly within a Docker context.
Thanks.
Hi Jeff and thanks for reaching out! It sounds like perhaps you haven't run this step: docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions?
After you run that, you should be able to run docker plugin ls and see something like:
$ docker plugin ls
ID             NAME          DESCRIPTION           ENABLED
50afec8b1dec   loki:latest   Loki Logging Driver   true
If the Docker logging plugin is not installed, Docker cannot forward container logs from STDOUT to the log destination.
Let me know if that fixes it for you!
I should have included that in the original message; it doesn't seem to have any effect.
$ docker plugin ls
ID             NAME          DESCRIPTION           ENABLED
e2a07aa9e148   loki:latest   Loki Logging Driver   true
Edit: Oddly enough, Disqus here displays everything but spaces within a <code> block as fixed, but spaces are proportional. Sorry about that.
As a sanity check, I did
$ docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
$ git clone git@github.com:akoutmos/auto_finder.git
$ cd auto_finder && docker-compose up
and was able to get everything up and running on my macOS machine. You'll get the curl: (52) Empty reply from server error while the Phoenix server is still starting up. Once it's up, navigate to localhost:3000 (the Grafana dashboard) and go to Explore. Select Loki as the data source (also toggling to logs versus metrics) and select the auto_finder app under Log Labels > Compose project. You should see the mix compile output coming through as the application is built.
Thanks! Finally! Not sure what I was doing wrong earlier, which leaves the current status as "working, but cargo-culted"; that's now on the medium-priority queue for figuring out and documenting the figuring-out process.
I love this craft; after 40 years, one thing remains constant: Every time I learn something new, I discover several things I don't know. Finding a balance in that is the eternal struggle.
Thanks again.
hrmm. I'll try and reproduce and get back to you
Hey, thank you for the post! I didn't quite get how LoggerJSON connects with Loki. How is that done?
Hi Thiago and thanks for the kind words. To be clear, LoggerJSON is merely formatting the log statements as JSON; it is not forwarding those messages anywhere (like Loki in this case). The log messages are still being printed to STDOUT. In the setup proposed here, Docker is what ships the STDOUT logs from the Elixir app container to Loki (see the relevant part of the Docker Compose file https://github.com/akoutmos....
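For concreteness, the Docker-to-Loki leg usually looks something like the following in the Compose file. This is a sketch, not the project's exact file: the service name and the Loki port/URL are assumptions, while `driver: loki` and the `loki-url` option come from the Grafana Loki Docker driver:

```yaml
# docker-compose.yml (fragment, sketch)
services:
  auto_finder_app:          # hypothetical service name
    # The loki logging driver (installed via `docker plugin install`)
    # picks up this container's STDOUT and pushes each line to Loki.
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
```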
Hope that helps!
Nice, you inspired me and I applied it in our project. I'm using LoggerJSON with custom formatters and Fluentd to collect and stream the logs to AWS CloudWatch in our Kubernetes cluster. I also added the request params to the Plug formatter https://gist.github.com/thi..., and it is a pull request too https://github.com/Nebo15/l.... Thank you so much!
That's awesome!! I can't see your gist...but I assume it is nothing short of awesome :).
link fixed :D
Hey,
LoggerJSON maintainer here. Thank you for spreading knowledge of my package, I appreciate it <3. Also, I would love to have a pull request which adds a generic logger formatter which removes those GSC specifics. Maybe we can even come up with some reusability between default formatters.
Hey Andrew and thanks for the great package! I'll whip something together and make a PR for you some time this week. Appreciate you stopping by the blog :).
Hi,
Thanks for the amazing post!
If you have time, here are a couple of questions.
- Docker already ships logs with a timestamp, right? I was wondering why we don't use that timestamp instead of creating a new one (perhaps it's one of the limitations of the Docker plugin).
- I was wondering in what situations you needed the Ecto query logs. It seems like a huge volume to me, and I couldn't figure out in what cases to use them. Perhaps it would be the role of the metrics logic to track down slow queries? Or perhaps you identify slow queries with metrics and then go look back into the logs to see what they are?
- It seemed to me like the logic to transform the logs should live outside of our app. I don't know if you have seen https://vector.dev/, but this is exactly its purpose: it takes logs, transforms them, and sends them to several destinations. I thought maybe the app could just output small messages including severity and message content, and then Vector would emit JSON and send it to Loki, or send the plain text to a file. I've never used it and I'm in the process of setting it up, so maybe I'm missing something.