fighting for truth, justice, and a kick-butt lotus notes experience.

    Who is using Splunk or a similar solution?

     17 June 2015 09:11:14
    This is an unusual post for me: this time I have a bunch of questions and would like to hear your answers and experiences.

    I would like to know whether you are already using solutions like Splunk, GrayLog or similar in your enterprise to get central access to, and views and analytics of, your machine-generated data such as system/application logs and platform statistics.

    The idea behind Splunk:

    Step 1: Collect application/system logs and platform statistics from all of your systems.
    Step 2: Feed them into Splunk and let Splunk index them.


    Step 3: Search and drill down across your indexed log files from a central point.


    Step 4: Use Splunk's big-data analytics to visualize your indexed data, build dashboards, and generate alerts.

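The steps above can be sketched in a few lines of code. This is a minimal, hypothetical example of packaging a log event for Splunk's HTTP Event Collector (HEC): the event/host/sourcetype/index payload keys and the POST endpoint mentioned in the comment follow the documented HEC conventions, but the host name, index name, and log line are made up for illustration.

```python
import json

def build_hec_event(message, host, sourcetype, index="domino_logs"):
    """Build a JSON payload for Splunk's HTTP Event Collector.

    The key layout (event/host/sourcetype/index) follows the documented
    HEC event format; the default index name here is only an example.
    """
    return json.dumps({
        "event": message,
        "host": host,
        "sourcetype": sourcetype,
        "index": index,
    })

# Shipping it would then be a POST to
# https://<splunk-host>:8088/services/collector/event
# with an "Authorization: Splunk <hec-token>" header,
# e.g. via urllib.request.
payload = build_hec_event(
    "06/17/2015 09:11:14 AM  HTTP Server: Started",
    host="domino01", sourcetype="domino:console")
```

In practice you would rarely build these payloads by hand; a Splunk Universal Forwarder or a directory monitor does this for you, but the sketch shows what actually travels over the wire.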

    My questions to you:

    Do you know Splunk?
    Are you already using Splunk, GrayLog or a similar solution in your enterprise?
    How and for what use case do you use Splunk?
    How do you forward Domino, WebSphere, DB2 or your application logs and statistics to Splunk?

    Please add a comment or send me an email.

    I am looking forward to your answers, and thank you very much in advance for participating in the discussion.


    To answer your question before you google it: What the hell is Splunk?

    Splunk is an American multinational corporation based in San Francisco, California, which produces software for searching, monitoring, and analyzing machine-generated big data, via a web-style interface.
    Splunk (the product) captures, indexes and correlates real-time data in a searchable repository from which it can generate graphs, reports, alerts, dashboards and visualizations.
    Splunk has a mission of making machine data accessible across an organization by identifying data patterns, providing metrics, diagnosing problems and providing intelligence for business operations. Splunk is a horizontal technology used for application management, security and compliance, as well as business and web analytics. As of early 2015, Splunk has over 9,000 customers worldwide.
    Splunk is based in San Francisco, with regional operations across EMEA and Asia, and has over 1700 employees.

    Splunk offers products that perform real-time and historical search, as well as reports and statistical analysis. The product can index structured or unstructured textual machine-generated data.


    Source Wikipedia: https://en.wikipedia.org/wiki/Splunk

    If you don't know Splunk - visit the Splunk Website: http://www.splunk.com/en_us/products/splunk-enterprise.html
    If you don't know GrayLog - visit the GrayLog Website: https://www.graylog.com/product/
    Comments

    1. Sean Cull  17.06.2015 12:49:55  Who is using Splunk or a similar solution?

    Interesting questions. I am just looking at Zabbix - { Link } for our needs but am open to ideas.

    2. Carsten Lührmann  17.06.2015 13:18:04  Who is using Splunk or a similar solution?

    I made some tests with Splunk, using it to analyze Domino console logs and Spamassassin log files. I chose to import the log files one by one, but there are other, more "production ready" ways: you can set up Splunk to monitor certain directories for new and changed files and index them automagically, or use an agent software on the servers that forwards the logs to the Splunk server.

    My findings so far:

    - First thing to do: create separate indices for different log types, otherwise you will end up with a wild mixture of data in the main index and will have a hard time cleaning it up

    - You will have to be really good at RegEx to precisely extract the fields you need and to use the full potential of Splunk

    - Think about when you want to extract the field data: at the time the files are indexed, or on demand at search time. At first glance the first option seems like a good idea, because then the field values are contained in the index and searches are faster; however, you lose flexibility if you want to change the extraction criteria later on

    - Once you get used to all the charting stuff you can quickly build quite impressive dashboards from your data

    - You can convert the trial license into a free license anytime, with a limit of 500 MB of index data added per day, which is really cool. Just consider that there is no security / user management / authentication in the free version, so access to the web frontend needs to be secured if there is confidential data in the system
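The RegEx point above can be illustrated with a short sketch. Splunk's field extractions (at index or search time) are essentially named-group regular expressions over raw log lines; the Domino-console-style log line and the field names below are hypothetical, since real console formats vary by server configuration.

```python
import re

# Hypothetical Domino console log line; real formats vary per server.
line = "06/17/2015 01:18:04 PM  HTTP Web Server: Agent error [/mail/user.nsf]"

# A field-extraction regex of the kind Splunk applies at index or
# search time: each named group becomes a searchable field.
pattern = re.compile(
    r"(?P<timestamp>\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2} [AP]M)\s+"
    r"(?P<task>[^:]+):\s+(?P<message>.*)"
)

match = pattern.match(line)
fields = match.groupdict() if match else {}
# fields["task"] is "HTTP Web Server"; fields["message"] holds the rest.
```

Getting these patterns precise is exactly the skill Carsten mentions: a group that is slightly too greedy silently pollutes your extracted fields across millions of events.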

    3. Detlev Poettgen  17.06.2015 13:46:28  Who is using Splunk or a similar solution?

    Hi Sean,

    thank you for your answer.

    Zabbix is more of a classic monitoring solution.

    You can use Splunk to monitor your environment, too. You can use SNMP as one input for Splunk.

    But Splunk's power lies not only in checking SNMP results, but more in searching, analysing and combining different log sources.

    4. Detlev Poettgen  17.06.2015 13:55:31  Who is using Splunk or a similar solution?

    Carsten,

    thank you very much for your detailed answer.

    You should feed Splunk with the Traveler and DB2 logs, too :-)

    I would like to take a look at your Splunk environment, when we meet next time.

    5. Darren Duke  17.06.2015 13:56:36  Who is using Splunk or a similar solution?

    I use Greylog2 right now, but I am starting to look at Logstash and may well switch to that.

    Both are open source, unlike Splunk.

    6. Detlev Poettgen  17.06.2015 13:59:05  Who is using Splunk or a similar solution?

    Hi Darren,

    thank you for your answer.

    Splunk offers you the first 500 MB of logs per day for free.

    LogStash looks interesting. Will take a look at it.

    7. Darren Duke  17.06.2015 15:11:53  Who is using Splunk or a similar solution?

    @6, 500 MB is not a lot once you start having switches or ESXi hosts send their logs ;)

    8. Uwe Brahm  17.06.2015 16:42:40  Who is using Splunk or a similar solution?

    Hi Detlev,

    we are using OMD

    { Link }

    to monitor our Windows Domino servers. There is a nice web GUI called Check_MK Multisite. Completely open source and no restrictions. You need Nagios or Icinga to run it, but that is part of the distro.

    It's Linux only, but a nice package :-)

    Regards,

    Uwe

    9. Florian Vogler  17.06.2015 18:42:04  Who is using Splunk or a similar solution?

    Splunk is very powerful and can be (or rather, will be) also very expensive.

    The more data you throw at it, the more sense it can make (if it is still affordable AND if you have the time to make more from the collected data than just "searching the logs").

    Collecting data is usually not so much the issue, it is making something from the collected data - more so when multiple data sources need to be combined.

    In our experience it takes quite some time to turn collected data into meaningful insights, and analyzing more than just one environment helps tremendously - without benchmarks and experience it is often difficult to tell whether what you're looking at is different, a hint, meaningful, worth exploring further - or just heaps of data points. Again, I'm not talking about "searching the logs" or "charting some stats" here.

    10. Detlev Poettgen  17.06.2015 20:08:37  Who is using Splunk or a similar solution?

    Hi Florian,

    thank you for your response and I totally agree.

    I am hearing from more and more of our customers that they are using Splunk or GrayLog for specific use cases.

    The advantage of Splunk is the community, with ready-to-run apps and dashboards for standard environments (Microsoft platform, Citrix, ...), which will help you get started and get more out of your machine data.
