
Frequently Asked Questions

Answers to all the questions you might have about LOGIQ

LOGIQ is a data platform for engineers, built using cloud-native technologies. Using LOGIQ, you can unify data from your engineering teams: aggregate logs, metrics, and traces, connect APIs, databases, and data lakes, and analyze them all through a single interface. LOGIQ uses S3 as the primary data layer and makes the data stored within S3 indexable and searchable. This lets you log as much as your IT ecosystem demands and store data for as long as you wish, without the massive storage tax that other platforms levy for log data retention.

LOGIQ isn't just a log management platform but a complete observability solution that helps you dive deep into the unknowns across your entire technology stack, all in one place.

With our log management capabilities, you can stream and analyze logs across your applications and infrastructure in real time, generate quick and insightful visualizations from log data, and troubleshoot issues at any scale with AI- and ML-driven analytics.

LOGIQ's Prometheus-backed monitoring engine generates insightful metrics that let you monitor the health and performance of all your applications and infrastructure and troubleshoot anomalies at scale.

LOGIQ's built-in SIEM rules engine detects anomalous security events in real time and at virtually any scale. You can match security events in your log data against crowdsourced security rules from the Sigma project for faster threat detection.

Moreover, you can analyze all your APIs, identify patterns and behaviors that help you troubleshoot issues, preempt threats, understand API usage, and confidently ship great APIs.

LOGIQ has been created from the ground up by engineers, for engineers. We understand the challenges of typical ELK stack logging and other conventional logging vendors because we experienced them first-hand: the ELK stack struggles at scale and introduces unwarranted complexity. We developed LOGIQ to address the issues we faced ourselves. Our patented approach of using S3 as the primary storage layer for logging lets you run observability infrastructure at scale with zero storage operations overhead. You can scale out to any volume and any retention period, without limits, at an unprecedentedly low TCO. With LOGIQ you are not just getting a log management platform but also a data convergence platform that lets you aggregate logs, metrics, traces, APIs, databases, and data lakes.

LOGIQ SaaS can be set up and ready for use within a few minutes. LOGIQ PaaS deployments may take longer, depending on where you'd like to deploy it and the data sources you'd like to connect to LOGIQ.

You can retain ingested data for as long as you wish. Since LOGIQ uses S3 as the primary data store, you get the flexibility to log and store as much log data as you’d like to, without any storage tax.

Both LOGIQ SaaS and LOGIQ PaaS have no limit on the number of users. You can onboard as many users as you’d like and manage them using fine-grained RBAC policies directly from the LOGIQ UI. Several of our existing customers have more than 100 active users of LOGIQ each.

For an ingest volume of 100 GB per day, the minimum hardware requirement is 16 vCPU cores and 32 GB of RAM. If you're looking at ingesting higher volumes, get in touch with our Support Team for accurate sizing.

We do not follow the practice of surge pricing. We provision for peak data rates; if those rates are exceeded, the log forwarder slows down and LOGIQ starts rejecting ingestion requests. The log forwarder resumes forwarding once data rates fall back below the threshold.

Our pricing is simple and straightforward. Your daily log ingestion volume is the only factor that affects pricing. For more information on plans and pricing, refer to the Pricing page. 

Absolutely! You can run multiple clusters of LOGIQ that you can easily manage via the LOGIQ UI. 

Yes! LOGIQ PaaS comes with a free-forever Community Edition that includes 4 ingest workers. You can easily deploy the Community Edition on a Kubernetes cluster using the LOGIQ PaaS Helm Chart. You can find detailed instructions on deploying LOGIQ PaaS here: https://docs.logiq.ai/logiq-server/logiq-paas-community-edition 
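
As a rough sketch of what that deployment can look like, the commands below follow the standard Helm workflow. The chart repository URL, chart name, release name, and namespace are placeholders rather than confirmed values; the documentation linked above has the authoritative steps for your environment.

    # Add the LOGIQ Helm chart repository (URL is a placeholder; see the docs linked above)
    helm repo add logiq-repo https://example.com/logiq-helm-charts
    helm repo update

    # Create a namespace and install the Community Edition into it
    # (release name "logiq" and chart reference "logiq-repo/logiq" are assumptions)
    kubectl create namespace logiq
    helm install logiq logiq-repo/logiq --namespace logiq

Once the pods are up, you can start pointing your log forwarders at the new instance.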

At the end of the 14-day free trial period, you’ll be asked whether you’d like to continue with the plan you selected while signing up, or upgrade to another plan that suits your needs better. The payment method you selected during sign-up will be billed as per the plan you select. In case you wish to cancel your subscription, reach out to us and we’ll take care of it. 

For maximum flexibility, LOGIQ can be hosted as a self-service PaaS or as a managed SaaS solution.

Users of solutions that rely on Logstash or Fluentd for ingesting logs, such as the ELK stack, Logz, Logentries, etc., can switch over to LOGIQ by simply reconfiguring Logstash or Fluentd to forward logs to LOGIQ instead. 

Other solutions like Splunk, Sumo Logic, or Datadog use their own proprietary log collectors. If you use any of these solutions, we can switch you over to LOGIQ by setting up Logstash, Fluentd, or our very own LogFlow to handle log forwarding duties.
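
In either case, the change usually comes down to the forwarder's output configuration. As an illustration only, here is a minimal Fluentd match block that forwards all events to a LOGIQ endpoint; the host, port, output plugin type, and file path are assumptions, so check the LOGIQ documentation for the ingest details of your instance.

    # Excerpt from a Fluentd configuration file (path and values are placeholders)
    # Forward all events to a LOGIQ ingest endpoint
    <match **>
      @type forward
      <server>
        host logiq.example.com
        port 24224
      </server>
    </match>

After saving the change, restart the Fluentd service (often fluentd or td-agent) so it takes effect.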

Yes. Containerization was a major consideration when developing LOGIQ. LOGIQ focuses on modern deployment options, with integrations and existing configurations designed for Kubernetes.

LOGIQ currently supports integrations with popular log collection and forwarding agents such as Logstash, Rsyslog, Fluentd, and Fluent Bit. You can also use the Docker syslog driver to connect your containerized applications to LOGIQ. In addition, you can use LogFlow to connect to any data endpoint, whether in the cloud, at the edge, or on-premises, to collect, transform, and ship logs to LOGIQ.
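
For instance, connecting a container through the Docker syslog driver takes only a couple of flags on docker run; the syslog address below is a placeholder for whatever syslog endpoint fronts your LOGIQ deployment, and my-app:latest is a stand-in image name.

    # Ship this container's stdout/stderr through the Docker syslog log driver
    # (replace the address with the syslog endpoint in front of your LOGIQ instance)
    docker run -d \
      --log-driver syslog \
      --log-opt syslog-address=tcp://logiq.example.com:514 \
      --log-opt tag="my-app" \
      my-app:latest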

Absolutely! You can forward your syslog data to a syslog server such as Rsyslog. You can then forward this data from Rsyslog to your LOGIQ instance using omfwd, or using omrelp against a RELP-aware endpoint. Alternatively, you can leverage Logstash's syslog output plugin to funnel syslog data into LOGIQ.
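
As a hedged sketch of the Rsyslog side, the snippet below is a drop-in rule using omfwd over plain TCP, with a commented-out omrelp alternative; the target host, ports, and file path are placeholders that depend on how your LOGIQ endpoint is set up.

    # Example contents of /etc/rsyslog.d/60-logiq.conf (path and values are placeholders)
    # Forward all messages over plain TCP using omfwd
    action(type="omfwd" target="logiq.example.com" port="514" protocol="tcp")

    # To use RELP instead, comment out the omfwd line above and uncomment these two
    # (requires the rsyslog RELP output module, usually packaged as rsyslog-relp):
    # module(load="omrelp")
    # action(type="omrelp" target="logiq.example.com" port="2514")

Restart rsyslog after saving the file so the new rule takes effect.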

Yes. Along with native integrations with notification channels such as email, Slack, PagerDuty, OpsGenie, Mattermost, HipChat, and ChatWork, LOGIQ also supports custom webhooks for forwarding alerts triggered within LOGIQ. You can also work with our Support Team on bespoke integrations with your existing notification system.

Get in touch with us if you have any other questions!
