In Sisense version 7.2 and beyond, we replaced many components of the IIS web hosting with a Node.js framework and made a number of structural changes to the software. This allowed us to organize the key components of the application into microservices. Each microservice performs an essential piece of functionality in the software, and each one has its own logging file path associated with it.
While this certainly isn't an all-inclusive guide, this article is intended to shed light on the logging processes of the software.
To interpret the logs for a particular issue, a user first needs to narrow down exactly which service is affected. The best starting point is the services list in the following article:
Identifying Sisense Services

Once we have identified where the issue is, most logs for each individual service can be found under the corresponding file path in:

C:\ProgramData\Sisense\application-logs
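As a quick orientation step, a short script can list what is in this directory. This is only a sketch, and it assumes the default install path and that each microservice writes to its own subfolder under application-logs (the typical layout); adjust it if your environment differs.

import os

# Sketch: list the per-service log folders under the main application-logs
# directory and how many .log files each contains. Assumes the default
# install path and one subfolder per microservice.
LOG_ROOT = r"C:\ProgramData\Sisense\application-logs"

for entry in sorted(os.listdir(LOG_ROOT)):
    path = os.path.join(LOG_ROOT, entry)
    if os.path.isdir(path):
        log_files = [f for f in os.listdir(path) if f.endswith(".log")]
        print(f"{entry}: {len(log_files)} log file(s)")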

The main exceptions, logs that will not be found under the individual service folders, are the following (a quick sketch for checking these paths appears after the list):

Build logs (which detail each individual build's errors and successes, and often contain better error reporting than the ElastiCube Manager GUI): C:\ProgramData\Sisense\PrismServer\ElastiCubeProcessLogs\<YourElastiCubeName>

Prism Server logs: C:\ProgramData\Sisense\PrismServer\PrismServerLogs

Sisense Web log: C:\ProgramData\Sisense\PrismWeb\Logs

MongoDB: C:\ProgramData\Sisense\PrismWeb\Repository\DB

iisnode (web server) logs: C:\Program Files\Sisense\PrismWeb\vnext\iisnode
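As a quick sanity check, a script like the following can report which of these locations actually exist on a given machine. It is only a sketch: the paths mirror the defaults listed above, and the ElastiCube folder name is a placeholder you would substitute with your own cube's name.

import os

# Sketch: report which of the standard Sisense log locations exist on this
# machine. Paths assume a default installation; replace <YourElastiCubeName>
# with the name of your own ElastiCube.
LOG_PATHS = {
    "Per-service logs": r"C:\ProgramData\Sisense\application-logs",
    "Prism Server logs": r"C:\ProgramData\Sisense\PrismServer\PrismServerLogs",
    "Sisense Web logs": r"C:\ProgramData\Sisense\PrismWeb\Logs",
    "ElastiCube build logs": r"C:\ProgramData\Sisense\PrismServer\ElastiCubeProcessLogs\<YourElastiCubeName>",
    "MongoDB repository": r"C:\ProgramData\Sisense\PrismWeb\Repository\DB",
    "iisnode (web server) logs": r"C:\Program Files\Sisense\PrismWeb\vnext\iisnode",
}

for name, path in LOG_PATHS.items():
    status = "found" if os.path.isdir(path) else "missing"
    print(f"{name}: {status} ({path})")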


Many log locations include a dedicated "error" (or "stderr") log alongside a general log whose name does not include "errors" or "stderr". This separation simplifies troubleshooting by filtering all of the errors into their own file. It is not applied consistently to every log, however, and it can often be useful to read the general log as well to see the application's process flow leading up to, during, and after the point when an issue was noticed.
All of the logs record data at the individual-instruction level: while they can be rather large, each entry is associated with a particular timestamp, so it is important to know the date and time the error occurred before you start. Once you have narrowed down the individual service and the timestamp of the error, you can go to that section of the logs and begin digging. Because many operations can occur within a very small window of time, most users find it helpful to search for the terms "fail", "failure", or "error" to narrow down which lines are relevant.
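As an illustration of that workflow, here is a minimal sketch that scans a single log file for lines containing those keywords within a given time window. The file name, subfolder, and timestamp format are assumptions; adjust them to match the log you are actually reading.

import re

# Sketch: pull error-related lines from one service log around the time an
# issue was reported. The path and timestamp prefix are placeholders; match
# them to your own log file and its time format.
LOG_FILE = r"C:\ProgramData\Sisense\application-logs\api-gateway\api-gateway.log"
TIME_PREFIX = "2024-02-15 10:"          # rough window when the issue occurred
KEYWORDS = re.compile(r"fail|failure|error", re.IGNORECASE)

with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        if TIME_PREFIX in line and KEYWORDS.search(line):
            print(line.rstrip())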

Once a user has found the error they believe is responsible for the issue they are having, there are a few key elements to look for. Some errors are simple to understand, such as a build failing because more space is needed on the hard drive. Others can be more challenging, since they are often written at the software-developer level and may include a stack trace showing exactly where in the program code the issue occurred. For errors of this latter type, it is best to document the error and open a service ticket with the support team for further analysis.
Users should also note that these microservices often call API endpoints and other components belonging to other microservices, which may require further troubleshooting in that other service's logs. For example, a user may notice that an API call in the API-Gateway logs targets an endpoint containing the term "Galaxy" and returns a 500 status code (an internal server error, which can have many different causes and is considered a very generic error). In that case, the user should go to the galaxy logs for that same timestamp, find the call being made, and determine more specifically what caused it to fail.
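To follow such a call across services, the same keyword-and-timestamp search can be repeated over every per-service folder at once. The sketch below is only an illustration under the same assumptions as before (default log root, one subfolder per service, and a simple timestamp prefix for the window of interest).

import os
import re

# Sketch: correlate an error across microservices by scanning every service
# folder under application-logs for the same time window and keywords.
LOG_ROOT = r"C:\ProgramData\Sisense\application-logs"
TIME_PREFIX = "2024-02-15 10:"          # window taken from the API-Gateway log
PATTERN = re.compile(r"galaxy|500|fail|error", re.IGNORECASE)

for service in sorted(os.listdir(LOG_ROOT)):
    service_dir = os.path.join(LOG_ROOT, service)
    if not os.path.isdir(service_dir):
        continue
    for file_name in os.listdir(service_dir):
        if not file_name.endswith(".log"):
            continue
        file_path = os.path.join(service_dir, file_name)
        with open(file_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                if TIME_PREFIX in line and PATTERN.search(line):
                    print(f"[{service}/{file_name}] {line.rstrip()}")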