Connection Tool - Programmatically Remove Unused Datasource Connections, and List All Connections
Managing connections within your Sisense environment can become complex over time, especially when there are many connections and new ones are frequently added to replace earlier datasource connections. In some scenarios unused connections accumulate, cluttering the connection manager UI with entries that are no longer relevant. Although unused connections typically represent minimal direct security risk, it is best practice to maintain a clean, organized list of connections, and in some scenarios it is desirable to remove all unused connections. Sisense prevents the deletion of connections actively used in datasources, safeguarding your dashboards and datasources from disruption. However, inactive or "orphaned" connections remain after datasources are deleted or a connection is replaced, adding unnecessary complexity to the connection manager UI. Connections can be of any type Sisense supports; common types include various SQL connections, Excel files, and CSV files, as well as many data providers, such as BigPanda. This tool can also be used to list all connections, with no automatic deletion of unused connections.
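For reference, the listing half of such a tool can be reproduced with a short REST call. The sketch below is a minimal, hypothetical example, not the tool itself: it assumes a Sisense environment exposing a /api/v2/connections endpoint, a valid API token, and illustrative field names; verify all of these against the REST API reference for your version.

```python
import requests

BASE_URL = "https://your-sisense-host"  # hypothetical host
API_TOKEN = "..."                       # a valid Sisense API token

headers = {"Authorization": f"Bearer {API_TOKEN}"}

# List every connection in the environment; nothing is deleted here.
resp = requests.get(f"{BASE_URL}/api/v2/connections", headers=headers)
resp.raise_for_status()

for conn in resp.json():  # response shape (a JSON array) is an assumption
    # Field names below are illustrative; inspect the raw payload on your version.
    print(conn.get("oid"), conn.get("title"), conn.get("provider"))
```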
How to avoid query caching in live data models

Query caching is a common mechanism used to enhance performance by storing frequently accessed data in memory. However, in scenarios where data models are continuously updated in real time, caching can lead to outdated data being displayed. This article explains how to bypass query caching in Sisense live data models to ensure that up-to-date information is always displayed.
Resolving "Internal Server Error" When Importing Model with Generic JDBC Connector This article addresses the issue of encountering an "Internal Server Error" when importing a model from a local desktop to a production environment using the Generic JDBC connector in Sisense. This error often arises due to compatibility issues with older JDBC frameworks. Here, we provide a solution to resolve this error by updating the JDBC connector and connections. Step-by-Step Guide: Identify the Error: a. When importing the model, if you receive an error message resembling: Unexpected error value: { timestamp: {}, status: 500, error: "Internal Server Error", path: "/api/v1/internal/live_connectors/auth/GenericJDBC… } b. This error could indicate a problem with the outdated Generic JDBC connector. Understand the Cause: The error is due to the Generic JDBC connector using an older framework that is incompatible with the latest Sisense versions. Update the JDBC Connector: Deploy the JDBC connector using the new JDBC framework. Ensure all connections within the cube in the original environment where the model was exported are updated accordingly. Deploying the New Framework: Follow the step-by-step instructions detailed in the Sisense documentation for deploying a custom connector with the new JDBC framework. Access the documentation here: Deploying a Custom Connector - Sisense Documentation. Re-attempt the Model Import: Once the JDBC connector is updated, try importing the model again into the production environment. Conclusion: In conclusion, updating the Generic JDBC connector to align with the new JDBC framework and ensuring the connections are properly configured should resolve the "Internal Server Error" encountered during the import process. Following the above steps will help ensure a smooth and successful model import. References/Related Content: Deploying a Custom Connector - Sisense Documentation Sisense Community Support Forum Introduction to Data Sources Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this please let us know.189Views1like0CommentsResolving "Internal Server Error" When Importing Model with Generic JDBC Connector
Conclusion:
Updating the Generic JDBC connector to align with the new JDBC framework and ensuring the connections are properly configured should resolve the "Internal Server Error" encountered during the import process. Following the steps above will help ensure a smooth and successful model import.

References/Related Content:
- Deploying a Custom Connector - Sisense Documentation
- Sisense Community Support Forum
- Introduction to Data Sources

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this please let us know.

Redirect users to different dashboards based on dashboard filters
This article discusses and shares the full code of a dashboard script that redirects users to a different dashboard ID based on the user's filter selections or the initially loaded filter state. In the particular example shared here, the script checks whether the selected date filter (either a members filter or a from/to filter range) includes an earlier date than the earliest date in the current dashboard's datasource. If so, the script redirects the user to a specified alternate dashboard, preserving any additional URL segments and query parameters. Any other type of filter state, including non-date filters, can be used to determine when the script should redirect, using similar scripts.
Filter BETWEEN two dates in the model

This article explains how to create a filter to return data between two fields in the model. The following cases will be covered:
1. Show only active contracts on a given date;
2. Show the status of a record on a given date.

The first challenge is to show only active contracts on a given date. Let's assume that the table "History" has the columns Id, From, and To, and that a contract is active if the given date is between [From] and [To]. To filter the dates we will create two filters on the dashboard using these fields. End-users can then choose the same date in both filters. This is not a good user experience, because the user has to update two filters. To improve the user experience we will do the following:
1. Lock these filters so that a user cannot update them manually;
2. Use the Input.Date element of the BloX widget to let a user provide a date;
3. Create a custom action to update the filters [From] and [To].

Let's create the action "betweenDates":

```javascript
const { data, widget } = payload;
const { selectedDate, fromDateDim, toDateDim } = data;

if (!(selectedDate && fromDateDim && toDateDim)) {
    return;
}

const fromFilter = widget.dashboard.filters.item(true, fromDateDim);
const toFilter = widget.dashboard.filters.item(true, toDateDim);

if (!(toFilter && fromFilter)) {
    return;
}

setNewDatesInFilters(fromFilter, toFilter, selectedDate);
widget.dashboard.refresh();

function setNewDatesInFilters(fromFilter, toFilter, newDate) {
    toFilter.jaql.filter = { from: newDate };
    fromFilter.jaql.filter = { to: newDate };
    updateFilter(toFilter.jaql);
    updateFilter(fromFilter.jaql, true);
}

function updateFilter(filterObject, save = false) {
    widget.dashboard.filters.update({ jaql: filterObject }, { save, refresh: false });
}
```

Snippet of this action:

```json
{
    "type": "betweenDates",
    "title": "Apply",
    "data": {
        "selectedDate": "",
        "fromDateDim": "",
        "toDateDim": ""
    }
}
```

Let's use this action in the BloX widget. We will also add the Input.Date element. As a result, we get the following BloX JSON:

```json
{
    "style": "",
    "script": "",
    "title": "",
    "showCarousel": true,
    "body": [
        {
            "type": "Container",
            "items": [
                {
                    "type": "Input.Date",
                    "id": "data.selectedDate",
                    "class": ""
                }
            ]
        }
    ],
    "actions": [
        {
            "type": "betweenDates",
            "title": "Apply",
            "data": {
                "selectedDate": "",
                "fromDateDim": "[History.From (Calendar)]",
                "toDateDim": "[History.To (Calendar)]"
            }
        }
    ]
}
```

Now you can choose a date, and after clicking "Apply" both filters are updated with the selected date. The only remaining issue is that the input field is cleared. We will fix this by adding a simple widget script to the BloX widget; this script persists the selected date in the input field:

```javascript
widget.on('ready', () => {
    const toFilter = widget.dashboard.filters.item('[History.To (Calendar)]');
    const selectedDate = $$get(toFilter, "jaql.filter.from");
    if (selectedDate && document.getElementById('data.selectedDate')) {
        document.getElementById('data.selectedDate').value = selectedDate;
    }
});
```

Now, when the dashboard is loaded, the script finds the previously selected date and fills in the input field. When a user changes the date, the newly input date is shown.

Let's solve the second challenge: showing the status of a record on a given date. Assume we have a table "Cases" containing historical information about cases:
- Id - unique identifier of the record;
- CaseId - case identifier;
- Status - status of the case;
- Date - date when the case status was changed.
Sample of the data in this table:

[ALT text: A table displaying case information with the following columns: Id, CaseId, Status, and Date. The table includes rows showing various case entries with status updates such as "New," "Open," "Reopen," and "Closed," along with corresponding timestamps for each case.]

Use case: as an end-user, I want to input a date, and Sisense should return the status of each case on that date. To accomplish this, let's create a new custom table "History", in which we compute the first and last date of each status. This custom table is populated with the following SQL query:

```sql
SELECT CaseId, [Old Status], [New Status], [From], [To]
FROM (
    SELECT
        c.[CaseId],
        c.[Status] "Old status",
        cc.[Status] "New status",
        c.[Date] "From",
        cc.[Date] "To",
        RankCompetitionAsc(c.CaseId, c.[Status], c.Date, cc.Date) "Rank"
    FROM Cases c
    INNER JOIN Cases cc
        ON c.[CaseId] = cc.[CaseId] AND c.date < cc.date
) c
WHERE c.Rank = 1
UNION
(
    SELECT
        c.[CaseId],
        c.[Status] "Old Status",
        '' "New status",
        c.date "From",
        CreateDate(2999, 12, 31) "To"
    FROM Cases c
    INNER JOIN (
        SELECT caseId, MAX(date) "MaxDate"
        FROM cases
        GROUP BY caseId
    ) latestStatus
        ON c.[CaseId] = latestStatus.caseId AND latestStatus.MaxDate = c.date
)
```

This query contains two parts:
1. The first part returns statuses that have already been changed, where we know the status after the change;
2. The second part returns the current status of each record.

As a result of the data transformation, we will have the following structure:

[ALT text: A table displaying status updates with columns labeled "CaseId," "Old Status," "New Status," "From," and "To." There are five rows of data showing various cases, their old and new statuses, and corresponding timestamps for when the status changes occurred. The "From" and "To" columns indicate the date and time of the changes. Some cases are marked as "New," "Open," "Closed," and "Reopen."]

Now we use the approach from the first part of this article: create the dashboard filters, use BloX to input the date, and use the custom action to update the dashboard filters.

Conclusion
In this article we reviewed a possible way to manipulate date filters. You can use a similar approach in embedding cases. Since the filters are controlled from the BloX widget and locked for manual editing, you can even hide these date filters using the following plugin: https://www.sisense.com/marketplace/add-on/hide-filters

Similar Content: Sisense Docs

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this please let us know.
Resolving Missing Parse Option in SQL DB Connection

This article addresses the issue of the missing "Parse" option when connecting to an SQL database in Sisense. Users may encounter this problem when attempting to parse data from SQL DB connections. This guide explains why the "Parse" option might be unavailable and provides steps to resolve the issue.
Exploring RAG: A Sisense-Based Approach with BigQuery and Gemini

Continuing from our previous article, Automated Machine Learning with Sisense Fusion: A Practical Guide, where we discussed how Sisense and AutoML simplify complex machine learning workflows, I am now excited to introduce an advanced feature: a chatbot interface powered by Retrieval-Augmented Generation (RAG) and a large language model (LLM) such as Gemini. This enhancement allows users to interact with their data in natural language through Sisense's native BloX widget. Imagine asking questions like "How many products did I sell per category?" and receiving insightful answers instantly. This not only improves data accessibility but also bridges the gap between technical data repositories and business decision-making.

In this article, I'll cover the first step of enabling this functionality: setting up a seamless integration pipeline between Sisense and Google BigQuery, embedding table and column metadata, and preparing the data for efficient querying.

Important Note:
- Experimental Project: This implementation is an experimental project and not an official Sisense feature. It demonstrates how Sisense's functionality can be used to build a custom Retrieval-Augmented Generation (RAG) pipeline; alternatively, you can reuse this example by importing the provided notebooks.
- Google Ecosystem: The example provided is specific to the Google ecosystem, using services like BigQuery (BQ) and Gemini for query generation and processing.
- Associated Costs: Each prompt sent to the chatbot incurs a cost, which is billed to your Google Cloud Platform (GCP) account. This includes charges for LLM processing and database queries.

Building the Foundation: Data Preparation and Embeddings
To enable natural language interactions with your data, we first need to establish a robust data infrastructure. This includes creating a vector store for embeddings in Google BigQuery (BQ) and preparing metadata about tables and columns. The process is fully automated using a pre-built Sisense custom code notebook, which simplifies embedding generation and management.

Dataset in BigQuery
The journey begins with your dataset in Google BigQuery. The notebook retrieves metadata such as table and column names, descriptions, and schema information from BigQuery's INFORMATION_SCHEMA. If any descriptions are missing, the system automatically generates them, ensuring comprehensive metadata coverage.

Setting Up the Vector Store in BigQuery
The vector store acts as the backbone for similarity searches within the RAG system. Using the custom notebook, the system:
- Creates the vector store: a structured repository that stores embeddings for tables and columns.
- Organizes metadata: ensures all table and column descriptions are structured and accessible.

Generating Table and Column Descriptions
Missing descriptions can hinder data understanding. The notebook includes a Description Agent that automatically generates meaningful text for:
- Tables: contextual descriptions based on schema and usage.
- Columns: descriptions highlighting their role within the dataset.
These enhancements ensure the metadata is both informative and ready for embedding generation.

Creating Embeddings with Vertex AI
To enable semantic search, metadata descriptions are converted into numerical embeddings using Vertex AI's TextEmbeddingModel. This is facilitated by the EmbedderAgent, which:
- Accepts strings or lists of strings (e.g., column descriptions).
- Generates embeddings through Vertex AI.
- Handles both single and batch processing for efficiency.

Efficient Embedding with Chunked Processing
For large datasets, embeddings are generated in chunks using the get_embedding_chunked function. This ensures:
- Scalability: handles datasets of all sizes without performance issues.
- Parallel processing: processes text chunks simultaneously to speed up the workflow.
- Structured outputs: embeddings are returned in a structured DataFrame for storage or analysis.

Storing Embeddings in the Vector Store
The final step is storing these embeddings in the BigQuery vector store. This ensures that:
- Similarity searches are fast and efficient.
- Metadata is always accessible for chatbot interactions.

[ALT text: A screenshot of a data table displaying various columns such as "table_schema," "column_name," "data_type," "source_type," and several other attributes. The table shows sample values and metadata related to a database structure, organized in rows and columns.]
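To make the embedding step more concrete, here is a minimal sketch of what an EmbedderAgent-style helper with chunked processing can look like. This is not the notebook's actual code: the project, region, model version, chunk size, and sample descriptions are assumptions; only the Vertex AI TextEmbeddingModel calls reflect the real SDK.

```python
import pandas as pd
import vertexai
from vertexai.language_models import TextEmbeddingModel

# Assumed GCP project/region and embedding model version.
vertexai.init(project="your-gcp-project", location="us-central1")
model = TextEmbeddingModel.from_pretrained("text-embedding-004")

def get_embedding_chunked(texts, chunk_size=5):
    """Embed description strings in small batches and return a
    DataFrame of (content, embedding) rows ready for the vector store."""
    rows = []
    for i in range(0, len(texts), chunk_size):
        chunk = texts[i:i + chunk_size]
        # One Vertex AI call per chunk; each result exposes a .values vector.
        for text, emb in zip(chunk, model.get_embeddings(chunk)):
            rows.append({"content": text, "embedding": emb.values})
    return pd.DataFrame(rows)

descriptions = [
    "Table sales: one row per order line, joined to products.",
    "Column sales.category: product category of the ordered item.",
]
print(get_embedding_chunked(descriptions).head())
```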
How the Chatbot Interface Works
Now that the foundation for embeddings and metadata storage is set, let's explore the chatbot interface in action. Imagine opening the chatbot in the BloX widget and asking a question about your dataset. Within moments, the chatbot responds in natural language, providing actionable insights. But what exactly happens under the hood to generate this seamless interaction?

RAG Notebook and the Chatbot Workflow
The chatbot operates using a pre-built RAG custom code transformation notebook, which orchestrates the end-to-end process. With this notebook, the entire pipeline, from understanding the query to generating the response, is automated. The notebook uses multiple specialized agents, each responsible for a specific task, ensuring precision and efficiency at every step.

SQL Query Builder Agent (BuildSQLAgent)
This agent specializes in constructing SQL queries for BigQuery. It uses the LLM to analyze the user's natural language query and matches it with table schemas and column details from the vector store. It outputs a fully formed SQL query tailored to the user's dataset and question.

SQL Validation Agent (ValidateSQLAgent)
The ValidateSQLAgent validates the SQL query before execution using a Large Language Model (LLM). Validation ensures the query adheres to essential rules, including:
- The presence of all referenced columns and tables.
- Proper table relationships and join conditions based on the schema.
- Formatting and compliance with BigQuery-specific SQL standards.
Validation occurs during the debugging process, specifically within the DebugSQLAgent, to identify potential errors before attempting a dry run or execution. It provides a detailed JSON response:
- If valid, the process moves to the next step (dry run or execution).
- If invalid, the DebugSQLAgent uses the error details to refine the query iteratively.

SQL Debugging Loop Agent (DebugSQLAgent)
This agent runs the debugging loop to refine queries that fail validation or execution. The process includes:
1. Validation: The query is passed to ValidateSQLAgent to check syntax, schema compliance, and structure. If valid, the query is ready for execution.
2. Dry Run: If validation passes, the query is tested using a dry run via the test_sql_plan_execution function to confirm execution readiness.
3. Execution: Once validation and dry runs succeed, the final query is executed using the retrieve_df function, which returns results as a DataFrame.
4. Iterative Refinement: If the query fails either validation or the dry run, the DebugSQLAgent uses the LLM to troubleshoot and generate an alternative query. The loop repeats until a valid query is generated or the maximum number of debugging rounds is reached.
This agent ensures the final query is:
- Correctly structured and semantically valid.
- Optimized for performance and aligned with the original user intent.
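The dry-run step maps directly onto a BigQuery feature: a query job submitted with dry_run=True is parsed and planned but never executed, so it surfaces syntax and schema errors quickly and at no query cost. Below is a minimal sketch of what test_sql_plan_execution and retrieve_df might look like; the function bodies and the project/table names are assumptions, while the google-cloud-bigquery calls are the real client API.

```python
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")  # assumed project

def test_sql_plan_execution(sql: str):
    """Dry-run a generated SQL statement; return (ok, message)."""
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    try:
        # The job is parsed and planned only; it never runs or bills query slots.
        job = client.query(sql, job_config=job_config)
        return True, f"OK, would process {job.total_bytes_processed} bytes"
    except Exception as exc:
        # This error text is what a DebugSQLAgent-style loop feeds back to the LLM.
        return False, str(exc)

def retrieve_df(sql: str):
    """Execute the validated query and return the results as a DataFrame."""
    return client.query(sql).to_dataframe()

ok, message = test_sql_plan_execution(
    "SELECT category, COUNT(*) FROM `your-gcp-project.sales.orders` GROUP BY category"
)
print(ok, message)
```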
Response Agent (ResponseAgent)
This agent translates the SQL query results into natural language. It bridges the gap between technical SQL outputs and user-friendly communication. By combining the query results with the user's original question, it crafts a clear and relevant response.

How the Workflow Executes
Here is the step-by-step process for generating a response:
1. User Query Embedding: The EmbedderAgent converts the user's natural language question into a numerical embedding. Using BigQuery native vector search, the system retrieves similar embeddings from the vector store created in the first phase.
2. Schema and Content Retrieval: Based on the retrieved embeddings, the system fetches relevant table and column schema details from the vector store.
3. SQL Query Generation: The BuildSQLAgent uses the retrieved schema details to construct an SQL query that aligns with the user's question.
4. SQL Validation and Execution: The ValidateSQLAgent checks the generated SQL for accuracy and potential errors. If the SQL passes validation, it is executed against the BigQuery database.
5. Debugging (if needed): If the query fails or generates an error, the DebugSQLAgent refines it iteratively until a valid query is produced.
6. Response Generation: The ResponseAgent uses the query results and the user's original prompt to generate a natural language response. If the system fails to generate a valid response, it communicates the issue to the user.

Conclusion
By combining the foundational embedding process with this RAG-powered workflow, the chatbot transforms how users interact with their data. From seamless SQL query generation to delivering natural language responses, the system exemplifies the power of Sisense Fusion and advanced AI tools to simplify data-driven decision-making.

As always, please reach out to your Customer Success Manager (CSM) if you would like to implement this in your own environment.
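As a closing, appendix-style illustration of workflow step 1, the similarity lookup against the vector store can be done entirely inside BigQuery with its native VECTOR_SEARCH function. The sketch below is an assumed example, not the notebook's code: the project, the metadata_embeddings table, and its content/embedding columns are hypothetical placeholders, while VECTOR_SEARCH itself and the client calls are real BigQuery features.

```python
import vertexai
from google.cloud import bigquery
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="your-gcp-project", location="us-central1")  # assumed
client = bigquery.Client(project="your-gcp-project")               # assumed
model = TextEmbeddingModel.from_pretrained("text-embedding-004")   # assumed

question = "How many products did I sell per category?"
question_embedding = model.get_embeddings([question])[0].values

# metadata_embeddings, content, and embedding are hypothetical names for the
# vector store table and its columns.
sql = """
SELECT base.content, distance
FROM VECTOR_SEARCH(
    TABLE `your-gcp-project.rag.metadata_embeddings`, 'embedding',
    (SELECT @question_embedding AS embedding),
    top_k => 5, distance_type => 'COSINE')
"""
job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(query_parameters=[
        bigquery.ArrayQueryParameter("question_embedding", "FLOAT64", question_embedding)
    ]),
)
for row in job:  # the five most similar schema descriptions
    print(round(row.distance, 3), row.content)
```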