Connection Tool - Programmatically Remove Unused Datasource Connections and List All Connections
Managing connections within a Sisense environment can become complex over time, particularly when there are many connections and new ones are frequently added to replace earlier datasource connections. Unused connections can accumulate, cluttering the connection manager UI with entries that are no longer relevant. Although unused connections typically represent minimal direct security risk, it is considered best practice to maintain a clean, organized list of connections, and in some scenarios it is desirable to remove all unused connections.

Sisense prevents the deletion of connections actively used in datasources, safeguarding your dashboards and datasources from disruption. However, inactive or "orphaned" connections remain after datasources are deleted or a connection is replaced, adding unnecessary complexity to the connection manager UI. Connections can be of any type Sisense supports; common types include various SQL connections, Excel files, and CSV files, as well as many data providers such as BigPanda. The tool can also be used simply to list all connections, with no automatic deletion of unused connections.
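To illustrate the listing side of such a tool, here is a minimal Python sketch. It assumes an admin REST API token and the v2 connections endpoint available in recent Sisense Linux releases; the endpoint path and response shape are assumptions, so verify them against your version's REST API reference, and use the actual tool attached to this article for the deletion logic.

```python
# Minimal sketch: inventory all datasource connections on a Sisense Linux instance.
# Assumption: the /api/v2/connections endpoint (recent Linux releases) and a
# Bearer-token admin credential; verify both against your version's REST reference.
import requests

SISENSE_URL = "https://your-sisense-server"  # hypothetical URL - replace
API_TOKEN = "YOUR_API_TOKEN"                 # hypothetical token - replace

headers = {"Authorization": f"Bearer {API_TOKEN}"}

resp = requests.get(f"{SISENSE_URL}/api/v2/connections", headers=headers)
resp.raise_for_status()
payload = resp.json()

# Depending on the version, the payload may be a bare list or wrapped in an object.
connections = payload if isinstance(payload, list) else payload.get("connections", [])

# Print a simple inventory. Determining which connections are truly unused requires
# cross-referencing the data models that reference them - that is the part the
# Connection Tool automates before it issues any deletions.
for conn in connections:
    print(conn.get("oid"), "|", conn.get("name"), "|", conn.get("provider"))
```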
How to configure Data Groups

Introduction

This article will guide you on how to configure Data Groups parameters based on your current performance and business requirements.

What are "Data Groups"?

The "Data Groups" administrative feature allows configuring Quality-of-Service (QoS) for Sisense instances and limiting or reserving the resources of individual data models. By utilizing data groups, you ensure that available resources are controlled and managed to support your business needs.

Why Should One Configure "Data Groups"?

The main benefits of configuring "Data Groups" are:
- Controlling which cluster nodes are used for building and querying
- Limiting the amount of RAM and CPU cores a data model uses
- Configuring indexing, parallel processing, and caching behavior of a data model
- Mitigating out-of-memory issues caused by lack of resources
- Preventing "Safe Mode" exceptions caused by lack of resources

Read this article to learn about the different "Data Groups" parameters.

"Data Group" Use Cases

The following two scenarios are good examples of using "Data Groups":

Use case 1
- Scenario: A customer embeds Sisense (OEM use case) and has many tenants that can design dashboards (a.k.a. self-service BI). Each tenant has a dedicated ElastiCube data model.
- Challenge: A single tenant is capable of creating a large number of dashboards, causing the underlying data model to utilize a significant portion of the available CPU/RAM resources. The excessive use of resources by a single tenant may lead to resource depletion and degrade the user experience for that tenant and others.
- Solution: Resource governance – set up resource utilization limitations using "Data Groups". Doing so limits each tenant's usable resources and prevents tenants from affecting each other.

Use case 2
- Scenario: A customer has many ElastiCube data models created by various departments of the organization. Some of the data models generate high-priority strategic dashboards (used by C-level managers). Other dashboards are prioritized as well (e.g., operational dashboards vs. dashboards used for testing).
- Challenge: C-level dashboards must have the highest priority and should always be available. Other dashboards should still be operational and behave according to their priority. In case of conflict or a temporary lack of resources, a critical ElastiCube may run out of memory or trigger a "Safe Mode" event.
- Solution: Resource governance – setting up priority-based Data Groups allocates more resources to business-critical data models while limiting the less critical ones.

Data Groups Resource Governance Strategies

- Limiting a data model's resources: Governance rules can limit the resources used by a single data model or a group of data models by configuring a "Maximal" amount of allocated CPU/RAM. Note that the data model will be held to the configured restrictions even when additional resources are available.
- Pre-allocating a data model's resources: Governance rules can pre-allocate the resources used by a single data model or a group of data models by configuring a "Reserved" amount of allocated CPU/RAM. Note that other data models will then be limited to the remaining server resources even when a pre-allocated resource is idle.
- Prioritizing data models: Governance rules can prioritize certain data models by allocating different amounts of resources to different data models.
High-priority data models are allocated more resources than lower-priority ones. You may also choose not to limit a data model's resources (no Data Group configuration); however, this reintroduces the initial risk of resource depletion.

How To Configure Data Groups?

Configuring "Data Groups" requires expertise and attention to detail. A dedicated tool is provided to assist you with this task. Follow the directions below to acquire the tool and implement Data Groups.

Prerequisite

To base your calculations and decide how to configure data groups, you need data monitoring enabled. To learn more about Sisense monitoring and the data reported, read the following article: https://support.sisense.com/kb/en/article/enable-sending-sisense-monitoring-logs

Step #1 – Download the "Data Groups Configuration Tool"

The data groups tool (an Excel document) is attached to this article. Download a local copy to your computer.

Step #2 – Access Logz.io

To access the Logz.io platform:
- Log in to the Logz.io website
- Navigate to the "SA - Linux - Data Groups" dashboard
- Set the timeframe filter to a 21-day timeframe

Step #3 – Fill out the "Data Groups Configuration Tool" Document

The "Elasticubes" sheet holds the data used to make decisions about the different groups:

| Field | Description | Comment |
|---|---|---|
| CubeName | The ElastiCube name | |
| Size GB | The ElastiCube's size on disk | Taken from the "Data" tab ElastiCube list |
| Estimated size in memory | The estimated maximal ElastiCube size in memory | Auto-calculated ([Size GB] x 2.5) |
| Peak query memory consumption (GB) | The actual maximal ElastiCube size in memory (should be smaller than the estimated maximal size) | Taken from the Logz.io dashboard ("Max Query Memory" field) |
| Build frequency | The frequency of the ETL Sisense performs | Taken from Sisense's build scheduler |
| Average of concurrent queries | Average concurrent queries sent from dashboards to the ElastiCube | Taken from the Logz.io dashboard: find the "Max concurrent queries per Cube" widget, download the CSV of its data, and calculate the average of the "Max concurrent queries per Cube" column |
| Business Criticality | A business measure that determines the QoS of the ElastiCube | True (High) / False (Low) |
| Data security | Whether "Data Security" is applied to this ElastiCube | Helps determine whether the "ElastiCube Query Recycler" parameter will improve performance (see the explanation under "ElastiCube Query Recycler") |
| Data Group | The Data Group this ElastiCube should be a part of | Fill in the column by classifying your ElastiCubes according to common characteristics, both in business terms and in terms of resource consumption |

Step #4 – Define your Data Groups

Using the information you have collected and the explanations in this article, define the data groups you wish to use and fill in the "Required data groups" sheet. This sheet provides the name and configuration for each data group. Use it to describe the Data Groups that meet your business needs, and use the "Intent" column to describe each group's purpose.
The configuration in this sheet will later be used to configure the Data Groups in the Sisense Admin console:

| Field | Description | Comment |
|---|---|---|
| Group name | The Data Group's name | |
| Intent | The Data Group's description (from a business point of view) | |
| Instances | The number of query instances (pods) created in the Sisense cluster | Useful, but increasing this value increases the ElastiCube's memory footprint. Only consider raising it if CPU usage reaches 100%; high CPU consumption is mostly caused by high user concurrency |
| Build on Local Storage | Whether builds use local storage (multi-node deployments) | Using local storage can decrease the time required to query the cube after the first build on local storage |
| Simultaneous Query Executions | The maximum number of queries processed simultaneously | If your widgets contain heavy formulas, it is worth reducing this number to lower the pressure. Used when experiencing out-of-memory (OOM) issues. Parallelization is a trade-off between memory usage and query performance; 8 is the optimal number of queries |
| % Concurrent Cores per Query | Related to Mitosis and parallelism technology | To minimize the risk of OOM, set this value to the equivalent of 8 cores (e.g., for 32 cores set 25; for 64 cores set 14); the value should be an even number. It can go up to a maximum equivalent to half the total cores; in practice, treat the default as the recommended maximum and adjust downward only. Change it when experiencing OOM issues |
| Max Cores | The maximum number of cores the query (qry) pod is allowed to use | Use to limit less critical groups |
| Query: Max RAM (MB) | The maximum RAM the query (qry) pod, which serves dashboards, is allowed to consume, per ElastiCube in the group, and per query instance if Instances was increased | Take the value from the "MAX of Peak query memory consumption (GB)" column in the "Summary of data groups" sheet. Note that consuming 85% of the pod's (or the overall server's) RAM triggers a Safe Mode exception: dashboard users see a Safe Mode error, the qry pod is deleted and restarted, and the next dashboard refresh brings the data again |

Step #5 – Verify

The "Summary of Data Groups" sheet includes a pivot chart that calculates the maximal memory consumption for each data group. This value correlates to the "Query: Max RAM (MB)" configuration. Take the maximal value of "Peak query memory consumption (GB)" from the "Summary of data groups" tab and multiply it by 1.20 to leave headroom that avoids Safe Mode. (A short script automating this calculation appears at the end of this article.)

Step #6 – Process the results and configure Sisense Data Groups

Prerequisite: read this article on how to create data groups (5-minute read): https://documentation.sisense.com/docs/creating-data-groups

In the Sisense web application, for each new data group:
- Navigate to "Admin" --> "Data Groups" and click the "+ Data Group" button.
- Fill out the information from the "Required data groups" tab.

Step #7 – Monitor the Sisense Instance

The final step is to follow the information in the Sisense Monitor and make sure performance is improving. Review the documentation below for more details regarding monitoring: https://docs.sisense.com/main/SisenseLinux/monitoring-sisense-on-linux.htm

Good luck!
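As promised in Step #5, the sketch below performs the same headroom calculation in pandas instead of the workbook's pivot chart. The sheet and column names mirror the configuration tool described above but should be treated as assumptions; adjust them to match your copy of the workbook.

```python
# Sketch: recommend "Query: Max RAM (MB)" per data group from the workbook's
# "Elasticubes" sheet (sheet/column names are assumptions - match your copy).
import pandas as pd

cubes = pd.read_excel("data_groups_configuration_tool.xlsx", sheet_name="Elasticubes")

# Max "Peak query memory consumption (GB)" per group, as in the summary pivot.
peak_gb = cubes.groupby("Data Group")["Peak query memory consumption (GB)"].max()

# Add the 20% headroom from Step #5 to avoid Safe Mode, then convert GB to MB.
max_ram_mb = (peak_gb * 1.20 * 1024).round(0)

print(max_ram_mb.rename("Query: Max RAM (MB)"))
```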
Ensuring Elasticube is in Sync with Data Source (Deletion & Updates in Accumulate Build)

Capturing accurate business transactions is imperative to understanding business operations. If data is constantly changing, such as being deleted or updated, how do you ensure your ElastiCube is in sync with your data source? Depending on the use case, "Replace All" data may be an option, but it may not be the most efficient one if you have a huge amount of data. "Accumulated By" is another option, but it can result in duplicated records, with deleted records still being shown in the ElastiCube. Given these limitations, another path to consider is a custom code solution.

Custom Code Solution

Leverage the demo below to learn more. In the demo, we use an Oracle database with a transaction table. The transaction table has an updated_date column that captures the latest data changes, and an audit table stores deleted transaction IDs with timestamps via a trigger. Please note that all tables in the demo contain dummy records. You can find the DDL for the table and trigger here.

Using custom code to create new tables in the ElastiCube allows the ElastiCube to stay in sync with your data source table. The new table shows only the latest records and excludes any records deleted via the trigger; the audit table captures the deleted records. (NOTE: adding triggers can impact database performance, as every deletion requires a write operation.) Additionally, this custom code solution enables change data capture (CDC), answering business questions such as at what intervals a record was updated and/or deleted, via the audit table.

Let's begin by building an ElastiCube. Use updated_date as "Accumulated By" for both tables and build the ElastiCube by selecting BY TABLE. After this, we will add custom code containing all the logic required to obtain the latest records and remove any deleted records (a simplified sketch of this logic appears after the steps below).

Step 1: Create the tables, triggers, and records in the source database. Sample code is available here.

Step 2: Create the ElastiCube by adding both tables, selecting the deleted_on column as "Accumulated By" for the audit table and updated_on for the fact_sales table. Then build using "BY TABLE". (Note: the audit table will be empty, since no records have been deleted yet.)

Step 3: Create the Custom Code table:
- If not already enabled, go to Admin > Feature Management > Custom Code and save.
- Select "Create your own Notebook".
- In the input parameters, add both base tables, making sure you select the correct parameters, and add columns for the output schema. Then click "Open code editor" to open a Jupyter notebook in a new tab where you can add the logic.
- Once the Jupyter notebook is open, clear all the cells and copy-paste the code from the .ipynb file in the repo. Make changes to the notebook based on your use case; for example, if the ElastiCube and table names are different, provide them in cell 2. For this example, date_change_threshold is the parameter we choose to restrict.
- Save the notebook, return to the previous tab, select the table for the input parameters, and click Done to create the new Custom Code table in the ElastiCube.
- First run "Build Changes Only", which creates the data for the Custom Code table.

Step 4: Insert a new record, update an old record, and/or delete a record from the fact_sales table in your data source. The update trigger will change the Updated_On value for id 9, while the delete trigger will create a new record in the audit table.

Step 5: Build the ElastiCube by selecting "By Table".
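For reference, the heart of the notebook logic from Step 3 might look like the sketch below: keep only the most recent version of each transaction, then drop anything recorded in the audit table. The DataFrame and column names follow the demo schema but are assumptions; the .ipynb file in the repo remains the authoritative version.

```python
# Sketch of the Custom Code notebook's core logic (names follow the demo schema;
# treat them as assumptions). fact_sales holds the accumulated history; audit
# holds the IDs of deleted records captured by the database trigger.
import pandas as pd

def latest_in_sync(fact_sales: pd.DataFrame, audit: pd.DataFrame) -> pd.DataFrame:
    # Keep only the most recent version of each transaction id.
    latest = (fact_sales
              .sort_values("updated_on")
              .drop_duplicates(subset="id", keep="last"))
    # Remove records whose deletion was captured in the audit table.
    return latest[~latest["id"].isin(audit["id"])]

# In the notebook, the input-parameter tables arrive as DataFrames, e.g.:
# df_result = latest_in_sync(df_fact_sales, df_audit)
```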
Now the ElastiCube contains the fact_sales table, which holds the full history of record changes, and the Custom Code table, which holds the latest records in sync with the data source table.

Example of the table capturing all data changes: (screenshot in the original article)

Example of the table in sync with the data source: (screenshot in the original article)

These steps should result in accurate data in your ElastiCube, leading to smarter business decisions. Please note that this is one possible workaround for users with similar use cases. Keep in mind that editing the Jupyter code is another possibility; for example, you can try using a Dask DataFrame instead of pandas and see which performs better with your data.

Disclaimer: Please note that this blog post contains one possible custom workaround solution for users with similar use cases. We cannot guarantee that the custom code solution described in this post will work in every scenario or with every Sisense software version. As such, we strongly advise users to test solutions in their own environment prior to deploying them to ensure that the solutions proffered function as desired in their environment. For the avoidance of doubt, the content of this blog post is provided to you "as-is" and without warranty of any kind, express, implied or otherwise, including without limitation any warranty of security and/or fitness for a particular purpose. The workaround solution described in this post incorporates custom coding which is outside the Sisense product development environment and is therefore not covered by Sisense warranty and support services.
UserReplaceTool - Automating Dashboard Ownership Transfers - Useful for Deleting User Accounts

Managing and deleting user accounts in Sisense can create manual work when users leave an organization or change roles. A frequent issue is reassigning dashboard ownership to avoid losing dashboards when a user account is deleted, since deleting a Sisense user deletes all dashboards owned by that user. The UserReplaceTool addresses this by automating the transfer of ownership for all dashboards owned by a given user, ensuring continuity and data integrity.

UserReplaceTool is a Python-based solution designed to seamlessly transfer ownership of dashboards and data models from one user to another in Sisense. The tool simplifies and automates this process, allowing organizations to reassign dashboard ownership without manual steps or the risk of losing dashboards and widgets. Everything is accomplished through Sisense REST API endpoint requests.
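To give a sense of the API calls involved, here is a minimal sketch. The admin dashboard listing and change-owner endpoints shown are assumptions based on patterns seen in community scripts; verify them against the REST API reference for your Sisense version, and prefer the actual UserReplaceTool for production use.

```python
# Sketch: reassign every dashboard owned by one user to another user.
# Endpoint paths and body fields are assumptions - verify for your version.
import requests

SISENSE_URL = "https://your-sisense-server"           # hypothetical URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # hypothetical admin token

def transfer_dashboards(old_owner_id: str, new_owner_id: str) -> None:
    # Admin-level listing of dashboards, filtered to the departing user.
    resp = requests.get(
        f"{SISENSE_URL}/api/v1/dashboards/admin",
        params={"ownerId": old_owner_id},
        headers=HEADERS,
    )
    resp.raise_for_status()
    for dash in resp.json():
        # Reassign ownership; "edit" keeps the previous owner as an editor.
        requests.post(
            f"{SISENSE_URL}/api/v1/dashboards/{dash['oid']}/admin/change_owner",
            json={"ownerId": new_owner_id, "originalOwnerRule": "edit"},
            headers=HEADERS,
        ).raise_for_status()

# Hypothetical usage: transfer_dashboards("<old-user-oid>", "<new-user-oid>")
```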
Choosing the Right Data Model

This post has become outdated. You can find guidance on choosing a data model on our documentation site here: https://docs.sisense.com/main/SisenseLinux/choosing-the-right-data-model.htm

Introduction

Customers often run into the question of which data model they should use (an ElastiCube, a Live model, or a Build-to-Destination). The following article presents some of the aspects you should consider when choosing between them. Sisense recommends that you discuss your needs and requirements with Sisense's technical team during the Jumpstart process, so the result will best meet your business expectations.

Definitions

The ElastiCube Data Model

Importing data into an ElastiCube data model allows the customer to pull data from multiple data sources, on demand or at a scheduled time, and create a single source of truth inside Sisense. The imported data can then be transformed and aggregated to meet your business needs. Once imported, the data snapshot is used to generate analytical information. The process of importing the data, known as a "build", includes the following steps:
- Extract the data: query the different data source(s) for data.
- Load the data: write the extracted data to Sisense (the local MonetDB).
- Transform the data: transform the local MonetDB data (using SQL queries).

To read more about ElastiCubes, see Introducing ElastiCubes.

The Live Data Model

Using a Live data model does not require importing data; only the data's schema needs to be defined. Once configured, analytical information required by the user is queried directly against the backend data source. To read more about Live models, see Introducing Live Models.

Determining Factors

Refresh Rate

One of the most fundamental aspects of determining your data model is your data's refresh rate, i.e., the age of the data in your dashboards:
- For Live models, the data displayed on your dashboards is near real-time, as every query is passed directly to the backend database. A good example of using a Live model (due to refresh rate requirements) is a dashboard that shows stock prices.
- For ElastiCubes, the data displayed on your dashboard is current as of the last successful build event. Every query is executed against the local database. A good example of using an ElastiCube (due to refresh rate requirements) is a dashboard that shows historical stock prices; in this case, a daily ETL process provides results that are good enough.

To make a choice based on this factor, answer the following questions:
- How frequently do I need to pull new data from the database?
- Do all my widgets require the same data refresh frequency?
- How long does an entire ETL process take?

Data Transformation Options

The ETL process includes a "transformation" phase, which usually covers:
- Migrating the data tables into a dim-fact schema
- Enriching your data
- Pre-aggregating the data to meet your business needs

The amount of data transformation done in Sisense helps determine the suitable data model:
- For Live models, Sisense allows minimal to no data transformation. Data is not imported before a query is issued from the front end, so it cannot be pre-conditioned or pre-aggregated. Most data sources used by Live models are data warehouses that can perform all data preparation themselves.
- For ElastiCubes, data is imported before a query is issued from the front end, so it may be pre-conditioned and pre-aggregated.
A user may customize the data model to optimally answer their business questions.

To make a choice based on this factor, answer the following questions:
- Is my data in a fact-dim schema?
- Does my data require enrichment or pre-conditioning?
- Can my data be pre-aggregated?

Operational Database Load

Your operational databases do more than serve your analytical system, so any application loading them should be closely examined:
- For Live models, Sisense constantly queries information from your operational databases and feeds it into your dashboard widgets. This occurs every time a user loads a dashboard.
- For ElastiCubes, Sisense stresses your operational databases heavily during an ETL process, while reading all tables.

To make a choice based on this factor, answer the following questions:
- Does the analytical system stress my operational database(s)?
- Can the query load be avoided by using a database replica?

Operational Database Availability

Your operational databases' availability is critical for collecting information for your analytical system:
- For Live models, all queries are redirected to your data sources. If a data source is not available, widgets will generate errors and present no data.
- For ElastiCubes, data source availability is critical only during the ETL process. If the data source is not available, the data in your widgets will still be available, just not necessarily up to date.

To make a choice based on this factor, answer the following questions:
- How frequently are analytical data sources offline?
- How critical is my analytical system? Is being offline (showing out-of-date information) acceptable?

Additional Vendor Costs

Various database vendors use a chargeback charging model, meaning that you are charged by the amount of data you pull from the database or the computational power required to process your data:
- For Live models, every time a user loads a dashboard, each widget triggers (at least) one database query. A combination of a chargeback charging model and a large user load may result in high costs.
- For ElastiCubes, every time the user triggers an ETL process, a large amount of data is queried from the database and loaded into Sisense.

To make a choice based on this factor, answer the following questions:
- What is the number of users using my dashboards, and what is my build frequency?
- Which data model will result in lower costs? What is the tipping point?
- Am I willing to pay more for real-time data?

Database Size

For ElastiCubes, please refer to these documents:
- Introducing ElastiCubes
- Minimum Requirements for Sisense in Linux Environments

For Live models, there is no limitation, as only the data's schema is imported to Sisense, not the data itself.

To make a choice based on this factor, answer the following questions:
- What is the amount of data I need in my data model?
- How much history do I need to store?
- Can I reduce the amount of data (e.g., by trimming historical data or reducing the number of columns)?

Query Performance

Query performance depends on the underlying work required to fetch data and process it. Although every widget generates a query, the underlying data model determines the work necessary to execute it.

For ElastiCubes, every query is handled inside Sisense:
- The client-side widget sends a JAQL query to the Sisense analytical system.
- The query is translated into SQL syntax and run against an internal database.
- The query result is transformed back into JAQL syntax and returned to the client side.
For Live models, every query is forwarded to an external database and then processed internally:
- The client-side widget sends a JAQL query to the Sisense analytical system.
- The query is translated into SQL syntax and run against an external database.
- Sisense waits for the query to execute.
- Once the result is returned, it is transformed back into JAQL syntax and returned to the client side.

To make a choice based on this factor, answer the following questions:
- How sensitive is the client to a delay in the query's result?
- When showing real-time data, is this extra latency acceptable?

Connector Availability

Sisense supports hundreds of data connectors (see Data Connectors). However, not all connectors are available for Live data models. The reasoning has to do with connector performance: a "slow connector", or one that requires a significant amount of processing, may lead to a bad user experience with Live models (i.e., widgets that take a long time to load):
- For ElastiCubes, Sisense allows the user to utilize all the data connectors.
- For Live models, Sisense limits the choice to a few high-performing connectors (including most data warehouses and high-performing databases).

To make a choice based on this factor, answer the following questions:
- Does my data source's connector support both data model types?
- Should I consider moving my data to a different data source to allow live connectivity?

Caching Optimization

Sisense optimizes performance by caching query results: results are stored in memory for easy retrieval in case they are re-executed. This ability provides a great benefit and improves the end-user experience:
- For ElastiCubes, Sisense recycles (caches) query results.
- For Live models, Sisense performs minimal caching to keep the data near real-time. (Note that caching can be turned off upon request.)

To make a choice based on this factor, answer the following questions:
- Do I want to leverage Sisense's query caching?
- How long do I want to cache data?

Dashboard Design Limitations

Specific formulas (such as mode and standard deviation) and widget types (such as box plots or whisker plots) may result in "heavy" database queries:
- For Live models, Sisense limits the use of these functions and visualizations, as computing them may take a long time and cause a bad user experience.
- For ElastiCubes, Sisense allows the user to use them, as processing them is internal to Sisense.

To make a choice based on this factor, answer the following questions:
- Do I need these functions and visualizations?
- Can I pre-aggregate the data and move these calculations into the data source instead of Sisense?

See also Choosing a Data Strategy for Embedded Self-Service.
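As a small illustration of the JAQL round-trip described under Query Performance above, the sketch below times the same JAQL payload against a datasource via the REST API. The endpoint path and payload shape are assumptions drawn from common usage; consult your version's REST reference before relying on them.

```python
# Sketch: time a JAQL round-trip against a datasource. The endpoint path and
# payload shape are assumptions - verify against your version's REST reference.
import time
import requests
from requests.utils import quote

SISENSE_URL = "https://your-sisense-server"           # hypothetical URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # hypothetical token

datasource = "Sample ECommerce"
jaql_payload = {
    "datasource": datasource,
    "metadata": [
        {"jaql": {"dim": "[Commerce.Revenue]", "agg": "sum", "title": "Total Revenue"}}
    ],
}

start = time.monotonic()
resp = requests.post(
    f"{SISENSE_URL}/api/datasources/{quote(datasource)}/jaql",
    json=jaql_payload,
    headers=HEADERS,
)
resp.raise_for_status()
print(f"JAQL round-trip took {time.monotonic() - start:.2f}s")
```

Comparing the measured round-trip for an ElastiCube model against a Live model of the same data is one practical way to quantify the extra latency discussed in this section.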
Blox - Multi-Select Dropdown List Filter

This article will take you step by step through creating a multi-select dropdown filter using Blox and JavaScript.

ElastiCube:

1. For each field you want to use in the multi-select filter, add a custom column. For instance, in our Sample ECommerce ElastiCube, add a custom column to the "Category" table.

For Sisense on Windows, add the string below to create the column:

```
'<li><input type="checkbox" />'+[Category].[Category]+'</li>'
```

For Sisense on Linux:

```
'<li><input type=checkbox>'+[Category].[Category] + '</li>'
```

2. Change its name to [CategoryHTML].
3. Do the same for the column [Country] from the table [Country] and name it [CountryHTML].
4. Perform a 'Changes Only' build.

Dashboard:

1. Download the dashboard attached and import it into your application.
2. Create a custom action in Blox and name it MultiBoxSelection.
3. Add the action's code below:

```javascript
var outputFilters = [];
var widgetid = payload.widget.oid;
var widgetElement = $('[widgetid="' + widgetid + '"]');

// Collect the checked list items rendered inside this Blox widget
widgetElement.find($('blox input:checked')).parent().each(function () {
    var values = $('.fillmeup').attr('value') + $(this).text();
    $('.fillmeup').attr('value', values);
}).each((i, lmnt) => {
    outputFilters.push($(lmnt).text());
})

// Update the dashboard filter defined by the action's data payload
payload.widget.dashboard.filters.update(
    {
        'jaql': {
            'dim': payload.data.dim,
            'title': payload.data.title,
            'filter': {
                'members': outputFilters
            },
            'datatype': 'text'
        }
    },
    {
        'save': true,
        'refresh': true
    }
)
```

4. Action's snippet:

```json
{
    "type": "MultiBoxSelection",
    "title": "Apply",
    "data": {
        "dim": "FilterDimension",
        "title": "FilterTitle"
    }
}
```

5. Add the widget's script. For each widget, change the identifiers [CategoryList] and [CategoryItems]; these identifiers must be unique for each widget on a page:

```javascript
// Toggle the dropdown list open/closed when its anchor is clicked
widget.on('ready', function() {
    var checkList = document.getElementById('CategoryList');
    var items = document.getElementById('CategoryItems');
    checkList.getElementsByClassName('anchor')[0].onclick = function(evt) {
        if (items.classList.contains('visible')) {
            items.classList.remove('visible');
            items.style.display = "none";
        } else {
            items.classList.add('visible');
            items.style.display = "block";
        }
    }
    items.onblur = function(evt) {
        items.classList.remove('visible');
    }
});

// Concatenate all result rows into the first cell so every checkbox renders
widget.on('processresult', function(a, b) {
    b.result.slice(1, b.result.length).forEach(function(i) {
        b.result[0][0].Text = b.result[0][0].Text + ' ' + i[0].Text
    })
});
```

These identifiers should be the same as those used in the widget in the value of [text]:

```json
{
    "type": "TextBlock",
    "spacing": "large",
    "id": "",
    "class": "",
    "text": "<div id='CategoryList' class='dropdown-check-list' tabindex='100'> <span class='anchor'>Select Category</span> <ul id='CategoryItems' class='items'>{panel:CategoryHTML}</ul> </div>"
}
```

6. Click Apply on the widget and refresh the dashboard.

Important Notes:
- Make sure you have the Category in the items (the newly created column) and name the item "Category".
- Make sure you have a Category filter (the actual Category field, named "Category") on the dashboard level.
- Make sure to replace the field and table names with the relevant field/table in the action, in the Blox widget editor's items, and in the dashboard filter.

Disclaimer: Please note that this blog post contains one possible custom workaround solution for users with similar use cases. We cannot guarantee that the custom code solution described in this post will work in every scenario or with every Sisense software version.
As such, we strongly advise users to test solutions in their environment prior to deploying them to ensure that the solutions proffered function as desired in their environment. For the avoidance of doubt, the content of this blog post is provided to you "as-is" and without warranty of any kind, express, implied, or otherwise, including without limitation any warranty of security and/or fitness for a particular purpose. The workaround solution described in this post incorporates custom coding, which is outside the Sisense product development environment and is, therefore, not covered by Sisense warranty and support services.
Redirect users to different dashboards based on dashboard filters

This article discusses and shares the full code of a dashboard script that redirects users to a different dashboard ID based on the user's filter selections or the initially loaded filter state. In the particular example shared in this article, the script checks whether the selected date filter (either a members filter or a from/to filter range) includes a date earlier than the earliest date in the current dashboard's datasource. If so, the script redirects the user to a specified alternate dashboard, preserving any additional URL segments and query parameters. Similar scripts can redirect based on any other type of filter state, including non-date filters.
Builds fail because libraries used in Custom Code tables are not installed

Use case

You have specific libraries in Notebooks that must be installed for Custom Code tables to run. However, these libraries can be uninstalled automatically (during an upgrade or other system changes).

Solution

Add the following Python code to the first cell of each Custom Code table notebook to check whether the necessary libraries exist, and install them if not, before executing the rest of the code. Adjust the 'libraries' values to the libraries you need to check.

```python
import importlib.util  # find_spec lives in the importlib.util submodule
import pandas as pd

# List the libraries you want to check
libraries = ['numpy', 'pandas', 'matplotlib', 'openpyxl', 'scipy', 'pyarrow']

# Initialize an empty list to store results
results = []

for library in libraries:
    lib_found = importlib.util.find_spec(library)
    if lib_found is not None:
        results.append({"library": library, "status": "Previously Installed"})
    else:
        results.append({"library": library, "status": "Newly Installed"})
        print(f"{library} NOT FOUND. Installing now...")
        !pip install {library}

# Display the check results as a table in the notebook output
pd.DataFrame(results)
```

Related Content:
- Sisense Docs: https://docs.sisense.com/main/SisenseLinux/transforming-data-with-custom-code.htm
- Sisense Academy: https://academy.sisense.com/notebooks-course
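If you also want to record which version of each library is present (useful when diagnosing upgrade-related breakage), the standard library's `importlib.metadata` module (Python 3.8+) can extend the check above. This is a sketch, not part of the original solution:

```python
# Sketch: log installed versions alongside the presence check (Python 3.8+).
from importlib import metadata

for library in ['numpy', 'pandas', 'matplotlib', 'openpyxl', 'scipy', 'pyarrow']:
    try:
        # Raises PackageNotFoundError if the distribution is not installed.
        print(f"{library}: {metadata.version(library)}")
    except metadata.PackageNotFoundError:
        print(f"{library}: NOT INSTALLED")
```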