Export a table from the elasticube to a CSV
* Disclaimer: This works for Windows only. It will work with Linux versions in the future as well; once it does, I will update the post.

Sometimes we need to export a table from the Elasticube to a CSV file, whether to import it into the next build, send it to a colleague, keep a backup, or for any other purpose. (In the Sisense Desktop version this could be done with the Export to CSV post plugin, but that plugin is not available in the web ECM.)

The method described below uses the following API call:

http://localhost:8081/api/datasources/LocalHost/ECName/sql?query=select%20*%20from%20TableName&format=csv

Replace ECName and TableName with your Elasticube and table names; if either name contains spaces, encode them as %20.

Happy exporting 🙂
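For illustration, a minimal Python sketch of this call (not an official Sisense utility) might look like the snippet below. The port, the cube name ECName, the table name TableName, and the bearer-token header are placeholders and assumptions; adjust them, or drop the header, to match your own installation and authentication setup.

# Minimal sketch, assuming a local installation on port 8081 and token-based
# authentication; ECName, TableName, and the token are placeholders.
import requests

BASE_URL = "http://localhost:8081/api/datasources/LocalHost/ECName/sql"
API_TOKEN = "<your REST API token>"  # assumption: adjust to your auth setup

# requests encodes the query parameters for you, so spaces in the table or
# cube name do not need to be escaped by hand here.
params = {"query": "select * from TableName", "format": "csv"}
headers = {"Authorization": "Bearer " + API_TOKEN}

response = requests.get(BASE_URL, params=params, headers=headers)
response.raise_for_status()

# Save the CSV payload returned by the endpoint.
with open("TableName.csv", "w", encoding="utf-8") as f:
    f.write(response.text)

The manual %20 escaping is mainly needed when you paste the URL directly into a browser or another tool that does not encode the query string for you.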
Schedule Download Of SQL API Queries To CSV

Introduction

With the attached Python script and the Windows Scheduler, you can automate the download of CSV files based on SQL API queries against the Elasticube. This kind of process comes in handy when a user needs to download information regularly so it can be consumed by other processes. The script takes care of the authentication and keeps the definitions of the cube, query, and response file.

Prerequisites

1. Install Python 3.4 (download here)
2. Python C++ compiler (download here)
3. Python dependencies:
   jwt (pip module PyJWT)
   parse (pip module parse)
   requests (pip module requests)
   django (pip module django)

All Python dependencies can be installed with pip. For example, to install PyJWT, type the following in the command line:

pip install PyJWT

Script

The sql_to_csv.py script is attached. Note the Settings section in the script, where you set the server location, cube name, query, user email and password, REST API token, and response file. A rough sketch of the flow the script automates is shown at the end of this post.

Windows Scheduler

See here how to schedule a task in Windows. You can also read this Stack Overflow post about scheduling the task to run every hour. The command to set in the Scheduler is the following:

python sql_to_csv.py
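As a rough illustration of the flow sql_to_csv.py automates (authenticate, run the SQL API query, write the response to the response file), a hedged sketch follows. This is not the attached script: the server URL, cube name, query, token handling, and output path are placeholders, and the real script's Settings section and authentication (it pulls in PyJWT) may differ.

# Illustrative sketch only -- not the attached sql_to_csv.py.
# SERVER, CUBE, QUERY, API_TOKEN, and OUTPUT_FILE are placeholder settings.
import requests

SERVER = "http://localhost:8081"
CUBE = "MyCube"
QUERY = "SELECT * FROM MyTable"
API_TOKEN = "<your REST API token>"  # assumption: token-based auth
OUTPUT_FILE = "response.csv"

# Run the query through the SQL API and request the result as CSV.
url = SERVER + "/api/datasources/LocalHost/" + CUBE + "/sql"
response = requests.get(
    url,
    params={"query": QUERY, "format": "csv"},
    headers={"Authorization": "Bearer " + API_TOKEN},
)
response.raise_for_status()

# Write the CSV payload to the response file.
with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
    f.write(response.text)

Once a script along these lines is saved as sql_to_csv.py, the Scheduler command above runs it on whatever interval you configure; for repeated runs you may want to timestamp OUTPUT_FILE so earlier downloads are not overwritten.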
Get number of rows in elasticubes

Though there is no direct API endpoint for getting the number of rows in an Elasticube, there is an endpoint to run SQL directly against a cube. Using PySense, it is straightforward to iterate through all your Elasticubes, run SELECT COUNT(*) against each table in those cubes, and export the results.

How it works

1. Define the CSV file to output the results to.
2. Include your credentials (authenticate_by_password is used below for testing purposes; you will probably want to use authenticate_by_file).
3. The code retrieves all Elasticubes.
4. For each cube, it iterates through all tables in the data model and runs SELECT COUNT(*) FROM table_name.
5. The cube name, table name, and row count are stored.
6. The results are written to the CSV file.

Disclaimer

Use this script as guidance; you are advised to test, validate, and modify it as needed. Sisense admin credentials are generally required to access the SQL endpoint. Row-level data security rules are not applied. This script was built for a Linux environment.

from PySense import PySense
import json
import csv

# Parameters
report_name = 'C:\\Users\\blahblah\\Downloads\\rowcount.csv'
py_client = PySense.authenticate_by_password('url', 'username', 'password', 'Linux')

# Collect one entry per cube/table combination: [cube name, table name, row count]
report_map = {}
elasticubes = py_client.get_elasticubes()
for cube in elasticubes:
    cube_name = cube.get_title()
    print('Starting ' + cube_name)
    try:
        data_model = cube.get_model()
        for data_set in data_model.get_data_sets():
            for table in data_set.get_tables():
                table_name = table.get_name()
                cube_table_key = cube_name + '_' + table_name
                query = "SELECT COUNT(*) FROM \"" + table_name + '\"'
                sql_result_json = cube.run_sql(query, "")
                sql_result_dict = json.loads(sql_result_json)
                # The row count comes back under the 'values' key as a nested list;
                # strip the surrounding brackets from its string form.
                for key, value in sql_result_dict.items():
                    if key == 'values':
                        str_value = str(value)
                        str_value_trim = str_value[2:len(str_value) - 2]
                        report_map[cube_table_key] = [cube_name, table_name, str_value_trim]
        print('Finished ' + cube_name)
    except:
        report_map[cube_name + '_error'] = [cube_name, 'error', '0']
        print('Error ' + cube_name)

# Export the collected counts to the CSV report.
print('Start export')
with open(report_name, 'w', newline='') as csv_file:
    dashboard_writer = csv.writer(csv_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    dashboard_writer.writerow(['Data Model', 'Table', 'Row Count'])
    for report in report_map.values():
        dashboard_writer.writerow(report)