Postgres vs. MongoDB for Storing JSON Data: Which Should You Choose?
As your data keeps growing, you will need to keep your data storage in good shape for analytics. Use this article to think through the complex decision of choosing between Postgres and MongoDB for your data storage.

CONDITIONAL CASE SUM
Hi Sisense Community!! Please reference the attached workbook that I drafted in Excel. I am trying to draft a daily view, by week, of each employee's time worked. This screenshot shows an example of actual hours worked by day by employee. I am trying to create a dynamic case statement for a metric that calculates the same way as is seen in K7 in the workbook. Essentially it would read like this:

SUM(TIME ON SUNDAY)
+ IF(TIME ON MONDAY = 0 THEN 0 ELSE TIME ON MONDAY)
+ IF(TIME ON TUESDAY = 0 THEN 0 ELSE TIME ON TUESDAY)
+ IF(TIME ON WEDNESDAY = 0 THEN 0 ELSE TIME ON WEDNESDAY)
+ IF(TIME ON THURSDAY = 0 THEN 0 ELSE TIME ON THURSDAY)
+ IF(TIME ON FRIDAY = 0 THEN 0 ELSE TIME ON FRIDAY)

Any suggestions on how to get this case statement started through the first two lines? Many thanks in advance!

-Carter
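One possible starting point, offered as a hedged sketch rather than a definitive answer: the per-day logic above maps directly onto one CASE expression per weekday. The table and column names below ([WeeklyTime], [Employee], [SundayHours], and so on) are hypothetical placeholders mirroring the Excel layout, not the poster's actual schema:

-- Hedged sketch, assuming a table shaped like the workbook: one row per
-- employee per week, with one hours column per day. All names are invented.
SELECT
    w.[Employee],
    w.[SundayHours]
  + CASE WHEN w.[MondayHours]    = 0 THEN 0 ELSE w.[MondayHours]    END
  + CASE WHEN w.[TuesdayHours]   = 0 THEN 0 ELSE w.[TuesdayHours]   END
  + CASE WHEN w.[WednesdayHours] = 0 THEN 0 ELSE w.[WednesdayHours] END
  + CASE WHEN w.[ThursdayHours]  = 0 THEN 0 ELSE w.[ThursdayHours]  END
  + CASE WHEN w.[FridayHours]    = 0 THEN 0 ELSE w.[FridayHours]    END
    AS WeeklyHours
FROM [WeeklyTime] w

Note that CASE WHEN x = 0 THEN 0 ELSE x END evaluates to x either way, so if the intent of K7 is something other than a plain sum (for example, substituting a default when a day is 0, or stopping at the first 0), the THEN branch is where that logic would go.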
HOW TO TROUBLESHOOT BUILD FAILURES (Linux OS)

Building an ElastiCube imports the data from the data source(s) that have been added. The data is stored on the Sisense instance, where future dashboard queries will be run against it. You must build an ElastiCube at least once before the ElastiCube data can be used in a dashboard. This article will help you understand and troubleshoot common build failures and covers the following:

Steps of the Build Process
Troubleshooting Tools
Troubleshooting Techniques
Common Errors & Resolutions

Please note the following was written for Sisense on Linux as of version L2022.5.

Steps of the Build Process

1. Initialization: The initialization stage of the build process prepares the platform for the data import, which includes checking and deploying the resources needed to perform the build.

2. Table Import: This step imports the data from the external data platforms into the ElastiCube. The ec-bld pod runs two to three concurrent containers, meaning that two to three import tasks can be processed simultaneously. The build pod, which uses the given connector framework (old or new, depending on the connector used), connects to the given source(s). By default, 100,000 lines of data are read and imported per cycle during this phase. The MServer is responsible for getting the data from the connectors and writing it to storage in Sisense's database (MonetDB). While importing the data, the process uses the query assigned to the given data source (either the default Select All or a custom query).

3. Custom Table Columns: This step of the build process runs the data enrichment logic defined in the ElastiCube modeling. There are three types of custom elements:

Custom Columns (Expressions)
Custom Tables (SQL)
Custom Code Tables (Python-based Jupyter Notebooks)

Custom elements use the data previously imported during the Base Tables phase as their input. The calculations/data transformations happen sequentially, one after the other, based on the Build Plan/Dependencies generated earlier in the process, between finalizing the Initialization phase and starting the Base Tables phase. Calculations occur locally based on the data in the ElastiCube, and can consume a lot of CPU and RAM depending on the complexity of the Expressions/SQL/Python Jupyter Notebooks.

4. Finalization: These steps finalize the ElastiCube's build and ready it for use. The steps include:

I. The current (up-to-date) data of the ElastiCube is written to disk.
II. The management pod stops the current ElastiCube running in Build Mode (the ec-bld pod, and its ReplicaSet + Deployment controllers).
III. The management pod creates a new ElastiCube running in Query Mode (an ec-qry pod, and its ReplicaSet + Deployment controllers).
IV. Once the new ElastiCube is ready, it becomes active and available to answer data queries (e.g., dashboard requests).
V. The management pod stops the previous ElastiCube running in Query Mode (the old ec-qry pod, and its ReplicaSet + Deployment controllers).

Builds may be impacted by several factors.
It is recommended to test your build process and tune accordingly when changes are made to any of the following:

Hardware
Sisense architecture
Middleware
Data source
Connectivity, networking, and security policies
Sisense upgrade/migration from Windows to Linux
Sisense configuration
Increase in data volume
Data model schema (i.e., number and complexity of custom tables and import queries)

Troubleshooting Tools

Leverage the following when troubleshooting build issues:

Inspect log files

Each log contains information related to a different part of the build process and can help identify the root cause of your build issue. Depending on your Sisense deployment, logs may be located in different directories. The default path for Single Node is /var/log/Sisense/Sisense. For Multi Node, it's on the application node inside the Management pod. If you need to collect logs, make sure to do so soon after the build failure, as logs are trimmed after they reach a certain size.

Build.log: General build log containing information for all ElastiCubes.
Query.log: General query log containing information for all queries.
Management.log: Elaborate log file containing service communication requests (e.g., Build reaching out to Management to fetch info from MongoDB).
Connector.log: General information for all builds and connectors.
Translation.log: All logs related to the translation service.
ec-<cube name>-bld-<...>.log: Contains the latest build log for each cube; it can also be viewed through the UI.
ec-<cube name>-qry-<...>.log: Contains logs related to a specific ElastiCube's queries.
build-con-<cube name>-<...>.log: More verbose logs providing connector-related details for specific builds.
Combined.log: Aggregation of all logs in one file; it can be downloaded via the Sisense UI.

Please note that if you are a Managed Services customer, only the combined log and the latest build log for each cube are available.

Use Grafana to check system resources

Grafana is a tool that comes packaged with Sisense and can be used to monitor system resources across pods. Every build has its own pod. This allows you to see the CPU and RAM that each build uses, as well as what is used by your whole Sisense instance. Excessive CPU and RAM usage is a common cause of build failures.

1. Go to Admin > System Management > click on Monitoring. Click on the Search icon, select All pods per namespace, and then select the namespace where Sisense is deployed ("sisense" by default).
2. In the Pod dropdown, search for "bld" and select the cube you want to observe. (You may need to reduce the timeframe to get results.)
3. Observe CPU and RAM over the duration of the build. (In the CPU graph, 1 core is represented by 100%.)

See this article for additional information on using Grafana.

Use Usage Analytics to observe build metrics

Usage Analytics contains build data and pre-built dashboards to assist you in identifying build issues and build performance across cubes over time. See here for documentation on this feature. Ensure you have Usage Analytics turned on and configured to keep the desired history!

Troubleshooting Techniques

Below are some common issues and suggestions for build errors. The first step is to read and understand the error message in the Sisense portal. This will help you resolve the exact build issue.

1. Whenever you face build issues, check the memory consumption.
Options include either SSHing to your Linux machine and running the "top" command to check process and memory consumption, or opening Grafana/logz.io and checking memory consumption by pod. If you see high memory usage, try scheduling builds in off hours to see if that helps.

2. If the cube is too big, try to break it into multiple cubes by sharding the data or separating use cases.

3. Check the data groups first to see whether one specific cube is very large or whether you only have a default data group. If all the cubes are part of that data group, create a different group for the large cube.

4. If the error message is related to Safe Mode ("Your build was aborted due to abnormal memory consumption (safe mode)"), check the Max RAM value set in the data groups. You can increase the Max RAM value and verify the build. (https://support.sisense.com/kb/en/article/how-to-fix-safe-mode-on-build-dashboard)

See the following two articles for details on managing data groups:
https://documentation.sisense.com/docs/creating-data-groups
https://community.sisense.com/t5/knowledge/how-to-configure-data-groups/ta-p/2142

5. Running concurrent build processes can also be an issue. Try not to run multiple builds at the same time. If that is the issue, open the Configuration Manager (/app/configuration), expand the Build section, change the value of Base Tables, Max Threads to 1, and save. (The relevant pod should restart automatically, but you can also restart the build pod manually using "kubectl -n sisense delete pod -l app=build".)

6. Lack of sufficient storage space to create a new ElastiCube (in either a Full or Accumulative build) can also result in build failure. It is recommended to free up some space and then check the build.

7. Check the log files and the query running in the backend, and try to break down complex queries to reduce memory consumption.

8. The items below outline the configurations that affect this behavior:

- Base Tables Max Threads: Limits the number of Base Tables that are built simultaneously in the SAME ElastiCube.
- MaxNumberOfConcurrentBuilds: Available via the Configuration Manager (click the Sisense logo at the top-left five times and select "Build").
- Timeout for Base Table: Will forcefully fail the build if any Base Table takes more than this amount of time to build; also available via the "Build" configuration.

Remember that making any changes to these settings might require a pod restart. To restart the pod, run the following command:

kubectl -n sisense delete pod -l app=build

Check that the pod restarted based on the pod age:

kubectl -n sisense get pods -l app=build

9. If you have many custom tables, try to use the import query (move the custom table query into the custom import query); a hedged sketch follows this list. Documentation: Importing Data with Custom Queries - Introduction to Data Sources

10. Please check your data model design and confirm that it conforms to Sisense best practices. For example, many-to-many relationships take more memory and can result in build failures. https://support.sisense.com/kb/en/article/data-relationships-introduction-to-a-many-to-many

11. Builds can also fail because of the network connection between data sources and the Sisense server. Perform a Telnet test to verify connectivity from the Sisense server to the data server.
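As an illustration of technique 9, here is a minimal, hypothetical sketch; the table and column names are invented for the example, not taken from any specific model. Instead of importing a raw table in full and then defining a custom table inside the ElastiCube that aggregates it, the same aggregation can be placed in the table's custom import query so the source database returns only the reduced result:

-- Custom import query: the source database performs the aggregation,
-- so only the aggregated rows are imported into the ElastiCube.
SELECT CustomerID, SUM(TotalDue) AS TotalDue
FROM Orders
GROUP BY CustomerID

This both shrinks the imported data and removes a custom-table step from the build's Custom Table Columns phase.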
Common Build Errors and Resolutions

Error: BE#468714 Management response: ElastiCube failed to start - failed to detect the EC; Failed to create ElastiCube for build process.
Resolution: This means the process does not have enough resources to bring up the build pod. If the Kubernetes process for creating the pod is still running, the following command will let you monitor the build pods being brought up and check that they become healthy, up, and running:

kubectl -n sisense get pods -l mode=build -w

If the value for that pod in the RESTARTS column is greater than 0, the pod is not able to initialize properly and will retry 5 times before it fails and terminates the process. If the build process has already terminated, view the Kubernetes journal to find the reason for the failure:

sudo journalctl --since="<how long ago>" | grep -i oom

For example, if the build occurred within the past hour or so, a "50 min ago" window with a grep on "oom" will show whether an out-of-memory condition occurred for the given build:

sudo journalctl --since="50 min ago" | grep -i oom

A match indicates that an oom_kill_process was triggered for out-of-memory reasons.

Error: BE#196278 failed to get sessionId for dataSourceId
Resolution: This error indicates that the user running the build does not have permission to run the build for the given ElastiCube. The ElastiCube needs to be shared with the user with "Model Edit" permission.

Error: BE#470688
Resolution: This occurs when an accumulative build, which relies on the ElastiCube stored in the farm, fails because access to the directory in the farm storage location is broken, or the directory/files are corrupted or missing. The only way to resolve it is to either restore the farm directory for the ElastiCube from a backup or re-build the ElastiCube with a full build.

Error: BE#313753 Circular dependency detected
Resolution: This happens when you have a lot of custom tables and custom columns that depend on each other. Please see the following article on how to avoid loops: https://documentation.sisense.com/docs/handling-relationship-cycles#gsc.tab=0

Error: Failed to read row: xxxxxxx, connector
Scenario: Sisense is importing data from the database using a Generic JDBC connector. Why did this fail suddenly?
Resolution: The data added recently is not in the correct format or not as expected in the table. If you are using a Generic JDBC connector, it is worth checking the connector errors online, where you may find useful information for resolving the issue.

Error: BE#640720 Build failed: base table <table name> was not completed as expected. Failed to read row: 0, Connector (SQL compilation error: invalid number of result columns for set operator input branches, expected 46, got 45 in branch 2).
Resolution: Most likely an issue in the custom import query or on the target table. Please check that the right number of columns is used in the query, and refresh the table schema.

Error: Build failed: BE#636134 Task not completed as expected: Table TABLE_NAME : copy_into_base_table build Error -6: Exception for table TABLE_COLUMN_NAME in column COLUMN_NAME at row ROW_NUMBER: count X is larger than capacity Y
Resolution: This can be resolved by changing BaseTableMax (parallel table imports) from 4 to 1 in the Configuration Manager.

Conclusion

Understanding the exact error message is the first step toward resolution. Based on the symptom, you can try the suggestions listed above to quickly resolve build failure issues. If you need additional help, please contact Sisense Support or create a Support Case with all the log files listed above, and a Support Engineer will be able to assist you.
Remember to include all relevant log files for an efficient troubleshooting process!

Krutika Lingarkar, Technical Account Manager in Customer Success, wrote this article in collaboration with Chad Solomon, Technical Account Manager, Senior in Customer Success, and Eran Ganot, Tech Enablement Lead in Field Engineering.

Data Relationships: Many-to-Many Relationship Resolutions
For general information on types of data relationships, potential complications, and how to resolve them, see the following articles:

Data Relationships: Introduction to Many-to-Many
Data Relationships: Check Type of Relationship
Data Relationships: One-to-Many Relationship
Data Relationships: The Invisible Many-to-Many
Data Relationships: Many-to-Many Relationship Resolutions (You are here)

There are several methods to resolve and bypass a many-to-many relationship; the right solution depends on the business model and the logic of the business questions at hand. The following solutions differ by business logic and schema, and each can be applied to the corresponding schema.

Resolutions for 2 tables, one relationship:
- Two Tables, One Relationship
- Aggregated Table

Possible resolutions for more than 2 tables, more than 1 relationship:
- Lookup Function
- Concatenate the two tables into one

Resolutions for 2 tables, one relationship

Two Tables, One Relationship

The direct solution for a many-to-many would be to break the relationship into two separate one-to-many relationships, as seen in the image below. The logic behind testing this issue can be visualized in the decision tree below. Follow these steps to resolve the many-to-many this way:

1. Create a custom dimension by using a custom SQL expression in the ElastiCube. In the expression of this table, select all the individual values for the identifier column from both sides. The expression should look like this:

SELECT *
FROM (
    SELECT DISTINCT r.GuestID, r.GuestName FROM [Reservations] r
    UNION
    SELECT DISTINCT p.GuestID, p.GuestName FROM [Payments] p
) AS G

This query will take all Guest ID values from both tables and, using the UNION statement, bring in only the unique values from both tables, making this a complete list of all distinct Guest ID values.

2. Merge the Guest ID field from the new 'linking' table to the other two Guest ID fields from the other two tables, thus creating two one-to-many relationships.

3. You can now use this Guest ID field as the rows or axes elements of a widget, pulling the unique values from the new Guest dimension, with measures from the two other tables.

Image 1. Custom Common Dimension

Aggregated Table

In situations where we have more than one fact table (a fact table is a primary table containing the measures or fields used for calculations in the dashboard) in our ElastiCube, there are several situations where an aggregated table can resolve a many-to-many relationship.

Image 2. Incorrect Connections

Assuming we'd like to segment our data according to a few different dimensions, creating relationships directly between these fields can and will create many-to-many relationships in one of two ways, according to the schema:

1. Both tables don't hold unique values, and all values from one table are held in the second table. In this scenario, either a linked dimension (as described in solution 1) or an aggregated table can be created which will hold all the unique values and the desired calculations for one of the tables.
In order to create an aggregated table, create a custom SQL expression that aggregates values from the table which holds all values (its own, plus the subset present in the other table) with the following expression:

SELECT i.OrderDateKey,
       i.ProductKey,
       SUM(i.DiscountAmount),
       SUM(i.SalesAmount),
       AVG(i.UnitPriceDiscountPct)
FROM [FactInternetSales] i
GROUP BY i.OrderDateKey, i.ProductKey

This custom SQL expression selects the distinct OrderDateKey and corresponding ProductKey values from FactInternetSales, grouped by these fields, together with single-value aggregations for the different fields: Discount Amount, Sales Amount, and the average Unit Price Discount. Now merge the OrderDateKey and ProductKey between this table and the two other tables, and you will be able to pull the values from this new table into the rows or axes panel of a widget in Sisense Web, with measures and additional aggregations.

2. Both fact tables don't hold unique values, and there are different values for several fields in both tables. Resolving this scenario incorporates both solutions above: create an aggregated table as stated in 2.1, and a dimension table as stated in 1. The final resolution should look like this:

Image 3. New Schema with dim & aggregated table

Resolutions for more than two tables, more than one relationship

Using the Lookup function

In most scenarios we'll aggregate values according to a given ID from the unique side of the relationship to the duplicate side; in specific cases, however, it will be the other way around. For example, in the following scenario we have 3 tables with two one-to-many relationships between them. This can potentially create a many-to-many relationship if we were to query the two leaf tables, meaning the query result table would have multiple rows that are indistinguishable from one another.

Image 4: Two consecutive M-to-M relationships

Using the Lookup function, we can import values from a remote table by matching values in a field. This creates a new field in the table with the matching value of the identifying field from the other table. Taking the following example of tables T1, T2, and T3, we'd like to run a query that displays aggregations for the duplicate IDs from T1 with a measure from T3. If we ran the query as is, we'd get multiple values in the query's result set and wouldn't be able to run the aggregation. To resolve this, we'll use the Lookup function to import the values from T3 into T2, and then re-run the query only on tables T1 and T2.

We can import the values of 'M3' from the 'T3' table into the 'T2' table. Create a new custom column, and use the Lookup function to import the values of the attribute. In this case, the Lookup function should look like this:

Lookup([T3],[T3].[M3],[T2].id2,[T3].id2)

Running this statement in table T2 imports the matching values of T3.M3 from T3 according to the matching results in id2 between the two tables.

Image 5: Two consecutive Many-to-One relationships after Lookup fix

Concatenate the two tables into one

Assuming we have 2 separate tables with duplicate ID values in each, each holding different columns for each ID, we can create a custom table which holds all values for every ID, and pull the aggregations from this new table. Notice that the two original tables, Table_1 and Table_2, have different columns.
Image 6: Concatenating tables

Using the following SQL statement, we can import the data from both tables, with the IDs and the columns respectively:

SELECT s.id AS id, s.m1, s.m2, ToInt(NULL) m3, ToInt(NULL) m4
FROM [Table 1] s
UNION
SELECT t.id, ToInt(NULL), ToInt(NULL), t.m3, t.m4
FROM [Table 2] t

This will create a table with 5 columns: id, m1 (from Table_1), m2 (from Table_1), m3 (from Table_2), and m4 (from Table_2). The values missing from each table will be NULLs, which results in the following table:

Image 7: Concatenated table; result set

Image 8: Determining a many-to-many relationship; decision tree. This is based on the first example with the Payments and Reservations tables.
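As a brief follow-up to the concatenation approach above: once the concatenated custom table exists, aggregations can be pulled from it alone, sidestepping the many-to-many join entirely. A minimal sketch, assuming the custom table was saved under the hypothetical name [ConcatenatedTable]:

-- Aggregate per id from the single concatenated table; SUM ignores the
-- NULL placeholders introduced by the UNION above.
SELECT c.id, SUM(c.m1) AS total_m1, SUM(c.m3) AS total_m3
FROM [ConcatenatedTable] c
GROUP BY c.id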
Display values continuously over time with existing gaps in FACT data table

Hi All, I have the following use case: I want to visualize the overall stock level of 2 different products over time. The source for the stock levels is a FACT table containing the log data of the stock movements. The FACT table is connected to a date dimension table containing a row for every day (see screenshot attached). However, if there was no movement for a specific product on a given day, there is no entry in the FACT table. Example:

01.01.22: stock level = 5
02.01.22: stock level = 4
06.01.22: stock level = 10

When displaying those stock levels on a line/column chart, only these 3 days are displayed by default. There is a "time gap" in the data, and I am looking for a way to display the stock level for every day as a "continuous" rate over time. As there are no values in the FACT table for the 3rd/4th/5th of January, I would like to display the latest value available (in this case, stock level = 4). Are there any ways to solve this on the dashboard side (maybe with a function/script)? I have already enabled the widget option "Display missing values as 0", but instead of displaying a 0, I would like to display the latest available value from the FACT table. Please also see the screenshots attached. Happy about any kind of suggestions or tips!
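One possible direction, on the data-model side rather than the dashboard side: since the model already has a date dimension, a custom SQL table can "forward-fill" the log by pairing every calendar day with the most recent movement on or before it. The following is a hedged sketch only; the table and column names ([DimDates] with one row per [Date], and [StockLog] with [ProductID], [MovementDate], [StockLevel]) are placeholders for the actual schema, and it assumes the SQL dialect in use supports derived tables and non-equi joins:

-- Inner query: for every calendar day and product, find the date of the
-- last stock movement on or before that day.
-- Outer join: pull the stock level recorded on that date.
SELECT x.Date, x.ProductID, s.StockLevel
FROM (
    SELECT d.Date, l.ProductID, MAX(l.MovementDate) AS LastMovementDate
    FROM [DimDates] d
    INNER JOIN [StockLog] l ON l.MovementDate <= d.Date
    GROUP BY d.Date, l.ProductID
) x
INNER JOIN [StockLog] s
    ON s.ProductID = x.ProductID
   AND s.MovementDate = x.LastMovementDate

Note that this materializes one row per product per day, so the resulting row count should be checked before building, and if a product can have several movements on the same day, the join would need a tie-breaker (e.g., keeping only the day's closing level).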
Date Dimension File

Analytical Need

Analyzing information by date can be very useful, but not every source data set provides enough information to do this kind of analysis.

Modeling Challenge

We will need to import a date table, which will act as a reference for all time queries.

Solution

Example: Employment Table

An employment table contains the following columns: Employee Name, Start Date, End Date. This describes when an employee started and stopped working at a company. However, if we want to create a line chart that displays days on the X axis and charts the number of employees over time, it will not be possible with this data set. This is because the data set will not contain every individual date (assuming there were days when the company did not hire or fire anyone). If every date is contained in your data set, use the instructions in Create a Common Date Selection. This is why we need to import a date dimension table.

Date Dimension Excel File

Under Attachments (bottom of this article), you will find the DimDates.xlsx file. This file contains day-level information for dates ranging from January 1, 1991 to December 31, 2039 (when you use this date as a filter, please remember to remove any dates that are lower than your minimum date or higher than your maximum date). After downloading the file, open the ElastiCube Manager, click Add Data > Excel File, and select DimDates.xlsx. In the preview, make sure the field Date is imported as a date-time field. If it isn't, check the culture setting below the preview pane and select English (United States).

Using Custom SQL to Answer the Question

We still need to establish how many employees existed in the company on any given day. We can do this by creating a custom SQL table that counts the number of employees per day, or we can create a table that has a row for every day that an employee was employed by the company. We'll create the employee-day table for this example (check the amount of history and the number of employees before choosing this option, to determine how many rows will get created).

SQL:

SELECT E.EmployeeName, D.Date
FROM Employee E
INNER JOIN DimDate D ON D.Date >= E.StartDate AND D.Date <= E.EndDate

Note on the join clause: we use the DimDates "Date" field. We could also use the "DateNum" field, but for this we would have to convert E.StartDate and E.EndDate to the same format. Information on this process can be found under Numeric Representation of Date Fields.

Now, create a line chart that lists the days from this new table, and create a value that counts the number of employees on each day.

Date file attached below.
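For completeness, here is a hedged sketch of the DateNum variant mentioned above. It assumes the date dimension carries a DateNum column in YYYYMMDD integer form, and that getyear()/getmonth()/getday() functions are available in the ElastiCube SQL dialect; verify both against your model before relying on it:

-- Convert StartDate/EndDate to YYYYMMDD integers so they can be
-- compared against the DateNum column directly.
SELECT E.EmployeeName, D.Date
FROM Employee E
INNER JOIN DimDate D
    ON D.DateNum >= getyear(E.StartDate) * 10000 + getmonth(E.StartDate) * 100 + getday(E.StartDate)
   AND D.DateNum <= getyear(E.EndDate) * 10000 + getmonth(E.EndDate) * 100 + getday(E.EndDate)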
Question LAG & LEAD Functions are functions that give access to multiple rows within a table, without the need for a self-join. The LAG function is used to access data from a previous row, while the LEAD function is used to return data from rows further down the result set. A perfect example of these functions in use is with the âSales Orderâ scenario. A department store is trying to monitor the purchase trends of its customers; more specifically, per customer, whatâs the average time it took to repeat an order? For us to do so, we must take a record in the table (Order Date) and compare it to another record in the same table. What are some ways we can do that? In our example, we will analyze & compare the efficiency between the multiple solutions. We will start by addressing the problem in the most common/traditional of solving a problem like this. Answer Steps: 1. Create A New ElastiCube Add Data > Microsoft SQL Server > Connect To Server Pull in Sales Order Header Table "How are we going to manipulate that table to get it into a format that we would want to work with?" Uncheck fields that you donât need; looking for high-level/important data BUILD! 2. Let's Create A Custom SQL Expression What we are getting here a single customerâs activity. It also orders the Order Date in ascending order (from first date to last) "For this specific customer, I want to be able to see __ amount of days between each order. How can we compute / compare those dates? In order to do so, we want to take the order table and join it to itself, where the customer is the same and the order date is offset to the previous or future date." So, letâs start by leveraging the rank function to assign an order number to every date⌠The Rank function is important here because within the Customer ID, we want to rank based off the order date, group it per customer, and order by the order date. This basically creates create a system that makes it so that it is comparing itself to itself minus 1. This is what sets up our ranking per order. In order for us to join the table to itself, letâs create a sub query and inside of it, put the entire Customerâs table: (Type Of Join:) What itâs getting joined on: Result: 3. Now That We Have The Order Date & The Total Due, We Can Start Subtracting Them Off Each Other & Start Doing Differences Now you would want to go back to the top; the first âSelectâ of the sub query and change the claims to: Preview: Now that we saw that this function is working, we can now delete our customer filter, order by filter. Now we get all the differences across all of the customers. Issue With This Solution The main issue with this solution is that it requires a lot of processing due to the two tables. Although we are really using one table, we still have two and the same data is repeated between them. This extra storage isnât necessary, creates the long processing, and can also create confusion for the customer. Solution 2: Doing A Lookup Function On Itself (Own Data Set). To improve upon the last solution, is there a way to pull in one table and do the manipulation in Sisense, without creating a second table? Answer: Yes! What we can do is take the custom table and add it as a view to the source; that way we only import one table Steps: 1. Duplicate "SalesOrderHeader 1", Rename "SalesOrderHeader 2" BUILD! For us to do a look up on this table, we need to have need to have one key to join. The key is a combination of the customer ID + ranking, thatâs what makes it unique. 2. 
2. First, create a new field, Record_Rank (Int), and build. Then Add Data > Custom SQL Table.

3. Head back into SalesOrderHeader 2 and set up the different fields to create the key: + New Field > CurrCustomerRankKey (Text). Do an entire build.

4. Create a test function: + Custom SQL Expression > Test Function.

5. Go back to the SalesOrderHeader 2 table and do the lookups there:
+ New Field > PrevOrderDate (Date-Time)
+ New Field > PrevTotalDue (Float)
+ New Field > DaysBetweenOrders (Int)
+ New Field > DiffOrderAmount
Then run a schema build. (The original post showed each field's Lookup formula and the resulting schema in screenshots.)

Solution 3: LAG & LEAD Functions

This solution does not include a self-join.

Steps:

1. Open SQL Server (T-SQL), go to the same database (AdventureWorks2012), and create a new query.

We want to dynamically compare one record against another. Instead of doing a self-join, the query is told to look at the record above or below and pull that value in. It is almost like a lookup, but it doesn't require a join; it just requires you to tell it how to sort the data, and it dynamically goes up or down and pulls the field you want into the current record. This is where the LAG and LEAD functions step in.

To go back a record, we call the LAG function. Within the LAG function, we must define the field we want to pull in (order date) and how many records back we want to go (1). Next, we group with PARTITION BY and then ORDER BY the chosen field, either ascending or descending; adding further ordering fields makes the "lookup" more distinct. If we want to compare against future dates, we use LEAD instead. We can also pull in the Total Due and calculate the previous Total Due, which requires a subquery; to get rid of the first NULL, filter it out in the outer query, pull in everything from t2.*, and then calculate the differences (DATEDIFF, etc.). (The original post showed each query variation and its result set in screenshots.)

As for getting this entire query into Sisense, we want to create a view.
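Because the screenshots that carried the actual query text are not reproduced here, the following is a hedged reconstruction, in T-SQL (SQL Server 2012+), of the kind of query the steps above describe, using the standard AdventureWorks Sales.SalesOrderHeader columns. Treat it as a sketch of the technique rather than the post's exact query:

-- LAG pulls the previous row's value within each customer, ordered by
-- order date; no self-join is required.
SELECT
    soh.CustomerID,
    soh.OrderDate,
    LAG(soh.OrderDate, 1) OVER (PARTITION BY soh.CustomerID ORDER BY soh.OrderDate) AS PrevOrderDate,
    DATEDIFF(day,
             LAG(soh.OrderDate, 1) OVER (PARTITION BY soh.CustomerID ORDER BY soh.OrderDate),
             soh.OrderDate) AS DaysBetweenOrders,
    soh.TotalDue,
    soh.TotalDue
      - LAG(soh.TotalDue, 1) OVER (PARTITION BY soh.CustomerID ORDER BY soh.OrderDate) AS DiffOrderAmount
FROM Sales.SalesOrderHeader AS soh;

Swapping LAG for LEAD compares each order against the following one instead of the previous one. Wrapping a statement like this in a view on the source database is what then lets Sisense import the finished result as a single table.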
Using answer to a calculation within another calculation

I'm looking to use the answer from one calculation in another calculation on a separate widget with a different filter setup. This is my first calculation; it counts how many sales we have prior to drafting for each model and divides by the total number of sales. This then shows the percent sold prior to drafting for each of our models:

(count([UniqueJob]), [SoldPriorToDrafting], [Years in FinalSaleDate]) / (count([UniqueJob]), [Years in FinalSaleDate])

The problem is I don't want the calculation to run first on the new widget; I just need to use the number from the pivot table above, as the other widget shows each individual job within the model. The calculation doesn't work when looking at it in this view.