Re: WE NEED THE ABILITY TO DO A LEFT JOIN WITHIN THE SISENSE DATA MODEL
ehannscott, thanks for sharing these alternatives; they will help us see what works for us. I am leaving a related post here that offers other options: https://community-old.sisense.com/hc/en-us/community/posts/360000477234-Left-Joining-Tables-in-Cube

Re: WE NEED THE ABILITY TO DO A LEFT JOIN WITHIN THE SISENSE DATA MODEL
ehannscott, if you come up with an alternative solution to this, please share it. We have the same need.

Re: refresh the schema to a table through the api
For now, the API cannot replicate the refresh button in the graphical interface. One option, though a complicated one, is to open the browser inspector, watch which endpoints the refresh button calls, and replay them; in practice you must already know which columns to add and remove. The conventional approach is to use the available endpoints to update the schema, but again you must know which columns changed.

refresh the schema to a table through the api
Is it possible to refresh the schema of a table through the API?

change schema in datamodel and dataset
When a data model that uses the Databricks connector is cloned and the schema is updated, the change is not reflected in the data. This happens because the schema name remains embedded in the column references, but it cannot be updated because the table PATCH requests do not contain the schema name.

Re: live cubes - databricks
Apparently this was solved in a newer version. The version I was using was L2021.9.0.57; Version L2022.8.0.100 allows creating Databricks live cubes.

live cubes - databricks
Does the native Sisense Databricks connector work with live cubes? It currently works with ElastiCubes, but when I try to create a live cube, the Databricks option does not appear.
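The API-based schema update described above can be sketched in Python. This is a minimal sketch, not a confirmed workflow: the PATCH endpoint path shown in the comment and the payload shape are assumptions inferred from the v2 data-models API, and you must already know which columns changed, as the reply notes. Verify the route and body against your own instance (for example via the browser inspector, as suggested above) before relying on it.

```python
# Hedged sketch: build the PATCH body that updates a table's column list in a
# Sisense data model. The payload shape is an assumption; the endpoint path in
# the comment below is also an assumption -- confirm both on your instance.

def build_columns_patch(columns):
    """Build a PATCH body listing the columns the table should end up with.

    `columns` is a list of (name, type) tuples. As noted in the reply, the
    API route requires you to know in advance which columns to add or remove.
    """
    return {
        "columns": [
            {"id": name, "name": name, "type": dtype, "hidden": False}
            for name, dtype in columns
        ]
    }

# The request itself would look roughly like (path is an assumption):
#   PATCH {base}/api/v2/datamodels/{model_id}/schema/datasets/{dataset_id}/tables/{table_id}
# with headers {"Authorization": "Bearer <token>"} and the JSON body above.

payload = build_columns_patch([("order_id", "bigint"), ("order_date", "date")])
print(len(payload["columns"]))
```

Replaying the exact requests observed in the inspector is likely safer than hand-writing the payload, since the real body may carry extra fields this sketch omits.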
Solved — Re: how to increase the chunk size
Thanks for your help. Yes, I found the option to increase the chunks, but something strange happens: I do not see the change reflected in the builds. We are going to reinitialize the pods.

Re: how to increase the chunk size
Our source is a table in Databricks. We do notice some network peaks in the cluster, but no other consumption peaks. Currently we partition this table only by one column that we use in the queries; it might be good to add other partitions even if we do not use them in the queries.

how to increase the chunk size
I have a table with 1,454,334,099 records, and the build downloads it in chunks of 100,000; in one hour it downloads about 100,000,000 records. How can I increase the chunk size, or what is the best alternative to decrease build times?
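To put the numbers from the question in perspective, a quick back-of-the-envelope estimate, assuming the observed download rate stays roughly linear for the whole table:

```python
# Rough build-time estimate from the figures quoted in the post.
total_rows = 1_454_334_099   # rows in the source table (from the post)
rows_per_hour = 100_000_000  # observed download rate (from the post)

hours = total_rows / rows_per_hour
print(round(hours, 1))  # -> 14.5, i.e. ~14.5 hours for a full build at this rate
```

This is why the thread focuses on increasing the chunk size and on partitioning the Databricks table: either a higher per-hour rate or an incremental/accumulative build is needed to bring the build under a practical window.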