Check our collection of guides and frequently asked questions.
After the splash screen is finished loading, XpertBI does not appear. It seems to be running in the background. The SQL Server activity monitor shows a suspended task which is running the XBI__GP_Create_All_DataBases stored procedure. The task is waiting for locks. After killing the suspended task, XpertBI appears, but it is not possible to load executables or process data.
Answer:
The reason is a view or query which holds locks on a table that the stored procedure is trying to modify. Release all locks and try again.
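To see which sessions currently hold object-level locks, a minimal sketch using the dynamic management views (run it in the context of the affected database; this is an illustration, not an Xpert BI utility):
-- Sessions holding object-level locks in the current database
SELECT request_session_id, request_mode, request_status,
    OBJECT_NAME(resource_associated_entity_id) AS locked_object
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
    AND resource_database_id = DB_ID()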
Answer:
The machine that runs the executable does not have the Visual C++ 2013 runtime installed; the missing DLL is msvcr120.dll. Install the runtime from https://www.microsoft.com/en-us/download/details.aspx?id=40784
After an Xpert BI management database <PREFIX>_XBI_MGMT_DEV is restored from a database backup, the following error occurs when using parts of Xpert BI.
Msg 10314, Level 16, State 11, Procedure XBI_Tr_Get_Executables_Foundation, Line 380 An error occurred in the Microsoft .NET Framework while trying to load assembly id 65537. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileLoadException: Could not load file or assembly ‘bibuilders.xpertbi.sqlclr, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null’ or one of its dependencies. An error relating to security occurred. (Exception from HRESULT: 0x8013150A)
System.IO.FileLoadException: at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean forIntrospection) at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.Load(String assemblyString)
Answer:
Not all database settings are preserved in a database backup; for example, the database owner is changed to the user who restored the database, and the TRUSTWORTHY setting is set to OFF. Run the following script on the management database:
USE [<PREFIX>_XBI_MGMT_DEV]
EXEC sp_changedbowner 'sa'
ALTER DATABASE [<PREFIX>_XBI_MGMT_DEV] SET TRUSTWORTHY ON
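To verify the result, a quick check against sys.databases (same placeholder database name):
SELECT name, SUSER_SNAME(owner_sid) AS owner_name, is_trustworthy_on
FROM sys.databases
WHERE name = '<PREFIX>_XBI_MGMT_DEV'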
Answer:
When it comes to performance:
If the solution works *well enough* with a view-on-view-on-view type design (i.e. a virtualised solution), the batch processing does not have to load any data. When new data is available in the ODS (or in other main sources), it is immediately available in the solution without running an update. This is an advantage because processing time decreases (only the source data must be updated).
However, if there is a lot of complex logic – such as large joins and many business rules – a virtualised solution may not work. A user querying the database directly may wait a very long time (or the query may never finish), or SSAS processing may take a very long time when queries run against ELM or DM. In such cases one or more views may have to be materialised into an ITS table, which means that the query result is loaded into a table. It is also possible to add indexes as a post process (this option is available in the ITS configuration).
It is not always necessary to materialise the largest, most complex view; it is often enough to materialise look-up views so that the PK becomes indexed. The problem is not always the view that is perceived as the slowest; it is often the view that comes before it, or even the one before that. This can be investigated by examining the execution plan. N.B. The execution plan may differ for the same query on different servers, so a query can run fine on one server (for example a test server) and not on another (for example a prod server). This can make troubleshooting difficult.
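As an illustration of the index-as-post-process idea, a minimal sketch of the kind of index that could be added to a materialised look-up table (the table and column names here are hypothetical):
-- Hypothetical materialised look-up table; index the PK column used by downstream joins
CREATE UNIQUE CLUSTERED INDEX IX_ITS_Customer_Lookup_PK
ON dbo.ITS_Customer_Lookup (CustomerKey)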
One of the main purposes of a data warehouse is the re-modelling of a data model in order to:
The amount of modelling that is required to do this varies from data source to data source, so the extent to which views should be materialised into ITS tables will vary from solution to solution.
Export Database Metadata fails with the following error:
Ex_DSV_Create_MetadataDataBase An error occurred while executing the command definition. See the inner exception for details. Cannot create file ‘C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA\the_db_name.mdf’ because it already exists. Change the file path or the file name, and retry the operation. CREATE DATABASE failed. Some file names listed could not be created. Check related errors. User does not have permission to alter database ‘the_db_name’, the database does not exist, or the database is not in a state that allows access checks. ALTER DATABASE statement failed. Database ‘the_db_name’ does not exist. Make sure that the name is entered correctly.
Answer:
Export Metadatabase does not work when a previously exported metadatabase has been renamed. Rename the previously exported database back to its original name, which follows the format: SolutionPrefix_MD_SourceName_SourceVersion
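If the database has been renamed, it can be renamed back with a statement like the following (both names here are hypothetical):
-- Rename the database back to its original exported name
ALTER DATABASE [Renamed_Metadatabase] MODIFY NAME = [SolutionPrefix_MD_SourceName_SourceVersion]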
Answer:
A possible reason is that some characters in the table name have changed from upper case to lower case or vice versa. This is picked up and stored in the Datasource_columns table, but not in the Datasource_Tables table. The solution is to delete the table in Import Source Metadata, save, and then re-import the table. Note that doing this removes all column names as well, so export the metadata to an Excel spreadsheet before deleting the table and use the spreadsheet to populate the column names after re-importing the table.
If this solution is not viable, it is possible to correct the name directly in the Datasource_Tables table. NOTE: it is not recommended to edit the Xpert BI management database directly as this will void the standard support agreement. Contact support@bi-builders.com to get help.
Import Source Metadata fails with the following error:
SqlException: Cannot insert the value NULL into column ‘SqlScript’, table ‘PrefixName_XBI_MGMT_DEV.dbo.ImportMetadata_Queries’; column does not allow nulls. INSERT fails.
The statement has been terminated.
Answer:
This is most likely due to a source or connection type that requires additional information, such as Schema or MetadataVersion.
Answer:
Xpert BI can use either the SID or the Service Name when connecting to an Oracle database server. Oracle recommends remote connections via the Service Name. For non-clustered/standalone Oracle instances, the SID and Service Name often have the same value. For Exadata/Oracle RAC, the Service Name is an alias for all SIDs participating in the cluster.
For Import Source Metadata you can connect directly to one cluster node using the SID. Failover, load balancing and so on will not be used, but that does not matter when retrieving metadata.
Answer:
Query Source Data does not return all rows when querying a table with a case-sensitive primary key. The underlying data structure is a .NET DataTable, which supports adding/updating rows based on the PK. The DataTable ignores case and updates “duplicate” rows in place instead of showing two rows with different casing.
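To check whether a source table contains key values that differ only in casing, a minimal sketch (table and column names are hypothetical; use a case-sensitive collation that matches your data):
SELECT UPPER(KeyColumn) AS KeyUpper
    , COUNT(DISTINCT KeyColumn COLLATE Latin1_General_CS_AS) AS CaseVariants
FROM dbo.SourceTable
GROUP BY UPPER(KeyColumn)
HAVING COUNT(DISTINCT KeyColumn COLLATE Latin1_General_CS_AS) > 1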
The Data Source Processing pane shows the “loading icon” for 30 seconds, then a timeout error message pops up and a log entry is written to the ErrorLog in the database. The message is: “XBI__GP_Get_Executables An error occurred while executing the command definition. See the inner exception for details. Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.”
Answer:
The timeout is caused by a block/lock in XBI__GP_Create_All_DataBases. Use the SQL Server Activity Monitor to find which task is causing the lock, and kill the task if appropriate to release the lock.
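If you prefer a query over the Activity Monitor, a minimal sketch that lists blocked requests and their blockers (requires VIEW SERVER STATE permission; kill a session only if you are sure it is safe to terminate):
-- Blocked requests and the session blocking each of them
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_resource
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0
-- Then, if appropriate: KILL <session_id>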
Data Source Processing fails with the following error message:
XBI__GP_Get_Executables
An error occurred while executing the command definition. See the inner exception for details.
Could not find database ID [theID], name ‘[theName]’. The database may be offline. Wait a few minutes and try again.
Answer:
This is a typical error message when an “Additional Database” is missing. Go to Transformation and choose “Additional Databases”. Check that all databases in the list exist.
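A quick existence check can also be run directly against sys.databases (replace the names with those from the Additional Databases list); any name not returned is missing:
SELECT name
FROM sys.databases
WHERE name IN ('AdditionalDb1', 'AdditionalDb2')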
Data Source Processing fails (at the ODS append stage) with the following error message: Violation of PRIMARY KEY constraint ‘PK_RMC_1_1_Jobs’. Cannot insert duplicate key in object ‘dbo.ObjectName’. The duplicate key value is (XXX).
The statement has been terminated.
TableNameSource: SOURCE
TableNameDestination: [DESTINATION]
Answer:
The reason for this error is a duplicate key in the ODS table. There can be several root causes; check the following:
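As a starting point for the investigation, a minimal sketch that locates the duplicates (table and key column names are hypothetical):
SELECT KeyColumn, COUNT(*) AS cnt
FROM dbo.ODS_TableName
GROUP BY KeyColumn
HAVING COUNT(*) > 1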
Execute Process Group (or Execute Inline Object) fails with the following error message:
Tr_Get_Executables
An error occurred while executing the command definition. See the inner exception for details.
Cannot insert the value NULL into column ‘ColumnName’, table ‘@Table_Columns’; column does not allow nulls. INSERT fails.
The statement has been terminated.
Answer:
The error is probably caused by an Inlined object (Materialized view) that is missing the ORDER BY clause which Xpert BI adds to the view when it is Inlined/Materialized.
Run the following query to find Inlined objects without the ORDER BY clause
(<TheDatabase> should be set to the name of the database containing the Inlined objects, most often the ETL database; <PREFIX>_XBI_MGMT_DEV should be set to the name of the Xpert BI management database).
use <TheDatabase>
GO
SELECT
    CASE WHEN x.dbname IS NULL
        THEN 'Missing order by'
        ELSE 'OK' END AS inline_status
    , oa.*
FROM <PREFIX>_XBI_MGMT_DEV.dbo.Object_All oa
LEFT JOIN (
    SELECT
        db_name() AS dbname
        , o.name
        , sm.*
    FROM sys.sql_modules AS sm
    INNER JOIN sys.objects o
        ON sm.object_id = o.object_id
    WHERE definition LIKE '%ORDER BY%'
) AS x
    ON oa.DatabaseName = x.dbname
    AND oa.ObjectNameUnqualified = x.name
WHERE oa.ObjectNameUnqualified LIKE '%_Inline'
    AND oa.DatabaseName = db_name()
Execute Process Group (or Execute Inline Object) fails with the following error message:
Error#: 1750, Severity: 16, State: 0, Procedure: , Line: 2, Msg: Could not create constraint or index. See previous errors. TableNameDestination: <DestinationTable>
Answer:
The Inlined object named <DestinationTable>_Inline probably has PK duplicates. Correct the view so that no PK duplicates exist.
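To inspect the offending rows before correcting the view, a minimal sketch (the PK column name is hypothetical):
-- Shows the full duplicate rows in the Inlined view
SELECT *
FROM (
    SELECT v.*, COUNT(*) OVER (PARTITION BY v.PKColumn) AS pk_count
    FROM dbo.[<DestinationTable>_Inline] v
) d
WHERE d.pk_count > 1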
Execute Process Group (or Execute Inline Object) fails with the following error message:
ExecuteNonQuery: CommandText property has not been initialized
TableNameSource:
TableNameDestination: <DestinationTable>
Answer:
The error is probably caused by Inline objects being out of sync. Run the following script at a time when the stack is not in use.
USE [<PREFIX>_XBI_MGMT_DEV]
EXEC [XBI_Tr_Create_InlineObjects]
When scheduling XBI_batch.exe to run on a server, the following error is logged:
Tr_Get_Executables
An error occurred while executing the command definition. See the inner exception for details.
XBI_Clr_SQL_Parse: Return: -2146232576
Answer:
Make sure that the correct version of .NET is installed on the machine where the XBI_batch.exe file is running.
Answer:
Yes, Xpert BI supports SAP Router connection strings. The following format can be used:
Application Server host: /H/<ipaddress of sap router>/H/<sap host>
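For example, with hypothetical addresses: /H/192.0.2.10/H/sapapp01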
Data Source Processing (for SAP) fails with the following error message:
The type initializer for ‘Microsoft.Data.SAPClient.Statics’ threw an exception.
Answer:
This can mean that the SAP BizTalk Adapter trial has expired. Install a full version, or try to uninstall and re-install the BizTalk adapter if still in dev/trial mode (both the WCF LOB SDK and the Adapter Pack).
Data Source Processing (for SAP) fails with the following error message:
Input string was not in a correct format.
TableNameSource: DD01L
TableNameDestination: [SAPDD_1_1_DD01L]
at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal)
at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info)
at Microsoft.Data.SAPClient.ReadSelectDataReader.IntFieldExtractor.GetValue(String row)
at System.Data.SqlClient.SqlBulkCopy.GetValueFromSourceRow(Int32 destRowIndex, Boolean& isSqlType, Boolean& isDataFeed, Boolean& isNull)
at System.Data.SqlClient.SqlBulkCopy.ReadWriteColumnValueAsync(Int32 col)
at System.Data.SqlClient.SqlBulkCopy.CopyColumnsAsync(Int32 col, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.CopyRowsAsync(Int32 rowsSoFar, Int32 totalRows, CancellationToken cts, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.CopyBatchesAsyncContinued(BulkCopySimpleResultSet internalResults, String updateBulkCommandText, CancellationToken cts, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.CopyBatchesAsync(BulkCopySimpleResultSet internalResults, String updateBulkCommandText, CancellationToken cts, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternalRestContinuedAsync(BulkCopySimpleResultSet internalResults, CancellationToken cts, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternalRestAsync(CancellationToken cts, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternalAsync(CancellationToken ctoken)
at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServerAsync(Int32 columnCount, CancellationToken ctoken)
at System.Data.SqlClient.SqlBulkCopy.WriteToServer(IDataReader reader)
at BIBuilders.BIStudio.DataProcessor.DataProcessor.BulkWrite(DataTask dataTask, IDataReader dataReader, SqlBulkCopy bulkCopy)
Answer:
When processing data from some SAP tables, a formatting exception may occur. This is caused by data type conversions and a lack of data type validation in SAP, which leads to bad data. To work around or resolve this, either clean up the data in the source, or use Query Source Data to identify the bad data and add filters to avoid loading it.
Use the following query template when using Query Source Metadata:
SELECT * FROM T OPTION 'DisableDataValidation'
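Applied to the table from the error message above, the query would read:
SELECT * FROM DD01L OPTION 'DisableDataValidation'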
For more information see:
https://msdn.microsoft.com/en-us/library/cc185537%28v=bts.10%29.aspx
Data Source Processing (for Oracle) fails with the following error message:
One or more errors occurred.
Decimal's scale value must be between 0 and 28, inclusive.
Parameter name: scale
TableNameSource: T1
TableNameDestination: [dbo].[Ora_1_1_T1]
System.AggregateException: One or more errors occurred. ---> System.ArgumentOutOfRangeException: Decimal's scale value must be between 0 and 28, inclusive.
Parameter name: scale
at System.Decimal..ctor(Int32 lo, Int32 mid, Int32 hi, Boolean isNegative, Byte scale)
at Devart.Data.Oracle.ei.d(Byte[] A_0, Int32 A_1)
at Devart.Data.Oracle.bc.GetValue(Byte[] buffer, Int32 offset, Int32 length)
at Devart.Data.Oracle.r.b(Int32 A_0)
at Devart.Data.Universal.UniDataReader.GetValue(Int32 i)
at System.Data.SqlClient.SqlBulkCopy.GetValueFromSourceRow(Int32 destRowIndex, Boolean& isSqlType, Boolean& isDataFeed, Boolean& isNull)
at System.Data.SqlClient.SqlBulkCopy.ReadWriteColumnValueAsync(Int32 col)
at System.Data.SqlClient.SqlBulkCopy.CopyColumnsAsync(Int32 col, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.CopyRowsAsync(Int32 rowsSoFar, Int32 totalRows, CancellationToken cts, TaskCompletionSource`1 source)
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
at BIBuilders.BIStudio.DataProcessor.DataProcessor.BulkWrite(DataTask dataTask, IDataReader dataReader, SqlBulkCopy bulkCopy)
---> (Inner Exception #0) System.ArgumentOutOfRangeException: Decimal's scale value must be between 0 and 28, inclusive.
Parameter name: scale
at System.Decimal..ctor(Int32 lo, Int32 mid, Int32 hi, Boolean isNegative, Byte scale)
at Devart.Data.Oracle.ei.d(Byte[] A_0, Int32 A_1)
at Devart.Data.Oracle.bc.GetValue(Byte[] buffer, Int32 offset, Int32 length)
at Devart.Data.Oracle.r.b(Int32 A_0)
at Devart.Data.Universal.UniDataReader.GetValue(Int32 i)
at System.Data.SqlClient.SqlBulkCopy.GetValueFromSourceRow(Int32 destRowIndex, Boolean& isSqlType, Boolean& isDataFeed, Boolean& isNull)
at System.Data.SqlClient.SqlBulkCopy.ReadWriteColumnValueAsync(Int32 col)
at System.Data.SqlClient.SqlBulkCopy.CopyColumnsAsync(Int32 col, TaskCompletionSource`1 source)
at System.Data.SqlClient.SqlBulkCopy.CopyRowsAsync(Int32 rowsSoFar, Int32 totalRows, CancellationToken cts, TaskCompletionSource`1 source)<---
Answer:
When processing data from Oracle tables with the NUMBER data type, the decimal scale may be greater than 28. To work around or resolve this, either change the data type to a NUMBER with explicit precision and scale in the source system, or use the Devart connection string option Number Mappings, e.g. Number Mappings="(Number, 0, 0, System.Double)"
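If changing the source is an option, a hypothetical example of pinning precision and scale on the Oracle side (table and column names are made up):
-- Oracle SQL; run against the source database
ALTER TABLE T1 MODIFY (AMOUNT NUMBER(28, 8));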
For more information see:
https://docs.oracle.com/database/121/ODPNT/OracleDecimalStructure.htm#ODPNT5037
Process Data for a Collection fails with the following error message:
The given value of type String from the data source cannot be converted to type nvarchar of the specified target column.
Answer:
The error is probably caused by a varchar value being truncated.
Try increasing the size of the nvarchar column and run again. Use the Analyse Data Length functionality for the Collection you are working with to find the actual maximum sizes for varchar columns.
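If you want to inspect the lengths manually instead, a minimal sketch (table and column names are hypothetical):
SELECT MAX(LEN(TextColumn)) AS max_length
FROM dbo.CollectionTable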
Process Data for a Flat File Collection fails with the following error message:
You are trying to modify 16384 cells while ExcelFile.GroupMethodsAffectedCellsLimit is set to 5000. If you want to perform this operation, you need to change ExcelFile.GroupMethodsAffectedCellsLimit accordingly.
Answer:
This error can be caused by more than 5000 cells having been merged in Excel. If a user misclicks and merges, for example, an entire row, 16384 cells are merged. The solution is to select the cells, unmerge them, and then merge correctly.
When trying to process an XML Collection Table, I get an error saying: “The given value of type String from the data source cannot be converted to type datetime of the specified target column.” What could cause this?
Answer:
Check the input file(s). The datetime columns cannot have empty elements: empty elements are read as empty strings, which are not valid datetime values.
If this is the case, either change the datetime column(s) to the nvarchar data type (and optionally convert to datetime in SQL afterwards) or use an XSLT Transform to remove the empty datetime elements (they will then become NULL, which is valid).
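For the first option, a minimal sketch of the follow-up conversion in SQL (table and column names are hypothetical; NULLIF is needed because an empty string would otherwise convert to 1900-01-01 rather than NULL):
SELECT TRY_CONVERT(datetime, NULLIF(DateColumn, '')) AS DateValue
FROM dbo.XmlCollectionTable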
Answer:
The view you are trying to Inline cannot contain “SELECT *”; it must have an expanded column list, i.e. “SELECT column1, column2 FROM MyObject”.
Answer:
Check the connection string for extraction and compare the data server name with the one shown when opening “Configure Xpert BI dB” under the Administration tab. The server name for the connection to the Collection table must be the same as for the rest of the Xpert BI stack.
Answer:
There is a bug in the Microsoft MDS stored procedure used to generate SCD Subscription views (we have reported it to Microsoft).
Duplicates (on RevisionID) can occur in a parent-child hierarchy (in an SCD subscription view) if the parent is updated at the same time as the child is entered/updated. The autogenerated MDS view joins using BETWEEN, but it should use >= and <.
See the SQL below for a fix for a specific view; the corrected join condition is in the Parent_HS join:
ALTER VIEW [mdm].[Process_SCD]
/*WITH ENCRYPTION*/
AS
SELECT
T.ID AS RevisionID
,LS.ListOption AS State
,T.EN_ID AS ID
,T.MUID AS MUID
,T.[Name] AS [Name]
,T.[Code] AS [Code]
,T.[uda_9_178] AS [Level]
,COALESCE([Parent].Code, [Parent_HS].Code) AS [Parent_Code]
,COALESCE([Parent].Name, [Parent_HS].Name) AS [Parent_Name]
,T.[uda_9_179] AS [Parent_ID]
,T.[uda_9_1104] AS [ProcessType]
,T.[uda_9_1105] AS [ProcessArea]
,T.[uda_9_1106] AS [Owner]
,T.[uda_9_1107] AS [Responsible]
,T.EnterDTM AS EnterDateTime
,UE.UserName AS EnterUserName
,T.LastChgDTM AS LastChgDateTime
,UC.UserName AS LastChgUserName
FROM mdm.[tbl_3_9_EN_HS] AS T
LEFT JOIN mdm.[tbl_3_9_EN] AS [Parent]
ON [Parent].ID = T.[uda_9_179] AND [Parent].Version_ID = T.Version_ID
LEFT JOIN mdm.[tbl_3_9_EN_HS] AS [Parent_HS]
ON [Parent_HS].EN_ID = T.[uda_9_179] AND [Parent_HS].Version_ID = T.Version_ID AND (T.EnterDTM >= [Parent_HS].EnterDTM and T.EnterDTM < [Parent_HS].LastChgDTM)
LEFT JOIN mdm.tblList LS
ON LS.OptionID = T.Status_ID AND LS.ListCode = 'lstStatus'
LEFT JOIN mdm.tblUser UE
ON T.EnterUserID = UE.ID
LEFT JOIN mdm.tblUser UC
ON T.LastChgUserID = UC.ID
INNER JOIN mdm.tblModelVersion AS V
ON V.ID = T.Version_ID
WHERE V.VersionFlag_ID = 2
UNION ALL
SELECT
T.LastChgTS AS RevisionID
,LS.ListOption AS State
,T.ID AS ID
,T.MUID AS MUID
,T.[Name] AS [Name]
,T.[Code] AS [Code]
,T.[uda_9_178] AS [Level]
,[Parent].Code AS [Parent_Code]
,[Parent].Name AS [Parent_Name]
,T.[uda_9_179] AS [Parent_ID]
,T.[uda_9_1104] AS [ProcessType]
,T.[uda_9_1105] AS [ProcessArea]
,T.[uda_9_1106] AS [Owner]
,T.[uda_9_1107] AS [Responsible]
,T.LastChgDTM AS EnterDateTime
,UC.UserName AS EnterUserName
,N'9999-12-31 23:59:59.998' AS LastChgDateTime
,NULL AS LastChgUserName
FROM mdm.[tbl_3_9_EN] AS T
LEFT JOIN mdm.[tbl_3_9_EN] AS [Parent]
ON [Parent].ID = T.[uda_9_179] AND [Parent].Version_ID = T.Version_ID
LEFT JOIN mdm.tblList LS
ON LS.OptionID = T.Status_ID AND LS.ListCode = 'lstStatus'
LEFT JOIN mdm.tblUser UC
ON T.LastChgUserID = UC.ID
INNER JOIN mdm.tblModelVersion AS V
ON V.ID = T.Version_ID
WHERE V.VersionFlag_ID = 2;
GO
An option to fix this permanently is to alter the procedure that generates the view (MDS runs the procedure automatically to refresh the views on various occasions). The procedure is called udpCreateType2Views; the change is the same as the one shown in the Parent_HS join above.
Answer:
Usually the cause of this is an incorrect column mapping. Check the column name capitalization in the source view as it could be mapped with a different capitalization in MDS. Also note that the error is given for each row in the source.
Answer:
This error is caused by out-of-sync Fact Table Dependencies. Check that there are no warnings and that all dependencies are set up correctly. All active dependencies must have a valid dependent dimension and column name. When everything is fixed, reload Solution Explorer and try to publish again.
Some Inline views may vary in execution time, which in turn may lead to scheduling challenges.
The following script lists Inline objects ordered by their execution time variance over the last 10 days.
NOTE: Remember to set the correct database name in [STACKNAME_XBI_MGMT_DEV]
SELECT max([TaskExecutionTime]) AS maxExecTime
    , min([TaskExecutionTime]) AS minExecTime
    , max([TaskExecutionTime]) - min([TaskExecutionTime]) AS varExecTime
    , count(*) AS [#runs]
    , [ProcessName]
    , [GroupName]
    , [DatabaseName]
FROM [STACKNAME_XBI_MGMT_DEV].[dbo].[PerformanceLog]
WHERE ProcessCategory = 'Transformation'
    AND [date] BETWEEN getdate() - 10 AND getdate()
GROUP BY [GroupName]
    , [ProcessName]
    , [DatabaseName]
ORDER BY varExecTime DESC