Unable to convert Mondrian 4 schema file to Mondrian 3.x schema file

We have a Mondrian 4 schema file (a cube .xml file) that was created with Mondrian 4, but the Mondrian 4 Schema Workbench (a beta version) is currently not available. We are now using the stable version of Mondrian Schema Workbench (3.6.1), so we want to read and modify the Mondrian 4 schema file in Schema Workbench 3.6.1.
We tried the IvySE plugin but were unable to succeed.
Is there any way to downgrade the schema file version (i.e. Mondrian 4.0 to Mondrian 3.6.1)?
Is there any adapter/plugin to convert the schema file (i.e. Mondrian 4.0 to Mondrian 3.6.1)?
What we have:
A Mondrian 4 schema file (cube .xml file)
Mondrian 3.6.1 Pentaho Schema Workbench (PSW)
Example code:
<?xml version="1.0" encoding="UTF-8"?>
<Schema name="sales" metamodelVersion="4.0">
  <PhysicalSchema>
    <Table name="sales" />
  </PhysicalSchema>
  <Cube name="Sales">
    <Dimensions>
      <Dimension name="City" key="City">
        <Attributes>
          <Attribute name="City" keyColumn="city" hasHierarchy="false" />
        </Attributes>
        <Hierarchies>
          <Hierarchy name="City" hasAll="true">
            <Level attribute="City" />
          </Hierarchy>
        </Hierarchies>
      </Dimension>
      <Dimension name="Store" key="Store">
        <Attributes>
          <Attribute name="Store" keyColumn="store" hasHierarchy="false" />
        </Attributes>
        <Hierarchies>
          <Hierarchy name="Store" hasAll="true">
            <Level attribute="Store" />
          </Hierarchy>
        </Hierarchies>
      </Dimension>
    </Dimensions>
    <MeasureGroups>
      <MeasureGroup name="Sales" table="sales">
        <Measures>
          <Measure name="Units sold" column="unitssold" aggregator="sum" formatString="#,###" />
        </Measures>
        <DimensionLinks>
          <ForeignKeyLink dimension="City" foreignKeyColumn="city" />
          <ForeignKeyLink dimension="Store" foreignKeyColumn="store" />
        </DimensionLinks>
      </MeasureGroup>
    </MeasureGroups>
  </Cube>
</Schema>
Thanks in advance.

The way to downgrade from 4.0 to 3.6 is to edit the XML manually so that it is compliant with 3.6.
Support in Schema Workbench was dropped around 2014, as far as I remember.
I don't know of any such tool, and I don't expect anyone to spend time creating a tool that converts from newer versions to older ones.
It depends on the actual XML schema you have. In a very simple case, if you don't use any 4.0-only XML features, try editing the metamodel version here:
<Schema name="sales" metamodelVersion="4.0">
Otherwise it depends, and you can try to rewrite the structure manually.
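For the simple schema shown in the question, a hand-converted Mondrian 3.x version might look roughly like the sketch below. This is a sketch under assumptions, not a verified conversion: it treats both dimensions as degenerate (their key columns live on the sales fact table itself, which is the only table in the PhysicalSchema), so no separate dimension tables or foreignKey attributes are declared:

<?xml version="1.0" encoding="UTF-8"?>
<Schema name="sales">
  <Cube name="Sales">
    <Table name="sales" />
    <!-- degenerate dimension: level column read straight from the fact table -->
    <Dimension name="City">
      <Hierarchy hasAll="true">
        <Level name="City" column="city" uniqueMembers="false" />
      </Hierarchy>
    </Dimension>
    <Dimension name="Store">
      <Hierarchy hasAll="true">
        <Level name="Store" column="store" uniqueMembers="false" />
      </Hierarchy>
    </Dimension>
    <Measure name="Units sold" column="unitssold" aggregator="sum" formatString="#,###" />
  </Cube>
</Schema>

Mondrian 3 has no MeasureGroups or PhysicalSchema elements: a cube has a single fact table, and the ForeignKeyLink information collapses into each Dimension's foreignKey attribute (omitted here because the columns already sit on the fact table).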

Related

Running Gatling tests not showing testFile.log on IntelliJ

We have an issue where, whenever our Gatling performance tests are run, the .log file that should be generated at the root of the folder is not there.
This is my whole logback file, if anyone is able to help:
<contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
  <resetJUL>true</resetJUL>
</contextListener>
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
  <!-- <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>INFO</level>
  </filter> -->
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
    <immediateFlush>true</immediateFlush>
  </encoder>
</appender>
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>testFile.log</file>
  <append>false</append>
  <!-- encoders are assigned the type
       ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
  </encoder>
</appender>
<!-- uncomment and set to DEBUG to log all failing HTTP requests -->
<!-- uncomment and set to TRACE to log all HTTP requests -->
<logger name="io.gatling.http.engine.response" level="TRACE" />
<root level="WARN">
  <appender-ref ref="CONSOLE" />
  <appender-ref ref="FILE" />
</root>
Thank you very much.
Update
It seems the issue may be with IntelliJ itself, as we noticed we can see the file when going directly to it in Finder.
Disabling custom plugins should help. It seems one of the configurations was corrupted.
There's a good chance the file is simply not generated where you expect it. Try setting an absolute path instead to verify.
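For example, pointing the FILE appender at an absolute path makes the log location independent of whichever working directory the tests happen to run from. The directory below is purely illustrative:

<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <!-- absolute path, so the file no longer depends on the process's working directory -->
  <file>/tmp/gatling/testFile.log</file>
  <append>false</append>
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
  </encoder>
</appender>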

What do you put in RegistrySpec.xml for an IzPack installation to set the Size field

This is my XML file for the installer:
<izpack:registry version="5.0"
                 xmlns:izpack="http://izpack.org/schema/registry"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://izpack.org/schema/registry http://izpack.org/schema/5.0/izpack-registry-5.0.xsd">
  <pack name="UninstallStuff">
    <!-- Special "pack"; if not defined, an uninstall key will be generated automatically -->
    <!-- The variable $UNINSTALL_NAME can only be used if CheckedHelloPanel is used,
         because that is where the variable is declared. With that variable it is
         possible to install more than one instance of the product on one machine,
         each with a unique uninstall key. -->
    <value name="DisplayName"
           keypath="SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$APP_NAME $APP_VER"
           root="HKLM"
           string="$APP_NAME" />
    <value name="DisplayVersion"
           keypath="SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$APP_NAME $APP_VER"
           root="HKLM"
           string="$APP_VER" />
    <value name="UninstallString"
           keypath="SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$APP_NAME $APP_VER"
           root="HKLM"
           string="&quot;$JAVA_HOME\bin\javaw.exe&quot; -jar &quot;$INSTALL_PATH\uninstaller\uninstaller.jar&quot;" />
    <value name="DisplayIcon"
           keypath="SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$APP_NAME $APP_VER"
           root="HKLM"
           string="$INSTALL_PATH\icon\uninstallericon.ico" />
    <value name="Publisher"
           keypath="SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$APP_NAME $APP_VER"
           root="HKLM"
           string="opname" />
  </pack>
</izpack:registry>
I am getting the publisher name, but I am not able to get the Size field value. How can I add the Size field in Add/Remove Programs?
I guess you've probably found the solution by now (or given up), as this question is over three years old, but I have found the solution. It doesn't seem to be documented on the IzPack site, but digging through the schema and the registry settings of other installed programs reveals the answer.
Application size is stored in the registry as a 32-bit DWORD value under the key "EstimatedSize", holding the application's size in KB. For example, for a 100 MB (= 102400 KB) application, your configuration would look like the following:
<value name="EstimatedSize"
       keypath="SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$APP_NAME $APP_VER"
       root="HKLM"
       dword="102400" />
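As a quick check on the arithmetic (the value is in KB, not bytes): a hypothetical 250 MB application would use 250 × 1024 = 256000:

<value name="EstimatedSize"
       keypath="SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$APP_NAME $APP_VER"
       root="HKLM"
       dword="256000" /> <!-- 250 MB expressed in KB; illustrative value -->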

Cube in Analysis View not showing in Saiku Analytics

I installed the Saiku plugin 2.5 for Pentaho 4.8.
I followed the instructions here - extracted Saiku to biserver-ce\pentaho-solutions\system.
I then followed the instructions in the readme file:
delete the following JAR files from saiku/lib/
- mondrian*.jar, olap4j*.jar, eigenbase*.jar (should be 1 mondrian, 2 olap4j, 3 eigenbase jar files)
- open saiku/plugin.spring.xml and remove the following line (about line #33):
......
<property name="datasourceResolverClass" value="org.saiku.plugin.PentahoDataSourceResolver" />
.....
restart your server or use the plugin adapter refresh in http://localhost:8080/pentaho/Admin
That's it!
I created a cube using Schema Workbench, a very simple cube:
<Schema name="S1">
  <Cube name="Scott1" visible="true" cache="true" enabled="true">
    <Table name="EMP" schema="SCOTT" alias="" />
    <Dimension type="StandardDimension" visible="true" foreignKey="DEPTNO" name="Departments">
      <Hierarchy name="Name" visible="true" hasAll="true">
        <Table name="DEPT" schema="SCOTT" alias="" />
        <Level name="name" visible="true" column="DNAME" uniqueMembers="false" />
      </Hierarchy>
    </Dimension>
    <Measure name="employees" column="EMPNO" aggregator="count" visible="true" />
    <Measure name="Avg Salary" column="SAL" aggregator="avg" visible="true" />
  </Cube>
</Schema>
Now, I was able to publish the cube and view it in the Analysis View. The problem is that I can't see it in the Saiku Analytics window; there is nothing in the cube selection drop-down.
So I tried several things (some of them mentioned in this post):
Restarted my BI server.
Flushed the Mondrian cache.
Moved my schema XML file to a new folder named cube (pentaho-solutions\Haki\cube).
Moved my entry to the top of the resources list in datasources.xml.
Nothing.
I would appreciate any guidance.
Windows 7, Pentaho 4.8 stable build 5, Saiku plugin 2.5, Oracle 10g.
Try these two things:
Check the logs to see if there are any problems with the cubes.
Refresh the cube in Saiku. It seems that the Saiku plugin has its own cache.
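It is also worth double-checking the catalog entry itself. In Pentaho 4.x, a Catalog element in pentaho-solutions/system/olap/datasources.xml usually looks something like the sketch below; the catalog name, DataSource name, and solution path here are illustrative and must match your own setup:

<Catalog name="S1">
  <!-- DataSource must name a connection the BI server knows about -->
  <DataSourceInfo>Provider=mondrian;DataSource=MyOracleDS</DataSourceInfo>
  <!-- solution-relative path to the published schema file -->
  <Definition>solution:Haki/cube/S1.xml</Definition>
</Catalog>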
There is an unofficial fix for this. Note this might break things, particularly if you are using MongoDB.
https://github.com/buggtb/pentaho-mondrian4-plugin/blob/master/utils/EEOSGILIBS-0.1.zip
Grab that and unzip it. Then copy the mondrian jar out of pentaho-solutions/system/osgi/bundles and save it somewhere, in case this breaks everything.
Then copy the jar from that zip file into the same directory, remove the contents of pentaho-solutions/system/osgi/cache/, and restart the server.
You should now be able to see your EE data sources. Thanks to TB for this solution.

Datanucleus schema generation ignores "inheritance strategy=" directive

I'm working with the DataNucleus tutorial application for JDO, specifically this one.
Regardless of which inheritance strategy I try, the table layout is the same. I would like two tables, one for PRODUCT and one for BOOK, but using the configuration below I only get the PRODUCT table, with columns for both class Product and class Book.
<class name="Product" identity-type="sequence">
  <inheritance strategy="complete-table"/>
  <field name="name">
    <column name="PRODUCT_NAME" length="100" jdbc-type="VARCHAR"/>
  </field>
  <field name="description">
    <column length="255" jdbc-type="VARCHAR"/>
  </field>
</class>
<class name="Book" identity-type="sequence">
  <field name="author">
    <column length="40" jdbc-type="VARCHAR"/>
  </field>
  <field name="isbn">
    <column length="20" jdbc-type="CHAR"/>
  </field>
  <field name="publisher">
    <column length="40" jdbc-type="VARCHAR"/>
  </field>
</class>
The directory structure is exactly as in the tutorial, as is the build.xml. I have tried generating the schema via both the Ant task and the command line.
I use the sequence of commands:
ant clean
ant compile
ant enhance
ant createschema
The schema is generated, but not as the DataNucleus documentation suggests it should be with inheritance strategy "complete-table".
My target database is PostgreSQL 8.4 running on Ubuntu 10.04 if that matters.
Anyone else run into this issue and found a solution?
To answer my own question:
In the DataNucleus tutorial download, the given build.xml file has a "createschema" target like:
<target name="createschema">
  ...
  <schematool ...>
    <fileset dir="${basedir}/target/classes">
      <include name="**/*.class"/>
    </fileset>
    ...
  </schematool>
</target>
It should be changed to include all .jdo files as shown below:
<target name="createschema">
  ...
  <schematool ...>
    <fileset dir="${basedir}/target/classes">
      <include name="**/*.class"/>
      <include name="**/*.jdo"/>
    </fileset>
    ...
  </schematool>
</target>
In addition, the package-hsql.orm file needs to be renamed to package-hsql.jdo and its header needs to be changed to:
<?xml version="1.0"?>
<!DOCTYPE jdo PUBLIC
    "-//Sun Microsystems, Inc.//DTD Java Data Objects ORM Metadata 2.0//EN"
    "http://java.sun.com/dtd/orm_2_0.dtd">
<jdo>
...
</jdo>
Notice that the DOCTYPE and root element were changed: the root element was "orm" and is now "jdo".
Once I made these changes, the schema generation tool followed the "inheritance strategy" directive.
For my custom application, I had a similar issue, and it worked fine after making the changes in the header of the jdo file. I am using version 3.2.9.
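Putting the two fixes together, a minimal corrected package-hsql.jdo might look like the sketch below. The package name is an assumption based on the tutorial layout; adjust it, and the abbreviated Book fields, to your own metadata:

<?xml version="1.0"?>
<!DOCTYPE jdo PUBLIC
    "-//Sun Microsystems, Inc.//DTD Java Data Objects ORM Metadata 2.0//EN"
    "http://java.sun.com/dtd/orm_2_0.dtd">
<jdo>
  <package name="org.datanucleus.samples.jdo.tutorial">
    <class name="Product" identity-type="sequence">
      <!-- complete-table: every concrete class gets its own full table -->
      <inheritance strategy="complete-table"/>
      <field name="name">
        <column name="PRODUCT_NAME" length="100" jdbc-type="VARCHAR"/>
      </field>
      <field name="description">
        <column length="255" jdbc-type="VARCHAR"/>
      </field>
    </class>
    <class name="Book" identity-type="sequence">
      <field name="author">
        <column length="40" jdbc-type="VARCHAR"/>
      </field>
      <!-- isbn and publisher fields as in the question -->
    </class>
  </package>
</jdo>

With this in place, createschema should emit two tables, PRODUCT and BOOK, with the BOOK table repeating the inherited Product columns.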

How can I apply all the sql files in a directory?

We currently run a specific SQL script as part of our Ant deploy process.
What we'd like to do is change this so we run all SQL scripts in a given directory. We can't figure out how to get this directory listing in Ant and iterate through the list and run each SQL script. Does anyone know how to do this?
Note: we currently run the SQL file by using the Ant exec task that runs "call sqlplus ${id}/${pw}@${db.instance} @${file}"
I would recommend using the Ant SQL task. You can then specify the scripts with the following:
<sql driver="org.database.jdbcDriver"
     url="jdbc:database-url"
     userid="sa"
     password="pass">
  <path>
    <fileset dir=".">
      <include name="data*.sql"/>
    </fileset>
  </path>
</sql>
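One thing to watch: the JDBC driver class has to be on the task's classpath, otherwise the build fails before any script runs. Here is a sketch with the driver jar supplied explicitly; the driver class and jar path are placeholders for whatever your database uses:

<sql driver="org.database.jdbcDriver"
     url="jdbc:database-url"
     userid="sa"
     password="pass"
     classpath="lib/jdbc-driver.jar">
  <path>
    <fileset dir=".">
      <include name="data*.sql"/>
    </fileset>
  </path>
</sql>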
I am doing basically the same thing on an Oracle database, but using sqlplus and the apply task. This allows the scripts to contain DDL statements.
This would obviously only work where you have a command-line program to use.
I am using Ant 1.8.2 and ant-contrib.
I have defined a few macros for use (see below) and then just call them like:
<compile_sql connectstring="${db.connect_string}">
  <filelist dir="${db_proc.dir}" files="specific_file.sql" />
  <fileset dir="${db_proc.dir}" includes="wildcard*.pks" />
  <fileset dir="${db_proc.dir}" includes="wildcard*.pkb" />
</compile_sql>
The macros are as follows:
<macrodef name="compile_sql">
  <attribute name="connectstring" />
  <attribute name="dirtostart" default="" />
  <attribute name="arg1" default="" />
  <element name="sqllist" implicit="true" description="fileset/filelist of sql to run" />
  <sequential>
    <check_missing_files>
      <sqllist/>
    </check_missing_files>
    <apply executable="${sqlplus.exe}" failonerror="true" verbose="true"
           skipemptyfilesets="true" ignoremissing="false" dir="@{dirtostart}">
      <arg value="-L"/>
      <arg value="@{connectstring}"/>
      <!-- the @ prefix makes sqlplus execute each source file as a script -->
      <srcfile prefix="@" />
      <sqllist/>
      <arg value="@{arg1}"/>
      <redirector>
        <globmapper id="sqlout.mapper" from="*" to="*.out"/>
      </redirector>
    </apply>
  </sequential>
</macrodef>
and
<macrodef name="check_missing_files">
  <element name="checkfilelist" implicit="true" description="filelist of files to check for existence" />
  <sequential>
    <restrict id="missing.files" xmlns:rsel="antlib:org.apache.tools.ant.types.resources.selectors">
      <resources>
        <checkfilelist/>
      </resources>
      <rsel:not>
        <rsel:exists/>
      </rsel:not>
    </restrict>
    <fail message="These files are missing: ${ant.refid:missing.files}">
      <condition>
        <length string="${ant.refid:missing.files}" when="greater" length="0" />
      </condition>
    </fail>
  </sequential>
</macrodef>