I installed the Saiku plugin 2.5 for Pentaho 4.8.
I followed the instructions here and extracted Saiku to biserver-ce\pentaho-solutions\system.
I then followed the instructions in the readme file:
Delete the following JAR files from saiku/lib/:
- mondrian*.jar, olap4j*.jar, eigenbase*.jar (there should be 1 mondrian, 2 olap4j, and 3 eigenbase JAR files)
- Open saiku/plugin.spring.xml and remove the following line (about line #33):
......
<property name="datasourceResolverClass" value="org.saiku.plugin.PentahoDataSourceResolver" />
.....
Restart your server or use the plugin adapter refresh in http://localhost:8080/pentaho/Admin.
That's it!
I created a very simple cube using Schema Workbench:
<Schema name="S1">
<Cube name="Scott1" visible="true" cache="true" enabled="true">
<Table name="EMP" schema="SCOTT" alias="">
</Table>
<Dimension type="StandardDimension" visible="true" foreignKey="DEPTNO" name="Departments">
<Hierarchy name="Name" visible="true" hasAll="true">
<Table name="DEPT" schema="SCOTT" alias="">
</Table>
<Level name="name" visible="true" column="DNAME" uniqueMembers="false">
</Level>
</Hierarchy>
</Dimension>
<Measure name="employees" column="EMPNO" aggregator="count" visible="true">
</Measure>
<Measure name="Avg Salary" column="SAL" aggregator="avg" visible="true">
</Measure>
</Cube>
</Schema>
Now, I was able to publish the cube and view it in the Analysis View. The problem is that I can't see it in the Saiku Analysis window. There is nothing in the cube selection drop-down.
So I tried several things (some of them mentioned in this post):
Restarted my bi server.
Flushed mondrian cache.
Moved my schema XML file to a new folder, pentaho-solutions\Haki\cube.
Moved my entry to the top of the resources list in datasources.xml (the shape of that entry is sketched below).
Nothing.
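For reference, a catalog entry in datasources.xml has roughly this shape (the JNDI data source name and file name below are placeholders, not my actual values):
<Catalogs>
  <Catalog name="S1">
    <!-- placeholder JNDI data source; whatever Oracle connection the cube was published against -->
    <DataSourceInfo>Provider=mondrian;DataSource=MyOracleDS</DataSourceInfo>
    <!-- path relative to pentaho-solutions -->
    <Definition>solution:Haki/cube/S1.xml</Definition>
  </Catalog>
</Catalogs>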
I would appreciate any guidance.
Windows 7, Pentaho 4.8 stable build 5, Saiku plugin 2.5, Oracle 10g.
Try these two things:
Check the logs to see if there are any problems with the cubes.
Refresh the cube in Saiku. It seems that the Saiku plugin has its own cache.
There is an unofficial fix for this. Note that it might break things, particularly if you are using MongoDB.
https://github.com/buggtb/pentaho-mondrian4-plugin/blob/master/utils/EEOSGILIBS-0.1.zip
Grab that file and unzip it, then copy the existing mondrian JAR out of
pentaho-solutions/system/osgi/bundles
and save it somewhere in case this breaks everything.
Then copy the JAR from that zip file into the same directory,
remove the contents of pentaho-solutions/system/osgi/cache/,
and restart the server.
You should now be able to see your EE data sources. Thanks to TB for this solution.
We have an issue where, whenever our Gatling performance tests are run, the .log file that should be generated at the root of the folder is not there.
This is my whole logback file, if anyone is able to help:
<configuration>
<contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
<resetJUL>true</resetJUL>
</contextListener>
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<!-- <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter> -->
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
<immediateFlush>true</immediateFlush>
</encoder>
</appender>
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
<file>testFile.log</file>
<append>false</append>
<!-- encoders are assigned the type
ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
</encoder>
</appender>
<!-- uncomment and set to DEBUG to log all failing HTTP requests -->
<!-- uncomment and set to TRACE to log all HTTP requests -->
<logger name="io.gatling.http.engine.response" level="TRACE" />
<root level="WARN">
<appender-ref ref="CONSOLE" />
<appender-ref ref="FILE" />
</root>
</configuration>
Thank you very much.
Update
It seems the issue may be with IntelliJ itself, as we noticed we can see the file when going directly to Finder.
Disabling custom plugins should help. It seems one of the configurations was corrupted.
There's a good chance the file is simply not generated where you expect it. Try setting an absolute path instead to verify.
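For example, pointing the FileAppender at an absolute path (the path below is just a placeholder) makes it easy to confirm whether the file is actually being written:
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <!-- absolute path used only to verify where the file really ends up -->
    <file>/tmp/gatling-logs/testFile.log</file>
    <append>false</append>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
    </encoder>
</appender>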
We have a Mondrian 4 schema file (a cube .xml file) that was created for Mondrian 4, but the Mondrian 4 Schema Workbench (a beta version) is currently not available. We are now using the stable version of Mondrian Schema Workbench (3.6.1), and we want to read and modify the Mondrian 4 schema file there.
We tried the IvySE plugin but were unable to succeed.
Is there any way to downgrade the schema file version (i.e. from Mondrian 4.0 to Mondrian 3.6.1)?
Is there any adapter/plugin to convert the schema file (i.e. Mondrian 4.0 to Mondrian 3.6.1)?
What we have:
Mondrian 4 schema file.(Cube file .xml)
Mondrian 3.6.1 Pentaho Schema Workbench (PSW)
Example code:
<?xml version="1.0" encoding="UTF-8"?>
<Schema name="sales" metamodelVersion="4.0">
<PhysicalSchema>
<Table name="sales" />
</PhysicalSchema>
<Cube name="Sales">
<Dimensions>
<Dimension name="City" key="City">
<Attributes>
<Attribute name="City" keyColumn="city" hasHierarchy="false" />
</Attributes>
<Hierarchies>
<Hierarchy name="City" hasAll="true">
<Level attribute="City" />
</Hierarchy>
</Hierarchies>
</Dimension>
<Dimension name="Store" key="Store">
<Attributes>
<Attribute name="Store" keyColumn="store" hasHierarchy="false" />
</Attributes>
<Hierarchies>
<Hierarchy name="Store" hasAll="true">
<Level attribute="Store" />
</Hierarchy>
</Hierarchies>
</Dimension>
</Dimensions>
<MeasureGroups>
<MeasureGroup name="Sales" table="sales">
<Measures>
<Measure name="Units sold" column="unitssold" aggregator="sum" formatString="#,###" />
</Measures>
<DimensionLinks>
<ForeignKeyLink dimension="City" foreignKeyColumn="city" />
<ForeignKeyLink dimension="Store" foreignKeyColumn="store" />
</DimensionLinks>
</MeasureGroup>
</MeasureGroups>
</Cube>
</Schema>
Thanks in advance.
The way to downgrade from 4.0 to 3.6 is to edit the XML manually so that it is compliant with 3.6.
Schema Workbench dropped support around 2014, as far as I remember.
I don't know of any tool, and I don't expect anybody will spend time creating a tool to convert from newer versions to older ones.
It depends on the actual XML schema you have. In a very simple case, if you don't use any 4.0-only XML features, try editing the metamodel version here:
<Schema name="sales" metamodelVersion="4.0">
Otherwise it depends, and you may have to rewrite the structure manually.
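For illustration only, here is an untested sketch of how the Mondrian 4 example above might be rewritten in the Mondrian 3.x format. It assumes the city and store columns act as degenerate dimensions directly on the sales fact table, as in your Mondrian 4 schema; names and uniqueMembers settings will need adjusting to your real model.
<?xml version="1.0" encoding="UTF-8"?>
<Schema name="sales">
  <Cube name="Sales">
    <Table name="sales" />
    <Dimension name="City">
      <Hierarchy name="City" hasAll="true">
        <!-- level built directly on a fact table column -->
        <Level name="City" column="city" uniqueMembers="false" />
      </Hierarchy>
    </Dimension>
    <Dimension name="Store">
      <Hierarchy name="Store" hasAll="true">
        <Level name="Store" column="store" uniqueMembers="false" />
      </Hierarchy>
    </Dimension>
    <Measure name="Units sold" column="unitssold" aggregator="sum" formatString="#,###" />
  </Cube>
</Schema>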
I'm trying to enable Perf4J annotations in IntelliJ, but I'm struggling to configure AspectJ correctly. More specifically, the log file is created correctly, but it lacks any data from the annotated method.
These are the relevant extracts of configuration:
logback.xml
<configuration debug="true">
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<!-- encoders are assigned the type
ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
<encoder>
<pattern>%d{HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<appender name="statistics" class="ch.qos.logback.core.FileAppender">
<file>./target/statisticsLogback.log</file>
<append>false</append>
<layout>
<pattern>%msg%n</pattern>
</layout>
</appender>
<appender name="coalescingStatistics" class="org.perf4j.logback.AsyncCoalescingStatisticsAppender">
<!-- encoders are assigned the type
ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
<timeSlice>1000</timeSlice>
<appender-ref ref="statistics"/>
</appender>
<appender name="listAppender" class="ch.qos.logback.core.read.ListAppender">
<!-- encoders are assigned the type
ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
<timeSlice>1000</timeSlice>
</appender>
<logger name="org.perf4j.TimingLogger" level="info">
<appender-ref ref="coalescingStatistics" />
<appender-ref ref="listAppender"/>
</logger>
<root level="debug">
<appender-ref ref="STDOUT" />
</root>
</configuration>
aop.xml
<?xml version="1.0" encoding="UTF-8"?>
<aspectj>
<!--
We only want to weave the TimingAspect into the @Profiled classes.
Note that Perf4J provides TimingAspects for the most popular Java logging
frameworks and facades: log4j, java.util.logging, Apache Commons Logging
and SLF4J. The TimingAspect you specify here will depend on which logging
framework you wish to use in your code.
-->
<aspects>
<aspect name="org.perf4j.slf4j.aop.TimingAspect"/>
<!-- if SLF4J/logback use org.perf4j.slf4j.aop.TimingAspect instead -->
</aspects>
<weaver options="-verbose -showWeaveInfo">
<!--
Here is where we specify the classes to be woven. You can specify package
names like com.company.project.*
-->
<include within="com.mycode.myproject.mypackage.*"/>
<include within="org.perf4j.slf4j.aop.*"/>
</weaver>
</aspectj>
Finally, the related test method is tagged with the @Profiled annotation, and its class is part of the package defined in aop.xml.
This configuration results in the log file being produced (which suggests that logback.xml is configured correctly); however, it only contains headers and no statistics from the tagged method.
The main question I have is where the AspectJ configuration should go within IntelliJ. I have included aop.xml under a manually created META-INF folder in the src folder, but I'm not sure this is detected by AspectJ at all.
Thanks in advance
UPDATE
I have made some progress on this since my initial post, specifically introducing two changes:
i) included -javaagent:lib\aspectjweaver.jar
ii) moved the aop.xml into a META-INF folder.
The AOP configuration is now being picked up, as it logs the configuration details and also mentions the method being profiled.
The issue now is that the thread being profiled crashes. It doesn't log any exceptions, but via debugging the issue seems to be related to a ClassNotFoundException in org.aspectj.runtime.reflect.Factory when trying to instantiate org.aspectj.runtime.reflect.JoinPointImpl.
To isolate the issue I have removed all the Maven imports of AspectJ and used the JARs provided by the installation package, but the issue persists. The fact that the application crashes without any logging also makes tracking it down harder.
UPDATE
To clarify:
After reading more about this, including the manual in the Wayback link (thanks for that), I realised I was mixing up the load-time and compile-time approaches. Since then I have tried both methods as described in the guide, but with the same results described in my earlier update.
As per above, I do start the application with the AspectJ weaver option (-javaagent).
The build is done via the IDE; as per above, at the moment I have removed the AspectJ / Perf4J dependencies from Maven and linked to local JARs.
As mentioned in the update, the aop.xml does get picked up, with no errors or warnings, just confirmation of the woven method.
Okay, I have added a full Maven example to a GitHub repo which you can just clone and play around with.
Some basic things to consider:
For compile-time weaving (CTW) you need aspectjrt.jar on the classpath when compiling and running the code. You also need to use the AspectJ compiler to build the project, a normal Java compiler is not enough.
For load-time weaving (LTW) you need aspectjweaver.jar as a Java agent on the command line when running the code: -javaagent:/path/to/aspectjweaver.jar. You also need to add it as a VM argument to your LTW run configuration in IDEA.
For LTW you also need META-INF/aop.xml in your resources folder. Please also note that in order to encompass subpackages you should use the ..* notation, not just .*, e.g. <include within="de.scrum_master..*"/>.
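Adapted to the package from the question, a minimal META-INF/aop.xml for LTW might look roughly like this (a sketch; the application package name is taken from your original aop.xml):
<aspectj>
    <aspects>
        <aspect name="org.perf4j.slf4j.aop.TimingAspect"/>
    </aspects>
    <weaver options="-verbose -showWeaveInfo">
        <!-- '..*' matches the package and all of its subpackages; '.*' matches only the package itself -->
        <include within="com.mycode.myproject..*"/>
        <include within="org.perf4j.slf4j.aop.*"/>
    </weaver>
</aspectj>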
You find more information in my project's read-me file.
P.S.: The Perf4J documentation is outdated and the project unmaintained; thus, it still mentions AspectJ 1.6.x as a necessary dependency. I built and ran everything with the latest AspectJ 1.8.10, and it runs just fine, both from Maven and IDEA.
I'm pretty new to build servers but have been asked by my employer to do some testing (because F5 is not a build process, as the excellent article by Jeff Atwood says). At this stage, I'm working on getting some sample builds and test reports up and running on a CruiseControl.NET server. So far, I've gotten a build up and running (the configuration file will need some tidying up before adding new builds/projects but the proof of concept is there) but the reporting is causing something of a headache.
The main report I'm looking for is for our NUnit tests and SpecFlow integration tests. The tests run fine (I'm getting a sensible-looking XML file generated), and I am looking to merge that into the main build results so that I can show the results of the NUnit/SpecFlow tests.
Whenever the build completes, the following is reported in the messages (in ViewFarmReport.aspx): "Failing Tasks : XmlLogPublisher "
This is combined with the following error reported in the Windows application log (source - CC.Net):
2015-03-24 08:36:52,987 [Initech.SuperCrm-DEV] ERROR CruiseControl.NET [(null)] - Publisher threw exception: ThoughtWorks.CruiseControl.Core.CruiseControlException: Unable to read the contents of the file: C:\CCNet\BuildArtifacts\Initech.SuperCrm-DEV\msbuild-results-7c657954-2c3e-405f-b0f1-7da1299788fd.xml ---> System.IO.FileNotFoundException: Could not find file 'C:\CCNet\BuildArtifacts\Initech.SuperCrm-DEV\msbuild-results-7c657954-2c3e-405f-b0f1-7da1299788fd.xml'.
(company/application name "censored")
This leads me to suspect that the failure to merge in the msbuild results (which I believe CruiseControl.NET automatically scrapes since version... 1.5 or 1.6?) is preventing the NUnit results from being merged in.
There is no msbuild-results file in the BuildArtifacts folder, which does not surprise me, as I do not believe my current MSBuild configuration allows for XML-based logging, since I am using the ThoughtWorks.CruiseControl.MsBuild.dll logger.
According to the online documentation for CruiseControl.NET, there is an XML-enabled custom logger, ThoughtWorks.CruiseControl.MsBuild.XmlLogger, which can be used; however, the download location for this logger (the "here" link) appears not to exist any more.
Can anyone say whether I'm thinking along the right lines here and what my options are?
For reference, here is my complete configuration:
<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
<cb:define MSBuildPath="C:\Windows\Microsoft.NET\Framework\v4.0.30319" />
<cb:define WorkingBaseDir="C:\CCNet\Builds" />
<cb:define ArtifactBaseDir="C:\CCNet\BuildArtifacts" />
<cb:define MSBuildLogger="C:\Program Files (x86)\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll" />
<cb:define NUnitExe="C:\Jenkins\Nunit\nunit-console.exe" />
<cb:define name="vsts_ci">
<executable>C:\Jenkins\tf.exe</executable>
<server>http://tfs-srv:8080/tfs/LEEDS/</server>
<domain>CONTOSO</domain>
<autoGetSource>true</autoGetSource>
<cleanCopy>true</cleanCopy>
<force>true</force>
<deleteWorkspace>true</deleteWorkspace>
</cb:define>
<project name="Initech.Libraries" description="Shared libraries used in all Initech projects"
queue="Q1">
<state type="state" directory="C:\CCNet\State"/>
<artifactDirectory>$(ArtifactBaseDir)\Initech.Libraries</artifactDirectory>
<workingDirectory>$(WorkingBaseDir)\Initech.Libraries</workingDirectory>
<triggers>
<intervalTrigger
name="continuous"
seconds="30"
buildCondition="IfModificationExists"
initialSeconds="5"/>
</triggers>
<sourcecontrol type="vsts">
<cb:vsts_ci/>
<workspace>CCNET_Initech.Libraries</workspace>
<project>$/InitechLibraries/Initech.Libraries</project>
</sourcecontrol>
</project>
<project name="Initech.SuperCrm-DEV" description="Initech.SuperCrm Application, Development
Version" queue="Q1">
<cb:define ArtifactDirectory="$(ArtifactBaseDir)\Initech.SuperCrm-DEV" />
<cb:define WorkingDirectory="$(WorkingBaseDir)\Initech.SuperCrm-DEV" />
<cb:define OutputDirectory="$(WorkingDirectory)\Initech.SuperCrm\bin\Debug" />
<cb:define ProjectFile="Initech.SuperCrm.sln" />
<cb:define NUnitLog="$(WorkingDirectory)\NunitResults.xml" />
<state type="state" directory="C:\CCNet\State"/>
<artifactDirectory>$(ArtifactDirectory)</artifactDirectory>
<workingDirectory>$(WorkingDirectory)</workingDirectory>
<triggers>
<!-- check the source control every X time for changes,
and run the tasks if changes are found -->
<intervalTrigger
name="continuous"
seconds="30"
buildCondition="IfModificationExists"
initialSeconds="5"/>
</triggers>
<sourcecontrol type="vsts">
<cb:vsts_ci/>
<workspace>CCNET_Initech.SuperCrm-DEV</workspace>
<project>$/InitechSuperCrm/SuperCrm/Initech.SuperCrm-DEV</project>
</sourcecontrol>
<tasks>
<exec>
<executable>C:\Program Files (x86)\DXperience 12.1\Tools\DXperience\ProjectConverter-console.exe</executable>
<buildArgs>$(WorkingDirectory)</buildArgs>
</exec>
<msbuild>
<executable>$(MSBuildPath)\MSBuild.exe</executable>
<workingDirectory>$(WorkingDirectory)</workingDirectory>
<projectFile>$(ProjectFile)</projectFile>
<timeout>900</timeout>
<logger>$(MSBuildLogger)</logger>
</msbuild>
<exec>
<executable>$(NUnitExe)</executable>
<buildArgs>/xml=$(NUnitLog) /nologo $(WorkingDirectory)\$(ProjectFile)
</buildArgs>
</exec>
</tasks>
<publishers>
<buildpublisher>
<sourceDir>$(OutputDirectory)</sourceDir>
<useLabelSubDirectory>true</useLabelSubDirectory>
<alwaysPublish>false</alwaysPublish>
<cleanPublishDirPriorToCopy>true</cleanPublishDirPriorToCopy>
</buildpublisher>
<merge>
<files>
<file>$(NUnitLog)</file>
</files>
</merge>
<xmllogger logDir="C:\CCNet\BuildArtifacts\Initech.SuperCrm-DEV\buildlogs" />
<artifactcleanup cleanUpMethod="KeepLastXBuilds"
cleanUpValue="50" />
</publishers>
</project>
</cruisecontrol>
I've been tearing my hair out while trying to figure this out, and I don't have much to begin with, so any help would be greatly appreciated.
After a prolonged period of banging my head against the wall, I seem to have finally found the solution (well, solutions).
1) Kobush.Build.dll (https://www.nuget.org/packages/Kobush.Build/) can be used as the logger for MSBuild. Looking at the attributions in CruiseControl.NET's documentation, it appears to have been written by the same developer (but extended).
2) Some tweaks were needed due to the default location of the msbuild-results output. Because it is dumped to the build artifacts folder by default, it is susceptible to being prematurely deleted.
I no longer clean the publish directory prior to copying (in the buildpublisher), and I now perform the merge and xmllogger portions of the publishers before the artifact cleanup (sketched below).
As a result, I now have msbuild and nunit output/results integrated in to the main build log and these can be consumed through the CruiseControl.NET dashboard.
There's probably a tidier way of handling this, but at the moment I'm just getting a proof of concept going.
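For reference, here is a hedged sketch of the relevant portions after those changes, based on the configuration in the question (the Kobush.Build.dll path is an assumption; point it at wherever you installed the logger):
<msbuild>
    <executable>$(MSBuildPath)\MSBuild.exe</executable>
    <workingDirectory>$(WorkingDirectory)</workingDirectory>
    <projectFile>$(ProjectFile)</projectFile>
    <timeout>900</timeout>
    <!-- assumed install location for the Kobush XML logger -->
    <logger>C:\Program Files (x86)\CruiseControl.NET\server\Kobush.Build.dll</logger>
</msbuild>
<!-- ... other tasks unchanged ... -->
<publishers>
    <buildpublisher>
        <sourceDir>$(OutputDirectory)</sourceDir>
        <useLabelSubDirectory>true</useLabelSubDirectory>
        <alwaysPublish>false</alwaysPublish>
        <!-- no longer cleaning the publish directory prior to copying -->
    </buildpublisher>
    <!-- merge and log before the artifact cleanup, so the results files still exist -->
    <merge>
        <files>
            <file>$(NUnitLog)</file>
        </files>
    </merge>
    <xmllogger logDir="C:\CCNet\BuildArtifacts\Initech.SuperCrm-DEV\buildlogs" />
    <artifactcleanup cleanUpMethod="KeepLastXBuilds" cleanUpValue="50" />
</publishers>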
We are seeing an intermittent issue on development and production machines whereby our log files are not getting logged to.
When running in development and debugging using Visual Studio we get the following log4net error messages in the VS output window:
log4net:ERROR [RollingFileAppender] Unable to acquire lock on file C:\folder\file.log.
The process cannot access the file 'C:\folder\file.log' because it is being used by another process.
log4net:ERROR XmlConfigurator: Failed to find configuration section 'log4net' in the application's .config file.
Check your .config file for the <log4net> and <configSections> elements.
The configuration section should look like:
<section
name="log4net"
type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" />
Our current workaround for the issue is to rename the last log file. We would of course expect this to fail (due to the aforementioned file lock), but it normally doesn't. Once or twice the rename has failed due to a lock from the aspnet_wp.exe process.
Our log4net configuration section is shown below:
<log4net>
<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="C:\folder\file.log"/>
<appendToFile value="true" />
<datePattern value="yyyyMMdd" />
<rollingStyle value="Date" />
<maximumFileSize value="10MB" />
<maxSizeRollBackups value="100" />
<layout type="log4net.Layout.PatternLayout">
<header value="[Header]
"/>
<footer value="[Footer]
"/>
<conversionPattern value="%date %-5level %logger ${COMPUTERNAME} %property{UserHostAddress} [%property{SessionID}] - %message%newline"/>
</layout>
</appender>
<root>
<level value="INFO"/>
<appender-ref ref="RollingLogFileAppender"/>
</root>
</log4net>
As mentioned, we are seeing this intermittently on machines, but once the issue happens it persists.
Try adding
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
to your <appender /> element. There is some performance impact because this means that log4net will lock the file, write to it, and unlock it for each write operation (as opposed to the default behavior, which acquires and holds onto the lock for a long time).
One implication of the default behavior is that if you're using it under a Web site that is being executed under multiple worker processes running on the same machine, each one will try to acquire and hold onto that lock indefinitely, and two of them are just going to lose. Changing the locking model to the minimal lock works around this issue.
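Applied to the appender from the question, that would look roughly like this (only the lockingModel line is new; header/footer omitted for brevity):
<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="C:\folder\file.log"/>
    <appendToFile value="true" />
    <!-- acquire the lock only for the duration of each write -->
    <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
    <datePattern value="yyyyMMdd" />
    <rollingStyle value="Date" />
    <maximumFileSize value="10MB" />
    <maxSizeRollBackups value="100" />
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger ${COMPUTERNAME} %property{UserHostAddress} [%property{SessionID}] - %message%newline"/>
    </layout>
</appender>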
(When debugging, ungraceful terminations and spinning up lots of new worker processes is exactly the type of thing that's likely to happen.)
Good luck!
Also be aware of the log4net FAQ:
How do I get multiple process to log to the same file?
Before you even start trying any of the alternatives provided, ask
yourself whether you really need to have multiple processes log to the
same file, then don't do it ;-).
FileAppender offers pluggable locking models for this usecase but all
existing implementations have issues and drawbacks.
By default the FileAppender holds an exclusive write lock on the log
file while it is logging. This prevents other processes from writing
to the file. This model is known to break down with (at least on some
versions of) Mono on Linux and log files may get corrupted as soon as
another process tries to access the log file.
MinimalLock only acquires the write lock while a log is being written.
This allows multiple processes to interleave writes to the same file,
albeit with a considerable loss in performance.
InterProcessLock doesn't lock the file at all but synchronizes using a
system wide Mutex. This will only work if all processes cooperate (and
use the same locking model). The acquisition and release of a Mutex
for every log entry to be written will result in a loss of
performance, but the Mutex is preferable to the use of MinimalLock.
If you use RollingFileAppender things become even worse as several
process may try to start rolling the log file concurrently.
RollingFileAppender completely ignores the locking model when rolling
files, rolling files is simply not compatible with this scenario.
A better alternative is to have your processes log to
RemotingAppenders. Using the RemoteLoggingServerPlugin (or
IRemoteLoggingSink) a process can receive all the events and log them
to a single log file. One of the examples shows how to use the
RemoteLoggingServerPlugin.
If you have
<staticLogFileName value="true" />
<rollingStyle value="Date" />
<datePattern value="yyyyMMdd" />
and add
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
then there will be an error while the rolling happens.
The first process will create the new file and then rename the current file.
The next process will then do the same, taking the newly created file and overwriting the newly renamed file.
This results in the log file for the last day being empty.