What is the difference between pentaho-server and pentaho-server-manual?

I'm trying to download a Pentaho Server version to deploy on a Linux production environment. Among the Pentaho open source files on SourceForge, I found what appear to be two versions, Pentaho-Server-ce and Pentaho-Server-manual-ce, and I want to know whether there are differences between them in terms of requirements, functionality, and such.
While I'm at it, I would also like to ask how the Pentaho Server works in general. For instance, it took me a while to figure out that I need a separate application called Pentaho Report Designer to build my dashboards and then use them in the Pentaho Server. The documentation is good but, in my opinion, lacks a little in terms of clarity and how to use things properly. Moreover, there are almost no tutorials for the Pentaho Server; I found some about Pentaho Data Integration, but almost none on the Pentaho Server itself.
So any details and leads on how to master it would be highly appreciated.

Related

Sonarqube DB Queries - How to find new issues?

I need to find all the issues discovered in a snapshot/scan in Sonarqube. I can't use the web API since the volume can be excessive for new projects on first scan. I have a query that can find the latest snapshot with the project information. I can query issues by project. I can't figure out how to relate issues to a snapshot. There has to be a way since Sonarqube does it - New issues on the Project page.
Has anyone done this, or does anyone have enough experience with the crazy schema to be able to figure it out? Can't wait for the schema rationalization...
Sonarqube 5.6.3 on Windows 2012 R2 with SQL Server 2012.
There is currently no association between snapshot and issue. Nor has there ever been one. The closest you can come is to use date parameters to narrow the set of issues created right around the time of your analysis. Note that this could be difficult if you run analyses close together.
The "new issues" metrics shown on the project homepage are just that - metrics. However, if you click through on one, you'll find yourself in a date-based Issues search.
You can do the same sort of thing using the web service, again, via date-based criteria. Or you could use the sinceLeakPeriod parameter.
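If the web-service route is workable after all, a minimal sketch of a date-based search might look like the following (Python; the server URL, project key, and dates are placeholder assumptions, and swapping the date parameters for sinceLeakPeriod=true restricts results to the leak period instead):

```python
# Minimal sketch: fetch issues created in a window around an analysis,
# using SonarQube's /api/issues/search web service with date criteria.
import requests

SONAR_URL = "http://localhost:9000"   # assumption: your SonarQube base URL
PROJECT_KEY = "my:project"            # assumption: your project key

def issues_created_between(created_after, created_before):
    """Page through issues whose creation date falls in the given window."""
    page, issues = 1, []
    while True:
        resp = requests.get(
            SONAR_URL + "/api/issues/search",
            params={
                "componentKeys": PROJECT_KEY,
                "createdAfter": created_after,    # e.g. "2017-01-15"
                "createdBefore": created_before,
                "ps": 500,                        # maximum page size
                "p": page,
            },
        )
        resp.raise_for_status()
        data = resp.json()
        issues.extend(data["issues"])
        if page * 500 >= data["total"]:           # note: the API caps results at 10,000
            return issues
        page += 1

new_issues = issues_created_between("2017-01-15", "2017-01-16")
print(len(new_issues))
```

Note the 10,000-result cap, which is exactly the volume concern raised in the question; narrowing the date window per analysis is what keeps each query under it.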

How to use Pentaho Data Integration for reporting

In my project we are using Pentaho Data Integration and Saiku Server for reporting.
I am new to business intelligence and I am confused about which function is performed by which piece of software.
The senior coders here won't tell me, so that's why I am asking here.
I am confused about what functions these tools provide:
Pentaho
PAN
Kitchen
Spoon
Saiku
There are scripts which generate these four files:
cube.json, sims.json, schema.json, path.json
Now I don't know which software will be using those JSON files: Pentaho, Saiku, Spoon, or something else.
Can anyone give me some idea?
Pentaho Data Integration is one of the open source tools provided by the Pentaho suite.
Spoon is used to create transformations through a GUI.
If you simply want to run transformations and jobs, use Pan (for transformations) or Kitchen (for jobs); these are mainly for running things from the command line.
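As a rough illustration, a sketch of driving both from a script (Python; the PDI install path and the .ktr/.kjb file names are assumptions for illustration):

```python
# Sketch: run PDI transformations with Pan and jobs with Kitchen.
# Spoon saves transformations as .ktr files and jobs as .kjb files;
# Pan and Kitchen execute them without the GUI.
import subprocess

PDI_HOME = "/opt/pentaho/data-integration"   # assumption: where PDI is unpacked

def run_transformation(ktr_path):
    """Run a transformation file (.ktr) with Pan; returns the exit code."""
    return subprocess.call([PDI_HOME + "/pan.sh", "-file=" + ktr_path, "-level=Basic"])

def run_job(kjb_path):
    """Run a job file (.kjb) with Kitchen; returns the exit code."""
    return subprocess.call([PDI_HOME + "/kitchen.sh", "-file=" + kjb_path, "-level=Basic"])

# Pan/Kitchen exit with 0 on success; non-zero codes indicate errors,
# which makes them easy to chain from cron or a scheduler.
if run_transformation("/etl/load_sales.ktr") != 0:
    raise SystemExit("transformation failed")
```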
Saiku is a server in its own right; Pentaho itself also has a server (the Pentaho BI Server), and you can add the Saiku plugin to it for displaying cubes that are designed in Pentaho Schema Workbench.
For more understanding, search for the terms I mention in this answer.

SQL Server 2008 Developer to SQL Azure Migration

Hi, my company is planning to switch its existing application to the Azure platform (only the SQL part), so we need to upload our database from local to the cloud. For the migration I came across various tools:
1. Cerebrata's tools
2. SQL Azure Migration Wizard
3. Microsoft SQL Data Sync
4. Conventional scripting via Management Studio
But all of the above tools turned out to have limitations; none of them let me work flawlessly.
Cerebrata's tool: the main drawback was its fields for Application User Name and Application Key, which my admin hadn't shared. There is also manual mapping of fields between Azure and local.
SQL Azure Migration Wizard: generates scripts and executes them too, but with lots of errors. I was using version 2.1. It is also very slow; it seems to be a replica of SQL Server Management Studio.
SQL Data Sync: I found it cool since it's a Microsoft product, but it has limitations too: it only connects to a local SQL Server that uses Windows Authentication, unless you explicitly allow what it requires. Even after allowing that, while syncing I got a SQL Azure provisioning error.
SQL Server Management Studio: this is the easiest way, but it requires a lot of manual work before the actual migration. What I did was generate a script of the entire database (almost 101,123 lines for a single database) and try to execute it on Azure. The very first time I hit some keyword-mismatch errors, so I removed every clause after the primary key declaration, such as WITH (PADDING = OFF ...) or something similar, and also ON PRIMARY; then I executed again but still got errors on SET IDENTITY_INSERT ON. After a lot of hard work removing unwanted lines and waiting more than two hours for the script to run remotely, I got nothing but errors, errors and errors.
So please suggest any good alternative beyond the ones above, or tell me if I am missing something and can do more with those tools.
Thanks
Amit Ranjan
I've faced a similar problem recently, running through the options you've listed.
You might give the Red Gate beta for Azure a try (free for a few months). I found their tools to be quite good for SQL schema and data replication.
Never tried the Azure build myself, though (I migrated tables manually by the time I was told about the offer).
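If you do stay with the plain scripting route (option 4 in the question), part of the manual cleanup can be scripted rather than done by hand. A crude sketch (Python; the file names and the clause patterns are illustrative assumptions, not an exhaustive list of what SQL Azure rejects):

```python
# Crude sketch: strip physical-storage clauses that SQL Azure rejected
# from an SSMS-generated script, instead of deleting them by hand.
import re

def clean_for_azure(script):
    # Drop index/table option lists such as WITH (PAD_INDEX = OFF, ...)
    script = re.sub(r"WITH\s*\(\s*PAD_INDEX\s*=[^)]*\)", "", script, flags=re.IGNORECASE)
    # Drop filegroup placement, e.g. ON [PRIMARY]
    script = re.sub(r"\bON\s+\[PRIMARY\]", "", script, flags=re.IGNORECASE)
    # Drop TEXTIMAGE_ON [PRIMARY] clauses as well
    script = re.sub(r"\bTEXTIMAGE_ON\s+\[PRIMARY\]", "", script, flags=re.IGNORECASE)
    return script

# File names are assumptions; adjust the encoding to however SSMS saved the script.
with open("db_script.sql", "r", encoding="utf-8-sig") as f:
    cleaned = clean_for_azure(f.read())
with open("db_script_azure.sql", "w", encoding="utf-8") as f:
    f.write(cleaned)
```

This only automates the mechanical stripping; a dedicated migration tool is still the safer option for anything beyond a one-off experiment.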

Custom Report Items in local reports

I have read this article about custom report items (CRIs):
http://msdn.microsoft.com/en-us/magazine/cc188686.aspx
The only problem is that CRIs are only usable in Reporting Services and not in local reports. My question: is it somehow possible to use CRIs in local reports (RDLC)? I am also interested in which version of Reporting Services this is possible, if it is possible at all.
Best Regards,
Iordan
Custom report items are not supported in .rdlc files. (http://msdn.microsoft.com/en-us/library/ms251712(v=vs.90).aspx) This is likely the approach taken by Microsoft because the ReportViewer used with these files is a free control, while more complex features like third-party controls are supported only on a full SQL Server report server.
One thing in particular I have done in the past to get around this is to render custom charts/controls/etc. as images and then feed them into the report as an image. Not an ideal solution, but it works; a sketch of the idea follows.
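To make the workaround concrete, here is a sketch of the pre-rendering half only (Python and matplotlib stand in for whatever server-side drawing library your project actually uses; on the RDLC side you would simply bind the resulting image file or bytes to an image report item):

```python
# Sketch: build the custom visual as a plain image outside the report,
# then hand the image to the report as ordinary image data.
import io
import matplotlib
matplotlib.use("Agg")                 # headless rendering, no display needed
import matplotlib.pyplot as plt

def render_chart_png(values):
    """Render a bar chart and return it as PNG bytes."""
    fig, ax = plt.subplots(figsize=(4, 3))
    ax.bar(range(len(values)), values)
    ax.set_title("Custom chart rendered outside the report")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    return buf.getvalue()

png_bytes = render_chart_png([3, 7, 2, 9])
with open("chart.png", "wb") as f:    # the report then references this image
    f.write(png_bytes)
```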

Tools to Replay Load on a SQL Server

Has anyone come across any good tools (preferably, but not necessarily, FOSS) that can read a SQL Server (2005/2008) trace file and execute the commands against another database? We are attempting to perform some performance testing on our SQL Servers and would like to replicate an actual load.
I have come across but not yet used:
JMeter
ReplayML
Preferably, the application would be able to use threading to mimic user connections and query execution on the SQL Server.
You can replay a SQL Server Profiler trace against another server using the SQL Server Profiler itself.
See the following Microsoft Reference as a starting point.
http://msdn.microsoft.com/en-us/library/ms189604.aspx
Quest Software also has a tool called Benchmark Factory that can be used for SQL Server load testing.
http://www.quest.com/benchmark-factory/
One of the best tools is actually freely available from Microsoft. The RML Utilities are targeted at SQL2005 & SQL2008 and are specifically designed for this type of testing.
You can download the tools from http://www.microsoft.com/downloads/details.aspx?FamilyId=7EDFA95A-A32F-440F-A3A8-5160C8DBE926&displaylang=en
We have used them to solve several performance and locking issues.
Note: Capturing trace files using the SQL Profiler GUI can add to performance problems due to the way the GUI and the trace backend interact. The RML Utilities include a script that can capture traces directly from the SQL Server without using the GUI.
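For a flavour of what an RML-based run looks like, here is a sketch that drives OStress (one of the RML tools) from Python; the install path, server, database, and workload file names are assumptions:

```python
# Sketch: kick off an OStress run to replay a query file against a test
# server with multiple concurrent connections.
import subprocess

cmd = [
    r"C:\Program Files\Microsoft Corporation\RMLUtils\ostress.exe",  # assumed install path
    "-Smytestserver",      # target SQL Server instance
    "-E",                  # Windows authentication
    "-dMyDatabase",        # target database
    "-iworkload.sql",      # query file to replay
    "-n25",                # 25 concurrent connections
    "-r5",                 # each connection repeats the file 5 times
]
exit_code = subprocess.call(cmd)
print("ostress exited with", exit_code)
```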
You can replay trace files directly in SQL Profiler, although I've only used it a couple of times for that, so I don't know what all of the limitations are on it.
Team System has an add-on that you can find on CodePlex. It is called SQL Load Test.
Let me know if that works well for you.
I know this is a really old question, but after searching for some time I discovered a new open source tool: https://github.com/spaghettidba/WorkloadTools, which works great.