I want to find out the part that SQL plays in GUI database application development and how it interacts with the standard GUI functionality in APEX.
From a high level, APEX offers many components which you can include in your application - a chart, a grid, a report, a tree, a calendar, etc. The source of the data is provided by you - it could be as simple as a single table which you specify, or it could be a SQL query that you provide, as simple or complex as you wish. At runtime, the SQL you provide will be parsed and executed, and the results fetched by the APEX engine. The results will then be rendered appropriately for the given component.
In this Oracle Magazine article (which you can practice for free on apex.oracle.com), in the "Custom Data Visualizations" section, you'll see a SQL query with a number of UNION operators, which is the data source for the multi-series chart shown further down the page.
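To give a flavor of what such a data source looks like, here is a minimal sketch of a multi-series chart query (using the classic EMP demo table; the series/label/value column names are an assumption, not taken from the article). Each branch of the UNION ALL contributes one series, and the first column tells the chart which series a row belongs to:

    -- One row per (series, label) pair; each SELECT is one chart series.
    SELECT 'Salaries'    AS series, deptno AS label, SUM(sal)  AS value
      FROM emp
     GROUP BY deptno
    UNION ALL
    SELECT 'Commissions' AS series, deptno AS label, SUM(comm) AS value
      FROM emp
     GROUP BY deptno
     ORDER BY 2

At runtime the APEX engine simply executes this query and feeds the rows to the chart component, as described above.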
In APEX 18.1 and later, the source for several components (report, chart, calendar) can be a SQL query against your local database objects, a REST Enabled SQL Service, or a Web Source (e.g., a JSON data feed over HTTP).
I would like to set up a data warehouse which can be used by our company's Qlik application. The applications I would like to retrieve data from are mostly running on-premises, and all have a SQL Server database as a source. Applications which don't have an accessible SQL Server as a source can be accessed via a REST API and/or web service.
This is what the setup looks like:
The data warehouse is a SQL Server (Standard, not Express edition)
The SQL data sources are running on 3 different SQL Servers (2 Standard, 1 Express)
Other sources are accessible via a web service and/or REST API (SaaS applications)
The SQL Servers are all on-premises and located within our network. The SaaS application is running in a data center (cloud).
Preferably I would like the data to be as live as possible and the load on the servers to be as small as possible. To that end, I was wondering if there are ETL tools which work with SQL Change Tracking to keep track of changes at the table level (so that changes are PUSH-based and not PULL-based). If this is the case, I can let them sync sequentially and set per table whether it has to be synced based on Change Tracking or with only one full sync per day. As soon as I have the data in my data warehouse I can create the transformations in the ETL tool or with T-SQL; that doesn't matter.
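For reference, this is roughly what Change Tracking looks like on the T-SQL side (a minimal sketch; the database, table, and key names are placeholders):

    -- Enable change tracking at the database and table level.
    ALTER DATABASE SourceDb
        SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Orders
        ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- An ETL tool then asks for everything that changed since the version it saw last.
    DECLARE @last_sync_version BIGINT = 0;   -- persisted by the ETL tool between runs

    SELECT ct.OrderID,
           ct.SYS_CHANGE_OPERATION,          -- I = insert, U = update, D = delete
           o.*                               -- current row data (NULL for deletes)
      FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct
      LEFT JOIN dbo.Orders AS o
        ON o.OrderID = ct.OrderID;

    -- The version to persist for the next incremental run.
    SELECT CHANGE_TRACKING_CURRENT_VERSION();

Note that the consumer still has to poll CHANGETABLE for changes, but each poll only touches the changed rows rather than the whole table.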
Hopefully there are some people around here who can tell me which ETL tool to use. There is a lot of information on the internet, but not much of it goes into the subject of SQL Change Tracking.
Many thanks in advance!
Setup: SQL Server 2012
Currently, we are working on a project to mask the data in prod/dev/UAT systems, and we are in the process of defining what is considered sensitive/non-sensitive according to company policies. It will take another month before all the fields are finalized. Before then, I would like to take some time to look into possible methods.
After reading some material, it is clear that I could use any of the masking techniques such as scrambling, deleting a portion of the text, repeating-character masking, a masking table, etc. But once one of these methods is applied, it is permanent on that database and it would not be possible to recover the original values.
But I would like to show masked values only to some users, based on their access, and to revoke that access the same way. Could someone help me: is there a way this could be achieved, and how?
Users access the data via SSRS reports and Cube directly.
Note: It would not be possible to upgrade to SQL 2016 for another year or so.
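On SQL Server 2012, which predates the built-in Dynamic Data Masking of SQL Server 2016, one common pattern for "masked for some users, revealable for others" is a masking view combined with role membership. A minimal sketch, assuming a hypothetical dbo.Customers table with an SSN column and a database role named unmasked_readers:

    -- Mask in a view; decide per user via role membership.
    CREATE VIEW dbo.Customers_Masked
    AS
    SELECT CustomerID,
           CASE WHEN IS_MEMBER('unmasked_readers') = 1
                THEN SSN                            -- privileged users see the real value
                ELSE 'XXX-XX-' + RIGHT(SSN, 4)      -- everyone else sees a masked value
           END AS SSN
      FROM dbo.Customers;

    -- Reveal or re-mask simply by changing role membership:
    -- ALTER ROLE unmasked_readers ADD MEMBER SomeUser;
    -- ALTER ROLE unmasked_readers DROP MEMBER SomeUser;

Pointing the SSRS datasets and the cube's data source views at the view rather than the base table (and denying SELECT on the base table) keeps the raw values out of reach, and access is granted and revoked purely through role membership.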
DbDefence implements data masking for SQL Server 2008 R2 and higher.
Implementation is similar to dynamic masking, but unmasked values never appear in the database files or backups (unlike with Microsoft's implementation). You may set it up in several ways:
All applications except selected ones see masked data.
All logins except selected ones see masked data.
An application may see unmasked data after issuing a special SQL statement.
It works with Cube, SSRS and other tools.
More about it: https://www.database-encryption.com/support/dbdefence-documentation/data-masking-SQL-Server.html
In general, the free version of DbDefence is available for small databases.
I'm associated with the vendor.
I've been handed an interesting question: an Apple-centric user would be keen to run databases on FileMaker Pro, and we already have several running on MS SQL.
FM Pro is visually stunning and, as a front end for working with customers, it would look good, but I'm more SQL at heart.
Does anybody use both?
Can you easily run tasks between SQL and FM Pro to update data to FM Pro (say overnight)?
Has anybody made the change from SQL to FM Pro for any purpose and found it to be ok?
Thanks in advance
To expand on user4166144's answer a bit, you can add MS SQL as an external data source to FileMaker using ODBC. (See "Using FileMaker Pro, I want to create a live connection to a MS SQL Server, Oracle or MySQL data source.")
This will let you base layouts on an MS SQL table just as though it were a native FileMaker table. That is, the data will be "live", with no need for overnight copying about.
There are some limitations to ODBC connections, which will probably be irrelevant in your case. Mostly, ODBC data sources in FileMaker don't get all the FileMaker goodies in Manage Database. Tables from ODBC sources are "shadow tables". For example, if you delete a field ("column") in FileMaker, it doesn't get deleted in the SQL database. However, creating, editing, and deleting records all work as normal. You can even add tables from ODBC sources to the relationship graph, which is the primary way that you get data from multiple tables in FileMaker.
FileMaker is a little hard to wrap your head around coming from an SQL background. It's meant for rapid application development, and as such it has certain paradigms in mind. Here are a few things to know that I hope will help:
Every user interface ("Layout") in FileMaker is based on a table occurrence. The body of a layout represents a single record in that table occurrence. Every script, calculation and related piece of data is calculated from the perspective of that single record in that single table occurrence. That is, a layout is a "cursor".
There is no (sane) FileMaker way to do the equivalent of an SQL "OR" when it comes to the Relationship Graph.
FileMaker 12 has two features with very similar names. It has a calculation function "ExecuteSQL", which allows you to run SELECT statements on table occurrences in FileMaker; that includes ODBC sources. It also has a script step called "Execute SQL", which is handy for running arbitrary SQL against an ODBC data source. This latter is probably going to be very useful for you.
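For instance, a calculation using ExecuteSQL might look like this (a sketch with made-up table occurrence and field names; the function's signature is ExecuteSQL ( sqlQuery ; fieldSeparator ; rowSeparator { ; arguments... } )):

    // Count one customer's invoices with SQL against the "Invoices" table occurrence;
    // the ? placeholder is filled in from the Customers::ID field.
    ExecuteSQL (
        "SELECT COUNT(*) FROM Invoices WHERE CustomerID = ?" ;
        "" ; "" ;        // field and row separators ("" means the defaults)
        Customers::ID
    )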
It's somewhat hard to get the results of SQL queries onto FileMaker layouts in any kind of elegant way. Generally, you need to write the results to a global field, a global variable, or a regular field. If you want to display tabular data from an SQL query in a decent way, you will need to generate HTML and feed it as a data URL to a Web Viewer element on a layout (i.e., prefix the HTML with "data:text/html,").
FileMaker, since version 9, includes the ability to connect to a number of SQL databases without resorting to using SQL, including MySQL, SQL Server, and Oracle. This requires installation of the ODBC driver for the particular SQL database. SQL databases can be used as data sources in FileMaker's relationship graph, thus allowing the developer to create new layouts based on the SQL database; create, edit, and delete SQL records via FileMaker layouts and functions; and reference SQL fields in FileMaker calculations and script steps. FileMaker itself is a cross-platform relational database application.
Versions from FileMaker Pro 5.5 onwards also have an ODBC interface.
FileMaker 12 introduced a new function, ExecuteSQL, which allows the user to perform an SQL query against the FileMaker database to retrieve data, but not to modify or delete data or change the schema.
I have been working with Pentaho for the last few days. I have been able to set up Pentaho Report Designer to generate a sample report by following their documentation. Then I followed this article http://www.robertomarchetto.com/www/how_to_use_pentaho_report_designer_tutorial and managed to export the report to the Pentaho BI Server.
What I don't understand is the Pentaho workflow. What is the process I should follow - that is, what's the purpose of exporting the report to the Pentaho BI Server? Why is there a Data Integration tool? Why is there a BI server when I can export the report from the Designer tool?
Requirement
All I want to do is retrieve the data from the MySQL DB, put it into a data mart, and then generate a report from the data mart. (According to what I have read, creating a data mart is the efficient way.)
How can I get it done?
Pentaho Data Integration can be used to make this report generation automated.
In Report Designer you will be passing a parameter or set of parameters to generate a single report output.
With Data Integration you can generate the reports for different sets of parameters. For example, if reports are generated on a daily basis, we can automate it for the whole month, so that there is no need to generate reports daily and manually.
And using the Pentaho Business Intelligence Server we can schedule all of these operations.
To generate data/tables (fact tables/dimension tables) in the MySQL DB from different sources like files or other DBs - the Data Integration tool comes into the picture.
To create a schema on top of the fact tables - the Mondrian tool.
To handle users/roles on top of the created cubes - the Metadata Editor.
To create simple reports on top of small tables - Report Designer.
For sequential execution (at one go) of DI jobs/transformations, reports, and JavaScript - Design Studio.
thanks to user surya.thanuri # forums.pentaho.com
The Data Integration tool is mostly for ETL; it's a separate tool, and you can ignore it unless you are doing complex analysis of data from multiple dissimilar data sources. You don't need to 'export' reports to the Pentaho server; you can write them directly to a directory and then refresh the repository from inside the Pentaho web application. Exporting them is just one workflow technique.
You're going to find that there are about a dozen ways to do any one thing with Pentaho. For instance, I use CDA data sources with my reports instead of placing the SQL code inside the report. Alternatively, you can link up to a Data Integration server to execute Data Integration scripts and view a result set.
Just to answer your datamart question: in general, a datamart should probably be supported by either the Data Integration tool (depending on your situation, I don't exactly recommend this) or database functions/replication streams (recommended).
Just to hazard a guess, it sounds like someone tossed you a project saying: We need a BI system, here's the database where the data is stored, here are the reports we're already getting. X looked at Pentaho and liked it. You should use that.
The first thing you need to do is understand the shape of the data: volume, tables, interrelations. Figure out what the real questions they want to answer are. Determine whether they need real-time reporting, etc. Just getting the datamart together, if you even need one, can take quite a while. I think you may have jumped the gun on Pentaho itself.
thanks to user flamierd # forums.pentaho.com
Without going into specifics... I have a large SQL Server 2005 database with umpteen stored procedures.
I have multiple applications from WinForm apps to WebServices all of which use this DB.
My simple objective now is to create a meta-database... a prospective data dictionary where I can maintain details of which specific application file uses which SP.
For example, my application Alpha has a file Beta.aspx... which uses 3 SPs that are physically configured for use in BetaDAL.cs.
As you might have inferred by now, this will make life easier for me later when there is a migration or deprecation, since I can just query this DB per SP to get all apps/files that use it, or vice versa.
I could establish this as a single de-normalized table, or structure it in a better way, as in the sketch below.
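For illustration only, a normalized structure might look something like this (all names are hypothetical):

    -- Minimal sketch of the app/file/SP dependency dictionary.
    CREATE TABLE dbo.Apps (
        AppID   INT PRIMARY KEY,
        AppName VARCHAR(100) NOT NULL           -- e.g. 'Alpha'
    );
    CREATE TABLE dbo.AppFiles (
        FileID   INT PRIMARY KEY,
        AppID    INT NOT NULL REFERENCES dbo.Apps(AppID),
        FileName VARCHAR(260) NOT NULL          -- e.g. 'Beta.aspx', 'BetaDAL.cs'
    );
    CREATE TABLE dbo.StoredProcs (
        ProcID   INT PRIMARY KEY,
        ProcName SYSNAME NOT NULL
    );
    CREATE TABLE dbo.FileProcUsage (
        FileID INT NOT NULL REFERENCES dbo.AppFiles(FileID),
        ProcID INT NOT NULL REFERENCES dbo.StoredProcs(ProcID),
        PRIMARY KEY (FileID, ProcID)
    );

    -- "Which apps/files use stored procedure X?"
    SELECT a.AppName, f.FileName
      FROM dbo.FileProcUsage AS u
      JOIN dbo.AppFiles      AS f ON f.FileID = u.FileID
      JOIN dbo.Apps          AS a ON a.AppID  = f.AppID
      JOIN dbo.StoredProcs   AS p ON p.ProcID = u.ProcID
     WHERE p.ProcName = N'usp_GetBeta';         -- hypothetical SP name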
Does some schema already exist for this purpose?
SQL Server supports what are called extended properties, basically a key-value dictionary attached to every object in the catalog. You can add whatever custom information about the catalog (comments on tables, columns, stored procedures, ...) you wish to store as extended properties and query them along with the normal catalog views.
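For example, you could tag each stored procedure with the applications/files that call it, and read the tags back alongside the normal catalog views (the procedure and property names here are hypothetical):

    -- Attach a custom property to a stored procedure.
    EXEC sys.sp_addextendedproperty
         @name       = N'UsedBy',
         @value      = N'Alpha: Beta.aspx (BetaDAL.cs)',
         @level0type = N'SCHEMA',    @level0name = N'dbo',
         @level1type = N'PROCEDURE', @level1name = N'usp_GetBeta';

    -- Query the properties back, joined to the catalog.
    SELECT o.name AS proc_name, ep.name AS property, ep.value
      FROM sys.extended_properties AS ep
      JOIN sys.objects AS o ON o.object_id = ep.major_id
     WHERE ep.name = N'UsedBy';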
Here's one overview (written for SQL Server 2005, but roughly the same techniques should apply for 2000 or 2008).