How do I make a data manipulation language like SQL and implement its basic operations, such as
insert, join, natural join?
I tried searching online, but I couldn't find a proper starting point. Most of the results are about writing a SQL parser.
So I wanted to ask:
What is the basic idea behind making a DML?
How should I be manipulating the data?
Which language or platform should I be using to implement it?
If possible, please post links to past work in this field.
The search terms you're looking for are relational algebra and relational calculus. I'd rather not go into too much detail, since this usually takes about 6 weeks to cover in a college databases course.
The basic idea is that SQL is a "relational calculus" in that it describes the result you'd like to achieve. It is the job of the DBMS to compile this into a "relational algebra" plan, which describes how to actually compute that result from the data.
Regarding points 1 and 2 of your question:
I would start off by reading up about some of the theory behind SQL. Chris Date's books (http://en.wikipedia.org/wiki/Christopher_J._Date) are a good place to start.
Regarding point 3:
Presumably you'll have to learn whichever language you pick. I'd choose something modern with nice high-level constructs and built-in string manipulation: Ruby, Python, C#, or Java.
Good luck.
You first need to implement the functions for the basic operations such as projection, filtering, joining, and indexing. Once this functionality is in place, you need to parse SQL and create a query execution plan that calls your API to produce the results. This is of course a very crude description. I would suggest reading the code and documentation of open source databases such as MySQL.
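As a rough, hedged illustration of that pipeline (the table and column names are invented for the example), here is how a declarative query maps onto the basic operations an engine has to implement:

-- The declarative query ("what" you want, relational-calculus style):
SELECT e.name, d.dept_name
FROM employees e
JOIN departments d ON d.id = e.dept_id
WHERE d.location = 'US';

-- One possible relational-algebra plan an engine could compile it to ("how" to get it):
--   1. scan departments and filter rows where location = 'US'   (selection)
--   2. join the result with employees on d.id = e.dept_id       (join)
--   3. keep only the name and dept_name columns                 (projection)

Each numbered step corresponds to one of the basic functions suggested above; the parser and planner sit in front and turn the SQL text into that sequence of calls.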
See "Studying MySQL, SQLite source code to learn about RDBMS implementation" for a similar question.
See also
http://en.m.wikibooks.org/wiki/Design_of_Main_Memory_Database_System
My question is more about practice than a debugging issue.
At work we use a Java EE/Oracle solution, and the least I can say is that we need to write SQL queries, anticipate SQL performance, and handle SQL issues like foreign keys or orphaned rows.
So from my point of view, knowing SQL is very important. For a new project, we are looking to implement the solution in Ruby on Rails, but most of the tutorials and code I see seem to hide every piece of PostgreSQL code behind the Active Record layer. I have already run into similar issues with the Java Hibernate framework and its "no SQL code needed" approach: some production issues were madness, the hidden generated SQL queries were not easy to read, and indexes and foreign keys were never dealt with.
Can anyone tell me what risks we take by using only Active Record?
What is the proper process to avoid the most common Ruby/SQL interface issues?
When have you needed to open your SQL console and type a SQL query by hand?
Please share a little of your experience on these points.
If you have any relevant links on this topic, please post them.
Thank you very much!
You can still use SQL:
either at a low level, where you receive an array of arrays of values;
or a little more high level, where you receive objects, via methods like find_by_sql;
or by providing only SQL fragments, for example for the WHERE clause.
How often you need raw SQL depends on your use case.
Ruby is about objects, SQL is about tables. ActiveRecord maps objects to rows in a table. That works quite well most of the time. All simple queries are handled automatically. You can describe relations between objects, and even the joins needed to retrieve those relations are handled for you.
For queries with several joins or a group_by, it is sometimes easier to write the SQL yourself instead of instructing ActiveRecord to build the SQL you have in mind.
You also need to keep an eye on what SQL is generated, as it is easy to write code that is inefficient, for example by generating many small SQL statements.
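As a hedged illustration of the "many small statements" problem (the tables and columns are invented for the example), compare the queries an object-at-a-time access pattern sends with the single join you usually want:

-- The classic N+1 pattern: one query per parent object
SELECT * FROM posts;
SELECT * FROM comments WHERE post_id = 1;
SELECT * FROM comments WHERE post_id = 2;
-- ...and so on, one statement per post.

-- The same work in a single statement:
SELECT posts.*, comments.*
FROM posts
LEFT JOIN comments ON comments.post_id = posts.id;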
The official Rails guides about models are the most important resource. From an SQL perspective you should have a look at "Active Record Query Interface":
http://guides.rubyonrails.org/active_record_querying.html
I also did a presentation about Rails database optimisation, but it is for Rails 3.2 and a little out of date (joins are now handled better):
http://meier-online.com/en/2012/08/presentation-rails-database/
Can someone tell me why we need to use T-SQL rather than plain SQL when creating reports from a data warehouse?
SQL also has functions and joins, but I see that all of the online tutorials use T-SQL when creating reports from a DW.
Can it be done with SQL? If T-SQL is a must, could you please explain why, in terms of what T-SQL can do that SQL cannot?
Some useful tutorial links for T-SQL and creating reports would be great too!
Thanks in advance~
Regardless of the fact that T-SQL has more functionality than plain SQL, in data warehousing you generally have two main approaches:
Put the business logic close to the data. This way you develop lots of T-SQL functions and apply the many optimizations available there to improve the performance of your ETL. The pro is greater performance of your ETL and reports. The cons are the cost of making changes to the code and the cost of migration. The usual case for a growing DWH is migration to one of the MPP platforms; if you have lots of T-SQL code in MSSQL, you'll have to completely rewrite it, which will cost a lot of money (sometimes even more than the cost of the MPP solution plus the hardware for it).
Put the business logic in an external ETL solution like Informatica, DataStage, Pentaho, etc. This way you operate on the database with pure SQL, and all the complex logic (if needed) is the responsibility of your ETL solution. The pros are the simplicity of making changes (just move the transformation boxes and change their properties in a GUI) and the simplicity of changing platforms. The only con is performance, which is usually up to 2-3x slower than an in-database implementation.
This is why you will either find a tutorial on T-SQL or a tutorial on an ETL/BI solution. SQL is a very general tool (with many ANSI standards behind it) and is the basic skill for any DWH specialist; ANSI SQL is also much simpler, as it does not have any database-specific features.
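To make the difference concrete, here is a small, hedged sketch of the procedural constructs T-SQL adds on top of plain ANSI SQL; the table, columns, and cutoff date are invented for the example:

-- Variables, control flow, and temporary tables: none of these exist in plain ANSI SQL
DECLARE @cutoff DATE = '2015-01-01';

IF EXISTS (SELECT 1 FROM sales WHERE sale_date >= @cutoff)
BEGIN
    -- stage an intermediate result in a temporary table
    SELECT region, SUM(amount) AS total
    INTO #regional_totals
    FROM sales
    WHERE sale_date >= @cutoff
    GROUP BY region;

    SELECT * FROM #regional_totals ORDER BY total DESC;
END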
I am currently building a reporting system: the figures, tables, and graphs are all based on the results of queries. I find that complex queries are not easy to maintain, especially when there is a lot of filtering, which makes the queries very long and hard to understand. Also, queries with similar filters are often executed, producing a lot of redundant code. For example, when I want to select something between '2010-03-10' and '2010-03-15' where the location is 'US' and the customer group is 'ZZ', I need to rewrite these conditions every time I write a query in this scope. Does the DBMS (in my case, MySQL) support any kind of "scope/context" to make the code more maintainable and the queries faster?
Also, is there an industry standard or best practice for designing such applications?
I guess what I am doing is called data mining, right?
Learn how to create views to eliminate redundant code from queries. http://dev.mysql.com/doc/refman/5.0/en/create-view.html
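As a hedged sketch (the table and column names are invented; only the filter values come from the question), the repeated conditions could be captured once in a view:

CREATE VIEW us_zz_march_sales AS
SELECT *
FROM sales
WHERE sale_date BETWEEN '2010-03-10' AND '2010-03-15'
  AND location = 'US'
  AND customer_group = 'ZZ';

-- Every report in this scope can then query the view instead of repeating the filters:
SELECT customer_id, SUM(amount) AS total
FROM us_zz_march_sales
GROUP BY customer_id;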
No, this isn't data mining, it's plain old reporting. Sometimes called "decision support". The bread and butter of information technology. Ultimately, plain old reporting is the reason we write software. Someone needs information to make a decision and take action.
Data mining is a little more specialized in that the relationships aren't easily defined yet. Someone is trying to discover the relationships so they can then write a proper query to make use of the relationship they found.
You won't make a very flexible reporting tool if you are hand coding the queries. Every time a requirement changes you are up to your neck in fiddly code trying to satisfy it - that way lies madness.
Instead you should start thinking about a meta-layer above your query infrastructure that generates the SQL in response to criteria expressed by the user. You could present them with a set of choices from which you generate your queries. If you give a bit of thought to making those choices extensible, you'll be well on your way down the path of the many, many BI and reporting products that already exist.
You might also want to start looking for infrastructure that does this already, such as Crystal Reports (swallowed by Business Objects, swallowed by SAP) or Eclipse's BIRT. Depending on whether you are after a programming exercise or a solution to your users' reporting problems you might just want to grab an off the shelf product which has already had tens of thousands of man years of development, such as one of those above or even Cognos (swallowed by IBM) or Hyperion (swallowed by Oracle).
Best of luck.
I always thought an SQL compiler would break, but apparently nesting can be nearly infinite. Is this code to be trashed immediately, or is there some glimmer of hope that something like this can function?
This query doesn't really belong to me, so I cannot post it... However, let's just pretend it is this one:
[SELECT /*+ NOPARALLEL bypass_recursive_check */
SP_ALIAS_190,
((CASE SP_ALIAS_191
WHEN 1
THEN 'PROVIDER::ALL_PROV::'
WHEN 0]
Clearly, you've never seen the SQL that comes out of the SharePoint DAL.
If the query is generated by a tool (or by code), then it may be relatively simple to maintain (in the sense that the query generation code may in fact be well written and maintainable)
I ran into a problem similar to this recently and I came to a decision by considering a couple of things:
How long is this going to take to maintain vs. rewrite?
How critical is this? There may be a lot of logic that is difficult to unravel, and the value of the fact that "it works" may exceed the value of an immediate rewrite.
And of course, there was the political decision management had to make about the risk of having to explain why something recently created would have to be rewritten.
In the end (for me), find + replace was my friend.
Refactor it using the WITH statement.
Add lots and lots and lots of comments.
If you break it into pieces that can be managed, you stand a much better chance.
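A hedged sketch of what that WITH refactoring looks like (all names are invented; the point is that each common table expression gets a readable name and can be commented on its own):

WITH active_providers AS (
    -- one piece of the nesting, pulled out and named
    SELECT provider_id, provider_name
    FROM providers
    WHERE status = 1
),
provider_totals AS (
    -- the next layer builds on the previous name instead of nesting inside it
    SELECT ap.provider_id, SUM(c.amount) AS total
    FROM active_providers ap
    JOIN claims c ON c.provider_id = ap.provider_id
    GROUP BY ap.provider_id
)
SELECT ap.provider_name, pt.total
FROM provider_totals pt
JOIN active_providers ap ON ap.provider_id = pt.provider_id;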
If it contains a lot of nesting, I would say no.
Like any code in any language, you should only look at rewriting it if you can make it more efficient or easier to understand.
In my experience, I have been able to reduce badly written SQL to a quarter or a fifth of its original size and improve its performance many times over, because the original author really had no idea.
If you think that's bad, you should see Industrial Logic's sample video on code smells: Technical Debt. Definitely not autogenerated.
Is it possible to maintain a 43-page function in, say, C#? The answer is obvious ;). I just cannot imagine it. If I were you, I would break it into smaller parts.
Two things:
Will only machines ever need to read this SQL?
Are you stuck with the underlying schema?
If you have a 43-page query and you answered yes to both questions, welcome to SharePoint development.
I've been asked to support and take on a PostgreSQL app, but am a MySQL guy - is this a realistic task?
PostgreSQL has some nice features like generate_series, custom aggregate functions, arrays etc, which can ease your life greatly if you take some time to learn them.
On the other hand it lacks some features of MySQL like using and assigning session variables in queries, FORCE INDEX, etc., which is quite annoying if you are used to these features.
If you just use basic SQL, then you will hardly notice any difference.
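As a small, hedged taste of the PostgreSQL features mentioned above (the table and column names are invented), generate_series and array aggregation cover cases that take noticeably more work in MySQL:

-- one row per day in a range, handy for left-joining against sparse data
SELECT d::date AS day
FROM generate_series('2014-01-01'::date, '2014-01-07'::date, interval '1 day') AS d;

-- collapse each group's values into an array
SELECT customer_id, array_agg(order_id) AS orders
FROM orders
GROUP BY customer_id;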
How different is PostgreSQL to MySQL?
That depends on whether you're talking about SQL only (which is mostly the same) or stored procedures (which are quite different).
is this a realistic task?
Absolutely. PostgreSQL has very good documentation and a strong community. There are also a lot of people who have experience with both MySQL and PostgreSQL.
"MySQL vs PostgreSQL wiki" — centers on "which is better", but gives you some idea of differences.
Comparing PostgreSQL to MySQL is like comparing any other pair of DBMSs. What they have in common is non-functional, specifically the consequences of each being open source. In terms of features, use, and strengths they are no closer to each other than PostgreSQL is to Oracle or DB2 is to Sybase.
Now on to your real question: you are a SQL guy, albeit one who has not yet had experience with PostgreSQL. This is a completely realistic task for you, and a good one since you'll expand your understanding of the varieties of DBMSs and gain a perspective on MySQL that you can't get from working solely within its sphere.
As someone who was once in exactly the same position, my guess is that you'll quickly pick up PostgreSQL and might even hesitate to return to MySQL ;-).
If you're interested in the different flavors of SQL, here are a few resources (though some may be outdated):
SQLZoo
SQL Dialects Reference Wikibook
Tips on Writing Portable SQL
SQL Bible
You may want to take a look at these pages:
Why PostgreSQL Instead of MySQL: Comparing Reliability and Speed in 2007, Why PostgreSQL Instead of MySQL 2009.
I faced the same situation about a month ago... I have been doing fine with Postgres. There is a strong online community for Postgres, and you should be able to find help if you run into any trouble and learn things easily :)
It didn't take me very long to switch from MySQL to PostgreSQL back when I first started using PostgreSQL in anger at a previous company. I found it very nice and refreshing (not that MySQL was bad) compared to the MySQL I had used previously. PostgreSQL was also a good stepping stone to Oracle, which I use at my current company. I liked that it had a proper command-line application like MySQL's; the configuration options are harder, but if you're not the one setting it up then there is no problem.