How to find a cube root without using any loop or conditional statement - operators

I am looking for a way to find a cube root without using any loop or conditional statement. I am currently learning C and have only covered basic operators and bit manipulation. My professor told me there is a way to find a cube root without using any loop or conditional statement. I tried pow first, but he told me not to use it, saying I could only use basic operators. Is there any way to do that? I have already searched Stack Overflow for two hours but haven't found a clue.
I tried to find a way using a loop, but I don't know how to do it without one.

My guess:
x^(1/3) = y  ==>  y = 2^(log2(x)/3)
To calculate log2(x), see "Feynman's algorithm" at https://en.wikipedia.org/wiki/Logarithm
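As a sketch of the identity above (in Python for brevity, though the course is C; in C the same thing would be exp2(log2(x)/3.0)):

```python
import math

def cbrt(x):
    # x^(1/3) = 2^(log2(x) / 3); no loop or conditional needed.
    # Only valid for x > 0 (log2 is undefined otherwise).
    return 2.0 ** (math.log2(x) / 3.0)
```

Note this still leans on library log/exp functions, so it may not satisfy a strict "basic operators only" constraint; Feynman's algorithm is one way to compute log2 itself using only shifts and adds.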


When is using VALUES in SPARQL not equivalent to replacing directly the variable with a URI in the query itself?

Following the question at Difference in performance between using VALUES keyword and using directly the URI in the query?, I learned that using a VALUES clause at the end of the query is not always equivalent, in terms of performance and query optimization, to putting the URI directly in place of the variable in the query string.
A comment from Andy says: "VALUES at the end is 'like setting variables' but isn't the same. The optimizer tries to push the values in, but that can't happen in all cases as it changes the semantics."
Can someone explain in which cases this can't happen? For which query structures, and why exactly? I need to understand in which situations this technique (which I have happily used for years now) is not advisable.
Note that I am not fluent with SPARQL algebra, so please try using simple words :-)
(I know this is not specific to Jena or RDF4J, but I tagged the question with these two tags since I understand the optimization of this might differ depending on the framework used.)

String-Matching Automaton

So I am trying to find the occurrences of s in d, with s = "infinite" and d = "ininfinintefinfinite ", using a finite automaton. The first step is to construct a state diagram for s, but I am having trouble identifying the occurrences of the string pattern. I am quite confused about this; if someone could explain this topic a little, it would be really helpful.
You should be able to use regular expressions to accomplish your goal. Every programming language I've seen has support for regular expressions, and they should make your task much easier. You can test your regexes at https://regexr.com/, which also provides a reference sheet. For your example, the regex /(infinite)/ should do the trick.
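If the assignment actually requires building the automaton (rather than using a regex engine), here is a sketch in Python of the standard KMP-style construction: each state q means "the last q characters read match the first q characters of the pattern", and the transition table is filled in by tracking a "restart" state. Function names are my own.

```python
def build_dfa(pattern, alphabet):
    """dfa[q][c] = next state after reading c in state q; state len(pattern) = accept."""
    m = len(pattern)
    dfa = [{c: 0 for c in alphabet} for _ in range(m + 1)]
    dfa[0][pattern[0]] = 1
    x = 0  # restart state: where the automaton would be after the matched prefix minus its first char
    for q in range(1, m + 1):
        for c in alphabet:
            dfa[q][c] = dfa[x][c]        # on mismatch, behave like the restart state
        if q < m:
            dfa[q][pattern[q]] = q + 1   # on match, advance to the next state
            x = dfa[x][pattern[q]]       # update the restart state
    return dfa

def find_occurrences(text, pattern):
    m = len(pattern)
    dfa = build_dfa(pattern, set(text) | set(pattern))
    q, hits = 0, []
    for i, c in enumerate(text):
        q = dfa[q][c]
        if q == m:                       # accept state: a copy of pattern ends at index i
            hits.append(i - m + 1)
    return hits
```

For your example, find_occurrences("ininfinintefinfinite ", "infinite") reports a single occurrence starting at index 12; because state m keeps the restart transitions, overlapping occurrences are found too.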

Report Earliest Item in List

I am using Snap! to try to find the earliest item in a list. For instance, in list [3,1,2], I would like to report "1." I would like the solution to work for words as well (for instance, given list [Bob, George, Ari] report "Ari").
I tried to use recursion to solve the problem
and the solution works. However, I cannot find a way to do so recursively without the second "if else" statement. Is there a way to use recursion to solve this problem without the "if 0= length of..." statement?
Play with it here.
I don't see a way to do this without two if...else statements. You need two checks:
Is the list exhausted?
Is the first element less than all the following elements?
In some languages, you can use the conditional ternary operator ?:, but I don't think Snap! supports that. It's really just syntactic sugar for an if...else anyway.
You can do some clean-up on this function, though.
I recommend explicitly handling the case of a zero-length list.
"Earliest" is confusing. I recommend the term "least", since you're checking with the "less than" operator.
Don't call keep items such that [] from [] multiple times. This is inefficient and potentially a bug if someone modifies one line but forgets to modify the other. Instead, save the result in a script variable.
Don't compare the current first element to every element in the list. This gives the function an O(n^2) run time. Instead, compare it only to the least element so far. This reduces the run time to O(n).
Some of these changes are implemented here:
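Since Snap! blocks don't paste well as text, here is a rough Python sketch (names mine) of the recursive shape described above: one base-case check for a single-element list, and one comparison of the first element against the least of the rest, giving O(n) total.

```python
def least(items):
    # Base case: a one-element list is its own least element.
    # (A real implementation should also handle the empty list explicitly.)
    if len(items) == 1:
        return items[0]
    rest_least = least(items[1:])  # least element of everything after the first
    # Compare the first element only to the least of the rest, not to every element.
    return items[0] if items[0] < rest_least else rest_least
```

This still needs two conditionals (the base case and the comparison), which matches the point above that both checks are unavoidable. It works for words as well as numbers because Python's < compares strings lexicographically, as Snap!'s "less than" does.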

Is it possible to refer to queries defined in a previous %%sql module in a later module?

I just started working with the new Google Cloud Datalab and IPython last week (though I've been using BigQuery for a few months). The tutorials and samples in github are very helpful, but as my scripts and queries become more complex I'm wondering a few things. The first one is this: can I refer to queries defined in one %%sql module in a later %%sql module? The other, somewhat related question is can I somehow store the results from one %%sql module and then put that information into something like an IN clause in a subsequent %%sql module?
Here are some things to try to see if they meet your needs. If they don't, I welcome you to file issues on GitHub, as I think both of your scenarios are things we want to make sure work well.
For the first, it requires a combination of sql cells and code cells [for now]
Cell 1
%%sql --module m1
DEFINE QUERY q1
SELECT ...
Cell 2
%%sql --module m2
DEFINE QUERY q2
SELECT ... FROM $src ...
Cell 3
import gcp.bigquery as bq
compositequery = bq.Query(m2.q2, src = m1.q1)
Essentially, %%sql modules are turned into auto-imported python modules behind the scenes.
I used to split queries out into one %%sql cell each myself, but since the introduction of modules I also, depending on the scenario, define multiple queries within a single module, in which case you don't need a bit of Python code to stitch them together. Which is better depends on your scenario.
For your second question, again, if the queries are split across cells, you'll need some python glue in the middle. Execute one query, get its result, and use that as a parameter for the next query. This would work for general scalar values, but for IN clauses and tuples/lists of values, we have this issue we need to address: https://github.com/GoogleCloudPlatform/datalab/issues/615
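Until that issue is addressed, one workaround for the IN-clause case is a small piece of Python glue that formats the first query's result values into a SQL IN list by hand. This is a sketch with my own helper name, and it is deliberately simplified (strings and numbers only, single quotes escaped by doubling):

```python
def to_in_clause(values):
    """Render a Python list as a SQL IN (...) list.
    Simplified: handles strings (quotes doubled) and numbers only."""
    def lit(v):
        if isinstance(v, str):
            return "'" + v.replace("'", "''") + "'"
        return str(v)
    return "(" + ", ".join(lit(v) for v in values) + ")"
```

You would then splice the resulting string into the text of the second query before executing it. Treat this as a stopgap: building SQL by string concatenation is exactly the kind of thing proper parameter support should replace.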
For more ideas on how you can use JOINs in BigQuery to produce scalar results in one query that you consume in the next query, you can also see the query under Step 3 in the BigQuery tutorial notebook titled "SQL Query Composition".
Hope that helps.
As mentioned, if you hit specific issues where something didn't work as you expected, please do file an issue, and we can see if it makes sense to address it; you or someone else might even step up to make a contribution. :)

Regular expression to match common SQL syntax?

I was writing some Unit tests last week for a piece of code that generated some SQL statements.
I was trying to figure out a regex to match SELECT, INSERT and UPDATE syntax so I could verify that my methods were generating valid SQL, and after 3-4 hours of searching and messing around with various regex editors I gave up.
I managed to get partial matches but because a section in quotes can contain any characters it quickly expands to match the whole statement.
Any help would be appreciated, I'm not very good with regular expressions but I'd like to learn more about them.
By the way it's C# RegEx that I'm after.
Clarification
I don't want to need access to a database, as this is part of a unit test and I don't want to have to maintain a database to test my code, which may live longer than the project.
Regular expressions can match only the languages that a finite state automaton can parse, which is very limited, whereas SQL is defined by a full grammar. It can be demonstrated that you can't validate SQL with a regex. So you can stop trying.
SQL is described by a type-2 (context-free) grammar; it is too powerful to be captured by regular expressions. It's the same as if you decided to generate C# code and then validate it without invoking a compiler. A database engine in general is too complex to be easily stubbed.
That said, you may try ANTLR's SQL grammars.
As far as I know this is beyond regex, and you're getting close to the dark arts of BNF and compilers.
http://savage.net.au/SQL/
The same thing happens to people who want to do correct syntax highlighting: you start cramming things into regexes, and then you end up writing a compiler...
I had the same problem. An approach that works for most standard SQL statements is to spin up an in-memory SQLite database and issue the query against it; if you get back a "table does not exist" error, then your query parsed properly.
Off the top of my head: Couldn't you pass the generated SQL to a database and use EXPLAIN on them and catch any exceptions which would indicate poorly formed SQL?
Have you tried the lazy quantifiers? Rather than matching as much as possible, they match as little as possible, which is probably what you need for quotes.
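To illustrate the difference (shown in Python here, but lazy quantifiers behave the same way in C#'s Regex): the greedy '.*' eats everything between the first quote and the last, while the lazy '.*?' stops at the first closing quote.

```python
import re

sql = "SELECT 'a', 'b' FROM t"

greedy = re.findall(r"'.*'", sql)   # matches from the first quote to the last
lazy   = re.findall(r"'.*?'", sql)  # matches as little as possible per quote pair
```

Here greedy produces the single over-long match "'a', 'b'", while lazy correctly yields the two quoted sections "'a'" and "'b'".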
To validate the queries, just run them with SET NOEXEC ON; that is how Enterprise Manager does it when you parse a query without executing it.
Besides, if you are using regexes to validate SQL queries, you can be almost certain that you will miss some corner cases, or that a query is invalid for other reasons even when it is syntactically correct.
I suggest creating a database with the same schema, possibly using an embedded sql engine, and passing the sql to that.
I don't think you even need the schema created to validate the statement, because the system will not try to resolve object names etc. until it has successfully parsed the statement.
With Oracle as an example, you would certainly get an error if you did:
select * from non_existant_table;
In this case, "ORA-00942: table or view does not exist".
However if you execute:
select * frm non_existant_table;
Then you'll get a syntax error, "ORA-00923: FROM keyword not found where expected".
It ought to be possible to classify errors into parsing errors that indicate incorrect syntax, and errors relating to table names, permissions, etc.
Add to that the problem of different RDBMSs and even different versions allowing different syntaxes and I think you really have to go to the db engine for this task.
There are ANTLR grammars to parse SQL. Still, it's really a better idea to use an in-memory database or a very lightweight database such as SQLite. It seems wasteful to me to test only whether the SQL is valid from a parsing standpoint; it is much more useful to check the table and column names and the specifics of your query.
The best way is to validate the parameters used to build the query rather than the query itself. A function that receives the variables can check string lengths, valid numbers, valid emails, or whatever. You can use regular expressions for these validations.
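For example (a sketch with my own helper names, shown in Python; the patterns are deliberately simplified, not RFC-complete):

```python
import re

def is_valid_email(s):
    # Simplified shape check: something@something.tld, no whitespace or extra @.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None

def is_valid_int(s):
    # Optional leading minus, then digits only.
    return re.fullmatch(r"-?\d+", s) is not None
```

Validating each input before it ever reaches the query builder keeps the regexes small and tractable, which is exactly where regular expressions are strong, unlike whole-statement SQL validation.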
public bool IsValid(string sql)
{
    // Very rough shape check: SELECT ... FROM ... WHERE ..., in order.
    string pattern = @"SELECT\s.*FROM\s.*WHERE\s.*";
    Regex rgx = new Regex(pattern, RegexOptions.IgnoreCase);
    return rgx.IsMatch(sql);
}
I am assuming you did something like .* between the quotes; try [^"]* instead, which will keep the match from eating the whole line. It will still give false positives in cases where your strings contain escaped quotes.