Neo4j Cypher - Equivalent of DATE_ADD(date, INTERVAL expr unit) - SQL

I have some SQL queries that I want to translate to Cypher. One of my queries contains the function DATE_ADD:
WHERE s_date <= DATE_ADD('2000-12-01', INTERVAL -90 DAY);
Is there an equivalent function in Cypher, please?
Thanks,

You can use APOC for that: https://neo4j-contrib.github.io/neo4j-apoc-procedures/#_adding_subtracting_time_unit_values_to_timestamps
Or, if you are using the temporal features of Neo4j 3.4, you can add a Duration to a Date: RETURN date({year:2018, month:3, day: 31}) + duration('P1D').
For more information, see the documentation: https://neo4j.com/docs/developer-manual/3.4/cypher/syntax/temporal/#cypher-temporal-specifying-durations
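For the original predicate, a minimal Cypher sketch (the :Sale label and the s_date property are assumptions carried over from the SQL above):
MATCH (s:Sale)
WHERE s.s_date <= date('2000-12-01') - duration({days: 90})
RETURN s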
Cheers

Related

Use unaccent Postgres extension in Knex.js queries

I need to make a query against a PostgreSQL database that does not distinguish accents (á, í, ö, etc.).
I already use Knex.js as the query builder, and PostgreSQL has an unaccent extension that works fine in SQL queries run directly against the database, but in my code I use Knex and the unaccent function throws an error in the queries.
Can anyone help me? Is it possible to make queries with Knex.js that use the unaccent function of PostgreSQL?
My solution is to process the string before submitting the query using the following code:
// Decompose to NFD, then strip the combining diacritical marks (U+0300–U+036F)
const normalize = (str) => str.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
console.log(normalize('Ấ Á Ắ Ạ Ê')); // -> 'A A A A E'
Alternatively, PostgreSQL 13 and later ship a built-in normalize() function for Unicode normalization:
select normalize('hồ, phố, ầ', NFD); -- NFC (the default), NFD, NFKC, or NFKD
Note that normalization only decomposes the characters; to actually drop the accents you still need unaccent (or to strip the combining marks afterwards, as in the snippet above).
Documentation: https://www.postgresql.org/docs/13/functions-string.html
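If you do want PostgreSQL to do the work instead, unaccent can also be reached from Knex through a raw fragment. A minimal sketch, assuming the unaccent extension is installed and a hypothetical people table with a name column:
// whereRaw injects the unaccent() SQL call into the Knex-built query
const term = 'Hernandez';
knex('people')
  .select('name')
  .whereRaw('unaccent(name) ILIKE unaccent(?)', [`%${term}%`])
  .then((rows) => console.log(rows));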

Slick Plain SQL Generic Return Type

I am trying to write a configurable SQL query executor using Slick. The user provides a prepared statement with ? placeholders, and at run time the exact query is formed by replacing the ? with values.
Generally, this is how one would run a plain SQL query using Slick:
val query = sql"#$queryString".as[(String,Int)]
In my case I would not know the result type, so I want to get back a generic result type, maybe a list of tuples with each tuple representing a row of the result set.
Any ideas on how this would be done?
I found a solution in one of the Scala GitHub issues. Here it is:
import slick.jdbc.{GetResult, PositionedResult}

object ResultMap extends GetResult[Map[String, Any]] {
  def apply(pr: PositionedResult) = {
    val resultSet = pr.rs
    val metaData = resultSet.getMetaData()
    (1 to pr.numColumns).map { i =>
      // pair each column name with its value for the current row
      metaData.getColumnName(i) -> resultSet.getObject(i)
    }.toMap
  }
}
and then we can simply do val query = sql"#$queryString".as(ResultMap)
Hope it helps!!
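For completeness, a rough sketch of running such a query end to end (the profile, config name, and query string below are placeholders; use whatever matches your database):
import slick.jdbc.PostgresProfile.api._
import scala.concurrent.Await
import scala.concurrent.duration._

val db = Database.forConfig("mydb")               // hypothetical config entry
val queryString = "SELECT id, name FROM users"    // assembled at run time in the real case

// each row comes back as a Map from column name to value
val rows: Seq[Map[String, Any]] =
  Await.result(db.run(sql"#$queryString".as(ResultMap)), 10.seconds)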

Creating User Defined Function in Spark-SQL

I am new to Spark and Spark SQL, and I was trying to query some data using Spark SQL.
I need to fetch the month from a date which is given as a string.
I think it is not possible to query the month directly in Spark SQL, so I was thinking of writing a user defined function in Scala.
Is it possible to write a UDF in Spark SQL, and if so, can anybody suggest the best method of writing one?
You can do this, at least for filtering, if you're willing to use a language-integrated query.
For a data file dates.txt containing:
one,2014-06-01
two,2014-07-01
three,2014-08-01
four,2014-08-15
five,2014-09-15
You can pack as much Scala date magic in your UDF as you want but I'll keep it simple:
def myDateFilter(date: String) = date contains "-08-"
Set it all up as follows -- a lot of this is from the Programming guide.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
// case class for your records
case class Entry(name: String, when: String)
// read and parse the data
val entries = sc.textFile("dates.txt").map(_.split(",")).map(e => Entry(e(0),e(1)))
You can use the UDF as part of your WHERE clause:
val augustEntries = entries.where('when)(myDateFilter).select('name, 'when)
and see the results:
augustEntries.map(r => r(0)).collect().foreach(println)
Notice the version of the where method I've used, declared as follows in the doc:
def where[T1](arg1: Symbol)(udf: (T1) ⇒ Boolean): SchemaRDD
So, the UDF can only take one argument, but you can compose several .where() calls to filter on multiple columns.
Edit for Spark 1.2.0 (and really 1.1.0 too)
While it's not really documented, Spark now supports registering a UDF so it can be queried from SQL.
The above UDF could be registered using:
sqlContext.registerFunction("myDateFilter", myDateFilter)
and if the table was registered
sqlContext.registerRDDAsTable(entries, "entries")
it could be queried using
sqlContext.sql("SELECT * FROM entries WHERE myDateFilter(when)")
For more details see this example.
In Spark 2.0, you can do this:
// define the UDF
def convert2Years(date: String) = date.substring(7, 11)
// register to session
sparkSession.udf.register("convert2Years", convert2Years(_: String))
val moviesDf = getMoviesDf // create dataframe usual way
moviesDf.createOrReplaceTempView("movies") // 'movies' is used in sql below
val years = sparkSession.sql("select convert2Years(releaseDate) from movies")
In PySpark 1.5 and above, we can easily achieve this with built-in functions.
Following is an example:
raw_data = [("2016-02-27 23:59:59", "Gold", 97450.56),
            ("2016-02-28 23:00:00", "Silver", 7894.23),
            ("2016-02-29 22:59:58", "Titanium", 234589.66)]
Time_Material_revenue_df = sqlContext.createDataFrame(raw_data, ["Sold_time", "Material", "Revenue"])
from pyspark.sql.functions import *
Day_Material_revenue_df = Time_Material_revenue_df.select(to_date("Sold_time").alias("Sold_day"), "Material", "Revenue")
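Since the original question was about getting the month out of a date string, the same built-in route covers that too; a small sketch reusing the DataFrame above:
from pyspark.sql.functions import month, to_date

# month() extracts the month number (1-12) from a date column
Month_Material_revenue_df = Time_Material_revenue_df.select(
    month(to_date("Sold_time")).alias("Sold_month"), "Material", "Revenue")
Month_Material_revenue_df.show()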

Alias of joined table in SQLProjection

I have this query:
criteria = session.CreateCriteria(typeof (Building))
.CreateAlias("Estate", "estate")
.SetProjection(Projections.ProjectionList()
.Add(Property.ForName("Name"), "BuildingName")
.Add(Property.ForName("estate.Name"), "EstateName")
.Add(Projections.SqlProjection(
"(estate1_.BBRMunicipalityNumber + '-' + estate1_.BBREstateNumber + '-' + {alias}.BBRBuildingNumber)" + " as BBRNumber",
new[] { "BBRNumber" },
new[] { NHibernateUtil.String }),
"BBRNumber"))
Is there a way that I can get the SQL alias for "estate", like writing {estate} in the SQL string? {estate} does not work. For now I have ended up hardcoding the alias in the SQL string, but that doesn't seem very solid.
If I understand the docs correctly, this should be possible. I'm using NH 2.0.1.
/Asger
Not a direct answer to your question, but:
Why don't you query the three values separately and do the concatenation in your code instead of using the database for that?
To answer your question: in Hibernate v3 (Java, sorry) there is a getColumnAlias method on the Projection interface. I'm not able to find its counterpart in NHibernate.
Cheers
You can use {alias} - it will reference the alias of the current projection.

SQL Query to Hibernate Query

I have a MySQL query that I use to retrieve random rows from a table. The query is:
SELECT * FROM QUESTION WHERE TESTID=1 ORDER BY RAND() LIMIT 10;
Now I need to change this query to Hibernate. Did a bit of googling but couldn't find the answer. Can someone provide help on this?
The random function is different between each underlying DB and is not a standard part of SQL92.
Given that, you will need to register the function in a custom SQL dialect for the database type you are using.
e.g.:
import org.hibernate.Hibernate
import org.hibernate.dialect.function.NoArgSQLFunction

class PostgresSQLDialect extends org.hibernate.dialect.PostgreSQLDialect {
    PostgresSQLDialect() {
        super()
        // expose the database's random() function to HQL under the name "rand"
        registerFunction("rand", new NoArgSQLFunction("random", Hibernate.DOUBLE))
    }
}
Then you will need to define that dialect in the config
hibernate {
dialect='com.mycompany.sql.PostgresSQLDialect'
}
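With the function registered, the original MySQL query can then be written in HQL roughly as follows (the Question entity and its testId property are assumptions mapped from the table in the question):
// rand() now resolves to the function registered in the dialect
Query q = session.createQuery("from Question q where q.testId = :testId order by rand()")
q.setParameter("testId", 1)
q.setMaxResults(10)
List questions = q.list()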
According to this post, you can do that:
String query = "from QUESTION order by newid()";
Query q = session.createQuery(query);
q.setMaxResults(10);
Not sure if it will work (especially for the random part), but you can try it :)