I want to send push notifications from my Ionic app. I wrote Parse Cloud Code and call it from TypeScript, but neither is working. My requirement is to send push notifications to all devices and also to a specific device. Please review my code below and help me.
My Cloud Code:
Parse.Cloud.define("send", (request) => {
  return Parse.Push.send({
    channels: ["News"],
    data: {
      title: "Hello from the Cloud Code",
      alert: "Back4App rocks!",
    }
  }, { useMasterKey: true });
});
My TypeScript code calling the Cloud Code:
Parse.Cloud.run('send').then(function (ratings) {
  debugger;
  console.log("updated"); // result should be 'Update object successfully'
}).catch((error) => {
  console.log(error);
  console.log("fail");
});
You should add a query parameter. That way the push knows which user to send the push to.
You don't need the query parameter for channels. Does your Installation have the channel?
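For the "specific device" half of the question, the push payload takes a where query over the Installation class in place of channels. Below is a hedged sketch that calls the standard Parse REST push endpoint from Java; the Back4App URL, both keys, and the deviceToken value are placeholders, not values taken from the question:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PushToOneDevice {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and keys -- substitute your app's values.
        URL url = new URL("https://parseapi.back4app.com/push");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("X-Parse-Application-Id", "YOUR_APP_ID");
        con.setRequestProperty("X-Parse-Master-Key", "YOUR_MASTER_KEY");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);

        // A "where" query on Installation targets one device instead of a channel.
        String body = "{"
                + "\"where\": {\"deviceToken\": \"TARGET_DEVICE_TOKEN\"},"
                + "\"data\": {\"title\": \"Hello\", \"alert\": \"Sent to one device\"}"
                + "}";
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + con.getResponseCode());
    }
}
The same where/data shape can be passed to Parse.Push.send in the Cloud Code function above, together with useMasterKey.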
Feel free to open an issue on the JS SDK.
Ideally I need a script that outputs the following information in a CSV format that's easy to import into Excel:
job name,number of times run in last year,number of times run overall,last run status
For each job, no individual run details should be output.
Tried this on my Jenkins:
"List Jenkins job build details for last one year along with the user who triggered the build"
but got an error:
java.lang.NullPointerException: Cannot invoke method getShortDescription() on null object
at org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:91)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
Any idea what in the Groovy needs changing? Or is there a better solution?
Thanks all!
Thanks to @daggett and @ian. Both worked.
I went with Ian's:
def jobNamePattern ='.*' // adjust to folder/job regex as needed
def daysBack = 365 // adjust to how many days back to report on
def timeToDays = 24*60*60*1000 // milliseconds per day
println "Job Name: ( # builds: last ${daysBack} days / overall ) Last Status\n  Number | Trigger | Status | Date | Duration\n"
Jenkins.instance.allItems.findAll() {
it instanceof Job && it.fullName.matches(jobNamePattern)
}.each { job ->
builds = job.getBuilds().byTimestamp(System.currentTimeMillis() - daysBack*timeToDays, System.currentTimeMillis())
println job.fullName + ' ( ' + builds.size() + ' / ' + job.builds.size() + ' ) ' + job.getLastBuild()?.result
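  // To get the one-line-per-job CSV layout asked for above (no per-run details),
  // an untested variant is to replace the println above with something like:
  // println([job.fullName, builds.size(), job.builds.size(), job.getLastBuild()?.result].join(','))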
// individual build details
builds.each { build ->
println ' ' + build.number + ' | ' + build.getCauses()[0]?.getShortDescription() + ' | ' + build.result + ' | ' + build.getTimestampString2() + ' | ' + build.getDurationString()
}
}
return
I'm working on a project where I want to upload a CSV file containing data for a table I have in the database, using a LOAD DATA LOCAL query. When I test the query with the file's full path (example C:/.../file.csv) directly in the query, it works without problem. I wanted to use the PrimeFaces p:fileUpload component, but when I choose a CSV file from my desktop or from another directory on my PC, it only returns the name of the selected file and not the full path, and I get this error:
org.hibernate.exception.GenericJDBCException: could not execute statement
java.io.FileNotFoundException: extraction1.csv (The specified file can not be found)
That is because the full path with its folder name is missing. What I want is to get the file name together with the directory it was selected from, so that my query can execute correctly. As shown in the code below, I would like the path of the selected file to be returned along with the file name. Thanks.
prelevServ.importToDB("C:\\Users\\helyoubi\\Desktop\\Japon 2\\"+fichierUpload.getFileName());
My JSF form:
<h:form enctype="multipart/form-data">
<p:growl id="messages" showDetail="true" />
<p:fileUpload label="Choisir" value="#{importFichier.fichierUpload}" mode="simple" skinSimple="true"/>
<p:separator/>
<p:commandButton value="Envoyer" ajax="false" action="#{importFichier.importation}" />
</h:form>
My managed bean:
@ManagedBean
public class ImportFichier implements Serializable{
/**
*
*/
private static final long serialVersionUID = 1L;
private UploadedFile fichierUpload;
private PrelevementServices prelevServ = new PrelevementServicesImpl();
public UploadedFile getFichierUpload() {
return fichierUpload;
}
public void setFichierUpload(UploadedFile fichierUpload) {
this.fichierUpload = fichierUpload;
}
public void importation() {
if(fichierUpload.getFileName()!= null) {
//prelevServ.importToDB("C:\\Users\\helyoubi\\Desktop\\Japon 2\\"+fichierUpload.getFileName());
prelevServ.importToDB(fichierUpload.getFileName());
FacesMessage message = new FacesMessage("Succesful", fichierUpload.getFileName()+ " is uploaded.");
FacesContext.getCurrentInstance().addMessage(null, message);
}else {
FacesMessage message = new FacesMessage("Le chemin du fichier : "+fichierUpload.getFileName()+" est introuvable");
FacesContext.getCurrentInstance().addMessage(null, message);
}
System.out.println("CSV added to your the DB Table");
}
}
My query:
@Override
public void importToDB(String cheminFichier) {
session.beginTransaction();
session.createSQLQuery("LOAD DATA LOCAL INFILE :filename INTO TABLE Prelevement_saisons FIELDS TERMINATED BY ',' ENCLOSED BY '\"'(espece,saison,departement,commune,code,attributions,realisations)").setString("filename", cheminFichier).executeUpdate();
session.getTransaction().commit();
}
Try sending .getPath():
prelevServ.importToDB(fichierUpload.getPath());
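If your PrimeFaces version's UploadedFile exposes no usable path (browsers only send the file name), a common workaround is to copy the uploaded stream to a server-side temp file and hand that absolute path to LOAD DATA LOCAL INFILE. A minimal sketch for the managed bean above, assuming the classic org.primefaces.model.UploadedFile API with getInputStream():
// fichierUpload and prelevServ as declared in the bean above
public void importation() throws Exception {
    if (fichierUpload != null && fichierUpload.getFileName() != null) {
        // Copy the uploaded bytes to a temp file the server can actually read.
        java.nio.file.Path tmp = java.nio.file.Files.createTempFile("upload-", ".csv");
        try (java.io.InputStream in = fichierUpload.getInputStream()) {
            java.nio.file.Files.copy(in, tmp, java.nio.file.StandardCopyOption.REPLACE_EXISTING);
        }
        // Pass the server-side absolute path, not the client's file name.
        prelevServ.importToDB(tmp.toAbsolutePath().toString());
        java.nio.file.Files.deleteIfExists(tmp); // clean up after the import
    }
}
LOAD DATA LOCAL INFILE reads the file on the client side of the MySQL connection, which here is the application server running Hibernate, so a server-side temp file is exactly what it needs; a browser-side path can never work.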
I am running Spark locally (I am not using Mesos), and when running joins such as d3=join(d1,d2) and d5=join(d3,d4) I get the following exception: "org.apache.spark.SparkException: Exception thrown in awaitResult".
Googling for it, I found the following two related links:
1) https://github.com/apache/spark/commit/947b9020b0d621bc97661a0a056297e6889936d3
2) https://github.com/apache/spark/pull/12433
which both explain why it happens, but say nothing about how to solve it.
A bit more about my running configuration:
1) I am using spark-core_2.11 and spark-sql_2.11.
2) My SparkSession configuration:
SparkSession spark = SparkSession
    .builder()
    .master("local[6]")
    .appName("DatasetForCaseNew")
    .config("spark.executor.memory", "4g")
    .config("spark.shuffle.blockTransferService", "nio")
    .getOrCreate();
3) My build method:
public Dataset<Row> buildDataset(){
...
// STEP A
// Join prdDS with cmpDS
Dataset<Row> prdDS_Join_cmpDS
= res1
.join(res2, (res1.col("PRD_asin#100")).equalTo(res2.col("CMP_asin")), "inner");
prdDS_Join_cmpDS.take(1);
// STEP B
// Join prdDS with cmpDS
Dataset<Row> prdDS_Join_cmpDS_Join
= prdDS_Join_cmpDS
.join(res3, prdDS_Join_cmpDS.col("PRD_asin#100").equalTo(res3.col("ORD_asin")), "inner");
prdDS_Join_cmpDS_Join.take(1);
prdDS_Join_cmpDS_Join.show();
}
The exception (see below for the stack trace) is thrown when the computation reaches STEP B; everything up to STEP A is fine.
Is there anything wrong or missing?
Thanks for your help in advance.
Best Regards,
Carlo
=== STACK TRACE
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 422.102 sec <<< FAILURE!
testBuildDataset(org.mksmart.amaretto.ml.DatasetPerHourVerOneTest) Time elapsed: 421.994 sec <<< ERROR!
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:102)
at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:229)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:125)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:125)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:124)
at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:98)
at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:197)
at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:82)
at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:153)
at org.apache.spark.sql.execution.joins.SortMergeJoinExec.consume(SortMergeJoinExec.scala:35)
at org.apache.spark.sql.execution.joins.SortMergeJoinExec.doProduce(SortMergeJoinExec.scala:565)
at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
at org.apache.spark.sql.execution.joins.SortMergeJoinExec.produce(SortMergeJoinExec.scala:35)
at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:77)
at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:78)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:78)
at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:38)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:304)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:343)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:240)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:323)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2122)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2436)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2121)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2128)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1862)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1861)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2466)
at org.apache.spark.sql.Dataset.head(Dataset.scala:1861)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2078)
at org.mksmart.amaretto.ml.DatasetPerHourVerOne.buildDataset(DatasetPerHourVerOne.java:115)
at org.mksmart.amaretto.ml.DatasetPerHourVerOneTest.testBuildDataset(DatasetPerHourVerOneTest.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
... 85 more
Just to add to what Carlo said, I used the following line of code:
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", "-1")
I was running into this problem... none of the online searches led to the right solution...
Well, adding sparkConf.set("spark.sql.autoBroadcastJoinThreshold", "-1") solves the problem...
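For reference, with the SparkSession builder from the question the same setting can be applied up front. A minimal sketch; only the autoBroadcastJoinThreshold line is new relative to the question's configuration:
import org.apache.spark.sql.SparkSession;

public class NoBroadcastJoin {
    public static void main(String[] args) {
        // -1 disables automatic broadcast joins, so Spark falls back to
        // sort-merge joins instead of timing out while broadcasting.
        SparkSession spark = SparkSession
                .builder()
                .master("local[6]")
                .appName("DatasetForCaseNew")
                .config("spark.executor.memory", "4g")
                .config("spark.sql.autoBroadcastJoinThreshold", "-1")
                .getOrCreate();

        // Or, on an already-built session:
        spark.conf().set("spark.sql.autoBroadcastJoinThreshold", "-1");
    }
}
If the broadcast itself is wanted, raising spark.sql.broadcastTimeout (default 300 seconds, which matches the "Futures timed out after [300 seconds]" in the stack trace) is the other common knob.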
I'm using a Swing ProgressMonitor like this:
ProgressMonitor progressMonitor = new ProgressMonitor(frame, "", "", 0, 100);
progressMonitor.setProgress(0);
...
progressMonitor.setProgress(100);
This works fine for me, but now I want to change the title of this progress monitor. Currently it's "Progress...".
You can set the title of the dialog window through the UIManager:
String title = "Foobar";
UIManager.put("ProgressMonitor.progressText", title);
ProgressMonitor progressMonitor = new ProgressMonitor(frame, "", "", 0, 100);
...
You can do this:
monitor.beginTask("Running Job..", IProgressMonitor.UNKNOWN);
monitor.beginTask("Job #1");
But I don't believe the actual title of the box can change.
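For the Swing ProgressMonitor the question is about, here is a runnable sketch tying the UIManager call to the original snippet (the title text and the simulated work loop are illustrative):
import javax.swing.JFrame;
import javax.swing.ProgressMonitor;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

public class MonitorTitleDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(300, 100);
            frame.setVisible(true);

            // Must run before the monitor's dialog is created.
            UIManager.put("ProgressMonitor.progressText", "Copying files");
            ProgressMonitor monitor = new ProgressMonitor(frame, "Working...", "", 0, 100);

            // Simulate background work; push UI updates back onto the EDT.
            new Thread(() -> {
                for (int i = 0; i <= 100; i += 10) {
                    final int progress = i;
                    SwingUtilities.invokeLater(() -> monitor.setProgress(progress));
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }).start();
        });
    }
}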
I am trying to make an update between Flex and Rails with something like this:
// TODO Auto-generated method stub
var strXml:XML = <test>
<test_id>{txtMarketId.text}</test_id>
<market_name>{txtMarketName.text}</market_name>
</test>;
req = new URLRequest("http://localhost:3000/tests/"+market_id);
jrwloader = new URLLoader();
params = new URLVariables();
req.data = strXml.toString();
req.method = URLRequestMethod.POST;
req.requestHeaders.push(new URLRequestHeader("X-HTTP-Method-Override", URLRequestMethod.PUT));
jrwloader.load(req);
In my Rails app I got this:
Started PUT "/markets/2" for 10.10.10.10 at 2012-09-08 18:37:24 +0000
Processing by TestController#update as */*
Parameters: {"test_id:2, market_name:test"=>nil, "id"=>"2"}
WARNING: Can't verify CSRF token authenticity
Market Load (0.2ms) SELECT `tests`.* FROM `test` WHERE `test`.`test_id` = 2 LIMIT 1
(0.1ms) BEGIN
(0.1ms) COMMIT
Redirected to http://localhost:3000/tests/2
Completed 302 Found in 3ms (ActiveRecord: 0.4ms)
I think I am close to the answer, but the new data is not being saved to the database.
You should add a CSRF token to your request.
Here is more on the topic: http://guides.rubyonrails.org/security.html#cross-site-request-forgery-csrf
I solved it! The main issue was just adding one line to my request. So in the end I got something like this:
req = new URLRequest("http://localhost:3000/tests/"+market_id);
jrwloader = new URLLoader();
params = new URLVariables();
req.data = strXml.toString();
req.method = URLRequestMethod.POST;
req.requestHeaders.push(new URLRequestHeader("X-HTTP-Method-Override", URLRequestMethod.PUT));
req.requestHeaders.push(new URLRequestHeader("Content-type", 'application/xml')); // <-- the added line
jrwloader.load(req);
Thanks all!