Debugging Parse Cloud Code - parse-server

What would be the best way to debug Parse Cloud Code? Currently it's a mess of logging to the console and checking logs. Does anyone have a good workable solution?

During development, you should begin by testing against a locally hosted server. I use VS Code, for example; you can set breakpoints and watch variables for their values. You can also set up a tool like ngrok to get a remote URL for your local endpoint, so you can test with remotely hosted clients if you'd like.
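If you launch parse-server under the VS Code debugger, breakpoints in your cloud code just work. A minimal .vscode/launch.json sketch, assuming your server's entry point is index.js (adjust the path for your project):

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug parse-server",
      "program": "${workspaceFolder}/index.js"
    }
  ]
}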
We also use Slack extensively. We've created our own Slack bot, and it has several channels it reports relevant information to, triggered from our parse-server. One of these is a dev error channel. Instead of console.logs, which are hard to sift through to find what you're looking for, we push important information to Slack. We don't switch every single console.log to a Slack message, just the important "hey, something went wrong, here's the information" messages. This brings them to our attention so we can identify and resolve them much faster. Slack is awesome. I recommend using Slack, even on a solo project.
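For illustration, here is a minimal sketch of pushing an error report to Slack from cloud code via an Incoming Webhook. The webhook path and the "placeOrder" function name are placeholders, not from our actual setup:

const https = require("https");

function reportToSlack(message) {
    const req = https.request({
        host: "hooks.slack.com",
        path: "/services/T000/B000/XXXX", // placeholder - use your own webhook path
        method: "POST",
        headers: { "Content-Type": "application/json" }
    });
    req.on("error", (err) => console.error("Slack report failed:", err));
    req.end(JSON.stringify({ text: message })); // Slack expects {"text": "..."}
}

Parse.Cloud.define("placeOrder", async (request) => {
    try {
        // ... normal business logic ...
    } catch (e) {
        // only the important "something went wrong" messages go to Slack
        reportToSlack("placeOrder failed: " + e.message);
        throw e; // still surface the error to the client
    }
});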

At the moment you can access your logs in two places. console.log() and console.error() calls in your functions, along with all general logs of everything that happens with your app, go to the server system log; at Back4App you can access it via Server Settings -> Logs -> Settings -> Server System Log.
For the logs generated through Parse Server itself, that is request.log.info() and request.log.error(), at Back4App you can access them via Dashboard -> Logs.
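As a quick sketch of the difference (the "logDemo" function name is made up for illustration):

Parse.Cloud.define("logDemo", async (request) => {
    console.log("ends up in the server system log");       // Server Settings -> Logs
    request.log.info("ends up in the Parse Server logs");  // Dashboard -> Logs
    request.log.error("as does this, at the error level");
    return "ok";
});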

Related

"The search engine appears to be down or failing to respond to the search query"

I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to login they will be presented with the above error stating that the search engine is not available. If they try again immediately then everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is twofold:
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
Elasticsearch provides full text search of user data. Each time a user is created or updated the user is re-indexed. In this case during login, we are updating the search index with the last login instant.
This service is required and cannot be disabled. We have had clients request that this service be made optional for embedded applications or small-scale scenarios where Elasticsearch may not be required. While this is not currently planned, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure, I am not a Docker or Docker Swarm expert at all - perhaps there are some nuances to Swarm and response time due to spin-up and spin-down of resources?
Do you see any exceptions in the log when a user sees this error on the login?
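One thing worth checking while reproducing the error: hit Elasticsearch's cluster health endpoint from the FusionAuth node, assuming Elasticsearch is reachable on its default port (adjust the host and port for your Swarm overlay network):

curl http://localhost:9200/_cluster/health?pretty

A "yellow" or "red" status, or an intermittently slow response here, would point at the search engine rather than FusionAuth itself.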

Connecting IntelliJ Idea Servers to GitLab.com: what info is actually needed?

I'm trying to configure IntelliJ IDEA 2017.1.2 in order to get the tasks from a private repository on GitLab.com.
To do that I have to create the corresponding entry in the Servers window.
Now, I don't have the faintest idea of how to fill in the Servers form in IDEA.
What URL do I have to use for Server URL?
What token?
Any advice? Thanks in advance.
UPDATE: Based on the information mentioned in the issue IDEA-193736, the connectivity problem with the new GitLab Issues API (V4) should be fixed when the update 2018.2 is released.
The https://gitlab.com URL didn't work for me as the API URL was updated to V4 on GitLab. So, after some trial and error I was able to make it work by completing the following steps:
Create a Personal Access Token on GitLab (https://gitlab.com/profile/personal_access_tokens) with API and read_user access permissions
In IntelliJ (or Pycharm in my case), the Server URL should be https://gitlab.com/api/v4/issues? (with the question mark at the end)
The token is the Personal Access Token that was generated previously
Also, don't forget to increase the connection timeout to 15000 milliseconds under the Tasks section in the Settings (Settings => Tools => Tasks).
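To sanity-check the token outside the IDE, here is a minimal Node.js sketch against the same v4 API (replace <YOUR_TOKEN> with the Personal Access Token from the first step):

const https = require("https");

https.get({
    host: "gitlab.com",
    path: "/api/v4/issues",
    headers: { "PRIVATE-TOKEN": "<YOUR_TOKEN>" }
}, (res) => {
    console.log("HTTP status:", res.statusCode); // 200 means the token is accepted
});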
[Screenshot: Task Server settings]
Hope it helps someone else.
[EDIT] This answer was valid in '17, when it was created. For an up-to-date answer, please see the other answers in the thread.
So, here's how to do it.
First of all, go to GitLab.
Log in with your credentials and get a personal access token.
Then, configure IntelliJ IDEA with your GitLab server URL and that token.
You can now check all your GitLab issues directly in IDEA.

Simulate the Access Disabled feature in Worklight, when the Worklight server itself is down

I am trying to show end users a maintenance window such as "we are down, please try later" and disable the application. But my problem is: what if my Worklight server itself is down and not reachable? Then I cannot use the feature provided by the Worklight Console.
Is there a way to make my app talk to a different server which returns the JSON data below when an app is disabled? Can I simulate this behaviour? Is this possible?
JSON received on access disabled in Worklight:
/*-secure-
{"WL-Authentication-Failure":{"wl_remoteDisableRealm":{"message":"We are down, please try again soon","downloadLink":null,"messageType":"BLOCK"}}}*/
I have some conceptual problems with this question.
Typically a production environment (simplified) would not consist of a single server serving your end-users... meaning, there would be a cluster of nodes, each node being a Worklight Server, and this cluster would be behind a load balancer that would direct the incoming requests. And so in a situation where a node is down for maintenance like in your scenario there would still be more servers able to serve - there would be no down time.
And thus at this point your suggestion to simulate a Remote Disable by sending it from another(?) Worklight Server seems not so much the correct path to take (it may even be simply wrong). If you had this second Worklight Server, why wouldn't it just serve the apps, business as usual? See again my first paragraph about clustering.
Now let's assume there is still a downtime that affects all servers. The application's client logic should be able to handle failed connections to the Worklight Server. In such a case you should handle this in WL.Client.connect()'s onFailure callback function and display a WL.SimpleDialog that looks just like a Remote Disable's dialog... or perhaps via initOptions.js's onConnectionFailure callback.
Bottom line: you cannot simulate the JSON that is sent back for wl_remoteDisableRealm; it is part of a larger security mechanism.
Additionally though, perhaps a better way to handle maintenance mode on your server is to have the HTTP server return a specific HTTP status code, then check for that code in the client and display a proper message based on it.
To check for this code in a simple example:
Note: the getStatus method is available starting MobileFirst Platform Foundation 7.0 (formerly "Worklight").
function wlCommonInit() {
    // attempt to connect to the Worklight Server on startup
    WL.Client.connect({onSuccess: success, onFailure: failure});
}

function success(response) {
    // connected - continue normal application startup
}

function failure(response) {
    if (response.getStatus() == 503) {
        // site is down for maintenance - display a proper message,
        // for example one that mimics the Remote Disable dialog:
        WL.SimpleDialog.show("Maintenance", "We are down, please try again soon.", [{text: "OK"}]);
    } else {
        // handle other connection failures here
    }
}

How to track down long running calls to IIS?

Our users are restless. They keep complaining about woolly, unmeasurable stuff, particularly slowness, without giving specifics, which of course makes it very difficult to track down.
Nonetheless, it is quite possible that they are right, that there are server calls that are taking way too long to come back. So I want to put some kind of sniffer on the web site (we're using ASP.NET MVC 4 on IIS7) that will log any call that takes more than n seconds to turn around, or that returns more than x megabytes of data, along with all request parameters, the response size, and maybe a certain amount of response data.
I haven't a clue how to do this, though. Any suggestions?
here is my take on this:
FRT
While you can use Failed Request Tracing to log slow requests, in my experience it's more useful for finding out why a request fails before it hits your application, rather than why it's running slowly. Nine times out of ten it's simply going to show you that the slowdown is in your code somewhere.
Log Parser
Yes, you can download and analyze IIS logs. I use Log Parser Lizard to do the analysis - it's a great GUI over Log Parser. Here's a sample of how you might query slow requests over 1000 ms:
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, cs-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
time-taken > 1000
ORDER BY time-taken desc
New Relic
My recommendation - go easy on yourself and sign up for a free trial. No, I don't work for them, but I've used their APM product a lot. Install the agent on the server and set it up. In 10 minutes you will be amazed at the data you see about the site. Trust me.
It's designed to work in production environments and gives you amazing depth of information on what's running slow, down to the database query and stack traces. It's pure awesome. Once it's set up, wait for the next user complaint, log in, and look at traces for the time frame.
When your pro trial ends, you can still get valuable data on the free tier, but it will only keep the last 24 hours. We purchased licenses - expensive, yes, but worth every cent. Why? The time taken to identify root causes was reduced by an order of magnitude, we can get proactive by looking at what is number 2, 3 and 4 on the slow requests list and working on those before they become big problems, and finally the alerting makes us much more responsive when things go wrong.
Code it
You could roll your own. This blog uses MVC ActionFilters to do the logging. You could also use an HttpModule similar to this post. The nice thing about the HttpModule approach is that you can compile and implement the module separately from your application, and then just drop in the DLL and update web.config to wire up the module. I would be wary of these approaches for a very busy site. Also, getting the right level of detail to fully identify the root cause is challenging.
View Requests
As touched on by Appleman1234, IIS has a little-known feature to look at requests currently executing. It's handy for the 'hey, it's running slow right now' situation. You can use appcmd.exe or the IIS GUI to do it. You will need to install the 'Request Monitor' IIS feature for this to work. This approach is OK for rudimentary narrowing of the problem, but does not show you what's running slowly in your controller.
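For example, assuming the Request Monitor feature is installed, this lists all requests that have been executing for longer than three seconds (the elapsed value is in milliseconds):

%windir%\system32\inetsrv\appcmd list requests /elapsed:3000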
There are various ways you can do this:
Failed Request Tracing (FRT) - formerly known as Failed Request Event Buffering (FREB) - with a custom failure condition of taking over a certain time to load/run
Logging request information with IIS logging functionality and then using a tool like LogParserStudio
Using tools like Fiddler or IISMonitor on the IIS server to capture request information
For FRT the official documentation is available here and information on how to capture dumps for a long-running process is available here
For logging request information in IIS information about log file analysis is located here
For information on configuring Fiddler to capture IIS requests find information here
A summary of the steps in the linked resources is provided below.
For FRT
From IIS Manager for a given site, in the Actions pane, under Configure, click Failed Request Tracing and enter the desired values in the dialog box to enable Failed Request Tracing.
From IIS Manager for a given site, under IIS, click Failed Request Tracing Rules, in order to define the rules of failure for a given request. In the Actions pane, click Add and follow the wizard.
The logs will go in the directory you specify and are viewable in a web browser.
For IIS logging
Logging is enabled by default on IIS
From IIS Manager for a given site, under IIS, click Logging, and in the Actions pane, click Enable to enable logging if it isn't already.
From IIS Manager for a given site, under IIS, click Logging, then configure as desired and click Apply.
Install Log Parser, .NET 4.x and LogParserStudio (if you need additional steps, see here).
Open LogParserStudio and add logs to it, you then can use SQL queries to get information from the log files.
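For instance, a query along these lines (the log path and TOP count are examples) surfaces the slowest URLs on average:

SELECT TOP 10
    cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgMs, MAX(time-taken) AS MaxMs
FROM
    'C:\inetpub\logs\LogFiles\W3SVC1\*.log'
GROUP BY cs-uri-stem
ORDER BY AvgMs DESC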
For Fiddler
You need to change the user that IIS runs as to a user that can launch applications, like Fiddler (instead of Network Service), and then launch Fiddler with that user.
Also see Monitor Activity on a Web Server (IIS 7) for further information.

Email WCF traces to me

So I've recently figured out how to turn on the WCF traces, which creates an svclog file on my server, and this is awesome. But the problem I have is that I don't have direct access to the server where my web service runs, so every time I need to look at the trace file I have to get someone else here at my office to log onto the server for me. Then I have to upload the trace file to a cloud server, because it's usually too big to email, and go through weeks' worth of requests to find what I want to analyze.
Long story short: I want to know if there is a way, with trace logging, to send myself an email of every trace that gets stored.
The answer appears to be "no": there is no way to email traces using the svclog. There are other ways to accomplish the same goal.
You can set up a custom trace listener to log to a DB or email. Log4net and NLog have some of those already built in. I've used both and prefer NLog over log4net due to NLog having newer functionality support.
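For example, NLog's built-in Mail target can send log events by email with configuration alone. A minimal NLog.config sketch - the SMTP host and addresses below are placeholders, and you would still need to route your WCF trace sources into NLog:

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- placeholder SMTP host and addresses - substitute your own -->
    <target name="mail" xsi:type="Mail"
            smtpServer="smtp.example.com" smtpPort="25"
            from="wcf-service@example.com" to="you@example.com"
            subject="WCF trace: ${level}"
            layout="${longdate} ${logger} ${message} ${exception:format=tostring}" />
  </targets>
  <rules>
    <!-- email warnings and errors only, so your inbox survives -->
    <logger name="*" minlevel="Warn" writeTo="mail" />
  </rules>
</nlog>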
If you want to create your own from scratch, look at this guy's solution for an example of how to do it.