How can I optimize an API call in Symfony?
I make the call with the Guzzle bundle, but in some situations the response time is very long.
The client application calls a function on the server.
The server application fetches the objects from the database and sends them back to the client.
The client then creates a new object with the properties from the server response.
One way to improve your API calls is to use caching. In Symfony there are many ways to achieve this; I can show you one of them (a PhpFileCache example).
In services.yml, create a cache service:
your_app.cache_provider:
    class: Doctrine\Common\Cache\PhpFileCache
    arguments: ["%kernel.cache_dir%/path_to/your_cache_dir", ".your.cached_file_name.php"]
(Remember, you need the Doctrine Common Cache library installed in your app for this to work.)
Then pass your caching service your_app.cache_provider to any service where you need caching:
Again in your services.yml:
some_service_of_yours:
    class: AppBundle\Services\YourService
    arguments: ['@your_app.cache_provider']
Finally, in your service (where you want to perform API caching):
use Doctrine\Common\Cache\CacheProvider;

class YourService
{
    private $cache;

    public function __construct(CacheProvider $cache)
    {
        $this->cache = $cache;
    }

    public function makeApiRequest()
    {
        $key = 'some_unique_identifier_of_your_cache_record';

        // fetch() returns false on a cache miss
        if (false === $data = $this->cache->fetch($key)) {
            $data = $this->makeActualApiCallHere('http://some_url'); // placeholder for your real Guzzle call

            // 10800 is the number of seconds to keep the data in the cache
            // before it is invalidated; change it to your needs
            $this->cache->save($key, $data, 10800);
        }

        return $data; // now you can use the data
    }
}
This is quite a generic example; you should adapt it to your exact needs, but the idea is simple: you can cache data and avoid unnecessary API calls to speed things up. Be careful though, because a cache has the drawback of presenting stale (obsolete) data. Some things can (and should) be cached, but some things shouldn't.
If you control the server
You should put a caching reverse proxy like Varnish in front of your PHP server. The PHP app must send HTTP cache headers to tell the proxy how long it must cache the response. Alternatively, you can use a library like FOSHttpCache to set up a cache invalidation strategy (the PHP server purges the proxy's cache when the data is updated; it's a more advanced and complex scenario).
The PHP server will not even be called if the requested resource is in the reverse proxy cache.
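For example, a minimal sketch (not your exact setup; fetchData() is a hypothetical helper) of sending those cache headers from a Symfony controller:

use Symfony\Component\HttpFoundation\JsonResponse;

public function apiAction()
{
    // fetchData() stands in for whatever loads your objects
    $response = new JsonResponse($this->fetchData());

    // "public" allows shared caches (the reverse proxy) to store the response
    $response->setPublic();

    // s-maxage tells the proxy how long (in seconds) it may serve the cached copy
    $response->setSharedMaxAge(3600);

    return $response;
}

With these headers, Varnish can serve repeated requests itself for an hour without touching PHP.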
You should also use a profiler like Blackfire.io or xhprof to find out why some parts of your PHP code (or your SQL queries) take so much time to execute, then optimize.
If you control the client
You can use this HTTP cache middleware for Guzzle to cache every API result according to HTTP headers sent by the API.
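One such package is kevinrob/guzzle-cache-middleware (assuming that is the middleware meant here). A minimal sketch of wiring it into a Guzzle client:

use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use Kevinrob\GuzzleCache\CacheMiddleware;

// Create the default handler stack and push the cache middleware onto it.
$stack = HandlerStack::create();
$stack->push(new CacheMiddleware(), 'cache');

// Responses with cacheable HTTP headers are now cached transparently;
// repeated calls are served from the cache instead of hitting the API.
$client = new Client(['handler' => $stack]);
$response = $client->get('http://some_url');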
Related
I need to get some data from a REST API in my GraphQL API. For that I'm extending RESTDataSource from apollo-datasource-rest.
From what I understand, RESTDataSource automatically caches requests, but I'd like to verify that this is indeed happening. Is there a way to know whether my request is getting its data from the cache or hitting the REST API?
I noticed that the first request takes some time, but the following ones are much faster, and the didReceiveResponse method is not called every time I make a query. Is that because the data is loaded from the cache?
I'm using apollo-server-express.
Thanks for your help!
You can time the requests like the following:
console.time('restdatasource get req')
this.get(url)
console.timeEnd('restdatasource get req')
Now, if the time is under 100-150 milliseconds, that should be a request coming from the cache.
You can monitor the browser console, under the Network tab. You will be able to see which endpoints the application is calling. If it uses cached data, no new request to your endpoint will be logged.
If you are trying to verify this locally, one good option is to set up a local proxy so that you can see all the network calls being made (no network call means the response was read from cache). You can then configure your app, following the Apollo documentation, to forward all outgoing calls through a proxy like mitmproxy.
I've researched and found three different possibilities for solving my case: I'd like to make an async API call (using dotenv variables to store the credentials) and commit the returned data to Vuex on app init, keeping the credentials secure.
Currently I'm attempting using serverMiddleware, but I'm having trouble accessing the context. Is this possible? Currently just getting a "store is not defined" error.
Also, after researching, I keep seeing that it's not a good idea to use regular middleware, as running any code on the client-side exposes the env variable... But I'm confused. Doesn't if (!process.client) { ... } take care of this? Or am I missing the bigger picture.
Additionally, if it does turn out to be okay to use middleware to secure the credentials, would using the separate-env-module be wise to make doubly sure that nothing gets leaked client-side?
Thanks, I'm looking forward to understanding this more thoroughly.
You can use serverMiddleware.
You can do it like this:
client -> serverMiddleware -> serverMiddleware calls the API.
That way the API key is not in the client but remains on the server.
Example:
remote api is: https://maps.google.com/api/something
your api: https://awesome.herokuapp.com
Since your own API has access to environment variables and you don't want the API key included in the generated client-side build, you create a serverMiddleware that proxies the request for you.
In the end, your client just makes a call to https://awesome.herokuapp.com/api/maps, but that endpoint calls https://maps.google.com/api/something?apikey=123456 and returns the response back to you.
I'm currently using a self hosted Parse Server up to date but I'm facing some security issues.
At the moment, calls to the /classes route can retrieve any object in any table, and even though I might want an object to be publicly readable, I wouldn't like to expose all of its parameters. In short, I don't want the database to be retrievable in any case; I would like to disable "everything" except Parse Cloud Code. That is, I could still call my own functions, but clients (Android, iOS, C#, JavaScript...) couldn't use the SDKs to retrieve data directly.
Is there any way to do this? I've been searching deeply for this, trying to debug some Controllers but I don't have any clue.
Thank you very much in advance.
tl;dr: set the ACL for all objects to be only readable when using the master key and then tell the query in Cloud Code to use the MK when querying your data
So without changing Parse Server itself, you could make use of ACLs and only allow a specific user to access objects. You would then "log in" as that user in your Cloud Code and be able to access all objects.
As the old method, Parse.Cloud.useMasterKey(), isn't available in the open-source Parse Server, you will have to pass the useMasterKey parameter to the query you are running, which should do the trick for this particular request and will bypass ACLs/CLPs. There is an example in the Parse Server wiki as well.
For convenience, here is a short code example from the Wiki:
Parse.Cloud.define('getTotalMessageCount', function(request, response) {
    var query = new Parse.Query('Messages');

    // count() will use the master key to bypass ACLs
    query.count({ useMasterKey: true })
        .then(function(count) {
            response.success(count);
        });
});
We have an old Yii application along with new Symfony one.
The basic idea is simple: check whether the Symfony application has a matching route; if it does, great, and if not, bootstrap the Yii application and let it handle the request.
The main idea is to not instantiate AppKernel (and not load autoload.php, since each project has its own autoload.php) before I am sure there is a matching route.
Can I do it somehow?
We've done this before with legacy applications.
There are two approaches you can take.
1. Wrap your old application inside a Symfony project (recommended). Unfortunately this will indeed load the Symfony front controller and kernel; there is no way around that, because to make sure Symfony can't handle the request, the kernel needs to be booted.
2. Use subdirectories and Apache virtual hosts to load one application or the other as needed.
Given option 1,
You can either create your own front controller that decides which framework to load by reading the routes (from static files if you use YAML or XML; annotations will be more complex), OR an EventListener (RequestListener) that listens to HttpKernelInterface::MASTER_REQUEST and ensures that a route can be returned.
Creating your own front controller is the only way to avoid loading the Symfony kernel, but it requires writing something that understands the routes of both frameworks (or at least Symfony's) and hands off the request appropriately; a rough sketch follows below.
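Here is what such a front controller could look like, assuming your Symfony routes live in a plain YAML file (no annotations or bundle imports) so they can be matched without booting the kernel; all paths and file names here are illustrative:

use Symfony\Component\Config\FileLocator;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;
use Symfony\Component\Routing\Loader\YamlFileLoader;
use Symfony\Component\Routing\Matcher\UrlMatcher;
use Symfony\Component\Routing\RequestContext;

require __DIR__.'/../vendor/autoload.php'; // Symfony autoloader only, no kernel

// Load the route collection straight from the static routing file.
$loader = new YamlFileLoader(new FileLocator(__DIR__.'/../app/config'));
$routes = $loader->load('routing.yml');

$matcher = new UrlMatcher($routes, new RequestContext());
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH) ?: '/';

try {
    $matcher->match($path);                     // throws if no Symfony route matches
    require __DIR__.'/app.php';                 // a route matched: run Symfony as usual
} catch (ResourceNotFoundException $e) {
    require __DIR__.'/../../yii/web/index.php'; // no match: bootstrap Yii instead
}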
Event listener example:
public function onKernelRequest(GetResponseEvent $event)
{
    if (HttpKernelInterface::MASTER_REQUEST !== $event->getRequestType()) {
        return;
    }

    // ... code to continue normally, or bootstrap Yii and return a custom
    // response (can use include with ob_start, make an HTTP request, etc.)
}
public static function getSubscribedEvents()
{
    return [
        KernelEvents::REQUEST => ['onKernelRequest'],
    ];
}
As you can see, the kernel needs to be booted to ensure Symfony can't serve the route, unless you create your own front controller (as stated above).
A third approach would be to create a fallback controller, which would load a specified URL if no route was found within Symfony. This approach is generally used for legacy projects that lack a framework and use page scripts instead of proper routes, and it definitely requires the help of output buffering.
The EventListener approach gives you the opportunity to create a proper Request to hand off to Yii, and to use what is returned to build a proper Symfony Response object (output buffering and other options work here too).
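Concretely, the hand-off inside the listener can capture the legacy output with output buffering and wrap it in a Symfony Response (a minimal sketch; the path is hypothetical):

use Symfony\Component\HttpFoundation\Response;

// Capture everything the Yii front controller prints...
ob_start();
require '/path/to/legacy/yii/index.php';
$content = ob_get_clean();

// ...and hand it back to Symfony as a proper Response object.
// $event is the GetResponseEvent from onKernelRequest above.
$event->setResponse(new Response($content, 200));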
Thank you.
This is an alternative to vpassapera's solution: http://stovepipe.systems/post/migrating-your-project-to-symfony
I have a Web Api 2 service that will be deployed across 4 production servers. When a request doesn't pass validation, a custom response object is generated and returned to the client.
A rudimentary example
if (!ModelState.IsValid)
{
    var responseObject = responseGenerator.GetResponseForInvalidModelState(ModelState);
    return Ok(responseObject);
}
Currently the responseGenerator is aware of which environment it is in and generates the response accordingly. For example, in development it returns a lot of detail, but in production it returns only a simple failure status.
How can I implement a "switch" that turns details on without requiring a round trip to the database each time?
Due to the nature of our environment, using a config file isn't realistic. I've considered using a flag in the database and then caching it at the application layer, but environmental constraints make refreshing the cache on all 4 servers very painful.
I ended up going with the parameter suggestion and implementing a token system on the back end. If a debug token is present in the request, the service validates it against the database; if the token is valid and active, the service returns the additional detail.
This allows us to control things from our end while keeping things simple for the vendors and only adds that extra round trip to the database during debugging.