Ecommerce plugin development - how to periodically check the platform DB and call external API - module

I'm developing a module for PrestaShop, and first a premise about the PrestaShop development environment: a module/plugin does its work through hooks (for example Header, leftBar, or the back-office header loading), so apparently there is no way to do what I want to do:
I want to periodically (let's say each day) check for abandoned carts in the PrestaShop database and send their information to an external API.
I thought of a workaround which I don't like very much and which doesn't seem efficient to me: the plugin installs a custom database table that will always contain one row holding the current date.
Whenever a user visits the website, the plugin checks that value in the DB: if the stored date is older than today, it updates it to today's date. If the module has just updated the value, I do my check on the DB and the API call; otherwise I do nothing (for the rest of the day every other check will find the date already up to date).
Is there some better way to do it?
********** UPDATE **********
So a way exists: cron tasks. Now my doubt is: is it possible to integrate the cron schedule inside my plugin? I need installation to require nothing more from the user: I don't want to delegate the configuration to them through the cron tasks manager integrated in the PrestaShop back office. The problem with PrestaShop seems to be that, unlike WordPress, where a single solution exists (https://www.smashingmagazine.com/2013/10/schedule-events-using-wordpress-cron/), there is no general way to do this, so if you want it inside your custom module you have to wire up a cron yourself (https://www.prestashop.com/forums/topic/564504-how-create-cron-job-from-custom-module-all-by-code/)

In my opinion the best way to do that is to use a cron or webcron.
You create a front controller in your module that does the work, and you create a cron job that executes the controller once a day (a minimal sketch follows below).
How to create a front controller: http://doc.prestashop.com/display/PS16/Displaying+content+on+the+front+office#Displayingcontentonthefrontoffice-Embeddingatemplateinthetheme
If you name your controller cron, you can call it:
by HTTP for a webcron: http://your.shop/module/[module-name]/cron
by shell for a crontab:
php -f [shop-folder]index.php "fc=module&module=[module-name]&controller=cron"
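A minimal sketch of such a cron front controller, assuming a module class named MyModule (the class name, the token check and the processAbandonedCarts() helper are placeholders to adapt):
<?php
// File: controllers/front/cron.php inside the module folder.
// Naming convention: <ModuleClassName>CronModuleFrontController.
class MyModuleCronModuleFrontController extends ModuleFrontController
{
    public function postProcess()
    {
        // Optional: protect the URL with a secret token stored at install time.
        if (Tools::getValue('token') !== Configuration::get('MYMODULE_CRON_TOKEN')) {
            die('Invalid token');
        }

        // The daily work: query abandoned carts and push them to the external API
        // (hypothetical helper on the module class, sketched further below).
        $this->module->processAbandonedCarts();

        die('OK');
    }
}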
You can also use a cron-like system based on user visits, but I would not recommend this solution if:
your task could be long: it could slow down the shop for the visitor, and you are dependent on the web server time limit
the timing is important: your task will only run when and if you have a visit on your site
the uniqueness is important: your task can be called twice if you have two visits at the same time
There is no built-in solution; there are modules for that, but they will not solve your issue since the customer would have to install them.
My solution is to hook into the footer:
public function hookFooter()
{
    // run cron every 12 hours
    $limit = time() - 12 * 60 * 60;
    $lastrun = Configuration::get('[module-name]-lastrun');
    if ($lastrun < $limit) {
        return '<script src="http://your.shop/module/[module-name]/cron" async></script>';
    }
}
and in your cron controller:
$limit = time() - 12 * 60 * 60;
$lastrun = Configuration::get('[module-name]-lastrun');
if ($lastrun < $limit) {
    Configuration::updateValue('[module-name]-lastrun', time());
    // do stuff
}
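As for the "// do stuff" part, which in the original question means finding abandoned carts and sending them to an external API, a rough sketch could look like the following; the SQL, the 24-hour threshold, the endpoint URL and the processAbandonedCarts() name are all assumptions to adapt:

// Hypothetical helper on the module class: carts last updated more than 24 hours
// ago that never became an order are treated as abandoned.
public function processAbandonedCarts()
{
    $sql = 'SELECT c.id_cart, c.id_customer, c.date_upd
            FROM ' . _DB_PREFIX_ . 'cart c
            LEFT JOIN ' . _DB_PREFIX_ . 'orders o ON o.id_cart = c.id_cart
            WHERE o.id_order IS NULL
              AND c.date_upd < DATE_SUB(NOW(), INTERVAL 1 DAY)';
    $carts = Db::getInstance()->executeS($sql);

    foreach ($carts as $cart) {
        // Send each abandoned cart to the external API (placeholder endpoint).
        $ch = curl_init('https://api.example.com/abandoned-carts');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($cart));
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        curl_close($ch);
    }
}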

Related

Need to run while loop for multiple users in JMeter

I am using JMeter to test the performance of a ride booking app. I need to run a While Controller which calls the events-fetching API continuously until the ride is completed or no driver is available. This runs correctly for one user, but if I run the plan for multiple users the While Controller enters an infinite loop. How can I fix this?
The While Controller keeps executing its children until its condition (a function or variable) resolves to false.
If it runs into an endless loop - most probably your server responds with something you don't expect, i.e. an error because it gets overloaded.
So I would suggest taking two actions:
Temporarily enable storing of responses into a .jtl or a separate file and inspect what the server returns, then amend your While Controller's condition accordingly
And/or limit the maximum number of iterations of the While Controller to some reasonable number, e.g. 10 or 20 or whatever is an acceptable value, for example using the __jexl3() function:
${__jexl3("${status}" != "running" && ${__jm__While Controller__idx} < 20,)}

How to invoke an on-demand BigQuery Data Transfer Service?

I really liked BigQuery's Data Transfer Service. I have flat files in the exact schema sitting to be loaded into BQ. It would have been awesome to just set up a DTS schedule that picked up GCS files matching a pattern and loaded them into BQ. I like the built-in option to delete source files after copy and to email in case of trouble. But the biggest bummer is that the minimum interval is 60 minutes. That is crazy. I could have lived with a 10 min delay perhaps.
So if I set up the DTS to be on demand, how can I invoke it from an API? I am thinking create a cronjob that calls it on demand every 10 mins. But I can’t figure out through the docs how to call it.
Also, what is my second best most reliable and cheapest way of moving GCS files (no ETL needed) into bq tables that match the exact schema. Should I use Cloud Scheduler, Cloud Functions, DataFlow, Cloud Run etc.
If I use Cloud Function, how can I submit all files in my GCS at time of invocation as one bq load job?
Lastly, anyone know if DTS will lower the limit to 10 mins in future?
So if I set up the DTS to be on demand, how can I invoke it from an API? I am thinking create a cronjob that calls it on demand every 10 mins. But I can’t figure out through the docs how to call it.
StartManualTransferRuns is part of the RPC library but does not have a REST API equivalent as of now. How to use that will depend on your environment. For instance, you can use the Python Client Library (docs).
As an example, I used the following code (you'll need to run pip install google-cloud-bigquery-datatransfer for the dependencies):
import time
from google.cloud import bigquery_datatransfer_v1
from google.protobuf.timestamp_pb2 import Timestamp
client = bigquery_datatransfer_v1.DataTransferServiceClient()
PROJECT_ID = 'PROJECT_ID'
TRANSFER_CONFIG_ID = '5e6...7bc' # alphanumeric ID you'll find in the UI
parent = client.project_transfer_config_path(PROJECT_ID, TRANSFER_CONFIG_ID)
start_time = bigquery_datatransfer_v1.types.Timestamp(seconds=int(time.time() + 10))
response = client.start_manual_transfer_runs(parent, requested_run_time=start_time)
print(response)
Note that you'll need to use the right Transfer Config ID and the requested_run_time has to be of type bigquery_datatransfer_v1.types.Timestamp (for which there was no example in the docs). I set a start time 10 seconds ahead of the current execution time.
You should get a response such as:
runs {
  name: "projects/PROJECT_NUMBER/locations/us/transferConfigs/5e6...7bc/runs/5e5...c04"
  destination_dataset_id: "DATASET_NAME"
  schedule_time {
    seconds: 1579358571
    nanos: 922599371
  }
  ...
  data_source_id: "google_cloud_storage"
  state: PENDING
  params {
    ...
  }
  run_time {
    seconds: 1579358581
  }
  user_id: 28...65
}
and the transfer is triggered as expected.
Also, what is my second best most reliable and cheapest way of moving GCS files (no ETL needed) into bq tables that match the exact schema. Should I use Cloud Scheduler, Cloud Functions, DataFlow, Cloud Run etc.
With this approach you can set up a cron job to execute your function every ten minutes. As discussed in the comments, the minimum interval is 60 minutes, so it won't pick up files less than one hour old (docs).
Apart from that, this is not a very robust solution, and this is where your follow-up questions come into play. I think these might be too broad to address in a single Stack Overflow question, but I would say that, for on-demand refresh, Cloud Scheduler + Cloud Functions/Cloud Run can work very well.
Dataflow would be best if you needed ETL but it has a GCS connector that can watch a file pattern (example). With this you would skip the transfer, set the watch interval and the load job triggering frequency to write the files into BigQuery. VM(s) would be running constantly in a streaming pipeline as opposed to the previous approach but a 10-minute watch period is possible.
If you have complex workflows/dependencies, Airflow has recently introduced operators to start manual runs.
If I use Cloud Function, how can I submit all files in my GCS at time of invocation as one bq load job?
You can use wildcards to match a file pattern when you create the transfer.
Also, this can be done on a file-by-file basis using Pub/Sub notifications for Cloud Storage to trigger a Cloud Function.
Lastly, anyone know if DTS will lower the limit to 10 mins in future?
There is already a Feature Request here. Feel free to star it to show your interest and to receive updates.
You can now easily trigger a manual run of a BigQuery data transfer using the REST API:
HTTP request
POST https://bigquerydatatransfer.googleapis.com/v1/{parent=projects/*/locations/*/transferConfigs/*}:startManualRuns
For the {parent=projects/*/locations/*/transferConfigs/*} part, check the Configuration tab of your transfer to find the full resource name.
More here:
https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest/v1/projects.locations.transferConfigs/startManualRuns
Following Guillem's answer and the API updates, this is my new code:
import time
from google.cloud.bigquery import datatransfer_v1
from google.protobuf.timestamp_pb2 import Timestamp
client = datatransfer_v1.DataTransferServiceClient()
config = '34y....654'
PROJECT_ID = 'PROJECT_ID'
TRANSFER_CONFIG_ID = config
parent = client.transfer_config_path(PROJECT_ID, TRANSFER_CONFIG_ID)
start_time = Timestamp(seconds=int(time.time()))
request = datatransfer_v1.types.StartManualTransferRunsRequest(
    { "parent": parent, "requested_run_time": start_time }
)
response = client.start_manual_transfer_runs(request, timeout=360)
print(response)
For this to work, you need to know the correct TRANSFER_CONFIG_ID.
In my case, I wanted to list all the BigQuery scheduled queries to get a specific ID. You can do it like this:
# Put your project ID here
PROJECT_ID = 'PROJECT_ID'

from google.cloud import bigquery_datatransfer_v1

bq_transfer_client = bigquery_datatransfer_v1.DataTransferServiceClient()
parent = bq_transfer_client.project_path(PROJECT_ID)

# Iterate over all results
for element in bq_transfer_client.list_transfer_configs(parent):
    # Print the display name of each scheduled query
    print(f'[Schedule Query Name]:\t{element.display_name}')
    # Print the name of each element (it contains the ID)
    print(f'[Name]:\t\t{element.name}')
    # Extract the ID:
    TRANSFER_CONFIG_ID = element.name.split('/')[-1]
    print(f'[TRANSFER_CONFIG_ID]:\t\t{TRANSFER_CONFIG_ID}')
    # You can print the entire element for debug purposes
    print(element)

Maximo 7.6 Integration Automation Script

I'm trying to create an Integration Automation Script for a Publish Channel which updates a database field.
Basically, for WOACTIVITY I just want a field value set to 1 for the Work Order if the Publish Channel is triggered.
Any ideas or example scripts that anyone has or can help with, please? I just can't get it to work.
What about using a SET processing rule on the publish channel? Processing rules are evaluated every time the channel is activated, and a SET action will let you set an attribute for the parent object to a specified value. You can read more about processing rules here.
Adding a new answer because experience, in case it helps anyone.
If you create an automation script for integration against a Publish Channel and then select External Exit or User Exit there's an implicit variable irData that has access to the MBO being worked on. You can then use that MBO as you would in any other script. Note that because you're changing a record that's integrated you'll probably want a skip rule in your publish channel that skips records with your value set or you may run into an infinite publish --> update --> publish loop.
# Get the MBO currently being published by the channel
woMbo = irData.getCurrentMbo()
woMboSet = woMbo.getThisMboSet()
# Set the flag field and persist the change
woMbo.setValue("FIELD", 1)
woMboSet.save()

Running one specific Laravel migration (single file)

I don't want to run all outstanding migrations on Laravel 4. I have 5 migrations. Now I just want to run one migration.
Instead of doing: php artisan migrate
I would like to run one specific migration like: php artisan migrate MY_MIGRATION_TO_RUN
Looks like you're doing it wrong.
Migrations were made to be executed by Laravel one by one, in the exact order they were created, so it can keep track of what was executed and in which order. That way Laravel will be able to SAFELY roll back a batch of migrations, without risking breaking your database.
Giving the user the power to execute them manually makes it impossible to know (for sure) how to roll back changes in your database.
If you really need to execute something in your database, you'd better create a DDL script and manually execute it on your webserver.
Or just create a new migration and execute it using artisan.
EDIT:
If you need to run it first, you need to create it first.
If you just need to reorder them, rename the file to be the first. Migrations are created with a timestamp:
2013_01_20_221554_table
To create a new migration before this one you can name it
2013_01_19_221554_myFirstMigration
You can put migrations in more folders and run something like:
php artisan migrate --path=/app/database/migrations/my_migrations
Just move the already-run migrations out of the app/database/migrations/ folder. Then run the command php artisan migrate. Worked like a charm for me.
A nice little snippet to ease any fears when running Laravel 4 migrations: php artisan migrate --pretend. This will only output the SQL that would have been run if you ran the actual migration.
It sounds like your initial 4 migrations have already been run. I would guess that when you run php artisan migrate it will only run the new, recent migration.
Word of advice: make sure all of your up() and down() methods work how you expect them to. I like to run up(), down(), up() when I run my migrations, just to test them. It would be awful for you to get 5-6 migrations in and realize you can't roll them back without hassle, because you didn't match the down() with the up() 100%.
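For illustration, a migration whose down() exactly reverses its up() might look like this (the table and column names are made up):

public function up()
{
    Schema::table('orders', function (Blueprint $table) {
        $table->string('tracking_code')->nullable();
    });
}

public function down()
{
    Schema::table('orders', function (Blueprint $table) {
        $table->dropColumn('tracking_code');
    });
}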
Just my two cents! Hope the --pretend helps.
The only way to re-run a migration is a dirty one. You need to open your database and delete the row in the migrations table that represents your migration.
Then run php artisan migrate again.
You can create a separate directory for your migrations from your terminal as follows:
mkdir database/migrations/my_migrations
And then move the specific migration you want to run to that directory and run this command:
php artisan migrate --path=/database/migrations/my_migrations
Hope this helps!
If you want to run(single file) migration in Laravel you would do the following:
php artisan migrate --path=/database/migrations/migrations_file_name
eg.
C:\xampp\htdocs\laravelv3s>php artisan migrate --path=/database/migrations/2020_02_14_102647_create_blogs_table.php
I gave this answer on another post, but you can do this: run artisan migrate to run all the migrations, then the following SQL commands to update the migrations table, making it look like the migrations were run one at a time:
SET @a = 0;
UPDATE migrations SET batch = @a:=@a+1;
That will change the batch column to 1, 2, 3, 4 .. etc. Add a WHERE batch>=... condition on there (and update the initial value of @a) if you only want to affect certain migrations.
After this, you can artisan migrate:rollback as much as is required, and it'll step through the migrations one at a time.
You can use the solution below:
Create your migration.
Check your migration status: php artisan migrate:status.
Copy the full name of the new migration and roll it back: php artisan migrate:rollback --path=2018_07_13_070910_table_tests.
Then run php artisan migrate.
Finally, you have migrated the specific table.
Good luck.
If you want to run your latest migration file you would do the following:
php artisan migrate
You can also revert back to before you added the migration with:
php artisan migrate:rollback
There is one easy way I know of to do this, though it is only suitable for your local host:
Modify your migration file as needed.
Open phpMyAdmin or whatever you use to view your database tables.
Find the desired table and drop it.
Find the migrations table and open it.
In this table, under the migration field, find your desired table name and delete its row.
Finally, run the command php artisan migrate from your command line or terminal. This will only migrate the tables which do not already exist in the migrations table in the database.
This way is completely safe and will not cause any errors or problems; while it looks like an unprofessional way, it still works perfectly.
Good luck.
If it's just for testing purposes, this is how I do it:
In my case, I have several migrations, one of them contains app settings.
While I'm testing the app and not all of the migrations are already set up, I simply move them into a new folder "future". This folder won't be touched by artisan, and it will only execute the migrations you want.
Dirty workaround, but it works...
I had the same problem. Copy the table creation code into the first migration file, something like below:
public function up()
{
    Schema::create('posts', function (Blueprint $table) {
        $table->increments('id');
        // Other columns...
        $table->timestamps();
    });

    Schema::create('users', function (Blueprint $table) {
        $table->increments('id');
        // Other columns...
        $table->softDeletes()->nullable();
    });
}
You can also change (decrease) the batch column number in the migrations table ;)
And then run php artisan migrate.
Throw an exception in a migration if you don't want to apply it; it will stop the whole migration process.
Using this approach you can split your bunch of migrations into steps, as sketched below.
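A minimal sketch of that idea, with a made-up class and table name; the exception aborts php artisan migrate before this and any later migrations are applied, and you remove the throw when you are ready to run this step:

class AddReportingTables extends Migration
{
    public function up()
    {
        // Abort here until this step is meant to be applied;
        // this migration and the ones after it will not run.
        throw new Exception('Stop: apply the reporting step separately.');

        Schema::create('reports', function (Blueprint $table) {
            $table->increments('id');
            $table->timestamps();
        });
    }

    public function down()
    {
        Schema::dropIfExists('reports');
    }
}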
Working in Laravel 8+
Run single specific migration:
php artisan migrate --path=/database/migrations/yourfilename.php
Run all migrations:
php artisan migrate
So simple...! Just go to your migrations folder and move all the migration files into another folder. Then return the migrations one by one into the migrations folder and run the migration for each of them (php artisan migrate). If you put a bad migration file into the main migrations folder and run php artisan migrate, you will get an error in the command prompt.
I used a return as the first statement so the previously created tables are retained as they are:
<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateUsersTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        return; // This Line

        Schema::create('users', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name', 50);
            $table->string('slug', 50)->unique();
            $table->integer('role_id')->default(1);
            $table->string('email', 50)->unique();
            $table->timestamp('email_verified_at')->nullable();
            $table->string('mobile', 10)->unique();
            $table->timestamp('mobile_verified_at')->nullable();
            $table->text('password');
            $table->integer('can_login')->default(1);
            $table->rememberToken();
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        return; // This Line

        Schema::dropIfExists('users');
    }
}
This is a bad approach, but it's the one I use: I delete the other migration files except the specific file I want to migrate, then run php artisan migrate. After the migration is completed, I go to my trash bin and restore the deleted files.
For anybody still interested in this, Laravel 5 update: Laravel has implemented the option to run one migration file at a time (in version 5.7).
You can now run this:
php artisan migrate --path=/database/migrations/my_migration.php (as answered here)
Because the Illuminate\Database\Migrations\Migrator::getMigrationFiles() now contains this code:
return Str::endsWith($path, '.php') ? [$path] : $this->files->glob($path.'/*_*.php');
(see the source code.)
But in my usecase, I actually wanted to run a set of migrations at the same time, not just one, or all.
So I went the Laravel way and registered a different implementation of the Migrator, which decides which files to use:
/**
 * A migrator that can run multiple specifically chosen migrations.
 */
class MigrationsSetEnabledMigrator extends Migrator
{
    /**
     * @param Migrator $migrator
     */
    public function __construct(Migrator $migrator)
    {
        parent::__construct($migrator->repository, $migrator->resolver, $migrator->files);

        // Compatibility with versions >= 5.8
        if (isset($migrator->events)) {
            $this->events = $migrator->events;
        }
    }

    /**
     * Get all of the migration files in a given path.
     *
     * @param  string|array  $paths
     * @return array
     */
    public function getMigrationFiles($paths)
    {
        return Collection::make($paths)->flatMap(function ($path) {
            return Str::endsWith($path, ']') ? $this->parseArrayOfPaths($path) :
                (Str::endsWith($path, '.php') ? [$path] : $this->files->glob($path . '/*_*.php'));
        })->filter()->sortBy(function ($file) {
            return $this->getMigrationName($file);
        })->values()->keyBy(function ($file) {
            return $this->getMigrationName($file);
        })->all();
    }

    public function parseArrayOfPaths($path)
    {
        $prefix = explode('[', $path)[0];
        $filePaths = explode('[', $path)[1];
        $filePaths = rtrim($filePaths, ']');

        return Collection::make(explode(',', $filePaths))->map(function ($filePath) use ($prefix) {
            return $prefix . $filePath;
        })->all();
    }
}
We have to register it into the container as 'migrator' (to be accessible as $app['migrator']), because that is how Migrate command accesses it when itself is being registered into the IoC. To do so, we put this code into a service provider (in my case, it is a DatabaseServiceProvider):
public function register()
{
    $this->app->extend('migrator', function ($migrator, $app) {
        return new MigrationsSetEnabledMigrator($migrator);
    });

    // We reset the command.migrate binding, which uses the migrator,
    // to force a refresh of the migrator instance.
    $this->app->instance('command.migrate', null);
}
Then you can run this:
php artisan migrate --path=[database/migrations/my_migration.php,database/migrations/another_migration.php]
Notice the multiple migration files, separated by a comma.
It is tested and working in Laravel 5.4 and should be Laravel 5.8 compatible.
Why?
For anyone interested: the use case is updating the version of the database along with its data.
Imagine, for example, that you wanted to merge the street and house number of all users into a new column, let's call it street_and_house. And imagine you wanted to do that on multiple installations in a safe and tested way - you would probably create a script for that (in my case, I create data versioning commands - artisan commands).
To do such an operation, you first have to load the users into memory; then run the migrations to remove the old columns and add the new one; and then for each user assign the street_and_house=$street . " " . $house_no and save the users. (I am simplifying here, but you can surely imagine other scenarios)
And I do not want to rely on the fact that I can run all the migrations at any given time. Imagine that you wanted to update it from let's say 1.0.0 to 1.2.0 and there were multiple batches of such updates – performing any more migrations could break your data, because those migrations must be handled by their own dedicated update command. Therefore, I want to only run the selected known migrations which this update knows how to work with, then perform operations on the data, and then possibly run the next update data command. (I want to be as defensive as possible).
To achieve this, I need the aforementioned mechanism and define a fixed set of migrations to be run for such a command to work.
Note: I would have preferred to use a simple decorator utilizing the magic __call method and avoid inheritance (a similar mechanism that Laravel uses in the \Illuminate\Database\Eloquent\Builder to wrap the \Illuminate\Database\Query\Builder), but the MigrateCommand, sadly, requires an instance of Migrator in it's constructor.
Final note: I wanted to post this answer to the question How can I run specific migration in laravel , as it is Laravel 5 - specific. But I can not - since that question is marked as a duplicate of this one (although this one is tagged as Laravel 4).
You may type the following command:
php artisan migrate --help
...
--path[=PATH] The path(s) to the migrations files to be executed (multiple values allowed)
...
If it does show an option called "--path" (like in the example above), that means your Laravel version supports this parameter. If so, you're in luck and you can type something like:
php artisan migrate --path=/database/migrations/v1.0.0/
Where "v.1.0.0" is a directory that exists under your "/database/migrations" directory that holds those migrations you want to run for a certain version.
If not, then you can check in your migrations table to see which migrations have already been run, like this:
SELECT * FROM migrations;
And then move out of your "/database/migrations" folder those which were already executed, by creating another folder "/database/executed-migrations" and moving your executed migrations there.
After this you should be able to execute:
php artisan migrate
Without any danger to override any existing table in your schema/database.
(*) example for Windows:
php artisan migrate --path=database\migrations\2021_05_18_121604_create_service_type_table.php

Rails 3 - cache web service call

In my application, in the homepage action, I call a specific web service that returns JSON.
parsed = JSON.parse(open("http://myservice").read)
@history = parsed['DATA']
This data will not change more than once per 60 seconds and does not change on a per-visitor basis, so I would ideally like to cache the @history variable itself (since the parsing will not produce a new result) and automatically invalidate it if it is more than a minute old.
I'm unsure of the best way to do this. The default Rails caching methods all seem to be more oriented towards content that needs to be manually expired. I'm sure there is a quick and easy method to do this, I just don't know what it is!
You can use the built in Rails cache for this:
@history = Rails.cache.fetch('parsed_myservice_data', :expires_in => 1.minute) do
  JSON.parse connector.get_response("http://myservice")
end
One problem with this approach is when the rebuilding of the data to be cached takes
quite a long time. If you get many client requests during this time, each of them will
get a cache miss and call your block, resulting in lots of duplicated effort, not to mention slow response times.
EDIT: In Rails 3.x you can pass the option :race_condition_ttl to the fetch method to avoid this problem. Read more about it here.
A good solution to this in previous versions of Rails is to setup a background/cron job to be run at regular intervals that will fetch and parse the data and update the cache.
In your controller or model:
@history = Rails.cache.fetch('parsed_myservice_data') do
  JSON.parse connector.get_response("http://myservice")
end
In your background/cron job:
Rails.cache.write('parsed_myservice_data',
                  JSON.parse(connector.get_response("http://myservice")))
This way, your client requests will always get fresh cached data (except for the first
request if the background/cron job hasn't been run yet.)
I don't know of an easy railsy way of doing this. You might want to look into using redis. Redis lets you set expiration times on the data you store in it. Depending on which redis gem you use it'd look something like this:
@history = $redis.get('history')
if not @history
  @history = JSON.parse(open("http://myservice").read)['DATA']
  $redis.set('history', @history)
  $redis.expire('history', 60)
end
Because there's only one redis service this will work for all your rails processes.
We had a similar requirement and we ended up using Squid as a forward proxy for all the webservice calls from the rails server. Squid was configured to have a cache-expiry time of 60 seconds.
http_connection_factory.rb:
class HttpConnectionFactory
  def self.connection
    AppConfig.use_forward_proxy ? Net::HTTP::Proxy(AppConfig.forward_proxy_host, AppConfig.forward_proxy_port) : Net::HTTP
  end
end
In your application's home page action, you can use the proxy instead of making the call directly.
connector = HttpConnectionFactory.connection
parsed = JSON.parse(connector.get_response(URI("http://myservice")).body)
@history = parsed['DATA']
We had second thoughts about using Redis or Memcache. But, we had several service calls and wanted to avoid all the hassles of generating keys and sweeping them at appropriate times.
So, in our case, the forward proxy took care of all those nitty-gritty details. Please refer to the Squid wiki for the necessary configuration parameters.