Are multiple gcm_sender_ids mandatory for multiple site web push? - google-cloud-messaging

We are a company that operates about 500 sites, large and small (some in-house, some outsourced).
We want to add web push notifications to these sites.
I know that when I sign in with a Google Developer Console account and enable the GCM service, I get one gcm_sender_id.
Is there any problem with using that single sender_id for web push across all 500 sites?
If we send a web push to every member of all 500 sites at the same time, that could be hundreds of thousands to millions of messages.
(Of course, the maximum number of recipients per request is 1,000.)
I would like to know whether restrictions like that would cause problems at this scale.
We have seen similar agencies, and they seem to issue a different gcm_sender_id for each new site.
Can one Developer Console account issue multiple gcm_sender_ids?
I asked one of them briefly, and they said the process is automated: if you are a member of the agency's site, you can create a site simply by registering it, and a gcm_sender_id is issued automatically, in real time.
That makes me wonder whether signing up for a Google Developer Console account and being issued a gcm_sender_id can really happen that quickly. (The company I mentioned is not affiliated with Google.)
Also, if you build web push for 500 sites, why would you need to issue a different gcm_sender_id for each one, and for what reason?

You can use the same sender ID for multiple domains.
The problem is that, in the case of abuse, it might be difficult to identify the actual sender among the various customers (if each website belongs to a different company). So I would say that the best approach is to create a different sender ID for each customer (as the agency has suggested).
On the other hand, if the same customer has a domain with many subdomains, for example, you can use a single sender ID, since the sender / customer is actually the same.
However, I recommend that you use VAPID (standard) to automate the registration process of the sender with Firebase Cloud Messaging (previously GCM). The use of a sender ID is a legacy proprietary method and might get deprecated in the future.
Read this article to get started with VAPID, or use a web push service that supports VAPID automatically and saves you a lot of time and trouble.
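For illustration, here is a minimal sketch of sending a VAPID-signed web push from Python with the pywebpush library. The endpoint, keys, and contact address are hypothetical placeholders; in practice the subscription record comes from the browser's pushManager.subscribe() call, and the VAPID key pair is generated once per sender.

```python
# Minimal sketch using the pywebpush library (pip install pywebpush).
# The subscription record and key material are hypothetical placeholders.
from pywebpush import webpush, WebPushException

subscription_info = {  # stored server-side when the user subscribed
    "endpoint": "https://fcm.googleapis.com/fcm/send/EXAMPLE-ENDPOINT",
    "keys": {"p256dh": "BASE64_PUBLIC_KEY", "auth": "BASE64_AUTH_SECRET"},
}

try:
    webpush(
        subscription_info=subscription_info,
        data="Hello from one of the 500 sites",
        vapid_private_key="vapid_private_key.pem",  # your private key file
        vapid_claims={"sub": "mailto:push-admin@example.com"},
    )
except WebPushException as exc:
    # 404/410 responses mean the subscription is gone and should be purged.
    print("Push failed:", exc)
```

With VAPID, the key pair plays the role the sender ID played: giving each customer their own pair keeps the sender identifiable, which matches the per-customer advice above.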

Related

How would I go about developing a program that automatically sends an email with the tracking number to a customer using EasyPost API?

I'm a fairly new web developer and I have an ecommerce website that integrates EasyPost to create and print shipping labels.
EasyPost has an API. Also, on each shipping label I see a JavaScript object (I think) that displays buyer_address ... "email": "example@gmail.com", which tells me that the email information is there.
My question is somewhat general in scope: What steps would I need to take to go about creating this automation? The website is built in Webflow, so I don't really have a "codebase" or "repository" to store whatever code is needed to build the automation.
Since the buyer email is making it into EasyPost with the integrations already in place, I feel that I could create a simple program that emails the tracking number to the buyer every time a label is generated, or perhaps when the package is shipped, without the program needing to interact with Webflow or other integrations.
I attempted using Zapier, as well as Make.com. Neither worked, and OrderDesk doesn't have a way to send tracking number emails.
It looks like Webflow has some kind of support for Webhooks (https://webflow.com/feature/create-webhooks-from-project-settings). EasyPost offers webhooks for free as an add-on service. Basically, with webhooks, EasyPost would send tracking events to Webflow proactively, but Webflow (or you) would need to manage the logic for what to do with those tracking events after they are delivered.
EasyPost Webhook Guide
I'm unaware of any off-the-shelf products that could do this for you without writing any code. We have a guide that details how you might accomplish this with Ruby (you could then follow this as an example for any other language): https://www.easypost.com/email-tracking-tutorial
A few suggestions:
Integrate something into Webflow if possible (I'm unfamiliar with the platform so couldn't say).
Build a simple script that runs on a schedule (cron job) and retrieves your trackers from EasyPost, then emails customers who have not yet been notified. To your point, this approach wouldn't require interacting with Webflow at all and could be done with some local code running on a server and just your EasyPost API key (see the sketch after this list).
I've created a simple UI for EasyPost: https://github.com/Justintime50/easypost-tools-ui, it could be interesting to add this particular use-case as a feature to that project. If you're interested, feel free to open an issue on GitHub for the repo listed here and I'd consider it.
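As a rough illustration of the scheduled-script suggestion above, here is a Python sketch. It assumes the official easypost client package (client and method names vary a little between library versions), and the email lookup is a stub, since as discussed elsewhere the buyer email has to come from your own records.

```python
# Sketch of a cron job that pulls recent trackers from EasyPost and emails
# customers. Assumes the official `easypost` Python package; names differ
# slightly between library versions.
import os
import smtplib
from email.message import EmailMessage

import easypost

client = easypost.EasyPostClient(os.environ["EASYPOST_API_KEY"])
already_notified = set()  # persist this in practice (DB, file, ...)

def lookup_email_for_shipment(shipment_id):
    """Hypothetical helper: find the buyer email in your own order records."""
    return None

def send_tracking_email(to_addr, tracking_code, status, url):
    msg = EmailMessage()
    msg["Subject"] = f"Your package {tracking_code} is {status}"
    msg["From"] = "shop@example.com"
    msg["To"] = to_addr
    msg.set_content(f"Follow your package here: {url}")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

page = client.tracker.all(page_size=100)
for tracker in page.trackers:
    if tracker.id in already_notified:
        continue
    email = lookup_email_for_shipment(tracker.shipment_id)
    if email:
        send_tracking_email(email, tracker.tracking_code, tracker.status,
                            tracker.public_url)
        already_notified.add(tracker.id)
```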
You'd use EasyPost's API webhooks to detect when shipment tracking information is provided or package information is updated.
https://www.easypost.com/docs/api#trackers
It looks like a tracker has a lot of states, so you can keep the client updated on the package status from the moment the tracking number is assigned:
EZ1000000001 pre_transit
EZ2000000002 in_transit
EZ3000000003 out_for_delivery
EZ4000000004 delivered
EZ5000000005 return_to_sender
EZ6000000006 failure
EZ7000000007 unknown
You can install webhooks from these docs.
To send the email, you can use an automation service e.g. Make to capture those webhook events, and then compose and send an email to that customer. I like MailJet for that purpose, because it has excellent template support and you can send from your own company domain. But there are many email-sending options.
A bigger challenge, maybe, is getting the email address to send to. I didn't spot it glancing through the Trackers or Shipments data structures, and I am primarily seeing physical address info.
If EasyPost is not tracking the customer's email with the shipment, you may have an extra challenge: you'd need to capture the client info through Webflow's order webhooks, associate it with EasyPost's shipment ID, and store both in a reference table.
Many automation services offer database-like functionality for this purpose, or you could use e.g. Google Sheets (columns: Webflow OrderID, EasyPost ShipmentID, customer Email) or Airtable.
But you'd have to look into the EasyPost integration as well, and you may need to make that integration manual so that you can acquire all three of those pieces of information at the same point in your business data flow.
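To make those moving parts concrete, here is a rough Python sketch of the webhook receiver doing the lookup-and-email step. Flask, the in-memory reference table, and the send_status_email helper are illustrative assumptions, not a prescribed EasyPost integration; the payload shape (a description field like "tracker.updated" with the updated object under result) follows the EasyPost webhook docs linked above.

```python
# Sketch of a small webhook receiver for EasyPost tracker events.
from flask import Flask, request

app = Flask(__name__)

# Reference table mapping EasyPost shipment IDs to customer emails,
# populated from Webflow's order webhooks (Sheets/Airtable/DB in practice).
shipment_to_email = {}

def send_status_email(to_addr, tracking_code, status):
    """Placeholder for the email step (Mailjet, SMTP, an automation service)."""
    print(f"mail {to_addr}: {tracking_code} is now {status}")

@app.post("/webhooks/easypost")
def easypost_webhook():
    event = request.get_json(force=True)
    if event.get("description") == "tracker.updated":
        tracker = event["result"]
        email = shipment_to_email.get(tracker.get("shipment_id", ""))
        if email:
            send_status_email(email, tracker["tracking_code"], tracker["status"])
    return "", 200
```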

Protect from bots creating multiple free accounts and uploading files

I am developing a web app for my university where users can create an account and upload images. Images are private and can only be seen by the person who uploaded them. In effect, it is like a cloud file system.
Each user has a free account with 500MB. I am using Amazon S3 to store the images, so storage costs money.
How can I prevent bots from uploading millions of MB? How can I prevent a bot from creating millions of new accounts and uploading 500MB per account, without hurting the user experience?
On one hand, I definitely don't want to put a CAPTCHA in the registration form because it hurts the conversion rate. On the other, I don't want to pay thousands of dollars because a bot uploaded millions of dummy images.
Does anyone know whether Dropbox, Google Drive, etc. suffer from this (content uploaded by bots)? It seems not to be a problem, because I couldn't find anything about it. All the spam-related problems I could read about only covered spam in forums, which makes sense: spam in forums can be read by other users, while spam in a service like Dropbox or Google Drive reaches no one. Nonetheless, I have to protect against it to avoid cost surprises.
As far as I can see, without using CAPTCHAs this can be done:
Set up monitoring systems that warn for specific abuse patterns (the same IP uploading lots of data and creating new accounts repeatedly).
Throttle users that follow those patterns; this will hopefully make the abuse worthless to them (a minimal sketch of this pattern detection follows this answer). If that fails, disable those accounts and have their owners mail/talk to you in order to explain what's happening.
Since you say it's a system for your university, make users provide proof of enrollment (e.g. a university e-mail address) in case of abuse.
Have this forbidden usage explicit in your terms of use.
Of course, a smart enough bot can work around all those problems.
For a more advanced solution, you might try some machine learning or AI that learns about normal and abnormal usage patterns, then applies that information to judge a possible abuser.
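As promised above, here is a minimal Python sketch of that pattern detection: a sliding one-hour window of signups per IP, flagging sources that exceed a threshold. The window size and limit are illustrative assumptions, not recommendations.

```python
# Sliding-window signup counter per IP; numbers are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
MAX_SIGNUPS_PER_WINDOW = 5

signups_by_ip = defaultdict(deque)

def record_signup(ip):
    """Return False (flag for throttling/review) if the IP looks abusive."""
    now = time.monotonic()
    window = signups_by_ip[ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) <= MAX_SIGNUPS_PER_WINDOW
```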
I would recommend that you:
make users register with their email address
don't allow multiple accounts for a single email address
send them a registration confirmation email, and deactivate "unconfirmed" accounts after a short amount of time (e.g. 3 days); a sketch of this flow follows below
AFAIK, Drupal offers this kind of control out of the box or with little effort (and no programming).
This won't solve all your problems, but it will reduce the risk of bot exploits.
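Here is a minimal Python sketch of that confirm-by-email flow; the in-memory store is purely illustrative (use your database), and the 3-day deadline mirrors the suggestion above.

```python
# Issue a random confirmation token at signup; purge unconfirmed accounts.
import secrets
import time

CONFIRM_DEADLINE = 3 * 24 * 3600  # 3 days, as suggested above

pending = {}  # token -> (email, created_at); use a real store in practice

def start_registration(email):
    token = secrets.token_urlsafe(32)
    pending[token] = (email, time.time())
    return token  # embed this in the confirmation link emailed to the user

def confirm(token):
    record = pending.pop(token, None)
    return record is not None and time.time() - record[1] <= CONFIRM_DEADLINE

def purge_unconfirmed():
    """Run periodically (e.g. daily) to drop expired registrations."""
    cutoff = time.time() - CONFIRM_DEADLINE
    for token in [t for t, (_, ts) in pending.items() if ts < cutoff]:
        del pending[token]
```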
As you said you need registration, there are two angles from which to tackle this problem: make sure no bots register, and/or limit the number of uploads.
I would personally use both. For the signup, design a registration form where the user has to enter their email address, send them a mail with a link in it, and activate their account only after they click that link. Or have the user solve a simple math question on signup.
For the second point, you can store the number of uploaded bytes per user over time. You can then set a quota on allowed upload volume per time period, for example no more than 10MB per hour. If a user hits this limit more than n times, you can deactivate their account.
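A minimal Python sketch of that quota check, with the 10MB/hour figure from above and an assumed three-strikes rule:

```python
# Per-user upload quota over a one-hour window; numbers are illustrative.
import time
from collections import defaultdict

QUOTA_BYTES_PER_HOUR = 10 * 1024 * 1024
MAX_STRIKES = 3

usage = defaultdict(list)    # user_id -> [(timestamp, bytes), ...]
strikes = defaultdict(int)   # user_id -> number of quota violations

def deactivate_account(user_id):
    """Placeholder: mark the account disabled in your user store."""
    print(f"deactivating {user_id}")

def allow_upload(user_id, size):
    now = time.time()
    usage[user_id] = [(t, b) for t, b in usage[user_id] if now - t < 3600]
    if sum(b for _, b in usage[user_id]) + size > QUOTA_BYTES_PER_HOUR:
        strikes[user_id] += 1
        if strikes[user_id] >= MAX_STRIKES:
            deactivate_account(user_id)
        return False
    usage[user_id].append((now, size))
    return True
```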
And: set up an alerting and monitoring system. For example, monitor the number of non-activated users, monitor the amount of uploads, etc., and set up alerts if these exceed a certain threshold.
These methods are not perfect and probably won't block all bots, but they will at least make it much harder for bots to upload unwanted data. They are also quite simple, so you can start off with your project and see whether this is really a problem. And if bots do upload data, you will at least receive alerts and can devise a better solution afterwards.

Easy script to sell and generate unique passwords to a protected area?

Here is what I'm trying to accomplish:
Three different products-- each one consists of online content, housed within a unique folder.
The customer purchases one of the three products, and receives a username/password (or it could be some sort of dynamic link that expires) for that product.
I am not a programmer, but I know enough to get myself in trouble. I thought I could find a simple script where I would just have to change a couple of parameters and be good to go. Surely this has been done before, right?
I need something that will somehow send the info to a payment processor (PayPal is preferable, but Google Checkout could be an option too), generate a unique password or code and email it to the buyer, and of course communicate to the folder where the product lives so that the password/code will work.
Am I crazy? Is this something that I need advanced development skills to pull off? I have been looking at open-source shopping carts to see if one of them has this functionality built in, but haven't been able to find anything.
There is a PayPal script that is supposed to do this, but I have tried working with it before and it is a real pain... I'm not even sure it will ultimately work the way I want it to.
Any suggestions are most welcome!
From your description it looks like you are trying to sell digital content.
Both Google Checkout and Paypal have frameworks in place that allow you to securely sell and deliver digital goods.
Please have a look at the doc below for Google Checkout Digital Delivery:
http://code.google.com/apis/checkout/developer/Google_Checkout_Digital_Delivery.html

Design an API for a web service without "selling the farm"?

I'm going to try to phrase this as a generic question.
A company runs a website that has a lot of valuable information on it. This information is queried from an internal private database. So technically, the information in the database is the valuable part.
If this company wished to develop an API that developers could use to access their database of valuable & useful information, what approach should the company take?
It's important to give developers what they need. But it is also important to keep competing websites from using the API to scrape everything and siphon off all the traffic from the company's own website.
Is there some way the API could be used that drives traffic back to the original company's website? Something that gives users a reason to keep going there.
This is a design consideration that my company is struggling with that I can imagine other web-based services have come across before.
Institute API keys - don't make the API public. Maybe make the signup process more involved than "anyone with an e-mail address".
Rate limit the API based on keys. If you're running more than X requests a minute, you're likely mining the database.
Don't provide a "fetch everything" API. Require users to already know something in order to get information about it. Don't reveal what you know.
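A minimal Python sketch of that per-key rate limiting, as a token bucket (the 60-requests-per-minute allowance and the burst size are illustrative assumptions):

```python
# Token bucket per API key; limits are illustrative only.
import time

RATE = 60 / 60.0  # tokens per second, i.e. 60 requests/minute
BURST = 10        # short bursts allowed above the steady rate

buckets = {}  # api_key -> (tokens, last_refill_time)

def allow_request(api_key):
    now = time.monotonic()
    tokens, last = buckets.get(api_key, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1:
        buckets[api_key] = (tokens, now)
        return False  # over the limit; likely mining the database
    buckets[api_key] = (tokens - 1, now)
    return True
```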
I've seen a lot of companies giving out API keys and stating a TOS that all developers must adhere to. For example, any page that uses data from the API must include your logo and a link back to your website. If any developer is found breaking the rules, the API key can be cancelled and your data is safe again.
Who is meant to use the API?
A good general method of solving this problem is to limit access to the data to end users (rather than allowing applications or developers at it). Provide both applications and users with their own identification, and make sure that accessing a subset of the data requires a combination of the user key and the application key.
Following this pattern, each user will have access to a very limited subset of the data (presumably, the data that they require for their own specific use), and you can put measures in place to enforce this. Any attempts at data-mining will become obvious.
This type of approach meshes well with capability-type security models on the server side.
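For illustration, a small Python sketch of that combined check; every name here (the registries, the data layer) is a hypothetical stand-in:

```python
# A request must present both a valid application key and a valid user
# token, and the query is scoped to that user's slice of the data.
from dataclasses import dataclass

@dataclass
class Identity:
    id: str

registered_apps = {"app-key-123": Identity("app-1")}  # hypothetical registry
sessions = {"user-token-abc": Identity("user-7")}     # hypothetical sessions

def run_scoped_query(query, owner, via_app):
    """Placeholder for the real data layer."""
    return []

def fetch_records(app_key, user_token, query):
    app = registered_apps.get(app_key)
    user = sessions.get(user_token)
    if app is None or user is None:
        raise PermissionError("unknown application or user")
    # Scoping to the user keeps any one app key from scanning across many
    # users without it standing out clearly in access logs.
    return run_scoped_query(query, owner=user.id, via_app=app.id)
```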

How can I test out a new feature on just a percentage of my user base?

When Facebook rolls out a new version of their site, they show it to a percentage of users first.
How could I go about doing this cleanly?
Have your users sign up for your Beta.
Select a certain percentage of those who sign up for your Beta. As you make changes, keep incrementally adding more testers. You don't want to let everyone in at once, so that you can keep testing all the way up until the feature is complete and released. Look at Stack Overflow as an example.
You would do this because most of the people who sign up will check out your beta version, then leave. They most likely will not come back / keep testing for you.
It is also better to opt-in than opt-out. Your users may not want to be your test subjects.
With a proxy that diverts some fraction of the sessions to one of two separate running instances. The proxy can be a software proxy on the hosting machine.
Well, depending on the change, if you have a farm of web servers you could apply the change to only some of the servers in the farm. That way only certain users who were "lucky" enough to hit one of the updated servers would see the change. Of course, this approach assumes that your web proxy will always route any given user to the same server (or group of updated servers) in the farm.
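Beyond the proxy and server-farm approaches above, a common in-application technique (not from either answer, just a standard pattern) is deterministic bucketing: hash each user ID so the same user always lands in the same cohort, then compare against the rollout percentage.

```python
# Hash-based feature bucketing: a user stays in the same bucket as the
# percentage is dialed up, so nobody flips back and forth between versions.
import hashlib

def in_rollout(user_id, feature, percent):
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < percent

# Example: show the (hypothetical) "new_dashboard" to 10% of users.
if in_rollout("user-42", "new_dashboard", 10):
    pass  # render the new version here
```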