Amazon SES - Using a C# console app to send a high volume of personalized emails per second

I'm using a C# console app to send personalized bulk email, but I've hit a bottleneck: with sequential code I can only send one email per second. I've tried building a multi-threaded app, but I'm still only able to send two emails per second.
How can I do it better?
This is a fragment of the code:
public static void MainProgram(List<EmailEnt> emails, string cuerpo_email_en, string cuerpo_email_es)
{
    // the emails list is populated with 50,000 emails
    DateTime timeControllerForSendingEmails = DateTime.Now;
    while (emails.Count > 0)
    {
        if ((DateTime.Now - timeControllerForSendingEmails).TotalSeconds >= 1)
        {
            timeControllerForSendingEmails = DateTime.Now;
            // this method gets a list of 60 emails and removes them from the main list
            List<EmailEnt> queuedEmails = GetEmailsQueue(emails, 60);
            Send(queuedEmails);
        }
    }
}

public static void Send(List<EmailEnt> queuedEmails)
{
    IList<Task> tasks = new List<Task>();
    List<string> logLines = new List<string>();
    foreach (EmailEnt emailEnt in queuedEmails)
    {
        string subject = "Hello {name}";
        string body = "I'm the body";

        tasks.Add(Task.Factory.StartNew(() =>
        {
            SendEmail(emailEnt, subject, body);
        }));
    }
    Task.WaitAll(tasks.ToArray());
}

Any chance you are still running in 'sandbox mode'? According to AWS:
When you are in the sandbox, your sending quota is 200 messages per 24-hour period and your maximum sending rate is one message per second. To increase your sending limits, you need to request production access. For more information, see Requesting Production Access to Amazon SES. After you request production access and start sending emails, you can increase your sending limits further by following the guidance in the Increasing Your Amazon SES Sending Limits section.
If not, I use code similar to this to send 15+ emails per second (not on SES), and it works fine:
Parallel.ForEach(mailQueue, new ParallelOptions() {MaxDegreeOfParallelism = 7}, itm=>SendEmail(itm));
which is perhaps functionally equivalent to what you are doing already, but I can say for sure it provides much greater throughput and may be worth a try.
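For completeness, here is a rough sketch of how that pattern could be wired directly to SES. Treat it as a starting point, not a drop-in replacement: it assumes the AWSSDK.SimpleEmail NuGet package, that EmailEnt exposes Address and Name properties (rename to match your entity), and a verified sender identity.

// Sketch only: the package, the EmailEnt property names and the sender address are assumptions.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.SimpleEmail;
using Amazon.SimpleEmail.Model;

public static class SesBulkSender
{
    private static readonly AmazonSimpleEmailServiceClient client =
        new AmazonSimpleEmailServiceClient(RegionEndpoint.USEast1); // pick your SES region

    public static void SendAll(IEnumerable<EmailEnt> emails, string subjectTemplate, string bodyTemplate)
    {
        // MaxDegreeOfParallelism roughly caps the number of concurrent sends;
        // tune it against the rate reported by GetSendQuota for your account.
        Parallel.ForEach(emails, new ParallelOptions { MaxDegreeOfParallelism = 7 }, emailEnt =>
        {
            var request = new SendEmailRequest
            {
                Source = "sender@example.com", // must be a verified SES identity
                Destination = new Destination { ToAddresses = new List<string> { emailEnt.Address } },
                Message = new Message
                {
                    Subject = new Content(subjectTemplate.Replace("{name}", emailEnt.Name)),
                    Body = new Body { Text = new Content(bodyTemplate) }
                }
            };
            // Blocking on the async call keeps the console app simple;
            // add retry/backoff for throttling errors in real code.
            client.SendEmailAsync(request).GetAwaiter().GetResult();
        });
    }
}

If SES starts throttling, either lower MaxDegreeOfParallelism or reintroduce a small per-batch delay, similar to your existing one-second window.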

Related

What happens when a Flux is returned from a Spring web controller?

I am comparatively new to reactive APIs and was curious about what was happening behind the scenes when we return a Flux from a web controller.
According to spring-web documentation
Reactive return values are handled as follows:
A single-value promise is adapted to, similar to using DeferredResult. Examples include Mono (Reactor) or Single (RxJava).
A multi-value stream with a streaming media type (such as application/stream+json or text/event-stream) is adapted to, similar to using ResponseBodyEmitter or SseEmitter. Examples include Flux (Reactor) or Observable (RxJava). Applications can also return Flux or Observable.
A multi-value stream with any other media type (such as application/json) is adapted to, similar to using DeferredResult<List<?>>.
I created two APIs as below:
#GetMapping("/async-deferredresult")
public DeferredResult<List<String>> handleReqDefResult(Model model) {
LOGGER.info("Received async-deferredresult request");
DeferredResult<List<String>> output = new DeferredResult<>();
ForkJoinPool.commonPool().submit(() -> {
LOGGER.info("Processing in separate thread");
List<String> list = new ArrayList<>();
for (int i = 0; i < 10000 ; i++) {
list.add(String.valueOf(i));
}
output.setResult(list);
});
LOGGER.info("servlet thread freed");
return output;
}
#GetMapping(value = "/async-flux",produces = MediaType.APPLICATION_JSON_VALUE)
public Flux<String> handleReqDefResult1(Model model) {
LOGGER.info("Received async-deferredresult request");
List<String> list = new ArrayList<>();
list.stream();
for (int i = 0; i < 10000 ; i++) {
list.add(String.valueOf(i));
}
return Flux.fromIterable(list);
}
So the expectation was that both APIs should behave the same, since a multi-value stream (Flux) should behave similarly to returning a DeferredResult. But in the API where a DeferredResult was returned, the whole list was printed in one go in the browser, whereas in the API where a Flux was returned, the numbers were printed sequentially (one by one).
What exactly is happening when I return a Flux from the controller?
When we return a Flux from a service endpoint, many things can happen, but I assume you want to know what happens when the Flux is observed as a stream of events by the client of this endpoint.
Scenario One: By setting 'application/json' as the content type of the endpoint, Spring tells the client to expect a JSON body.
#GetMapping(value = "/async-flux", produces = MediaType.APPLICATION_JSON_VALUE)
public Flux<String> handleReqDefResult1(Model model) {
List<String> list = new ArrayList<>();
for (int i = 0; i < 10000; i++) {
list.add(String.valueOf(i));
}
return Flux.fromIterable(list);
}
The output at the client will be the whole set of numbers in one go, and once the response is delivered the connection is closed. Even though you have used Flux as the response type, you are still bound by how HTTP over TCP/IP works: the endpoint receives an HTTP request, executes the logic, and responds with an HTTP response containing the final result.
As a result, you do not see the real value of a reactive API.
Scenario Two: By setting 'application/stream+json' as the content type of the endpoint, Spring starts to treat the resulting events of the Flux stream as individual JSON items. When an item is emitted it gets serialised, the HTTP response buffer is flushed, and the connection from the server to the client is kept open until the event sequence completes.
To get that working we can slightly modify your original code as follows.
#GetMapping(value = "/async-flux",produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<String> handleReqDefResult1(Model model) {
List<String> list = new ArrayList<>();
for (int i = 0; i < 10000 ; i++) {
list.add(String.valueOf(i));
}
return Flux.fromIterable(list)
// we have 1 sec delay to demonstrate the difference of behaviour.
.delayElements(Duration.ofSeconds(1));
}
This time we can see the real value of a reactive API endpoint: it is able to deliver results to its client as data becomes available.
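As a rough illustration of what the client sees on the wire, here is a small sketch (not part of the Spring answer; it assumes the /async-flux endpoint above is running locally on port 8080) that reads the streaming response incrementally instead of buffering it:

// Illustration only: the endpoint URL and port are assumptions.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class StreamingClientDemo
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        using (var response = await http.GetAsync(
            "http://localhost:8080/async-flux",
            HttpCompletionOption.ResponseHeadersRead)) // do not buffer the whole body
        using (var stream = await response.Content.ReadAsStreamAsync())
        using (var reader = new StreamReader(stream))
        {
            string line;
            // With application/stream+json each emitted item is serialised and flushed
            // individually, so lines show up here as the Flux produces them.
            while ((line = await reader.ReadLineAsync()) != null)
            {
                Console.WriteLine($"{DateTime.Now:HH:mm:ss} -> {line}");
            }
        }
    }
}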
You can find more details about how to build reactive REST APIs at
https://medium.com/@senanayake.kalpa/building-reactive-rest-apis-in-java-part-1-cd2c34af55c6
https://medium.com/@senanayake.kalpa/building-reactive-rest-apis-in-java-part-2-bd270d4cdf3f

Spring WebFlux Web Client - Iterating paged REST API

I need to get the items from all pages of a pageable REST API. I also need to start processing items as soon as they are available, without waiting for all the pages to load. To do so, I'm using Spring WebFlux and its WebClient, and want to return Flux<Item>.
Also, the REST API I'm using is rate limited, and each response to it contains headers with details on the current limits:
- Size of the current window
- Remaining time in the current window
- Request quota in window
- Requests left in current window
The response to a single page request looks like:
{
"data": [],
"meta": {
"pagination": {
"total": 10,
"current": 1
}
}
}
The data array contains the actual items, while the meta object contains pagination info.
My current solution first does a "dummy" request, just to get the total number of pages, and the rate limits.
Mono<Paginated> paginated = client.get()
    .uri(uri)
    .exchange()
    .flatMap(response -> {
        HttpHeaders headers = response.headers().asHttpHeaders();
        Limits limits = new Limits();
        limits.setWindowSize(headers.getFirst("X-Window-Size"));
        limits.setWindowRemaining(headers.getFirst("X-Window-Remaining"));
        limits.setRequestsQuota(headers.getFirst("X-Requests-Quota"));
        limits.setRequestsLeft(headers.getFirst("X-Requests-Remaining"));
        return response.bodyToMono(Paginated.class)
            .map(p -> {
                p.setLimits(limits);
                return p;
            });
    });
Afterwards, I emit a Flux containing page numbers, and for each page, I do a REST API request, each request being delayed enough so it doesn't get past the limit, and return a Flux of extracted items:
return paginated.flatMapMany(pg -> {
    return Flux.range(1, pg.getMeta().getPagination().getTotal())
        .delayElements(Duration.ofMillis(pg.getLimits().getWindowRemaining() / pg.getLimits().getRequestsQuota()))
        .flatMap(page -> {
            return client.get()
                .uri(pageUri)
                .retrieve()
                .bodyToMono(Paginated.class)
                .flatMapMany(p -> Flux.fromIterable(p.getData()));
        });
});
This does work, but I'm not happy with it because:
- It does an initial "dummy" request to get the number of pages, and then repeats the same request to get the actual data.
- It gets the rate limits only with the initial request, and assumes the limits won't change (e.g. that it's the only client using the API), which may not be true, in which case it will get an error for exceeding the limit.
So my question is how to refactor it so it doesn't need the initial request, but rather gets the limits, page count and data from the first real request, and continues through all pages while updating (and respecting) the limits.
I think this code will do what you want. The idea is to build a Flux that makes a call to your resource server and, while handling the response, pushes a new event onto that Flux so it can make the call for the next page.
The code is composed of:
A simple wrapper that contains the next page to call and the delay to wait before executing the call:
private class WaitAndNext{
private String next;
private long delay;
}
A FluxProcessor that makes the HTTP call and processes the response:
FluxProcessor<WaitAndNext, WaitAndNext> processor = DirectProcessor.<WaitAndNext>create();
FluxSink<WaitAndNext> sink = processor.sink();

processor
    .flatMap(x -> Mono.just(x).delayElement(Duration.ofMillis(x.delay)))
    .map(x -> WebClient.builder()
        .baseUrl(x.next)
        .defaultHeader("Accept", "application/json")
        .build())
    .flatMap(x -> x.get()
        .exchange()
        .flatMapMany(z -> manageResponse(sink, z))
    )
    .subscribe(........);
I split the code out into a method that only manages the response: it simply unwraps your data AND adds a new event to the sink (the event being the next page to call after the given delay).
private Flux<Data> manageResponse(FluxSink<WaitAndNext> sink, ClientResponse resp) {
    if (resp.statusCode() != HttpStatus.OK) {
        sink.error(new IllegalStateException("Status code invalid"));
    }
    WaitAndNext wn = new WaitAndNext();
    HttpHeaders headers = resp.headers().asHttpHeaders();
    wn.delay = Integer.parseInt(headers.getFirst("X-Window-Remaining")) / Integer.parseInt(headers.getFirst("X-Requests-Quota"));
    return resp.bodyToMono(Item.class)
        .flatMapMany(p -> {
            if (p.paginated.current == p.paginated.total) {
                sink.complete();
            } else {
                wn.next = "https://....?page=" + (p.paginated.current + 1);
                sink.next(wn);
            }
            return Flux.fromIterable(p.getData());
        });
}
Now we just need to initialize the system by calling for the retrieval of the first page with no delay:
WaitAndNext wn=new WaitAndNext();
wn.next="https://....?page=1";
wn.delay=0;
sink.next(wn);

Google Sheets API v4 receives HTTP 401 responses for public feeds

I'm having no luck getting a response from v4 of the Google Sheets API when running against a public (i.e. "Published To The Web" AND shared with "Anyone On The Web") spreadsheet.
The relevant documentation states:
"If the request doesn't require authorization (such as a request for public data), then the application must provide either the API key or an OAuth 2.0 token, or both—whatever option is most convenient for you."
And to provide the API key, the documentation states:
"After you have an API key, your application can append the query parameter key=yourAPIKey to all request URLs."
So, I should be able to get a response listing the sheets in a public spreadsheet at the following URL:
https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}?key={myAPIkey}
(with, obviously, the id and key supplied in the path and query string respectively)
However, when I do this, I get an HTTP 401 response:
{
error: {
code: 401,
message: "The request does not have valid authentication credentials.",
status: "UNAUTHENTICATED"
}
}
Can anyone else get this to work against a public workbook? If not, can anyone monitoring this thread from the Google side either comment or provide a working sample?
I managed to get this working. I was frustrated at first too, and this is not a bug. Here's how I did it:
First, enable these in your Google Developers Console (GDC) to get rid of authentication errors:
-Google Apps Script Execution API
-Google Sheets API
Note: Make sure the Google account you used in the GDC is the same account you're using in the Spreadsheet project, or else you might get a "The API Key and the authentication credential are from different projects" error message.
Go to https://developers.google.com/oauthplayground where you will acquire authorization tokens.
On Step 1, choose Google Sheets API v4 and select the https://www.googleapis.com/auth/spreadsheets scope so you have both read and write permissions.
Click the Authorize APIs button. Allow the authentication and you'll proceed to Step 2.
On Step 2, click Exchange authorization code for tokens button. After that, proceed to Step 3.
On Step 3, it's time to paste your URL request. Since the default method is GET, proceed and click the Send the request button.
Note: Make sure your URL requests are the ones indicated in the Spreadsheets v4 docs.
Here's my sample URL request:
https://sheets.googleapis.com/v4/spreadsheets/SPREADSHEET_ID?includeGridData=false
I got an HTTP/1.1 200 OK response and it displayed my requested data. This goes for all Spreadsheets v4 server-side processes.
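If you want to replay the same call from code rather than from the playground, a minimal sketch looks like the following (ACCESS_TOKEN is the token from Step 2 and SPREADSHEET_ID is a placeholder; both are values you must fill in):

// Sketch only: ACCESS_TOKEN and SPREADSHEET_ID are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class SheetsReadDemo
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Same request the playground sends in Step 3, with the OAuth token
            // passed as a Bearer authorization header.
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "ACCESS_TOKEN");

            var url = "https://sheets.googleapis.com/v4/spreadsheets/SPREADSHEET_ID?includeGridData=false";
            var response = await http.GetAsync(url);

            Console.WriteLine((int)response.StatusCode); // expect 200 when the token is valid
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}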
Hope this helps.
We recently fixed this and it should now be working. Sorry for the troubles, please try again.
The document must be shared to "Anyone with the link" or "Public on the web". (Note: the publishing settings from "File -> Publish to the web" are irrelevant, unlike in the v3 API.)
This is not a solution to the problem, but I think it is a good way to achieve the goal. On the site http://embedded-lab.com/blog/post-data-google-sheets-using-esp8266/ I found how to update a spreadsheet using Google Apps Script. That example uses the GET method; I will show you the POST method with a JSON payload.
How to POST:
Create a Google Spreadsheet and, in Tools > Script Editor, paste the following script. Modify the script by entering the appropriate spreadsheet ID and sheet tab name (lines 27 and 28 in the script).
function doPost(e)
{
var success = false;
if (e != null)
{
var JSON_RawContent = e.postData.contents;
var PersonalData = JSON.parse(JSON_RawContent);
success = SaveData(
PersonalData.Name,
PersonalData.Age,
PersonalData.Phone
);
}
// Return plain text Output
return ContentService.createTextOutput("Data saved: " + success);
}
function SaveData(Name, Age, Phone)
{
try
{
var dateTime = new Date();
// Paste the URL of the Google Sheets starting from https thru /edit
// For e.g.: https://docs.google.com/---YOUR SPREADSHEET ID---/edit
var MyPersonalMatrix = SpreadsheetApp.openByUrl("https://docs.google.com/spreadsheets/d/---YOUR SPREADSHEET ID---/edit");
var MyBasicPersonalData = MyPersonalMatrix.getSheetByName("BasicPersonalData");
// Get last edited row
var row = MyBasicPersonalData.getLastRow() + 1;
MyBasicPersonalData.getRange("A" + row).setValue(Name);
MyBasicPersonalData.getRange("B" + row).setValue(Age);
MyBasicPersonalData.getRange("C" + row).setValue(Phone);
return true;
}
catch(error)
{
return false;
}
}
Now save the script and go to tab Publish > Deploy as Web App.
Execute the app as: Me (xyz@gmail.com)
Who has access to the app: Anyone, even anonymous
Then, to test, you can use the Postman app.
Or using UWP:
private async void Button_Click(object sender, RoutedEventArgs e)
{
using (HttpClient httpClient = new HttpClient())
{
httpClient.BaseAddress = new Uri(@"https://script.google.com/");
httpClient.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
httpClient.DefaultRequestHeaders.AcceptEncoding.Add(new System.Net.Http.Headers.StringWithQualityHeaderValue("utf-8"));
string endpoint = @"/macros/s/---YOUR SCRIPT ID---/exec";
try
{
PersonalData personalData = new PersonalData();
personalData.Name = "Jarek";
personalData.Age = "34";
personalData.Phone = "111 222 333";
HttpContent httpContent = new StringContent(JsonConvert.SerializeObject(personalData), Encoding.UTF8, "application/json");
HttpResponseMessage httpResponseMessage = await httpClient.PostAsync(endpoint, httpContent);
if (httpResponseMessage.IsSuccessStatusCode)
{
string jsonResponse = await httpResponseMessage.Content.ReadAsStringAsync();
//do something with json response here
}
}
catch (Exception ex)
{
}
}
}
public class PersonalData
{
public string Name;
public string Age;
public string Phone;
}
The above code requires the Newtonsoft.Json NuGet package.
If your feed is public and you are using an API key, make sure you are issuing an HTTP GET request. With a POST request, you will receive this error.
I faced the same issue: I was getting data using the spreadsheets.getByDataFilter method, which uses a POST request.
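To make the distinction concrete, here is a hedged sketch of a key-only GET that works for public data (the spreadsheet ID, range and API key below are placeholders); the same key on a POST endpoint such as spreadsheets.getByDataFilter still returns 401:

// Sketch only: the spreadsheet ID, range and key are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PublicSheetDemo
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // A GET with an API key is enough for a public ("Anyone with the link") sheet.
            var url = "https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}/values/Sheet1!A1:B10?key={myAPIkey}";
            var json = await http.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}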

How to list all the activities in a specific google domain?

Any ideas on how I can list all the activities in my domain using the new Google+ Domains API in Java?
The Google Developers Live video shows, at the 4:00 minute mark, that you can do something like this:
Plus.Activities.List listActivities = plus.activities().list("me", "domain");
The Link for this code is here.
But when I actually run the same line of code it shows me the following error.
{
"code" : 400,
"errors" : [ {
"domain" : "global",
"location" : "collection",
"locationType" : "parameter",
"message" : "Invalid string value: 'domain'. Allowed values: [user]",
"reason" : "invalidParameter"
} ],
"message" : "Invalid string value: 'domain'. Allowed values: [user]"
}
The error makes sense, as the activities.list documentation says that "user" is the only acceptable value for collection, not "domain".
So what should I do about this issue?
As you say, the only available way is to list posts by the currently logged-in user. You have to use user delegation (with service accounts) and loop over all users in the domain in order to get all published activities.
You can use the updated field on the response to check if there is anything new in a user's list of activities.
This line of thought applies to the whole Domains API: every operation is done on behalf of a user; there is no "admin" account with superpowers. This can be a limitation when acting on a large number of users, as you are forced to authenticate for each one in turn (if someone has an idea of how to achieve this more efficiently, please share!).
As the documentation says, only "public" is allowed:
https://developers.google.com/+/api/latest/activities/list
However, even using the code provided in the example in the API docs, after going through successful authentication I get 0 activities.
/** List the public activities for the authenticated user. */
private static void listActivities() throws IOException {
System.out.println("Listing My Activities");
// Fetch the first page of activities
Plus.Activities.List listActivities = plus.activities().list("me", "public");
listActivities.setMaxResults(100L);
// Pro tip: Use partial responses to improve response time considerably
listActivities.setFields("nextPageToken,items(id,url,object/content)");
ActivityFeed activityFeed = listActivities.execute();
// Unwrap the request and extract the pieces we want
List<Activity> activities = activityFeed.getItems();
System.out.println("Number of activities: " + activities.size());
// Loop through until we arrive at an empty page
while (activities != null) {
for (Activity activity : activities) {
System.out.println("ID " + activity.getId() + " Content: " +
activity.getObject().getContent());
}
// We will know we are on the last page when the next page token is null.
// If this is the case, break.
if (activityFeed.getNextPageToken() == null) {
break;
}
// Prepare to request the next page of activities
listActivities.setPageToken(activityFeed.getNextPageToken());
// Execute and process the next page request
activityFeed = listActivities.execute();
activities = activityFeed.getItems();
}
}
Anybody know how to get this to work?
When you use the Google+ API you must use "public", but when you use the Google+ Domains API you must use the "user" parameter value.

Delaying writes to SQL Server

I am working on an app and need to keep track of how many views a page has, almost like how SO does it. It is a value used to determine how popular a given page is.
I am concerned that writing to the DB every time a new view needs to be recorded will impact performance. I know this is borderline premature optimization, but I have experienced the problem before. Anyway, the value doesn't need to be real time; it is OK if it is delayed by 10 minutes or so. I was thinking that caching the data, and doing one large write every X minutes, should help.
I am running on Windows Azure, so the Appfabric cache is available to me. My original plan was to create some sort of compound key (PostID:UserID), and tag the key with "pageview". Appfabric allows you to get all keys by tag. Thus I could let them build up, and do one bulk insert into my table instead of many small writes. The table looks like this, but is open to change.
int PageID | guid userID | DateTime ViewTimeStamp
The website would still get the value from the database, writes would just be delayed, make sense?
I just read that the Windows Azure AppFabric cache does not support tag-based searches, which pretty much negates my idea.
My question is, how would you accomplish this? I am new to Azure, so I am not sure what my options are. Is there a way to use the cache without tag based searches? I am just looking for advice on how to delay these writes to SQL.
You might want to take a look at http://www.apathybutton.com (and the Cloud Cover episode it links to), which talks about a highly scalable way to count things. (It might be overkill for your needs, but hopefully it gives you some options.)
You could keep a queue in memory and, on a timer, drain the queue, collapse the queued items by totaling the counts per page, and write them in one SQL batch/round trip. For example, using a table-valued parameter (TVP) you could write the queued totals with one stored procedure call.
That of course doesn't guarantee the view counts get written, since they sit in memory and are written latently, but page counts shouldn't be critical data and crashes should be rare.
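Here is a hedged sketch of that TVP round trip; the table type, stored procedure and Pages table below are hypothetical names you would adapt to your own schema:

// Sketch only: assumes a user-defined table type and stored procedure such as
//   CREATE TYPE dbo.PageHitType AS TABLE (PageID int, HitCount int);
//   CREATE PROCEDURE dbo.AddPageHits @hits dbo.PageHitType READONLY AS
//     UPDATE p SET ViewCount = p.ViewCount + h.HitCount
//     FROM Pages p JOIN @hits h ON h.PageID = p.PageID;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class PageHitWriter
{
    public static void WriteBatch(string connectionString, IDictionary<int, int> hitsByPage)
    {
        // Copy the drained, collapsed counts into a DataTable matching the table type.
        var table = new DataTable();
        table.Columns.Add("PageID", typeof(int));
        table.Columns.Add("HitCount", typeof(int));
        foreach (var kvp in hitsByPage)
            table.Rows.Add(kvp.Key, kvp.Value);

        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.AddPageHits", con) { CommandType = CommandType.StoredProcedure })
        {
            var p = cmd.Parameters.AddWithValue("@hits", table);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.PageHitType";
            con.Open();
            cmd.ExecuteNonQuery(); // one round trip for the whole drained queue
        }
    }
}

Because the increment happens inside the stored procedure, the update stays atomic even with several web roles writing at once.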
You might want to have a look at how the "diagnostics" feature in Azure works. Not because you would use diagnostics for what you are doing, but because it deals with a similar problem and may provide some inspiration. I am just about to implement a data auditing feature, and I want to log it to table storage, so I also want to delay and batch the updates together, and I have taken a lot of inspiration from diagnostics.
Now, the way Diagnostics in Azure works is that each role starts a little background "transfer" thread. Whenever you write any traces, they get stored in a list in local memory, and the background thread will (by default) bunch all the requests up and transfer them to table storage every minute.
In your scenario, I would let each role instance keep track of a count of hits and then use a background thread to update the database every minute or so.
I would probably use something like a static ConcurrentDictionary (or one hanging off a singleton) on each web role, with each hit incrementing the counter for the page identifier. You'd need some thread-handling code to allow multiple requests to update the same counter in the list. Alternatively, just allow each "hit" to add a new record to a shared thread-safe list.
Then have a background thread, once per minute, increment the database with the number of hits per page since last time, and reset the local counter to 0 (or empty the shared list if you are going with that approach); again, be careful about the multi-threading and locking.
The important thing is to make sure your database update is atomic: if you read the current count from the database, increment it, and then write it back, you may have two different web role instances doing this at the same time and thus lose one update.
EDIT:
Here is a quick sample of how you could go about this.
using System.Collections.Concurrent;
using System.Data.SqlClient;
using System.Threading;
using System;
using System.Collections.Generic;
using System.Linq;
class Program
{
static void Main(string[] args)
{
// You would put this in your Application_start for the web role
Thread hitTransfer = new Thread(() => HitCounter.Run(new TimeSpan(0, 0, 1))); // You'd probably want the transfer to happen once a minute rather than once a second
hitTransfer.Start();
//Testing code - this just simulates various web threads being hit and adding hits to the counter
RunTestWorkerThreads(5);
Thread.Sleep(5000);
// You would put the following line in your Application shutdown
HitCounter.StopRunning(); // You could do some cleverer stuff with aborting threads, joining the thread etc but you probably won't need to
Console.WriteLine("Finished...");
Console.ReadKey();
}
private static void RunTestWorkerThreads(int workerCount)
{
Thread[] workerThreads = new Thread[workerCount];
for (int i = 0; i < workerCount; i++)
{
workerThreads[i] = new Thread(
(tagname) =>
{
Random rnd = new Random();
for (int j = 0; j < 300; j++)
{
HitCounter.LogHit(tagname.ToString());
Thread.Sleep(rnd.Next(0, 5));
}
});
workerThreads[i].Start("TAG" + i);
}
foreach (var t in workerThreads)
{
t.Join();
}
Console.WriteLine("All threads finished...");
}
}
public static class HitCounter
{
private static System.Collections.Concurrent.ConcurrentQueue<string> hits;
private static object transferlock = new object();
private static volatile bool stopRunning = false;
static HitCounter()
{
hits = new ConcurrentQueue<string>();
}
public static void LogHit(string tag)
{
hits.Enqueue(tag);
}
public static void Run(TimeSpan transferInterval)
{
while (!stopRunning)
{
Transfer();
Thread.Sleep(transferInterval);
}
}
public static void StopRunning()
{
stopRunning = true;
Transfer();
}
private static void Transfer()
{
lock(transferlock)
{
var tags = GetPendingTags();
var hitCounts = from tag in tags
group tag by tag
into g
select new KeyValuePair<string, int>(g.Key, g.Count());
WriteHits(hitCounts);
}
}
private static void WriteHits(IEnumerable<KeyValuePair<string, int>> hitCounts)
{
// NOTE: I don't usually use sql commands directly and have not tested the below
// The idea is that the update should be atomic so even though you have multiple
// web servers all issuing similar update commands, potentially at the same time,
// they should all commit. I do urge you to test this part as I cannot promise this code
// will work as-is
//using (SqlConnection con = new SqlConnection("xyz"))
//{
// foreach (var hitCount in hitCounts.OrderBy(h => h.Key))
// {
// var cmd = con.CreateCommand();
// cmd.CommandText = "update hits set count = count + #count where tag = #tag";
// cmd.Parameters.AddWithValue("#count", hitCount.Value);
// cmd.Parameters.AddWithValue("#tag", hitCount.Key);
// cmd.ExecuteNonQuery();
// }
//}
Console.WriteLine("Writing....");
foreach (var hitCount in hitCounts.OrderBy(h => h.Key))
{
Console.WriteLine(String.Format("{0}\t{1}", hitCount.Key, hitCount.Value));
}
}
private static IEnumerable<string> GetPendingTags()
{
List<string> hitlist = new List<string>();
var currentCount = hits.Count();
for (int i = 0; i < currentCount; i++)
{
string tag = null;
if (hits.TryDequeue(out tag))
{
hitlist.Add(tag);
}
}
return hitlist;
}
}