MX record for subdomain [closed] - cpanel

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 years ago.
My domain mybasiccrm.com is hosted on hostgator.com
The subdomain tr1.mybasiccrm.com is hosted on tr8.mybasiccrm.com
I have created an MX record on the server tr8 for the domain tr1.mybasiccrm.com but when I check this http://mxtoolbox.com/SuperTool.aspx?action=mx%3atr1.mybasiccrm.com&run=toolpage
it says that "No Records Exist"
How can I set up a proper MX record for tr1.mybasiccrm.com?
PS: I can send an email from my Gmail account to the address email@tr1.mybasiccrm.com without a problem.
Thank you all!

As tr1 is a subdomain of mybasiccrm.com, make sure that you add the MX record for your subdomain “tr1.mybasiccrm.com” in mybasiccrm.com's DNS zone, as long as you do not have a separate DNS zone for the subdomain.
→ Collect the exact MX value you need to set for tr1.mybasiccrm.com.
→ Open the DNS zone of mybasiccrm.com and add an MX record:
Record type: MX
Record name: tr1
Priority: 0
Value: MX value of tr1.mybasiccrm.com
Once that is done and the change has propagated, verify it with any online MX lookup tool.
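For illustration, the resulting entry in mybasiccrm.com's zone might look like the following BIND-style record (the mail-server hostname mail.tr1.mybasiccrm.com is a placeholder; use whatever host actually accepts mail for the subdomain):

```
; In the mybasiccrm.com zone; "tr1" expands to tr1.mybasiccrm.com.
tr1    IN  MX  0  mail.tr1.mybasiccrm.com.
```

Note that the MX target must be a hostname with its own A record, not an IP address.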


REST URL convention [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I have an application table that has columns like quoteId, accountNumber, and a few others. I have created a REST endpoint to update the account number on the basis of quoteId, i.e., update the account number in the application that has quoteId = {quoteId}. Here is the endpoint:
PUT /applications/quotes/{quoteId}/accountNumber
Is it the correct REST convention?
Is it the correct REST convention?
Maybe.
If your PUT/PATCH/POST request uses the same URI as your GET request, then you are probably on safe ground. If PUT/PATCH/POST use a different URI, then something has gone wrong somewhere.
In other words, if /applications/quotes/{quoteId}/accountNumber is a resource that you link to, then it is the right idea that you send unsafe requests to that URI.
But if accountNumber is information normally retrieved via /applications/quotes/{quoteId}, then /applications/quotes/{quoteId} should be the target resource for edits (instead of creating a new resource used for editing only).
The reason for this is cache-invalidation, as explained in RFC 7234.
If this isn't immediately clear to you, then I suggest reviewing Jim Webber's 2011 talk on REST.
You should use PATCH instead of PUT for partial object updates.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PATCH
https://www.infoworld.com/article/3206264/how-to-perform-partial-updates-to-rest-web-api-resources.html
In my opinion, your URL should be:
PUT /applications/quotes/{quoteId}
payload: {
accountNumber: <number>,
... any other field...
}
Because you only want to update a part of an object of the list of quotes that you identify uniquely with quoteId.
About the use of PUT or PATCH: it's true that PUT means you want the resource replaced with the updated copy you are sending (in that case you must send the entire object), but the fact is (I think) that many of us use PUT like you do, for partial updates of an object.
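To make the two options concrete, a partial update sent to the quote resource itself might look like this (the quoteId value, the media type, and the JSON body are illustrative, not from the original question):

```
PATCH /applications/quotes/12345 HTTP/1.1
Content-Type: application/merge-patch+json

{ "accountNumber": "ACC-9921" }
```

A PUT to the same URI would instead carry the full quote representation, with every field present.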

How to move older clickhouse partitions to S3 disk [closed]

Closed 1 year ago.
I'm currently starting to work with ClickHouse for our in-house analytics system, but it looks like there is no automated way to configure policies for data retention. The only thing I saw was ALTER ... MOVE PARTITION (https://clickhouse.tech/docs/en/sql-reference/statements/alter/partition/#alter_move-partition), but it looks like the process has to be manual / implemented in our application layer.
My objective is to move data older than 3 months directly to an S3 cluster for archival and price reasons, while still being able to query it.
Is there any native way to do so directly in clickhouse with storage policies?
Thanks in advance.
This answer is based on @Denny Crane's comment pointing to https://altinity.com/blog/clickhouse-and-s3-compatible-object-storage; I have added comments where the explanations were thin, and reproduced the steps here in case the link dies.
Add your S3 disk to a new configuration file (say, /etc/clickhouse-server/config.d/storage.xml):
<yandex>
<storage_configuration>
<disks>
<!-- This tag is the name of your S3-emulated disk, used for the rest of this tutorial -->
<your_s3>
<type>s3</type>
<!-- Set this to the endpoint of your S3-compatible provider -->
<endpoint>https://nyc3.digitaloceanspaces.com</endpoint>
<!-- Set this to your access key ID provided by your provider -->
<access_key_id>*****</access_key_id>
<!-- Set this to your access key Secret provided by your provider -->
<secret_access_key>*****</secret_access_key>
</your_s3>
</disks>
<!-- Don't leave this file yet! We still have things to do there -->
...
</storage_configuration>
</yandex>
Add a storage policy for your data storage:
<!-- Put this after the three dots in the snippet above -->
<policies>
<shared>
<volumes>
<default>
<!-- default is the disk that is present in the default configuration -->
<disk>default</disk>
</default>
<your_s3>
<disk>your_s3</disk>
</your_s3>
</volumes>
</shared>
</policies>
Once that is done, you can create your tables with the following CREATE statement:
CREATE TABLE visits (...)
ENGINE = MergeTree
TTL toStartOfYear(time) + interval 3 year to volume 'your_s3'
SETTINGS storage_policy = 'shared';
Where shared is your policy name, and your_s3 is the name of your disk in that policy.
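The TTL clause only governs how new parts are placed over time. For partitions that already existed before the policy was added, you may still need the manual statements from the question; a sketch, assuming the visits table above (the partition ID '2020-01' is a placeholder that depends on your PARTITION BY expression):

```sql
-- Move one existing partition to the S3-backed volume by hand
ALTER TABLE visits MOVE PARTITION '2020-01' TO VOLUME 'your_s3';

-- Or ask ClickHouse to re-evaluate the table's TTL against existing parts
ALTER TABLE visits MATERIALIZE TTL;
```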

Set up authoritative DNS server [closed]

Closed 6 years ago.
I am trying to set up a hosting company. The hosting company is going to have a client with the domain widgets.de
The name of my company is hostingcompany.de. The name servers I am setting up are called ns1.hostingcompany.de and ns2.hostingcompany.de
In the zone file for widgets.de, I have
NS ns1.hostingcompany.de.
NS ns2.hostingcompany.de.
In the zone file for hostingcompany.de, I have
hostingcompany.de 300 IN NS ns-110.awsdns-13.com
hostingcompany.de 300 IN NS ns-1130.awsdns-15.com
ns1.hostingcompany.de. 300 IN A 34.65.125.52
ns2.hostingcompany.de. 300 IN A 52.43.124.76
Also, I created two more hosted zones per Amazon's guidance
124.43.52.in-addr.arpa.
NS ns-2035.awsdns-62.co.uk.
SOA ns-2035.awsdns-62.co.uk. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
34 PTR ns1.hostingcompany.de
and
76.124.43.in-addr.arpa.
NS ns-799.awsdns-35.net.
SOA ns-2435.awsdns-62.co.uk. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
52 PTR ns2.hostingcompany.de
However, this is not working. When I try to submit these zone files, RIPE rejects them saying that ns1.hostingcompany.de and ns2.hostingcompany.de do not exist as objects. I think I have to do something with PTR records, but I don't know what.
PTR records are usually necessary if you are running a DNS or SMTP server to provide some proof that you are legitimate. I found this article to be quite illuminating.
I think the answer to this question is found towards the bottom of the link in the question. You have to fill out a form and AWS will create the PTR record for you. Creating a hosted zone in Route 53 for the pointer record does not appear to have any effect. Nothing in the RFC prohibits the owner of the public IP address from allowing a customer to create a PTR record for that public IP address. Although AWS could allow customers to create PTR records for their Elastic IP addresses, they do not.
There are a lot of articles discussing how you need to create your own hosted zones for the PTR records, such as but not limited to Amazon's own article the question linked to. You can definitely do this for private IP addresses if you are running a DNS server for a private network. However, if you are running a publicly available DNS or SMTP server on a public IP address, more vetting is required.
In order to verify that the records are set up correctly, you have to get an answer to:
dig -x 34.65.125.52 (must answer ns1.hostingcompany.de)
Unless you do this, the TLD registrar will not accept your nameserver, and your SMTP mail will probably be rejected as spam.
In addition to the above, another problem was that these lines should also be included in the zone file for hostingcompany.de
hostingcompany.de. 300 IN NS ns1.hostingcompany.de.
hostingcompany.de. 300 IN NS ns2.hostingcompany.de.
It is still unclear to me why the top-level domain requires that the domain's own nameservers are listed as nameservers for its own domain, but this does appear to be a requirement for some top-level domains. After correcting the above problems, everything works.
I spent a long time trying to track down the above problems, and it did not seem to be documented anywhere, so I hope this helps someone. I also found this RFC to be quite interesting and informative. It is always good to read stuff written by the authorities.

Sitemap re-submission for dynamic website [closed]

This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I am very new to SEO and have been reading about it all day. I have finally created my dream carpooling site and am working on the SEO aspect now. I want to do it on my own so I get to know SEO better. It is a dynamic website; users post their trip details every day, like the examples below.
site : www.shareurride.com.au
sitemap: www.shareurride.com.au/sitemap.xml
Trip detail page :
http://www.shareurride.com.au/ridedetails.php?id=MjY3&tripdate=MjAxMy0wNy0wNQ,,
http://www.shareurride.com.au/ridedetails.php?id=MTY2&tripdate=MjAxMy0wNy0wNQ,,
(trips like these will be added regularly every day)
I already have a program which dynamically inserts these into my sitemap.
My main question is: do I need to resubmit my sitemap to Google every day, or will it pick up changes on its own?
The trip detail page is the only kind of page that will be dynamically added to the sitemap. If I need to resubmit regularly, are there any tools to do that?
Thanks
Do I need to resubmit my site to Google every day or will it do it on its own?
No. Once Google knows where to find your sitemap they will continue to recrawl it periodically.
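That said, if you want to nudge Google after a large batch of new trips anyway, you can hit its sitemap ping endpoint from a scheduled job; a sketch as a crontab entry (optional, and the schedule is arbitrary):

```
# Crontab entry: ping Google's sitemap endpoint once a day at 03:00
0 3 * * * curl -s "http://www.google.com/ping?sitemap=http://www.shareurride.com.au/sitemap.xml"
```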

Testing IP based geolocation [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
We are implementing an IP-based geolocation service, and we need to find some IPs from various markets (LA, NY, etc.) to fully test the service.
Does anybody know of a directory where we could find what IP ranges are used where?
EDIT: We have already implemented the system; it uses a 3rd-party DB and a webservice. We just want some IPs from known markets to verify it's working properly.
I'm going to see if I can get what I need from the free maxmind database.
Not sure if cost is a factor but there are a few open source databases knocking about. This one claims 99.3% accuracy on its free version with 99.8% for its paid version. They've also got a Free & Open Source City Database (76% accuracy at city level).
They're both available as CSV-based databases so you can easily take a known location and get an IP range for ISPs in the area.
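As a sketch of that workflow, here is how you might pull test IPs for a known city out of such a CSV; the column layout below is a simplified stand-in for the real GeoIP CSV schema, and the sample networks are documentation/example ranges, not real geolocated data:

```python
import csv
import io
import ipaddress

# Hypothetical rows: real GeoIP City CSVs have more columns
# (network, geoname_id, latitude, ...); this is a simplified stand-in.
SAMPLE_CSV = """network,city
8.8.8.0/24,Mountain View
203.0.113.0/24,Sydney
198.51.100.0/24,New York
"""

def ranges_for_city(csv_text, city):
    """Return all IP networks the database lists for a given city."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["network"] for row in reader if row["city"] == city]

def sample_ip(network):
    """Pick one host address from a network to feed to the service under test."""
    return str(next(ipaddress.ip_network(network).hosts()))

print(ranges_for_city(SAMPLE_CSV, "New York"))  # ['198.51.100.0/24']
print(sample_ip("198.51.100.0/24"))             # 198.51.100.1
```

Swap SAMPLE_CSV for the downloaded database file and you have a quick way to generate per-market test inputs.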
The tougher part is getting access to a computer in that IP range.
Try looking for sites providing lists of anonymizers. They usually list the countries for the anonymizer sites. Then either use the IP provided or do a lookup on the anonymizer name.
Also try searching for lists of anonymous proxies.
We trawled the logs for our huge web site and built up a test collection.
Sorry I can't pass it on. )-:
Maybe this database will be useful for you:
http://www.hostip.info/dl/index.html
It's a collection of IP addresses with countries and cities.
Many open source projects have worldwide mirrors; you can find a country-indexed list of Debian mirrors and kernel.org mirrors. (Note that kernel.org specifically has many mirrors per country; there are eleven United States mirrors, which are located in different regions of the country and would give different information.)
You could try using an automation tool, such as AutoIt, to fire off a series of IP addresses at a whois database service such as ARIN or RIPE, and harvest the addressed responses, probably just varying the first two octets of the IP.
Use Tor with a strict exit node.
You'll need to use these options in your config:
ExitNodes server1, server2, server3
StrictExitNodes 1
You'll also need to identify exit nodes that work for you in the region that you want. I suggest using the Search Whois feature at ARIN to see its location if the Tor country icon isn't good enough. It can be a bit of a pain to identify working Tor nodes in each region you wish to test, but it's possible and free.