Facebook Developer - Cannot Create Developer Account [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 4 years ago.
When I try visiting the following page and clicking the "Create Developer Account" button in Step 2, a modal loads and then disappears immediately.
Page: https://developers.facebook.com/docs/apps/register
Console Errors: [Error] Failed to load resource: the server responded with a status of 404 (HTTP/2.0 404) (dialog, line 0)

Go to https://developers.facebook.com/docs/facebook-login/review after you log in. It should show a "Get Started" button; click it, and it will let you create a developer account that way.

Related

I am trying to implement Permissions in ASP.NET Boilerplate Web API [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 2 years ago.
I am trying to implement an AppService called Plug. I want to use user roles and permissions, but I can't seem to get it working; I applied the steps above and I am not getting anywhere.
using System;
using Abp.Application.Services;
using Abp.Authorization;
using Abp.Domain.Repositories;
// plus the namespaces in your project that define PlugEntity, PlugDto,
// IPlugAppService and PermissionNames

namespace Sprint.Plug
{
    [AbpAuthorize(PermissionNames.Pages_PlugEntity)] // permission check
    public class PlugAppService : AsyncCrudAppService<PlugEntity, PlugDto, Guid>, IPlugAppService
    {
        public PlugAppService(IRepository<PlugEntity, Guid> repository) : base(repository)
        {
        }
    }
}
Have you applied the permission to the database for your user/role in the AbpPermissions table?
If you are already authenticated and you call that API, the [AbpAuthorize(PermissionNames.Pages_PlugEntity)] attribute will check whether that permission is granted to you in the AbpPermissions table (the IsGranted column).
For example, check that AbpPermissions contains a row for your user or role with Name = "Pages_PlugEntity" and IsGranted set to true.

I have disallowed everything for 10 days [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 3 years ago.
Due to an update error, I put into production a robots.txt file that was intended for a test server. As a result, production ended up with this robots.txt:
User-Agent: *
Disallow: /
That was 10 days ago, and I now have more than 7000 URLs flagged with the error "Submitted URL blocked by robots.txt" or the warning "Indexed, though blocked by robots.txt".
Yesterday, of course, I corrected the robots.txt file.
What can I do to speed up the correction by Google or any other search engine?
You could use the robots.txt testing tool: https://www.google.com/webmasters/tools/robots-testing-tool
Once the robots.txt test has passed, click the "Submit" button and a popup window should appear; then click the "Submit" button again under option #3:
Ask Google to update
Submit a request to let Google know your robots.txt file has been updated.
Other than that, I think you'll have to wait for Googlebot to crawl the site again.
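If you want to double-check the corrected file yourself in the meantime, the Python standard library ships a robots.txt parser. A minimal sketch, assuming placeholder URLs for your site:

# Minimal local check of the corrected robots.txt, using only the Python
# standard library. The domain and sample path below are placeholders.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")  # your production site
parser.read()  # downloads and parses the live robots.txt

# Should print True once the blanket "Disallow: /" rule is gone.
print(parser.can_fetch("Googlebot", "https://www.example.com/some-page"))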
Best of luck :).

Is there any way to set JSESSIONID while doing scraping using scrapy [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I am writing a Scrapy spider for this website:
https://www.garageclothing.com/ca/
This website uses a JSESSIONID. I want to obtain it in my code (spider). Can anybody guide me on how I can get the JSESSIONID in my code?
Currently I just copy and paste the JSESSIONID from the browser's developer tools after visiting the website in a browser.
This site uses JavaScript to set the JSESSIONID. But if you disable JavaScript and try to load the page, you'll see that it requests the following URL:
https://www.dynamiteclothing.com/?postSessionRedirect=https%3A//www.garageclothing.com/ca&noRedirectJavaScript=true (1)
which redirects you to this URL:
https://www.garageclothing.com/ca;jsessionid=YOUR_SESSION_ID (2)
So you can do the following (see the sketch below):
start requests with URL (1)
in the callback, extract the session ID from URL (2) (which will be stored in response.url)
make the requests you want with the extracted session ID in the cookies
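A minimal sketch of that flow, relying on Scrapy's default redirect handling; the spider name and the follow-up request path are made-up placeholders, while the two URLs are the ones quoted above:

import scrapy


class GarageSessionSpider(scrapy.Spider):
    # Hypothetical spider name; only the two URLs below come from the answer.
    name = "garage_session"

    # URL (1): with JavaScript disabled, this is the request that triggers
    # the redirect carrying the session id.
    start_urls = [
        "https://www.dynamiteclothing.com/?postSessionRedirect=https%3A//www.garageclothing.com/ca&noRedirectJavaScript=true"
    ]

    def parse(self, response):
        # URL (2): after the redirect, response.url looks like
        # https://www.garageclothing.com/ca;jsessionid=YOUR_SESSION_ID
        session_id = response.url.split("jsessionid=")[-1]
        self.logger.info("Extracted JSESSIONID: %s", session_id)

        # Reuse the extracted session id as a cookie on the requests you
        # actually care about (the target path here is a placeholder).
        yield scrapy.Request(
            "https://www.garageclothing.com/ca/",
            cookies={"JSESSIONID": session_id},
            callback=self.parse_page,
        )

    def parse_page(self, response):
        # Placeholder callback: parse the page you are really after here.
        self.logger.info("Fetched %s with the session cookie set", response.url)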

Refresh Google Search Results for My Site [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 4 years ago.
I set up a site with a template, and the title was something they supplied as a default. When I searched for my site, it showed up in the results, but with their default title. After changing it a couple of days ago, my site still shows up with the default title instead of what I changed it to.
Is there any way I can force Google to update their information so the title I have now shows up in results instead of the default title?
This will refresh your website immediately:
From the Webmaster Tools menu -> Crawl -> Fetch as Google
Leave the URL blank to fetch the homepage, then click Fetch.
A "Submit to Index" button will appear beside the fetched result; click it, then choose "URL and all linked pages", then OK.
Just wait; Google should normally revisit your site and update its information. But if you are in a hurry, you can try the following steps:
Increase the crawl speed of your site in Google Webmaster Tools: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=48620
Ping your website on a service like http://pingomatic.com/
Submit, if you have not yet, or resubmit an updated sitemap of your website (see the sketch below).
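For the sitemap step, a minimal sketch using only the Python standard library; the sitemap URL is a placeholder, and the ping endpoint is the one Google has historically documented for sitemap submission:

# Minimal sketch: notify Google of an updated sitemap by hitting its
# historically documented ping endpoint. The sitemap URL is a placeholder.
from urllib.parse import quote
from urllib.request import urlopen

sitemap_url = "https://www.example.com/sitemap.xml"
ping_url = "https://www.google.com/ping?sitemap=" + quote(sitemap_url, safe="")

with urlopen(ping_url) as response:
    print(response.status)  # 200 means the ping was received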
Fetching as Google works, as already suggested. However, stage 2 should be to submit your site to several large social bookmarking sites like Digg, Reddit, StumbleUpon, etc. There are huge lists of these sites out there.
Google notices everything on these sites, and it will speed up the re-crawling process. You can keep track of when Google last cached your site (there is a big difference between crawling and caching) by searching for cache:sitename.com.

Would 401 Error be a good choice? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
One of my sites has a lot of restricted pages which are only available to logged-in users; for everyone else it outputs a default "you have to be logged in ..." view.
The problem is that a lot of these pages are listed on Google with the not-logged-in view, and it looks pretty bad when 80% of the pages in the list have the same title and description/preview.
Would it be a good choice to send a 401 Unauthorized header along with my default not-logged-in view? And would this stop Google (and other engines) from indexing these pages?
Thanks!
(and if you have another (better?) solution I would love to hear about it!)
Use a robots.txt file to tell search engines not to index the not-logged-in pages.
http://www.robotstxt.org/
Ex.
User-agent: *
Disallow: /error/notloggedin.html
401 Unauthorized is the response code for requests that require user authentication, so this is exactly the response code you want and have to send. See the Status Code Definitions.
EDIT: Your previous suggestion, response code 403, is for requests where authentication makes no difference, e.g. disabled directory browsing.
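To illustrate, a minimal sketch of serving the same not-logged-in view with a 401 status; Flask and the session-based check are illustrative assumptions, not a statement about your actual stack:

# Minimal sketch: return the usual "you have to be logged in" view, but with
# a 401 status for anonymous visitors instead of a normal 200, so search
# engines see an error response. Flask and the session check are assumptions.
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "replace-me"  # required for Flask sessions


@app.route("/members/profile")  # hypothetical restricted page
def restricted_page():
    if not session.get("user_id"):
        # Same body as before, different status code.
        return "You have to be logged in to view this page.", 401
    return "Members-only content."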
Here are the status codes Googlebot understands and recommends:
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40132
In your case, an HTTP 403 would be the right one.