Refresh Power BI dataset connected to the Scopus database via API
I prepared a scientometric dashboard in Power BI that is connected directly to the Scopus database through its API (authenticated with an API key). In Power BI Desktop it refreshes without any error, but after publishing the dashboard to the Power BI service it cannot be refreshed and returns a credential error:
Last refresh failed: Wed Nov 27 2019 12:32:39 GMT+0330 (Iran Standard Time)
There was an error when processing the data in the dataset.
Message: The credentials provided for the Web source are invalid. (Source at https://api.elsevier.com/content/search/scopus.)
Table: API-Scopus-All.
Cluster URI: WABI-EAST-ASIA-A-PRIMARY-redirect.analysis.windows.net
Activity ID: 7edc8fb9-5513-465d-a35b-70cc5629d0d0
Request ID: 2edb255e-20fe-d1db-6b7d-2cf1b6681fc5
Time: 2019-11-27 09:02:39Z
The following code is my query in Power BI. On the desktop, my credential type is "Basic", with "User name" set to my API key. I have only removed the key itself from the code; anyone who wants to reproduce the results should replace APIKEY with their own Scopus API key, and access to the Scopus database is required. I would appreciate any help with solving this credential issue. Thanks.
let
Source = 1000, // total number of results to page through; ideally read from the API's opensearch:totalResults field (see the sketch after the question)
Starts = List.Generate(()=>0, each _ < Source, each _ + 25), // page offsets 0, 25, 50, ...; the complete view returns at most 25 results per request
#"Converted to Table" = Table.FromList(Starts, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Changed Type" = Table.TransformColumnTypes(#"Converted to Table",{{"Column1", type text}}),
#"Added Custom" = Table.AddColumn(#"Changed Type", "Custom", each Json.Document(Web.Contents(
"https://api.elsevier.com/",
[
RelativePath="content/search/scopus/",
Query=
[
view="complete",
count="25",
query="AFFIL ( {Environmental Research Center} OR {Institute for Environmental Research} ) AND AFFIL ( {Tehran University of Medical Sciences} OR {Tehran University of Medical Science} ) AND AFFIL ( {Netherlands})",
apiKey="APIKEY",
limit="40",
start=""&[Column1]
]
]
))),
#"Expanded Custom" = Table.ExpandRecordColumn(#"Added Custom", "Custom", {"search-results"}, {"Custom.search-results"}),
#"Expanded Custom.search-results" = Table.ExpandRecordColumn(#"Expanded Custom", "Custom.search-results", {"opensearch:totalResults", "opensearch:startIndex", "opensearch:itemsPerPage", "opensearch:Query", "link", "entry"}, {"Custom.search-results.opensearch:totalResults", "Custom.search-results.opensearch:startIndex", "Custom.search-results.opensearch:itemsPerPage", "Custom.search-results.opensearch:Query", "Custom.search-results.link", "Custom.search-results.entry"}),
#"Expanded Custom.search-results.opensearch:Query" = Table.ExpandRecordColumn(#"Expanded Custom.search-results", "Custom.search-results.opensearch:Query", {"#role", "#searchTerms", "#startPage"}, {"Custom.search-results.opensearch:Query.#role", "Custom.search-results.opensearch:Query.#searchTerms", "Custom.search-results.opensearch:Query.#startPage"}),
#"Expanded Custom.search-results.link" = Table.ExpandListColumn(#"Expanded Custom.search-results.opensearch:Query", "Custom.search-results.link"),
#"Expanded Custom.search-results.link1" = Table.ExpandRecordColumn(#"Expanded Custom.search-results.link", "Custom.search-results.link", {"#_fa", "#ref", "#href", "#type"}, {"Custom.search-results.link.#_fa", "Custom.search-results.link.#ref", "Custom.search-results.link.#href", "Custom.search-results.link.#type"}),
#"Expanded Custom.search-results.entry" = Table.ExpandListColumn(#"Expanded Custom.search-results.link1", "Custom.search-results.entry"),
#"Expanded Custom.search-results.entry1" = Table.ExpandRecordColumn(#"Expanded Custom.search-results.entry", "Custom.search-results.entry", {"#_fa", "link", "prism:url", "dc:identifier", "eid", "dc:title", "dc:creator", "prism:publicationName", "prism:issn", "prism:eIssn", "prism:volume", "prism:pageRange", "prism:coverDate", "prism:coverDisplayDate", "prism:doi", "pii", "dc:description", "citedby-count", "affiliation", "prism:aggregationType", "subtype", "subtypeDescription", "author-count", "author", "authkeywords", "article-number", "source-id", "fund-acr", "fund-no", "fund-sponsor", "openaccess", "openaccessFlag"}, {"Custom.search-results.entry.#_fa", "Custom.search-results.entry.link", "Custom.search-results.entry.prism:url", "Custom.search-results.entry.dc:identifier", "Custom.search-results.entry.eid", "Custom.search-results.entry.dc:title", "Custom.search-results.entry.dc:creator", "Custom.search-results.entry.prism:publicationName", "Custom.search-results.entry.prism:issn", "Custom.search-results.entry.prism:eIssn", "Custom.search-results.entry.prism:volume", "Custom.search-results.entry.prism:pageRange", "Custom.search-results.entry.prism:coverDate", "Custom.search-results.entry.prism:coverDisplayDate", "Custom.search-results.entry.prism:doi", "Custom.search-results.entry.pii", "Custom.search-results.entry.dc:description", "Custom.search-results.entry.citedby-count", "Custom.search-results.entry.affiliation", "Custom.search-results.entry.prism:aggregationType", "Custom.search-results.entry.subtype", "Custom.search-results.entry.subtypeDescription", "Custom.search-results.entry.author-count", "Custom.search-results.entry.author", "Custom.search-results.entry.authkeywords", "Custom.search-results.entry.article-number", "Custom.search-results.entry.source-id", "Custom.search-results.entry.fund-acr", "Custom.search-results.entry.fund-no", "Custom.search-results.entry.fund-sponsor", "Custom.search-results.entry.openaccess", "Custom.search-results.entry.openaccessFlag"}),
#"Expanded Custom.search-results.entry.link" = Table.ExpandListColumn(#"Expanded Custom.search-results.entry1", "Custom.search-results.entry.link"),
#"Expanded Custom.search-results.entry.link1" = Table.ExpandRecordColumn(#"Expanded Custom.search-results.entry.link", "Custom.search-results.entry.link", {"#_fa", "#ref", "#href"}, {"Custom.search-results.entry.link.#_fa", "Custom.search-results.entry.link.#ref", "Custom.search-results.entry.link.#href"}),
#"Expanded Custom.search-results.entry.affiliation" = Table.ExpandListColumn(#"Expanded Custom.search-results.entry.link1", "Custom.search-results.entry.affiliation"),
#"Expanded Custom.search-results.entry.affiliation1" = Table.ExpandRecordColumn(#"Expanded Custom.search-results.entry.affiliation", "Custom.search-results.entry.affiliation", {"#_fa", "affiliation-url", "afid", "affilname", "affiliation-city", "affiliation-country"}, {"Custom.search-results.entry.affiliation.#_fa", "Custom.search-results.entry.affiliation.affiliation-url", "Custom.search-results.entry.affiliation.afid", "Custom.search-results.entry.affiliation.affilname", "Custom.search-results.entry.affiliation.affiliation-city", "Custom.search-results.entry.affiliation.affiliation-country"}),
#"Expanded Custom.search-results.entry.author-count" = Table.ExpandRecordColumn(#"Expanded Custom.search-results.entry.affiliation1", "Custom.search-results.entry.author-count", {"#limit", "#total", "$"}, {"Custom.search-results.entry.author-count.#limit", "Custom.search-results.entry.author-count.#total", "Custom.search-results.entry.author-count.$"}),
#"Expanded Custom.search-results.entry.author" = Table.ExpandListColumn(#"Expanded Custom.search-results.entry.author-count", "Custom.search-results.entry.author"),
#"Expanded Custom.search-results.entry.author1" = Table.ExpandRecordColumn(#"Expanded Custom.search-results.entry.author", "Custom.search-results.entry.author", {"#_fa", "#seq", "author-url", "authid", "authname", "surname", "given-name", "initials", "afid"}, {"Custom.search-results.entry.author.#_fa", "Custom.search-results.entry.author.#seq", "Custom.search-results.entry.author.author-url", "Custom.search-results.entry.author.authid", "Custom.search-results.entry.author.authname", "Custom.search-results.entry.author.surname", "Custom.search-results.entry.author.given-name", "Custom.search-results.entry.author.initials", "Custom.search-results.entry.author.afid"}),
#"Expanded Custom.search-results.entry.author.afid" = Table.ExpandListColumn(#"Expanded Custom.search-results.entry.author1", "Custom.search-results.entry.author.afid"),
#"Expanded Custom.search-results.entry.author.afid1" = Table.ExpandRecordColumn(#"Expanded Custom.search-results.entry.author.afid", "Custom.search-results.entry.author.afid", {"#_fa", "$"}, {"Custom.search-results.entry.author.afid.#_fa", "Custom.search-results.entry.author.afid.$"}),
#"Removed Columns" = Table.RemoveColumns(#"Expanded Custom.search-results.entry.author.afid1",{"Column1", "Custom.search-results.opensearch:startIndex", "Custom.search-results.opensearch:itemsPerPage", "Custom.search-results.opensearch:Query.#role", "Custom.search-results.opensearch:Query.#searchTerms", "Custom.search-results.opensearch:Query.#startPage", "Custom.search-results.link.#_fa", "Custom.search-results.link.#type", "Custom.search-results.entry.#_fa", "Custom.search-results.entry.link.#_fa", "Custom.search-results.entry.link.#ref", "Custom.search-results.entry.link.#href", "Custom.search-results.entry.prism:issn", "Custom.search-results.entry.prism:eIssn", "Custom.search-results.entry.prism:volume", "Custom.search-results.entry.prism:pageRange", "Custom.search-results.entry.dc:description", "Custom.search-results.entry.affiliation.#_fa", "Custom.search-results.entry.author-count.#limit", "Custom.search-results.entry.author.#_fa", "Custom.search-results.entry.author.afid.#_fa", "Custom.search-results.entry.article-number", "Custom.search-results.entry.source-id", "Custom.search-results.link.#href"}),
#"Changed Type1" = Table.TransformColumnTypes(#"Removed Columns",{{"Custom.search-results.entry.citedby-count", Int64.Type}}),
#"Renamed Columns" = Table.RenameColumns(#"Changed Type1",{{"Custom.search-results.entry.prism:doi", "DOI"}}),
#"Added Custom1" = Table.AddColumn(#"Renamed Columns", "URL", each "https://doi.org/"&[DOI]),
#"Duplicated Column" = Table.DuplicateColumn(#"Added Custom1", "Custom.search-results.entry.prism:coverDate", "Custom.search-results.entry.prism:coverDate - Copy"),
#"Renamed Columns1" = Table.RenameColumns(#"Duplicated Column",{{"Custom.search-results.entry.prism:coverDate - Copy", "Date"}}),
#"Changed Type2" = Table.TransformColumnTypes(#"Renamed Columns1",{{"Date", type date}}),
#"Renamed Columns2" = Table.RenameColumns(#"Changed Type2",{{"Custom.search-results.entry.prism:coverDate", "Cover date"}}),
#"Changed Type3" = Table.TransformColumnTypes(#"Renamed Columns2",{{"Cover date", type date}})
in
#"Changed Type3"
The problem is not related to the client IP address, because if I build my query in Scopus
(https://dev.elsevier.com/search.html#!/Scopus_Search/ScopusSearch) to produce a URL
(https://api.elsevier.com/content/search/scopus?query=AFFIL%20(%20%7BEnvironmental%20Research%20Center%7D%20%20OR%20%20%7BInstitute%20for%20Environmental%20Research%7D%20)%20%20AND%20%20AFFIL%20(%20%7BTehran%20University%20of%20Medical%20Sciences%7D%20%20OR%20%20%7BTehran%20University%20of%20Medical%20Science%7D%20)%20AND%20%20AFFIL%20(%20%7BNetherlands%7D)&apiKey="MY-API-KEY")
and import it as a web source into Power BI, it works well and can be refreshed without problems after publishing to the Power BI service. But with this simple query only one page of Scopus search results is returned. I need all of them, which is why I changed the code to the version above; that version, however, cannot be refreshed after publishing!
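As an aside, the hard-coded Source = 1000 in the query above could in principle be read from the API itself instead of being guessed. Here is a minimal, untested sketch of that idea; the shortened query string is only a placeholder, and APIKEY must again be replaced with a real key:

let
    // Request a single result just to read the total hit count
    FirstPage = Json.Document(Web.Contents(
        "https://api.elsevier.com/",
        [
            RelativePath = "content/search/scopus/",
            Query = [count = "1", query = "AFFIL ( {Netherlands} )", apiKey = "APIKEY"]
        ]
    )),
    // opensearch:totalResults is returned as text, so convert it to a number
    Total = Number.FromText(FirstPage[#"search-results"][#"opensearch:totalResults"])
in
    Total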
The issue is probably related to the fact that by default, the Scopus API uses the client's IP address to check if it has a subscription to Scopus. When you run the dashboard on your Power BI desktop client, requests to the Scopus API are sent from your client's IP address, and if your client IP address is set up for access to Scopus, you will get full access to Scopus data through the API as well. But when you run your dashboard on the Power BI website, requests to the Scopus API are probably sent from the Power BI server's IP address, which may not be set up for access to Scopus. Depending on your use case, you may be able to request the use of an authentication token by contacting Scopus API support.
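If Elsevier does grant a token, note that Web.Contents can also send credentials as request headers instead of query parameters. Below is a hedged sketch using Elsevier's documented X-ELS-APIKey and X-ELS-Insttoken header names; whether a token actually resolves the service-side IP check depends on your agreement with Elsevier, INSTTOKEN is a placeholder, and the shortened query string is again only illustrative:

// one page of results, with the key and institutional token sent as headers
Json.Document(Web.Contents(
    "https://api.elsevier.com/",
    [
        RelativePath = "content/search/scopus/",
        Headers = [#"X-ELS-APIKey" = "APIKEY", #"X-ELS-Insttoken" = "INSTTOKEN"],
        Query = [view = "complete", count = "25", start = "0", query = "AFFIL ( {Netherlands} )"]
    ]
))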
Related
Scrape table from JSP website using Python
I would like to scrape the table that appears when you go to this website: https://www.eprocure.gov.bd/resources/common/SearcheCMS.jsp I used the following code, based on the example shown here.

import time

import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument('--headless')
driver = webdriver.Firefox(options=options, executable_path="C:/Users/DefaultUser/AppData/geckodriver.exe")
driver.get("https://www.eprocure.gov.bd/resources/common/SearcheCMS.jsp")
time.sleep(5)
res = driver.execute_script("return document.documentElement.outerHTML")
driver.quit()

soup = BeautifulSoup(res, 'html.parser')
table_rows = soup.find_all('table')[1].find_all('tr')
rows = []
for tr in table_rows:
    td = tr.find_all('td')
    rows.append([i.text for i in td])
delaydata = rows[3:]

df = pd.DataFrame(delaydata, columns=['S. No.', 'Ministry, Division, Organization PE', 'Procurement Nature, Type & Method', 'Tender/Proposal ID, Ref No., Title & Publishing Date', 'Contract Awarded To', 'Company Unique ID', 'Experience Certificate No', 'Contract Amount', 'Contract Start & End Date', 'Work Status'])
df
Finding the URL

Well, actually, there's no need to use Selenium. The data is available by sending a POST request to: https://www.eprocure.gov.bd/AdvSearcheCMSServlet

How did I find this URL? If you inspect your browser's network calls (press F12), you'll see the request. Take note of the "Payload" tab: it is used as the data in the example below.

Great, but how do I get the data, including paginating the page?

To get the data, including pagination, you can use this example, where we get the HTML table and increase pageNo for pagination (this is for the "eTenders" table/tab):

import requests
import pandas as pd
from bs4 import BeautifulSoup

data = {
    "action": "geteCMSList",
    "keyword": "",
    "officeId": "0",
    "contractAwardTo": "",
    "contractStartDtFrom": "",
    "contractStartDtTo": "",
    "contractEndDtFrom": "",
    "contractEndDtTo": "",
    "departmentId": "",
    "tenderId": "",
    "procurementMethod": "",
    "procurementNature": "",
    "contAwrdSearchOpt": "Contains",
    "exCertSearchOpt": "Contains",
    "exCertificateNo": "",
    "tendererId": "",
    "procType": "",
    "statusTab": "eTenders",
    "pageNo": "1",
    "size": "10",
    "workStatus": "All",
}

_columns = [
    "S. No",
    "Ministry, Division, Organization, PE",
    "Procurement Nature, Type & Method",
    "Tender/Proposal ID, Ref No., Title..",
    "Contract Awarded To",
    "Company Unique ID",
    "Experience Certificate No",
    "Contract Amount",
    "Contract Start & End Date",
    "Work Status",
]

for page in range(1, 11):  # <--- Increase number of pages here
    print(f"Page: {page}")
    data["pageNo"] = page
    response = requests.post(
        "https://www.eprocure.gov.bd/AdvSearcheCMSServlet", data=data
    )
    # The response HTML is missing a `table` tag, so we need to add it
    soup = BeautifulSoup("<table>" + response.text + "</table>", "html.parser")
    df = pd.read_html(str(soup))[0]
    df.columns = _columns
    print(df.to_string())

Going further

How do I select the different tabs/tables on the page? To select the different tabs, change the "statusTab" value in data. Inspect the Payload tab again and you'll see what I mean.

Output

The above code outputs rows like the following (reflowed here with | separating the columns, and truncated):

1 | Ministry of Education, Education Engineering Department, Office of the Executive Engineer, EED, Kishoreganj Zone. | Works, NCT, LTM | 300580, 932/EE/EED/KZ/Rev-5974/2018-19/23, Dt: 28/03/2019, Repair and Renovation Works at Chowganga Shahid Smrity High School Itna Kishoreganj. 01-Apr-2019 | M/S KAZI RASEL NIRMAN SONGSTA | 1051854 | WD-5974-25/e-GP/20221228/300580/0060000 | 475000.000 | 10-Jun-2019 03-Sep-2019 | Completed

2 | Ministry Of Water Resourses, Bangladesh Water Development Board (BWDB), Chattogram Mechanical Division | Works, NCT, LTM | 558656, CMD/T-19/100 Dated: 14-03-2021, Manufacturing supplying & installation of 01 No MS Flap gate size - 1.65 m 1.95m and 01 no. Padestal type lifting device for sluice no S-15 6-vent 02 nos MS Vertical gate size - 1.65 m 1.95m for sluice no S-15 6-vent and sluice no S-14 new 1-vent at Coxs Bazar Sadar Upazilla of CEP Polder No 66/1 under Coxsbazar O&M Division implemented by Chattogram Mechanical Division BWDB Madunaghat Chattogram during the financial year 2020-21. 15-Mar-2021 | M/S. AN Corporation | 1063426 | CMD/COX/LTM-16/2020-21/e-GP/20221228/558656/0059991 | 503470.662 | 12-Apr-2021 05-May-2021 | Completed

3 | Ministry Of Water Resourses, Bangladesh Water Development Board (BWDB), Chattogram Mechanical Division | Works, NCT, LTM | 633496, CMD/T-19/263 Dated: 30-11-2021, Manufacturing, supplying & installation of 07 No M.S Flap gate for sluice no.- 6 (1-vent), sluice no.- 7 (2-vent), sluice no.-8 (2-vent), sluice no.-35 (2-vent) size :- (1.00 m × 1.00m), 01 No Padestal type lifting device for sluice no- 13 (1-vent) for CEP Polder No 64/2B, at pekua Upazilla under Chattogram Mechanical Division, BWDB, Madunaghat, Chattogram, during the financial year 2021-22. 30-Nov-2021 | M/S. AN Corporation | 1063426 | CMD/LTM-08/2021-22/e-GP/20221228/633496/0059989 | 648808.272 | 26-Dec-2021 31-Jan-2022 | Completed

...
Power BI Pivot Columns Issue
My data looked like: [screenshot]. After pivoting in Power BI (Power Query), my data now looks like: [screenshot with nulls]. How can I get rid of those null values so that my table looks like this? [screenshot of desired result] P.S. I have tried the "Don't aggregate" option while pivoting.
You're doing something wrong, as I get your desired result using "don't aggregate":

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUSoEEcWZOdlKsTpQESMgUZ6fnwMWMYKpSc4vKcnPQ4ihqDLGosoYpiotM6koVSk2FgA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [id = _t, ques = _t, ans = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"id", Int64.Type}, {"ques", type text}, {"ans", type text}}),
    #"Pivoted Column" = Table.Pivot(#"Changed Type", List.Distinct(#"Changed Type"[ques]), "ques", "ans")
in
    #"Pivoted Column"
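For readability, here is a minimal equivalent sketch of the same steps, using a plain inline table with made-up sample rows instead of the compressed blob above:

let
    // Inline sample data standing in for the Binary.Decompress blob
    Source = #table(
        type table [id = Int64.Type, ques = text, ans = text],
        {{1, "Q1", "A1"}, {1, "Q2", "A2"}, {2, "Q1", "A3"}, {2, "Q2", "A4"}}
    ),
    // "Don't aggregate" corresponds to omitting the aggregation function argument
    #"Pivoted Column" = Table.Pivot(Source, List.Distinct(Source[ques]), "ques", "ans")
in
    #"Pivoted Column"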
Correct. I tried the same and got the same results. You have to select the "ques" column, and the pivot values must be "ans":

#"Changed Type" = Table.TransformColumnTypes(Source,{{"Id", Int64.Type}, {"ques", type text}, {"ans", type text}}),
#"Pivoted Column" = Table.Pivot(#"Changed Type", List.Distinct(#"Changed Type"[ques]), "ques", "ans")
Is there any way in Power BI to assign a value to the first occurrence of each distinct value
I have a column of claim numbers in which a claim number can occur multiple times. I want to assign the value 1 the first time a claim number occurs, and 0 whenever it is repeated. Is there a method for creating a calculated column for this problem?
You can use Power Query to achieve your goal. Below I created a sample dataset of claim numbers and occurrences, and achieved the result. Just paste the code into the Advanced Editor, which you can find on the Home tab in the Query group. If you open the Advanced Editor you can see the full code.

The full M code:

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMjMwNDFS0lEyVIrVAfNMzYA8IxgPTc4UWQ6s0hBFpSmKnAmKnBGKKcboZsYCAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [#"Claim No" = _t, Occurances = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Claim No", Int64.Type}, {"Occurances", Int64.Type}}),
    #"Grouped Rows" = Table.Group(#"Changed Type", {"Claim No"}, {{"Order", each _, type table [Claim No=nullable number, Occurances=nullable number]}}),
    #"Added Custom" = Table.AddColumn(#"Grouped Rows", "Custom", each Table.AddIndexColumn([Order],"SNo",1)),
    Custom1 = Table.Combine(#"Added Custom"[Custom]),
    #"Added Custom1" = Table.AddColumn(Custom1, "Distinct_Number", each if [SNo] = 1 then 1 else 0),
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom1",{"SNo"}),
    #"Changed Type1" = Table.TransformColumnTypes(#"Removed Columns",{{"Distinct_Number", Int64.Type}})
in
    #"Changed Type1"

If we test it, it returns a Distinct_Number column that is 1 on the first occurrence of each claim number and 0 on repeats.
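As a hedged alternative sketch (with made-up sample data), the same flag can also be computed without grouping, by comparing each row's position with the first position of its claim number; this is fine for small tables, though the grouping approach above should scale better:

let
    // Made-up sample data: claim numbers with repeats
    Source = #table(type table [#"Claim No" = Int64.Type],
        {{101}, {101}, {102}, {103}, {103}, {103}}),
    #"Added Index" = Table.AddIndexColumn(Source, "Idx", 0),
    // 1 on the first occurrence of each claim number, 0 on repeats
    #"Added Flag" = Table.AddColumn(#"Added Index", "Distinct_Number",
        each if [Idx] = List.PositionOf(#"Added Index"[Claim No], [Claim No]) then 1 else 0,
        Int64.Type),
    Result = Table.RemoveColumns(#"Added Flag", {"Idx"})
in
    Result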
Power BI - handling non-existing arguments and returning data in table for valid arguments
I am stuck at the error handling routine. I have this function:

(LicenceNumber) =>
let
    Source = Web.Page(Web.Contents("http://mbsweblist.fsco.gov.on.ca/ShowLicence.aspx?M" & Number.ToText(LicenceNumber) & "~")),
    WebData = Source{1}[Data],
    #"Extracted Text Before Delimiter" = Table.TransformColumns(WebData, {{"Column1", each Text.BeforeDelimiter(_, ":"), type text}}),
    #"Removed Top Rows" = Table.Skip(#"Extracted Text Before Delimiter",1),
    #"Transposed Table" = Table.Transpose(#"Removed Top Rows"),
    #"Promoted Headers" = Table.PromoteHeaders(#"Transposed Table", [PromoteAllScalars=true])
in
    #"Promoted Headers"

which returns data to this table:

let
    Source = {13000246..13000250},
    #"Convert to Table" = Table.FromList(Source,Splitter.SplitByNothing(),{"Licence Number"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Convert to Table",{{"Licence Number", Int64.Type}}),
    #"Get WebData" = Table.AddColumn(#"Changed Type", "WebData", each try WebData([Licence Number]) otherwise #table({},{})),
    #"Combine WebData" = Table.Combine(#"Get WebData"[WebData]),
    #"Changed Types" = Table.TransformColumnTypes(#"Combine WebData",{{"Agent/Broker Name", type text}, {"Licence #", type text}, {"Brokerage Name", type text}, {"Licence Class", type text}, {"Status", type text}, {"Issue Date", type date}, {"Expiry Date", type date}, {"Inactive Date", type date}})
in
    #"Changed Types"

I am trying to handle the situation where I pass an invalid value in Source, let's say Source = {13009995..13009999}; this throws the error "col X of table was not found". I tried to use the following error handling logic, but it is not working:

Empty = #table({{"Agent/Broker Name", type text}, {"Licence #", type text}, {"Brokerage Name", type text}, {"Licence Class", type text}, {"Status", type text}, {"Issue Date", type date}, {"Expiry Date", type date}, {"Inactive Date", type date}},{}),
Combine = Table.Combine({#"Get WebData"[WebData], Empty}),

I am primarily a business analyst and unable to fix this error. Requesting help. User Olly had helped me with my primary query.
I would suggest creating an empty table as a separate query called EmptyTable that matches the columns you get when data does come back. Here's the M code for that:

let
    Empty = #table(
        {
            "Agent/Broker Name", "Licence #", "Brokerage Name", "Licence Class",
            "Status", "Issue Date", "Expiry Date", "Inactive Date"
        },
        {}
    )
in
    Empty

Now in your #"Get WebData" step, simply swap out #table({},{}) for EmptyTable:

#"Get WebData" = Table.AddColumn(
    #"Changed Type",
    "WebData",
    each try WebData([Licence Number]) otherwise EmptyTable
),

Note: your query looks to work fine when there is at least one valid licence number.
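A small hedged variant of the same idea: the empty table can also carry column types, so the later Table.TransformColumnTypes step sees consistent types even when every licence number fails:

let
    // Typed empty table matching the columns returned by the WebData function
    EmptyTable = #table(
        type table [
            #"Agent/Broker Name" = text, #"Licence #" = text, #"Brokerage Name" = text,
            #"Licence Class" = text, Status = text, #"Issue Date" = date,
            #"Expiry Date" = date, #"Inactive Date" = date
        ],
        {}
    )
in
    EmptyTable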
LDAP search in Grafana doesn't work
I've been struggling for a while to make Grafana's LDAP integration work, as I can't find the appropriate search filter. In AD, both groups Grafana-Admin and Grafana-User have a group as a member, and that group contains the users which need to authenticate to Grafana. To simplify: my user sys22 is in a group called Graylog, group Graylog is a member of group Grafana, and I want to use group Grafana in the LDAP configuration.

verbose_logging = true

[[servers]]
host = "dc-01.corp.domain.com"
port = 389
use_ssl = false
ssl_skip_verify = true
bind_dn = "CN=Grafana-Auth,OU=ApplicationAccount,OU=SE,OU=Admin,DC=corp,DC=domain,DC=com"
bind_password = 'pass1'
search_filter = "(&(objectCategory=Person)(sAMAccountName=%s))"
search_base_dns = ["dc=corp,dc=domain,dc=com"]
# group_search_filter = "(member:1.2.840.113556.1.4.1941:=%s)"
# group_search_filter_user_attribute = "distinguishedName"
# group_search_base_dns = ["OU=Group,OU=SE,OU=Unit,DC=corp,DC=domain,DC=com"]

[servers.attributes]
name = "givenName"
surname = "sn"
username = "sAMAccountName"
member_of = "distinguishedName"
email = "mail"

[[servers.group_mappings]]
group_dn = "CN=Grafana-Admin,OU=Access,OU=Group,OU=SE,OU=Unit,DC=corp,DC=domain,DC=com"
org_role = "Admin"

[[servers.group_mappings]]
group_dn = "CN=Grafana-User,OU=Access,OU=Group,OU=SE,OU=Unit,DC=corp,DC=domain,DC=com"
org_role = "Editor"

[[servers.group_mappings]]
group_dn = "*"
org_role = "Viewer"

Applying various filters doesn't help, and all the time I am getting:

lvl=eror msg="Invalid username or password" logger=context userId=0 orgId=0 uname= error="Invalid Username or Password"
t=2018-05-18T08:01:02+0200 lvl=info msg="Request Completed" logger=context userId=0 orgId=0 uname= method=POST path=/login status=401 remote_addr=X.X.X.X time_ms=13 size=98 referer=http://graylogprod.corp.domain.com/grafana/login

Any advice would be much appreciated. Thank you, B
The issue in my case was in the structure of the AD: Grafana doesn't support nested groups, so the users couldn't be found.
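For anyone hitting the same wall: Active Directory itself can expand nested membership server-side via the LDAP_MATCHING_RULE_IN_CHAIN matching rule (OID 1.2.840.113556.1.4.1941), which is what the commented-out lines in the question's config were attempting. A hedged sketch of that possible workaround, assuming the user's DN is matched against the expanded chain:

# Hedged sketch: let AD resolve nested groups instead of Grafana.
# The matching-rule-in-chain OID walks the membership hierarchy, so a user in
# Graylog (which is itself a member of Grafana) matches a filter on Grafana.
group_search_filter = "(member:1.2.840.113556.1.4.1941:=%s)"
group_search_filter_user_attribute = "distinguishedName"
group_search_base_dns = ["OU=Group,OU=SE,OU=Unit,DC=corp,DC=domain,DC=com"]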