Splunk query to filter results

I have some code deployed on 1 of my 6 servers, and I need a Splunk query that pulls data from the other 5 hosts - something like "all except this 1 host". I know the host option in Splunk to look for a single host's logs, but I have no idea how to do all except one. Can someone please assist me?
The one box I am talking about has my latest code changes, and the other 5 have my old code. So I want to write a query to do a before vs after analysis.

Looks like you have your answer, but I use an alternative method that speeds things up for me.
Within your search results, you can quickly eliminate what you want to filter out by ALT-clicking on a value in a selected field. In your case, it would add NOT host="1" to your query and immediately update your results.
I find this particularly helpful when I'm in the preliminary stage of investigating an issue, and don't have enough information to know exactly where to look first. It makes it easy to rapidly eliminate what you don't need.
*Note: This may still be broken in Splunk 6, not sure if the bug has been fixed yet: http://answers.splunk.com/answers/109473/alt-click-not-working-selected-fields

Okay, I got the answer to my question: just use !=. If I want the results for all my hosts except host 1, all I do is: index=blah host!="1"
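One detail worth knowing about the two negation forms in SPL: host!="1" matches only events that have a host field with some other value, while NOT host="1" also matches events that have no host field at all. With a small set of named servers the two are usually interchangeable, but the difference matters when you negate optional fields:

```
index=blah host!="1"
index=blah NOT host="1"
```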

query SCCM apps

Is there any table in the SCCM database that returns all applications located in a subfolder?
The table "v_Package" returns all applications and packages,
but I need to filter only applications, and only the ones in a specific subfolder.
I also found the table "v_Applications" in my SQL server that returns only the applications, but it is not present in the Microsoft docs:
https://learn.microsoft.com/en-us/mem/configmgr/develop/core/understand/sqlviews/application-management-views-configuration-manager
Is that normal?
Could someone point me in the right direction?
Thank you very much.
The best "trick" when you want to reproduce via SQL something that is possible in the SCCM console is to perform the action in the console and then watch SMSProv.log on your site server, which will tell you which SQL and WQL commands were used to produce your result. That is always a good pointer toward the direction you should take.
In this case you will see that the query takes the application info from a function call, fn_ListApplicationCIs_List(1031), which includes a column ObjectPath that holds your folder, so a quick
select DisplayName, ObjectPath from fn_ListApplicationCIs_List(1031)
should probably give you what you want.
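Building on that, a sketch of narrowing the result to a single console subfolder (the folder path below is a placeholder; substitute the path you see in your console):

```sql
-- fn_ListApplicationCIs_List and ObjectPath come from the answer above;
-- '/MyFolder/MySubfolder' is a hypothetical console folder path
SELECT DisplayName, ObjectPath
FROM   fn_ListApplicationCIs_List(1031)
WHERE  ObjectPath = '/MyFolder/MySubfolder';
```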
If you want to go deeper, the real info is, as far as I understand it, in a view called vFolderMembers (which, weirdly, only contains the objects that are not in the root), but going from there you would have to join some internal IDs to get to a readable name.
To also answer your other question: yes, v_Applications is a normal table, but it does not contain the ObjectPath (which is only relevant within the context of the console, not deployment), and yes, it is (sadly) normal that Microsoft's documentation is not always the best and never really complete when it comes to SCCM matters.
Wow, that's great.
What a relief - I've been struggling with that for a while.
Thank you so much for your help. It works exactly as I need.

Google Cloud BigQuery scheduled queries: weird error relating to JURISDICTION

All my datasets, tables, and all items inside BQ are in the EU. When I try to set up a view-to-table scheduled query running every 15 minutes, I get an error regarding my location, which is incorrect, because both source and destination are in the EU...
Anyone knows why?
There is a transient known issue matching your situation; the GCP support team needs more time for troubleshooting. There may be a potential issue in the UI. I would ask you to try the following steps:
Firstly, try to make the same operation in Chrome's incognito mode.
Another possible workaround is to follow this official guide using a different approach than the UI (the CLI, for instance).
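For instance, here is a sketch of running the same view-to-table query through the bq CLI with the location pinned explicitly (the dataset, table, and view names are placeholders):

```shell
# --location is a global bq flag; EU matches the datasets described above
bq --location=EU query \
    --destination_table=mydataset.mytable \
    --use_legacy_sql=false \
    'SELECT * FROM mydataset.myview'
```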
I hope it helps.

CRM 2013/15 Online and queries on huge data volumes

I'm working with a couple of million records; as soon as I try to run an Advanced Find with a linked entity as a criterion, the Advanced Find times out.
Would creating custom views allow me to filter properly? Does anyone know the proper way to use the Advanced Find this way? Are there limitations in out-of-the-box CRM that I should be aware of?
In CRM 2013 - it is possible to add indexes for specific fields by adding the columns to the quick find view for the entity.
You will need to wait for the Indexing Management Job to run (which is run every 24 hours by default) - see http://blogs.msdn.com/b/darrenliu/archive/2014/04/02/crm-2013-maintenance-jobs.aspx.
In previous versions of CRM, it was necessary to add the indexes directly to the database - this may still be necessary for more complex queries.
It was too early to post an answer. The problem I encountered was related to the OOB Advanced Find. Searching, for example, for an account with some related contacts (a really plain search with a linked entity), I got a SQL timeout. Everything was OOB, so I was a little clueless, and I opened a case with Microsoft. They found a bug: if I changed the sorting, the Advanced Find started working again. They are still investigating. So it wasn't a settings problem but a CRM bug.

Big Query table too fragmented - unable to rectify

I have a Google BigQuery table that is too fragmented, to the point that it is unusable. Apparently there is supposed to be a job running to fix this, but it doesn't seem to have stopped the issue for me.
I have attempted to fix this myself, with no success.
Steps tried:
Copying the table and deleting the original - this does not work, as the table is too fragmented to copy.
Exporting the file and re-importing. I managed to export to Google Cloud Storage (the export was JSON, so I couldn't download it directly) - this part was fine. The problem was on re-import: I was trying to use the web interface, and it asked for a schema. I only have the file to work with, so I tried to use the schema as identified by BigQuery, but couldn't get it accepted - I think the problem was the nested (tree/leaf) format not translating properly.
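For nested (record/repeated) fields, the web UI's simple name:type schema box is not enough; the bq CLI accepts a JSON schema file instead. A sketch, assuming placeholder dataset, table, bucket, and file names:

```shell
# Dump the existing table's definition, which contains the schema
bq show --format=prettyjson mydataset.mytable > table_def.json
# Copy the "schema.fields" array from table_def.json into schema.json, then:
bq load --source_format=NEWLINE_DELIMITED_JSON \
    mydataset.mytable_reimport \
    gs://my-bucket/export.json \
    ./schema.json
```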
To fix this, I know I either need to get the coalesce process to work (out of my hands - anyone from Google able to help? My project ID is 189325614134), or to get help to format the import schema correctly.
This is currently causing a project to grind to a halt, as we can't query the data, so any help that can be given is greatly appreciated.
Andrew
I've run a manual coalesce on your table. It should be marginally better, but there seems to be a problem where we're not coalescing as thoroughly as we should. We're still investigating, we have an open bug on the issue.
Can you confirm this is the SocialAccounts table? You should not be seeing the fragmentation limit on this table when you try to copy it. Can you give the exact error you are seeing?

Heroku dataclips result is different from local DB result

I ran a SQL query using the dataclips feature on Heroku. It works great - the results I get are in the correct order, because part of my clause specifies "ORDER BY ...".
I just pulled a copy of my database from Heroku. I then ran ActiveRecord::Base.connection.execute ""
The data I get is correct, but the PGResult object has the data ordered in a completely random way - i.e. my ORDER BY clause is totally ignored.
I am wondering why that is happening and if there is any way to prevent it.
We are now using the same version of Postgres as Heroku, and we get the same dataset. However, it is still not correctly ordered; the ORDER BY clause is still not working properly, so we are just forcing a sort in Rails (since we limit the results to 10, it's not a lot of work for Rails).
I'm posting this because no one seems to have a good answer, and we haven't found anything that works either. Might as well let other folks know that, for some reason, there seems to be an issue. Going to dig around a bit and potentially report a bug to the PostgreSQL folks.
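For anyone hitting the same thing: PostgreSQL does honor ORDER BY, but when rows tie on the sort key, their relative order is unspecified and can differ between two servers (a dump/restore changes the physical row order). Appending a unique column as a tiebreaker makes the ordering deterministic; a sketch with placeholder table and column names:

```sql
-- "created_at" alone may have ties; adding the primary key stabilizes the order
SELECT *
FROM   posts
ORDER BY created_at DESC, id DESC
LIMIT  10;
```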