FATAL: could not create shared memory segment: Invalid argument

The PostgreSQL service on my server doesn't start. When I looked at /var/lib/pgsql/pgstartup.log, it shows the following error: FATAL: could not create shared memory segment: Invalid argument
I read a lot of other posts that suggested changing the SHMMAX value. I did this through /etc/sysctl.conf and then ran sysctl -p. It worked the first time and the PostgreSQL service started running. But when I tried a SQL import (a 55GB file), it stopped again, and this time changing SHMMAX didn't help. In fact, the problem started with importing the 55GB SQL file: it got through roughly the first 30% and then stopped. I don't know why it keeps crashing.
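For reference, the change was along these lines; the values are only illustrative, and the SHMALL line is something other posts suggested raising alongside SHMMAX:

    # /etc/sysctl.conf - example values only
    kernel.shmmax = 4294967296    # largest single shared memory segment, in bytes (4 GB here)
    kernel.shmall = 1048576       # total shared memory allowed system-wide, in pages

    # apply the new values without rebooting
    sysctl -p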
Basically, there are two things I'm seeking help for:
How do I get the PostgreSQL service running again?
How do I import a 55GB SQL file without any problems?
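For the second question, this is roughly how I have been running the import, assuming a plain-SQL dump; the database name, dump file name and log file name below are just placeholders:

    # run the import in the background so it survives a dropped session,
    # stop at the first error, and keep a log of everything psql prints
    nohup psql -U postgres -d mydb -v ON_ERROR_STOP=1 -f dump.sql > import.log 2>&1 &

    # follow the progress
    tail -f import.log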
I've looked at a lot of resources but still haven't been able to find a solution. Any help will be appreciated.
Thanks!
EDIT: I found the solution. The issue was disk space. After I emptied the day-wise PostgreSQL log files in the data directory, the service started working again. Thank you all for the help.
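For anyone who hits the same thing, this is roughly how I checked and freed the space; the paths assume the default /var/lib/pgsql layout and may differ on your install:

    # free space on the volume holding the PostgreSQL data directory
    df -h /var/lib/pgsql

    # how much the day-wise server log files are taking up
    du -sh /var/lib/pgsql/data/pg_log/

    # truncate the daily log files rather than deleting them outright
    for f in /var/lib/pgsql/data/pg_log/postgresql-*.log; do : > "$f"; done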

This has been solved now. The edited post contains the solution too.
Thanks!

Related

BI Publisher - Fail to load and save data model

I started using BI Publisher about a week ago.
When working on a new data model, about one or two queries in, I get this error when I try to save:
Failed to load servlet/res?s=%252F~developer1%252Ftest%252FJustin%2520Tests%252FOSRP%2520Information.xdm&desc=&_sTkn=9ba70c01152efbcb413.
I can no longer save my data model.
I tried deleting my queries, logging in and out, and turning the machine off and on, but no luck.
I've currently resorted to saving all of my queries locally in Notepad.
I can create a whole new data model and it will save fine, but then after two or three queries the same thing happens.
What's going on and why would anyone design such a confusing error message?
Any help would be greatly appreciated.
After restarting your server, you won't get this issue. It sometimes happens due to a connection problem, so a restart should work. It resolved my problem.
None of the proposed solutions worked for me. I found out, on my own, that any unnecessary brackets around CASE in a SELECT statement will cause this error. Remove the unnecessary brackets and the error goes away.
Oracle MetaLink Doc ID 2173333.1. In BI Publisher releases 11.1.1.8.x and up, there is an option to Manage Cache in the Administration section of BIP. This option was also added to 11.1.1.7 in patch 140715 (11.1.1.7.140715).
Clearing the object cache will resolve the saving errors:
Click on the Administration link
Manage BI Publisher
Manage Cache
Click on 'Clear Object Cache'

MSBuild wants to copy C:\pagefile.sys. Any idea why?

I am using CruiseControl.NET + MSBuild. It had been working well for some time, but for the last two days I have been getting this error. Why is it trying to copy pagefile.sys? Here is the error log. Thank you all in advance for your help.
Error message:
"[CDATA[Could not copy "C:\pagefile.sys" to
C:\CCNet\PublishedFiles\_PublishedWebsites\merp\roslyn\pagefile.sys".
Beginning retry 1 in 1000ms. The process cannot access the file 'C:\pagefile.sys' because it is being used by another process."
Perhaps some .targets files have been accidentally edited in the .NET Framework directories, as is said here. A workaround I would prefer before reinstalling VS would be changing the swap file's disk in System Settings (if you have other logical/physical drives).

File Addition and Synchronization issues in RavenFS

I am having a very hard time making RavenFS behave properly and was hoping that I could get some help.
I'm running into two separate issues: one where uploading files to RavenFS while using an embedded database inside a service causes RavenDB to fall over, and another where synchronizing two instances set up in the same way makes the destination server fall over.
I have tried to do my best in documenting this... Code and steps to reproduce these issues are located here (https://github.com/punkcoder/RavenFSFileUploadAndSyncIssue), and a video is located here (https://youtu.be/fZEvJo_UVpc). I looked for these issues in the issue tracker and didn't find anything that seemed directly related, but I may have missed something.
The solution to this problem was to remove Raven from the project and replace it with MongoDB. Binary storage in Mongo can be done on the record itself without issue.

Big Query table too fragmented - unable to rectify

I have a Google BigQuery table that is too fragmented, to the point that it is unusable. Apparently there is supposed to be a job that runs to fix this, but it doesn't seem to have resolved the issue for me.
I have attempted to fix this myself, with no success.
Steps tried:
Copying the table and deleting the original - this does not work, as the table is too fragmented to copy
Exporting the table and re-importing it. I managed to export to Google Cloud Storage (the file was JSON, so I couldn't download it) - that part was fine. The problem was on re-import: I was trying to use the web interface and it asked for a schema. I only have the file to work with, so I tried to use the schema as identified by BigQuery, but couldn't get it accepted - I think the problem was that the tree/leaf format didn't translate properly.
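In case it helps, this is roughly the command-line route I would try instead of the web interface; the dataset, table and bucket names below are placeholders, not my real ones:

    # save the schema BigQuery already has for the existing table
    bq show --schema --format=prettyjson mydataset.mytable > schema.json

    # export the table to Cloud Storage as newline-delimited JSON
    bq extract --destination_format=NEWLINE_DELIMITED_JSON mydataset.mytable gs://mybucket/export-*.json

    # load it back into a fresh table using the saved schema
    bq load --source_format=NEWLINE_DELIMITED_JSON mydataset.mytable_copy gs://mybucket/export-*.json schema.json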
To fix this, I know I either need to get the coalesce process to work (out of my hands - anyone from Google able to help? My project ID is 189325614134), or to get help to format the import schema correctly.
This is currently causing a project to grind to a halt, as we can't query the data, so any help that can be given is greatly appreciated.
Andrew
I've run a manual coalesce on your table. It should be marginally better, but there seems to be a problem where we're not coalescing as thoroughly as we should. We're still investigating and have an open bug on the issue.
Can you confirm this is the SocialAccounts table? You should not be seeing the fragmentation limit on this table when you try to copy it. Can you give the exact error you are seeing?

Flume: No errors thrown but flume fails to transfer the file completely

I've been working with Flume for the last 2-3 weeks. I've run into a new situation that I'm not sure how to resolve.
Flow: using a basic flow, spoolDir -> fileChannel -> HDFS
No extra parameters set in the .conf file
File size that I'm trying to transfer: 1.4 GB
Situation: the agent starts fine, the file transfer starts fine, and the file in the source directory gets renamed to .COMPLETED, but the complete file is not transferred to HDFS and no errors/exceptions are thrown. I ran the same ad hoc test several times and found that out of the 1.4 GB, only ~169 MB gets transferred. Seems weird!
Any suggestions? Any solutions? Any hypotheses?
How long did you wait?
Give it an hour and you may see something.
It's possible you have a corrupt fileChannel and it needs some time to clean it up.
What version of Flume, btw?
Try adding some more data to the file and wait for some time. Anything interesting in the logs?
Also make sure you have enough space left on your HDFS.
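A quick way to check both the free space and how much data has actually landed; the /flume/events path is just a placeholder for wherever your HDFS sink writes:

    # overall free space on the cluster
    hdfs dfs -df -h

    # how much data has actually arrived under the sink's target directory
    hdfs dfs -du -s -h /flume/events

    # list the rolled files, including any .tmp files the sink is still writing
    hdfs dfs -ls -h /flume/events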