The xfce4 desktop environment goes black on random occasions during use.
Install xf86-video-fbdev.
Will post updates here, and the solution as soon as I've found it.
$ uname -v
DragonFly v4.2.4-RELEASE #6: Sun Aug 9 13:25:14 EDT 2015 root@www.shiningsilence.com:/usr/obj/home/justin/release/4_2/sys/X86_64_GENERIC
n00b207: you shouldn't need xf86-video-fbdev (that warning is because of how Xorg probes available devices)
$ pciconf -lv
hostb0@pci0:0:0:0: class=0x060000 card=0x50001458 chip=0x0c008086 rev=0x06 hdr=0x00
vendor = 'Intel Corporation'
device = '4th Gen Core Processor DRAM Controller'
class = bridge
subclass = HOST-PCI
vgapci0@pci0:0:2:0: class=0x030000 card=0xd0001458 chip=0x041e8086 rev=0x06 hdr=0x00
vendor = 'Intel Corporation'
device = '4th Generation Core Processor Family Integrated Graphics Controller'
class = display
subclass = VGA
$ xset q
Keyboard Control:
auto repeat: on key click percent: 0 LED mask: 00000000
XKB indicators:
00: Caps Lock: off 01: Num Lock: off 02: Scroll Lock: off
03: Shift Lock: off 04: Group 2: off 05: Mouse Keys: off
auto repeat delay: 500 repeat rate: 20
auto repeating keys: 00feffffdffffbbf
fedfffffffdfe5ef
ffffffffffffffff
ffffffffffffffff
bell percent: 0 bell pitch: 400 bell duration: 100
Pointer Control:
acceleration: 2/1 threshold: 4
Screen Saver:
prefer blanking: yes allow exposures: yes
timeout: 600 cycle: 600
Colors:
default colormap: 0x22 BlackPixel: 0x0 WhitePixel: 0xffffff
Font Path:
/usr/local/share/fonts/misc/,/usr/local/share/fonts/TTF/,/usr/local/share/fonts/OTF/,/usr/local/share/fonts/Type1/,/usr/local/share/fonts/100dpi/,/usr/local/share/fonts/75dpi/,built-ins
DPMS (Energy Star):
Standby: 600 Suspend: 600 Off: 600
DPMS is Enabled
Monitor is On
Font cache:
Server does not have the FontCache Extension
n00b207: could you try "xset dpms force off" to check if it freezes (you might need to repeat it a few times for the screen to fully turn off)
dmesg output: http://pastebin.com/9tk5JCBf
n00b207: looks like you hit the xfce+Xorg combo; try "xset -dpms && xset s noblank && xset s off" and see if it happens again
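If it is more convenient to apply those same settings from a small program instead of calling xset, a rough Xlib/DPMS equivalent looks like this. This is only a sketch (not from the original thread); it assumes the libX11/libXext development headers and a build line such as gcc dpms_off.c -o dpms_off -lX11 -lXext:
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/dpms.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int event_base, error_base;

    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    /* Roughly "xset -dpms": disable DPMS if the extension is present. */
    if (DPMSQueryExtension(dpy, &event_base, &error_base))
        DPMSDisable(dpy);

    /* Roughly "xset s off" / "xset s noblank": no screen-saver timeout, prefer non-blanking. */
    XSetScreenSaver(dpy, 0, 0, DontPreferBlanking, AllowExposures);

    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}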
Related
I've written a small PCI device in QEMU using https://github.com/qemu/qemu/blob/v2.7.0/hw/misc/edu.c as a base, and https://github.com/cirosantilli/linux-kernel-module-cheat/blob/6788a577c394a2fc512d8f3df0806d84dc09f355/kernel_module/pci.c to interact with it. However, interrupts appear to fire roughly every 2 seconds.
I write data to the PCI device to acknowledge the interrupt, and I print the IRQ state before and after the call to:
if (!device->irq_status && !device_msi_enabled(device)) {
    printf("Set state from %d ", device->pdev.irq_state);
    pci_set_irq(&device->pdev, 0);
    printf("to %d\n", device->pdev.irq_state);
}
The output is
Set state from 0 to 0
Set state from 0 to 0
Set state from 0 to 0
Set state from 0 to 0
I can't tell what could cause the interrupt to continue firing - is there a property I've forgotten?
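For reference, the upstream edu.c that this device is based on raises and lowers its legacy interrupt around a single irq_status word, essentially as below (paraphrased from the v2.7.0 file linked above; the names are the upstream ones, not the modified device's). With non-MSI (INTx) interrupts the line is level-triggered, so it stays asserted until every bit of irq_status has been cleared by the guest's acknowledge write:
static void edu_raise_irq(EduState *edu, uint32_t val)
{
    edu->irq_status |= val;
    if (edu->irq_status) {
        if (edu_msi_enabled(edu)) {
            msi_notify(&edu->pdev, 0);
        } else {
            pci_set_irq(&edu->pdev, 1);
        }
    }
}

static void edu_lower_irq(EduState *edu, uint32_t val)
{
    edu->irq_status &= ~val;

    /* Deassert the INTx line only once no interrupt source is pending. */
    if (!edu->irq_status && !edu_msi_enabled(edu)) {
        pci_set_irq(&edu->pdev, 0);
    }
}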
While working on my bootloader project, I noticed that the resulting binary file is larger than the sum of the sizes of each section in the original ELF file.
After linking the bootloader image via ld, the ELF file is structured as follows:
$ readelf -S elfboot.elf
There are 10 section headers, starting at offset 0xa708:
Section Headers:
[Nr] Name Type Addr Off Size ES Flg Lk Inf Al
[ 0] NULL 00000000 000000 000000 00 0 0 0
[ 1] .text PROGBITS 00007e00 000e00 004bc7 00 AX 0 0 16
[ 2] .rodata PROGBITS 0000c9d0 0059d0 000638 00 A 0 0 32
[ 3] .initcalls PROGBITS 0000ec20 007c20 00000c 00 WA 0 0 4
[ 4] .exitcalls PROGBITS 0000ec30 007c30 00000c 00 WA 0 0 4
[ 5] .data PROGBITS 0000ec40 007c40 0001e0 00 WA 0 0 32
[ 6] .bss NOBITS 0000ee20 007e20 00025c 00 WA 0 0 32
[ 7] .symtab SYMTAB 00000000 007e20 0016e0 10 8 166 4
[ 8] .strtab STRTAB 00000000 009500 0011bb 00 0 0 1
[ 9] .shstrtab STRTAB 00000000 00a6bb 00004a 00 0 0 1
Key to Flags:
W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
L (link order), O (extra OS processing required), G (group), T (TLS),
C (compressed), x (unknown), o (OS specific), E (exclude),
p (processor specific)
I noticed that there is a rather big gap (~7KB) between the sections .rodata and .initcalls. I also checked out the content of each section:
$ objdump -s elfboot.elf
[...]
Contents of section .rodata:
[...]
cfc0 23cf0000 80000000 2fcf0000 00010000 #......./.......
cfd0 3bcf0000 00020000 47cf0000 00040000 ;.......G.......
cfe0 54cf0000 00080000 61cf0000 00100000 T.......a.......
cff0 6ecf0000 00200000 7bcf0000 00400000 n.... ..{....@..
d000 89cf0000 00800000 ........
Contents of section .initcalls:
ec20 02b50000 6db50000 feac0000 ....m.......
Contents of section .exitcalls:
[...]
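Putting the numbers from the readelf and objdump output together, the gap and the corresponding zero fill can be computed directly. A throwaway check (plain host-side C, illustrative only; the constants are copied from the dumps above):
#include <stdio.h>

int main(void)
{
    unsigned rodata_end = 0xc9d0 + 0x638; /* .rodata Addr + Size = 0xd008   */
    unsigned initcalls  = 0xec20;         /* .initcalls Addr                */
    unsigned base       = 0x7e00;         /* __bootstrap_start / .text Addr */

    printf("gap between .rodata and .initcalls: 0x%x (%u bytes)\n",
           initcalls - rodata_end, initcalls - rodata_end); /* 0x1c18, ~7 KB */
    printf("zero fill in elfboot.bin: offsets 0x%x to 0x%x\n",
           rodata_end - base, initcalls - base);            /* 0x5208 to 0x6e20 */
    return 0;
}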
My linker script aligns every section on a 16-byte (0x10) boundary. After
objcopy -O binary elfboot.elf elfboot.bin
the resulting binary contains a huge chunk of zero-filled bytes from offset 0x5200 (0xd000 - 0x7e00) all the way to offset 0x6e20 (0xec20 - 0x7e00). Now that I know where the zero-filled bytes come from, how do I remove them? For reference, here is the verbose output of the linker step, including the linker script used:
i686-elfboot-gcc -o elfboot.elf -T elfboot.ld ./src/arch/x86/bootstrap.o ./src/arch/x86/a20.o ./src/arch/x86/bda.o ./src/arch/x86/bios.o ./src/arch/x86/copy.o ./src/arch/x86/e820.o ./src/arch/x86/entry.o ./src/arch/x86/idt.o ./src/arch/x86/realmode_jmp.o ./src/arch/x86/setup.o ./src/arch/x86/opmode.o ./src/arch/x86/pic.o ./src/arch/x86/ptrace.o ./src/arch/x86/video.o ./src/core/bdev.o ./src/core/cdev.o ./src/core/elf.o ./src/core/input.o ./src/core/interrupt.o ./src/core/loader.o ./src/core/main.o ./src/core/module.o ./src/core/pci.o ./src/core/printf.o ./src/core/string.o ./src/core/symbol.o ./src/crypto/crc32.o ./src/drivers/ide/ide.o ./src/fs/fs.o ./src/fs/file.o ./src/fs/super.o ./src/fs/ramfs/ramfs.o ./src/fs/isofs/isofs.o ./src/lib/ata/libata.o ./src/lib/tmg/libtmg.o ./src/mm/memblock.o ./src/mm/page_alloc.o ./src/mm/slub.o ./src/mm/util.o -O2 -Wl,--verbose -nostdlib -lgcc
GNU ld (GNU Binutils) 2.31.1
Supported emulations:
elf_i386
elf_iamcu
opened script file elfboot.ld
using external linker script:
==================================================
OUTPUT_FORMAT(elf32-i386)
ENTRY(_arch_start)
SECTIONS
{
/*
* The boot stack starts at 0x6000 and grows towards lower addresses.
* The stack is designed to be only one page in size, which should be
* sufficient for the bootstrap stage.
*/
. = 0x6000;
__stack_start = .;
/*
* Buffer for reading files from the boot device. Usually boot devices
* are designed to read data in chunks of 0x200 (512) or 0x800 (2048)
* bytes. The buffer is large enough to read 4 512 or 2 2048 chunks of
* data from the disk.
*/
__buffer_start = .;
. = 0x7000;
__buffer_end = .;
/*
* Actual bootstrap stage code and data.
*/
. = 0x7E00;
__bootstrap_start = .;
.text ALIGN(0x10) : {
__text_start = .;
*(.text*)
__text_end = .;
}
.rodata ALIGN(0x10) : {
__rodata_start = .;
*(.rodata*)
__rodata_end = .;
}
/*
* This section is dedicated to all built-in modules. If a module will
* be included in the elfboot binary, the module_init function pointer
* of that module is placed in this section.
*
* Make sure to initialize built-in filesystems first as we need it to
* setup the root file system node.
*/
.initcalls ALIGN(0x10) : {
__initcalls_vfs_start = .;
/* Built-in file systems */
*(.initcalls_vfs*)
__initcalls_vfs_end = .;
__initcalls_dev_start = .;
/* Built-in devices */
*(.initcalls_dev*)
__initcalls_dev_end = .;
__initcalls_start = .;
/* Built-in modules */
*(.initcalls*)
__initcalls_end = .;
}
.exitcalls ALIGN(0x10) : {
__exitcalls_start = .;
*(.exitcalls*)
__exitcalls_end = .;
}
.data ALIGN(0x10) : {
__data_start = .;
*(.data*)
__data_end = .;
}
.bss ALIGN(0x10) : {
__bss_start = .;
*(.bss*)
__bss_end = .;
}
__bootstrap_end = .;
}
==================================================
attempt to open ./src/arch/x86/bootstrap.o succeeded
./src/arch/x86/bootstrap.o
attempt to open ./src/arch/x86/a20.o succeeded
./src/arch/x86/a20.o
attempt to open ./src/arch/x86/bda.o succeeded
./src/arch/x86/bda.o
attempt to open ./src/arch/x86/bios.o succeeded
./src/arch/x86/bios.o
attempt to open ./src/arch/x86/copy.o succeeded
./src/arch/x86/copy.o
attempt to open ./src/arch/x86/e820.o succeeded
./src/arch/x86/e820.o
attempt to open ./src/arch/x86/entry.o succeeded
./src/arch/x86/entry.o
attempt to open ./src/arch/x86/idt.o succeeded
./src/arch/x86/idt.o
attempt to open ./src/arch/x86/realmode_jmp.o succeeded
./src/arch/x86/realmode_jmp.o
attempt to open ./src/arch/x86/setup.o succeeded
./src/arch/x86/setup.o
attempt to open ./src/arch/x86/opmode.o succeeded
./src/arch/x86/opmode.o
attempt to open ./src/arch/x86/pic.o succeeded
./src/arch/x86/pic.o
attempt to open ./src/arch/x86/ptrace.o succeeded
./src/arch/x86/ptrace.o
attempt to open ./src/arch/x86/video.o succeeded
./src/arch/x86/video.o
attempt to open ./src/core/bdev.o succeeded
./src/core/bdev.o
attempt to open ./src/core/cdev.o succeeded
./src/core/cdev.o
attempt to open ./src/core/elf.o succeeded
./src/core/elf.o
attempt to open ./src/core/input.o succeeded
./src/core/input.o
attempt to open ./src/core/interrupt.o succeeded
./src/core/interrupt.o
attempt to open ./src/core/loader.o succeeded
./src/core/loader.o
attempt to open ./src/core/main.o succeeded
./src/core/main.o
attempt to open ./src/core/module.o succeeded
./src/core/module.o
attempt to open ./src/core/pci.o succeeded
./src/core/pci.o
attempt to open ./src/core/printf.o succeeded
./src/core/printf.o
attempt to open ./src/core/string.o succeeded
./src/core/string.o
attempt to open ./src/core/symbol.o succeeded
./src/core/symbol.o
attempt to open ./src/crypto/crc32.o succeeded
./src/crypto/crc32.o
attempt to open ./src/drivers/ide/ide.o succeeded
./src/drivers/ide/ide.o
attempt to open ./src/fs/fs.o succeeded
./src/fs/fs.o
attempt to open ./src/fs/file.o succeeded
./src/fs/file.o
attempt to open ./src/fs/super.o succeeded
./src/fs/super.o
attempt to open ./src/fs/ramfs/ramfs.o succeeded
./src/fs/ramfs/ramfs.o
attempt to open ./src/fs/isofs/isofs.o succeeded
./src/fs/isofs/isofs.o
attempt to open ./src/lib/ata/libata.o succeeded
./src/lib/ata/libata.o
attempt to open ./src/lib/tmg/libtmg.o succeeded
./src/lib/tmg/libtmg.o
attempt to open ./src/mm/memblock.o succeeded
./src/mm/memblock.o
attempt to open ./src/mm/page_alloc.o succeeded
./src/mm/page_alloc.o
attempt to open ./src/mm/slub.o succeeded
./src/mm/slub.o
attempt to open ./src/mm/util.o succeeded
./src/mm/util.o
attempt to open /home/croemheld/Repositories/elfboot/elfboot-toolchain/lib/gcc/i686-elfboot/8.2.0/libgcc.so failed
attempt to open /home/croemheld/Repositories/elfboot/elfboot-toolchain/lib/gcc/i686-elfboot/8.2.0/libgcc.a succeeded
(/home/croemheld/Repositories/elfboot/elfboot-toolchain/lib/gcc/i686-elfboot/8.2.0/libgcc.a)_umoddi3.o
(/home/croemheld/Repositories/elfboot/elfboot-toolchain/lib/gcc/i686-elfboot/8.2.0/libgcc.a)_udivmoddi4.o
Please note that I am not able to load the sections individually into memory, since my first-stage bootloader (the boot sector) simply loads the entire file at a specified offset from the boot device. To reduce the size of the second-stage bootloader image, I want to try to remove the gap at link time or, if that is possible at all, after linking (objcopy, ...).
I have a custom hardware device that uses a Variscite i.MX6Q (quad-core) to drive a 320x240 display. Once the linux kernel starts booting, the LCD display works great - no issues at all. However, prior to that the boot loader (u-boot) shows a white screen (sometimes with faint vertical lines) for about 0.25s, then goes black for about 8s until the kernel takes over (reinitializing the display and correctly showing the kernel's own splash screen).
Since the linux kernel can drive the display just fine, I'm sure I've just mis-configured something in my u-boot setup...but I'm tearing my hair out trying to figure out what and where! Resources / things I've tried include:
Porting LVDS LCD With Low Resolution to i.MX6 - This seems highly relevant, but refers to tweaking linux kernel drivers instead of uboot and I'm not experienced enough to port the knowledge to uboot.
U-Boot splash screen - LVDS - This seems soooo close to the problem I'm having, but doesn't list a clear solution. One response in the forum linked to a suggestion to invert the polarity of one of the clocks; I tried that, but it made no noticeable difference.
How to display splash screen on parallel LCD in u-boot - In the same theme as the prior posts, this again hints at an issue with specifying clocks for low-res displays.
i.mx6 33.26MHz LVDS panel cannot display in u-boot - Following these instructions, I modified ...../uboot/drivers/video/ipu_common.c and set the g_ldb_clk struct .rate members to 6400000, but that seemed to have no effect.
Adding Displays to iMX Developer's Kits [Warning - PDF!] - Instructions on how to add support for new displays to iMX boards; section 6.1.4 talks about iMX6Q. However, I've added the proper display timings to the displays[] var (see code below) and I'm still having problems.
From my custom board schematics, I know that I need to configure the backlight PWM on PWM2, the backlight enable/disable on GPIO 5-13, and custom display timings. So, here are the relevant sections in ..../uboot/board/variscite/mx6var_som.c:
struct display_info_t const displays[] = {{
    .bus = -1,
    .addr = 0,
    .pixfmt = IPU_PIX_FMT_RGB24,
    .detect = detect_MyCustomBoard,
    .enable = lvds_enable_disable,
    .mode = {
        .name = "VAR-QVGA MX6CB-R",
        .refresh = 60, /* optional */
        .xres = 320,
        .yres = 240,
        .pixclock = MHZ2PS(6.4),
        .left_margin = 64,
        .right_margin = 20,
        .upper_margin = 8,
        .lower_margin = 4,
        .hsync_len = 4,
        .vsync_len = 10,
        .sync = FB_SYNC_EXT,
        .vmode = FB_VMODE_NONINTERLACED
    } },
    ...
};
static void setup_display(void)
{
    ...
    /* Turn off backlight until display is ready */
    SETUP_IOMUX_PAD(PAD_DISP0_DAT19__GPIO5_IO13 | MUX_PAD_CTRL(NO_PAD_CTRL));
    gpio_direction_output(IMX_GPIO_NR(5, 13), 0);

    /* Setup the backlight dimmer (via PWM) */
    SETUP_IOMUX_PAD(PAD_DISP0_DAT9__PWM2_OUT | MUX_PAD_CTRL(BACKLIGHT_PWM_CTRL));
    pwm_init(VAR_SOM_BACKLIGHT_PWM_ID, VAR_SOM_BACKLIGHT_PERIOD, 0);
    pwm_config(VAR_SOM_BACKLIGHT_PWM_ID, 0, VAR_SOM_BACKLIGHT_PERIOD);
    ...
    /* Turn on LDB0, LDB1, IPU, IPU DI0 clocks */
    reg = readl(&mxc_ccm->CCGR3);
    reg |= MXC_CCM_CCGR3_LDB_DI0_MASK | MXC_CCM_CCGR3_LDB_DI1_MASK;
    writel(reg, &mxc_ccm->CCGR3);

    /* set LDB0, LDB1 clk select to 011/011 */
    reg = readl(&mxc_ccm->cs2cdr);
    reg &= ~(MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_MASK
             | MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_MASK);
    reg |= (1 << MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_OFFSET)
           | (1 << MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_OFFSET);
    writel(reg, &mxc_ccm->cs2cdr);
    ...
}
int splash_screen_prepare(void)
{
    ...
    /* Turn on backlight */
    gpio_set_value(IMX_GPIO_NR(5, 13), 1);
    pwm_config(VAR_SOM_BACKLIGHT_PWM_ID,
               VAR_SOM_BACKLIGHT_PERIOD * 127 / 256, VAR_SOM_BACKLIGHT_PERIOD);
    ...
}
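As a quick sanity check (a throwaway host-side program, not board code): the 6.4 MHz pixel clock passed via MHZ2PS(6.4) is consistent with the timings in the .mode entry above at a 60 Hz refresh, since the total frame is 408 x 262 pixels:
#include <stdio.h>

int main(void)
{
    /* Values taken from the .mode entry above. */
    unsigned htotal  = 320 + 64 + 20 + 4;  /* xres + left_margin + right_margin + hsync_len = 408 */
    unsigned vtotal  = 240 + 8 + 4 + 10;   /* yres + upper_margin + lower_margin + vsync_len = 262 */
    unsigned refresh = 60;

    printf("pixel clock = %u Hz\n", htotal * vtotal * refresh); /* 6413760 Hz, ~6.4 MHz */
    return 0;
}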
For comparison, here are the relevant sections of my linux device tree:
&pwm2 {
    pinctrl-names = "default";
    pinctrl-0 = <&pinctrl_pwm2_1>;
    status = "okay";
};

backlight {
    compatible = "pwm-backlight";
    pwms = <&pwm2 0 50000>;
    brightness-levels = <0 4 8 16 32 64 128 248>;
    default-brightness-level = <7>;
    status = "okay";
};
&ldb {
    status = "okay";

    lvds-channel@0 {
        fsl,data-mapping = "spwg";
        fsl,data-width = <24>;
        status = "okay";
        primary;

        display-timings {
            native-mode = <&timing0r>;

            timing0r: hsd100pxn1 {
                clock-frequency = <6400000>;
                hactive = <320>;
                vactive = <240>;
                hback-porch = <64>;
                hfront-porch = <20>;
                vback-porch = <8>;
                vfront-porch = <4>;
                hsync-len = <4>;
                vsync-len = <10>;
            };
        };
    };
    ...
};
&iomuxc {
    pinctrl-names = "default";
    pinctrl-0 = <&pinctrl_hog>;

    imx6qdl-var-som-mx6 {
        pinctrl_hog: hoggrp {
            fsl,pins = <
                ...
                /* LCD Enable on GPIO 5-13 */
                MX6QDL_PAD_DISP0_DAT19__GPIO5_IO13 0xc0000000
                ...
            >;
        };
In terms of hardware, the LVDS signal from the iMX6 is converted to parallel RGB by a TI SN65LVDS822 FlatLink LVDS receiver, which drives a 320x240 QVGA Okaya RH320240T-3x5AP-A display.
The framework I'm using is Yocto (Krogoth release), which includes:
U-Boot 2015.04-mx6+g535519b: git://github.com/varigit/uboot-imx.git, branch imx_v2015.04_4.1.15_1.1.0_ga_var03, commit 535519
Linux kernel 4.1.15: git://github.com/varigit/linux-2.6-imx.git, branch imx-rel_imx_4.1.15_2.0.0_ga-var01, commit 5a4b34
I do have a Variscite DevKit, and when I boot the SOM in the DevKit (with an appropriate device tree and associated drivers) everything works great and I see both the uboot splash image as well as the linux kernel splash image. This implies that the image I'm using for the uboot splash is valid, can be read by uboot, etc.
There is one other kicker: I do not have serial console access on my production board set :(.
So, the big question here is what am I doing wrong in my uboot display driver initialization? At this point, I'd even welcome strategies on how to go about debugging this (although I don't have access to an oscilloscope).
I have compiled U-Boot and a mainline kernel downloaded from kernel.org to run on a Raspberry Pi Model B. I have a problem using the GPIO interface. I am writing a module that manages two I/Os, one of which should generate an IRQ. When I call gpio_to_irq() or any other GPIO-related kernel API, the call always fails (return code -517 or -22). The same code works when run on the Raspberry Pi kernel downloaded from the GitHub RPi repository. The mainline kernel, however, supports the BCM2835, which is the SoC used on the RPi. Where is my approach wrong? Why do the GPIO API calls fail? If I manually find the virtual IRQ number (virq) and request it with request_irq(), everything works fine, even with the mainline kernel from kernel.org.
The mainline kernel version is 4.11, while the RPi kernel version from GitHub is 4.9.26. Here is the init function of the module:
static int __init hello_init(void)
{
    int result;
    int temp;

    printk(KERN_INFO "Hello world!\n");
    printk(KERN_INFO "%s\n", Version);

    /* Registering device */
    result = register_chrdev(memory_major, "Bisio", &memory_fops);
    if (result < 0)
    {
        printk(KERN_INFO "Memory Driver: Cannot Obtain Major Number %d\n", memory_major);
        return -1;
    }

    /* Allocating memory for the buffer */
    memory_buffer = kmalloc(MEMSIZE, GFP_KERNEL);
    if (!memory_buffer) {
        return -ENOMEM;
    }
    memset(memory_buffer, 0, MEMSIZE);

    result = gpio_request(23, "MyIO");
    printk(KERN_INFO "gpio_request: %d\n", result);
    result = gpio_direction_input(23);
    printk(KERN_INFO "gpio_direction_input: %d\n", result);
    result = gpio_to_irq(23);
    printk(KERN_INFO "gpio_to_irq: %d\n", result);

    return 0; // Non-zero return means that the module couldn't be loaded.
}
Doing an insmod of this module, I get:
[ 109.792257] nothing: loading out-of-tree module taints kernel.
[ 109.820350] Hello world!
[ 109.829341] Driver Version 1.27 - 10/09/2015 - 10:20
[ 109.841344] gpio_request: -517
[ 109.850737] gpio_direction_input: 0
[ 109.860187] gpio_to_irq: -22
whereas the same module on the RPi kernel (compiled with the same gcc cross-compiler, version 5.20, soft float) works correctly; here is the output of the same init function:
[ 47.927565] nothing: loading out-of-tree module taints kernel.
[ 47.939771] Hello world!
[ 47.942333] Driver Version 1.27 - 10/09/2015 - 10:20
[ 47.947411] gpio_request: 0
[ 47.952798] gpio_direction_input: 0
[ 47.956314] gpio_to_irq: 183
What am I missing?
Any help will be appreciated.
Regards
Marco Bisio
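For reference, in the kernel's error numbering -517 is -EPROBE_DEFER and -22 is -EINVAL. Below is a minimal sketch (not the original module) of the same calls with the return codes checked, so that a failed gpio_request() is not followed by gpio_to_irq() on a line that was never obtained; the legacy GPIO numbering from the post (line 23) is kept as an assumption:
#include <linux/errno.h>
#include <linux/gpio.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

#define MY_GPIO 23 /* same line used in the original module */

static int my_irq = -1;

static int __init gpio_check_init(void)
{
    int ret;

    ret = gpio_request(MY_GPIO, "MyIO");
    if (ret) {
        /* -517 is -EPROBE_DEFER: the GPIO provider is not ready yet. */
        pr_err("gpio_request(%d) failed: %d\n", MY_GPIO, ret);
        return ret;
    }

    ret = gpio_direction_input(MY_GPIO);
    if (ret)
        goto err_free;

    /* -22 is -EINVAL; here gpio_to_irq() is only reached after a successful request. */
    ret = gpio_to_irq(MY_GPIO);
    if (ret < 0)
        goto err_free;

    my_irq = ret;
    pr_info("GPIO %d maps to IRQ %d\n", MY_GPIO, my_irq);
    return 0;

err_free:
    gpio_free(MY_GPIO);
    return ret;
}

static void __exit gpio_check_exit(void)
{
    if (my_irq >= 0)
        gpio_free(MY_GPIO);
}

module_init(gpio_check_init);
module_exit(gpio_check_exit);
MODULE_LICENSE("GPL");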
The game works fine, it just won't load the textures.
package theDwainFilms19.SuperSmashBrosMod.item;
import scala.tools.nsc.MainClass;
import theDwainFilms19.SuperSmashBrosMod.SuperSmashBrosMod;
import net.minecraft.item.ItemArmor;
import net.minecraft.item.ItemStack;
import com.sun.xml.internal.stream.Entity;
public class ItemmarioArmor extends ItemArmor {

    public ItemmarioArmor(ArmorMaterial armourMaterial, int id, int placement) {
        super(armourMaterial, id, placement);
    }

    public String getArmorTexture(ItemStack stack, Entity entity, int slot, String type) {
        if (stack.getItem().itemID == SuperSmashBrosMod.marioHelmet.itemID
                || stack.getItem().itemID == SuperSmashBrosMod.marioChestplate.itemID
                || stack.getItem().itemID == SuperSmashBrosMod.marioBoots.itemID) {
            return SuperSmashBrosMod.MODID + ":textures/models/armor/mario_layer_1.png";
        }
        if (stack.getItem().itemID == SuperSmashBrosMod.marioLeggings.itemID) {
            return SuperSmashBrosMod.MODID + ":textures/models/armor/mario_layer_2.png";
        } else {
            return null;
        }
    }
}
I don't have a clue how to fix this, since it doesn't give an error; it just ignores the code.
Edit: here is a crash report from when I tried to add a proxy:
-- System Details --
Details:
Minecraft Version: 1.7.10
Operating System: Windows 8.1 (amd64) version 6.3
Java Version: 1.8.0_31, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 687494688 bytes (655 MB) / 1037959168 bytes (989 MB) up to 1037959168 bytes (989 MB)
JVM Flags: 3 total; -Xincgc -Xmx1024M -Xms1024M
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
FML: MCP v9.05 FML v7.10.85.1291 Minecraft Forge 10.13.2.1291 4 mods loaded, 4 mods active
mcp{9.05} [Minecraft Coder Pack] (minecraft.jar) Unloaded->Constructed
FML{7.10.85.1291} [Forge Mod Loader] (forgeSrc-1.7.10-10.13.2.1291.jar) Unloaded->Constructed
Forge{10.13.2.1291} [Minecraft Forge] (forgeSrc-1.7.10-10.13.2.1291.jar) Unloaded->Constructed
ssb{1.0} [Super Smash Bros mod] (bin) Unloaded->Errored
[22:56:55] [Client thread/INFO] [STDOUT]: [net.minecraft.client.Minecraft:displayCrashReport:398]: ##!## Game crashed! Crash report saved to: ##!## C:\Users\David\Desktop\SuperSmashBrosMOd\eclipse.\crash-reports\crash-2015-02-07_22.56.55-client.txt
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release