Minecraft Forge: My armor textures are not showing up on armor

The game runs fine; it just won't load the textures.
package theDwainFilms19.SuperSmashBrosMod.item;

import scala.tools.nsc.MainClass;
import theDwainFilms19.SuperSmashBrosMod.SuperSmashBrosMod;
import net.minecraft.item.ItemArmor;
import net.minecraft.item.ItemStack;
import com.sun.xml.internal.stream.Entity;

public class ItemmarioArmor extends ItemArmor {

    public ItemmarioArmor(ArmorMaterial armourMaterial, int id, int placement) {
        super(armourMaterial, id, placement);
    }

    public String getArmorTexture(ItemStack stack, Entity entity, int slot, String type) {
        if (stack.getItem().itemID == SuperSmashBrosMod.marioHelmet.itemID
                || stack.getItem().itemID == SuperSmashBrosMod.marioChestplate.itemID
                || stack.getItem().itemID == SuperSmashBrosMod.marioBoots.itemID) {
            return SuperSmashBrosMod.MODID + ":textures/models/armor/mario_layer_1.png";
        }
        if (stack.getItem().itemID == SuperSmashBrosMod.marioLeggings.itemID) {
            return SuperSmashBrosMod.MODID + ":textures/models/armor/mario_layer_2.png";
        } else {
            return null;
        }
    }
}
I don't have a clue how to fix this, since it doesn't give any error; it just ignores the code.
Edit: here is the crash report from when I went to add a proxy.
-- System Details --
Details:
Minecraft Version: 1.7.10
Operating System: Windows 8.1 (amd64) version 6.3
Java Version: 1.8.0_31, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 687494688 bytes (655 MB) / 1037959168 bytes (989 MB) up to 1037959168 bytes (989 MB)
JVM Flags: 3 total; -Xincgc -Xmx1024M -Xms1024M
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
FML: MCP v9.05 FML v7.10.85.1291 Minecraft Forge 10.13.2.1291 4 mods loaded, 4 mods active
mcp{9.05} [Minecraft Coder Pack] (minecraft.jar) Unloaded->Constructed
FML{7.10.85.1291} [Forge Mod Loader] (forgeSrc-1.7.10-10.13.2.1291.jar) Unloaded->Constructed
Forge{10.13.2.1291} [Minecraft Forge] (forgeSrc-1.7.10-10.13.2.1291.jar) Unloaded->Constructed
ssb{1.0} [Super Smash Bros mod] (bin) Unloaded->Errored
[22:56:55] [Client thread/INFO] [STDOUT]: [net.minecraft.client.Minecraft:displayCrashReport:398]: ##!## Game crashed! Crash report saved to: ##!## C:\Users\David\Desktop\SuperSmashBrosMOd\eclipse.\crash-reports\crash-2015-02-07_22.56.55-client.txt
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release

Assembly logical shift left in Solidity doesn't work

function shift(int val) returns(int) {
    int res;
    assembly {
        let m := mload(0x40)
        mstore(m, shl(2, val))
        mstore(0x40, add(m, 0x20))
        res := mload(m)
    }
    return res;
}
Documentation:
shl(x, y) // logical shift left y by x bits
The result is always 0.
In TestRPC it doesn't work at all.
Geth version: 1.8.10-stable
OS: ubuntu 16.04
Go version: 1.9.2
I receive this warning when I compile your code:
browser/Untitled3.sol:8:23: Warning: The "shl" instruction is only available for
Constantinople-compatible VMs. You are currently compiling for "byzantium", where it will
be interpreted as an invalid instruction.
mstore(m, shl(2, val))
^---------^
It looks like there's a plan to add this instruction in the Constantinople fork, which hasn't happened yet: https://github.com/ethereum/EIPs/issues/145. For now, no such opcode exists.
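One workaround until then: shifting left by n bits is just multiplication by 2^n, so the mstore(m, shl(2, val)) above can be written as mstore(m, mul(val, 4)) (or mul(val, exp(2, 2))), assuming the usual wrap-around at 256 bits is acceptable for your values.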

GPIO Raspberry Type B with MainLine Kernel

I have compiled U-Boot and a mainline kernel downloaded from kernel.org to run on a Raspberry Pi Type B module. I have a problem using the GPIO interface. I am writing a module that manages two I/Os, one of which should generate an IRQ. When I call gpio_to_irq() or any other GPIO-related kernel API, the call always fails (return code -517 or -22). The same code works when run on the Raspberry Pi kernel downloaded from the GitHub RPi repository. The mainline kernel does, however, support the BCM2835, which is the SoC used on the RPi. Where is my approach wrong? Why do the GPIO API calls fail? If I manually find the interrupt number (virq) and request it with request_irq(), everything works fine, even with the mainline kernel from kernel.org.
The mainline kernel version is 4.11, while the RPi kernel version from GitHub is 4.9.26. Here is the init function of the module:
static int __init hello_init(void)
{
    int result;
    int temp;

    printk(KERN_INFO "Hello world!\n");
    printk(KERN_INFO "%s\n", Version);

    /* Registering device */
    result = register_chrdev(memory_major, "Bisio", &memory_fops);
    if (result < 0)
    {
        printk(KERN_INFO "Memory Driver: Cannot Obtain Major Number %d\n", memory_major);
        return -1;
    }

    /* Allocating memory for the buffer */
    memory_buffer = kmalloc(MEMSIZE, GFP_KERNEL);
    if (!memory_buffer) {
        return -ENOMEM;
    }
    memset(memory_buffer, 0, MEMSIZE);

    result = gpio_request(23, "MyIO");
    printk(KERN_INFO "gpio_request: %d\n", result);
    result = gpio_direction_input(23);
    printk(KERN_INFO "gpio_direction_input: %d\n", result);
    result = gpio_to_irq(23);
    printk(KERN_INFO "gpio_to_irq: %d\n", result);

    return 0; // Non-zero return means that the module couldn't be loaded.
}
Doing an insmod of this module, I get:
[ 109.792257] nothing: loading out-of-tree module taints kernel.
[ 109.820350] Hello world!
[ 109.829341] Driver Version 1.27 - 10/09/2015 - 10:20
[ 109.841344] gpio_request: -517
[ 109.850737] gpio_direction_input: 0
[ 109.860187] gpio_to_irq: -22
Doing the same with the RPi kernel (compiled with the same gcc cross-compiler version, 5.20, soft float) works correctly, and here is the output of the same init function:
[ 47.927565] nothing: loading out-of-tree module taints kernel.
[ 47.939771] Hello world!
[ 47.942333] Driver Version 1.27 - 10/09/2015 - 10:20
[ 47.947411] gpio_request: 0
[ 47.952798] gpio_direction_input: 0
[ 47.956314] gpio_to_irq: 183
What am I missing?
Any help will be appreciated.
Regards
Marco Bisio
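Edit: my current reading of those two return codes (just my interpretation, corrections welcome): -517 is -EPROBE_DEFER, which suggests gpio_request() ran before the GPIO controller / pinctrl driver for the BCM2835 had probed, and -22 is -EINVAL from gpio_to_irq(), i.e. no valid IRQ mapping exists for that GPIO. Below is a minimal sketch of the same init, only propagating the errors instead of printing them and continuing (GPIO 23 and the labels are just my test values):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/gpio.h>

static int __init hello_sketch_init(void)
{
    int ret;
    int irq;

    ret = gpio_request(23, "MyIO");
    if (ret) {
        /* -517 here is -EPROBE_DEFER: the GPIO provider is not ready yet */
        printk(KERN_ERR "gpio_request failed: %d\n", ret);
        return ret;
    }

    ret = gpio_direction_input(23);
    if (ret) {
        gpio_free(23);
        return ret;
    }

    irq = gpio_to_irq(23);
    if (irq < 0) {
        /* -22 here is -EINVAL: no usable IRQ mapping for this GPIO */
        printk(KERN_ERR "gpio_to_irq failed: %d\n", irq);
        gpio_free(23);
        return irq;
    }
    printk(KERN_INFO "GPIO 23 mapped to IRQ %d\n", irq);

    return 0;
}

static void __exit hello_sketch_exit(void)
{
    gpio_free(23);
}

module_init(hello_sketch_init);
module_exit(hello_sketch_exit);
MODULE_LICENSE("GPL");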

Build up multiple VxWorks instances in VMware

When I set up one VxWorks in VMware, it works. But when I create two more VxWorks instances separately, each with a different IP, the second VxWorks fails with the following (the log is from vmware.log):
2015-09-02T09:10:45.057+08:00| vcpu-0| W110: VLANCE: RDP OUT to unknown Register 100
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: VNET: MACVNetPort_SetPADR: Ethernet0: can't set PADR (0)
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: Msg_Post: Warning
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: [msg.vnet.padrConflict] MAC address 00:0C:29:5A:23:AF of adapter Ethernet0 is within the reserved address range or is in use by another virtual adapter on your system. Adapter Ethernet0 may not have network connectivity.
I am sure each VxWorks OS has its own MAC address. Another point is that I created the second VxWorks by copying the files from the first one.
Remove the macro VXWORKS_RUN_ON_VMWARE and any related code in sysLn97xEnd.c.
Everything works perfectly under VMware Workstation 11.
The MAC can be set on the VM's configuration page.
Maybe that macro was meant for a much older version of VMware Workstation.
Setting the MAC address in VMware does not work here.
You need a function that gives each system a different MAC address at boot time.
Each copy of the VM will need its own bootrom and vxworks image built for it.
(Simply use a -D macro in each project's (.wpj) makefile to switch MACs between projects with a single header, as sketched below.)
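For that -D route, here is a rough sketch of what the single header could look like (the names and the VM_NODE_ID macro are made up for illustration; each project's makefile would pass its own value, e.g. through EXTRA_DEFINE or ADDED_CFLAGS):

/* sketch: pick this image's MAC address at compile time */
#ifndef VM_NODE_ID
#define VM_NODE_ID 0          /* each project builds with -DVM_NODE_ID=<n> */
#endif

static char sysVmMacAddr[6] =
    { 0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa0 + VM_NODE_ID };

/* sysLan97xEnetAddrGet() would then do: bcopy (sysVmMacAddr, aprom, 6); */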
Here is a dirty solution for setting multiple MACs in one VM machine:
0.
Define the MAC addresses and a function to access them in ln97xEnd.c:
#define LN97_MAX_IP (4)
int ln97EndLoaded = 0;
char ln97DefineAddr[LN97_MAX_IP][6] = {
{0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa0},
{0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa1},
{0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa2},
{0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa3}
};
END_OBJ * ln97xEndList[LN97_MAX_IP] = {NULL, NULL, NULL, NULL};
char * ln97xFindDefinedAddr(LN_97X_DRV_CTRL * pDrvCtrl)
{
    int i;

    for (i = 0; i < LN97_MAX_IP; i++)
    {
        if (ln97xEndList[i] == &pDrvCtrl->endObj)
        {
            return ln97DefineAddr[i];
        }
    }

    /* no stored END_OBJ matched: caller falls back to the default address */
    return NULL;
}
1.
Modify ln97xEndLoad() in ln97xEnd.c to initialize a different MAC (and store the END_OBJ * if needed):
END_OBJ * ln97xEndLoad
...
DRV_LOG (DRV_DEBUG_LOAD, "Done loading ln97x...\n", 1, 2, 3, 4, 5, 6);
/** add to save END_OBJ* */
if (ln97EndLoaded < LN97_MAX_IP)
{
    ln97xEndList[ln97EndLoaded] = &pDrvCtrl->endObj;
    ln97EndLoaded++;
}
/** end add */
return (&pDrvCtrl->endObj);
...
2.
Change sysLan97xEnetAddrGet() in sysLn97xEnd.c.
aprom should now be set from ln97xFindDefinedAddr() instead of the hard-coded "00:0C:29:5A:23:AF".
char * ln97xFindDefinedAddr(LN_97X_DRV_CTRL * pDrvCtrl);
...
STATUS sysLan97xEnetAddrGet
...
{
char * addrDef = NULL;
...
/* modify by frankzhou to support in VMware */
#define VXWORKS_RUN_ON_VMWARE
#ifndef VXWORKS_RUN_ON_VMWARE
/* check for ASCII 'W's at APROM bytes 14 and 15 */
if ((aprom [0xe] != 'W') || (aprom [0xf] != 'W'))
{
logMsg ("sysLn97xEnetAddrGet: W's not stored in aprom\n",
0, 1, 2, 3, 4, 5);
return ERROR;
}
#endif
#ifdef VXWORKS_RUN_ON_VMWARE
/** add by bonex for multi mac addr */
addrDef = ln97xFindDefinedAddr(pDrvCtrl);
if (addrDef == NULL)
{
aprom[0]=0x00;
aprom[1]=0x0c;
aprom[2]=0x29;
aprom[3]=0x5a;
aprom[4]=0x23;
aprom[5]=0xaf;
}
else
{
bcopy (addrDef, aprom, 6);
}
/** end by bonex */
#endif
/* end by frankzhou */
...
3.
rebuild the bootrom, and rebuild the vxworks too.
result:
[telnet to vmware and check arpShow][1]
[1]: https://i.stack.imgur.com/kR9Uy.jpg
This is due to the MAC address setting in sysLn97xEnd.c. It has to be modified, and the bootrom and VxWorks image rebuilt, for each additional VxWorks node, otherwise the MAC conflict will occur.

Boost serialization: archive "unsupported version" exception

I get the exception "unsupported version" when I try to deserialize, through a text archive, data previously serialized with a newer version of Boost (1.46 to serialize, 1.38 to deserialize). Is there a way to downgrade (in code) the serialization?
Something like "set_library_version"?
See the "Error read binary archive, created by old Boost version" mail-archive post about the serialization error.
It says that the code below does the job:
void load_override(version_type & t, int version){
    library_version_type lvt = this->get_library_version();
    if (boost::archive::library_version_type(7) < lvt){
        this->detail_common_iarchive::load_override(t, version);
    }
    else if (boost::archive::library_version_type(6) < lvt){
        uint_least16_t x = 0;
        * this->This() >> x;
        t = boost::archive::version_type(x);
    }
    else if (boost::archive::library_version_type(3) == lvt ||
             boost::archive::library_version_type(5) == lvt){
#pragma message("CTMS fix for serialization bug (lack of backwards compatibility) introduced by Boost 1.45.")
        // Up to 255 versions
        unsigned char x = 0;
        * this->This() >> x;
        t = version_type(x);
    }
    else{
        unsigned int x = 0;
        * this->This() >> x;
        t = boost::archive::version_type(x);
    }
}
Using text_archive ... I had a recent issue with this also.
I recently upgraded Boost from 1.67 to 1.72 on Windows and generated some data there. When I read that data in my Linux environment, which is still on Boost 1.67, it throws "unsupported version".
The header for 1.67 looked like this
22 serialization::archive 16
and 1.72 looked like
22 serialization::archive 17
I changed 17 to 16 and it was happy for my use case.

How to calculate GPU utilization of specified process on Windows platform?

The GPUs involved are divided into integrated graphics cards and Nvidia graphics cards. The goal is GPU performance monitoring and generating a log file to upload to a server.
if ((res = nvmlDeviceGetAccountingStats(device, pids[index], &stat)) == NVML_SUCCESS) {
    qDebug("Gpu mem: %u", stat.memoryUtilization);
    qDebug("GPU: %u%%", stat.gpuUtilization);
} else {
    qDebug("get accounting stat error\n");
}
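For context, here is a minimal stand-alone sketch of the surrounding NVML calls (my assumptions: device index 0 is the Nvidia card, accounting mode has already been enabled with nvmlDeviceSetAccountingMode or nvidia-smi, and error handling is trimmed). Note that NVML only reports on the Nvidia GPU; the integrated graphics card needs a different API.

/* Sketch: list per-process GPU/memory utilization via NVML accounting stats.
 * Link against nvml.lib on Windows (or -lnvidia-ml on Linux). */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t device;
    unsigned int pids[128];
    unsigned int count = 128;
    unsigned int i;
    nvmlAccountingStats_t stat;

    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    if (nvmlDeviceGetHandleByIndex(0, &device) == NVML_SUCCESS &&
        nvmlDeviceGetAccountingPids(device, &count, pids) == NVML_SUCCESS) {
        for (i = 0; i < count; i++) {
            /* one record per process that has used the GPU */
            if (nvmlDeviceGetAccountingStats(device, pids[i], &stat) == NVML_SUCCESS)
                printf("pid %u: gpu %u%%, mem %u%%\n",
                       pids[i], stat.gpuUtilization, stat.memoryUtilization);
        }
    }

    nvmlShutdown();
    return 0;
}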