The Fedora audit log can be useful for tracing abnormal ends of programs.
# find abnormal ends (eg. segfaults)
ausearch --message ANOM_ABEND
# find entries related to a given user
ausearch -ua 500 -i
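These searches can also be combined with a time window; a quick sketch using standard ausearch(8) flags:

# abnormal ends since midnight, with interpreted (human readable) fields
ausearch -m ANOM_ABEND -ts today -i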
To set a list of unmanaged-devices you can just do the following.
cat >> /etc/NetworkManager/NetworkManager.conf <<EOF
[keyfile]
unmanaged-devices=interface-name:vboxnet0;interface-name:virbr0;interface-name:docker0
EOF
and
sudo nmcli connection reload
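To double check, the interfaces listed above should now be reported as unmanaged; a quick sketch:

# vboxnet0, virbr0 and docker0 should show "unmanaged" in the STATE column
nmcli device status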
Strangely I had to put this in NetworkManager.conf. Using
/etc/NetworkManager/conf.d/20-unmanaged-bridges.conf didn’t work.
When you dockerize your JBoss, the EXPOSE directive (luckily) doesn't open firewall ports.
On Fedora 20 you need to update your firewalld configuration:
1- add one or more services to /etc/firewalld/zones/public.xml
2- define ports in /etc/firewalld/services/eap6-standalone.xml

<service>
  <short>eap-standalone</short>
  <port port="8080" protocol="tcp" />
  ...
</service>

Now:

# restorecon -R /etc/firewalld/

Then:

# firewall-cmd --reload
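As a reference for step 1, the zone file addition might look like the following sketch (the service name must match the XML file defined in step 2):

<!-- /etc/firewalld/zones/public.xml -->
<zone>
  ...
  <service name="eap6-standalone"/>
</zone>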
The legacy route configuration on RH-like systems was ugly and error-prone: you had to compile files like the following:
# route-eth0
ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.0.253
ADDRESS1=172.16.1.0
NETMASK1=255.255.255.0
GATEWAY1=192.168.0.254
You had to preserve the enumeration and evaluate the netmasks. This was probably due to the use of the route script, whose synopsis is
route add -net $ADDRESS0 netmask $NETMASK0 gw $GATEWAY0
The “new” iproute2 suite allows a new format for route files, compatible with the output of ip route (so routes can be dumped straight into the file).
#route-eth0
10.10.10.0/24 via 192.168.0.253 dev eth0
172.16.1.0/24 via 192.168.0.254 dev eth0
At this point it's easy to create our route-ethX files starting from the ip route output.
#ip route list scope global | grep -- eth0 | grep -v 'default' > route-eth0
In this case we filtered out two kinds of entries:
* the default gateway, which could be managed via DHCP or other means like /etc/sysconfig/network:GATEWAY
* non-global scope routes, like the ones set by ip when assigning addresses.
Check
#man ip |less +/rt_scope
Eg.
#ip -4 -o a list eth2;    # show the ip
8: eth2    inet 192.168.0.40/26 brd 192.168.0.63 scope global eth2
#ip route | grep eth2     # show all eth2-related routes
192.168.0.0/26 dev eth2 proto kernel scope link src 192.168.0.40    # scope link!
10.0.10.0/24 via 192.168.0.1 dev eth2
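Once route-eth0 is in place, a quick sketch to reapply and verify the configuration (assuming the legacy network service manages eth0):

#ifdown eth0; ifup eth0
#ip route list scope global dev eth0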
ssh-copy-id doesn't really work out of the box with the root user when SELinux is enabled.
Tailing audit.log we'll see that sshd, being in the sshd_t context, can't read() the authorized_keys file, which is labeled admin_home_t.
type=AVC msg=audit(1354703208.714:285): avc: denied { read } for pid=9759 comm="sshd"
name="authorized_keys" dev=dm-0 ino=17461
scontext=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023
tcontext=unconfined_u:object_r:admin_home_t:s0
tclass=file
Checking with ls -Z we find that the DAC permissions are OK, but the MAC ones are not:
-rw-------. root root unconfined_u:object_r:admin_home_t:s0 authorized_keys
Instead of messing with audit2allow to modify the policies, we just need to run:
# restorecon -v -R .ssh/
This will search the already-provided SELinux policies and set the right fcontext for the given path.
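To preview the fcontext that restorecon will apply, without changing anything, you can query the policy with matchpathcon (usually shipped in libselinux-utils); the output should be roughly:

#matchpathcon /root/.ssh/authorized_keys
/root/.ssh/authorized_keys    system_u:object_r:ssh_home_t:s0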
To list the involved policies:
#semanage fcontext -l | grep ssh
What are jumbo frames? Jumbo frames are ethernet frames with more than 1500 bytes of payload. Conventionally, jumbo frames can carry up to 9000 bytes of payload, but variations exist and some care must be taken when using the term.
Why use jumbo frames? Enabling them on your network equipment and on your NICs, you will experience a performance boost, especially with the iSCSI protocol, which works over a standard Ethernet network.
Jumbo frames must be implemented following some rules: above all, every device along the path (NICs, switches, routers) must support jumbo frames and be configured with the same MTU, otherwise frames get dropped or fragmented.
How to enable jumbo frames on RHEL/CentOS? Enabling jumbo frames on Linux is really simple: edit the NIC configuration and append MTU=9000.
Don’t forget to enable them also on your switch/router!
# vi /etc/sysconfig/network-scripts/ifcfg-<your_nic> # ex. eth0
MTU=9000
Then restart the single interface…
ifdown eth0; ifup eth0
…or the entire network service
service network restart
After all verify that the new configuration has been correctly applied:
# ifconfig eth0
If the configuration is ok you will see a response like this:
eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
          inet addr:x.x.x.x  Bcast:x.x.x.x  Mask:x.x.x.x
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
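To verify that jumbo frames actually pass end-to-end (and not just on the local NIC), you can ping with a non-fragmentable payload; a quick sketch, assuming a peer at 192.168.0.1 also configured for MTU 9000:

# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 192.168.0.1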
If you're using bonding, you need to enable jumbo frames only in the bond device configuration:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
MTU=9000
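For reference, a minimal sketch of a complete ifcfg-bond0 (device name and bonding options are illustrative):

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100"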
Internet SCSI (iSCSI) is a network protocol that allows you to use the SCSI protocol over TCP/IP networks. It's a good alternative to Fibre Channel-based SANs. You can easily manage, mount and format iSCSI volumes under Linux.
Definitions: the iSCSI Target is the server that hosts and exports volumes to the clients. The iSCSI Initiator is the client that uses the configured volumes.
On the server you need to install this package, start the related service and ensure that it starts on boot:
# yum -y install scsi-target-utils
# service tgtd start
# chkconfig tgtd on
On the client side you need to install this package, start the related service and ensure that it starts on boot:
# yum -y install iscsi-initiator-utils
# service iscsid start
# chkconfig iscsid on
Now configure your LUNs on the target:
# vim /etc/tgt/targets.conf
This is a basic configuration for a target:
<target iqn.yyyy-mm.reverse-hostname:label>
# use backing-store to export a specific volume…
backing-store /dev/vol_group_name/logical_volume_name
# …or use direct-store to export the entire device
# direct-store /dev/sdb
</target>
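For example, a hypothetical target exporting a logical volume (the IQN and volume names are made up):

<target iqn.2014-01.com.example:storage.disk1>
    backing-store /dev/vg_storage/lv_iscsi
</target>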
Don’t forget to restart the tgtd service after a configuration update:
# service tgtd restart
Now it’s time to check if your LUNs are being exported correctly. The next command will show two LUNs for each target. The first one (LUN 0) is the controller, the second one (LUN 1) is your volume. Run this command on the target:
# tgtadm --lld iscsi --op show --mode target
Remember to enable the iSCSI ports on iptables in order to accept connections on port 3260 for both the TCP and UDP protocols!
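A quick sketch of the matching iptables rules (adapt them to your own chain layout):

# accept iSCSI connections on port 3260
iptables -A INPUT -p tcp --dport 3260 -j ACCEPT
iptables -A INPUT -p udp --dport 3260 -j ACCEPT
service iptables save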
OK, your target is now fully configured. You can log on to your client and start using the remote storage. On the client, run these commands to show the exported volumes and log in to them:
iscsiadm -m discovery -t sendtargets -p target_ipaddress
iscsiadm -m node -T target_name_iqn -p target_ipaddress --login
Now restart the iscsid service, use fdisk to find the new device under /dev and create partitions on it.
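For example, assuming the remote volume showed up as /dev/sdb (all names here are illustrative):

# fdisk -l                   # identify the new disk
# fdisk /dev/sdb             # create a partition, e.g. /dev/sdb1
# mkfs.ext4 /dev/sdb1
# mkdir -p /mnt/iscsi
# mount /dev/sdb1 /mnt/iscsi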
If you need to detach from the target you have to logout from it:
iscsiadm -m node -T target_name_iqn -p target_ipaddress --logout