path: root/include/linux
Age | Commit message | Author | Files | Lines
2009-03-25 | netlink: add NETLINK_NO_ENOBUFS socket flag | Pablo Neira Ayuso | 1 | -0/+1
This patch adds the NETLINK_NO_ENOBUFS socket flag. This flag can be used by unicast and broadcast listeners to avoid receiving ENOBUFS errors.

Generally speaking, ENOBUFS errors are useful to notify two things to the listener:
a) You may increase the receiver buffer size via setsockopt().
b) You have lost messages, you may be out of sync.

In some cases, ignoring ENOBUFS errors can be useful. For example:
a) nfnetlink_queue: this subsystem does not have any sort of resync method and you can decide to ignore ENOBUFS once you have set a given buffer size.
b) ctnetlink: you can use this together with the socket flag NETLINK_BROADCAST_SEND_ERROR to stop getting ENOBUFS errors, as you do not need to resync (packets whose events are not delivered are dropped to provide reliable logging and state synchronization).

Moreover, the use of NETLINK_NO_ENOBUFS also reduces a "go up, go down" effect in performance which is due to the netlink congestion control when the listener cannot back off. The effect is the following:
1) The throughput rate goes up and netlink messages are inserted in the receiver buffer.
2) Then the netlink buffer fills and overruns (sets nlk->state bit 0).
3) While the listener empties the receiver buffer, netlink keeps dropping messages. Thus, throughput goes dramatically down.
4) Once the listener has emptied the buffer (nlk->state bit 0 is cleared), go to step 1.

This effect is easy to trigger with netlink broadcast under heavy load, and it is more noticeable when using a big receiver buffer. You can find some results in [1] that show this problem.

[1] http://1984.lsi.us.es/linux/netlink/

This patch also includes the use of sk_drop to account for the number of netlink messages dropped due to overrun. This value is shown in /proc/net/netlink.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
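As a rough illustration of how a userspace listener might opt out of ENOBUFS reporting, the sketch below sets the new option on a netlink socket. It is a minimal sketch, assuming the flag is exposed through setsockopt() at the SOL_NETLINK level; the function name and error handling are placeholders, not part of the patch above.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    #ifndef SOL_NETLINK
    #define SOL_NETLINK 270        /* from <linux/socket.h>, if libc lacks it */
    #endif

    int open_listener_without_enobufs(int netlink_proto)
    {
            int fd = socket(AF_NETLINK, SOCK_RAW, netlink_proto);
            int on = 1;

            if (fd < 0)
                    return -1;

            /* Ask the kernel not to report dropped messages with ENOBUFS. */
            if (setsockopt(fd, SOL_NETLINK, NETLINK_NO_ENOBUFS,
                           &on, sizeof(on)) < 0)
                    perror("setsockopt(NETLINK_NO_ENOBUFS)");

            return fd;
    }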
2009-03-24 | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/kaber/nf-next-2.6 | David S. Miller | 9 | -15/+58
2009-03-23 | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | David S. Miller | 1 | -0/+20
Conflicts:
        drivers/net/ucc_geth.c
2009-03-23 | netfilter: nfnetlink: add nfnetlink_set_err and use it in ctnetlink | Pablo Neira Ayuso | 1 | -0/+1
This patch adds nfnetlink_set_err() to propagate the error to netlink broadcast listeners in case of memory allocation errors while building the message.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Patrick McHardy <kaber@trash.net>
2009-03-22 | usbnet: support net_device_ops | Stephen Hemminger | 1 | -0/+5
Use net_device_ops for usbnet device, and export for use by other derived drivers. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Acked-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: David S. Miller <davem@davemloft.net>
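For a driver built on top of usbnet, the exported helpers would typically be wired into its own net_device_ops table. The sketch below is hypothetical: the exact set of helpers exported by this change is an assumption, not spelled out by the log entry.

    /* Hypothetical minidriver glue; assumes usbnet_open(), usbnet_stop(),
     * usbnet_start_xmit() and usbnet_change_mtu() are exported by usbnet. */
    #include <linux/netdevice.h>
    #include <linux/etherdevice.h>
    #include <linux/usb/usbnet.h>

    static const struct net_device_ops example_netdev_ops = {
            .ndo_open               = usbnet_open,
            .ndo_stop               = usbnet_stop,
            .ndo_start_xmit         = usbnet_start_xmit,
            .ndo_change_mtu         = usbnet_change_mtu,
            .ndo_set_mac_address    = eth_mac_addr,
            .ndo_validate_addr      = eth_validate_addr,
    };

    /* In the derived driver's bind()/probe path:
     *         dev->net->netdev_ops = &example_netdev_ops;
     */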
2009-03-21 | skb: expose and constify hash primitives | Stephen Hemminger | 1 | -3/+6
Some minor changes to queue hashing:
1. Use const on accessor functions
2. Export skb_tx_hash for use in drivers (see ixgbe)

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
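A driver can use the newly exported helper from its queue-selection hook. A minimal sketch, assuming the 2.6.29-era prototype u16 skb_tx_hash(const struct net_device *, const struct sk_buff *); the function name below is invented:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Hypothetical multiqueue driver: pick a TX queue from the skb hash. */
    static u16 example_select_queue(struct net_device *dev, struct sk_buff *skb)
    {
            return skb_tx_hash(dev, skb);
    }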
2009-03-21 | dca: add missing copyright/license headers | Maciej Sosnowski | 1 | -0/+20
Copyright and license headers are missing from two dca files. This patch adds them.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-20 | net: reorder struct Qdisc for better SMP performance | Eric Dumazet | 1 | -1/+1
dev_queue_xmit() needs to dirty the fields "state", "q", "bstats" and "qstats". On x86_64 they currently span three cache lines, involving more cache line ping-pongs than necessary and lengthening the time the queue spinlock is held. We can reduce this to one cache line by grouping all read-mostly fields at the beginning of the structure. (Or should I say, all highly modified fields at the end :))

Before patch:
offsetof(struct Qdisc, state)=0x38
offsetof(struct Qdisc, q)=0x48
offsetof(struct Qdisc, bstats)=0x80
offsetof(struct Qdisc, qstats)=0x90
sizeof(struct Qdisc)=0xc8

After patch:
offsetof(struct Qdisc, state)=0x80
offsetof(struct Qdisc, q)=0x88
offsetof(struct Qdisc, bstats)=0xa0
offsetof(struct Qdisc, qstats)=0xac
sizeof(struct Qdisc)=0xc0

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
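The general pattern is to keep read-mostly members together at the front and push the fields that transmit dirties toward the end so they share a cache line. The structure below is illustrative only; its field names are invented and it is not the actual Qdisc layout from this patch.

    #include <linux/skbuff.h>

    struct example_qdisc_layout {
            /* read-mostly: consulted on every xmit but rarely written */
            int                     (*enqueue)(struct sk_buff *skb, void *priv);
            unsigned int            flags;
            const void              *ops;

            /* write-hot: state, queue and stats dirtied by dev_queue_xmit(),
             * grouped so they can share a single cache line */
            unsigned long           state;
            struct sk_buff_head     q;
            unsigned long           bytes;
            unsigned long           packets;
    };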
2009-03-20 | rtnetlink: add new value for DHCP added routes | Stephen Hemminger | 1 | -0/+1
To improve manageability, it would be good to be able to disambiguate routes added by an administrator from those added by a DHCP client. The only necessary kernel change is to add a value to the rtnetlink include file so that the iproute2 utility can use it.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
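The log entry does not name the new value. Assuming it is an RTPROT_DHCP routing-protocol constant in <linux/rtnetlink.h>, a DHCP client would tag the routes it installs roughly as sketched below (helper name invented):

    /* Hedged sketch: RTPROT_DHCP as the origin tag is an assumption here. */
    #include <linux/rtnetlink.h>

    static void example_fill_route_header(struct rtmsg *rtm)
    {
            rtm->rtm_family   = AF_INET;
            rtm->rtm_table    = RT_TABLE_MAIN;
            rtm->rtm_protocol = RTPROT_DHCP;    /* "this route came from DHCP" */
            rtm->rtm_scope    = RT_SCOPE_UNIVERSE;
            rtm->rtm_type     = RTN_UNICAST;
    }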
2009-03-18 | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 | David S. Miller | 1 | -0/+67
2009-03-17 | cfg80211: add regulatory netlink multicast group | Luis R. Rodriguez | 1 | -0/+48
This allows us to send "regulatory" events to userspace. For now we just send an event when we change regulatory domains. We also notify userspace when devices are using their own custom world-roaming regulatory domains.

Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2009-03-17 | cfg80211: move enum reg_set_by to nl80211.h | Luis R. Rodriguez | 1 | -0/+19
We do this so we can later inform userspace who set the regulatory domain and provide details of the request. Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2009-03-16 | GRO: Move netpoll checks to correct location | Herbert Xu | 2 | -0/+19
As my netpoll fix for net doesn't really work for net-next, we need this update to move the checks into the right place. As it stands we may pass freed skbs to netpoll_receive_skb.

This patch also introduces a netpoll_rx_on function to avoid GRO completely if we're invoked through netpoll. This might seem paranoid but as netpoll may have an external receive hook it's better to be safe than sorry. I don't think we need this for 2.6.29 though since there's nothing immediately broken by it.

This patch also moves the GRO_* return values to netdevice.h since VLAN needs them too (I tried to avoid this originally but alas this seems to be the easiest way out). This fixes a bug in VLAN where it continued to use the old return value 2 instead of the correct GRO_DROP.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-16 | netfilter: xtables: add cluster match | Pablo Neira Ayuso | 2 | -0/+16
This patch adds the iptables cluster match. This match can be used to deploy gateway and back-end load-sharing clusters. The cluster can be composed of 32 nodes maximum (although I have only tested this with two nodes, so I cannot tell what the real scalability limit of this solution is in terms of cluster nodes).

Assuming that all the nodes see all packets (see below for an example of how to do that if your switch does not allow this), the cluster match decides if this node has to handle a packet given:

        (jhash(source IP) % total_nodes) & node_mask

For related connections, the master conntrack is used. The following is an example of its use to deploy a gateway cluster composed of two nodes (where this is node 1):

        iptables -I PREROUTING -t mangle -i eth1 -m cluster \
                --cluster-total-nodes 2 --cluster-local-node 1 \
                --cluster-proc-name eth1 -j MARK --set-mark 0xffff
        iptables -A PREROUTING -t mangle -i eth1 \
                -m mark ! --mark 0xffff -j DROP
        iptables -A PREROUTING -t mangle -i eth2 -m cluster \
                --cluster-total-nodes 2 --cluster-local-node 1 \
                --cluster-proc-name eth2 -j MARK --set-mark 0xffff
        iptables -A PREROUTING -t mangle -i eth2 \
                -m mark ! --mark 0xffff -j DROP

And the following commands to make all nodes see the same packets:

        ip maddr add 01:00:5e:00:01:01 dev eth1
        ip maddr add 01:00:5e:00:01:02 dev eth2
        arptables -I OUTPUT -o eth1 --h-length 6 \
                -j mangle --mangle-mac-s 01:00:5e:00:01:01
        arptables -I INPUT -i eth1 --h-length 6 \
                --destination-mac 01:00:5e:00:01:01 \
                -j mangle --mangle-mac-d 00:zz:yy:xx:5a:27
        arptables -I OUTPUT -o eth2 --h-length 6 \
                -j mangle --mangle-mac-s 01:00:5e:00:01:02
        arptables -I INPUT -i eth2 --h-length 6 \
                --destination-mac 01:00:5e:00:01:02 \
                -j mangle --mangle-mac-d 00:zz:yy:xx:5a:27

In the case of TCP connections, the pickup facility has to be disabled to avoid marking TCP ACK packets coming in the reply direction as valid:

        echo 0 > /proc/sys/net/netfilter/nf_conntrack_tcp_loose

BTW, some final notes:

 * This match mangles the skbuff pkt_type in case that it detects PACKET_MULTICAST for a non-multicast address. This may be done in a PKTTYPE target for this sole purpose.
 * This match supersedes the CLUSTERIP target.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Patrick McHardy <kaber@trash.net>
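In code terms, the node-selection rule quoted above boils down to something like the following sketch. The helper is invented for illustration and is not the actual xt_cluster implementation.

    #include <linux/types.h>
    #include <linux/jhash.h>

    /* Sketch of "(jhash(source IP) % total_nodes) & node_mask":
     * returns true when this node is responsible for the packet. */
    static bool example_cluster_is_mine(__be32 saddr, u32 total_nodes,
                                        u32 node_mask)
    {
            u32 node = jhash_1word((__force u32)saddr, 0) % total_nodes;

            return (1u << node) & node_mask;
    }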
2009-03-16 | netfilter: xtables: avoid pointer to self | Jan Engelhardt | 3 | -8/+12
Commit 784544739a25c30637397ace5489eeb6e15d7d49 (netfilter: iptables: lock free counters) broke a number of modules whose rule data referenced itself. A reallocation would not reestablish the correct references, so it is best to use a separate struct that does not fall under RCU. Signed-off-by: Jan Engelhardt <jengelh@medozas.de> Signed-off-by: Patrick McHardy <kaber@trash.net>
2009-03-16 | tcp: cache result of earlier divides when mss-aligning things | Ilpo Järvinen | 1 | -0/+1
The result is very unlikely to change often, so we hardly need to divide again after doing that once for a connection. Yet, if a divide still becomes necessary, we detect that, do the right thing, and again settle for the non-divide state. This takes the u16 space which was previously taken by the plain xmit_size_goal.

This should take care of part of the tso vs non-tso difference we found earlier.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-16 | tcp: simplify tcp_current_mss | Ilpo Järvinen | 1 | -1/+0
There's very little need for most of the callsites to get tp->xmit_goal_size updated. That costs us a divide as is, so slice the function in two. Also, the only users of tp->xmit_goal_size are directly behind tcp_current_mss(), so there's no need to store that variable in tcp_sock at all! Dropping xmit_goal_size currently leaves a 16-bit hole, and some reorganization would again be necessary to change that (but I'm aiming to fill that hole with a u16 xmit_goal_size_segs that caches the result of the remaining divide, to address the tso regression).

Bring the xmit_goal_size parts into tcp.c.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Cc: Evgeniy Polyakov <zbr@ioremap.net>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-16 | net: reorder fields of struct socket | Eric Dumazet | 1 | -2/+6
On x86_64, it's rather unfortunate that the "wait_queue_head_t wait" field of "struct socket" spans two cache lines (assuming a 64-byte cache line in current cpus):

offsetof(struct socket, wait)=0x30
sizeof(wait_queue_head_t)=0x18

This might explain why Kenny Chang noticed that his multicast workload was performing badly with 64-bit kernels, since more cache line ping-pongs were involved. This little patch moves the "wait" field next to "fasync_list" so that both fields share a single cache line, to speed up sock_def_readable().

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-14 | ppp: ppp_mp_explode() redesign | Gabriele Paoloni | 1 | -1/+1
I found the PPP subsystem to not work properly when connecting channels with different speeds to the same bundle.

Problem Description:

As the "ppp_mp_explode" function fragments the sk_buff buffer evenly among the PPP channels that are connected to a certain PPP unit to make up a bundle, if we are transmitting using an upper layer protocol that requires an Ack before sending the next packet (like TCP/IP for example), we will have a bandwidth bottleneck on the slowest channel of the bundle.

Let's clarify with an example. Consider a scenario where we have two PPP links making up a bundle: a slow link (10KB/sec) and a fast link (1000KB/sec), both working at full bandwidth. On top we have a TCP/IP stack sending a 1000 byte sk_buff buffer down to the PPP subsystem. The "ppp_mp_explode" function will divide the buffer into two fragments of 500B each (neglecting all the headers, crc, flags etc.). Before the TCP/IP stack sends out the next buffer, it will have to wait for the ACK response from the remote peer, so it will have to wait for both fragments to have been sent over the two PPP links, received by the remote peer and reconstructed. The resulting behaviour is that, rather than having a bundle working at 1010KB/sec (the sum of the channel bandwidths), we'll have a bundle working at 20KB/sec (double the slowest channel's bandwidth).

Problem Solution:

The problem has been solved by redesigning the "ppp_mp_explode" function in such a way as to make it split the sk_buff buffer according to the speeds of the underlying PPP channels (the speeds of the serial interfaces respectively attached to the PPP channels). Referring to the above example, the redesigned "ppp_mp_explode" function will now divide the 1000 byte buffer into two fragments whose sizes are set according to the speeds of the channels they are going to be sent on (e.g. 10 bytes on the 10KB/sec channel and 990 bytes on the 1000KB/sec channel). The reworked function grants the same performance as the original one under optimal working conditions (i.e. a bundle made up of PPP links all working at the same speed), while greatly improving performance on bundles made up of channels working at different speeds.

Signed-off-by: Gabriele Paoloni <gabriele.paoloni@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
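The speed-proportional split described above amounts to simple integer arithmetic; a minimal sketch follows (names invented, not the actual ppp_generic.c code). With the example numbers above it yields roughly 9 bytes for the slow channel and 991 for the fast one, the remainder going to the last channel.

    #include <linux/types.h>

    /* Split 'len' bytes across 'n' channels in proportion to their speeds.
     * The last channel absorbs the rounding remainder. */
    static void example_split_by_speed(unsigned int len,
                                       const unsigned int *speed, /* bytes/sec */
                                       unsigned int *frag_len, int n)
    {
            unsigned int total_speed = 0, used = 0;
            int i;

            for (i = 0; i < n; i++)
                    total_speed += speed[i];

            for (i = 0; i < n - 1; i++) {
                    frag_len[i] = (unsigned int)((unsigned long long)len *
                                                 speed[i] / total_speed);
                    used += frag_len[i];
            }
            frag_len[n - 1] = len - used;   /* remainder to the last channel */
    }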
2009-03-14 | phylib: convert state_queue work to delayed_work | Marcin Slusarz | 1 | -2/+1
This closes a race in phy_stop_machine when reprogramming of phy_timer (from phy_state_machine) happens between del_timer_sync and cancel_work_sync. Without this change it could lead to a crash if the phy_device were freed after phy_stop_machine (the timer would fire and schedule freed work).

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Acked-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
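The timer-plus-work pair being replaced follows the usual delayed_work pattern. Below is a rough, generic sketch of that pattern (invented structure and function names, not the actual phylib diff): one cancel_delayed_work_sync() call tears down both the "timer" and the work, which is what removes the del_timer_sync/cancel_work_sync race window.

    #include <linux/kernel.h>
    #include <linux/workqueue.h>

    struct example_phy {
            struct delayed_work state_queue;
    };

    static void example_state_machine(struct work_struct *work)
    {
            struct example_phy *p =
                    container_of(work, struct example_phy, state_queue.work);

            /* ... run the state machine, then re-arm ... */
            schedule_delayed_work(&p->state_queue, HZ);
    }

    static void example_start(struct example_phy *p)
    {
            INIT_DELAYED_WORK(&p->state_queue, example_state_machine);
            schedule_delayed_work(&p->state_queue, HZ);
    }

    static void example_stop(struct example_phy *p)
    {
            /* Cancels a pending timer and waits for a running work item. */
            cancel_delayed_work_sync(&p->state_queue);
    }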
2009-03-13 | Network Drop Monitor: Adding Build changes to enable drop monitor | Neil Horman | 1 | -0/+1
Network Drop Monitor: Adding Build changes to enable drop monitor

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>

 include/linux/Kbuild |  1 +
 net/Kconfig          | 11 +++++++++++
 net/core/Makefile    |  1 +
 3 files changed, 13 insertions(+)

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-13 | Network Drop Monitor: Adding drop monitor implementation & Netlink protocol | Neil Horman | 1 | -0/+56
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>

 include/linux/net_dropmon.h |  56 +++++++++
 net/core/drop_monitor.c     | 263 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 319 insertions(+)

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-13 | Network Drop Monitor: Adding kfree_skb_clean for non-drops and modifying end-of-line points for skbs | Neil Horman | 1 | -1/+3
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>

 include/linux/skbuff.h |  4 +++-
 net/core/datagram.c    |  2 +-
 net/core/skbuff.c      | 22 ++++++++++++++++++++++
 net/ipv4/arp.c         |  2 +-
 net/ipv4/udp.c         |  2 +-
 net/packet/af_packet.c |  2 +-
 6 files changed, 29 insertions(+), 5 deletions(-)

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-05 | ssb: Add SPROM fallback support | Michael Buesch | 1 | -0/+4
This adds SSB functionality to register a fallback SPROM image from the architecture setup code. Weird architectures exist that have half-assed SSB devices without SPROM attached to their PCI busses. The architecture can register a fallback SPROM image that is used if no SPROM is found on the SSB device. Signed-off-by: Michael Buesch <mb@bu3sch.de> Cc: Florian Fainelli <florian@openwrt.org> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2009-03-05 | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | David S. Miller | 7 | -1/+40
Conflicts:
        drivers/net/tokenring/tmspci.c
        drivers/net/ucc_geth_mii.c
2009-03-05 | Merge branch 'master' of /home/davem/src/GIT/linux-2.6/ | David S. Miller | 6 | -1/+39
2009-03-05 | vlan: Fix vlan-in-vlan crashes. | David S. Miller | 1 | -0/+1
As analyzed by Patrick McHardy, vlan needs to reset its netdev_ops pointer in its ->init() function, but this leaves the compat method pointers stale. Add a netdev_resync_ops() and call it from the vlan code. Any other driver which changes ->netdev_ops after register_netdevice() will need to call this new function after doing so too.

With help from Patrick McHardy.

Tested-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
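The expected calling convention, as described above, is roughly the following. This is only a sketch: the exact prototype of netdev_resync_ops() is not shown in this log entry and is assumed here, and the surrounding driver code is hypothetical.

    #include <linux/netdevice.h>

    /* Hypothetical driver that swaps its ops after registration. */
    static void example_switch_ops(struct net_device *dev,
                                   const struct net_device_ops *new_ops)
    {
            dev->netdev_ops = new_ops;
            /* Refresh the compat method pointers so they are not left stale. */
            netdev_resync_ops(dev);
    }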
2009-03-04 | neigh: Allow for user space users of the neighbour table | Eric Biederman | 1 | -0/+1
Currently it is possible to do just about everything with the arp table from user space except treat an entry as if you were using it. To that end, implement a flag NTF_USE that, when set in a netlink update request, treats the neighbour table entry the way the kernel does on the output path. This allows user space applications to share the kernel's arp cache.

Signed-off-by: Eric Biederman <ebiederm@aristanetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
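From user space this would be an RTM_NEWNEIGH request carrying the flag. A minimal header-filling sketch, with the netlink message assembly, interface index handling and neighbour state left out; the helper name and the chosen ndm_state are placeholders:

    #include <sys/socket.h>
    #include <linux/neighbour.h>   /* struct ndmsg, NTF_USE, NUD_* */

    static void example_fill_neigh_use(struct ndmsg *ndm, int ifindex)
    {
            ndm->ndm_family  = AF_INET;
            ndm->ndm_ifindex = ifindex;
            ndm->ndm_state   = NUD_PERMANENT;  /* placeholder state */
            ndm->ndm_flags   = NTF_USE;        /* "treat this entry as used" */
    }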
2009-03-04 | Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 | -0/+4
* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  sched: don't allow setuid to succeed if the user does not have rt bandwidth
  sched_rt: don't start timer when rt bandwidth disabled
2009-03-04 | Merge branch 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 4 | -0/+31
* 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  rcu: Teach RCU that idle task is not quiscent state at boot
2009-03-03 | Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 | -1/+4
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  fix warning in io_mapping_map_wc()
  x86: i915 needs pgprot_writecombine() and is_io_mapping_possible()
2009-03-02 | skbuff.h: fix timestamps kernel-doc | Randy Dunlap | 1 | -4/+2
Fix skbuff.h kernel-doc for timestamps: must include "struct" keyword, otherwise there are kernel-doc errors:

Error(linux-next-20090227//include/linux/skbuff.h:161): cannot understand prototype: 'struct skb_shared_hwtstamps '
Error(linux-next-20090227//include/linux/skbuff.h:177): cannot understand prototype: 'union skb_shared_tx '

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-02 | wimax/i2400m: implement RX reorder support | Inaky Perez-Gonzalez | 1 | -4/+28
Allow the device to give the driver RX data with reorder information. When that is done, the device will indicate to the driver whether a packet has to be held in a (sorted) queue. It will also tell the driver when held packets have to be released to the OS. This is done to improve the WiMAX-protocol level retransmission support when missing frames are detected.

The code docs provide details about the implementation. In general, this just hooks into the RX path in rx.c; if a packet with the reorder bit in the RX header is detected, the reorder information in the header is extracted and one of the four main reorder operations is executed. In one case (queue) no packet will be delivered to the networking stack, just queued, whereas in the others (reset, update_ws and queue_update_ws), queued packets might be delivered depending on the window start for the specific queue.

The modifications to files other than rx.c are:

- control.c: during device initialization, enable reordering support if the rx_reorder_disabled module parameter is not enabled
- driver.c: expose a rx_reorder_disable module parameter and call i2400m_rx_setup/release() to initialize/shutdown RX reorder support.
- i2400m.h: introduce members in 'struct i2400m' needed for implementing reorder support.
- linux/i2400m.h: introduce TLVs, commands and constant definitions related to RX reorder

Last but not least, the rx reorder code includes a small circular log where the last N reorder operations are recorded, to be displayed in case of inconsistency. Otherwise diagnosing issues would be almost impossible.

Signed-off-by: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-02 | wimax/i2400m: support extended data RX protocol (no need to reallocate skbs) | Inaky Perez-Gonzalez | 1 | -0/+35
Newer i2400m firmwares (>= v1.4) extend the data RX protocol so that each packet has a 16 byte header. This header is mainly used to implement host reordering (which is addressed in later commits). However, this header also allows us to overwrite it (once data has been extracted) with an Ethernet header and deliver to the networking stack without having to reallocate the skb (as happened in fw <= v1.3) to make room for it.

- control.c: indicate to the device [dev_initialize()] that the driver wants to use the extended data RX protocol. Also involves adding the definition of the needed data types in include/linux/wimax/i2400m.h.
- rx.c: handle the new payload type for the extended RX data protocol. Prepares the skb for delivery to netdev.c:i2400m_net_erx().
- netdev.c: Introduce i2400m_net_erx() that adds the fake ethernet address to a prepared skb and delivers it to the networking stack.
- cleanup: in most instances in rx.c, the variable 'single' was renamed to 'single_last' as it better conveys its meaning.

Signed-off-by: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-02 | wimax: struct device - replace bus_id with dev_name(), dev_set_name() | Kay Sievers | 1 | -1/+1
Cc: inaky.perez-gonzalez@intel.com
Cc: linux-wimax@intel.com
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-02 | wimax/i2400m: allow control of the base-station idle mode timeout | Inaky Perez-Gonzalez | 1 | -0/+10
For power saving reasons, WiMAX links can be put in idle mode while connected after a certain time of the link not being used for tx or rx. In this mode, the device pages the base-station regularly and, when data is ready to be transmitted, the link is revived.

This patch allows the user to control, via a sysfs interface, the time the device has to be idle before it decides to go to idle mode. It also updates the initialization code to acknowledge the module variable 'idle_mode_disabled' when the firmware is a newer version (upcoming 1.4 vs 2.6.29's v1.3). The method for setting the idle mode timeout in the older firmwares is much more limited and can only be done at initialization time. Thus, the sysfs file will return -ENOSYS on older ones.

Signed-off-by: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-02 | tcp: kill eff_sacks "cache", the sole user can calculate itself | Ilpo Järvinen | 1 | -1/+0
Also fixes insignificant bug that would cause sending of stale SACK block (would occur in some corner cases). Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-02 | fix warning in io_mapping_map_wc() | Pallipadi, Venkatesh | 1 | -1/+4
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-03-02 | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | David S. Miller | 9 | -16/+47
Conflicts:
        drivers/net/wireless/iwlwifi/iwl-tx.c
        net/8021q/vlan_core.c
        net/core/dev.c
2009-03-01 | net headers: export dcbnl.h | Chris Leech | 1 | -0/+1
The DCB netlink interface is required for building the userspace tools available at e1000.sourceforge.net Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-01 | net headers: cleanup dcbnl.h | Chris Leech | 1 | -1/+3
1) add an include for <linux/types.h>
2) change dcbmsg.dcb_family from unsigned char to __u8 to be more consistent with use of kernel types

Signed-off-by: Chris Leech <christopher.leech@intel.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-01 | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 | David S. Miller | 1 | -0/+5
2009-03-01 | Merge branch 'master' of /home/davem/src/GIT/linux-2.6/ | David S. Miller | 7 | -15/+43
2009-02-28 | Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 | -11/+35
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: enable DMAR by default
  xen: disable interrupts early, as start_kernel expects
  gpu/drm, x86, PAT: io_mapping_create_wc and resource_size_t
  gpu/drm, x86, PAT: Handle io_mapping_create_wc() errors in a clean way
  x86, Voyager: fix compile by lifting the degeneracy of phys_cpu_present_map
  x86, doc: fix references to Documentation/x86/i386/boot.txt
2009-02-28 | Fix recursive lock in free_uid()/free_user_ns() | David Howells | 1 | -0/+1
free_uid() and free_user_ns() are corecursive when CONFIG_USER_SCHED=n, but free_user_ns() is called from free_uid() by way of uid_hash_remove(), which requires uidhash_lock to be held. free_user_ns() then calls free_uid() to complete the destruction. Fix this by deferring the destruction of the user_namespace. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Serge Hallyn <serue@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-02-27 | nl80211: Provide access to STA TX/RX packet counters | Jouni Malinen | 1 | -0/+5
The TX/RX packet counters are needed to fill in RADIUS Accounting attributes Acct-Output-Packets and Acct-Input-Packets. We already collect the needed information, but only the TX/RX bytes were previously exposed through nl80211. Allow applications to fetch the packet counters, too, to provide more complete support for accounting. Signed-off-by: Jouni Malinen <jouni.malinen@atheros.com> Acked-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
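On the userspace side, the new counters would show up as extra attributes in the station-info reply. A hedged sketch of pulling them out of an already-parsed station-info nest (libnl style; the attribute names NL80211_STA_INFO_RX_PACKETS and NL80211_STA_INFO_TX_PACKETS are assumed, and the surrounding netlink request/parse code is omitted):

    #include <stdio.h>
    #include <linux/nl80211.h>
    #include <netlink/attr.h>

    /* 'sinfo' is the NL80211_ATTR_STA_INFO nest, already split into an array
     * indexed by NL80211_STA_INFO_* (e.g. via nla_parse_nested()). */
    static void example_print_packet_counters(struct nlattr *sinfo[])
    {
            if (sinfo[NL80211_STA_INFO_RX_PACKETS])
                    printf("rx packets: %u\n",
                           nla_get_u32(sinfo[NL80211_STA_INFO_RX_PACKETS]));
            if (sinfo[NL80211_STA_INFO_TX_PACKETS])
                    printf("tx packets: %u\n",
                           nla_get_u32(sinfo[NL80211_STA_INFO_TX_PACKETS]));
    }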
2009-02-27 | sched: don't allow setuid to succeed if the user does not have rt bandwidth | Dhaval Giani | 1 | -0/+4
Impact: fix hung task with certain (non-default) rt-limit settings

Corey Hickey reported that on using setuid to change the uid of a rt process, the process would be unkillable and not be running. This is because there was no rt runtime for that user group. Add in a check to see if a user can attach an rt task to its task group. On failure, return EINVAL, which is also returned in CONFIG_CGROUP_SCHED.

Reported-by: Corey Hickey <bugfood-ml@fatooh.org>
Signed-off-by: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-27 | RDS: Add userspace header | Andy Grover | 1 | -0/+250
Applications include this header in order to use RDS sockets. Signed-off-by: Andy Grover <andy.grover@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-27 | RDS: Add AF and PF #defines for RDS sockets | Andy Grover | 1 | -0/+3
RDS is a reliable datagram protocol used for IPC on Oracle database clusters. This adds address and protocol family numbers for it. Signed-off-by: Andy Grover <andy.grover@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
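With the family numbers in place, an application would open an RDS socket in the usual BSD-socket way. A minimal sketch, assuming SOCK_SEQPACKET as the socket type and a local fallback define in case the libc headers predate this patch; the helper name is invented:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef PF_RDS
    #define PF_RDS 21   /* assumed to match the value added by this patch */
    #endif

    int example_open_rds_socket(in_addr_t local_ip, unsigned short port)
    {
            struct sockaddr_in sin;
            int fd = socket(PF_RDS, SOCK_SEQPACKET, 0);

            if (fd < 0) {
                    perror("socket(PF_RDS)");
                    return -1;
            }

            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;       /* RDS endpoints are named by IP/port */
            sin.sin_addr.s_addr = local_ip;
            sin.sin_port = htons(port);

            if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
                    perror("bind");

            return fd;
    }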
2009-02-26 | block: reduce stack footprint of blk_recount_segments() | Jens Axboe | 1 | -0/+2
blk_recalc_rq_segments() requires a request structure passed in, which we don't have from blk_recount_segments(). So the latter allocates one on the stack, using > 400 bytes of stack for that. This can cause us to spill over one page of stack from ext4 at least:

 0)     4560     400   blk_recount_segments+0x43/0x62
 1)     4160      32   bio_phys_segments+0x1c/0x24
 2)     4128      32   blk_rq_bio_prep+0x2a/0xf9
 3)     4096      32   init_request_from_bio+0xf9/0xfe
 4)     4064     112   __make_request+0x33c/0x3f6
 5)     3952     144   generic_make_request+0x2d1/0x321
 6)     3808      64   submit_bio+0xb9/0xc3
 7)     3744      48   submit_bh+0xea/0x10e
 8)     3696     368   ext4_mb_init_cache+0x257/0xa6a [ext4]
 9)     3328     288   ext4_mb_regular_allocator+0x421/0xcd9 [ext4]
10)     3040     160   ext4_mb_new_blocks+0x211/0x4b4 [ext4]
11)     2880     336   ext4_ext_get_blocks+0xb61/0xd45 [ext4]
12)     2544      96   ext4_get_blocks_wrap+0xf2/0x200 [ext4]
13)     2448      80   ext4_da_get_block_write+0x6e/0x16b [ext4]
14)     2368     352   mpage_da_map_blocks+0x7e/0x4b3 [ext4]
15)     2016     352   ext4_da_writepages+0x2ce/0x43c [ext4]
16)     1664      32   do_writepages+0x2d/0x3c
17)     1632     144   __writeback_single_inode+0x162/0x2cd
18)     1488      96   generic_sync_sb_inodes+0x1e3/0x32b
19)     1392      16   sync_sb_inodes+0xe/0x10
20)     1376      48   writeback_inodes+0x69/0xb3
21)     1328     208   balance_dirty_pages_ratelimited_nr+0x187/0x2f9
22)     1120     224   generic_file_buffered_write+0x1d4/0x2c4
23)      896     176   __generic_file_aio_write_nolock+0x35f/0x393
24)      720      80   generic_file_aio_write+0x6c/0xc8
25)      640      80   ext4_file_write+0xa9/0x137 [ext4]
26)      560     320   do_sync_write+0xf0/0x137
27)      240      48   vfs_write+0xb3/0x13c
28)      192      64   sys_write+0x4c/0x74
29)      128     128   system_call_fastpath+0x16/0x1b

Split the segment counting out into a __blk_recalc_rq_segments() helper to avoid allocating an onstack request just for checking the physical segment count.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
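The shape of the fix, as described, is to pass the helper only what the calculation needs instead of faking up a whole request on the stack. A hedged sketch of that pattern follows; the actual helper signature and body in block/blk-merge.c may differ.

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    /* Sketch only: assumed helper that works from a bio directly. */
    static unsigned int __example_recalc_segments(struct request_queue *q,
                                                  struct bio *bio);

    static void example_recount_segments(struct request_queue *q, struct bio *bio)
    {
            /* No ~400-byte struct request on the stack anymore; hand the
             * helper just the queue and the bio it actually inspects. */
            bio->bi_phys_segments = __example_recalc_segments(q, bio);
            bio->bi_flags |= (1 << BIO_SEG_VALID);
    }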