~ubuntu-branches/ubuntu/precise/linux-lowlatency/precise-proposed

Viewing changes to drivers/net/hyperv/netvsc.c

  • Committer: Package Import Robot
  • Author(s): Luke Yelavich, Andy Whitcroft, Chase Douglas, Eugeni Dodonov, Ingo Molnar, Johannes Berg, John Johansen, Kees Cook, Leann Ogasawara, Robert Hooker, Seth Heasley, Tim Gardner, Luke Yelavich, Upstream Kernel Changes
  • Date: 2012-03-09 10:21:12 UTC
  • Revision ID: package-import@ubuntu.com-20120309102112-s1abu8w051stx2rl
Tags: 3.2.0-18.26
[ Andy Whitcroft ]

* [Config] clean up the human consumable package descriptions
* [Config] fix generic flavour description
* [Config] clean up linux-tools package descriptions
  - LP: #593107
* deviations -- note the source of the Hyper-V updates
* SAUCE: ata_piix: defer to the Hyper-V drivers by default
  - LP: #929545
* ubuntu: AUFS -- adapt to the new changelog handling
* ubuntu: AUFS -- sort out the relative header paths
* ubuntu: AUFS -- update to d266b0c5d0693d6383976ee54b9e2c0fa9a3f5b0

[ Chase Douglas ]

* SAUCE: (drop after 3.3) HID: hid-magicmouse: Add pointer and buttonpad
  properties for Magic Trackpad
* SAUCE: Input: synaptics - add second variant of two-button clickpad
* SAUCE: Input: synaptics - Set buttonpad property for all clickpads

[ Eugeni Dodonov ]

* SAUCE: drm/i915: do not enable RC6p on Sandy Bridge
* SAUCE: drm/i915: fix operator precedence when enabling RC6p

[ Ingo Molnar ]

* ubuntu: nx-emu - i386: NX emulation

[ Johannes Berg ]

* SAUCE: iwlwifi: fix key removal
  - LP: #911059

[ John Johansen ]

* Revert "SAUCE: AppArmor: Fix unpack of network tables."
* Revert "SAUCE: AppArmor: Allow dfa backward compatibility with broken
  userspace"
* SAUCE: AppArmor: Add missing end of structure test to caps unpacking
* SAUCE: AppArmor: Fix dropping of allowed operations that are force
  audited
* SAUCE: AppArmor: Fix underflow in xindex calculation
* SAUCE: AppArmor: fix mapping of META_READ to audit and quiet flags
* SAUCE: AppArmor: Fix the error case for chroot relative path name
  lookup
  - LP: #925028
* SAUCE: AppArmor: Retrieve the dentry_path for error reporting when path
  lookup fails
  - LP: #925028
* SAUCE: AppArmor: Minor cleanup of d_namespace_path to consolidate error
  handling
* SAUCE: AppArmor: Update dfa matching routines.
* SAUCE: AppArmor: Move path failure information into aa_get_name and
  rename
* SAUCE: AppArmor: Make chroot relative the default path lookup type
* SAUCE: AppArmor: Add ability to load extended policy
* SAUCE: AppArmor: basic networking rules
* SAUCE: AppArmor: Add profile introspection file to interface
* SAUCE: AppArmor: Add the ability to mediate mount
* SAUCE: AppArmor: Add mount information to apparmorfs

[ Kees Cook ]

* SAUCE: (drop after 3.3) security: create task_free security callback
* SAUCE: (drop after 3.3) security: Yama LSM
* SAUCE: (drop after 3.3) Yama: add PR_SET_PTRACER_ANY
* SAUCE: Yama: add link restrictions
* SAUCE: security: unconditionally chain to Yama LSM
* SAUCE: AppArmor: refactor securityfs to use structures
* SAUCE: AppArmor: add initial "features" directory to securityfs
* SAUCE: AppArmor: add "file" details to securityfs
* SAUCE: AppArmor: export known rlimit names/value mappings in securityfs
* ubuntu: Yama - LSM hooks
* ubuntu: Yama - add ptrace relationship tracking interface
* ubuntu: Yama - unconditionally chain to Yama LSM

[ Leann Ogasawara ]

* Revert "[Config] Enable CONFIG_NVRAM=m"
  - LP: #942193
* Drop ndiswrapper
* Ubuntu-3.2.0-17.26
* Ubuntu-3.2.0-17.27
* Rebase to v3.2.7
* [Config] Enable CONFIG_USB_SERIAL_QUATECH2=m on arm and powerpc
* [Config] Enable CONFIG_USB_SERIAL_QUATECH_USB2=m on arm and powerpc
* [Config] Add CONFIG_NVRAM to config enforcer
  - LP: #942193
* [Config] Enable CONFIG_SCSI_IBMVSCSI=m for powerpc
  - LP: #943090
* [Config] Enable CONFIG_SCSI_IPR=m for powerpc
  - LP: #943090
* provide ipmi udeb
  - LP: #942926
* Rebase to v3.2.9
* Add ibmveth to d-i/modules-powerpc/nic-modules
  - LP: #712188
* [Config] Enable CONFIG_SCSI_IBMVFC=m for powerpc
  - LP: #712188
* Add ibmvfc and ibmvscsic to d-i/modules-powerpc/nic-modules
  - LP: #712188
* Ubuntu-3.2.0-18.28

[ Robert Hooker ]

* SAUCE: drm/i915: Enable RC6 by default on sandybridge.

[ Seth Heasley ]

* SAUCE: ALSA: hda - Add Lynx Point HD Audio Controller DeviceIDs
  - LP: #900119
* SAUCE: ahci: AHCI-mode SATA patch for Intel Lynx Point DeviceIDs
  - LP: #900119
* SAUCE: ata_piix: IDE-mode SATA patch for Intel Lynx Point DeviceIDs
  - LP: #900119
* SAUCE: i2c-i801: Add device IDs for Intel Lynx Point
  - LP: #900119

[ Tim Gardner ]

* dropped hv_mouse
* [Config] CONFIG_X86_NUMACHIP=y
* [Config] updateconfigs after apparmor patches
* [Config] Added hv_netvsc and hv_storvsc to -virtual
  - LP: #942256
* [Config] Enable aufs
  - LP: #943119
* SAUCE: Made kernel irq-threaded by default

[ Luke Yelavich ]

* UBUNTU: Depend on crda (>=1.1.1-1ubuntu2) | wireless-crda as per precise
  mainline packaging

[ Upstream Kernel Changes ]

* Revert "Revert "ath9k_hw: fix interpretation of the rx KeyMiss flag""
* Revert "AppArmor: compatibility patch for v5 interface"
* Revert "AppArmor: compatibility patch for v5 network controll"
* Staging: hv: vmbus: Support building the vmbus driver as part of the
  kernel
* hv: Add Kconfig menu entry
* Drivers: hv: Fix a memory leak
* Drivers: hv: Make the vmbus driver unloadable
* Drivers: hv: Get rid of an unnecessary check in hv.c
* Staging: hv: mousevsc: Make boolean states boolean
* Staging: hv: mousevsc: Inline the code for mousevsc_on_device_add()
* Staging: hv: mousevsc: Inline the code for reportdesc_callback()
* Staging: hv: mousevsc: Cleanup mousevsc_on_channel_callback()
* Staging: hv: mousevsc: Add a new line to a debug string
* Staging: hv: mousevsc: Get rid of unnecessary include files
* Staging: hv: mousevsc: Address some style issues
* Staging: hv: mousevsc: Add a check to prevent memory corruption
* Staging: hv: mousevsc: Use the KBUILD_MODNAME macro
* Staging: hv: storvsc: Use mempools to allocate struct
  storvsc_cmd_request
* Staging: hv: storvsc: Cleanup error handling in the probe function
* Staging: hv: storvsc: Fixup the error when processing SET_WINDOW
  command
* Staging: hv: storvsc: Fix error handling storvsc_host_reset()
* Staging: hv: storvsc: Use the accessor function shost_priv()
* Staging: hv: storvsc: Use the unlocked version queuecommand
* Staging: hv: storvsc: use the macro KBUILD_MODNAME
* Staging: hv: storvsc: Get rid of an unnecessary forward declaration
* Staging: hv: storvsc: Upgrade the vmstor protocol version
* Staging: hv: storvsc: Support hot add of scsi disks
* Staging: hv: storvsc: Support hot-removing of scsi devices
* staging: hv: Use kmemdup rather than duplicating its implementation
* staging: hv: move hv_netvsc out of staging area
* Staging: hv: mousevsc: Properly add the hid device
* Staging: hv: storvsc: Disable clustering
* Staging: hv: storvsc: Cleanup storvsc_device_alloc()
* Staging: hv: storvsc: Fix a bug in storvsc_command_completion()
* Staging: hv: storvsc: Fix a bug in copy_from_bounce_buffer()
* Staging: hv: storvsc: Implement per device memory pools
* Staging: hv: remove hv_mouse driver as it's now in the hid directory
* Staging: hv: update TODO file
* Staging: hv: storvsc: Fix a bug in create_bounce_buffer()
* net/hyperv: Fix long lines in netvsc.c
* net/hyperv: Add support for promiscuous mode setting
* net/hyperv: Fix the stop/wake queue mechanism
* net/hyperv: Remove unnecessary kmap_atomic in netvsc driver
* net/hyperv: Add NETVSP protocol version negotiation
* net/hyperv: Add support for jumbo frame up to 64KB
* net/hyperv: fix possible memory leak in do_set_multicast()
* net/hyperv: rx_bytes should account the ether header size
* net/hyperv: fix the issue that large packets be dropped under bridge
* net/hyperv: Use netif_tx_disable() instead of netif_stop_queue() when
  necessary
* net/hyperv: Fix the page buffer when an RNDIS message goes beyond page
  boundary
* HID: Move the hid-hyperv driver out of staging
* HID: hv_mouse: Properly add the hid device
* HID: hyperv: Properly disconnect the input device
* Staging: hv: storvsc: Cleanup some comments
* Staging: hv: storvsc: Cleanup storvsc_probe()
* Staging: hv: storvsc: Cleanup storvsc_queuecommand()
* Staging: hv: storvsc: Introduce defines for srb status codes
* Staging: hv: storvsc: Cleanup storvsc_host_reset_handler()
* Staging: hv: storvsc: Move and cleanup storvsc_remove()
* Staging: hv: storvsc: Add a comment to explain life-cycle management
* Staging: hv: storvsc: Get rid of the on_io_completion in
  hv_storvsc_request
* Staging: hv: storvsc: Rename the context field in hv_storvsc_request
* Staging: hv: storvsc: Miscellaneous cleanup of storvsc driver
* Staging: hv: storvsc: Cleanup the code for generating protocol version
* Staging: hv: storvsc: Cleanup some protocol related constants
* Staging: hv: storvsc: Get rid of some unused defines
* Staging: hv: storvsc: Consolidate the request structure
* Staging: hv: storvsc: Consolidate all the wire protocol definitions
* Staging: hv: storvsc: Move the storage driver out of the staging area
* x86: Make flat_init_apic_ldr() available
* x86: Add x86_init platform override to fix up NUMA core numbering
* x86: Add NumaChip support
* x86/numachip: Drop unnecessary conflict with EDAC
* Input: bcm5974 - set BUTTONPAD property
* Ubuntu: Rebase to v3.2.8
* ACPI / PM: Do not save/restore NVS on Asus K54C/K54HR
  - LP: #898503
* Add low latency source

drivers/net/hyperv/netvsc.c (as of this revision):
 
/*
 * Copyright (c) 2009, Microsoft Corporation.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
 * Place - Suite 330, Boston, MA 02111-1307 USA.
 *
 * Authors:
 *   Haiyang Zhang <haiyangz@microsoft.com>
 *   Hank Janssen  <hjanssen@microsoft.com>
 */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>

#include "hyperv_net.h"


static struct netvsc_device *alloc_net_device(struct hv_device *device)
{
        struct netvsc_device *net_device;
        struct net_device *ndev = hv_get_drvdata(device);

        net_device = kzalloc(sizeof(struct netvsc_device), GFP_KERNEL);
        if (!net_device)
                return NULL;

        net_device->start_remove = false;
        net_device->destroy = false;
        net_device->dev = device;
        net_device->ndev = ndev;

        hv_set_drvdata(device, net_device);
        return net_device;
}

static struct netvsc_device *get_outbound_net_device(struct hv_device *device)
{
        struct netvsc_device *net_device;

        net_device = hv_get_drvdata(device);
        if (net_device && net_device->destroy)
                net_device = NULL;

        return net_device;
}

static struct netvsc_device *get_inbound_net_device(struct hv_device *device)
{
        struct netvsc_device *net_device;

        net_device = hv_get_drvdata(device);

        if (!net_device)
                goto get_in_err;

        if (net_device->destroy &&
                atomic_read(&net_device->num_outstanding_sends) == 0)
                net_device = NULL;

get_in_err:
        return net_device;
}


static int netvsc_destroy_recv_buf(struct netvsc_device *net_device)
{
        struct nvsp_message *revoke_packet;
        int ret = 0;
        struct net_device *ndev = net_device->ndev;

        /*
         * If we got a section count, it means we received a
         * SendReceiveBufferComplete msg (ie sent
         * NvspMessage1TypeSendReceiveBuffer msg) therefore, we need
         * to send a revoke msg here
         */
        if (net_device->recv_section_cnt) {
                /* Send the revoke receive buffer */
                revoke_packet = &net_device->revoke_packet;
                memset(revoke_packet, 0, sizeof(struct nvsp_message));

                revoke_packet->hdr.msg_type =
                        NVSP_MSG1_TYPE_REVOKE_RECV_BUF;
                revoke_packet->msg.v1_msg.
                revoke_recv_buf.id = NETVSC_RECEIVE_BUFFER_ID;

                ret = vmbus_sendpacket(net_device->dev->channel,
                                       revoke_packet,
                                       sizeof(struct nvsp_message),
                                       (unsigned long)revoke_packet,
                                       VM_PKT_DATA_INBAND, 0);
                /*
                 * If we failed here, we might as well return and
                 * have a leak rather than continue and a bugchk
                 */
                if (ret != 0) {
                        netdev_err(ndev, "unable to send "
                                "revoke receive buffer to netvsp\n");
                        return ret;
                }
        }

        /* Teardown the gpadl on the vsp end */
        if (net_device->recv_buf_gpadl_handle) {
                ret = vmbus_teardown_gpadl(net_device->dev->channel,
                           net_device->recv_buf_gpadl_handle);

                /* If we failed here, we might as well return and have a leak
                 * rather than continue and a bugchk
                 */
                if (ret != 0) {
                        netdev_err(ndev,
                                   "unable to teardown receive buffer's gpadl\n");
                        return ret;
                }
                net_device->recv_buf_gpadl_handle = 0;
        }

        if (net_device->recv_buf) {
                /* Free up the receive buffer */
                free_pages((unsigned long)net_device->recv_buf,
                        get_order(net_device->recv_buf_size));
                net_device->recv_buf = NULL;
        }

        if (net_device->recv_section) {
                net_device->recv_section_cnt = 0;
                kfree(net_device->recv_section);
                net_device->recv_section = NULL;
        }

        return ret;
}

static int netvsc_init_recv_buf(struct hv_device *device)
{
        int ret = 0;
        int t;
        struct netvsc_device *net_device;
        struct nvsp_message *init_packet;
        struct net_device *ndev;

        net_device = get_outbound_net_device(device);
        if (!net_device)
                return -ENODEV;
        ndev = net_device->ndev;

        net_device->recv_buf =
                (void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO,
                                get_order(net_device->recv_buf_size));
        if (!net_device->recv_buf) {
                netdev_err(ndev, "unable to allocate receive "
                        "buffer of size %d\n", net_device->recv_buf_size);
                ret = -ENOMEM;
                goto cleanup;
        }

        /*
         * Establish the gpadl handle for this buffer on this
         * channel.  Note: This call uses the vmbus connection rather
         * than the channel to establish the gpadl handle.
         */
        ret = vmbus_establish_gpadl(device->channel, net_device->recv_buf,
                                    net_device->recv_buf_size,
                                    &net_device->recv_buf_gpadl_handle);
        if (ret != 0) {
                netdev_err(ndev,
                        "unable to establish receive buffer's gpadl\n");
                goto cleanup;
        }


        /* Notify the NetVsp of the gpadl handle */
        init_packet = &net_device->channel_init_pkt;

        memset(init_packet, 0, sizeof(struct nvsp_message));

        init_packet->hdr.msg_type = NVSP_MSG1_TYPE_SEND_RECV_BUF;
        init_packet->msg.v1_msg.send_recv_buf.
                gpadl_handle = net_device->recv_buf_gpadl_handle;
        init_packet->msg.v1_msg.
                send_recv_buf.id = NETVSC_RECEIVE_BUFFER_ID;

        /* Send the gpadl notification request */
        ret = vmbus_sendpacket(device->channel, init_packet,
                               sizeof(struct nvsp_message),
                               (unsigned long)init_packet,
                               VM_PKT_DATA_INBAND,
                               VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
        if (ret != 0) {
                netdev_err(ndev,
                        "unable to send receive buffer's gpadl to netvsp\n");
                goto cleanup;
        }

        t = wait_for_completion_timeout(&net_device->channel_init_wait, 5*HZ);
        BUG_ON(t == 0);


        /* Check the response */
        if (init_packet->msg.v1_msg.
            send_recv_buf_complete.status != NVSP_STAT_SUCCESS) {
                netdev_err(ndev, "Unable to complete receive buffer "
                           "initialization with NetVsp - status %d\n",
                           init_packet->msg.v1_msg.
                           send_recv_buf_complete.status);
                ret = -EINVAL;
                goto cleanup;
        }

        /* Parse the response */

        net_device->recv_section_cnt = init_packet->msg.
                v1_msg.send_recv_buf_complete.num_sections;

        net_device->recv_section = kmemdup(
                init_packet->msg.v1_msg.send_recv_buf_complete.sections,
                net_device->recv_section_cnt *
                sizeof(struct nvsp_1_receive_buffer_section),
                GFP_KERNEL);
        if (net_device->recv_section == NULL) {
                ret = -EINVAL;
                goto cleanup;
        }

        /*
         * For 1st release, there should only be 1 section that represents the
         * entire receive buffer
         */
        if (net_device->recv_section_cnt != 1 ||
            net_device->recv_section->offset != 0) {
                ret = -EINVAL;
                goto cleanup;
        }

        goto exit;

cleanup:
        netvsc_destroy_recv_buf(net_device);

exit:
        return ret;
}


/* Negotiate NVSP protocol version */
static int negotiate_nvsp_ver(struct hv_device *device,
                              struct netvsc_device *net_device,
                              struct nvsp_message *init_packet,
                              u32 nvsp_ver)
{
        int ret, t;

        memset(init_packet, 0, sizeof(struct nvsp_message));
        init_packet->hdr.msg_type = NVSP_MSG_TYPE_INIT;
        init_packet->msg.init_msg.init.min_protocol_ver = nvsp_ver;
        init_packet->msg.init_msg.init.max_protocol_ver = nvsp_ver;

        /* Send the init request */
        ret = vmbus_sendpacket(device->channel, init_packet,
                               sizeof(struct nvsp_message),
                               (unsigned long)init_packet,
                               VM_PKT_DATA_INBAND,
                               VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);

        if (ret != 0)
                return ret;

        t = wait_for_completion_timeout(&net_device->channel_init_wait, 5*HZ);

        if (t == 0)
                return -ETIMEDOUT;

        if (init_packet->msg.init_msg.init_complete.status !=
            NVSP_STAT_SUCCESS)
                return -EINVAL;

        if (nvsp_ver != NVSP_PROTOCOL_VERSION_2)
                return 0;

        /* NVSPv2 only: Send NDIS config */
        memset(init_packet, 0, sizeof(struct nvsp_message));
        init_packet->hdr.msg_type = NVSP_MSG2_TYPE_SEND_NDIS_CONFIG;
        init_packet->msg.v2_msg.send_ndis_config.mtu = net_device->ndev->mtu;

        ret = vmbus_sendpacket(device->channel, init_packet,
                                sizeof(struct nvsp_message),
                                (unsigned long)init_packet,
                                VM_PKT_DATA_INBAND, 0);

        return ret;
}

static int netvsc_connect_vsp(struct hv_device *device)
{
        int ret;
        struct netvsc_device *net_device;
        struct nvsp_message *init_packet;
        int ndis_version;
        struct net_device *ndev;

        net_device = get_outbound_net_device(device);
        if (!net_device)
                return -ENODEV;
        ndev = net_device->ndev;

        init_packet = &net_device->channel_init_pkt;

        /* Negotiate the latest NVSP protocol supported */
        if (negotiate_nvsp_ver(device, net_device, init_packet,
                               NVSP_PROTOCOL_VERSION_2) == 0) {
                net_device->nvsp_version = NVSP_PROTOCOL_VERSION_2;
        } else if (negotiate_nvsp_ver(device, net_device, init_packet,
                                    NVSP_PROTOCOL_VERSION_1) == 0) {
                net_device->nvsp_version = NVSP_PROTOCOL_VERSION_1;
        } else {
                ret = -EPROTO;
                goto cleanup;
        }

        pr_debug("Negotiated NVSP version:%x\n", net_device->nvsp_version);

        /* Send the ndis version */
        memset(init_packet, 0, sizeof(struct nvsp_message));

        ndis_version = 0x00050000;

        init_packet->hdr.msg_type = NVSP_MSG1_TYPE_SEND_NDIS_VER;
        init_packet->msg.v1_msg.
                send_ndis_ver.ndis_major_ver =
                                (ndis_version & 0xFFFF0000) >> 16;
        init_packet->msg.v1_msg.
                send_ndis_ver.ndis_minor_ver =
                                ndis_version & 0xFFFF;

        /* Send the init request */
        ret = vmbus_sendpacket(device->channel, init_packet,
                                sizeof(struct nvsp_message),
                                (unsigned long)init_packet,
                                VM_PKT_DATA_INBAND, 0);
        if (ret != 0)
                goto cleanup;

        /* Post the big receive buffer to NetVSP */
        ret = netvsc_init_recv_buf(device);

cleanup:
        return ret;
}

static void netvsc_disconnect_vsp(struct netvsc_device *net_device)
{
        netvsc_destroy_recv_buf(net_device);
}

/*
 * netvsc_device_remove - Callback when the root bus device is removed
 */
int netvsc_device_remove(struct hv_device *device)
{
        struct netvsc_device *net_device;
        struct hv_netvsc_packet *netvsc_packet, *pos;
        unsigned long flags;

        net_device = hv_get_drvdata(device);
        spin_lock_irqsave(&device->channel->inbound_lock, flags);
        net_device->destroy = true;
        spin_unlock_irqrestore(&device->channel->inbound_lock, flags);

        /* Wait for all send completions */
        while (atomic_read(&net_device->num_outstanding_sends)) {
                dev_info(&device->device,
                        "waiting for %d requests to complete...\n",
                        atomic_read(&net_device->num_outstanding_sends));
                udelay(100);
        }

        netvsc_disconnect_vsp(net_device);

        /*
         * Since we have already drained, we don't need to busy wait
         * as was done in final_release_stor_device()
         * Note that we cannot set the ext pointer to NULL until
         * we have drained - to drain the outgoing packets, we need to
         * allow incoming packets.
         */

        spin_lock_irqsave(&device->channel->inbound_lock, flags);
        hv_set_drvdata(device, NULL);
        spin_unlock_irqrestore(&device->channel->inbound_lock, flags);

        /*
         * At this point, no one should be accessing net_device
         * except in here
         */
        dev_notice(&device->device, "net device safe to remove\n");

        /* Now, we can close the channel safely */
        vmbus_close(device->channel);

        /* Release all resources */
        list_for_each_entry_safe(netvsc_packet, pos,
                                 &net_device->recv_pkt_list, list_ent) {
                list_del(&netvsc_packet->list_ent);
                kfree(netvsc_packet);
        }

        kfree(net_device);
        return 0;
}

static void netvsc_send_completion(struct hv_device *device,
                                   struct vmpacket_descriptor *packet)
{
        struct netvsc_device *net_device;
        struct nvsp_message *nvsp_packet;
        struct hv_netvsc_packet *nvsc_packet;
        struct net_device *ndev;

        net_device = get_inbound_net_device(device);
        if (!net_device)
                return;
        ndev = net_device->ndev;

        nvsp_packet = (struct nvsp_message *)((unsigned long)packet +
                        (packet->offset8 << 3));

        if ((nvsp_packet->hdr.msg_type == NVSP_MSG_TYPE_INIT_COMPLETE) ||
            (nvsp_packet->hdr.msg_type ==
             NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE) ||
            (nvsp_packet->hdr.msg_type ==
             NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE)) {
                /* Copy the response back */
                memcpy(&net_device->channel_init_pkt, nvsp_packet,
                       sizeof(struct nvsp_message));
                complete(&net_device->channel_init_wait);
        } else if (nvsp_packet->hdr.msg_type ==
                   NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE) {
                /* Get the send context */
                nvsc_packet = (struct hv_netvsc_packet *)(unsigned long)
                        packet->trans_id;

                /* Notify the layer above us */
                nvsc_packet->completion.send.send_completion(
                        nvsc_packet->completion.send.send_completion_ctx);

                atomic_dec(&net_device->num_outstanding_sends);

                if (netif_queue_stopped(ndev) && !net_device->start_remove)
                        netif_wake_queue(ndev);
        } else {
                netdev_err(ndev, "Unknown send completion packet type- "
                           "%d received!!\n", nvsp_packet->hdr.msg_type);
        }

}

int netvsc_send(struct hv_device *device,
                        struct hv_netvsc_packet *packet)
{
        struct netvsc_device *net_device;
        int ret = 0;
        struct nvsp_message sendMessage;
        struct net_device *ndev;

        net_device = get_outbound_net_device(device);
        if (!net_device)
                return -ENODEV;
        ndev = net_device->ndev;

        sendMessage.hdr.msg_type = NVSP_MSG1_TYPE_SEND_RNDIS_PKT;
        if (packet->is_data_pkt) {
                /* 0 is RMC_DATA; */
                sendMessage.msg.v1_msg.send_rndis_pkt.channel_type = 0;
        } else {
                /* 1 is RMC_CONTROL; */
                sendMessage.msg.v1_msg.send_rndis_pkt.channel_type = 1;
        }

        /* Not using send buffer section */
        sendMessage.msg.v1_msg.send_rndis_pkt.send_buf_section_index =
                0xFFFFFFFF;
        sendMessage.msg.v1_msg.send_rndis_pkt.send_buf_section_size = 0;

        if (packet->page_buf_cnt) {
                ret = vmbus_sendpacket_pagebuffer(device->channel,
                                                  packet->page_buf,
                                                  packet->page_buf_cnt,
                                                  &sendMessage,
                                                  sizeof(struct nvsp_message),
                                                  (unsigned long)packet);
        } else {
                ret = vmbus_sendpacket(device->channel, &sendMessage,
                                sizeof(struct nvsp_message),
                                (unsigned long)packet,
                                VM_PKT_DATA_INBAND,
                                VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);

        }

        if (ret == 0) {
                atomic_inc(&net_device->num_outstanding_sends);
        } else if (ret == -EAGAIN) {
                netif_stop_queue(ndev);
                if (atomic_read(&net_device->num_outstanding_sends) < 1)
                        netif_wake_queue(ndev);
        } else {
                netdev_err(ndev, "Unable to send packet %p ret %d\n",
                           packet, ret);
        }

        return ret;
}

static void netvsc_send_recv_completion(struct hv_device *device,
                                        u64 transaction_id)
{
        struct nvsp_message recvcompMessage;
        int retries = 0;
        int ret;
        struct net_device *ndev;
        struct netvsc_device *net_device = hv_get_drvdata(device);

        ndev = net_device->ndev;

        recvcompMessage.hdr.msg_type =
                                NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE;

        /* FIXME: Pass in the status */
        recvcompMessage.msg.v1_msg.send_rndis_pkt_complete.status =
                NVSP_STAT_SUCCESS;

retry_send_cmplt:
        /* Send the completion */
        ret = vmbus_sendpacket(device->channel, &recvcompMessage,
                               sizeof(struct nvsp_message), transaction_id,
                               VM_PKT_COMP, 0);
        if (ret == 0) {
                /* success */
                /* no-op */
        } else if (ret == -EAGAIN) {
                /* no more room...wait a bit and attempt to retry 3 times */
                retries++;
                netdev_err(ndev, "unable to send receive completion pkt"
                        " (tid %llx)...retrying %d\n", transaction_id, retries);

                if (retries < 4) {
                        udelay(100);
                        goto retry_send_cmplt;
                } else {
                        netdev_err(ndev, "unable to send receive "
                                "completion pkt (tid %llx)...give up retrying\n",
                                transaction_id);
                }
        } else {
                netdev_err(ndev, "unable to send receive "
                        "completion pkt - %llx\n", transaction_id);
        }
}

/* Send a receive completion packet to RNDIS device (ie NetVsp) */
static void netvsc_receive_completion(void *context)
{
        struct hv_netvsc_packet *packet = context;
        struct hv_device *device = (struct hv_device *)packet->device;
        struct netvsc_device *net_device;
        u64 transaction_id = 0;
        bool fsend_receive_comp = false;
        unsigned long flags;
        struct net_device *ndev;

        /*
         * Even though it seems logical to do a GetOutboundNetDevice() here to
         * send out receive completion, we are using GetInboundNetDevice()
         * since we may have disable outbound traffic already.
         */
        net_device = get_inbound_net_device(device);
        if (!net_device)
                return;
        ndev = net_device->ndev;

        /* Overloading use of the lock. */
        spin_lock_irqsave(&net_device->recv_pkt_list_lock, flags);

        packet->xfer_page_pkt->count--;

        /*
         * Last one in the line that represent 1 xfer page packet.
         * Return the xfer page packet itself to the freelist
         */
        if (packet->xfer_page_pkt->count == 0) {
                fsend_receive_comp = true;
                transaction_id = packet->completion.recv.recv_completion_tid;
                list_add_tail(&packet->xfer_page_pkt->list_ent,
                              &net_device->recv_pkt_list);

        }

        /* Put the packet back */
        list_add_tail(&packet->list_ent, &net_device->recv_pkt_list);
        spin_unlock_irqrestore(&net_device->recv_pkt_list_lock, flags);

        /* Send a receive completion for the xfer page packet */
        if (fsend_receive_comp)
                netvsc_send_recv_completion(device, transaction_id);

}

static void netvsc_receive(struct hv_device *device,
                            struct vmpacket_descriptor *packet)
{
        struct netvsc_device *net_device;
        struct vmtransfer_page_packet_header *vmxferpage_packet;
        struct nvsp_message *nvsp_packet;
        struct hv_netvsc_packet *netvsc_packet = NULL;
        /* struct netvsc_driver *netvscDriver; */
        struct xferpage_packet *xferpage_packet = NULL;
        int i;
        int count = 0;
        unsigned long flags;
        struct net_device *ndev;

        LIST_HEAD(listHead);

        net_device = get_inbound_net_device(device);
        if (!net_device)
                return;
        ndev = net_device->ndev;

        /*
         * All inbound packets other than send completion should be xfer page
         * packet
         */
        if (packet->type != VM_PKT_DATA_USING_XFER_PAGES) {
                netdev_err(ndev, "Unknown packet type received - %d\n",
                           packet->type);
                return;
        }

        nvsp_packet = (struct nvsp_message *)((unsigned long)packet +
                        (packet->offset8 << 3));

        /* Make sure this is a valid nvsp packet */
        if (nvsp_packet->hdr.msg_type !=
            NVSP_MSG1_TYPE_SEND_RNDIS_PKT) {
                netdev_err(ndev, "Unknown nvsp packet type received-"
                        " %d\n", nvsp_packet->hdr.msg_type);
                return;
        }

        vmxferpage_packet = (struct vmtransfer_page_packet_header *)packet;

        if (vmxferpage_packet->xfer_pageset_id != NETVSC_RECEIVE_BUFFER_ID) {
                netdev_err(ndev, "Invalid xfer page set id - "
                           "expecting %x got %x\n", NETVSC_RECEIVE_BUFFER_ID,
                           vmxferpage_packet->xfer_pageset_id);
                return;
        }

        /*
         * Grab free packets (range count + 1) to represent this xfer
         * page packet. +1 to represent the xfer page packet itself.
         * We grab it here so that we know exactly how many we can
         * fulfil
         */
        spin_lock_irqsave(&net_device->recv_pkt_list_lock, flags);
        while (!list_empty(&net_device->recv_pkt_list)) {
                list_move_tail(net_device->recv_pkt_list.next, &listHead);
                if (++count == vmxferpage_packet->range_cnt + 1)
                        break;
        }
        spin_unlock_irqrestore(&net_device->recv_pkt_list_lock, flags);

        /*
         * We need at least 2 netvsc pkts (1 to represent the xfer
         * page and at least 1 for the range) i.e. we can handled
         * some of the xfer page packet ranges...
         */
        if (count < 2) {
                netdev_err(ndev, "Got only %d netvsc pkt...needed "
                        "%d pkts. Dropping this xfer page packet completely!\n",
                        count, vmxferpage_packet->range_cnt + 1);

                /* Return it to the freelist */
                spin_lock_irqsave(&net_device->recv_pkt_list_lock, flags);
                for (i = count; i != 0; i--) {
                        list_move_tail(listHead.next,
                                       &net_device->recv_pkt_list);
                }
                spin_unlock_irqrestore(&net_device->recv_pkt_list_lock,
                                       flags);

                netvsc_send_recv_completion(device,
                                            vmxferpage_packet->d.trans_id);

                return;
        }

        /* Remove the 1st packet to represent the xfer page packet itself */
        xferpage_packet = (struct xferpage_packet *)listHead.next;
        list_del(&xferpage_packet->list_ent);

        /* This is how much we can satisfy */
        xferpage_packet->count = count - 1;

        if (xferpage_packet->count != vmxferpage_packet->range_cnt) {
                netdev_err(ndev, "Needed %d netvsc pkts to satisfy "
                        "this xfer page...got %d\n",
                        vmxferpage_packet->range_cnt, xferpage_packet->count);
        }

        /* Each range represents 1 RNDIS pkt that contains 1 ethernet frame */
        for (i = 0; i < (count - 1); i++) {
                netvsc_packet = (struct hv_netvsc_packet *)listHead.next;
                list_del(&netvsc_packet->list_ent);

                /* Initialize the netvsc packet */
                netvsc_packet->xfer_page_pkt = xferpage_packet;
                netvsc_packet->completion.recv.recv_completion =
                                        netvsc_receive_completion;
                netvsc_packet->completion.recv.recv_completion_ctx =
                                        netvsc_packet;
                netvsc_packet->device = device;
                /* Save this so that we can send it back */
                netvsc_packet->completion.recv.recv_completion_tid =
                                        vmxferpage_packet->d.trans_id;

                netvsc_packet->data = (void *)((unsigned long)net_device->
                        recv_buf + vmxferpage_packet->ranges[i].byte_offset);
                netvsc_packet->total_data_buflen =
                                        vmxferpage_packet->ranges[i].byte_count;

                /* Pass it to the upper layer */
                rndis_filter_receive(device, netvsc_packet);

                netvsc_receive_completion(netvsc_packet->
                                completion.recv.recv_completion_ctx);
        }

}

static void netvsc_channel_cb(void *context)
{
        int ret;
        struct hv_device *device = context;
        struct netvsc_device *net_device;
        u32 bytes_recvd;
        u64 request_id;
        unsigned char *packet;
        struct vmpacket_descriptor *desc;
        unsigned char *buffer;
        int bufferlen = NETVSC_PACKET_SIZE;
        struct net_device *ndev;

        packet = kzalloc(NETVSC_PACKET_SIZE * sizeof(unsigned char),
                         GFP_ATOMIC);
        if (!packet)
                return;
        buffer = packet;

        net_device = get_inbound_net_device(device);
        if (!net_device)
                goto out;
        ndev = net_device->ndev;

        do {
                ret = vmbus_recvpacket_raw(device->channel, buffer, bufferlen,
                                           &bytes_recvd, &request_id);
                if (ret == 0) {
                        if (bytes_recvd > 0) {
                                desc = (struct vmpacket_descriptor *)buffer;
                                switch (desc->type) {
                                case VM_PKT_COMP:
                                        netvsc_send_completion(device, desc);
                                        break;

                                case VM_PKT_DATA_USING_XFER_PAGES:
                                        netvsc_receive(device, desc);
                                        break;

                                default:
                                        netdev_err(ndev,
                                                   "unhandled packet type %d, "
                                                   "tid %llx len %d\n",
                                                   desc->type, request_id,
                                                   bytes_recvd);
                                        break;
                                }

                                /* reset */
                                if (bufferlen > NETVSC_PACKET_SIZE) {
                                        kfree(buffer);
                                        buffer = packet;
                                        bufferlen = NETVSC_PACKET_SIZE;
                                }
                        } else {
                                /* reset */
                                if (bufferlen > NETVSC_PACKET_SIZE) {
                                        kfree(buffer);
                                        buffer = packet;
                                        bufferlen = NETVSC_PACKET_SIZE;
                                }

                                break;
                        }
                } else if (ret == -ENOBUFS) {
                        /* Handle large packet */
                        buffer = kmalloc(bytes_recvd, GFP_ATOMIC);
                        if (buffer == NULL) {
                                /* Try again next time around */
                                netdev_err(ndev,
                                           "unable to allocate buffer of size "
                                           "(%d)!!\n", bytes_recvd);
                                break;
                        }

                        bufferlen = bytes_recvd;
                }
        } while (1);

out:
        kfree(buffer);
        return;
}

/*
 * netvsc_device_add - Callback when the device belonging to this
 * driver is added
 */
int netvsc_device_add(struct hv_device *device, void *additional_info)
{
        int ret = 0;
        int i;
        int ring_size =
        ((struct netvsc_device_info *)additional_info)->ring_size;
        struct netvsc_device *net_device;
        struct hv_netvsc_packet *packet, *pos;
        struct net_device *ndev;

        net_device = alloc_net_device(device);
        if (!net_device) {
                ret = -ENOMEM;
                goto cleanup;
        }

        /*
         * Coming into this function, struct net_device * is
         * registered as the driver private data.
         * In alloc_net_device(), we register struct netvsc_device *
         * as the driver private data and stash away struct net_device *
         * in struct netvsc_device *.
         */
        ndev = net_device->ndev;

        /* Initialize the NetVSC channel extension */
        net_device->recv_buf_size = NETVSC_RECEIVE_BUFFER_SIZE;
        spin_lock_init(&net_device->recv_pkt_list_lock);

        INIT_LIST_HEAD(&net_device->recv_pkt_list);

        for (i = 0; i < NETVSC_RECEIVE_PACKETLIST_COUNT; i++) {
                packet = kzalloc(sizeof(struct hv_netvsc_packet) +
                                 (NETVSC_RECEIVE_SG_COUNT *
                                  sizeof(struct hv_page_buffer)), GFP_KERNEL);
                if (!packet)
                        break;

                list_add_tail(&packet->list_ent,
                              &net_device->recv_pkt_list);
        }
        init_completion(&net_device->channel_init_wait);

        /* Open the channel */
        ret = vmbus_open(device->channel, ring_size * PAGE_SIZE,
                         ring_size * PAGE_SIZE, NULL, 0,
                         netvsc_channel_cb, device);

        if (ret != 0) {
                netdev_err(ndev, "unable to open channel: %d\n", ret);
                goto cleanup;
        }

        /* Channel is opened */
        pr_info("hv_netvsc channel opened successfully\n");

        /* Connect with the NetVsp */
        ret = netvsc_connect_vsp(device);
        if (ret != 0) {
                netdev_err(ndev,
                        "unable to connect to NetVSP - %d\n", ret);
                goto close;
        }

        return ret;

close:
        /* Now, we can close the channel safely */
        vmbus_close(device->channel);

cleanup:

        if (net_device) {
                list_for_each_entry_safe(packet, pos,
                                         &net_device->recv_pkt_list,
                                         list_ent) {
                        list_del(&packet->list_ent);
                        kfree(packet);
                }

                kfree(net_device);
        }

        return ret;
}