From 6bcf67374137f433e85aa42a18fde9f0e8562901 Mon Sep 17 00:00:00 2001 From: Chen Liqin Date: Sat, 13 Jun 2009 15:24:33 +0800 Subject: score: add maintainers for score architecture Signed-off-by: Chen Liqin Signed-off-by: Arnd Bergmann --- MAINTAINERS | 8 ++++++++ 1 file changed, 8 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 035df9d26609..ef643ddd5e8d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5000,6 +5000,14 @@ S: Maintained F: kernel/sched* F: include/linux/sched.h +SCORE ARCHITECTURE +P: Chen Liqin +M: liqin.chen@sunplusct.com +P: Lennox Wu +M: lennox.wu@sunplusct.com +W: http://www.sunplusct.com +S: Supported + SCSI CDROM DRIVER P: Jens Axboe M: axboe@kernel.dk -- cgit v1.2.3 From d19d36672ee379f26b79df985a9a2e5afb3f1df1 Mon Sep 17 00:00:00 2001 From: Hartley Sweeten Date: Fri, 10 Jul 2009 23:02:59 +0100 Subject: [ARM] 5599/1: MAINTAINERS: update for EP93XX ARM Change the maintainer of the EP93XX ARM ARCHITECTURE. Signed-off-by: H Hartley Sweeten Signed-off-by: Ryan Mallon Acked-by: Lennert Buytenhek Signed-off-by: Russell King --- MAINTAINERS | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index fa2a16def17a..7bf1128d74f4 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -604,10 +604,14 @@ W: http://maxim.org.za/at91_26.html S: Maintained ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE -P: Lennert Buytenhek -M: kernel@wantstofly.org +P: Hartley Sweeten +M: hsweeten@visionengravers.com +P: Ryan Mallon +M: ryan@bluewatersys.com L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) S: Maintained +F: arch/arm/mach-ep93xx/ +F: arch/arm/mach-ep93xx/include/mach/ ARM/CIRRUS LOGIC EDB9315A MACHINE SUPPORT P: Lennert Buytenhek -- cgit v1.2.3 From 834da346041cd8969897feea2cdb09269120599f Mon Sep 17 00:00:00 2001 From: Kalle Valo Date: Tue, 21 Jul 2009 14:26:08 +0300 Subject: MAINTAINERS: add wl1251 wireless driver Add myself as the maintainer for wl1251 driver. Signed-off-by: Kalle Valo Signed-off-by: John W. Linville --- MAINTAINERS | 9 +++++++++ 1 file changed, 9 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d119ba9e724d..622bd122e020 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -6466,6 +6466,15 @@ M: mitr@volny.cz S: Maintained F: drivers/input/misc/wistron_btns.c +WL1251 WIRELESS DRIVER +P: Kalle Valo +M: kalle.valo@nokia.com +L: linux-wireless@vger.kernel.org +W: http://wireless.kernel.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git +S: Maintained +F: drivers/net/wireless/wl12xx/wl1251* + WL3501 WIRELESS PCMCIA CARD DRIVER P: Arnaldo Carvalho de Melo M: acme@ghostprotocols.net -- cgit v1.2.3 From 2b3daf588965b72d3a9ccff426bfd5516bb73c6a Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Tue, 21 Jul 2009 13:09:56 -0700 Subject: MAINTAINERS: Update rtl8180 patterns rtl8180 files were moved into a subdirectory by commit 1c740ed2210a0d124674a477ea538468aba47810 Signed-off-by: Joe Perches Signed-off-by: John W. 
Linville --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 622bd122e020..df55d24a2f69 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4984,7 +4984,7 @@ L: linux-wireless@vger.kernel.org W: http://linuxwireless.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git S: Maintained -F: drivers/net/wireless/rtl818* +F: drivers/net/wireless/rtl818x/rtl8180* RTL8187 WIRELESS DRIVER P: Herton Ronaldo Krzesinski -- cgit v1.2.3 From 8e0af5141ab950b78b3ebbfaded5439dcf8b3a8d Mon Sep 17 00:00:00 2001 From: Shaohua Li Date: Mon, 27 Jul 2009 18:11:02 -0400 Subject: ACPI: create Processor Aggregator Device driver ACPI 4.0 created the logical "processor aggregator device" as a mechanism for platforms to ask the OS to force otherwise busy processors to enter (power saving) idle. The intent is to lower power consumption to ride out transient electrical and thermal emergencies, rather than powering off the server. On platforms that can save more power/performance via P-states, the platform will first exhaust P-states before forcing idle. However, the relative benefit of P-states vs. idle states is platform dependent, and thus this driver need not know or care about it. This driver does not use the kernel's CPU hot-plug mechanism because after the transient emergency is over, the system must be returned to its normal state, and hotplug would permanently break both cpusets and binding. So to force idle, the driver creates a power saving thread. The scheduler will migrate the thread to the preferred CPU. The thread has max priority and SCHED_RR policy, so it can occupy one CPU. To save power, the thread invokes the deep C-state entry instructions. To avoid starvation, the thread sleeps 5% of the time every second (the current RT scheduler has a threshold to avoid starvation, but if other CPUs are idle, the CPU can borrow CPU time from the others, which defeats that mechanism here). Vaidyanathan Srinivasan has proposed scheduler enhancements to allow injecting idle time into the system. This driver doesn't depend on those enhancements, but could cut over to them when they are available. Peter Z. does not favor upstreaming this driver until those scheduler enhancements are in place. However, we favor upstreaming this driver now because it is useful now, and it can be enhanced over time.
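To make that duty cycle concrete, here is a minimal userspace C analogue of the scheme described above (a sketch only: the names, the one-second window, and the busy-wait stand-in for the driver's monitor/mwait idle loop are illustrative, not taken from the driver itself, which appears in full in the diff below):

#include <sched.h>
#include <stdio.h>
#include <time.h>

/* Userspace analogue of the duty cycle described above: occupy the
 * CPU for (100 - idle_pct)% of each second, then nap for idle_pct%
 * so other tasks are not starved.  The real driver idles the CPU
 * with monitor/mwait inside a kernel thread instead of spinning. */
static const int idle_pct = 5;

static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

int main(void)
{
	struct sched_param param = { .sched_priority = 1 };
	struct timespec nap = { 0, idle_pct * 10000000L };	/* 5% of 1 s */

	/* RT priority under SCHED_RR, as the commit message describes
	 * (the driver itself uses sched_priority 1) */
	if (sched_setscheduler(0, SCHED_RR, &param))
		perror("sched_setscheduler");	/* needs CAP_SYS_NICE */

	for (;;) {
		long long expire = now_ms() + (100 - idle_pct) * 10;

		while (now_ms() < expire)
			;		/* busy phase; the driver mwaits here */

		nanosleep(&nap, NULL);	/* the 5% "nap" */
	}
	return 0;
}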
Signed-off-by: Shaohua Li NACKed-by: Peter Zijlstra Cc: Vaidyanathan Srinivasan Signed-off-by: Len Brown --- MAINTAINERS | 8 + drivers/acpi/Kconfig | 11 ++ drivers/acpi/Makefile | 2 + drivers/acpi/acpi_pad.c | 514 ++++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 535 insertions(+) create mode 100644 drivers/acpi/acpi_pad.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ebc269152faf..082df56d5b9a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -288,6 +288,14 @@ L: linux-pci@vger.kernel.org S: Supported F: drivers/pci/hotplug/acpi* +ACPI PROCESSOR AGGREGATOR DRIVER +P: Shaohua Li +M: shaohua.li@intel.com +L: linux-acpi@vger.kernel.org +W: http://www.lesswatts.org/projects/acpi/ +S: Supported +F: drivers/acpi/acpi_pad.c + ACPI THERMAL DRIVER P: Zhang Rui M: rui.zhang@intel.com diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig index 7ec7d88c5999..13531ed3cbd3 100644 --- a/drivers/acpi/Kconfig +++ b/drivers/acpi/Kconfig @@ -196,6 +196,17 @@ config ACPI_HOTPLUG_CPU select ACPI_CONTAINER default y +config ACPI_PROCESSOR_AGGREGATOR + tristate "Processor Aggregator" + depends on ACPI_PROCESSOR + depends on EXPERIMENTAL + help + ACPI 4.0 defines the Processor Aggregator, which enables the OS to + perform specific processor configuration and control that applies + to all processors in the platform. Currently only logical processor + idling is defined, which is used to reduce power consumption. This + driver supports the new device. + config ACPI_THERMAL tristate "Thermal Zone" depends on ACPI_PROCESSOR diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile index 03a985be3fe3..8dae61c8a176 100644 --- a/drivers/acpi/Makefile +++ b/drivers/acpi/Makefile @@ -61,3 +61,5 @@ obj-$(CONFIG_ACPI_SBS) += sbs.o processor-y := processor_core.o processor_throttling.o processor-y += processor_idle.o processor_thermal.o processor-$(CONFIG_CPU_FREQ) += processor_perflib.o + +obj-$(CONFIG_ACPI_PROCESSOR_AGGREGATOR) += acpi_pad.o diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c new file mode 100644 index 000000000000..0d2cdb86158b --- /dev/null +++ b/drivers/acpi/acpi_pad.c @@ -0,0 +1,514 @@ +/* + * acpi_pad.c ACPI Processor Aggregator Driver + * + * Copyright (c) 2009, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ * + */ + +#include <linux/kernel.h> +#include <linux/cpumask.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/types.h> +#include <linux/kthread.h> +#include <linux/freezer.h> +#include <linux/cpu.h> +#include <linux/clockchips.h> +#include <acpi/acpi_bus.h> +#include <acpi/acpi_drivers.h> + +#define ACPI_PROCESSOR_AGGREGATOR_CLASS "processor_aggregator" +#define ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME "Processor Aggregator" +#define ACPI_PROCESSOR_AGGREGATOR_NOTIFY 0x80 +static DEFINE_MUTEX(isolated_cpus_lock); + +#define MWAIT_SUBSTATE_MASK (0xf) +#define MWAIT_CSTATE_MASK (0xf) +#define MWAIT_SUBSTATE_SIZE (4) +#define CPUID_MWAIT_LEAF (5) +#define CPUID5_ECX_EXTENSIONS_SUPPORTED (0x1) +#define CPUID5_ECX_INTERRUPT_BREAK (0x2) +static unsigned long power_saving_mwait_eax; +static void power_saving_mwait_init(void) +{ + unsigned int eax, ebx, ecx, edx; + unsigned int highest_cstate = 0; + unsigned int highest_subcstate = 0; + int i; + + if (!boot_cpu_has(X86_FEATURE_MWAIT)) + return; + if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF) + return; + + cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx); + + if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED) || + !(ecx & CPUID5_ECX_INTERRUPT_BREAK)) + return; + + edx >>= MWAIT_SUBSTATE_SIZE; + for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) { + if (edx & MWAIT_SUBSTATE_MASK) { + highest_cstate = i; + highest_subcstate = edx & MWAIT_SUBSTATE_MASK; + } + } + power_saving_mwait_eax = (highest_cstate << MWAIT_SUBSTATE_SIZE) | + (highest_subcstate - 1); + + for_each_online_cpu(i) + clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ON, &i); + +#if defined(CONFIG_GENERIC_TIME) && defined(CONFIG_X86) + switch (boot_cpu_data.x86_vendor) { + case X86_VENDOR_AMD: + case X86_VENDOR_INTEL: + /* + * AMD Fam10h TSC will tick in all + * C/P/S0/S1 states when this bit is set. + */ + if (boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) + return; + + /*FALL THROUGH*/ + default: + /* TSC could halt in idle, so notify users */ + mark_tsc_unstable("TSC halts in idle"); + } +#endif +} + +static unsigned long cpu_weight[NR_CPUS]; +static int tsk_in_cpu[NR_CPUS] = {[0 ...
NR_CPUS-1] = -1}; +static DECLARE_BITMAP(pad_busy_cpus_bits, NR_CPUS); +static void round_robin_cpu(unsigned int tsk_index) +{ + struct cpumask *pad_busy_cpus = to_cpumask(pad_busy_cpus_bits); + cpumask_var_t tmp; + int cpu; + unsigned long min_weight = -1, preferred_cpu; + + if (!alloc_cpumask_var(&tmp, GFP_KERNEL)) + return; + + mutex_lock(&isolated_cpus_lock); + cpumask_clear(tmp); + for_each_cpu(cpu, pad_busy_cpus) + cpumask_or(tmp, tmp, topology_thread_cpumask(cpu)); + cpumask_andnot(tmp, cpu_online_mask, tmp); + /* avoid HT siblings if possible */ + if (cpumask_empty(tmp)) + cpumask_andnot(tmp, cpu_online_mask, pad_busy_cpus); + if (cpumask_empty(tmp)) { + mutex_unlock(&isolated_cpus_lock); + return; + } + for_each_cpu(cpu, tmp) { + if (cpu_weight[cpu] < min_weight) { + min_weight = cpu_weight[cpu]; + preferred_cpu = cpu; + } + } + + if (tsk_in_cpu[tsk_index] != -1) + cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus); + tsk_in_cpu[tsk_index] = preferred_cpu; + cpumask_set_cpu(preferred_cpu, pad_busy_cpus); + cpu_weight[preferred_cpu]++; + mutex_unlock(&isolated_cpus_lock); + + set_cpus_allowed_ptr(current, cpumask_of(preferred_cpu)); +} + +static void exit_round_robin(unsigned int tsk_index) +{ + struct cpumask *pad_busy_cpus = to_cpumask(pad_busy_cpus_bits); + cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus); + tsk_in_cpu[tsk_index] = -1; +} + +static unsigned int idle_pct = 5; /* percentage */ +static unsigned int round_robin_time = 10; /* seconds */ +static int power_saving_thread(void *data) +{ + struct sched_param param = {.sched_priority = 1}; + int do_sleep; + unsigned int tsk_index = (unsigned long)data; + u64 last_jiffies = 0; + + sched_setscheduler(current, SCHED_RR, &param); + + while (!kthread_should_stop()) { + int cpu; + u64 expire_time; + + try_to_freeze(); + + /* round robin to cpus */ + if (last_jiffies + round_robin_time * HZ < jiffies) { + last_jiffies = jiffies; + round_robin_cpu(tsk_index); + } + + do_sleep = 0; + + current_thread_info()->status &= ~TS_POLLING; + /* + * TS_POLLING-cleared state must be visible before we test + * NEED_RESCHED: + */ + smp_mb(); + + expire_time = jiffies + HZ * (100 - idle_pct) / 100; + + while (!need_resched()) { + local_irq_disable(); + cpu = smp_processor_id(); + clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, + &cpu); + stop_critical_timings(); + + __monitor((void *)&current_thread_info()->flags, 0, 0); + smp_mb(); + if (!need_resched()) + __mwait(power_saving_mwait_eax, 1); + + start_critical_timings(); + clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, + &cpu); + local_irq_enable(); + + if (jiffies > expire_time) { + do_sleep = 1; + break; + } + } + + current_thread_info()->status |= TS_POLLING; + + /* + * The current sched_rt has a threshold for RT task running time. + * When an RT task uses 95% of the CPU time, it is scheduled out + * for 5% of the CPU time so that other tasks are not starved. But + * that mechanism only works when all CPUs have an RT task running; + * if one CPU has no RT task, RT tasks from other CPUs will borrow + * CPU time from it and end up using more than 95% CPU time. To + * make starvation avoidance work, take a nap here.
+ */ + if (do_sleep) + schedule_timeout_killable(HZ * idle_pct / 100); + } + + exit_round_robin(tsk_index); + return 0; +} + +static struct task_struct *ps_tsks[NR_CPUS]; +static unsigned int ps_tsk_num; +static int create_power_saving_task(void) +{ + ps_tsks[ps_tsk_num] = kthread_run(power_saving_thread, + (void *)(unsigned long)ps_tsk_num, + "power_saving/%d", ps_tsk_num); + if (ps_tsks[ps_tsk_num]) { + ps_tsk_num++; + return 0; + } + return -EINVAL; +} + +static void destroy_power_saving_task(void) +{ + if (ps_tsk_num > 0) { + ps_tsk_num--; + kthread_stop(ps_tsks[ps_tsk_num]); + } +} + +static void set_power_saving_task_num(unsigned int num) +{ + if (num > ps_tsk_num) { + while (ps_tsk_num < num) { + if (create_power_saving_task()) + return; + } + } else if (num < ps_tsk_num) { + while (ps_tsk_num > num) + destroy_power_saving_task(); + } +} + +static int acpi_pad_idle_cpus(unsigned int num_cpus) +{ + get_online_cpus(); + + num_cpus = min_t(unsigned int, num_cpus, num_online_cpus()); + set_power_saving_task_num(num_cpus); + + put_online_cpus(); + return 0; +} + +static uint32_t acpi_pad_idle_cpus_num(void) +{ + return ps_tsk_num; +} + +static ssize_t acpi_pad_rrtime_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + unsigned long num; + if (strict_strtoul(buf, 0, &num)) + return -EINVAL; + if (num < 1 || num >= 100) + return -EINVAL; + mutex_lock(&isolated_cpus_lock); + round_robin_time = num; + mutex_unlock(&isolated_cpus_lock); + return count; +} + +static ssize_t acpi_pad_rrtime_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return scnprintf(buf, PAGE_SIZE, "%d", round_robin_time); +} +static DEVICE_ATTR(rrtime, S_IRUGO|S_IWUSR, + acpi_pad_rrtime_show, + acpi_pad_rrtime_store); + +static ssize_t acpi_pad_idlepct_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + unsigned long num; + if (strict_strtoul(buf, 0, &num)) + return -EINVAL; + if (num < 1 || num >= 100) + return -EINVAL; + mutex_lock(&isolated_cpus_lock); + idle_pct = num; + mutex_unlock(&isolated_cpus_lock); + return count; +} + +static ssize_t acpi_pad_idlepct_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return scnprintf(buf, PAGE_SIZE, "%d", idle_pct); +} +static DEVICE_ATTR(idlepct, S_IRUGO|S_IWUSR, + acpi_pad_idlepct_show, + acpi_pad_idlepct_store); + +static ssize_t acpi_pad_idlecpus_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + unsigned long num; + if (strict_strtoul(buf, 0, &num)) + return -EINVAL; + mutex_lock(&isolated_cpus_lock); + acpi_pad_idle_cpus(num); + mutex_unlock(&isolated_cpus_lock); + return count; +} + +static ssize_t acpi_pad_idlecpus_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return cpumask_scnprintf(buf, PAGE_SIZE, + to_cpumask(pad_busy_cpus_bits)); +} +static DEVICE_ATTR(idlecpus, S_IRUGO|S_IWUSR, + acpi_pad_idlecpus_show, + acpi_pad_idlecpus_store); + +static int acpi_pad_add_sysfs(struct acpi_device *device) +{ + int result; + + result = device_create_file(&device->dev, &dev_attr_idlecpus); + if (result) + return -ENODEV; + result = device_create_file(&device->dev, &dev_attr_idlepct); + if (result) { + device_remove_file(&device->dev, &dev_attr_idlecpus); + return -ENODEV; + } + result = device_create_file(&device->dev, &dev_attr_rrtime); + if (result) { + device_remove_file(&device->dev, &dev_attr_idlecpus); + device_remove_file(&device->dev, &dev_attr_idlepct); + return 
-ENODEV; + } + return 0; +} + +static void acpi_pad_remove_sysfs(struct acpi_device *device) +{ + device_remove_file(&device->dev, &dev_attr_idlecpus); + device_remove_file(&device->dev, &dev_attr_idlepct); + device_remove_file(&device->dev, &dev_attr_rrtime); +} + +/* Query firmware how many CPUs should be idle */ +static int acpi_pad_pur(acpi_handle handle, int *num_cpus) +{ + struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; + acpi_status status; + union acpi_object *package; + int rev, num, ret = -EINVAL; + + status = acpi_evaluate_object(handle, "_PUR", NULL, &buffer); + if (ACPI_FAILURE(status)) + return -EINVAL; + package = buffer.pointer; + if (package->type != ACPI_TYPE_PACKAGE || package->package.count != 2) + goto out; + rev = package->package.elements[0].integer.value; + num = package->package.elements[1].integer.value; + if (rev != 1) + goto out; + *num_cpus = num; + ret = 0; +out: + kfree(buffer.pointer); + return ret; +} + +/* Notify firmware how many CPUs are idle */ +static void acpi_pad_ost(acpi_handle handle, int stat, + uint32_t idle_cpus) +{ + union acpi_object params[3] = { + {.type = ACPI_TYPE_INTEGER,}, + {.type = ACPI_TYPE_INTEGER,}, + {.type = ACPI_TYPE_BUFFER,}, + }; + struct acpi_object_list arg_list = {3, params}; + + params[0].integer.value = ACPI_PROCESSOR_AGGREGATOR_NOTIFY; + params[1].integer.value = stat; + params[2].buffer.length = 4; + params[2].buffer.pointer = (void *)&idle_cpus; + acpi_evaluate_object(handle, "_OST", &arg_list, NULL); +} + +static void acpi_pad_handle_notify(acpi_handle handle) +{ + int num_cpus, ret; + uint32_t idle_cpus; + + mutex_lock(&isolated_cpus_lock); + if (acpi_pad_pur(handle, &num_cpus)) { + mutex_unlock(&isolated_cpus_lock); + return; + } + ret = acpi_pad_idle_cpus(num_cpus); + idle_cpus = acpi_pad_idle_cpus_num(); + if (!ret) + acpi_pad_ost(handle, 0, idle_cpus); + else + acpi_pad_ost(handle, 1, 0); + mutex_unlock(&isolated_cpus_lock); +} + +static void acpi_pad_notify(acpi_handle handle, u32 event, + void *data) +{ + struct acpi_device *device = data; + + switch (event) { + case ACPI_PROCESSOR_AGGREGATOR_NOTIFY: + acpi_pad_handle_notify(handle); + acpi_bus_generate_proc_event(device, event, 0); + acpi_bus_generate_netlink_event(device->pnp.device_class, + dev_name(&device->dev), event, 0); + break; + default: + printk(KERN_WARNING"Unsupported event [0x%x]\n", event); + break; + } +} + +static int acpi_pad_add(struct acpi_device *device) +{ + acpi_status status; + + strcpy(acpi_device_name(device), ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME); + strcpy(acpi_device_class(device), ACPI_PROCESSOR_AGGREGATOR_CLASS); + + if (acpi_pad_add_sysfs(device)) + return -ENODEV; + + status = acpi_install_notify_handler(device->handle, + ACPI_DEVICE_NOTIFY, acpi_pad_notify, device); + if (ACPI_FAILURE(status)) { + acpi_pad_remove_sysfs(device); + return -ENODEV; + } + + return 0; +} + +static int acpi_pad_remove(struct acpi_device *device, + int type) +{ + mutex_lock(&isolated_cpus_lock); + acpi_pad_idle_cpus(0); + mutex_unlock(&isolated_cpus_lock); + + acpi_remove_notify_handler(device->handle, + ACPI_DEVICE_NOTIFY, acpi_pad_notify); + acpi_pad_remove_sysfs(device); + return 0; +} + +static const struct acpi_device_id pad_device_ids[] = { + {"ACPI000C", 0}, + {"", 0}, +}; +MODULE_DEVICE_TABLE(acpi, pad_device_ids); + +static struct acpi_driver acpi_pad_driver = { + .name = "processor_aggregator", + .class = ACPI_PROCESSOR_AGGREGATOR_CLASS, + .ids = pad_device_ids, + .ops = { + .add = acpi_pad_add, + .remove = acpi_pad_remove, + }, +}; 
+ +static int __init acpi_pad_init(void) +{ + power_saving_mwait_init(); + if (power_saving_mwait_eax == 0) + return -EINVAL; + + return acpi_bus_register_driver(&acpi_pad_driver); +} + +static void __exit acpi_pad_exit(void) +{ + acpi_bus_unregister_driver(&acpi_pad_driver); +} + +module_init(acpi_pad_init); +module_exit(acpi_pad_exit); +MODULE_AUTHOR("Shaohua Li"); +MODULE_DESCRIPTION("ACPI Processor Aggregator Driver"); +MODULE_LICENSE("GPL"); -- cgit v1.2.3 From 54a246ff21b543bf3d8d5d064708dc7782403e32 Mon Sep 17 00:00:00 2001 From: Nicolas Pitre Date: Tue, 11 Aug 2009 23:28:51 -0400 Subject: [ARM] add MAINTAINERS entry for Orion/Kirkwood/etc. Signed-off-by: Nicolas Pitre --- MAINTAINERS | 12 ++++++++++++ 1 file changed, 12 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d6befb2c470f..33d0ec494e34 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -685,6 +685,18 @@ ARM/MAGICIAN MACHINE SUPPORT M: Philipp Zabel S: Maintained +ARM/Marvell Loki/Kirkwood/MV78xx0/Orion SOC support +M: Lennert Buytenhek +M: Nicolas Pitre +L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +T: git git://git.marvell.com/orion +S: Maintained +F: arch/arm/mach-loki/ +F: arch/arm/mach-kirkwood/ +F: arch/arm/mach-mv78xx0/ +F: arch/arm/mach-orion5x/ +F: arch/arm/plat-orion/ + ARM/MIOA701 MACHINE SUPPORT M: Robert Jarzmik L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) -- cgit v1.2.3 From 34b921cf6fb4014e0e8d414492959a4725049000 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Fri, 7 Aug 2009 13:16:15 -0700 Subject: MAINTAINERS: NETWORKING [WIRELESS] additional patterns Added file patterns drivers/net/wireless and net/mac80211 (and net/rfkill -- JWL) Signed-off-by: Joe Perches Signed-off-by: John W. Linville --- MAINTAINERS | 3 +++ 1 file changed, 3 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 22af4965c16c..7b10f42c389b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3583,9 +3583,12 @@ M: "John W. Linville" L: linux-wireless@vger.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-2.6.git S: Maintained +F: net/mac80211/ +F: net/rfkill/ F: net/wireless/ F: include/net/ieee80211* F: include/linux/wireless.h +F: drivers/net/wireless/ NETWORKING DRIVERS L: netdev@vger.kernel.org -- cgit v1.2.3 From 5e68ff6563ef79d87296c70f8eb2bee454d3fe75 Mon Sep 17 00:00:00 2001 From: Luciano Coelho Date: Mon, 10 Aug 2009 11:15:13 +0300 Subject: MAINTAINERS: add information for wl1271 wireless driver Add maintainer information section for the wl1271 wireless driver and fix the information for wl1271. Signed-off-by: Luciano Coelho Acked-by: Kalle Valo Signed-off-by: John W. 
Linville --- MAINTAINERS | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 7b10f42c389b..866253d7abd7 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5580,7 +5580,16 @@ L: linux-wireless@vger.kernel.org W: http://wireless.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git S: Maintained -F: drivers/net/wireless/wl12xx/wl1251* +F: drivers/net/wireless/wl12xx/* +X: drivers/net/wireless/wl12xx/wl1271* + +WL1271 WIRELESS DRIVER +M: Luciano Coelho +L: linux-wireless@vger.kernel.org +W: http://wireless.kernel.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git +S: Maintained +F: drivers/net/wireless/wl12xx/wl1271* WL3501 WIRELESS PCMCIA CARD DRIVER M: Arnaldo Carvalho de Melo -- cgit v1.2.3 From 57a473f2f97cf3bca78df08aac2f438ddef03bee Mon Sep 17 00:00:00 2001 From: Leo Chen Date: Wed, 12 Aug 2009 22:08:48 +0100 Subject: ARM: 5671/1: bcmring: add maintainer entry add maintainer entry Signed-off-by: Leo Chen Signed-off-by: Russell King --- MAINTAINERS | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index fa2a16def17a..a67a99857e2a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -603,6 +603,23 @@ L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) W: http://maxim.org.za/at91_26.html S: Maintained +ARM/BCMRING ARM ARCHITECTURE +P: Leo Chen +P: Scott Branden +L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +S: Maintained +F: arch/arm/mach-bcmring + +ARM/BCMRING MTD NAND DRIVER +P: Leo Chen +P: Scott Branden +L: linux-mtd@lists.infradead.org +S: Maintained +F: drivers/mtd/nand/bcm_umi_nand.c +F: drivers/mtd/nand/bcm_umi_bch.c +F: drivers/mtd/nand/bcm_umi_hamming.c +F: drivers/mtd/nand/nand_bcm_umi.h + ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE P: Lennert Buytenhek M: kernel@wantstofly.org -- cgit v1.2.3 From f00f510ab33f3e689e35ccdbb8dc264168aa5250 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 18 Aug 2009 15:21:50 -0700 Subject: iop: downgrade maintenance status iop support is in maintenance-only mode. 
Cc: Lennert Buytenhek Signed-off-by: Dan Williams --- MAINTAINERS | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 2c4326c0de9a..e2f7199c532f 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -631,24 +631,24 @@ ARM/INTEL IOP32X ARM ARCHITECTURE M: Lennert Buytenhek M: Dan Williams L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) -S: Supported +S: Maintained ARM/INTEL IOP33X ARM ARCHITECTURE M: Dan Williams L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) -S: Supported +S: Maintained ARM/INTEL IOP13XX ARM ARCHITECTURE M: Lennert Buytenhek M: Dan Williams L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) -S: Supported +S: Maintained ARM/INTEL IQ81342EX MACHINE SUPPORT M: Lennert Buytenhek M: Dan Williams L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) -S: Supported +S: Maintained ARM/INTEL IXP2000 ARM ARCHITECTURE M: Lennert Buytenhek @@ -669,7 +669,7 @@ ARM/INTEL XSC3 (MANZANO) ARM CORE M: Lennert Buytenhek M: Dan Williams L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) -S: Supported +S: Maintained ARM/IP FABRICS DOUBLE ESPRESSO MACHINE SUPPORT M: Lennert Buytenhek @@ -2619,7 +2619,7 @@ F: include/linux/intel-iommu.h INTEL IOP-ADMA DMA DRIVER M: Dan Williams -S: Supported +S: Maintained F: drivers/dma/iop-adma.c INTEL IXP4XX QMGR, NPE, ETHERNET and HSS SUPPORT -- cgit v1.2.3 From a2c3f6567c9ac327f1ef1272551f3a7595ec885e Mon Sep 17 00:00:00 2001 From: Lennert Buytenhek Date: Tue, 18 Aug 2009 05:13:48 +0200 Subject: MAINTAINERS: add information for mwl8k wireless driver Signed-off-by: Lennert Buytenhek Signed-off-by: John W. Linville --- MAINTAINERS | 6 ++++++ 1 file changed, 6 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 866253d7abd7..9b55c661c3fb 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3272,6 +3272,12 @@ S: Supported F: drivers/net/mv643xx_eth.* F: include/linux/mv643xx.h +MARVELL MWL8K WIRELESS DRIVER +M: Lennert Buytenhek +L: linux-wireless@vger.kernel.org +S: Supported +F: drivers/net/wireless/mwl8k.c + MARVELL SOC MMC/SD/SDIO CONTROLLER DRIVER M: Nicolas Pitre S: Maintained -- cgit v1.2.3 From 60df75c169c5d42f4eef03d80e65c3fe223a1620 Mon Sep 17 00:00:00 2001 From: James Bottomley Date: Thu, 13 Aug 2009 19:23:12 +0000 Subject: [SCSI] update MAINTAINERS with new email Novell is now funding SCSI work, so the MAINTAINERS file should reflect this. Signed-off-by: James Bottomley --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 60299a9a7adb..b841e3c20e9b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4408,7 +4408,7 @@ F: drivers/scsi/sg.c F: include/scsi/sg.h SCSI SUBSYSTEM -M: "James E.J. Bottomley" +M: "James E.J. 
Bottomley" L: linux-scsi@vger.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-rc-fixes-2.6.git -- cgit v1.2.3 From 4c5adde7943b982d22a7bf711654fbb5cb810667 Mon Sep 17 00:00:00 2001 From: Kevin Hilman Date: Fri, 19 Jun 2009 12:02:33 -0700 Subject: MAINTAINERS: add entry for TI DaVinci machine support Signed-off-by: Kevin Hilman --- MAINTAINERS | 6 ++++++ 1 file changed, 6 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 60299a9a7adb..86e78284394e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4590,6 +4590,12 @@ F: arch/arm/mach-s3c2410/ F: drivers/*/*s3c2410* F: drivers/*/*/*s3c2410* +TI DAVINCI MACHINE SUPPORT +P: Kevin Hilman +M: davinci-linux-open-source@linux.davincidsp.com +S: Supported +F: arch/arm/mach-davinci + SIS 190 ETHERNET DRIVER M: Francois Romieu L: netdev@vger.kernel.org -- cgit v1.2.3 From c06f51eab8e652abebb698b1ead4c5585f820ef9 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 26 Aug 2009 08:47:47 +0000 Subject: MAINTAINERS: update information for sfc network driver Based upon a patch by Ben Hutchings. Signed-off-by: David S. Miller --- MAINTAINERS | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 9b55c661c3fb..baa0aa812be8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4527,9 +4527,10 @@ S: Supported F: drivers/net/benet/ SFC NETWORK DRIVER -P: Steve Hodgson -P: Ben Hutchings -M: Robert Stonehouse +M: Solarflare linux maintainers +M: Steve Hodgson +M: Ben Hutchings +L: netdev@vger.kernel.org S: Supported F: drivers/net/sfc/ -- cgit v1.2.3 From a0bf797ff10cdbc15e8abe1309e7ab397f61691f Mon Sep 17 00:00:00 2001 From: Reinette Chatre Date: Fri, 21 Aug 2009 14:03:51 -0700 Subject: MAINTAINERS: Update ipw2x00 and iwlwifi entries Update MAINTAINERS file to reflect current maintenance status of ipw2x00 drivers. We remove James's name as he is not involved with this project anymore. We also update the Status to "Odd Fixes". This has been true for a while now, we have to make it official. There is also a new email address with which all relevant people can be reached. The same email address should be used for iwlwifi. Signed-off-by: Reinette Chatre Cc: James Ketrenos Acked-by: James Ketrenos Acked-by: Zhu Yi Signed-off-by: John W. 
Linville --- MAINTAINERS | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 9b55c661c3fb..8d3a91f4ba4a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2653,25 +2653,21 @@ F: drivers/net/ixgbe/ INTEL PRO/WIRELESS 2100 NETWORK CONNECTION SUPPORT M: Zhu Yi -M: James Ketrenos M: Reinette Chatre +M: Intel Linux Wireless L: linux-wireless@vger.kernel.org -L: ipw2100-devel@lists.sourceforge.net -W: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel W: http://ipw2100.sourceforge.net -S: Supported +S: Odd Fixes F: Documentation/networking/README.ipw2100 F: drivers/net/wireless/ipw2x00/ipw2100.* INTEL PRO/WIRELESS 2915ABG NETWORK CONNECTION SUPPORT M: Zhu Yi -M: James Ketrenos M: Reinette Chatre +M: Intel Linux Wireless L: linux-wireless@vger.kernel.org -L: ipw2100-devel@lists.sourceforge.net -W: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel W: http://ipw2200.sourceforge.net -S: Supported +S: Odd Fixes F: Documentation/networking/README.ipw2200 F: drivers/net/wireless/ipw2x00/ipw2200.* @@ -2688,8 +2684,8 @@ F: include/linux/wimax/i2400m.h INTEL WIRELESS WIFI LINK (iwlwifi) M: Zhu Yi M: Reinette Chatre +M: Intel Linux Wireless L: linux-wireless@vger.kernel.org -L: ipw3945-devel@lists.sourceforge.net W: http://intellinuxwireless.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi-2.6.git S: Supported -- cgit v1.2.3 From 6349d9979beba240fe7182872cb547250264b865 Mon Sep 17 00:00:00 2001 From: Len Brown Date: Fri, 14 Aug 2009 15:07:14 -0400 Subject: SFI: Simple Firmware Interface - MAINTAINERS, Kconfig CONFIG_SFI=y enables the kernel to boot and run optimally on platforms that support the Simple Firmware Interface. Thanks to Jacob Pan for prototyping the initial Linux SFI support, and to Feng Tang for Linux bring-up and debug both in emulation and on Moorestown hardware. See http://simplefirmware.org for more information on SFI. Signed-off-by: Len Brown --- MAINTAINERS | 12 ++++++++++++ drivers/sfi/Kconfig | 17 +++++++++++++++++ 2 files changed, 29 insertions(+) create mode 100644 drivers/sfi/Kconfig (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 60299a9a7adb..055ba880aaec 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4575,6 +4575,18 @@ L: linux-pci@vger.kernel.org S: Supported F: drivers/pci/hotplug/shpchp* +SIMPLE FIRMWARE INTERFACE (SFI) +P: Len Brown +M: lenb@kernel.org +L: sfi-devel@simplefirmware.org +W: http://simplefirmware.org/ +T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-sfi-2.6.git +S: Supported +F: arch/x86/kernel/*sfi* +F: drivers/sfi/ +F: include/linux/sfi*.h + + SIMTEC EB110ATX (Chalice CATS) P: Ben Dooks M: Vincent Sanders diff --git a/drivers/sfi/Kconfig b/drivers/sfi/Kconfig new file mode 100644 index 000000000000..dd115121e0b6 --- /dev/null +++ b/drivers/sfi/Kconfig @@ -0,0 +1,17 @@ +# +# SFI Configuration +# + +menuconfig SFI + bool "SFI (Simple Firmware Interface) Support" + ---help--- + The Simple Firmware Interface (SFI) provides a lightweight method + for platform firmware to pass information to the operating system + via static tables in memory. Kernel SFI support is required to + boot on SFI-only platforms. Currently, all SFI-only platforms are + based on the 2nd generation Intel Atom processor platform, + code-named Moorestown. + + For more information, see http://simplefirmware.org + + Say 'Y' here to enable the kernel to boot on SFI-only platforms. 
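As a rough illustration of what "static tables in memory" means here, every SFI table begins with a small fixed header. The sketch below is based on the layout published at simplefirmware.org; the field names are illustrative rather than the kernel's own, and the ACPI-style byte-sum checksum rule is an assumption of the sketch, not something this patch shows:

#include <stdint.h>

/* Sketch of an SFI table header per the spec at simplefirmware.org.
 * Every SFI table starts with this 24-byte header; by the usual
 * ACPI-style convention the byte sum of the entire table, including
 * the checksum field itself, must be 0 modulo 256. */
struct sfi_table_header {
	char     sig[4];	/* e.g. "SYST" for the system table */
	uint32_t len;		/* total table length in bytes */
	uint8_t  rev;
	uint8_t  csum;
	char     oem_id[6];
	char     oem_table_id[8];
} __attribute__((packed));

/* Returns 1 if the table checksums correctly, 0 otherwise. */
static int sfi_table_ok(const struct sfi_table_header *th)
{
	const uint8_t *p = (const uint8_t *)th;
	uint8_t sum = 0;
	uint32_t i;

	for (i = 0; i < th->len; i++)
		sum += p[i];
	return sum == 0;
}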
-- cgit v1.2.3 From e6cc0fd1e31cfe48e207de78742ccdf301369bf3 Mon Sep 17 00:00:00 2001 From: Roland Dreier Date: Mon, 7 Sep 2009 21:54:38 -0700 Subject: MAINTAINERS: InfiniBand/RDMA mailing list transition to vger InfiniBand/RDMA development discussion is moving from general@lists.openfabrics.org to linux-rdma@vger.kernel.org. Signed-off-by: Roland Dreier --- MAINTAINERS | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8dca9d89c6c1..989ff1149390 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -439,7 +439,7 @@ F: drivers/hwmon/ams/ AMSO1100 RNIC DRIVER M: Tom Tucker M: Steve Wise -L: general@lists.openfabrics.org +L: linux-rdma@vger.kernel.org S: Maintained F: drivers/infiniband/hw/amso1100/ @@ -1494,7 +1494,7 @@ F: drivers/net/cxgb3/ CXGB3 IWARP RNIC DRIVER (IW_CXGB3) M: Steve Wise -L: general@lists.openfabrics.org +L: linux-rdma@vger.kernel.org W: http://www.openfabrics.org S: Supported F: drivers/infiniband/hw/cxgb3/ @@ -1868,7 +1868,7 @@ F: fs/efs/ EHCA (IBM GX bus InfiniBand adapter) DRIVER M: Hoang-Nam Nguyen M: Christoph Raisch -L: general@lists.openfabrics.org +L: linux-rdma@vger.kernel.org S: Supported F: drivers/infiniband/hw/ehca/ @@ -2552,7 +2552,7 @@ INFINIBAND SUBSYSTEM M: Roland Dreier M: Sean Hefty M: Hal Rosenstock -L: general@lists.openfabrics.org (moderated for non-subscribers) +L: linux-rdma@vger.kernel.org W: http://www.openib.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband.git S: Supported @@ -2729,7 +2729,7 @@ F: drivers/net/ipg.c IPATH DRIVER M: Ralph Campbell -L: general@lists.openfabrics.org +L: linux-rdma@vger.kernel.org T: git git://git.qlogic.com/ipath-linux-2.6 S: Supported F: drivers/infiniband/hw/ipath/ @@ -3485,7 +3485,7 @@ F: drivers/scsi/NCR_D700.* NETEFFECT IWARP RNIC DRIVER (IW_NES) M: Faisal Latif M: Chien Tung -L: general@lists.openfabrics.org +L: linux-rdma@vger.kernel.org W: http://www.neteffect.com S: Supported F: drivers/infiniband/hw/nes/ -- cgit v1.2.3 From 72c706b775777e8ae546756a5d07ffda4a05ed7b Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 7 Sep 2009 11:34:30 -0700 Subject: MAINTAINERS: Add Atheros Linux wireless drivers home page On Sun, 2009-09-06 at 12:26 -0700, Luis R. Rodriguez wrote: > On Sun, Sep 6, 2009 at 10:59 AM, Joe Perches wrote: > > On Thu, 2009-09-03 at 15:54 -0700, Luis R. Rodriguez wrote: > >> I'm pleased to announce the new home page to Atheros Linux wireless drivers: > >> http://wireless.kernel.org/en/users/Drivers/Atheros > > Perhaps add this to MAINTAINERS? > Fine by me, except ath5k and ath9k also have their own respective page > so those can also be added. (cc's trimmed and maintainers added) Perhaps this instead: Signed-off-by: Joe Perches Acked-by: Luis R. Rodriguez Signed-off-by: John W. Linville --- MAINTAINERS | 2 ++ 1 file changed, 2 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 0ab47e7fc2ec..6e2e12fd9ef8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -876,6 +876,7 @@ M: "Luis R. 
Rodriguez" M: Bob Copeland L: linux-wireless@vger.kernel.org L: ath5k-devel@lists.ath5k.org +W: http://wireless.kernel.org/en/users/Drivers/ath5k S: Maintained F: drivers/net/wireless/ath/ath5k/ @@ -887,6 +888,7 @@ M: Vasanthakumar Thiagarajan M: Senthil Balasubramanian L: linux-wireless@vger.kernel.org L: ath9k-devel@lists.ath9k.org +W: http://wireless.kernel.org/en/users/Drivers/ath9k S: Supported F: drivers/net/wireless/ath/ath9k/ -- cgit v1.2.3 From 5d783a2d592ce5fdc01292470059ed15998e404e Mon Sep 17 00:00:00 2001 From: Marek Vasut Date: Thu, 16 Jul 2009 13:26:48 +0200 Subject: [ARM] pxa: Palm Tungsten|C initial support This patch adds basic support for Palm Tungsten|C handheld. Signed-off-by: Marek Vasut Signed-off-by: Eric Miao --- MAINTAINERS | 5 +- arch/arm/mach-pxa/Kconfig | 9 + arch/arm/mach-pxa/Makefile | 1 + arch/arm/mach-pxa/include/mach/palmtc.h | 86 ++++++ arch/arm/mach-pxa/palmtc.c | 462 ++++++++++++++++++++++++++++++++ 5 files changed, 561 insertions(+), 2 deletions(-) create mode 100644 arch/arm/mach-pxa/include/mach/palmtc.h create mode 100644 arch/arm/mach-pxa/palmtc.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8dca9d89c6c1..efe015ddc567 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -706,8 +706,9 @@ M: Dmitry Eremin-Solenikov M: Dirk Opfer S: Maintained -ARM/PALMTX,PALMT5,PALMLD,PALMTE2 SUPPORT -M: Marek Vasut +ARM/PALMTX,PALMT5,PALMLD,PALMTE2,PALMTC SUPPORT +P: Marek Vasut +M: marek.vasut@gmail.com W: http://hackndev.com S: Maintained diff --git a/arch/arm/mach-pxa/Kconfig b/arch/arm/mach-pxa/Kconfig index 32e44b3a8fbe..d1dcb1cf0758 100644 --- a/arch/arm/mach-pxa/Kconfig +++ b/arch/arm/mach-pxa/Kconfig @@ -376,6 +376,15 @@ config MACH_PALMTE2 Say Y here if you intend to run this kernel on a Palm Tungsten|E2 handheld computer. +config MACH_PALMTC + bool "Palm Tungsten|C" + default y + depends on ARCH_PXA_PALM + select PXA25x + help + Say Y here if you intend to run this kernel on a Palm Tungsten|C + handheld computer. + config MACH_PALMT5 bool "Palm Tungsten|T5" default y diff --git a/arch/arm/mach-pxa/Makefile b/arch/arm/mach-pxa/Makefile index d4c6122a342f..aa8f83e9421e 100644 --- a/arch/arm/mach-pxa/Makefile +++ b/arch/arm/mach-pxa/Makefile @@ -58,6 +58,7 @@ obj-$(CONFIG_MACH_E750) += e750.o obj-$(CONFIG_MACH_E400) += e400.o obj-$(CONFIG_MACH_E800) += e800.o obj-$(CONFIG_MACH_PALMTE2) += palmte2.o +obj-$(CONFIG_MACH_PALMTC) += palmtc.o obj-$(CONFIG_MACH_PALMT5) += palmt5.o obj-$(CONFIG_MACH_PALMTX) += palmtx.o obj-$(CONFIG_MACH_PALMLD) += palmld.o diff --git a/arch/arm/mach-pxa/include/mach/palmtc.h b/arch/arm/mach-pxa/include/mach/palmtc.h new file mode 100644 index 000000000000..3dc9b074ab46 --- /dev/null +++ b/arch/arm/mach-pxa/include/mach/palmtc.h @@ -0,0 +1,86 @@ +/* + * linux/include/asm-arm/arch-pxa/palmtc-gpio.h + * + * GPIOs and interrupts for Palm Tungsten|C Handheld Computer + * + * Authors: Alex Osborne + * Marek Vasut + * Holger Bocklet + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#ifndef _INCLUDE_PALMTC_H_ +#define _INCLUDE_PALMTC_H_ + +/** HERE ARE GPIOs **/ + +/* GPIOs */ +#define GPIO_NR_PALMTC_EARPHONE_DETECT 2 +#define GPIO_NR_PALMTC_CRADLE_DETECT 5 +#define GPIO_NR_PALMTC_HOTSYNC_BUTTON 7 + +/* SD/MMC */ +#define GPIO_NR_PALMTC_SD_DETECT_N 12 +#define GPIO_NR_PALMTC_SD_POWER 32 +#define GPIO_NR_PALMTC_SD_READONLY 54 + +/* WLAN */ +#define GPIO_NR_PALMTC_PCMCIA_READY 13 +#define GPIO_NR_PALMTC_PCMCIA_PWRREADY 14 +#define GPIO_NR_PALMTC_PCMCIA_POWER1 15 +#define GPIO_NR_PALMTC_PCMCIA_POWER2 33 +#define GPIO_NR_PALMTC_PCMCIA_POWER3 55 +#define GPIO_NR_PALMTC_PCMCIA_RESET 78 + +/* UDC */ +#define GPIO_NR_PALMTC_USB_DETECT_N 4 +#define GPIO_NR_PALMTC_USB_POWER 36 + +/* LCD/BACKLIGHT */ +#define GPIO_NR_PALMTC_BL_POWER 16 +#define GPIO_NR_PALMTC_LCD_POWER 44 +#define GPIO_NR_PALMTC_LCD_BLANK 38 + +/* UART */ +#define GPIO_NR_PALMTC_RS232_POWER 37 + +/* IRDA */ +#define GPIO_NR_PALMTC_IR_DISABLE 45 + +/* IRQs */ +#define IRQ_GPIO_PALMTC_SD_DETECT_N IRQ_GPIO(GPIO_NR_PALMTC_SD_DETECT_N) +#define IRQ_GPIO_PALMTC_WLAN_READY IRQ_GPIO(GPIO_NR_PALMTC_WLAN_READY) + +/* UCB1400 GPIOs */ +#define GPIO_NR_PALMTC_POWER_DETECT (0x80 | 0x00) +#define GPIO_NR_PALMTC_HEADPHONE_DETECT (0x80 | 0x01) +#define GPIO_NR_PALMTC_SPEAKER_ENABLE (0x80 | 0x03) +#define GPIO_NR_PALMTC_VIBRA_POWER (0x80 | 0x05) +#define GPIO_NR_PALMTC_LED_POWER (0x80 | 0x07) + +/** HERE ARE INIT VALUES **/ +#define PALMTC_UCB1400_GPIO_OFFSET 0x80 + +/* BATTERY */ +#define PALMTC_BAT_MAX_VOLTAGE 4000 /* 4.00V maximum voltage */ +#define PALMTC_BAT_MIN_VOLTAGE 3550 /* 3.55V critical voltage */ +#define PALMTC_BAT_MAX_CURRENT 0 /* unknown */ +#define PALMTC_BAT_MIN_CURRENT 0 /* unknown */ +#define PALMTC_BAT_MAX_CHARGE 1 /* unknown */ +#define PALMTC_BAT_MIN_CHARGE 1 /* unknown */ +#define PALMTC_MAX_LIFE_MINS 240 /* on-life in minutes */ + +#define PALMTC_BAT_MEASURE_DELAY (HZ * 1) + +/* BACKLIGHT */ +#define PALMTC_MAX_INTENSITY 0xFE +#define PALMTC_DEFAULT_INTENSITY 0x7E +#define PALMTC_LIMIT_MASK 0x7F +#define PALMTC_PRESCALER 0x3F +#define PALMTC_PERIOD_NS 3500 + +#endif diff --git a/arch/arm/mach-pxa/palmtc.c b/arch/arm/mach-pxa/palmtc.c new file mode 100644 index 000000000000..f70f1f820eed --- /dev/null +++ b/arch/arm/mach-pxa/palmtc.c @@ -0,0 +1,462 @@ +/* + * linux/arch/arm/mach-pxa/palmtc.c + * + * Support for the Palm Tungsten|C + * + * Author: Marek Vasut + * + * Based on work of: + * Petr Blaha + * Chetan S. Kumar + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "generic.h" +#include "devices.h" + +/****************************************************************************** + * Pin configuration + ******************************************************************************/ +static unsigned long palmtc_pin_config[] __initdata = { + /* MMC */ + GPIO6_MMC_CLK, + GPIO8_MMC_CS0, + GPIO12_GPIO, /* detect */ + GPIO32_GPIO, /* power */ + GPIO54_GPIO, /* r/o switch */ + + /* PCMCIA */ + GPIO52_nPCE_1, + GPIO53_nPCE_2, + GPIO50_nPIOR, + GPIO51_nPIOW, + GPIO49_nPWE, + GPIO48_nPOE, + GPIO52_nPCE_1, + GPIO53_nPCE_2, + GPIO57_nIOIS16, + GPIO56_nPWAIT, + + /* AC97 */ + GPIO28_AC97_BITCLK, + GPIO29_AC97_SDATA_IN_0, + GPIO30_AC97_SDATA_OUT, + GPIO31_AC97_SYNC, + + /* IrDA */ + GPIO45_GPIO, /* ir disable */ + GPIO46_FICP_RXD, + GPIO47_FICP_TXD, + + /* PWM */ + GPIO17_PWM1_OUT, + + /* USB */ + GPIO4_GPIO, /* detect */ + GPIO36_GPIO, /* pullup */ + + /* LCD */ + GPIO58_LCD_LDD_0, + GPIO59_LCD_LDD_1, + GPIO60_LCD_LDD_2, + GPIO61_LCD_LDD_3, + GPIO62_LCD_LDD_4, + GPIO63_LCD_LDD_5, + GPIO64_LCD_LDD_6, + GPIO65_LCD_LDD_7, + GPIO66_LCD_LDD_8, + GPIO67_LCD_LDD_9, + GPIO68_LCD_LDD_10, + GPIO69_LCD_LDD_11, + GPIO70_LCD_LDD_12, + GPIO71_LCD_LDD_13, + GPIO72_LCD_LDD_14, + GPIO73_LCD_LDD_15, + GPIO74_LCD_FCLK, + GPIO75_LCD_LCLK, + GPIO76_LCD_PCLK, + GPIO77_LCD_BIAS, + + /* MATRIX KEYPAD */ + GPIO0_GPIO | WAKEUP_ON_EDGE_BOTH, /* in 0 */ + GPIO9_GPIO | WAKEUP_ON_EDGE_BOTH, /* in 1 */ + GPIO10_GPIO | WAKEUP_ON_EDGE_BOTH, /* in 2 */ + GPIO11_GPIO | WAKEUP_ON_EDGE_BOTH, /* in 3 */ + GPIO18_GPIO | MFP_LPM_DRIVE_LOW, /* out 0 */ + GPIO19_GPIO | MFP_LPM_DRIVE_LOW, /* out 1 */ + GPIO20_GPIO | MFP_LPM_DRIVE_LOW, /* out 2 */ + GPIO21_GPIO | MFP_LPM_DRIVE_LOW, /* out 3 */ + GPIO22_GPIO | MFP_LPM_DRIVE_LOW, /* out 4 */ + GPIO23_GPIO | MFP_LPM_DRIVE_LOW, /* out 5 */ + GPIO24_GPIO | MFP_LPM_DRIVE_LOW, /* out 6 */ + GPIO25_GPIO | MFP_LPM_DRIVE_LOW, /* out 7 */ + GPIO26_GPIO | MFP_LPM_DRIVE_LOW, /* out 8 */ + GPIO27_GPIO | MFP_LPM_DRIVE_LOW, /* out 9 */ + GPIO79_GPIO | MFP_LPM_DRIVE_LOW, /* out 10 */ + GPIO80_GPIO | MFP_LPM_DRIVE_LOW, /* out 11 */ + + /* PXA GPIO KEYS */ + GPIO7_GPIO | WAKEUP_ON_EDGE_BOTH, /* hotsync button on cradle */ + + /* MISC */ + GPIO1_RST, /* reset */ + GPIO2_GPIO, /* earphone detect */ + GPIO16_GPIO, /* backlight switch */ +}; + +/****************************************************************************** + * SD/MMC card controller + ******************************************************************************/ +static struct pxamci_platform_data palmtc_mci_platform_data = { + .ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, + .gpio_power = GPIO_NR_PALMTC_SD_POWER, + .gpio_card_ro = GPIO_NR_PALMTC_SD_READONLY, + .gpio_card_detect = GPIO_NR_PALMTC_SD_DETECT_N, + .detect_delay = 20, +}; + +/****************************************************************************** + * GPIO keys + ******************************************************************************/ +static struct gpio_keys_button palmtc_pxa_buttons[] = { + {KEY_F8, GPIO_NR_PALMTC_HOTSYNC_BUTTON, 1, "HotSync Button", EV_KEY, 1}, +}; + +static struct gpio_keys_platform_data palmtc_pxa_keys_data = { + .buttons = palmtc_pxa_buttons, + .nbuttons = ARRAY_SIZE(palmtc_pxa_buttons), +}; + +static struct platform_device palmtc_pxa_keys = { + .name = "gpio-keys", + .id = -1, + 
.dev = { + .platform_data = &palmtc_pxa_keys_data, + }, +}; + +/****************************************************************************** + * Backlight + ******************************************************************************/ +static int palmtc_backlight_init(struct device *dev) +{ + int ret; + + ret = gpio_request(GPIO_NR_PALMTC_BL_POWER, "BL POWER"); + if (ret) + goto err; + ret = gpio_direction_output(GPIO_NR_PALMTC_BL_POWER, 1); + if (ret) + goto err2; + + return 0; + +err2: + gpio_free(GPIO_NR_PALMTC_BL_POWER); +err: + return ret; +} + +static int palmtc_backlight_notify(int brightness) +{ + /* backlight is on when GPIO16 AF0 is high */ + gpio_set_value(GPIO_NR_PALMTC_BL_POWER, brightness); + return brightness; +} + +static void palmtc_backlight_exit(struct device *dev) +{ + gpio_free(GPIO_NR_PALMTC_BL_POWER); +} + +static struct platform_pwm_backlight_data palmtc_backlight_data = { + .pwm_id = 1, + .max_brightness = PALMTC_MAX_INTENSITY, + .dft_brightness = PALMTC_MAX_INTENSITY, + .pwm_period_ns = PALMTC_PERIOD_NS, + .init = palmtc_backlight_init, + .notify = palmtc_backlight_notify, + .exit = palmtc_backlight_exit, +}; + +static struct platform_device palmtc_backlight = { + .name = "pwm-backlight", + .dev = { + .parent = &pxa25x_device_pwm1.dev, + .platform_data = &palmtc_backlight_data, + }, +}; + +/****************************************************************************** + * IrDA + ******************************************************************************/ +static int palmtc_irda_startup(struct device *dev) +{ + int err; + err = gpio_request(GPIO_NR_PALMTC_IR_DISABLE, "IR DISABLE"); + if (err) + goto err; + err = gpio_direction_output(GPIO_NR_PALMTC_IR_DISABLE, 1); + if (err) + gpio_free(GPIO_NR_PALMTC_IR_DISABLE); +err: + return err; +} + +static void palmtc_irda_shutdown(struct device *dev) +{ + gpio_free(GPIO_NR_PALMTC_IR_DISABLE); +} + +static void palmtc_irda_transceiver_mode(struct device *dev, int mode) +{ + gpio_set_value(GPIO_NR_PALMTC_IR_DISABLE, mode & IR_OFF); + pxa2xx_transceiver_mode(dev, mode); +} + +static struct pxaficp_platform_data palmtc_ficp_platform_data = { + .startup = palmtc_irda_startup, + .shutdown = palmtc_irda_shutdown, + .transceiver_cap = IR_SIRMODE | IR_FIRMODE | IR_OFF, + .transceiver_mode = palmtc_irda_transceiver_mode, +}; + +/****************************************************************************** + * Keyboard + ******************************************************************************/ +static const uint32_t palmtc_matrix_keys[] = { + KEY(0, 0, KEY_F1), + KEY(0, 1, KEY_X), + KEY(0, 2, KEY_POWER), + KEY(0, 3, KEY_TAB), + KEY(0, 4, KEY_A), + KEY(0, 5, KEY_Q), + KEY(0, 6, KEY_LEFTSHIFT), + KEY(0, 7, KEY_Z), + KEY(0, 8, KEY_S), + KEY(0, 9, KEY_W), + KEY(0, 10, KEY_E), + KEY(0, 11, KEY_UP), + + KEY(1, 0, KEY_F2), + KEY(1, 1, KEY_DOWN), + KEY(1, 3, KEY_D), + KEY(1, 4, KEY_C), + KEY(1, 5, KEY_F), + KEY(1, 6, KEY_R), + KEY(1, 7, KEY_SPACE), + KEY(1, 8, KEY_V), + KEY(1, 9, KEY_G), + KEY(1, 10, KEY_T), + KEY(1, 11, KEY_LEFT), + + KEY(2, 0, KEY_F3), + KEY(2, 1, KEY_LEFTCTRL), + KEY(2, 3, KEY_H), + KEY(2, 4, KEY_Y), + KEY(2, 5, KEY_N), + KEY(2, 6, KEY_J), + KEY(2, 7, KEY_U), + KEY(2, 8, KEY_M), + KEY(2, 9, KEY_K), + KEY(2, 10, KEY_I), + KEY(2, 11, KEY_RIGHT), + + KEY(3, 0, KEY_F4), + KEY(3, 1, KEY_ENTER), + KEY(3, 3, KEY_DOT), + KEY(3, 4, KEY_L), + KEY(3, 5, KEY_O), + KEY(3, 6, KEY_LEFTALT), + KEY(3, 7, KEY_ENTER), + KEY(3, 8, KEY_BACKSPACE), + KEY(3, 9, KEY_P), + KEY(3, 10, KEY_B), + KEY(3, 11, KEY_FN), +}; + +const struct 
matrix_keymap_data palmtc_keymap_data = { + .keymap = palmtc_matrix_keys, + .keymap_size = ARRAY_SIZE(palmtc_matrix_keys), +}; + +const static unsigned int palmtc_keypad_row_gpios[] = { + 0, 9, 10, 11 +}; + +const static unsigned int palmtc_keypad_col_gpios[] = { + 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 79, 80 +}; + +static struct matrix_keypad_platform_data palmtc_keypad_platform_data = { + .keymap_data = &palmtc_keymap_data, + .col_gpios = palmtc_keypad_row_gpios, + .num_col_gpios = 12, + .row_gpios = palmtc_keypad_col_gpios, + .num_row_gpios = 4, + .active_low = 1, + + .debounce_ms = 20, + .col_scan_delay_us = 5, +}; + +static struct platform_device palmtc_keyboard = { + .name = "matrix-keypad", + .id = -1, + .dev = { + .platform_data = &palmtc_keypad_platform_data, + }, +}; + +/****************************************************************************** + * UDC + ******************************************************************************/ +static struct pxa2xx_udc_mach_info palmtc_udc_info __initdata = { + .gpio_vbus = GPIO_NR_PALMTC_USB_DETECT_N, + .gpio_vbus_inverted = 1, + .gpio_pullup = GPIO_NR_PALMTC_USB_POWER, +}; + +/****************************************************************************** + * Touchscreen / Battery / GPIO-extender + ******************************************************************************/ +static struct platform_device palmtc_ucb1400_core = { + .name = "ucb1400_core", + .id = -1, +}; + +/****************************************************************************** + * NOR Flash + ******************************************************************************/ +static struct resource palmtc_flash_resource = { + .start = PXA_CS0_PHYS, + .end = PXA_CS0_PHYS + SZ_16M - 1, + .flags = IORESOURCE_MEM, +}; + +static struct mtd_partition palmtc_flash_parts[] = { + { + .name = "U-Boot Bootloader", + .offset = 0x0, + .size = 0x40000, + }, + { + .name = "Linux Kernel", + .offset = 0x40000, + .size = 0x2c0000, + }, + { + .name = "Filesystem", + .offset = 0x300000, + .size = 0xcc0000, + }, + { + .name = "U-Boot Environment", + .offset = 0xfc0000, + .size = MTDPART_SIZ_FULL, + }, +}; + +static struct physmap_flash_data palmtc_flash_data = { + .width = 4, + .parts = palmtc_flash_parts, + .nr_parts = ARRAY_SIZE(palmtc_flash_parts), +}; + +static struct platform_device palmtc_flash = { + .name = "physmap-flash", + .id = -1, + .resource = &palmtc_flash_resource, + .num_resources = 1, + .dev = { + .platform_data = &palmtc_flash_data, + }, +}; + +/****************************************************************************** + * Framebuffer + ******************************************************************************/ +static struct pxafb_mode_info palmtc_lcd_modes[] = { +{ + .pixclock = 115384, + .xres = 320, + .yres = 320, + .bpp = 16, + + .left_margin = 27, + .right_margin = 7, + .upper_margin = 7, + .lower_margin = 8, + + .hsync_len = 6, + .vsync_len = 1, +}, +}; + +static struct pxafb_mach_info palmtc_lcd_screen = { + .modes = palmtc_lcd_modes, + .num_modes = ARRAY_SIZE(palmtc_lcd_modes), + .lcd_conn = LCD_COLOR_TFT_16BPP | LCD_PCLK_EDGE_FALL, +}; + +/****************************************************************************** + * Machine init + ******************************************************************************/ +static struct platform_device *devices[] __initdata = { + &palmtc_backlight, + &palmtc_ucb1400_core, + &palmtc_keyboard, + &palmtc_pxa_keys, + &palmtc_flash, +}; + +static void __init palmtc_init(void) +{ + 
pxa2xx_mfp_config(ARRAY_AND_SIZE(palmtc_pin_config)); + + set_pxa_fb_info(&palmtc_lcd_screen); + pxa_set_mci_info(&palmtc_mci_platform_data); + pxa_set_udc_info(&palmtc_udc_info); + pxa_set_ac97_info(NULL); + pxa_set_ficp_info(&palmtc_ficp_platform_data); + + platform_add_devices(devices, ARRAY_SIZE(devices)); +}; + +MACHINE_START(PALMTC, "Palm Tungsten|C") + .phys_io = 0x40000000, + .boot_params = 0xa0000100, + .io_pg_offst = (io_p2v(0x40000000) >> 18) & 0xfffc, + .map_io = pxa_map_io, + .init_irq = pxa25x_init_irq, + .timer = &pxa_timer, + .init_machine = palmtc_init +MACHINE_END -- cgit v1.2.3 From 752807871ad27520d31baec1800f1db59e84f2a1 Mon Sep 17 00:00:00 2001 From: Marek Vasut Date: Sat, 22 Aug 2009 00:49:53 +0200 Subject: MAINTAINERS: fix mailing list entries for ARM/Palm devices Signed-off-by: Marek Vasut Acked-by: Sergey Lapin Acked-by: Tomas Cech Signed-off-by: Eric Miao --- MAINTAINERS | 3 +++ 1 file changed, 3 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index efe015ddc567..8e2b4b8cd566 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -709,16 +709,19 @@ S: Maintained ARM/PALMTX,PALMT5,PALMLD,PALMTE2,PALMTC SUPPORT P: Marek Vasut M: marek.vasut@gmail.com +L: linux-arm-kernel@lists.infradead.org W: http://hackndev.com S: Maintained ARM/PALM TREO 680 SUPPORT M: Tomas Cech +L: linux-arm-kernel@lists.infradead.org W: http://hackndev.com S: Maintained ARM/PALMZ72 SUPPORT M: Sergey Lapin +L: linux-arm-kernel@lists.infradead.org W: http://hackndev.com S: Maintained -- cgit v1.2.3 From f08a59f1467b92cd9b8c78961506f69c74bb01a5 Mon Sep 17 00:00:00 2001 From: Alex Elder Date: Fri, 11 Sep 2009 09:51:25 -0500 Subject: xfs: Record new maintainer information Update the MAINTAINERS file entry for XFS. Signed-off-by: Alex Elder Signed-off by: Felix Blyakher --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8dca9d89c6c1..d5bbebacaa21 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5627,7 +5627,7 @@ F: include/xen/ XFS FILESYSTEM P: Silicon Graphics Inc -M: Felix Blyakher +M: Alex Elder M: xfs-masters@oss.sgi.com L: xfs@oss.sgi.com W: http://oss.sgi.com/projects/xfs -- cgit v1.2.3 From 8e616fc8d343bd7f0f0a0c22407fdcb77f6d22b1 Mon Sep 17 00:00:00 2001 From: Marcelo Tosatti Date: Thu, 10 Sep 2009 17:21:34 -0300 Subject: MAINTAINERS: update KVM entry Add myself to KVM MAINTAINERS entry. 
Signed-off-by: Marcelo Tosatti Signed-off-by: Avi Kivity --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8dca9d89c6c1..24f26940b91b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2928,6 +2928,7 @@ F: include/linux/sunrpc/ KERNEL VIRTUAL MACHINE (KVM) M: Avi Kivity +M: Marcelo Tosatti L: kvm@vger.kernel.org W: http://kvm.qumranet.com S: Supported -- cgit v1.2.3 From 89a3681041507773dfee1b88c1c90c8a811a79d3 Mon Sep 17 00:00:00 2001 From: Anil Ravindranath Date: Tue, 25 Aug 2009 17:35:18 -0700 Subject: [SCSI] pmcraid: PMC-Sierra MaxRAID driver to support 6Gb/s SAS RAID controller Signed-off-by: Anil Ravindranath Signed-off-by: James Bottomley --- MAINTAINERS | 8 + drivers/scsi/Kconfig | 6 + drivers/scsi/Makefile | 1 + drivers/scsi/pmcraid.c | 5604 ++++++++++++++++++++++++++++++++++++++++++++++++ drivers/scsi/pmcraid.h | 1029 +++++++++ 5 files changed, 6648 insertions(+) create mode 100644 drivers/scsi/pmcraid.c create mode 100644 drivers/scsi/pmcraid.h (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index b841e3c20e9b..e8fabf86c86a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3970,6 +3970,14 @@ S: Maintained F: drivers/block/pktcdvd.c F: include/linux/pktcdvd.h +PMC SIERRA MaxRAID DRIVER +P: Anil Ravindranath +M: anil_ravindranath@pmc-sierra.com +L: linux-scsi@vger.kernel.org +W: http://www.pmc-sierra.com/ +S: Supported +F: drivers/scsi/pmcraid.* + POSIX CLOCKS and TIMERS M: Thomas Gleixner S: Supported diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 9c23122f755f..82bb3b2d207a 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -1811,6 +1811,12 @@ config ZFCP called zfcp. If you want to compile it as a module, say M here and read . +config SCSI_PMCRAID + tristate "PMC SIERRA Linux MaxRAID adapter support" + depends on PCI && SCSI + ---help--- + This driver supports the PMC SIERRA MaxRAID adapters. + config SCSI_SRP tristate "SCSI RDMA Protocol helper library" depends on SCSI && PCI diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index 25429ea63d0a..61a94af3cee7 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -130,6 +130,7 @@ obj-$(CONFIG_SCSI_MVSAS) += mvsas/ obj-$(CONFIG_PS3_ROM) += ps3rom.o obj-$(CONFIG_SCSI_CXGB3_ISCSI) += libiscsi.o libiscsi_tcp.o cxgb3i/ obj-$(CONFIG_SCSI_BNX2_ISCSI) += libiscsi.o bnx2i/ +obj-$(CONFIG_SCSI_PMCRAID) += pmcraid.o obj-$(CONFIG_ARM) += arm/ diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c new file mode 100644 index 000000000000..4302f06e4ec9 --- /dev/null +++ b/drivers/scsi/pmcraid.c @@ -0,0 +1,5604 @@ +/* + * pmcraid.c -- driver for PMC Sierra MaxRAID controller adapters + * + * Written By: PMC Sierra Corporation + * + * Copyright (C) 2008, 2009 PMC Sierra Inc + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, + * USA + * + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "pmcraid.h" + +/* + * Module configuration parameters + */ +static unsigned int pmcraid_debug_log; +static unsigned int pmcraid_disable_aen; +static unsigned int pmcraid_log_level = IOASC_LOG_LEVEL_MUST; + +/* + * Data structures to support multiple adapters by the LLD. + * pmcraid_adapter_count - count of configured adapters + */ +static atomic_t pmcraid_adapter_count = ATOMIC_INIT(0); + +/* + * Supporting user-level control interface through IOCTL commands. + * pmcraid_major - major number to use + * pmcraid_minor - minor number(s) to use + */ +static unsigned int pmcraid_major; +static struct class *pmcraid_class; +DECLARE_BITMAP(pmcraid_minor, PMCRAID_MAX_ADAPTERS); + +/* + * Module parameters + */ +MODULE_AUTHOR("PMC Sierra Corporation, anil_ravindranath@pmc-sierra.com"); +MODULE_DESCRIPTION("PMC Sierra MaxRAID Controller Driver"); +MODULE_LICENSE("GPL"); +MODULE_VERSION(PMCRAID_DRIVER_VERSION); + +module_param_named(log_level, pmcraid_log_level, uint, (S_IRUGO | S_IWUSR)); +MODULE_PARM_DESC(log_level, + "Enables firmware error code logging, default :1 high-severity" + " errors, 2: all errors including high-severity errors," + " 0: disables logging"); + +module_param_named(debug, pmcraid_debug_log, uint, (S_IRUGO | S_IWUSR)); +MODULE_PARM_DESC(debug, + "Enable driver verbose message logging. Set 1 to enable." + "(default: 0)"); + +module_param_named(disable_aen, pmcraid_disable_aen, uint, (S_IRUGO | S_IWUSR)); +MODULE_PARM_DESC(disable_aen, + "Disable driver aen notifications to apps. Set 1 to disable." + "(default: 0)"); + +/* chip specific constants for PMC MaxRAID controllers (same for + * 0x5220 and 0x8010 + */ +static struct pmcraid_chip_details pmcraid_chip_cfg[] = { + { + .ioastatus = 0x0, + .ioarrin = 0x00040, + .mailbox = 0x7FC30, + .global_intr_mask = 0x00034, + .ioa_host_intr = 0x0009C, + .ioa_host_intr_clr = 0x000A0, + .ioa_host_mask = 0x7FC28, + .ioa_host_mask_clr = 0x7FC28, + .host_ioa_intr = 0x00020, + .host_ioa_intr_clr = 0x00020, + .transop_timeout = 300 + } +}; + +/* + * PCI device ids supported by pmcraid driver + */ +static struct pci_device_id pmcraid_pci_table[] __devinitdata = { + { PCI_DEVICE(PCI_VENDOR_ID_PMC, PCI_DEVICE_ID_PMC_MAXRAID), + 0, 0, (kernel_ulong_t)&pmcraid_chip_cfg[0] + }, + {} +}; + +MODULE_DEVICE_TABLE(pci, pmcraid_pci_table); + + + +/** + * pmcraid_slave_alloc - Prepare for commands to a device + * @scsi_dev: scsi device struct + * + * This function is called by mid-layer prior to sending any command to the new + * device. Stores resource entry details of the device in scsi_device struct. + * Queuecommand uses the resource handle and other details to fill up IOARCB + * while sending commands to the device. 
+ *
+ * Return value:
+ * 0 on success / -ENXIO if device does not exist
+ */
+static int pmcraid_slave_alloc(struct scsi_device *scsi_dev)
+{
+ struct pmcraid_resource_entry *temp, *res = NULL;
+ struct pmcraid_instance *pinstance;
+ u8 target, bus, lun;
+ unsigned long lock_flags;
+ int rc = -ENXIO;
+ pinstance = shost_priv(scsi_dev->host);
+
+ /* Driver exposes VSET and GSCSI resources only; all other device types
+ * are not exposed. Resource list is synchronized using resource lock
+ * so any traversal or modifications to the list should be done inside
+ * this lock
+ */
+ spin_lock_irqsave(&pinstance->resource_lock, lock_flags);
+ list_for_each_entry(temp, &pinstance->used_res_q, queue) {
+
+ /* do not expose VSETs with order-ids >= 240 */
+ if (RES_IS_VSET(temp->cfg_entry)) {
+ target = temp->cfg_entry.unique_flags1;
+ if (target >= PMCRAID_MAX_VSET_TARGETS)
+ continue;
+ bus = PMCRAID_VSET_BUS_ID;
+ lun = 0;
+ } else if (RES_IS_GSCSI(temp->cfg_entry)) {
+ target = RES_TARGET(temp->cfg_entry.resource_address);
+ bus = PMCRAID_PHYS_BUS_ID;
+ lun = RES_LUN(temp->cfg_entry.resource_address);
+ } else {
+ continue;
+ }
+
+ if (bus == scsi_dev->channel &&
+ target == scsi_dev->id &&
+ lun == scsi_dev->lun) {
+ res = temp;
+ break;
+ }
+ }
+
+ if (res) {
+ res->scsi_dev = scsi_dev;
+ scsi_dev->hostdata = res;
+ res->change_detected = 0;
+ atomic_set(&res->read_failures, 0);
+ atomic_set(&res->write_failures, 0);
+ rc = 0;
+ }
+ spin_unlock_irqrestore(&pinstance->resource_lock, lock_flags);
+ return rc;
+}
+
+/**
+ * pmcraid_slave_configure - Configures a SCSI device
+ * @scsi_dev: scsi device struct
+ *
+ * This function is executed by the SCSI mid layer just after a device is first
+ * scanned (i.e. it has responded to an INQUIRY). For VSET resources, the
+ * timeout value (default 30s) will be over-written to a higher value (60s)
+ * and max_sectors value will be over-written to 512. It also sets queue depth
+ * to host->cmd_per_lun value.
+ *
+ * Return value:
+ * 0 on success
+ */
+static int pmcraid_slave_configure(struct scsi_device *scsi_dev)
+{
+ struct pmcraid_resource_entry *res = scsi_dev->hostdata;
+
+ if (!res)
+ return 0;
+
+ /* LLD exposes VSETs and Enclosure devices only */
+ if (RES_IS_GSCSI(res->cfg_entry) &&
+ scsi_dev->type != TYPE_ENCLOSURE)
+ return -ENXIO;
+
+ pmcraid_info("configuring %x:%x:%x:%x\n",
+ scsi_dev->host->unique_id,
+ scsi_dev->channel,
+ scsi_dev->id,
+ scsi_dev->lun);
+
+ if (RES_IS_GSCSI(res->cfg_entry)) {
+ scsi_dev->allow_restart = 1;
+ } else if (RES_IS_VSET(res->cfg_entry)) {
+ scsi_dev->allow_restart = 1;
+ blk_queue_rq_timeout(scsi_dev->request_queue,
+ PMCRAID_VSET_IO_TIMEOUT);
+ blk_queue_max_sectors(scsi_dev->request_queue,
+ PMCRAID_VSET_MAX_SECTORS);
+ }
+
+ if (scsi_dev->tagged_supported &&
+ (RES_IS_GSCSI(res->cfg_entry) || RES_IS_VSET(res->cfg_entry))) {
+ scsi_activate_tcq(scsi_dev, scsi_dev->queue_depth);
+ scsi_adjust_queue_depth(scsi_dev, MSG_SIMPLE_TAG,
+ scsi_dev->host->cmd_per_lun);
+ } else {
+ scsi_adjust_queue_depth(scsi_dev, 0,
+ scsi_dev->host->cmd_per_lun);
+ }
+
+ return 0;
+}
+
+/**
+ * pmcraid_slave_destroy - Unconfigure a SCSI device before removing it
+ *
+ * @scsi_dev: scsi device struct
+ *
+ * This is called by mid-layer before removing a device. Pointer assignments
+ * done in pmcraid_slave_alloc will be reset to NULL here.
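+ *
+ * These slave_* hooks, together with the queue depth/type handlers that
+ * follow, are wired into the driver's scsi_host_template defined later in
+ * this file. A minimal sketch of that wiring, field names assuming the
+ * 2.6-era SCSI mid-layer:
+ *
+ * static struct scsi_host_template pmcraid_host_template = {
+ * .module = THIS_MODULE,
+ * .slave_alloc = pmcraid_slave_alloc,
+ * .slave_configure = pmcraid_slave_configure,
+ * .slave_destroy = pmcraid_slave_destroy,
+ * .change_queue_depth = pmcraid_change_queue_depth,
+ * .change_queue_type = pmcraid_change_queue_type,
+ * };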
+ * + * Return value + * none + */ +static void pmcraid_slave_destroy(struct scsi_device *scsi_dev) +{ + struct pmcraid_resource_entry *res; + + res = (struct pmcraid_resource_entry *)scsi_dev->hostdata; + + if (res) + res->scsi_dev = NULL; + + scsi_dev->hostdata = NULL; +} + +/** + * pmcraid_change_queue_depth - Change the device's queue depth + * @scsi_dev: scsi device struct + * @depth: depth to set + * + * Return value + * actual depth set + */ +static int pmcraid_change_queue_depth(struct scsi_device *scsi_dev, int depth) +{ + if (depth > PMCRAID_MAX_CMD_PER_LUN) + depth = PMCRAID_MAX_CMD_PER_LUN; + + scsi_adjust_queue_depth(scsi_dev, scsi_get_tag_type(scsi_dev), depth); + + return scsi_dev->queue_depth; +} + +/** + * pmcraid_change_queue_type - Change the device's queue type + * @scsi_dev: scsi device struct + * @tag: type of tags to use + * + * Return value: + * actual queue type set + */ +static int pmcraid_change_queue_type(struct scsi_device *scsi_dev, int tag) +{ + struct pmcraid_resource_entry *res; + + res = (struct pmcraid_resource_entry *)scsi_dev->hostdata; + + if ((res) && scsi_dev->tagged_supported && + (RES_IS_GSCSI(res->cfg_entry) || RES_IS_VSET(res->cfg_entry))) { + scsi_set_tag_type(scsi_dev, tag); + + if (tag) + scsi_activate_tcq(scsi_dev, scsi_dev->queue_depth); + else + scsi_deactivate_tcq(scsi_dev, scsi_dev->queue_depth); + } else + tag = 0; + + return tag; +} + + +/** + * pmcraid_init_cmdblk - initializes a command block + * + * @cmd: pointer to struct pmcraid_cmd to be initialized + * @index: if >=0 first time initialization; otherwise reinitialization + * + * Return Value + * None + */ +void pmcraid_init_cmdblk(struct pmcraid_cmd *cmd, int index) +{ + struct pmcraid_ioarcb *ioarcb = &(cmd->ioa_cb->ioarcb); + dma_addr_t dma_addr = cmd->ioa_cb_bus_addr; + + if (index >= 0) { + /* first time initialization (called from probe) */ + u32 ioasa_offset = + offsetof(struct pmcraid_control_block, ioasa); + + cmd->index = index; + ioarcb->response_handle = cpu_to_le32(index << 2); + ioarcb->ioarcb_bus_addr = cpu_to_le64(dma_addr); + ioarcb->ioasa_bus_addr = cpu_to_le64(dma_addr + ioasa_offset); + ioarcb->ioasa_len = cpu_to_le16(sizeof(struct pmcraid_ioasa)); + } else { + /* re-initialization of various lengths, called once command is + * processed by IOA + */ + memset(&cmd->ioa_cb->ioarcb.cdb, 0, PMCRAID_MAX_CDB_LEN); + ioarcb->request_flags0 = 0; + ioarcb->request_flags1 = 0; + ioarcb->cmd_timeout = 0; + ioarcb->ioarcb_bus_addr &= (~0x1FULL); + ioarcb->ioadl_bus_addr = 0; + ioarcb->ioadl_length = 0; + ioarcb->data_transfer_length = 0; + ioarcb->add_cmd_param_length = 0; + ioarcb->add_cmd_param_offset = 0; + cmd->ioa_cb->ioasa.ioasc = 0; + cmd->ioa_cb->ioasa.residual_data_length = 0; + cmd->u.time_left = 0; + } + + cmd->cmd_done = NULL; + cmd->scsi_cmd = NULL; + cmd->release = 0; + cmd->completion_req = 0; + cmd->dma_handle = 0; + init_timer(&cmd->timer); +} + +/** + * pmcraid_reinit_cmdblk - reinitialize a command block + * + * @cmd: pointer to struct pmcraid_cmd to be reinitialized + * + * Return Value + * None + */ +static void pmcraid_reinit_cmdblk(struct pmcraid_cmd *cmd) +{ + pmcraid_init_cmdblk(cmd, -1); +} + +/** + * pmcraid_get_free_cmd - get a free cmd block from command block pool + * @pinstance: adapter instance structure + * + * Return Value: + * returns pointer to cmd block or NULL if no blocks are available + */ +static struct pmcraid_cmd *pmcraid_get_free_cmd( + struct pmcraid_instance *pinstance +) +{ + struct pmcraid_cmd *cmd = NULL; + unsigned long 
lock_flags; + + /* free cmd block list is protected by free_pool_lock */ + spin_lock_irqsave(&pinstance->free_pool_lock, lock_flags); + + if (!list_empty(&pinstance->free_cmd_pool)) { + cmd = list_entry(pinstance->free_cmd_pool.next, + struct pmcraid_cmd, free_list); + list_del(&cmd->free_list); + } + spin_unlock_irqrestore(&pinstance->free_pool_lock, lock_flags); + + /* Initialize the command block before giving it the caller */ + if (cmd != NULL) + pmcraid_reinit_cmdblk(cmd); + return cmd; +} + +/** + * pmcraid_return_cmd - return a completed command block back into free pool + * @cmd: pointer to the command block + * + * Return Value: + * nothing + */ +void pmcraid_return_cmd(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + unsigned long lock_flags; + + spin_lock_irqsave(&pinstance->free_pool_lock, lock_flags); + list_add_tail(&cmd->free_list, &pinstance->free_cmd_pool); + spin_unlock_irqrestore(&pinstance->free_pool_lock, lock_flags); +} + +/** + * pmcraid_read_interrupts - reads IOA interrupts + * + * @pinstance: pointer to adapter instance structure + * + * Return value + * interrupts read from IOA + */ +static u32 pmcraid_read_interrupts(struct pmcraid_instance *pinstance) +{ + return ioread32(pinstance->int_regs.ioa_host_interrupt_reg); +} + +/** + * pmcraid_disable_interrupts - Masks and clears all specified interrupts + * + * @pinstance: pointer to per adapter instance structure + * @intrs: interrupts to disable + * + * Return Value + * None + */ +static void pmcraid_disable_interrupts( + struct pmcraid_instance *pinstance, + u32 intrs +) +{ + u32 gmask = ioread32(pinstance->int_regs.global_interrupt_mask_reg); + u32 nmask = gmask | GLOBAL_INTERRUPT_MASK; + + iowrite32(nmask, pinstance->int_regs.global_interrupt_mask_reg); + iowrite32(intrs, pinstance->int_regs.ioa_host_interrupt_clr_reg); + iowrite32(intrs, pinstance->int_regs.ioa_host_interrupt_mask_reg); + ioread32(pinstance->int_regs.ioa_host_interrupt_mask_reg); +} + +/** + * pmcraid_enable_interrupts - Enables specified interrupts + * + * @pinstance: pointer to per adapter instance structure + * @intr: interrupts to enable + * + * Return Value + * None + */ +static void pmcraid_enable_interrupts( + struct pmcraid_instance *pinstance, + u32 intrs +) +{ + u32 gmask = ioread32(pinstance->int_regs.global_interrupt_mask_reg); + u32 nmask = gmask & (~GLOBAL_INTERRUPT_MASK); + + iowrite32(nmask, pinstance->int_regs.global_interrupt_mask_reg); + iowrite32(~intrs, pinstance->int_regs.ioa_host_interrupt_mask_reg); + ioread32(pinstance->int_regs.ioa_host_interrupt_mask_reg); + + pmcraid_info("enabled interrupts global mask = %x intr_mask = %x\n", + ioread32(pinstance->int_regs.global_interrupt_mask_reg), + ioread32(pinstance->int_regs.ioa_host_interrupt_mask_reg)); +} + +/** + * pmcraid_reset_type - Determine the required reset type + * @pinstance: pointer to adapter instance structure + * + * IOA requires hard reset if any of the following conditions is true. + * 1. If HRRQ valid interrupt is not masked + * 2. IOA reset alert doorbell is set + * 3. 
If there are any error interrupts + */ +static void pmcraid_reset_type(struct pmcraid_instance *pinstance) +{ + u32 mask; + u32 intrs; + u32 alerts; + + mask = ioread32(pinstance->int_regs.ioa_host_interrupt_mask_reg); + intrs = ioread32(pinstance->int_regs.ioa_host_interrupt_reg); + alerts = ioread32(pinstance->int_regs.host_ioa_interrupt_reg); + + if ((mask & INTRS_HRRQ_VALID) == 0 || + (alerts & DOORBELL_IOA_RESET_ALERT) || + (intrs & PMCRAID_ERROR_INTERRUPTS)) { + pmcraid_info("IOA requires hard reset\n"); + pinstance->ioa_hard_reset = 1; + } + + /* If unit check is active, trigger the dump */ + if (intrs & INTRS_IOA_UNIT_CHECK) + pinstance->ioa_unit_check = 1; +} + +/** + * pmcraid_bist_done - completion function for PCI BIST + * @cmd: pointer to reset command + * Return Value + * none + */ + +static void pmcraid_ioa_reset(struct pmcraid_cmd *); + +static void pmcraid_bist_done(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + unsigned long lock_flags; + int rc; + u16 pci_reg; + + rc = pci_read_config_word(pinstance->pdev, PCI_COMMAND, &pci_reg); + + /* If PCI config space can't be accessed wait for another two secs */ + if ((rc != PCIBIOS_SUCCESSFUL || (!(pci_reg & PCI_COMMAND_MEMORY))) && + cmd->u.time_left > 0) { + pmcraid_info("BIST not complete, waiting another 2 secs\n"); + cmd->timer.expires = jiffies + cmd->u.time_left; + cmd->u.time_left = 0; + cmd->timer.data = (unsigned long)cmd; + cmd->timer.function = + (void (*)(unsigned long))pmcraid_bist_done; + add_timer(&cmd->timer); + } else { + cmd->u.time_left = 0; + pmcraid_info("BIST is complete, proceeding with reset\n"); + spin_lock_irqsave(pinstance->host->host_lock, lock_flags); + pmcraid_ioa_reset(cmd); + spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags); + } +} + +/** + * pmcraid_start_bist - starts BIST + * @cmd: pointer to reset cmd + * Return Value + * none + */ +static void pmcraid_start_bist(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + u32 doorbells, intrs; + + /* proceed with bist and wait for 2 seconds */ + iowrite32(DOORBELL_IOA_START_BIST, + pinstance->int_regs.host_ioa_interrupt_reg); + doorbells = ioread32(pinstance->int_regs.host_ioa_interrupt_reg); + intrs = ioread32(pinstance->int_regs.ioa_host_interrupt_reg); + pmcraid_info("doorbells after start bist: %x intrs: %x \n", + doorbells, intrs); + + cmd->u.time_left = msecs_to_jiffies(PMCRAID_BIST_TIMEOUT); + cmd->timer.data = (unsigned long)cmd; + cmd->timer.expires = jiffies + msecs_to_jiffies(PMCRAID_BIST_TIMEOUT); + cmd->timer.function = (void (*)(unsigned long))pmcraid_bist_done; + add_timer(&cmd->timer); +} + +/** + * pmcraid_reset_alert_done - completion routine for reset_alert + * @cmd: pointer to command block used in reset sequence + * Return value + * None + */ +static void pmcraid_reset_alert_done(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + u32 status = ioread32(pinstance->ioa_status); + unsigned long lock_flags; + + /* if the critical operation in progress bit is set or the wait times + * out, invoke reset engine to proceed with hard reset. 
If there is
+ * some more time to wait, restart the timer
+ */
+ if (((status & INTRS_CRITICAL_OP_IN_PROGRESS) == 0) ||
+ cmd->u.time_left <= 0) {
+ pmcraid_info("critical op is reset proceeding with reset\n");
+ spin_lock_irqsave(pinstance->host->host_lock, lock_flags);
+ pmcraid_ioa_reset(cmd);
+ spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+ } else {
+ pmcraid_info("critical op is not yet reset waiting again\n");
+ /* restart timer if some more time is available to wait */
+ cmd->u.time_left -= PMCRAID_CHECK_FOR_RESET_TIMEOUT;
+ cmd->timer.data = (unsigned long)cmd;
+ cmd->timer.expires = jiffies + PMCRAID_CHECK_FOR_RESET_TIMEOUT;
+ cmd->timer.function =
+ (void (*)(unsigned long))pmcraid_reset_alert_done;
+ add_timer(&cmd->timer);
+ }
+}
+
+/**
+ * pmcraid_reset_alert - alerts IOA for a possible reset
+ * @cmd : command block to be used for reset sequence.
+ *
+ * Return Value
+ * none; if pci config-space is accessible, RESET_DOORBELL is written to
+ * the IOA, otherwise the sequence falls back to BIST
+ */
+static void pmcraid_reset_alert(struct pmcraid_cmd *cmd)
+{
+ struct pmcraid_instance *pinstance = cmd->drv_inst;
+ u32 doorbells;
+ int rc;
+ u16 pci_reg;
+
+ /* If we are able to access IOA PCI config space, alert IOA that we are
+ * going to reset it soon. This enables IOA to preserve persistent error
+ * data if any. In case memory space is not accessible, proceed with
+ * BIST or slot_reset
+ */
+ rc = pci_read_config_word(pinstance->pdev, PCI_COMMAND, &pci_reg);
+ if ((rc == PCIBIOS_SUCCESSFUL) && (pci_reg & PCI_COMMAND_MEMORY)) {
+
+ /* wait for IOA permission, i.e. until the CRITICAL_OPERATION
+ * bit is reset. IOA doesn't generate any interrupts when the
+ * CRITICAL OPERATION bit is reset. A timer is started to wait
+ * for this bit to be reset.
+ */
+ cmd->u.time_left = PMCRAID_RESET_TIMEOUT;
+ cmd->timer.data = (unsigned long)cmd;
+ cmd->timer.expires = jiffies + PMCRAID_CHECK_FOR_RESET_TIMEOUT;
+ cmd->timer.function =
+ (void (*)(unsigned long))pmcraid_reset_alert_done;
+ add_timer(&cmd->timer);
+
+ iowrite32(DOORBELL_IOA_RESET_ALERT,
+ pinstance->int_regs.host_ioa_interrupt_reg);
+ doorbells =
+ ioread32(pinstance->int_regs.host_ioa_interrupt_reg);
+ pmcraid_info("doorbells after reset alert: %x\n", doorbells);
+ } else {
+ pmcraid_info("PCI config is not accessible starting BIST\n");
+ pinstance->ioa_state = IOA_STATE_IN_HARD_RESET;
+ pmcraid_start_bist(cmd);
+ }
+}
+
+/**
+ * pmcraid_timeout_handler - Timeout handler for internally generated ops
+ *
+ * @cmd : pointer to command structure that timed out
+ *
+ * This function blocks host requests and initiates an adapter reset.
+ *
+ * Return value:
+ * None
+ */
+static void pmcraid_timeout_handler(struct pmcraid_cmd *cmd)
+{
+ struct pmcraid_instance *pinstance = cmd->drv_inst;
+ unsigned long lock_flags;
+
+ dev_err(&pinstance->pdev->dev,
+ "Adapter being reset due to command timeout.\n");
+
+ /* Command timeouts result in hard reset sequence. The command that got
+ * timed out may be the one used as part of reset sequence. In this
+ * case restart reset sequence using the same command block even if
+ * reset is in progress. Otherwise fail this command and get a free
+ * command block to restart the reset sequence.
+ */
+ spin_lock_irqsave(pinstance->host->host_lock, lock_flags);
+ if (!pinstance->ioa_reset_in_progress) {
+ pinstance->ioa_reset_attempts = 0;
+ cmd = pmcraid_get_free_cmd(pinstance);
+
+ /* If we are out of command blocks, just return here.
+ * Some other command's timeout handler can do the reset job + */ + if (cmd == NULL) { + spin_unlock_irqrestore(pinstance->host->host_lock, + lock_flags); + pmcraid_err("no free cmnd block for timeout handler\n"); + return; + } + + pinstance->reset_cmd = cmd; + pinstance->ioa_reset_in_progress = 1; + } else { + pmcraid_info("reset is already in progress\n"); + + if (pinstance->reset_cmd != cmd) { + /* This command should have been given to IOA, this + * command will be completed by fail_outstanding_cmds + * anyway + */ + pmcraid_err("cmd is pending but reset in progress\n"); + } + + /* If this command was being used as part of the reset + * sequence, set cmd_done pointer to pmcraid_ioa_reset. This + * causes fail_outstanding_commands not to return the command + * block back to free pool + */ + if (cmd == pinstance->reset_cmd) + cmd->cmd_done = pmcraid_ioa_reset; + + } + + pinstance->ioa_state = IOA_STATE_IN_RESET_ALERT; + scsi_block_requests(pinstance->host); + pmcraid_reset_alert(cmd); + spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags); +} + +/** + * pmcraid_internal_done - completion routine for internally generated cmds + * + * @cmd: command that got response from IOA + * + * Return Value: + * none + */ +static void pmcraid_internal_done(struct pmcraid_cmd *cmd) +{ + pmcraid_info("response internal cmd CDB[0] = %x ioasc = %x\n", + cmd->ioa_cb->ioarcb.cdb[0], + le32_to_cpu(cmd->ioa_cb->ioasa.ioasc)); + + /* Some of the internal commands are sent with callers blocking for the + * response. Same will be indicated as part of cmd->completion_req + * field. Response path needs to wake up any waiters waiting for cmd + * completion if this flag is set. + */ + if (cmd->completion_req) { + cmd->completion_req = 0; + complete(&cmd->wait_for_completion); + } + + /* most of the internal commands are completed by caller itself, so + * no need to return the command block back to free pool until we are + * required to do so (e.g once done with initialization). + */ + if (cmd->release) { + cmd->release = 0; + pmcraid_return_cmd(cmd); + } +} + +/** + * pmcraid_reinit_cfgtable_done - done function for cfg table reinitialization + * + * @cmd: command that got response from IOA + * + * This routine is called after driver re-reads configuration table due to a + * lost CCN. It returns the command block back to free pool and schedules + * worker thread to add/delete devices into the system. + * + * Return Value: + * none + */ +static void pmcraid_reinit_cfgtable_done(struct pmcraid_cmd *cmd) +{ + pmcraid_info("response internal cmd CDB[0] = %x ioasc = %x\n", + cmd->ioa_cb->ioarcb.cdb[0], + le32_to_cpu(cmd->ioa_cb->ioasa.ioasc)); + + if (cmd->release) { + cmd->release = 0; + pmcraid_return_cmd(cmd); + } + pmcraid_info("scheduling worker for config table reinitialization\n"); + schedule_work(&cmd->drv_inst->worker_q); +} + +/** + * pmcraid_erp_done - Process completion of SCSI error response from device + * @cmd: pmcraid_command + * + * This function copies the sense buffer into the scsi_cmd struct and completes + * scsi_cmd by calling scsi_done function. 
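+ *
+ * For reference, the DID_ERROR case below composes scsi_cmd->result the
+ * way the mid-layer expects: the host byte lives in bits 16-23, so
+ * DID_ERROR is shifted left by 16, while the low byte carries the SAM
+ * status (e.g. SAM_STAT_CHECK_CONDITION). A sketch of the layout, for
+ * illustration only:
+ *
+ * scsi_cmd->result = (DID_ERROR << 16) | SAM_STAT_CHECK_CONDITION;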
+ *
+ * Return value:
+ * none
+ */
+static void pmcraid_erp_done(struct pmcraid_cmd *cmd)
+{
+ struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd;
+ struct pmcraid_instance *pinstance = cmd->drv_inst;
+ u32 ioasc = le32_to_cpu(cmd->ioa_cb->ioasa.ioasc);
+
+ if (PMCRAID_IOASC_SENSE_KEY(ioasc) > 0) {
+ scsi_cmd->result |= (DID_ERROR << 16);
+ pmcraid_err("command CDB[0] = %x failed with IOASC: 0x%08X\n",
+ cmd->ioa_cb->ioarcb.cdb[0], ioasc);
+ }
+
+ /* if we had allocated sense buffers for request sense, copy the sense
+ * and release the buffers
+ */
+ if (cmd->sense_buffer != NULL) {
+ memcpy(scsi_cmd->sense_buffer,
+ cmd->sense_buffer,
+ SCSI_SENSE_BUFFERSIZE);
+ pci_free_consistent(pinstance->pdev,
+ SCSI_SENSE_BUFFERSIZE,
+ cmd->sense_buffer, cmd->sense_buffer_dma);
+ cmd->sense_buffer = NULL;
+ cmd->sense_buffer_dma = 0;
+ }
+
+ scsi_dma_unmap(scsi_cmd);
+ pmcraid_return_cmd(cmd);
+ scsi_cmd->scsi_done(scsi_cmd);
+}
+
+/**
+ * _pmcraid_fire_command - sends an IOA command to adapter
+ *
+ * This function adds the given block into pending command list
+ * and returns without waiting
+ *
+ * @cmd : command to be sent to the device
+ *
+ * Return Value
+ * None
+ */
+static void _pmcraid_fire_command(struct pmcraid_cmd *cmd)
+{
+ struct pmcraid_instance *pinstance = cmd->drv_inst;
+ unsigned long lock_flags;
+
+ /* Add this command block to pending cmd pool. We do this prior to
+ * writing IOARCB to ioarrin because IOA might complete the command
+ * by the time we are about to add it to the list. Response handler
+ * (isr/tasklet) looks for the cmd block in the pending list.
+ */
+ spin_lock_irqsave(&pinstance->pending_pool_lock, lock_flags);
+ list_add_tail(&cmd->free_list, &pinstance->pending_cmd_pool);
+ spin_unlock_irqrestore(&pinstance->pending_pool_lock, lock_flags);
+ atomic_inc(&pinstance->outstanding_cmds);
+
+ /* driver writes lower 32-bit value of IOARCB address only */
+ mb();
+ iowrite32(le32_to_cpu(cmd->ioa_cb->ioarcb.ioarcb_bus_addr),
+ pinstance->ioarrin);
+}
+
+/**
+ * pmcraid_send_cmd - fires a command to IOA
+ *
+ * This function also sets up timeout function, and command completion
+ * function
+ *
+ * @cmd: pointer to the command block to be fired to IOA
+ * @cmd_done: command completion function, called once IOA responds
+ * @timeout: timeout to wait for this command completion
+ * @timeout_func: timeout handler
+ *
+ * Return value
+ * none
+ */
+static void pmcraid_send_cmd(
+ struct pmcraid_cmd *cmd,
+ void (*cmd_done) (struct pmcraid_cmd *),
+ unsigned long timeout,
+ void (*timeout_func) (struct pmcraid_cmd *)
+)
+{
+ /* initialize done function */
+ cmd->cmd_done = cmd_done;
+
+ if (timeout_func) {
+ /* setup timeout handler */
+ cmd->timer.data = (unsigned long)cmd;
+ cmd->timer.expires = jiffies + timeout;
+ cmd->timer.function = (void (*)(unsigned long))timeout_func;
+ add_timer(&cmd->timer);
+ }
+
+ /* fire the command to IOA */
+ _pmcraid_fire_command(cmd);
+}
+
+/**
+ * pmcraid_ioa_shutdown - sends SHUTDOWN command to ioa
+ *
+ * @cmd: pointer to the command block used as part of reset sequence
+ *
+ * Return Value
+ * None
+ */
+static void pmcraid_ioa_shutdown(struct pmcraid_cmd *cmd)
+{
+ pmcraid_info("response for Cancel CCN CDB[0] = %x ioasc = %x\n",
+ cmd->ioa_cb->ioarcb.cdb[0],
+ le32_to_cpu(cmd->ioa_cb->ioasa.ioasc));
+
+ /* Note that commands sent during reset require next command to be sent
+ * to IOA.
Hence reinit the done function as well as timeout function + */ + pmcraid_reinit_cmdblk(cmd); + cmd->ioa_cb->ioarcb.request_type = REQ_TYPE_IOACMD; + cmd->ioa_cb->ioarcb.resource_handle = + cpu_to_le32(PMCRAID_IOA_RES_HANDLE); + cmd->ioa_cb->ioarcb.cdb[0] = PMCRAID_IOA_SHUTDOWN; + cmd->ioa_cb->ioarcb.cdb[1] = PMCRAID_SHUTDOWN_NORMAL; + + /* fire shutdown command to hardware. */ + pmcraid_info("firing normal shutdown command (%d) to IOA\n", + le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle)); + + pmcraid_send_cmd(cmd, pmcraid_ioa_reset, + PMCRAID_SHUTDOWN_TIMEOUT, + pmcraid_timeout_handler); +} + +/** + * pmcraid_identify_hrrq - registers host rrq buffers with IOA + * @cmd: pointer to command block to be used for identify hrrq + * + * Return Value + * 0 in case of success, otherwise non-zero failure code + */ + +static void pmcraid_querycfg(struct pmcraid_cmd *); + +static void pmcraid_identify_hrrq(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; + int index = 0; + __be64 hrrq_addr = cpu_to_be64(pinstance->hrrq_start_bus_addr[index]); + u32 hrrq_size = cpu_to_be32(sizeof(u32) * PMCRAID_MAX_CMD); + + pmcraid_reinit_cmdblk(cmd); + + /* Initialize ioarcb */ + ioarcb->request_type = REQ_TYPE_IOACMD; + ioarcb->resource_handle = cpu_to_le32(PMCRAID_IOA_RES_HANDLE); + + /* initialize the hrrq number where IOA will respond to this command */ + ioarcb->hrrq_id = index; + ioarcb->cdb[0] = PMCRAID_IDENTIFY_HRRQ; + ioarcb->cdb[1] = index; + + /* IOA expects 64-bit pci address to be written in B.E format + * (i.e cdb[2]=MSByte..cdb[9]=LSB. + */ + pmcraid_info("HRRQ_IDENTIFY with hrrq:ioarcb => %llx:%llx\n", + hrrq_addr, ioarcb->ioarcb_bus_addr); + + memcpy(&(ioarcb->cdb[2]), &hrrq_addr, sizeof(hrrq_addr)); + memcpy(&(ioarcb->cdb[10]), &hrrq_size, sizeof(hrrq_size)); + + /* Subsequent commands require HRRQ identification to be successful. 
+ * Note that this gets called even during reset from SCSI mid-layer + * or tasklet + */ + pmcraid_send_cmd(cmd, pmcraid_querycfg, + PMCRAID_INTERNAL_TIMEOUT, + pmcraid_timeout_handler); +} + +static void pmcraid_process_ccn(struct pmcraid_cmd *cmd); +static void pmcraid_process_ldn(struct pmcraid_cmd *cmd); + +/** + * pmcraid_send_hcam_cmd - send an initialized command block(HCAM) to IOA + * + * @cmd: initialized command block pointer + * + * Return Value + * none + */ +static void pmcraid_send_hcam_cmd(struct pmcraid_cmd *cmd) +{ + if (cmd->ioa_cb->ioarcb.cdb[1] == PMCRAID_HCAM_CODE_CONFIG_CHANGE) + atomic_set(&(cmd->drv_inst->ccn.ignore), 0); + else + atomic_set(&(cmd->drv_inst->ldn.ignore), 0); + + pmcraid_send_cmd(cmd, cmd->cmd_done, 0, NULL); +} + +/** + * pmcraid_init_hcam - send an initialized command block(HCAM) to IOA + * + * @pinstance: pointer to adapter instance structure + * @type: HCAM type + * + * Return Value + * pointer to initialized pmcraid_cmd structure or NULL + */ +static struct pmcraid_cmd *pmcraid_init_hcam +( + struct pmcraid_instance *pinstance, + u8 type +) +{ + struct pmcraid_cmd *cmd; + struct pmcraid_ioarcb *ioarcb; + struct pmcraid_ioadl_desc *ioadl; + struct pmcraid_hostrcb *hcam; + void (*cmd_done) (struct pmcraid_cmd *); + dma_addr_t dma; + int rcb_size; + + cmd = pmcraid_get_free_cmd(pinstance); + + if (!cmd) { + pmcraid_err("no free command blocks for hcam\n"); + return cmd; + } + + if (type == PMCRAID_HCAM_CODE_CONFIG_CHANGE) { + rcb_size = sizeof(struct pmcraid_hcam_ccn); + cmd_done = pmcraid_process_ccn; + dma = pinstance->ccn.baddr + PMCRAID_AEN_HDR_SIZE; + hcam = &pinstance->ccn; + } else { + rcb_size = sizeof(struct pmcraid_hcam_ldn); + cmd_done = pmcraid_process_ldn; + dma = pinstance->ldn.baddr + PMCRAID_AEN_HDR_SIZE; + hcam = &pinstance->ldn; + } + + /* initialize command pointer used for HCAM registration */ + hcam->cmd = cmd; + + ioarcb = &cmd->ioa_cb->ioarcb; + ioarcb->ioadl_bus_addr = cpu_to_le64((cmd->ioa_cb_bus_addr) + + offsetof(struct pmcraid_ioarcb, + add_data.u.ioadl[0])); + ioarcb->ioadl_length = cpu_to_le32(sizeof(struct pmcraid_ioadl_desc)); + ioadl = ioarcb->add_data.u.ioadl; + + /* Initialize ioarcb */ + ioarcb->request_type = REQ_TYPE_HCAM; + ioarcb->resource_handle = cpu_to_le32(PMCRAID_IOA_RES_HANDLE); + ioarcb->cdb[0] = PMCRAID_HOST_CONTROLLED_ASYNC; + ioarcb->cdb[1] = type; + ioarcb->cdb[7] = (rcb_size >> 8) & 0xFF; + ioarcb->cdb[8] = (rcb_size) & 0xFF; + + ioarcb->data_transfer_length = cpu_to_le32(rcb_size); + + ioadl[0].flags |= cpu_to_le32(IOADL_FLAGS_READ_LAST); + ioadl[0].data_len = cpu_to_le32(rcb_size); + ioadl[0].address = cpu_to_le32(dma); + + cmd->cmd_done = cmd_done; + return cmd; +} + +/** + * pmcraid_send_hcam - Send an HCAM to IOA + * @pinstance: ioa config struct + * @type: HCAM type + * + * This function will send a Host Controlled Async command to IOA. 
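+ *
+ * The driver keeps one CCN and one LDN command registered with the IOA
+ * at all times: each op-done handler (pmcraid_process_ccn/_ldn below)
+ * re-registers its HCAM after handling a notification, so registration
+ * effectively reduces to the pair of calls in pmcraid_register_hcams:
+ *
+ * pmcraid_send_hcam(pinstance, PMCRAID_HCAM_CODE_CONFIG_CHANGE);
+ * pmcraid_send_hcam(pinstance, PMCRAID_HCAM_CODE_LOG_DATA);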
+ * + * Return value: + * none + */ +static void pmcraid_send_hcam(struct pmcraid_instance *pinstance, u8 type) +{ + struct pmcraid_cmd *cmd = pmcraid_init_hcam(pinstance, type); + pmcraid_send_hcam_cmd(cmd); +} + + +/** + * pmcraid_prepare_cancel_cmd - prepares a command block to abort another + * + * @cmd: pointer to cmd that is used as cancelling command + * @cmd_to_cancel: pointer to the command that needs to be cancelled + */ +static void pmcraid_prepare_cancel_cmd( + struct pmcraid_cmd *cmd, + struct pmcraid_cmd *cmd_to_cancel +) +{ + struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; + __be64 ioarcb_addr = cmd_to_cancel->ioa_cb->ioarcb.ioarcb_bus_addr; + + /* Get the resource handle to where the command to be aborted has been + * sent. + */ + ioarcb->resource_handle = cmd_to_cancel->ioa_cb->ioarcb.resource_handle; + ioarcb->request_type = REQ_TYPE_IOACMD; + memset(ioarcb->cdb, 0, PMCRAID_MAX_CDB_LEN); + ioarcb->cdb[0] = PMCRAID_ABORT_CMD; + + /* IOARCB address of the command to be cancelled is given in + * cdb[2]..cdb[9] is Big-Endian format. Note that length bits in + * IOARCB address are not masked. + */ + ioarcb_addr = cpu_to_be64(ioarcb_addr); + memcpy(&(ioarcb->cdb[2]), &ioarcb_addr, sizeof(ioarcb_addr)); +} + +/** + * pmcraid_cancel_hcam - sends ABORT task to abort a given HCAM + * + * @cmd: command to be used as cancelling command + * @type: HCAM type + * @cmd_done: op done function for the cancelling command + */ +static void pmcraid_cancel_hcam( + struct pmcraid_cmd *cmd, + u8 type, + void (*cmd_done) (struct pmcraid_cmd *) +) +{ + struct pmcraid_instance *pinstance; + struct pmcraid_hostrcb *hcam; + + pinstance = cmd->drv_inst; + hcam = (type == PMCRAID_HCAM_CODE_LOG_DATA) ? + &pinstance->ldn : &pinstance->ccn; + + /* prepare for cancelling previous hcam command. 
If the HCAM is + * currently not pending with IOA, we would have hcam->cmd as non-null + */ + if (hcam->cmd == NULL) + return; + + pmcraid_prepare_cancel_cmd(cmd, hcam->cmd); + + /* writing to IOARRIN must be protected by host_lock, as mid-layer + * schedule queuecommand while we are doing this + */ + pmcraid_send_cmd(cmd, cmd_done, + PMCRAID_INTERNAL_TIMEOUT, + pmcraid_timeout_handler); +} + +/** + * pmcraid_cancel_ccn - cancel CCN HCAM already registered with IOA + * + * @cmd: command block to be used for cancelling the HCAM + */ +static void pmcraid_cancel_ccn(struct pmcraid_cmd *cmd) +{ + pmcraid_info("response for Cancel LDN CDB[0] = %x ioasc = %x\n", + cmd->ioa_cb->ioarcb.cdb[0], + le32_to_cpu(cmd->ioa_cb->ioasa.ioasc)); + + pmcraid_reinit_cmdblk(cmd); + + pmcraid_cancel_hcam(cmd, + PMCRAID_HCAM_CODE_CONFIG_CHANGE, + pmcraid_ioa_shutdown); +} + +/** + * pmcraid_cancel_ldn - cancel LDN HCAM already registered with IOA + * + * @cmd: command block to be used for cancelling the HCAM + */ +static void pmcraid_cancel_ldn(struct pmcraid_cmd *cmd) +{ + pmcraid_cancel_hcam(cmd, + PMCRAID_HCAM_CODE_LOG_DATA, + pmcraid_cancel_ccn); +} + +/** + * pmcraid_expose_resource - check if the resource can be exposed to OS + * + * @cfgte: pointer to configuration table entry of the resource + * + * Return value: + * true if resource can be added to midlayer, false(0) otherwise + */ +static int pmcraid_expose_resource(struct pmcraid_config_table_entry *cfgte) +{ + int retval = 0; + + if (cfgte->resource_type == RES_TYPE_VSET) + retval = ((cfgte->unique_flags1 & 0xFF) < 0xFE); + else if (cfgte->resource_type == RES_TYPE_GSCSI) + retval = (RES_BUS(cfgte->resource_address) != + PMCRAID_VIRTUAL_ENCL_BUS_ID); + return retval; +} + +/* attributes supported by pmcraid_event_family */ +enum { + PMCRAID_AEN_ATTR_UNSPEC, + PMCRAID_AEN_ATTR_EVENT, + __PMCRAID_AEN_ATTR_MAX, +}; +#define PMCRAID_AEN_ATTR_MAX (__PMCRAID_AEN_ATTR_MAX - 1) + +/* commands supported by pmcraid_event_family */ +enum { + PMCRAID_AEN_CMD_UNSPEC, + PMCRAID_AEN_CMD_EVENT, + __PMCRAID_AEN_CMD_MAX, +}; +#define PMCRAID_AEN_CMD_MAX (__PMCRAID_AEN_CMD_MAX - 1) + +static struct genl_family pmcraid_event_family = { + .id = GENL_ID_GENERATE, + .name = "pmcraid", + .version = 1, + .maxattr = PMCRAID_AEN_ATTR_MAX +}; + +/** + * pmcraid_netlink_init - registers pmcraid_event_family + * + * Return value: + * 0 if the pmcraid_event_family is successfully registered + * with netlink generic, non-zero otherwise + */ +static int pmcraid_netlink_init(void) +{ + int result; + + result = genl_register_family(&pmcraid_event_family); + + if (result) + return result; + + pmcraid_info("registered NETLINK GENERIC group: %d\n", + pmcraid_event_family.id); + + return result; +} + +/** + * pmcraid_netlink_release - unregisters pmcraid_event_family + * + * Return value: + * none + */ +static void pmcraid_netlink_release(void) +{ + genl_unregister_family(&pmcraid_event_family); +} + +/** + * pmcraid_notify_aen - sends event msg to user space application + * @pinstance: pointer to adapter instance structure + * @type: HCAM type + * + * Return value: + * 0 if success, error value in case of any failure. 
*/
+static int pmcraid_notify_aen(struct pmcraid_instance *pinstance, u8 type)
+{
+ struct sk_buff *skb;
+ struct pmcraid_aen_msg *aen_msg;
+ void *msg_header;
+ int data_size, total_size;
+ int result;
+
+
+ if (type == PMCRAID_HCAM_CODE_LOG_DATA) {
+ aen_msg = pinstance->ldn.msg;
+ data_size = pinstance->ldn.hcam->data_len;
+ } else {
+ aen_msg = pinstance->ccn.msg;
+ data_size = pinstance->ccn.hcam->data_len;
+ }
+
+ data_size += sizeof(struct pmcraid_hcam_hdr);
+ aen_msg->hostno = (pinstance->host->unique_id << 16 |
+ MINOR(pinstance->cdev.dev));
+ aen_msg->length = data_size;
+ data_size += sizeof(*aen_msg);
+
+ total_size = nla_total_size(data_size);
+ skb = genlmsg_new(total_size, GFP_ATOMIC);
+
+
+ if (!skb) {
+ pmcraid_err("Failed to allocate aen data SKB of size: %x\n",
+ total_size);
+ return -ENOMEM;
+ }
+
+ /* add the genetlink message header */
+ msg_header = genlmsg_put(skb, 0, 0,
+ &pmcraid_event_family, 0,
+ PMCRAID_AEN_CMD_EVENT);
+ if (!msg_header) {
+ pmcraid_err("failed to copy command details\n");
+ nlmsg_free(skb);
+ return -ENOMEM;
+ }
+
+ result = nla_put(skb, PMCRAID_AEN_ATTR_EVENT, data_size, aen_msg);
+
+ if (result) {
+ pmcraid_err("failed to copy AEN attribute data\n");
+ nlmsg_free(skb);
+ return -EINVAL;
+ }
+
+ /* send genetlink multicast message to notify applications */
+ result = genlmsg_end(skb, msg_header);
+
+ if (result < 0) {
+ pmcraid_err("genlmsg_end failed\n");
+ nlmsg_free(skb);
+ return result;
+ }
+
+ result =
+ genlmsg_multicast(skb, 0, pmcraid_event_family.id, GFP_ATOMIC);
+
+ /* If there are no listeners, genlmsg_multicast may return a non-zero
+ * value.
+ */
+ if (result)
+ pmcraid_info("failed to send %s event message %x!\n",
+ type == PMCRAID_HCAM_CODE_LOG_DATA ? "LDN" : "CCN",
+ result);
+ return result;
+}
+
+/**
+ * pmcraid_handle_config_change - Handle a config change from the adapter
+ * @pinstance: pointer to per adapter instance structure
+ *
+ * Return value:
+ * none
+ */
+static void pmcraid_handle_config_change(struct pmcraid_instance *pinstance)
+{
+ struct pmcraid_config_table_entry *cfg_entry;
+ struct pmcraid_hcam_ccn *ccn_hcam;
+ struct pmcraid_cmd *cmd;
+ struct pmcraid_cmd *cfgcmd;
+ struct pmcraid_resource_entry *res = NULL;
+ u32 new_entry = 1;
+ unsigned long lock_flags;
+ unsigned long host_lock_flags;
+ int rc;
+
+ ccn_hcam = (struct pmcraid_hcam_ccn *)pinstance->ccn.hcam;
+ cfg_entry = &ccn_hcam->cfg_entry;
+
+ pmcraid_info
+ ("CCN(%x): %x type: %x lost: %x flags: %x res: %x:%x:%x:%x\n",
+ pinstance->ccn.hcam->ilid,
+ pinstance->ccn.hcam->op_code,
+ pinstance->ccn.hcam->notification_type,
+ pinstance->ccn.hcam->notification_lost,
+ pinstance->ccn.hcam->flags,
+ pinstance->host->unique_id,
+ RES_IS_VSET(*cfg_entry) ? PMCRAID_VSET_BUS_ID :
+ (RES_IS_GSCSI(*cfg_entry) ? PMCRAID_PHYS_BUS_ID :
+ RES_BUS(cfg_entry->resource_address)),
+ RES_IS_VSET(*cfg_entry) ?
cfg_entry->unique_flags1 :
+ RES_TARGET(cfg_entry->resource_address),
+ RES_LUN(cfg_entry->resource_address));
+
+
+ /* If this HCAM indicates a lost notification, read the config table */
+ if (pinstance->ccn.hcam->notification_lost) {
+ cfgcmd = pmcraid_get_free_cmd(pinstance);
+ if (cfgcmd) {
+ pmcraid_info("lost CCN, reading config table\n");
+ pinstance->reinit_cfg_table = 1;
+ pmcraid_querycfg(cfgcmd);
+ } else {
+ pmcraid_err("lost CCN, no free cmd for querycfg\n");
+ }
+ goto out_notify_apps;
+ }
+
+ /* If this resource is not going to be added to mid-layer, just notify
+ * applications and return
+ */
+ if (!pmcraid_expose_resource(cfg_entry))
+ goto out_notify_apps;
+
+ spin_lock_irqsave(&pinstance->resource_lock, lock_flags);
+ list_for_each_entry(res, &pinstance->used_res_q, queue) {
+ rc = memcmp(&res->cfg_entry.resource_address,
+ &cfg_entry->resource_address,
+ sizeof(cfg_entry->resource_address));
+ if (!rc) {
+ new_entry = 0;
+ break;
+ }
+ }
+
+ if (new_entry) {
+
+ /* If there are more resources than the driver can manage,
+ * do not notify the applications about the CCN. Just
+ * ignore these notifications and re-register the same HCAM
+ */
+ if (list_empty(&pinstance->free_res_q)) {
+ spin_unlock_irqrestore(&pinstance->resource_lock,
+ lock_flags);
+ pmcraid_err("too many resources attached\n");
+ spin_lock_irqsave(pinstance->host->host_lock,
+ host_lock_flags);
+ pmcraid_send_hcam(pinstance,
+ PMCRAID_HCAM_CODE_CONFIG_CHANGE);
+ spin_unlock_irqrestore(pinstance->host->host_lock,
+ host_lock_flags);
+ return;
+ }
+
+ res = list_entry(pinstance->free_res_q.next,
+ struct pmcraid_resource_entry, queue);
+
+ list_del(&res->queue);
+ res->scsi_dev = NULL;
+ res->reset_progress = 0;
+ list_add_tail(&res->queue, &pinstance->used_res_q);
+ }
+
+ memcpy(&res->cfg_entry, cfg_entry,
+ sizeof(struct pmcraid_config_table_entry));
+
+ if (pinstance->ccn.hcam->notification_type ==
+ NOTIFICATION_TYPE_ENTRY_DELETED) {
+ if (res->scsi_dev) {
+ res->change_detected = RES_CHANGE_DEL;
+ res->cfg_entry.resource_handle =
+ PMCRAID_INVALID_RES_HANDLE;
+ schedule_work(&pinstance->worker_q);
+ } else {
+ /* This may be one of the non-exposed resources */
+ list_move_tail(&res->queue, &pinstance->free_res_q);
+ }
+ } else if (!res->scsi_dev) {
+ res->change_detected = RES_CHANGE_ADD;
+ schedule_work(&pinstance->worker_q);
+ }
+ spin_unlock_irqrestore(&pinstance->resource_lock, lock_flags);
+
+out_notify_apps:
+
+ /* Notify configuration changes to registered applications.*/
+ if (!pmcraid_disable_aen)
+ pmcraid_notify_aen(pinstance, PMCRAID_HCAM_CODE_CONFIG_CHANGE);
+
+ cmd = pmcraid_init_hcam(pinstance, PMCRAID_HCAM_CODE_CONFIG_CHANGE);
+ if (cmd)
+ pmcraid_send_hcam_cmd(cmd);
+}
+
+/**
+ * pmcraid_get_error_info - return error string for an ioasc
+ * @ioasc: ioasc code
+ * Return Value
+ * pointer to the matching error table entry, or NULL if none is found
+ */
+static struct pmcraid_ioasc_error *pmcraid_get_error_info(u32 ioasc)
+{
+ int i;
+ for (i = 0; i < ARRAY_SIZE(pmcraid_ioasc_error_table); i++) {
+ if (pmcraid_ioasc_error_table[i].ioasc_code == ioasc)
+ return &pmcraid_ioasc_error_table[i];
+ }
+ return NULL;
+}
+
+/**
+ * pmcraid_ioasc_logger - log IOASC information based on user settings
+ * @ioasc: ioasc code
+ * @cmd: pointer to command that resulted in 'ioasc'
+ */
+void pmcraid_ioasc_logger(u32 ioasc, struct pmcraid_cmd *cmd)
+{
+ struct pmcraid_ioasc_error *error_info = pmcraid_get_error_info(ioasc);
+
+ if (error_info == NULL ||
+ cmd->drv_inst->current_log_level < error_info->log_level)
+ return;
+
+ /* log the error string */
+
pmcraid_err("cmd [%d] for resource %x failed with %x(%s)\n", + cmd->ioa_cb->ioarcb.cdb[0], + cmd->ioa_cb->ioarcb.resource_handle, + le32_to_cpu(ioasc), error_info->error_string); +} + +/** + * pmcraid_handle_error_log - Handle a config change (error log) from the IOA + * + * @pinstance: pointer to per adapter instance structure + * + * Return value: + * none + */ +static void pmcraid_handle_error_log(struct pmcraid_instance *pinstance) +{ + struct pmcraid_hcam_ldn *hcam_ldn; + u32 ioasc; + + hcam_ldn = (struct pmcraid_hcam_ldn *)pinstance->ldn.hcam; + + pmcraid_info + ("LDN(%x): %x type: %x lost: %x flags: %x overlay id: %x\n", + pinstance->ldn.hcam->ilid, + pinstance->ldn.hcam->op_code, + pinstance->ldn.hcam->notification_type, + pinstance->ldn.hcam->notification_lost, + pinstance->ldn.hcam->flags, + pinstance->ldn.hcam->overlay_id); + + /* log only the errors, no need to log informational log entries */ + if (pinstance->ldn.hcam->notification_type != + NOTIFICATION_TYPE_ERROR_LOG) + return; + + if (pinstance->ldn.hcam->notification_lost == + HOSTRCB_NOTIFICATIONS_LOST) + dev_err(&pinstance->pdev->dev, "Error notifications lost\n"); + + ioasc = le32_to_cpu(hcam_ldn->error_log.fd_ioasc); + + if (ioasc == PMCRAID_IOASC_UA_BUS_WAS_RESET || + ioasc == PMCRAID_IOASC_UA_BUS_WAS_RESET_BY_OTHER) { + dev_err(&pinstance->pdev->dev, + "UnitAttention due to IOA Bus Reset\n"); + scsi_report_bus_reset( + pinstance->host, + RES_BUS(hcam_ldn->error_log.fd_ra)); + } + + return; +} + +/** + * pmcraid_process_ccn - Op done function for a CCN. + * @cmd: pointer to command struct + * + * This function is the op done function for a configuration + * change notification + * + * Return value: + * none + */ +static void pmcraid_process_ccn(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + u32 ioasc = le32_to_cpu(cmd->ioa_cb->ioasa.ioasc); + unsigned long lock_flags; + + pinstance->ccn.cmd = NULL; + pmcraid_return_cmd(cmd); + + /* If driver initiated IOA reset happened while this hcam was pending + * with IOA, or IOA bringdown sequence is in progress, no need to + * re-register the hcam + */ + if (ioasc == PMCRAID_IOASC_IOA_WAS_RESET || + atomic_read(&pinstance->ccn.ignore) == 1) { + return; + } else if (ioasc) { + dev_err(&pinstance->pdev->dev, + "Host RCB (CCN) failed with IOASC: 0x%08X\n", ioasc); + spin_lock_irqsave(pinstance->host->host_lock, lock_flags); + pmcraid_send_hcam(pinstance, PMCRAID_HCAM_CODE_CONFIG_CHANGE); + spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags); + } else { + pmcraid_handle_config_change(pinstance); + } +} + +/** + * pmcraid_process_ldn - op done function for an LDN + * @cmd: pointer to command block + * + * Return value + * none + */ +static void pmcraid_initiate_reset(struct pmcraid_instance *); + +static void pmcraid_process_ldn(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + struct pmcraid_hcam_ldn *ldn_hcam = + (struct pmcraid_hcam_ldn *)pinstance->ldn.hcam; + u32 ioasc = le32_to_cpu(cmd->ioa_cb->ioasa.ioasc); + u32 fd_ioasc = le32_to_cpu(ldn_hcam->error_log.fd_ioasc); + unsigned long lock_flags; + + /* return the command block back to freepool */ + pinstance->ldn.cmd = NULL; + pmcraid_return_cmd(cmd); + + /* If driver initiated IOA reset happened while this hcam was pending + * with IOA, no need to re-register the hcam as reset engine will do it + * once reset sequence is complete + */ + if (ioasc == PMCRAID_IOASC_IOA_WAS_RESET || + atomic_read(&pinstance->ccn.ignore) == 1) { + return; + } else if 
(!ioasc) { + pmcraid_handle_error_log(pinstance); + if (fd_ioasc == PMCRAID_IOASC_NR_IOA_RESET_REQUIRED) { + spin_lock_irqsave(pinstance->host->host_lock, + lock_flags); + pmcraid_initiate_reset(pinstance); + spin_unlock_irqrestore(pinstance->host->host_lock, + lock_flags); + return; + } + } else { + dev_err(&pinstance->pdev->dev, + "Host RCB(LDN) failed with IOASC: 0x%08X\n", ioasc); + } + /* send netlink message for HCAM notification if enabled */ + if (!pmcraid_disable_aen) + pmcraid_notify_aen(pinstance, PMCRAID_HCAM_CODE_LOG_DATA); + + cmd = pmcraid_init_hcam(pinstance, PMCRAID_HCAM_CODE_LOG_DATA); + if (cmd) + pmcraid_send_hcam_cmd(cmd); +} + +/** + * pmcraid_register_hcams - register HCAMs for CCN and LDN + * + * @pinstance: pointer per adapter instance structure + * + * Return Value + * none + */ +static void pmcraid_register_hcams(struct pmcraid_instance *pinstance) +{ + pmcraid_send_hcam(pinstance, PMCRAID_HCAM_CODE_CONFIG_CHANGE); + pmcraid_send_hcam(pinstance, PMCRAID_HCAM_CODE_LOG_DATA); +} + +/** + * pmcraid_unregister_hcams - cancel HCAMs registered already + * @cmd: pointer to command used as part of reset sequence + */ +static void pmcraid_unregister_hcams(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + + /* During IOA bringdown, HCAM gets fired and tasklet proceeds with + * handling hcam response though it is not necessary. In order to + * prevent this, set 'ignore', so that bring-down sequence doesn't + * re-send any more hcams + */ + atomic_set(&pinstance->ccn.ignore, 1); + atomic_set(&pinstance->ldn.ignore, 1); + + /* If adapter reset was forced as part of runtime reset sequence, + * start the reset sequence. + */ + if (pinstance->force_ioa_reset && !pinstance->ioa_bringdown) { + pinstance->force_ioa_reset = 0; + pinstance->ioa_state = IOA_STATE_IN_RESET_ALERT; + pmcraid_reset_alert(cmd); + return; + } + + /* Driver tries to cancel HCAMs by sending ABORT TASK for each HCAM + * one after the other. So CCN cancellation will be triggered by + * pmcraid_cancel_ldn itself. + */ + pmcraid_cancel_ldn(cmd); +} + +/** + * pmcraid_reset_enable_ioa - re-enable IOA after a hard reset + * @pinstance: pointer to adapter instance structure + * Return Value + * 1 if TRANSITION_TO_OPERATIONAL is active, otherwise 0 + */ +static void pmcraid_reinit_buffers(struct pmcraid_instance *); + +static int pmcraid_reset_enable_ioa(struct pmcraid_instance *pinstance) +{ + u32 intrs; + + pmcraid_reinit_buffers(pinstance); + intrs = pmcraid_read_interrupts(pinstance); + + pmcraid_enable_interrupts(pinstance, PMCRAID_PCI_INTERRUPTS); + + if (intrs & INTRS_TRANSITION_TO_OPERATIONAL) { + iowrite32(INTRS_TRANSITION_TO_OPERATIONAL, + pinstance->int_regs.ioa_host_interrupt_mask_reg); + iowrite32(INTRS_TRANSITION_TO_OPERATIONAL, + pinstance->int_regs.ioa_host_interrupt_clr_reg); + return 1; + } else { + return 0; + } +} + +/** + * pmcraid_soft_reset - performs a soft reset and makes IOA become ready + * @cmd : pointer to reset command block + * + * Return Value + * none + */ +static void pmcraid_soft_reset(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + u32 int_reg; + u32 doorbell; + + /* There will be an interrupt when Transition to Operational bit is + * set so tasklet would execute next reset task. 
The timeout handler + * would re-initiate a reset + */ + cmd->cmd_done = pmcraid_ioa_reset; + cmd->timer.data = (unsigned long)cmd; + cmd->timer.expires = jiffies + + msecs_to_jiffies(PMCRAID_TRANSOP_TIMEOUT); + cmd->timer.function = (void (*)(unsigned long))pmcraid_timeout_handler; + + if (!timer_pending(&cmd->timer)) + add_timer(&cmd->timer); + + /* Enable destructive diagnostics on IOA if it is not yet in + * operational state + */ + doorbell = DOORBELL_RUNTIME_RESET | + DOORBELL_ENABLE_DESTRUCTIVE_DIAGS; + + iowrite32(doorbell, pinstance->int_regs.host_ioa_interrupt_reg); + int_reg = ioread32(pinstance->int_regs.ioa_host_interrupt_reg); + pmcraid_info("Waiting for IOA to become operational %x:%x\n", + ioread32(pinstance->int_regs.host_ioa_interrupt_reg), + int_reg); +} + +/** + * pmcraid_get_dump - retrieves IOA dump in case of Unit Check interrupt + * + * @pinstance: pointer to adapter instance structure + * + * Return Value + * none + */ +static void pmcraid_get_dump(struct pmcraid_instance *pinstance) +{ + pmcraid_info("%s is not yet implemented\n", __func__); +} + +/** + * pmcraid_fail_outstanding_cmds - Fails all outstanding ops. + * @pinstance: pointer to adapter instance structure + * + * This function fails all outstanding ops. If they are submitted to IOA + * already, it sends cancel all messages if IOA is still accepting IOARCBs, + * otherwise just completes the commands and returns the cmd blocks to free + * pool. + * + * Return value: + * none + */ +static void pmcraid_fail_outstanding_cmds(struct pmcraid_instance *pinstance) +{ + struct pmcraid_cmd *cmd, *temp; + unsigned long lock_flags; + + /* pending command list is protected by pending_pool_lock. Its + * traversal must be done as within this lock + */ + spin_lock_irqsave(&pinstance->pending_pool_lock, lock_flags); + list_for_each_entry_safe(cmd, temp, &pinstance->pending_cmd_pool, + free_list) { + list_del(&cmd->free_list); + spin_unlock_irqrestore(&pinstance->pending_pool_lock, + lock_flags); + cmd->ioa_cb->ioasa.ioasc = + cpu_to_le32(PMCRAID_IOASC_IOA_WAS_RESET); + cmd->ioa_cb->ioasa.ilid = + cpu_to_be32(PMCRAID_DRIVER_ILID); + + /* In case the command timer is still running */ + del_timer(&cmd->timer); + + /* If this is an IO command, complete it by invoking scsi_done + * function. If this is one of the internal commands other + * than pmcraid_ioa_reset and HCAM commands invoke cmd_done to + * complete it + */ + if (cmd->scsi_cmd) { + + struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd; + __le32 resp = cmd->ioa_cb->ioarcb.response_handle; + + scsi_cmd->result |= DID_ERROR << 16; + + scsi_dma_unmap(scsi_cmd); + pmcraid_return_cmd(cmd); + + + pmcraid_info("failing(%d) CDB[0] = %x result: %x\n", + le32_to_cpu(resp) >> 2, + cmd->ioa_cb->ioarcb.cdb[0], + scsi_cmd->result); + scsi_cmd->scsi_done(scsi_cmd); + } else if (cmd->cmd_done == pmcraid_internal_done || + cmd->cmd_done == pmcraid_erp_done) { + cmd->cmd_done(cmd); + } else if (cmd->cmd_done != pmcraid_ioa_reset) { + pmcraid_return_cmd(cmd); + } + + atomic_dec(&pinstance->outstanding_cmds); + spin_lock_irqsave(&pinstance->pending_pool_lock, lock_flags); + } + + spin_unlock_irqrestore(&pinstance->pending_pool_lock, lock_flags); +} + +/** + * pmcraid_ioa_reset - Implementation of IOA reset logic + * + * @cmd: pointer to the cmd block to be used for entire reset process + * + * This function executes most of the steps required for IOA reset. This gets + * called by user threads (modprobe/insmod/rmmod) timer, tasklet and midlayer's + * 'eh_' thread. 
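+ * In sketch form, the engine is a state machine over pinstance->ioa_state;
+ * one common bring-up path through the switch statement below is
+ * UNKNOWN -> IN_RESET_ALERT -> IN_HARD_RESET -> IN_SOFT_RESET ->
+ * IN_BRINGUP -> OPERATIONAL, with IN_BRINGDOWN and DEAD as the shutdown
+ * and failure exits respectively.
+ *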
Access to variables used for controlling the reset sequence is
+ * synchronized using host lock. Various functions called during reset process
+ * would make use of a single command block, pointer to which is also stored in
+ * adapter instance structure.
+ *
+ * Return Value
+ * None
+ */
+static void pmcraid_ioa_reset(struct pmcraid_cmd *cmd)
+{
+ struct pmcraid_instance *pinstance = cmd->drv_inst;
+ u8 reset_complete = 0;
+
+ pinstance->ioa_reset_in_progress = 1;
+
+ if (pinstance->reset_cmd != cmd) {
+ pmcraid_err("reset is called with different command block\n");
+ pinstance->reset_cmd = cmd;
+ }
+
+ pmcraid_info("reset_engine: state = %d, command = %p\n",
+ pinstance->ioa_state, cmd);
+
+ switch (pinstance->ioa_state) {
+
+ case IOA_STATE_DEAD:
+ /* If IOA is offline, whatever may be the reset reason, just
+ * return. Callers might be waiting on the reset wait_q, wake
+ * them up
+ */
+ pmcraid_err("IOA is offline no reset is possible\n");
+ reset_complete = 1;
+ break;
+
+ case IOA_STATE_IN_BRINGDOWN:
+ /* we enter here once the ioa shutdown command is processed by
+ * IOA. Alert IOA for a possible reset; if reset alert fails,
+ * IOA goes through hard-reset
+ */
+ pmcraid_disable_interrupts(pinstance, ~0);
+ pinstance->ioa_state = IOA_STATE_IN_RESET_ALERT;
+ pmcraid_reset_alert(cmd);
+ break;
+
+ case IOA_STATE_UNKNOWN:
+ /* We may be called during probe or resume. Some pre-processing
+ * is required prior to reset
+ */
+ scsi_block_requests(pinstance->host);
+
+ /* If asked to reset while IOA was processing responses or
+ * there are any error responses then IOA may require
+ * hard-reset.
+ */
+ if (pinstance->ioa_hard_reset == 0) {
+ if (ioread32(pinstance->ioa_status) &
+ INTRS_TRANSITION_TO_OPERATIONAL) {
+ pmcraid_info("sticky bit set, bring-up\n");
+ pinstance->ioa_state = IOA_STATE_IN_BRINGUP;
+ pmcraid_reinit_cmdblk(cmd);
+ pmcraid_identify_hrrq(cmd);
+ } else {
+ pinstance->ioa_state = IOA_STATE_IN_SOFT_RESET;
+ pmcraid_soft_reset(cmd);
+ }
+ } else {
+ /* Alert IOA of a possible reset and wait for critical
+ * operation in progress bit to reset
+ */
+ pinstance->ioa_state = IOA_STATE_IN_RESET_ALERT;
+ pmcraid_reset_alert(cmd);
+ }
+ break;
+
+ case IOA_STATE_IN_RESET_ALERT:
+ /* If critical operation in progress bit is reset or wait gets
+ * timed out, reset proceeds with starting BIST on the IOA.
+ * pmcraid_ioa_hard_reset keeps a count of reset attempts. If
+ * they are 3 or more, reset engine marks IOA dead and returns
+ */
+ pinstance->ioa_state = IOA_STATE_IN_HARD_RESET;
+ pmcraid_start_bist(cmd);
+ break;
+
+ case IOA_STATE_IN_HARD_RESET:
+ pinstance->ioa_reset_attempts++;
+
+ /* retry reset if we haven't reached maximum allowed limit */
+ if (pinstance->ioa_reset_attempts > PMCRAID_RESET_ATTEMPTS) {
+ pinstance->ioa_reset_attempts = 0;
+ pmcraid_err("IOA didn't respond marking it as dead\n");
+ pinstance->ioa_state = IOA_STATE_DEAD;
+ reset_complete = 1;
+ break;
+ }
+
+ /* Once either bist or pci reset is done, restore PCI config
+ * space.
If this fails, proceed with hard reset again + */ + + if (pci_restore_state(pinstance->pdev)) { + pmcraid_info("config-space error resetting again\n"); + pinstance->ioa_state = IOA_STATE_IN_RESET_ALERT; + pmcraid_reset_alert(cmd); + break; + } + + /* fail all pending commands */ + pmcraid_fail_outstanding_cmds(pinstance); + + /* check if unit check is active, if so extract dump */ + if (pinstance->ioa_unit_check) { + pmcraid_info("unit check is active\n"); + pinstance->ioa_unit_check = 0; + pmcraid_get_dump(pinstance); + pinstance->ioa_reset_attempts--; + pinstance->ioa_state = IOA_STATE_IN_RESET_ALERT; + pmcraid_reset_alert(cmd); + break; + } + + /* if the reset reason is to bring-down the ioa, we might be + * done with the reset restore pci_config_space and complete + * the reset + */ + if (pinstance->ioa_bringdown) { + pmcraid_info("bringing down the adapter\n"); + pinstance->ioa_shutdown_type = SHUTDOWN_NONE; + pinstance->ioa_bringdown = 0; + pinstance->ioa_state = IOA_STATE_UNKNOWN; + reset_complete = 1; + } else { + /* bring-up IOA, so proceed with soft reset + * Reinitialize hrrq_buffers and their indices also + * enable interrupts after a pci_restore_state + */ + if (pmcraid_reset_enable_ioa(pinstance)) { + pinstance->ioa_state = IOA_STATE_IN_BRINGUP; + pmcraid_info("bringing up the adapter\n"); + pmcraid_reinit_cmdblk(cmd); + pmcraid_identify_hrrq(cmd); + } else { + pinstance->ioa_state = IOA_STATE_IN_SOFT_RESET; + pmcraid_soft_reset(cmd); + } + } + break; + + case IOA_STATE_IN_SOFT_RESET: + /* TRANSITION TO OPERATIONAL is on so start initialization + * sequence + */ + pmcraid_info("In softreset proceeding with bring-up\n"); + pinstance->ioa_state = IOA_STATE_IN_BRINGUP; + + /* Initialization commands start with HRRQ identification. From + * now on tasklet completes most of the commands as IOA is up + * and intrs are enabled + */ + pmcraid_identify_hrrq(cmd); + break; + + case IOA_STATE_IN_BRINGUP: + /* we are done with bringing up of IOA, change the ioa_state to + * operational and wake up any waiters + */ + pinstance->ioa_state = IOA_STATE_OPERATIONAL; + reset_complete = 1; + break; + + case IOA_STATE_OPERATIONAL: + default: + /* When IOA is operational and a reset is requested, check for + * the reset reason. If reset is to bring down IOA, unregister + * HCAMs and initiate shutdown; if adapter reset is forced then + * restart reset sequence again + */ + if (pinstance->ioa_shutdown_type == SHUTDOWN_NONE && + pinstance->force_ioa_reset == 0) { + reset_complete = 1; + } else { + if (pinstance->ioa_shutdown_type != SHUTDOWN_NONE) + pinstance->ioa_state = IOA_STATE_IN_BRINGDOWN; + pmcraid_reinit_cmdblk(cmd); + pmcraid_unregister_hcams(cmd); + } + break; + } + + /* reset will be completed if ioa_state is either DEAD or UNKNOWN or + * OPERATIONAL. Reset all control variables used during reset, wake up + * any waiting threads and let the SCSI mid-layer send commands. Note + * that host_lock must be held before invoking scsi_report_bus_reset. + */ + if (reset_complete) { + pinstance->ioa_reset_in_progress = 0; + pinstance->ioa_reset_attempts = 0; + pinstance->reset_cmd = NULL; + pinstance->ioa_shutdown_type = SHUTDOWN_NONE; + pinstance->ioa_bringdown = 0; + pmcraid_return_cmd(cmd); + + /* If target state is to bring up the adapter, proceed with + * hcam registration and resource exposure to mid-layer. 
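+ * Callers that kicked off the reset through pmcraid_reset_reload() sleep
+ * on pinstance->reset_wait_q until ioa_reset_in_progress clears; the
+ * wake_up_all() below is what releases them.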
+
+/**
+ * pmcraid_initiate_reset - initiates reset sequence. This is called from
+ * ISR/tasklet during error interrupts including IOA unit check. If reset
+ * is already in progress, it just returns, otherwise initiates IOA reset
+ * to bring IOA up to operational state.
+ *
+ * @pinstance: pointer to adapter instance structure
+ *
+ * Return value
+ *	none
+ */
+static void pmcraid_initiate_reset(struct pmcraid_instance *pinstance)
+{
+	struct pmcraid_cmd *cmd;
+
+	/* If the reset is already in progress, just return, otherwise start
+	 * reset sequence and return
+	 */
+	if (!pinstance->ioa_reset_in_progress) {
+		scsi_block_requests(pinstance->host);
+		cmd = pmcraid_get_free_cmd(pinstance);
+
+		if (cmd == NULL) {
+			pmcraid_err("no cmnd blocks for initiate_reset\n");
+			return;
+		}
+
+		pinstance->ioa_shutdown_type = SHUTDOWN_NONE;
+		pinstance->reset_cmd = cmd;
+		pinstance->force_ioa_reset = 1;
+		pmcraid_ioa_reset(cmd);
+	}
+}
+
+/**
+ * pmcraid_reset_reload - utility routine for doing IOA reset either to bringup
+ * or bringdown IOA
+ * @pinstance: pointer to adapter instance structure
+ * @shutdown_type: shutdown type to be used NONE, NORMAL or ABBREV
+ * @target_state: expected target state after reset
+ *
+ * Note: This command initiates reset and waits for its completion. Hence this
+ * should not be called from isr/timer/tasklet functions (timeout handlers,
+ * error response handlers and interrupt handlers).
+ *
+ * Return Value
+ *	1 in case ioa_state is not target_state, 0 otherwise.
+ */
+static int pmcraid_reset_reload(
+	struct pmcraid_instance *pinstance,
+	u8 shutdown_type,
+	u8 target_state
+)
+{
+	struct pmcraid_cmd *reset_cmd = NULL;
+	unsigned long lock_flags;
+	int reset = 1;
+
+	spin_lock_irqsave(pinstance->host->host_lock, lock_flags);
+
+	if (pinstance->ioa_reset_in_progress) {
+		pmcraid_info("reset_reload: reset is already in progress\n");
+
+		spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+
+		wait_event(pinstance->reset_wait_q,
+			   !pinstance->ioa_reset_in_progress);
+
+		spin_lock_irqsave(pinstance->host->host_lock, lock_flags);
+
+		if (pinstance->ioa_state == IOA_STATE_DEAD) {
+			spin_unlock_irqrestore(pinstance->host->host_lock,
+					       lock_flags);
+			pmcraid_info("reset_reload: IOA is dead\n");
+			return reset;
+		} else if (pinstance->ioa_state == target_state) {
+			reset = 0;
+		}
+	}
+
+	if (reset) {
+		pmcraid_info("reset_reload: proceeding with reset\n");
+		scsi_block_requests(pinstance->host);
+		reset_cmd = pmcraid_get_free_cmd(pinstance);
+
+		if (reset_cmd == NULL) {
+			pmcraid_err("no free cmnd for reset_reload\n");
+			spin_unlock_irqrestore(pinstance->host->host_lock,
+					       lock_flags);
+			return reset;
+		}
+
+		if (shutdown_type == SHUTDOWN_NORMAL)
+			pinstance->ioa_bringdown = 1;
+
+		pinstance->ioa_shutdown_type = shutdown_type;
+		pinstance->reset_cmd = reset_cmd;
+		pinstance->force_ioa_reset = reset;
+		pmcraid_info("reset_reload: initiating reset\n");
+		pmcraid_ioa_reset(reset_cmd);
+		spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+		pmcraid_info("reset_reload: waiting for reset to complete\n");
+		wait_event(pinstance->reset_wait_q,
+			   !pinstance->ioa_reset_in_progress);
+
+		pmcraid_info("reset_reload: reset is complete !!\n");
\n"); + scsi_unblock_requests(pinstance->host); + if (pinstance->ioa_state == target_state) + reset = 0; + } + + return reset; +} + +/** + * pmcraid_reset_bringdown - wrapper over pmcraid_reset_reload to bringdown IOA + * + * @pinstance: pointer to adapter instance structure + * + * Return Value + * whatever is returned from pmcraid_reset_reload + */ +static int pmcraid_reset_bringdown(struct pmcraid_instance *pinstance) +{ + return pmcraid_reset_reload(pinstance, + SHUTDOWN_NORMAL, + IOA_STATE_UNKNOWN); +} + +/** + * pmcraid_reset_bringup - wrapper over pmcraid_reset_reload to bring up IOA + * + * @pinstance: pointer to adapter instance structure + * + * Return Value + * whatever is returned from pmcraid_reset_reload + */ +static int pmcraid_reset_bringup(struct pmcraid_instance *pinstance) +{ + return pmcraid_reset_reload(pinstance, + SHUTDOWN_NONE, + IOA_STATE_OPERATIONAL); +} + +/** + * pmcraid_request_sense - Send request sense to a device + * @cmd: pmcraid command struct + * + * This function sends a request sense to a device as a result of a check + * condition. This method re-uses the same command block that failed earlier. + */ +static void pmcraid_request_sense(struct pmcraid_cmd *cmd) +{ + struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; + struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl; + + /* allocate DMAable memory for sense buffers */ + cmd->sense_buffer = pci_alloc_consistent(cmd->drv_inst->pdev, + SCSI_SENSE_BUFFERSIZE, + &cmd->sense_buffer_dma); + + if (cmd->sense_buffer == NULL) { + pmcraid_err + ("couldn't allocate sense buffer for request sense\n"); + pmcraid_erp_done(cmd); + return; + } + + /* re-use the command block */ + memset(&cmd->ioa_cb->ioasa, 0, sizeof(struct pmcraid_ioasa)); + memset(ioarcb->cdb, 0, PMCRAID_MAX_CDB_LEN); + ioarcb->request_flags0 = (SYNC_COMPLETE | + NO_LINK_DESCS | + INHIBIT_UL_CHECK); + ioarcb->request_type = REQ_TYPE_SCSI; + ioarcb->cdb[0] = REQUEST_SENSE; + ioarcb->cdb[4] = SCSI_SENSE_BUFFERSIZE; + + ioarcb->ioadl_bus_addr = cpu_to_le64((cmd->ioa_cb_bus_addr) + + offsetof(struct pmcraid_ioarcb, + add_data.u.ioadl[0])); + ioarcb->ioadl_length = cpu_to_le32(sizeof(struct pmcraid_ioadl_desc)); + + ioarcb->data_transfer_length = cpu_to_le32(SCSI_SENSE_BUFFERSIZE); + + ioadl->address = cpu_to_le64(cmd->sense_buffer_dma); + ioadl->data_len = cpu_to_le32(SCSI_SENSE_BUFFERSIZE); + ioadl->flags = cpu_to_le32(IOADL_FLAGS_LAST_DESC); + + /* request sense might be called as part of error response processing + * which runs in tasklets context. It is possible that mid-layer might + * schedule queuecommand during this time, hence, writting to IOARRIN + * must be protect by host_lock + */ + pmcraid_send_cmd(cmd, pmcraid_erp_done, + PMCRAID_REQUEST_SENSE_TIMEOUT, + pmcraid_timeout_handler); +} + +/** + * pmcraid_cancel_all - cancel all outstanding IOARCBs as part of error recovery + * @cmd: command that failed + * @sense: true if request_sense is required after cancel all + * + * This function sends a cancel all to a device to clear the queue. + */ +static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, u32 sense) +{ + struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd; + struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; + struct pmcraid_resource_entry *res = scsi_cmd->device->hostdata; + void (*cmd_done) (struct pmcraid_cmd *) = sense ? 
+
+/**
+ * pmcraid_cancel_all - cancel all outstanding IOARCBs as part of error recovery
+ * @cmd: command that failed
+ * @sense: true if sense data has already been copied; a request sense is sent
+ * after cancel all only when this is false
+ *
+ * This function sends a cancel all to a device to clear the queue.
+ */
+static void pmcraid_cancel_all(struct pmcraid_cmd *cmd, u32 sense)
+{
+	struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd;
+	struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb;
+	struct pmcraid_resource_entry *res = scsi_cmd->device->hostdata;
+	void (*cmd_done) (struct pmcraid_cmd *) = sense ? pmcraid_erp_done
+							: pmcraid_request_sense;
+
+	memset(ioarcb->cdb, 0, PMCRAID_MAX_CDB_LEN);
+	ioarcb->request_flags0 = SYNC_OVERRIDE;
+	ioarcb->request_type = REQ_TYPE_IOACMD;
+	ioarcb->cdb[0] = PMCRAID_CANCEL_ALL_REQUESTS;
+
+	if (RES_IS_GSCSI(res->cfg_entry))
+		ioarcb->cdb[1] = PMCRAID_SYNC_COMPLETE_AFTER_CANCEL;
+
+	ioarcb->ioadl_bus_addr = 0;
+	ioarcb->ioadl_length = 0;
+	ioarcb->data_transfer_length = 0;
+	ioarcb->ioarcb_bus_addr &= (~0x1FULL);
+
+	/* writing to IOARRIN must be protected by host_lock, as mid-layer
+	 * might schedule queuecommand while we are doing this
+	 */
+	pmcraid_send_cmd(cmd, cmd_done,
+			 PMCRAID_REQUEST_SENSE_TIMEOUT,
+			 pmcraid_timeout_handler);
+}
+
+/**
+ * pmcraid_frame_auto_sense - frame fixed format sense information
+ *
+ * @cmd: pointer to failing command block
+ *
+ * Return value
+ *	none
+ */
+static void pmcraid_frame_auto_sense(struct pmcraid_cmd *cmd)
+{
+	u8 *sense_buf = cmd->scsi_cmd->sense_buffer;
+	struct pmcraid_resource_entry *res = cmd->scsi_cmd->device->hostdata;
+	struct pmcraid_ioasa *ioasa = &cmd->ioa_cb->ioasa;
+	u32 ioasc = le32_to_cpu(ioasa->ioasc);
+	u32 failing_lba = 0;
+
+	memset(sense_buf, 0, SCSI_SENSE_BUFFERSIZE);
+	cmd->scsi_cmd->result = SAM_STAT_CHECK_CONDITION;
+
+	if (RES_IS_VSET(res->cfg_entry) &&
+	    ioasc == PMCRAID_IOASC_ME_READ_ERROR_NO_REALLOC &&
+	    ioasa->u.vset.failing_lba_hi != 0) {
+
+		sense_buf[0] = 0x72;
+		sense_buf[1] = PMCRAID_IOASC_SENSE_KEY(ioasc);
+		sense_buf[2] = PMCRAID_IOASC_SENSE_CODE(ioasc);
+		sense_buf[3] = PMCRAID_IOASC_SENSE_QUAL(ioasc);
+
+		sense_buf[7] = 12;
+		sense_buf[8] = 0;
+		sense_buf[9] = 0x0A;
+		sense_buf[10] = 0x80;
+
+		failing_lba = le32_to_cpu(ioasa->u.vset.failing_lba_hi);
+
+		sense_buf[12] = (failing_lba & 0xff000000) >> 24;
+		sense_buf[13] = (failing_lba & 0x00ff0000) >> 16;
+		sense_buf[14] = (failing_lba & 0x0000ff00) >> 8;
+		sense_buf[15] = failing_lba & 0x000000ff;
+
+		failing_lba = le32_to_cpu(ioasa->u.vset.failing_lba_lo);
+
+		sense_buf[16] = (failing_lba & 0xff000000) >> 24;
+		sense_buf[17] = (failing_lba & 0x00ff0000) >> 16;
+		sense_buf[18] = (failing_lba & 0x0000ff00) >> 8;
+		sense_buf[19] = failing_lba & 0x000000ff;
+	} else {
+		sense_buf[0] = 0x70;
+		sense_buf[2] = PMCRAID_IOASC_SENSE_KEY(ioasc);
+		sense_buf[12] = PMCRAID_IOASC_SENSE_CODE(ioasc);
+		sense_buf[13] = PMCRAID_IOASC_SENSE_QUAL(ioasc);
+
+		if (ioasc == PMCRAID_IOASC_ME_READ_ERROR_NO_REALLOC) {
+			if (RES_IS_VSET(res->cfg_entry))
+				failing_lba =
+					le32_to_cpu(ioasa->u.
+						    vset.failing_lba_lo);
+			sense_buf[0] |= 0x80;
+			sense_buf[3] = (failing_lba >> 24) & 0xff;
+			sense_buf[4] = (failing_lba >> 16) & 0xff;
+			sense_buf[5] = (failing_lba >> 8) & 0xff;
+			sense_buf[6] = failing_lba & 0xff;
+		}
+
+		sense_buf[7] = 6; /* additional length */
+	}
+}
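
pmcraid_frame_auto_sense() above emits either descriptor-format (0x72) or fixed-format (0x70) sense data, setting the VALID bit (0x80) in byte 0 when a failing LBA is recorded in the information bytes. A small self-contained example that frames and then decodes the fixed format; the byte offsets follow SPC, while the sense key and LBA values here are arbitrary:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
	    uint8_t sense[18];
	    uint32_t lba = 0x00123456; /* pretend failing LBA */

	    memset(sense, 0, sizeof(sense));
	    sense[0] = 0x70 | 0x80;  /* fixed format, VALID: info field set */
	    sense[2] = 0x03;         /* sense key: MEDIUM ERROR */
	    sense[3] = lba >> 24;    /* information bytes 3..6 hold the LBA */
	    sense[4] = (lba >> 16) & 0xff;
	    sense[5] = (lba >> 8) & 0xff;
	    sense[6] = lba & 0xff;
	    sense[7] = 6;            /* additional sense length */

	    printf("key=%#x lba=%#x\n", sense[2] & 0x0f,
		   (sense[3] << 24) | (sense[4] << 16) |
		   (sense[5] << 8) | sense[6]);
	    return 0;
    }
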
+
+/**
+ * pmcraid_error_handler - Error response handler for a SCSI op
+ * @cmd: pointer to pmcraid_cmd that has failed
+ *
+ * This function determines whether or not to initiate ERP on the affected
+ * device. This is called from a tasklet, which doesn't hold any locks.
+ *
+ * Return value:
+ *	0 if caller can complete the request, otherwise 1 wherein the error
+ *	handler itself completes the request and returns the command block
+ *	back to free-pool
+ */
+static int pmcraid_error_handler(struct pmcraid_cmd *cmd)
+{
+	struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd;
+	struct pmcraid_resource_entry *res = scsi_cmd->device->hostdata;
+	struct pmcraid_instance *pinstance = cmd->drv_inst;
+	struct pmcraid_ioasa *ioasa = &cmd->ioa_cb->ioasa;
+	u32 ioasc = le32_to_cpu(ioasa->ioasc);
+	u32 masked_ioasc = ioasc & PMCRAID_IOASC_SENSE_MASK;
+	u32 sense_copied = 0;
+
+	if (!res) {
+		pmcraid_info("resource pointer is NULL\n");
+		return 0;
+	}
+
+	/* If this was a SCSI read/write command keep count of errors */
+	if (SCSI_CMD_TYPE(scsi_cmd->cmnd[0]) == SCSI_READ_CMD)
+		atomic_inc(&res->read_failures);
+	else if (SCSI_CMD_TYPE(scsi_cmd->cmnd[0]) == SCSI_WRITE_CMD)
+		atomic_inc(&res->write_failures);
+
+	if (!RES_IS_GSCSI(res->cfg_entry) &&
+	    masked_ioasc != PMCRAID_IOASC_HW_DEVICE_BUS_STATUS_ERROR) {
+		pmcraid_frame_auto_sense(cmd);
+	}
+
+	/* Log IOASC/IOASA information based on user settings */
+	pmcraid_ioasc_logger(ioasc, cmd);
+
+	switch (masked_ioasc) {
+
+	case PMCRAID_IOASC_AC_TERMINATED_BY_HOST:
+		scsi_cmd->result |= (DID_ABORT << 16);
+		break;
+
+	case PMCRAID_IOASC_IR_INVALID_RESOURCE_HANDLE:
+	case PMCRAID_IOASC_HW_CANNOT_COMMUNICATE:
+		scsi_cmd->result |= (DID_NO_CONNECT << 16);
+		break;
+
+	case PMCRAID_IOASC_NR_SYNC_REQUIRED:
+		res->sync_reqd = 1;
+		scsi_cmd->result |= (DID_IMM_RETRY << 16);
+		break;
+
+	case PMCRAID_IOASC_ME_READ_ERROR_NO_REALLOC:
+		scsi_cmd->result |= (DID_PASSTHROUGH << 16);
+		break;
+
+	case PMCRAID_IOASC_UA_BUS_WAS_RESET:
+	case PMCRAID_IOASC_UA_BUS_WAS_RESET_BY_OTHER:
+		if (!res->reset_progress)
+			scsi_report_bus_reset(pinstance->host,
+					      scsi_cmd->device->channel);
+		scsi_cmd->result |= (DID_ERROR << 16);
+		break;
+
+	case PMCRAID_IOASC_HW_DEVICE_BUS_STATUS_ERROR:
+		scsi_cmd->result |= PMCRAID_IOASC_SENSE_STATUS(ioasc);
+		res->sync_reqd = 1;
+
+		/* if check_condition is not active return with error otherwise
+		 * get/frame the sense buffer
+		 */
+		if (PMCRAID_IOASC_SENSE_STATUS(ioasc) !=
+		    SAM_STAT_CHECK_CONDITION &&
+		    PMCRAID_IOASC_SENSE_STATUS(ioasc) != SAM_STAT_ACA_ACTIVE)
+			return 0;
+
+		/* If we have auto sense data as part of IOASA pass it to
+		 * mid-layer
+		 */
+		if (ioasa->auto_sense_length != 0) {
+			short sense_len = ioasa->auto_sense_length;
+			int data_size = min_t(u16, le16_to_cpu(sense_len),
+					      SCSI_SENSE_BUFFERSIZE);
+
+			memcpy(scsi_cmd->sense_buffer,
+			       ioasa->sense_data,
+			       data_size);
+			sense_copied = 1;
+		}
+
+		if (RES_IS_GSCSI(res->cfg_entry)) {
+			pmcraid_cancel_all(cmd, sense_copied);
+		} else if (sense_copied) {
+			pmcraid_erp_done(cmd);
+			return 0;
+		} else {
+			pmcraid_request_sense(cmd);
+		}
+
+		return 1;
+
+	case PMCRAID_IOASC_NR_INIT_CMD_REQUIRED:
+		break;
+
+	default:
+		if (PMCRAID_IOASC_SENSE_KEY(ioasc) > RECOVERED_ERROR)
+			scsi_cmd->result |= (DID_ERROR << 16);
+		break;
+	}
+	return 0;
+}
+
+/**
+ * pmcraid_reset_device - device reset handler function
+ *
+ * @scsi_cmd: scsi command struct
+ * @timeout: timeout value to be used for the reset command
+ * @modifier: reset modifier indicating the reset sequence to be performed
+ *
+ * This function issues a device reset to the affected device.
+ * A LUN reset will be sent to the device first. If that does
+ * not work, a target reset will be sent.
+ *
+ * Return value:
+ *	SUCCESS / FAILED
+ */
+static int pmcraid_reset_device(
+	struct scsi_cmnd *scsi_cmd,
+	unsigned long timeout,
+	u8 modifier
+)
+{
+	struct pmcraid_cmd *cmd;
+	struct pmcraid_instance *pinstance;
+	struct pmcraid_resource_entry *res;
+	struct pmcraid_ioarcb *ioarcb;
+	unsigned long lock_flags;
+	u32 ioasc;
+
+	pinstance =
+		(struct pmcraid_instance *)scsi_cmd->device->host->hostdata;
+	res = scsi_cmd->device->hostdata;
+
+	if (!res) {
+		pmcraid_err("reset_device: NULL resource pointer\n");
+		return FAILED;
+	}
+
+	/* If adapter is currently going through reset/reload, return failed.
+	 * This will force the mid-layer to call _eh_bus/host reset, which
+	 * will then go to sleep and wait for the reset to complete
+	 */
+	spin_lock_irqsave(pinstance->host->host_lock, lock_flags);
+	if (pinstance->ioa_reset_in_progress ||
+	    pinstance->ioa_state == IOA_STATE_DEAD) {
+		spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+		return FAILED;
+	}
+
+	res->reset_progress = 1;
+	pmcraid_info("Resetting %s resource with addr %x\n",
+		     ((modifier & RESET_DEVICE_LUN) ? "LUN" :
+		     ((modifier & RESET_DEVICE_TARGET) ? "TARGET" : "BUS")),
+		     le32_to_cpu(res->cfg_entry.resource_address));
+
+	/* get a free cmd block */
+	cmd = pmcraid_get_free_cmd(pinstance);
+
+	if (cmd == NULL) {
+		spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+		pmcraid_err("%s: no cmd blocks are available\n", __func__);
+		return FAILED;
+	}
+
+	ioarcb = &cmd->ioa_cb->ioarcb;
+	ioarcb->resource_handle = res->cfg_entry.resource_handle;
+	ioarcb->request_type = REQ_TYPE_IOACMD;
+	ioarcb->cdb[0] = PMCRAID_RESET_DEVICE;
+
+	/* Initialize reset modifier bits */
+	if (modifier)
+		modifier = ENABLE_RESET_MODIFIER | modifier;
+
+	ioarcb->cdb[1] = modifier;
+
+	init_completion(&cmd->wait_for_completion);
+	cmd->completion_req = 1;
+
+	pmcraid_info("cmd(CDB[0] = %x) for %x with index = %d\n",
+		     cmd->ioa_cb->ioarcb.cdb[0],
+		     le32_to_cpu(cmd->ioa_cb->ioarcb.resource_handle),
+		     le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle) >> 2);
+
+	pmcraid_send_cmd(cmd,
+			 pmcraid_internal_done,
+			 timeout,
+			 pmcraid_timeout_handler);
+
+	spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+
+	/* RESET_DEVICE command completes after all pending IOARCBs are
+	 * completed. Once this command is completed, pmcraid_internal_done
+	 * will wake up the 'completion' queue.
+	 */
+	wait_for_completion(&cmd->wait_for_completion);
+
+	/* read the status before the command block is returned to the free
+	 * list, then complete the command here itself
+	 */
+	ioasc = le32_to_cpu(cmd->ioa_cb->ioasa.ioasc);
+	pmcraid_return_cmd(cmd);
+	res->reset_progress = 0;
+
+	/* set the return value based on the returned ioasc */
+	return PMCRAID_IOASC_SENSE_KEY(ioasc) ? FAILED : SUCCESS;
+}
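
When a non-zero modifier is requested, pmcraid_reset_device() above ORs in ENABLE_RESET_MODIFIER before placing it in cdb[1] of the RESET_DEVICE command. A tiny sketch of that encoding; the bit values below are placeholders chosen for illustration, the real ones live in pmcraid.h:

    #include <stdint.h>
    #include <stdio.h>

    #define ENABLE_RESET_MODIFIER 0x80 /* assumed */
    #define RESET_DEVICE_LUN      0x40 /* assumed */

    int main(void)
    {
	    uint8_t modifier = RESET_DEVICE_LUN;

	    if (modifier)
		    modifier = ENABLE_RESET_MODIFIER | modifier;
	    printf("cdb[1] = %#x\n", modifier); /* modifier-valid + LUN reset */
	    return 0;
    }
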
+
+/**
+ * _pmcraid_io_done - helper for pmcraid_io_done function
+ *
+ * @cmd: pointer to pmcraid command struct
+ * @reslen: residual data length to be set in the ioasa
+ * @ioasc: ioasc either returned by IOA or set by driver itself.
+ *
+ * This function is invoked by pmcraid_io_done to complete mid-layer
+ * scsi ops.
+ *
+ * Return value:
+ *	0 if caller is required to return it to free_pool. Returns 1 if
+ *	caller need not worry about freeing command block as error handler
+ *	will take care of that.
+ */
+
+static int _pmcraid_io_done(struct pmcraid_cmd *cmd, int reslen, int ioasc)
+{
+	struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd;
+	int rc = 0;
+
+	scsi_set_resid(scsi_cmd, reslen);
+
+	pmcraid_info("response(%d) CDB[0] = %x ioasc:result: %x:%x\n",
+		le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle) >> 2,
+		cmd->ioa_cb->ioarcb.cdb[0],
+		ioasc, scsi_cmd->result);
+
+	if (PMCRAID_IOASC_SENSE_KEY(ioasc) != 0)
+		rc = pmcraid_error_handler(cmd);
+
+	if (rc == 0) {
+		scsi_dma_unmap(scsi_cmd);
+		scsi_cmd->scsi_done(scsi_cmd);
+	}
+
+	return rc;
+}
+
+/**
+ * pmcraid_io_done - SCSI completion function
+ *
+ * @cmd: pointer to pmcraid command struct
+ *
+ * This function is invoked by tasklet/mid-layer error handler to complete
+ * the SCSI ops sent from mid-layer.
+ *
+ * Return value
+ *	none
+ */
+
+static void pmcraid_io_done(struct pmcraid_cmd *cmd)
+{
+	u32 ioasc = le32_to_cpu(cmd->ioa_cb->ioasa.ioasc);
+	u32 reslen = le32_to_cpu(cmd->ioa_cb->ioasa.residual_data_length);
+
+	if (_pmcraid_io_done(cmd, reslen, ioasc) == 0)
+		pmcraid_return_cmd(cmd);
+}
+
+/**
+ * pmcraid_abort_cmd - Aborts a single IOARCB already submitted to IOA
+ *
+ * @cmd: command block of the command to be aborted
+ *
+ * Return Value:
+ *	returns pointer to command structure used as cancelling cmd
+ */
+static struct pmcraid_cmd *pmcraid_abort_cmd(struct pmcraid_cmd *cmd)
+{
+	struct pmcraid_cmd *cancel_cmd;
+	struct pmcraid_instance *pinstance;
+	struct pmcraid_resource_entry *res;
+
+	pinstance = (struct pmcraid_instance *)cmd->drv_inst;
+	res = cmd->scsi_cmd->device->hostdata;
+
+	cancel_cmd = pmcraid_get_free_cmd(pinstance);
+
+	if (cancel_cmd == NULL) {
+		pmcraid_err("%s: no cmd blocks are available\n", __func__);
+		return NULL;
+	}
+
+	pmcraid_prepare_cancel_cmd(cancel_cmd, cmd);
+
+	pmcraid_info("aborting command CDB[0]= %x with index = %d\n",
+		cmd->ioa_cb->ioarcb.cdb[0],
+		le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle) >> 2);
+
+	init_completion(&cancel_cmd->wait_for_completion);
+	cancel_cmd->completion_req = 1;
+
+	pmcraid_info("command (%d) CDB[0] = %x for %x\n",
+		le32_to_cpu(cancel_cmd->ioa_cb->ioarcb.response_handle) >> 2,
+		cmd->ioa_cb->ioarcb.cdb[0],
+		le32_to_cpu(cancel_cmd->ioa_cb->ioarcb.resource_handle));
+
+	pmcraid_send_cmd(cancel_cmd,
+			 pmcraid_internal_done,
+			 PMCRAID_INTERNAL_TIMEOUT,
+			 pmcraid_timeout_handler);
+	return cancel_cmd;
+}
+
+/**
+ * pmcraid_abort_complete - Waits for ABORT TASK completion
+ *
+ * @cancel_cmd: command block used as cancelling command
+ *
+ * Return Value:
+ *	returns SUCCESS if ABORT TASK has good completion
+ *	otherwise FAILED
+ */
+static int pmcraid_abort_complete(struct pmcraid_cmd *cancel_cmd)
+{
+	struct pmcraid_resource_entry *res;
+	u32 ioasc;
+
+	wait_for_completion(&cancel_cmd->wait_for_completion);
+	res = cancel_cmd->u.res;
+	cancel_cmd->u.res = NULL;
+	ioasc = le32_to_cpu(cancel_cmd->ioa_cb->ioasa.ioasc);
+
+	/* If the abort task is not timed out we will get a Good completion
+	 * as sense_key, otherwise we may get one of the following responses
+	 * due to subsequent bus reset or device reset. In case IOASC is
+	 * NR_SYNC_REQUIRED, set sync_reqd flag for the corresponding resource
+	 */
+	if (ioasc == PMCRAID_IOASC_UA_BUS_WAS_RESET ||
+	    ioasc == PMCRAID_IOASC_NR_SYNC_REQUIRED) {
+		if (ioasc == PMCRAID_IOASC_NR_SYNC_REQUIRED)
+			res->sync_reqd = 1;
+		ioasc = 0;
+	}
+
+	/* complete the command here itself */
+	pmcraid_return_cmd(cancel_cmd);
+	return PMCRAID_IOASC_SENSE_KEY(ioasc) ? FAILED : SUCCESS;
+}
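
pmcraid_abort_cmd()/pmcraid_abort_complete() above split the abort into a fire step and a wait step around a completion object, with the "ISR" side signalling the waiter. The same issue-then-wait shape, modelled in userspace with pthreads purely for illustration (compile with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    struct completion {
	    pthread_mutex_t lock;
	    pthread_cond_t cond;
	    int done;
	    int ioasc; /* stand-in for the IOA status word */
    };

    static void *fake_isr(void *arg)
    {
	    struct completion *c = arg;

	    pthread_mutex_lock(&c->lock);
	    c->ioasc = 0;          /* good completion */
	    c->done = 1;
	    pthread_cond_signal(&c->cond);
	    pthread_mutex_unlock(&c->lock);
	    return NULL;
    }

    int main(void)
    {
	    struct completion c = { PTHREAD_MUTEX_INITIALIZER,
				    PTHREAD_COND_INITIALIZER, 0, -1 };
	    pthread_t t;

	    pthread_create(&t, NULL, fake_isr, &c);   /* "send" the abort */
	    pthread_mutex_lock(&c.lock);
	    while (!c.done)                           /* wait for completion */
		    pthread_cond_wait(&c.cond, &c.lock);
	    pthread_mutex_unlock(&c.lock);
	    pthread_join(t, NULL);
	    printf("abort %s\n", c.ioasc == 0 ? "SUCCESS" : "FAILED");
	    return 0;
    }
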
+
+/**
+ * pmcraid_eh_abort_handler - entry point for aborting a single task on errors
+ *
+ * @scsi_cmd: scsi command struct given by mid-layer. When this is called
+ * mid-layer ensures that no other commands are queued. This
+ * never gets called under interrupt, but a separate eh thread.
+ *
+ * Return value:
+ *	SUCCESS / FAILED
+ */
+static int pmcraid_eh_abort_handler(struct scsi_cmnd *scsi_cmd)
+{
+	struct pmcraid_instance *pinstance;
+	struct pmcraid_cmd *cmd;
+	struct pmcraid_resource_entry *res;
+	unsigned long host_lock_flags;
+	unsigned long pending_lock_flags;
+	struct pmcraid_cmd *cancel_cmd = NULL;
+	int cmd_found = 0;
+	int rc = FAILED;
+
+	pinstance =
+		(struct pmcraid_instance *)scsi_cmd->device->host->hostdata;
+
+	dev_err(&pinstance->pdev->dev,
+		"I/O command timed out, aborting it.\n");
+
+	res = scsi_cmd->device->hostdata;
+
+	if (res == NULL)
+		return rc;
+
+	/* If we are currently going through reset/reload, return failed.
+	 * This will force the mid-layer to eventually call
+	 * pmcraid_eh_host_reset which will then go to sleep and wait for the
+	 * reset to complete
+	 */
+	spin_lock_irqsave(pinstance->host->host_lock, host_lock_flags);
+
+	if (pinstance->ioa_reset_in_progress ||
+	    pinstance->ioa_state == IOA_STATE_DEAD) {
+		spin_unlock_irqrestore(pinstance->host->host_lock,
+				       host_lock_flags);
+		return rc;
+	}
+
+	/* loop over pending cmd list to find cmd corresponding to this
+	 * scsi_cmd. Note that this command might already have been completed.
+	 * locking: all pending commands are protected with
+	 * pending_pool_lock.
+	 */
+	spin_lock_irqsave(&pinstance->pending_pool_lock, pending_lock_flags);
+	list_for_each_entry(cmd, &pinstance->pending_cmd_pool, free_list) {
+
+		if (cmd->scsi_cmd == scsi_cmd) {
+			cmd_found = 1;
+			break;
+		}
+	}
+
+	spin_unlock_irqrestore(&pinstance->pending_pool_lock,
+			       pending_lock_flags);
+
+	/* If the command to be aborted was given to IOA and still pending with
+	 * it, send ABORT_TASK to abort this and wait for its completion
+	 */
+	if (cmd_found)
+		cancel_cmd = pmcraid_abort_cmd(cmd);
+
+	spin_unlock_irqrestore(pinstance->host->host_lock,
+			       host_lock_flags);
+
+	if (cancel_cmd) {
+		cancel_cmd->u.res = cmd->scsi_cmd->device->hostdata;
+		rc = pmcraid_abort_complete(cancel_cmd);
+	}
+
+	return cmd_found ? rc : SUCCESS;
+}
+
+/**
+ * pmcraid_eh_xxxx_reset_handler - bus/target/device reset handler callbacks
+ *
+ * @scmd: pointer to scsi_cmd that was sent to the resource to be reset.
+ *
+ * All these routines invoke pmcraid_reset_device with appropriate parameters.
+ * Since these are called from mid-layer EH thread, no other IO will be queued
+ * to the resource being reset. However, control path (IOCTL) may be active so
+ * it is necessary to synchronize IOARRIN writes, which pmcraid_reset_device
+ * takes care of by locking/unlocking host_lock.
+ *
+ * Return value
+ *	SUCCESS or FAILED
+ */
+static int pmcraid_eh_device_reset_handler(struct scsi_cmnd *scmd)
+{
+	pmcraid_err("Doing device reset due to an I/O command timeout.\n");
+	return pmcraid_reset_device(scmd,
+				    PMCRAID_INTERNAL_TIMEOUT,
+				    RESET_DEVICE_LUN);
+}
+
+static int pmcraid_eh_bus_reset_handler(struct scsi_cmnd *scmd)
+{
+	pmcraid_err("Doing bus reset due to an I/O command timeout.\n");
+	return pmcraid_reset_device(scmd,
+				    PMCRAID_RESET_BUS_TIMEOUT,
+				    RESET_DEVICE_BUS);
+}
+
+static int pmcraid_eh_target_reset_handler(struct scsi_cmnd *scmd)
+{
+	pmcraid_err("Doing target reset due to an I/O command timeout.\n");
+	return pmcraid_reset_device(scmd,
+				    PMCRAID_INTERNAL_TIMEOUT,
+				    RESET_DEVICE_TARGET);
+}
+
+/**
+ * pmcraid_eh_host_reset_handler - adapter reset handler callback
+ *
+ * @scmd: pointer to scsi_cmd that was sent to a resource of adapter
+ *
+ * Initiates adapter reset to bring it up to operational state
+ *
+ * Return value
+ *	SUCCESS or FAILED
+ */
+static int pmcraid_eh_host_reset_handler(struct scsi_cmnd *scmd)
+{
+	unsigned long interval = 10000; /* 10 seconds interval */
+	int waits = jiffies_to_msecs(PMCRAID_RESET_HOST_TIMEOUT) / interval;
+	struct pmcraid_instance *pinstance =
+		(struct pmcraid_instance *)(scmd->device->host->hostdata);
+
+	/* wait for an additional 150 seconds just in case firmware could come
+	 * up and if it could complete all the pending commands excluding the
+	 * two HCAM (CCN and LDN).
+	 */
+	while (waits--) {
+		if (atomic_read(&pinstance->outstanding_cmds) <=
+		    PMCRAID_MAX_HCAM_CMD)
+			return SUCCESS;
+		msleep(interval);
+	}
+
+	dev_err(&pinstance->pdev->dev,
+		"Adapter being reset due to an I/O command timeout.\n");
+	return pmcraid_reset_bringup(pinstance) == 0 ? SUCCESS : FAILED;
+}
+
+/**
+ * pmcraid_task_attributes - Translate SPI Q-Tags to task attributes
+ * @scsi_cmd: scsi command struct
+ *
+ * Return value
+ *	number of tags or 0 if the task is not tagged
+ */
+static u8 pmcraid_task_attributes(struct scsi_cmnd *scsi_cmd)
+{
+	char tag[2];
+	u8 rc = 0;
+
+	if (scsi_populate_tag_msg(scsi_cmd, tag)) {
+		switch (tag[0]) {
+		case MSG_SIMPLE_TAG:
+			rc = TASK_TAG_SIMPLE;
+			break;
+		case MSG_HEAD_TAG:
+			rc = TASK_TAG_QUEUE_HEAD;
+			break;
+		case MSG_ORDERED_TAG:
+			rc = TASK_TAG_ORDERED;
+			break;
+		}
+	}
+
+	return rc;
+}
+
+/**
+ * pmcraid_init_ioadls - initializes IOADL related fields in IOARCB
+ * @cmd: pmcraid command struct
+ * @sgcount: count of scatter-gather elements
+ *
+ * Return value
+ *	returns pointer to pmcraid_ioadl_desc, initialized to point to internal
+ *	or external IOADLs
+ */
+struct pmcraid_ioadl_desc *
+pmcraid_init_ioadls(struct pmcraid_cmd *cmd, int sgcount)
+{
+	struct pmcraid_ioadl_desc *ioadl;
+	struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb;
+	int ioadl_count = 0;
+
+	if (ioarcb->add_cmd_param_length)
+		ioadl_count = DIV_ROUND_UP(ioarcb->add_cmd_param_length, 16);
+	ioarcb->ioadl_length =
+		sizeof(struct pmcraid_ioadl_desc) * sgcount;
+
+	if ((sgcount + ioadl_count) > (ARRAY_SIZE(ioarcb->add_data.u.ioadl))) {
+		/* external ioadls start at offset 0x80 from control_block
+		 * structure, re-using 24 out of the 27 ioadls that are part
+		 * of IOARCB. It is necessary to indicate to firmware that
+		 * driver is using ioadls to be treated as external to IOARCB.
+		 */
+		ioarcb->ioarcb_bus_addr &= ~(0x1FULL);
+		ioarcb->ioadl_bus_addr =
+			cpu_to_le64((cmd->ioa_cb_bus_addr) +
+				offsetof(struct pmcraid_ioarcb,
+					add_data.u.ioadl[3]));
+		ioadl = &ioarcb->add_data.u.ioadl[3];
+	} else {
+		ioarcb->ioadl_bus_addr =
+			cpu_to_le64((cmd->ioa_cb_bus_addr) +
+				offsetof(struct pmcraid_ioarcb,
+					add_data.u.ioadl[ioadl_count]));
+
+		ioadl = &ioarcb->add_data.u.ioadl[ioadl_count];
+		ioarcb->ioarcb_bus_addr |=
+				DIV_ROUND_CLOSEST(sgcount + ioadl_count, 8);
+	}
+
+	return ioadl;
+}
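
The internal-versus-external decision above hinges on how many IOADL slots inside the IOARCB remain after additional command parameters are accounted for in 16-byte units. A standalone version of that arithmetic; the internal slot count is an assumed figure used only for illustration:

    #include <stdio.h>

    #define NUM_INTERNAL_IOADLS 27          /* assumed IOARCB capacity */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
	    int add_cmd_param_length = 48;  /* hypothetical extra params */
	    int sgcount = 20;               /* scatter-gather elements */

	    /* extra parameters consume IOADL slots in 16-byte units */
	    int used = DIV_ROUND_UP(add_cmd_param_length, 16);

	    if (sgcount + used > NUM_INTERNAL_IOADLS)
		    printf("use external IOADL area (offset 0x80)\n");
	    else
		    printf("fits: descriptors start at slot %d\n", used);
	    return 0;
    }
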
+
+/**
+ * pmcraid_build_ioadl - Build a scatter/gather list and map the buffer
+ * @pinstance: pointer to adapter instance structure
+ * @cmd: pmcraid command struct
+ *
+ * This function is invoked by queuecommand entry point while sending a command
+ * to firmware. This builds ioadl descriptors and sets up ioarcb fields.
+ *
+ * Return value:
+ *	0 on success or -1 on failure
+ */
+static int pmcraid_build_ioadl(
+	struct pmcraid_instance *pinstance,
+	struct pmcraid_cmd *cmd
+)
+{
+	int i, nseg;
+	struct scatterlist *sglist;
+
+	struct scsi_cmnd *scsi_cmd = cmd->scsi_cmd;
+	struct pmcraid_ioarcb *ioarcb = &(cmd->ioa_cb->ioarcb);
+	struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl;
+
+	u32 length = scsi_bufflen(scsi_cmd);
+
+	if (!length)
+		return 0;
+
+	nseg = scsi_dma_map(scsi_cmd);
+
+	if (nseg < 0) {
+		dev_err(&pinstance->pdev->dev, "scsi_dma_map failed!\n");
+		return -1;
+	} else if (nseg > PMCRAID_MAX_IOADLS) {
+		scsi_dma_unmap(scsi_cmd);
+		dev_err(&pinstance->pdev->dev,
+			"sg count is (%d) more than allowed!\n", nseg);
+		return -1;
+	}
+
+	/* Initialize IOARCB data transfer length fields */
+	if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE)
+		ioarcb->request_flags0 |= TRANSFER_DIR_WRITE;
+
+	ioarcb->request_flags0 |= NO_LINK_DESCS;
+	ioarcb->data_transfer_length = cpu_to_le32(length);
+	ioadl = pmcraid_init_ioadls(cmd, nseg);
+
+	/* Initialize IOADL descriptor addresses */
+	scsi_for_each_sg(scsi_cmd, sglist, nseg, i) {
+		ioadl[i].data_len = cpu_to_le32(sg_dma_len(sglist));
+		ioadl[i].address = cpu_to_le64(sg_dma_address(sglist));
+		ioadl[i].flags = 0;
+	}
+	/* setup last descriptor */
+	ioadl[i - 1].flags = cpu_to_le32(IOADL_FLAGS_LAST_DESC);
+
+	return 0;
+}
+
+/**
+ * pmcraid_free_sglist - Frees an allocated SG buffer list
+ * @sglist: scatter/gather list pointer
+ *
+ * Frees DMA'able memory previously allocated with pmcraid_alloc_sglist
+ *
+ * Return value:
+ *	none
+ */
+static void pmcraid_free_sglist(struct pmcraid_sglist *sglist)
+{
+	int i;
+
+	for (i = 0; i < sglist->num_sg; i++)
+		__free_pages(sg_page(&(sglist->scatterlist[i])),
+			     sglist->order);
+
+	kfree(sglist);
+}
+
+/**
+ * pmcraid_alloc_sglist - Allocates memory for a SG list
+ * @buflen: buffer length
+ *
+ * Allocates a DMA'able buffer in chunks and assembles a scatter/gather
+ * list.
+ *
+ * Return value
+ *	pointer to sglist / NULL on failure
+ */
+static struct pmcraid_sglist *pmcraid_alloc_sglist(int buflen)
+{
+	struct pmcraid_sglist *sglist;
+	struct scatterlist *scatterlist;
+	struct page *page;
+	int num_elem, i, j;
+	int sg_size;
+	int order;
+	int bsize_elem;
+
+	sg_size = buflen / (PMCRAID_MAX_IOADLS - 1);
+	order = (sg_size > 0) ?
get_order(sg_size) : 0; + bsize_elem = PAGE_SIZE * (1 << order); + + /* Determine the actual number of sg entries needed */ + if (buflen % bsize_elem) + num_elem = (buflen / bsize_elem) + 1; + else + num_elem = buflen / bsize_elem; + + /* Allocate a scatter/gather list for the DMA */ + sglist = kzalloc(sizeof(struct pmcraid_sglist) + + (sizeof(struct scatterlist) * (num_elem - 1)), + GFP_KERNEL); + + if (sglist == NULL) + return NULL; + + scatterlist = sglist->scatterlist; + sg_init_table(scatterlist, num_elem); + sglist->order = order; + sglist->num_sg = num_elem; + sg_size = buflen; + + for (i = 0; i < num_elem; i++) { + page = alloc_pages(GFP_KERNEL|GFP_DMA, order); + if (!page) { + for (j = i - 1; j >= 0; j--) + __free_pages(sg_page(&scatterlist[j]), order); + kfree(sglist); + return NULL; + } + + sg_set_page(&scatterlist[i], page, + sg_size < bsize_elem ? sg_size : bsize_elem, 0); + sg_size -= bsize_elem; + } + + return sglist; +} + +/** + * pmcraid_copy_sglist - Copy user buffer to kernel buffer's SG list + * @sglist: scatter/gather list pointer + * @buffer: buffer pointer + * @len: buffer length + * @direction: data transfer direction + * + * Copy a user buffer into a buffer allocated by pmcraid_alloc_sglist + * + * Return value: + * 0 on success / other on failure + */ +static int pmcraid_copy_sglist( + struct pmcraid_sglist *sglist, + unsigned long buffer, + u32 len, + int direction +) +{ + struct scatterlist *scatterlist; + void *kaddr; + int bsize_elem; + int i; + int rc = 0; + + /* Determine the actual number of bytes per element */ + bsize_elem = PAGE_SIZE * (1 << sglist->order); + + scatterlist = sglist->scatterlist; + + for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) { + struct page *page = sg_page(&scatterlist[i]); + + kaddr = kmap(page); + if (direction == DMA_TO_DEVICE) + rc = __copy_from_user(kaddr, + (void *)buffer, + bsize_elem); + else + rc = __copy_to_user((void *)buffer, kaddr, bsize_elem); + + kunmap(page); + + if (rc) { + pmcraid_err("failed to copy user data into sg list\n"); + return -EFAULT; + } + + scatterlist[i].length = bsize_elem; + } + + if (len % bsize_elem) { + struct page *page = sg_page(&scatterlist[i]); + + kaddr = kmap(page); + + if (direction == DMA_TO_DEVICE) + rc = __copy_from_user(kaddr, + (void *)buffer, + len % bsize_elem); + else + rc = __copy_to_user((void *)buffer, + kaddr, + len % bsize_elem); + + kunmap(page); + + scatterlist[i].length = len % bsize_elem; + } + + if (rc) { + pmcraid_err("failed to copy user data into sg list\n"); + rc = -EFAULT; + } + + return rc; +} + +/** + * pmcraid_queuecommand - Queue a mid-layer request + * @scsi_cmd: scsi command struct + * @done: done function + * + * This function queues a request generated by the mid-layer. Midlayer calls + * this routine within host->lock. 
Some of the functions called by queuecommand
+ * would use cmd block queue locks (free_pool_lock and pending_pool_lock)
+ *
+ * Return value:
+ *	0 on success
+ *	SCSI_MLQUEUE_DEVICE_BUSY if device is busy
+ *	SCSI_MLQUEUE_HOST_BUSY if host is busy
+ */
+static int pmcraid_queuecommand(
+	struct scsi_cmnd *scsi_cmd,
+	void (*done) (struct scsi_cmnd *)
+)
+{
+	struct pmcraid_instance *pinstance;
+	struct pmcraid_resource_entry *res;
+	struct pmcraid_ioarcb *ioarcb;
+	struct pmcraid_cmd *cmd;
+	int rc = 0;
+
+	pinstance =
+		(struct pmcraid_instance *)scsi_cmd->device->host->hostdata;
+
+	scsi_cmd->scsi_done = done;
+	res = scsi_cmd->device->hostdata;
+	scsi_cmd->result = (DID_OK << 16);
+
+	/* if adapter is marked as dead, set result to DID_NO_CONNECT and
+	 * complete the command
+	 */
+	if (pinstance->ioa_state == IOA_STATE_DEAD) {
+		pmcraid_info("IOA is dead, but queuecommand is scheduled\n");
+		scsi_cmd->result = (DID_NO_CONNECT << 16);
+		scsi_cmd->scsi_done(scsi_cmd);
+		return 0;
+	}
+
+	/* If IOA reset is in progress, can't queue the commands */
+	if (pinstance->ioa_reset_in_progress)
+		return SCSI_MLQUEUE_HOST_BUSY;
+
+	/* initialize the command and IOARCB to be sent to IOA */
+	cmd = pmcraid_get_free_cmd(pinstance);
+
+	if (cmd == NULL) {
+		pmcraid_err("free command block is not available\n");
+		return SCSI_MLQUEUE_HOST_BUSY;
+	}
+
+	cmd->scsi_cmd = scsi_cmd;
+	ioarcb = &(cmd->ioa_cb->ioarcb);
+	memcpy(ioarcb->cdb, scsi_cmd->cmnd, scsi_cmd->cmd_len);
+	ioarcb->resource_handle = res->cfg_entry.resource_handle;
+	ioarcb->request_type = REQ_TYPE_SCSI;
+
+	cmd->cmd_done = pmcraid_io_done;
+
+	if (RES_IS_GSCSI(res->cfg_entry) || RES_IS_VSET(res->cfg_entry)) {
+		if (scsi_cmd->underflow == 0)
+			ioarcb->request_flags0 |= INHIBIT_UL_CHECK;
+
+		if (res->sync_reqd) {
+			ioarcb->request_flags0 |= SYNC_COMPLETE;
+			res->sync_reqd = 0;
+		}
+
+		ioarcb->request_flags0 |= NO_LINK_DESCS;
+		ioarcb->request_flags1 |= pmcraid_task_attributes(scsi_cmd);
+
+		if (RES_IS_GSCSI(res->cfg_entry))
+			ioarcb->request_flags1 |= DELAY_AFTER_RESET;
+	}
+
+	rc = pmcraid_build_ioadl(pinstance, cmd);
+
+	pmcraid_info("command (%d) CDB[0] = %x for %x:%x:%x:%x\n",
+		     le32_to_cpu(ioarcb->response_handle) >> 2,
+		     scsi_cmd->cmnd[0], pinstance->host->unique_id,
+		     RES_IS_VSET(res->cfg_entry) ? PMCRAID_VSET_BUS_ID :
+		     PMCRAID_PHYS_BUS_ID,
+		     RES_IS_VSET(res->cfg_entry) ?
+			res->cfg_entry.unique_flags1 :
+			RES_TARGET(res->cfg_entry.resource_address),
+		     RES_LUN(res->cfg_entry.resource_address));
+
+	if (likely(rc == 0)) {
+		_pmcraid_fire_command(cmd);
+	} else {
+		pmcraid_err("queuecommand could not build ioadl\n");
+		pmcraid_return_cmd(cmd);
+		rc = SCSI_MLQUEUE_HOST_BUSY;
+	}
+
+	return rc;
+}
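
The DID_* values used above land in the host byte of scsi_cmnd->result, which the SCSI core decomposes as status, message, host and driver bytes. A minimal demonstration of that packing; the two DID codes are quoted from scsi/scsi.h of this era:

    #include <stdio.h>

    /* result = status | msg << 8 | host << 16 | driver << 24 */
    #define DID_OK         0x00
    #define DID_NO_CONNECT 0x01

    int main(void)
    {
	    unsigned int result = DID_NO_CONNECT << 16;

	    printf("host byte %#x, status byte %#x\n",
		   (result >> 16) & 0xff, result & 0xff);
	    return 0;
    }
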
+
+/**
+ * pmcraid_chr_open - char node "open" entry point; allowed only for users
+ * with admin access
+ */
+static int pmcraid_chr_open(struct inode *inode, struct file *filep)
+{
+	struct pmcraid_instance *pinstance;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EACCES;
+
+	/* Populate adapter instance pointer for use by ioctl */
+	pinstance = container_of(inode->i_cdev, struct pmcraid_instance, cdev);
+	filep->private_data = pinstance;
+
+	return 0;
+}
+
+/**
+ * pmcraid_chr_release - char node "release" entry point
+ */
+static int pmcraid_chr_release(struct inode *inode, struct file *filep)
+{
+	struct pmcraid_instance *pinstance =
+		((struct pmcraid_instance *)filep->private_data);
+
+	filep->private_data = NULL;
+	fasync_helper(-1, filep, 0, &pinstance->aen_queue);
+
+	return 0;
+}
+
+/**
+ * pmcraid_chr_fasync - Async notifier registration from applications
+ *
+ * This function adds the calling process to a driver global queue. When an
+ * event occurs, SIGIO will be sent to all processes in this queue.
+ */
+static int pmcraid_chr_fasync(int fd, struct file *filep, int mode)
+{
+	struct pmcraid_instance *pinstance;
+	int rc;
+
+	pinstance = (struct pmcraid_instance *)filep->private_data;
+	mutex_lock(&pinstance->aen_queue_lock);
+	rc = fasync_helper(fd, filep, mode, &pinstance->aen_queue);
+	mutex_unlock(&pinstance->aen_queue_lock);
+
+	return rc;
+}
+
+/**
+ * pmcraid_build_passthrough_ioadls - builds SG elements for passthrough
+ * commands sent over IOCTL interface
+ *
+ * @cmd: pointer to struct pmcraid_cmd
+ * @buflen: length of the request buffer
+ * @direction: data transfer direction
+ *
+ * Return value
+ *	0 on success, non-zero error code on failure
+ */
+static int pmcraid_build_passthrough_ioadls(
+	struct pmcraid_cmd *cmd,
+	int buflen,
+	int direction
+)
+{
+	struct pmcraid_sglist *sglist = NULL;
+	struct scatterlist *sg = NULL;
+	struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb;
+	struct pmcraid_ioadl_desc *ioadl;
+	int i;
+
+	sglist = pmcraid_alloc_sglist(buflen);
+
+	if (!sglist) {
+		pmcraid_err("can't allocate memory for passthrough SGls\n");
+		return -ENOMEM;
+	}
+
+	sglist->num_dma_sg = pci_map_sg(cmd->drv_inst->pdev,
+					sglist->scatterlist,
+					sglist->num_sg, direction);
+
+	if (!sglist->num_dma_sg || sglist->num_dma_sg > PMCRAID_MAX_IOADLS) {
+		dev_err(&cmd->drv_inst->pdev->dev,
+			"Failed to map passthrough buffer!\n");
+		pmcraid_free_sglist(sglist);
+		return -EIO;
+	}
+
+	cmd->sglist = sglist;
+	ioarcb->request_flags0 |= NO_LINK_DESCS;
+
+	ioadl = pmcraid_init_ioadls(cmd, sglist->num_dma_sg);
+
+	/* Initialize IOADL descriptor addresses */
+	for_each_sg(sglist->scatterlist, sg, sglist->num_dma_sg, i) {
+		ioadl[i].data_len = cpu_to_le32(sg_dma_len(sg));
+		ioadl[i].address = cpu_to_le64(sg_dma_address(sg));
+		ioadl[i].flags = 0;
+	}
+
+	/* setup the last descriptor */
+	ioadl[i - 1].flags = cpu_to_le32(IOADL_FLAGS_LAST_DESC);
+
+	return 0;
+}
+
+/**
+ * pmcraid_release_passthrough_ioadls - release passthrough ioadls
+ *
+ * @cmd: pointer to struct pmcraid_cmd for which ioadls were allocated
+ * @buflen: size of the request buffer
+ * @direction: data transfer direction
+ *
+ * Return value
+ *	None
+ */
+static void pmcraid_release_passthrough_ioadls(
+	struct pmcraid_cmd *cmd,
+	int buflen,
+	int direction
+)
+{
+	struct pmcraid_sglist *sglist = cmd->sglist;
+
+	if (buflen > 0) {
+		pci_unmap_sg(cmd->drv_inst->pdev,
+			     sglist->scatterlist,
+			     sglist->num_sg,
+			     direction);
+		pmcraid_free_sglist(sglist);
+		cmd->sglist = NULL;
+	}
+}
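
The passthrough path that follows locates the user payload by adding the offset of request_buffer within the passthrough structure to the user pointer. A compact illustration of that offset arithmetic, using simplified stand-in types whose field names and sizes are assumptions:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct fake_ioarcb {
	    uint32_t data_transfer_length;
	    uint8_t cdb[16];
    };

    struct fake_passthrough_buffer {
	    struct fake_ioarcb ioarcb;
	    uint8_t request_buffer[1]; /* payload starts here */
    };

    int main(void)
    {
	    unsigned long arg = 0x1000; /* pretend user-space address */
	    unsigned long request_offset =
		    offsetof(struct fake_passthrough_buffer, request_buffer);

	    /* the driver reads the payload at arg + request_offset */
	    printf("payload at user address %#lx\n", arg + request_offset);
	    return 0;
    }
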
+
+/**
+ * pmcraid_ioctl_passthrough - handling passthrough IOCTL commands
+ *
+ * @pinstance: pointer to adapter instance structure
+ * @ioctl_cmd: ioctl code
+ * @buflen: length of the request buffer
+ * @arg: pointer to pmcraid_passthrough_buffer user buffer
+ *
+ * Return value
+ *	0 on success, non-zero error code on failure
+ */
+static long pmcraid_ioctl_passthrough(
+	struct pmcraid_instance *pinstance,
+	unsigned int ioctl_cmd,
+	unsigned int buflen,
+	unsigned long arg
+)
+{
+	struct pmcraid_passthrough_ioctl_buffer *buffer;
+	struct pmcraid_ioarcb *ioarcb;
+	struct pmcraid_cmd *cmd;
+	struct pmcraid_cmd *cancel_cmd;
+	unsigned long request_buffer;
+	unsigned long request_offset;
+	unsigned long lock_flags;
+	int request_size;
+	int buffer_size;
+	u8 access, direction;
+	int rc = 0;
+
+	/* If IOA reset is in progress, wait 10 secs for reset to complete */
+	if (pinstance->ioa_reset_in_progress) {
+		rc = wait_event_interruptible_timeout(
+				pinstance->reset_wait_q,
+				!pinstance->ioa_reset_in_progress,
+				msecs_to_jiffies(10000));
+
+		if (!rc)
+			return -ETIMEDOUT;
+		else if (rc < 0)
+			return -ERESTARTSYS;
+	}
+
+	/* If adapter is not in operational state, return error */
+	if (pinstance->ioa_state != IOA_STATE_OPERATIONAL) {
+		pmcraid_err("IOA is not operational\n");
+		return -ENOTTY;
+	}
+
+	buffer_size = sizeof(struct pmcraid_passthrough_ioctl_buffer);
+	buffer = kmalloc(buffer_size, GFP_KERNEL);
+
+	if (!buffer) {
+		pmcraid_err("no memory for passthrough buffer\n");
+		return -ENOMEM;
+	}
+
+	request_offset =
+		offsetof(struct pmcraid_passthrough_ioctl_buffer, request_buffer);
+
+	request_buffer = arg + request_offset;
+
+	rc = __copy_from_user(buffer,
+			(struct pmcraid_passthrough_ioctl_buffer *) arg,
+			sizeof(struct pmcraid_passthrough_ioctl_buffer));
+	if (rc) {
+		pmcraid_err("ioctl: can't copy passthrough buffer\n");
+		rc = -EFAULT;
+		goto out_free_buffer;
+	}
+
+	request_size = buffer->ioarcb.data_transfer_length;
+
+	if (buffer->ioarcb.request_flags0 & TRANSFER_DIR_WRITE) {
+		access = VERIFY_READ;
+		direction = DMA_TO_DEVICE;
+	} else {
+		access = VERIFY_WRITE;
+		direction = DMA_FROM_DEVICE;
+	}
+
+	if (request_size > 0) {
+		rc = access_ok(access, arg, request_offset + request_size);
+
+		if (!rc) {
+			rc = -EFAULT;
+			goto out_free_buffer;
+		}
+	}
+
+	/* check if we have any additional command parameters */
+	if (buffer->ioarcb.add_cmd_param_length > PMCRAID_ADD_CMD_PARAM_LEN) {
+		rc = -EINVAL;
+		goto out_free_buffer;
+	}
+
+	cmd = pmcraid_get_free_cmd(pinstance);
+
+	if (!cmd) {
+		pmcraid_err("free command block is not available\n");
+		rc = -ENOMEM;
+		goto out_free_buffer;
+	}
+
+	cmd->scsi_cmd = NULL;
+	ioarcb = &(cmd->ioa_cb->ioarcb);
+
+	/* Copy the user-provided IOARCB stuff field by field */
+	ioarcb->resource_handle = buffer->ioarcb.resource_handle;
+	ioarcb->data_transfer_length = buffer->ioarcb.data_transfer_length;
+	ioarcb->cmd_timeout = buffer->ioarcb.cmd_timeout;
+	ioarcb->request_type = buffer->ioarcb.request_type;
+	ioarcb->request_flags0 = buffer->ioarcb.request_flags0;
+	ioarcb->request_flags1 = buffer->ioarcb.request_flags1;
+	memcpy(ioarcb->cdb, buffer->ioarcb.cdb, PMCRAID_MAX_CDB_LEN);
+
+	if (buffer->ioarcb.add_cmd_param_length) {
+		ioarcb->add_cmd_param_length =
+			buffer->ioarcb.add_cmd_param_length;
+		ioarcb->add_cmd_param_offset =
+			buffer->ioarcb.add_cmd_param_offset;
+		memcpy(ioarcb->add_data.u.add_cmd_params,
+		       buffer->ioarcb.add_data.u.add_cmd_params,
+		       buffer->ioarcb.add_cmd_param_length);
+	}
+
+	if (request_size) {
+		rc = pmcraid_build_passthrough_ioadls(cmd,
+						      request_size,
+						      direction);
+		if (rc) {
+			pmcraid_err("couldn't build passthrough ioadls\n");
+			goto out_free_buffer;
+		}
+	}
+
+	/* If data is being written into the device, copy the data from user
+	 * buffers
+	 */
+	if (direction == DMA_TO_DEVICE && request_size > 0) {
+		rc = pmcraid_copy_sglist(cmd->sglist,
+					 request_buffer,
+					 request_size,
+					 direction);
+		if (rc) {
+			pmcraid_err("failed to copy user buffer\n");
+			goto out_free_sglist;
+		}
+	}
+
+	/* passthrough ioctl is a blocking command so, put the user to sleep
+	 * until timeout. Note that a timeout value of 0 means an indefinite
+	 * (blocking) wait.
+	 */
+	cmd->cmd_done = pmcraid_internal_done;
+	init_completion(&cmd->wait_for_completion);
+	cmd->completion_req = 1;
+
+	pmcraid_info("command(%d) (CDB[0] = %x) for %x\n",
+		     le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle) >> 2,
+		     cmd->ioa_cb->ioarcb.cdb[0],
+		     le32_to_cpu(cmd->ioa_cb->ioarcb.resource_handle));
+
+	spin_lock_irqsave(pinstance->host->host_lock, lock_flags);
+	_pmcraid_fire_command(cmd);
+	spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+
+	/* If command timeout is specified put caller to wait till that time,
+	 * otherwise it would be blocking wait. If command gets timed out, it
+	 * will be aborted.
+	 */
+	if (buffer->ioarcb.cmd_timeout == 0) {
+		wait_for_completion(&cmd->wait_for_completion);
+	} else if (!wait_for_completion_timeout(
+			&cmd->wait_for_completion,
+			msecs_to_jiffies(buffer->ioarcb.cmd_timeout * 1000))) {
+
+		pmcraid_info("aborting cmd %d (CDB[0] = %x) due to timeout\n",
+			le32_to_cpu(cmd->ioa_cb->ioarcb.response_handle) >> 2,
+			cmd->ioa_cb->ioarcb.cdb[0]);
+
+		rc = -ETIMEDOUT;
+		spin_lock_irqsave(pinstance->host->host_lock, lock_flags);
+		cancel_cmd = pmcraid_abort_cmd(cmd);
+		spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags);
+
+		if (cancel_cmd) {
+			wait_for_completion(&cancel_cmd->wait_for_completion);
+			pmcraid_return_cmd(cancel_cmd);
+		}
+
+		goto out_free_sglist;
+	}
+
+	/* If the command failed for any reason, copy entire IOASA buffer and
+	 * return IOCTL success.
If copying IOASA to user-buffer fails, return
+	 * EFAULT
+	 */
+	if (le32_to_cpu(cmd->ioa_cb->ioasa.ioasc)) {
+
+		void *ioasa =
+			(void *)(arg +
+			offsetof(struct pmcraid_passthrough_ioctl_buffer, ioasa));
+
+		pmcraid_info("command failed with %x\n",
+			     le32_to_cpu(cmd->ioa_cb->ioasa.ioasc));
+		if (copy_to_user(ioasa, &cmd->ioa_cb->ioasa,
+				 sizeof(struct pmcraid_ioasa))) {
+			pmcraid_err("failed to copy ioasa buffer to user\n");
+			rc = -EFAULT;
+		}
+	}
+	/* If the data transfer was from device, copy the data onto user
+	 * buffers
+	 */
+	else if (direction == DMA_FROM_DEVICE && request_size > 0) {
+		rc = pmcraid_copy_sglist(cmd->sglist,
+					 request_buffer,
+					 request_size,
+					 direction);
+		if (rc) {
+			pmcraid_err("failed to copy user buffer\n");
+			rc = -EFAULT;
+		}
+	}
+
+out_free_sglist:
+	pmcraid_release_passthrough_ioadls(cmd, request_size, direction);
+	pmcraid_return_cmd(cmd);
+
+out_free_buffer:
+	kfree(buffer);
+
+	return rc;
+}
+
+/**
+ * pmcraid_ioctl_driver - ioctl handler for commands handled by driver itself
+ *
+ * @pinstance: pointer to adapter instance structure
+ * @cmd: ioctl command passed in
+ * @buflen: length of user_buffer
+ * @user_buffer: user buffer pointer
+ *
+ * Return Value
+ *	0 in case of success, otherwise appropriate error code
+ */
+static long pmcraid_ioctl_driver(
+	struct pmcraid_instance *pinstance,
+	unsigned int cmd,
+	unsigned int buflen,
+	void __user *user_buffer
+)
+{
+	int rc = -ENOSYS;
+
+	if (!access_ok(VERIFY_READ, user_buffer, _IOC_SIZE(cmd))) {
+		pmcraid_err("ioctl_driver: access fault in request buffer\n");
+		return -EFAULT;
+	}
+
+	switch (cmd) {
+	case PMCRAID_IOCTL_RESET_ADAPTER:
+		pmcraid_reset_bringup(pinstance);
+		rc = 0;
+		break;
+
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+/**
+ * pmcraid_check_ioctl_buffer - check for proper access to user buffer
+ *
+ * @cmd: ioctl command
+ * @arg: user buffer
+ * @hdr: pointer to kernel memory for pmcraid_ioctl_header
+ *
+ * Return Value
+ *	negative error code if there are access issues, otherwise zero.
+ *	Upon success, the ioctl header is copied into *hdr.
+ */
+
+static int pmcraid_check_ioctl_buffer(
+	int cmd,
+	void __user *arg,
+	struct pmcraid_ioctl_header *hdr
+)
+{
+	int rc = 0;
+	int access = VERIFY_READ;
+
+	if (copy_from_user(hdr, arg, sizeof(struct pmcraid_ioctl_header))) {
+		pmcraid_err("couldn't copy ioctl header from user buffer\n");
+		return -EFAULT;
+	}
+
+	/* check for valid driver signature */
+	rc = memcmp(hdr->signature,
+		    PMCRAID_IOCTL_SIGNATURE,
+		    sizeof(hdr->signature));
+	if (rc) {
+		pmcraid_err("signature verification failed\n");
+		return -EINVAL;
+	}
+
+	/* buffer length can't be negative */
+	if (hdr->buffer_length < 0) {
+		pmcraid_err("ioctl: invalid buffer length specified\n");
+		return -EINVAL;
+	}
+
+	/* check for appropriate buffer access */
+	if ((_IOC_DIR(cmd) & _IOC_READ) == _IOC_READ)
+		access = VERIFY_WRITE;
+
+	rc = access_ok(access,
+		       (arg + sizeof(struct pmcraid_ioctl_header)),
+		       hdr->buffer_length);
+	if (!rc) {
+		pmcraid_err("access failed for user buffer of size %d\n",
+			    hdr->buffer_length);
+		return -EFAULT;
+	}
+
+	return 0;
+}
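
A userspace caller is expected to prefix its buffer with the header that pmcraid_check_ioctl_buffer() above validates. A hypothetical sketch of filling such a header; the mirrored layout and the signature string are assumptions for illustration, the real definitions belong to the driver's ioctl header file:

    #include <stdio.h>
    #include <string.h>

    /* assumed mirror of struct pmcraid_ioctl_header */
    struct ioctl_header {
	    char signature[8];
	    int buffer_length;
    };

    int main(void)
    {
	    struct ioctl_header hdr;

	    memset(&hdr, 0, sizeof(hdr));
	    memcpy(hdr.signature, "PMCRAID", 7); /* assumed signature */
	    hdr.buffer_length = 512;

	    /* the driver rejects a header whose signature doesn't match */
	    printf("valid: %d\n",
		   memcmp(hdr.signature, "PMCRAID", 7) == 0 &&
		   hdr.buffer_length >= 0);
	    return 0;
    }
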
+
+/**
+ * pmcraid_chr_ioctl - char node ioctl entry point
+ */
+static long pmcraid_chr_ioctl(
+	struct file *filep,
+	unsigned int cmd,
+	unsigned long arg
+)
+{
+	struct pmcraid_instance *pinstance = NULL;
+	struct pmcraid_ioctl_header *hdr = NULL;
+	int retval = -ENOTTY;
+
+	hdr = kmalloc(sizeof(struct pmcraid_ioctl_header), GFP_KERNEL);
+
+	if (!hdr) {
+		pmcraid_err("failed to allocate memory for ioctl header\n");
+		return -ENOMEM;
+	}
+
+	retval = pmcraid_check_ioctl_buffer(cmd, (void *)arg, hdr);
+
+	if (retval) {
+		pmcraid_info("chr_ioctl: header check failed\n");
+		kfree(hdr);
+		return retval;
+	}
+
+	pinstance = (struct pmcraid_instance *)filep->private_data;
+
+	if (!pinstance) {
+		pmcraid_info("adapter instance is not found\n");
+		kfree(hdr);
+		return -ENOTTY;
+	}
+
+	switch (_IOC_TYPE(cmd)) {
+
+	case PMCRAID_PASSTHROUGH_IOCTL:
+		/* If ioctl code is to download microcode, we need to block
+		 * mid-layer requests.
+		 */
+		if (cmd == PMCRAID_IOCTL_DOWNLOAD_MICROCODE)
+			scsi_block_requests(pinstance->host);
+
+		retval = pmcraid_ioctl_passthrough(pinstance,
+						   cmd,
+						   hdr->buffer_length,
+						   arg);
+
+		if (cmd == PMCRAID_IOCTL_DOWNLOAD_MICROCODE)
+			scsi_unblock_requests(pinstance->host);
+		break;
+
+	case PMCRAID_DRIVER_IOCTL:
+		arg += sizeof(struct pmcraid_ioctl_header);
+		retval = pmcraid_ioctl_driver(pinstance,
+					      cmd,
+					      hdr->buffer_length,
+					      (void __user *)arg);
+		break;
+
+	default:
+		retval = -ENOTTY;
+		break;
+	}
+
+	kfree(hdr);
+
+	return retval;
+}
+
+/**
+ * File operations structure for management interface
+ */
+static const struct file_operations pmcraid_fops = {
+	.owner = THIS_MODULE,
+	.open = pmcraid_chr_open,
+	.release = pmcraid_chr_release,
+	.fasync = pmcraid_chr_fasync,
+	.unlocked_ioctl = pmcraid_chr_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl = pmcraid_chr_ioctl,
+#endif
+};
+
+/**
+ * pmcraid_show_log_level - Display adapter's error logging level
+ * @dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ *	number of bytes printed to buffer
+ */
+static ssize_t pmcraid_show_log_level(
+	struct device *dev,
+	struct device_attribute *attr,
+	char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(dev);
+	struct pmcraid_instance *pinstance =
+		(struct pmcraid_instance *)shost->hostdata;
+	return snprintf(buf, PAGE_SIZE, "%d\n", pinstance->current_log_level);
+}
+
+/**
+ * pmcraid_store_log_level - Change the adapter's error logging level
+ * @dev: class device struct
+ * @buf: buffer
+ * @count: not used
+ *
+ * Return value:
+ *	number of bytes used from the buffer
+ */
+static ssize_t pmcraid_store_log_level(
+	struct device *dev,
+	struct device_attribute *attr,
+	const char *buf,
+	size_t count
+)
+{
+	struct Scsi_Host *shost;
+	struct pmcraid_instance *pinstance;
+	unsigned long val;
+
+	if (strict_strtoul(buf, 10, &val))
+		return -EINVAL;
+	/* log-level should be from 0 to 2 */
+	if (val > 2)
+		return -EINVAL;
+
+	shost = class_to_shost(dev);
+	pinstance = (struct pmcraid_instance *)shost->hostdata;
+	pinstance->current_log_level = val;
+
+	return strlen(buf);
+}
+
+static struct device_attribute pmcraid_log_level_attr = {
+	.attr = {
+		 .name = "log_level",
+		 .mode = S_IRUGO | S_IWUSR,
+		 },
+	.show = pmcraid_show_log_level,
+	.store = pmcraid_store_log_level,
+};
+
+/**
+ * pmcraid_show_drv_version - Display driver version
+ * @dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ *	number of bytes printed to buffer
+ */
+static ssize_t pmcraid_show_drv_version(
+	struct device *dev,
+	struct device_attribute *attr,
+	char *buf
+)
+{
+	return snprintf(buf, PAGE_SIZE, "version: %s, build date: %s\n",
+			PMCRAID_DRIVER_VERSION, PMCRAID_DRIVER_DATE);
+}
+
+static struct device_attribute pmcraid_driver_version_attr = {
+	.attr = {
+		 .name = "drv_version",
+		 .mode = S_IRUGO,
+		 },
+	.show = pmcraid_show_drv_version,
+};
+
+/**
+ * pmcraid_show_adapter_id - Display driver assigned adapter id
+ * @dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ *	number of bytes printed to buffer
+ */
+static ssize_t pmcraid_show_adapter_id(
+	struct device *dev,
+	struct device_attribute *attr,
+	char *buf
+)
+{
+	struct Scsi_Host *shost = class_to_shost(dev);
+	struct pmcraid_instance *pinstance =
+		(struct pmcraid_instance *)shost->hostdata;
+	u32 adapter_id = (pinstance->pdev->bus->number << 8) |
+		pinstance->pdev->devfn;
+	u32 aen_group = pmcraid_event_family.id;
+
+	return snprintf(buf, PAGE_SIZE,
+			"adapter id: %d\nminor: %d\naen group: %d\n",
+
adapter_id, MINOR(pinstance->cdev.dev), aen_group); +} + +static struct device_attribute pmcraid_adapter_id_attr = { + .attr = { + .name = "adapter_id", + .mode = S_IRUGO | S_IWUSR, + }, + .show = pmcraid_show_adapter_id, +}; + +static struct device_attribute *pmcraid_host_attrs[] = { + &pmcraid_log_level_attr, + &pmcraid_driver_version_attr, + &pmcraid_adapter_id_attr, + NULL, +}; + + +/* host template structure for pmcraid driver */ +static struct scsi_host_template pmcraid_host_template = { + .module = THIS_MODULE, + .name = PMCRAID_DRIVER_NAME, + .queuecommand = pmcraid_queuecommand, + .eh_abort_handler = pmcraid_eh_abort_handler, + .eh_bus_reset_handler = pmcraid_eh_bus_reset_handler, + .eh_target_reset_handler = pmcraid_eh_target_reset_handler, + .eh_device_reset_handler = pmcraid_eh_device_reset_handler, + .eh_host_reset_handler = pmcraid_eh_host_reset_handler, + + .slave_alloc = pmcraid_slave_alloc, + .slave_configure = pmcraid_slave_configure, + .slave_destroy = pmcraid_slave_destroy, + .change_queue_depth = pmcraid_change_queue_depth, + .change_queue_type = pmcraid_change_queue_type, + .can_queue = PMCRAID_MAX_IO_CMD, + .this_id = -1, + .sg_tablesize = PMCRAID_MAX_IOADLS, + .max_sectors = PMCRAID_IOA_MAX_SECTORS, + .cmd_per_lun = PMCRAID_MAX_CMD_PER_LUN, + .use_clustering = ENABLE_CLUSTERING, + .shost_attrs = pmcraid_host_attrs, + .proc_name = PMCRAID_DRIVER_NAME +}; + +/** + * pmcraid_isr_common - Common interrupt handler routine + * + * @pinstance: pointer to adapter instance + * @intrs: active interrupts (contents of ioa_host_interrupt register) + * @hrrq_id: Host RRQ index + * + * Return Value + * none + */ +static void pmcraid_isr_common( + struct pmcraid_instance *pinstance, + u32 intrs, + int hrrq_id +) +{ + u32 intrs_clear = + (intrs & INTRS_CRITICAL_OP_IN_PROGRESS) ? intrs + : INTRS_HRRQ_VALID; + iowrite32(intrs_clear, + pinstance->int_regs.ioa_host_interrupt_clr_reg); + intrs = ioread32(pinstance->int_regs.ioa_host_interrupt_reg); + + /* hrrq valid bit was set, schedule tasklet to handle the response */ + if (intrs_clear == INTRS_HRRQ_VALID) + tasklet_schedule(&(pinstance->isr_tasklet[hrrq_id])); +} + +/** + * pmcraid_isr - implements interrupt handling routine + * + * @irq: interrupt vector number + * @dev_id: pointer hrrq_vector + * + * Return Value + * IRQ_HANDLED if interrupt is handled or IRQ_NONE if ignored + */ +static irqreturn_t pmcraid_isr(int irq, void *dev_id) +{ + struct pmcraid_isr_param *hrrq_vector; + struct pmcraid_instance *pinstance; + unsigned long lock_flags; + u32 intrs; + + /* In case of legacy interrupt mode where interrupts are shared across + * isrs, it may be possible that the current interrupt is not from IOA + */ + if (!dev_id) { + printk(KERN_INFO "%s(): NULL host pointer\n", __func__); + return IRQ_NONE; + } + + hrrq_vector = (struct pmcraid_isr_param *)dev_id; + pinstance = hrrq_vector->drv_inst; + + /* Acquire the lock (currently host_lock) while processing interrupts. + * This interval is small as most of the response processing is done by + * tasklet without the lock. + */ + spin_lock_irqsave(pinstance->host->host_lock, lock_flags); + intrs = pmcraid_read_interrupts(pinstance); + + if (unlikely((intrs & PMCRAID_PCI_INTERRUPTS) == 0)) { + spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags); + return IRQ_NONE; + } + + /* Any error interrupts including unit_check, initiate IOA reset. 
+ * In case of unit check indicate to reset_sequence that IOA unit + * checked and prepare for a dump during reset sequence + */ + if (intrs & PMCRAID_ERROR_INTERRUPTS) { + + if (intrs & INTRS_IOA_UNIT_CHECK) + pinstance->ioa_unit_check = 1; + + iowrite32(intrs, + pinstance->int_regs.ioa_host_interrupt_clr_reg); + pmcraid_err("ISR: error interrupts: %x initiating reset\n", + intrs); + intrs = ioread32(pinstance->int_regs.ioa_host_interrupt_reg); + pmcraid_initiate_reset(pinstance); + } else { + pmcraid_isr_common(pinstance, intrs, hrrq_vector->hrrq_id); + } + + spin_unlock_irqrestore(pinstance->host->host_lock, lock_flags); + + return IRQ_HANDLED; +} + + +/** + * pmcraid_worker_function - worker thread function + * + * @workp: pointer to struct work queue + * + * Return Value + * None + */ + +static void pmcraid_worker_function(struct work_struct *workp) +{ + struct pmcraid_instance *pinstance; + struct pmcraid_resource_entry *res; + struct pmcraid_resource_entry *temp; + struct scsi_device *sdev; + unsigned long lock_flags; + unsigned long host_lock_flags; + u8 bus, target, lun; + + pinstance = container_of(workp, struct pmcraid_instance, worker_q); + /* add resources only after host is added into system */ + if (!atomic_read(&pinstance->expose_resources)) + return; + + spin_lock_irqsave(&pinstance->resource_lock, lock_flags); + list_for_each_entry_safe(res, temp, &pinstance->used_res_q, queue) { + + if (res->change_detected == RES_CHANGE_DEL && res->scsi_dev) { + sdev = res->scsi_dev; + + /* host_lock must be held before calling + * scsi_device_get + */ + spin_lock_irqsave(pinstance->host->host_lock, + host_lock_flags); + if (!scsi_device_get(sdev)) { + spin_unlock_irqrestore( + pinstance->host->host_lock, + host_lock_flags); + pmcraid_info("deleting %x from midlayer\n", + res->cfg_entry.resource_address); + list_move_tail(&res->queue, + &pinstance->free_res_q); + spin_unlock_irqrestore( + &pinstance->resource_lock, + lock_flags); + scsi_remove_device(sdev); + scsi_device_put(sdev); + spin_lock_irqsave(&pinstance->resource_lock, + lock_flags); + res->change_detected = 0; + } else { + spin_unlock_irqrestore( + pinstance->host->host_lock, + host_lock_flags); + } + } + } + + list_for_each_entry(res, &pinstance->used_res_q, queue) { + + if (res->change_detected == RES_CHANGE_ADD) { + + if (!pmcraid_expose_resource(&res->cfg_entry)) + continue; + + if (RES_IS_VSET(res->cfg_entry)) { + bus = PMCRAID_VSET_BUS_ID; + target = res->cfg_entry.unique_flags1; + lun = PMCRAID_VSET_LUN_ID; + } else { + bus = PMCRAID_PHYS_BUS_ID; + target = + RES_TARGET( + res->cfg_entry.resource_address); + lun = RES_LUN(res->cfg_entry.resource_address); + } + + res->change_detected = 0; + spin_unlock_irqrestore(&pinstance->resource_lock, + lock_flags); + scsi_add_device(pinstance->host, bus, target, lun); + spin_lock_irqsave(&pinstance->resource_lock, + lock_flags); + } + } + + spin_unlock_irqrestore(&pinstance->resource_lock, lock_flags); +} + +/** + * pmcraid_tasklet_function - Tasklet function + * + * @instance: pointer to msix param structure + * + * Return Value + * None + */ +void pmcraid_tasklet_function(unsigned long instance) +{ + struct pmcraid_isr_param *hrrq_vector; + struct pmcraid_instance *pinstance; + unsigned long hrrq_lock_flags; + unsigned long pending_lock_flags; + unsigned long host_lock_flags; + spinlock_t *lockp; /* hrrq buffer lock */ + int id; + u32 intrs; + __le32 resp; + + hrrq_vector = (struct pmcraid_isr_param *)instance; + pinstance = hrrq_vector->drv_inst; + id = hrrq_vector->hrrq_id; + 
lockp = &(pinstance->hrrq_lock[id]); + intrs = pmcraid_read_interrupts(pinstance); + + /* If the interrupt was raised as part of IOA initialization, clear and + * mask it. Delete the timer and wake up the reset engine to proceed + * with the reset sequence + */ + if (intrs & INTRS_TRANSITION_TO_OPERATIONAL) { + iowrite32(INTRS_TRANSITION_TO_OPERATIONAL, + pinstance->int_regs.ioa_host_interrupt_mask_reg); + iowrite32(INTRS_TRANSITION_TO_OPERATIONAL, + pinstance->int_regs.ioa_host_interrupt_clr_reg); + + if (pinstance->reset_cmd != NULL) { + del_timer(&pinstance->reset_cmd->timer); + spin_lock_irqsave(pinstance->host->host_lock, + host_lock_flags); + pinstance->reset_cmd->cmd_done(pinstance->reset_cmd); + spin_unlock_irqrestore(pinstance->host->host_lock, + host_lock_flags); + } + return; + } + + /* loop through each of the commands responded to by the IOA. Each HRRQ + * buf is protected by its own lock. Traversals must be done within this + * lock as there may be multiple tasklets running on multiple CPUs. Note + * that the lock is held just for picking up the response handle and + * manipulating hrrq_curr/toggle_bit values. + */ + spin_lock_irqsave(lockp, hrrq_lock_flags); + + resp = le32_to_cpu(*(pinstance->hrrq_curr[id])); + + while ((resp & HRRQ_TOGGLE_BIT) == + pinstance->host_toggle_bit[id]) { + + int cmd_index = resp >> 2; + struct pmcraid_cmd *cmd = NULL; + + if (cmd_index < PMCRAID_MAX_CMD) { + cmd = pinstance->cmd_list[cmd_index]; + } else { + /* In case of invalid response handle, initiate IOA + * reset sequence. + */ + spin_unlock_irqrestore(lockp, hrrq_lock_flags); + + pmcraid_err("Invalid response %d initiating reset\n", + cmd_index); + + spin_lock_irqsave(pinstance->host->host_lock, + host_lock_flags); + pmcraid_initiate_reset(pinstance); + spin_unlock_irqrestore(pinstance->host->host_lock, + host_lock_flags); + + spin_lock_irqsave(lockp, hrrq_lock_flags); + break; + } + + if (pinstance->hrrq_curr[id] < pinstance->hrrq_end[id]) { + pinstance->hrrq_curr[id]++; + } else { + pinstance->hrrq_curr[id] = pinstance->hrrq_start[id]; + pinstance->host_toggle_bit[id] ^= 1u; + } + + spin_unlock_irqrestore(lockp, hrrq_lock_flags); + + spin_lock_irqsave(&pinstance->pending_pool_lock, + pending_lock_flags); + list_del(&cmd->free_list); + spin_unlock_irqrestore(&pinstance->pending_pool_lock, + pending_lock_flags); + del_timer(&cmd->timer); + atomic_dec(&pinstance->outstanding_cmds); + + if (cmd->cmd_done == pmcraid_ioa_reset) { + spin_lock_irqsave(pinstance->host->host_lock, + host_lock_flags); + cmd->cmd_done(cmd); + spin_unlock_irqrestore(pinstance->host->host_lock, + host_lock_flags); + } else if (cmd->cmd_done != NULL) { + cmd->cmd_done(cmd); + } + /* loop until we are done with all responses */ + spin_lock_irqsave(lockp, hrrq_lock_flags); + resp = le32_to_cpu(*(pinstance->hrrq_curr[id])); + } + + spin_unlock_irqrestore(lockp, hrrq_lock_flags); +} + +/** + * pmcraid_unregister_interrupt_handler - de-registers interrupt handler + * @pinstance: pointer to adapter instance structure + * + * This routine un-registers the registered interrupt handler and + * also frees irqs/vectors. + * + * Return Value + * None + */ +static +void pmcraid_unregister_interrupt_handler(struct pmcraid_instance *pinstance) +{ + free_irq(pinstance->pdev->irq, &(pinstance->hrrq_vector[0])); +} + +/** + * pmcraid_register_interrupt_handler - registers interrupt handler + * @pinstance: pointer to per-adapter instance structure + * + * Return Value + * 0 on success, non-zero error code otherwise.
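+ * + * Note: &pinstance->hrrq_vector[0] is the dev_id cookie passed to + * request_irq() here; pmcraid_unregister_interrupt_handler() passes the + * same cookie to free_irq() when tearing the handler down.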
+ */ +static int +pmcraid_register_interrupt_handler(struct pmcraid_instance *pinstance) +{ + struct pci_dev *pdev = pinstance->pdev; + + pinstance->hrrq_vector[0].hrrq_id = 0; + pinstance->hrrq_vector[0].drv_inst = pinstance; + pinstance->hrrq_vector[0].vector = 0; + pinstance->num_hrrq = 1; + return request_irq(pdev->irq, pmcraid_isr, IRQF_SHARED, + PMCRAID_DRIVER_NAME, &pinstance->hrrq_vector[0]); +} + +/** + * pmcraid_release_cmd_blocks - release buffers allocated for command blocks + * @pinstance: per adapter instance structure pointer + * @max_index: number of buffer blocks to release + * + * Return Value + * None + */ +static void +pmcraid_release_cmd_blocks(struct pmcraid_instance *pinstance, int max_index) +{ + int i; + for (i = 0; i < max_index; i++) { + kmem_cache_free(pinstance->cmd_cachep, pinstance->cmd_list[i]); + pinstance->cmd_list[i] = NULL; + } + kmem_cache_destroy(pinstance->cmd_cachep); + pinstance->cmd_cachep = NULL; +} + +/** + * pmcraid_release_control_blocks - releases buffers allocated for control blocks + * @pinstance: pointer to per adapter instance structure + * @max_index: number of buffers (from 0 onwards) to release + * + * This function assumes that the command blocks for which control blocks are + * linked are not released. + * + * Return Value + * None + */ +static void +pmcraid_release_control_blocks( + struct pmcraid_instance *pinstance, + int max_index +) +{ + int i; + + if (pinstance->control_pool == NULL) + return; + + for (i = 0; i < max_index; i++) { + pci_pool_free(pinstance->control_pool, + pinstance->cmd_list[i]->ioa_cb, + pinstance->cmd_list[i]->ioa_cb_bus_addr); + pinstance->cmd_list[i]->ioa_cb = NULL; + pinstance->cmd_list[i]->ioa_cb_bus_addr = 0; + } + pci_pool_destroy(pinstance->control_pool); + pinstance->control_pool = NULL; +} + +/** + * pmcraid_allocate_cmd_blocks - allocate memory for cmd block structures + * @pinstance: pointer to per adapter instance structure + * + * Allocates memory for command blocks using kernel slab allocator. + * + * Return Value + * 0 in case of success; -ENOMEM in case of failure + */ +static int __devinit +pmcraid_allocate_cmd_blocks(struct pmcraid_instance *pinstance) +{ + int i; + + sprintf(pinstance->cmd_pool_name, "pmcraid_cmd_pool_%d", + pinstance->host->unique_id); + + pinstance->cmd_cachep = kmem_cache_create( + pinstance->cmd_pool_name, + sizeof(struct pmcraid_cmd), 0, + SLAB_HWCACHE_ALIGN, NULL); + if (!pinstance->cmd_cachep) + return -ENOMEM; + + for (i = 0; i < PMCRAID_MAX_CMD; i++) { + pinstance->cmd_list[i] = + kmem_cache_alloc(pinstance->cmd_cachep, GFP_KERNEL); + if (!pinstance->cmd_list[i]) { + pmcraid_release_cmd_blocks(pinstance, i); + return -ENOMEM; + } + } + return 0; +} + +/** + * pmcraid_allocate_control_blocks - allocates memory for control blocks + * @pinstance: pointer to per adapter instance structure + * + * This function allocates PCI memory for DMAable buffers like IOARCB, IOADLs + * and IOASAs. This is called after command blocks are already allocated.
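+ * + * The pool is created with PMCRAID_IOARCB_ALIGNMENT (32-byte) alignment, + * which is what lets pmcraid_querycfg() clear the low five bits of + * ioarcb_bus_addr (the ~0x1FULL mask) without discarding address bits.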
+ * + * Return Value + * 0 in case it can allocate all control blocks, otherwise -ENOMEM + */ +static int __devinit +pmcraid_allocate_control_blocks(struct pmcraid_instance *pinstance) +{ + int i; + + sprintf(pinstance->ctl_pool_name, "pmcraid_control_pool_%d", + pinstance->host->unique_id); + + pinstance->control_pool = + pci_pool_create(pinstance->ctl_pool_name, + pinstance->pdev, + sizeof(struct pmcraid_control_block), + PMCRAID_IOARCB_ALIGNMENT, 0); + + if (!pinstance->control_pool) + return -ENOMEM; + + for (i = 0; i < PMCRAID_MAX_CMD; i++) { + pinstance->cmd_list[i]->ioa_cb = + pci_pool_alloc( + pinstance->control_pool, + GFP_KERNEL, + &(pinstance->cmd_list[i]->ioa_cb_bus_addr)); + + if (!pinstance->cmd_list[i]->ioa_cb) { + pmcraid_release_control_blocks(pinstance, i); + return -ENOMEM; + } + memset(pinstance->cmd_list[i]->ioa_cb, 0, + sizeof(struct pmcraid_control_block)); + } + return 0; +} + +/** + * pmcraid_release_host_rrqs - release memory allocated for hrrq buffer(s) + * @pinstance: pointer to per adapter instance structure + * @maxindex: size of hrrq buffer pointer array + * + * Return Value + * None + */ +static void +pmcraid_release_host_rrqs(struct pmcraid_instance *pinstance, int maxindex) +{ + int i; + for (i = 0; i < maxindex; i++) { + + pci_free_consistent(pinstance->pdev, + HRRQ_ENTRY_SIZE * PMCRAID_MAX_CMD, + pinstance->hrrq_start[i], + pinstance->hrrq_start_bus_addr[i]); + + /* reset pointers and toggle bit to zeros */ + pinstance->hrrq_start[i] = NULL; + pinstance->hrrq_start_bus_addr[i] = 0; + pinstance->host_toggle_bit[i] = 0; + } +} + +/** + * pmcraid_allocate_host_rrqs - Allocate and initialize host RRQ buffers + * @pinstance: pointer to per adapter instance structure + * + * Return value + * 0 hrrq buffers are allocated, -ENOMEM otherwise. 
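+ * + * With num_hrrq == 1 (the only configuration set up by + * pmcraid_register_interrupt_handler), each buffer holds PMCRAID_MAX_CMD + * (1024) response slots of HRRQ_ENTRY_SIZE (sizeof(__le32)) bytes each, + * i.e. a single 4096-byte coherent allocation.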
+ */ +static int __devinit +pmcraid_allocate_host_rrqs(struct pmcraid_instance *pinstance) +{ + int i; + int buf_count = PMCRAID_MAX_CMD / pinstance->num_hrrq; + + for (i = 0; i < pinstance->num_hrrq; i++) { + int buffer_size = HRRQ_ENTRY_SIZE * buf_count; + + pinstance->hrrq_start[i] = + pci_alloc_consistent( + pinstance->pdev, + buffer_size, + &(pinstance->hrrq_start_bus_addr[i])); + + if (pinstance->hrrq_start[i] == 0) { + pmcraid_err("could not allocate host rrq: %d\n", i); + pmcraid_release_host_rrqs(pinstance, i); + return -ENOMEM; + } + + memset(pinstance->hrrq_start[i], 0, buffer_size); + pinstance->hrrq_curr[i] = pinstance->hrrq_start[i]; + pinstance->hrrq_end[i] = + pinstance->hrrq_start[i] + buf_count - 1; + pinstance->host_toggle_bit[i] = 1; + spin_lock_init(&pinstance->hrrq_lock[i]); + } + return 0; +} + +/** + * pmcraid_release_hcams - release HCAM buffers + * + * @pinstance: pointer to per adapter instance structure + * + * Return value + * none + */ +static void pmcraid_release_hcams(struct pmcraid_instance *pinstance) +{ + if (pinstance->ccn.msg != NULL) { + pci_free_consistent(pinstance->pdev, + PMCRAID_AEN_HDR_SIZE + + sizeof(struct pmcraid_hcam_ccn), + pinstance->ccn.msg, + pinstance->ccn.baddr); + + pinstance->ccn.msg = NULL; + pinstance->ccn.hcam = NULL; + pinstance->ccn.baddr = 0; + } + + if (pinstance->ldn.msg != NULL) { + pci_free_consistent(pinstance->pdev, + PMCRAID_AEN_HDR_SIZE + + sizeof(struct pmcraid_hcam_ldn), + pinstance->ldn.msg, + pinstance->ldn.baddr); + + pinstance->ldn.msg = NULL; + pinstance->ldn.hcam = NULL; + pinstance->ldn.baddr = 0; + } +} + +/** + * pmcraid_allocate_hcams - allocates HCAM buffers + * @pinstance : pointer to per adapter instance structure + * + * Return Value: + * 0 in case of successful allocation, non-zero otherwise + */ +static int pmcraid_allocate_hcams(struct pmcraid_instance *pinstance) +{ + pinstance->ccn.msg = pci_alloc_consistent( + pinstance->pdev, + PMCRAID_AEN_HDR_SIZE + + sizeof(struct pmcraid_hcam_ccn), + &(pinstance->ccn.baddr)); + + pinstance->ldn.msg = pci_alloc_consistent( + pinstance->pdev, + PMCRAID_AEN_HDR_SIZE + + sizeof(struct pmcraid_hcam_ldn), + &(pinstance->ldn.baddr)); + + if (pinstance->ldn.msg == NULL || pinstance->ccn.msg == NULL) { + pmcraid_release_hcams(pinstance); + } else { + pinstance->ccn.hcam = + (void *)pinstance->ccn.msg + PMCRAID_AEN_HDR_SIZE; + pinstance->ldn.hcam = + (void *)pinstance->ldn.msg + PMCRAID_AEN_HDR_SIZE; + + atomic_set(&pinstance->ccn.ignore, 0); + atomic_set(&pinstance->ldn.ignore, 0); + } + + return (pinstance->ldn.msg == NULL) ? 
-ENOMEM : 0; +} + +/** + * pmcraid_release_config_buffers - release config.table buffers + * @pinstance: pointer to per adapter instance structure + * + * Return Value + * none + */ +static void pmcraid_release_config_buffers(struct pmcraid_instance *pinstance) +{ + if (pinstance->cfg_table != NULL && + pinstance->cfg_table_bus_addr != 0) { + pci_free_consistent(pinstance->pdev, + sizeof(struct pmcraid_config_table), + pinstance->cfg_table, + pinstance->cfg_table_bus_addr); + pinstance->cfg_table = NULL; + pinstance->cfg_table_bus_addr = 0; + } + + if (pinstance->res_entries != NULL) { + int i; + + for (i = 0; i < PMCRAID_MAX_RESOURCES; i++) + list_del(&pinstance->res_entries[i].queue); + kfree(pinstance->res_entries); + pinstance->res_entries = NULL; + } + + pmcraid_release_hcams(pinstance); +} + +/** + * pmcraid_allocate_config_buffers - allocates DMAable memory for config table + * @pinstance : pointer to per adapter instance structure + * + * Return Value + * 0 for successful allocation, -ENOMEM for any failure + */ +static int __devinit +pmcraid_allocate_config_buffers(struct pmcraid_instance *pinstance) +{ + int i; + + pinstance->res_entries = + kzalloc(sizeof(struct pmcraid_resource_entry) * + PMCRAID_MAX_RESOURCES, GFP_KERNEL); + + if (NULL == pinstance->res_entries) { + pmcraid_err("failed to allocate memory for resource table\n"); + return -ENOMEM; + } + + for (i = 0; i < PMCRAID_MAX_RESOURCES; i++) + list_add_tail(&pinstance->res_entries[i].queue, + &pinstance->free_res_q); + + pinstance->cfg_table = + pci_alloc_consistent(pinstance->pdev, + sizeof(struct pmcraid_config_table), + &pinstance->cfg_table_bus_addr); + + if (NULL == pinstance->cfg_table) { + pmcraid_err("couldn't alloc DMA memory for config table\n"); + pmcraid_release_config_buffers(pinstance); + return -ENOMEM; + } + + if (pmcraid_allocate_hcams(pinstance)) { + pmcraid_err("could not alloc DMA memory for HCAMS\n"); + pmcraid_release_config_buffers(pinstance); + return -ENOMEM; + } + + return 0; +} + +/** + * pmcraid_init_tasklets - registers tasklets for response handling + * + * @pinstance: pointer adapter instance structure + * + * Return value + * none + */ +static void pmcraid_init_tasklets(struct pmcraid_instance *pinstance) +{ + int i; + for (i = 0; i < pinstance->num_hrrq; i++) + tasklet_init(&pinstance->isr_tasklet[i], + pmcraid_tasklet_function, + (unsigned long)&pinstance->hrrq_vector[i]); +} + +/** + * pmcraid_kill_tasklets - destroys tasklets registered for response handling + * + * @pinstance: pointer to adapter instance structure + * + * Return value + * none + */ +static void pmcraid_kill_tasklets(struct pmcraid_instance *pinstance) +{ + int i; + for (i = 0; i < pinstance->num_hrrq; i++) + tasklet_kill(&pinstance->isr_tasklet[i]); +} + +/** + * pmcraid_init_buffers - allocates memory and initializes various structures + * @pinstance: pointer to per adapter instance structure + * + * This routine pre-allocates memory based on the type of block as below: + * cmdblocks(PMCRAID_MAX_CMD): kernel memory using kernel's slab_allocator, + * IOARCBs(PMCRAID_MAX_CMD) : DMAable memory, using pci pool allocator + * config-table entries : DMAable memory using pci_alloc_consistent + * HostRRQs : DMAable memory, using pci_alloc_consistent + * + * Return Value + * 0 in case all of the blocks are allocated, -ENOMEM otherwise. 
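+ * + * On a partial failure, blocks allocated so far are released again in + * reverse order before -ENOMEM is returned, so callers never see a + * half-initialized instance.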
+ */ +static int __devinit pmcraid_init_buffers(struct pmcraid_instance *pinstance) +{ + int i; + + if (pmcraid_allocate_host_rrqs(pinstance)) { + pmcraid_err("couldn't allocate memory for %d host rrqs\n", + pinstance->num_hrrq); + return -ENOMEM; + } + + if (pmcraid_allocate_config_buffers(pinstance)) { + pmcraid_err("couldn't allocate memory for config buffers\n"); + pmcraid_release_host_rrqs(pinstance, pinstance->num_hrrq); + return -ENOMEM; + } + + if (pmcraid_allocate_cmd_blocks(pinstance)) { + pmcraid_err("couldn't allocate memory for cmd blocks \n"); + pmcraid_release_config_buffers(pinstance); + pmcraid_release_host_rrqs(pinstance, pinstance->num_hrrq); + return -ENOMEM; + } + + if (pmcraid_allocate_control_blocks(pinstance)) { + pmcraid_err("couldn't allocate memory control blocks \n"); + pmcraid_release_config_buffers(pinstance); + pmcraid_release_cmd_blocks(pinstance, PMCRAID_MAX_CMD); + pmcraid_release_host_rrqs(pinstance, pinstance->num_hrrq); + return -ENOMEM; + } + + /* Initialize all the command blocks and add them to free pool. No + * need to lock (free_pool_lock) as this is done in initialization + * itself + */ + for (i = 0; i < PMCRAID_MAX_CMD; i++) { + struct pmcraid_cmd *cmdp = pinstance->cmd_list[i]; + pmcraid_init_cmdblk(cmdp, i); + cmdp->drv_inst = pinstance; + list_add_tail(&cmdp->free_list, &pinstance->free_cmd_pool); + } + + return 0; +} + +/** + * pmcraid_reinit_buffers - resets various buffer pointers + * @pinstance: pointer to adapter instance + * Return value + * none + */ +static void pmcraid_reinit_buffers(struct pmcraid_instance *pinstance) +{ + int i; + int buffer_size = HRRQ_ENTRY_SIZE * PMCRAID_MAX_CMD; + + for (i = 0; i < pinstance->num_hrrq; i++) { + memset(pinstance->hrrq_start[i], 0, buffer_size); + pinstance->hrrq_curr[i] = pinstance->hrrq_start[i]; + pinstance->hrrq_end[i] = + pinstance->hrrq_start[i] + PMCRAID_MAX_CMD - 1; + pinstance->host_toggle_bit[i] = 1; + } +} + +/** + * pmcraid_init_instance - initialize per instance data structure + * @pdev: pointer to pci device structure + * @host: pointer to Scsi_Host structure + * @mapped_pci_addr: memory mapped IOA configuration registers + * + * Return Value + * 0 on success, non-zero in case of any failure + */ +static int __devinit pmcraid_init_instance( + struct pci_dev *pdev, + struct Scsi_Host *host, + void __iomem *mapped_pci_addr +) +{ + struct pmcraid_instance *pinstance = + (struct pmcraid_instance *)host->hostdata; + + pinstance->host = host; + pinstance->pdev = pdev; + + /* Initialize register addresses */ + pinstance->mapped_dma_addr = mapped_pci_addr; + + /* Initialize chip-specific details */ + { + struct pmcraid_chip_details *chip_cfg = pinstance->chip_cfg; + struct pmcraid_interrupts *pint_regs = &pinstance->int_regs; + + pinstance->ioarrin = mapped_pci_addr + chip_cfg->ioarrin; + + pint_regs->ioa_host_interrupt_reg = + mapped_pci_addr + chip_cfg->ioa_host_intr; + pint_regs->ioa_host_interrupt_clr_reg = + mapped_pci_addr + chip_cfg->ioa_host_intr_clr; + pint_regs->host_ioa_interrupt_reg = + mapped_pci_addr + chip_cfg->host_ioa_intr; + pint_regs->host_ioa_interrupt_clr_reg = + mapped_pci_addr + chip_cfg->host_ioa_intr_clr; + + /* Current version of firmware exposes interrupt mask set + * and mask clr registers through memory mapped bar0. 
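+ * Every register pointer set up in this block is simply mapped_pci_addr + * plus the chip-specific byte offset recorded in pmcraid_chip_details.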
+ */ + pinstance->mailbox = mapped_pci_addr + chip_cfg->mailbox; + pinstance->ioa_status = mapped_pci_addr + chip_cfg->ioastatus; + pint_regs->ioa_host_interrupt_mask_reg = + mapped_pci_addr + chip_cfg->ioa_host_mask; + pint_regs->ioa_host_interrupt_mask_clr_reg = + mapped_pci_addr + chip_cfg->ioa_host_mask_clr; + pint_regs->global_interrupt_mask_reg = + mapped_pci_addr + chip_cfg->global_intr_mask; + }; + + pinstance->ioa_reset_attempts = 0; + init_waitqueue_head(&pinstance->reset_wait_q); + + atomic_set(&pinstance->outstanding_cmds, 0); + atomic_set(&pinstance->expose_resources, 0); + + INIT_LIST_HEAD(&pinstance->free_res_q); + INIT_LIST_HEAD(&pinstance->used_res_q); + INIT_LIST_HEAD(&pinstance->free_cmd_pool); + INIT_LIST_HEAD(&pinstance->pending_cmd_pool); + + spin_lock_init(&pinstance->free_pool_lock); + spin_lock_init(&pinstance->pending_pool_lock); + spin_lock_init(&pinstance->resource_lock); + mutex_init(&pinstance->aen_queue_lock); + + /* Work-queue (Shared) for deferred processing error handling */ + INIT_WORK(&pinstance->worker_q, pmcraid_worker_function); + + /* Initialize the default log_level */ + pinstance->current_log_level = pmcraid_log_level; + + /* Setup variables required for reset engine */ + pinstance->ioa_state = IOA_STATE_UNKNOWN; + pinstance->reset_cmd = NULL; + return 0; +} + +/** + * pmcraid_release_buffers - release per-adapter buffers allocated + * + * @pinstance: pointer to adapter soft state + * + * Return Value + * none + */ +static void pmcraid_release_buffers(struct pmcraid_instance *pinstance) +{ + pmcraid_release_config_buffers(pinstance); + pmcraid_release_control_blocks(pinstance, PMCRAID_MAX_CMD); + pmcraid_release_cmd_blocks(pinstance, PMCRAID_MAX_CMD); + pmcraid_release_host_rrqs(pinstance, pinstance->num_hrrq); + +} + +/** + * pmcraid_shutdown - shutdown adapter controller. 
+ * @pdev: pci device struct + * + * Issues an adapter shutdown to the card waits for its completion + * + * Return value + * none + */ +static void pmcraid_shutdown(struct pci_dev *pdev) +{ + struct pmcraid_instance *pinstance = pci_get_drvdata(pdev); + pmcraid_reset_bringdown(pinstance); +} + + +/** + * pmcraid_get_minor - returns unused minor number from minor number bitmap + */ +static unsigned short pmcraid_get_minor(void) +{ + int minor; + + minor = find_first_zero_bit(pmcraid_minor, sizeof(pmcraid_minor)); + __set_bit(minor, pmcraid_minor); + return minor; +} + +/** + * pmcraid_release_minor - releases given minor back to minor number bitmap + */ +static void pmcraid_release_minor(unsigned short minor) +{ + __clear_bit(minor, pmcraid_minor); +} + +/** + * pmcraid_setup_chrdev - allocates a minor number and registers a char device + * + * @pinstance: pointer to adapter instance for which to register device + * + * Return value + * 0 in case of success, otherwise non-zero + */ +static int pmcraid_setup_chrdev(struct pmcraid_instance *pinstance) +{ + int minor; + int error; + + minor = pmcraid_get_minor(); + cdev_init(&pinstance->cdev, &pmcraid_fops); + pinstance->cdev.owner = THIS_MODULE; + + error = cdev_add(&pinstance->cdev, MKDEV(pmcraid_major, minor), 1); + + if (error) + pmcraid_release_minor(minor); + else + device_create(pmcraid_class, NULL, MKDEV(pmcraid_major, minor), + NULL, "pmcsas%u", minor); + return error; +} + +/** + * pmcraid_release_chrdev - unregisters per-adapter management interface + * + * @pinstance: pointer to adapter instance structure + * + * Return value + * none + */ +static void pmcraid_release_chrdev(struct pmcraid_instance *pinstance) +{ + pmcraid_release_minor(MINOR(pinstance->cdev.dev)); + device_destroy(pmcraid_class, + MKDEV(pmcraid_major, MINOR(pinstance->cdev.dev))); + cdev_del(&pinstance->cdev); +} + +/** + * pmcraid_remove - IOA hot plug remove entry point + * @pdev: pci device struct + * + * Return value + * none + */ +static void __devexit pmcraid_remove(struct pci_dev *pdev) +{ + struct pmcraid_instance *pinstance = pci_get_drvdata(pdev); + + /* remove the management interface (/dev file) for this device */ + pmcraid_release_chrdev(pinstance); + + /* remove host template from scsi midlayer */ + scsi_remove_host(pinstance->host); + + /* block requests from mid-layer */ + scsi_block_requests(pinstance->host); + + /* initiate shutdown adapter */ + pmcraid_shutdown(pdev); + + pmcraid_disable_interrupts(pinstance, ~0); + flush_scheduled_work(); + + pmcraid_kill_tasklets(pinstance); + pmcraid_unregister_interrupt_handler(pinstance); + pmcraid_release_buffers(pinstance); + iounmap(pinstance->mapped_dma_addr); + pci_release_regions(pdev); + scsi_host_put(pinstance->host); + pci_disable_device(pdev); + + return; +} + +#ifdef CONFIG_PM +/** + * pmcraid_suspend - driver suspend entry point for power management + * @pdev: PCI device structure + * @state: PCI power state to suspend routine + * + * Return Value - 0 always + */ +static int pmcraid_suspend(struct pci_dev *pdev, pm_message_t state) +{ + struct pmcraid_instance *pinstance = pci_get_drvdata(pdev); + + pmcraid_shutdown(pdev); + pmcraid_disable_interrupts(pinstance, ~0); + pmcraid_kill_tasklets(pinstance); + pci_set_drvdata(pinstance->pdev, pinstance); + pmcraid_unregister_interrupt_handler(pinstance); + pci_save_state(pdev); + pci_disable_device(pdev); + pci_set_power_state(pdev, pci_choose_state(pdev, state)); + + return 0; +} + +/** + * pmcraid_resume - driver resume entry point PCI power 
management + * @pdev: PCI device structure + * + * Return Value - 0 in case of success. Error code in case of any failure + */ +static int pmcraid_resume(struct pci_dev *pdev) +{ + struct pmcraid_instance *pinstance = pci_get_drvdata(pdev); + struct Scsi_Host *host = pinstance->host; + int rc; + int hrrqs; + + pci_set_power_state(pdev, PCI_D0); + pci_enable_wake(pdev, PCI_D0, 0); + pci_restore_state(pdev); + + rc = pci_enable_device(pdev); + + if (rc) { + pmcraid_err("pmcraid: Enable device failed\n"); + return rc; + } + + pci_set_master(pdev); + + if ((sizeof(dma_addr_t) == 4) || + pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + + if (rc == 0) + rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); + + if (rc != 0) { + dev_err(&pdev->dev, "Failed to set PCI DMA mask\n"); + goto disable_device; + } + + atomic_set(&pinstance->outstanding_cmds, 0); + hrrqs = pinstance->num_hrrq; + rc = pmcraid_register_interrupt_handler(pinstance); + + if (rc) { + pmcraid_err("resume: couldn't register interrupt handlers\n"); + rc = -ENODEV; + goto release_host; + } + + pmcraid_init_tasklets(pinstance); + pmcraid_enable_interrupts(pinstance, PMCRAID_PCI_INTERRUPTS); + + /* Start with hard reset sequence which brings up IOA to operational + * state as well as completes the reset sequence. + */ + pinstance->ioa_hard_reset = 1; + + /* Start IOA firmware initialization and bring card to Operational + * state. + */ + if (pmcraid_reset_bringup(pinstance)) { + pmcraid_err("couldn't initialize IOA \n"); + rc = -ENODEV; + goto release_tasklets; + } + + return 0; + +release_tasklets: + pmcraid_kill_tasklets(pinstance); + pmcraid_unregister_interrupt_handler(pinstance); + +release_host: + scsi_host_put(host); + +disable_device: + pci_disable_device(pdev); + + return rc; +} + +#else + +#define pmcraid_suspend NULL +#define pmcraid_resume NULL + +#endif /* CONFIG_PM */ + +/** + * pmcraid_complete_ioa_reset - Called by either timer or tasklet during + * completion of the ioa reset + * @cmd: pointer to reset command block + */ +static void pmcraid_complete_ioa_reset(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + unsigned long flags; + + spin_lock_irqsave(pinstance->host->host_lock, flags); + pmcraid_ioa_reset(cmd); + spin_unlock_irqrestore(pinstance->host->host_lock, flags); + scsi_unblock_requests(pinstance->host); + schedule_work(&pinstance->worker_q); +} + +/** + * pmcraid_set_supported_devs - sends SET SUPPORTED DEVICES to IOAFP + * + * @cmd: pointer to pmcraid_cmd structure + * + * Return Value + * 0 for success or non-zero for failure cases + */ +static void pmcraid_set_supported_devs(struct pmcraid_cmd *cmd) +{ + struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; + void (*cmd_done) (struct pmcraid_cmd *) = pmcraid_complete_ioa_reset; + + pmcraid_reinit_cmdblk(cmd); + + ioarcb->resource_handle = cpu_to_le32(PMCRAID_IOA_RES_HANDLE); + ioarcb->request_type = REQ_TYPE_IOACMD; + ioarcb->cdb[0] = PMCRAID_SET_SUPPORTED_DEVICES; + ioarcb->cdb[1] = ALL_DEVICES_SUPPORTED; + + /* If this was called as part of resource table reinitialization due to + * lost CCN, it is enough to return the command block back to free pool + * as part of set_supported_devs completion function. 
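+ * Otherwise cmd_done stays pmcraid_complete_ioa_reset, which finishes + * the reset sequence and unblocks the SCSI mid-layer.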
+ */ + if (cmd->drv_inst->reinit_cfg_table) { + cmd->drv_inst->reinit_cfg_table = 0; + cmd->release = 1; + cmd_done = pmcraid_reinit_cfgtable_done; + } + + /* we will be done with the reset sequence after set supported devices, + * setup the done function to return the command block back to free + * pool + */ + pmcraid_send_cmd(cmd, + cmd_done, + PMCRAID_SET_SUP_DEV_TIMEOUT, + pmcraid_timeout_handler); + return; +} + +/** + * pmcraid_init_res_table - Initialize the resource table + * @cmd: pointer to pmcraid command struct + * + * This function looks through the existing resource table, comparing + * it with the config table. This function will take care of old/new + * devices and schedule adding/removing them from the mid-layer + * as appropriate. + * + * Return value + * None + */ +static void pmcraid_init_res_table(struct pmcraid_cmd *cmd) +{ + struct pmcraid_instance *pinstance = cmd->drv_inst; + struct pmcraid_resource_entry *res, *temp; + struct pmcraid_config_table_entry *cfgte; + unsigned long lock_flags; + int found, rc, i; + LIST_HEAD(old_res); + + if (pinstance->cfg_table->flags & MICROCODE_UPDATE_REQUIRED) + dev_err(&pinstance->pdev->dev, "Require microcode download\n"); + + /* resource list is protected by pinstance->resource_lock. + * init_res_table can be called from probe (user-thread) or runtime + * reset (timer/tasklet) + */ + spin_lock_irqsave(&pinstance->resource_lock, lock_flags); + + list_for_each_entry_safe(res, temp, &pinstance->used_res_q, queue) + list_move_tail(&res->queue, &old_res); + + for (i = 0; i < pinstance->cfg_table->num_entries; i++) { + cfgte = &pinstance->cfg_table->entries[i]; + + if (!pmcraid_expose_resource(cfgte)) + continue; + + found = 0; + + /* If this entry was already detected and initialized */ + list_for_each_entry_safe(res, temp, &old_res, queue) { + + rc = memcmp(&res->cfg_entry.resource_address, + &cfgte->resource_address, + sizeof(cfgte->resource_address)); + if (!rc) { + list_move_tail(&res->queue, + &pinstance->used_res_q); + found = 1; + break; + } + } + + /* If this is new entry, initialize it and add it the queue */ + if (!found) { + + if (list_empty(&pinstance->free_res_q)) { + dev_err(&pinstance->pdev->dev, + "Too many devices attached\n"); + break; + } + + found = 1; + res = list_entry(pinstance->free_res_q.next, + struct pmcraid_resource_entry, queue); + + res->scsi_dev = NULL; + res->change_detected = RES_CHANGE_ADD; + res->reset_progress = 0; + list_move_tail(&res->queue, &pinstance->used_res_q); + } + + /* copy new configuration table entry details into driver + * maintained resource entry + */ + if (found) { + memcpy(&res->cfg_entry, cfgte, + sizeof(struct pmcraid_config_table_entry)); + pmcraid_info("New res type:%x, vset:%x, addr:%x:\n", + res->cfg_entry.resource_type, + res->cfg_entry.unique_flags1, + le32_to_cpu(res->cfg_entry.resource_address)); + } + } + + /* Detect any deleted entries, mark them for deletion from mid-layer */ + list_for_each_entry_safe(res, temp, &old_res, queue) { + + if (res->scsi_dev) { + res->change_detected = RES_CHANGE_DEL; + res->cfg_entry.resource_handle = + PMCRAID_INVALID_RES_HANDLE; + list_move_tail(&res->queue, &pinstance->used_res_q); + } else { + list_move_tail(&res->queue, &pinstance->free_res_q); + } + } + + /* release the resource list lock */ + spin_unlock_irqrestore(&pinstance->resource_lock, lock_flags); + pmcraid_set_supported_devs(cmd); +} + +/** + * pmcraid_querycfg - Send a Query IOA Config to the adapter. 
+ * @cmd: pointer to pmcraid_cmd struct + * + * This function sends a Query IOA Configuration command to the adapter to + * retrieve the IOA configuration table. + * + * Return value: + * none + */ +static void pmcraid_querycfg(struct pmcraid_cmd *cmd) +{ + struct pmcraid_ioarcb *ioarcb = &cmd->ioa_cb->ioarcb; + struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl; + struct pmcraid_instance *pinstance = cmd->drv_inst; + int cfg_table_size = cpu_to_be32(sizeof(struct pmcraid_config_table)); + + ioarcb->request_type = REQ_TYPE_IOACMD; + ioarcb->resource_handle = cpu_to_le32(PMCRAID_IOA_RES_HANDLE); + + ioarcb->cdb[0] = PMCRAID_QUERY_IOA_CONFIG; + + /* firmware requires a 4-byte length field, specified in big-endian format */ + memcpy(&(ioarcb->cdb[10]), &cfg_table_size, sizeof(cfg_table_size)); + + /* Since the entire config table can be described by a single IOADL, it + * can be part of the IOARCB itself + */ + ioarcb->ioadl_bus_addr = cpu_to_le64((cmd->ioa_cb_bus_addr) + + offsetof(struct pmcraid_ioarcb, + add_data.u.ioadl[0])); + ioarcb->ioadl_length = cpu_to_le32(sizeof(struct pmcraid_ioadl_desc)); + ioarcb->ioarcb_bus_addr &= ~(0x1FULL); + + ioarcb->request_flags0 |= NO_LINK_DESCS; + ioarcb->data_transfer_length = + cpu_to_le32(sizeof(struct pmcraid_config_table)); + + ioadl = &(ioarcb->add_data.u.ioadl[0]); + ioadl->flags = cpu_to_le32(IOADL_FLAGS_LAST_DESC); + ioadl->address = cpu_to_le64(pinstance->cfg_table_bus_addr); + ioadl->data_len = cpu_to_le32(sizeof(struct pmcraid_config_table)); + + pmcraid_send_cmd(cmd, pmcraid_init_res_table, + PMCRAID_INTERNAL_TIMEOUT, pmcraid_timeout_handler); +} + + +/** + * pmcraid_probe - PCI probe entry point for PMC MaxRAID controller driver + * @pdev: pointer to pci device structure + * @dev_id: pointer to device ids structure + * + * Return Value + * returns 0 if the device is claimed and successfully configured. + * returns non-zero error code in case of any failure + */ +static int __devinit pmcraid_probe( + struct pci_dev *pdev, + const struct pci_device_id *dev_id +) +{ + struct pmcraid_instance *pinstance; + struct Scsi_Host *host; + void __iomem *mapped_pci_addr; + int rc = PCIBIOS_SUCCESSFUL; + + if (atomic_read(&pmcraid_adapter_count) >= PMCRAID_MAX_ADAPTERS) { + pmcraid_err + ("maximum number (%d) of supported adapters reached\n", + atomic_read(&pmcraid_adapter_count)); + return -ENOMEM; + } + + atomic_inc(&pmcraid_adapter_count); + rc = pci_enable_device(pdev); + + if (rc) { + dev_err(&pdev->dev, "Cannot enable adapter\n"); + atomic_dec(&pmcraid_adapter_count); + return rc; + } + + dev_info(&pdev->dev, + "Found new IOA(%x:%x), Total IOA count: %d\n", + pdev->vendor, pdev->device, + atomic_read(&pmcraid_adapter_count)); + + rc = pci_request_regions(pdev, PMCRAID_DRIVER_NAME); + + if (rc < 0) { + dev_err(&pdev->dev, + "Couldn't register memory range of registers\n"); + goto out_disable_device; + } + + mapped_pci_addr = pci_iomap(pdev, 0, 0); + + if (!mapped_pci_addr) { + dev_err(&pdev->dev, "Couldn't map PCI registers memory\n"); + rc = -ENOMEM; + goto out_release_regions; + } + + pci_set_master(pdev); + + /* Firmware requires the system bus address of the IOARCB to be within + * 32-bit addressable range though it has a 64-bit IOARRIN register. + * However, firmware supports 64-bit streaming DMA buffers, whereas + * coherent buffers are to be 32-bit. Since pci_alloc_consistent always + * returns memory within 4GB (if not, change this logic), coherent + * buffers are within firmware acceptable address ranges.
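+ * + * When dma_addr_t is only 32 bits wide, the 64-bit mask is not even + * attempted: the sizeof() test that follows short-circuits straight to + * the 32-bit streaming DMA mask.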
+ */ + if ((sizeof(dma_addr_t) == 4) || + pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + + /* firmware expects 32-bit DMA addresses for IOARRIN register; set 32 + * bit mask for pci_alloc_consistent to return addresses within 4GB + */ + if (rc == 0) + rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); + + if (rc != 0) { + dev_err(&pdev->dev, "Failed to set PCI DMA mask\n"); + goto cleanup_nomem; + } + + host = scsi_host_alloc(&pmcraid_host_template, + sizeof(struct pmcraid_instance)); + + if (!host) { + dev_err(&pdev->dev, "scsi_host_alloc failed!\n"); + rc = -ENOMEM; + goto cleanup_nomem; + } + + host->max_id = PMCRAID_MAX_NUM_TARGETS_PER_BUS; + host->max_lun = PMCRAID_MAX_NUM_LUNS_PER_TARGET; + host->unique_id = host->host_no; + host->max_channel = PMCRAID_MAX_BUS_TO_SCAN; + host->max_cmd_len = PMCRAID_MAX_CDB_LEN; + + /* zero out entire instance structure */ + pinstance = (struct pmcraid_instance *)host->hostdata; + memset(pinstance, 0, sizeof(*pinstance)); + + pinstance->chip_cfg = + (struct pmcraid_chip_details *)(dev_id->driver_data); + + rc = pmcraid_init_instance(pdev, host, mapped_pci_addr); + + if (rc < 0) { + dev_err(&pdev->dev, "failed to initialize adapter instance\n"); + goto out_scsi_host_put; + } + + pci_set_drvdata(pdev, pinstance); + + /* Save PCI config-space for use following the reset */ + rc = pci_save_state(pinstance->pdev); + + if (rc != 0) { + dev_err(&pdev->dev, "Failed to save PCI config space\n"); + goto out_scsi_host_put; + } + + pmcraid_disable_interrupts(pinstance, ~0); + + rc = pmcraid_register_interrupt_handler(pinstance); + + if (rc) { + pmcraid_err("couldn't register interrupt handler\n"); + goto out_scsi_host_put; + } + + pmcraid_init_tasklets(pinstance); + + /* allocate various buffers used by LLD */ + rc = pmcraid_init_buffers(pinstance); + + if (rc) { + pmcraid_err("couldn't allocate memory blocks\n"); + goto out_unregister_isr; + } + + /* check the reset type required */ + pmcraid_reset_type(pinstance); + + pmcraid_enable_interrupts(pinstance, PMCRAID_PCI_INTERRUPTS); + + /* Start IOA firmware initialization and bring card to Operational + * state.
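+ * A failure here is fatal for probe: the out_release_bufs error path + * unwinds buffers, tasklets, the interrupt handler and PCI state in + * reverse order of acquisition.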
+ */ + pmcraid_info("starting IOA initialization sequence\n"); + if (pmcraid_reset_bringup(pinstance)) { + pmcraid_err("couldn't initialize IOA \n"); + rc = 1; + goto out_release_bufs; + } + + /* Add adapter instance into mid-layer list */ + rc = scsi_add_host(pinstance->host, &pdev->dev); + if (rc != 0) { + pmcraid_err("couldn't add host into mid-layer: %d\n", rc); + goto out_release_bufs; + } + + scsi_scan_host(pinstance->host); + + rc = pmcraid_setup_chrdev(pinstance); + + if (rc != 0) { + pmcraid_err("couldn't create mgmt interface, error: %x\n", + rc); + goto out_remove_host; + } + + /* Schedule worker thread to handle CCN and take care of adding and + * removing devices to OS + */ + atomic_set(&pinstance->expose_resources, 1); + schedule_work(&pinstance->worker_q); + return rc; + +out_remove_host: + scsi_remove_host(host); + +out_release_bufs: + pmcraid_release_buffers(pinstance); + +out_unregister_isr: + pmcraid_kill_tasklets(pinstance); + pmcraid_unregister_interrupt_handler(pinstance); + +out_scsi_host_put: + scsi_host_put(host); + +cleanup_nomem: + iounmap(mapped_pci_addr); + +out_release_regions: + pci_release_regions(pdev); + +out_disable_device: + atomic_dec(&pmcraid_adapter_count); + pci_set_drvdata(pdev, NULL); + pci_disable_device(pdev); + return -ENODEV; +} + +/* + * PCI driver structure of pcmraid driver + */ +static struct pci_driver pmcraid_driver = { + .name = PMCRAID_DRIVER_NAME, + .id_table = pmcraid_pci_table, + .probe = pmcraid_probe, + .remove = pmcraid_remove, + .suspend = pmcraid_suspend, + .resume = pmcraid_resume, + .shutdown = pmcraid_shutdown +}; + + +/** + * pmcraid_init - module load entry point + */ +static int __init pmcraid_init(void) +{ + dev_t dev; + int error; + + pmcraid_info("%s Device Driver version: %s %s\n", + PMCRAID_DRIVER_NAME, + PMCRAID_DRIVER_VERSION, PMCRAID_DRIVER_DATE); + + error = alloc_chrdev_region(&dev, 0, + PMCRAID_MAX_ADAPTERS, + PMCRAID_DEVFILE); + + if (error) { + pmcraid_err("failed to get a major number for adapters\n"); + goto out_init; + } + + pmcraid_major = MAJOR(dev); + pmcraid_class = class_create(THIS_MODULE, PMCRAID_DEVFILE); + + if (IS_ERR(pmcraid_class)) { + error = PTR_ERR(pmcraid_class); + pmcraid_err("failed to register with with sysfs, error = %x\n", + error); + goto out_unreg_chrdev; + } + + + error = pmcraid_netlink_init(); + + if (error) + goto out_unreg_chrdev; + + error = pci_register_driver(&pmcraid_driver); + + if (error == 0) + goto out_init; + + pmcraid_err("failed to register pmcraid driver, error = %x\n", + error); + class_destroy(pmcraid_class); + pmcraid_netlink_release(); + +out_unreg_chrdev: + unregister_chrdev_region(MKDEV(pmcraid_major, 0), PMCRAID_MAX_ADAPTERS); +out_init: + return error; +} + +/** + * pmcraid_exit - module unload entry point + */ +static void __exit pmcraid_exit(void) +{ + pmcraid_netlink_release(); + class_destroy(pmcraid_class); + unregister_chrdev_region(MKDEV(pmcraid_major, 0), + PMCRAID_MAX_ADAPTERS); + pci_unregister_driver(&pmcraid_driver); +} + +module_init(pmcraid_init); +module_exit(pmcraid_exit); diff --git a/drivers/scsi/pmcraid.h b/drivers/scsi/pmcraid.h new file mode 100644 index 000000000000..614b3a764fed --- /dev/null +++ b/drivers/scsi/pmcraid.h @@ -0,0 +1,1029 @@ +/* + * pmcraid.h -- PMC Sierra MaxRAID controller driver header file + * + * Copyright (C) 2008, 2009 PMC Sierra Inc. 
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#ifndef _PMCRAID_H +#define _PMCRAID_H + +#include <linux/version.h> +#include <linux/types.h> +#include <linux/completion.h> +#include <linux/list.h> +#include <scsi/scsi.h> +#include <linux/kref.h> +#include <scsi/scsi_cmnd.h> +#include <linux/cdev.h> +#include <net/netlink.h> +#include <linux/connector.h> +#include <linux/mutex.h> +/* + * Driver name : string representing the driver name + * Device file : /dev file to be used for management interfaces + * Driver version: version string in major_version.minor_version.patch format + * Driver date : date information in "Mon dd yyyy" format + */ +#define PMCRAID_DRIVER_NAME "PMC MaxRAID" +#define PMCRAID_DEVFILE "pmcsas" +#define PMCRAID_DRIVER_VERSION "1.0.2" +#define PMCRAID_DRIVER_DATE __DATE__ + +/* Maximum number of adapters supported by current version of the driver */ +#define PMCRAID_MAX_ADAPTERS 1024 + +/* Bit definitions as per firmware, bit position [0][1][2].....[31] */ +#define PMC_BIT8(n) (1 << (7-n)) +#define PMC_BIT16(n) (1 << (15-n)) +#define PMC_BIT32(n) (1 << (31-n)) + +/* PMC PCI vendor ID and device ID values */ +#define PCI_VENDOR_ID_PMC 0x11F8 +#define PCI_DEVICE_ID_PMC_MAXRAID 0x5220 + +/* + * MAX_CMD : maximum commands that can be outstanding with IOA + * MAX_IO_CMD : command blocks available for IO commands + * MAX_HCAM_CMD : command blocks available for HCAMS + * MAX_INTERNAL_CMD : command blocks available for internal commands like reset + */ +#define PMCRAID_MAX_CMD 1024 +#define PMCRAID_MAX_IO_CMD 1020 +#define PMCRAID_MAX_HCAM_CMD 2 +#define PMCRAID_MAX_INTERNAL_CMD 2 + +/* MAX_IOADLS : max number of scatter-gather lists supported by IOA + * IOADLS_INTERNAL : number of ioadls included as part of IOARCB.
+ * IOADLS_EXTERNAL : number of ioadls allocated external to IOARCB + */ +#define PMCRAID_IOADLS_INTERNAL 27 +#define PMCRAID_IOADLS_EXTERNAL 37 +#define PMCRAID_MAX_IOADLS PMCRAID_IOADLS_INTERNAL + +/* HRRQ_ENTRY_SIZE : size of hrrq buffer + * IOARCB_ALIGNMENT : alignment required for IOARCB + * IOADL_ALIGNMENT : alignment requirement for IOADLs + * MSIX_VECTORS : number of MSIX vectors supported + */ +#define HRRQ_ENTRY_SIZE sizeof(__le32) +#define PMCRAID_IOARCB_ALIGNMENT 32 +#define PMCRAID_IOADL_ALIGNMENT 16 +#define PMCRAID_IOASA_ALIGNMENT 4 +#define PMCRAID_NUM_MSIX_VECTORS 1 + +/* various other limits */ +#define PMCRAID_VENDOR_ID_LEN 8 +#define PMCRAID_PRODUCT_ID_LEN 16 +#define PMCRAID_SERIAL_NUM_LEN 8 +#define PMCRAID_LUN_LEN 8 +#define PMCRAID_MAX_CDB_LEN 16 +#define PMCRAID_DEVICE_ID_LEN 8 +#define PMCRAID_SENSE_DATA_LEN 256 +#define PMCRAID_ADD_CMD_PARAM_LEN 48 + +#define PMCRAID_MAX_BUS_TO_SCAN 1 +#define PMCRAID_MAX_NUM_TARGETS_PER_BUS 256 +#define PMCRAID_MAX_NUM_LUNS_PER_TARGET 8 + +/* IOA bus/target/lun number of IOA resources */ +#define PMCRAID_IOA_BUS_ID 0xfe +#define PMCRAID_IOA_TARGET_ID 0xff +#define PMCRAID_IOA_LUN_ID 0xff +#define PMCRAID_VSET_BUS_ID 0x1 +#define PMCRAID_VSET_LUN_ID 0x0 +#define PMCRAID_PHYS_BUS_ID 0x0 +#define PMCRAID_VIRTUAL_ENCL_BUS_ID 0x8 +#define PMCRAID_MAX_VSET_TARGETS 240 +#define PMCRAID_MAX_VSET_LUNS_PER_TARGET 8 + +#define PMCRAID_IOA_MAX_SECTORS 32767 +#define PMCRAID_VSET_MAX_SECTORS 512 +#define PMCRAID_MAX_CMD_PER_LUN 254 + +/* Number of configuration table entries (resources) */ +#define PMCRAID_MAX_NUM_OF_VSETS 240 + +/* Todo : Check max limit for Phase 1 */ +#define PMCRAID_MAX_NUM_OF_PHY_DEVS 256 + +/* MAX_NUM_OF_DEVS includes 1 FP, 1 Dummy Enclosure device */ +#define PMCRAID_MAX_NUM_OF_DEVS \ + (PMCRAID_MAX_NUM_OF_VSETS + PMCRAID_MAX_NUM_OF_PHY_DEVS + 2) + +#define PMCRAID_MAX_RESOURCES PMCRAID_MAX_NUM_OF_DEVS + +/* Adapter Commands used by driver */ +#define PMCRAID_QUERY_RESOURCE_STATE 0xC2 +#define PMCRAID_RESET_DEVICE 0xC3 +/* options to select reset target */ +#define ENABLE_RESET_MODIFIER 0x80 +#define RESET_DEVICE_LUN 0x40 +#define RESET_DEVICE_TARGET 0x20 +#define RESET_DEVICE_BUS 0x10 + +#define PMCRAID_IDENTIFY_HRRQ 0xC4 +#define PMCRAID_QUERY_IOA_CONFIG 0xC5 +#define PMCRAID_QUERY_CMD_STATUS 0xCB +#define PMCRAID_ABORT_CMD 0xC7 + +/* CANCEL ALL command, provides option for setting SYNC_COMPLETE + * on the target resources for which commands got cancelled + */ +#define PMCRAID_CANCEL_ALL_REQUESTS 0xCE +#define PMCRAID_SYNC_COMPLETE_AFTER_CANCEL PMC_BIT8(0) + +/* HCAM command and types of HCAM supported by IOA */ +#define PMCRAID_HOST_CONTROLLED_ASYNC 0xCF +#define PMCRAID_HCAM_CODE_CONFIG_CHANGE 0x01 +#define PMCRAID_HCAM_CODE_LOG_DATA 0x02 + +/* IOA shutdown command and various shutdown types */ +#define PMCRAID_IOA_SHUTDOWN 0xF7 +#define PMCRAID_SHUTDOWN_NORMAL 0x00 +#define PMCRAID_SHUTDOWN_PREPARE_FOR_NORMAL 0x40 +#define PMCRAID_SHUTDOWN_NONE 0x100 +#define PMCRAID_SHUTDOWN_ABBREV 0x80 + +/* SET SUPPORTED DEVICES command and the option to select all the + * devices to be supported + */ +#define PMCRAID_SET_SUPPORTED_DEVICES 0xFB +#define ALL_DEVICES_SUPPORTED PMC_BIT8(0) + +/* This option is used with SCSI WRITE_BUFFER command */ +#define PMCRAID_WR_BUF_DOWNLOAD_AND_SAVE 0x05 + +/* IOASC Codes used by driver */ +#define PMCRAID_IOASC_SENSE_MASK 0xFFFFFF00 +#define PMCRAID_IOASC_SENSE_KEY(ioasc) ((ioasc) >> 24) +#define PMCRAID_IOASC_SENSE_CODE(ioasc) (((ioasc) & 0x00ff0000) >> 16) +#define 
PMCRAID_IOASC_SENSE_QUAL(ioasc) (((ioasc) & 0x0000ff00) >> 8) +#define PMCRAID_IOASC_SENSE_STATUS(ioasc) ((ioasc) & 0x000000ff) + +#define PMCRAID_IOASC_GOOD_COMPLETION 0x00000000 +#define PMCRAID_IOASC_NR_INIT_CMD_REQUIRED 0x02040200 +#define PMCRAID_IOASC_NR_IOA_RESET_REQUIRED 0x02048000 +#define PMCRAID_IOASC_NR_SYNC_REQUIRED 0x023F0000 +#define PMCRAID_IOASC_ME_READ_ERROR_NO_REALLOC 0x03110C00 +#define PMCRAID_IOASC_HW_CANNOT_COMMUNICATE 0x04050000 +#define PMCRAID_IOASC_HW_DEVICE_TIMEOUT 0x04080100 +#define PMCRAID_IOASC_HW_DEVICE_BUS_STATUS_ERROR 0x04448500 +#define PMCRAID_IOASC_HW_IOA_RESET_REQUIRED 0x04448600 +#define PMCRAID_IOASC_IR_INVALID_RESOURCE_HANDLE 0x05250000 +#define PMCRAID_IOASC_AC_TERMINATED_BY_HOST 0x0B5A0000 +#define PMCRAID_IOASC_UA_BUS_WAS_RESET 0x06290000 +#define PMCRAID_IOASC_UA_BUS_WAS_RESET_BY_OTHER 0x06298000 + +/* Driver defined IOASCs */ +#define PMCRAID_IOASC_IOA_WAS_RESET 0x10000001 +#define PMCRAID_IOASC_PCI_ACCESS_ERROR 0x10000002 + +/* Various timeout values (in milliseconds) used. If any of these are chip + * specific, move them to pmcraid_chip_details structure. + */ +#define PMCRAID_PCI_DEASSERT_TIMEOUT 2000 +#define PMCRAID_BIST_TIMEOUT 2000 +#define PMCRAID_AENWAIT_TIMEOUT 5000 +#define PMCRAID_TRANSOP_TIMEOUT 60000 + +#define PMCRAID_RESET_TIMEOUT (2 * HZ) +#define PMCRAID_CHECK_FOR_RESET_TIMEOUT ((HZ / 10)) +#define PMCRAID_VSET_IO_TIMEOUT (60 * HZ) +#define PMCRAID_INTERNAL_TIMEOUT (60 * HZ) +#define PMCRAID_SHUTDOWN_TIMEOUT (150 * HZ) +#define PMCRAID_RESET_BUS_TIMEOUT (60 * HZ) +#define PMCRAID_RESET_HOST_TIMEOUT (150 * HZ) +#define PMCRAID_REQUEST_SENSE_TIMEOUT (30 * HZ) +#define PMCRAID_SET_SUP_DEV_TIMEOUT (2 * 60 * HZ) + +/* structure to represent a scatter-gather element (IOADL descriptor) */ +struct pmcraid_ioadl_desc { + __le64 address; + __le32 data_len; + __u8 reserved[3]; + __u8 flags; +} __attribute__((packed, aligned(PMCRAID_IOADL_ALIGNMENT))); + +/* pmcraid_ioadl_desc.flags values */ +#define IOADL_FLAGS_CHAINED PMC_BIT8(0) +#define IOADL_FLAGS_LAST_DESC PMC_BIT8(1) +#define IOADL_FLAGS_READ_LAST PMC_BIT8(1) +#define IOADL_FLAGS_WRITE_LAST PMC_BIT8(1) + + +/* additional IOARCB data which can be CDB or additional request parameters + * or list of IOADLs. Firmware supports max of 512 bytes for IOARCB, hence then + * number of IOADLs are limted to 27. 
In case they are more than 27, they will + * be used in chained form + */ +struct pmcraid_ioarcb_add_data { + union { + struct pmcraid_ioadl_desc ioadl[PMCRAID_IOADLS_INTERNAL]; + __u8 add_cmd_params[PMCRAID_ADD_CMD_PARAM_LEN]; + } u; +}; + +/* + * IOA Request Control Block + */ +struct pmcraid_ioarcb { + __le64 ioarcb_bus_addr; + __le32 resource_handle; + __le32 response_handle; + __le64 ioadl_bus_addr; + __le32 ioadl_length; + __le32 data_transfer_length; + __le64 ioasa_bus_addr; + __le16 ioasa_len; + __le16 cmd_timeout; + __le16 add_cmd_param_offset; + __le16 add_cmd_param_length; + __le32 reserved1[2]; + __le32 reserved2; + __u8 request_type; + __u8 request_flags0; + __u8 request_flags1; + __u8 hrrq_id; + __u8 cdb[PMCRAID_MAX_CDB_LEN]; + struct pmcraid_ioarcb_add_data add_data; +} __attribute__((packed, aligned(PMCRAID_IOARCB_ALIGNMENT))); + +/* well known resource handle values */ +#define PMCRAID_IOA_RES_HANDLE 0xffffffff +#define PMCRAID_INVALID_RES_HANDLE 0 + +/* pmcraid_ioarcb.request_type values */ +#define REQ_TYPE_SCSI 0x00 +#define REQ_TYPE_IOACMD 0x01 +#define REQ_TYPE_HCAM 0x02 + +/* pmcraid_ioarcb.flags0 values */ +#define TRANSFER_DIR_WRITE PMC_BIT8(0) +#define INHIBIT_UL_CHECK PMC_BIT8(2) +#define SYNC_OVERRIDE PMC_BIT8(3) +#define SYNC_COMPLETE PMC_BIT8(4) +#define NO_LINK_DESCS PMC_BIT8(5) + +/* pmcraid_ioarcb.flags1 values */ +#define DELAY_AFTER_RESET PMC_BIT8(0) +#define TASK_TAG_SIMPLE 0x10 +#define TASK_TAG_ORDERED 0x20 +#define TASK_TAG_QUEUE_HEAD 0x30 + +/* toggle bit offset in response handle */ +#define HRRQ_TOGGLE_BIT 0x01 +#define HRRQ_RESPONSE_BIT 0x02 + +/* IOA Status Area */ +struct pmcraid_ioasa_vset { + __le32 failing_lba_hi; + __le32 failing_lba_lo; + __le32 reserved; +} __attribute__((packed, aligned(4))); + +struct pmcraid_ioasa { + __le32 ioasc; + __le16 returned_status_length; + __le16 available_status_length; + __le32 residual_data_length; + __le32 ilid; + __le32 fd_ioasc; + __le32 fd_res_address; + __le32 fd_res_handle; + __le32 reserved; + + /* resource specific sense information */ + union { + struct pmcraid_ioasa_vset vset; + } u; + + /* IOA autosense data */ + __le16 auto_sense_length; + __le16 error_data_length; + __u8 sense_data[PMCRAID_SENSE_DATA_LEN]; +} __attribute__((packed, aligned(4))); + +#define PMCRAID_DRIVER_ILID 0xffffffff + +/* Config Table Entry per Resource */ +struct pmcraid_config_table_entry { + __u8 resource_type; + __u8 bus_protocol; + __le16 array_id; + __u8 common_flags0; + __u8 common_flags1; + __u8 unique_flags0; + __u8 unique_flags1; /*also used as vset target_id */ + __le32 resource_handle; + __le32 resource_address; + __u8 device_id[PMCRAID_DEVICE_ID_LEN]; + __u8 lun[PMCRAID_LUN_LEN]; +} __attribute__((packed, aligned(4))); + +/* resource types (config_table_entry.resource_type values) */ +#define RES_TYPE_AF_DASD 0x00 +#define RES_TYPE_GSCSI 0x01 +#define RES_TYPE_VSET 0x02 +#define RES_TYPE_IOA_FP 0xFF + +#define RES_IS_IOA(res) ((res).resource_type == RES_TYPE_IOA_FP) +#define RES_IS_GSCSI(res) ((res).resource_type == RES_TYPE_GSCSI) +#define RES_IS_VSET(res) ((res).resource_type == RES_TYPE_VSET) +#define RES_IS_AFDASD(res) ((res).resource_type == RES_TYPE_AF_DASD) + +/* bus_protocol values used by driver */ +#define RES_TYPE_VENCLOSURE 0x8 + +/* config_table_entry.common_flags0 */ +#define MULTIPATH_RESOURCE PMC_BIT32(0) + +/* unique_flags1 */ +#define IMPORT_MODE_MANUAL PMC_BIT8(0) + +/* well known resource handle values */ +#define RES_HANDLE_IOA 0xFFFFFFFF +#define RES_HANDLE_NONE 0x00000000 + +/* well known 
resource address values */ +#define RES_ADDRESS_IOAFP 0xFEFFFFFF +#define RES_ADDRESS_INVALID 0xFFFFFFFF + +/* BUS/TARGET/LUN values from resource_addrr */ +#define RES_BUS(res_addr) (le32_to_cpu(res_addr) & 0xFF) +#define RES_TARGET(res_addr) ((le32_to_cpu(res_addr) >> 16) & 0xFF) +#define RES_LUN(res_addr) 0x0 + +/* configuration table structure */ +struct pmcraid_config_table { + __le16 num_entries; + __u8 table_format; + __u8 reserved1; + __u8 flags; + __u8 reserved2[11]; + struct pmcraid_config_table_entry entries[PMCRAID_MAX_RESOURCES]; +} __attribute__((packed, aligned(4))); + +/* config_table.flags value */ +#define MICROCODE_UPDATE_REQUIRED PMC_BIT32(0) + +/* + * HCAM format + */ +#define PMCRAID_HOSTRCB_LDNSIZE 4056 + +/* Error log notification format */ +struct pmcraid_hostrcb_error { + __le32 fd_ioasc; + __le32 fd_ra; + __le32 fd_rh; + __le32 prc; + union { + __u8 data[PMCRAID_HOSTRCB_LDNSIZE]; + } u; +} __attribute__ ((packed, aligned(4))); + +struct pmcraid_hcam_hdr { + __u8 op_code; + __u8 notification_type; + __u8 notification_lost; + __u8 flags; + __u8 overlay_id; + __u8 reserved1[3]; + __le32 ilid; + __le32 timestamp1; + __le32 timestamp2; + __le32 data_len; +} __attribute__((packed, aligned(4))); + +#define PMCRAID_AEN_GROUP 0x3 + +struct pmcraid_hcam_ccn { + struct pmcraid_hcam_hdr header; + struct pmcraid_config_table_entry cfg_entry; +} __attribute__((packed, aligned(4))); + +struct pmcraid_hcam_ldn { + struct pmcraid_hcam_hdr header; + struct pmcraid_hostrcb_error error_log; +} __attribute__((packed, aligned(4))); + +/* pmcraid_hcam.op_code values */ +#define HOSTRCB_TYPE_CCN 0xE1 +#define HOSTRCB_TYPE_LDN 0xE2 + +/* pmcraid_hcam.notification_type values */ +#define NOTIFICATION_TYPE_ENTRY_CHANGED 0x0 +#define NOTIFICATION_TYPE_ENTRY_NEW 0x1 +#define NOTIFICATION_TYPE_ENTRY_DELETED 0x2 +#define NOTIFICATION_TYPE_ERROR_LOG 0x10 +#define NOTIFICATION_TYPE_INFORMATION_LOG 0x11 + +#define HOSTRCB_NOTIFICATIONS_LOST PMC_BIT8(0) + +/* pmcraid_hcam.flags values */ +#define HOSTRCB_INTERNAL_OP_ERROR PMC_BIT8(0) +#define HOSTRCB_ERROR_RESPONSE_SENT PMC_BIT8(1) + +/* pmcraid_hcam.overlay_id values */ +#define HOSTRCB_OVERLAY_ID_08 0x08 +#define HOSTRCB_OVERLAY_ID_09 0x09 +#define HOSTRCB_OVERLAY_ID_11 0x11 +#define HOSTRCB_OVERLAY_ID_12 0x12 +#define HOSTRCB_OVERLAY_ID_13 0x13 +#define HOSTRCB_OVERLAY_ID_14 0x14 +#define HOSTRCB_OVERLAY_ID_16 0x16 +#define HOSTRCB_OVERLAY_ID_17 0x17 +#define HOSTRCB_OVERLAY_ID_20 0x20 +#define HOSTRCB_OVERLAY_ID_FF 0xFF + +/* Implementation specific card details */ +struct pmcraid_chip_details { + /* hardware register offsets */ + unsigned long ioastatus; + unsigned long ioarrin; + unsigned long mailbox; + unsigned long global_intr_mask; + unsigned long ioa_host_intr; + unsigned long ioa_host_intr_clr; + unsigned long ioa_host_mask; + unsigned long ioa_host_mask_clr; + unsigned long host_ioa_intr; + unsigned long host_ioa_intr_clr; + + /* timeout used during transitional to operational state */ + unsigned long transop_timeout; +}; + +/* IOA to HOST doorbells (interrupts) */ +#define INTRS_TRANSITION_TO_OPERATIONAL PMC_BIT32(0) +#define INTRS_IOARCB_TRANSFER_FAILED PMC_BIT32(3) +#define INTRS_IOA_UNIT_CHECK PMC_BIT32(4) +#define INTRS_NO_HRRQ_FOR_CMD_RESPONSE PMC_BIT32(5) +#define INTRS_CRITICAL_OP_IN_PROGRESS PMC_BIT32(6) +#define INTRS_IO_DEBUG_ACK PMC_BIT32(7) +#define INTRS_IOARRIN_LOST PMC_BIT32(27) +#define INTRS_SYSTEM_BUS_MMIO_ERROR PMC_BIT32(28) +#define INTRS_IOA_PROCESSOR_ERROR PMC_BIT32(29) +#define INTRS_HRRQ_VALID PMC_BIT32(30) 
+#define INTRS_OPERATIONAL_STATUS PMC_BIT32(0) + +/* Host to IOA Doorbells */ +#define DOORBELL_RUNTIME_RESET PMC_BIT32(1) +#define DOORBELL_IOA_RESET_ALERT PMC_BIT32(7) +#define DOORBELL_IOA_DEBUG_ALERT PMC_BIT32(9) +#define DOORBELL_ENABLE_DESTRUCTIVE_DIAGS PMC_BIT32(8) +#define DOORBELL_IOA_START_BIST PMC_BIT32(23) +#define DOORBELL_RESET_IOA PMC_BIT32(31) + +/* Global interrupt mask register value */ +#define GLOBAL_INTERRUPT_MASK 0x4ULL + +#define PMCRAID_ERROR_INTERRUPTS (INTRS_IOARCB_TRANSFER_FAILED | \ + INTRS_IOA_UNIT_CHECK | \ + INTRS_NO_HRRQ_FOR_CMD_RESPONSE | \ + INTRS_IOARRIN_LOST | \ + INTRS_SYSTEM_BUS_MMIO_ERROR | \ + INTRS_IOA_PROCESSOR_ERROR) + +#define PMCRAID_PCI_INTERRUPTS (PMCRAID_ERROR_INTERRUPTS | \ + INTRS_HRRQ_VALID | \ + INTRS_CRITICAL_OP_IN_PROGRESS |\ + INTRS_TRANSITION_TO_OPERATIONAL) + +/* control_block, associated with each of the commands contains IOARCB, IOADLs + * memory for IOASA. Additional 3 * 16 bytes are allocated in order to support + * additional request parameters (of max size 48) any command. + */ +struct pmcraid_control_block { + struct pmcraid_ioarcb ioarcb; + struct pmcraid_ioadl_desc ioadl[PMCRAID_IOADLS_EXTERNAL + 3]; + struct pmcraid_ioasa ioasa; +} __attribute__ ((packed, aligned(PMCRAID_IOARCB_ALIGNMENT))); + +/* pmcraid_sglist - Scatter-gather list allocated for passthrough ioctls + */ +struct pmcraid_sglist { + u32 order; + u32 num_sg; + u32 num_dma_sg; + u32 buffer_len; + struct scatterlist scatterlist[1]; +}; + +/* pmcraid_cmd - LLD representation of SCSI command */ +struct pmcraid_cmd { + + /* Ptr and bus address of DMA.able control block for this command */ + struct pmcraid_control_block *ioa_cb; + dma_addr_t ioa_cb_bus_addr; + + /* sense buffer for REQUEST SENSE command if firmware is not sending + * auto sense data + */ + dma_addr_t sense_buffer_dma; + dma_addr_t dma_handle; + u8 *sense_buffer; + + /* pointer to mid layer structure of SCSI commands */ + struct scsi_cmnd *scsi_cmd; + + struct list_head free_list; + struct completion wait_for_completion; + struct timer_list timer; /* needed for internal commands */ + u32 timeout; /* current timeout value */ + u32 index; /* index into the command list */ + u8 completion_req; /* for handling internal commands */ + u8 release; /* for handling completions */ + + void (*cmd_done) (struct pmcraid_cmd *); + struct pmcraid_instance *drv_inst; + + struct pmcraid_sglist *sglist; /* used for passthrough IOCTLs */ + + /* scratch used during reset sequence */ + union { + unsigned long time_left; + struct pmcraid_resource_entry *res; + } u; +}; + +/* + * Interrupt registers of IOA + */ +struct pmcraid_interrupts { + void __iomem *ioa_host_interrupt_reg; + void __iomem *ioa_host_interrupt_clr_reg; + void __iomem *ioa_host_interrupt_mask_reg; + void __iomem *ioa_host_interrupt_mask_clr_reg; + void __iomem *global_interrupt_mask_reg; + void __iomem *host_ioa_interrupt_reg; + void __iomem *host_ioa_interrupt_clr_reg; +}; + +/* ISR parameters LLD allocates (one for each MSI-X if enabled) vectors */ +struct pmcraid_isr_param { + u8 hrrq_id; /* hrrq entry index */ + u16 vector; /* allocated msi-x vector */ + struct pmcraid_instance *drv_inst; +}; + +/* AEN message header sent as part of event data to applications */ +struct pmcraid_aen_msg { + u32 hostno; + u32 length; + u8 reserved[8]; + u8 data[0]; +}; + +struct pmcraid_hostrcb { + struct pmcraid_instance *drv_inst; + struct pmcraid_aen_msg *msg; + struct pmcraid_hcam_hdr *hcam; /* pointer to hcam buffer */ + struct pmcraid_cmd *cmd; /* pointer to 
command block used */ + dma_addr_t baddr; /* system address of hcam buffer */ + atomic_t ignore; /* process HCAM response? */ +}; + +#define PMCRAID_AEN_HDR_SIZE sizeof(struct pmcraid_aen_msg) + + + +/* + * Per adapter structure maintained by LLD + */ +struct pmcraid_instance { + /* Array of allowed-to-be-exposed resources, initialized from + * Configuration Table, later updated with CCNs + */ + struct pmcraid_resource_entry *res_entries; + + struct list_head free_res_q; /* res_entries lists for easy lookup */ + struct list_head used_res_q; /* List of resources to be exposed */ + spinlock_t resource_lock; /* spinlock to protect resource list */ + + void __iomem *mapped_dma_addr; + void __iomem *ioa_status; /* Iomapped IOA status register */ + void __iomem *mailbox; /* Iomapped mailbox register */ + void __iomem *ioarrin; /* Iomapped IOARRIN register */ + + struct pmcraid_interrupts int_regs; + struct pmcraid_chip_details *chip_cfg; + + /* HostRCBs needed for HCAM */ + struct pmcraid_hostrcb ldn; + struct pmcraid_hostrcb ccn; + + + /* Bus address of start of HRRQ */ + dma_addr_t hrrq_start_bus_addr[PMCRAID_NUM_MSIX_VECTORS]; + + /* Pointer to 1st entry of HRRQ */ + __be32 *hrrq_start[PMCRAID_NUM_MSIX_VECTORS]; + + /* Pointer to last entry of HRRQ */ + __be32 *hrrq_end[PMCRAID_NUM_MSIX_VECTORS]; + + /* Pointer to current entry of HRRQ */ + __be32 *hrrq_curr[PMCRAID_NUM_MSIX_VECTORS]; + + /* Lock for HRRQ access */ + spinlock_t hrrq_lock[PMCRAID_NUM_MSIX_VECTORS]; + + /* Expected toggle bit at host */ + u8 host_toggle_bit[PMCRAID_NUM_MSIX_VECTORS]; + + /* No. of Reset IOA retries; IOA is marked dead if threshold is exceeded */ + u8 ioa_reset_attempts; +#define PMCRAID_RESET_ATTEMPTS 3 + + /* Wait Q for threads to wait for Reset IOA completion */ + wait_queue_head_t reset_wait_q; + struct pmcraid_cmd *reset_cmd; + + /* structures for supporting SIGIO based AEN.
*/ + struct fasync_struct *aen_queue; + struct mutex aen_queue_lock; /* lock for aen subscribers list */ + struct cdev cdev; + + struct Scsi_Host *host; /* mid layer interface structure handle */ + struct pci_dev *pdev; /* PCI device structure handle */ + + u8 current_log_level; /* default level for logging IOASC errors */ + + u8 num_hrrq; /* Number of interrupt vectors allocated */ + dev_t dev; /* Major-Minor numbers for Char device */ + + /* Used as ISR handler argument */ + struct pmcraid_isr_param hrrq_vector[PMCRAID_NUM_MSIX_VECTORS]; + + /* configuration table */ + struct pmcraid_config_table *cfg_table; + dma_addr_t cfg_table_bus_addr; + + /* structures related to command blocks */ + struct kmem_cache *cmd_cachep; /* cache for cmd blocks */ + struct pci_pool *control_pool; /* pool for control blocks */ + char cmd_pool_name[64]; /* name of cmd cache */ + char ctl_pool_name[64]; /* name of control cache */ + + struct pmcraid_cmd *cmd_list[PMCRAID_MAX_CMD]; + + struct list_head free_cmd_pool; + struct list_head pending_cmd_pool; + spinlock_t free_pool_lock; /* free pool lock */ + spinlock_t pending_pool_lock; /* pending pool lock */ + + /* No of IO commands pending with FW */ + atomic_t outstanding_cmds; + + /* should add/delete resources to mid-layer now ?*/ + atomic_t expose_resources; + + /* Tasklet to handle deferred processing */ + struct tasklet_struct isr_tasklet[PMCRAID_NUM_MSIX_VECTORS]; + + /* Work-queue (Shared) for deferred reset processing */ + struct work_struct worker_q; + + + u32 ioa_state:4; /* For IOA Reset sequence FSM */ +#define IOA_STATE_OPERATIONAL 0x0 +#define IOA_STATE_UNKNOWN 0x1 +#define IOA_STATE_DEAD 0x2 +#define IOA_STATE_IN_SOFT_RESET 0x3 +#define IOA_STATE_IN_HARD_RESET 0x4 +#define IOA_STATE_IN_RESET_ALERT 0x5 +#define IOA_STATE_IN_BRINGDOWN 0x6 +#define IOA_STATE_IN_BRINGUP 0x7 + + u32 ioa_reset_in_progress:1; /* true if IOA reset is in progress */ + u32 ioa_hard_reset:1; /* TRUE if Hard Reset is needed */ + u32 ioa_unit_check:1; /* Indicates Unit Check condition */ + u32 ioa_bringdown:1; /* whether IOA needs to be brought down */ + u32 force_ioa_reset:1; /* force adapter reset ? */ + u32 reinit_cfg_table:1; /* reinit config table due to lost CCN */ + u32 ioa_shutdown_type:2;/* shutdown type used during reset */ +#define SHUTDOWN_NONE 0x0 +#define SHUTDOWN_NORMAL 0x1 +#define SHUTDOWN_ABBREV 0x2 + +}; + +/* LLD maintained resource entry structure */ +struct pmcraid_resource_entry { + struct list_head queue; /* link to "to be exposed" resources */ + struct pmcraid_config_table_entry cfg_entry; + struct scsi_device *scsi_dev; /* Link scsi_device structure */ + atomic_t read_failures; /* count of failed READ commands */ + atomic_t write_failures; /* count of failed WRITE commands */ + + /* To indicate add/delete/modify during CCN */ + u8 change_detected; +#define RES_CHANGE_ADD 0x1 /* add this to mid-layer */ +#define RES_CHANGE_DEL 0x2 /* remove this from mid-layer */ + + u8 reset_progress; /* Device is resetting */ + + /* + * When IOA asks for sync (i.e. IOASC = Not Ready, Sync Required), this + * flag will be set, mid layer will be asked to retry. In the next + * attempt, this flag will be checked in queuecommand() to set + * SYNC_COMPLETE flag in IOARCB (flag_0). + */ + u8 sync_reqd; + + /* target indicates the mapped target_id assigned to this resource if + * this is VSET resource. 
For non-VSET resources this will be unused + * or zero + */ + u8 target; +}; + +/* Data structures used in IOASC error code logging */ +struct pmcraid_ioasc_error { + u32 ioasc_code; /* IOASC code */ + u8 log_level; /* default log level assignment. */ + char *error_string; +}; + +/* Initial log_level assignments for various IOASCs */ +#define IOASC_LOG_LEVEL_NONE 0x0 /* no logging */ +#define IOASC_LOG_LEVEL_MUST 0x1 /* must log: all high-severity errors */ +#define IOASC_LOG_LEVEL_HARD 0x2 /* optional - low severity errors */ + +/* Error information maintained by LLD. LLD initializes the pmcraid_error_table + * statically. + */ +static struct pmcraid_ioasc_error pmcraid_ioasc_error_table[] = { + {0x01180600, IOASC_LOG_LEVEL_MUST, + "Recovered Error, soft media error, sector reassignment suggested"}, + {0x015D0000, IOASC_LOG_LEVEL_MUST, + "Recovered Error, failure prediction threshold exceeded"}, + {0x015D9200, IOASC_LOG_LEVEL_MUST, + "Recovered Error, soft Cache Card Battery error threshold"}, + {0x015D9200, IOASC_LOG_LEVEL_MUST, + "Recovered Error, soft Cache Card Battery error threshold"}, + {0x02048000, IOASC_LOG_LEVEL_MUST, + "Not Ready, IOA Reset Required"}, + {0x02408500, IOASC_LOG_LEVEL_MUST, + "Not Ready, IOA microcode download required"}, + {0x03110B00, IOASC_LOG_LEVEL_MUST, + "Medium Error, data unreadable, reassignment suggested"}, + {0x03110C00, IOASC_LOG_LEVEL_MUST, + "Medium Error, data unreadable, do not reassign"}, + {0x03310000, IOASC_LOG_LEVEL_MUST, + "Medium Error, media corrupted"}, + {0x04050000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, IOA can't communicate with device"}, + {0x04080000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, device bus error"}, + {0x04080000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, device bus is not functioning"}, + {0x04118000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, IOA reserved area data check"}, + {0x04118100, IOASC_LOG_LEVEL_MUST, + "Hardware Error, IOA reserved area invalid data pattern"}, + {0x04118200, IOASC_LOG_LEVEL_MUST, + "Hardware Error, IOA reserved area LRC error"}, + {0x04320000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, reassignment space exhausted"}, + {0x04330000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, data transfer underlength error"}, + {0x04330000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, data transfer overlength error"}, + {0x04418000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, PCI bus error"}, + {0x04440000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, device error"}, + {0x04448300, IOASC_LOG_LEVEL_MUST, + "Hardware Error, undefined device response"}, + {0x04448400, IOASC_LOG_LEVEL_MUST, + "Hardware Error, IOA microcode error"}, + {0x04448600, IOASC_LOG_LEVEL_MUST, + "Hardware Error, IOA reset required"}, + {0x04449200, IOASC_LOG_LEVEL_MUST, + "Hardware Error, hard Cache Feature Card Battery error"}, + {0x0444A000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, failed device altered"}, + {0x0444A200, IOASC_LOG_LEVEL_MUST, + "Hardware Error, data check after reassignment"}, + {0x0444A300, IOASC_LOG_LEVEL_MUST, + "Hardware Error, LRC error after reassignment"}, + {0x044A0000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, device bus error (msg/cmd phase)"}, + {0x04670400, IOASC_LOG_LEVEL_MUST, + "Hardware Error, new device can't be used"}, + {0x04678000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, invalid multiadapter configuration"}, + {0x04678100, IOASC_LOG_LEVEL_MUST, + "Hardware Error, incorrect connection between enclosures"}, + {0x04678200, IOASC_LOG_LEVEL_MUST, + "Hardware Error, connections exceed IOA design limits"}, + {0x04678300,
IOASC_LOG_LEVEL_MUST, + "Hardware Error, incorrect multipath connection"}, + {0x04679000, IOASC_LOG_LEVEL_MUST, + "Hardware Error, command to LUN failed"}, + {0x064C8000, IOASC_LOG_LEVEL_HARD, + "Unit Attention, cache exists for missing/failed device"}, + {0x06670100, IOASC_LOG_LEVEL_HARD, + "Unit Attention, incompatible exposed mode device"}, + {0x06670600, IOASC_LOG_LEVEL_HARD, + "Unit Attention, attachment of logical unit failed"}, + {0x06678000, IOASC_LOG_LEVEL_MUST, + "Unit Attention, cables exceed connective design limit"}, + {0x06678300, IOASC_LOG_LEVEL_MUST, + "Unit Attention, incomplete multipath connection between " \ + "IOA and enclosure"}, + {0x06678400, IOASC_LOG_LEVEL_MUST, + "Unit Attention, incomplete multipath connection between " \ + "device and enclosure"}, + {0x06678500, IOASC_LOG_LEVEL_MUST, + "Unit Attention, incomplete multipath connection between " \ + "IOA and remote IOA"}, + {0x06678600, IOASC_LOG_LEVEL_HARD, + "Unit Attention, missing remote IOA"}, + {0x06679100, IOASC_LOG_LEVEL_HARD, + "Unit Attention, enclosure doesn't support required multipath " \ + "function"}, + {0x06698200, IOASC_LOG_LEVEL_HARD, + "Unit Attention, corrupt array parity detected on device"}, + {0x066B0200, IOASC_LOG_LEVEL_MUST, + "Unit Attention, array exposed"}, + {0x066B8200, IOASC_LOG_LEVEL_HARD, + "Unit Attention, exposed array is still protected"}, + {0x066B9200, IOASC_LOG_LEVEL_MUST, + "Unit Attention, Multipath redundancy level got worse"}, + {0x07270000, IOASC_LOG_LEVEL_HARD, + "Data Protect, device is read/write protected by IOA"}, + {0x07278000, IOASC_LOG_LEVEL_HARD, + "Data Protect, IOA doesn't support device attribute"}, + {0x07278100, IOASC_LOG_LEVEL_HARD, + "Data Protect, NVRAM mirroring prohibited"}, + {0x07278400, IOASC_LOG_LEVEL_MUST, + "Data Protect, array is short 2 or more devices"}, + {0x07278600, IOASC_LOG_LEVEL_MUST, + "Data Protect, exposed array is short a required device"}, + {0x07278700, IOASC_LOG_LEVEL_MUST, + "Data Protect, array members not at required addresses"}, + {0x07278800, IOASC_LOG_LEVEL_MUST, + "Data Protect, exposed mode device resource address conflict"}, + {0x07278900, IOASC_LOG_LEVEL_MUST, + "Data Protect, incorrect resource address of exposed mode device"}, + {0x07278A00, IOASC_LOG_LEVEL_MUST, + "Data Protect, Array is missing a device and parity is out of sync"}, + {0x07278B00, IOASC_LOG_LEVEL_MUST, + "Data Protect, maximum number of arrays already exist"}, + {0x07278C00, IOASC_LOG_LEVEL_HARD, + "Data Protect, cannot locate cache data for device"}, + {0x07278D00, IOASC_LOG_LEVEL_HARD, + "Data Protect, cache data exists for a changed device"}, + {0x07279100, IOASC_LOG_LEVEL_MUST, + "Data Protect, detection of a device requiring format"}, + {0x07279200, IOASC_LOG_LEVEL_MUST, + "Data Protect, IOA exceeds maximum number of devices"}, + {0x07279600, IOASC_LOG_LEVEL_MUST, + "Data Protect, missing array, volume set is not functional"}, + {0x07279700, IOASC_LOG_LEVEL_MUST, + "Data Protect, single device for a volume set"}, + {0x07279800, IOASC_LOG_LEVEL_MUST, + "Data Protect, missing multiple devices for a volume set"}, + {0x07279900, IOASC_LOG_LEVEL_HARD, + "Data Protect, maximum number of volume sets already exists"}, + {0x07279A00, IOASC_LOG_LEVEL_MUST, + "Data Protect, other volume set problem"}, +}; + +/* macros to help in debugging */ +#define pmcraid_err(...) \ + printk(KERN_ERR "MaxRAID: "__VA_ARGS__) + +#define pmcraid_info(...)
\ + if (pmcraid_debug_log) \ + printk(KERN_INFO "MaxRAID: "__VA_ARGS__) + +/* check if given command is a SCSI READ or SCSI WRITE command */ +#define SCSI_READ_CMD 0x1 /* any of SCSI READ commands */ +#define SCSI_WRITE_CMD 0x2 /* any of SCSI WRITE commands */ +#define SCSI_CMD_TYPE(opcode) \ +({ u8 op = opcode; u8 __type = 0;\ + if (op == READ_6 || op == READ_10 || op == READ_12 || op == READ_16)\ + __type = SCSI_READ_CMD;\ + else if (op == WRITE_6 || op == WRITE_10 || op == WRITE_12 || \ + op == WRITE_16)\ + __type = SCSI_WRITE_CMD;\ + __type;\ +}) + +#define IS_SCSI_READ_WRITE(opcode) \ +({ u8 __type = SCSI_CMD_TYPE(opcode); \ + (__type == SCSI_READ_CMD || __type == SCSI_WRITE_CMD) ? 1 : 0;\ +}) + + +/* + * pmcraid_ioctl_header - definition of header structure that precedes all the + * buffers given as ioctl arguments. + * + * .signature : always ASCII string, "PMCRAID" + * .reserved : not used + * .buffer_length : length of the buffer following the header + */ +struct pmcraid_ioctl_header { + u8 signature[8]; + u32 reserved; + u32 buffer_length; +}; + +#define PMCRAID_IOCTL_SIGNATURE "PMCRAID" + + +/* + * pmcraid_event_details - defines AEN details that apps can retrieve from LLD + * + * .rcb_ccn - complete RCB of CCN + * .rcb_ldn - complete RCB of LDN + */ +struct pmcraid_event_details { + struct pmcraid_hcam_ccn rcb_ccn; + struct pmcraid_hcam_ldn rcb_ldn; +}; + +/* + * pmcraid_driver_ioctl_buffer - structure passed as argument to most of the + * PMC driver handled ioctls. + */ +struct pmcraid_driver_ioctl_buffer { + struct pmcraid_ioctl_header ioctl_header; + struct pmcraid_event_details event_details; +}; + +/* + * pmcraid_passthrough_ioctl_buffer - structure given as argument to + * passthrough (or firmware handled) IOCTL commands. Note that ioarcb requires + * 32-byte alignment, so it is necessary to pack this structure to avoid any + * holes between ioctl_header and passthrough buffer + * + * .ioctl_header : ioctl header + * .ioarcb : filled-up ioarcb buffer, driver always reads this buffer + * .ioasa : buffer for ioasa, driver fills this with IOASA from firmware + * .request_buffer: The I/O buffer (flat), driver reads/writes to this based on + * the transfer directions passed in ioarcb.flags0. Contents + * of this buffer are valid only when ioarcb.data_transfer_len + * is not zero. + */ +struct pmcraid_passthrough_ioctl_buffer { + struct pmcraid_ioctl_header ioctl_header; + struct pmcraid_ioarcb ioarcb; + struct pmcraid_ioasa ioasa; + u8 request_buffer[1]; +} __attribute__ ((packed)); + +/* + * keys to differentiate between driver handled IOCTLs and passthrough + * IOCTLs passed to IOA. Driver determines the ioctl type using macro + * _IOC_TYPE + */ +#define PMCRAID_DRIVER_IOCTL 'D' +#define PMCRAID_PASSTHROUGH_IOCTL 'F' + +#define DRV_IOCTL(n, size) \ + _IOC(_IOC_READ|_IOC_WRITE, PMCRAID_DRIVER_IOCTL, (n), (size)) + +#define FMW_IOCTL(n, size) \ + _IOC(_IOC_READ|_IOC_WRITE, PMCRAID_PASSTHROUGH_IOCTL, (n), (size)) + +/* + * _ARGSIZE: macro that gives size of the argument type passed to an IOCTL cmd. + * This is to facilitate applications avoiding unnecessary memory allocations. + * For example, most of the driver-handled ioctls do not require ioarcb or ioasa.
 + */ +#define _ARGSIZE(arg) (sizeof(struct pmcraid_ioctl_header) + sizeof(arg)) + +/* Driver handled IOCTL command definitions */ + +#define PMCRAID_IOCTL_RESET_ADAPTER \ + DRV_IOCTL(5, sizeof(struct pmcraid_ioctl_header)) + +/* passthrough/firmware handled commands */ +#define PMCRAID_IOCTL_PASSTHROUGH_COMMAND \ + FMW_IOCTL(1, sizeof(struct pmcraid_passthrough_ioctl_buffer)) + +#define PMCRAID_IOCTL_DOWNLOAD_MICROCODE \ + FMW_IOCTL(2, sizeof(struct pmcraid_passthrough_ioctl_buffer)) + + +#endif /* _PMCRAID_H */ -- cgit v1.2.3 From 983f2163e7fdf11a15e05816de243f93f07eafca Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Tue, 15 Sep 2009 12:29:20 +0200 Subject: MAINTAINERS: Update tracing tree details Acked-by: Steven Rostedt Acked-by: Frederic Weisbecker Signed-off-by: Ingo Molnar --- MAINTAINERS | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8dca9d89c6c1..1505129ec5a0 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2118,13 +2118,16 @@ F: Documentation/filesystems/caching/ F: fs/fscache/ F: include/linux/fscache*.h -FTRACE +TRACING M: Steven Rostedt +M: Frederic Weisbecker +M: Ingo Molnar +T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git tracing/core S: Maintained F: Documentation/trace/ftrace.txt F: arch/*/*/*/ftrace.h F: arch/*/kernel/ftrace.c -F: include/*/ftrace.h +F: include/*/ftrace.h include/trace/ include/linux/trace*.h F: kernel/trace/ FUJITSU FR-V (FRV) PORT -- cgit v1.2.3 From 2c2a6172afab7421f6af7e228cd3121f423ea932 Mon Sep 17 00:00:00 2001 From: Luca Tettamanti Date: Tue, 15 Sep 2009 17:18:10 +0200 Subject: hwmon: (asus_atk0110) Add maintainer information Add myself as asus_atk0110 maintainer. Signed-off-by: Luca Tettamanti Signed-off-by: Jean Delvare --- MAINTAINERS | 6 ++++++ 1 file changed, 6 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 837b5985ac40..6dedc7f9ffea 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -931,6 +931,12 @@ W: http://wireless.kernel.org/en/users/Drivers/ar9170 S: Maintained F: drivers/net/wireless/ath/ar9170/ +ATK0110 HWMON DRIVER +M: Luca Tettamanti +L: lm-sensors@lm-sensors.org +S: Maintained +F: drivers/hwmon/asus_atk0110.c + ATI_REMOTE2 DRIVER M: Ville Syrjala S: Maintained -- cgit v1.2.3 From 2f82af08fcc7dc01a7e98a49a5995a77e32a2925 Mon Sep 17 00:00:00 2001 From: Nicolas Pitre Date: Mon, 14 Sep 2009 03:25:28 -0400 Subject: Nicolas Pitre has a new email address Due to problems at cam.org, my nico@cam.org email address is no longer valid. From now on, nico@fluxnic.net should be used instead.
Signed-off-by: Nicolas Pitre Signed-off-by: Linus Torvalds --- CREDITS | 2 +- Documentation/arm/SA1100/ADSBitsy | 2 +- Documentation/arm/SA1100/Assabet | 2 +- Documentation/arm/SA1100/Brutus | 2 +- Documentation/arm/SA1100/GraphicsClient | 4 ++-- Documentation/arm/SA1100/GraphicsMaster | 4 ++-- Documentation/arm/SA1100/Victor | 2 +- MAINTAINERS | 4 ++-- arch/arm/boot/compressed/head-sa1100.S | 2 +- arch/arm/lib/lib1funcs.S | 2 +- arch/arm/lib/sha1.S | 2 +- arch/arm/mach-sa1100/include/mach/assabet.h | 2 +- arch/arm/mach-sa1100/include/mach/hardware.h | 2 +- arch/arm/mach-sa1100/include/mach/memory.h | 2 +- arch/arm/mach-sa1100/include/mach/neponset.h | 2 +- arch/arm/mach-sa1100/include/mach/system.h | 2 +- arch/arm/mach-sa1100/include/mach/uncompress.h | 2 +- arch/arm/mach-sa1100/pm.c | 2 +- arch/arm/mach-sa1100/time.c | 2 +- arch/arm/mm/proc-xscale.S | 2 +- arch/arm/plat-iop/setup.c | 2 +- arch/arm/plat-omap/include/mach/system.h | 2 +- drivers/input/keyboard/pxa27x_keypad.c | 2 +- drivers/mtd/chips/cfi_cmdset_0001.c | 2 +- drivers/mtd/chips/cfi_cmdset_0020.c | 2 +- drivers/mtd/maps/bfin-async-flash.c | 2 +- drivers/mtd/maps/ceiva.c | 2 +- drivers/mtd/maps/dc21285.c | 4 ++-- drivers/mtd/maps/ipaq-flash.c | 2 +- drivers/mtd/maps/pxa2xx-flash.c | 2 +- drivers/mtd/maps/sa1100-flash.c | 2 +- drivers/mtd/mtdblock.c | 4 ++-- drivers/mtd/mtdpart.c | 2 +- drivers/net/smc91x.c | 4 ++-- drivers/net/smc91x.h | 2 +- drivers/rtc/rtc-sa1100.c | 2 +- drivers/video/sa1100fb.c | 2 +- include/linux/mtd/partitions.h | 2 +- lib/inflate.c | 2 +- 39 files changed, 45 insertions(+), 45 deletions(-) (limited to 'MAINTAINERS') diff --git a/CREDITS b/CREDITS index 1a41bf4addd0..72b487869788 100644 --- a/CREDITS +++ b/CREDITS @@ -2800,7 +2800,7 @@ D: Starter of Linux1394 effort S: ask per mail for current address N: Nicolas Pitre -E: nico@cam.org +E: nico@fluxnic.net D: StrongARM SA1100 support integrator & hacker D: Xscale PXA architecture D: unified SMC 91C9x/91C11x ethernet driver (smc91x) diff --git a/Documentation/arm/SA1100/ADSBitsy b/Documentation/arm/SA1100/ADSBitsy index ab47c3833908..7197a9e958ee 100644 --- a/Documentation/arm/SA1100/ADSBitsy +++ b/Documentation/arm/SA1100/ADSBitsy @@ -40,4 +40,4 @@ Notes: mode, the timing is off so the image is corrupted. This will be fixed soon. -Any contribution can be sent to nico@cam.org and will be greatly welcome! +Any contribution can be sent to nico@fluxnic.net and will be greatly welcome! diff --git a/Documentation/arm/SA1100/Assabet b/Documentation/arm/SA1100/Assabet index 78bc1c1b04e5..91f7ce7ba426 100644 --- a/Documentation/arm/SA1100/Assabet +++ b/Documentation/arm/SA1100/Assabet @@ -240,7 +240,7 @@ Then, rebooting the Assabet is just a matter of waiting for the login prompt. Nicolas Pitre -nico@cam.org +nico@fluxnic.net June 12, 2001 diff --git a/Documentation/arm/SA1100/Brutus b/Documentation/arm/SA1100/Brutus index 2254c8f0b326..b1cfd405dccc 100644 --- a/Documentation/arm/SA1100/Brutus +++ b/Documentation/arm/SA1100/Brutus @@ -60,7 +60,7 @@ little modifications. Any contribution is welcome. -Please send patches to nico@cam.org +Please send patches to nico@fluxnic.net Have Fun ! 
diff --git a/Documentation/arm/SA1100/GraphicsClient b/Documentation/arm/SA1100/GraphicsClient index 8fa7e8027ff1..6c9c4f5a36e1 100644 --- a/Documentation/arm/SA1100/GraphicsClient +++ b/Documentation/arm/SA1100/GraphicsClient @@ -4,7 +4,7 @@ For more details, contact Applied Data Systems or see http://www.applieddata.net/products.html The original Linux support for this product has been provided by -Nicolas Pitre . Continued development work by +Nicolas Pitre . Continued development work by Woojung Huh It's currently possible to mount a root filesystem via NFS providing a @@ -94,5 +94,5 @@ Notes: mode, the timing is off so the image is corrupted. This will be fixed soon. -Any contribution can be sent to nico@cam.org and will be greatly welcome! +Any contribution can be sent to nico@fluxnic.net and will be greatly welcome! diff --git a/Documentation/arm/SA1100/GraphicsMaster b/Documentation/arm/SA1100/GraphicsMaster index dd28745ac521..ee7c6595f23f 100644 --- a/Documentation/arm/SA1100/GraphicsMaster +++ b/Documentation/arm/SA1100/GraphicsMaster @@ -4,7 +4,7 @@ For more details, contact Applied Data Systems or see http://www.applieddata.net/products.html The original Linux support for this product has been provided by -Nicolas Pitre . Continued development work by +Nicolas Pitre . Continued development work by Woojung Huh Use 'make graphicsmaster_config' before any 'make config'. @@ -50,4 +50,4 @@ Notes: mode, the timing is off so the image is corrupted. This will be fixed soon. -Any contribution can be sent to nico@cam.org and will be greatly welcome! +Any contribution can be sent to nico@fluxnic.net and will be greatly welcome! diff --git a/Documentation/arm/SA1100/Victor b/Documentation/arm/SA1100/Victor index 01e81fc49461..f938a29fdc20 100644 --- a/Documentation/arm/SA1100/Victor +++ b/Documentation/arm/SA1100/Victor @@ -9,7 +9,7 @@ Of course Victor is using Linux as its main operating system. The Victor implementation for Linux is maintained by Nicolas Pitre: nico@visuaide.com - nico@cam.org + nico@fluxnic.net For any comments, please feel free to contact me through the above addresses. diff --git a/MAINTAINERS b/MAINTAINERS index 837b5985ac40..64b9e447545c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3317,7 +3317,7 @@ S: Supported F: drivers/net/wireless/mwl8k.c MARVELL SOC MMC/SD/SDIO CONTROLLER DRIVER -M: Nicolas Pitre +M: Nicolas Pitre S: Maintained MARVELL YUKON / SYSKONNECT DRIVER @@ -4689,7 +4689,7 @@ F: include/linux/sl?b*.h F: mm/sl?b.c SMC91x ETHERNET DRIVER -M: Nicolas Pitre +M: Nicolas Pitre S: Maintained F: drivers/net/smc91x.* diff --git a/arch/arm/boot/compressed/head-sa1100.S b/arch/arm/boot/compressed/head-sa1100.S index 4c8c0e46027d..6179d94dd5c6 100644 --- a/arch/arm/boot/compressed/head-sa1100.S +++ b/arch/arm/boot/compressed/head-sa1100.S @@ -1,7 +1,7 @@ /* * linux/arch/arm/boot/compressed/head-sa1100.S * - * Copyright (C) 1999 Nicolas Pitre + * Copyright (C) 1999 Nicolas Pitre * * SA1100 specific tweaks. This is merged into head.S by the linker. 
* diff --git a/arch/arm/lib/lib1funcs.S b/arch/arm/lib/lib1funcs.S index 67964bcfc854..6dc06487f3c3 100644 --- a/arch/arm/lib/lib1funcs.S +++ b/arch/arm/lib/lib1funcs.S @@ -1,7 +1,7 @@ /* * linux/arch/arm/lib/lib1funcs.S: Optimized ARM division routines * - * Author: Nicolas Pitre + * Author: Nicolas Pitre * - contributed to gcc-3.4 on Sep 30, 2003 * - adapted for the Linux kernel on Oct 2, 2003 */ diff --git a/arch/arm/lib/sha1.S b/arch/arm/lib/sha1.S index 09b548cac1a4..eb0edb80d7b8 100644 --- a/arch/arm/lib/sha1.S +++ b/arch/arm/lib/sha1.S @@ -3,7 +3,7 @@ * * SHA transform optimized for ARM * - * Copyright: (C) 2005 by Nicolas Pitre + * Copyright: (C) 2005 by Nicolas Pitre * Created: September 17, 2005 * * This program is free software; you can redistribute it and/or modify diff --git a/arch/arm/mach-sa1100/include/mach/assabet.h b/arch/arm/mach-sa1100/include/mach/assabet.h index 3959b20d5d1c..28c2cf50c259 100644 --- a/arch/arm/mach-sa1100/include/mach/assabet.h +++ b/arch/arm/mach-sa1100/include/mach/assabet.h @@ -1,7 +1,7 @@ /* * arch/arm/mach-sa1100/include/mach/assabet.h * - * Created 2000/06/05 by Nicolas Pitre + * Created 2000/06/05 by Nicolas Pitre * * This file contains the hardware specific definitions for Assabet * Only include this file from SA1100-specific files. diff --git a/arch/arm/mach-sa1100/include/mach/hardware.h b/arch/arm/mach-sa1100/include/mach/hardware.h index 60711822b125..99f5856d8de4 100644 --- a/arch/arm/mach-sa1100/include/mach/hardware.h +++ b/arch/arm/mach-sa1100/include/mach/hardware.h @@ -1,7 +1,7 @@ /* * arch/arm/mach-sa1100/include/mach/hardware.h * - * Copyright (C) 1998 Nicolas Pitre + * Copyright (C) 1998 Nicolas Pitre * * This file contains the hardware definitions for SA1100 architecture * diff --git a/arch/arm/mach-sa1100/include/mach/memory.h b/arch/arm/mach-sa1100/include/mach/memory.h index e9f8eed900f5..d5277f9bee77 100644 --- a/arch/arm/mach-sa1100/include/mach/memory.h +++ b/arch/arm/mach-sa1100/include/mach/memory.h @@ -1,7 +1,7 @@ /* * arch/arm/mach-sa1100/include/mach/memory.h * - * Copyright (C) 1999-2000 Nicolas Pitre + * Copyright (C) 1999-2000 Nicolas Pitre */ #ifndef __ASM_ARCH_MEMORY_H diff --git a/arch/arm/mach-sa1100/include/mach/neponset.h b/arch/arm/mach-sa1100/include/mach/neponset.h index d3f044f92c00..ffe2bc45eed0 100644 --- a/arch/arm/mach-sa1100/include/mach/neponset.h +++ b/arch/arm/mach-sa1100/include/mach/neponset.h @@ -1,7 +1,7 @@ /* * arch/arm/mach-sa1100/include/mach/neponset.h * - * Created 2000/06/05 by Nicolas Pitre + * Created 2000/06/05 by Nicolas Pitre * * This file contains the hardware specific definitions for Assabet * Only include this file from SA1100-specific files. diff --git a/arch/arm/mach-sa1100/include/mach/system.h b/arch/arm/mach-sa1100/include/mach/system.h index 942b153e251d..ba9da9f7f183 100644 --- a/arch/arm/mach-sa1100/include/mach/system.h +++ b/arch/arm/mach-sa1100/include/mach/system.h @@ -1,7 +1,7 @@ /* * arch/arm/mach-sa1100/include/mach/system.h * - * Copyright (c) 1999 Nicolas Pitre + * Copyright (c) 1999 Nicolas Pitre */ #include diff --git a/arch/arm/mach-sa1100/include/mach/uncompress.h b/arch/arm/mach-sa1100/include/mach/uncompress.h index 714160b03d7a..6cb39ddde656 100644 --- a/arch/arm/mach-sa1100/include/mach/uncompress.h +++ b/arch/arm/mach-sa1100/include/mach/uncompress.h @@ -1,7 +1,7 @@ /* * arch/arm/mach-sa1100/include/mach/uncompress.h * - * (C) 1999 Nicolas Pitre + * (C) 1999 Nicolas Pitre * * Reorganised to be machine independent. 
*/ diff --git a/arch/arm/mach-sa1100/pm.c b/arch/arm/mach-sa1100/pm.c index 111cce67ad2f..c83fdc80edfd 100644 --- a/arch/arm/mach-sa1100/pm.c +++ b/arch/arm/mach-sa1100/pm.c @@ -15,7 +15,7 @@ * Save more value for the resume function! Support * Bitsy/Assabet/Freebird board * - * 2001-08-29: Nicolas Pitre + * 2001-08-29: Nicolas Pitre * Cleaned up, pushed platform dependent stuff * in the platform specific files. * diff --git a/arch/arm/mach-sa1100/time.c b/arch/arm/mach-sa1100/time.c index 711c0295c66f..95d92e8e56a8 100644 --- a/arch/arm/mach-sa1100/time.c +++ b/arch/arm/mach-sa1100/time.c @@ -4,7 +4,7 @@ * Copyright (C) 1998 Deborah Wallach. * Twiddles (C) 1999 Hugo Fiennes * - * 2000/03/29 (C) Nicolas Pitre + * 2000/03/29 (C) Nicolas Pitre * Rewritten: big cleanup, much simpler, better HZ accuracy. * */ diff --git a/arch/arm/mm/proc-xscale.S b/arch/arm/mm/proc-xscale.S index 0cce37b93937..423394260bcb 100644 --- a/arch/arm/mm/proc-xscale.S +++ b/arch/arm/mm/proc-xscale.S @@ -17,7 +17,7 @@ * * 2001 Sep 08: * Completely revisited, many important fixes - * Nicolas Pitre + * Nicolas Pitre */ #include diff --git a/arch/arm/plat-iop/setup.c b/arch/arm/plat-iop/setup.c index 9e573e78176a..bade586fed0f 100644 --- a/arch/arm/plat-iop/setup.c +++ b/arch/arm/plat-iop/setup.c @@ -1,7 +1,7 @@ /* * arch/arm/plat-iop/setup.c * - * Author: Nicolas Pitre + * Author: Nicolas Pitre * Copyright (C) 2001 MontaVista Software, Inc. * Copyright (C) 2004 Intel Corporation. * diff --git a/arch/arm/plat-omap/include/mach/system.h b/arch/arm/plat-omap/include/mach/system.h index 1060e345423b..ed8ec7477261 100644 --- a/arch/arm/plat-omap/include/mach/system.h +++ b/arch/arm/plat-omap/include/mach/system.h @@ -1,6 +1,6 @@ /* * Copied from arch/arm/mach-sa1100/include/mach/system.h - * Copyright (c) 1999 Nicolas Pitre + * Copyright (c) 1999 Nicolas Pitre */ #ifndef __ASM_ARCH_SYSTEM_H #define __ASM_ARCH_SYSTEM_H diff --git a/drivers/input/keyboard/pxa27x_keypad.c b/drivers/input/keyboard/pxa27x_keypad.c index 76f9668221a4..79cd3e9fdf2e 100644 --- a/drivers/input/keyboard/pxa27x_keypad.c +++ b/drivers/input/keyboard/pxa27x_keypad.c @@ -8,7 +8,7 @@ * * Based on a previous implementations by Kevin O'Connor * and Alex Osborne and - * on some suggestions by Nicolas Pitre . + * on some suggestions by Nicolas Pitre . * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as diff --git a/drivers/mtd/chips/cfi_cmdset_0001.c b/drivers/mtd/chips/cfi_cmdset_0001.c index 8664feebc93b..e7563a9872d0 100644 --- a/drivers/mtd/chips/cfi_cmdset_0001.c +++ b/drivers/mtd/chips/cfi_cmdset_0001.c @@ -5,7 +5,7 @@ * (C) 2000 Red Hat. GPL'd * * - * 10/10/2000 Nicolas Pitre + * 10/10/2000 Nicolas Pitre * - completely revamped method functions so they are aware and * independent of the flash geometry (buswidth, interleave, etc.) * - scalability vs code size is completely set at compile-time diff --git a/drivers/mtd/chips/cfi_cmdset_0020.c b/drivers/mtd/chips/cfi_cmdset_0020.c index 6c740f346f91..0667a671525d 100644 --- a/drivers/mtd/chips/cfi_cmdset_0020.c +++ b/drivers/mtd/chips/cfi_cmdset_0020.c @@ -4,7 +4,7 @@ * * (C) 2000 Red Hat. GPL'd * - * 10/10/2000 Nicolas Pitre + * 10/10/2000 Nicolas Pitre * - completely revamped method functions so they are aware and * independent of the flash geometry (buswidth, interleave, etc.) 
* - scalability vs code size is completely set at compile-time diff --git a/drivers/mtd/maps/bfin-async-flash.c b/drivers/mtd/maps/bfin-async-flash.c index 365c77b1b871..a7c808b577d3 100644 --- a/drivers/mtd/maps/bfin-async-flash.c +++ b/drivers/mtd/maps/bfin-async-flash.c @@ -6,7 +6,7 @@ * for example. All board-specific configuration goes in your * board resources file. * - * Copyright 2000 Nicolas Pitre + * Copyright 2000 Nicolas Pitre * Copyright 2005-2008 Analog Devices Inc. * * Enter bugs at http://blackfin.uclinux.org/ diff --git a/drivers/mtd/maps/ceiva.c b/drivers/mtd/maps/ceiva.c index 60e68bde0fea..d41f34766e53 100644 --- a/drivers/mtd/maps/ceiva.c +++ b/drivers/mtd/maps/ceiva.c @@ -9,7 +9,7 @@ * Based on: sa1100-flash.c, which has the following copyright: * Flash memory access on SA11x0 based devices * - * (C) 2000 Nicolas Pitre + * (C) 2000 Nicolas Pitre * */ diff --git a/drivers/mtd/maps/dc21285.c b/drivers/mtd/maps/dc21285.c index 42969fe051b2..b3cb3a183809 100644 --- a/drivers/mtd/maps/dc21285.c +++ b/drivers/mtd/maps/dc21285.c @@ -1,7 +1,7 @@ /* * MTD map driver for flash on the DC21285 (the StrongARM-110 companion chip) * - * (C) 2000 Nicolas Pitre + * (C) 2000 Nicolas Pitre * * This code is GPL */ @@ -249,5 +249,5 @@ module_exit(cleanup_dc21285); MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Nicolas Pitre "); +MODULE_AUTHOR("Nicolas Pitre "); MODULE_DESCRIPTION("MTD map driver for DC21285 boards"); diff --git a/drivers/mtd/maps/ipaq-flash.c b/drivers/mtd/maps/ipaq-flash.c index 748c85f635f1..76708e796b70 100644 --- a/drivers/mtd/maps/ipaq-flash.c +++ b/drivers/mtd/maps/ipaq-flash.c @@ -1,7 +1,7 @@ /* * Flash memory access on iPAQ Handhelds (either SA1100 or PXA250 based) * - * (C) 2000 Nicolas Pitre + * (C) 2000 Nicolas Pitre * (C) 2002 Hewlett-Packard Company * (C) 2003 Christian Pellegrin , : concatenation of multiple flashes */ diff --git a/drivers/mtd/maps/pxa2xx-flash.c b/drivers/mtd/maps/pxa2xx-flash.c index 643aa06b599e..74fa075c838a 100644 --- a/drivers/mtd/maps/pxa2xx-flash.c +++ b/drivers/mtd/maps/pxa2xx-flash.c @@ -175,5 +175,5 @@ module_init(init_pxa2xx_flash); module_exit(cleanup_pxa2xx_flash); MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Nicolas Pitre "); +MODULE_AUTHOR("Nicolas Pitre "); MODULE_DESCRIPTION("MTD map driver for Intel XScale PXA2xx"); diff --git a/drivers/mtd/maps/sa1100-flash.c b/drivers/mtd/maps/sa1100-flash.c index c6210f5118d1..fdb97f3d30e9 100644 --- a/drivers/mtd/maps/sa1100-flash.c +++ b/drivers/mtd/maps/sa1100-flash.c @@ -1,7 +1,7 @@ /* * Flash memory access on SA11x0 based devices * - * (C) 2000 Nicolas Pitre + * (C) 2000 Nicolas Pitre */ #include #include diff --git a/drivers/mtd/mtdblock.c b/drivers/mtd/mtdblock.c index 77db5ce24d92..2d70295a5fa3 100644 --- a/drivers/mtd/mtdblock.c +++ b/drivers/mtd/mtdblock.c @@ -1,7 +1,7 @@ /* * Direct MTD block device access * - * (C) 2000-2003 Nicolas Pitre + * (C) 2000-2003 Nicolas Pitre * (C) 1999-2003 David Woodhouse */ @@ -403,5 +403,5 @@ module_exit(cleanup_mtdblock); MODULE_LICENSE("GPL"); -MODULE_AUTHOR("Nicolas Pitre et al."); +MODULE_AUTHOR("Nicolas Pitre et al."); MODULE_DESCRIPTION("Caching read/erase/writeback block device emulation access to MTD devices"); diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c index 349fcbe5cc0f..742504ea96f5 100644 --- a/drivers/mtd/mtdpart.c +++ b/drivers/mtd/mtdpart.c @@ -1,7 +1,7 @@ /* * Simple MTD partitioning layer * - * (C) 2000 Nicolas Pitre + * (C) 2000 Nicolas Pitre * * This code is GPL * diff --git a/drivers/net/smc91x.c b/drivers/net/smc91x.c 
index 61be6d7680f6..05c91ee6921e 100644 --- a/drivers/net/smc91x.c +++ b/drivers/net/smc91x.c @@ -35,7 +35,7 @@ * * contributors: * Daris A Nevil - * Nicolas Pitre + * Nicolas Pitre * Russell King * * History: @@ -58,7 +58,7 @@ * 22/09/04 Nicolas Pitre big update (see commit log for details) */ static const char version[] = - "smc91x.c: v1.1, sep 22 2004 by Nicolas Pitre \n"; + "smc91x.c: v1.1, sep 22 2004 by Nicolas Pitre \n"; /* Debugging level */ #ifndef SMC_DEBUG diff --git a/drivers/net/smc91x.h b/drivers/net/smc91x.h index 57a159fac99f..784b631cfa3c 100644 --- a/drivers/net/smc91x.h +++ b/drivers/net/smc91x.h @@ -28,7 +28,7 @@ . Authors . Erik Stahlman . Daris A Nevil - . Nicolas Pitre + . Nicolas Pitre . ---------------------------------------------------------------------------*/ #ifndef _SMC91X_H_ diff --git a/drivers/rtc/rtc-sa1100.c b/drivers/rtc/rtc-sa1100.c index 4f247e4dd3f9..021b2928f0b9 100644 --- a/drivers/rtc/rtc-sa1100.c +++ b/drivers/rtc/rtc-sa1100.c @@ -9,7 +9,7 @@ * * Modifications from: * CIH - * Nicolas Pitre + * Nicolas Pitre * Andrew Christian * * Converted to the RTC subsystem and Driver Model diff --git a/drivers/video/sa1100fb.c b/drivers/video/sa1100fb.c index 10ddad8e17d6..cdaa873a6054 100644 --- a/drivers/video/sa1100fb.c +++ b/drivers/video/sa1100fb.c @@ -66,7 +66,7 @@ * - FrameBuffer memory is now allocated at run-time when the * driver is initialized. * - * 2000/04/10: Nicolas Pitre + * 2000/04/10: Nicolas Pitre * - Big cleanup for dynamic selection of machine type at run time. * * 2000/07/19: Jamey Hicks diff --git a/include/linux/mtd/partitions.h b/include/linux/mtd/partitions.h index b70313d33ff8..274b6196091d 100644 --- a/include/linux/mtd/partitions.h +++ b/include/linux/mtd/partitions.h @@ -1,7 +1,7 @@ /* * MTD partitioning layer definitions * - * (C) 2000 Nicolas Pitre + * (C) 2000 Nicolas Pitre * * This code is GPL */ diff --git a/lib/inflate.c b/lib/inflate.c index 1a8e8a978128..d10255973a9f 100644 --- a/lib/inflate.c +++ b/lib/inflate.c @@ -7,7 +7,7 @@ * Adapted for booting Linux by Hannu Savolainen 1993 * based on gzip-1.0.3 * - * Nicolas Pitre , 1999/04/14 : + * Nicolas Pitre , 1999/04/14 : * Little mods for all variable to reside either into rodata or bss segments * by marking constant variables with 'const' and initializing all the others * at run-time only. This allows for the kernel uncompressor to run -- cgit v1.2.3 From ccb86a6907c9ba7b5be5f521362fc308e80bed34 Mon Sep 17 00:00:00 2001 From: "Michael S. Tsirkin" Date: Mon, 20 Jul 2009 10:29:34 +0300 Subject: uio: add generic driver for PCI 2.3 devices This adds a generic uio driver that can bind to any PCI device. First user will be virtualization where a qemu userspace process needs to give guest OS access to the device. Interrupts are handled using the Interrupt Disable bit in the PCI command register and Interrupt Status bit in the PCI status register. All devices compliant to PCI 2.3 (circa 2002) and all compliant PCI Express devices should support these bits. Driver detects this support, and won't bind to devices which do not support the Interrupt Disable Bit in the command register. It's expected that more features of interest to virtualization will be added to this driver in the future. Possibilities are: mmap for device resources, MSI/MSI-X, eventfd (to interface with kvm), iommu. Signed-off-by: Michael S. Tsirkin Acked-by: Chris Wright Signed-off-by: Hans J. 
Koch Acked-by: Jesse Barnes Signed-off-by: Greg Kroah-Hartman --- Documentation/DocBook/uio-howto.tmpl | 163 +++++++++++++++++++++++++++ MAINTAINERS | 7 ++ drivers/uio/Kconfig | 10 ++ drivers/uio/Makefile | 1 + drivers/uio/uio_pci_generic.c | 207 +++++++++++++++++++++++++++++++++++ include/linux/pci_regs.h | 1 + 6 files changed, 389 insertions(+) create mode 100644 drivers/uio/uio_pci_generic.c (limited to 'MAINTAINERS') diff --git a/Documentation/DocBook/uio-howto.tmpl b/Documentation/DocBook/uio-howto.tmpl index 8f6e3b2403c7..4d4ce0e61e42 100644 --- a/Documentation/DocBook/uio-howto.tmpl +++ b/Documentation/DocBook/uio-howto.tmpl @@ -25,6 +25,10 @@ 2006-2008 Hans-Jürgen Koch. + + 2009 + Red Hat Inc, Michael S. Tsirkin (mst@redhat.com) + @@ -41,6 +45,13 @@ GPL version 2. + + 0.9 + 2009-07-16 + mst + Added generic pci driver + + 0.8 2008-12-24 @@ -809,6 +820,158 @@ framework to set up sysfs files for this region. Simply leave it alone. + + +Generic PCI UIO driver + The generic driver is a kernel module named uio_pci_generic. + It can work with any device compliant to PCI 2.3 (circa 2002) and + any compliant PCI Express device. Using this, you only need to + write the userspace driver, removing the need to write + a hardware-specific kernel module. + + +Making the driver recognize the device + +Since the driver does not declare any device ids, it will not get loaded +automatically and will not automatically bind to any devices; you must load it +and allocate an id to the driver yourself. For example: + + modprobe uio_pci_generic + echo "8086 10f5" > /sys/bus/pci/drivers/uio_pci_generic/new_id + + + +If there already is a hardware specific kernel driver for your device, the +generic driver still won't bind to it. In this case, if you want to use the +generic driver (why would you?), you'll have to manually unbind the hardware +specific driver and bind the generic driver, like this: + + echo -n 0000:00:19.0 > /sys/bus/pci/drivers/e1000e/unbind + echo -n 0000:00:19.0 > /sys/bus/pci/drivers/uio_pci_generic/bind + + + +You can verify that the device has been bound to the driver +by looking for it in sysfs, for example like the following: + + ls -l /sys/bus/pci/devices/0000:00:19.0/driver + +Which, if successful, should print + + .../0000:00:19.0/driver -> ../../../bus/pci/drivers/uio_pci_generic + +Note that the generic driver will not bind to old PCI 2.2 devices. +If binding the device failed, run the following command: + + dmesg + +and look in the output for failure reasons. + + + + +Things to know about uio_pci_generic + +Interrupts are handled using the Interrupt Disable bit in the PCI command +register and Interrupt Status bit in the PCI status register. All devices +compliant to PCI 2.3 (circa 2002) and all compliant PCI Express devices should +support these bits. uio_pci_generic detects this support, and won't bind to +devices which do not support the Interrupt Disable Bit in the command register. + +On each interrupt, uio_pci_generic sets the Interrupt Disable bit. +This prevents the device from generating further interrupts +until the bit is cleared. The userspace driver should clear this +bit before blocking and waiting for more interrupts. + + +Writing userspace driver using uio_pci_generic + +The userspace driver can use the PCI sysfs interface, or the +libpci library that wraps it, to talk to the device and to +re-enable interrupts by writing to the command register.
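A minimal sketch of just that re-enable step may help before the full sample that follows. This is an illustrative sketch inferred from the sample program below, not part of the patch itself; the uio0 paths come from the documentation above, while the reenable_intx() helper name is an assumption made for this example:

	/* Sketch: re-enable INTx after servicing an interrupt by clearing the
	 * Interrupt Disable bit (PCI_COMMAND_INTX_DISABLE, bit 10 of the 16-bit
	 * command register, i.e. bit 2 of the byte at config-space offset 5).
	 * reenable_intx() is a hypothetical helper; assumes the device is uio0. */
	#include <fcntl.h>
	#include <unistd.h>

	static int reenable_intx(void)
	{
		unsigned char command_high;
		int fd = open("/sys/class/uio/uio0/device/config", O_RDWR);

		if (fd < 0)
			return -1;
		/* Read the high byte of the command register. */
		if (pread(fd, &command_high, 1, 5) != 1) {
			close(fd);
			return -1;
		}
		command_high &= ~0x04;	/* clear Interrupt Disable */
		if (pwrite(fd, &command_high, 1, 5) != 1) {
			close(fd);
			return -1;
		}
		close(fd);
		return 0;
	}

The complete example below performs the same config-space write inside its event loop, between servicing the interrupt and blocking in read() again.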
+ + + +Example code using uio_pci_generic + +Here is some sample userspace driver code using uio_pci_generic: + +#include <stdlib.h> +#include <stdio.h> +#include <unistd.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <errno.h> + +int main() +{ + int uiofd; + int configfd; + int err; + int i; + unsigned icount; + unsigned char command_high; + + uiofd = open("/dev/uio0", O_RDONLY); + if (uiofd < 0) { + perror("uio open:"); + return errno; + } + configfd = open("/sys/class/uio/uio0/device/config", O_RDWR); + if (configfd < 0) { + perror("config open:"); + return errno; + } + + /* Read and cache command value */ + err = pread(configfd, &command_high, 1, 5); + if (err != 1) { + perror("command config read:"); + return errno; + } + command_high &= ~0x4; + + for (i = 0;; ++i) { + /* Print out a message, for debugging. */ + if (i == 0) + fprintf(stderr, "Started uio test driver.\n"); + else + fprintf(stderr, "Interrupts: %d\n", icount); + + /****************************************/ + /* Here we got an interrupt from the + device. Do something to it. */ + /****************************************/ + + /* Re-enable interrupts. */ + err = pwrite(configfd, &command_high, 1, 5); + if (err != 1) { + perror("config write:"); + break; + } + + /* Wait for next interrupt. */ + err = read(uiofd, &icount, 4); + if (err != 4) { + perror("uio read:"); + break; + } + + } + return errno; +} + + + + + + + Further information diff --git a/MAINTAINERS b/MAINTAINERS index 837b5985ac40..01193a4fe30e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2218,6 +2218,13 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic.git S: Maintained F: include/asm-generic +GENERIC UIO DRIVER FOR PCI DEVICES +M: Michael S. Tsirkin +L: kvm@vger.kernel.org +L: linux-kernel@vger.kernel.org +S: Supported +F: drivers/uio/uio_pci_generic.c + GFS2 FILE SYSTEM M: Steven Whitehouse L: cluster-devel@redhat.com diff --git a/drivers/uio/Kconfig b/drivers/uio/Kconfig index 45200fd68534..8aa1955f35ed 100644 --- a/drivers/uio/Kconfig +++ b/drivers/uio/Kconfig @@ -84,4 +84,14 @@ config UIO_SERCOS3 If you compile this as a module, it will be called uio_sercos3. +config UIO_PCI_GENERIC + tristate "Generic driver for PCI 2.3 and PCI Express cards" + depends on PCI + default n + help + Generic driver that you can bind, dynamically, to any + PCI 2.3 compliant and PCI Express card. It is useful, + primarily, for virtualization scenarios. + If you compile this as a module, it will be called uio_pci_generic. + endif diff --git a/drivers/uio/Makefile b/drivers/uio/Makefile index 5c2586d75797..73b2e7516729 100644 --- a/drivers/uio/Makefile +++ b/drivers/uio/Makefile @@ -5,3 +5,4 @@ obj-$(CONFIG_UIO_PDRV_GENIRQ) += uio_pdrv_genirq.o obj-$(CONFIG_UIO_SMX) += uio_smx.o obj-$(CONFIG_UIO_AEC) += uio_aec.o obj-$(CONFIG_UIO_SERCOS3) += uio_sercos3.o +obj-$(CONFIG_UIO_PCI_GENERIC) += uio_pci_generic.o diff --git a/drivers/uio/uio_pci_generic.c b/drivers/uio/uio_pci_generic.c new file mode 100644 index 000000000000..313da35984af --- /dev/null +++ b/drivers/uio/uio_pci_generic.c @@ -0,0 +1,207 @@ +/* uio_pci_generic - generic UIO driver for PCI 2.3 devices + * + * Copyright (C) 2009 Red Hat, Inc. + * Author: Michael S. Tsirkin + * + * This work is licensed under the terms of the GNU GPL, version 2. + * + * Since the driver does not declare any device ids, you must allocate + * id and bind the device to the driver yourself.
For example: + * + * # echo "8086 10f5" > /sys/bus/pci/drivers/uio_pci_generic/new_id + * # echo -n 0000:00:19.0 > /sys/bus/pci/drivers/e1000e/unbind + * # echo -n 0000:00:19.0 > /sys/bus/pci/drivers/uio_pci_generic/bind + * # ls -l /sys/bus/pci/devices/0000:00:19.0/driver + * .../0000:00:19.0/driver -> ../../../bus/pci/drivers/uio_pci_generic + * + * Driver won't bind to devices which do not support the Interrupt Disable Bit + * in the command register. All devices compliant to PCI 2.3 (circa 2002) and + * all compliant PCI Express devices should support this bit. + */ + +#include +#include +#include +#include +#include + +#define DRIVER_VERSION "0.01.0" +#define DRIVER_AUTHOR "Michael S. Tsirkin " +#define DRIVER_DESC "Generic UIO driver for PCI 2.3 devices" + +struct uio_pci_generic_dev { + struct uio_info info; + struct pci_dev *pdev; + spinlock_t lock; /* guards command register accesses */ +}; + +static inline struct uio_pci_generic_dev * +to_uio_pci_generic_dev(struct uio_info *info) +{ + return container_of(info, struct uio_pci_generic_dev, info); +} + +/* Interrupt handler. Read/modify/write the command register to disable + * the interrupt. */ +static irqreturn_t irqhandler(int irq, struct uio_info *info) +{ + struct uio_pci_generic_dev *gdev = to_uio_pci_generic_dev(info); + struct pci_dev *pdev = gdev->pdev; + irqreturn_t ret = IRQ_NONE; + u32 cmd_status_dword; + u16 origcmd, newcmd, status; + + /* We do a single dword read to retrieve both command and status. + * Document assumptions that make this possible. */ + BUILD_BUG_ON(PCI_COMMAND % 4); + BUILD_BUG_ON(PCI_COMMAND + 2 != PCI_STATUS); + + spin_lock_irq(&gdev->lock); + pci_block_user_cfg_access(pdev); + + /* Read both command and status registers in a single 32-bit operation. + * Note: we could cache the value for command and move the status read + * out of the lock if there was a way to get notified of user changes + * to command register through sysfs. Should be good for shared irqs. */ + pci_read_config_dword(pdev, PCI_COMMAND, &cmd_status_dword); + origcmd = cmd_status_dword; + status = cmd_status_dword >> 16; + + /* Check interrupt status register to see whether our device + * triggered the interrupt. */ + if (!(status & PCI_STATUS_INTERRUPT)) + goto done; + + /* We triggered the interrupt, disable it. */ + newcmd = origcmd | PCI_COMMAND_INTX_DISABLE; + if (newcmd != origcmd) + pci_write_config_word(pdev, PCI_COMMAND, newcmd); + + /* UIO core will signal the user process. */ + ret = IRQ_HANDLED; +done: + + pci_unblock_user_cfg_access(pdev); + spin_unlock_irq(&gdev->lock); + return ret; +} + +/* Verify that the device supports Interrupt Disable bit in command register, + * per PCI 2.3, by flipping this bit and reading it back: this bit was readonly + * in PCI 2.2. */ +static int __devinit verify_pci_2_3(struct pci_dev *pdev) +{ + u16 orig, new; + int err = 0; + + pci_block_user_cfg_access(pdev); + pci_read_config_word(pdev, PCI_COMMAND, &orig); + pci_write_config_word(pdev, PCI_COMMAND, + orig ^ PCI_COMMAND_INTX_DISABLE); + pci_read_config_word(pdev, PCI_COMMAND, &new); + /* There's no way to protect against + * hardware bugs or detect them reliably, but as long as we know + * what the value should be, let's go ahead and check it. 
*/ + if ((new ^ orig) & ~PCI_COMMAND_INTX_DISABLE) { + err = -EBUSY; + dev_err(&pdev->dev, "Command changed from 0x%x to 0x%x: " + "driver or HW bug?\n", orig, new); + goto err; + } + if (!((new ^ orig) & PCI_COMMAND_INTX_DISABLE)) { + dev_warn(&pdev->dev, "Device does not support " + "disabling interrupts: unable to bind.\n"); + err = -ENODEV; + goto err; + } + /* Now restore the original value. */ + pci_write_config_word(pdev, PCI_COMMAND, orig); +err: + pci_unblock_user_cfg_access(pdev); + return err; +} + +static int __devinit probe(struct pci_dev *pdev, + const struct pci_device_id *id) +{ + struct uio_pci_generic_dev *gdev; + int err; + + if (!pdev->irq) { + dev_warn(&pdev->dev, "No IRQ assigned to device: " + "no support for interrupts?\n"); + return -ENODEV; + } + + err = pci_enable_device(pdev); + if (err) { + dev_err(&pdev->dev, "%s: pci_enable_device failed: %d\n", + __func__, err); + return err; + } + + err = verify_pci_2_3(pdev); + if (err) + goto err_verify; + + gdev = kzalloc(sizeof(struct uio_pci_generic_dev), GFP_KERNEL); + if (!gdev) { + err = -ENOMEM; + goto err_alloc; + } + + gdev->info.name = "uio_pci_generic"; + gdev->info.version = DRIVER_VERSION; + gdev->info.irq = pdev->irq; + gdev->info.irq_flags = IRQF_SHARED; + gdev->info.handler = irqhandler; + gdev->pdev = pdev; + spin_lock_init(&gdev->lock); + + if (uio_register_device(&pdev->dev, &gdev->info)) + goto err_register; + pci_set_drvdata(pdev, gdev); + + return 0; +err_register: + kfree(gdev); +err_alloc: +err_verify: + pci_disable_device(pdev); + return err; +} + +static void remove(struct pci_dev *pdev) +{ + struct uio_pci_generic_dev *gdev = pci_get_drvdata(pdev); + + uio_unregister_device(&gdev->info); + pci_disable_device(pdev); + kfree(gdev); +} + +static struct pci_driver driver = { + .name = "uio_pci_generic", + .id_table = NULL, /* only dynamic id's */ + .probe = probe, + .remove = remove, +}; + +static int __init init(void) +{ + pr_info(DRIVER_DESC " version: " DRIVER_VERSION "\n"); + return pci_register_driver(&driver); +} + +static void __exit cleanup(void) +{ + pci_unregister_driver(&driver); +} + +module_init(init); +module_exit(cleanup); + +MODULE_VERSION(DRIVER_VERSION); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR(DRIVER_AUTHOR); +MODULE_DESCRIPTION(DRIVER_DESC); diff --git a/include/linux/pci_regs.h b/include/linux/pci_regs.h index fcaee42c7ac2..dd0bed4f1cf0 100644 --- a/include/linux/pci_regs.h +++ b/include/linux/pci_regs.h @@ -42,6 +42,7 @@ #define PCI_COMMAND_INTX_DISABLE 0x400 /* INTx Emulation Disable */ #define PCI_STATUS 0x06 /* 16 bits */ +#define PCI_STATUS_INTERRUPT 0x08 /* Interrupt status */ #define PCI_STATUS_CAP_LIST 0x10 /* Support Capability List */ #define PCI_STATUS_66MHZ 0x20 /* Support 66 Mhz PCI 2.1 bus */ #define PCI_STATUS_UDF 0x40 /* Support User Definable Features [obsolete] */ -- cgit v1.2.3 From 19003c18e9b41f5c3aeb81c92356f90958e1f22f Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Mon, 3 Aug 2009 19:00:57 +0200 Subject: ext3: Update MAINTAINERS for ext3 and JBD Stephen agreed that he's unlikely to find time for working on ext3/JBD in the near future and is not working on it for some time already. So remove him. Added myself to JBD since after Andrew I'm probably the second most sensible contact ;). 
CC: Stephen Tweedie Signed-off-by: Jan Kara --- MAINTAINERS | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 9117b65f4ae3..424beb672aff 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1967,7 +1967,6 @@ F: fs/ext2/ F: include/linux/ext2* EXT3 FILE SYSTEM -M: Stephen Tweedie M: Andrew Morton M: Andreas Dilger L: linux-ext4@vger.kernel.org @@ -2892,8 +2891,8 @@ F: fs/jffs2/ F: include/linux/jffs2.h JOURNALLING LAYER FOR BLOCK DEVICES (JBD) -M: Stephen Tweedie M: Andrew Morton +M: Jan Kara L: linux-ext4@vger.kernel.org S: Maintained F: fs/jbd*/ -- cgit v1.2.3 From b75ea16ae74e77244e134943a5676ca770036cac Mon Sep 17 00:00:00 2001 From: Mark Brown Date: Thu, 2 Jul 2009 16:43:08 +0100 Subject: MAINTAINERS: Add entry for Wolfson PMIC drivers Signed-off-by: Mark Brown Signed-off-by: Samuel Ortiz --- MAINTAINERS | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 9117b65f4ae3..fba5e8141a62 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5669,6 +5669,24 @@ S: Supported F: drivers/input/touchscreen/*wm97* F: include/linux/wm97xx.h +WOLFSON MICROELECTRONICS PMIC DRIVERS +P: Mark Brown +M: broonie@opensource.wolfsonmicro.com +L: linux-kernel@vger.kernel.org +T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus +W: http://opensource.wolfsonmicro.com/node/8 +S: Supported +F: drivers/leds/leds-wm83*.c +F: drivers/mfd/wm8*.c +F: drivers/power/wm83*.c +F: drivers/rtc/rtc-wm83*.c +F: drivers/regulator/wm8*.c +F: drivers/watchdog/wm83*_wdt.c +F: include/linux/mfd/wm8350/ +F: include/linux/mfd/wm8400/ +F: sound/soc/codecs/wm8350.c +F: sound/soc/codecs/wm8400.c + X.25 NETWORK LAYER M: Henner Eisen L: linux-x25@vger.kernel.org -- cgit v1.2.3 From 3860e6c4b93d6a1c2428f0f45c77e083197da2d4 Mon Sep 17 00:00:00 2001 From: Mark Brown Date: Wed, 9 Sep 2009 14:46:43 +0100 Subject: mfd: Update MAINTAINERS patterns for WM831x The WM831x PMICs added a backlight driver and a new include directory. Signed-off-by: Mark Brown Signed-off-by: Samuel Ortiz --- MAINTAINERS | 2 ++ 1 file changed, 2 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index fba5e8141a62..9d635f3bbc7c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5681,7 +5681,9 @@ F: drivers/mfd/wm8*.c F: drivers/power/wm83*.c F: drivers/rtc/rtc-wm83*.c F: drivers/regulator/wm8*.c +F: drivers/video/backlight/wm83*_bl.c F: drivers/watchdog/wm83*_wdt.c +F: include/linux/mfd/wm831x/ F: include/linux/mfd/wm8350/ F: include/linux/mfd/wm8400/ F: sound/soc/codecs/wm8350.c -- cgit v1.2.3 From be3652b8a253346339ce32501dd48fb033258936 Mon Sep 17 00:00:00 2001 From: Jesse Barnes Date: Thu, 17 Sep 2009 10:01:07 -0700 Subject: MAINTAINERS: remove hotplug driver entries So Kristen stops getting PCI hotplug related stuff; the maintainer switch actually happened a while ago, but the entries were never purged.
Signed-off-by: Jesse Barnes --- MAINTAINERS | 18 +++--------------- 1 file changed, 3 insertions(+), 15 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 9117b65f4ae3..5cbe07cf4ea1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -256,12 +256,6 @@ W: http://www.lesswatts.org/projects/acpi/ S: Supported F: drivers/acpi/fan.c -ACPI PCI HOTPLUG DRIVER -M: Kristen Carlson Accardi -L: linux-pci@vger.kernel.org -S: Supported -F: drivers/pci/hotplug/acpi* - ACPI THERMAL DRIVER M: Zhang Rui L: linux-acpi@vger.kernel.org @@ -3964,11 +3958,11 @@ F: Documentation/PCI/ F: drivers/pci/ F: include/linux/pci* -PCIE HOTPLUG DRIVER -M: Kristen Carlson Accardi +PCI HOTPLUG +M: Jesse Barnes L: linux-pci@vger.kernel.org S: Supported -F: drivers/pci/pcie/ +F: drivers/pci/hotplug PCMCIA SUBSYSTEM P: Linux PCMCIA Team @@ -4624,12 +4618,6 @@ F: drivers/serial/serial_lh7a40x.c F: drivers/usb/gadget/lh7a40* F: drivers/usb/host/ohci-lh7a40* -SHPC HOTPLUG DRIVER -M: Kristen Carlson Accardi -L: linux-pci@vger.kernel.org -S: Supported -F: drivers/pci/hotplug/shpchp* - SIMTEC EB110ATX (Chalice CATS) P: Ben Dooks M: Vincent Sanders -- cgit v1.2.3 From 784546839fd45de485b9e9e4cd072b9d53100aa5 Mon Sep 17 00:00:00 2001 From: Russell King Date: Fri, 18 Sep 2009 21:08:03 +0100 Subject: ARM: Update mailing list addresses The old list has now been closed off; update the now invalid addresses. Signed-off-by: Russell King --- MAINTAINERS | 106 ++++++++++++++++++++++++++++++------------------------------ 1 file changed, 53 insertions(+), 53 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 837b5985ac40..12be157e184e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -497,7 +497,7 @@ F: arch/arm/include/asm/floppy.h ARM PORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/ @@ -508,36 +508,36 @@ F: drivers/mmc/host/mmci.* ARM/ADI ROADRUNNER MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: arch/arm/mach-ixp23xx/ F: arch/arm/mach-ixp23xx/include/mach/ ARM/ADS SPHERE MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/AFEB9260 MACHINE SUPPORT M: Sergey Lapin -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/AJECO 1ARM MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/ATMEL AT91RM9200 ARM ARCHITECTURE M: Andrew Victor -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://maxim.org.za/at91_26.html S: Maintained ARM/BCMRING ARM ARCHITECTURE M: Leo Chen M: Scott Branden -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: arch/arm/mach-bcmring @@ -554,25 +554,25 @@ F: drivers/mtd/nand/nand_bcm_umi.h ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE M: Hartley Sweeten M: Ryan Mallon -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: arch/arm/mach-ep93xx/ F: arch/arm/mach-ep93xx/include/mach/ ARM/CIRRUS LOGIC EDB9315A MACHINE SUPPORT M: Lennert Buytenhek -L: 
linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/CLKDEV SUPPORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org F: arch/arm/common/clkdev.c F: arch/arm/include/asm/clkdev.h ARM/COMPULAB CM-X270/EM-X270 and CM-X300 MACHINE SUPPORT M: Mike Rapoport -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/CORGI MACHINE SUPPORT @@ -581,14 +581,14 @@ S: Maintained ARM/CORTINA SYSTEMS GEMINI ARM ARCHITECTURE M: Paulius Zaleckas -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org T: git git://gitorious.org/linux-gemini/mainline.git S: Maintained F: arch/arm/mach-gemini/ ARM/EBSA110 MACHINE SUPPORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/mach-ebsa110/ @@ -606,13 +606,13 @@ F: arch/arm/mach-pxa/ezx.c ARM/FARADAY FA526 PORT M: Paulius Zaleckas -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: arch/arm/mm/*-fa* ARM/FOOTBRIDGE ARCHITECTURE M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/include/asm/hardware/dec21285.h @@ -620,17 +620,17 @@ F: arch/arm/mach-footbridge/ ARM/FREESCALE IMX / MXC ARM ARCHITECTURE M: Sascha Hauer -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/GLOMATION GESBC9312SX MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/GUMSTIX MACHINE SUPPORT M: Steve Sakoman -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/H4700 (HP IPAQ HX4700) MACHINE SUPPORT @@ -650,55 +650,55 @@ F: arch/arm/mach-sa1100/include/mach/jornada720.h ARM/INTEL IOP32X ARM ARCHITECTURE M: Lennert Buytenhek M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Supported ARM/INTEL IOP33X ARM ARCHITECTURE M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Supported ARM/INTEL IOP13XX ARM ARCHITECTURE M: Lennert Buytenhek M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Supported ARM/INTEL IQ81342EX MACHINE SUPPORT M: Lennert Buytenhek M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Supported ARM/INTEL IXP2000 ARM ARCHITECTURE M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/INTEL IXDP2850 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/INTEL IXP23XX ARM ARCHITECTURE M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/INTEL XSC3 (MANZANO) ARM CORE M: Lennert Buytenhek M: Dan Williams -L: 
linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Supported ARM/IP FABRICS DOUBLE ESPRESSO MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/LOGICPD PXA270 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/MAGICIAN MACHINE SUPPORT @@ -708,7 +708,7 @@ S: Maintained ARM/Marvell Loki/Kirkwood/MV78xx0/Orion SOC support M: Lennert Buytenhek M: Nicolas Pitre -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org T: git git://git.marvell.com/orion S: Maintained F: arch/arm/mach-loki/ @@ -719,7 +719,7 @@ F: arch/arm/plat-orion/ ARM/MIOA701 MACHINE SUPPORT M: Robert Jarzmik -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org F: arch/arm/mach-pxa/mioa701.c S: Maintained @@ -760,18 +760,18 @@ S: Maintained ARM/PT DIGITAL BOARD PORT M: Stefan Eletzhofer -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.arm.linux.org.uk/ S: Maintained ARM/RADISYS ENP2611 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/RISCPC ARCHITECTURE M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/common/time-acorn.c @@ -790,7 +790,7 @@ S: Maintained ARM/SAMSUNG ARM ARCHITECTURES M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/plat-s3c/ @@ -798,65 +798,65 @@ F: arch/arm/plat-s3c24xx/ ARM/S3C2410 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c2410/ ARM/S3C2440 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c2440/ ARM/S3C2442 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c2442/ ARM/S3C2443 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c2443/ ARM/S3C6400 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c6400/ ARM/S3C6410 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c6410/ ARM/TECHNOLOGIC SYSTEMS TS7250 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/THECUS N2100 MACHINE SUPPORT M: Lennert Buytenhek -L: 
linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained ARM/NUVOTON W90X900 ARM ARCHITECTURE M: Wan ZongShun -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.mcuos.com S: Maintained ARM/VFP SUPPORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/vfp/ @@ -957,7 +957,7 @@ F: include/linux/atm* ATMEL AT91 MCI DRIVER M: Nicolas Ferre -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.atmel.com/products/AT91/ W: http://www.at91.com/ S: Maintained @@ -1535,7 +1535,7 @@ F: drivers/infiniband/hw/cxgb3/ CYBERPRO FB DRIVER M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org W: http://www.arm.linux.org.uk/ S: Maintained F: drivers/video/cyber2000fb.* @@ -2080,7 +2080,7 @@ F: drivers/i2c/busses/i2c-cpm.c FREESCALE IMX / MXC FRAMEBUFFER DRIVER M: Sascha Hauer L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: arch/arm/plat-mxc/include/mach/imxfb.h F: drivers/video/imxfb.c @@ -3434,7 +3434,7 @@ F: include/linux/meye.h MOTOROLA IMX MMC/SD HOST CONTROLLER INTERFACE DRIVER M: Pavel Pisa -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: drivers/mmc/host/imxmmc.* @@ -4153,7 +4153,7 @@ F: drivers/media/video/pvrusb2/ PXA2xx/PXA3xx SUPPORT M: Eric Miao M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: arch/arm/mach-pxa/ F: drivers/pcmcia/pxa2xx* @@ -4166,13 +4166,13 @@ F: sound/soc/pxa PXA168 SUPPORT M: Eric Miao M: Jason Chagas -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6.git S: Maintained PXA910 SUPPORT M: Eric Miao -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6.git S: Maintained @@ -4413,7 +4413,7 @@ F: net/iucv/ S3C24XX SD/MMC Driver M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Supported F: drivers/mmc/host/s3cmci.* @@ -4609,7 +4609,7 @@ F: drivers/misc/sgi-xp/ SHARP LH SUPPORT (LH7952X & LH7A40X) M: Marc Singer W: http://projects.buici.com/arm -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org S: Maintained F: Documentation/arm/Sharp-LH/ADC-LH7-Touchscreen F: arch/arm/mach-lh7a40x/ -- cgit v1.2.3 From a1867d36b3bda28314fdd832a510dc9e55821c4c Mon Sep 17 00:00:00 2001 From: Wolfram Sang Date: Fri, 18 Sep 2009 22:45:51 +0200 Subject: MAINTAINERS: Add maintainer for AT24 and PCA9564/PCA9665 Add me as the maintainer for the i2c drivers I feel responsible for as I found patches going mainline I missed due to no Cc. 
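The practical payoff is that scripts/get_maintainer.pl should now resolve these files to the new entries, so submitters who run it get the Cc right automatically, which addresses the missed-patch problem above. A hypothetical transcript (example file, abbreviated output, addresses elided as elsewhere in this archive):

    $ ./scripts/get_maintainer.pl -f drivers/misc/eeprom/at24.c
    Wolfram Sang <...> (maintainer:AT24 EEPROM DRIVER)
    linux-i2c@vger.kernel.org (open list:AT24 EEPROM DRIVER)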
Signed-off-by: Wolfram Sang Cc: David Brownell Signed-off-by: Jean Delvare --- MAINTAINERS | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index e613c6dd709f..9a1cd2ff24af 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -894,6 +894,13 @@ F: drivers/dma/ F: include/linux/dmaengine.h F: include/linux/async_tx.h +AT24 EEPROM DRIVER +M: Wolfram Sang +L: linux-i2c@vger.kernel.org +S: Maintained +F: drivers/misc/eeprom/at24.c +F: include/linux/i2c/at24.h + ATA OVER ETHERNET (AOE) DRIVER M: "Ed L. Cashin" W: http://www.coraid.com/support/linux @@ -3957,6 +3964,15 @@ S: Maintained F: drivers/leds/leds-pca9532.c F: include/linux/leds-pca9532.h +PCA9564/PCA9665 I2C BUS DRIVER +M: Wolfram Sang +L: linux-i2c@vger.kernel.org +S: Maintained +F: drivers/i2c/algos/i2c-algo-pca.c +F: drivers/i2c/busses/i2c-pca-* +F: include/linux/i2c-algo-pca.h +F: include/linux/i2c-pca-platform.h + PCI ERROR RECOVERY M: Linas Vepstas L: linux-pci@vger.kernel.org -- cgit v1.2.3 From 9caeb5324427990db7bc97e674794d201c1f0797 Mon Sep 17 00:00:00 2001 From: Herton Ronaldo Krzesinski Date: Mon, 14 Sep 2009 21:11:21 -0300 Subject: topstar-laptop: add new driver for hotkeys support on Topstar N01 This adds the Topstar Laptop Extras ACPI driver. It enables hotkey functionality on the Topstar N01 netbook. Besides hotkeys, the ACPI firmware exposes other functions, but for now only hotkey reporting on the Topstar N01 is supported. Topstar is a Chinese manufacturer; its website can currently be reached at http://www.topstardigital.cn/ Signed-off-by: Herton Ronaldo Krzesinski Reviewed-by: Alan Jenkins Reviewed-by: Bjorn Helgaas Signed-off-by: Len Brown --- MAINTAINERS | 5 + drivers/platform/x86/Kconfig | 9 ++ drivers/platform/x86/Makefile | 1 + drivers/platform/x86/topstar-laptop.c | 265 ++++++++++++++++++++++++++++++++++ 4 files changed, 280 insertions(+) create mode 100644 drivers/platform/x86/topstar-laptop.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8dca9d89c6c1..d75a7a70788e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4968,6 +4968,11 @@ T: quilt http://svn.sourceforge.jp/svnroot/tomoyo/trunk/2.2.x/tomoyo-lsm/patches S: Maintained F: security/tomoyo/ +TOPSTAR LAPTOP EXTRAS DRIVER +M: Herton Ronaldo Krzesinski +S: Maintained +F: drivers/platform/x86/topstar-laptop.c + TOSHIBA ACPI EXTRAS DRIVER S: Orphan F: drivers/platform/x86/toshiba_acpi.c diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig index 77c6097ced80..48315621046d 100644 --- a/drivers/platform/x86/Kconfig +++ b/drivers/platform/x86/Kconfig @@ -396,6 +396,15 @@ config ACPI_ASUS NOTE: This driver is deprecated and will probably be removed soon, use asus-laptop instead. +config TOPSTAR_LAPTOP + tristate "Topstar Laptop Extras" + depends on ACPI + depends on INPUT + ---help--- + This driver adds support for hotkeys found on Topstar laptops. + + If you have a Topstar laptop, say Y or M here.
+ config ACPI_TOSHIBA tristate "Toshiba Laptop Extras" depends on ACPI diff --git a/drivers/platform/x86/Makefile b/drivers/platform/x86/Makefile index 641b8bfa5538..d1c16210a512 100644 --- a/drivers/platform/x86/Makefile +++ b/drivers/platform/x86/Makefile @@ -19,4 +19,5 @@ obj-$(CONFIG_PANASONIC_LAPTOP) += panasonic-laptop.o obj-$(CONFIG_INTEL_MENLOW) += intel_menlow.o obj-$(CONFIG_ACPI_WMI) += wmi.o obj-$(CONFIG_ACPI_ASUS) += asus_acpi.o +obj-$(CONFIG_TOPSTAR_LAPTOP) += topstar-laptop.o obj-$(CONFIG_ACPI_TOSHIBA) += toshiba_acpi.o diff --git a/drivers/platform/x86/topstar-laptop.c b/drivers/platform/x86/topstar-laptop.c new file mode 100644 index 000000000000..02f3d4e9e666 --- /dev/null +++ b/drivers/platform/x86/topstar-laptop.c @@ -0,0 +1,265 @@ +/* + * ACPI driver for Topstar notebooks (hotkeys support only) + * + * Copyright (c) 2009 Herton Ronaldo Krzesinski + * + * Implementation inspired by existing x86 platform drivers, in particular + * asus/eeepc/fujitsu-laptop, thanks to their authors + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/init.h> +#include <linux/acpi.h> +#include <linux/input.h> + +#define ACPI_TOPSTAR_CLASS "topstar" + +struct topstar_hkey { + struct input_dev *inputdev; +}; + +struct tps_key_entry { + u8 code; + u16 keycode; +}; + +static struct tps_key_entry topstar_keymap[] = { + { 0x80, KEY_BRIGHTNESSUP }, + { 0x81, KEY_BRIGHTNESSDOWN }, + { 0x83, KEY_VOLUMEUP }, + { 0x84, KEY_VOLUMEDOWN }, + { 0x85, KEY_MUTE }, + { 0x86, KEY_SWITCHVIDEOMODE }, + { 0x87, KEY_F13 }, /* touchpad enable/disable key */ + { 0x88, KEY_WLAN }, + { 0x8a, KEY_WWW }, + { 0x8b, KEY_MAIL }, + { 0x8c, KEY_MEDIA }, + { 0x96, KEY_F14 }, /* G key? */ + { } +}; + +static struct tps_key_entry *tps_get_key_by_scancode(int code) +{ + struct tps_key_entry *key; + + for (key = topstar_keymap; key->code; key++) + if (code == key->code) + return key; + + return NULL; +} + +static struct tps_key_entry *tps_get_key_by_keycode(int code) +{ + struct tps_key_entry *key; + + for (key = topstar_keymap; key->code; key++) + if (code == key->keycode) + return key; + + return NULL; +} + +static void acpi_topstar_notify(struct acpi_device *device, u32 event) +{ + struct tps_key_entry *key; + static bool dup_evnt[2]; + bool *dup; + struct topstar_hkey *hkey = acpi_driver_data(device); + + /* 0x83 and 0x84 key events come duplicated... */ + if (event == 0x83 || event == 0x84) { + dup = &dup_evnt[event - 0x83]; + if (*dup) { + *dup = false; + return; + } + *dup = true; + } + + /* + * The 'G key' generates two event codes; convert to only + * one event/key code for now (3G switch?) + */ + if (event == 0x97) + event = 0x96; + + key = tps_get_key_by_scancode(event); + if (key) { + input_report_key(hkey->inputdev, key->keycode, 1); + input_sync(hkey->inputdev); + input_report_key(hkey->inputdev, key->keycode, 0); + input_sync(hkey->inputdev); + return; + } + + /* Known non-hotkey events that we don't handle or don't care about yet */ + if (event == 0x8e || event == 0x8f || event == 0x90) + return; + + pr_info("unknown event = 0x%02x\n", event); +} + +static int acpi_topstar_fncx_switch(struct acpi_device *device, bool state) +{ + acpi_status status; + union acpi_object fncx_params[1] = { + { .type = ACPI_TYPE_INTEGER } + }; + struct acpi_object_list fncx_arg_list = { 1, &fncx_params[0] }; + + fncx_params[0].integer.value = state ?
0x86 : 0x87; + status = acpi_evaluate_object(device->handle, "FNCX", &fncx_arg_list, NULL); + if (ACPI_FAILURE(status)) { + pr_err("Unable to switch FNCX notifications\n"); + return -ENODEV; + } + + return 0; +} + +static int topstar_getkeycode(struct input_dev *dev, int scancode, int *keycode) +{ + struct tps_key_entry *key = tps_get_key_by_scancode(scancode); + + if (!key) + return -EINVAL; + + *keycode = key->keycode; + return 0; +} + +static int topstar_setkeycode(struct input_dev *dev, int scancode, int keycode) +{ + struct tps_key_entry *key; + int old_keycode; + + if (keycode < 0 || keycode > KEY_MAX) + return -EINVAL; + + key = tps_get_key_by_scancode(scancode); + + if (!key) + return -EINVAL; + + old_keycode = key->keycode; + key->keycode = keycode; + set_bit(keycode, dev->keybit); + if (!tps_get_key_by_keycode(old_keycode)) + clear_bit(old_keycode, dev->keybit); + return 0; +} + +static int acpi_topstar_init_hkey(struct topstar_hkey *hkey) +{ + struct tps_key_entry *key; + + hkey->inputdev = input_allocate_device(); + if (!hkey->inputdev) { + pr_err("Unable to allocate input device\n"); + return -ENODEV; + } + hkey->inputdev->name = "Topstar Laptop extra buttons"; + hkey->inputdev->phys = "topstar/input0"; + hkey->inputdev->id.bustype = BUS_HOST; + hkey->inputdev->getkeycode = topstar_getkeycode; + hkey->inputdev->setkeycode = topstar_setkeycode; + for (key = topstar_keymap; key->code; key++) { + set_bit(EV_KEY, hkey->inputdev->evbit); + set_bit(key->keycode, hkey->inputdev->keybit); + } + if (input_register_device(hkey->inputdev)) { + pr_err("Unable to register input device\n"); + input_free_device(hkey->inputdev); + return -ENODEV; + } + + return 0; +} + +static int acpi_topstar_add(struct acpi_device *device) +{ + struct topstar_hkey *tps_hkey; + + tps_hkey = kzalloc(sizeof(struct topstar_hkey), GFP_KERNEL); + if (!tps_hkey) + return -ENOMEM; + + strcpy(acpi_device_name(device), "Topstar TPSACPI"); + strcpy(acpi_device_class(device), ACPI_TOPSTAR_CLASS); + + if (acpi_topstar_fncx_switch(device, true)) + goto add_err; + + if (acpi_topstar_init_hkey(tps_hkey)) + goto add_err; + + device->driver_data = tps_hkey; + return 0; + +add_err: + kfree(tps_hkey); + return -ENODEV; +} + +static int acpi_topstar_remove(struct acpi_device *device, int type) +{ + struct topstar_hkey *tps_hkey = acpi_driver_data(device); + + acpi_topstar_fncx_switch(device, false); + + input_unregister_device(tps_hkey->inputdev); + kfree(tps_hkey); + + return 0; +} + +static const struct acpi_device_id topstar_device_ids[] = { + { "TPSACPI01", 0 }, + { "", 0 }, +}; +MODULE_DEVICE_TABLE(acpi, topstar_device_ids); + +static struct acpi_driver acpi_topstar_driver = { + .name = "Topstar laptop ACPI driver", + .class = ACPI_TOPSTAR_CLASS, + .ids = topstar_device_ids, + .ops = { + .add = acpi_topstar_add, + .remove = acpi_topstar_remove, + .notify = acpi_topstar_notify, + }, +}; + +static int __init topstar_laptop_init(void) +{ + int ret; + + ret = acpi_bus_register_driver(&acpi_topstar_driver); + if (ret < 0) + return ret; + + printk(KERN_INFO "Topstar Laptop ACPI extras driver loaded\n"); + + return 0; +} + +static void __exit topstar_laptop_exit(void) +{ + acpi_bus_unregister_driver(&acpi_topstar_driver); +} + +module_init(topstar_laptop_init); +module_exit(topstar_laptop_exit); + +MODULE_AUTHOR("Herton Ronaldo Krzesinski"); +MODULE_DESCRIPTION("Topstar Laptop ACPI Extras driver"); +MODULE_LICENSE("GPL"); -- cgit v1.2.3 From 57c0c15b5244320065374ad2c54f4fbec77a6428 Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: 
Mon, 21 Sep 2009 12:20:38 +0200 Subject: perf: Tidy up after the big rename - provide compatibility Kconfig entry for existing PERF_COUNTERS .config's - provide courtesy copy of old perf_counter.h, for user-space projects - small indentation fixups - fix up MAINTAINERS - fix small x86 printout fallout - fix up small PowerPC comment fallout (use 'counter' as in register) Reviewed-by: Arjan van de Ven Acked-by: Peter Zijlstra Cc: Mike Galbraith Cc: Paul Mackerras Cc: Benjamin Herrenschmidt Cc: Frederic Weisbecker LKML-Reference: Signed-off-by: Ingo Molnar --- MAINTAINERS | 2 +- arch/powerpc/include/asm/paca.h | 2 +- arch/powerpc/kernel/perf_event.c | 12 +- arch/x86/kernel/cpu/perf_event.c | 14 +- include/linux/perf_counter.h | 441 +++++++++++++++++++++++++++++++++++++++ include/linux/perf_event.h | 98 ++++----- init/Kconfig | 37 +++- kernel/perf_event.c | 4 +- 8 files changed, 534 insertions(+), 76 deletions(-) create mode 100644 include/linux/perf_counter.h (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 43761a00e3f1..751a307dc44e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4000,7 +4000,7 @@ S: Maintained F: include/linux/delayacct.h F: kernel/delayacct.c -PERFORMANCE COUNTER SUBSYSTEM +PERFORMANCE EVENTS SUBSYSTEM M: Peter Zijlstra M: Paul Mackerras M: Ingo Molnar diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h index 154f405b642f..7d8514ceceae 100644 --- a/arch/powerpc/include/asm/paca.h +++ b/arch/powerpc/include/asm/paca.h @@ -122,7 +122,7 @@ struct paca_struct { u8 soft_enabled; /* irq soft-enable flag */ u8 hard_enabled; /* set if irqs are enabled in MSR */ u8 io_sync; /* writel() needs spin_unlock sync */ - u8 perf_event_pending; /* PM interrupt while soft-disabled */ + u8 perf_event_pending; /* PM interrupt while soft-disabled */ /* Stuff for accurate time accounting */ u64 user_time; /* accumulated usermode TB ticks */ diff --git a/arch/powerpc/kernel/perf_event.c b/arch/powerpc/kernel/perf_event.c index c98321fcb459..197b7d958796 100644 --- a/arch/powerpc/kernel/perf_event.c +++ b/arch/powerpc/kernel/perf_event.c @@ -41,7 +41,7 @@ DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events); struct power_pmu *ppmu; /* - * Normally, to ignore kernel events we set the FCS (freeze events + * Normally, to ignore kernel events we set the FCS (freeze counters * in supervisor mode) bit in MMCR0, but if the kernel runs with the * hypervisor bit set in the MSR, or if we are running on a processor * where the hypervisor bit is forced to 1 (as on Apple G5 processors), @@ -159,7 +159,7 @@ void perf_event_print_debug(void) } /* - * Read one performance monitor event (PMC). + * Read one performance monitor counter (PMC). */ static unsigned long read_pmc(int idx) { @@ -409,7 +409,7 @@ static void power_pmu_read(struct perf_event *event) val = read_pmc(event->hw.idx); } while (atomic64_cmpxchg(&event->hw.prev_count, prev, val) != prev); - /* The events are only 32 bits wide */ + /* The counters are only 32 bits wide */ delta = (val - prev) & 0xfffffffful; atomic64_add(delta, &event->count); atomic64_sub(delta, &event->hw.period_left); @@ -543,7 +543,7 @@ void hw_perf_disable(void) } /* - * Set the 'freeze events' bit. + * Set the 'freeze counters' bit. * The barrier is to make sure the mtspr has been * executed and the PMU has frozen the events * before we return. 
@@ -1124,7 +1124,7 @@ const struct pmu *hw_perf_event_init(struct perf_event *event) } /* - * A event has overflowed; update its count and record + * A counter has overflowed; update its count and record * things if requested. Note that interrupts are hard-disabled * here so there is no possibility of being interrupted. */ @@ -1271,7 +1271,7 @@ static void perf_event_interrupt(struct pt_regs *regs) /* * Reset MMCR0 to its normal value. This will set PMXE and - * clear FC (freeze events) and PMAO (perf mon alert occurred) + * clear FC (freeze counters) and PMAO (perf mon alert occurred) * and thus allow interrupts to occur again. * XXX might want to use MSR.PM to keep the events frozen until * we get back out of this interrupt. diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c index 0d03629fb1a5..a3c7adb06b78 100644 --- a/arch/x86/kernel/cpu/perf_event.c +++ b/arch/x86/kernel/cpu/perf_event.c @@ -2081,13 +2081,13 @@ void __init init_hw_perf_events(void) perf_events_lapic_init(); register_die_notifier(&perf_event_nmi_notifier); - pr_info("... version: %d\n", x86_pmu.version); - pr_info("... bit width: %d\n", x86_pmu.event_bits); - pr_info("... generic events: %d\n", x86_pmu.num_events); - pr_info("... value mask: %016Lx\n", x86_pmu.event_mask); - pr_info("... max period: %016Lx\n", x86_pmu.max_period); - pr_info("... fixed-purpose events: %d\n", x86_pmu.num_events_fixed); - pr_info("... event mask: %016Lx\n", perf_event_mask); + pr_info("... version: %d\n", x86_pmu.version); + pr_info("... bit width: %d\n", x86_pmu.event_bits); + pr_info("... generic registers: %d\n", x86_pmu.num_events); + pr_info("... value mask: %016Lx\n", x86_pmu.event_mask); + pr_info("... max period: %016Lx\n", x86_pmu.max_period); + pr_info("... fixed-purpose events: %d\n", x86_pmu.num_events_fixed); + pr_info("... event mask: %016Lx\n", perf_event_mask); } static inline void x86_pmu_read(struct perf_event *event) diff --git a/include/linux/perf_counter.h b/include/linux/perf_counter.h new file mode 100644 index 000000000000..368bd70f1d2d --- /dev/null +++ b/include/linux/perf_counter.h @@ -0,0 +1,441 @@ +/* + * NOTE: this file will be removed in a future kernel release, it is + * provided as a courtesy copy of user-space code that relies on the + * old (pre-rename) symbols and constants. + * + * Performance events: + * + * Copyright (C) 2008-2009, Thomas Gleixner + * Copyright (C) 2008-2009, Red Hat, Inc., Ingo Molnar + * Copyright (C) 2008-2009, Red Hat, Inc., Peter Zijlstra + * + * Data type definitions, declarations, prototypes. 
+ * + * Started by: Thomas Gleixner and Ingo Molnar + * + * For licencing details see kernel-base/COPYING + */ +#ifndef _LINUX_PERF_COUNTER_H +#define _LINUX_PERF_COUNTER_H + +#include <linux/types.h> +#include <linux/ioctl.h> +#include <asm/byteorder.h> + +/* + * User-space ABI bits: + */ + +/* + * attr.type + */ +enum perf_type_id { + PERF_TYPE_HARDWARE = 0, + PERF_TYPE_SOFTWARE = 1, + PERF_TYPE_TRACEPOINT = 2, + PERF_TYPE_HW_CACHE = 3, + PERF_TYPE_RAW = 4, + + PERF_TYPE_MAX, /* non-ABI */ +}; + +/* + * Generalized performance counter event types, used by the + * attr.event_id parameter of the sys_perf_counter_open() + * syscall: + */ +enum perf_hw_id { + /* + * Common hardware events, generalized by the kernel: + */ + PERF_COUNT_HW_CPU_CYCLES = 0, + PERF_COUNT_HW_INSTRUCTIONS = 1, + PERF_COUNT_HW_CACHE_REFERENCES = 2, + PERF_COUNT_HW_CACHE_MISSES = 3, + PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4, + PERF_COUNT_HW_BRANCH_MISSES = 5, + PERF_COUNT_HW_BUS_CYCLES = 6, + + PERF_COUNT_HW_MAX, /* non-ABI */ +}; + +/* + * Generalized hardware cache counters: + * + * { L1-D, L1-I, LLC, ITLB, DTLB, BPU } x + * { read, write, prefetch } x + * { accesses, misses } + */ +enum perf_hw_cache_id { + PERF_COUNT_HW_CACHE_L1D = 0, + PERF_COUNT_HW_CACHE_L1I = 1, + PERF_COUNT_HW_CACHE_LL = 2, + PERF_COUNT_HW_CACHE_DTLB = 3, + PERF_COUNT_HW_CACHE_ITLB = 4, + PERF_COUNT_HW_CACHE_BPU = 5, + + PERF_COUNT_HW_CACHE_MAX, /* non-ABI */ +}; + +enum perf_hw_cache_op_id { + PERF_COUNT_HW_CACHE_OP_READ = 0, + PERF_COUNT_HW_CACHE_OP_WRITE = 1, + PERF_COUNT_HW_CACHE_OP_PREFETCH = 2, + + PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */ +}; + +enum perf_hw_cache_op_result_id { + PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0, + PERF_COUNT_HW_CACHE_RESULT_MISS = 1, + + PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */ +}; + +/* + * Special "software" counters provided by the kernel, even if the hardware + * does not support performance counters. These counters measure various + * physical and sw events of the kernel (and allow the profiling of them as + * well): + */ +enum perf_sw_ids { + PERF_COUNT_SW_CPU_CLOCK = 0, + PERF_COUNT_SW_TASK_CLOCK = 1, + PERF_COUNT_SW_PAGE_FAULTS = 2, + PERF_COUNT_SW_CONTEXT_SWITCHES = 3, + PERF_COUNT_SW_CPU_MIGRATIONS = 4, + PERF_COUNT_SW_PAGE_FAULTS_MIN = 5, + PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6, + + PERF_COUNT_SW_MAX, /* non-ABI */ +}; + +/* + * Bits that can be set in attr.sample_type to request information + * in the overflow packets.
+ */ +enum perf_counter_sample_format { + PERF_SAMPLE_IP = 1U << 0, + PERF_SAMPLE_TID = 1U << 1, + PERF_SAMPLE_TIME = 1U << 2, + PERF_SAMPLE_ADDR = 1U << 3, + PERF_SAMPLE_READ = 1U << 4, + PERF_SAMPLE_CALLCHAIN = 1U << 5, + PERF_SAMPLE_ID = 1U << 6, + PERF_SAMPLE_CPU = 1U << 7, + PERF_SAMPLE_PERIOD = 1U << 8, + PERF_SAMPLE_STREAM_ID = 1U << 9, + PERF_SAMPLE_RAW = 1U << 10, + + PERF_SAMPLE_MAX = 1U << 11, /* non-ABI */ +}; + +/* + * The format of the data returned by read() on a perf counter fd, + * as specified by attr.read_format: + * + * struct read_format { + * { u64 value; + * { u64 time_enabled; } && PERF_FORMAT_ENABLED + * { u64 time_running; } && PERF_FORMAT_RUNNING + * { u64 id; } && PERF_FORMAT_ID + * } && !PERF_FORMAT_GROUP + * + * { u64 nr; + * { u64 time_enabled; } && PERF_FORMAT_ENABLED + * { u64 time_running; } && PERF_FORMAT_RUNNING + * { u64 value; + * { u64 id; } && PERF_FORMAT_ID + * } cntr[nr]; + * } && PERF_FORMAT_GROUP + * }; + */ +enum perf_counter_read_format { + PERF_FORMAT_TOTAL_TIME_ENABLED = 1U << 0, + PERF_FORMAT_TOTAL_TIME_RUNNING = 1U << 1, + PERF_FORMAT_ID = 1U << 2, + PERF_FORMAT_GROUP = 1U << 3, + + PERF_FORMAT_MAX = 1U << 4, /* non-ABI */ +}; + +#define PERF_ATTR_SIZE_VER0 64 /* sizeof first published struct */ + +/* + * Hardware event to monitor via a performance monitoring counter: + */ +struct perf_counter_attr { + + /* + * Major type: hardware/software/tracepoint/etc. + */ + __u32 type; + + /* + * Size of the attr structure, for fwd/bwd compat. + */ + __u32 size; + + /* + * Type specific configuration information. + */ + __u64 config; + + union { + __u64 sample_period; + __u64 sample_freq; + }; + + __u64 sample_type; + __u64 read_format; + + __u64 disabled : 1, /* off by default */ + inherit : 1, /* children inherit it */ + pinned : 1, /* must always be on PMU */ + exclusive : 1, /* only group on PMU */ + exclude_user : 1, /* don't count user */ + exclude_kernel : 1, /* ditto kernel */ + exclude_hv : 1, /* ditto hypervisor */ + exclude_idle : 1, /* don't count when idle */ + mmap : 1, /* include mmap data */ + comm : 1, /* include comm data */ + freq : 1, /* use freq, not period */ + inherit_stat : 1, /* per task counts */ + enable_on_exec : 1, /* next exec enables */ + task : 1, /* trace fork/exit */ + watermark : 1, /* wakeup_watermark */ + + __reserved_1 : 49; + + union { + __u32 wakeup_events; /* wakeup every n events */ + __u32 wakeup_watermark; /* bytes before wakeup */ + }; + __u32 __reserved_2; + + __u64 __reserved_3; +}; + +/* + * Ioctls that can be done on a perf counter fd: + */ +#define PERF_COUNTER_IOC_ENABLE _IO ('$', 0) +#define PERF_COUNTER_IOC_DISABLE _IO ('$', 1) +#define PERF_COUNTER_IOC_REFRESH _IO ('$', 2) +#define PERF_COUNTER_IOC_RESET _IO ('$', 3) +#define PERF_COUNTER_IOC_PERIOD _IOW('$', 4, u64) +#define PERF_COUNTER_IOC_SET_OUTPUT _IO ('$', 5) + +enum perf_counter_ioc_flags { + PERF_IOC_FLAG_GROUP = 1U << 0, +}; + +/* + * Structure of the page that can be mapped via mmap + */ +struct perf_counter_mmap_page { + __u32 version; /* version number of this structure */ + __u32 compat_version; /* lowest version this is compat with */ + + /* + * Bits needed to read the hw counters in user-space. + * + * u32 seq; + * s64 count; + * + * do { + * seq = pc->lock; + * + * barrier() + * if (pc->index) { + * count = pmc_read(pc->index - 1); + * count += pc->offset; + * } else + * goto regular_read; + * + * barrier(); + * } while (pc->lock != seq); + * + * NOTE: for obvious reason this only works on self-monitoring + * processes. 
+ */ + __u32 lock; /* seqlock for synchronization */ + __u32 index; /* hardware counter identifier */ + __s64 offset; /* add to hardware counter value */ + __u64 time_enabled; /* time counter active */ + __u64 time_running; /* time counter on cpu */ + + /* + * Hole for extension of the self monitor capabilities + */ + + __u64 __reserved[123]; /* align to 1k */ + + /* + * Control data for the mmap() data buffer. + * + * User-space reading the @data_head value should issue an rmb(), on + * SMP capable platforms, after reading this value -- see + * perf_counter_wakeup(). + * + * When the mapping is PROT_WRITE the @data_tail value should be + * written by userspace to reflect the last read data. In this case + * the kernel will not over-write unread data. + */ + __u64 data_head; /* head in the data section */ + __u64 data_tail; /* user-space written tail */ +}; + +#define PERF_EVENT_MISC_CPUMODE_MASK (3 << 0) +#define PERF_EVENT_MISC_CPUMODE_UNKNOWN (0 << 0) +#define PERF_EVENT_MISC_KERNEL (1 << 0) +#define PERF_EVENT_MISC_USER (2 << 0) +#define PERF_EVENT_MISC_HYPERVISOR (3 << 0) + +struct perf_event_header { + __u32 type; + __u16 misc; + __u16 size; +}; + +enum perf_event_type { + + /* + * The MMAP events record the PROT_EXEC mappings so that we can + * correlate userspace IPs to code. They have the following structure: + * + * struct { + * struct perf_event_header header; + * + * u32 pid, tid; + * u64 addr; + * u64 len; + * u64 pgoff; + * char filename[]; + * }; + */ + PERF_EVENT_MMAP = 1, + + /* + * struct { + * struct perf_event_header header; + * u64 id; + * u64 lost; + * }; + */ + PERF_EVENT_LOST = 2, + + /* + * struct { + * struct perf_event_header header; + * + * u32 pid, tid; + * char comm[]; + * }; + */ + PERF_EVENT_COMM = 3, + + /* + * struct { + * struct perf_event_header header; + * u32 pid, ppid; + * u32 tid, ptid; + * u64 time; + * }; + */ + PERF_EVENT_EXIT = 4, + + /* + * struct { + * struct perf_event_header header; + * u64 time; + * u64 id; + * u64 stream_id; + * }; + */ + PERF_EVENT_THROTTLE = 5, + PERF_EVENT_UNTHROTTLE = 6, + + /* + * struct { + * struct perf_event_header header; + * u32 pid, ppid; + * u32 tid, ptid; + * { u64 time; } && PERF_SAMPLE_TIME + * }; + */ + PERF_EVENT_FORK = 7, + + /* + * struct { + * struct perf_event_header header; + * u32 pid, tid; + * + * struct read_format values; + * }; + */ + PERF_EVENT_READ = 8, + + /* + * struct { + * struct perf_event_header header; + * + * { u64 ip; } && PERF_SAMPLE_IP + * { u32 pid, tid; } && PERF_SAMPLE_TID + * { u64 time; } && PERF_SAMPLE_TIME + * { u64 addr; } && PERF_SAMPLE_ADDR + * { u64 id; } && PERF_SAMPLE_ID + * { u64 stream_id;} && PERF_SAMPLE_STREAM_ID + * { u32 cpu, res; } && PERF_SAMPLE_CPU + * { u64 period; } && PERF_SAMPLE_PERIOD + * + * { struct read_format values; } && PERF_SAMPLE_READ + * + * { u64 nr, + * u64 ips[nr]; } && PERF_SAMPLE_CALLCHAIN + * + * # + * # The RAW record below is opaque data wrt the ABI + * # + * # That is, the ABI doesn't make any promises wrt to + * # the stability of its content, it may vary depending + * # on event, hardware, kernel version and phase of + * # the moon. + * # + * # In other words, PERF_SAMPLE_RAW contents are not an ABI. 
+ * # + * + * { u32 size; + * char data[size];}&& PERF_SAMPLE_RAW + * }; + */ + PERF_EVENT_SAMPLE = 9, + + PERF_EVENT_MAX, /* non-ABI */ +}; + +enum perf_callchain_context { + PERF_CONTEXT_HV = (__u64)-32, + PERF_CONTEXT_KERNEL = (__u64)-128, + PERF_CONTEXT_USER = (__u64)-512, + + PERF_CONTEXT_GUEST = (__u64)-2048, + PERF_CONTEXT_GUEST_KERNEL = (__u64)-2176, + PERF_CONTEXT_GUEST_USER = (__u64)-2560, + + PERF_CONTEXT_MAX = (__u64)-4095, +}; + +#define PERF_FLAG_FD_NO_GROUP (1U << 0) +#define PERF_FLAG_FD_OUTPUT (1U << 1) + +/* + * In case some app still references the old symbols: + */ + +#define __NR_perf_counter_open __NR_perf_event_open + +#define PR_TASK_PERF_COUNTERS_DISABLE PR_TASK_PERF_EVENTS_DISABLE +#define PR_TASK_PERF_COUNTERS_ENABLE PR_TASK_PERF_EVENTS_ENABLE + +#endif /* _LINUX_PERF_COUNTER_H */ diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index ae9d9ed6df2a..acefaf71e6dd 100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -1,15 +1,15 @@ /* - * Performance events: + * Performance events: * * Copyright (C) 2008-2009, Thomas Gleixner * Copyright (C) 2008-2009, Red Hat, Inc., Ingo Molnar * Copyright (C) 2008-2009, Red Hat, Inc., Peter Zijlstra * - * Data type definitions, declarations, prototypes. + * Data type definitions, declarations, prototypes. * * Started by: Thomas Gleixner and Ingo Molnar * - * For licencing details see kernel-base/COPYING + * For licencing details see kernel-base/COPYING */ #ifndef _LINUX_PERF_EVENT_H #define _LINUX_PERF_EVENT_H @@ -131,19 +131,19 @@ enum perf_event_sample_format { * as specified by attr.read_format: * * struct read_format { - * { u64 value; - * { u64 time_enabled; } && PERF_FORMAT_ENABLED - * { u64 time_running; } && PERF_FORMAT_RUNNING - * { u64 id; } && PERF_FORMAT_ID - * } && !PERF_FORMAT_GROUP + * { u64 value; + * { u64 time_enabled; } && PERF_FORMAT_ENABLED + * { u64 time_running; } && PERF_FORMAT_RUNNING + * { u64 id; } && PERF_FORMAT_ID + * } && !PERF_FORMAT_GROUP * - * { u64 nr; - * { u64 time_enabled; } && PERF_FORMAT_ENABLED - * { u64 time_running; } && PERF_FORMAT_RUNNING - * { u64 value; - * { u64 id; } && PERF_FORMAT_ID - * } cntr[nr]; - * } && PERF_FORMAT_GROUP + * { u64 nr; + * { u64 time_enabled; } && PERF_FORMAT_ENABLED + * { u64 time_running; } && PERF_FORMAT_RUNNING + * { u64 value; + * { u64 id; } && PERF_FORMAT_ID + * } cntr[nr]; + * } && PERF_FORMAT_GROUP * }; */ enum perf_event_read_format { @@ -152,7 +152,7 @@ enum perf_event_read_format { PERF_FORMAT_ID = 1U << 2, PERF_FORMAT_GROUP = 1U << 3, - PERF_FORMAT_MAX = 1U << 4, /* non-ABI */ + PERF_FORMAT_MAX = 1U << 4, /* non-ABI */ }; #define PERF_ATTR_SIZE_VER0 64 /* sizeof first published struct */ @@ -216,8 +216,8 @@ struct perf_event_attr { * Ioctls that can be done on a perf event fd: */ #define PERF_EVENT_IOC_ENABLE _IO ('$', 0) -#define PERF_EVENT_IOC_DISABLE _IO ('$', 1) -#define PERF_EVENT_IOC_REFRESH _IO ('$', 2) +#define PERF_EVENT_IOC_DISABLE _IO ('$', 1) +#define PERF_EVENT_IOC_REFRESH _IO ('$', 2) #define PERF_EVENT_IOC_RESET _IO ('$', 3) #define PERF_EVENT_IOC_PERIOD _IOW('$', 4, u64) #define PERF_EVENT_IOC_SET_OUTPUT _IO ('$', 5) @@ -314,9 +314,9 @@ enum perf_event_type { /* * struct { - * struct perf_event_header header; - * u64 id; - * u64 lost; + * struct perf_event_header header; + * u64 id; + * u64 lost; * }; */ PERF_RECORD_LOST = 2, @@ -383,23 +383,23 @@ enum perf_event_type { * { u64 id; } && PERF_SAMPLE_ID * { u64 stream_id;} && PERF_SAMPLE_STREAM_ID * { u32 cpu, res; } && PERF_SAMPLE_CPU - * { 
u64 period; } && PERF_SAMPLE_PERIOD + * { u64 period; } && PERF_SAMPLE_PERIOD * * { struct read_format values; } && PERF_SAMPLE_READ * * { u64 nr, * u64 ips[nr]; } && PERF_SAMPLE_CALLCHAIN * - * # - * # The RAW record below is opaque data wrt the ABI - * # - * # That is, the ABI doesn't make any promises wrt to - * # the stability of its content, it may vary depending - * # on event_id, hardware, kernel version and phase of - * # the moon. - * # - * # In other words, PERF_SAMPLE_RAW contents are not an ABI. - * # + * # + * # The RAW record below is opaque data wrt the ABI + * # + * # That is, the ABI doesn't make any promises wrt to + * # the stability of its content, it may vary depending + * # on event, hardware, kernel version and phase of + * # the moon. + * # + * # In other words, PERF_SAMPLE_RAW contents are not an ABI. + * # * * { u32 size; * char data[size];}&& PERF_SAMPLE_RAW @@ -503,10 +503,10 @@ struct pmu { * enum perf_event_active_state - the states of a event */ enum perf_event_active_state { - PERF_EVENT_STATE_ERROR = -2, + PERF_EVENT_STATE_ERROR = -2, PERF_EVENT_STATE_OFF = -1, PERF_EVENT_STATE_INACTIVE = 0, - PERF_EVENT_STATE_ACTIVE = 1, + PERF_EVENT_STATE_ACTIVE = 1, }; struct file; @@ -529,7 +529,7 @@ struct perf_mmap_data { long watermark; /* wakeup watermark */ - struct perf_event_mmap_page *user_page; + struct perf_event_mmap_page *user_page; void *data_pages[0]; }; @@ -694,14 +694,14 @@ struct perf_cpu_context { }; struct perf_output_handle { - struct perf_event *event; - struct perf_mmap_data *data; - unsigned long head; - unsigned long offset; - int nmi; - int sample; - int locked; - unsigned long flags; + struct perf_event *event; + struct perf_mmap_data *data; + unsigned long head; + unsigned long offset; + int nmi; + int sample; + int locked; + unsigned long flags; }; #ifdef CONFIG_PERF_EVENTS @@ -829,22 +829,22 @@ static inline void perf_event_task_sched_out(struct task_struct *task, struct task_struct *next, int cpu) { } static inline void -perf_event_task_tick(struct task_struct *task, int cpu) { } +perf_event_task_tick(struct task_struct *task, int cpu) { } static inline int perf_event_init_task(struct task_struct *child) { return 0; } static inline void perf_event_exit_task(struct task_struct *child) { } static inline void perf_event_free_task(struct task_struct *task) { } -static inline void perf_event_do_pending(void) { } -static inline void perf_event_print_debug(void) { } +static inline void perf_event_do_pending(void) { } +static inline void perf_event_print_debug(void) { } static inline void perf_disable(void) { } static inline void perf_enable(void) { } -static inline int perf_event_task_disable(void) { return -EINVAL; } -static inline int perf_event_task_enable(void) { return -EINVAL; } +static inline int perf_event_task_disable(void) { return -EINVAL; } +static inline int perf_event_task_enable(void) { return -EINVAL; } static inline void perf_sw_event(u32 event_id, u64 nr, int nmi, struct pt_regs *regs, u64 addr) { } -static inline void perf_event_mmap(struct vm_area_struct *vma) { } +static inline void perf_event_mmap(struct vm_area_struct *vma) { } static inline void perf_event_comm(struct task_struct *tsk) { } static inline void perf_event_fork(struct task_struct *tsk) { } static inline void perf_event_init(void) { } diff --git a/init/Kconfig b/init/Kconfig index cfdf5c322806..706728be312f 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -920,26 +920,31 @@ config HAVE_PERF_EVENTS help See tools/perf/design.txt for details. 
-menu "Performance Counters" +menu "Kernel Performance Events And Counters" config PERF_EVENTS - bool "Kernel Performance Counters" - default y if PROFILING + bool "Kernel performance events and counters" + default y if (PROFILING || PERF_COUNTERS) depends on HAVE_PERF_EVENTS select ANON_INODES help - Enable kernel support for performance counter hardware. + Enable kernel support for various performance events provided + by software and hardware. - Performance counters are special hardware registers available - on most modern CPUs. These registers count the number of certain + Software events are supported either build-in or via the + use of generic tracepoints. + + Most modern CPUs support performance events via performance + counter registers. These registers count the number of certain types of hw events: such as instructions executed, cachemisses suffered, or branches mis-predicted - without slowing down the kernel or applications. These registers can also trigger interrupts when a threshold number of events have passed - and can thus be used to profile the code that runs on that CPU. - The Linux Performance Counter subsystem provides an abstraction of - these hardware capabilities, available via a system call. It + The Linux Performance Event subsystem provides an abstraction of + these software and hardware cevent apabilities, available via a + system call and used by the "perf" utility in tools/perf/. It provides per task and per CPU counters, and it provides event capabilities on top of those. @@ -950,14 +955,26 @@ config EVENT_PROFILE depends on PERF_EVENTS && EVENT_TRACING default y help - Allow the use of tracepoints as software performance counters. + Allow the use of tracepoints as software performance events. - When this is enabled, you can create perf counters based on + When this is enabled, you can create perf events based on tracepoints using PERF_TYPE_TRACEPOINT and the tracepoint ID found in debugfs://tracing/events/*/*/id. (The -e/--events option to the perf tool can parse and interpret symbolic tracepoints, in the subsystem:tracepoint_name format.) +config PERF_COUNTERS + bool "Kernel performance counters (old config option)" + depends on HAVE_PERF_EVENTS + help + This config has been obsoleted by the PERF_EVENTS + config option - please see that one for details. + + It has no effect on the kernel whether you enable + it or not, it is a compatibility placeholder. + + Say N if unsure. + endmenu config VM_EVENT_COUNTERS diff --git a/kernel/perf_event.c b/kernel/perf_event.c index 6e8b99a04e1e..76ac4db405e9 100644 --- a/kernel/perf_event.c +++ b/kernel/perf_event.c @@ -1,12 +1,12 @@ /* - * Performance event core code + * Performance events core code: * * Copyright (C) 2008 Thomas Gleixner * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar * Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra * Copyright © 2009 Paul Mackerras, IBM Corp. * - * For licensing details see kernel-base/COPYING + * For licensing details see kernel-base/COPYING */ #include -- cgit v1.2.3 From cf83011d8faec99de51d6ba8879c0c61cf31e836 Mon Sep 17 00:00:00 2001 From: Markus Heidelberg Date: Fri, 12 Jun 2009 01:00:28 +0200 Subject: trivial: update the Kernel Janitors' web-page URL The former address cannot be resolved. 
Signed-off-by: Markus Heidelberg Signed-off-by: Jiri Kosina --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 43761a00e3f1..64d41776c783 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2955,7 +2955,7 @@ F: scripts/Makefile.* KERNEL JANITORS L: kernel-janitors@vger.kernel.org W: http://www.kerneljanitors.org/ -S: Odd fixes +S: Maintained KERNEL NFSD, SUNRPC, AND LOCKD SERVERS M: "J. Bruce Fields" -- cgit v1.2.3 From dfdd8cc903288bb2e2ad6731545be3db7304c133 Mon Sep 17 00:00:00 2001 From: Krzysztof HaÅ‚asa Date: Mon, 21 Sep 2009 18:31:24 +0200 Subject: Add MAINTAINERS entry for ARM/INTEL IXP4xx arch support. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Krzysztof HaÅ‚asa --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 751a307dc44e..7bf7045337e1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -685,6 +685,13 @@ M: Lennert Buytenhek L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) S: Maintained +ARM/INTEL IXP4XX ARM ARCHITECTURE +M: Imre Kaloz +M: Krzysztof Halasa +L: linux-arm-kernel@lists.infradead.org +S: Maintained +F: arch/arm/mach-ixp4xx/ + ARM/INTEL XSC3 (MANZANO) ARM CORE M: Lennert Buytenhek M: Dan Williams -- cgit v1.2.3 From 5be461657be65460ad92be3527e3bb1dd11c49ea Mon Sep 17 00:00:00 2001 From: Pierre Ossman Date: Mon, 21 Sep 2009 17:00:59 -0700 Subject: sdhci: orphan driver and list Remove myself as maintainer from the sdhci driver and steer people towards the new MMC list for discussing it. Signed-off-by: Pierre Ossman Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 751a307dc44e..b212f55b6a2e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4533,20 +4533,20 @@ S: Maintained F: drivers/mmc/host/sdricoh_cs.c SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) DRIVER -M: Pierre Ossman -L: sdhci-devel@lists.ossman.eu -S: Maintained +S: Orphan +L: linux-mmc@vger.kernel.org +F: drivers/mmc/host/sdhci.* SECURE DIGITAL HOST CONTROLLER INTERFACE, OPEN FIRMWARE BINDINGS (SDHCI-OF) M: Anton Vorontsov L: linuxppc-dev@ozlabs.org -L: sdhci-devel@lists.ossman.eu +L: linux-mmc@vger.kernel.org S: Maintained -F: drivers/mmc/host/sdhci.* +F: drivers/mmc/host/sdhci-of.* SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) SAMSUNG DRIVER M: Ben Dooks -L: sdhci-devel@lists.ossman.eu +L: linux-mmc@vger.kernel.org S: Maintained F: drivers/mmc/host/sdhci-s3c.c -- cgit v1.2.3 From b61d4a71e483fe1aa1c4b170c28d85be77edce4f Mon Sep 17 00:00:00 2001 From: Hannes Eder Date: Mon, 21 Sep 2009 17:04:12 -0700 Subject: MAINTAINERS: add IPVS include files Signed-off-by: Hannes Eder Cc: Joe Perches Cc: "David S. 
Miller" Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 ++ 1 file changed, 2 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index b212f55b6a2e..ee58af192c40 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2803,6 +2803,8 @@ L: netdev@vger.kernel.org L: lvs-devel@vger.kernel.org S: Maintained F: Documentation/networking/ipvs-sysctl.txt +F: include/net/ip_vs.h +F: include/linux/ip_vs.h F: net/netfilter/ipvs/ IPWIRELESS DRIVER -- cgit v1.2.3 From 43368e74d126f8e874fce2c248f094edfcf0baa6 Mon Sep 17 00:00:00 2001 From: Felipe Contreras Date: Mon, 21 Sep 2009 17:04:24 -0700 Subject: MAINTAINERS: acpi: add 'include/acpi' Signed-off-by: Felipe Contreras Cc: Joe Perches Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ee58af192c40..8d30ce808d63 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -233,6 +233,7 @@ S: Supported F: drivers/acpi/ F: drivers/pnp/pnpacpi/ F: include/linux/acpi.h +F: include/acpi/ ACPI BATTERY DRIVERS M: Alexey Starikovskiy -- cgit v1.2.3 From 4e04d5a3d57accb0b05d381aa84d93d3749b15e2 Mon Sep 17 00:00:00 2001 From: Felipe Contreras Date: Mon, 21 Sep 2009 17:04:25 -0700 Subject: MAINTAINERS: omap: fix regex Otherwise 'arch/arm/*omap*/foo.c' wouldn't match Signed-off-by: Felipe Contreras Cc: Joe Perches Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8d30ce808d63..4dd43f8621ec 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3737,7 +3737,7 @@ W: http://www.muru.com/linux/omap/ W: http://linux.omap.com/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6.git S: Maintained -F: arch/arm/*omap* +F: arch/arm/*omap*/ OMAP CLOCK FRAMEWORK SUPPORT M: Paul Walmsley -- cgit v1.2.3 From 076cfaaece1c4e57ea7d924f38b911b9a5502365 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 21 Sep 2009 17:04:26 -0700 Subject: MAINTAINERS: integrate P:/M: lines A couple of new uses of separate "P: name" "M: address" lines are converted to single line "M: name
" Signed-off-by: Joe Perches Cc: Anil Ravindranath Cc: Kalle Valo Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 4dd43f8621ec..7e54faf992cb 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4028,8 +4028,7 @@ F: drivers/block/pktcdvd.c F: include/linux/pktcdvd.h PMC SIERRA MaxRAID DRIVER -P: Anil Ravindranath -M: anil_ravindranath@pmc-sierra.com +M: Anil Ravindranath L: linux-scsi@vger.kernel.org W: http://www.pmc-sierra.com/ S: Supported @@ -5660,8 +5659,7 @@ S: Maintained F: drivers/input/misc/wistron_btns.c WL1251 WIRELESS DRIVER -P: Kalle Valo -M: kalle.valo@nokia.com +M: Kalle Valo L: linux-wireless@vger.kernel.org W: http://wireless.kernel.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git -- cgit v1.2.3 From efc03ecb9d674588a13aee27289c2af2afe5e6b4 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 21 Sep 2009 17:04:27 -0700 Subject: MAINTAINERS: move ARM lists to infradead Signed-off-by: Joe Perches Cc: Sebastian Andrzej Siewior Cc: Krzysztof Halasa Cc: Russell King Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 106 ++++++++++++++++++++++++++++++------------------------------ 1 file changed, 53 insertions(+), 53 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 7e54faf992cb..379db50951d3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -498,7 +498,7 @@ F: arch/arm/include/asm/floppy.h ARM PORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/ @@ -509,36 +509,36 @@ F: drivers/mmc/host/mmci.* ARM/ADI ROADRUNNER MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/mach-ixp23xx/ F: arch/arm/mach-ixp23xx/include/mach/ ARM/ADS SPHERE MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/AFEB9260 MACHINE SUPPORT M: Sergey Lapin -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/AJECO 1ARM MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/ATMEL AT91RM9200 ARM ARCHITECTURE M: Andrew Victor -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://maxim.org.za/at91_26.html S: Maintained ARM/BCMRING ARM ARCHITECTURE M: Leo Chen M: Scott Branden -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/mach-bcmring @@ -555,25 +555,25 @@ F: drivers/mtd/nand/nand_bcm_umi.h ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE M: Hartley Sweeten M: Ryan Mallon -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/mach-ep93xx/ F: arch/arm/mach-ep93xx/include/mach/ ARM/CIRRUS LOGIC EDB9315A 
MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/CLKDEV SUPPORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) F: arch/arm/common/clkdev.c F: arch/arm/include/asm/clkdev.h ARM/COMPULAB CM-X270/EM-X270 and CM-X300 MACHINE SUPPORT M: Mike Rapoport -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/CORGI MACHINE SUPPORT @@ -582,14 +582,14 @@ S: Maintained ARM/CORTINA SYSTEMS GEMINI ARM ARCHITECTURE M: Paulius Zaleckas -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) T: git git://gitorious.org/linux-gemini/mainline.git S: Maintained F: arch/arm/mach-gemini/ ARM/EBSA110 MACHINE SUPPORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/mach-ebsa110/ @@ -607,13 +607,13 @@ F: arch/arm/mach-pxa/ezx.c ARM/FARADAY FA526 PORT M: Paulius Zaleckas -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/mm/*-fa* ARM/FOOTBRIDGE ARCHITECTURE M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/include/asm/hardware/dec21285.h @@ -621,17 +621,17 @@ F: arch/arm/mach-footbridge/ ARM/FREESCALE IMX / MXC ARM ARCHITECTURE M: Sascha Hauer -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/GLOMATION GESBC9312SX MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/GUMSTIX MACHINE SUPPORT M: Steve Sakoman -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/H4700 (HP IPAQ HX4700) MACHINE SUPPORT @@ -651,55 +651,55 @@ F: arch/arm/mach-sa1100/include/mach/jornada720.h ARM/INTEL IOP32X ARM ARCHITECTURE M: Lennert Buytenhek M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Supported ARM/INTEL IOP33X ARM ARCHITECTURE M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Supported ARM/INTEL IOP13XX ARM ARCHITECTURE M: Lennert Buytenhek M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Supported ARM/INTEL IQ81342EX MACHINE SUPPORT M: Lennert Buytenhek M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Supported ARM/INTEL IXP2000 ARM ARCHITECTURE M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: 
linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/INTEL IXDP2850 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/INTEL IXP23XX ARM ARCHITECTURE M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/INTEL XSC3 (MANZANO) ARM CORE M: Lennert Buytenhek M: Dan Williams -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Supported ARM/IP FABRICS DOUBLE ESPRESSO MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/LOGICPD PXA270 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/MAGICIAN MACHINE SUPPORT @@ -709,7 +709,7 @@ S: Maintained ARM/Marvell Loki/Kirkwood/MV78xx0/Orion SOC support M: Lennert Buytenhek M: Nicolas Pitre -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) T: git git://git.marvell.com/orion S: Maintained F: arch/arm/mach-loki/ @@ -720,7 +720,7 @@ F: arch/arm/plat-orion/ ARM/MIOA701 MACHINE SUPPORT M: Robert Jarzmik -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) F: arch/arm/mach-pxa/mioa701.c S: Maintained @@ -761,18 +761,18 @@ S: Maintained ARM/PT DIGITAL BOARD PORT M: Stefan Eletzhofer -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.arm.linux.org.uk/ S: Maintained ARM/RADISYS ENP2611 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/RISCPC ARCHITECTURE M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/common/time-acorn.c @@ -791,7 +791,7 @@ S: Maintained ARM/SAMSUNG ARM ARCHITECTURES M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/plat-s3c/ @@ -799,65 +799,65 @@ F: arch/arm/plat-s3c24xx/ ARM/S3C2410 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c2410/ ARM/S3C2440 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c2440/ ARM/S3C2442 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.fluff.org/ben/linux/ S: 
Maintained F: arch/arm/mach-s3c2442/ ARM/S3C2443 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c2443/ ARM/S3C6400 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c6400/ ARM/S3C6410 ARM ARCHITECTURE M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.fluff.org/ben/linux/ S: Maintained F: arch/arm/mach-s3c6410/ ARM/TECHNOLOGIC SYSTEMS TS7250 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/THECUS N2100 MACHINE SUPPORT M: Lennert Buytenhek -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained ARM/NUVOTON W90X900 ARM ARCHITECTURE M: Wan ZongShun -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.mcuos.com S: Maintained ARM/VFP SUPPORT M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/vfp/ @@ -964,7 +964,7 @@ F: include/linux/atm* ATMEL AT91 MCI DRIVER M: Nicolas Ferre -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.atmel.com/products/AT91/ W: http://www.at91.com/ S: Maintained @@ -1542,7 +1542,7 @@ F: drivers/infiniband/hw/cxgb3/ CYBERPRO FB DRIVER M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.arm.linux.org.uk/ S: Maintained F: drivers/video/cyber2000fb.* @@ -2086,7 +2086,7 @@ F: drivers/i2c/busses/i2c-cpm.c FREESCALE IMX / MXC FRAMEBUFFER DRIVER M: Sascha Hauer L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/plat-mxc/include/mach/imxfb.h F: drivers/video/imxfb.c @@ -3452,7 +3452,7 @@ F: include/linux/meye.h MOTOROLA IMX MMC/SD HOST CONTROLLER INTERFACE DRIVER M: Pavel Pisa -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: drivers/mmc/host/imxmmc.* @@ -4170,7 +4170,7 @@ F: drivers/media/video/pvrusb2/ PXA2xx/PXA3xx SUPPORT M: Eric Miao M: Russell King -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/mach-pxa/ F: drivers/pcmcia/pxa2xx* @@ -4183,13 +4183,13 @@ F: sound/soc/pxa PXA168 SUPPORT M: Eric Miao M: Jason Chagas -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6.git S: Maintained PXA910 SUPPORT M: Eric Miao -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) T: git git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6.git S: Maintained @@ -4430,7 +4430,7 @@ F: net/iucv/ S3C24XX SD/MMC Driver M: Ben Dooks -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Supported F: drivers/mmc/host/s3cmci.* @@ -4634,7 +4634,7 @@ F: drivers/misc/sgi-xp/ SHARP LH SUPPORT (LH7952X & LH7A40X) M: Marc Singer W: http://projects.buici.com/arm -L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: Documentation/arm/Sharp-LH/ADC-LH7-Touchscreen F: arch/arm/mach-lh7a40x/ -- cgit v1.2.3 From e258b80e691f1f3ae83a60aa80eaf7322bd55ec4 Mon Sep 17 00:00:00 2001 From: David Härdeman Date: Mon, 21 Sep 2009 17:04:53 -0700 Subject: input: add a driver for the Winbond WPCD376I Consumer IR hardware MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add a driver for the Consumer IR (CIR) functionality of the Winbond WPCD376I chipset (found on e.g. Intel DG45FC motherboards). Signed-off-by: David Härdeman Reviewed-by: Jesse Barnes Cc: Dmitry Torokhov Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 6 + drivers/input/misc/Kconfig | 16 + drivers/input/misc/Makefile | 1 + drivers/input/misc/winbond-cir.c | 1614 ++++++++++++++++++++++++++++++++++++++ 4 files changed, 1637 insertions(+) create mode 100644 drivers/input/misc/winbond-cir.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 379db50951d3..da7a811f76a6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5640,6 +5640,12 @@ L: linux-scsi@vger.kernel.org S: Maintained F: drivers/scsi/wd7000.c +WINBOND CIR DRIVER +P: David Härdeman +M: david@hardeman.nu +S: Maintained +F: drivers/input/misc/winbond-cir.c + WIMAX STACK M: Inaky Perez-Gonzalez M: linux-wimax@intel.com diff --git a/drivers/input/misc/Kconfig b/drivers/input/misc/Kconfig index 1a50be379cbc..76d6751f89a7 100644 --- a/drivers/input/misc/Kconfig +++ b/drivers/input/misc/Kconfig @@ -222,6 +222,22 @@ config INPUT_SGI_BTNS To compile this driver as a module, choose M here: the module will be called sgi_btns. +config INPUT_WINBOND_CIR + tristate "Winbond IR remote control" + depends on X86 && PNP + select LEDS_CLASS + select BITREVERSE + help + Say Y here if you want to use the IR remote functionality found + in some Winbond SuperI/O chips. Currently only the WPCD376I + chip is supported (included in some Intel Media series motherboards). + + IR Receive and wake-on-IR from suspend and power-off are currently + supported. + + To compile this driver as a module, choose M here: the module will be + called winbond_cir.
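The "select BITREVERSE" in the entry above is needed because NEC remotes transmit each byte LSB-first, and the driver's NEC parser later in this patch uses bitrev8() from linux/bitrev.h to restore the logical bit order. A rough user-space sketch of the same transformation, with a made-up sample byte:

    #include <stdio.h>
    #include <stdint.h>

    /* Reverse the bit order within one byte, as the kernel's bitrev8() does */
    static uint8_t bitrev8(uint8_t b)
    {
            b = (b & 0xF0) >> 4 | (b & 0x0F) << 4;  /* swap nibbles */
            b = (b & 0xCC) >> 2 | (b & 0x33) << 2;  /* swap bit pairs */
            b = (b & 0xAA) >> 1 | (b & 0x55) << 1;  /* swap adjacent bits */
            return b;
    }

    int main(void)
    {
            uint8_t wire = 0x20;  /* hypothetical LSB-first byte off the wire */
            printf("wire 0x%02X -> logical 0x%02X\n", wire, bitrev8(wire));
            return 0;
    }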
+ config HP_SDC_RTC tristate "HP SDC Real Time Clock" depends on (GSC || HP300) && SERIO diff --git a/drivers/input/misc/Makefile b/drivers/input/misc/Makefile index bf4db626c313..a8b84854fb7b 100644 --- a/drivers/input/misc/Makefile +++ b/drivers/input/misc/Makefile @@ -26,6 +26,7 @@ obj-$(CONFIG_INPUT_SGI_BTNS) += sgi_btns.o obj-$(CONFIG_INPUT_SPARCSPKR) += sparcspkr.o obj-$(CONFIG_INPUT_TWL4030_PWRBUTTON) += twl4030-pwrbutton.o obj-$(CONFIG_INPUT_UINPUT) += uinput.o +obj-$(CONFIG_INPUT_WINBOND_CIR) += winbond-cir.o obj-$(CONFIG_INPUT_WISTRON_BTNS) += wistron_btns.o obj-$(CONFIG_INPUT_WM831X_ON) += wm831x-on.o obj-$(CONFIG_INPUT_YEALINK) += yealink.o diff --git a/drivers/input/misc/winbond-cir.c b/drivers/input/misc/winbond-cir.c new file mode 100644 index 000000000000..33309fe44e20 --- /dev/null +++ b/drivers/input/misc/winbond-cir.c @@ -0,0 +1,1614 @@ +/* + * winbond-cir.c - Driver for the Consumer IR functionality of Winbond + * SuperI/O chips. + * + * Currently supports the Winbond WPCD376i chip (PNP id WEC1022), but + * could probably support others (Winbond WEC102X, NatSemi, etc) + * with minor modifications. + * + * Original Author: David Härdeman + * Copyright (C) 2009 David Härdeman + * + * Dedicated to Matilda, my newborn daughter, without whose loving attention + * this driver would have been finished in half the time and with a fraction + * of the bugs. + * + * Written using: + * o Winbond WPCD376I datasheet helpfully provided by Jesse Barnes at Intel + * o NatSemi PC87338/PC97338 datasheet (for the serial port stuff) + * o DSDT dumps + * + * Supported features: + * o RC6 + * o Wake-On-CIR functionality + * + * To do: + * o Test NEC and RC5 + * + * Left as an exercise for the reader: + * o Learning (I have neither the hardware, nor the need) + * o IR Transmit (ibid) + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. + */ + +#include <linux/module.h> +#include <linux/pnp.h> +#include <linux/interrupt.h> +#include <linux/input.h> +#include <linux/leds.h> +#include <linux/list.h> +#include <linux/spinlock.h> +#include <linux/timer.h> +#include <linux/pci_ids.h> +#include <linux/io.h> +#include <linux/bitrev.h> +#include <linux/bitops.h> + +#define DRVNAME "winbond-cir" + +/* CEIR Wake-Up Registers, relative to data->wbase */ +#define WBCIR_REG_WCEIR_CTL 0x03 /* CEIR Receiver Control */ +#define WBCIR_REG_WCEIR_STS 0x04 /* CEIR Receiver Status */ +#define WBCIR_REG_WCEIR_EV_EN 0x05 /* CEIR Receiver Event Enable */ +#define WBCIR_REG_WCEIR_CNTL 0x06 /* CEIR Receiver Counter Low */ +#define WBCIR_REG_WCEIR_CNTH 0x07 /* CEIR Receiver Counter High */ +#define WBCIR_REG_WCEIR_INDEX 0x08 /* CEIR Receiver Index */ +#define WBCIR_REG_WCEIR_DATA 0x09 /* CEIR Receiver Data */ +#define WBCIR_REG_WCEIR_CSL 0x0A /* CEIR Re. Compare Strlen */ +#define WBCIR_REG_WCEIR_CFG1 0x0B /* CEIR Re. Configuration 1 */ +#define WBCIR_REG_WCEIR_CFG2 0x0C /* CEIR Re.
Configuration 2 */ + +/* CEIR Enhanced Functionality Registers, relative to data->ebase */ +#define WBCIR_REG_ECEIR_CTS 0x00 /* Enhanced IR Control Status */ +#define WBCIR_REG_ECEIR_CCTL 0x01 /* Infrared Counter Control */ +#define WBCIR_REG_ECEIR_CNT_LO 0x02 /* Infrared Counter LSB */ +#define WBCIR_REG_ECEIR_CNT_HI 0x03 /* Infrared Counter MSB */ +#define WBCIR_REG_ECEIR_IREM 0x04 /* Infrared Emitter Status */ + +/* SP3 Banked Registers, relative to data->sbase */ +#define WBCIR_REG_SP3_BSR 0x03 /* Bank Select, all banks */ + /* Bank 0 */ +#define WBCIR_REG_SP3_RXDATA 0x00 /* FIFO RX data (r) */ +#define WBCIR_REG_SP3_TXDATA 0x00 /* FIFO TX data (w) */ +#define WBCIR_REG_SP3_IER 0x01 /* Interrupt Enable */ +#define WBCIR_REG_SP3_EIR 0x02 /* Event Identification (r) */ +#define WBCIR_REG_SP3_FCR 0x02 /* FIFO Control (w) */ +#define WBCIR_REG_SP3_MCR 0x04 /* Mode Control */ +#define WBCIR_REG_SP3_LSR 0x05 /* Link Status */ +#define WBCIR_REG_SP3_MSR 0x06 /* Modem Status */ +#define WBCIR_REG_SP3_ASCR 0x07 /* Aux Status and Control */ + /* Bank 2 */ +#define WBCIR_REG_SP3_BGDL 0x00 /* Baud Divisor LSB */ +#define WBCIR_REG_SP3_BGDH 0x01 /* Baud Divisor MSB */ +#define WBCIR_REG_SP3_EXCR1 0x02 /* Extended Control 1 */ +#define WBCIR_REG_SP3_EXCR2 0x04 /* Extended Control 2 */ +#define WBCIR_REG_SP3_TXFLV 0x06 /* TX FIFO Level */ +#define WBCIR_REG_SP3_RXFLV 0x07 /* RX FIFO Level */ + /* Bank 3 */ +#define WBCIR_REG_SP3_MRID 0x00 /* Module Identification */ +#define WBCIR_REG_SP3_SH_LCR 0x01 /* LCR Shadow */ +#define WBCIR_REG_SP3_SH_FCR 0x02 /* FCR Shadow */ + /* Bank 4 */ +#define WBCIR_REG_SP3_IRCR1 0x02 /* Infrared Control 1 */ + /* Bank 5 */ +#define WBCIR_REG_SP3_IRCR2 0x04 /* Infrared Control 2 */ + /* Bank 6 */ +#define WBCIR_REG_SP3_IRCR3 0x00 /* Infrared Control 3 */ +#define WBCIR_REG_SP3_SIR_PW 0x02 /* SIR Pulse Width */ + /* Bank 7 */ +#define WBCIR_REG_SP3_IRRXDC 0x00 /* IR RX Demod Control */ +#define WBCIR_REG_SP3_IRTXMC 0x01 /* IR TX Mod Control */ +#define WBCIR_REG_SP3_RCCFG 0x02 /* CEIR Config */ +#define WBCIR_REG_SP3_IRCFG1 0x04 /* Infrared Config 1 */ +#define WBCIR_REG_SP3_IRCFG4 0x07 /* Infrared Config 4 */ + +/* + * Magic values follow + */ + +/* No interrupts for WBCIR_REG_SP3_IER and WBCIR_REG_SP3_EIR */ +#define WBCIR_IRQ_NONE 0x00 +/* RX data bit for WBCIR_REG_SP3_IER and WBCIR_REG_SP3_EIR */ +#define WBCIR_IRQ_RX 0x01 +/* Over/Under-flow bit for WBCIR_REG_SP3_IER and WBCIR_REG_SP3_EIR */ +#define WBCIR_IRQ_ERR 0x04 +/* Led enable/disable bit for WBCIR_REG_ECEIR_CTS */ +#define WBCIR_LED_ENABLE 0x80 +/* RX data available bit for WBCIR_REG_SP3_LSR */ +#define WBCIR_RX_AVAIL 0x01 +/* RX disable bit for WBCIR_REG_SP3_ASCR */ +#define WBCIR_RX_DISABLE 0x20 +/* Extended mode enable bit for WBCIR_REG_SP3_EXCR1 */ +#define WBCIR_EXT_ENABLE 0x01 +/* Select compare register in WBCIR_REG_WCEIR_INDEX (bits 5 & 6) */ +#define WBCIR_REGSEL_COMPARE 0x10 +/* Select mask register in WBCIR_REG_WCEIR_INDEX (bits 5 & 6) */ +#define WBCIR_REGSEL_MASK 0x20 +/* Starting address of selected register in WBCIR_REG_WCEIR_INDEX */ +#define WBCIR_REG_ADDR0 0x00 + +/* Valid banks for the SP3 UART */ +enum wbcir_bank { + WBCIR_BANK_0 = 0x00, + WBCIR_BANK_1 = 0x80, + WBCIR_BANK_2 = 0xE0, + WBCIR_BANK_3 = 0xE4, + WBCIR_BANK_4 = 0xE8, + WBCIR_BANK_5 = 0xEC, + WBCIR_BANK_6 = 0xF0, + WBCIR_BANK_7 = 0xF4, +}; + +/* Supported IR Protocols */ +enum wbcir_protocol { + IR_PROTOCOL_RC5 = 0x0, + IR_PROTOCOL_NEC = 0x1, + IR_PROTOCOL_RC6 = 0x2, +}; + +/* Misc */ +#define WBCIR_NAME "Winbond CIR" +#define 
WBCIR_ID_FAMILY 0xF1 /* Family ID for the WPCD376I */ +#define WBCIR_ID_CHIP 0x04 /* Chip ID for the WPCD376I */ +#define IR_KEYPRESS_TIMEOUT 250 /* FIXME: should be per-protocol? */ +#define INVALID_SCANCODE 0x7FFFFFFF /* Invalid with all protos */ +#define WAKEUP_IOMEM_LEN 0x10 /* Wake-Up I/O Reg Len */ +#define EHFUNC_IOMEM_LEN 0x10 /* Enhanced Func I/O Reg Len */ +#define SP_IOMEM_LEN 0x08 /* Serial Port 3 (IR) Reg Len */ +#define WBCIR_MAX_IDLE_BYTES 10 + +static DEFINE_SPINLOCK(wbcir_lock); +static DEFINE_RWLOCK(keytable_lock); + +struct wbcir_key { + u32 scancode; + unsigned int keycode; +}; + +struct wbcir_keyentry { + struct wbcir_key key; + struct list_head list; +}; + +static struct wbcir_key rc6_def_keymap[] = { + { 0x800F0400, KEY_NUMERIC_0 }, + { 0x800F0401, KEY_NUMERIC_1 }, + { 0x800F0402, KEY_NUMERIC_2 }, + { 0x800F0403, KEY_NUMERIC_3 }, + { 0x800F0404, KEY_NUMERIC_4 }, + { 0x800F0405, KEY_NUMERIC_5 }, + { 0x800F0406, KEY_NUMERIC_6 }, + { 0x800F0407, KEY_NUMERIC_7 }, + { 0x800F0408, KEY_NUMERIC_8 }, + { 0x800F0409, KEY_NUMERIC_9 }, + { 0x800F041D, KEY_NUMERIC_STAR }, + { 0x800F041C, KEY_NUMERIC_POUND }, + { 0x800F0410, KEY_VOLUMEUP }, + { 0x800F0411, KEY_VOLUMEDOWN }, + { 0x800F0412, KEY_CHANNELUP }, + { 0x800F0413, KEY_CHANNELDOWN }, + { 0x800F040E, KEY_MUTE }, + { 0x800F040D, KEY_VENDOR }, /* Vista Logo Key */ + { 0x800F041E, KEY_UP }, + { 0x800F041F, KEY_DOWN }, + { 0x800F0420, KEY_LEFT }, + { 0x800F0421, KEY_RIGHT }, + { 0x800F0422, KEY_OK }, + { 0x800F0423, KEY_ESC }, + { 0x800F040F, KEY_INFO }, + { 0x800F040A, KEY_CLEAR }, + { 0x800F040B, KEY_ENTER }, + { 0x800F045B, KEY_RED }, + { 0x800F045C, KEY_GREEN }, + { 0x800F045D, KEY_YELLOW }, + { 0x800F045E, KEY_BLUE }, + { 0x800F045A, KEY_TEXT }, + { 0x800F0427, KEY_SWITCHVIDEOMODE }, + { 0x800F040C, KEY_POWER }, + { 0x800F0450, KEY_RADIO }, + { 0x800F0448, KEY_PVR }, + { 0x800F0447, KEY_AUDIO }, + { 0x800F0426, KEY_EPG }, + { 0x800F0449, KEY_CAMERA }, + { 0x800F0425, KEY_TV }, + { 0x800F044A, KEY_VIDEO }, + { 0x800F0424, KEY_DVD }, + { 0x800F0416, KEY_PLAY }, + { 0x800F0418, KEY_PAUSE }, + { 0x800F0419, KEY_STOP }, + { 0x800F0414, KEY_FASTFORWARD }, + { 0x800F041A, KEY_NEXT }, + { 0x800F041B, KEY_PREVIOUS }, + { 0x800F0415, KEY_REWIND }, + { 0x800F0417, KEY_RECORD }, +}; + +/* Registers and other state is protected by wbcir_lock */ +struct wbcir_data { + unsigned long wbase; /* Wake-Up Baseaddr */ + unsigned long ebase; /* Enhanced Func. 
Baseaddr */ + unsigned long sbase; /* Serial Port Baseaddr */ + unsigned int irq; /* Serial Port IRQ */ + + struct input_dev *input_dev; + struct timer_list timer_keyup; + struct led_trigger *rxtrigger; + struct led_trigger *txtrigger; + struct led_classdev led; + + u32 last_scancode; + unsigned int last_keycode; + u8 last_toggle; + u8 keypressed; + unsigned long keyup_jiffies; + unsigned int idle_count; + + /* RX irdata and parsing state */ + unsigned long irdata[30]; + unsigned int irdata_count; + unsigned int irdata_idle; + unsigned int irdata_off; + unsigned int irdata_error; + + /* Protected by keytable_lock */ + struct list_head keytable; +}; + +static enum wbcir_protocol protocol = IR_PROTOCOL_RC6; +module_param(protocol, uint, 0444); +MODULE_PARM_DESC(protocol, "IR protocol to use " + "(0 = RC5, 1 = NEC, 2 = RC6A, default)"); + +static int invert; /* default = 0 */ +module_param(invert, bool, 0444); +MODULE_PARM_DESC(invert, "Invert the signal from the IR receiver"); + +static unsigned int wake_sc = 0x800F040C; +module_param(wake_sc, uint, 0644); +MODULE_PARM_DESC(wake_sc, "Scancode of the power-on IR command"); + +static unsigned int wake_rc6mode = 6; +module_param(wake_rc6mode, uint, 0644); +MODULE_PARM_DESC(wake_rc6mode, "RC6 mode for the power-on command " + "(0 = 0, 6 = 6A, default)"); + + + +/***************************************************************************** + * + * UTILITY FUNCTIONS + * + *****************************************************************************/ + +/* Caller needs to hold wbcir_lock */ +static void +wbcir_set_bits(unsigned long addr, u8 bits, u8 mask) +{ + u8 val; + + val = inb(addr); + val = ((val & ~mask) | (bits & mask)); + outb(val, addr); +} + +/* Selects the register bank for the serial port */ +static inline void +wbcir_select_bank(struct wbcir_data *data, enum wbcir_bank bank) +{ + outb(bank, data->sbase + WBCIR_REG_SP3_BSR); +} + +static enum led_brightness +wbcir_led_brightness_get(struct led_classdev *led_cdev) +{ + struct wbcir_data *data = container_of(led_cdev, + struct wbcir_data, + led); + + if (inb(data->ebase + WBCIR_REG_ECEIR_CTS) & WBCIR_LED_ENABLE) + return LED_FULL; + else + return LED_OFF; +} + +static void +wbcir_led_brightness_set(struct led_classdev *led_cdev, + enum led_brightness brightness) +{ + struct wbcir_data *data = container_of(led_cdev, + struct wbcir_data, + led); + + wbcir_set_bits(data->ebase + WBCIR_REG_ECEIR_CTS, + brightness == LED_OFF ? 
0x00 : WBCIR_LED_ENABLE, + WBCIR_LED_ENABLE); +} + +/* Manchester encodes bits to RC6 message cells (see wbcir_parse_rc6) */ +static u8 +wbcir_to_rc6cells(u8 val) +{ + u8 coded = 0x00; + int i; + + val &= 0x0F; + for (i = 0; i < 4; i++) { + if (val & 0x01) + coded |= 0x02 << (i * 2); + else + coded |= 0x01 << (i * 2); + val >>= 1; + } + + return coded; +} + + + +/***************************************************************************** + * + * INPUT FUNCTIONS + * + *****************************************************************************/ + +static unsigned int +wbcir_do_getkeycode(struct wbcir_data *data, u32 scancode) +{ + struct wbcir_keyentry *keyentry; + unsigned int keycode = KEY_RESERVED; + unsigned long flags; + + read_lock_irqsave(&keytable_lock, flags); + + list_for_each_entry(keyentry, &data->keytable, list) { + if (keyentry->key.scancode == scancode) { + keycode = keyentry->key.keycode; + break; + } + } + + read_unlock_irqrestore(&keytable_lock, flags); + return keycode; +} + +static int +wbcir_getkeycode(struct input_dev *dev, int scancode, int *keycode) +{ + struct wbcir_data *data = input_get_drvdata(dev); + + *keycode = (int)wbcir_do_getkeycode(data, (u32)scancode); + return 0; +} + +static int +wbcir_setkeycode(struct input_dev *dev, int sscancode, int keycode) +{ + struct wbcir_data *data = input_get_drvdata(dev); + struct wbcir_keyentry *keyentry; + struct wbcir_keyentry *new_keyentry; + unsigned long flags; + unsigned int old_keycode = KEY_RESERVED; + u32 scancode = (u32)sscancode; + + if (keycode < 0 || keycode > KEY_MAX) + return -EINVAL; + + new_keyentry = kmalloc(sizeof(*new_keyentry), GFP_KERNEL); + if (!new_keyentry) + return -ENOMEM; + + write_lock_irqsave(&keytable_lock, flags); + + list_for_each_entry(keyentry, &data->keytable, list) { + if (keyentry->key.scancode != scancode) + continue; + + old_keycode = keyentry->key.keycode; + keyentry->key.keycode = keycode; + + if (keyentry->key.keycode == KEY_RESERVED) { + list_del(&keyentry->list); + kfree(keyentry); + } + + break; + } + + set_bit(keycode, dev->keybit); + + if (old_keycode == KEY_RESERVED) { + new_keyentry->key.scancode = scancode; + new_keyentry->key.keycode = keycode; + list_add(&new_keyentry->list, &data->keytable); + } else { + kfree(new_keyentry); + clear_bit(old_keycode, dev->keybit); + list_for_each_entry(keyentry, &data->keytable, list) { + if (keyentry->key.keycode == old_keycode) { + set_bit(old_keycode, dev->keybit); + break; + } + } + } + + write_unlock_irqrestore(&keytable_lock, flags); + return 0; +} + +/* + * Timer function to report keyup event some time after keydown is + * reported by the ISR. + */ +static void +wbcir_keyup(unsigned long cookie) +{ + struct wbcir_data *data = (struct wbcir_data *)cookie; + unsigned long flags; + + /* + * data->keyup_jiffies is used to prevent a race condition if a + * hardware interrupt occurs at this point and the keyup timer + * event is moved further into the future as a result. + * + * The timer will then be reactivated and this function called + * again in the future. We need to exit gracefully in that case + * to allow the input subsystem to do its auto-repeat magic or + * a keyup event might follow immediately after the keydown. 
+ */ + + spin_lock_irqsave(&wbcir_lock, flags); + + if (time_is_after_eq_jiffies(data->keyup_jiffies) && data->keypressed) { + data->keypressed = 0; + led_trigger_event(data->rxtrigger, LED_OFF); + input_report_key(data->input_dev, data->last_keycode, 0); + input_sync(data->input_dev); + } + + spin_unlock_irqrestore(&wbcir_lock, flags); +} + +static void +wbcir_keydown(struct wbcir_data *data, u32 scancode, u8 toggle) +{ + unsigned int keycode; + + /* Repeat? */ + if (data->last_scancode == scancode && + data->last_toggle == toggle && + data->keypressed) + goto set_timer; + data->last_scancode = scancode; + + /* Do we need to release an old keypress? */ + if (data->keypressed) { + input_report_key(data->input_dev, data->last_keycode, 0); + input_sync(data->input_dev); + data->keypressed = 0; + } + + /* Report scancode */ + input_event(data->input_dev, EV_MSC, MSC_SCAN, (int)scancode); + + /* Do we know this scancode? */ + keycode = wbcir_do_getkeycode(data, scancode); + if (keycode == KEY_RESERVED) + goto set_timer; + + /* Register a keypress */ + input_report_key(data->input_dev, keycode, 1); + data->keypressed = 1; + data->last_keycode = keycode; + data->last_toggle = toggle; + +set_timer: + input_sync(data->input_dev); + led_trigger_event(data->rxtrigger, + data->keypressed ? LED_FULL : LED_OFF); + data->keyup_jiffies = jiffies + msecs_to_jiffies(IR_KEYPRESS_TIMEOUT); + mod_timer(&data->timer_keyup, data->keyup_jiffies); +} + + + +/***************************************************************************** + * + * IR PARSING FUNCTIONS + * + *****************************************************************************/ + +/* Resets all irdata */ +static void +wbcir_reset_irdata(struct wbcir_data *data) +{ + memset(data->irdata, 0, sizeof(data->irdata)); + data->irdata_count = 0; + data->irdata_off = 0; + data->irdata_error = 0; +} + +/* Adds one bit of irdata */ +static void +add_irdata_bit(struct wbcir_data *data, int set) +{ + if (data->irdata_count >= sizeof(data->irdata) * 8) { + data->irdata_error = 1; + return; + } + + if (set) + __set_bit(data->irdata_count, data->irdata); + data->irdata_count++; +} + +/* Gets count bits of irdata */ +static u16 +get_bits(struct wbcir_data *data, int count) +{ + u16 val = 0x0; + + if (data->irdata_count - data->irdata_off < count) { + data->irdata_error = 1; + return 0x0; + } + + while (count > 0) { + val <<= 1; + if (test_bit(data->irdata_off, data->irdata)) + val |= 0x1; + count--; + data->irdata_off++; + } + + return val; +} + +/* Reads 16 cells and converts them to a byte */ +static u8 +wbcir_rc6cells_to_byte(struct wbcir_data *data) +{ + u16 raw = get_bits(data, 16); + u8 val = 0x00; + int bit; + + for (bit = 0; bit < 8; bit++) { + switch (raw & 0x03) { + case 0x01: + break; + case 0x02: + val |= (0x01 << bit); + break; + default: + data->irdata_error = 1; + break; + } + raw >>= 2; + } + + return val; +} + +/* Decodes a number of bits from raw RC5 data */ +static u8 +wbcir_get_rc5bits(struct wbcir_data *data, unsigned int count) +{ + u16 raw = get_bits(data, count * 2); + u8 val = 0x00; + int bit; + + for (bit = 0; bit < count; bit++) { + switch (raw & 0x03) { + case 0x01: + val |= (0x01 << bit); + break; + case 0x02: + break; + default: + data->irdata_error = 1; + break; + } + raw >>= 2; + } + + return val; +} + +static void +wbcir_parse_rc6(struct device *dev, struct wbcir_data *data) +{ + /* + * Normal bits are manchester coded as follows: + * cell0 + cell1 = logic "0" + * cell1 + cell0 = logic "1" + * + * The IR pulse has the following 
components: + * + * Leader - 6 * cell1 - discarded + * Gap - 2 * cell0 - discarded + * Start bit - Normal Coding - always "1" + * Mode Bit 2 - 0 - Normal Coding + * Toggle bit - Normal Coding with double bit time, + * e.g. cell0 + cell0 + cell1 + cell1 + * means logic "0". + * + * The rest depends on the mode, the following modes are known: + * + * MODE 0: + * Address Bit 7 - 0 - Normal Coding + * Command Bit 7 - 0 - Normal Coding + * + * MODE 6: + * The above Toggle Bit is used as a submode bit, 0 = A, 1 = B. + * Submode B is for pointing devices, only remotes using submode A + * are supported. + * + * Customer range bit - 0 => Customer = 7 bits, 0...127 + * 1 => Customer = 15 bits, 32768...65535 + * Customer Bits - Normal Coding + * + * Customer codes are allocated by Philips. The rest of the bits + * are customer dependent. The following is commonly used (and the + * only supported config): + * + * Toggle Bit - Normal Coding + * Address Bit 6 - 0 - Normal Coding + * Command Bit 7 - 0 - Normal Coding + * + * All modes are followed by at least 6 * cell0. + * + * MODE 0 msglen: + * 1 * 2 (start bit) + 3 * 2 (mode) + 2 * 2 (toggle) + + * 8 * 2 (address) + 8 * 2 (command) = + * 44 cells + * + * MODE 6A msglen: + * 1 * 2 (start bit) + 3 * 2 (mode) + 2 * 2 (submode) + + * 1 * 2 (customer range bit) + 7/15 * 2 (customer bits) + + * 1 * 2 (toggle bit) + 7 * 2 (address) + 8 * 2 (command) = + * 60 - 76 cells + */ + u8 mode; + u8 toggle; + u16 customer = 0x0; + u8 address; + u8 command; + u32 scancode; + + /* Leader mark */ + while (get_bits(data, 1) && !data->irdata_error) + /* Do nothing */; + + /* Leader space */ + if (get_bits(data, 1)) { + dev_dbg(dev, "RC6 - Invalid leader space\n"); + return; + } + + /* Start bit */ + if (get_bits(data, 2) != 0x02) { + dev_dbg(dev, "RC6 - Invalid start bit\n"); + return; + } + + /* Mode */ + mode = get_bits(data, 6); + switch (mode) { + case 0x15: /* 010101 = b000 */ + mode = 0; + break; + case 0x29: /* 101001 = b110 */ + mode = 6; + break; + default: + dev_dbg(dev, "RC6 - Invalid mode\n"); + return; + } + + /* Toggle bit / Submode bit */ + toggle = get_bits(data, 4); + switch (toggle) { + case 0x03: + toggle = 0; + break; + case 0x0C: + toggle = 1; + break; + default: + dev_dbg(dev, "RC6 - Toggle bit error\n"); + break; + } + + /* Customer */ + if (mode == 6) { + if (toggle != 0) { + dev_dbg(dev, "RC6B - Not Supported\n"); + return; + } + + customer = wbcir_rc6cells_to_byte(data); + + if (customer & 0x80) { + /* 15 bit customer value */ + customer <<= 8; + customer |= wbcir_rc6cells_to_byte(data); + } + } + + /* Address */ + address = wbcir_rc6cells_to_byte(data); + if (mode == 6) { + toggle = address >> 7; + address &= 0x7F; + } + + /* Command */ + command = wbcir_rc6cells_to_byte(data); + + /* Create scancode */ + scancode = command; + scancode |= address << 8; + scancode |= customer << 16; + + /* Last sanity check */ + if (data->irdata_error) { + dev_dbg(dev, "RC6 - Cell error(s)\n"); + return; + } + + dev_info(dev, "IR-RC6 ad 0x%02X cm 0x%02X cu 0x%04X " + "toggle %u mode %u scan 0x%08X\n", + address, + command, + customer, + (unsigned int)toggle, + (unsigned int)mode, + scancode); + + wbcir_keydown(data, scancode, toggle); +} + +static void +wbcir_parse_rc5(struct device *dev, struct wbcir_data *data) +{ + /* + * Bits are manchester coded as follows: + * cell1 + cell0 = logic "0" + * cell0 + cell1 = logic "1" + * (i.e. 
the reverse of RC6) + * + * Start bit 1 - "1" - discarded + * Start bit 2 - Must be inverted to get command bit 6 + * Toggle bit + * Address Bit 4 - 0 + * Command Bit 5 - 0 + */ + u8 toggle; + u8 address; + u8 command; + u32 scancode; + + /* Start bit 1 */ + if (!get_bits(data, 1)) { + dev_dbg(dev, "RC5 - Invalid start bit\n"); + return; + } + + /* Start bit 2 */ + if (!wbcir_get_rc5bits(data, 1)) + command = 0x40; + else + command = 0x00; + + toggle = wbcir_get_rc5bits(data, 1); + address = wbcir_get_rc5bits(data, 5); + command |= wbcir_get_rc5bits(data, 6); + scancode = address << 7 | command; + + /* Last sanity check */ + if (data->irdata_error) { + dev_dbg(dev, "RC5 - Invalid message\n"); + return; + } + + dev_dbg(dev, "IR-RC5 ad %u cm %u t %u s %u\n", + (unsigned int)address, + (unsigned int)command, + (unsigned int)toggle, + (unsigned int)scancode); + + wbcir_keydown(data, scancode, toggle); +} + +static void +wbcir_parse_nec(struct device *dev, struct wbcir_data *data) +{ + /* + * Each bit represents 560 us. + * + * Leader - 9 ms burst + * Gap - 4.5 ms silence + * Address1 bit 0 - 7 - Address 1 + * Address2 bit 0 - 7 - Address 2 + * Command1 bit 0 - 7 - Command 1 + * Command2 bit 0 - 7 - Command 2 + * + * Note the bit order! + * + * With the old NEC protocol, Address2 was the inverse of Address1 + * and Command2 was the inverse of Command1 and were used as + * an error check. + * + * With NEC extended, Address1 is the LSB of the Address and + * Address2 is the MSB, Command parsing remains unchanged. + * + * A repeat message is coded as: + * Leader - 9 ms burst + * Gap - 2.25 ms silence + * Repeat - 560 us active + */ + u8 address1; + u8 address2; + u8 command1; + u8 command2; + u16 address; + u32 scancode; + + /* Leader mark */ + while (get_bits(data, 1) && !data->irdata_error) + /* Do nothing */; + + /* Leader space */ + if (get_bits(data, 4)) { + dev_dbg(dev, "NEC - Invalid leader space\n"); + return; + } + + /* Repeat? 
*/ + if (get_bits(data, 1)) { + if (!data->keypressed) { + dev_dbg(dev, "NEC - Stray repeat message\n"); + return; + } + + dev_dbg(dev, "IR-NEC repeat s %u\n", + (unsigned int)data->last_scancode); + + wbcir_keydown(data, data->last_scancode, data->last_toggle); + return; + } + + /* Remaining leader space */ + if (get_bits(data, 3)) { + dev_dbg(dev, "NEC - Invalid leader space\n"); + return; + } + + address1 = bitrev8(get_bits(data, 8)); + address2 = bitrev8(get_bits(data, 8)); + command1 = bitrev8(get_bits(data, 8)); + command2 = bitrev8(get_bits(data, 8)); + + /* Sanity check */ + if (data->irdata_error) { + dev_dbg(dev, "NEC - Invalid message\n"); + return; + } + + /* Check command validity; the (u8) cast matters, since integer + * promotion would otherwise make a u8 != ~u8 comparison always true */ + if (command1 != (u8)~command2) { + dev_dbg(dev, "NEC - Command bytes mismatch\n"); + return; + } + + /* Check for extended NEC protocol */ + address = address1; + if (address1 != (u8)~address2) + address |= address2 << 8; + + scancode = address << 8 | command1; + + dev_dbg(dev, "IR-NEC ad %u cm %u s %u\n", + (unsigned int)address, + (unsigned int)command1, + (unsigned int)scancode); + + wbcir_keydown(data, scancode, !data->last_toggle); +} + + + +/***************************************************************************** + * + * INTERRUPT FUNCTIONS + * + *****************************************************************************/ + +static irqreturn_t +wbcir_irq_handler(int irqno, void *cookie) +{ + struct pnp_dev *device = cookie; + struct wbcir_data *data = pnp_get_drvdata(device); + struct device *dev = &device->dev; + u8 status; + unsigned long flags; + u8 irdata[8]; + int i; + unsigned int hw; + + spin_lock_irqsave(&wbcir_lock, flags); + + wbcir_select_bank(data, WBCIR_BANK_0); + + status = inb(data->sbase + WBCIR_REG_SP3_EIR); + + if (!(status & (WBCIR_IRQ_RX | WBCIR_IRQ_ERR))) { + spin_unlock_irqrestore(&wbcir_lock, flags); + return IRQ_NONE; + } + + if (status & WBCIR_IRQ_ERR) + data->irdata_error = 1; + + if (!(status & WBCIR_IRQ_RX)) + goto out; + + /* Since RXHDLEV is set, at least 8 bytes are in the FIFO */ + insb(data->sbase + WBCIR_REG_SP3_RXDATA, &irdata[0], 8); + + for (i = 0; i < sizeof(irdata); i++) { + hw = hweight8(irdata[i]); + if (hw > 4) + add_irdata_bit(data, 0); + else + add_irdata_bit(data, 1); + + if (hw == 8) + data->idle_count++; + else + data->idle_count = 0; + } + + if (data->idle_count > WBCIR_MAX_IDLE_BYTES) { + /* Set RXINACTIVE...
*/ + outb(WBCIR_RX_DISABLE, data->sbase + WBCIR_REG_SP3_ASCR); + + /* ...and drain the FIFO */ + while (inb(data->sbase + WBCIR_REG_SP3_LSR) & WBCIR_RX_AVAIL) + inb(data->sbase + WBCIR_REG_SP3_RXDATA); + + dev_dbg(dev, "IRDATA:\n"); + for (i = 0; i < data->irdata_count; i += BITS_PER_LONG) + dev_dbg(dev, "0x%08lX\n", data->irdata[i/BITS_PER_LONG]); + + switch (protocol) { + case IR_PROTOCOL_RC5: + wbcir_parse_rc5(dev, data); + break; + case IR_PROTOCOL_RC6: + wbcir_parse_rc6(dev, data); + break; + case IR_PROTOCOL_NEC: + wbcir_parse_nec(dev, data); + break; + } + + wbcir_reset_irdata(data); + data->idle_count = 0; + } + +out: + spin_unlock_irqrestore(&wbcir_lock, flags); + return IRQ_HANDLED; +} + + + +/***************************************************************************** + * + * SUSPEND/RESUME FUNCTIONS + * + *****************************************************************************/ + +static void +wbcir_shutdown(struct pnp_dev *device) +{ + struct device *dev = &device->dev; + struct wbcir_data *data = pnp_get_drvdata(device); + int do_wake = 1; + u8 match[11]; + u8 mask[11]; + u8 rc6_csl = 0; + int i; + + memset(match, 0, sizeof(match)); + memset(mask, 0, sizeof(mask)); + + if (wake_sc == INVALID_SCANCODE || !device_may_wakeup(dev)) { + do_wake = 0; + goto finish; + } + + switch (protocol) { + case IR_PROTOCOL_RC5: + if (wake_sc > 0xFFF) { + do_wake = 0; + dev_err(dev, "RC5 - Invalid wake scancode\n"); + break; + } + + /* Mask = 13 bits, ex toggle */ + mask[0] = 0xFF; + mask[1] = 0x17; + + match[0] = (wake_sc & 0x003F); /* 6 command bits */ + match[0] |= (wake_sc & 0x0180) >> 1; /* 2 address bits */ + match[1] = (wake_sc & 0x0E00) >> 9; /* 3 address bits */ + if (!(wake_sc & 0x0040)) /* 2nd start bit */ + match[1] |= 0x10; + + break; + + case IR_PROTOCOL_NEC: + if (wake_sc > 0xFFFFFF) { + do_wake = 0; + dev_err(dev, "NEC - Invalid wake scancode\n"); + break; + } + + mask[0] = mask[1] = mask[2] = mask[3] = 0xFF; + + match[1] = bitrev8((wake_sc & 0xFF)); + match[0] = ~match[1]; + + match[3] = bitrev8((wake_sc & 0xFF00) >> 8); + if (wake_sc > 0xFFFF) + match[2] = bitrev8((wake_sc & 0xFF0000) >> 16); + else + match[2] = ~match[3]; + + break; + + case IR_PROTOCOL_RC6: + + if (wake_rc6mode == 0) { + if (wake_sc > 0xFFFF) { + do_wake = 0; + dev_err(dev, "RC6 - Invalid wake scancode\n"); + break; + } + + /* Command */ + match[0] = wbcir_to_rc6cells(wake_sc >> 0); + mask[0] = 0xFF; + match[1] = wbcir_to_rc6cells(wake_sc >> 4); + mask[1] = 0xFF; + + /* Address */ + match[2] = wbcir_to_rc6cells(wake_sc >> 8); + mask[2] = 0xFF; + match[3] = wbcir_to_rc6cells(wake_sc >> 12); + mask[3] = 0xFF; + + /* Header */ + match[4] = 0x50; /* mode1 = mode0 = 0, ignore toggle */ + mask[4] = 0xF0; + match[5] = 0x09; /* start bit = 1, mode2 = 0 */ + mask[5] = 0x0F; + + rc6_csl = 44; + + } else if (wake_rc6mode == 6) { + i = 0; + + /* Command */ + match[i] = wbcir_to_rc6cells(wake_sc >> 0); + mask[i++] = 0xFF; + match[i] = wbcir_to_rc6cells(wake_sc >> 4); + mask[i++] = 0xFF; + + /* Address + Toggle */ + match[i] = wbcir_to_rc6cells(wake_sc >> 8); + mask[i++] = 0xFF; + match[i] = wbcir_to_rc6cells(wake_sc >> 12); + mask[i++] = 0x3F; + + /* Customer bits 7 - 0 */ + match[i] = wbcir_to_rc6cells(wake_sc >> 16); + mask[i++] = 0xFF; + match[i] = wbcir_to_rc6cells(wake_sc >> 20); + mask[i++] = 0xFF; + + if (wake_sc & 0x80000000) { + /* Customer range bit and bits 15 - 8 */ + match[i] = wbcir_to_rc6cells(wake_sc >> 24); + mask[i++] = 0xFF; + match[i] = wbcir_to_rc6cells(wake_sc >> 28); + mask[i++] = 0xFF; + 
rc6_csl = 76; + } else if (wake_sc <= 0x007FFFFF) { + rc6_csl = 60; + } else { + do_wake = 0; + dev_err(dev, "RC6 - Invalid wake scancode\n"); + break; + } + + /* Header */ + match[i] = 0x93; /* mode1 = mode0 = 1, submode = 0 */ + mask[i++] = 0xFF; + match[i] = 0x0A; /* start bit = 1, mode2 = 1 */ + mask[i++] = 0x0F; + + } else { + do_wake = 0; + dev_err(dev, "RC6 - Invalid wake mode\n"); + } + + break; + + default: + do_wake = 0; + break; + } + +finish: + if (do_wake) { + /* Set compare and compare mask */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_INDEX, + WBCIR_REGSEL_COMPARE | WBCIR_REG_ADDR0, + 0x3F); + outsb(data->wbase + WBCIR_REG_WCEIR_DATA, match, 11); + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_INDEX, + WBCIR_REGSEL_MASK | WBCIR_REG_ADDR0, + 0x3F); + outsb(data->wbase + WBCIR_REG_WCEIR_DATA, mask, 11); + + /* RC6 Compare String Len */ + outb(rc6_csl, data->wbase + WBCIR_REG_WCEIR_CSL); + + /* Clear status bits NEC_REP, BUFF, MSG_END, MATCH */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_STS, 0x17, 0x17); + + /* Clear BUFF_EN, Clear END_EN, Set MATCH_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_EV_EN, 0x01, 0x07); + + /* Set CEIR_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_CTL, 0x01, 0x01); + + } else { + /* Clear BUFF_EN, Clear END_EN, Clear MATCH_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_EV_EN, 0x00, 0x07); + + /* Clear CEIR_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_CTL, 0x00, 0x01); + } + + /* Disable interrupts */ + outb(WBCIR_IRQ_NONE, data->sbase + WBCIR_REG_SP3_IER); +} + +static int +wbcir_suspend(struct pnp_dev *device, pm_message_t state) +{ + wbcir_shutdown(device); + return 0; +} + +static int +wbcir_resume(struct pnp_dev *device) +{ + struct wbcir_data *data = pnp_get_drvdata(device); + + /* Clear BUFF_EN, Clear END_EN, Clear MATCH_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_EV_EN, 0x00, 0x07); + + /* Clear CEIR_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_CTL, 0x00, 0x01); + + /* Enable interrupts */ + wbcir_reset_irdata(data); + outb(WBCIR_IRQ_RX | WBCIR_IRQ_ERR, data->sbase + WBCIR_REG_SP3_IER); + + return 0; +} + + + +/***************************************************************************** + * + * SETUP/INIT FUNCTIONS + * + *****************************************************************************/ + +static void +wbcir_cfg_ceir(struct wbcir_data *data) +{ + u8 tmp; + + /* Set PROT_SEL, RX_INV, Clear CEIR_EN (needed for the led) */ + tmp = protocol << 4; + if (invert) + tmp |= 0x08; + outb(tmp, data->wbase + WBCIR_REG_WCEIR_CTL); + + /* Clear status bits NEC_REP, BUFF, MSG_END, MATCH */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_STS, 0x17, 0x17); + + /* Clear BUFF_EN, Clear END_EN, Clear MATCH_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_EV_EN, 0x00, 0x07); + + /* Set RC5 cell time to correspond to 36 kHz */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_CFG1, 0x4A, 0x7F); + + /* Set IRTX_INV */ + if (invert) + outb(0x04, data->ebase + WBCIR_REG_ECEIR_CCTL); + else + outb(0x00, data->ebase + WBCIR_REG_ECEIR_CCTL); + + /* + * Clear IR LED, set SP3 clock to 24Mhz + * set SP3_IRRX_SW to binary 01, helpfully not documented + */ + outb(0x10, data->ebase + WBCIR_REG_ECEIR_CTS); +} + +static int __devinit +wbcir_probe(struct pnp_dev *device, const struct pnp_device_id *dev_id) +{ + struct device *dev = &device->dev; + struct wbcir_data *data; + int err; + + if (!(pnp_port_len(device, 0) == EHFUNC_IOMEM_LEN && + pnp_port_len(device, 1) == WAKEUP_IOMEM_LEN && + pnp_port_len(device, 2) == 
SP_IOMEM_LEN)) { + dev_err(dev, "Invalid resources\n"); + return -ENODEV; + } + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) { + err = -ENOMEM; + goto exit; + } + + pnp_set_drvdata(device, data); + + data->ebase = pnp_port_start(device, 0); + data->wbase = pnp_port_start(device, 1); + data->sbase = pnp_port_start(device, 2); + data->irq = pnp_irq(device, 0); + + if (data->wbase == 0 || data->ebase == 0 || + data->sbase == 0 || data->irq == 0) { + err = -ENODEV; + dev_err(dev, "Invalid resources\n"); + goto exit_free_data; + } + + dev_dbg(&device->dev, "Found device " + "(w: 0x%lX, e: 0x%lX, s: 0x%lX, i: %u)\n", + data->wbase, data->ebase, data->sbase, data->irq); + + if (!request_region(data->wbase, WAKEUP_IOMEM_LEN, DRVNAME)) { + dev_err(dev, "Region 0x%lx-0x%lx already in use!\n", + data->wbase, data->wbase + WAKEUP_IOMEM_LEN - 1); + err = -EBUSY; + goto exit_free_data; + } + + if (!request_region(data->ebase, EHFUNC_IOMEM_LEN, DRVNAME)) { + dev_err(dev, "Region 0x%lx-0x%lx already in use!\n", + data->ebase, data->ebase + EHFUNC_IOMEM_LEN - 1); + err = -EBUSY; + goto exit_release_wbase; + } + + if (!request_region(data->sbase, SP_IOMEM_LEN, DRVNAME)) { + dev_err(dev, "Region 0x%lx-0x%lx already in use!\n", + data->sbase, data->sbase + SP_IOMEM_LEN - 1); + err = -EBUSY; + goto exit_release_ebase; + } + + err = request_irq(data->irq, wbcir_irq_handler, + IRQF_DISABLED, DRVNAME, device); + if (err) { + dev_err(dev, "Failed to claim IRQ %u\n", data->irq); + err = -EBUSY; + goto exit_release_sbase; + } + + led_trigger_register_simple("cir-tx", &data->txtrigger); + if (!data->txtrigger) { + err = -ENOMEM; + goto exit_free_irq; + } + + led_trigger_register_simple("cir-rx", &data->rxtrigger); + if (!data->rxtrigger) { + err = -ENOMEM; + goto exit_unregister_txtrigger; + } + + data->led.name = "cir::activity"; + data->led.default_trigger = "cir-rx"; + data->led.brightness_set = wbcir_led_brightness_set; + data->led.brightness_get = wbcir_led_brightness_get; + err = led_classdev_register(&device->dev, &data->led); + if (err) + goto exit_unregister_rxtrigger; + + data->input_dev = input_allocate_device(); + if (!data->input_dev) { + err = -ENOMEM; + goto exit_unregister_led; + } + + data->input_dev->evbit[0] = BIT(EV_KEY); + data->input_dev->name = WBCIR_NAME; + data->input_dev->phys = "wbcir/cir0"; + data->input_dev->id.bustype = BUS_HOST; + data->input_dev->id.vendor = PCI_VENDOR_ID_WINBOND; + data->input_dev->id.product = WBCIR_ID_FAMILY; + data->input_dev->id.version = WBCIR_ID_CHIP; + data->input_dev->getkeycode = wbcir_getkeycode; + data->input_dev->setkeycode = wbcir_setkeycode; + input_set_capability(data->input_dev, EV_MSC, MSC_SCAN); + input_set_drvdata(data->input_dev, data); + + err = input_register_device(data->input_dev); + if (err) + goto exit_free_input; + + data->last_scancode = INVALID_SCANCODE; + INIT_LIST_HEAD(&data->keytable); + setup_timer(&data->timer_keyup, wbcir_keyup, (unsigned long)data); + + /* Load default keymaps */ + if (protocol == IR_PROTOCOL_RC6) { + int i; + for (i = 0; i < ARRAY_SIZE(rc6_def_keymap); i++) { + err = wbcir_setkeycode(data->input_dev, + (int)rc6_def_keymap[i].scancode, + (int)rc6_def_keymap[i].keycode); + if (err) + goto exit_unregister_keys; + } + } + + device_init_wakeup(&device->dev, 1); + + wbcir_cfg_ceir(data); + + /* Disable interrupts */ + wbcir_select_bank(data, WBCIR_BANK_0); + outb(WBCIR_IRQ_NONE, data->sbase + WBCIR_REG_SP3_IER); + + /* Enable extended mode */ + wbcir_select_bank(data, WBCIR_BANK_2); + outb(WBCIR_EXT_ENABLE, 
data->sbase + WBCIR_REG_SP3_EXCR1); + + /* + * Configure baud generator, IR data will be sampled at + * a bitrate of: (24Mhz * prescaler) / (divisor * 16). + * + * The ECIR registers include a flag to change the + * 24Mhz clock freq to 48Mhz. + * + * It's not documented in the specs, but fifo levels + * other than 16 seems to be unsupported. + */ + + /* prescaler 1.0, tx/rx fifo lvl 16 */ + outb(0x30, data->sbase + WBCIR_REG_SP3_EXCR2); + + /* Set baud divisor to generate one byte per bit/cell */ + switch (protocol) { + case IR_PROTOCOL_RC5: + outb(0xA7, data->sbase + WBCIR_REG_SP3_BGDL); + break; + case IR_PROTOCOL_RC6: + outb(0x53, data->sbase + WBCIR_REG_SP3_BGDL); + break; + case IR_PROTOCOL_NEC: + outb(0x69, data->sbase + WBCIR_REG_SP3_BGDL); + break; + } + outb(0x00, data->sbase + WBCIR_REG_SP3_BGDH); + + /* Set CEIR mode */ + wbcir_select_bank(data, WBCIR_BANK_0); + outb(0xC0, data->sbase + WBCIR_REG_SP3_MCR); + inb(data->sbase + WBCIR_REG_SP3_LSR); /* Clear LSR */ + inb(data->sbase + WBCIR_REG_SP3_MSR); /* Clear MSR */ + + /* Disable RX demod, run-length encoding/decoding, set freq span */ + wbcir_select_bank(data, WBCIR_BANK_7); + outb(0x10, data->sbase + WBCIR_REG_SP3_RCCFG); + + /* Disable timer */ + wbcir_select_bank(data, WBCIR_BANK_4); + outb(0x00, data->sbase + WBCIR_REG_SP3_IRCR1); + + /* Enable MSR interrupt, Clear AUX_IRX */ + wbcir_select_bank(data, WBCIR_BANK_5); + outb(0x00, data->sbase + WBCIR_REG_SP3_IRCR2); + + /* Disable CRC */ + wbcir_select_bank(data, WBCIR_BANK_6); + outb(0x20, data->sbase + WBCIR_REG_SP3_IRCR3); + + /* Set RX/TX (de)modulation freq, not really used */ + wbcir_select_bank(data, WBCIR_BANK_7); + outb(0xF2, data->sbase + WBCIR_REG_SP3_IRRXDC); + outb(0x69, data->sbase + WBCIR_REG_SP3_IRTXMC); + + /* Set invert and pin direction */ + if (invert) + outb(0x10, data->sbase + WBCIR_REG_SP3_IRCFG4); + else + outb(0x00, data->sbase + WBCIR_REG_SP3_IRCFG4); + + /* Set FIFO thresholds (RX = 8, TX = 3), reset RX/TX */ + wbcir_select_bank(data, WBCIR_BANK_0); + outb(0x97, data->sbase + WBCIR_REG_SP3_FCR); + + /* Clear AUX status bits */ + outb(0xE0, data->sbase + WBCIR_REG_SP3_ASCR); + + /* Enable interrupts */ + outb(WBCIR_IRQ_RX | WBCIR_IRQ_ERR, data->sbase + WBCIR_REG_SP3_IER); + + return 0; + +exit_unregister_keys: + if (!list_empty(&data->keytable)) { + struct wbcir_keyentry *key; + struct wbcir_keyentry *keytmp; + + list_for_each_entry_safe(key, keytmp, &data->keytable, list) { + list_del(&key->list); + kfree(key); + } + } + input_unregister_device(data->input_dev); + /* Can't call input_free_device on an unregistered device */ + data->input_dev = NULL; +exit_free_input: + input_free_device(data->input_dev); +exit_unregister_led: + led_classdev_unregister(&data->led); +exit_unregister_rxtrigger: + led_trigger_unregister_simple(data->rxtrigger); +exit_unregister_txtrigger: + led_trigger_unregister_simple(data->txtrigger); +exit_free_irq: + free_irq(data->irq, device); +exit_release_sbase: + release_region(data->sbase, SP_IOMEM_LEN); +exit_release_ebase: + release_region(data->ebase, EHFUNC_IOMEM_LEN); +exit_release_wbase: + release_region(data->wbase, WAKEUP_IOMEM_LEN); +exit_free_data: + kfree(data); + pnp_set_drvdata(device, NULL); +exit: + return err; +} + +static void __devexit +wbcir_remove(struct pnp_dev *device) +{ + struct wbcir_data *data = pnp_get_drvdata(device); + struct wbcir_keyentry *key; + struct wbcir_keyentry *keytmp; + + /* Disable interrupts */ + wbcir_select_bank(data, WBCIR_BANK_0); + outb(WBCIR_IRQ_NONE, data->sbase + 
WBCIR_REG_SP3_IER); + + del_timer_sync(&data->timer_keyup); + + free_irq(data->irq, device); + + /* Clear status bits NEC_REP, BUFF, MSG_END, MATCH */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_STS, 0x17, 0x17); + + /* Clear CEIR_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_CTL, 0x00, 0x01); + + /* Clear BUFF_EN, END_EN, MATCH_EN */ + wbcir_set_bits(data->wbase + WBCIR_REG_WCEIR_EV_EN, 0x00, 0x07); + + /* This will generate a keyup event if necessary */ + input_unregister_device(data->input_dev); + + led_trigger_unregister_simple(data->rxtrigger); + led_trigger_unregister_simple(data->txtrigger); + led_classdev_unregister(&data->led); + + /* This is ok since &data->led isn't actually used */ + wbcir_led_brightness_set(&data->led, LED_OFF); + + release_region(data->wbase, WAKEUP_IOMEM_LEN); + release_region(data->ebase, EHFUNC_IOMEM_LEN); + release_region(data->sbase, SP_IOMEM_LEN); + + list_for_each_entry_safe(key, keytmp, &data->keytable, list) { + list_del(&key->list); + kfree(key); + } + + kfree(data); + + pnp_set_drvdata(device, NULL); +} + +static const struct pnp_device_id wbcir_ids[] = { + { "WEC1022", 0 }, + { "", 0 } +}; +MODULE_DEVICE_TABLE(pnp, wbcir_ids); + +static struct pnp_driver wbcir_driver = { + .name = WBCIR_NAME, + .id_table = wbcir_ids, + .probe = wbcir_probe, + .remove = __devexit_p(wbcir_remove), + .suspend = wbcir_suspend, + .resume = wbcir_resume, + .shutdown = wbcir_shutdown +}; + +static int __init +wbcir_init(void) +{ + int ret; + + switch (protocol) { + case IR_PROTOCOL_RC5: + case IR_PROTOCOL_NEC: + case IR_PROTOCOL_RC6: + break; + default: + printk(KERN_ERR DRVNAME ": Invalid protocol argument\n"); + return -EINVAL; + } + + ret = pnp_register_driver(&wbcir_driver); + if (ret) + printk(KERN_ERR DRVNAME ": Unable to register driver\n"); + + return ret; +} + +static void __exit +wbcir_exit(void) +{ + pnp_unregister_driver(&wbcir_driver); +} + +MODULE_AUTHOR("David Härdeman "); +MODULE_DESCRIPTION("Winbond SuperI/O Consumer IR Driver"); +MODULE_LICENSE("GPL"); + +module_init(wbcir_init); +module_exit(wbcir_exit); + + -- cgit v1.2.3 From 7e483865612568268773317219d07b38a8c4d62d Mon Sep 17 00:00:00 2001 From: Chen Liqin Date: Wed, 23 Sep 2009 11:40:15 +0800 Subject: score: update email address in MAINTAINERS. Signed-off-by: Chen Liqin --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d24c8823a8cb..4467da2a4d7a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4460,7 +4460,7 @@ SCORE ARCHITECTURE P: Chen Liqin M: liqin.chen@sunplusct.com P: Lennox Wu -M: lennox.wu@sunplusct.com +M: lennox.wu@gmail.com W: http://www.sunplusct.com S: Supported -- cgit v1.2.3 From 5429c7316577fcd859f6b53e10884bb8e1e3bcef Mon Sep 17 00:00:00 2001 From: Li Yang Date: Tue, 11 Aug 2009 11:11:11 +0800 Subject: USB: gadget: Update Freescale UDC entry in MAINTAINERS Change the F entry for file rename, and make it also cover fsl_qe_udc driver. Update the name accordingly. 
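The widened entry works because MAINTAINERS "F:" patterns are shell-style globs, so "drivers/usb/gadget/fsl*" now covers both the renamed highspeed UDC file and fsl_qe_udc.c. A rough user-space sketch of this kind of matching using POSIX fnmatch(); the file names are illustrative, and the kernel's scripts/get_maintainer.pl implements the real lookup:

    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
            const char *pattern = "drivers/usb/gadget/fsl*";
            const char *files[] = {
                    "drivers/usb/gadget/fsl_udc_core.c",  /* renamed UDC core */
                    "drivers/usb/gadget/fsl_qe_udc.c",    /* now covered too */
                    "drivers/usb/gadget/dummy_hcd.c",     /* still not covered */
            };
            int i;

            /* fnmatch() returns 0 when the pattern matches the path */
            for (i = 0; i < 3; i++)
                    printf("%-36s %s\n", files[i],
                           fnmatch(pattern, files[i], 0) == 0 ? "match" : "no match");
            return 0;
    }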
Signed-off-by: Li Yang Acked-by: Guennadi Liakhovetski Cc: Joe Perches Signed-off-by: Greg Kroah-Hartman --- MAINTAINERS | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d24c8823a8cb..5e1bf0ce1d59 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2107,12 +2107,12 @@ S: Supported F: arch/powerpc/sysdev/qe_lib/ F: arch/powerpc/include/asm/*qe.h -FREESCALE HIGHSPEED USB DEVICE DRIVER +FREESCALE USB PERIPHERAL DRIVERS M: Li Yang L: linux-usb@vger.kernel.org L: linuxppc-dev@ozlabs.org S: Maintained -F: drivers/usb/gadget/fsl_usb2_udc.c +F: drivers/usb/gadget/fsl* FREESCALE QUICC ENGINE UCC ETHERNET DRIVER M: Li Yang -- cgit v1.2.3 From 515350b6fd041396f425180589e08812dd13615f Mon Sep 17 00:00:00 2001 From: Bartlomiej Zolnierkiewicz Date: Tue, 22 Sep 2009 16:43:54 -0700 Subject: MAINTAINERS: remove dead ncpfs list On Saturday 01 August 2009 00:30:39 Mail Delivery Subsystem wrote: > Delivery to the following recipient failed permanently: > > linware@sh.cvut.cz > > Technical details of permanent failure: > Google tried to deliver your message, but it was rejected by the recipient > domain. We recommend contacting the other email provider for further > information about the cause of this error. The error that the other server > returned was: 450 450 : Recipient address rejected: > undeliverable address: unknown user: "linware" (state 14). Cc: Petr Vandrovec Signed-off-by: Bartlomiej Zolnierkiewicz Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 1 - 1 file changed, 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d24c8823a8cb..017cf691bb03 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3527,7 +3527,6 @@ F: drivers/net/natsemi.c NCP FILESYSTEM M: Petr Vandrovec -L: linware@sh.cvut.cz S: Maintained F: fs/ncpfs/ -- cgit v1.2.3 From 653f41b52dfc63fecf4a2333f13be28b159a918c Mon Sep 17 00:00:00 2001 From: Madhusudhan Chikkature Date: Tue, 22 Sep 2009 16:45:06 -0700 Subject: MAINTAINERS: update for TI OMAP hsmmc driver Update maintainers entry for TI OMAP HS MMC support.
Signed-off-by: Madhusudhan Chikkature Acked-by: Kevin Hilman Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 017cf691bb03..d8973ca86f2a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3768,7 +3768,13 @@ OMAP MMC SUPPORT M: Jarkko Lavinen L: linux-omap@vger.kernel.org S: Maintained -F: drivers/mmc/host/*omap* +F: drivers/mmc/host/omap.c + +OMAP HS MMC SUPPORT +M: Madhusudhan Chikkature +L: linux-omap@vger.kernel.org +S: Maintained +F: drivers/mmc/host/omap_hsmmc.c OMAP RANDOM NUMBER GENERATOR SUPPORT M: Deepak Saxena -- cgit v1.2.3 From c0d0787b6d47d9f4d5e8bd321921104e854a9135 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 23 Sep 2009 15:57:17 -0700 Subject: MAINTAINERS: add Matt Mackall and Herbert Xu to HARDWARE RANDOM NUMBER GENERATOR Signed-off-by: Joe Perches Cc: David Daney Cc: Herbert Xu Cc: Matt Mackall Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 7c1c0b05b298..0c138ba86526 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2331,7 +2331,9 @@ S: Orphan F: drivers/hwmon/ HARDWARE RANDOM NUMBER GENERATOR CORE -S: Orphan +M: Matt Mackall +M: Herbert Xu +S: Odd fixes F: Documentation/hw_random.txt F: drivers/char/hw_random/ F: include/linux/hw_random.h -- cgit v1.2.3 From b882877082602bd2d9444069bae1926bf10ebe0d Mon Sep 17 00:00:00 2001 From: James Bottomley Date: Tue, 4 Aug 2009 23:59:55 +0000 Subject: parisc: add me to Maintainers It has been suggested that given my proficiency with less popular architectures, I should be officially listed as willing to see to the care and feeding of this one. Signed-off-by: James Bottomley Signed-off-by: Kyle McMartin --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c450f3abb8c9..a51b896528c8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3953,6 +3953,7 @@ F: drivers/block/paride/ PARISC ARCHITECTURE M: Kyle McMartin M: Helge Deller +M: "James E.J. 
Bottomley" L: linux-parisc@vger.kernel.org W: http://www.parisc-linux.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kyle/parisc-2.6.git -- cgit v1.2.3 From b411b3637fa71fce9cf2acf0639009500f5892fe Mon Sep 17 00:00:00 2001 From: Philipp Reisner Date: Fri, 25 Sep 2009 16:07:19 -0700 Subject: The DRBD driver Signed-off-by: Philipp Reisner Signed-off-by: Lars Ellenberg --- .../blockdev/drbd/DRBD-8.3-data-packets.svg | 588 +++ Documentation/blockdev/drbd/DRBD-data-packets.svg | 459 ++ Documentation/blockdev/drbd/README.txt | 16 + Documentation/blockdev/drbd/conn-states-8.dot | 18 + Documentation/blockdev/drbd/disk-states-8.dot | 16 + .../drbd/drbd-connection-state-overview.dot | 85 + Documentation/blockdev/drbd/node-states-8.dot | 14 + MAINTAINERS | 13 + drivers/block/Kconfig | 2 + drivers/block/Makefile | 1 + drivers/block/drbd/Kconfig | 82 + drivers/block/drbd/Makefile | 8 + drivers/block/drbd/drbd_actlog.c | 1484 +++++++ drivers/block/drbd/drbd_bitmap.c | 1327 ++++++ drivers/block/drbd/drbd_int.h | 2258 ++++++++++ drivers/block/drbd/drbd_main.c | 3735 ++++++++++++++++ drivers/block/drbd/drbd_nl.c | 2365 +++++++++++ drivers/block/drbd/drbd_proc.c | 266 ++ drivers/block/drbd/drbd_receiver.c | 4456 ++++++++++++++++++++ drivers/block/drbd/drbd_req.c | 1132 +++++ drivers/block/drbd/drbd_req.h | 327 ++ drivers/block/drbd/drbd_strings.c | 113 + drivers/block/drbd/drbd_tracing.c | 752 ++++ drivers/block/drbd/drbd_tracing.h | 87 + drivers/block/drbd/drbd_vli.h | 351 ++ drivers/block/drbd/drbd_worker.c | 1529 +++++++ drivers/block/drbd/drbd_wrappers.h | 91 + include/linux/drbd.h | 349 ++ include/linux/drbd_limits.h | 137 + include/linux/drbd_nl.h | 137 + include/linux/drbd_tag_magic.h | 83 + include/linux/lru_cache.h | 294 ++ lib/Kconfig | 3 + lib/Makefile | 2 + lib/lru_cache.c | 560 +++ 35 files changed, 23140 insertions(+) create mode 100644 Documentation/blockdev/drbd/DRBD-8.3-data-packets.svg create mode 100644 Documentation/blockdev/drbd/DRBD-data-packets.svg create mode 100644 Documentation/blockdev/drbd/README.txt create mode 100644 Documentation/blockdev/drbd/conn-states-8.dot create mode 100644 Documentation/blockdev/drbd/disk-states-8.dot create mode 100644 Documentation/blockdev/drbd/drbd-connection-state-overview.dot create mode 100644 Documentation/blockdev/drbd/node-states-8.dot create mode 100644 drivers/block/drbd/Kconfig create mode 100644 drivers/block/drbd/Makefile create mode 100644 drivers/block/drbd/drbd_actlog.c create mode 100644 drivers/block/drbd/drbd_bitmap.c create mode 100644 drivers/block/drbd/drbd_int.h create mode 100644 drivers/block/drbd/drbd_main.c create mode 100644 drivers/block/drbd/drbd_nl.c create mode 100644 drivers/block/drbd/drbd_proc.c create mode 100644 drivers/block/drbd/drbd_receiver.c create mode 100644 drivers/block/drbd/drbd_req.c create mode 100644 drivers/block/drbd/drbd_req.h create mode 100644 drivers/block/drbd/drbd_strings.c create mode 100644 drivers/block/drbd/drbd_tracing.c create mode 100644 drivers/block/drbd/drbd_tracing.h create mode 100644 drivers/block/drbd/drbd_vli.h create mode 100644 drivers/block/drbd/drbd_worker.c create mode 100644 drivers/block/drbd/drbd_wrappers.h create mode 100644 include/linux/drbd.h create mode 100644 include/linux/drbd_limits.h create mode 100644 include/linux/drbd_nl.h create mode 100644 include/linux/drbd_tag_magic.h create mode 100644 include/linux/lru_cache.h create mode 100644 lib/lru_cache.c (limited to 'MAINTAINERS') diff --git a/Documentation/blockdev/drbd/DRBD-8.3-data-packets.svg 
b/Documentation/blockdev/drbd/DRBD-8.3-data-packets.svg new file mode 100644 index 000000000000..f87cfa0dc2fb --- /dev/null +++ b/Documentation/blockdev/drbd/DRBD-8.3-data-packets.svg @@ -0,0 +1,588 @@ [SVG figure, only text labels survive extraction -- "DRBD-8.3 data flow": panels "Checksum based Resync, case not in sync", "Checksum based Resync, case in sync" and "Online verify", relating packets (CsumRSRequest, RSDataReply, RSIsInSync, OVRequest, OVReply, OVResult, WriteAck) to the worker/receiver/endio functions that produce and consume them, bracketed by rs_begin_io()/rs_complete_io()] diff --git a/Documentation/blockdev/drbd/DRBD-data-packets.svg b/Documentation/blockdev/drbd/DRBD-data-packets.svg new file mode 100644 index 000000000000..48a1e2165fec --- /dev/null +++ b/Documentation/blockdev/drbd/DRBD-data-packets.svg @@ -0,0 +1,459 @@ [SVG figure, text labels only -- "DRBD 8 data flow": panels "Resync blocks, 4-32K", "Regular mirrored write, 512-32K" and "Diskless read, 512-32K", relating RSDataRequest/RSDataReply, Data/WriteAck and DataRequest/DataReply to the corresponding drbd_make_request()/worker/receiver/endio functions, bracketed by al_begin_io()/al_complete_io() and rs_begin_io()/rs_complete_io()] diff --git a/Documentation/blockdev/drbd/README.txt b/Documentation/blockdev/drbd/README.txt new file mode 100644 index 000000000000..627b0a1bf35e --- /dev/null +++ b/Documentation/blockdev/drbd/README.txt @@ -0,0 +1,16 @@ +Description + + DRBD is a shared-nothing, synchronously replicated block device. It + is designed to serve as a building block for high availability + clusters and in this context, is a "drop-in" replacement for shared + storage. Simplistically, you could see it as a network RAID 1. + + Please visit http://www.drbd.org to find out more. + +The files included here are intended to help understand the implementation + +DRBD-8.3-data-packets.svg, DRBD-data-packets.svg + relate some functions and write packets.
+ +conn-states-8.dot, disk-states-8.dot, node-states-8.dot + The sub graphs of DRBD's state transitions diff --git a/Documentation/blockdev/drbd/conn-states-8.dot b/Documentation/blockdev/drbd/conn-states-8.dot new file mode 100644 index 000000000000..025e8cf5e64a --- /dev/null +++ b/Documentation/blockdev/drbd/conn-states-8.dot @@ -0,0 +1,18 @@ +digraph conn_states { + StandAllone -> WFConnection [ label = "ioctl_set_net()" ] + WFConnection -> Unconnected [ label = "unable to bind()" ] + WFConnection -> WFReportParams [ label = "in connect() after accept" ] + WFReportParams -> StandAllone [ label = "checks in receive_param()" ] + WFReportParams -> Connected [ label = "in receive_param()" ] + WFReportParams -> WFBitMapS [ label = "sync_handshake()" ] + WFReportParams -> WFBitMapT [ label = "sync_handshake()" ] + WFBitMapS -> SyncSource [ label = "receive_bitmap()" ] + WFBitMapT -> SyncTarget [ label = "receive_bitmap()" ] + SyncSource -> Connected + SyncTarget -> Connected + SyncSource -> PausedSyncS + SyncTarget -> PausedSyncT + PausedSyncS -> SyncSource + PausedSyncT -> SyncTarget + Connected -> WFConnection [ label = "* on network error" ] +} diff --git a/Documentation/blockdev/drbd/disk-states-8.dot b/Documentation/blockdev/drbd/disk-states-8.dot new file mode 100644 index 000000000000..d06cfb46fb98 --- /dev/null +++ b/Documentation/blockdev/drbd/disk-states-8.dot @@ -0,0 +1,16 @@ +digraph disk_states { + Diskless -> Inconsistent [ label = "ioctl_set_disk()" ] + Diskless -> Consistent [ label = "ioctl_set_disk()" ] + Diskless -> Outdated [ label = "ioctl_set_disk()" ] + Consistent -> Outdated [ label = "receive_param()" ] + Consistent -> UpToDate [ label = "receive_param()" ] + Consistent -> Inconsistent [ label = "start resync" ] + Outdated -> Inconsistent [ label = "start resync" ] + UpToDate -> Inconsistent [ label = "ioctl_replicate" ] + Inconsistent -> UpToDate [ label = "resync completed" ] + Consistent -> Failed [ label = "io completion error" ] + Outdated -> Failed [ label = "io completion error" ] + UpToDate -> Failed [ label = "io completion error" ] + Inconsistent -> Failed [ label = "io completion error" ] + Failed -> Diskless [ label = "sending notify to peer" ] +} diff --git a/Documentation/blockdev/drbd/drbd-connection-state-overview.dot b/Documentation/blockdev/drbd/drbd-connection-state-overview.dot new file mode 100644 index 000000000000..6d9cf0a7b11d --- /dev/null +++ b/Documentation/blockdev/drbd/drbd-connection-state-overview.dot @@ -0,0 +1,85 @@ +// vim: set sw=2 sts=2 : +digraph { + rankdir=BT + bgcolor=white + + node [shape=plaintext] + node [fontcolor=black] + + StandAlone [ style=filled,fillcolor=gray,label=StandAlone ] + + node [fontcolor=lightgray] + + Unconnected [ label=Unconnected ] + + CommTrouble [ shape=record, + label="{communication loss|{Timeout|BrokenPipe|NetworkFailure}}" ] + + node [fontcolor=gray] + + subgraph cluster_try_connect { + label="try to connect, handshake" + rank=max + WFConnection [ label=WFConnection ] + WFReportParams [ label=WFReportParams ] + } + + TearDown [ label=TearDown ] + + Connected [ label=Connected,style=filled,fillcolor=green,fontcolor=black ] + + node [fontcolor=lightblue] + + StartingSyncS [ label=StartingSyncS ] + StartingSyncT [ label=StartingSyncT ] + + subgraph cluster_bitmap_exchange { + node [fontcolor=red] + fontcolor=red + label="new application (WRITE?) 
requests blocked\lwhile bitmap is exchanged" + + WFBitMapT [ label=WFBitMapT ] + WFSyncUUID [ label=WFSyncUUID ] + WFBitMapS [ label=WFBitMapS ] + } + + node [fontcolor=blue] + + cluster_resync [ shape=record,label="{resynchronisation process running\l'concurrent' application requests allowed|{{PausedSyncT\nSyncTarget}|{PausedSyncS\nSyncSource}}}" ] + + node [shape=box,fontcolor=black] + + // drbdadm [label="drbdadm connect"] + // handshake [label="drbd_connect()\ndrbd_do_handshake\ndrbd_sync_handshake() etc."] + // comm_error [label="communication trouble"] + + // + // edges + // -------------------------------------- + + StandAlone -> Unconnected [ label="drbdadm connect" ] + Unconnected -> StandAlone [ label="drbdadm disconnect\lor serious communication trouble" ] + Unconnected -> WFConnection [ label="receiver thread is started" ] + WFConnection -> WFReportParams [ headlabel="accept()\land/or \lconnect()\l" ] + + WFReportParams -> StandAlone [ label="during handshake\lpeers do not agree\labout something essential" ] + WFReportParams -> Connected [ label="data identical\lno sync needed",color=green,fontcolor=green ] + + WFReportParams -> WFBitMapS + WFReportParams -> WFBitMapT + WFBitMapT -> WFSyncUUID [minlen=0.1,constraint=false] + + WFBitMapS -> cluster_resync:S + WFSyncUUID -> cluster_resync:T + + edge [color=green] + cluster_resync:any -> Connected [ label="resync done",fontcolor=green ] + + edge [color=red] + WFReportParams -> CommTrouble + Connected -> CommTrouble + cluster_resync:any -> CommTrouble + edge [color=black] + CommTrouble -> Unconnected [label="receiver thread is stopped" ] + +} diff --git a/Documentation/blockdev/drbd/node-states-8.dot b/Documentation/blockdev/drbd/node-states-8.dot new file mode 100644 index 000000000000..4a2b00c23547 --- /dev/null +++ b/Documentation/blockdev/drbd/node-states-8.dot @@ -0,0 +1,14 @@ +digraph node_states { + Secondary -> Primary [ label = "ioctl_set_state()" ] + Primary -> Secondary [ label = "ioctl_set_state()" ] +} + +digraph peer_states { + Secondary -> Primary [ label = "recv state packet" ] + Primary -> Secondary [ label = "recv state packet" ] + Primary -> Unknown [ label = "connection lost" ] + Secondary -> Unknown [ label = "connection lost" ] + Unknown -> Primary [ label = "connected" ] + Unknown -> Secondary [ label = "connected" ] +} + diff --git a/MAINTAINERS b/MAINTAINERS index c450f3abb8c9..ea56bd7a6cba 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1758,6 +1758,19 @@ S: Maintained F: drivers/scsi/dpt* F: drivers/scsi/dpt/ +DRBD DRIVER +P: Philipp Reisner +P: Lars Ellenberg +M: drbd-dev@lists.linbit.com +L: drbd-user@lists.linbit.com +W: http://www.drbd.org +T: git git://git.drbd.org/linux-2.6-drbd.git drbd +T: git git://git.drbd.org/drbd-8.3.git +S: Supported +F: drivers/block/drbd/ +F: lib/lru_cache.c +F: Documentation/blockdev/drbd/ + DRIVER CORE, KOBJECTS, AND SYSFS M: Greg Kroah-Hartman T: quilt kernel.org/pub/linux/kernel/people/gregkh/gregkh-2.6/ diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig index 1d886e079c58..77bfce52e9ca 100644 --- a/drivers/block/Kconfig +++ b/drivers/block/Kconfig @@ -271,6 +271,8 @@ config BLK_DEV_CRYPTOLOOP instead, which can be configured to be on-disk compatible with the cryptoloop device.
+source "drivers/block/drbd/Kconfig" + config BLK_DEV_NBD tristate "Network block device support" depends on NET diff --git a/drivers/block/Makefile b/drivers/block/Makefile index cdaa3f8fddf0..aff5ac925c34 100644 --- a/drivers/block/Makefile +++ b/drivers/block/Makefile @@ -36,5 +36,6 @@ obj-$(CONFIG_BLK_DEV_UB) += ub.o obj-$(CONFIG_BLK_DEV_HD) += hd.o obj-$(CONFIG_XEN_BLKDEV_FRONTEND) += xen-blkfront.o +obj-$(CONFIG_BLK_DEV_DRBD) += drbd/ swim_mod-objs := swim.o swim_asm.o diff --git a/drivers/block/drbd/Kconfig b/drivers/block/drbd/Kconfig new file mode 100644 index 000000000000..4e6f90f487c2 --- /dev/null +++ b/drivers/block/drbd/Kconfig @@ -0,0 +1,82 @@ +# +# DRBD device driver configuration +# + +comment "DRBD disabled because PROC_FS, INET or CONNECTOR not selected" + depends on !PROC_FS || !INET || !CONNECTOR + +config BLK_DEV_DRBD + tristate "DRBD Distributed Replicated Block Device support" + depends on PROC_FS && INET && CONNECTOR + select LRU_CACHE + default n + help + + NOTE: In order to authenticate connections you have to select + CRYPTO_HMAC and a hash function as well. + + DRBD is a shared-nothing, synchronously replicated block device. It + is designed to serve as a building block for high availability + clusters and in this context, is a "drop-in" replacement for shared + storage. Simplistically, you could see it as a network RAID 1. + + Each minor device has a role, which can be 'primary' or 'secondary'. + On the node with the primary device the application is supposed to + run and to access the device (/dev/drbdX). Every write is sent to + the local 'lower level block device' and, across the network, to the + node with the device in 'secondary' state. The secondary device + simply writes the data to its lower level block device. + + DRBD can also be used in dual-Primary mode (device writable on both + nodes), which means it can exhibit shared disk semantics in a + shared-nothing cluster. Needless to say, on top of dual-Primary + DRBD utilizing a cluster file system is necessary to maintain + cache coherency. + + For automatic failover you need a cluster manager (e.g. heartbeat). + See also: http://www.drbd.org/, http://www.linux-ha.org + + If unsure, say N. + +config DRBD_TRACE + tristate "DRBD tracing" + depends on BLK_DEV_DRBD + select TRACEPOINTS + default n + help + + Say Y here if you want to be able to trace various events in DRBD. + + If unsure, say N. + +config DRBD_FAULT_INJECTION + bool "DRBD fault injection" + depends on BLK_DEV_DRBD + help + + Say Y here if you want to simulate IO errors, in order to test DRBD's + behavior. + + The actual simulation of IO errors is done by writing 3 values to + /sys/module/drbd/parameters/ + + enable_faults: bitmask of... + 1 meta data write + 2 read + 4 resync data write + 8 read + 16 data write + 32 data read + 64 read ahead + 128 kmalloc of bitmap + 256 allocation of EE (epoch_entries) + + fault_devs: bitmask of minor numbers + fault_rate: frequency in percent + + Example: Simulate data write errors on /dev/drbd0 with a probability of 5%. + echo 16 > /sys/module/drbd/parameters/enable_faults + echo 1 > /sys/module/drbd/parameters/fault_devs + echo 5 > /sys/module/drbd/parameters/fault_rate + + If unsure, say N.
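The three parameters above combine into a single per-IO decision. A minimal sketch of that logic in plain C, assuming the helper name fault_active() and the convention that fault_devs == 0 selects all minors (both are assumptions of this sketch, not DRBD's verbatim code -- the driver's real check is the FAULT_ACTIVE()/DRBD_FAULT_* machinery visible in drbd_actlog.c below):

	#include <linux/random.h>	/* for random32() */

	/* Sketch only: the module parameters described in the help text above. */
	static unsigned int enable_faults;	/* bitmask of fault types */
	static unsigned int fault_devs;		/* bitmask of minor numbers; 0 = all (assumed) */
	static unsigned int fault_rate;		/* percent of IOs to fail */

	/* Decide whether to inject a fault of 'type' on device 'minor'. */
	static int fault_active(unsigned int minor, unsigned int type)
	{
		if (!fault_rate || !(enable_faults & (1 << type)))
			return 0;	/* injection disabled, or this fault type not enabled */
		if (fault_devs && !(fault_devs & (1 << minor)))
			return 0;	/* a device mask is set and this minor is not in it */
		return (random32() % 100) < fault_rate;	/* fail fault_rate% of the time */
	}

With the example settings from the help text (enable_faults=16, fault_devs=1, fault_rate=5), fault_active(0, 4) is nonzero for roughly 5% of data writes on /dev/drbd0, since 16 == 1 << 4.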
diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile new file mode 100644 index 000000000000..7d86ef8a8b40 --- /dev/null +++ b/drivers/block/drbd/Makefile @@ -0,0 +1,8 @@ +drbd-y := drbd_bitmap.o drbd_proc.o +drbd-y += drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o +drbd-y += drbd_main.o drbd_strings.o drbd_nl.o + +drbd_trace-y := drbd_tracing.o + +obj-$(CONFIG_BLK_DEV_DRBD) += drbd.o +obj-$(CONFIG_DRBD_TRACE) += drbd_trace.o diff --git a/drivers/block/drbd/drbd_actlog.c b/drivers/block/drbd/drbd_actlog.c new file mode 100644 index 000000000000..74b4835d3107 --- /dev/null +++ b/drivers/block/drbd/drbd_actlog.c @@ -0,0 +1,1484 @@ +/* + drbd_actlog.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2003-2008, LINBIT Information Technologies GmbH. + Copyright (C) 2003-2008, Philipp Reisner . + Copyright (C) 2003-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + + */ + +#include <linux/slab.h> +#include <linux/drbd.h> +#include "drbd_int.h" +#include "drbd_tracing.h" +#include "drbd_wrappers.h" + +/* We maintain a trivial check sum in our on disk activity log. + * With that we can ensure correct operation even when the storage + * device might do a partial (last) sector write while losing power. + */ +struct __packed al_transaction { + u32 magic; + u32 tr_number; + struct __packed { + u32 pos; + u32 extent; } updates[1 + AL_EXTENTS_PT]; + u32 xor_sum; +}; + +struct update_odbm_work { + struct drbd_work w; + unsigned int enr; +}; + +struct update_al_work { + struct drbd_work w; + struct lc_element *al_ext; + struct completion event; + unsigned int enr; + /* if old_enr != LC_FREE, write corresponding bitmap sector, too */ + unsigned int old_enr; +}; + +struct drbd_atodb_wait { + atomic_t count; + struct completion io_done; + struct drbd_conf *mdev; + int error; +}; + + +int w_al_write_transaction(struct drbd_conf *, struct drbd_work *, int); + +/* The actual tracepoint needs to have a constant number of known arguments... + */ +void trace_drbd_resync(struct drbd_conf *mdev, int level, const char *fmt, ...) +{ + va_list ap; + + va_start(ap, fmt); + trace__drbd_resync(mdev, level, fmt, ap); + va_end(ap); +} + +static int _drbd_md_sync_page_io(struct drbd_conf *mdev, + struct drbd_backing_dev *bdev, + struct page *page, sector_t sector, + int rw, int size) +{ + struct bio *bio; + struct drbd_md_io md_io; + int ok; + + md_io.mdev = mdev; + init_completion(&md_io.event); + md_io.error = 0; + + if ((rw & WRITE) && !test_bit(MD_NO_BARRIER, &mdev->flags)) + rw |= (1 << BIO_RW_BARRIER); + rw |= ((1<<BIO_RW_UNPLUG) | (1<<BIO_RW_SYNCIO)); + + retry: + bio = bio_alloc(GFP_NOIO, 1); + bio->bi_bdev = bdev->md_bdev; + bio->bi_sector = sector; + ok = (bio_add_page(bio, page, size, 0) == size); + if (!ok) + goto out; + bio->bi_private = &md_io; + bio->bi_end_io = drbd_md_io_complete; + bio->bi_rw = rw; + + trace_drbd_bio(mdev, "Md", bio, 0, NULL); + + if (FAULT_ACTIVE(mdev, (rw & WRITE) ?
DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) + bio_endio(bio, -EIO); + else + submit_bio(rw, bio); + wait_for_completion(&md_io.event); + ok = bio_flagged(bio, BIO_UPTODATE) && md_io.error == 0; + + /* check for unsupported barrier op. + * would rather check on EOPNOTSUPP, but that is not reliable. + * don't try again for ANY return value != 0 */ + if (unlikely(bio_rw_flagged(bio, BIO_RW_BARRIER) && !ok)) { + /* Try again with no barrier */ + dev_warn(DEV, "Barriers not supported on meta data device - disabling\n"); + set_bit(MD_NO_BARRIER, &mdev->flags); + rw &= ~(1 << BIO_RW_BARRIER); + bio_put(bio); + goto retry; + } + out: + bio_put(bio); + return ok; +} + +int drbd_md_sync_page_io(struct drbd_conf *mdev, struct drbd_backing_dev *bdev, + sector_t sector, int rw) +{ + int logical_block_size, mask, ok; + int offset = 0; + struct page *iop = mdev->md_io_page; + + D_ASSERT(mutex_is_locked(&mdev->md_io_mutex)); + + BUG_ON(!bdev->md_bdev); + + logical_block_size = bdev_logical_block_size(bdev->md_bdev); + if (logical_block_size == 0) + logical_block_size = MD_SECTOR_SIZE; + + /* in case logical_block_size != 512 [ s390 only? ] */ + if (logical_block_size != MD_SECTOR_SIZE) { + mask = (logical_block_size / MD_SECTOR_SIZE) - 1; + D_ASSERT(mask == 1 || mask == 3 || mask == 7); + D_ASSERT(logical_block_size == (mask+1) * MD_SECTOR_SIZE); + offset = sector & mask; + sector = sector & ~mask; + iop = mdev->md_io_tmpp; + + if (rw & WRITE) { + /* these are GFP_KERNEL pages, pre-allocated + * on device initialization */ + void *p = page_address(mdev->md_io_page); + void *hp = page_address(mdev->md_io_tmpp); + + ok = _drbd_md_sync_page_io(mdev, bdev, iop, sector, + READ, logical_block_size); + + if (unlikely(!ok)) { + dev_err(DEV, "drbd_md_sync_page_io(,%llus," + "READ [logical_block_size!=512]) failed!\n", + (unsigned long long)sector); + return 0; + } + + memcpy(hp + offset*MD_SECTOR_SIZE, p, MD_SECTOR_SIZE); + } + } + + if (sector < drbd_md_first_sector(bdev) || + sector > drbd_md_last_sector(bdev)) + dev_alert(DEV, "%s [%d]:%s(,%llus,%s) out of range md access!\n", + current->comm, current->pid, __func__, + (unsigned long long)sector, (rw & WRITE) ? "WRITE" : "READ"); + + ok = _drbd_md_sync_page_io(mdev, bdev, iop, sector, rw, logical_block_size); + if (unlikely(!ok)) { + dev_err(DEV, "drbd_md_sync_page_io(,%llus,%s) failed!\n", + (unsigned long long)sector, (rw & WRITE) ? 
"WRITE" : "READ"); + return 0; + } + + if (logical_block_size != MD_SECTOR_SIZE && !(rw & WRITE)) { + void *p = page_address(mdev->md_io_page); + void *hp = page_address(mdev->md_io_tmpp); + + memcpy(p, hp + offset*MD_SECTOR_SIZE, MD_SECTOR_SIZE); + } + + return ok; +} + +static struct lc_element *_al_get(struct drbd_conf *mdev, unsigned int enr) +{ + struct lc_element *al_ext; + struct lc_element *tmp; + unsigned long al_flags = 0; + + spin_lock_irq(&mdev->al_lock); + tmp = lc_find(mdev->resync, enr/AL_EXT_PER_BM_SECT); + if (unlikely(tmp != NULL)) { + struct bm_extent *bm_ext = lc_entry(tmp, struct bm_extent, lce); + if (test_bit(BME_NO_WRITES, &bm_ext->flags)) { + spin_unlock_irq(&mdev->al_lock); + return NULL; + } + } + al_ext = lc_get(mdev->act_log, enr); + al_flags = mdev->act_log->flags; + spin_unlock_irq(&mdev->al_lock); + + /* + if (!al_ext) { + if (al_flags & LC_STARVING) + dev_warn(DEV, "Have to wait for LRU element (AL too small?)\n"); + if (al_flags & LC_DIRTY) + dev_warn(DEV, "Ongoing AL update (AL device too slow?)\n"); + } + */ + + return al_ext; +} + +void drbd_al_begin_io(struct drbd_conf *mdev, sector_t sector) +{ + unsigned int enr = (sector >> (AL_EXTENT_SHIFT-9)); + struct lc_element *al_ext; + struct update_al_work al_work; + + D_ASSERT(atomic_read(&mdev->local_cnt) > 0); + + trace_drbd_actlog(mdev, sector, "al_begin_io"); + + wait_event(mdev->al_wait, (al_ext = _al_get(mdev, enr))); + + if (al_ext->lc_number != enr) { + /* drbd_al_write_transaction(mdev,al_ext,enr); + * recurses into generic_make_request(), which + * disallows recursion, bios being serialized on the + * current->bio_tail list now. + * we have to delegate updates to the activity log + * to the worker thread. */ + init_completion(&al_work.event); + al_work.al_ext = al_ext; + al_work.enr = enr; + al_work.old_enr = al_ext->lc_number; + al_work.w.cb = w_al_write_transaction; + drbd_queue_work_front(&mdev->data.work, &al_work.w); + wait_for_completion(&al_work.event); + + mdev->al_writ_cnt++; + + spin_lock_irq(&mdev->al_lock); + lc_changed(mdev->act_log, al_ext); + spin_unlock_irq(&mdev->al_lock); + wake_up(&mdev->al_wait); + } +} + +void drbd_al_complete_io(struct drbd_conf *mdev, sector_t sector) +{ + unsigned int enr = (sector >> (AL_EXTENT_SHIFT-9)); + struct lc_element *extent; + unsigned long flags; + + trace_drbd_actlog(mdev, sector, "al_complete_io"); + + spin_lock_irqsave(&mdev->al_lock, flags); + + extent = lc_find(mdev->act_log, enr); + + if (!extent) { + spin_unlock_irqrestore(&mdev->al_lock, flags); + dev_err(DEV, "al_complete_io() called on inactive extent %u\n", enr); + return; + } + + if (lc_put(mdev->act_log, extent) == 0) + wake_up(&mdev->al_wait); + + spin_unlock_irqrestore(&mdev->al_lock, flags); +} + +int +w_al_write_transaction(struct drbd_conf *mdev, struct drbd_work *w, int unused) +{ + struct update_al_work *aw = container_of(w, struct update_al_work, w); + struct lc_element *updated = aw->al_ext; + const unsigned int new_enr = aw->enr; + const unsigned int evicted = aw->old_enr; + struct al_transaction *buffer; + sector_t sector; + int i, n, mx; + unsigned int extent_nr; + u32 xor_sum = 0; + + if (!get_ldev(mdev)) { + dev_err(DEV, "get_ldev() failed in w_al_write_transaction\n"); + complete(&((struct update_al_work *)w)->event); + return 1; + } + /* do we have to do a bitmap write, first? + * TODO reduce maximum latency: + * submit both bios, then wait for both, + * instead of doing two synchronous sector writes. 
*/ + if (mdev->state.conn < C_CONNECTED && evicted != LC_FREE) + drbd_bm_write_sect(mdev, evicted/AL_EXT_PER_BM_SECT); + + mutex_lock(&mdev->md_io_mutex); /* protects md_io_page, al_tr_cycle, ... */ + buffer = (struct al_transaction *)page_address(mdev->md_io_page); + + buffer->magic = __constant_cpu_to_be32(DRBD_MAGIC); + buffer->tr_number = cpu_to_be32(mdev->al_tr_number); + + n = lc_index_of(mdev->act_log, updated); + + buffer->updates[0].pos = cpu_to_be32(n); + buffer->updates[0].extent = cpu_to_be32(new_enr); + + xor_sum ^= new_enr; + + mx = min_t(int, AL_EXTENTS_PT, + mdev->act_log->nr_elements - mdev->al_tr_cycle); + for (i = 0; i < mx; i++) { + unsigned idx = mdev->al_tr_cycle + i; + extent_nr = lc_element_by_index(mdev->act_log, idx)->lc_number; + buffer->updates[i+1].pos = cpu_to_be32(idx); + buffer->updates[i+1].extent = cpu_to_be32(extent_nr); + xor_sum ^= extent_nr; + } + for (; i < AL_EXTENTS_PT; i++) { + buffer->updates[i+1].pos = __constant_cpu_to_be32(-1); + buffer->updates[i+1].extent = __constant_cpu_to_be32(LC_FREE); + xor_sum ^= LC_FREE; + } + mdev->al_tr_cycle += AL_EXTENTS_PT; + if (mdev->al_tr_cycle >= mdev->act_log->nr_elements) + mdev->al_tr_cycle = 0; + + buffer->xor_sum = cpu_to_be32(xor_sum); + + sector = mdev->ldev->md.md_offset + + mdev->ldev->md.al_offset + mdev->al_tr_pos; + + if (!drbd_md_sync_page_io(mdev, mdev->ldev, sector, WRITE)) + drbd_chk_io_error(mdev, 1, TRUE); + + if (++mdev->al_tr_pos > + div_ceil(mdev->act_log->nr_elements, AL_EXTENTS_PT)) + mdev->al_tr_pos = 0; + + D_ASSERT(mdev->al_tr_pos < MD_AL_MAX_SIZE); + mdev->al_tr_number++; + + mutex_unlock(&mdev->md_io_mutex); + + complete(&((struct update_al_work *)w)->event); + put_ldev(mdev); + + return 1; +} + +/** + * drbd_al_read_tr() - Read a single transaction from the on disk activity log + * @mdev: DRBD device. + * @bdev: Block device to read from. + * @b: pointer to an al_transaction. + * @index: On disk slot of the transaction to read. + * + * Returns -1 on IO error, 0 on checksum error and 1 upon success. + */ +static int drbd_al_read_tr(struct drbd_conf *mdev, + struct drbd_backing_dev *bdev, + struct al_transaction *b, + int index) +{ + sector_t sector; + int rv, i; + u32 xor_sum = 0; + + sector = bdev->md.md_offset + bdev->md.al_offset + index; + + /* Don't process error normally, + * as this is done before disk is attached! */ + if (!drbd_md_sync_page_io(mdev, bdev, sector, READ)) + return -1; + + rv = (be32_to_cpu(b->magic) == DRBD_MAGIC); + + for (i = 0; i < AL_EXTENTS_PT + 1; i++) + xor_sum ^= be32_to_cpu(b->updates[i].extent); + rv &= (xor_sum == be32_to_cpu(b->xor_sum)); + + return rv; +} + +/** + * drbd_al_read_log() - Restores the activity log from its on disk representation. + * @mdev: DRBD device. + * @bdev: Block device to read from. + * + * Returns 1 on success, returns 0 when reading the log failed due to IO errors. + */ +int drbd_al_read_log(struct drbd_conf *mdev, struct drbd_backing_dev *bdev) +{ + struct al_transaction *buffer; + int i; + int rv; + int mx; + int active_extents = 0; + int transactions = 0; + int found_valid = 0; + int from = 0; + int to = 0; + u32 from_tnr = 0; + u32 to_tnr = 0; + u32 cnr; + + mx = div_ceil(mdev->act_log->nr_elements, AL_EXTENTS_PT); + + /* lock out all other meta data io for now, + * and make sure the page is mapped.
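+ * The on disk AL is a ring of mx+1 single-sector transactions, each tagged with tr_number; the scan below keeps the slot holding the smallest valid tr_number as 'from' and the largest as 'to', and the (int) casts on the differences keep those comparisons correct across tr_number wrap-around.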
+ */ + mutex_lock(&mdev->md_io_mutex); + buffer = page_address(mdev->md_io_page); + + /* Find the valid transaction in the log */ + for (i = 0; i <= mx; i++) { + rv = drbd_al_read_tr(mdev, bdev, buffer, i); + if (rv == 0) + continue; + if (rv == -1) { + mutex_unlock(&mdev->md_io_mutex); + return 0; + } + cnr = be32_to_cpu(buffer->tr_number); + + if (++found_valid == 1) { + from = i; + to = i; + from_tnr = cnr; + to_tnr = cnr; + continue; + } + if ((int)cnr - (int)from_tnr < 0) { + D_ASSERT(from_tnr - cnr + i - from == mx+1); + from = i; + from_tnr = cnr; + } + if ((int)cnr - (int)to_tnr > 0) { + D_ASSERT(cnr - to_tnr == i - to); + to = i; + to_tnr = cnr; + } + } + + if (!found_valid) { + dev_warn(DEV, "No usable activity log found.\n"); + mutex_unlock(&mdev->md_io_mutex); + return 1; + } + + /* Read the valid transactions. + * dev_info(DEV, "Reading from %d to %d.\n",from,to); */ + i = from; + while (1) { + int j, pos; + unsigned int extent_nr; + unsigned int trn; + + rv = drbd_al_read_tr(mdev, bdev, buffer, i); + ERR_IF(rv == 0) goto cancel; + if (rv == -1) { + mutex_unlock(&mdev->md_io_mutex); + return 0; + } + + trn = be32_to_cpu(buffer->tr_number); + + spin_lock_irq(&mdev->al_lock); + + /* This loop runs backwards because in the cyclic + elements there might be an old version of the + updated element (in slot 0). So the element in slot 0 + can overwrite old versions. */ + for (j = AL_EXTENTS_PT; j >= 0; j--) { + pos = be32_to_cpu(buffer->updates[j].pos); + extent_nr = be32_to_cpu(buffer->updates[j].extent); + + if (extent_nr == LC_FREE) + continue; + + lc_set(mdev->act_log, extent_nr, pos); + active_extents++; + } + spin_unlock_irq(&mdev->al_lock); + + transactions++; + +cancel: + if (i == to) + break; + i++; + if (i > mx) + i = 0; + } + + mdev->al_tr_number = to_tnr+1; + mdev->al_tr_pos = to; + if (++mdev->al_tr_pos > + div_ceil(mdev->act_log->nr_elements, AL_EXTENTS_PT)) + mdev->al_tr_pos = 0; + + /* ok, we are done with it */ + mutex_unlock(&mdev->md_io_mutex); + + dev_info(DEV, "Found %d transactions (%d active extents) in activity log.\n", + transactions, active_extents); + + return 1; +} + +static void atodb_endio(struct bio *bio, int error) +{ + struct drbd_atodb_wait *wc = bio->bi_private; + struct drbd_conf *mdev = wc->mdev; + struct page *page; + int uptodate = bio_flagged(bio, BIO_UPTODATE); + + /* strange behavior of some lower level drivers... + * fail the request by clearing the uptodate flag, + * but do not return any error?! */ + if (!error && !uptodate) + error = -EIO; + + drbd_chk_io_error(mdev, error, TRUE); + if (error && wc->error == 0) + wc->error = error; + + if (atomic_dec_and_test(&wc->count)) + complete(&wc->io_done); + + page = bio->bi_io_vec[0].bv_page; + put_page(page); + bio_put(bio); + mdev->bm_writ_cnt++; + put_ldev(mdev); +} + +#define S2W(s) ((s)<<(BM_EXT_SHIFT-BM_BLOCK_SHIFT-LN2_BPL)) +/* activity log to on disk bitmap -- prepare bio unless that sector + * is already covered by previously prepared bios */ +static int atodb_prepare_unless_covered(struct drbd_conf *mdev, + struct bio **bios, + unsigned int enr, + struct drbd_atodb_wait *wc) __must_hold(local) +{ + struct bio *bio; + struct page *page; + sector_t on_disk_sector = enr + mdev->ldev->md.md_offset + + mdev->ldev->md.bm_offset; + unsigned int page_offset = PAGE_SIZE; + int offset; + int i = 0; + int err = -ENOMEM; + + /* Check if that enr is already covered by an already created bio. + * Caution, bios[] is not NULL terminated, + * but only initialized to all NULL. 
+ * For completely scattered activity log, + * the last invocation iterates over all bios, + * and finds the last NULL entry. + */ + while ((bio = bios[i])) { + if (bio->bi_sector == on_disk_sector) + return 0; + i++; + } + /* bios[i] == NULL, the next not yet used slot */ + + /* GFP_KERNEL, we are not in the write-out path */ + bio = bio_alloc(GFP_KERNEL, 1); + if (bio == NULL) + return -ENOMEM; + + if (i > 0) { + const struct bio_vec *prev_bv = bios[i-1]->bi_io_vec; + page_offset = prev_bv->bv_offset + prev_bv->bv_len; + page = prev_bv->bv_page; + } + if (page_offset == PAGE_SIZE) { + page = alloc_page(__GFP_HIGHMEM); + if (page == NULL) + goto out_bio_put; + page_offset = 0; + } else { + get_page(page); + } + + offset = S2W(enr); + drbd_bm_get_lel(mdev, offset, + min_t(size_t, S2W(1), drbd_bm_words(mdev) - offset), + kmap(page) + page_offset); + kunmap(page); + + bio->bi_private = wc; + bio->bi_end_io = atodb_endio; + bio->bi_bdev = mdev->ldev->md_bdev; + bio->bi_sector = on_disk_sector; + + if (bio_add_page(bio, page, MD_SECTOR_SIZE, page_offset) != MD_SECTOR_SIZE) + goto out_put_page; + + atomic_inc(&wc->count); + /* we already know that we may do this... + * get_ldev_if_state(mdev,D_ATTACHING); + * just get the extra reference, so that the local_cnt reflects + * the number of pending IO requests DRBD has at its backing device. + */ + atomic_inc(&mdev->local_cnt); + + bios[i] = bio; + + return 0; + +out_put_page: + err = -EINVAL; + put_page(page); +out_bio_put: + bio_put(bio); + return err; +} + +/** + * drbd_al_to_on_disk_bm() - Writes bitmap parts covered by active AL extents + * @mdev: DRBD device. + * + * Called when we detach (unconfigure) local storage, + * or when we go from R_PRIMARY to R_SECONDARY role. + */ +void drbd_al_to_on_disk_bm(struct drbd_conf *mdev) +{ + int i, nr_elements; + unsigned int enr; + struct bio **bios; + struct drbd_atodb_wait wc; + + ERR_IF (!get_ldev_if_state(mdev, D_ATTACHING)) + return; /* sorry, I don't have any act_log etc... */ + + wait_event(mdev->al_wait, lc_try_lock(mdev->act_log)); + + nr_elements = mdev->act_log->nr_elements; + + /* GFP_KERNEL, we are not in anyone's write-out path */ + bios = kzalloc(sizeof(struct bio *) * nr_elements, GFP_KERNEL); + if (!bios) + goto submit_one_by_one; + + atomic_set(&wc.count, 0); + init_completion(&wc.io_done); + wc.mdev = mdev; + wc.error = 0; + + for (i = 0; i < nr_elements; i++) { + enr = lc_element_by_index(mdev->act_log, i)->lc_number; + if (enr == LC_FREE) + continue; + /* next statement also does atomic_inc wc.count and local_cnt */ + if (atodb_prepare_unless_covered(mdev, bios, + enr/AL_EXT_PER_BM_SECT, + &wc)) + goto free_bios_submit_one_by_one; + } + + /* unnecessary optimization? */ + lc_unlock(mdev->act_log); + wake_up(&mdev->al_wait); + + /* all prepared, submit them */ + for (i = 0; i < nr_elements; i++) { + if (bios[i] == NULL) + break; + if (FAULT_ACTIVE(mdev, DRBD_FAULT_MD_WR)) { + bios[i]->bi_rw = WRITE; + bio_endio(bios[i], -EIO); + } else { + submit_bio(WRITE, bios[i]); + } + } + + drbd_blk_run_queue(bdev_get_queue(mdev->ldev->md_bdev)); + + /* always (try to) flush bitmap to stable storage */ + drbd_md_flush(mdev); + + /* In case we did not submit a single IO do not wait for + * them to complete. ( Because we would wait forever here. ) + * + * In case we had IOs and they are already complete, there + * is no point in waiting anyway. + * Therefore this if () ...
*/ + if (atomic_read(&wc.count)) + wait_for_completion(&wc.io_done); + + put_ldev(mdev); + + kfree(bios); + return; + + free_bios_submit_one_by_one: + /* free everything by calling the endio callback directly. */ + for (i = 0; i < nr_elements && bios[i]; i++) + bio_endio(bios[i], 0); + + kfree(bios); + + submit_one_by_one: + dev_warn(DEV, "Using the slow drbd_al_to_on_disk_bm()\n"); + + for (i = 0; i < mdev->act_log->nr_elements; i++) { + enr = lc_element_by_index(mdev->act_log, i)->lc_number; + if (enr == LC_FREE) + continue; + /* Really slow: if we have al-extents 16..19 active, + * sector 4 will be written four times! Synchronous! */ + drbd_bm_write_sect(mdev, enr/AL_EXT_PER_BM_SECT); + } + + lc_unlock(mdev->act_log); + wake_up(&mdev->al_wait); + put_ldev(mdev); +} + +/** + * drbd_al_apply_to_bm() - Sets the bitmap to dirty (1) where covered by active AL extents + * @mdev: DRBD device. + */ +void drbd_al_apply_to_bm(struct drbd_conf *mdev) +{ + unsigned int enr; + unsigned long add = 0; + char ppb[10]; + int i; + + wait_event(mdev->al_wait, lc_try_lock(mdev->act_log)); + + for (i = 0; i < mdev->act_log->nr_elements; i++) { + enr = lc_element_by_index(mdev->act_log, i)->lc_number; + if (enr == LC_FREE) + continue; + add += drbd_bm_ALe_set_all(mdev, enr); + } + + lc_unlock(mdev->act_log); + wake_up(&mdev->al_wait); + + dev_info(DEV, "Marked additional %s as out-of-sync based on AL.\n", + ppsize(ppb, Bit2KB(add))); +} + +static int _try_lc_del(struct drbd_conf *mdev, struct lc_element *al_ext) +{ + int rv; + + spin_lock_irq(&mdev->al_lock); + rv = (al_ext->refcnt == 0); + if (likely(rv)) + lc_del(mdev->act_log, al_ext); + spin_unlock_irq(&mdev->al_lock); + + return rv; +} + +/** + * drbd_al_shrink() - Removes all active extents from the activity log + * @mdev: DRBD device. + * + * Removes all active extents from the activity log, waiting until + * the reference count of each entry dropped to 0 first, of course. + * + * You need to lock mdev->act_log with lc_try_lock() / lc_unlock() + */ +void drbd_al_shrink(struct drbd_conf *mdev) +{ + struct lc_element *al_ext; + int i; + + D_ASSERT(test_bit(__LC_DIRTY, &mdev->act_log->flags)); + + for (i = 0; i < mdev->act_log->nr_elements; i++) { + al_ext = lc_element_by_index(mdev->act_log, i); + if (al_ext->lc_number == LC_FREE) + continue; + wait_event(mdev->al_wait, _try_lc_del(mdev, al_ext)); + } + + wake_up(&mdev->al_wait); +} + +static int w_update_odbm(struct drbd_conf *mdev, struct drbd_work *w, int unused) +{ + struct update_odbm_work *udw = container_of(w, struct update_odbm_work, w); + + if (!get_ldev(mdev)) { + if (__ratelimit(&drbd_ratelimit_state)) + dev_warn(DEV, "Can not update on disk bitmap, local IO disabled.\n"); + kfree(udw); + return 1; + } + + drbd_bm_write_sect(mdev, udw->enr); + put_ldev(mdev); + + kfree(udw); + + if (drbd_bm_total_weight(mdev) <= mdev->rs_failed) { + switch (mdev->state.conn) { + case C_SYNC_SOURCE: case C_SYNC_TARGET: + case C_PAUSED_SYNC_S: case C_PAUSED_SYNC_T: + drbd_resync_finished(mdev); + default: + /* nothing to do */ + break; + } + } + drbd_bcast_sync_progress(mdev); + + return 1; +} + + +/* ATTENTION. The AL's extents are 4MB each, while the extents in the + * resync LRU-cache are 16MB each. + * The caller of this function has to hold a get_ldev() reference.
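+ * (Given those sizes and 4K bitmap blocks, one 512-byte on-disk bitmap sector covers 4096 bits * 4K = 16MB, i.e. exactly one resync extent, so AL_EXT_PER_BM_SECT works out to 16MB/4MB = 4 -- the ratio behind the enr/AL_EXT_PER_BM_SECT and enr*AL_EXT_PER_BM_SECT conversions in this file.)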
+ * + * TODO will be obsoleted once we have a caching lru of the on disk bitmap + */ +static void drbd_try_clear_on_disk_bm(struct drbd_conf *mdev, sector_t sector, + int count, int success) +{ + struct lc_element *e; + struct update_odbm_work *udw; + + unsigned int enr; + + D_ASSERT(atomic_read(&mdev->local_cnt)); + + /* I simply assume that a sector/size pair never crosses + * a 16 MB extent border. (Currently this is true...) */ + enr = BM_SECT_TO_EXT(sector); + + e = lc_get(mdev->resync, enr); + if (e) { + struct bm_extent *ext = lc_entry(e, struct bm_extent, lce); + if (ext->lce.lc_number == enr) { + if (success) + ext->rs_left -= count; + else + ext->rs_failed += count; + if (ext->rs_left < ext->rs_failed) { + dev_err(DEV, "BAD! sector=%llus enr=%u rs_left=%d " + "rs_failed=%d count=%d\n", + (unsigned long long)sector, + ext->lce.lc_number, ext->rs_left, + ext->rs_failed, count); + dump_stack(); + + lc_put(mdev->resync, &ext->lce); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return; + } + } else { + /* Normally this element should be in the cache, + * since drbd_rs_begin_io() pulled it already in. + * + * But maybe an application write finished, and we set + * something outside the resync lru_cache in sync. + */ + int rs_left = drbd_bm_e_weight(mdev, enr); + if (ext->flags != 0) { + dev_warn(DEV, "changing resync lce: %d[%u;%02lx]" + " -> %d[%u;00]\n", + ext->lce.lc_number, ext->rs_left, + ext->flags, enr, rs_left); + ext->flags = 0; + } + if (ext->rs_failed) { + dev_warn(DEV, "Kicking resync_lru element enr=%u " + "out with rs_failed=%d\n", + ext->lce.lc_number, ext->rs_failed); + set_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags); + } + ext->rs_left = rs_left; + ext->rs_failed = success ? 0 : count; + lc_changed(mdev->resync, &ext->lce); + } + lc_put(mdev->resync, &ext->lce); + /* no race, we are within the al_lock! */ + + if (ext->rs_left == ext->rs_failed) { + ext->rs_failed = 0; + + udw = kmalloc(sizeof(*udw), GFP_ATOMIC); + if (udw) { + udw->enr = ext->lce.lc_number; + udw->w.cb = w_update_odbm; + drbd_queue_work_front(&mdev->data.work, &udw->w); + } else { + dev_warn(DEV, "Could not kmalloc an udw\n"); + set_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags); + } + } + } else { + dev_err(DEV, "lc_get() failed! locked=%d/%d flags=%lu\n", + mdev->resync_locked, + mdev->resync->nr_elements, + mdev->resync->flags); + } +} + +/* clear the bit corresponding to the piece of storage in question: + * size bytes of data starting from sector. Only clear the bits of the affected + * one or more _aligned_ BM_BLOCK_SIZE blocks. + * + * called by worker on C_SYNC_TARGET and receiver on SyncSource. + * + */ +void __drbd_set_in_sync(struct drbd_conf *mdev, sector_t sector, int size, + const char *file, const unsigned int line) +{ + /* Is called from worker and receiver context _only_ */ + unsigned long sbnr, ebnr, lbnr; + unsigned long count = 0; + sector_t esector, nr_sectors; + int wake_up = 0; + unsigned long flags; + + if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) { + dev_err(DEV, "drbd_set_in_sync: sector=%llus size=%d nonsense!\n", + (unsigned long long)sector, size); + return; + } + nr_sectors = drbd_get_capacity(mdev->this_bdev); + esector = sector + (size >> 9) - 1; + + ERR_IF(sector >= nr_sectors) return; + ERR_IF(esector >= nr_sectors) esector = (nr_sectors-1); + + lbnr = BM_SECT_TO_BIT(nr_sectors-1); + + /* we clear it (in sync). + * round up start sector, round down end sector.
we make sure we only + * clear full, aligned, BM_BLOCK_SIZE (4K) blocks */ + if (unlikely(esector < BM_SECT_PER_BIT-1)) + return; + if (unlikely(esector == (nr_sectors-1))) + ebnr = lbnr; + else + ebnr = BM_SECT_TO_BIT(esector - (BM_SECT_PER_BIT-1)); + sbnr = BM_SECT_TO_BIT(sector + BM_SECT_PER_BIT-1); + + trace_drbd_resync(mdev, TRACE_LVL_METRICS, + "drbd_set_in_sync: sector=%llus size=%u sbnr=%lu ebnr=%lu\n", + (unsigned long long)sector, size, sbnr, ebnr); + + if (sbnr > ebnr) + return; + + /* + * ok, (capacity & 7) != 0 sometimes, but who cares... + * we count rs_{total,left} in bits, not sectors. + */ + spin_lock_irqsave(&mdev->al_lock, flags); + count = drbd_bm_clear_bits(mdev, sbnr, ebnr); + if (count) { + /* we need the lock for drbd_try_clear_on_disk_bm */ + if (jiffies - mdev->rs_mark_time > HZ*10) { + /* should be rolling marks, + * but we estimate only anyways. */ + if (mdev->rs_mark_left != drbd_bm_total_weight(mdev) && + mdev->state.conn != C_PAUSED_SYNC_T && + mdev->state.conn != C_PAUSED_SYNC_S) { + mdev->rs_mark_time = jiffies; + mdev->rs_mark_left = drbd_bm_total_weight(mdev); + } + } + if (get_ldev(mdev)) { + drbd_try_clear_on_disk_bm(mdev, sector, count, TRUE); + put_ldev(mdev); + } + /* just wake_up unconditional now, various lc_changed(), + * lc_put() in drbd_try_clear_on_disk_bm(). */ + wake_up = 1; + } + spin_unlock_irqrestore(&mdev->al_lock, flags); + if (wake_up) + wake_up(&mdev->al_wait); +} + +/* + * this is intended to set one request worth of data out of sync. + * affects at least 1 bit, + * and at most 1+DRBD_MAX_SEGMENT_SIZE/BM_BLOCK_SIZE bits. + * + * called by tl_clear and drbd_send_dblock (==drbd_make_request). + * so this can be _any_ process. + */ +void __drbd_set_out_of_sync(struct drbd_conf *mdev, sector_t sector, int size, + const char *file, const unsigned int line) +{ + unsigned long sbnr, ebnr, lbnr, flags; + sector_t esector, nr_sectors; + unsigned int enr, count; + struct lc_element *e; + + if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) { + dev_err(DEV, "sector: %llus, size: %d\n", + (unsigned long long)sector, size); + return; + } + + if (!get_ldev(mdev)) + return; /* no disk, no metadata, no bitmap to set bits in */ + + nr_sectors = drbd_get_capacity(mdev->this_bdev); + esector = sector + (size >> 9) - 1; + + ERR_IF(sector >= nr_sectors) + goto out; + ERR_IF(esector >= nr_sectors) + esector = (nr_sectors-1); + + lbnr = BM_SECT_TO_BIT(nr_sectors-1); + + /* we set it out of sync, + * we do not need to round anything here */ + sbnr = BM_SECT_TO_BIT(sector); + ebnr = BM_SECT_TO_BIT(esector); + + trace_drbd_resync(mdev, TRACE_LVL_METRICS, + "drbd_set_out_of_sync: sector=%llus size=%u sbnr=%lu ebnr=%lu\n", + (unsigned long long)sector, size, sbnr, ebnr); + + /* ok, (capacity & 7) != 0 sometimes, but who cares... + * we count rs_{total,left} in bits, not sectors. */ + spin_lock_irqsave(&mdev->al_lock, flags); + count = drbd_bm_set_bits(mdev, sbnr, ebnr); + + enr = BM_SECT_TO_EXT(sector); + e = lc_find(mdev->resync, enr); + if (e) + lc_entry(e, struct bm_extent, lce)->rs_left += count; + spin_unlock_irqrestore(&mdev->al_lock, flags); + +out: + put_ldev(mdev); +} + +static +struct bm_extent *_bme_get(struct drbd_conf *mdev, unsigned int enr) +{ + struct lc_element *e; + struct bm_extent *bm_ext; + int wakeup = 0; + unsigned long rs_flags; + + spin_lock_irq(&mdev->al_lock); + if (mdev->resync_locked > mdev->resync->nr_elements/2) { + spin_unlock_irq(&mdev->al_lock); + return NULL; + } + e = lc_get(mdev->resync, enr); + bm_ext = e ?
lc_entry(e, struct bm_extent, lce) : NULL; + if (bm_ext) { + if (bm_ext->lce.lc_number != enr) { + bm_ext->rs_left = drbd_bm_e_weight(mdev, enr); + bm_ext->rs_failed = 0; + lc_changed(mdev->resync, &bm_ext->lce); + wakeup = 1; + } + if (bm_ext->lce.refcnt == 1) + mdev->resync_locked++; + set_bit(BME_NO_WRITES, &bm_ext->flags); + } + rs_flags = mdev->resync->flags; + spin_unlock_irq(&mdev->al_lock); + if (wakeup) + wake_up(&mdev->al_wait); + + if (!bm_ext) { + if (rs_flags & LC_STARVING) + dev_warn(DEV, "Have to wait for element" + " (resync LRU too small?)\n"); + BUG_ON(rs_flags & LC_DIRTY); + } + + return bm_ext; +} + +static int _is_in_al(struct drbd_conf *mdev, unsigned int enr) +{ + struct lc_element *al_ext; + int rv = 0; + + spin_lock_irq(&mdev->al_lock); + if (unlikely(enr == mdev->act_log->new_number)) + rv = 1; + else { + al_ext = lc_find(mdev->act_log, enr); + if (al_ext) { + if (al_ext->refcnt) + rv = 1; + } + } + spin_unlock_irq(&mdev->al_lock); + + /* + if (unlikely(rv)) { + dev_info(DEV, "Delaying sync read until app's write is done\n"); + } + */ + return rv; +} + +/** + * drbd_rs_begin_io() - Gets an extent in the resync LRU cache and sets it to BME_LOCKED + * @mdev: DRBD device. + * @sector: The sector number. + * + * This functions sleeps on al_wait. Returns 1 on success, 0 if interrupted. + */ +int drbd_rs_begin_io(struct drbd_conf *mdev, sector_t sector) +{ + unsigned int enr = BM_SECT_TO_EXT(sector); + struct bm_extent *bm_ext; + int i, sig; + + trace_drbd_resync(mdev, TRACE_LVL_ALL, + "drbd_rs_begin_io: sector=%llus (rs_end=%d)\n", + (unsigned long long)sector, enr); + + sig = wait_event_interruptible(mdev->al_wait, + (bm_ext = _bme_get(mdev, enr))); + if (sig) + return 0; + + if (test_bit(BME_LOCKED, &bm_ext->flags)) + return 1; + + for (i = 0; i < AL_EXT_PER_BM_SECT; i++) { + sig = wait_event_interruptible(mdev->al_wait, + !_is_in_al(mdev, enr * AL_EXT_PER_BM_SECT + i)); + if (sig) { + spin_lock_irq(&mdev->al_lock); + if (lc_put(mdev->resync, &bm_ext->lce) == 0) { + clear_bit(BME_NO_WRITES, &bm_ext->flags); + mdev->resync_locked--; + wake_up(&mdev->al_wait); + } + spin_unlock_irq(&mdev->al_lock); + return 0; + } + } + + set_bit(BME_LOCKED, &bm_ext->flags); + + return 1; +} + +/** + * drbd_try_rs_begin_io() - Gets an extent in the resync LRU cache, does not sleep + * @mdev: DRBD device. + * @sector: The sector number. + * + * Gets an extent in the resync LRU cache, sets it to BME_NO_WRITES, then + * tries to set it to BME_LOCKED. Returns 0 upon success, and -EAGAIN + * if there is still application IO going on in this area. + */ +int drbd_try_rs_begin_io(struct drbd_conf *mdev, sector_t sector) +{ + unsigned int enr = BM_SECT_TO_EXT(sector); + const unsigned int al_enr = enr*AL_EXT_PER_BM_SECT; + struct lc_element *e; + struct bm_extent *bm_ext; + int i; + + trace_drbd_resync(mdev, TRACE_LVL_ALL, "drbd_try_rs_begin_io: sector=%llus\n", + (unsigned long long)sector); + + spin_lock_irq(&mdev->al_lock); + if (mdev->resync_wenr != LC_FREE && mdev->resync_wenr != enr) { + /* in case you have very heavy scattered io, it may + * stall the syncer undefined if we give up the ref count + * when we try again and requeue. + * + * if we don't give up the refcount, but the next time + * we are scheduled this extent has been "synced" by new + * application writes, we'd miss the lc_put on the + * extent we keep the refcount on. + * so we remembered which extent we had to try again, and + * if the next requested one is something else, we do + * the lc_put here... 
+ * we also have to wake_up + */ + + trace_drbd_resync(mdev, TRACE_LVL_ALL, + "dropping %u, apparently got 'synced' by application io\n", + mdev->resync_wenr); + + e = lc_find(mdev->resync, mdev->resync_wenr); + bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL; + if (bm_ext) { + D_ASSERT(!test_bit(BME_LOCKED, &bm_ext->flags)); + D_ASSERT(test_bit(BME_NO_WRITES, &bm_ext->flags)); + clear_bit(BME_NO_WRITES, &bm_ext->flags); + mdev->resync_wenr = LC_FREE; + if (lc_put(mdev->resync, &bm_ext->lce) == 0) + mdev->resync_locked--; + wake_up(&mdev->al_wait); + } else { + dev_alert(DEV, "LOGIC BUG\n"); + } + } + /* TRY. */ + e = lc_try_get(mdev->resync, enr); + bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL; + if (bm_ext) { + if (test_bit(BME_LOCKED, &bm_ext->flags)) + goto proceed; + if (!test_and_set_bit(BME_NO_WRITES, &bm_ext->flags)) { + mdev->resync_locked++; + } else { + /* we did set the BME_NO_WRITES, + * but then could not set BME_LOCKED, + * so we tried again. + * drop the extra reference. */ + trace_drbd_resync(mdev, TRACE_LVL_ALL, + "dropping extra reference on %u\n", enr); + + bm_ext->lce.refcnt--; + D_ASSERT(bm_ext->lce.refcnt > 0); + } + goto check_al; + } else { + /* do we rather want to try later? */ + if (mdev->resync_locked > mdev->resync->nr_elements-3) { + trace_drbd_resync(mdev, TRACE_LVL_ALL, + "resync_locked = %u!\n", mdev->resync_locked); + + goto try_again; + } + /* Do or do not. There is no try. -- Yoda */ + e = lc_get(mdev->resync, enr); + bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL; + if (!bm_ext) { + const unsigned long rs_flags = mdev->resync->flags; + if (rs_flags & LC_STARVING) + dev_warn(DEV, "Have to wait for element" + " (resync LRU too small?)\n"); + BUG_ON(rs_flags & LC_DIRTY); + goto try_again; + } + if (bm_ext->lce.lc_number != enr) { + bm_ext->rs_left = drbd_bm_e_weight(mdev, enr); + bm_ext->rs_failed = 0; + lc_changed(mdev->resync, &bm_ext->lce); + wake_up(&mdev->al_wait); + D_ASSERT(test_bit(BME_LOCKED, &bm_ext->flags) == 0); + } + set_bit(BME_NO_WRITES, &bm_ext->flags); + D_ASSERT(bm_ext->lce.refcnt == 1); + mdev->resync_locked++; + goto check_al; + } +check_al: + trace_drbd_resync(mdev, TRACE_LVL_ALL, "checking al for %u\n", enr); + + for (i = 0; i < AL_EXT_PER_BM_SECT; i++) { + if (unlikely(al_enr+i == mdev->act_log->new_number)) + goto try_again; + if (lc_is_used(mdev->act_log, al_enr+i)) + goto try_again; + } + set_bit(BME_LOCKED, &bm_ext->flags); +proceed: + mdev->resync_wenr = LC_FREE; + spin_unlock_irq(&mdev->al_lock); + return 0; + +try_again: + trace_drbd_resync(mdev, TRACE_LVL_ALL, "need to try again for %u\n", enr); + if (bm_ext) + mdev->resync_wenr = enr; + spin_unlock_irq(&mdev->al_lock); + return -EAGAIN; +} + +void drbd_rs_complete_io(struct drbd_conf *mdev, sector_t sector) +{ + unsigned int enr = BM_SECT_TO_EXT(sector); + struct lc_element *e; + struct bm_extent *bm_ext; + unsigned long flags; + + trace_drbd_resync(mdev, TRACE_LVL_ALL, + "drbd_rs_complete_io: sector=%llus (rs_enr=%d)\n", + (long long)sector, enr); + + spin_lock_irqsave(&mdev->al_lock, flags); + e = lc_find(mdev->resync, enr); + bm_ext = e ? 
lc_entry(e, struct bm_extent, lce) : NULL; + if (!bm_ext) { + spin_unlock_irqrestore(&mdev->al_lock, flags); + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "drbd_rs_complete_io() called, but extent not found\n"); + return; + } + + if (bm_ext->lce.refcnt == 0) { + spin_unlock_irqrestore(&mdev->al_lock, flags); + dev_err(DEV, "drbd_rs_complete_io(,%llu [=%u]) called, " + "but refcnt is 0!?\n", + (unsigned long long)sector, enr); + return; + } + + if (lc_put(mdev->resync, &bm_ext->lce) == 0) { + clear_bit(BME_LOCKED, &bm_ext->flags); + clear_bit(BME_NO_WRITES, &bm_ext->flags); + mdev->resync_locked--; + wake_up(&mdev->al_wait); + } + + spin_unlock_irqrestore(&mdev->al_lock, flags); +} + +/** + * drbd_rs_cancel_all() - Removes all extents from the resync LRU (even BME_LOCKED) + * @mdev: DRBD device. + */ +void drbd_rs_cancel_all(struct drbd_conf *mdev) +{ + trace_drbd_resync(mdev, TRACE_LVL_METRICS, "drbd_rs_cancel_all\n"); + + spin_lock_irq(&mdev->al_lock); + + if (get_ldev_if_state(mdev, D_FAILED)) { /* Makes sure ->resync is there. */ + lc_reset(mdev->resync); + put_ldev(mdev); + } + mdev->resync_locked = 0; + mdev->resync_wenr = LC_FREE; + spin_unlock_irq(&mdev->al_lock); + wake_up(&mdev->al_wait); +} + +/** + * drbd_rs_del_all() - Gracefully remove all extents from the resync LRU + * @mdev: DRBD device. + * + * Returns 0 upon success, -EAGAIN if at least one reference count was + * not zero. + */ +int drbd_rs_del_all(struct drbd_conf *mdev) +{ + struct lc_element *e; + struct bm_extent *bm_ext; + int i; + + trace_drbd_resync(mdev, TRACE_LVL_METRICS, "drbd_rs_del_all\n"); + + spin_lock_irq(&mdev->al_lock); + + if (get_ldev_if_state(mdev, D_FAILED)) { + /* ok, ->resync is there. */ + for (i = 0; i < mdev->resync->nr_elements; i++) { + e = lc_element_by_index(mdev->resync, i); + bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL; + if (bm_ext->lce.lc_number == LC_FREE) + continue; + if (bm_ext->lce.lc_number == mdev->resync_wenr) { + dev_info(DEV, "dropping %u in drbd_rs_del_all, apparently" + " got 'synced' by application io\n", + mdev->resync_wenr); + D_ASSERT(!test_bit(BME_LOCKED, &bm_ext->flags)); + D_ASSERT(test_bit(BME_NO_WRITES, &bm_ext->flags)); + clear_bit(BME_NO_WRITES, &bm_ext->flags); + mdev->resync_wenr = LC_FREE; + lc_put(mdev->resync, &bm_ext->lce); + } + if (bm_ext->lce.refcnt != 0) { + dev_info(DEV, "Retrying drbd_rs_del_all() later. " + "refcnt=%d\n", bm_ext->lce.refcnt); + put_ldev(mdev); + spin_unlock_irq(&mdev->al_lock); + return -EAGAIN; + } + D_ASSERT(!test_bit(BME_LOCKED, &bm_ext->flags)); + D_ASSERT(!test_bit(BME_NO_WRITES, &bm_ext->flags)); + lc_del(mdev->resync, &bm_ext->lce); + } + D_ASSERT(mdev->resync->used == 0); + put_ldev(mdev); + } + spin_unlock_irq(&mdev->al_lock); + + return 0; +} + +/** + * drbd_rs_failed_io() - Record information on a failure to resync the specified blocks + * @mdev: DRBD device. + * @sector: The sector number. + * @size: Size of failed IO operation, in byte. 
+ */ +void drbd_rs_failed_io(struct drbd_conf *mdev, sector_t sector, int size) +{ + /* Is called from worker and receiver context _only_ */ + unsigned long sbnr, ebnr, lbnr; + unsigned long count; + sector_t esector, nr_sectors; + int wake_up = 0; + + trace_drbd_resync(mdev, TRACE_LVL_SUMMARY, + "drbd_rs_failed_io: sector=%llus, size=%u\n", + (unsigned long long)sector, size); + + if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) { + dev_err(DEV, "drbd_rs_failed_io: sector=%llus size=%d nonsense!\n", + (unsigned long long)sector, size); + return; + } + nr_sectors = drbd_get_capacity(mdev->this_bdev); + esector = sector + (size >> 9) - 1; + + ERR_IF(sector >= nr_sectors) return; + ERR_IF(esector >= nr_sectors) esector = (nr_sectors-1); + + lbnr = BM_SECT_TO_BIT(nr_sectors-1); + + /* + * round up start sector, round down end sector. we make sure we only + * handle full, aligned, BM_BLOCK_SIZE (4K) blocks */ + if (unlikely(esector < BM_SECT_PER_BIT-1)) + return; + if (unlikely(esector == (nr_sectors-1))) + ebnr = lbnr; + else + ebnr = BM_SECT_TO_BIT(esector - (BM_SECT_PER_BIT-1)); + sbnr = BM_SECT_TO_BIT(sector + BM_SECT_PER_BIT-1); + + if (sbnr > ebnr) + return; + + /* + * ok, (capacity & 7) != 0 sometimes, but who cares... + * we count rs_{total,left} in bits, not sectors. + */ + spin_lock_irq(&mdev->al_lock); + count = drbd_bm_count_bits(mdev, sbnr, ebnr); + if (count) { + mdev->rs_failed += count; + + if (get_ldev(mdev)) { + drbd_try_clear_on_disk_bm(mdev, sector, count, FALSE); + put_ldev(mdev); + } + + /* just wake_up unconditional now, various lc_changed(), + * lc_put() in drbd_try_clear_on_disk_bm(). */ + wake_up = 1; + } + spin_unlock_irq(&mdev->al_lock); + if (wake_up) + wake_up(&mdev->al_wait); +} diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c new file mode 100644 index 000000000000..b61057e77882 --- /dev/null +++ b/drivers/block/drbd/drbd_bitmap.c @@ -0,0 +1,1327 @@ +/* + drbd_bitmap.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2004-2008, LINBIT Information Technologies GmbH. + Copyright (C) 2004-2008, Philipp Reisner . + Copyright (C) 2004-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + */ + +#include <linux/bitops.h> +#include <linux/vmalloc.h> +#include <linux/string.h> +#include <linux/drbd.h> +#include <asm/kmap_types.h> +#include "drbd_int.h" + +/* OPAQUE outside this file! + * interface defined in drbd_int.h + + * convention: + * function name drbd_bm_... => used elsewhere, "public". + * function name bm_... => internal to implementation, "private". + + * Note that since find_first_bit returns int, at the current granularity of + * the bitmap (4KB per bit), this implementation "only" supports up to + * 1<<(32+12) == 16 TB... + */ + +/* + * NOTE + * Access to the *bm_pages is protected by bm_lock. + * It is safe to read the other members within the lock.
+ * + * drbd_bm_set_bits is called from bio_endio callbacks, + * We may be called with irq already disabled, + * so we need spin_lock_irqsave(). + * And we need the kmap_atomic. + */ +struct drbd_bitmap { + struct page **bm_pages; + spinlock_t bm_lock; + /* WARNING unsigned long bm_*: + * 32bit number of bit offset is just enough for 512 MB bitmap. + * it will blow up if we make the bitmap bigger... + * not that it makes much sense to have a bitmap that large, + * rather change the granularity to 16k or 64k or something. + * (that implies other problems, however...) + */ + unsigned long bm_set; /* nr of set bits; THINK maybe atomic_t? */ + unsigned long bm_bits; + size_t bm_words; + size_t bm_number_of_pages; + sector_t bm_dev_capacity; + struct semaphore bm_change; /* serializes resize operations */ + + atomic_t bm_async_io; + wait_queue_head_t bm_io_wait; + + unsigned long bm_flags; + + /* debugging aid, in case we are still racy somewhere */ + char *bm_why; + struct task_struct *bm_task; +}; + +/* definition of bits in bm_flags */ +#define BM_LOCKED 0 +#define BM_MD_IO_ERROR 1 +#define BM_P_VMALLOCED 2 + +static int bm_is_locked(struct drbd_bitmap *b) +{ + return test_bit(BM_LOCKED, &b->bm_flags); +} + +#define bm_print_lock_info(m) __bm_print_lock_info(m, __func__) +static void __bm_print_lock_info(struct drbd_conf *mdev, const char *func) +{ + struct drbd_bitmap *b = mdev->bitmap; + if (!__ratelimit(&drbd_ratelimit_state)) + return; + dev_err(DEV, "FIXME %s in %s, bitmap locked for '%s' by %s\n", + current == mdev->receiver.task ? "receiver" : + current == mdev->asender.task ? "asender" : + current == mdev->worker.task ? "worker" : current->comm, + func, b->bm_why ?: "?", + b->bm_task == mdev->receiver.task ? "receiver" : + b->bm_task == mdev->asender.task ? "asender" : + b->bm_task == mdev->worker.task ? "worker" : "?"); +} + +void drbd_bm_lock(struct drbd_conf *mdev, char *why) +{ + struct drbd_bitmap *b = mdev->bitmap; + int trylock_failed; + + if (!b) { + dev_err(DEV, "FIXME no bitmap in drbd_bm_lock!?\n"); + return; + } + + trylock_failed = down_trylock(&b->bm_change); + + if (trylock_failed) { + dev_warn(DEV, "%s going to '%s' but bitmap already locked for '%s' by %s\n", + current == mdev->receiver.task ? "receiver" : + current == mdev->asender.task ? "asender" : + current == mdev->worker.task ? "worker" : current->comm, + why, b->bm_why ?: "?", + b->bm_task == mdev->receiver.task ? "receiver" : + b->bm_task == mdev->asender.task ? "asender" : + b->bm_task == mdev->worker.task ? 
"worker" : "?"); + down(&b->bm_change); + } + if (__test_and_set_bit(BM_LOCKED, &b->bm_flags)) + dev_err(DEV, "FIXME bitmap already locked in bm_lock\n"); + + b->bm_why = why; + b->bm_task = current; +} + +void drbd_bm_unlock(struct drbd_conf *mdev) +{ + struct drbd_bitmap *b = mdev->bitmap; + if (!b) { + dev_err(DEV, "FIXME no bitmap in drbd_bm_unlock!?\n"); + return; + } + + if (!__test_and_clear_bit(BM_LOCKED, &mdev->bitmap->bm_flags)) + dev_err(DEV, "FIXME bitmap not locked in bm_unlock\n"); + + b->bm_why = NULL; + b->bm_task = NULL; + up(&b->bm_change); +} + +/* word offset to long pointer */ +static unsigned long *__bm_map_paddr(struct drbd_bitmap *b, unsigned long offset, const enum km_type km) +{ + struct page *page; + unsigned long page_nr; + + /* page_nr = (word*sizeof(long)) >> PAGE_SHIFT; */ + page_nr = offset >> (PAGE_SHIFT - LN2_BPL + 3); + BUG_ON(page_nr >= b->bm_number_of_pages); + page = b->bm_pages[page_nr]; + + return (unsigned long *) kmap_atomic(page, km); +} + +static unsigned long * bm_map_paddr(struct drbd_bitmap *b, unsigned long offset) +{ + return __bm_map_paddr(b, offset, KM_IRQ1); +} + +static void __bm_unmap(unsigned long *p_addr, const enum km_type km) +{ + kunmap_atomic(p_addr, km); +}; + +static void bm_unmap(unsigned long *p_addr) +{ + return __bm_unmap(p_addr, KM_IRQ1); +} + +/* long word offset of _bitmap_ sector */ +#define S2W(s) ((s)<<(BM_EXT_SHIFT-BM_BLOCK_SHIFT-LN2_BPL)) +/* word offset from start of bitmap to word number _in_page_ + * modulo longs per page +#define MLPP(X) ((X) % (PAGE_SIZE/sizeof(long)) + hm, well, Philipp thinks gcc might not optimze the % into & (... - 1) + so do it explicitly: + */ +#define MLPP(X) ((X) & ((PAGE_SIZE/sizeof(long))-1)) + +/* Long words per page */ +#define LWPP (PAGE_SIZE/sizeof(long)) + +/* + * actually most functions herein should take a struct drbd_bitmap*, not a + * struct drbd_conf*, but for the debug macros I like to have the mdev around + * to be able to report device specific. + */ + +static void bm_free_pages(struct page **pages, unsigned long number) +{ + unsigned long i; + if (!pages) + return; + + for (i = 0; i < number; i++) { + if (!pages[i]) { + printk(KERN_ALERT "drbd: bm_free_pages tried to free " + "a NULL pointer; i=%lu n=%lu\n", + i, number); + continue; + } + __free_page(pages[i]); + pages[i] = NULL; + } +} + +static void bm_vk_free(void *ptr, int v) +{ + if (v) + vfree(ptr); + else + kfree(ptr); +} + +/* + * "have" and "want" are NUMBER OF PAGES. + */ +static struct page **bm_realloc_pages(struct drbd_bitmap *b, unsigned long want) +{ + struct page **old_pages = b->bm_pages; + struct page **new_pages, *page; + unsigned int i, bytes, vmalloced = 0; + unsigned long have = b->bm_number_of_pages; + + BUG_ON(have == 0 && old_pages != NULL); + BUG_ON(have != 0 && old_pages == NULL); + + if (have == want) + return old_pages; + + /* Trying kmalloc first, falling back to vmalloc. + * GFP_KERNEL is ok, as this is done when a lower level disk is + * "attached" to the drbd. Context is receiver thread or cqueue + * thread. As we have no disk yet, we are not in the IO path, + * not even the IO path of the peer. 
*/ + bytes = sizeof(struct page *)*want; + new_pages = kmalloc(bytes, GFP_KERNEL); + if (!new_pages) { + new_pages = vmalloc(bytes); + if (!new_pages) + return NULL; + vmalloced = 1; + } + + memset(new_pages, 0, bytes); + if (want >= have) { + for (i = 0; i < have; i++) + new_pages[i] = old_pages[i]; + for (; i < want; i++) { + page = alloc_page(GFP_HIGHUSER); + if (!page) { + bm_free_pages(new_pages + have, i - have); + bm_vk_free(new_pages, vmalloced); + return NULL; + } + new_pages[i] = page; + } + } else { + for (i = 0; i < want; i++) + new_pages[i] = old_pages[i]; + /* NOT HERE, we are outside the spinlock! + bm_free_pages(old_pages + want, have - want); + */ + } + + if (vmalloced) + set_bit(BM_P_VMALLOCED, &b->bm_flags); + else + clear_bit(BM_P_VMALLOCED, &b->bm_flags); + + return new_pages; +} + +/* + * called on driver init only. TODO call when a device is created. + * allocates the drbd_bitmap, and stores it in mdev->bitmap. + */ +int drbd_bm_init(struct drbd_conf *mdev) +{ + struct drbd_bitmap *b = mdev->bitmap; + WARN_ON(b != NULL); + b = kzalloc(sizeof(struct drbd_bitmap), GFP_KERNEL); + if (!b) + return -ENOMEM; + spin_lock_init(&b->bm_lock); + init_MUTEX(&b->bm_change); + init_waitqueue_head(&b->bm_io_wait); + + mdev->bitmap = b; + + return 0; +} + +sector_t drbd_bm_capacity(struct drbd_conf *mdev) +{ + ERR_IF(!mdev->bitmap) return 0; + return mdev->bitmap->bm_dev_capacity; +} + +/* called on driver unload. TODO: call when a device is destroyed. + */ +void drbd_bm_cleanup(struct drbd_conf *mdev) +{ + ERR_IF (!mdev->bitmap) return; + bm_free_pages(mdev->bitmap->bm_pages, mdev->bitmap->bm_number_of_pages); + bm_vk_free(mdev->bitmap->bm_pages, test_bit(BM_P_VMALLOCED, &mdev->bitmap->bm_flags)); + kfree(mdev->bitmap); + mdev->bitmap = NULL; +} + +/* + * since (b->bm_bits % BITS_PER_LONG) != 0, + * this masks out the remaining bits. + * Returns the number of bits cleared. 
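+ *
+ * Example on a 64 bit host: bm_bits == 70 gives mask == (1UL<<6)-1,
+ * so the low 6 bits (bits 64..69) of the last used word are kept,
+ * and the surplus bits 70..127 are cleared and counted.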
+ */ +static int bm_clear_surplus(struct drbd_bitmap *b) +{ + const unsigned long mask = (1UL << (b->bm_bits & (BITS_PER_LONG-1))) - 1; + size_t w = b->bm_bits >> LN2_BPL; + int cleared = 0; + unsigned long *p_addr, *bm; + + p_addr = bm_map_paddr(b, w); + bm = p_addr + MLPP(w); + if (w < b->bm_words) { + cleared = hweight_long(*bm & ~mask); + *bm &= mask; + w++; bm++; + } + + if (w < b->bm_words) { + cleared += hweight_long(*bm); + *bm = 0; + } + bm_unmap(p_addr); + return cleared; +} + +static void bm_set_surplus(struct drbd_bitmap *b) +{ + const unsigned long mask = (1UL << (b->bm_bits & (BITS_PER_LONG-1))) - 1; + size_t w = b->bm_bits >> LN2_BPL; + unsigned long *p_addr, *bm; + + p_addr = bm_map_paddr(b, w); + bm = p_addr + MLPP(w); + if (w < b->bm_words) { + *bm |= ~mask; + bm++; w++; + } + + if (w < b->bm_words) { + *bm = ~(0UL); + } + bm_unmap(p_addr); +} + +static unsigned long __bm_count_bits(struct drbd_bitmap *b, const int swap_endian) +{ + unsigned long *p_addr, *bm, offset = 0; + unsigned long bits = 0; + unsigned long i, do_now; + + while (offset < b->bm_words) { + i = do_now = min_t(size_t, b->bm_words-offset, LWPP); + p_addr = __bm_map_paddr(b, offset, KM_USER0); + bm = p_addr + MLPP(offset); + while (i--) { +#ifndef __LITTLE_ENDIAN + if (swap_endian) + *bm = lel_to_cpu(*bm); +#endif + bits += hweight_long(*bm++); + } + __bm_unmap(p_addr, KM_USER0); + offset += do_now; + cond_resched(); + } + + return bits; +} + +static unsigned long bm_count_bits(struct drbd_bitmap *b) +{ + return __bm_count_bits(b, 0); +} + +static unsigned long bm_count_bits_swap_endian(struct drbd_bitmap *b) +{ + return __bm_count_bits(b, 1); +} + +/* offset and len in long words.*/ +static void bm_memset(struct drbd_bitmap *b, size_t offset, int c, size_t len) +{ + unsigned long *p_addr, *bm; + size_t do_now, end; + +#define BM_SECTORS_PER_BIT (BM_BLOCK_SIZE/512) + + end = offset + len; + + if (end > b->bm_words) { + printk(KERN_ALERT "drbd: bm_memset end > bm_words\n"); + return; + } + + while (offset < end) { + do_now = min_t(size_t, ALIGN(offset + 1, LWPP), end) - offset; + p_addr = bm_map_paddr(b, offset); + bm = p_addr + MLPP(offset); + if (bm+do_now > p_addr + LWPP) { + printk(KERN_ALERT "drbd: BUG BUG BUG! p_addr:%p bm:%p do_now:%d\n", + p_addr, bm, (int)do_now); + break; /* breaks to after catch_oob_access_end() only! */ + } + memset(bm, c, do_now * sizeof(long)); + bm_unmap(p_addr); + offset += do_now; + } +} + +/* + * make sure the bitmap has enough room for the attached storage, + * if necessary, resize. + * called whenever we may have changed the device size. + * returns -ENOMEM if we could not allocate enough memory, 0 on success. + * In case this is actually a resize, we copy the old bitmap into the new one. + * Otherwise, the bitmap is initialized to all bits set. 
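+ *
+ * Sizing example: capacity == 2^21 sectors (1 GiB) gives bits == 2^18;
+ * since we align bits to 64, that is 32 KiB of bitmap on 32 bit hosts
+ * (8192 longs) and on 64 bit hosts (4096 longs) alike, so peers of
+ * different word size agree on the representation.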
+ */ +int drbd_bm_resize(struct drbd_conf *mdev, sector_t capacity) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long bits, words, owords, obits, *p_addr, *bm; + unsigned long want, have, onpages; /* number of pages */ + struct page **npages, **opages = NULL; + int err = 0, growing; + int opages_vmalloced; + + ERR_IF(!b) return -ENOMEM; + + drbd_bm_lock(mdev, "resize"); + + dev_info(DEV, "drbd_bm_resize called with capacity == %llu\n", + (unsigned long long)capacity); + + if (capacity == b->bm_dev_capacity) + goto out; + + opages_vmalloced = test_bit(BM_P_VMALLOCED, &b->bm_flags); + + if (capacity == 0) { + spin_lock_irq(&b->bm_lock); + opages = b->bm_pages; + onpages = b->bm_number_of_pages; + owords = b->bm_words; + b->bm_pages = NULL; + b->bm_number_of_pages = + b->bm_set = + b->bm_bits = + b->bm_words = + b->bm_dev_capacity = 0; + spin_unlock_irq(&b->bm_lock); + bm_free_pages(opages, onpages); + bm_vk_free(opages, opages_vmalloced); + goto out; + } + bits = BM_SECT_TO_BIT(ALIGN(capacity, BM_SECT_PER_BIT)); + + /* if we would use + words = ALIGN(bits,BITS_PER_LONG) >> LN2_BPL; + a 32bit host could present the wrong number of words + to a 64bit host. + */ + words = ALIGN(bits, 64) >> LN2_BPL; + + if (get_ldev(mdev)) { + D_ASSERT((u64)bits <= (((u64)mdev->ldev->md.md_size_sect-MD_BM_OFFSET) << 12)); + put_ldev(mdev); + } + + /* one extra long to catch off by one errors */ + want = ALIGN((words+1)*sizeof(long), PAGE_SIZE) >> PAGE_SHIFT; + have = b->bm_number_of_pages; + if (want == have) { + D_ASSERT(b->bm_pages != NULL); + npages = b->bm_pages; + } else { + if (FAULT_ACTIVE(mdev, DRBD_FAULT_BM_ALLOC)) + npages = NULL; + else + npages = bm_realloc_pages(b, want); + } + + if (!npages) { + err = -ENOMEM; + goto out; + } + + spin_lock_irq(&b->bm_lock); + opages = b->bm_pages; + owords = b->bm_words; + obits = b->bm_bits; + + growing = bits > obits; + if (opages) + bm_set_surplus(b); + + b->bm_pages = npages; + b->bm_number_of_pages = want; + b->bm_bits = bits; + b->bm_words = words; + b->bm_dev_capacity = capacity; + + if (growing) { + bm_memset(b, owords, 0xff, words-owords); + b->bm_set += bits - obits; + } + + if (want < have) { + /* implicit: (opages != NULL) && (opages != npages) */ + bm_free_pages(opages + want, have - want); + } + + p_addr = bm_map_paddr(b, words); + bm = p_addr + MLPP(words); + *bm = DRBD_MAGIC; + bm_unmap(p_addr); + + (void)bm_clear_surplus(b); + + spin_unlock_irq(&b->bm_lock); + if (opages != npages) + bm_vk_free(opages, opages_vmalloced); + if (!growing) + b->bm_set = bm_count_bits(b); + dev_info(DEV, "resync bitmap: bits=%lu words=%lu\n", bits, words); + + out: + drbd_bm_unlock(mdev); + return err; +} + +/* inherently racy: + * if not protected by other means, return value may be out of date when + * leaving this function... + * we still need to lock it, since it is important that this returns + * bm_set == 0 precisely. + * + * maybe bm_set should be atomic_t ? 
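+ *
+ * Note that taking bm_lock here also means no updater is halfway
+ * through __bm_change_bits_to() while we read, so a result of 0 is
+ * really 0, and not the transient state of a bulk update.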
+ */ +static unsigned long _drbd_bm_total_weight(struct drbd_conf *mdev) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long s; + unsigned long flags; + + ERR_IF(!b) return 0; + ERR_IF(!b->bm_pages) return 0; + + spin_lock_irqsave(&b->bm_lock, flags); + s = b->bm_set; + spin_unlock_irqrestore(&b->bm_lock, flags); + + return s; +} + +unsigned long drbd_bm_total_weight(struct drbd_conf *mdev) +{ + unsigned long s; + /* if I don't have a disk, I don't know about out-of-sync status */ + if (!get_ldev_if_state(mdev, D_NEGOTIATING)) + return 0; + s = _drbd_bm_total_weight(mdev); + put_ldev(mdev); + return s; +} + +size_t drbd_bm_words(struct drbd_conf *mdev) +{ + struct drbd_bitmap *b = mdev->bitmap; + ERR_IF(!b) return 0; + ERR_IF(!b->bm_pages) return 0; + + return b->bm_words; +} + +unsigned long drbd_bm_bits(struct drbd_conf *mdev) +{ + struct drbd_bitmap *b = mdev->bitmap; + ERR_IF(!b) return 0; + + return b->bm_bits; +} + +/* merge number words from buffer into the bitmap starting at offset. + * buffer[i] is expected to be little endian unsigned long. + * bitmap must be locked by drbd_bm_lock. + * currently only used from receive_bitmap. + */ +void drbd_bm_merge_lel(struct drbd_conf *mdev, size_t offset, size_t number, + unsigned long *buffer) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long *p_addr, *bm; + unsigned long word, bits; + size_t end, do_now; + + end = offset + number; + + ERR_IF(!b) return; + ERR_IF(!b->bm_pages) return; + if (number == 0) + return; + WARN_ON(offset >= b->bm_words); + WARN_ON(end > b->bm_words); + + spin_lock_irq(&b->bm_lock); + while (offset < end) { + do_now = min_t(size_t, ALIGN(offset+1, LWPP), end) - offset; + p_addr = bm_map_paddr(b, offset); + bm = p_addr + MLPP(offset); + offset += do_now; + while (do_now--) { + bits = hweight_long(*bm); + word = *bm | lel_to_cpu(*buffer++); + *bm++ = word; + b->bm_set += hweight_long(word) - bits; + } + bm_unmap(p_addr); + } + /* with 32bit <-> 64bit cross-platform connect + * this is only correct for current usage, + * where we _know_ that we are 64 bit aligned, + * and know that this function is used in this way, too... + */ + if (end == b->bm_words) + b->bm_set -= bm_clear_surplus(b); + + spin_unlock_irq(&b->bm_lock); +} + +/* copy number words from the bitmap starting at offset into the buffer. + * buffer[i] will be little endian unsigned long. 
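+ * This is the counterpart of drbd_bm_merge_lel() above, and is used
+ * e.g. by drbd_bm_write_sect() to fill the meta data sector buffer.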
+ */ +void drbd_bm_get_lel(struct drbd_conf *mdev, size_t offset, size_t number, + unsigned long *buffer) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long *p_addr, *bm; + size_t end, do_now; + + end = offset + number; + + ERR_IF(!b) return; + ERR_IF(!b->bm_pages) return; + + spin_lock_irq(&b->bm_lock); + if ((offset >= b->bm_words) || + (end > b->bm_words) || + (number <= 0)) + dev_err(DEV, "offset=%lu number=%lu bm_words=%lu\n", + (unsigned long) offset, + (unsigned long) number, + (unsigned long) b->bm_words); + else { + while (offset < end) { + do_now = min_t(size_t, ALIGN(offset+1, LWPP), end) - offset; + p_addr = bm_map_paddr(b, offset); + bm = p_addr + MLPP(offset); + offset += do_now; + while (do_now--) + *buffer++ = cpu_to_lel(*bm++); + bm_unmap(p_addr); + } + } + spin_unlock_irq(&b->bm_lock); +} + +/* set all bits in the bitmap */ +void drbd_bm_set_all(struct drbd_conf *mdev) +{ + struct drbd_bitmap *b = mdev->bitmap; + ERR_IF(!b) return; + ERR_IF(!b->bm_pages) return; + + spin_lock_irq(&b->bm_lock); + bm_memset(b, 0, 0xff, b->bm_words); + (void)bm_clear_surplus(b); + b->bm_set = b->bm_bits; + spin_unlock_irq(&b->bm_lock); +} + +/* clear all bits in the bitmap */ +void drbd_bm_clear_all(struct drbd_conf *mdev) +{ + struct drbd_bitmap *b = mdev->bitmap; + ERR_IF(!b) return; + ERR_IF(!b->bm_pages) return; + + spin_lock_irq(&b->bm_lock); + bm_memset(b, 0, 0, b->bm_words); + b->bm_set = 0; + spin_unlock_irq(&b->bm_lock); +} + +static void bm_async_io_complete(struct bio *bio, int error) +{ + struct drbd_bitmap *b = bio->bi_private; + int uptodate = bio_flagged(bio, BIO_UPTODATE); + + + /* strange behavior of some lower level drivers... + * fail the request by clearing the uptodate flag, + * but do not return any error?! + * do we want to WARN() on this? */ + if (!error && !uptodate) + error = -EIO; + + if (error) { + /* doh. what now? + * for now, set all bits, and flag MD_IO_ERROR */ + __set_bit(BM_MD_IO_ERROR, &b->bm_flags); + } + if (atomic_dec_and_test(&b->bm_async_io)) + wake_up(&b->bm_io_wait); + + bio_put(bio); +} + +static void bm_page_io_async(struct drbd_conf *mdev, struct drbd_bitmap *b, int page_nr, int rw) __must_hold(local) +{ + /* we are process context. we always get a bio */ + struct bio *bio = bio_alloc(GFP_KERNEL, 1); + unsigned int len; + sector_t on_disk_sector = + mdev->ldev->md.md_offset + mdev->ldev->md.bm_offset; + on_disk_sector += ((sector_t)page_nr) << (PAGE_SHIFT-9); + + /* this might happen with very small + * flexible external meta data device */ + len = min_t(unsigned int, PAGE_SIZE, + (drbd_md_last_sector(mdev->ldev) - on_disk_sector + 1)<<9); + + bio->bi_bdev = mdev->ldev->md_bdev; + bio->bi_sector = on_disk_sector; + bio_add_page(bio, b->bm_pages[page_nr], len, 0); + bio->bi_private = b; + bio->bi_end_io = bm_async_io_complete; + + if (FAULT_ACTIVE(mdev, (rw & WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) { + bio->bi_rw |= rw; + bio_endio(bio, -EIO); + } else { + submit_bio(rw, bio); + } +} + +# if defined(__LITTLE_ENDIAN) + /* nothing to do, on disk == in memory */ +# define bm_cpu_to_lel(x) ((void)0) +# else +void bm_cpu_to_lel(struct drbd_bitmap *b) +{ + /* need to cpu_to_lel all the pages ... 
+ * this may be optimized by using + * cpu_to_lel(-1) == -1 and cpu_to_lel(0) == 0; + * the following is still not optimal, but better than nothing */ + unsigned int i; + unsigned long *p_addr, *bm; + if (b->bm_set == 0) { + /* no page at all; avoid swap if all is 0 */ + i = b->bm_number_of_pages; + } else if (b->bm_set == b->bm_bits) { + /* only the last page */ + i = b->bm_number_of_pages - 1; + } else { + /* all pages */ + i = 0; + } + for (; i < b->bm_number_of_pages; i++) { + p_addr = kmap_atomic(b->bm_pages[i], KM_USER0); + for (bm = p_addr; bm < p_addr + PAGE_SIZE/sizeof(long); bm++) + *bm = cpu_to_lel(*bm); + kunmap_atomic(p_addr, KM_USER0); + } +} +# endif +/* lel_to_cpu == cpu_to_lel */ +# define bm_lel_to_cpu(x) bm_cpu_to_lel(x) + +/* + * bm_rw: read/write the whole bitmap from/to its on disk location. + */ +static int bm_rw(struct drbd_conf *mdev, int rw) __must_hold(local) +{ + struct drbd_bitmap *b = mdev->bitmap; + /* sector_t sector; */ + int bm_words, num_pages, i; + unsigned long now; + char ppb[10]; + int err = 0; + + WARN_ON(!bm_is_locked(b)); + + /* no spinlock here, the drbd_bm_lock should be enough! */ + + bm_words = drbd_bm_words(mdev); + num_pages = (bm_words*sizeof(long) + PAGE_SIZE-1) >> PAGE_SHIFT; + + /* on disk bitmap is little endian */ + if (rw == WRITE) + bm_cpu_to_lel(b); + + now = jiffies; + atomic_set(&b->bm_async_io, num_pages); + __clear_bit(BM_MD_IO_ERROR, &b->bm_flags); + + /* let the layers below us try to merge these bios... */ + for (i = 0; i < num_pages; i++) + bm_page_io_async(mdev, b, i, rw); + + drbd_blk_run_queue(bdev_get_queue(mdev->ldev->md_bdev)); + wait_event(b->bm_io_wait, atomic_read(&b->bm_async_io) == 0); + + if (test_bit(BM_MD_IO_ERROR, &b->bm_flags)) { + dev_alert(DEV, "we had at least one MD IO ERROR during bitmap IO\n"); + drbd_chk_io_error(mdev, 1, TRUE); + err = -EIO; + } + + now = jiffies; + if (rw == WRITE) { + /* swap back endianness */ + bm_lel_to_cpu(b); + /* flush bitmap to stable storage */ + drbd_md_flush(mdev); + } else /* rw == READ */ { + /* just read, if necessary adjust endianness */ + b->bm_set = bm_count_bits_swap_endian(b); + dev_info(DEV, "recounting of set bits took additional %lu jiffies\n", + jiffies - now); + } + now = b->bm_set; + + dev_info(DEV, "%s (%lu bits) marked out-of-sync by on disk bit-map.\n", + ppsize(ppb, now << (BM_BLOCK_SHIFT-10)), now); + + return err; +} + +/** + * drbd_bm_read() - Read the whole bitmap from its on disk location. + * @mdev: DRBD device. + */ +int drbd_bm_read(struct drbd_conf *mdev) __must_hold(local) +{ + return bm_rw(mdev, READ); +} + +/** + * drbd_bm_write() - Write the whole bitmap to its on disk location. + * @mdev: DRBD device. + */ +int drbd_bm_write(struct drbd_conf *mdev) __must_hold(local) +{ + return bm_rw(mdev, WRITE); +} + +/** + * drbd_bm_write_sect: Writes a 512 (MD_SECTOR_SIZE) byte piece of the bitmap + * @mdev: DRBD device. + * @enr: Extent number in the resync lru (happens to be sector offset) + * + * The BM_EXT_SIZE is on purpose exactly the amount of the bitmap covered + * by a single sector write. Therefore enr == sector offset from the + * start of the bitmap. 
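+ *
+ * One such 512 byte sector holds 4096 bits, so a single sector write
+ * publishes the sync state of 4096 * BM_BLOCK_SIZE == 16 MiB of
+ * device data.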
+ */ +int drbd_bm_write_sect(struct drbd_conf *mdev, unsigned long enr) __must_hold(local) +{ + sector_t on_disk_sector = enr + mdev->ldev->md.md_offset + + mdev->ldev->md.bm_offset; + int bm_words, num_words, offset; + int err = 0; + + mutex_lock(&mdev->md_io_mutex); + bm_words = drbd_bm_words(mdev); + offset = S2W(enr); /* word offset into bitmap */ + num_words = min(S2W(1), bm_words - offset); + if (num_words < S2W(1)) + memset(page_address(mdev->md_io_page), 0, MD_SECTOR_SIZE); + drbd_bm_get_lel(mdev, offset, num_words, + page_address(mdev->md_io_page)); + if (!drbd_md_sync_page_io(mdev, mdev->ldev, on_disk_sector, WRITE)) { + int i; + err = -EIO; + dev_err(DEV, "IO ERROR writing bitmap sector %lu " + "(meta-disk sector %llus)\n", + enr, (unsigned long long)on_disk_sector); + drbd_chk_io_error(mdev, 1, TRUE); + for (i = 0; i < AL_EXT_PER_BM_SECT; i++) + drbd_bm_ALe_set_all(mdev, enr*AL_EXT_PER_BM_SECT+i); + } + mdev->bm_writ_cnt++; + mutex_unlock(&mdev->md_io_mutex); + return err; +} + +/* NOTE + * find_first_bit returns int, we return unsigned long. + * should not make much difference anyways, but ... + * + * this returns a bit number, NOT a sector! + */ +#define BPP_MASK ((1UL << (PAGE_SHIFT+3)) - 1) +static unsigned long __bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo, + const int find_zero_bit, const enum km_type km) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long i = -1UL; + unsigned long *p_addr; + unsigned long bit_offset; /* bit offset of the mapped page. */ + + if (bm_fo > b->bm_bits) { + dev_err(DEV, "bm_fo=%lu bm_bits=%lu\n", bm_fo, b->bm_bits); + } else { + while (bm_fo < b->bm_bits) { + unsigned long offset; + bit_offset = bm_fo & ~BPP_MASK; /* bit offset of the page */ + offset = bit_offset >> LN2_BPL; /* word offset of the page */ + p_addr = __bm_map_paddr(b, offset, km); + + if (find_zero_bit) + i = find_next_zero_bit(p_addr, PAGE_SIZE*8, bm_fo & BPP_MASK); + else + i = find_next_bit(p_addr, PAGE_SIZE*8, bm_fo & BPP_MASK); + + __bm_unmap(p_addr, km); + if (i < PAGE_SIZE*8) { + i = bit_offset + i; + if (i >= b->bm_bits) + break; + goto found; + } + bm_fo = bit_offset + PAGE_SIZE*8; + } + i = -1UL; + } + found: + return i; +} + +static unsigned long bm_find_next(struct drbd_conf *mdev, + unsigned long bm_fo, const int find_zero_bit) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long i = -1UL; + + ERR_IF(!b) return i; + ERR_IF(!b->bm_pages) return i; + + spin_lock_irq(&b->bm_lock); + if (bm_is_locked(b)) + bm_print_lock_info(mdev); + + i = __bm_find_next(mdev, bm_fo, find_zero_bit, KM_IRQ1); + + spin_unlock_irq(&b->bm_lock); + return i; +} + +unsigned long drbd_bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo) +{ + return bm_find_next(mdev, bm_fo, 0); +} + +#if 0 +/* not yet needed for anything. */ +unsigned long drbd_bm_find_next_zero(struct drbd_conf *mdev, unsigned long bm_fo) +{ + return bm_find_next(mdev, bm_fo, 1); +} +#endif + +/* does not spin_lock_irqsave. + * you must take drbd_bm_lock() first */ +unsigned long _drbd_bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo) +{ + /* WARN_ON(!bm_is_locked(mdev)); */ + return __bm_find_next(mdev, bm_fo, 0, KM_USER1); +} + +unsigned long _drbd_bm_find_next_zero(struct drbd_conf *mdev, unsigned long bm_fo) +{ + /* WARN_ON(!bm_is_locked(mdev)); */ + return __bm_find_next(mdev, bm_fo, 1, KM_USER1); +} + +/* returns number of bits actually changed. 
+ * for val != 0, we change 0 -> 1, return code positive + * for val == 0, we change 1 -> 0, return code negative + * wants bitnr, not sector. + * expected to be called for only a few bits (e - s about BITS_PER_LONG). + * Must hold bitmap lock already. */ +int __bm_change_bits_to(struct drbd_conf *mdev, const unsigned long s, + unsigned long e, int val, const enum km_type km) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long *p_addr = NULL; + unsigned long bitnr; + unsigned long last_page_nr = -1UL; + int c = 0; + + if (e >= b->bm_bits) { + dev_err(DEV, "ASSERT FAILED: bit_s=%lu bit_e=%lu bm_bits=%lu\n", + s, e, b->bm_bits); + e = b->bm_bits ? b->bm_bits -1 : 0; + } + for (bitnr = s; bitnr <= e; bitnr++) { + unsigned long offset = bitnr>>LN2_BPL; + unsigned long page_nr = offset >> (PAGE_SHIFT - LN2_BPL + 3); + if (page_nr != last_page_nr) { + if (p_addr) + __bm_unmap(p_addr, km); + p_addr = __bm_map_paddr(b, offset, km); + last_page_nr = page_nr; + } + if (val) + c += (0 == __test_and_set_bit(bitnr & BPP_MASK, p_addr)); + else + c -= (0 != __test_and_clear_bit(bitnr & BPP_MASK, p_addr)); + } + if (p_addr) + __bm_unmap(p_addr, km); + b->bm_set += c; + return c; +} + +/* returns number of bits actually changed. + * for val != 0, we change 0 -> 1, return code positive + * for val == 0, we change 1 -> 0, return code negative + * wants bitnr, not sector */ +int bm_change_bits_to(struct drbd_conf *mdev, const unsigned long s, + const unsigned long e, int val) +{ + unsigned long flags; + struct drbd_bitmap *b = mdev->bitmap; + int c = 0; + + ERR_IF(!b) return 1; + ERR_IF(!b->bm_pages) return 0; + + spin_lock_irqsave(&b->bm_lock, flags); + if (bm_is_locked(b)) + bm_print_lock_info(mdev); + + c = __bm_change_bits_to(mdev, s, e, val, KM_IRQ1); + + spin_unlock_irqrestore(&b->bm_lock, flags); + return c; +} + +/* returns number of bits changed 0 -> 1 */ +int drbd_bm_set_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e) +{ + return bm_change_bits_to(mdev, s, e, 1); +} + +/* returns number of bits changed 1 -> 0 */ +int drbd_bm_clear_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e) +{ + return -bm_change_bits_to(mdev, s, e, 0); +} + +/* sets all bits in full words, + * from first_word up to, but not including, last_word */ +static inline void bm_set_full_words_within_one_page(struct drbd_bitmap *b, + int page_nr, int first_word, int last_word) +{ + int i; + int bits; + unsigned long *paddr = kmap_atomic(b->bm_pages[page_nr], KM_USER0); + for (i = first_word; i < last_word; i++) { + bits = hweight_long(paddr[i]); + paddr[i] = ~0UL; + b->bm_set += BITS_PER_LONG - bits; + } + kunmap_atomic(paddr, KM_USER0); +} + +/* Same thing as drbd_bm_set_bits, but without taking the spin_lock_irqsave. + * You must first drbd_bm_lock(). + * Can be called to set the whole bitmap in one go. + * Sets bits from s to e _inclusive_. */ +void _drbd_bm_set_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e) +{ + /* First set_bit from the first bit (s) + * up to the next long boundary (sl), + * then assign full words up to the last long boundary (el), + * then set_bit up to and including the last bit (e). + * + * Do not use memset, because we must account for changes, + * so we need to loop over the words with hweight() anyways. 
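+	 *
+	 * E.g. s == 10, e == 1000 on a 64 bit host: sl == 64, el == 960;
+	 * bits 10..63 and 960..1000 go through __bm_change_bits_to(),
+	 * bits 64..959 are assigned as 14 full words of ~0UL.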
+ */ + unsigned long sl = ALIGN(s,BITS_PER_LONG); + unsigned long el = (e+1) & ~((unsigned long)BITS_PER_LONG-1); + int first_page; + int last_page; + int page_nr; + int first_word; + int last_word; + + if (e - s <= 3*BITS_PER_LONG) { + /* don't bother; el and sl may even be wrong. */ + __bm_change_bits_to(mdev, s, e, 1, KM_USER0); + return; + } + + /* difference is large enough that we can trust sl and el */ + + /* bits filling the current long */ + if (sl) + __bm_change_bits_to(mdev, s, sl-1, 1, KM_USER0); + + first_page = sl >> (3 + PAGE_SHIFT); + last_page = el >> (3 + PAGE_SHIFT); + + /* MLPP: modulo longs per page */ + /* LWPP: long words per page */ + first_word = MLPP(sl >> LN2_BPL); + last_word = LWPP; + + /* first and full pages, unless first page == last page */ + for (page_nr = first_page; page_nr < last_page; page_nr++) { + bm_set_full_words_within_one_page(mdev->bitmap, page_nr, first_word, last_word); + cond_resched(); + first_word = 0; + } + + /* last page (respectively only page, for first page == last page) */ + last_word = MLPP(el >> LN2_BPL); + bm_set_full_words_within_one_page(mdev->bitmap, last_page, first_word, last_word); + + /* possibly trailing bits. + * example: (e & 63) == 63, el will be e+1. + * if that even was the very last bit, + * it would trigger an assert in __bm_change_bits_to() + */ + if (el <= e) + __bm_change_bits_to(mdev, el, e, 1, KM_USER0); +} + +/* returns bit state + * wants bitnr, NOT sector. + * inherently racy... area needs to be locked by means of {al,rs}_lru + * 1 ... bit set + * 0 ... bit not set + * -1 ... first out of bounds access, stop testing for bits! + */ +int drbd_bm_test_bit(struct drbd_conf *mdev, const unsigned long bitnr) +{ + unsigned long flags; + struct drbd_bitmap *b = mdev->bitmap; + unsigned long *p_addr; + int i; + + ERR_IF(!b) return 0; + ERR_IF(!b->bm_pages) return 0; + + spin_lock_irqsave(&b->bm_lock, flags); + if (bm_is_locked(b)) + bm_print_lock_info(mdev); + if (bitnr < b->bm_bits) { + unsigned long offset = bitnr>>LN2_BPL; + p_addr = bm_map_paddr(b, offset); + i = test_bit(bitnr & BPP_MASK, p_addr) ? 1 : 0; + bm_unmap(p_addr); + } else if (bitnr == b->bm_bits) { + i = -1; + } else { /* (bitnr > b->bm_bits) */ + dev_err(DEV, "bitnr=%lu > bm_bits=%lu\n", bitnr, b->bm_bits); + i = 0; + } + + spin_unlock_irqrestore(&b->bm_lock, flags); + return i; +} + +/* returns number of bits set in the range [s, e] */ +int drbd_bm_count_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e) +{ + unsigned long flags; + struct drbd_bitmap *b = mdev->bitmap; + unsigned long *p_addr = NULL, page_nr = -1; + unsigned long bitnr; + int c = 0; + size_t w; + + /* If this is called without a bitmap, that is a bug. 
But just to be + * robust in case we screwed up elsewhere, in that case pretend there + * was one dirty bit in the requested area, so we won't try to do a + * local read there (no bitmap probably implies no disk) */ + ERR_IF(!b) return 1; + ERR_IF(!b->bm_pages) return 1; + + spin_lock_irqsave(&b->bm_lock, flags); + if (bm_is_locked(b)) + bm_print_lock_info(mdev); + for (bitnr = s; bitnr <= e; bitnr++) { + w = bitnr >> LN2_BPL; + if (page_nr != w >> (PAGE_SHIFT - LN2_BPL + 3)) { + page_nr = w >> (PAGE_SHIFT - LN2_BPL + 3); + if (p_addr) + bm_unmap(p_addr); + p_addr = bm_map_paddr(b, w); + } + ERR_IF (bitnr >= b->bm_bits) { + dev_err(DEV, "bitnr=%lu bm_bits=%lu\n", bitnr, b->bm_bits); + } else { + c += (0 != test_bit(bitnr - (page_nr << (PAGE_SHIFT+3)), p_addr)); + } + } + if (p_addr) + bm_unmap(p_addr); + spin_unlock_irqrestore(&b->bm_lock, flags); + return c; +} + + +/* inherently racy... + * return value may be already out-of-date when this function returns. + * but the general usage is that this is only use during a cstate when bits are + * only cleared, not set, and typically only care for the case when the return + * value is zero, or we already "locked" this "bitmap extent" by other means. + * + * enr is bm-extent number, since we chose to name one sector (512 bytes) + * worth of the bitmap a "bitmap extent". + * + * TODO + * I think since we use it like a reference count, we should use the real + * reference count of some bitmap extent element from some lru instead... + * + */ +int drbd_bm_e_weight(struct drbd_conf *mdev, unsigned long enr) +{ + struct drbd_bitmap *b = mdev->bitmap; + int count, s, e; + unsigned long flags; + unsigned long *p_addr, *bm; + + ERR_IF(!b) return 0; + ERR_IF(!b->bm_pages) return 0; + + spin_lock_irqsave(&b->bm_lock, flags); + if (bm_is_locked(b)) + bm_print_lock_info(mdev); + + s = S2W(enr); + e = min((size_t)S2W(enr+1), b->bm_words); + count = 0; + if (s < b->bm_words) { + int n = e-s; + p_addr = bm_map_paddr(b, s); + bm = p_addr + MLPP(s); + while (n--) + count += hweight_long(*bm++); + bm_unmap(p_addr); + } else { + dev_err(DEV, "start offset (%d) too large in drbd_bm_e_weight\n", s); + } + spin_unlock_irqrestore(&b->bm_lock, flags); + return count; +} + +/* set all bits covered by the AL-extent al_enr */ +unsigned long drbd_bm_ALe_set_all(struct drbd_conf *mdev, unsigned long al_enr) +{ + struct drbd_bitmap *b = mdev->bitmap; + unsigned long *p_addr, *bm; + unsigned long weight; + int count, s, e, i, do_now; + ERR_IF(!b) return 0; + ERR_IF(!b->bm_pages) return 0; + + spin_lock_irq(&b->bm_lock); + if (bm_is_locked(b)) + bm_print_lock_info(mdev); + weight = b->bm_set; + + s = al_enr * BM_WORDS_PER_AL_EXT; + e = min_t(size_t, s + BM_WORDS_PER_AL_EXT, b->bm_words); + /* assert that s and e are on the same page */ + D_ASSERT((e-1) >> (PAGE_SHIFT - LN2_BPL + 3) + == s >> (PAGE_SHIFT - LN2_BPL + 3)); + count = 0; + if (s < b->bm_words) { + i = do_now = e-s; + p_addr = bm_map_paddr(b, s); + bm = p_addr + MLPP(s); + while (i--) { + count += hweight_long(*bm); + *bm = -1UL; + bm++; + } + bm_unmap(p_addr); + b->bm_set += do_now*BITS_PER_LONG - count; + if (e == b->bm_words) + b->bm_set -= bm_clear_surplus(b); + } else { + dev_err(DEV, "start offset (%d) too large in drbd_bm_ALe_set_all\n", s); + } + weight = b->bm_set - weight; + spin_unlock_irq(&b->bm_lock); + return weight; +} diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h new file mode 100644 index 000000000000..8da602e010bb --- /dev/null +++ b/drivers/block/drbd/drbd_int.h @@ -0,0 
+1,2258 @@ +/* + drbd_int.h + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner . + Copyright (C) 2002-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + +*/ + +#ifndef _DRBD_INT_H +#define _DRBD_INT_H + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef __CHECKER__ +# define __protected_by(x) __attribute__((require_context(x,1,999,"rdwr"))) +# define __protected_read_by(x) __attribute__((require_context(x,1,999,"read"))) +# define __protected_write_by(x) __attribute__((require_context(x,1,999,"write"))) +# define __must_hold(x) __attribute__((context(x,1,1), require_context(x,1,999,"call"))) +#else +# define __protected_by(x) +# define __protected_read_by(x) +# define __protected_write_by(x) +# define __must_hold(x) +#endif + +#define __no_warn(lock, stmt) do { __acquire(lock); stmt; __release(lock); } while (0) + +/* module parameter, defined in drbd_main.c */ +extern unsigned int minor_count; +extern int disable_sendpage; +extern int allow_oos; +extern unsigned int cn_idx; + +#ifdef CONFIG_DRBD_FAULT_INJECTION +extern int enable_faults; +extern int fault_rate; +extern int fault_devs; +#endif + +extern char usermode_helper[]; + + +#ifndef TRUE +#define TRUE 1 +#endif +#ifndef FALSE +#define FALSE 0 +#endif + +/* I don't remember why XCPU ... + * This is used to wake the asender, + * and to interrupt sending the sending task + * on disconnect. + */ +#define DRBD_SIG SIGXCPU + +/* This is used to stop/restart our threads. + * Cannot use SIGTERM nor SIGKILL, since these + * are sent out by init on runlevel changes + * I choose SIGHUP for now. 
+ */ +#define DRBD_SIGKILL SIGHUP + +/* All EEs on the free list should have ID_VACANT (== 0) + * freshly allocated EEs get !ID_VACANT (== 1) + * so if it says "cannot dereference null pointer at adress 0x00000001", + * it is most likely one of these :( */ + +#define ID_IN_SYNC (4711ULL) +#define ID_OUT_OF_SYNC (4712ULL) + +#define ID_SYNCER (-1ULL) +#define ID_VACANT 0 +#define is_syncer_block_id(id) ((id) == ID_SYNCER) + +struct drbd_conf; + + +/* to shorten dev_warn(DEV, "msg"); and relatives statements */ +#define DEV (disk_to_dev(mdev->vdisk)) + +#define D_ASSERT(exp) if (!(exp)) \ + dev_err(DEV, "ASSERT( " #exp " ) in %s:%d\n", __FILE__, __LINE__) + +#define ERR_IF(exp) if (({ \ + int _b = (exp) != 0; \ + if (_b) dev_err(DEV, "%s: (%s) in %s:%d\n", \ + __func__, #exp, __FILE__, __LINE__); \ + _b; \ + })) + +/* Defines to control fault insertion */ +enum { + DRBD_FAULT_MD_WR = 0, /* meta data write */ + DRBD_FAULT_MD_RD = 1, /* read */ + DRBD_FAULT_RS_WR = 2, /* resync */ + DRBD_FAULT_RS_RD = 3, + DRBD_FAULT_DT_WR = 4, /* data */ + DRBD_FAULT_DT_RD = 5, + DRBD_FAULT_DT_RA = 6, /* data read ahead */ + DRBD_FAULT_BM_ALLOC = 7, /* bitmap allocation */ + DRBD_FAULT_AL_EE = 8, /* alloc ee */ + + DRBD_FAULT_MAX, +}; + +extern void trace_drbd_resync(struct drbd_conf *mdev, int level, const char *fmt, ...); + +#ifdef CONFIG_DRBD_FAULT_INJECTION +extern unsigned int +_drbd_insert_fault(struct drbd_conf *mdev, unsigned int type); +static inline int +drbd_insert_fault(struct drbd_conf *mdev, unsigned int type) { + return fault_rate && + (enable_faults & (1< P_MAY_IGNORE) ... */ + P_MAX_OPT_CMD = 0x101, + + /* special command ids for handshake */ + + P_HAND_SHAKE_M = 0xfff1, /* First Packet on the MetaSock */ + P_HAND_SHAKE_S = 0xfff2, /* First Packet on the Socket */ + + P_HAND_SHAKE = 0xfffe /* FIXED for the next century! 
*/ +}; + +static inline const char *cmdname(enum drbd_packets cmd) +{ + /* THINK may need to become several global tables + * when we want to support more than + * one PRO_VERSION */ + static const char *cmdnames[] = { + [P_DATA] = "Data", + [P_DATA_REPLY] = "DataReply", + [P_RS_DATA_REPLY] = "RSDataReply", + [P_BARRIER] = "Barrier", + [P_BITMAP] = "ReportBitMap", + [P_BECOME_SYNC_TARGET] = "BecomeSyncTarget", + [P_BECOME_SYNC_SOURCE] = "BecomeSyncSource", + [P_UNPLUG_REMOTE] = "UnplugRemote", + [P_DATA_REQUEST] = "DataRequest", + [P_RS_DATA_REQUEST] = "RSDataRequest", + [P_SYNC_PARAM] = "SyncParam", + [P_SYNC_PARAM89] = "SyncParam89", + [P_PROTOCOL] = "ReportProtocol", + [P_UUIDS] = "ReportUUIDs", + [P_SIZES] = "ReportSizes", + [P_STATE] = "ReportState", + [P_SYNC_UUID] = "ReportSyncUUID", + [P_AUTH_CHALLENGE] = "AuthChallenge", + [P_AUTH_RESPONSE] = "AuthResponse", + [P_PING] = "Ping", + [P_PING_ACK] = "PingAck", + [P_RECV_ACK] = "RecvAck", + [P_WRITE_ACK] = "WriteAck", + [P_RS_WRITE_ACK] = "RSWriteAck", + [P_DISCARD_ACK] = "DiscardAck", + [P_NEG_ACK] = "NegAck", + [P_NEG_DREPLY] = "NegDReply", + [P_NEG_RS_DREPLY] = "NegRSDReply", + [P_BARRIER_ACK] = "BarrierAck", + [P_STATE_CHG_REQ] = "StateChgRequest", + [P_STATE_CHG_REPLY] = "StateChgReply", + [P_OV_REQUEST] = "OVRequest", + [P_OV_REPLY] = "OVReply", + [P_OV_RESULT] = "OVResult", + [P_MAX_CMD] = NULL, + }; + + if (cmd == P_HAND_SHAKE_M) + return "HandShakeM"; + if (cmd == P_HAND_SHAKE_S) + return "HandShakeS"; + if (cmd == P_HAND_SHAKE) + return "HandShake"; + if (cmd >= P_MAX_CMD) + return "Unknown"; + return cmdnames[cmd]; +} + +/* for sending/receiving the bitmap, + * possibly in some encoding scheme */ +struct bm_xfer_ctx { + /* "const" + * stores total bits and long words + * of the bitmap, so we don't need to + * call the accessor functions over and again. */ + unsigned long bm_bits; + unsigned long bm_words; + /* during xfer, current position within the bitmap */ + unsigned long bit_offset; + unsigned long word_offset; + + /* statistics; index: (h->command == P_BITMAP) */ + unsigned packets[2]; + unsigned bytes[2]; +}; + +extern void INFO_bm_xfer_stats(struct drbd_conf *mdev, + const char *direction, struct bm_xfer_ctx *c); + +static inline void bm_xfer_ctx_bit_to_word_offset(struct bm_xfer_ctx *c) +{ + /* word_offset counts "native long words" (32 or 64 bit), + * aligned at 64 bit. + * Encoded packet may end at an unaligned bit offset. + * In case a fallback clear text packet is transmitted in + * between, we adjust this offset back to the last 64bit + * aligned "native long word", which makes coding and decoding + * the plain text bitmap much more convenient. */ +#if BITS_PER_LONG == 64 + c->word_offset = c->bit_offset >> 6; +#elif BITS_PER_LONG == 32 + c->word_offset = c->bit_offset >> 5; + c->word_offset &= ~(1UL); +#else +# error "unsupported BITS_PER_LONG" +#endif +} + +#ifndef __packed +#define __packed __attribute__((packed)) +#endif + +/* This is the layout for a packet on the wire. + * The byteorder is the network byte order. + * (except block_id and barrier fields. + * these are pointers to local structs + * and have no relevance for the partner, + * which just echoes them as received.) + * + * NOTE that the payload starts at a long aligned offset, + * regardless of 32 or 64 bit arch! + */ +struct p_header { + u32 magic; + u16 command; + u16 length; /* bytes of data after this header */ + u8 payload[0]; +} __packed; +/* 8 bytes. packet FIXED for the next century! 
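+ * (u32 magic + u16 command + u16 length == 8 bytes, so payload[]
+ * starts 64 bit aligned even on 32 bit hosts.)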
*/ + +/* + * short commands, packets without payload, plain p_header: + * P_PING + * P_PING_ACK + * P_BECOME_SYNC_TARGET + * P_BECOME_SYNC_SOURCE + * P_UNPLUG_REMOTE + */ + +/* + * commands with out-of-struct payload: + * P_BITMAP (no additional fields) + * P_DATA, P_DATA_REPLY (see p_data) + * P_COMPRESSED_BITMAP (see receive_compressed_bitmap) + */ + +/* these defines must not be changed without changing the protocol version */ +#define DP_HARDBARRIER 1 +#define DP_RW_SYNC 2 +#define DP_MAY_SET_IN_SYNC 4 + +struct p_data { + struct p_header head; + u64 sector; /* 64 bits sector number */ + u64 block_id; /* to identify the request in protocol B&C */ + u32 seq_num; + u32 dp_flags; +} __packed; + +/* + * commands which share a struct: + * p_block_ack: + * P_RECV_ACK (proto B), P_WRITE_ACK (proto C), + * P_DISCARD_ACK (proto C, two-primaries conflict detection) + * p_block_req: + * P_DATA_REQUEST, P_RS_DATA_REQUEST + */ +struct p_block_ack { + struct p_header head; + u64 sector; + u64 block_id; + u32 blksize; + u32 seq_num; +} __packed; + + +struct p_block_req { + struct p_header head; + u64 sector; + u64 block_id; + u32 blksize; + u32 pad; /* to multiple of 8 Byte */ +} __packed; + +/* + * commands with their own struct for additional fields: + * P_HAND_SHAKE + * P_BARRIER + * P_BARRIER_ACK + * P_SYNC_PARAM + * ReportParams + */ + +struct p_handshake { + struct p_header head; /* 8 bytes */ + u32 protocol_min; + u32 feature_flags; + u32 protocol_max; + + /* should be more than enough for future enhancements + * for now, feature_flags and the reserverd array shall be zero. + */ + + u32 _pad; + u64 reserverd[7]; +} __packed; +/* 80 bytes, FIXED for the next century */ + +struct p_barrier { + struct p_header head; + u32 barrier; /* barrier number _handle_ only */ + u32 pad; /* to multiple of 8 Byte */ +} __packed; + +struct p_barrier_ack { + struct p_header head; + u32 barrier; + u32 set_size; +} __packed; + +struct p_rs_param { + struct p_header head; + u32 rate; + + /* Since protocol version 88 and higher. */ + char verify_alg[0]; +} __packed; + +struct p_rs_param_89 { + struct p_header head; + u32 rate; + /* protocol version 89: */ + char verify_alg[SHARED_SECRET_MAX]; + char csums_alg[SHARED_SECRET_MAX]; +} __packed; + +struct p_protocol { + struct p_header head; + u32 protocol; + u32 after_sb_0p; + u32 after_sb_1p; + u32 after_sb_2p; + u32 want_lose; + u32 two_primaries; + + /* Since protocol version 87 and higher. */ + char integrity_alg[0]; + +} __packed; + +struct p_uuids { + struct p_header head; + u64 uuid[UI_EXTENDED_SIZE]; +} __packed; + +struct p_rs_uuid { + struct p_header head; + u64 uuid; +} __packed; + +struct p_sizes { + struct p_header head; + u64 d_size; /* size of disk */ + u64 u_size; /* user requested size */ + u64 c_size; /* current exported size */ + u32 max_segment_size; /* Maximal size of a BIO */ + u32 queue_order_type; +} __packed; + +struct p_state { + struct p_header head; + u32 state; +} __packed; + +struct p_req_state { + struct p_header head; + u32 mask; + u32 val; +} __packed; + +struct p_req_state_reply { + struct p_header head; + u32 retcode; +} __packed; + +struct p_drbd06_param { + u64 size; + u32 state; + u32 blksize; + u32 protocol; + u32 version; + u32 gen_cnt[5]; + u32 bit_map_gen[5]; +} __packed; + +struct p_discard { + struct p_header head; + u64 block_id; + u32 seq_num; + u32 pad; +} __packed; + +/* Valid values for the encoding field. + * Bump proto version when changing this. 
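+ * The field layout is documented at struct p_compressed_bm below;
+ * e.g. encoding == 0x92 means: first run length describes set bits,
+ * one trailing pad bit, code RLE_VLI_Bits.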
+ */
+enum drbd_bitmap_code {
+	/* RLE_VLI_Bytes = 0,
+	 * and other bit variants had been defined during
+	 * algorithm evaluation. */
+	RLE_VLI_Bits = 2,
+};
+
+struct p_compressed_bm {
+	struct p_header head;
+	/* (encoding & 0x0f): actual encoding, see enum drbd_bitmap_code
+	 * (encoding & 0x80): polarity (set/unset) of first runlength
+	 * ((encoding >> 4) & 0x07): pad_bits, number of trailing zero bits
+	 * used to pad up to head.length bytes
+	 */
+	u8 encoding;
+
+	u8 code[0];
+} __packed;
+
+/* DCBP: Drbd Compressed Bitmap Packet ... */
+static inline enum drbd_bitmap_code
+DCBP_get_code(struct p_compressed_bm *p)
+{
+	return (enum drbd_bitmap_code)(p->encoding & 0x0f);
+}
+
+static inline void
+DCBP_set_code(struct p_compressed_bm *p, enum drbd_bitmap_code code)
+{
+	BUG_ON(code & ~0xf);
+	p->encoding = (p->encoding & ~0xf) | code;
+}
+
+static inline int
+DCBP_get_start(struct p_compressed_bm *p)
+{
+	return (p->encoding & 0x80) != 0;
+}
+
+static inline void
+DCBP_set_start(struct p_compressed_bm *p, int set)
+{
+	p->encoding = (p->encoding & ~0x80) | (set ? 0x80 : 0);
+}
+
+static inline int
+DCBP_get_pad_bits(struct p_compressed_bm *p)
+{
+	return (p->encoding >> 4) & 0x7;
+}
+
+static inline void
+DCBP_set_pad_bits(struct p_compressed_bm *p, int n)
+{
+	BUG_ON(n & ~0x7);
+	p->encoding = (p->encoding & (~0x7 << 4)) | (n << 4);
+}
+
+/* one bitmap packet, including the p_header,
+ * should fit within one _architecture independent_ page.
+ * so we need to use the fixed size 4KiB page size
+ * most architectures have used for a long time.
+ */
+#define BM_PACKET_PAYLOAD_BYTES (4096 - sizeof(struct p_header))
+#define BM_PACKET_WORDS (BM_PACKET_PAYLOAD_BYTES/sizeof(long))
+#define BM_PACKET_VLI_BYTES_MAX (4096 - sizeof(struct p_compressed_bm))
+#if (PAGE_SIZE < 4096)
+/* drbd_send_bitmap / receive_bitmap would break horribly */
+#error "PAGE_SIZE too small"
+#endif
+
+union p_polymorph {
+	struct p_header		 header;
+	struct p_handshake	 handshake;
+	struct p_data		 data;
+	struct p_block_ack	 block_ack;
+	struct p_barrier	 barrier;
+	struct p_barrier_ack	 barrier_ack;
+	struct p_rs_param_89	 rs_param_89;
+	struct p_protocol	 protocol;
+	struct p_sizes		 sizes;
+	struct p_uuids		 uuids;
+	struct p_state		 state;
+	struct p_req_state	 req_state;
+	struct p_req_state_reply req_state_reply;
+	struct p_block_req	 block_req;
+} __packed;
+
+/**********************************************************************/
+enum drbd_thread_state {
+	None,
+	Running,
+	Exiting,
+	Restarting
+};
+
+struct drbd_thread {
+	spinlock_t t_lock;
+	struct task_struct *task;
+	struct completion stop;
+	enum drbd_thread_state t_state;
+	int (*function) (struct drbd_thread *);
+	struct drbd_conf *mdev;
+	int reset_cpu_mask;
+};
+
+static inline enum drbd_thread_state get_t_state(struct drbd_thread *thi)
+{
+	/* THINK testing the t_state seems to be uncritical in all cases
+	 * (but thread_{start,stop}), so we can read it *without* the lock.
+	 *	--lge */
+
+	smp_rmb();
+	return thi->t_state;
+}
+
+
+/*
+ * Having this as the first member of a struct provides sort of "inheritance".
+ * "derived" structs can be "drbd_queue_work()"ed.
+ * The callback should know and cast back to the descendant struct.
+ * drbd_request and drbd_epoch_entry are descendants of drbd_work.
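+ * A callback gets back to the containing object via container_of(),
+ * e.g. struct drbd_request *req = container_of(w, struct drbd_request, w);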
+ */ +struct drbd_work; +typedef int (*drbd_work_cb)(struct drbd_conf *, struct drbd_work *, int cancel); +struct drbd_work { + struct list_head list; + drbd_work_cb cb; +}; + +struct drbd_tl_epoch; +struct drbd_request { + struct drbd_work w; + struct drbd_conf *mdev; + + /* if local IO is not allowed, will be NULL. + * if local IO _is_ allowed, holds the locally submitted bio clone, + * or, after local IO completion, the ERR_PTR(error). + * see drbd_endio_pri(). */ + struct bio *private_bio; + + struct hlist_node colision; + sector_t sector; + unsigned int size; + unsigned int epoch; /* barrier_nr */ + + /* barrier_nr: used to check on "completion" whether this req was in + * the current epoch, and we therefore have to close it, + * starting a new epoch... + */ + + /* up to here, the struct layout is identical to drbd_epoch_entry; + * we might be able to use that to our advantage... */ + + struct list_head tl_requests; /* ring list in the transfer log */ + struct bio *master_bio; /* master bio pointer */ + unsigned long rq_state; /* see comments above _req_mod() */ + int seq_num; + unsigned long start_time; +}; + +struct drbd_tl_epoch { + struct drbd_work w; + struct list_head requests; /* requests before */ + struct drbd_tl_epoch *next; /* pointer to the next barrier */ + unsigned int br_number; /* the barriers identifier. */ + int n_req; /* number of requests attached before this barrier */ +}; + +struct drbd_request; + +/* These Tl_epoch_entries may be in one of 6 lists: + active_ee .. data packet being written + sync_ee .. syncer block being written + done_ee .. block written, need to send P_WRITE_ACK + read_ee .. [RS]P_DATA_REQUEST being read +*/ + +struct drbd_epoch { + struct list_head list; + unsigned int barrier_nr; + atomic_t epoch_size; /* increased on every request added. */ + atomic_t active; /* increased on every req. added, and dec on every finished. */ + unsigned long flags; +}; + +/* drbd_epoch flag bits */ +enum { + DE_BARRIER_IN_NEXT_EPOCH_ISSUED, + DE_BARRIER_IN_NEXT_EPOCH_DONE, + DE_CONTAINS_A_BARRIER, + DE_HAVE_BARRIER_NUMBER, + DE_IS_FINISHING, +}; + +enum epoch_event { + EV_PUT, + EV_GOT_BARRIER_NR, + EV_BARRIER_DONE, + EV_BECAME_LAST, + EV_TRACE_FLUSH, /* TRACE_ are not real events, only used for tracing */ + EV_TRACE_ADD_BARRIER, /* Doing the first write as a barrier write */ + EV_TRACE_SETTING_BI, /* Barrier is expressed with the first write of the next epoch */ + EV_TRACE_ALLOC, + EV_TRACE_FREE, + EV_CLEANUP = 32, /* used as flag */ +}; + +struct drbd_epoch_entry { + struct drbd_work w; + struct drbd_conf *mdev; + struct bio *private_bio; + struct hlist_node colision; + sector_t sector; + unsigned int size; + struct drbd_epoch *epoch; + + /* up to here, the struct layout is identical to drbd_request; + * we might be able to use that to our advantage... 
*/ + + unsigned int flags; + u64 block_id; +}; + +struct drbd_wq_barrier { + struct drbd_work w; + struct completion done; +}; + +struct digest_info { + int digest_size; + void *digest; +}; + +/* ee flag bits */ +enum { + __EE_CALL_AL_COMPLETE_IO, + __EE_CONFLICT_PENDING, + __EE_MAY_SET_IN_SYNC, + __EE_IS_BARRIER, +}; +#define EE_CALL_AL_COMPLETE_IO (1<<__EE_CALL_AL_COMPLETE_IO) +#define EE_CONFLICT_PENDING (1<<__EE_CONFLICT_PENDING) +#define EE_MAY_SET_IN_SYNC (1<<__EE_MAY_SET_IN_SYNC) +#define EE_IS_BARRIER (1<<__EE_IS_BARRIER) + +/* global flag bits */ +enum { + CREATE_BARRIER, /* next P_DATA is preceeded by a P_BARRIER */ + SIGNAL_ASENDER, /* whether asender wants to be interrupted */ + SEND_PING, /* whether asender should send a ping asap */ + + STOP_SYNC_TIMER, /* tell timer to cancel itself */ + UNPLUG_QUEUED, /* only relevant with kernel 2.4 */ + UNPLUG_REMOTE, /* sending a "UnplugRemote" could help */ + MD_DIRTY, /* current uuids and flags not yet on disk */ + DISCARD_CONCURRENT, /* Set on one node, cleared on the peer! */ + USE_DEGR_WFC_T, /* degr-wfc-timeout instead of wfc-timeout. */ + CLUSTER_ST_CHANGE, /* Cluster wide state change going on... */ + CL_ST_CHG_SUCCESS, + CL_ST_CHG_FAIL, + CRASHED_PRIMARY, /* This node was a crashed primary. + * Gets cleared when the state.conn + * goes into C_CONNECTED state. */ + WRITE_BM_AFTER_RESYNC, /* A kmalloc() during resync failed */ + NO_BARRIER_SUPP, /* underlying block device doesn't implement barriers */ + CONSIDER_RESYNC, + + MD_NO_BARRIER, /* meta data device does not support barriers, + so don't even try */ + SUSPEND_IO, /* suspend application io */ + BITMAP_IO, /* suspend application io; + once no more io in flight, start bitmap io */ + BITMAP_IO_QUEUED, /* Started bitmap IO */ + RESYNC_AFTER_NEG, /* Resync after online grow after the attach&negotiate finished. */ + NET_CONGESTED, /* The data socket is congested */ + + CONFIG_PENDING, /* serialization of (re)configuration requests. + * if set, also prevents the device from dying */ + DEVICE_DYING, /* device became unconfigured, + * but worker thread is still handling the cleanup. + * reconfiguring (nl_disk_conf, nl_net_conf) is dissalowed, + * while this is set. */ + RESIZE_PENDING, /* Size change detected locally, waiting for the response from + * the peer, if it changed there as well. */ +}; + +struct drbd_bitmap; /* opaque for drbd_conf */ + +/* TODO sort members for performance + * MAYBE group them further */ + +/* THINK maybe we actually want to use the default "event/%s" worker threads + * or similar in linux 2.6, which uses per cpu data and threads. + * + * To be general, this might need a spin_lock member. + * For now, please use the mdev->req_lock to protect list_head, + * see drbd_queue_work below. + */ +struct drbd_work_queue { + struct list_head q; + struct semaphore s; /* producers up it, worker down()s it */ + spinlock_t q_lock; /* to protect the list. 
*/ +}; + +struct drbd_socket { + struct drbd_work_queue work; + struct mutex mutex; + struct socket *socket; + /* this way we get our + * send/receive buffers off the stack */ + union p_polymorph sbuf; + union p_polymorph rbuf; +}; + +struct drbd_md { + u64 md_offset; /* sector offset to 'super' block */ + + u64 la_size_sect; /* last agreed size, unit sectors */ + u64 uuid[UI_SIZE]; + u64 device_uuid; + u32 flags; + u32 md_size_sect; + + s32 al_offset; /* signed relative sector offset to al area */ + s32 bm_offset; /* signed relative sector offset to bitmap */ + + /* u32 al_nr_extents; important for restoring the AL + * is stored into sync_conf.al_extents, which in turn + * gets applied to act_log->nr_elements + */ +}; + +/* for sync_conf and other types... */ +#define NL_PACKET(name, number, fields) struct name { fields }; +#define NL_INTEGER(pn,pr,member) int member; +#define NL_INT64(pn,pr,member) __u64 member; +#define NL_BIT(pn,pr,member) unsigned member:1; +#define NL_STRING(pn,pr,member,len) unsigned char member[len]; int member ## _len; +#include "linux/drbd_nl.h" + +struct drbd_backing_dev { + struct block_device *backing_bdev; + struct block_device *md_bdev; + struct file *lo_file; + struct file *md_file; + struct drbd_md md; + struct disk_conf dc; /* The user provided config... */ + sector_t known_size; /* last known size of that backing device */ +}; + +struct drbd_md_io { + struct drbd_conf *mdev; + struct completion event; + int error; +}; + +struct bm_io_work { + struct drbd_work w; + char *why; + int (*io_fn)(struct drbd_conf *mdev); + void (*done)(struct drbd_conf *mdev, int rv); +}; + +enum write_ordering_e { + WO_none, + WO_drain_io, + WO_bdev_flush, + WO_bio_barrier +}; + +struct drbd_conf { + /* things that are stored as / read from meta data on disk */ + unsigned long flags; + + /* configured by drbdsetup */ + struct net_conf *net_conf; /* protected by get_net_conf() and put_net_conf() */ + struct syncer_conf sync_conf; + struct drbd_backing_dev *ldev __protected_by(local); + + sector_t p_size; /* partner's disk size */ + struct request_queue *rq_queue; + struct block_device *this_bdev; + struct gendisk *vdisk; + + struct drbd_socket data; /* data/barrier/cstate/parameter packets */ + struct drbd_socket meta; /* ping/ack (metadata) packets */ + int agreed_pro_version; /* actually used protocol version */ + unsigned long last_received; /* in jiffies, either socket */ + unsigned int ko_count; + struct drbd_work resync_work, + unplug_work, + md_sync_work; + struct timer_list resync_timer; + struct timer_list md_sync_timer; + + /* Used after attach while negotiating new disk state. */ + union drbd_state new_state_tmp; + + union drbd_state state; + wait_queue_head_t misc_wait; + wait_queue_head_t state_wait; /* upon each state change. 
*/ + unsigned int send_cnt; + unsigned int recv_cnt; + unsigned int read_cnt; + unsigned int writ_cnt; + unsigned int al_writ_cnt; + unsigned int bm_writ_cnt; + atomic_t ap_bio_cnt; /* Requests we need to complete */ + atomic_t ap_pending_cnt; /* AP data packets on the wire, ack expected */ + atomic_t rs_pending_cnt; /* RS request/data packets on the wire */ + atomic_t unacked_cnt; /* Need to send replies for */ + atomic_t local_cnt; /* Waiting for local completion */ + atomic_t net_cnt; /* Users of net_conf */ + spinlock_t req_lock; + struct drbd_tl_epoch *unused_spare_tle; /* for pre-allocation */ + struct drbd_tl_epoch *newest_tle; + struct drbd_tl_epoch *oldest_tle; + struct list_head out_of_sequence_requests; + struct hlist_head *tl_hash; + unsigned int tl_hash_s; + + /* blocks to sync in this run [unit BM_BLOCK_SIZE] */ + unsigned long rs_total; + /* number of sync IOs that failed in this run */ + unsigned long rs_failed; + /* Syncer's start time [unit jiffies] */ + unsigned long rs_start; + /* cumulated time in PausedSyncX state [unit jiffies] */ + unsigned long rs_paused; + /* block not up-to-date at mark [unit BM_BLOCK_SIZE] */ + unsigned long rs_mark_left; + /* mark's time [unit jiffies] */ + unsigned long rs_mark_time; + /* skipped because csum was equal [unit BM_BLOCK_SIZE] */ + unsigned long rs_same_csum; + + /* where does the admin want us to start? (sector) */ + sector_t ov_start_sector; + /* where are we now? (sector) */ + sector_t ov_position; + /* Start sector of out of sync range (to merge printk reporting). */ + sector_t ov_last_oos_start; + /* size of out-of-sync range in sectors. */ + sector_t ov_last_oos_size; + unsigned long ov_left; /* in bits */ + struct crypto_hash *csums_tfm; + struct crypto_hash *verify_tfm; + + struct drbd_thread receiver; + struct drbd_thread worker; + struct drbd_thread asender; + struct drbd_bitmap *bitmap; + unsigned long bm_resync_fo; /* bit offset for drbd_bm_find_next */ + + /* Used to track operations of resync... */ + struct lru_cache *resync; + /* Number of locked elements in resync LRU */ + unsigned int resync_locked; + /* resync extent number waiting for application requests */ + unsigned int resync_wenr; + + int open_cnt; + u64 *p_uuid; + struct drbd_epoch *current_epoch; + spinlock_t epoch_lock; + unsigned int epochs; + enum write_ordering_e write_ordering; + struct list_head active_ee; /* IO in progress */ + struct list_head sync_ee; /* IO in progress */ + struct list_head done_ee; /* send ack */ + struct list_head read_ee; /* IO in progress */ + struct list_head net_ee; /* zero-copy network send in progress */ + struct hlist_head *ee_hash; /* is protected by req_lock!
*/ + unsigned int ee_hash_s; + + /* this one is protected by ee_lock, single thread */ + struct drbd_epoch_entry *last_write_w_barrier; + + int next_barrier_nr; + struct hlist_head *app_reads_hash; /* is protected by req_lock */ + struct list_head resync_reads; + atomic_t pp_in_use; + wait_queue_head_t ee_wait; + struct page *md_io_page; /* one page buffer for md_io */ + struct page *md_io_tmpp; /* for logical_block_size != 512 */ + struct mutex md_io_mutex; /* protects the md_io_buffer */ + spinlock_t al_lock; + wait_queue_head_t al_wait; + struct lru_cache *act_log; /* activity log */ + unsigned int al_tr_number; + int al_tr_cycle; + int al_tr_pos; /* position of the next transaction in the journal */ + struct crypto_hash *cram_hmac_tfm; + struct crypto_hash *integrity_w_tfm; /* to be used by the worker thread */ + struct crypto_hash *integrity_r_tfm; /* to be used by the receiver thread */ + void *int_dig_out; + void *int_dig_in; + void *int_dig_vv; + wait_queue_head_t seq_wait; + atomic_t packet_seq; + unsigned int peer_seq; + spinlock_t peer_seq_lock; + unsigned int minor; + unsigned long comm_bm_set; /* communicated number of set bits. */ + cpumask_var_t cpu_mask; + struct bm_io_work bm_io_work; + u64 ed_uuid; /* UUID of the exposed data */ + struct mutex state_mutex; + char congestion_reason; /* Why we were congested... */ +}; + +static inline struct drbd_conf *minor_to_mdev(unsigned int minor) +{ + struct drbd_conf *mdev; + + mdev = minor < minor_count ? minor_table[minor] : NULL; + + return mdev; +} + +static inline unsigned int mdev_to_minor(struct drbd_conf *mdev) +{ + return mdev->minor; +} + +/* returns 1 if it was successful, + * returns 0 if there was no data socket. + * so wherever you are going to use the data.socket, e.g. do + * if (!drbd_get_data_sock(mdev)) + * return 0; + * CODE(); + * drbd_put_data_sock(mdev); + */ +static inline int drbd_get_data_sock(struct drbd_conf *mdev) +{ + mutex_lock(&mdev->data.mutex); + /* drbd_disconnect() could have called drbd_free_sock() + * while we were waiting in down()...
*/ + if (unlikely(mdev->data.socket == NULL)) { + mutex_unlock(&mdev->data.mutex); + return 0; + } + return 1; +} + +static inline void drbd_put_data_sock(struct drbd_conf *mdev) +{ + mutex_unlock(&mdev->data.mutex); +} + +/* + * function declarations + *************************/ + +/* drbd_main.c */ + +enum chg_state_flags { + CS_HARD = 1, + CS_VERBOSE = 2, + CS_WAIT_COMPLETE = 4, + CS_SERIALIZE = 8, + CS_ORDERED = CS_WAIT_COMPLETE + CS_SERIALIZE, +}; + +extern void drbd_init_set_defaults(struct drbd_conf *mdev); +extern int drbd_change_state(struct drbd_conf *mdev, enum chg_state_flags f, + union drbd_state mask, union drbd_state val); +extern void drbd_force_state(struct drbd_conf *, union drbd_state, + union drbd_state); +extern int _drbd_request_state(struct drbd_conf *, union drbd_state, + union drbd_state, enum chg_state_flags); +extern int __drbd_set_state(struct drbd_conf *, union drbd_state, + enum chg_state_flags, struct completion *done); +extern void print_st_err(struct drbd_conf *, union drbd_state, + union drbd_state, int); +extern int drbd_thread_start(struct drbd_thread *thi); +extern void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait); +#ifdef CONFIG_SMP +extern void drbd_thread_current_set_cpu(struct drbd_conf *mdev); +extern void drbd_calc_cpu_mask(struct drbd_conf *mdev); +#else +#define drbd_thread_current_set_cpu(A) ({}) +#define drbd_calc_cpu_mask(A) ({}) +#endif +extern void drbd_free_resources(struct drbd_conf *mdev); +extern void tl_release(struct drbd_conf *mdev, unsigned int barrier_nr, + unsigned int set_size); +extern void tl_clear(struct drbd_conf *mdev); +extern void _tl_add_barrier(struct drbd_conf *, struct drbd_tl_epoch *); +extern void drbd_free_sock(struct drbd_conf *mdev); +extern int drbd_send(struct drbd_conf *mdev, struct socket *sock, + void *buf, size_t size, unsigned msg_flags); +extern int drbd_send_protocol(struct drbd_conf *mdev); +extern int drbd_send_uuids(struct drbd_conf *mdev); +extern int drbd_send_uuids_skip_initial_sync(struct drbd_conf *mdev); +extern int drbd_send_sync_uuid(struct drbd_conf *mdev, u64 val); +extern int drbd_send_sizes(struct drbd_conf *mdev, int trigger_reply); +extern int _drbd_send_state(struct drbd_conf *mdev); +extern int drbd_send_state(struct drbd_conf *mdev); +extern int _drbd_send_cmd(struct drbd_conf *mdev, struct socket *sock, + enum drbd_packets cmd, struct p_header *h, + size_t size, unsigned msg_flags); +#define USE_DATA_SOCKET 1 +#define USE_META_SOCKET 0 +extern int drbd_send_cmd(struct drbd_conf *mdev, int use_data_socket, + enum drbd_packets cmd, struct p_header *h, + size_t size); +extern int drbd_send_cmd2(struct drbd_conf *mdev, enum drbd_packets cmd, + char *data, size_t size); +extern int drbd_send_sync_param(struct drbd_conf *mdev, struct syncer_conf *sc); +extern int drbd_send_b_ack(struct drbd_conf *mdev, u32 barrier_nr, + u32 set_size); +extern int drbd_send_ack(struct drbd_conf *mdev, enum drbd_packets cmd, + struct drbd_epoch_entry *e); +extern int drbd_send_ack_rp(struct drbd_conf *mdev, enum drbd_packets cmd, + struct p_block_req *rp); +extern int drbd_send_ack_dp(struct drbd_conf *mdev, enum drbd_packets cmd, + struct p_data *dp); +extern int drbd_send_ack_ex(struct drbd_conf *mdev, enum drbd_packets cmd, + sector_t sector, int blksize, u64 block_id); +extern int drbd_send_block(struct drbd_conf *mdev, enum drbd_packets cmd, + struct drbd_epoch_entry *e); +extern int drbd_send_dblock(struct drbd_conf *mdev, struct drbd_request *req); +extern int 
_drbd_send_barrier(struct drbd_conf *mdev, + struct drbd_tl_epoch *barrier); +extern int drbd_send_drequest(struct drbd_conf *mdev, int cmd, + sector_t sector, int size, u64 block_id); +extern int drbd_send_drequest_csum(struct drbd_conf *mdev, + sector_t sector, int size, + void *digest, int digest_size, + enum drbd_packets cmd); +extern int drbd_send_ov_request(struct drbd_conf *mdev, sector_t sector, int size); + +extern int drbd_send_bitmap(struct drbd_conf *mdev); +extern int _drbd_send_bitmap(struct drbd_conf *mdev); +extern int drbd_send_sr_reply(struct drbd_conf *mdev, int retcode); +extern void drbd_free_bc(struct drbd_backing_dev *ldev); +extern void drbd_mdev_cleanup(struct drbd_conf *mdev); + +/* drbd_meta-data.c (still in drbd_main.c) */ +extern void drbd_md_sync(struct drbd_conf *mdev); +extern int drbd_md_read(struct drbd_conf *mdev, struct drbd_backing_dev *bdev); +/* maybe define them below as inline? */ +extern void drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local); +extern void _drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local); +extern void drbd_uuid_new_current(struct drbd_conf *mdev) __must_hold(local); +extern void _drbd_uuid_new_current(struct drbd_conf *mdev) __must_hold(local); +extern void drbd_uuid_set_bm(struct drbd_conf *mdev, u64 val) __must_hold(local); +extern void drbd_md_set_flag(struct drbd_conf *mdev, int flags) __must_hold(local); +extern void drbd_md_clear_flag(struct drbd_conf *mdev, int flags) __must_hold(local); +extern int drbd_md_test_flag(struct drbd_backing_dev *, int); +extern void drbd_md_mark_dirty(struct drbd_conf *mdev); +extern void drbd_queue_bitmap_io(struct drbd_conf *mdev, + int (*io_fn)(struct drbd_conf *), + void (*done)(struct drbd_conf *, int), + char *why); +extern int drbd_bmio_set_n_write(struct drbd_conf *mdev); +extern int drbd_bmio_clear_n_write(struct drbd_conf *mdev); +extern int drbd_bitmap_io(struct drbd_conf *mdev, int (*io_fn)(struct drbd_conf *), char *why); + + +/* Meta data layout + We reserve a 128MB Block (4k aligned) + * either at the end of the backing device + * or on a separate meta data device. */ + +#define MD_RESERVED_SECT (128LU << 11) /* 128 MB, unit sectors */ +/* The following numbers are sectors */ +#define MD_AL_OFFSET 8 /* 8 Sectors after start of meta area */ +#define MD_AL_MAX_SIZE 64 /* = 32 kb LOG ~ 3776 extents ~ 14 GB Storage */ +/* Allows up to about 3.8TB */ +#define MD_BM_OFFSET (MD_AL_OFFSET + MD_AL_MAX_SIZE) + +/* Since the smallest IO unit is usually 512 byte */ +#define MD_SECTOR_SHIFT 9 +#define MD_SECTOR_SIZE (1<<MD_SECTOR_SHIFT) + +/* activity log */ +#define AL_EXTENTS_PT ((MD_SECTOR_SIZE-12)/8-1) /* 61 ; Extents per 4kB sector */ +#define AL_EXTENT_SHIFT 22 /* One extent represents 4M Storage */ +#define AL_EXTENT_SIZE (1<<AL_EXTENT_SHIFT) + +#if BITS_PER_LONG == 32 +#define LN2_BPL 5 +#define cpu_to_lel(A) cpu_to_le32(A) +#define lel_to_cpu(A) le32_to_cpu(A) +#elif BITS_PER_LONG == 64 +#define LN2_BPL 6 +#define cpu_to_lel(A) cpu_to_le64(A) +#define lel_to_cpu(A) le64_to_cpu(A) +#else +#error "LN2 of BITS_PER_LONG unknown!" +#endif + +/* drbd_bitmap.c */ +/* + * We need to store one bit for a block. + * Example: 1GB disk @ 4096 byte blocks ==> we need 32 KB bitmap. + * Bit 0 ==> local node thinks this block is binary identical on both nodes + * Bit 1 ==> local node thinks this block needs to be synced. + */ + +#define BM_BLOCK_SHIFT 12 /* 4k per bit */ +#define BM_BLOCK_SIZE (1<<BM_BLOCK_SHIFT) +/* (9+3) : 512 bytes @ 8 bits; representing 16M storage + * per sector of on disk bitmap */ +#define BM_EXT_SHIFT (BM_BLOCK_SHIFT + MD_SECTOR_SHIFT + 3) /* = 24 */ +#define BM_EXT_SIZE (1<<BM_EXT_SHIFT) + +/* thus many _storage_ sectors are described by one bit */ +#define BM_SECT_TO_BIT(x) ((x)>>(BM_BLOCK_SHIFT-9)) +#define BM_BIT_TO_SECT(x) ((sector_t)(x)<<(BM_BLOCK_SHIFT-9)) +#define BM_SECT_PER_BIT BM_BIT_TO_SECT(1) + +/* bit to represented kilo byte conversion */ +#define Bit2KB(bits) ((bits)<<(BM_BLOCK_SHIFT-10)) + +/* in which _bitmap_ extent (resp. sector) the bit for a certain + * _storage_ sector is located in */ +#define BM_SECT_TO_EXT(x) ((x)>>(BM_EXT_SHIFT-9)) + +/* how much _storage_ sectors we have per bitmap sector */ +#define BM_EXT_TO_SECT(x) ((sector_t)(x) << (BM_EXT_SHIFT-9)) +#define BM_SECT_PER_EXT BM_EXT_TO_SECT(1)
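+ +/* Worked example with the constants above (illustrative only): one bit covers + * a 4 KiB block (BM_BLOCK_SHIFT 12), so BM_SECT_PER_BIT == 8 and Bit2KB(1) == 4; + * one 512 byte bitmap sector holds 4096 bits and thus describes 4096 * 4 KiB + * == 16 MiB of storage, which is exactly BM_EXT_SIZE (BM_EXT_SHIFT == 24). */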
+ +/* in one sector of the bitmap, we have this many activity_log extents. */ +#define AL_EXT_PER_BM_SECT (1 << (BM_EXT_SHIFT - AL_EXTENT_SHIFT)) +#define BM_WORDS_PER_AL_EXT (1 << (AL_EXTENT_SHIFT-BM_BLOCK_SHIFT-LN2_BPL)) + +#define BM_BLOCKS_PER_BM_EXT_B (BM_EXT_SHIFT - BM_BLOCK_SHIFT) +#define BM_BLOCKS_PER_BM_EXT_MASK ((1<<BM_BLOCKS_PER_BM_EXT_B) - 1) + +static inline void ov_oos_print(struct drbd_conf *mdev) +{ + if (mdev->ov_last_oos_size) { + dev_err(DEV, "Out of sync: start=%llu, size=%lu (sectors)\n", + (unsigned long long)mdev->ov_last_oos_start, + (unsigned long)mdev->ov_last_oos_size); + } + mdev->ov_last_oos_size = 0; +} + + +extern void drbd_csum(struct drbd_conf *, struct crypto_hash *, struct bio *, void *); +/* worker callbacks */ +extern int w_req_cancel_conflict(struct drbd_conf *, struct drbd_work *, int); +extern int w_read_retry_remote(struct drbd_conf *, struct drbd_work *, int); +extern int w_e_end_data_req(struct drbd_conf *, struct drbd_work *, int); +extern int w_e_end_rsdata_req(struct drbd_conf *, struct drbd_work *, int); +extern int w_e_end_csum_rs_req(struct drbd_conf *, struct drbd_work *, int); +extern int w_e_end_ov_reply(struct drbd_conf *, struct drbd_work *, int); +extern int w_e_end_ov_req(struct drbd_conf *, struct drbd_work *, int); +extern int w_ov_finished(struct drbd_conf *, struct drbd_work *, int); +extern int w_resync_inactive(struct drbd_conf *, struct drbd_work *, int); +extern int w_resume_next_sg(struct drbd_conf *, struct drbd_work *, int); +extern int w_io_error(struct drbd_conf *, struct drbd_work *, int); +extern int w_send_write_hint(struct drbd_conf *, struct drbd_work *, int); +extern int w_make_resync_request(struct drbd_conf *, struct drbd_work *, int); +extern int w_send_dblock(struct drbd_conf *, struct drbd_work *, int); +extern int w_send_barrier(struct drbd_conf *, struct drbd_work *, int); +extern int w_send_read_req(struct drbd_conf *, struct drbd_work *, int); +extern int w_prev_work_done(struct drbd_conf *, struct drbd_work *, int); +extern int w_e_reissue(struct drbd_conf *, struct drbd_work *, int); + +extern void resync_timer_fn(unsigned long data); + +/* drbd_receiver.c */ +extern int drbd_release_ee(struct drbd_conf *mdev, struct list_head *list); +extern struct drbd_epoch_entry *drbd_alloc_ee(struct drbd_conf *mdev, + u64 id, + sector_t sector, + unsigned int data_size, + gfp_t gfp_mask) __must_hold(local); +extern void drbd_free_ee(struct drbd_conf *mdev, struct drbd_epoch_entry *e); +extern void drbd_wait_ee_list_empty(struct drbd_conf *mdev, + struct list_head *head); +extern void _drbd_wait_ee_list_empty(struct drbd_conf *mdev, + struct list_head *head); +extern void drbd_set_recv_tcq(struct drbd_conf *mdev, int tcq_enabled); +extern void _drbd_clear_done_ee(struct drbd_conf *mdev, struct list_head *to_be_freed); +extern void drbd_flush_workqueue(struct drbd_conf *mdev); + +/* yes, there is kernel_setsockopt, but only since 2.6.18. we don't need to + * mess with get_fs/set_fs, we know we are KERNEL_DS always.
*/ +static inline int drbd_setsockopt(struct socket *sock, int level, int optname, + char __user *optval, int optlen) +{ + int err; + if (level == SOL_SOCKET) + err = sock_setsockopt(sock, level, optname, optval, optlen); + else + err = sock->ops->setsockopt(sock, level, optname, optval, + optlen); + return err; +} + +static inline void drbd_tcp_cork(struct socket *sock) +{ + int __user val = 1; + (void) drbd_setsockopt(sock, SOL_TCP, TCP_CORK, + (char __user *)&val, sizeof(val)); +} + +static inline void drbd_tcp_uncork(struct socket *sock) +{ + int __user val = 0; + (void) drbd_setsockopt(sock, SOL_TCP, TCP_CORK, + (char __user *)&val, sizeof(val)); +} + +static inline void drbd_tcp_nodelay(struct socket *sock) +{ + int __user val = 1; + (void) drbd_setsockopt(sock, SOL_TCP, TCP_NODELAY, + (char __user *)&val, sizeof(val)); +} + +static inline void drbd_tcp_quickack(struct socket *sock) +{ + int __user val = 1; + (void) drbd_setsockopt(sock, SOL_TCP, TCP_QUICKACK, + (char __user *)&val, sizeof(val)); +} + +void drbd_bump_write_ordering(struct drbd_conf *mdev, enum write_ordering_e wo); + +/* drbd_proc.c */ +extern struct proc_dir_entry *drbd_proc; +extern struct file_operations drbd_proc_fops; +extern const char *drbd_conn_str(enum drbd_conns s); +extern const char *drbd_role_str(enum drbd_role s); + +/* drbd_actlog.c */ +extern void drbd_al_begin_io(struct drbd_conf *mdev, sector_t sector); +extern void drbd_al_complete_io(struct drbd_conf *mdev, sector_t sector); +extern void drbd_rs_complete_io(struct drbd_conf *mdev, sector_t sector); +extern int drbd_rs_begin_io(struct drbd_conf *mdev, sector_t sector); +extern int drbd_try_rs_begin_io(struct drbd_conf *mdev, sector_t sector); +extern void drbd_rs_cancel_all(struct drbd_conf *mdev); +extern int drbd_rs_del_all(struct drbd_conf *mdev); +extern void drbd_rs_failed_io(struct drbd_conf *mdev, + sector_t sector, int size); +extern int drbd_al_read_log(struct drbd_conf *mdev, struct drbd_backing_dev *); +extern void __drbd_set_in_sync(struct drbd_conf *mdev, sector_t sector, + int size, const char *file, const unsigned int line); +#define drbd_set_in_sync(mdev, sector, size) \ + __drbd_set_in_sync(mdev, sector, size, __FILE__, __LINE__) +extern void __drbd_set_out_of_sync(struct drbd_conf *mdev, sector_t sector, + int size, const char *file, const unsigned int line); +#define drbd_set_out_of_sync(mdev, sector, size) \ + __drbd_set_out_of_sync(mdev, sector, size, __FILE__, __LINE__) +extern void drbd_al_apply_to_bm(struct drbd_conf *mdev); +extern void drbd_al_to_on_disk_bm(struct drbd_conf *mdev); +extern void drbd_al_shrink(struct drbd_conf *mdev); + + +/* drbd_nl.c */ + +void drbd_nl_cleanup(void); +int __init drbd_nl_init(void); +void drbd_bcast_state(struct drbd_conf *mdev, union drbd_state); +void drbd_bcast_sync_progress(struct drbd_conf *mdev); +void drbd_bcast_ee(struct drbd_conf *mdev, + const char *reason, const int dgs, + const char* seen_hash, const char* calc_hash, + const struct drbd_epoch_entry* e); + + +/** + * DOC: DRBD State macros + * + * These macros are used to express state changes in easily readable form. + * + * The NS macros expand to a mask and a value, that can be bit-ORed onto the + * current state as soon as the spinlock (req_lock) was taken. + * + * The _NS macros are used for state functions that get called with the + * spinlock held. These macros expand directly to the new state value.
+ * + * Besides the basic forms NS() and _NS() additional _?NS[23] are defined + * to express state changes that affect more than one aspect of the state. + * + * E.g. NS2(conn, C_CONNECTED, peer, R_SECONDARY) + * Means that the network connection was established and that the peer + * is in secondary role. + */ +#define role_MASK R_MASK +#define peer_MASK R_MASK +#define disk_MASK D_MASK +#define pdsk_MASK D_MASK +#define conn_MASK C_MASK +#define susp_MASK 1 +#define user_isp_MASK 1 +#define aftr_isp_MASK 1 + +#define NS(T, S) \ + ({ union drbd_state mask; mask.i = 0; mask.T = T##_MASK; mask; }), \ + ({ union drbd_state val; val.i = 0; val.T = (S); val; }) +#define NS2(T1, S1, T2, S2) \ + ({ union drbd_state mask; mask.i = 0; mask.T1 = T1##_MASK; \ + mask.T2 = T2##_MASK; mask; }), \ + ({ union drbd_state val; val.i = 0; val.T1 = (S1); \ + val.T2 = (S2); val; }) +#define NS3(T1, S1, T2, S2, T3, S3) \ + ({ union drbd_state mask; mask.i = 0; mask.T1 = T1##_MASK; \ + mask.T2 = T2##_MASK; mask.T3 = T3##_MASK; mask; }), \ + ({ union drbd_state val; val.i = 0; val.T1 = (S1); \ + val.T2 = (S2); val.T3 = (S3); val; }) + +#define _NS(D, T, S) \ + D, ({ union drbd_state __ns; __ns.i = D->state.i; __ns.T = (S); __ns; }) +#define _NS2(D, T1, S1, T2, S2) \ + D, ({ union drbd_state __ns; __ns.i = D->state.i; __ns.T1 = (S1); \ + __ns.T2 = (S2); __ns; }) +#define _NS3(D, T1, S1, T2, S2, T3, S3) \ + D, ({ union drbd_state __ns; __ns.i = D->state.i; __ns.T1 = (S1); \ + __ns.T2 = (S2); __ns.T3 = (S3); __ns; }) + +/* + * inline helper functions + *************************/ + +static inline void drbd_state_lock(struct drbd_conf *mdev) +{ + wait_event(mdev->misc_wait, + !test_and_set_bit(CLUSTER_ST_CHANGE, &mdev->flags)); +} + +static inline void drbd_state_unlock(struct drbd_conf *mdev) +{ + clear_bit(CLUSTER_ST_CHANGE, &mdev->flags); + wake_up(&mdev->misc_wait); +} + +static inline int _drbd_set_state(struct drbd_conf *mdev, + union drbd_state ns, enum chg_state_flags flags, + struct completion *done) +{ + int rv; + + read_lock(&global_state_lock); + rv = __drbd_set_state(mdev, ns, flags, done); + read_unlock(&global_state_lock); + + return rv; +} + +/** + * drbd_request_state() - Request a state change + * @mdev: DRBD device. + * @mask: mask of state bits to change. + * @val: value of new state bits. + * + * This is the most graceful way of requesting a state change. It is + * quite verbose in case the state change is not possible, and all those + * state changes are globally serialized. + */ +static inline int drbd_request_state(struct drbd_conf *mdev, + union drbd_state mask, + union drbd_state val) +{ + return _drbd_request_state(mdev, mask, val, CS_VERBOSE + CS_ORDERED); +}
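+ +/* Example (sketch): a graceful, globally serialized request to tear down the + * connection would be written as + * rv = drbd_request_state(mdev, NS(conn, C_DISCONNECTING)); + * where NS() expands to the mask/value pair described in the DOC above. */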
+ +#define __drbd_chk_io_error(m,f) __drbd_chk_io_error_(m,f, __func__) +static inline void __drbd_chk_io_error_(struct drbd_conf *mdev, int forcedetach, const char *where) +{ + switch (mdev->ldev->dc.on_io_error) { + case EP_PASS_ON: + if (!forcedetach) { + if (printk_ratelimit()) + dev_err(DEV, "Local IO failed in %s. " + "Passing error on...\n", where); + break; + } + /* NOTE fall through to detach case if forcedetach set */ + case EP_DETACH: + case EP_CALL_HELPER: + if (mdev->state.disk > D_FAILED) { + _drbd_set_state(_NS(mdev, disk, D_FAILED), CS_HARD, NULL); + dev_err(DEV, "Local IO failed in %s. " + "Detaching...\n", where); + } + break; + } +} + +/** + * drbd_chk_io_error: Handle the on_io_error setting, should be called from all io completion handlers + * @mdev: DRBD device. + * @error: Error code passed to the IO completion callback + * @forcedetach: Force detach. I.e. the error happened while accessing the meta data + * + * See also drbd_main.c:after_state_ch() if (os.disk > D_FAILED && ns.disk == D_FAILED) + */ +#define drbd_chk_io_error(m,e,f) drbd_chk_io_error_(m,e,f, __func__) +static inline void drbd_chk_io_error_(struct drbd_conf *mdev, + int error, int forcedetach, const char *where) +{ + if (error) { + unsigned long flags; + spin_lock_irqsave(&mdev->req_lock, flags); + __drbd_chk_io_error_(mdev, forcedetach, where); + spin_unlock_irqrestore(&mdev->req_lock, flags); + } +} + + +/** + * drbd_md_first_sector() - Returns the first sector number of the meta data area + * @bdev: Meta data block device. + * + * BTW, for internal meta data, this happens to be the maximum capacity + * we could agree upon with our peer node. + */ +static inline sector_t drbd_md_first_sector(struct drbd_backing_dev *bdev) +{ + switch (bdev->dc.meta_dev_idx) { + case DRBD_MD_INDEX_INTERNAL: + case DRBD_MD_INDEX_FLEX_INT: + return bdev->md.md_offset + bdev->md.bm_offset; + case DRBD_MD_INDEX_FLEX_EXT: + default: + return bdev->md.md_offset; + } +} + +/** + * drbd_md_last_sector() - Return the last sector number of the meta data area + * @bdev: Meta data block device. + */ +static inline sector_t drbd_md_last_sector(struct drbd_backing_dev *bdev) +{ + switch (bdev->dc.meta_dev_idx) { + case DRBD_MD_INDEX_INTERNAL: + case DRBD_MD_INDEX_FLEX_INT: + return bdev->md.md_offset + MD_AL_OFFSET - 1; + case DRBD_MD_INDEX_FLEX_EXT: + default: + return bdev->md.md_offset + bdev->md.md_size_sect; + } +} + +/* Returns the number of 512 byte sectors of the device */ +static inline sector_t drbd_get_capacity(struct block_device *bdev) +{ + /* return bdev ? get_capacity(bdev->bd_disk) : 0; */ + return bdev ? bdev->bd_inode->i_size >> 9 : 0; +} + +/** + * drbd_get_max_capacity() - Returns the capacity we announce to our peer + * @bdev: Meta data block device. + * + * returns the capacity we announce to our peer. we clip ourselves at the + * various MAX_SECTORS, because if we don't, current implementation will + * oops sooner or later + */ +static inline sector_t drbd_get_max_capacity(struct drbd_backing_dev *bdev) +{ + sector_t s; + switch (bdev->dc.meta_dev_idx) { + case DRBD_MD_INDEX_INTERNAL: + case DRBD_MD_INDEX_FLEX_INT: + s = drbd_get_capacity(bdev->backing_bdev) + ? min_t(sector_t, DRBD_MAX_SECTORS_FLEX, + drbd_md_first_sector(bdev)) + : 0; + break; + case DRBD_MD_INDEX_FLEX_EXT: + s = min_t(sector_t, DRBD_MAX_SECTORS_FLEX, + drbd_get_capacity(bdev->backing_bdev)); + /* clip at maximum size the meta device can support */ + s = min_t(sector_t, s, + BM_EXT_TO_SECT(bdev->md.md_size_sect + - bdev->md.bm_offset)); + break; + default: + s = min_t(sector_t, DRBD_MAX_SECTORS, + drbd_get_capacity(bdev->backing_bdev)); + } + return s; +}
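+ +/* Rough on-disk picture for internal ("flexible") meta data, pieced together + * from the helpers above and from drbd_md_ss__() below (illustrative sketch, + * offsets in sectors, not to scale): + * + * | usable data ... | bitmap | activity log | super block | + * ^ md_offset + bm_offset ^ md_offset + * + * drbd_md_first_sector() thus bounds the usable data area from above. */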
+ +/** + * drbd_md_ss__() - Return the sector number of our meta data super block + * @mdev: DRBD device. + * @bdev: Meta data block device. + */ +static inline sector_t drbd_md_ss__(struct drbd_conf *mdev, + struct drbd_backing_dev *bdev) +{ + switch (bdev->dc.meta_dev_idx) { + default: /* external, some index */ + return MD_RESERVED_SECT * bdev->dc.meta_dev_idx; + case DRBD_MD_INDEX_INTERNAL: + /* with drbd08, internal meta data is always "flexible" */ + case DRBD_MD_INDEX_FLEX_INT: + /* sizeof(struct md_on_disk_07) == 4k + * position: last 4k aligned block of 4k size */ + if (!bdev->backing_bdev) { + if (__ratelimit(&drbd_ratelimit_state)) { + dev_err(DEV, "bdev->backing_bdev==NULL\n"); + dump_stack(); + } + return 0; + } + return (drbd_get_capacity(bdev->backing_bdev) & ~7ULL) + - MD_AL_OFFSET; + case DRBD_MD_INDEX_FLEX_EXT: + return 0; + } +} + +static inline void +_drbd_queue_work(struct drbd_work_queue *q, struct drbd_work *w) +{ + list_add_tail(&w->list, &q->q); + up(&q->s); +} + +static inline void +drbd_queue_work_front(struct drbd_work_queue *q, struct drbd_work *w) +{ + unsigned long flags; + spin_lock_irqsave(&q->q_lock, flags); + list_add(&w->list, &q->q); + up(&q->s); /* within the spinlock, + see comment near end of drbd_worker() */ + spin_unlock_irqrestore(&q->q_lock, flags); +} + +static inline void +drbd_queue_work(struct drbd_work_queue *q, struct drbd_work *w) +{ + unsigned long flags; + spin_lock_irqsave(&q->q_lock, flags); + list_add_tail(&w->list, &q->q); + up(&q->s); /* within the spinlock, + see comment near end of drbd_worker() */ + spin_unlock_irqrestore(&q->q_lock, flags); +} + +static inline void wake_asender(struct drbd_conf *mdev) +{ + if (test_bit(SIGNAL_ASENDER, &mdev->flags)) + force_sig(DRBD_SIG, mdev->asender.task); +} + +static inline void request_ping(struct drbd_conf *mdev) +{ + set_bit(SEND_PING, &mdev->flags); + wake_asender(mdev); +} + +static inline int drbd_send_short_cmd(struct drbd_conf *mdev, + enum drbd_packets cmd) +{ + struct p_header h; + return drbd_send_cmd(mdev, USE_DATA_SOCKET, cmd, &h, sizeof(h)); +} + +static inline int drbd_send_ping(struct drbd_conf *mdev) +{ + struct p_header h; + return drbd_send_cmd(mdev, USE_META_SOCKET, P_PING, &h, sizeof(h)); +} + +static inline int drbd_send_ping_ack(struct drbd_conf *mdev) +{ + struct p_header h; + return drbd_send_cmd(mdev, USE_META_SOCKET, P_PING_ACK, &h, sizeof(h)); +} + +static inline void drbd_thread_stop(struct drbd_thread *thi) +{ + _drbd_thread_stop(thi, FALSE, TRUE); +} + +static inline void drbd_thread_stop_nowait(struct drbd_thread *thi) +{ + _drbd_thread_stop(thi, FALSE, FALSE); +} + +static inline void drbd_thread_restart_nowait(struct drbd_thread *thi) +{ + _drbd_thread_stop(thi, TRUE, FALSE); +} + +/* counts how many answer packets we expect from our peer, + * for either explicit application requests, + * or implicit barrier packets as necessary. + * increased: + * w_send_barrier + * _req_mod(req, queue_for_net_write or queue_for_net_read); + * it is much easier and equally valid to count what we queue for the + * worker, even before it actually was queued or sent. + * (drbd_make_request_common; recovery path on read io-error) + * decreased: + * got_BarrierAck (respective tl_clear, tl_clear_barrier) + * _req_mod(req, data_received) + * [from receive_DataReply] + * _req_mod(req, write_acked_by_peer or recv_acked_by_peer or neg_acked) + * [from got_BlockAck (P_WRITE_ACK, P_RECV_ACK)] + * for some reason it is NOT decreased in got_NegAck, + * but in the resulting cleanup code from report_params. + * we should try to remember the reason for that...
+ * _req_mod(req, send_failed or send_canceled) + * _req_mod(req, connection_lost_while_pending) + * [from tl_clear_barrier] + */ +static inline void inc_ap_pending(struct drbd_conf *mdev) +{ + atomic_inc(&mdev->ap_pending_cnt); +} + +#define ERR_IF_CNT_IS_NEGATIVE(which) \ + if (atomic_read(&mdev->which) < 0) \ + dev_err(DEV, "in %s:%d: " #which " = %d < 0 !\n", \ + __func__ , __LINE__ , \ + atomic_read(&mdev->which)) + +#define dec_ap_pending(mdev) do { \ + typecheck(struct drbd_conf *, mdev); \ + if (atomic_dec_and_test(&mdev->ap_pending_cnt)) \ + wake_up(&mdev->misc_wait); \ + ERR_IF_CNT_IS_NEGATIVE(ap_pending_cnt); } while (0) + +/* counts how many resync-related answers we still expect from the peer + * increase decrease + * C_SYNC_TARGET sends P_RS_DATA_REQUEST (and expects P_RS_DATA_REPLY) + * C_SYNC_SOURCE sends P_RS_DATA_REPLY (and expects P_WRITE_ACK with ID_SYNCER) + * (or P_NEG_ACK with ID_SYNCER) + */ +static inline void inc_rs_pending(struct drbd_conf *mdev) +{ + atomic_inc(&mdev->rs_pending_cnt); +} + +#define dec_rs_pending(mdev) do { \ + typecheck(struct drbd_conf *, mdev); \ + atomic_dec(&mdev->rs_pending_cnt); \ + ERR_IF_CNT_IS_NEGATIVE(rs_pending_cnt); } while (0) + +/* counts how many answers we still need to send to the peer. + * increased on + * receive_Data unless protocol A; + * we need to send a P_RECV_ACK (proto B) + * or P_WRITE_ACK (proto C) + * receive_RSDataReply (recv_resync_read) we need to send a P_WRITE_ACK + * receive_DataRequest (receive_RSDataRequest) we need to send back P_DATA + * receive_Barrier_* we need to send a P_BARRIER_ACK + */ +static inline void inc_unacked(struct drbd_conf *mdev) +{ + atomic_inc(&mdev->unacked_cnt); +} + +#define dec_unacked(mdev) do { \ + typecheck(struct drbd_conf *, mdev); \ + atomic_dec(&mdev->unacked_cnt); \ + ERR_IF_CNT_IS_NEGATIVE(unacked_cnt); } while (0) + +#define sub_unacked(mdev, n) do { \ + typecheck(struct drbd_conf *, mdev); \ + atomic_sub(n, &mdev->unacked_cnt); \ + ERR_IF_CNT_IS_NEGATIVE(unacked_cnt); } while (0)
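+ +/* Pairing sketch for a protocol C write on the receiving node, following the + * transitions listed above: receive_Data() does inc_unacked(mdev); once the + * block is written and the corresponding P_WRITE_ACK has been sent, + * dec_unacked(mdev) drops the count again. */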
+ +static inline void put_net_conf(struct drbd_conf *mdev) +{ + if (atomic_dec_and_test(&mdev->net_cnt)) + wake_up(&mdev->misc_wait); +} + +/** + * get_net_conf() - Increase ref count on mdev->net_conf; Returns 0 if nothing there + * @mdev: DRBD device. + * + * You have to call put_net_conf() when finished working with mdev->net_conf. + */ +static inline int get_net_conf(struct drbd_conf *mdev) +{ + int have_net_conf; + + atomic_inc(&mdev->net_cnt); + have_net_conf = mdev->state.conn >= C_UNCONNECTED; + if (!have_net_conf) + put_net_conf(mdev); + return have_net_conf; +} + +/** + * get_ldev() - Increase the ref count on mdev->ldev. Returns 0 if there is no ldev + * @M: DRBD device. + * + * You have to call put_ldev() when finished working with mdev->ldev. + */ +#define get_ldev(M) __cond_lock(local, _get_ldev_if_state(M,D_INCONSISTENT)) +#define get_ldev_if_state(M,MINS) __cond_lock(local, _get_ldev_if_state(M,MINS)) + +static inline void put_ldev(struct drbd_conf *mdev) +{ + __release(local); + if (atomic_dec_and_test(&mdev->local_cnt)) + wake_up(&mdev->misc_wait); + D_ASSERT(atomic_read(&mdev->local_cnt) >= 0); +} + +#ifndef __CHECKER__ +static inline int _get_ldev_if_state(struct drbd_conf *mdev, enum drbd_disk_state mins) +{ + int io_allowed; + + atomic_inc(&mdev->local_cnt); + io_allowed = (mdev->state.disk >= mins); + if (!io_allowed) + put_ldev(mdev); + return io_allowed; +} +#else +extern int _get_ldev_if_state(struct drbd_conf *mdev, enum drbd_disk_state mins); +#endif + +/* you must have a "get_ldev" reference */ +static inline void drbd_get_syncer_progress(struct drbd_conf *mdev, + unsigned long *bits_left, unsigned int *per_mil_done) +{ + /* + * this is to break it at compile time when we change that + * (we may feel 4TB maximum storage per drbd is not enough) + */ + typecheck(unsigned long, mdev->rs_total); + + /* note: both rs_total and rs_left are in bits, i.e. in + * units of BM_BLOCK_SIZE. + * for the percentage, we don't care. */ + + *bits_left = drbd_bm_total_weight(mdev) - mdev->rs_failed; + /* >> 10 to prevent overflow, + * +1 to prevent division by zero */ + if (*bits_left > mdev->rs_total) { + /* doh. maybe a logic bug somewhere. + * may also be just a race condition + * between this and a disconnect during sync. + * for now, just prevent in-kernel buffer overflow. + */ + smp_rmb(); + dev_warn(DEV, "cs:%s rs_left=%lu > rs_total=%lu (rs_failed %lu)\n", + drbd_conn_str(mdev->state.conn), + *bits_left, mdev->rs_total, mdev->rs_failed); + *per_mil_done = 0; + } else { + /* make sure the calculation happens in long context */ + unsigned long tmp = 1000UL - + (*bits_left >> 10)*1000UL + / ((mdev->rs_total >> 10) + 1UL); + *per_mil_done = tmp; + } +} + + +/* this throttles on-the-fly application requests + * according to max_buffers settings; + * maybe re-implement using semaphores? */ +static inline int drbd_get_max_buffers(struct drbd_conf *mdev) +{ + int mxb = 1000000; /* arbitrary limit on open requests */ + if (get_net_conf(mdev)) { + mxb = mdev->net_conf->max_buffers; + put_net_conf(mdev); + } + return mxb; +} + +static inline int drbd_state_is_stable(union drbd_state s) +{ + + /* DO NOT add a default clause, we want the compiler to warn us + * for any newly introduced state we may have forgotten to add here */ + + switch ((enum drbd_conns)s.conn) { + /* new io only accepted when there is no connection, ... */ + case C_STANDALONE: + case C_WF_CONNECTION: + /* ... or there is a well established connection.
*/ + case C_CONNECTED: + case C_SYNC_SOURCE: + case C_SYNC_TARGET: + case C_VERIFY_S: + case C_VERIFY_T: + case C_PAUSED_SYNC_S: + case C_PAUSED_SYNC_T: + /* maybe stable, look at the disk state */ + break; + + /* no new io accepted during transitional states + * like handshake or teardown */ + case C_DISCONNECTING: + case C_UNCONNECTED: + case C_TIMEOUT: + case C_BROKEN_PIPE: + case C_NETWORK_FAILURE: + case C_PROTOCOL_ERROR: + case C_TEAR_DOWN: + case C_WF_REPORT_PARAMS: + case C_STARTING_SYNC_S: + case C_STARTING_SYNC_T: + case C_WF_BITMAP_S: + case C_WF_BITMAP_T: + case C_WF_SYNC_UUID: + case C_MASK: + /* not "stable" */ + return 0; + } + + switch ((enum drbd_disk_state)s.disk) { + case D_DISKLESS: + case D_INCONSISTENT: + case D_OUTDATED: + case D_CONSISTENT: + case D_UP_TO_DATE: + /* disk state is stable as well. */ + break; + + /* no new io accepted during transitional states */ + case D_ATTACHING: + case D_FAILED: + case D_NEGOTIATING: + case D_UNKNOWN: + case D_MASK: + /* not "stable" */ + return 0; + } + + return 1; +} + +static inline int __inc_ap_bio_cond(struct drbd_conf *mdev) +{ + int mxb = drbd_get_max_buffers(mdev); + + if (mdev->state.susp) + return 0; + if (test_bit(SUSPEND_IO, &mdev->flags)) + return 0; + + /* to avoid potential deadlock or bitmap corruption, + * in various places, we only allow new application io + * to start during "stable" states. */ + + /* no new io accepted when attaching or detaching the disk */ + if (!drbd_state_is_stable(mdev->state)) + return 0; + + /* since some older kernels don't have atomic_add_unless, + * and we are within the spinlock anyways, we have this workaround. */ + if (atomic_read(&mdev->ap_bio_cnt) > mxb) + return 0; + if (test_bit(BITMAP_IO, &mdev->flags)) + return 0; + return 1; +} + +/* I'd like to use wait_event_lock_irq, + * but I'm not sure when it got introduced, + * and not sure when it has 3 or 4 arguments */ +static inline void inc_ap_bio(struct drbd_conf *mdev, int one_or_two) +{ + /* compare with after_state_ch, + * os.conn != C_WF_BITMAP_S && ns.conn == C_WF_BITMAP_S */ + DEFINE_WAIT(wait); + + /* we wait here + * as long as the device is suspended + * until the bitmap is no longer on the fly during connection + * handshake as long as we would exceed the max_buffer limit. + * + * to avoid races with the reconnect code, + * we need to atomic_inc within the spinlock. */ + + spin_lock_irq(&mdev->req_lock); + while (!__inc_ap_bio_cond(mdev)) { + prepare_to_wait(&mdev->misc_wait, &wait, TASK_UNINTERRUPTIBLE); + spin_unlock_irq(&mdev->req_lock); + schedule(); + finish_wait(&mdev->misc_wait, &wait); + spin_lock_irq(&mdev->req_lock); + } + atomic_add(one_or_two, &mdev->ap_bio_cnt); + spin_unlock_irq(&mdev->req_lock); +} + +static inline void dec_ap_bio(struct drbd_conf *mdev) +{ + int mxb = drbd_get_max_buffers(mdev); + int ap_bio = atomic_dec_return(&mdev->ap_bio_cnt); + + D_ASSERT(ap_bio >= 0); + /* this currently does wake_up for every dec_ap_bio! + * maybe rather introduce some type of hysteresis? + * e.g. (ap_bio == mxb/2 || ap_bio == 0) ? */ + if (ap_bio < mxb) + wake_up(&mdev->misc_wait); + if (ap_bio == 0 && test_bit(BITMAP_IO, &mdev->flags)) { + if (!test_and_set_bit(BITMAP_IO_QUEUED, &mdev->flags)) + drbd_queue_work(&mdev->data.work, &mdev->bm_io_work.w); + } +} + +static inline void drbd_set_ed_uuid(struct drbd_conf *mdev, u64 val) +{ + mdev->ed_uuid = val; +} + +static inline int seq_cmp(u32 a, u32 b) +{ + /* we assume wrap around at 32bit.
+ * for wrap around at 24bit (old atomic_t), + * we'd have to + * a <<= 8; b <<= 8; + */ + return (s32)(a) - (s32)(b); +} +#define seq_lt(a, b) (seq_cmp((a), (b)) < 0) +#define seq_gt(a, b) (seq_cmp((a), (b)) > 0) +#define seq_ge(a, b) (seq_cmp((a), (b)) >= 0) +#define seq_le(a, b) (seq_cmp((a), (b)) <= 0) +/* CAUTION: please no side effects in arguments! */ +#define seq_max(a, b) ((u32)(seq_gt((a), (b)) ? (a) : (b))) + +static inline void update_peer_seq(struct drbd_conf *mdev, unsigned int new_seq) +{ + unsigned int m; + spin_lock(&mdev->peer_seq_lock); + m = seq_max(mdev->peer_seq, new_seq); + mdev->peer_seq = m; + spin_unlock(&mdev->peer_seq_lock); + if (m == new_seq) + wake_up(&mdev->seq_wait); +} + +static inline void drbd_update_congested(struct drbd_conf *mdev) +{ + struct sock *sk = mdev->data.socket->sk; + if (sk->sk_wmem_queued > sk->sk_sndbuf * 4 / 5) + set_bit(NET_CONGESTED, &mdev->flags); +} + +static inline int drbd_queue_order_type(struct drbd_conf *mdev) +{ + /* sorry, we currently have no working implementation + * of distributed TCQ stuff */ +#ifndef QUEUE_ORDERED_NONE +#define QUEUE_ORDERED_NONE 0 +#endif + return QUEUE_ORDERED_NONE; +} + +static inline void drbd_blk_run_queue(struct request_queue *q) +{ + if (q && q->unplug_fn) + q->unplug_fn(q); +} + +static inline void drbd_kick_lo(struct drbd_conf *mdev) +{ + if (get_ldev(mdev)) { + drbd_blk_run_queue(bdev_get_queue(mdev->ldev->backing_bdev)); + put_ldev(mdev); + } +} + +static inline void drbd_md_flush(struct drbd_conf *mdev) +{ + int r; + + if (test_bit(MD_NO_BARRIER, &mdev->flags)) + return; + + r = blkdev_issue_flush(mdev->ldev->md_bdev, NULL); + if (r) { + set_bit(MD_NO_BARRIER, &mdev->flags); + dev_err(DEV, "meta data flush failed with status %d, disabling md-flushes\n", r); + } +} + +#endif diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c new file mode 100644 index 000000000000..edf0b8031e69 --- /dev/null +++ b/drivers/block/drbd/drbd_main.c @@ -0,0 +1,3735 @@ +/* + drbd.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner <philipp.reisner@linbit.com>. + Copyright (C) 2002-2008, Lars Ellenberg <lars.ellenberg@linbit.com>. + + Thanks to Carter Burden, Bart Grantham and Gennadiy Nerubayev + from Logicworks, Inc. for making SDP replication support possible. + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+ + */ + +#include <linux/module.h> +#include <linux/drbd.h> +#include <asm/uaccess.h> +#include <asm/types.h> +#include <net/sock.h> +#include <linux/ctype.h> +#include <linux/smp_lock.h> +#include <linux/fs.h> +#include <linux/file.h> +#include <linux/proc_fs.h> +#include <linux/init.h> +#include <linux/mm.h> +#include <linux/memcontrol.h> +#include <linux/mm_inline.h> +#include <linux/slab.h> +#include <linux/random.h> +#include <linux/reboot.h> +#include <linux/notifier.h> +#include <linux/kthread.h> + +#define __KERNEL_SYSCALLS__ +#include <linux/unistd.h> +#include <linux/vmalloc.h> + +#include <linux/drbd_limits.h> +#include "drbd_int.h" +#include "drbd_tracing.h" +#include "drbd_req.h" /* only for _req_mod in tl_release and tl_clear */ + +#include "drbd_vli.h" + +struct after_state_chg_work { + struct drbd_work w; + union drbd_state os; + union drbd_state ns; + enum chg_state_flags flags; + struct completion *done; +}; + +int drbdd_init(struct drbd_thread *); +int drbd_worker(struct drbd_thread *); +int drbd_asender(struct drbd_thread *); + +int drbd_init(void); +static int drbd_open(struct block_device *bdev, fmode_t mode); +static int drbd_release(struct gendisk *gd, fmode_t mode); +static int w_after_state_ch(struct drbd_conf *mdev, struct drbd_work *w, int unused); +static void after_state_ch(struct drbd_conf *mdev, union drbd_state os, + union drbd_state ns, enum chg_state_flags flags); +static int w_md_sync(struct drbd_conf *mdev, struct drbd_work *w, int unused); +static void md_sync_timer_fn(unsigned long data); +static int w_bitmap_io(struct drbd_conf *mdev, struct drbd_work *w, int unused); + +DEFINE_TRACE(drbd_unplug); +DEFINE_TRACE(drbd_uuid); +DEFINE_TRACE(drbd_ee); +DEFINE_TRACE(drbd_packet); +DEFINE_TRACE(drbd_md_io); +DEFINE_TRACE(drbd_epoch); +DEFINE_TRACE(drbd_netlink); +DEFINE_TRACE(drbd_actlog); +DEFINE_TRACE(drbd_bio); +DEFINE_TRACE(_drbd_resync); +DEFINE_TRACE(drbd_req); + +MODULE_AUTHOR("Philipp Reisner <philipp.reisner@linbit.com>, " + "Lars Ellenberg <lars.ellenberg@linbit.com>"); +MODULE_DESCRIPTION("drbd - Distributed Replicated Block Device v" REL_VERSION); +MODULE_VERSION(REL_VERSION); +MODULE_LICENSE("GPL"); +MODULE_PARM_DESC(minor_count, "Maximum number of drbd devices (1-255)"); +MODULE_ALIAS_BLOCKDEV_MAJOR(DRBD_MAJOR); + +#include <linux/moduleparam.h> +/* allow_open_on_secondary */ +MODULE_PARM_DESC(allow_oos, "DONT USE!"); +/* thanks to these macros, if compiled into the kernel (not-module), + * this becomes the boot parameter drbd.minor_count */ +module_param(minor_count, uint, 0444); +module_param(disable_sendpage, bool, 0644); +module_param(allow_oos, bool, 0); +module_param(cn_idx, uint, 0444); +module_param(proc_details, int, 0644); + +#ifdef CONFIG_DRBD_FAULT_INJECTION +int enable_faults; +int fault_rate; +static int fault_count; +int fault_devs; +/* bitmap of enabled faults */ +module_param(enable_faults, int, 0664); +/* fault rate % value - applies to all enabled faults */ +module_param(fault_rate, int, 0664); +/* count of faults inserted */ +module_param(fault_count, int, 0664); +/* bitmap of devices to insert faults on */ +module_param(fault_devs, int, 0644); +#endif + +/* module parameter, defined */ +unsigned int minor_count = 32; +int disable_sendpage; +int allow_oos; +unsigned int cn_idx = CN_IDX_DRBD; +int proc_details; /* Detail level in proc drbd */
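+ +/* Usage sketch: loaded as a module, the parameters above take effect via e.g. + * "modprobe drbd minor_count=8 proc_details=1"; built into the kernel, the + * same knob becomes the boot parameter "drbd.minor_count", as noted above. */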
+ +/* Module parameter for setting the user mode helper program + * to run. Default is /sbin/drbdadm */ +char usermode_helper[80] = "/sbin/drbdadm"; + +module_param_string(usermode_helper, usermode_helper, sizeof(usermode_helper), 0644); + +/* in 2.6.x, our device mapping and config info contains our virtual gendisks + * as member "struct gendisk *vdisk;" + */ +struct drbd_conf **minor_table; + +struct kmem_cache *drbd_request_cache; +struct kmem_cache *drbd_ee_cache; /* epoch entries */ +struct kmem_cache *drbd_bm_ext_cache; /* bitmap extents */ +struct kmem_cache *drbd_al_ext_cache; /* activity log extents */ +mempool_t *drbd_request_mempool; +mempool_t *drbd_ee_mempool; + +/* I do not use a standard mempool, because: + 1) I want to hand out the pre-allocated objects first. + 2) I want to be able to interrupt sleeping allocation with a signal. + Note: This is a singly linked list, the next pointer is the private + member of struct page. + */ +struct page *drbd_pp_pool; +spinlock_t drbd_pp_lock; +int drbd_pp_vacant; +wait_queue_head_t drbd_pp_wait; + +DEFINE_RATELIMIT_STATE(drbd_ratelimit_state, 5 * HZ, 5); + +static struct block_device_operations drbd_ops = { + .owner = THIS_MODULE, + .open = drbd_open, + .release = drbd_release, +}; + +#define ARRY_SIZE(A) (sizeof(A)/sizeof(A[0])) + +#ifdef __CHECKER__ +/* When checking with sparse, and this is an inline function, sparse will + give tons of false positives. When this is a real function, sparse works. + */ +int _get_ldev_if_state(struct drbd_conf *mdev, enum drbd_disk_state mins) +{ + int io_allowed; + + atomic_inc(&mdev->local_cnt); + io_allowed = (mdev->state.disk >= mins); + if (!io_allowed) { + if (atomic_dec_and_test(&mdev->local_cnt)) + wake_up(&mdev->misc_wait); + } + return io_allowed; +} + +#endif + +/** + * DOC: The transfer log + * + * The transfer log is a singly linked list of &struct drbd_tl_epoch objects. + * mdev->newest_tle points to the head, mdev->oldest_tle points to the tail + * of the list. There is always at least one &struct drbd_tl_epoch object. + * + * Each &struct drbd_tl_epoch has a circular doubly linked list of requests + * attached. + */ +static int tl_init(struct drbd_conf *mdev) +{ + struct drbd_tl_epoch *b; + + /* during device minor initialization, we may well use GFP_KERNEL */ + b = kmalloc(sizeof(struct drbd_tl_epoch), GFP_KERNEL); + if (!b) + return 0; + INIT_LIST_HEAD(&b->requests); + INIT_LIST_HEAD(&b->w.list); + b->next = NULL; + b->br_number = 4711; + b->n_req = 0; + b->w.cb = NULL; /* if this is != NULL, we need to dec_ap_pending in tl_clear */ + + mdev->oldest_tle = b; + mdev->newest_tle = b; + INIT_LIST_HEAD(&mdev->out_of_sequence_requests); + + mdev->tl_hash = NULL; + mdev->tl_hash_s = 0; + + return 1; +} + +static void tl_cleanup(struct drbd_conf *mdev) +{ + D_ASSERT(mdev->oldest_tle == mdev->newest_tle); + D_ASSERT(list_empty(&mdev->out_of_sequence_requests)); + kfree(mdev->oldest_tle); + mdev->oldest_tle = NULL; + kfree(mdev->unused_spare_tle); + mdev->unused_spare_tle = NULL; + kfree(mdev->tl_hash); + mdev->tl_hash = NULL; + mdev->tl_hash_s = 0; +}
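+ +/* Shape of the transfer log after tl_init() and one _tl_add_barrier() + * (illustrative sketch): + * + * oldest_tle --> [ epoch #4711 ] --> [ epoch #4712 ] --> NULL + * ^-- newest_tle + * + * each drbd_tl_epoch carries the circular doubly linked list of requests + * issued while it was the newest epoch. */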
+ +/** + * _tl_add_barrier() - Adds a barrier to the transfer log + * @mdev: DRBD device. + * @new: Barrier to be added before the current head of the TL. + * + * The caller must hold the req_lock. + */ +void _tl_add_barrier(struct drbd_conf *mdev, struct drbd_tl_epoch *new) +{ + struct drbd_tl_epoch *newest_before; + + INIT_LIST_HEAD(&new->requests); + INIT_LIST_HEAD(&new->w.list); + new->w.cb = NULL; /* if this is != NULL, we need to dec_ap_pending in tl_clear */ + new->next = NULL; + new->n_req = 0; + + newest_before = mdev->newest_tle; + /* never send a barrier number == 0, because that is special-cased + * when using TCQ for our write ordering code */ + new->br_number = (newest_before->br_number+1) ?: 1; + if (mdev->newest_tle != new) { + mdev->newest_tle->next = new; + mdev->newest_tle = new; + } +} + +/** + * tl_release() - Free or recycle the oldest &struct drbd_tl_epoch object of the TL + * @mdev: DRBD device. + * @barrier_nr: Expected identifier of the DRBD write barrier packet. + * @set_size: Expected number of requests before that barrier. + * + * In case the passed barrier_nr or set_size does not match the oldest + * &struct drbd_tl_epoch object, this function will cause a termination + * of the connection. + */ +void tl_release(struct drbd_conf *mdev, unsigned int barrier_nr, + unsigned int set_size) +{ + struct drbd_tl_epoch *b, *nob; /* next old barrier */ + struct list_head *le, *tle; + struct drbd_request *r; + + spin_lock_irq(&mdev->req_lock); + + b = mdev->oldest_tle; + + /* first some paranoia code */ + if (b == NULL) { + dev_err(DEV, "BAD! BarrierAck #%u received, but no epoch in tl!?\n", + barrier_nr); + goto bail; + } + if (b->br_number != barrier_nr) { + dev_err(DEV, "BAD! BarrierAck #%u received, expected #%u!\n", + barrier_nr, b->br_number); + goto bail; + } + if (b->n_req != set_size) { + dev_err(DEV, "BAD! BarrierAck #%u received with n_req=%u, expected n_req=%u!\n", + barrier_nr, set_size, b->n_req); + goto bail; + } + + /* Clean up list of requests processed during current epoch */ + list_for_each_safe(le, tle, &b->requests) { + r = list_entry(le, struct drbd_request, tl_requests); + _req_mod(r, barrier_acked); + } + /* There could be requests on the list waiting for completion + of the write to the local disk. To avoid corruptions of + slab's data structures we have to remove the list's head. + + Also there could have been a barrier ack out of sequence, overtaking + the write acks - which would be a bug and violating write ordering. + To not deadlock in case we lose connection while such requests are + still pending, we need some way to find them for the + _req_mod(connection_lost_while_pending). + + These have been list_move'd to the out_of_sequence_requests list in + _req_mod(, barrier_acked) above. + */ + list_del_init(&b->requests); + + nob = b->next; + if (test_and_clear_bit(CREATE_BARRIER, &mdev->flags)) { + _tl_add_barrier(mdev, b); + if (nob) + mdev->oldest_tle = nob; + /* if nob == NULL b was the only barrier, and becomes the new + barrier. Therefore mdev->oldest_tle points already to b */ + } else { + D_ASSERT(nob != NULL); + mdev->oldest_tle = nob; + kfree(b); + } + + spin_unlock_irq(&mdev->req_lock); + dec_ap_pending(mdev); + + return; + +bail: + spin_unlock_irq(&mdev->req_lock); + drbd_force_state(mdev, NS(conn, C_PROTOCOL_ERROR)); +} + + +/** + * tl_clear() - Clears all requests and &struct drbd_tl_epoch objects out of the TL + * @mdev: DRBD device. + * + * This is called after the connection to the peer was lost. The storage covered + * by the requests on the transfer log gets marked as out of sync. Called from the + * receiver thread and the worker thread.
+ */ +void tl_clear(struct drbd_conf *mdev) +{ + struct drbd_tl_epoch *b, *tmp; + struct list_head *le, *tle; + struct drbd_request *r; + int new_initial_bnr = net_random(); + + spin_lock_irq(&mdev->req_lock); + + b = mdev->oldest_tle; + while (b) { + list_for_each_safe(le, tle, &b->requests) { + r = list_entry(le, struct drbd_request, tl_requests); + /* It would be nice to complete outside of spinlock. + * But this is easier for now. */ + _req_mod(r, connection_lost_while_pending); + } + tmp = b->next; + + /* there could still be requests on that ring list, + * in case local io is still pending */ + list_del(&b->requests); + + /* dec_ap_pending corresponding to queue_barrier. + * the newest barrier may not have been queued yet, + * in which case w.cb is still NULL. */ + if (b->w.cb != NULL) + dec_ap_pending(mdev); + + if (b == mdev->newest_tle) { + /* recycle, but reinit! */ + D_ASSERT(tmp == NULL); + INIT_LIST_HEAD(&b->requests); + INIT_LIST_HEAD(&b->w.list); + b->w.cb = NULL; + b->br_number = new_initial_bnr; + b->n_req = 0; + + mdev->oldest_tle = b; + break; + } + kfree(b); + b = tmp; + } + + /* we expect this list to be empty. */ + D_ASSERT(list_empty(&mdev->out_of_sequence_requests)); + + /* but just in case, clean it up anyways! */ + list_for_each_safe(le, tle, &mdev->out_of_sequence_requests) { + r = list_entry(le, struct drbd_request, tl_requests); + /* It would be nice to complete outside of spinlock. + * But this is easier for now. */ + _req_mod(r, connection_lost_while_pending); + } + + /* ensure bit indicating barrier is required is clear */ + clear_bit(CREATE_BARRIER, &mdev->flags); + + spin_unlock_irq(&mdev->req_lock); +} + +/** + * cl_wide_st_chg() - TRUE if the state change is a cluster wide one + * @mdev: DRBD device. + * @os: old (current) state. + * @ns: new (wanted) state. + */ +static int cl_wide_st_chg(struct drbd_conf *mdev, + union drbd_state os, union drbd_state ns) +{ + return (os.conn >= C_CONNECTED && ns.conn >= C_CONNECTED && + ((os.role != R_PRIMARY && ns.role == R_PRIMARY) || + (os.conn != C_STARTING_SYNC_T && ns.conn == C_STARTING_SYNC_T) || + (os.conn != C_STARTING_SYNC_S && ns.conn == C_STARTING_SYNC_S) || + (os.disk != D_DISKLESS && ns.disk == D_DISKLESS))) || + (os.conn >= C_CONNECTED && ns.conn == C_DISCONNECTING) || + (os.conn == C_CONNECTED && ns.conn == C_VERIFY_S); +} + +int drbd_change_state(struct drbd_conf *mdev, enum chg_state_flags f, + union drbd_state mask, union drbd_state val) +{ + unsigned long flags; + union drbd_state os, ns; + int rv; + + spin_lock_irqsave(&mdev->req_lock, flags); + os = mdev->state; + ns.i = (os.i & ~mask.i) | val.i; + rv = _drbd_set_state(mdev, ns, f, NULL); + ns = mdev->state; + spin_unlock_irqrestore(&mdev->req_lock, flags); + + return rv; +} + +/** + * drbd_force_state() - Impose a change which happens outside our control on our state + * @mdev: DRBD device. + * @mask: mask of state bits to change. + * @val: value of new state bits. 
*/ +void drbd_force_state(struct drbd_conf *mdev, + union drbd_state mask, union drbd_state val) +{ + drbd_change_state(mdev, CS_HARD, mask, val); +} + +static int is_valid_state(struct drbd_conf *mdev, union drbd_state ns); +static int is_valid_state_transition(struct drbd_conf *, + union drbd_state, union drbd_state); +static union drbd_state sanitize_state(struct drbd_conf *mdev, union drbd_state os, + union drbd_state ns, int *warn_sync_abort); +int drbd_send_state_req(struct drbd_conf *, + union drbd_state, union drbd_state); + +static enum drbd_state_ret_codes _req_st_cond(struct drbd_conf *mdev, + union drbd_state mask, union drbd_state val) +{ + union drbd_state os, ns; + unsigned long flags; + int rv; + + if (test_and_clear_bit(CL_ST_CHG_SUCCESS, &mdev->flags)) + return SS_CW_SUCCESS; + + if (test_and_clear_bit(CL_ST_CHG_FAIL, &mdev->flags)) + return SS_CW_FAILED_BY_PEER; + + rv = 0; + spin_lock_irqsave(&mdev->req_lock, flags); + os = mdev->state; + ns.i = (os.i & ~mask.i) | val.i; + ns = sanitize_state(mdev, os, ns, NULL); + + if (!cl_wide_st_chg(mdev, os, ns)) + rv = SS_CW_NO_NEED; + if (!rv) { + rv = is_valid_state(mdev, ns); + if (rv == SS_SUCCESS) { + rv = is_valid_state_transition(mdev, ns, os); + if (rv == SS_SUCCESS) + rv = 0; /* continue waiting, otherwise fail. */ + } + } + spin_unlock_irqrestore(&mdev->req_lock, flags); + + return rv; +} + +/** + * drbd_req_state() - Perform a possibly cluster-wide state change + * @mdev: DRBD device. + * @mask: mask of state bits to change. + * @val: value of new state bits. + * @f: flags + * + * Should not be called directly, use drbd_request_state() or + * _drbd_request_state(). + */ +static int drbd_req_state(struct drbd_conf *mdev, + union drbd_state mask, union drbd_state val, + enum chg_state_flags f) +{ + struct completion done; + unsigned long flags; + union drbd_state os, ns; + int rv; + + init_completion(&done); + + if (f & CS_SERIALIZE) + mutex_lock(&mdev->state_mutex); + + spin_lock_irqsave(&mdev->req_lock, flags); + os = mdev->state; + ns.i = (os.i & ~mask.i) | val.i; + ns = sanitize_state(mdev, os, ns, NULL); + + if (cl_wide_st_chg(mdev, os, ns)) { + rv = is_valid_state(mdev, ns); + if (rv == SS_SUCCESS) + rv = is_valid_state_transition(mdev, ns, os); + spin_unlock_irqrestore(&mdev->req_lock, flags); + + if (rv < SS_SUCCESS) { + if (f & CS_VERBOSE) + print_st_err(mdev, os, ns, rv); + goto abort; + } + + drbd_state_lock(mdev); + if (!drbd_send_state_req(mdev, mask, val)) { + drbd_state_unlock(mdev); + rv = SS_CW_FAILED_BY_PEER; + if (f & CS_VERBOSE) + print_st_err(mdev, os, ns, rv); + goto abort; + } + + wait_event(mdev->state_wait, + (rv = _req_st_cond(mdev, mask, val))); + + if (rv < SS_SUCCESS) { + drbd_state_unlock(mdev); + if (f & CS_VERBOSE) + print_st_err(mdev, os, ns, rv); + goto abort; + } + spin_lock_irqsave(&mdev->req_lock, flags); + os = mdev->state; + ns.i = (os.i & ~mask.i) | val.i; + rv = _drbd_set_state(mdev, ns, f, &done); + drbd_state_unlock(mdev); + } else { + rv = _drbd_set_state(mdev, ns, f, &done); + } + + spin_unlock_irqrestore(&mdev->req_lock, flags); + + if (f & CS_WAIT_COMPLETE && rv == SS_SUCCESS) { + D_ASSERT(current != mdev->worker.task); + wait_for_completion(&done); + } + +abort: + if (f & CS_SERIALIZE) + mutex_unlock(&mdev->state_mutex); + + return rv; +} + +/** + * _drbd_request_state() - Request a state change (with flags) + * @mdev: DRBD device. + * @mask: mask of state bits to change. + * @val: value of new state bits.
+ * @f: flags + * + * Cousin of drbd_request_state(), useful with the CS_WAIT_COMPLETE + * flag, or when logging of failed state change requests is not desired. + */ +int _drbd_request_state(struct drbd_conf *mdev, union drbd_state mask, + union drbd_state val, enum chg_state_flags f) +{ + int rv; + + wait_event(mdev->state_wait, + (rv = drbd_req_state(mdev, mask, val, f)) != SS_IN_TRANSIENT_STATE); + + return rv; +} + +static void print_st(struct drbd_conf *mdev, char *name, union drbd_state ns) +{ + dev_err(DEV, " %s = { cs:%s ro:%s/%s ds:%s/%s %c%c%c%c }\n", + name, + drbd_conn_str(ns.conn), + drbd_role_str(ns.role), + drbd_role_str(ns.peer), + drbd_disk_str(ns.disk), + drbd_disk_str(ns.pdsk), + ns.susp ? 's' : 'r', + ns.aftr_isp ? 'a' : '-', + ns.peer_isp ? 'p' : '-', + ns.user_isp ? 'u' : '-' + ); +} + +void print_st_err(struct drbd_conf *mdev, + union drbd_state os, union drbd_state ns, int err) +{ + if (err == SS_IN_TRANSIENT_STATE) + return; + dev_err(DEV, "State change failed: %s\n", drbd_set_st_err_str(err)); + print_st(mdev, " state", os); + print_st(mdev, "wanted", ns); +} + + +#define drbd_peer_str drbd_role_str +#define drbd_pdsk_str drbd_disk_str + +#define drbd_susp_str(A) ((A) ? "1" : "0") +#define drbd_aftr_isp_str(A) ((A) ? "1" : "0") +#define drbd_peer_isp_str(A) ((A) ? "1" : "0") +#define drbd_user_isp_str(A) ((A) ? "1" : "0") + +#define PSC(A) \ + ({ if (ns.A != os.A) { \ + pbp += sprintf(pbp, #A "( %s -> %s ) ", \ + drbd_##A##_str(os.A), \ + drbd_##A##_str(ns.A)); \ + } }) + +/** + * is_valid_state() - Returns an SS_ error code if ns is not valid + * @mdev: DRBD device. + * @ns: State to consider. + */ +static int is_valid_state(struct drbd_conf *mdev, union drbd_state ns) +{ + /* See drbd_state_sw_errors in drbd_strings.c */ + + enum drbd_fencing_p fp; + int rv = SS_SUCCESS; + + fp = FP_DONT_CARE; + if (get_ldev(mdev)) { + fp = mdev->ldev->dc.fencing; + put_ldev(mdev); + } + + if (get_net_conf(mdev)) { + if (!mdev->net_conf->two_primaries && + ns.role == R_PRIMARY && ns.peer == R_PRIMARY) + rv = SS_TWO_PRIMARIES; + put_net_conf(mdev); + } + + if (rv <= 0) + /* already found a reason to abort */; + else if (ns.role == R_SECONDARY && mdev->open_cnt) + rv = SS_DEVICE_IN_USE; + + else if (ns.role == R_PRIMARY && ns.conn < C_CONNECTED && ns.disk < D_UP_TO_DATE) + rv = SS_NO_UP_TO_DATE_DISK; + + else if (fp >= FP_RESOURCE && + ns.role == R_PRIMARY && ns.conn < C_CONNECTED && ns.pdsk >= D_UNKNOWN) + rv = SS_PRIMARY_NOP; + + else if (ns.role == R_PRIMARY && ns.disk <= D_INCONSISTENT && ns.pdsk <= D_INCONSISTENT) + rv = SS_NO_UP_TO_DATE_DISK; + + else if (ns.conn > C_CONNECTED && ns.disk < D_INCONSISTENT) + rv = SS_NO_LOCAL_DISK; + + else if (ns.conn > C_CONNECTED && ns.pdsk < D_INCONSISTENT) + rv = SS_NO_REMOTE_DISK; + + else if ((ns.conn == C_CONNECTED || + ns.conn == C_WF_BITMAP_S || + ns.conn == C_SYNC_SOURCE || + ns.conn == C_PAUSED_SYNC_S) && + ns.disk == D_OUTDATED) + rv = SS_CONNECTED_OUTDATES; + + else if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) && + (mdev->sync_conf.verify_alg[0] == 0)) + rv = SS_NO_VERIFY_ALG; + + else if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) && + mdev->agreed_pro_version < 88) + rv = SS_NOT_SUPPORTED; + + return rv; +} + +/** + * is_valid_state_transition() - Returns an SS_ error code if the state transition is not possible + * @mdev: DRBD device. + * @ns: new state. + * @os: old state. 
+ */
+static int is_valid_state_transition(struct drbd_conf *mdev,
+	union drbd_state ns, union drbd_state os)
+{
+	int rv = SS_SUCCESS;
+
+	if ((ns.conn == C_STARTING_SYNC_T || ns.conn == C_STARTING_SYNC_S) &&
+	    os.conn > C_CONNECTED)
+		rv = SS_RESYNC_RUNNING;
+
+	if (ns.conn == C_DISCONNECTING && os.conn == C_STANDALONE)
+		rv = SS_ALREADY_STANDALONE;
+
+	if (ns.disk > D_ATTACHING && os.disk == D_DISKLESS)
+		rv = SS_IS_DISKLESS;
+
+	if (ns.conn == C_WF_CONNECTION && os.conn < C_UNCONNECTED)
+		rv = SS_NO_NET_CONFIG;
+
+	if (ns.disk == D_OUTDATED && os.disk < D_OUTDATED && os.disk != D_ATTACHING)
+		rv = SS_LOWER_THAN_OUTDATED;
+
+	if (ns.conn == C_DISCONNECTING && os.conn == C_UNCONNECTED)
+		rv = SS_IN_TRANSIENT_STATE;
+
+	if (ns.conn == os.conn && ns.conn == C_WF_REPORT_PARAMS)
+		rv = SS_IN_TRANSIENT_STATE;
+
+	if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) && os.conn < C_CONNECTED)
+		rv = SS_NEED_CONNECTION;
+
+	if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
+	    ns.conn != os.conn && os.conn > C_CONNECTED)
+		rv = SS_RESYNC_RUNNING;
+
+	if ((ns.conn == C_STARTING_SYNC_S || ns.conn == C_STARTING_SYNC_T) &&
+	    os.conn < C_CONNECTED)
+		rv = SS_NEED_CONNECTION;
+
+	return rv;
+}
+
+/**
+ * sanitize_state() - Resolves implicitly necessary additional changes to a state transition
+ * @mdev:	DRBD device.
+ * @os:	old state.
+ * @ns:	new state.
+ * @warn_sync_abort:	set to 1 if this transition implicitly aborts a running resync.
+ *
+ * When we lose the connection, we have to set the peer's disk state (pdsk)
+ * to D_UNKNOWN. This rule and many more along those lines are in this function.
+ */
+static union drbd_state sanitize_state(struct drbd_conf *mdev, union drbd_state os,
+	union drbd_state ns, int *warn_sync_abort)
+{
+	enum drbd_fencing_p fp;
+
+	fp = FP_DONT_CARE;
+	if (get_ldev(mdev)) {
+		fp = mdev->ldev->dc.fencing;
+		put_ldev(mdev);
+	}
+
+	/* Disallow network-error states on a device whose network part
+	 * is not (or no longer) configured */
+	if ((ns.conn >= C_TIMEOUT && ns.conn <= C_TEAR_DOWN) &&
+	    os.conn <= C_DISCONNECTING)
+		ns.conn = os.conn;
+
+	/* After a network error (+C_TEAR_DOWN) only C_UNCONNECTED or C_DISCONNECTING can follow */
+	if (os.conn >= C_TIMEOUT && os.conn <= C_TEAR_DOWN &&
+	    ns.conn != C_UNCONNECTED && ns.conn != C_DISCONNECTING)
+		ns.conn = os.conn;
+
+	/* After C_DISCONNECTING only C_STANDALONE may follow */
+	if (os.conn == C_DISCONNECTING && ns.conn != C_STANDALONE)
+		ns.conn = os.conn;
+
+	if (ns.conn < C_CONNECTED) {
+		ns.peer_isp = 0;
+		ns.peer = R_UNKNOWN;
+		if (ns.pdsk > D_UNKNOWN || ns.pdsk < D_INCONSISTENT)
+			ns.pdsk = D_UNKNOWN;
+	}
+
+	/* Clear the aftr_isp when becoming unconfigured */
+	if (ns.conn == C_STANDALONE && ns.disk == D_DISKLESS && ns.role == R_SECONDARY)
+		ns.aftr_isp = 0;
+
+	if (ns.conn <= C_DISCONNECTING && ns.disk == D_DISKLESS)
+		ns.pdsk = D_UNKNOWN;
+
+	/* Abort resync if a disk fails/detaches */
+	if (os.conn > C_CONNECTED && ns.conn > C_CONNECTED &&
+	    (ns.disk <= D_FAILED || ns.pdsk <= D_FAILED)) {
+		if (warn_sync_abort)
+			*warn_sync_abort = 1;
+		ns.conn = C_CONNECTED;
+	}
+
+	if (ns.conn >= C_CONNECTED &&
+	    ((ns.disk == D_CONSISTENT || ns.disk == D_OUTDATED) ||
+	     (ns.disk == D_NEGOTIATING && ns.conn == C_WF_BITMAP_T))) {
+		switch (ns.conn) {
+		case C_WF_BITMAP_T:
+		case C_PAUSED_SYNC_T:
+			ns.disk = D_OUTDATED;
+			break;
+		case C_CONNECTED:
+		case C_WF_BITMAP_S:
+		case C_SYNC_SOURCE:
+		case C_PAUSED_SYNC_S:
+			ns.disk = D_UP_TO_DATE;
+			break;
+		case C_SYNC_TARGET:
+			ns.disk = D_INCONSISTENT;
+			dev_warn(DEV, "Implicitly set disk state Inconsistent!\n");
+			break;
+		}
+		if (os.disk == D_OUTDATED && ns.disk == D_UP_TO_DATE)
+			dev_warn(DEV, "Implicitly set disk from Outdated to UpToDate\n");
+	}
+
+	if (ns.conn >= C_CONNECTED &&
+	    (ns.pdsk == D_CONSISTENT || ns.pdsk == D_OUTDATED)) {
+		switch (ns.conn) {
+		case C_CONNECTED:
+		case C_WF_BITMAP_T:
+		case C_PAUSED_SYNC_T:
+		case C_SYNC_TARGET:
+			ns.pdsk = D_UP_TO_DATE;
+			break;
+		case C_WF_BITMAP_S:
+		case C_PAUSED_SYNC_S:
+			ns.pdsk = D_OUTDATED;
+			break;
+		case C_SYNC_SOURCE:
+			ns.pdsk = D_INCONSISTENT;
+			dev_warn(DEV, "Implicitly set pdsk Inconsistent!\n");
+			break;
+		}
+		if (os.pdsk == D_OUTDATED && ns.pdsk == D_UP_TO_DATE)
+			dev_warn(DEV, "Implicitly set pdsk from Outdated to UpToDate\n");
+	}
+
+	/* Connection breaks down before we finished "Negotiating" */
+	if (ns.conn < C_CONNECTED && ns.disk == D_NEGOTIATING &&
+	    get_ldev_if_state(mdev, D_NEGOTIATING)) {
+		if (mdev->ed_uuid == mdev->ldev->md.uuid[UI_CURRENT]) {
+			ns.disk = mdev->new_state_tmp.disk;
+			ns.pdsk = mdev->new_state_tmp.pdsk;
+		} else {
+			dev_alert(DEV, "Connection lost while negotiating, no data!\n");
+			ns.disk = D_DISKLESS;
+			ns.pdsk = D_UNKNOWN;
+		}
+		put_ldev(mdev);
+	}
+
+	if (fp == FP_STONITH &&
+	    (ns.role == R_PRIMARY &&
+	     ns.conn < C_CONNECTED &&
+	     ns.pdsk > D_OUTDATED))
+		ns.susp = 1;
+
+	if (ns.aftr_isp || ns.peer_isp || ns.user_isp) {
+		if (ns.conn == C_SYNC_SOURCE)
+			ns.conn = C_PAUSED_SYNC_S;
+		if (ns.conn == C_SYNC_TARGET)
+			ns.conn = C_PAUSED_SYNC_T;
+	} else {
+		if (ns.conn == C_PAUSED_SYNC_S)
+			ns.conn = C_SYNC_SOURCE;
+		if (ns.conn == C_PAUSED_SYNC_T)
+			ns.conn = C_SYNC_TARGET;
+	}
+
+	return ns;
+}
+
+/* helper for __drbd_set_state */
+static void set_ov_position(struct drbd_conf *mdev, enum drbd_conns cs)
+{
+	if (cs == C_VERIFY_T) {
+		/* starting online verify from an arbitrary position
+		 * does not fit well into the existing protocol.
+		 * on C_VERIFY_T, we initialize ov_left and friends
+		 * implicitly in receive_DataRequest once the
+		 * first P_OV_REQUEST is received */
+		mdev->ov_start_sector = ~(sector_t)0;
+	} else {
+		unsigned long bit = BM_SECT_TO_BIT(mdev->ov_start_sector);
+		if (bit >= mdev->rs_total)
+			mdev->ov_start_sector =
+				BM_BIT_TO_SECT(mdev->rs_total - 1);
+		mdev->ov_position = mdev->ov_start_sector;
+	}
+}
+
+/**
+ * __drbd_set_state() - Set a new DRBD state
+ * @mdev:	DRBD device.
+ * @ns:	new state.
+ * @flags:	Flags
+ * @done:	Optional completion; completed after after_state_ch() has finished.
+ *
+ * Caller needs to hold req_lock, and global_state_lock. Do not call directly.
+ */
+int __drbd_set_state(struct drbd_conf *mdev,
+	union drbd_state ns, enum chg_state_flags flags,
+	struct completion *done)
+{
+	union drbd_state os;
+	int rv = SS_SUCCESS;
+	int warn_sync_abort = 0;
+	struct after_state_chg_work *ascw;
+
+	os = mdev->state;
+
+	ns = sanitize_state(mdev, os, ns, &warn_sync_abort);
+
+	if (ns.i == os.i)
+		return SS_NOTHING_TO_DO;
+
+	if (!(flags & CS_HARD)) {
+		/* pre-state-change checks; only look at ns */
+		/* See drbd_state_sw_errors in drbd_strings.c */
+
+		rv = is_valid_state(mdev, ns);
+		if (rv < SS_SUCCESS) {
+			/* If the old state was illegal as well, then let
+			   this happen...*/
+
+			if (is_valid_state(mdev, os) == rv) {
+				dev_err(DEV, "Considering state change from bad state. "
+					"Error would be: '%s'\n",
+					drbd_set_st_err_str(rv));
+				print_st(mdev, "old", os);
+				print_st(mdev, "new", ns);
+				rv = is_valid_state_transition(mdev, ns, os);
+			}
+		} else
+			rv = is_valid_state_transition(mdev, ns, os);
+	}
+
+	if (rv < SS_SUCCESS) {
+		if (flags & CS_VERBOSE)
+			print_st_err(mdev, os, ns, rv);
+		return rv;
+	}
+
+	if (warn_sync_abort)
+		dev_warn(DEV, "Resync aborted.\n");
+
+	{
+		char *pbp, pb[300];
+		pbp = pb;
+		*pbp = 0;
+		PSC(role);
+		PSC(peer);
+		PSC(conn);
+		PSC(disk);
+		PSC(pdsk);
+		PSC(susp);
+		PSC(aftr_isp);
+		PSC(peer_isp);
+		PSC(user_isp);
+		dev_info(DEV, "%s\n", pb);
+	}
+
+	/* solve the race between becoming unconfigured,
+	 * worker doing the cleanup, and
+	 * admin reconfiguring us:
+	 * on (re)configure, first set CONFIG_PENDING,
+	 * then wait for a potentially exiting worker,
+	 * start the worker, and schedule one no_op.
+	 * then proceed with configuration.
+	 */
+	if (ns.disk == D_DISKLESS &&
+	    ns.conn == C_STANDALONE &&
+	    ns.role == R_SECONDARY &&
+	    !test_and_set_bit(CONFIG_PENDING, &mdev->flags))
+		set_bit(DEVICE_DYING, &mdev->flags);
+
+	mdev->state.i = ns.i;
+	wake_up(&mdev->misc_wait);
+	wake_up(&mdev->state_wait);
+
+	/* post-state-change actions */
+	if (os.conn >= C_SYNC_SOURCE && ns.conn <= C_CONNECTED) {
+		set_bit(STOP_SYNC_TIMER, &mdev->flags);
+		mod_timer(&mdev->resync_timer, jiffies);
+	}
+
+	/* aborted verify run. log the last position */
+	if ((os.conn == C_VERIFY_S || os.conn == C_VERIFY_T) &&
+	    ns.conn < C_CONNECTED) {
+		mdev->ov_start_sector =
+			BM_BIT_TO_SECT(mdev->rs_total - mdev->ov_left);
+		dev_info(DEV, "Online Verify reached sector %llu\n",
+			(unsigned long long)mdev->ov_start_sector);
+	}
+
+	if ((os.conn == C_PAUSED_SYNC_T || os.conn == C_PAUSED_SYNC_S) &&
+	    (ns.conn == C_SYNC_TARGET || ns.conn == C_SYNC_SOURCE)) {
+		dev_info(DEV, "Syncer continues.\n");
+		mdev->rs_paused += (long)jiffies-(long)mdev->rs_mark_time;
+		if (ns.conn == C_SYNC_TARGET) {
+			if (!test_and_clear_bit(STOP_SYNC_TIMER, &mdev->flags))
+				mod_timer(&mdev->resync_timer, jiffies);
+			/* This if (!test_bit) is only needed for the case
+			   that a device that has ceased to use its timer,
+			   i.e. it is already in drbd_resync_finished(), gets
+			   paused and resumed.
*/ + } + } + + if ((os.conn == C_SYNC_TARGET || os.conn == C_SYNC_SOURCE) && + (ns.conn == C_PAUSED_SYNC_T || ns.conn == C_PAUSED_SYNC_S)) { + dev_info(DEV, "Resync suspended\n"); + mdev->rs_mark_time = jiffies; + if (ns.conn == C_PAUSED_SYNC_T) + set_bit(STOP_SYNC_TIMER, &mdev->flags); + } + + if (os.conn == C_CONNECTED && + (ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T)) { + mdev->ov_position = 0; + mdev->rs_total = + mdev->rs_mark_left = drbd_bm_bits(mdev); + if (mdev->agreed_pro_version >= 90) + set_ov_position(mdev, ns.conn); + else + mdev->ov_start_sector = 0; + mdev->ov_left = mdev->rs_total + - BM_SECT_TO_BIT(mdev->ov_position); + mdev->rs_start = + mdev->rs_mark_time = jiffies; + mdev->ov_last_oos_size = 0; + mdev->ov_last_oos_start = 0; + + if (ns.conn == C_VERIFY_S) { + dev_info(DEV, "Starting Online Verify from sector %llu\n", + (unsigned long long)mdev->ov_position); + mod_timer(&mdev->resync_timer, jiffies); + } + } + + if (get_ldev(mdev)) { + u32 mdf = mdev->ldev->md.flags & ~(MDF_CONSISTENT|MDF_PRIMARY_IND| + MDF_CONNECTED_IND|MDF_WAS_UP_TO_DATE| + MDF_PEER_OUT_DATED|MDF_CRASHED_PRIMARY); + + if (test_bit(CRASHED_PRIMARY, &mdev->flags)) + mdf |= MDF_CRASHED_PRIMARY; + if (mdev->state.role == R_PRIMARY || + (mdev->state.pdsk < D_INCONSISTENT && mdev->state.peer == R_PRIMARY)) + mdf |= MDF_PRIMARY_IND; + if (mdev->state.conn > C_WF_REPORT_PARAMS) + mdf |= MDF_CONNECTED_IND; + if (mdev->state.disk > D_INCONSISTENT) + mdf |= MDF_CONSISTENT; + if (mdev->state.disk > D_OUTDATED) + mdf |= MDF_WAS_UP_TO_DATE; + if (mdev->state.pdsk <= D_OUTDATED && mdev->state.pdsk >= D_INCONSISTENT) + mdf |= MDF_PEER_OUT_DATED; + if (mdf != mdev->ldev->md.flags) { + mdev->ldev->md.flags = mdf; + drbd_md_mark_dirty(mdev); + } + if (os.disk < D_CONSISTENT && ns.disk >= D_CONSISTENT) + drbd_set_ed_uuid(mdev, mdev->ldev->md.uuid[UI_CURRENT]); + put_ldev(mdev); + } + + /* Peer was forced D_UP_TO_DATE & R_PRIMARY, consider to resync */ + if (os.disk == D_INCONSISTENT && os.pdsk == D_INCONSISTENT && + os.peer == R_SECONDARY && ns.peer == R_PRIMARY) + set_bit(CONSIDER_RESYNC, &mdev->flags); + + /* Receiver should clean up itself */ + if (os.conn != C_DISCONNECTING && ns.conn == C_DISCONNECTING) + drbd_thread_stop_nowait(&mdev->receiver); + + /* Now the receiver finished cleaning up itself, it should die */ + if (os.conn != C_STANDALONE && ns.conn == C_STANDALONE) + drbd_thread_stop_nowait(&mdev->receiver); + + /* Upon network failure, we need to restart the receiver. 
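+	 * (the conn states checked below, C_TIMEOUT .. C_TEAR_DOWN, are
+	 *  the transient network-failure states)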
 */
+	if (os.conn > C_TEAR_DOWN &&
+	    ns.conn <= C_TEAR_DOWN && ns.conn >= C_TIMEOUT)
+		drbd_thread_restart_nowait(&mdev->receiver);
+
+	ascw = kmalloc(sizeof(*ascw), GFP_ATOMIC);
+	if (ascw) {
+		ascw->os = os;
+		ascw->ns = ns;
+		ascw->flags = flags;
+		ascw->w.cb = w_after_state_ch;
+		ascw->done = done;
+		drbd_queue_work(&mdev->data.work, &ascw->w);
+	} else {
+		dev_warn(DEV, "Could not kmalloc an ascw\n");
+	}
+
+	return rv;
+}
+
+static int w_after_state_ch(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+	struct after_state_chg_work *ascw =
+		container_of(w, struct after_state_chg_work, w);
+	after_state_ch(mdev, ascw->os, ascw->ns, ascw->flags);
+	if (ascw->flags & CS_WAIT_COMPLETE) {
+		D_ASSERT(ascw->done != NULL);
+		complete(ascw->done);
+	}
+	kfree(ascw);
+
+	return 1;
+}
+
+static void abw_start_sync(struct drbd_conf *mdev, int rv)
+{
+	if (rv) {
+		dev_err(DEV, "Writing the bitmap failed, not starting resync.\n");
+		_drbd_request_state(mdev, NS(conn, C_CONNECTED), CS_VERBOSE);
+		return;
+	}
+
+	switch (mdev->state.conn) {
+	case C_STARTING_SYNC_T:
+		_drbd_request_state(mdev, NS(conn, C_WF_SYNC_UUID), CS_VERBOSE);
+		break;
+	case C_STARTING_SYNC_S:
+		drbd_start_resync(mdev, C_SYNC_SOURCE);
+		break;
+	}
+}
+
+/**
+ * after_state_ch() - Perform after state change actions that may sleep
+ * @mdev:	DRBD device.
+ * @os:	old state.
+ * @ns:	new state.
+ * @flags:	Flags
+ */
+static void after_state_ch(struct drbd_conf *mdev, union drbd_state os,
+	union drbd_state ns, enum chg_state_flags flags)
+{
+	enum drbd_fencing_p fp;
+
+	if (os.conn != C_CONNECTED && ns.conn == C_CONNECTED) {
+		clear_bit(CRASHED_PRIMARY, &mdev->flags);
+		if (mdev->p_uuid)
+			mdev->p_uuid[UI_FLAGS] &= ~((u64)2);
+	}
+
+	fp = FP_DONT_CARE;
+	if (get_ldev(mdev)) {
+		fp = mdev->ldev->dc.fencing;
+		put_ldev(mdev);
+	}
+
+	/* Inform userspace about the change... */
+	drbd_bcast_state(mdev, ns);
+
+	if (!(os.role == R_PRIMARY && os.disk < D_UP_TO_DATE && os.pdsk < D_UP_TO_DATE) &&
+	    (ns.role == R_PRIMARY && ns.disk < D_UP_TO_DATE && ns.pdsk < D_UP_TO_DATE))
+		drbd_khelper(mdev, "pri-on-incon-degr");
+
+	/* Here we have the actions that are performed after a
+	   state change. This function might sleep */
+
+	if (fp == FP_STONITH && ns.susp) {
+		/* case1: The outdate peer handler is successful:
+		 * case2: The connection was established again: */
+		if ((os.pdsk > D_OUTDATED && ns.pdsk <= D_OUTDATED) ||
+		    (os.conn < C_CONNECTED && ns.conn >= C_CONNECTED)) {
+			tl_clear(mdev);
+			spin_lock_irq(&mdev->req_lock);
+			_drbd_set_state(_NS(mdev, susp, 0), CS_VERBOSE, NULL);
+			spin_unlock_irq(&mdev->req_lock);
+		}
+	}
+	/* Do not change the order of the if above and the two below... */
+	if (os.pdsk == D_DISKLESS && ns.pdsk > D_DISKLESS) {	/* attach on the peer */
+		drbd_send_uuids(mdev);
+		drbd_send_state(mdev);
+	}
+	if (os.conn != C_WF_BITMAP_S && ns.conn == C_WF_BITMAP_S)
+		drbd_queue_bitmap_io(mdev, &drbd_send_bitmap, NULL, "send_bitmap (WFBitMapS)");
+
+	/* Lost contact to peer's copy of the data */
+	if ((os.pdsk >= D_INCONSISTENT &&
+	     os.pdsk != D_UNKNOWN &&
+	     os.pdsk != D_OUTDATED)
+	&&  (ns.pdsk < D_INCONSISTENT ||
+	     ns.pdsk == D_UNKNOWN ||
+	     ns.pdsk == D_OUTDATED)) {
+		kfree(mdev->p_uuid);
+		mdev->p_uuid = NULL;
+		if (get_ldev(mdev)) {
+			if ((ns.role == R_PRIMARY || ns.peer == R_PRIMARY) &&
+			    mdev->ldev->md.uuid[UI_BITMAP] == 0 && ns.disk >= D_UP_TO_DATE) {
+				drbd_uuid_new_current(mdev);
+				drbd_send_uuids(mdev);
+			}
+			put_ldev(mdev);
+		}
+	}
+
+	if (ns.pdsk < D_INCONSISTENT && get_ldev(mdev)) {
+		if (ns.peer == R_PRIMARY && mdev->ldev->md.uuid[UI_BITMAP] == 0)
+			drbd_uuid_new_current(mdev);
+
+		/* D_DISKLESS Peer becomes secondary */
+		if (os.peer == R_PRIMARY && ns.peer == R_SECONDARY)
+			drbd_al_to_on_disk_bm(mdev);
+		put_ldev(mdev);
+	}
+
+	/* Last part of the attaching process ... */
+	if (ns.conn >= C_CONNECTED &&
+	    os.disk == D_ATTACHING && ns.disk == D_NEGOTIATING) {
+		kfree(mdev->p_uuid); /* We expect to receive up-to-date UUIDs soon. */
+		mdev->p_uuid = NULL; /* ...so we do not use the old ones in the meantime */
+		drbd_send_sizes(mdev, 0);  /* to start sync... */
+		drbd_send_uuids(mdev);
+		drbd_send_state(mdev);
+	}
+
+	/* We want to pause/continue resync, tell peer. */
+	if (ns.conn >= C_CONNECTED &&
+	     ((os.aftr_isp != ns.aftr_isp) ||
+	      (os.user_isp != ns.user_isp)))
+		drbd_send_state(mdev);
+
+	/* In case one of the isp bits got set, suspend other devices. */
+	if ((!os.aftr_isp && !os.peer_isp && !os.user_isp) &&
+	    (ns.aftr_isp || ns.peer_isp || ns.user_isp))
+		suspend_other_sg(mdev);
+
+	/* Make sure the peer gets informed about any state changes
+	   (ISP bits) that happened while we were in WFReportParams. */
+	if (os.conn == C_WF_REPORT_PARAMS && ns.conn >= C_CONNECTED)
+		drbd_send_state(mdev);
+
+	/* We are in the process of starting a full sync... */
+	if ((os.conn != C_STARTING_SYNC_T && ns.conn == C_STARTING_SYNC_T) ||
+	    (os.conn != C_STARTING_SYNC_S && ns.conn == C_STARTING_SYNC_S))
+		drbd_queue_bitmap_io(mdev, &drbd_bmio_set_n_write, &abw_start_sync, "set_n_write from StartingSync");
+
+	/* We are invalidating ourselves... */
+	if (os.conn < C_CONNECTED && ns.conn < C_CONNECTED &&
+	    os.disk > D_INCONSISTENT && ns.disk == D_INCONSISTENT)
+		drbd_queue_bitmap_io(mdev, &drbd_bmio_set_n_write, NULL, "set_n_write from invalidate");
+
+	if (os.disk > D_FAILED && ns.disk == D_FAILED) {
+		enum drbd_io_error_p eh;
+
+		eh = EP_PASS_ON;
+		if (get_ldev_if_state(mdev, D_FAILED)) {
+			eh = mdev->ldev->dc.on_io_error;
+			put_ldev(mdev);
+		}
+
+		drbd_rs_cancel_all(mdev);
+		/* since get_ldev() only works as long as disk>=D_INCONSISTENT,
+		   and it is D_DISKLESS here, local_cnt can only go down, it can
+		   not increase... It will reach zero */
+		wait_event(mdev->misc_wait, !atomic_read(&mdev->local_cnt));
+		mdev->rs_total = 0;
+		mdev->rs_failed = 0;
+		atomic_set(&mdev->rs_pending_cnt, 0);
+
+		spin_lock_irq(&mdev->req_lock);
+		_drbd_set_state(_NS(mdev, disk, D_DISKLESS), CS_HARD, NULL);
+		spin_unlock_irq(&mdev->req_lock);
+
+		if (eh == EP_CALL_HELPER)
+			drbd_khelper(mdev, "local-io-error");
+	}
+
+	if (os.disk > D_DISKLESS && ns.disk == D_DISKLESS) {
+
+		if (os.disk == D_FAILED) /* && ns.disk == D_DISKLESS*/ {
+			if (drbd_send_state(mdev))
+				dev_warn(DEV, "Notified peer that my disk is broken.\n");
+			else
+				dev_err(DEV, "Sending state in drbd_io_error() failed\n");
+		}
+
+		lc_destroy(mdev->resync);
+		mdev->resync = NULL;
+		lc_destroy(mdev->act_log);
+		mdev->act_log = NULL;
+		__no_warn(local,
+			drbd_free_bc(mdev->ldev);
+			mdev->ldev = NULL;);
+
+		if (mdev->md_io_tmpp)
+			__free_page(mdev->md_io_tmpp);
+	}
+
+	/* Disks got bigger while they were detached */
+	if (ns.disk > D_NEGOTIATING && ns.pdsk > D_NEGOTIATING &&
+	    test_and_clear_bit(RESYNC_AFTER_NEG, &mdev->flags)) {
+		if (ns.conn == C_CONNECTED)
+			resync_after_online_grow(mdev);
+	}
+
+	/* A resync finished or aborted, wake paused devices... */
+	if ((os.conn > C_CONNECTED && ns.conn <= C_CONNECTED) ||
+	    (os.peer_isp && !ns.peer_isp) ||
+	    (os.user_isp && !ns.user_isp))
+		resume_next_sg(mdev);
+
+	/* Upon network connection, we need to start the receiver */
+	if (os.conn == C_STANDALONE && ns.conn == C_UNCONNECTED)
+		drbd_thread_start(&mdev->receiver);
+
+	/* Terminate worker thread if we are unconfigured - it will be
+	   restarted as needed... */
+	if (ns.disk == D_DISKLESS &&
+	    ns.conn == C_STANDALONE &&
+	    ns.role == R_SECONDARY) {
+		if (os.aftr_isp != ns.aftr_isp)
+			resume_next_sg(mdev);
+		/* set in __drbd_set_state, unless CONFIG_PENDING was set */
+		if (test_bit(DEVICE_DYING, &mdev->flags))
+			drbd_thread_stop_nowait(&mdev->worker);
+	}
+
+	drbd_md_sync(mdev);
+}
+
+
+static int drbd_thread_setup(void *arg)
+{
+	struct drbd_thread *thi = (struct drbd_thread *) arg;
+	struct drbd_conf *mdev = thi->mdev;
+	unsigned long flags;
+	int retval;
+
+restart:
+	retval = thi->function(thi);
+
+	spin_lock_irqsave(&thi->t_lock, flags);
+
+	/* if the receiver has been "Exiting", the last thing it did
+	 * was set the conn state to "StandAlone",
+	 * if now a re-connect request comes in, conn state goes C_UNCONNECTED,
+	 * and receiver thread will be "started".
+	 * drbd_thread_start needs to set "Restarting" in that case.
+	 * t_state check and assignment needs to be within the same spinlock,
+	 * so either thread_start sees Exiting, and can remap to Restarting,
+	 * or thread_start sees None, and can proceed as normal.
+ */ + + if (thi->t_state == Restarting) { + dev_info(DEV, "Restarting %s\n", current->comm); + thi->t_state = Running; + spin_unlock_irqrestore(&thi->t_lock, flags); + goto restart; + } + + thi->task = NULL; + thi->t_state = None; + smp_mb(); + complete(&thi->stop); + spin_unlock_irqrestore(&thi->t_lock, flags); + + dev_info(DEV, "Terminating %s\n", current->comm); + + /* Release mod reference taken when thread was started */ + module_put(THIS_MODULE); + return retval; +} + +static void drbd_thread_init(struct drbd_conf *mdev, struct drbd_thread *thi, + int (*func) (struct drbd_thread *)) +{ + spin_lock_init(&thi->t_lock); + thi->task = NULL; + thi->t_state = None; + thi->function = func; + thi->mdev = mdev; +} + +int drbd_thread_start(struct drbd_thread *thi) +{ + struct drbd_conf *mdev = thi->mdev; + struct task_struct *nt; + unsigned long flags; + + const char *me = + thi == &mdev->receiver ? "receiver" : + thi == &mdev->asender ? "asender" : + thi == &mdev->worker ? "worker" : "NONSENSE"; + + /* is used from state engine doing drbd_thread_stop_nowait, + * while holding the req lock irqsave */ + spin_lock_irqsave(&thi->t_lock, flags); + + switch (thi->t_state) { + case None: + dev_info(DEV, "Starting %s thread (from %s [%d])\n", + me, current->comm, current->pid); + + /* Get ref on module for thread - this is released when thread exits */ + if (!try_module_get(THIS_MODULE)) { + dev_err(DEV, "Failed to get module reference in drbd_thread_start\n"); + spin_unlock_irqrestore(&thi->t_lock, flags); + return FALSE; + } + + init_completion(&thi->stop); + D_ASSERT(thi->task == NULL); + thi->reset_cpu_mask = 1; + thi->t_state = Running; + spin_unlock_irqrestore(&thi->t_lock, flags); + flush_signals(current); /* otherw. may get -ERESTARTNOINTR */ + + nt = kthread_create(drbd_thread_setup, (void *) thi, + "drbd%d_%s", mdev_to_minor(mdev), me); + + if (IS_ERR(nt)) { + dev_err(DEV, "Couldn't start thread\n"); + + module_put(THIS_MODULE); + return FALSE; + } + spin_lock_irqsave(&thi->t_lock, flags); + thi->task = nt; + thi->t_state = Running; + spin_unlock_irqrestore(&thi->t_lock, flags); + wake_up_process(nt); + break; + case Exiting: + thi->t_state = Restarting; + dev_info(DEV, "Restarting %s thread (from %s [%d])\n", + me, current->comm, current->pid); + /* fall through */ + case Running: + case Restarting: + default: + spin_unlock_irqrestore(&thi->t_lock, flags); + break; + } + + return TRUE; +} + + +void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait) +{ + unsigned long flags; + + enum drbd_thread_state ns = restart ? Restarting : Exiting; + + /* may be called from state engine, holding the req lock irqsave */ + spin_lock_irqsave(&thi->t_lock, flags); + + if (thi->t_state == None) { + spin_unlock_irqrestore(&thi->t_lock, flags); + if (restart) + drbd_thread_start(thi); + return; + } + + if (thi->t_state != ns) { + if (thi->task == NULL) { + spin_unlock_irqrestore(&thi->t_lock, flags); + return; + } + + thi->t_state = ns; + smp_mb(); + init_completion(&thi->stop); + if (thi->task != current) + force_sig(DRBD_SIGKILL, thi->task); + + } + + spin_unlock_irqrestore(&thi->t_lock, flags); + + if (wait) + wait_for_completion(&thi->stop); +} + +#ifdef CONFIG_SMP +/** + * drbd_calc_cpu_mask() - Generate CPU masks, spread over all CPUs + * @mdev: DRBD device. + * + * Forces all threads of a device onto the same CPU. This is beneficial for + * DRBD's performance. May be overwritten by user's configuration. 
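+ * (illustration: with four CPUs online, minors 0..5 land on CPUs
+ *  0, 1, 2, 3, 0, 1 - a plain minor % nr_cpus round robin)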
+ */ +void drbd_calc_cpu_mask(struct drbd_conf *mdev) +{ + int ord, cpu; + + /* user override. */ + if (cpumask_weight(mdev->cpu_mask)) + return; + + ord = mdev_to_minor(mdev) % cpumask_weight(cpu_online_mask); + for_each_online_cpu(cpu) { + if (ord-- == 0) { + cpumask_set_cpu(cpu, mdev->cpu_mask); + return; + } + } + /* should not be reached */ + cpumask_setall(mdev->cpu_mask); +} + +/** + * drbd_thread_current_set_cpu() - modifies the cpu mask of the _current_ thread + * @mdev: DRBD device. + * + * call in the "main loop" of _all_ threads, no need for any mutex, current won't die + * prematurely. + */ +void drbd_thread_current_set_cpu(struct drbd_conf *mdev) +{ + struct task_struct *p = current; + struct drbd_thread *thi = + p == mdev->asender.task ? &mdev->asender : + p == mdev->receiver.task ? &mdev->receiver : + p == mdev->worker.task ? &mdev->worker : + NULL; + ERR_IF(thi == NULL) + return; + if (!thi->reset_cpu_mask) + return; + thi->reset_cpu_mask = 0; + set_cpus_allowed_ptr(p, mdev->cpu_mask); +} +#endif + +/* the appropriate socket mutex must be held already */ +int _drbd_send_cmd(struct drbd_conf *mdev, struct socket *sock, + enum drbd_packets cmd, struct p_header *h, + size_t size, unsigned msg_flags) +{ + int sent, ok; + + ERR_IF(!h) return FALSE; + ERR_IF(!size) return FALSE; + + h->magic = BE_DRBD_MAGIC; + h->command = cpu_to_be16(cmd); + h->length = cpu_to_be16(size-sizeof(struct p_header)); + + trace_drbd_packet(mdev, sock, 0, (void *)h, __FILE__, __LINE__); + sent = drbd_send(mdev, sock, h, size, msg_flags); + + ok = (sent == size); + if (!ok) + dev_err(DEV, "short sent %s size=%d sent=%d\n", + cmdname(cmd), (int)size, sent); + return ok; +} + +/* don't pass the socket. we may only look at it + * when we hold the appropriate socket mutex. + */ +int drbd_send_cmd(struct drbd_conf *mdev, int use_data_socket, + enum drbd_packets cmd, struct p_header *h, size_t size) +{ + int ok = 0; + struct socket *sock; + + if (use_data_socket) { + mutex_lock(&mdev->data.mutex); + sock = mdev->data.socket; + } else { + mutex_lock(&mdev->meta.mutex); + sock = mdev->meta.socket; + } + + /* drbd_disconnect() could have called drbd_free_sock() + * while we were waiting in down()... */ + if (likely(sock != NULL)) + ok = _drbd_send_cmd(mdev, sock, cmd, h, size, 0); + + if (use_data_socket) + mutex_unlock(&mdev->data.mutex); + else + mutex_unlock(&mdev->meta.mutex); + return ok; +} + +int drbd_send_cmd2(struct drbd_conf *mdev, enum drbd_packets cmd, char *data, + size_t size) +{ + struct p_header h; + int ok; + + h.magic = BE_DRBD_MAGIC; + h.command = cpu_to_be16(cmd); + h.length = cpu_to_be16(size); + + if (!drbd_get_data_sock(mdev)) + return 0; + + trace_drbd_packet(mdev, mdev->data.socket, 0, (void *)&h, __FILE__, __LINE__); + + ok = (sizeof(h) == + drbd_send(mdev, mdev->data.socket, &h, sizeof(h), 0)); + ok = ok && (size == + drbd_send(mdev, mdev->data.socket, data, size, 0)); + + drbd_put_data_sock(mdev); + + return ok; +} + +int drbd_send_sync_param(struct drbd_conf *mdev, struct syncer_conf *sc) +{ + struct p_rs_param_89 *p; + struct socket *sock; + int size, rv; + const int apv = mdev->agreed_pro_version; + + size = apv <= 87 ? sizeof(struct p_rs_param) + : apv == 88 ? sizeof(struct p_rs_param) + + strlen(mdev->sync_conf.verify_alg) + 1 + : /* 89 */ sizeof(struct p_rs_param_89); + + /* used from admin command context and receiver/worker context. 
+ * to avoid kmalloc, grab the socket right here, + * then use the pre-allocated sbuf there */ + mutex_lock(&mdev->data.mutex); + sock = mdev->data.socket; + + if (likely(sock != NULL)) { + enum drbd_packets cmd = apv >= 89 ? P_SYNC_PARAM89 : P_SYNC_PARAM; + + p = &mdev->data.sbuf.rs_param_89; + + /* initialize verify_alg and csums_alg */ + memset(p->verify_alg, 0, 2 * SHARED_SECRET_MAX); + + p->rate = cpu_to_be32(sc->rate); + + if (apv >= 88) + strcpy(p->verify_alg, mdev->sync_conf.verify_alg); + if (apv >= 89) + strcpy(p->csums_alg, mdev->sync_conf.csums_alg); + + rv = _drbd_send_cmd(mdev, sock, cmd, &p->head, size, 0); + } else + rv = 0; /* not ok */ + + mutex_unlock(&mdev->data.mutex); + + return rv; +} + +int drbd_send_protocol(struct drbd_conf *mdev) +{ + struct p_protocol *p; + int size, rv; + + size = sizeof(struct p_protocol); + + if (mdev->agreed_pro_version >= 87) + size += strlen(mdev->net_conf->integrity_alg) + 1; + + /* we must not recurse into our own queue, + * as that is blocked during handshake */ + p = kmalloc(size, GFP_NOIO); + if (p == NULL) + return 0; + + p->protocol = cpu_to_be32(mdev->net_conf->wire_protocol); + p->after_sb_0p = cpu_to_be32(mdev->net_conf->after_sb_0p); + p->after_sb_1p = cpu_to_be32(mdev->net_conf->after_sb_1p); + p->after_sb_2p = cpu_to_be32(mdev->net_conf->after_sb_2p); + p->want_lose = cpu_to_be32(mdev->net_conf->want_lose); + p->two_primaries = cpu_to_be32(mdev->net_conf->two_primaries); + + if (mdev->agreed_pro_version >= 87) + strcpy(p->integrity_alg, mdev->net_conf->integrity_alg); + + rv = drbd_send_cmd(mdev, USE_DATA_SOCKET, P_PROTOCOL, + (struct p_header *)p, size); + kfree(p); + return rv; +} + +int _drbd_send_uuids(struct drbd_conf *mdev, u64 uuid_flags) +{ + struct p_uuids p; + int i; + + if (!get_ldev_if_state(mdev, D_NEGOTIATING)) + return 1; + + for (i = UI_CURRENT; i < UI_SIZE; i++) + p.uuid[i] = mdev->ldev ? cpu_to_be64(mdev->ldev->md.uuid[i]) : 0; + + mdev->comm_bm_set = drbd_bm_total_weight(mdev); + p.uuid[UI_SIZE] = cpu_to_be64(mdev->comm_bm_set); + uuid_flags |= mdev->net_conf->want_lose ? 1 : 0; + uuid_flags |= test_bit(CRASHED_PRIMARY, &mdev->flags) ? 2 : 0; + uuid_flags |= mdev->new_state_tmp.disk == D_INCONSISTENT ? 4 : 0; + p.uuid[UI_FLAGS] = cpu_to_be64(uuid_flags); + + put_ldev(mdev); + + return drbd_send_cmd(mdev, USE_DATA_SOCKET, P_UUIDS, + (struct p_header *)&p, sizeof(p)); +} + +int drbd_send_uuids(struct drbd_conf *mdev) +{ + return _drbd_send_uuids(mdev, 0); +} + +int drbd_send_uuids_skip_initial_sync(struct drbd_conf *mdev) +{ + return _drbd_send_uuids(mdev, 8); +} + + +int drbd_send_sync_uuid(struct drbd_conf *mdev, u64 val) +{ + struct p_rs_uuid p; + + p.uuid = cpu_to_be64(val); + + return drbd_send_cmd(mdev, USE_DATA_SOCKET, P_SYNC_UUID, + (struct p_header *)&p, sizeof(p)); +} + +int drbd_send_sizes(struct drbd_conf *mdev, int trigger_reply) +{ + struct p_sizes p; + sector_t d_size, u_size; + int q_order_type; + int ok; + + if (get_ldev_if_state(mdev, D_NEGOTIATING)) { + D_ASSERT(mdev->ldev->backing_bdev); + d_size = drbd_get_max_capacity(mdev->ldev); + u_size = mdev->ldev->dc.disk_size; + q_order_type = drbd_queue_order_type(mdev); + p.queue_order_type = cpu_to_be32(drbd_queue_order_type(mdev)); + put_ldev(mdev); + } else { + d_size = 0; + u_size = 0; + q_order_type = QUEUE_ORDERED_NONE; + } + + p.d_size = cpu_to_be64(d_size); + p.u_size = cpu_to_be64(u_size); + p.c_size = cpu_to_be64(trigger_reply ? 
0 : drbd_get_capacity(mdev->this_bdev)); + p.max_segment_size = cpu_to_be32(queue_max_segment_size(mdev->rq_queue)); + p.queue_order_type = cpu_to_be32(q_order_type); + + ok = drbd_send_cmd(mdev, USE_DATA_SOCKET, P_SIZES, + (struct p_header *)&p, sizeof(p)); + return ok; +} + +/** + * drbd_send_state() - Sends the drbd state to the peer + * @mdev: DRBD device. + */ +int drbd_send_state(struct drbd_conf *mdev) +{ + struct socket *sock; + struct p_state p; + int ok = 0; + + /* Grab state lock so we wont send state if we're in the middle + * of a cluster wide state change on another thread */ + drbd_state_lock(mdev); + + mutex_lock(&mdev->data.mutex); + + p.state = cpu_to_be32(mdev->state.i); /* Within the send mutex */ + sock = mdev->data.socket; + + if (likely(sock != NULL)) { + ok = _drbd_send_cmd(mdev, sock, P_STATE, + (struct p_header *)&p, sizeof(p), 0); + } + + mutex_unlock(&mdev->data.mutex); + + drbd_state_unlock(mdev); + return ok; +} + +int drbd_send_state_req(struct drbd_conf *mdev, + union drbd_state mask, union drbd_state val) +{ + struct p_req_state p; + + p.mask = cpu_to_be32(mask.i); + p.val = cpu_to_be32(val.i); + + return drbd_send_cmd(mdev, USE_DATA_SOCKET, P_STATE_CHG_REQ, + (struct p_header *)&p, sizeof(p)); +} + +int drbd_send_sr_reply(struct drbd_conf *mdev, int retcode) +{ + struct p_req_state_reply p; + + p.retcode = cpu_to_be32(retcode); + + return drbd_send_cmd(mdev, USE_META_SOCKET, P_STATE_CHG_REPLY, + (struct p_header *)&p, sizeof(p)); +} + +int fill_bitmap_rle_bits(struct drbd_conf *mdev, + struct p_compressed_bm *p, + struct bm_xfer_ctx *c) +{ + struct bitstream bs; + unsigned long plain_bits; + unsigned long tmp; + unsigned long rl; + unsigned len; + unsigned toggle; + int bits; + + /* may we use this feature? */ + if ((mdev->sync_conf.use_rle == 0) || + (mdev->agreed_pro_version < 90)) + return 0; + + if (c->bit_offset >= c->bm_bits) + return 0; /* nothing to do. */ + + /* use at most thus many bytes */ + bitstream_init(&bs, p->code, BM_PACKET_VLI_BYTES_MAX, 0); + memset(p->code, 0, BM_PACKET_VLI_BYTES_MAX); + /* plain bits covered in this code string */ + plain_bits = 0; + + /* p->encoding & 0x80 stores whether the first run length is set. + * bit offset is implicit. + * start with toggle == 2 to be able to tell the first iteration */ + toggle = 2; + + /* see how much plain bits we can stuff into one packet + * using RLE and VLI. */ + do { + tmp = (toggle == 0) ? _drbd_bm_find_next_zero(mdev, c->bit_offset) + : _drbd_bm_find_next(mdev, c->bit_offset); + if (tmp == -1UL) + tmp = c->bm_bits; + rl = tmp - c->bit_offset; + + if (toggle == 2) { /* first iteration */ + if (rl == 0) { + /* the first checked bit was set, + * store start value, */ + DCBP_set_start(p, 1); + /* but skip encoding of zero run length */ + toggle = !toggle; + continue; + } + DCBP_set_start(p, 0); + } + + /* paranoia: catch zero runlength. + * can only happen if bitmap is modified while we scan it. */ + if (rl == 0) { + dev_err(DEV, "unexpected zero runlength while encoding bitmap " + "t:%u bo:%lu\n", toggle, c->bit_offset); + return -1; + } + + bits = vli_encode_bits(&bs, rl); + if (bits == -ENOBUFS) /* buffer full */ + break; + if (bits <= 0) { + dev_err(DEV, "error while encoding bitmap: %d\n", bits); + return 0; + } + + toggle = !toggle; + plain_bits += rl; + c->bit_offset = tmp; + } while (c->bit_offset < c->bm_bits); + + len = bs.cur.b - p->code + !!bs.cur.bit; + + if (plain_bits < (len << 3)) { + /* incompressible with this method. + * we need to rewind both word and bit position. 
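+	 * (len counts code bytes, so len << 3 is the compressed size in
+	 *  bits; compression only wins if that stays below plain_bits)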
*/ + c->bit_offset -= plain_bits; + bm_xfer_ctx_bit_to_word_offset(c); + c->bit_offset = c->word_offset * BITS_PER_LONG; + return 0; + } + + /* RLE + VLI was able to compress it just fine. + * update c->word_offset. */ + bm_xfer_ctx_bit_to_word_offset(c); + + /* store pad_bits */ + DCBP_set_pad_bits(p, (8 - bs.cur.bit) & 0x7); + + return len; +} + +enum { OK, FAILED, DONE } +send_bitmap_rle_or_plain(struct drbd_conf *mdev, + struct p_header *h, struct bm_xfer_ctx *c) +{ + struct p_compressed_bm *p = (void*)h; + unsigned long num_words; + int len; + int ok; + + len = fill_bitmap_rle_bits(mdev, p, c); + + if (len < 0) + return FAILED; + + if (len) { + DCBP_set_code(p, RLE_VLI_Bits); + ok = _drbd_send_cmd(mdev, mdev->data.socket, P_COMPRESSED_BITMAP, h, + sizeof(*p) + len, 0); + + c->packets[0]++; + c->bytes[0] += sizeof(*p) + len; + + if (c->bit_offset >= c->bm_bits) + len = 0; /* DONE */ + } else { + /* was not compressible. + * send a buffer full of plain text bits instead. */ + num_words = min_t(size_t, BM_PACKET_WORDS, c->bm_words - c->word_offset); + len = num_words * sizeof(long); + if (len) + drbd_bm_get_lel(mdev, c->word_offset, num_words, (unsigned long*)h->payload); + ok = _drbd_send_cmd(mdev, mdev->data.socket, P_BITMAP, + h, sizeof(struct p_header) + len, 0); + c->word_offset += num_words; + c->bit_offset = c->word_offset * BITS_PER_LONG; + + c->packets[1]++; + c->bytes[1] += sizeof(struct p_header) + len; + + if (c->bit_offset > c->bm_bits) + c->bit_offset = c->bm_bits; + } + ok = ok ? ((len == 0) ? DONE : OK) : FAILED; + + if (ok == DONE) + INFO_bm_xfer_stats(mdev, "send", c); + return ok; +} + +/* See the comment at receive_bitmap() */ +int _drbd_send_bitmap(struct drbd_conf *mdev) +{ + struct bm_xfer_ctx c; + struct p_header *p; + int ret; + + ERR_IF(!mdev->bitmap) return FALSE; + + /* maybe we should use some per thread scratch page, + * and allocate that during initial device creation? */ + p = (struct p_header *) __get_free_page(GFP_NOIO); + if (!p) { + dev_err(DEV, "failed to allocate one page buffer in %s\n", __func__); + return FALSE; + } + + if (get_ldev(mdev)) { + if (drbd_md_test_flag(mdev->ldev, MDF_FULL_SYNC)) { + dev_info(DEV, "Writing the whole bitmap, MDF_FullSync was set.\n"); + drbd_bm_set_all(mdev); + if (drbd_bm_write(mdev)) { + /* write_bm did fail! Leave full sync flag set in Meta P_DATA + * but otherwise process as per normal - need to tell other + * side that a full resync is required! */ + dev_err(DEV, "Failed to write bitmap to disk!\n"); + } else { + drbd_md_clear_flag(mdev, MDF_FULL_SYNC); + drbd_md_sync(mdev); + } + } + put_ldev(mdev); + } + + c = (struct bm_xfer_ctx) { + .bm_bits = drbd_bm_bits(mdev), + .bm_words = drbd_bm_words(mdev), + }; + + do { + ret = send_bitmap_rle_or_plain(mdev, p, &c); + } while (ret == OK); + + free_page((unsigned long) p); + return (ret == DONE); +} + +int drbd_send_bitmap(struct drbd_conf *mdev) +{ + int err; + + if (!drbd_get_data_sock(mdev)) + return -1; + err = !_drbd_send_bitmap(mdev); + drbd_put_data_sock(mdev); + return err; +} + +int drbd_send_b_ack(struct drbd_conf *mdev, u32 barrier_nr, u32 set_size) +{ + int ok; + struct p_barrier_ack p; + + p.barrier = barrier_nr; + p.set_size = cpu_to_be32(set_size); + + if (mdev->state.conn < C_CONNECTED) + return FALSE; + ok = drbd_send_cmd(mdev, USE_META_SOCKET, P_BARRIER_ACK, + (struct p_header *)&p, sizeof(p)); + return ok; +} + +/** + * _drbd_send_ack() - Sends an ack packet + * @mdev: DRBD device. + * @cmd: Packet command code. 
+ * @sector:	sector, needs to be in big endian byte order
+ * @blksize:	size in bytes, needs to be in big endian byte order
+ * @block_id:	Id, big endian byte order
+ */
+static int _drbd_send_ack(struct drbd_conf *mdev, enum drbd_packets cmd,
+			  u64 sector,
+			  u32 blksize,
+			  u64 block_id)
+{
+	int ok;
+	struct p_block_ack p;
+
+	p.sector = sector;
+	p.block_id = block_id;
+	p.blksize = blksize;
+	p.seq_num = cpu_to_be32(atomic_add_return(1, &mdev->packet_seq));
+
+	if (!mdev->meta.socket || mdev->state.conn < C_CONNECTED)
+		return FALSE;
+	ok = drbd_send_cmd(mdev, USE_META_SOCKET, cmd,
+				(struct p_header *)&p, sizeof(p));
+	return ok;
+}
+
+int drbd_send_ack_dp(struct drbd_conf *mdev, enum drbd_packets cmd,
+		     struct p_data *dp)
+{
+	const int header_size = sizeof(struct p_data)
+			      - sizeof(struct p_header);
+	int data_size = ((struct p_header *)dp)->length - header_size;
+
+	return _drbd_send_ack(mdev, cmd, dp->sector, cpu_to_be32(data_size),
+			      dp->block_id);
+}
+
+int drbd_send_ack_rp(struct drbd_conf *mdev, enum drbd_packets cmd,
+		     struct p_block_req *rp)
+{
+	return _drbd_send_ack(mdev, cmd, rp->sector, rp->blksize, rp->block_id);
+}
+
+/**
+ * drbd_send_ack() - Sends an ack packet
+ * @mdev:	DRBD device.
+ * @cmd:	Packet command code.
+ * @e:	Epoch entry.
+ */
+int drbd_send_ack(struct drbd_conf *mdev,
+	enum drbd_packets cmd, struct drbd_epoch_entry *e)
+{
+	return _drbd_send_ack(mdev, cmd,
+			      cpu_to_be64(e->sector),
+			      cpu_to_be32(e->size),
+			      e->block_id);
+}
+
+/* This function misuses the block_id field to signal if the blocks
+ * are in sync or not. */
+int drbd_send_ack_ex(struct drbd_conf *mdev, enum drbd_packets cmd,
+		     sector_t sector, int blksize, u64 block_id)
+{
+	return _drbd_send_ack(mdev, cmd,
+			      cpu_to_be64(sector),
+			      cpu_to_be32(blksize),
+			      cpu_to_be64(block_id));
+}
+
+int drbd_send_drequest(struct drbd_conf *mdev, int cmd,
+		       sector_t sector, int size, u64 block_id)
+{
+	int ok;
+	struct p_block_req p;
+
+	p.sector = cpu_to_be64(sector);
+	p.block_id = block_id;
+	p.blksize = cpu_to_be32(size);
+
+	ok = drbd_send_cmd(mdev, USE_DATA_SOCKET, cmd,
+				(struct p_header *)&p, sizeof(p));
+	return ok;
+}
+
+int drbd_send_drequest_csum(struct drbd_conf *mdev,
+			    sector_t sector, int size,
+			    void *digest, int digest_size,
+			    enum drbd_packets cmd)
+{
+	int ok;
+	struct p_block_req p;
+
+	p.sector = cpu_to_be64(sector);
+	p.block_id = BE_DRBD_MAGIC + 0xbeef;
+	p.blksize = cpu_to_be32(size);
+
+	p.head.magic = BE_DRBD_MAGIC;
+	p.head.command = cpu_to_be16(cmd);
+	p.head.length = cpu_to_be16(sizeof(p) - sizeof(struct p_header) + digest_size);
+
+	mutex_lock(&mdev->data.mutex);
+
+	ok = (sizeof(p) == drbd_send(mdev, mdev->data.socket, &p, sizeof(p), 0));
+	ok = ok && (digest_size == drbd_send(mdev, mdev->data.socket, digest, digest_size, 0));
+
+	mutex_unlock(&mdev->data.mutex);
+
+	return ok;
+}
+
+int drbd_send_ov_request(struct drbd_conf *mdev, sector_t sector, int size)
+{
+	int ok;
+	struct p_block_req p;
+
+	p.sector = cpu_to_be64(sector);
+	p.block_id = BE_DRBD_MAGIC + 0xbabe;
+	p.blksize = cpu_to_be32(size);
+
+	ok = drbd_send_cmd(mdev, USE_DATA_SOCKET, P_OV_REQUEST,
+			   (struct p_header *)&p, sizeof(p));
+	return ok;
+}
+
+/* called on sndtimeo
+ * returns FALSE if we should retry,
+ * TRUE if we think the connection is dead
+ */
+static int we_should_drop_the_connection(struct drbd_conf *mdev, struct socket *sock)
+{
+	int drop_it;
+	/* long elapsed = (long)(jiffies - mdev->last_received); */
+
+	drop_it = mdev->meta.socket == sock
+		|| !mdev->asender.task
+		|| get_t_state(&mdev->asender) != Running
+		|| mdev->state.conn < C_CONNECTED;
+
+	if (drop_it)
+		return TRUE;
+
+	drop_it = !--mdev->ko_count;
+	if (!drop_it) {
+		dev_err(DEV, "[%s/%d] sock_sendmsg time expired, ko = %u\n",
+		       current->comm, current->pid, mdev->ko_count);
+		request_ping(mdev);
+	}
+
+	return drop_it; /* && (mdev->state == R_PRIMARY) */;
+}
+
+/* The idea of sendpage seems to be to put some kind of reference
+ * to the page into the skb, and to hand it over to the NIC. In
+ * this process get_page() gets called.
+ *
+ * As soon as the page was really sent over the network put_page()
+ * gets called by some part of the network layer. [ NIC driver? ]
+ *
+ * [ get_page() / put_page() increment/decrement the count. If count
+ *   reaches 0 the page will be freed. ]
+ *
+ * This works nicely with pages from FSs.
+ * But this means that in protocol A we might signal IO completion too early!
+ *
+ * In order not to corrupt data during a resync we must make sure
+ * that we do not reuse our own buffer pages (EEs) too early, therefore
+ * we have the net_ee list.
+ *
+ * XFS seems to have problems, still, it submits pages with page_count == 0!
+ * As a workaround, we disable sendpage on pages
+ * with page_count == 0 or PageSlab.
+ */
+static int _drbd_no_send_page(struct drbd_conf *mdev, struct page *page,
+		   int offset, size_t size)
+{
+	int sent = drbd_send(mdev, mdev->data.socket, kmap(page) + offset, size, 0);
+	kunmap(page);
+	if (sent == size)
+		mdev->send_cnt += size>>9;
+	return sent == size;
+}
+
+static int _drbd_send_page(struct drbd_conf *mdev, struct page *page,
+		    int offset, size_t size)
+{
+	mm_segment_t oldfs = get_fs();
+	int sent, ok;
+	int len = size;
+
+	/* e.g. XFS meta- & log-data is in slab pages, which have a
+	 * page_count of 0 and/or have PageSlab() set.
+	 * we cannot use send_page for those, as that does get_page();
+	 * put_page(); and would cause either a VM_BUG directly, or
+	 * __page_cache_release a page that would actually still be referenced
+	 * by someone, leading to some obscure delayed Oops somewhere else.
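+	 * In those cases we fall back to _drbd_no_send_page(), which
+	 * copies the data via kmap() + drbd_send() instead.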
*/ + if (disable_sendpage || (page_count(page) < 1) || PageSlab(page)) + return _drbd_no_send_page(mdev, page, offset, size); + + drbd_update_congested(mdev); + set_fs(KERNEL_DS); + do { + sent = mdev->data.socket->ops->sendpage(mdev->data.socket, page, + offset, len, + MSG_NOSIGNAL); + if (sent == -EAGAIN) { + if (we_should_drop_the_connection(mdev, + mdev->data.socket)) + break; + else + continue; + } + if (sent <= 0) { + dev_warn(DEV, "%s: size=%d len=%d sent=%d\n", + __func__, (int)size, len, sent); + break; + } + len -= sent; + offset += sent; + } while (len > 0 /* THINK && mdev->cstate >= C_CONNECTED*/); + set_fs(oldfs); + clear_bit(NET_CONGESTED, &mdev->flags); + + ok = (len == 0); + if (likely(ok)) + mdev->send_cnt += size>>9; + return ok; +} + +static int _drbd_send_bio(struct drbd_conf *mdev, struct bio *bio) +{ + struct bio_vec *bvec; + int i; + __bio_for_each_segment(bvec, bio, i, 0) { + if (!_drbd_no_send_page(mdev, bvec->bv_page, + bvec->bv_offset, bvec->bv_len)) + return 0; + } + return 1; +} + +static int _drbd_send_zc_bio(struct drbd_conf *mdev, struct bio *bio) +{ + struct bio_vec *bvec; + int i; + __bio_for_each_segment(bvec, bio, i, 0) { + if (!_drbd_send_page(mdev, bvec->bv_page, + bvec->bv_offset, bvec->bv_len)) + return 0; + } + + return 1; +} + +/* Used to send write requests + * R_PRIMARY -> Peer (P_DATA) + */ +int drbd_send_dblock(struct drbd_conf *mdev, struct drbd_request *req) +{ + int ok = 1; + struct p_data p; + unsigned int dp_flags = 0; + void *dgb; + int dgs; + + if (!drbd_get_data_sock(mdev)) + return 0; + + dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_w_tfm) ? + crypto_hash_digestsize(mdev->integrity_w_tfm) : 0; + + p.head.magic = BE_DRBD_MAGIC; + p.head.command = cpu_to_be16(P_DATA); + p.head.length = + cpu_to_be16(sizeof(p) - sizeof(struct p_header) + dgs + req->size); + + p.sector = cpu_to_be64(req->sector); + p.block_id = (unsigned long)req; + p.seq_num = cpu_to_be32(req->seq_num = + atomic_add_return(1, &mdev->packet_seq)); + dp_flags = 0; + + /* NOTE: no need to check if barriers supported here as we would + * not pass the test in make_request_common in that case + */ + if (bio_rw_flagged(req->master_bio, BIO_RW_BARRIER)) { + dev_err(DEV, "ASSERT FAILED would have set DP_HARDBARRIER\n"); + /* dp_flags |= DP_HARDBARRIER; */ + } + if (bio_rw_flagged(req->master_bio, BIO_RW_SYNCIO)) + dp_flags |= DP_RW_SYNC; + /* for now handle SYNCIO and UNPLUG + * as if they still were one and the same flag */ + if (bio_rw_flagged(req->master_bio, BIO_RW_UNPLUG)) + dp_flags |= DP_RW_SYNC; + if (mdev->state.conn >= C_SYNC_SOURCE && + mdev->state.conn <= C_PAUSED_SYNC_T) + dp_flags |= DP_MAY_SET_IN_SYNC; + + p.dp_flags = cpu_to_be32(dp_flags); + trace_drbd_packet(mdev, mdev->data.socket, 0, (void *)&p, __FILE__, __LINE__); + set_bit(UNPLUG_REMOTE, &mdev->flags); + ok = (sizeof(p) == + drbd_send(mdev, mdev->data.socket, &p, sizeof(p), MSG_MORE)); + if (ok && dgs) { + dgb = mdev->int_dig_out; + drbd_csum(mdev, mdev->integrity_w_tfm, req->master_bio, dgb); + ok = drbd_send(mdev, mdev->data.socket, dgb, dgs, MSG_MORE); + } + if (ok) { + if (mdev->net_conf->wire_protocol == DRBD_PROT_A) + ok = _drbd_send_bio(mdev, req->master_bio); + else + ok = _drbd_send_zc_bio(mdev, req->master_bio); + } + + drbd_put_data_sock(mdev); + return ok; +} + +/* answer packet, used to send data back for read requests: + * Peer -> (diskless) R_PRIMARY (P_DATA_REPLY) + * C_SYNC_SOURCE -> C_SYNC_TARGET (P_RS_DATA_REPLY) + */ +int drbd_send_block(struct drbd_conf *mdev, enum 
drbd_packets cmd, + struct drbd_epoch_entry *e) +{ + int ok; + struct p_data p; + void *dgb; + int dgs; + + dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_w_tfm) ? + crypto_hash_digestsize(mdev->integrity_w_tfm) : 0; + + p.head.magic = BE_DRBD_MAGIC; + p.head.command = cpu_to_be16(cmd); + p.head.length = + cpu_to_be16(sizeof(p) - sizeof(struct p_header) + dgs + e->size); + + p.sector = cpu_to_be64(e->sector); + p.block_id = e->block_id; + /* p.seq_num = 0; No sequence numbers here.. */ + + /* Only called by our kernel thread. + * This one may be interrupted by DRBD_SIG and/or DRBD_SIGKILL + * in response to admin command or module unload. + */ + if (!drbd_get_data_sock(mdev)) + return 0; + + trace_drbd_packet(mdev, mdev->data.socket, 0, (void *)&p, __FILE__, __LINE__); + ok = sizeof(p) == drbd_send(mdev, mdev->data.socket, &p, + sizeof(p), MSG_MORE); + if (ok && dgs) { + dgb = mdev->int_dig_out; + drbd_csum(mdev, mdev->integrity_w_tfm, e->private_bio, dgb); + ok = drbd_send(mdev, mdev->data.socket, dgb, dgs, MSG_MORE); + } + if (ok) + ok = _drbd_send_zc_bio(mdev, e->private_bio); + + drbd_put_data_sock(mdev); + return ok; +} + +/* + drbd_send distinguishes two cases: + + Packets sent via the data socket "sock" + and packets sent via the meta data socket "msock" + + sock msock + -----------------+-------------------------+------------------------------ + timeout conf.timeout / 2 conf.timeout / 2 + timeout action send a ping via msock Abort communication + and close all sockets +*/ + +/* + * you must have down()ed the appropriate [m]sock_mutex elsewhere! + */ +int drbd_send(struct drbd_conf *mdev, struct socket *sock, + void *buf, size_t size, unsigned msg_flags) +{ + struct kvec iov; + struct msghdr msg; + int rv, sent = 0; + + if (!sock) + return -1000; + + /* THINK if (signal_pending) return ... ? */ + + iov.iov_base = buf; + iov.iov_len = size; + + msg.msg_name = NULL; + msg.msg_namelen = 0; + msg.msg_control = NULL; + msg.msg_controllen = 0; + msg.msg_flags = msg_flags | MSG_NOSIGNAL; + + if (sock == mdev->data.socket) { + mdev->ko_count = mdev->net_conf->ko_count; + drbd_update_congested(mdev); + } + do { + /* STRANGE + * tcp_sendmsg does _not_ use its size parameter at all ? + * + * -EAGAIN on timeout, -EINTR on signal. + */ +/* THINK + * do we need to block DRBD_SIG if sock == &meta.socket ?? + * otherwise wake_asender() might interrupt some send_*Ack ! + */ + rv = kernel_sendmsg(sock, &msg, &iov, 1, size); + if (rv == -EAGAIN) { + if (we_should_drop_the_connection(mdev, sock)) + break; + else + continue; + } + D_ASSERT(rv != 0); + if (rv == -EINTR) { + flush_signals(current); + rv = 0; + } + if (rv < 0) + break; + sent += rv; + iov.iov_base += rv; + iov.iov_len -= rv; + } while (sent < size); + + if (sock == mdev->data.socket) + clear_bit(NET_CONGESTED, &mdev->flags); + + if (rv <= 0) { + if (rv != -EAGAIN) { + dev_err(DEV, "%s_sendmsg returned %d\n", + sock == mdev->meta.socket ? 
"msock" : "sock", + rv); + drbd_force_state(mdev, NS(conn, C_BROKEN_PIPE)); + } else + drbd_force_state(mdev, NS(conn, C_TIMEOUT)); + } + + return sent; +} + +static int drbd_open(struct block_device *bdev, fmode_t mode) +{ + struct drbd_conf *mdev = bdev->bd_disk->private_data; + unsigned long flags; + int rv = 0; + + spin_lock_irqsave(&mdev->req_lock, flags); + /* to have a stable mdev->state.role + * and no race with updating open_cnt */ + + if (mdev->state.role != R_PRIMARY) { + if (mode & FMODE_WRITE) + rv = -EROFS; + else if (!allow_oos) + rv = -EMEDIUMTYPE; + } + + if (!rv) + mdev->open_cnt++; + spin_unlock_irqrestore(&mdev->req_lock, flags); + + return rv; +} + +static int drbd_release(struct gendisk *gd, fmode_t mode) +{ + struct drbd_conf *mdev = gd->private_data; + mdev->open_cnt--; + return 0; +} + +static void drbd_unplug_fn(struct request_queue *q) +{ + struct drbd_conf *mdev = q->queuedata; + + trace_drbd_unplug(mdev, "got unplugged"); + + /* unplug FIRST */ + spin_lock_irq(q->queue_lock); + blk_remove_plug(q); + spin_unlock_irq(q->queue_lock); + + /* only if connected */ + spin_lock_irq(&mdev->req_lock); + if (mdev->state.pdsk >= D_INCONSISTENT && mdev->state.conn >= C_CONNECTED) { + D_ASSERT(mdev->state.role == R_PRIMARY); + if (test_and_clear_bit(UNPLUG_REMOTE, &mdev->flags)) { + /* add to the data.work queue, + * unless already queued. + * XXX this might be a good addition to drbd_queue_work + * anyways, to detect "double queuing" ... */ + if (list_empty(&mdev->unplug_work.list)) + drbd_queue_work(&mdev->data.work, + &mdev->unplug_work); + } + } + spin_unlock_irq(&mdev->req_lock); + + if (mdev->state.disk >= D_INCONSISTENT) + drbd_kick_lo(mdev); +} + +static void drbd_set_defaults(struct drbd_conf *mdev) +{ + mdev->sync_conf.after = DRBD_AFTER_DEF; + mdev->sync_conf.rate = DRBD_RATE_DEF; + mdev->sync_conf.al_extents = DRBD_AL_EXTENTS_DEF; + mdev->state = (union drbd_state) { + { .role = R_SECONDARY, + .peer = R_UNKNOWN, + .conn = C_STANDALONE, + .disk = D_DISKLESS, + .pdsk = D_UNKNOWN, + .susp = 0 + } }; +} + +void drbd_init_set_defaults(struct drbd_conf *mdev) +{ + /* the memset(,0,) did most of this. 
+ * note: only assignments, no allocation in here */ + + drbd_set_defaults(mdev); + + /* for now, we do NOT yet support it, + * even though we start some framework + * to eventually support barriers */ + set_bit(NO_BARRIER_SUPP, &mdev->flags); + + atomic_set(&mdev->ap_bio_cnt, 0); + atomic_set(&mdev->ap_pending_cnt, 0); + atomic_set(&mdev->rs_pending_cnt, 0); + atomic_set(&mdev->unacked_cnt, 0); + atomic_set(&mdev->local_cnt, 0); + atomic_set(&mdev->net_cnt, 0); + atomic_set(&mdev->packet_seq, 0); + atomic_set(&mdev->pp_in_use, 0); + + mutex_init(&mdev->md_io_mutex); + mutex_init(&mdev->data.mutex); + mutex_init(&mdev->meta.mutex); + sema_init(&mdev->data.work.s, 0); + sema_init(&mdev->meta.work.s, 0); + mutex_init(&mdev->state_mutex); + + spin_lock_init(&mdev->data.work.q_lock); + spin_lock_init(&mdev->meta.work.q_lock); + + spin_lock_init(&mdev->al_lock); + spin_lock_init(&mdev->req_lock); + spin_lock_init(&mdev->peer_seq_lock); + spin_lock_init(&mdev->epoch_lock); + + INIT_LIST_HEAD(&mdev->active_ee); + INIT_LIST_HEAD(&mdev->sync_ee); + INIT_LIST_HEAD(&mdev->done_ee); + INIT_LIST_HEAD(&mdev->read_ee); + INIT_LIST_HEAD(&mdev->net_ee); + INIT_LIST_HEAD(&mdev->resync_reads); + INIT_LIST_HEAD(&mdev->data.work.q); + INIT_LIST_HEAD(&mdev->meta.work.q); + INIT_LIST_HEAD(&mdev->resync_work.list); + INIT_LIST_HEAD(&mdev->unplug_work.list); + INIT_LIST_HEAD(&mdev->md_sync_work.list); + INIT_LIST_HEAD(&mdev->bm_io_work.w.list); + mdev->resync_work.cb = w_resync_inactive; + mdev->unplug_work.cb = w_send_write_hint; + mdev->md_sync_work.cb = w_md_sync; + mdev->bm_io_work.w.cb = w_bitmap_io; + init_timer(&mdev->resync_timer); + init_timer(&mdev->md_sync_timer); + mdev->resync_timer.function = resync_timer_fn; + mdev->resync_timer.data = (unsigned long) mdev; + mdev->md_sync_timer.function = md_sync_timer_fn; + mdev->md_sync_timer.data = (unsigned long) mdev; + + init_waitqueue_head(&mdev->misc_wait); + init_waitqueue_head(&mdev->state_wait); + init_waitqueue_head(&mdev->ee_wait); + init_waitqueue_head(&mdev->al_wait); + init_waitqueue_head(&mdev->seq_wait); + + drbd_thread_init(mdev, &mdev->receiver, drbdd_init); + drbd_thread_init(mdev, &mdev->worker, drbd_worker); + drbd_thread_init(mdev, &mdev->asender, drbd_asender); + + mdev->agreed_pro_version = PRO_VERSION_MAX; + mdev->write_ordering = WO_bio_barrier; + mdev->resync_wenr = LC_FREE; +} + +void drbd_mdev_cleanup(struct drbd_conf *mdev) +{ + if (mdev->receiver.t_state != None) + dev_err(DEV, "ASSERT FAILED: receiver t_state == %d expected 0.\n", + mdev->receiver.t_state); + + /* no need to lock it, I'm the only thread alive */ + if (atomic_read(&mdev->current_epoch->epoch_size) != 0) + dev_err(DEV, "epoch_size:%d\n", atomic_read(&mdev->current_epoch->epoch_size)); + mdev->al_writ_cnt = + mdev->bm_writ_cnt = + mdev->read_cnt = + mdev->recv_cnt = + mdev->send_cnt = + mdev->writ_cnt = + mdev->p_size = + mdev->rs_start = + mdev->rs_total = + mdev->rs_failed = + mdev->rs_mark_left = + mdev->rs_mark_time = 0; + D_ASSERT(mdev->net_conf == NULL); + + drbd_set_my_capacity(mdev, 0); + if (mdev->bitmap) { + /* maybe never allocated. */ + drbd_bm_resize(mdev, 0); + drbd_bm_cleanup(mdev); + } + + drbd_free_resources(mdev); + + /* + * currently we drbd_init_ee only on module load, so + * we may do drbd_release_ee only on module unload! 
+ */
+	D_ASSERT(list_empty(&mdev->active_ee));
+	D_ASSERT(list_empty(&mdev->sync_ee));
+	D_ASSERT(list_empty(&mdev->done_ee));
+	D_ASSERT(list_empty(&mdev->read_ee));
+	D_ASSERT(list_empty(&mdev->net_ee));
+	D_ASSERT(list_empty(&mdev->resync_reads));
+	D_ASSERT(list_empty(&mdev->data.work.q));
+	D_ASSERT(list_empty(&mdev->meta.work.q));
+	D_ASSERT(list_empty(&mdev->resync_work.list));
+	D_ASSERT(list_empty(&mdev->unplug_work.list));
+
+}
+
+
+static void drbd_destroy_mempools(void)
+{
+	struct page *page;
+
+	while (drbd_pp_pool) {
+		page = drbd_pp_pool;
+		drbd_pp_pool = (struct page *)page_private(page);
+		__free_page(page);
+		drbd_pp_vacant--;
+	}
+
+	/* D_ASSERT(atomic_read(&drbd_pp_vacant)==0); */
+
+	if (drbd_ee_mempool)
+		mempool_destroy(drbd_ee_mempool);
+	if (drbd_request_mempool)
+		mempool_destroy(drbd_request_mempool);
+	if (drbd_ee_cache)
+		kmem_cache_destroy(drbd_ee_cache);
+	if (drbd_request_cache)
+		kmem_cache_destroy(drbd_request_cache);
+	if (drbd_bm_ext_cache)
+		kmem_cache_destroy(drbd_bm_ext_cache);
+	if (drbd_al_ext_cache)
+		kmem_cache_destroy(drbd_al_ext_cache);
+
+	drbd_ee_mempool = NULL;
+	drbd_request_mempool = NULL;
+	drbd_ee_cache = NULL;
+	drbd_request_cache = NULL;
+	drbd_bm_ext_cache = NULL;
+	drbd_al_ext_cache = NULL;
+
+	return;
+}
+
+static int drbd_create_mempools(void)
+{
+	struct page *page;
+	const int number = (DRBD_MAX_SEGMENT_SIZE/PAGE_SIZE) * minor_count;
+	int i;
+
+	/* prepare our caches and mempools */
+	drbd_request_mempool = NULL;
+	drbd_ee_cache = NULL;
+	drbd_request_cache = NULL;
+	drbd_bm_ext_cache = NULL;
+	drbd_al_ext_cache = NULL;
+	drbd_pp_pool = NULL;
+
+	/* caches */
+	drbd_request_cache = kmem_cache_create(
+		"drbd_req", sizeof(struct drbd_request), 0, 0, NULL);
+	if (drbd_request_cache == NULL)
+		goto Enomem;
+
+	drbd_ee_cache = kmem_cache_create(
+		"drbd_ee", sizeof(struct drbd_epoch_entry), 0, 0, NULL);
+	if (drbd_ee_cache == NULL)
+		goto Enomem;
+
+	drbd_bm_ext_cache = kmem_cache_create(
+		"drbd_bm", sizeof(struct bm_extent), 0, 0, NULL);
+	if (drbd_bm_ext_cache == NULL)
+		goto Enomem;
+
+	drbd_al_ext_cache = kmem_cache_create(
+		"drbd_al", sizeof(struct lc_element), 0, 0, NULL);
+	if (drbd_al_ext_cache == NULL)
+		goto Enomem;
+
+	/* mempools */
+	drbd_request_mempool = mempool_create(number,
+		mempool_alloc_slab, mempool_free_slab, drbd_request_cache);
+	if (drbd_request_mempool == NULL)
+		goto Enomem;
+
+	drbd_ee_mempool = mempool_create(number,
+		mempool_alloc_slab, mempool_free_slab, drbd_ee_cache);
+	if (drbd_ee_mempool == NULL)
+		goto Enomem;
+
+	/* drbd's page pool */
+	spin_lock_init(&drbd_pp_lock);
+
+	for (i = 0; i < number; i++) {
+		page = alloc_page(GFP_HIGHUSER);
+		if (!page)
+			goto Enomem;
+		set_page_private(page, (unsigned long)drbd_pp_pool);
+		drbd_pp_pool = page;
+	}
+	drbd_pp_vacant = number;
+
+	return 0;
+
+Enomem:
+	drbd_destroy_mempools(); /* in case we allocated some */
+	return -ENOMEM;
+}
+
+static int drbd_notify_sys(struct notifier_block *this, unsigned long code,
+	void *unused)
+{
+	/* just so we have it. you never know what interesting things we
+	 * might want to do here some day...
+ */ + + return NOTIFY_DONE; +} + +static struct notifier_block drbd_notifier = { + .notifier_call = drbd_notify_sys, +}; + +static void drbd_release_ee_lists(struct drbd_conf *mdev) +{ + int rr; + + rr = drbd_release_ee(mdev, &mdev->active_ee); + if (rr) + dev_err(DEV, "%d EEs in active list found!\n", rr); + + rr = drbd_release_ee(mdev, &mdev->sync_ee); + if (rr) + dev_err(DEV, "%d EEs in sync list found!\n", rr); + + rr = drbd_release_ee(mdev, &mdev->read_ee); + if (rr) + dev_err(DEV, "%d EEs in read list found!\n", rr); + + rr = drbd_release_ee(mdev, &mdev->done_ee); + if (rr) + dev_err(DEV, "%d EEs in done list found!\n", rr); + + rr = drbd_release_ee(mdev, &mdev->net_ee); + if (rr) + dev_err(DEV, "%d EEs in net list found!\n", rr); +} + +/* caution. no locking. + * currently only used from module cleanup code. */ +static void drbd_delete_device(unsigned int minor) +{ + struct drbd_conf *mdev = minor_to_mdev(minor); + + if (!mdev) + return; + + /* paranoia asserts */ + if (mdev->open_cnt != 0) + dev_err(DEV, "open_cnt = %d in %s:%u", mdev->open_cnt, + __FILE__ , __LINE__); + + ERR_IF (!list_empty(&mdev->data.work.q)) { + struct list_head *lp; + list_for_each(lp, &mdev->data.work.q) { + dev_err(DEV, "lp = %p\n", lp); + } + }; + /* end paranoia asserts */ + + del_gendisk(mdev->vdisk); + + /* cleanup stuff that may have been allocated during + * device (re-)configuration or state changes */ + + if (mdev->this_bdev) + bdput(mdev->this_bdev); + + drbd_free_resources(mdev); + + drbd_release_ee_lists(mdev); + + /* should be free'd on disconnect? */ + kfree(mdev->ee_hash); + /* + mdev->ee_hash_s = 0; + mdev->ee_hash = NULL; + */ + + lc_destroy(mdev->act_log); + lc_destroy(mdev->resync); + + kfree(mdev->p_uuid); + /* mdev->p_uuid = NULL; */ + + kfree(mdev->int_dig_out); + kfree(mdev->int_dig_in); + kfree(mdev->int_dig_vv); + + /* cleanup the rest that has been + * allocated from drbd_new_device + * and actually free the mdev itself */ + drbd_free_mdev(mdev); +} + +static void drbd_cleanup(void) +{ + unsigned int i; + + unregister_reboot_notifier(&drbd_notifier); + + drbd_nl_cleanup(); + + if (minor_table) { + if (drbd_proc) + remove_proc_entry("drbd", NULL); + i = minor_count; + while (i--) + drbd_delete_device(i); + drbd_destroy_mempools(); + } + + kfree(minor_table); + + unregister_blkdev(DRBD_MAJOR, "drbd"); + + printk(KERN_INFO "drbd: module cleanup done.\n"); +} + +/** + * drbd_congested() - Callback for pdflush + * @congested_data: User data + * @bdi_bits: Bits pdflush is currently interested in + * + * Returns 1<ldev->backing_bdev); + r = bdi_congested(&q->backing_dev_info, bdi_bits); + put_ldev(mdev); + if (r) + reason = 'b'; + } + + if (bdi_bits & (1 << BDI_async_congested) && test_bit(NET_CONGESTED, &mdev->flags)) { + r |= (1 << BDI_async_congested); + reason = reason == 'b' ? 
'a' : 'n'; + } + +out: + mdev->congestion_reason = reason; + return r; +} + +struct drbd_conf *drbd_new_device(unsigned int minor) +{ + struct drbd_conf *mdev; + struct gendisk *disk; + struct request_queue *q; + + /* GFP_KERNEL, we are outside of all write-out paths */ + mdev = kzalloc(sizeof(struct drbd_conf), GFP_KERNEL); + if (!mdev) + return NULL; + if (!zalloc_cpumask_var(&mdev->cpu_mask, GFP_KERNEL)) + goto out_no_cpumask; + + mdev->minor = minor; + + drbd_init_set_defaults(mdev); + + q = blk_alloc_queue(GFP_KERNEL); + if (!q) + goto out_no_q; + mdev->rq_queue = q; + q->queuedata = mdev; + blk_queue_max_segment_size(q, DRBD_MAX_SEGMENT_SIZE); + + disk = alloc_disk(1); + if (!disk) + goto out_no_disk; + mdev->vdisk = disk; + + set_disk_ro(disk, TRUE); + + disk->queue = q; + disk->major = DRBD_MAJOR; + disk->first_minor = minor; + disk->fops = &drbd_ops; + sprintf(disk->disk_name, "drbd%d", minor); + disk->private_data = mdev; + + mdev->this_bdev = bdget(MKDEV(DRBD_MAJOR, minor)); + /* we have no partitions. we contain only ourselves. */ + mdev->this_bdev->bd_contains = mdev->this_bdev; + + q->backing_dev_info.congested_fn = drbd_congested; + q->backing_dev_info.congested_data = mdev; + + blk_queue_make_request(q, drbd_make_request_26); + blk_queue_bounce_limit(q, BLK_BOUNCE_ANY); + blk_queue_merge_bvec(q, drbd_merge_bvec); + q->queue_lock = &mdev->req_lock; /* needed since we use */ + /* plugging on a queue, that actually has no requests! */ + q->unplug_fn = drbd_unplug_fn; + + mdev->md_io_page = alloc_page(GFP_KERNEL); + if (!mdev->md_io_page) + goto out_no_io_page; + + if (drbd_bm_init(mdev)) + goto out_no_bitmap; + /* no need to lock access, we are still initializing this minor device. */ + if (!tl_init(mdev)) + goto out_no_tl; + + mdev->app_reads_hash = kzalloc(APP_R_HSIZE*sizeof(void *), GFP_KERNEL); + if (!mdev->app_reads_hash) + goto out_no_app_reads; + + mdev->current_epoch = kzalloc(sizeof(struct drbd_epoch), GFP_KERNEL); + if (!mdev->current_epoch) + goto out_no_epoch; + + INIT_LIST_HEAD(&mdev->current_epoch->list); + mdev->epochs = 1; + + return mdev; + +/* out_whatever_else: + kfree(mdev->current_epoch); */ +out_no_epoch: + kfree(mdev->app_reads_hash); +out_no_app_reads: + tl_cleanup(mdev); +out_no_tl: + drbd_bm_cleanup(mdev); +out_no_bitmap: + __free_page(mdev->md_io_page); +out_no_io_page: + put_disk(disk); +out_no_disk: + blk_cleanup_queue(q); +out_no_q: + free_cpumask_var(mdev->cpu_mask); +out_no_cpumask: + kfree(mdev); + return NULL; +} + +/* counterpart of drbd_new_device. + * last part of drbd_delete_device. */ +void drbd_free_mdev(struct drbd_conf *mdev) +{ + kfree(mdev->current_epoch); + kfree(mdev->app_reads_hash); + tl_cleanup(mdev); + if (mdev->bitmap) /* should no longer be there. 
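The error handling in drbd_new_device() above is the standard kernel unwind ladder: each acquisition gets a label, and a failure at step N jumps to the label that releases steps N-1 down to 1 in strict reverse order, so nothing is leaked or double-freed. A condensed sketch of the shape (struct thing and its members are illustrative):

    #include <linux/slab.h>

    struct thing { void *a; void *b; };

    static struct thing *thing_new(void)
    {
            struct thing *t = kzalloc(sizeof(*t), GFP_KERNEL);

            if (!t)
                    return NULL;
            t->a = kzalloc(64, GFP_KERNEL);
            if (!t->a)
                    goto out_no_a;
            t->b = kzalloc(64, GFP_KERNEL);
            if (!t->b)
                    goto out_no_b;
            return t;

    out_no_b:
            kfree(t->a);    /* undo in reverse order of acquisition */
    out_no_a:
            kfree(t);
            return NULL;
    }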
*/ + drbd_bm_cleanup(mdev); + __free_page(mdev->md_io_page); + put_disk(mdev->vdisk); + blk_cleanup_queue(mdev->rq_queue); + free_cpumask_var(mdev->cpu_mask); + kfree(mdev); +} + + +int __init drbd_init(void) +{ + int err; + + if (sizeof(struct p_handshake) != 80) { + printk(KERN_ERR + "drbd: never change the size or layout " + "of the HandShake packet.\n"); + return -EINVAL; + } + + if (1 > minor_count || minor_count > 255) { + printk(KERN_ERR + "drbd: invalid minor_count (%d)\n", minor_count); +#ifdef MODULE + return -EINVAL; +#else + minor_count = 8; +#endif + } + + err = drbd_nl_init(); + if (err) + return err; + + err = register_blkdev(DRBD_MAJOR, "drbd"); + if (err) { + printk(KERN_ERR + "drbd: unable to register block device major %d\n", + DRBD_MAJOR); + return err; + } + + register_reboot_notifier(&drbd_notifier); + + /* + * allocate all necessary structs + */ + err = -ENOMEM; + + init_waitqueue_head(&drbd_pp_wait); + + drbd_proc = NULL; /* play safe for drbd_cleanup */ + minor_table = kzalloc(sizeof(struct drbd_conf *)*minor_count, + GFP_KERNEL); + if (!minor_table) + goto Enomem; + + err = drbd_create_mempools(); + if (err) + goto Enomem; + + drbd_proc = proc_create("drbd", S_IFREG | S_IRUGO , NULL, &drbd_proc_fops); + if (!drbd_proc) { + printk(KERN_ERR "drbd: unable to register proc file\n"); + goto Enomem; + } + + rwlock_init(&global_state_lock); + + printk(KERN_INFO "drbd: initialized. " + "Version: " REL_VERSION " (api:%d/proto:%d-%d)\n", + API_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX); + printk(KERN_INFO "drbd: %s\n", drbd_buildtag()); + printk(KERN_INFO "drbd: registered as block device major %d\n", + DRBD_MAJOR); + printk(KERN_INFO "drbd: minor_table @ 0x%p\n", minor_table); + + return 0; /* Success! */ + +Enomem: + drbd_cleanup(); + if (err == -ENOMEM) + /* currently always the case */ + printk(KERN_ERR "drbd: ran out of memory\n"); + else + printk(KERN_ERR "drbd: initialization failure\n"); + return err; +} + +void drbd_free_bc(struct drbd_backing_dev *ldev) +{ + if (ldev == NULL) + return; + + bd_release(ldev->backing_bdev); + bd_release(ldev->md_bdev); + + fput(ldev->lo_file); + fput(ldev->md_file); + + kfree(ldev); +} + +void drbd_free_sock(struct drbd_conf *mdev) +{ + if (mdev->data.socket) { + kernel_sock_shutdown(mdev->data.socket, SHUT_RDWR); + sock_release(mdev->data.socket); + mdev->data.socket = NULL; + } + if (mdev->meta.socket) { + kernel_sock_shutdown(mdev->meta.socket, SHUT_RDWR); + sock_release(mdev->meta.socket); + mdev->meta.socket = NULL; + } +} + + +void drbd_free_resources(struct drbd_conf *mdev) +{ + crypto_free_hash(mdev->csums_tfm); + mdev->csums_tfm = NULL; + crypto_free_hash(mdev->verify_tfm); + mdev->verify_tfm = NULL; + crypto_free_hash(mdev->cram_hmac_tfm); + mdev->cram_hmac_tfm = NULL; + crypto_free_hash(mdev->integrity_w_tfm); + mdev->integrity_w_tfm = NULL; + crypto_free_hash(mdev->integrity_r_tfm); + mdev->integrity_r_tfm = NULL; + + drbd_free_sock(mdev); + + __no_warn(local, + drbd_free_bc(mdev->ldev); + mdev->ldev = NULL;); +} + +/* meta data management */ + +struct meta_data_on_disk { + u64 la_size; /* last agreed size. */ + u64 uuid[UI_SIZE]; /* UUIDs. 
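The sizeof(struct p_handshake) != 80 test in drbd_init() guards on-the-wire compatibility at module load time. The same invariant could be pinned at compile time instead; a one-line sketch:

    BUILD_BUG_ON(sizeof(struct p_handshake) != 80);  /* layout is part of the protocol */

The runtime variant used by the patch trades the earlier failure for a readable error message at load time; either way the module refuses to operate with a mismatched packet layout.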
*/ + u64 device_uuid; + u64 reserved_u64_1; + u32 flags; /* MDF */ + u32 magic; + u32 md_size_sect; + u32 al_offset; /* offset to this block */ + u32 al_nr_extents; /* important for restoring the AL */ + /* `-- act_log->nr_elements <-- sync_conf.al_extents */ + u32 bm_offset; /* offset to the bitmap, from here */ + u32 bm_bytes_per_bit; /* BM_BLOCK_SIZE */ + u32 reserved_u32[4]; + +} __packed; + +/** + * drbd_md_sync() - Writes the meta data super block if the MD_DIRTY flag bit is set + * @mdev: DRBD device. + */ +void drbd_md_sync(struct drbd_conf *mdev) +{ + struct meta_data_on_disk *buffer; + sector_t sector; + int i; + + if (!test_and_clear_bit(MD_DIRTY, &mdev->flags)) + return; + del_timer(&mdev->md_sync_timer); + + /* We use here D_FAILED and not D_ATTACHING because we try to write + * metadata even if we detach due to a disk failure! */ + if (!get_ldev_if_state(mdev, D_FAILED)) + return; + + trace_drbd_md_io(mdev, WRITE, mdev->ldev); + + mutex_lock(&mdev->md_io_mutex); + buffer = (struct meta_data_on_disk *)page_address(mdev->md_io_page); + memset(buffer, 0, 512); + + buffer->la_size = cpu_to_be64(drbd_get_capacity(mdev->this_bdev)); + for (i = UI_CURRENT; i < UI_SIZE; i++) + buffer->uuid[i] = cpu_to_be64(mdev->ldev->md.uuid[i]); + buffer->flags = cpu_to_be32(mdev->ldev->md.flags); + buffer->magic = cpu_to_be32(DRBD_MD_MAGIC); + + buffer->md_size_sect = cpu_to_be32(mdev->ldev->md.md_size_sect); + buffer->al_offset = cpu_to_be32(mdev->ldev->md.al_offset); + buffer->al_nr_extents = cpu_to_be32(mdev->act_log->nr_elements); + buffer->bm_bytes_per_bit = cpu_to_be32(BM_BLOCK_SIZE); + buffer->device_uuid = cpu_to_be64(mdev->ldev->md.device_uuid); + + buffer->bm_offset = cpu_to_be32(mdev->ldev->md.bm_offset); + + D_ASSERT(drbd_md_ss__(mdev, mdev->ldev) == mdev->ldev->md.md_offset); + sector = mdev->ldev->md.md_offset; + + if (drbd_md_sync_page_io(mdev, mdev->ldev, sector, WRITE)) { + clear_bit(MD_DIRTY, &mdev->flags); + } else { + /* this was a try anyways ... */ + dev_err(DEV, "meta data update failed!\n"); + + drbd_chk_io_error(mdev, 1, TRUE); + } + + /* Update mdev->ldev->md.la_size_sect, + * since we updated it on metadata. */ + mdev->ldev->md.la_size_sect = drbd_get_capacity(mdev->this_bdev); + + mutex_unlock(&mdev->md_io_mutex); + put_ldev(mdev); +} + +/** + * drbd_md_read() - Reads in the meta data super block + * @mdev: DRBD device. + * @bdev: Device from which the meta data should be read in. + * + * Return 0 (NO_ERROR) on success, and an enum drbd_ret_codes in case + * something goes wrong. Currently only: ERR_IO_MD_DISK, ERR_MD_INVALID. 
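Every field of struct meta_data_on_disk is converted with cpu_to_be*() before writing and be*_to_cpu() after reading, so a meta-data device can migrate between hosts of different byte order. A minimal round-trip sketch of the pattern (on_disk/in_core and the __be-typed fields are illustrative; the patch itself declares the on-disk fields as plain u32/u64):

    #include <linux/types.h>
    #include <asm/byteorder.h>

    struct on_disk { __be32 flags; __be64 size; } __packed;
    struct in_core { u32 flags; u64 size; };

    static void put_md(struct on_disk *d, const struct in_core *c)
    {
            d->flags = cpu_to_be32(c->flags);   /* always big-endian on disk */
            d->size  = cpu_to_be64(c->size);
    }

    static void get_md(struct in_core *c, const struct on_disk *d)
    {
            c->flags = be32_to_cpu(d->flags);   /* back to host order */
            c->size  = be64_to_cpu(d->size);
    }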
+ */ +int drbd_md_read(struct drbd_conf *mdev, struct drbd_backing_dev *bdev) +{ + struct meta_data_on_disk *buffer; + int i, rv = NO_ERROR; + + if (!get_ldev_if_state(mdev, D_ATTACHING)) + return ERR_IO_MD_DISK; + + trace_drbd_md_io(mdev, READ, bdev); + + mutex_lock(&mdev->md_io_mutex); + buffer = (struct meta_data_on_disk *)page_address(mdev->md_io_page); + + if (!drbd_md_sync_page_io(mdev, bdev, bdev->md.md_offset, READ)) { + /* NOTE: can't do normal error processing here as this is + called BEFORE disk is attached */ + dev_err(DEV, "Error while reading metadata.\n"); + rv = ERR_IO_MD_DISK; + goto err; + } + + if (be32_to_cpu(buffer->magic) != DRBD_MD_MAGIC) { + dev_err(DEV, "Error while reading metadata, magic not found.\n"); + rv = ERR_MD_INVALID; + goto err; + } + if (be32_to_cpu(buffer->al_offset) != bdev->md.al_offset) { + dev_err(DEV, "unexpected al_offset: %d (expected %d)\n", + be32_to_cpu(buffer->al_offset), bdev->md.al_offset); + rv = ERR_MD_INVALID; + goto err; + } + if (be32_to_cpu(buffer->bm_offset) != bdev->md.bm_offset) { + dev_err(DEV, "unexpected bm_offset: %d (expected %d)\n", + be32_to_cpu(buffer->bm_offset), bdev->md.bm_offset); + rv = ERR_MD_INVALID; + goto err; + } + if (be32_to_cpu(buffer->md_size_sect) != bdev->md.md_size_sect) { + dev_err(DEV, "unexpected md_size: %u (expected %u)\n", + be32_to_cpu(buffer->md_size_sect), bdev->md.md_size_sect); + rv = ERR_MD_INVALID; + goto err; + } + + if (be32_to_cpu(buffer->bm_bytes_per_bit) != BM_BLOCK_SIZE) { + dev_err(DEV, "unexpected bm_bytes_per_bit: %u (expected %u)\n", + be32_to_cpu(buffer->bm_bytes_per_bit), BM_BLOCK_SIZE); + rv = ERR_MD_INVALID; + goto err; + } + + bdev->md.la_size_sect = be64_to_cpu(buffer->la_size); + for (i = UI_CURRENT; i < UI_SIZE; i++) + bdev->md.uuid[i] = be64_to_cpu(buffer->uuid[i]); + bdev->md.flags = be32_to_cpu(buffer->flags); + mdev->sync_conf.al_extents = be32_to_cpu(buffer->al_nr_extents); + bdev->md.device_uuid = be64_to_cpu(buffer->device_uuid); + + if (mdev->sync_conf.al_extents < 7) + mdev->sync_conf.al_extents = 127; + + err: + mutex_unlock(&mdev->md_io_mutex); + put_ldev(mdev); + + return rv; +} + +/** + * drbd_md_mark_dirty() - Mark meta data super block as dirty + * @mdev: DRBD device. + * + * Call this function if you change anything that should be written to + * the meta-data super block. This function sets MD_DIRTY, and arms a + * timer that makes sure the meta data gets written out within five + * seconds, even if nobody calls drbd_md_sync() earlier.
+ */ +void drbd_md_mark_dirty(struct drbd_conf *mdev) +{ + set_bit(MD_DIRTY, &mdev->flags); + mod_timer(&mdev->md_sync_timer, jiffies + 5*HZ); +} + + +static void drbd_uuid_move_history(struct drbd_conf *mdev) __must_hold(local) +{ + int i; + + for (i = UI_HISTORY_START; i < UI_HISTORY_END; i++) { + mdev->ldev->md.uuid[i+1] = mdev->ldev->md.uuid[i]; + + trace_drbd_uuid(mdev, i+1); + } +} + +void _drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local) +{ + if (idx == UI_CURRENT) { + if (mdev->state.role == R_PRIMARY) + val |= 1; + else + val &= ~((u64)1); + + drbd_set_ed_uuid(mdev, val); + } + + mdev->ldev->md.uuid[idx] = val; + trace_drbd_uuid(mdev, idx); + drbd_md_mark_dirty(mdev); +} + + +void drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local) +{ + if (mdev->ldev->md.uuid[idx]) { + drbd_uuid_move_history(mdev); + mdev->ldev->md.uuid[UI_HISTORY_START] = mdev->ldev->md.uuid[idx]; + trace_drbd_uuid(mdev, UI_HISTORY_START); + } + _drbd_uuid_set(mdev, idx, val); +} + +/** + * drbd_uuid_new_current() - Creates a new current UUID + * @mdev: DRBD device. + * + * Creates a new current UUID, and rotates the old current UUID into + * the bitmap slot. Causes an incremental resync upon next connect. + */ +void drbd_uuid_new_current(struct drbd_conf *mdev) __must_hold(local) +{ + u64 val; + + dev_info(DEV, "Creating new current UUID\n"); + D_ASSERT(mdev->ldev->md.uuid[UI_BITMAP] == 0); + mdev->ldev->md.uuid[UI_BITMAP] = mdev->ldev->md.uuid[UI_CURRENT]; + trace_drbd_uuid(mdev, UI_BITMAP); + + get_random_bytes(&val, sizeof(u64)); + _drbd_uuid_set(mdev, UI_CURRENT, val); +} + +void drbd_uuid_set_bm(struct drbd_conf *mdev, u64 val) __must_hold(local) +{ + if (mdev->ldev->md.uuid[UI_BITMAP] == 0 && val == 0) + return; + + if (val == 0) { + drbd_uuid_move_history(mdev); + mdev->ldev->md.uuid[UI_HISTORY_START] = mdev->ldev->md.uuid[UI_BITMAP]; + mdev->ldev->md.uuid[UI_BITMAP] = 0; + trace_drbd_uuid(mdev, UI_HISTORY_START); + trace_drbd_uuid(mdev, UI_BITMAP); + } else { + if (mdev->ldev->md.uuid[UI_BITMAP]) + dev_warn(DEV, "bm UUID already set"); + + mdev->ldev->md.uuid[UI_BITMAP] = val; + mdev->ldev->md.uuid[UI_BITMAP] &= ~((u64)1); + + trace_drbd_uuid(mdev, UI_BITMAP); + } + drbd_md_mark_dirty(mdev); +} + +/** + * drbd_bmio_set_n_write() - io_fn for drbd_queue_bitmap_io() or drbd_bitmap_io() + * @mdev: DRBD device. + * + * Sets all bits in the bitmap and writes the whole bitmap to stable storage. + */ +int drbd_bmio_set_n_write(struct drbd_conf *mdev) +{ + int rv = -EIO; + + if (get_ldev_if_state(mdev, D_ATTACHING)) { + drbd_md_set_flag(mdev, MDF_FULL_SYNC); + drbd_md_sync(mdev); + drbd_bm_set_all(mdev); + + rv = drbd_bm_write(mdev); + + if (!rv) { + drbd_md_clear_flag(mdev, MDF_FULL_SYNC); + drbd_md_sync(mdev); + } + + put_ldev(mdev); + } + + return rv; +} + +/** + * drbd_bmio_clear_n_write() - io_fn for drbd_queue_bitmap_io() or drbd_bitmap_io() + * @mdev: DRBD device. + * + * Clears all bits in the bitmap and writes the whole bitmap to stable storage. 
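The UUID helpers above maintain a small generation ring: drbd_uuid_new_current() parks the current UUID in the bitmap slot, and drbd_uuid_set_bm() with val == 0 retires the bitmap UUID into the history slots via drbd_uuid_move_history(), while the low bit of UI_CURRENT doubles as the "node is primary" flag (hence the OR/mask of (u64)1 in _drbd_uuid_set()). A compact model of the retirement step (the UI_* layout is assumed to match drbd_int.h; the backward loop keeps the copy safe for any history length):

    #include <linux/types.h>

    enum { UI_CURRENT, UI_BITMAP, UI_HISTORY_START, UI_HISTORY_END, UI_SIZE };

    static void uuid_retire_bitmap(u64 uuid[UI_SIZE])
    {
            int i;

            for (i = UI_HISTORY_END; i > UI_HISTORY_START; i--)
                    uuid[i] = uuid[i - 1];          /* age existing history */
            uuid[UI_HISTORY_START] = uuid[UI_BITMAP];
            uuid[UI_BITMAP] = 0;                    /* slot is free again */
    }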
+ */ +int drbd_bmio_clear_n_write(struct drbd_conf *mdev) +{ + int rv = -EIO; + + if (get_ldev_if_state(mdev, D_ATTACHING)) { + drbd_bm_clear_all(mdev); + rv = drbd_bm_write(mdev); + put_ldev(mdev); + } + + return rv; +} + +static int w_bitmap_io(struct drbd_conf *mdev, struct drbd_work *w, int unused) +{ + struct bm_io_work *work = container_of(w, struct bm_io_work, w); + int rv; + + D_ASSERT(atomic_read(&mdev->ap_bio_cnt) == 0); + + drbd_bm_lock(mdev, work->why); + rv = work->io_fn(mdev); + drbd_bm_unlock(mdev); + + clear_bit(BITMAP_IO, &mdev->flags); + wake_up(&mdev->misc_wait); + + if (work->done) + work->done(mdev, rv); + + clear_bit(BITMAP_IO_QUEUED, &mdev->flags); + work->why = NULL; + + return 1; +} + +/** + * drbd_queue_bitmap_io() - Queues an IO operation on the whole bitmap + * @mdev: DRBD device. + * @io_fn: IO callback to be called when bitmap IO is possible + * @done: callback to be called after the bitmap IO was performed + * @why: Descriptive text of the reason for doing the IO + * + * While IO on the bitmap is in progress, application IO is frozen; this + * ensures that drbd_set_out_of_sync() cannot be called. This function MAY + * ONLY be called from worker context. It MUST NOT be used while a previous + * such work is still pending! + */ +void drbd_queue_bitmap_io(struct drbd_conf *mdev, + int (*io_fn)(struct drbd_conf *), + void (*done)(struct drbd_conf *, int), + char *why) +{ + D_ASSERT(current == mdev->worker.task); + + D_ASSERT(!test_bit(BITMAP_IO_QUEUED, &mdev->flags)); + D_ASSERT(!test_bit(BITMAP_IO, &mdev->flags)); + D_ASSERT(list_empty(&mdev->bm_io_work.w.list)); + if (mdev->bm_io_work.why) + dev_err(DEV, "FIXME going to queue '%s' but '%s' still pending?\n", + why, mdev->bm_io_work.why); + + mdev->bm_io_work.io_fn = io_fn; + mdev->bm_io_work.done = done; + mdev->bm_io_work.why = why; + + set_bit(BITMAP_IO, &mdev->flags); + if (atomic_read(&mdev->ap_bio_cnt) == 0) { + if (list_empty(&mdev->bm_io_work.w.list)) { + set_bit(BITMAP_IO_QUEUED, &mdev->flags); + drbd_queue_work(&mdev->data.work, &mdev->bm_io_work.w); + } else + dev_err(DEV, "FIXME avoided double queuing bm_io_work\n"); + } +} + +/** + * drbd_bitmap_io() - Does an IO operation on the whole bitmap + * @mdev: DRBD device. + * @io_fn: IO callback to be called when bitmap IO is possible + * @why: Descriptive text of the reason for doing the IO + * + * Freezes application IO while the actual IO operation runs. This + * function MAY NOT be called from worker context.
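The two entry points split by caller context: the worker thread must queue the work, everyone else may block on it directly. A usage sketch with the set_n_write callback defined above (done_fn is an illustrative completion callback of type void (*)(struct drbd_conf *, int)):

    /* from the worker thread: hand the IO off, never block on ourselves */
    drbd_queue_bitmap_io(mdev, &drbd_bmio_set_n_write, done_fn, "set_n_write");

    /* from any other thread: suspend application IO, run it, resume */
    rv = drbd_bitmap_io(mdev, &drbd_bmio_set_n_write, "set_n_write");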
+ */ +int drbd_bitmap_io(struct drbd_conf *mdev, int (*io_fn)(struct drbd_conf *), char *why) +{ + int rv; + + D_ASSERT(current != mdev->worker.task); + + drbd_suspend_io(mdev); + + drbd_bm_lock(mdev, why); + rv = io_fn(mdev); + drbd_bm_unlock(mdev); + + drbd_resume_io(mdev); + + return rv; +} + +void drbd_md_set_flag(struct drbd_conf *mdev, int flag) __must_hold(local) +{ + if ((mdev->ldev->md.flags & flag) != flag) { + drbd_md_mark_dirty(mdev); + mdev->ldev->md.flags |= flag; + } +} + +void drbd_md_clear_flag(struct drbd_conf *mdev, int flag) __must_hold(local) +{ + if ((mdev->ldev->md.flags & flag) != 0) { + drbd_md_mark_dirty(mdev); + mdev->ldev->md.flags &= ~flag; + } +} +int drbd_md_test_flag(struct drbd_backing_dev *bdev, int flag) +{ + return (bdev->md.flags & flag) != 0; +} + +static void md_sync_timer_fn(unsigned long data) +{ + struct drbd_conf *mdev = (struct drbd_conf *) data; + + drbd_queue_work_front(&mdev->data.work, &mdev->md_sync_work); +} + +static int w_md_sync(struct drbd_conf *mdev, struct drbd_work *w, int unused) +{ + dev_warn(DEV, "md_sync_timer expired! Worker calls drbd_md_sync().\n"); + drbd_md_sync(mdev); + + return 1; +} + +#ifdef CONFIG_DRBD_FAULT_INJECTION +/* Fault insertion support including random number generator shamelessly + * stolen from kernel/rcutorture.c */ +struct fault_random_state { + unsigned long state; + unsigned long count; +}; + +#define FAULT_RANDOM_MULT 39916801 /* prime */ +#define FAULT_RANDOM_ADD 479001701 /* prime */ +#define FAULT_RANDOM_REFRESH 10000 + +/* + * Crude but fast random-number generator. Uses a linear congruential + * generator, with occasional help from get_random_bytes(). + */ +static unsigned long +_drbd_fault_random(struct fault_random_state *rsp) +{ + long refresh; + + if (--rsp->count < 0) { + get_random_bytes(&refresh, sizeof(refresh)); + rsp->state += refresh; + rsp->count = FAULT_RANDOM_REFRESH; + } + rsp->state = rsp->state * FAULT_RANDOM_MULT + FAULT_RANDOM_ADD; + return swahw32(rsp->state); +} + +static char * +_drbd_fault_str(unsigned int type) { + static char *_faults[] = { + [DRBD_FAULT_MD_WR] = "Meta-data write", + [DRBD_FAULT_MD_RD] = "Meta-data read", + [DRBD_FAULT_RS_WR] = "Resync write", + [DRBD_FAULT_RS_RD] = "Resync read", + [DRBD_FAULT_DT_WR] = "Data write", + [DRBD_FAULT_DT_RD] = "Data read", + [DRBD_FAULT_DT_RA] = "Data read ahead", + [DRBD_FAULT_BM_ALLOC] = "BM allocation", + [DRBD_FAULT_AL_EE] = "EE allocation" + }; + + return (type < DRBD_FAULT_MAX) ? _faults[type] : "**Unknown**"; +} + +unsigned int +_drbd_insert_fault(struct drbd_conf *mdev, unsigned int type) +{ + static struct fault_random_state rrs = {0, 0}; + + unsigned int ret = ( + (fault_devs == 0 || + ((1 << mdev_to_minor(mdev)) & fault_devs) != 0) && + (((_drbd_fault_random(&rrs) % 100) + 1) <= fault_rate)); + + if (ret) { + fault_count++; + + if (printk_ratelimit()) + dev_warn(DEV, "***Simulating %s failure\n", + _drbd_fault_str(type)); + } + + return ret; +} +#endif + +const char *drbd_buildtag(void) +{ + /* DRBD built from external sources has here a reference to the + git hash of the source code. 
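A fault site consults _drbd_insert_fault() with one of the DRBD_FAULT_* classes before issuing real IO; the fault_rate and fault_devs module parameters scope how often and on which minors faults fire. An illustrative call site, not taken verbatim from the series (mdev and bio are assumed in scope):

    #ifdef CONFIG_DRBD_FAULT_INJECTION
            if (fault_rate && _drbd_insert_fault(mdev, DRBD_FAULT_MD_WR)) {
                    bio_endio(bio, -EIO);   /* simulate a failed meta-data write */
                    return;
            }
    #endif
            submit_bio(WRITE, bio);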
*/ + + static char buildtag[38] = "\0uilt-in"; + + if (buildtag[0] == 0) { +#ifdef CONFIG_MODULES + if (THIS_MODULE != NULL) + sprintf(buildtag, "srcversion: %-24s", THIS_MODULE->srcversion); + else +#endif + buildtag[0] = 'b'; + } + + return buildtag; +} + +module_init(drbd_init) +module_exit(drbd_cleanup) + +/* For drbd_tracing: */ +EXPORT_SYMBOL(drbd_conn_str); +EXPORT_SYMBOL(drbd_role_str); +EXPORT_SYMBOL(drbd_disk_str); +EXPORT_SYMBOL(drbd_set_st_err_str); diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c new file mode 100644 index 000000000000..1927acefe230 --- /dev/null +++ b/drivers/block/drbd/drbd_nl.c @@ -0,0 +1,2365 @@ +/* + drbd_nl.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner . + Copyright (C) 2002-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "drbd_int.h" +#include "drbd_tracing.h" +#include "drbd_wrappers.h" +#include +#include +#include + +static unsigned short *tl_add_blob(unsigned short *, enum drbd_tags, const void *, int); +static unsigned short *tl_add_str(unsigned short *, enum drbd_tags, const char *); +static unsigned short *tl_add_int(unsigned short *, enum drbd_tags, const void *); + +/* see get_sb_bdev and bd_claim */ +static char *drbd_m_holder = "Hands off! this is DRBD's meta data device."; + +/* Generate the tag_list to struct functions */ +#define NL_PACKET(name, number, fields) \ +static int name ## _from_tags(struct drbd_conf *mdev, \ + unsigned short *tags, struct name *arg) __attribute__ ((unused)); \ +static int name ## _from_tags(struct drbd_conf *mdev, \ + unsigned short *tags, struct name *arg) \ +{ \ + int tag; \ + int dlen; \ + \ + while ((tag = get_unaligned(tags++)) != TT_END) { \ + dlen = get_unaligned(tags++); \ + switch (tag_number(tag)) { \ + fields \ + default: \ + if (tag & T_MANDATORY) { \ + dev_err(DEV, "Unknown tag: %d\n", tag_number(tag)); \ + return 0; \ + } \ + } \ + tags = (unsigned short *)((char *)tags + dlen); \ + } \ + return 1; \ +} +#define NL_INTEGER(pn, pr, member) \ + case pn: /* D_ASSERT( tag_type(tag) == TT_INTEGER ); */ \ + arg->member = get_unaligned((int *)(tags)); \ + break; +#define NL_INT64(pn, pr, member) \ + case pn: /* D_ASSERT( tag_type(tag) == TT_INT64 ); */ \ + arg->member = get_unaligned((u64 *)(tags)); \ + break; +#define NL_BIT(pn, pr, member) \ + case pn: /* D_ASSERT( tag_type(tag) == TT_BIT ); */ \ + arg->member = *(char *)(tags) ? 
1 : 0; \ + break; +#define NL_STRING(pn, pr, member, len) \ + case pn: /* D_ASSERT( tag_type(tag) == TT_STRING ); */ \ + if (dlen > len) { \ + dev_err(DEV, "arg too long: %s (%u wanted, max len: %u bytes)\n", \ + #member, dlen, (unsigned int)len); \ + return 0; \ + } \ + arg->member ## _len = dlen; \ + memcpy(arg->member, tags, min_t(size_t, dlen, len)); \ + break; +#include "linux/drbd_nl.h" + +/* Generate the struct to tag_list functions */ +#define NL_PACKET(name, number, fields) \ +static unsigned short* \ +name ## _to_tags(struct drbd_conf *mdev, \ + struct name *arg, unsigned short *tags) __attribute__ ((unused)); \ +static unsigned short* \ +name ## _to_tags(struct drbd_conf *mdev, \ + struct name *arg, unsigned short *tags) \ +{ \ + fields \ + return tags; \ +} + +#define NL_INTEGER(pn, pr, member) \ + put_unaligned(pn | pr | TT_INTEGER, tags++); \ + put_unaligned(sizeof(int), tags++); \ + put_unaligned(arg->member, (int *)tags); \ + tags = (unsigned short *)((char *)tags+sizeof(int)); +#define NL_INT64(pn, pr, member) \ + put_unaligned(pn | pr | TT_INT64, tags++); \ + put_unaligned(sizeof(u64), tags++); \ + put_unaligned(arg->member, (u64 *)tags); \ + tags = (unsigned short *)((char *)tags+sizeof(u64)); +#define NL_BIT(pn, pr, member) \ + put_unaligned(pn | pr | TT_BIT, tags++); \ + put_unaligned(sizeof(char), tags++); \ + *(char *)tags = arg->member; \ + tags = (unsigned short *)((char *)tags+sizeof(char)); +#define NL_STRING(pn, pr, member, len) \ + put_unaligned(pn | pr | TT_STRING, tags++); \ + put_unaligned(arg->member ## _len, tags++); \ + memcpy(tags, arg->member, arg->member ## _len); \ + tags = (unsigned short *)((char *)tags + arg->member ## _len); +#include "linux/drbd_nl.h" + +void drbd_bcast_ev_helper(struct drbd_conf *mdev, char *helper_name); +void drbd_nl_send_reply(struct cn_msg *, int); + +int drbd_khelper(struct drbd_conf *mdev, char *cmd) +{ + char *envp[] = { "HOME=/", + "TERM=linux", + "PATH=/sbin:/usr/sbin:/bin:/usr/bin", + NULL, /* Will be set to address family */ + NULL, /* Will be set to address */ + NULL }; + + char mb[12], af[20], ad[60], *afs; + char *argv[] = {usermode_helper, cmd, mb, NULL }; + int ret; + + snprintf(mb, 12, "minor-%d", mdev_to_minor(mdev)); + + if (get_net_conf(mdev)) { + switch (((struct sockaddr *)mdev->net_conf->peer_addr)->sa_family) { + case AF_INET6: + afs = "ipv6"; + snprintf(ad, 60, "DRBD_PEER_ADDRESS=%pI6", + &((struct sockaddr_in6 *)mdev->net_conf->peer_addr)->sin6_addr); + break; + case AF_INET: + afs = "ipv4"; + snprintf(ad, 60, "DRBD_PEER_ADDRESS=%pI4", + &((struct sockaddr_in *)mdev->net_conf->peer_addr)->sin_addr); + break; + default: + afs = "ssocks"; + snprintf(ad, 60, "DRBD_PEER_ADDRESS=%pI4", + &((struct sockaddr_in *)mdev->net_conf->peer_addr)->sin_addr); + } + snprintf(af, 20, "DRBD_PEER_AF=%s", afs); + envp[3]=af; + envp[4]=ad; + put_net_conf(mdev); + } + + dev_info(DEV, "helper command: %s %s %s\n", usermode_helper, cmd, mb); + + drbd_bcast_ev_helper(mdev, cmd); + ret = call_usermodehelper(usermode_helper, argv, envp, 1); + if (ret) + dev_warn(DEV, "helper command: %s %s %s exit code %u (0x%x)\n", + usermode_helper, cmd, mb, + (ret >> 8) & 0xff, ret); + else + dev_info(DEV, "helper command: %s %s %s exit code %u (0x%x)\n", + usermode_helper, cmd, mb, + (ret >> 8) & 0xff, ret); + + if (ret < 0) /* Ignore any ERRNOs we got. 
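The NL_PACKET/NL_INTEGER/NL_INT64/NL_BIT/NL_STRING generator macros above are expanded twice against linux/drbd_nl.h to emit one parser (*_from_tags) and one serializer (*_to_tags) per netlink packet type. To make the mechanism concrete, here is the parser the generator would produce for a hypothetical one-field packet NL_PACKET(example, 1, NL_INTEGER(1, T_MANDATORY, my_int)), hand-expanded and lightly simplified (TT_END, tag_number(), T_MANDATORY and get_unaligned() come from the drbd headers):

    struct example { int my_int; };

    static int example_from_tags(struct drbd_conf *mdev,
            unsigned short *tags, struct example *arg)
    {
            int tag, dlen;

            while ((tag = get_unaligned(tags++)) != TT_END) {
                    dlen = get_unaligned(tags++);
                    switch (tag_number(tag)) {
                    case 1:                         /* NL_INTEGER(1, ..., my_int) */
                            arg->my_int = get_unaligned((int *)(tags));
                            break;
                    default:
                            if (tag & T_MANDATORY) {
                                    dev_err(DEV, "Unknown tag: %d\n", tag_number(tag));
                                    return 0;       /* must not ignore mandatory tags */
                            }
                    }
                    tags = (unsigned short *)((char *)tags + dlen);
            }
            return 1;
    }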
*/ + ret = 0; + + return ret; +} + +enum drbd_disk_state drbd_try_outdate_peer(struct drbd_conf *mdev) +{ + char *ex_to_string; + int r; + enum drbd_disk_state nps; + enum drbd_fencing_p fp; + + D_ASSERT(mdev->state.pdsk == D_UNKNOWN); + + if (get_ldev_if_state(mdev, D_CONSISTENT)) { + fp = mdev->ldev->dc.fencing; + put_ldev(mdev); + } else { + dev_warn(DEV, "Not fencing peer, I'm not even Consistent myself.\n"); + return mdev->state.pdsk; + } + + if (fp == FP_STONITH) + _drbd_request_state(mdev, NS(susp, 1), CS_WAIT_COMPLETE); + + r = drbd_khelper(mdev, "fence-peer"); + + switch ((r>>8) & 0xff) { + case 3: /* peer is inconsistent */ + ex_to_string = "peer is inconsistent or worse"; + nps = D_INCONSISTENT; + break; + case 4: /* peer got outdated, or was already outdated */ + ex_to_string = "peer was fenced"; + nps = D_OUTDATED; + break; + case 5: /* peer was down */ + if (mdev->state.disk == D_UP_TO_DATE) { + /* we will(have) create(d) a new UUID anyways... */ + ex_to_string = "peer is unreachable, assumed to be dead"; + nps = D_OUTDATED; + } else { + ex_to_string = "peer unreachable, doing nothing since disk != UpToDate"; + nps = mdev->state.pdsk; + } + break; + case 6: /* Peer is primary, voluntarily outdate myself. + * This is useful when an unconnected R_SECONDARY is asked to + * become R_PRIMARY, but finds the other peer being active. */ + ex_to_string = "peer is active"; + dev_warn(DEV, "Peer is primary, outdating myself.\n"); + nps = D_UNKNOWN; + _drbd_request_state(mdev, NS(disk, D_OUTDATED), CS_WAIT_COMPLETE); + break; + case 7: + if (fp != FP_STONITH) + dev_err(DEV, "fence-peer() = 7 && fencing != Stonith !!!\n"); + ex_to_string = "peer was stonithed"; + nps = D_OUTDATED; + break; + default: + /* The script is broken ... */ + nps = D_UNKNOWN; + dev_err(DEV, "fence-peer helper broken, returned %d\n", (r>>8)&0xff); + return nps; + } + + dev_info(DEV, "fence-peer helper returned %d (%s)\n", + (r>>8) & 0xff, ex_to_string); + return nps; +} + + +int drbd_set_role(struct drbd_conf *mdev, enum drbd_role new_role, int force) +{ + const int max_tries = 4; + int r = 0; + int try = 0; + int forced = 0; + union drbd_state mask, val; + enum drbd_disk_state nps; + + if (new_role == R_PRIMARY) + request_ping(mdev); /* Detect a dead peer ASAP */ + + mutex_lock(&mdev->state_mutex); + + mask.i = 0; mask.role = R_MASK; + val.i = 0; val.role = new_role; + + while (try++ < max_tries) { + r = _drbd_request_state(mdev, mask, val, CS_WAIT_COMPLETE); + + /* in case we first succeeded to outdate, + * but now suddenly could establish a connection */ + if (r == SS_CW_FAILED_BY_PEER && mask.pdsk != 0) { + val.pdsk = 0; + mask.pdsk = 0; + continue; + } + + if (r == SS_NO_UP_TO_DATE_DISK && force && + (mdev->state.disk == D_INCONSISTENT || + mdev->state.disk == D_OUTDATED)) { + mask.disk = D_MASK; + val.disk = D_UP_TO_DATE; + forced = 1; + continue; + } + + if (r == SS_NO_UP_TO_DATE_DISK && + mdev->state.disk == D_CONSISTENT && mask.pdsk == 0) { + D_ASSERT(mdev->state.pdsk == D_UNKNOWN); + nps = drbd_try_outdate_peer(mdev); + + if (nps == D_OUTDATED || nps == D_INCONSISTENT) { + val.disk = D_UP_TO_DATE; + mask.disk = D_MASK; + } + + val.pdsk = nps; + mask.pdsk = D_MASK; + + continue; + } + + if (r == SS_NOTHING_TO_DO) + goto fail; + if (r == SS_PRIMARY_NOP && mask.pdsk == 0) { + nps = drbd_try_outdate_peer(mdev); + + if (force && nps > D_OUTDATED) { + dev_warn(DEV, "Forced into split brain situation!\n"); + nps = D_OUTDATED; + } + + mask.pdsk = D_MASK; + val.pdsk = nps; + + continue; + } + if (r == 
SS_TWO_PRIMARIES) { + /* Maybe the peer is detected as dead very soon... + retry at most once more in this case. */ + __set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout((mdev->net_conf->ping_timeo+1)*HZ/10); + if (try < max_tries) + try = max_tries - 1; + continue; + } + if (r < SS_SUCCESS) { + r = _drbd_request_state(mdev, mask, val, + CS_VERBOSE + CS_WAIT_COMPLETE); + if (r < SS_SUCCESS) + goto fail; + } + break; + } + + if (r < SS_SUCCESS) + goto fail; + + if (forced) + dev_warn(DEV, "Forced to consider local data as UpToDate!\n"); + + /* Wait until nothing is on the fly :) */ + wait_event(mdev->misc_wait, atomic_read(&mdev->ap_pending_cnt) == 0); + + if (new_role == R_SECONDARY) { + set_disk_ro(mdev->vdisk, TRUE); + if (get_ldev(mdev)) { + mdev->ldev->md.uuid[UI_CURRENT] &= ~(u64)1; + put_ldev(mdev); + } + } else { + if (get_net_conf(mdev)) { + mdev->net_conf->want_lose = 0; + put_net_conf(mdev); + } + set_disk_ro(mdev->vdisk, FALSE); + if (get_ldev(mdev)) { + if (((mdev->state.conn < C_CONNECTED || + mdev->state.pdsk <= D_FAILED) + && mdev->ldev->md.uuid[UI_BITMAP] == 0) || forced) + drbd_uuid_new_current(mdev); + + mdev->ldev->md.uuid[UI_CURRENT] |= (u64)1; + put_ldev(mdev); + } + } + + if ((new_role == R_SECONDARY) && get_ldev(mdev)) { + drbd_al_to_on_disk_bm(mdev); + put_ldev(mdev); + } + + if (mdev->state.conn >= C_WF_REPORT_PARAMS) { + /* if this was forced, we should consider sync */ + if (forced) + drbd_send_uuids(mdev); + drbd_send_state(mdev); + } + + drbd_md_sync(mdev); + + kobject_uevent(&disk_to_dev(mdev->vdisk)->kobj, KOBJ_CHANGE); + fail: + mutex_unlock(&mdev->state_mutex); + return r; +} + + +static int drbd_nl_primary(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + struct primary primary_args; + + memset(&primary_args, 0, sizeof(struct primary)); + if (!primary_from_tags(mdev, nlp->tag_list, &primary_args)) { + reply->ret_code = ERR_MANDATORY_TAG; + return 0; + } + + reply->ret_code = + drbd_set_role(mdev, R_PRIMARY, primary_args.overwrite_peer); + + return 0; +} + +static int drbd_nl_secondary(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + reply->ret_code = drbd_set_role(mdev, R_SECONDARY, 0); + + return 0; +} + +/* initializes the md.*_offset members, so we are able to find + * the on disk meta data */ +static void drbd_md_set_sector_offsets(struct drbd_conf *mdev, + struct drbd_backing_dev *bdev) +{ + sector_t md_size_sect = 0; + switch (bdev->dc.meta_dev_idx) { + default: + /* v07 style fixed size indexed meta data */ + bdev->md.md_size_sect = MD_RESERVED_SECT; + bdev->md.md_offset = drbd_md_ss__(mdev, bdev); + bdev->md.al_offset = MD_AL_OFFSET; + bdev->md.bm_offset = MD_BM_OFFSET; + break; + case DRBD_MD_INDEX_FLEX_EXT: + /* just occupy the full device; unit: sectors */ + bdev->md.md_size_sect = drbd_get_capacity(bdev->md_bdev); + bdev->md.md_offset = 0; + bdev->md.al_offset = MD_AL_OFFSET; + bdev->md.bm_offset = MD_BM_OFFSET; + break; + case DRBD_MD_INDEX_INTERNAL: + case DRBD_MD_INDEX_FLEX_INT: + bdev->md.md_offset = drbd_md_ss__(mdev, bdev); + /* al size is still fixed */ + bdev->md.al_offset = -MD_AL_MAX_SIZE; + /* we need (slightly less than) ~ this much bitmap sectors: */ + md_size_sect = drbd_get_capacity(bdev->backing_bdev); + md_size_sect = ALIGN(md_size_sect, BM_SECT_PER_EXT); + md_size_sect = BM_SECT_TO_EXT(md_size_sect); + md_size_sect = ALIGN(md_size_sect, 8); + + /* plus the "drbd meta data super block", + * and the activity log; */ + md_size_sect += 
MD_BM_OFFSET; + + bdev->md.md_size_sect = md_size_sect; + /* bitmap offset is adjusted by 'super' block size */ + bdev->md.bm_offset = -md_size_sect + MD_AL_OFFSET; + break; + } +} + +char *ppsize(char *buf, unsigned long long size) +{ + /* Needs 9 bytes at max. */ + static char units[] = { 'K', 'M', 'G', 'T', 'P', 'E' }; + int base = 0; + while (size >= 10000) { + /* shift + round */ + size = (size >> 10) + !!(size & (1<<9)); + base++; + } + sprintf(buf, "%lu %cB", (long)size, units[base]); + + return buf; +} + +/* there is still a theoretical deadlock when called from receiver + * on an D_INCONSISTENT R_PRIMARY: + * remote READ does inc_ap_bio, receiver would need to receive answer + * packet from remote to dec_ap_bio again. + * receiver receive_sizes(), comes here, + * waits for ap_bio_cnt == 0. -> deadlock. + * but this cannot happen, actually, because: + * R_PRIMARY D_INCONSISTENT, and peer's disk is unreachable + * (not connected, or bad/no disk on peer): + * see drbd_fail_request_early, ap_bio_cnt is zero. + * R_PRIMARY D_INCONSISTENT, and C_SYNC_TARGET: + * peer may not initiate a resize. + */ +void drbd_suspend_io(struct drbd_conf *mdev) +{ + set_bit(SUSPEND_IO, &mdev->flags); + wait_event(mdev->misc_wait, !atomic_read(&mdev->ap_bio_cnt)); +} + +void drbd_resume_io(struct drbd_conf *mdev) +{ + clear_bit(SUSPEND_IO, &mdev->flags); + wake_up(&mdev->misc_wait); +} + +/** + * drbd_determine_dev_size() - Sets the right device size obeying all constraints + * @mdev: DRBD device. + * + * Returns 0 on success, negative return values indicate errors. + * You should call drbd_md_sync() after calling this function. + */ +enum determine_dev_size drbd_determin_dev_size(struct drbd_conf *mdev) __must_hold(local) +{ + sector_t prev_first_sect, prev_size; /* previous meta location */ + sector_t la_size; + sector_t size; + char ppb[10]; + + int md_moved, la_size_changed; + enum determine_dev_size rv = unchanged; + + /* race: + * application request passes inc_ap_bio, + * but then cannot get an AL-reference. + * this function later may wait on ap_bio_cnt == 0. -> deadlock. + * + * to avoid that: + * Suspend IO right here. + * still lock the act_log to not trigger ASSERTs there. + */ + drbd_suspend_io(mdev); + + /* no wait necessary anymore, actually we could assert that */ + wait_event(mdev->al_wait, lc_try_lock(mdev->act_log)); + + prev_first_sect = drbd_md_first_sector(mdev->ldev); + prev_size = mdev->ldev->md.md_size_sect; + la_size = mdev->ldev->md.la_size_sect; + + /* TODO: should only be some assert here, not (re)init... */ + drbd_md_set_sector_offsets(mdev, mdev->ldev); + + size = drbd_new_dev_size(mdev, mdev->ldev); + + if (drbd_get_capacity(mdev->this_bdev) != size || + drbd_bm_capacity(mdev) != size) { + int err; + err = drbd_bm_resize(mdev, size); + if (unlikely(err)) { + /* currently there is only one error: ENOMEM! */ + size = drbd_bm_capacity(mdev)>>1; + if (size == 0) { + dev_err(DEV, "OUT OF MEMORY! " + "Could not allocate bitmap!\n"); + } else { + dev_err(DEV, "BM resizing failed. " + "Leaving size unchanged at size = %lu KB\n", + (unsigned long)size); + } + rv = dev_size_error; + } + /* racy, see comments above. 
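ppsize(), defined above, pretty-prints a size given in KB: it shifts by 10 (rounding to nearest via the !!(size & (1<<9)) term) until the value drops below 10000, then appends the unit. Worked examples (callers in this file pass sectors >> 1, i.e. KB):

    char buf[10];                       /* "Needs 9 bytes at max." */

    ppsize(buf, 2048);                  /* -> "2048 KB" */
    ppsize(buf, 1048576);               /* -> "1024 MB" (1 GiB) */
    ppsize(buf, 3221225472ULL);         /* -> "3072 GB" (3 TiB) */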
*/ + drbd_set_my_capacity(mdev, size); + mdev->ldev->md.la_size_sect = size; + dev_info(DEV, "size = %s (%llu KB)\n", ppsize(ppb, size>>1), + (unsigned long long)size>>1); + } + if (rv == dev_size_error) + goto out; + + la_size_changed = (la_size != mdev->ldev->md.la_size_sect); + + md_moved = prev_first_sect != drbd_md_first_sector(mdev->ldev) + || prev_size != mdev->ldev->md.md_size_sect; + + if (la_size_changed || md_moved) { + drbd_al_shrink(mdev); /* All extents inactive. */ + dev_info(DEV, "Writing the whole bitmap, %s\n", + la_size_changed && md_moved ? "size changed and md moved" : + la_size_changed ? "size changed" : "md moved"); + rv = drbd_bitmap_io(mdev, &drbd_bm_write, "size changed"); /* does drbd_resume_io() ! */ + drbd_md_mark_dirty(mdev); + } + + if (size > la_size) + rv = grew; + if (size < la_size) + rv = shrunk; +out: + lc_unlock(mdev->act_log); + wake_up(&mdev->al_wait); + drbd_resume_io(mdev); + + return rv; +} + +sector_t +drbd_new_dev_size(struct drbd_conf *mdev, struct drbd_backing_dev *bdev) +{ + sector_t p_size = mdev->p_size; /* partner's disk size. */ + sector_t la_size = bdev->md.la_size_sect; /* last agreed size. */ + sector_t m_size; /* my size */ + sector_t u_size = bdev->dc.disk_size; /* size requested by user. */ + sector_t size = 0; + + m_size = drbd_get_max_capacity(bdev); + + if (p_size && m_size) { + size = min_t(sector_t, p_size, m_size); + } else { + if (la_size) { + size = la_size; + if (m_size && m_size < size) + size = m_size; + if (p_size && p_size < size) + size = p_size; + } else { + if (m_size) + size = m_size; + if (p_size) + size = p_size; + } + } + + if (size == 0) + dev_err(DEV, "Both nodes diskless!\n"); + + if (u_size) { + if (u_size > size) + dev_err(DEV, "Requested disk size is too big (%lu > %lu)\n", + (unsigned long)u_size>>1, (unsigned long)size>>1); + else + size = u_size; + } + + return size; +} + +/** + * drbd_check_al_size() - Ensures that the AL is of the right size + * @mdev: DRBD device. + * + * Returns -EBUSY if current al lru is still used, -ENOMEM when allocation + * failed, and 0 on success. You should call drbd_md_sync() after you called + * this function. 
+ */ +static int drbd_check_al_size(struct drbd_conf *mdev) +{ + struct lru_cache *n, *t; + struct lc_element *e; + unsigned int in_use; + int i; + + ERR_IF(mdev->sync_conf.al_extents < 7) + mdev->sync_conf.al_extents = 127; + + if (mdev->act_log && + mdev->act_log->nr_elements == mdev->sync_conf.al_extents) + return 0; + + in_use = 0; + t = mdev->act_log; + n = lc_create("act_log", drbd_al_ext_cache, + mdev->sync_conf.al_extents, sizeof(struct lc_element), 0); + + if (n == NULL) { + dev_err(DEV, "Cannot allocate act_log lru!\n"); + return -ENOMEM; + } + spin_lock_irq(&mdev->al_lock); + if (t) { + for (i = 0; i < t->nr_elements; i++) { + e = lc_element_by_index(t, i); + if (e->refcnt) + dev_err(DEV, "refcnt(%d)==%d\n", + e->lc_number, e->refcnt); + in_use += e->refcnt; + } + } + if (!in_use) + mdev->act_log = n; + spin_unlock_irq(&mdev->al_lock); + if (in_use) { + dev_err(DEV, "Activity log still in use!\n"); + lc_destroy(n); + return -EBUSY; + } else { + if (t) + lc_destroy(t); + } + drbd_md_mark_dirty(mdev); /* we changed mdev->act_log->nr_elements */ + return 0; +} + +void drbd_setup_queue_param(struct drbd_conf *mdev, unsigned int max_seg_s) __must_hold(local) +{ + struct request_queue * const q = mdev->rq_queue; + struct request_queue * const b = mdev->ldev->backing_bdev->bd_disk->queue; + int max_segments = mdev->ldev->dc.max_bio_bvecs; + + if (b->merge_bvec_fn && !mdev->ldev->dc.use_bmbv) + max_seg_s = PAGE_SIZE; + + max_seg_s = min(queue_max_sectors(b) * queue_logical_block_size(b), max_seg_s); + + blk_queue_max_sectors(q, max_seg_s >> 9); + blk_queue_max_phys_segments(q, max_segments ? max_segments : MAX_PHYS_SEGMENTS); + blk_queue_max_hw_segments(q, max_segments ? max_segments : MAX_HW_SEGMENTS); + blk_queue_max_segment_size(q, max_seg_s); + blk_queue_logical_block_size(q, 512); + blk_queue_segment_boundary(q, PAGE_SIZE-1); + blk_stack_limits(&q->limits, &b->limits, 0); + + if (b->merge_bvec_fn) + dev_warn(DEV, "Backing device's merge_bvec_fn() = %p\n", + b->merge_bvec_fn); + dev_info(DEV, "max_segment_size ( = BIO size ) = %u\n", queue_max_segment_size(q)); + + if (q->backing_dev_info.ra_pages != b->backing_dev_info.ra_pages) { + dev_info(DEV, "Adjusting my ra_pages to backing device's (%lu -> %lu)\n", + q->backing_dev_info.ra_pages, + b->backing_dev_info.ra_pages); + q->backing_dev_info.ra_pages = b->backing_dev_info.ra_pages; + } +} + +/* serialize deconfig (worker exiting, doing cleanup) + * and reconfig (drbdsetup disk, drbdsetup net) + * + * wait for a potentially exiting worker, then restart it, + * or start a new one. + */ +static void drbd_reconfig_start(struct drbd_conf *mdev) +{ + wait_event(mdev->state_wait, !test_and_set_bit(CONFIG_PENDING, &mdev->flags)); + wait_event(mdev->state_wait, !test_bit(DEVICE_DYING, &mdev->flags)); + drbd_thread_start(&mdev->worker); +} + +/* if still unconfigured, stops worker again. + * if configured now, clears CONFIG_PENDING.
+ * wakes potential waiters */ +static void drbd_reconfig_done(struct drbd_conf *mdev) +{ + spin_lock_irq(&mdev->req_lock); + if (mdev->state.disk == D_DISKLESS && + mdev->state.conn == C_STANDALONE && + mdev->state.role == R_SECONDARY) { + set_bit(DEVICE_DYING, &mdev->flags); + drbd_thread_stop_nowait(&mdev->worker); + } else + clear_bit(CONFIG_PENDING, &mdev->flags); + spin_unlock_irq(&mdev->req_lock); + wake_up(&mdev->state_wait); +} + +/* does always return 0; + * interesting return code is in reply->ret_code */ +static int drbd_nl_disk_conf(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + enum drbd_ret_codes retcode; + enum determine_dev_size dd; + sector_t max_possible_sectors; + sector_t min_md_device_sectors; + struct drbd_backing_dev *nbc = NULL; /* new_backing_conf */ + struct inode *inode, *inode2; + struct lru_cache *resync_lru = NULL; + union drbd_state ns, os; + int rv; + int cp_discovered = 0; + int logical_block_size; + + drbd_reconfig_start(mdev); + + /* if you want to reconfigure, please tear down first */ + if (mdev->state.disk > D_DISKLESS) { + retcode = ERR_DISK_CONFIGURED; + goto fail; + } + + /* allocation not in the IO path, cqueue thread context */ + nbc = kzalloc(sizeof(struct drbd_backing_dev), GFP_KERNEL); + if (!nbc) { + retcode = ERR_NOMEM; + goto fail; + } + + nbc->dc.disk_size = DRBD_DISK_SIZE_SECT_DEF; + nbc->dc.on_io_error = DRBD_ON_IO_ERROR_DEF; + nbc->dc.fencing = DRBD_FENCING_DEF; + nbc->dc.max_bio_bvecs = DRBD_MAX_BIO_BVECS_DEF; + + if (!disk_conf_from_tags(mdev, nlp->tag_list, &nbc->dc)) { + retcode = ERR_MANDATORY_TAG; + goto fail; + } + + if (nbc->dc.meta_dev_idx < DRBD_MD_INDEX_FLEX_INT) { + retcode = ERR_MD_IDX_INVALID; + goto fail; + } + + nbc->lo_file = filp_open(nbc->dc.backing_dev, O_RDWR, 0); + if (IS_ERR(nbc->lo_file)) { + dev_err(DEV, "open(\"%s\") failed with %ld\n", nbc->dc.backing_dev, + PTR_ERR(nbc->lo_file)); + nbc->lo_file = NULL; + retcode = ERR_OPEN_DISK; + goto fail; + } + + inode = nbc->lo_file->f_dentry->d_inode; + + if (!S_ISBLK(inode->i_mode)) { + retcode = ERR_DISK_NOT_BDEV; + goto fail; + } + + nbc->md_file = filp_open(nbc->dc.meta_dev, O_RDWR, 0); + if (IS_ERR(nbc->md_file)) { + dev_err(DEV, "open(\"%s\") failed with %ld\n", nbc->dc.meta_dev, + PTR_ERR(nbc->md_file)); + nbc->md_file = NULL; + retcode = ERR_OPEN_MD_DISK; + goto fail; + } + + inode2 = nbc->md_file->f_dentry->d_inode; + + if (!S_ISBLK(inode2->i_mode)) { + retcode = ERR_MD_NOT_BDEV; + goto fail; + } + + nbc->backing_bdev = inode->i_bdev; + if (bd_claim(nbc->backing_bdev, mdev)) { + printk(KERN_ERR "drbd: bd_claim(%p,%p); failed [%p;%p;%u]\n", + nbc->backing_bdev, mdev, + nbc->backing_bdev->bd_holder, + nbc->backing_bdev->bd_contains->bd_holder, + nbc->backing_bdev->bd_holders); + retcode = ERR_BDCLAIM_DISK; + goto fail; + } + + resync_lru = lc_create("resync", drbd_bm_ext_cache, + 61, sizeof(struct bm_extent), + offsetof(struct bm_extent, lce)); + if (!resync_lru) { + retcode = ERR_NOMEM; + goto release_bdev_fail; + } + + /* meta_dev_idx >= 0: external fixed size, + * possibly multiple drbd sharing one meta device. + * TODO in that case, paranoia check that [md_bdev, meta_dev_idx] is + * not yet used by some other drbd minor! + * (if you use drbd.conf + drbdadm, + * that should check it for you already; but if you don't, or someone + * fooled it, we need to double check here) */ + nbc->md_bdev = inode2->i_bdev; + if (bd_claim(nbc->md_bdev, (nbc->dc.meta_dev_idx < 0) ? 
(void *)mdev + : (void *) drbd_m_holder)) { + retcode = ERR_BDCLAIM_MD_DISK; + goto release_bdev_fail; + } + + if ((nbc->backing_bdev == nbc->md_bdev) != + (nbc->dc.meta_dev_idx == DRBD_MD_INDEX_INTERNAL || + nbc->dc.meta_dev_idx == DRBD_MD_INDEX_FLEX_INT)) { + retcode = ERR_MD_IDX_INVALID; + goto release_bdev2_fail; + } + + /* RT - for drbd_get_max_capacity() DRBD_MD_INDEX_FLEX_INT */ + drbd_md_set_sector_offsets(mdev, nbc); + + if (drbd_get_max_capacity(nbc) < nbc->dc.disk_size) { + dev_err(DEV, "max capacity %llu smaller than disk size %llu\n", + (unsigned long long) drbd_get_max_capacity(nbc), + (unsigned long long) nbc->dc.disk_size); + retcode = ERR_DISK_TO_SMALL; + goto release_bdev2_fail; + } + + if (nbc->dc.meta_dev_idx < 0) { + max_possible_sectors = DRBD_MAX_SECTORS_FLEX; + /* at least one MB, otherwise it does not make sense */ + min_md_device_sectors = (2<<10); + } else { + max_possible_sectors = DRBD_MAX_SECTORS; + min_md_device_sectors = MD_RESERVED_SECT * (nbc->dc.meta_dev_idx + 1); + } + + if (drbd_get_capacity(nbc->md_bdev) > max_possible_sectors) + dev_warn(DEV, "truncating very big lower level device " + "to currently maximum possible %llu sectors\n", + (unsigned long long) max_possible_sectors); + + if (drbd_get_capacity(nbc->md_bdev) < min_md_device_sectors) { + retcode = ERR_MD_DISK_TO_SMALL; + dev_warn(DEV, "refusing attach: md-device too small, " + "at least %llu sectors needed for this meta-disk type\n", + (unsigned long long) min_md_device_sectors); + goto release_bdev2_fail; + } + + /* Make sure the new disk is big enough + * (we may currently be R_PRIMARY with no local disk...) */ + if (drbd_get_max_capacity(nbc) < + drbd_get_capacity(mdev->this_bdev)) { + retcode = ERR_DISK_TO_SMALL; + goto release_bdev2_fail; + } + + nbc->known_size = drbd_get_capacity(nbc->backing_bdev); + + drbd_suspend_io(mdev); + /* also wait for the last barrier ack. */ + wait_event(mdev->misc_wait, !atomic_read(&mdev->ap_pending_cnt)); + /* and for any other previously queued work */ + drbd_flush_workqueue(mdev); + + retcode = _drbd_request_state(mdev, NS(disk, D_ATTACHING), CS_VERBOSE); + drbd_resume_io(mdev); + if (retcode < SS_SUCCESS) + goto release_bdev2_fail; + + if (!get_ldev_if_state(mdev, D_ATTACHING)) + goto force_diskless; + + drbd_md_set_sector_offsets(mdev, nbc); + + if (!mdev->bitmap) { + if (drbd_bm_init(mdev)) { + retcode = ERR_NOMEM; + goto force_diskless_dec; + } + } + + retcode = drbd_md_read(mdev, nbc); + if (retcode != NO_ERROR) + goto force_diskless_dec; + + if (mdev->state.conn < C_CONNECTED && + mdev->state.role == R_PRIMARY && + (mdev->ed_uuid & ~((u64)1)) != (nbc->md.uuid[UI_CURRENT] & ~((u64)1))) { + dev_err(DEV, "Can only attach to data with current UUID=%016llX\n", + (unsigned long long)mdev->ed_uuid); + retcode = ERR_DATA_NOT_CURRENT; + goto force_diskless_dec; + } + + /* Since we are diskless, fix the activity log first... */ + if (drbd_check_al_size(mdev)) { + retcode = ERR_NOMEM; + goto force_diskless_dec; + } + + /* Prevent shrinking of consistent devices ! 
*/ + if (drbd_md_test_flag(nbc, MDF_CONSISTENT) && + drbd_new_dev_size(mdev, nbc) < nbc->md.la_size_sect) { + dev_warn(DEV, "refusing to truncate a consistent device\n"); + retcode = ERR_DISK_TO_SMALL; + goto force_diskless_dec; + } + + if (!drbd_al_read_log(mdev, nbc)) { + retcode = ERR_IO_MD_DISK; + goto force_diskless_dec; + } + + /* allocate a second IO page if logical_block_size != 512 */ + logical_block_size = bdev_logical_block_size(nbc->md_bdev); + if (logical_block_size == 0) + logical_block_size = MD_SECTOR_SIZE; + + if (logical_block_size != MD_SECTOR_SIZE) { + if (!mdev->md_io_tmpp) { + struct page *page = alloc_page(GFP_NOIO); + if (!page) + goto force_diskless_dec; + + dev_warn(DEV, "Meta data's bdev logical_block_size = %d != %d\n", + logical_block_size, MD_SECTOR_SIZE); + dev_warn(DEV, "Workaround engaged (has performance impact).\n"); + + mdev->md_io_tmpp = page; + } + } + + /* Reset the "barriers don't work" bits here, then force meta data to + * be written, to ensure we determine if barriers are supported. */ + if (nbc->dc.no_md_flush) + set_bit(MD_NO_BARRIER, &mdev->flags); + else + clear_bit(MD_NO_BARRIER, &mdev->flags); + + /* Point of no return reached. + * Devices and memory are no longer released by error cleanup below. + * now mdev takes over responsibility, and the state engine should + * clean it up somewhere. */ + D_ASSERT(mdev->ldev == NULL); + mdev->ldev = nbc; + mdev->resync = resync_lru; + nbc = NULL; + resync_lru = NULL; + + mdev->write_ordering = WO_bio_barrier; + drbd_bump_write_ordering(mdev, WO_bio_barrier); + + if (drbd_md_test_flag(mdev->ldev, MDF_CRASHED_PRIMARY)) + set_bit(CRASHED_PRIMARY, &mdev->flags); + else + clear_bit(CRASHED_PRIMARY, &mdev->flags); + + if (drbd_md_test_flag(mdev->ldev, MDF_PRIMARY_IND)) { + set_bit(CRASHED_PRIMARY, &mdev->flags); + cp_discovered = 1; + } + + mdev->send_cnt = 0; + mdev->recv_cnt = 0; + mdev->read_cnt = 0; + mdev->writ_cnt = 0; + + drbd_setup_queue_param(mdev, DRBD_MAX_SEGMENT_SIZE); + + /* If I am currently not R_PRIMARY, + * but meta data primary indicator is set, + * I just now recover from a hard crash, + * and have been R_PRIMARY before that crash. + * + * Now, if I had no connection before that crash + * (have been degraded R_PRIMARY), chances are that + * I won't find my peer now either. + * + * In that case, and _only_ in that case, + * we use the degr-wfc-timeout instead of the default, + * so we can automatically recover from a crash of a + * degraded but active "cluster" after a certain timeout. 
+ */ + clear_bit(USE_DEGR_WFC_T, &mdev->flags); + if (mdev->state.role != R_PRIMARY && + drbd_md_test_flag(mdev->ldev, MDF_PRIMARY_IND) && + !drbd_md_test_flag(mdev->ldev, MDF_CONNECTED_IND)) + set_bit(USE_DEGR_WFC_T, &mdev->flags); + + dd = drbd_determin_dev_size(mdev); + if (dd == dev_size_error) { + retcode = ERR_NOMEM_BITMAP; + goto force_diskless_dec; + } else if (dd == grew) + set_bit(RESYNC_AFTER_NEG, &mdev->flags); + + if (drbd_md_test_flag(mdev->ldev, MDF_FULL_SYNC)) { + dev_info(DEV, "Assuming that all blocks are out of sync " + "(aka FullSync)\n"); + if (drbd_bitmap_io(mdev, &drbd_bmio_set_n_write, "set_n_write from attaching")) { + retcode = ERR_IO_MD_DISK; + goto force_diskless_dec; + } + } else { + if (drbd_bitmap_io(mdev, &drbd_bm_read, "read from attaching") < 0) { + retcode = ERR_IO_MD_DISK; + goto force_diskless_dec; + } + } + + if (cp_discovered) { + drbd_al_apply_to_bm(mdev); + drbd_al_to_on_disk_bm(mdev); + } + + spin_lock_irq(&mdev->req_lock); + os = mdev->state; + ns.i = os.i; + /* If MDF_CONSISTENT is not set go into inconsistent state, + otherwise investigate MDF_WasUpToDate... + If MDF_WAS_UP_TO_DATE is not set go into D_OUTDATED disk state, + otherwise into D_CONSISTENT state. + */ + if (drbd_md_test_flag(mdev->ldev, MDF_CONSISTENT)) { + if (drbd_md_test_flag(mdev->ldev, MDF_WAS_UP_TO_DATE)) + ns.disk = D_CONSISTENT; + else + ns.disk = D_OUTDATED; + } else { + ns.disk = D_INCONSISTENT; + } + + if (drbd_md_test_flag(mdev->ldev, MDF_PEER_OUT_DATED)) + ns.pdsk = D_OUTDATED; + + if ( ns.disk == D_CONSISTENT && + (ns.pdsk == D_OUTDATED || mdev->ldev->dc.fencing == FP_DONT_CARE)) + ns.disk = D_UP_TO_DATE; + + /* All tests on MDF_PRIMARY_IND, MDF_CONNECTED_IND, + MDF_CONSISTENT and MDF_WAS_UP_TO_DATE must happen before + this point, because drbd_request_state() modifies these + flags. */ + + /* In case we are C_CONNECTED postpone any decision on the new disk + state after the negotiation phase. 
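The flag-to-state decision just above can be summarized in a small table; D_CONSISTENT is subsequently promoted to D_UP_TO_DATE when the peer is known to be outdated or fencing is FP_DONT_CARE:

    MDF_CONSISTENT   MDF_WAS_UP_TO_DATE   resulting ns.disk
         0                 (any)          D_INCONSISTENT
         1                   0            D_OUTDATED
         1                   1            D_CONSISTENT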
*/ + if (mdev->state.conn == C_CONNECTED) { + mdev->new_state_tmp.i = ns.i; + ns.i = os.i; + ns.disk = D_NEGOTIATING; + } + + rv = _drbd_set_state(mdev, ns, CS_VERBOSE, NULL); + ns = mdev->state; + spin_unlock_irq(&mdev->req_lock); + + if (rv < SS_SUCCESS) + goto force_diskless_dec; + + if (mdev->state.role == R_PRIMARY) + mdev->ldev->md.uuid[UI_CURRENT] |= (u64)1; + else + mdev->ldev->md.uuid[UI_CURRENT] &= ~(u64)1; + + drbd_md_mark_dirty(mdev); + drbd_md_sync(mdev); + + kobject_uevent(&disk_to_dev(mdev->vdisk)->kobj, KOBJ_CHANGE); + put_ldev(mdev); + reply->ret_code = retcode; + drbd_reconfig_done(mdev); + return 0; + + force_diskless_dec: + put_ldev(mdev); + force_diskless: + drbd_force_state(mdev, NS(disk, D_DISKLESS)); + drbd_md_sync(mdev); + release_bdev2_fail: + if (nbc) + bd_release(nbc->md_bdev); + release_bdev_fail: + if (nbc) + bd_release(nbc->backing_bdev); + fail: + if (nbc) { + if (nbc->lo_file) + fput(nbc->lo_file); + if (nbc->md_file) + fput(nbc->md_file); + kfree(nbc); + } + lc_destroy(resync_lru); + + reply->ret_code = retcode; + drbd_reconfig_done(mdev); + return 0; +} + +static int drbd_nl_detach(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + reply->ret_code = drbd_request_state(mdev, NS(disk, D_DISKLESS)); + return 0; +} + +static int drbd_nl_net_conf(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + int i, ns; + enum drbd_ret_codes retcode; + struct net_conf *new_conf = NULL; + struct crypto_hash *tfm = NULL; + struct crypto_hash *integrity_w_tfm = NULL; + struct crypto_hash *integrity_r_tfm = NULL; + struct hlist_head *new_tl_hash = NULL; + struct hlist_head *new_ee_hash = NULL; + struct drbd_conf *odev; + char hmac_name[CRYPTO_MAX_ALG_NAME]; + void *int_dig_out = NULL; + void *int_dig_in = NULL; + void *int_dig_vv = NULL; + struct sockaddr *new_my_addr, *new_peer_addr, *taken_addr; + + drbd_reconfig_start(mdev); + + if (mdev->state.conn > C_STANDALONE) { + retcode = ERR_NET_CONFIGURED; + goto fail; + } + + /* allocation not in the IO path, cqueue thread context */ + new_conf = kmalloc(sizeof(struct net_conf), GFP_KERNEL); + if (!new_conf) { + retcode = ERR_NOMEM; + goto fail; + } + + memset(new_conf, 0, sizeof(struct net_conf)); + new_conf->timeout = DRBD_TIMEOUT_DEF; + new_conf->try_connect_int = DRBD_CONNECT_INT_DEF; + new_conf->ping_int = DRBD_PING_INT_DEF; + new_conf->max_epoch_size = DRBD_MAX_EPOCH_SIZE_DEF; + new_conf->max_buffers = DRBD_MAX_BUFFERS_DEF; + new_conf->unplug_watermark = DRBD_UNPLUG_WATERMARK_DEF; + new_conf->sndbuf_size = DRBD_SNDBUF_SIZE_DEF; + new_conf->rcvbuf_size = DRBD_RCVBUF_SIZE_DEF; + new_conf->ko_count = DRBD_KO_COUNT_DEF; + new_conf->after_sb_0p = DRBD_AFTER_SB_0P_DEF; + new_conf->after_sb_1p = DRBD_AFTER_SB_1P_DEF; + new_conf->after_sb_2p = DRBD_AFTER_SB_2P_DEF; + new_conf->want_lose = 0; + new_conf->two_primaries = 0; + new_conf->wire_protocol = DRBD_PROT_C; + new_conf->ping_timeo = DRBD_PING_TIMEO_DEF; + new_conf->rr_conflict = DRBD_RR_CONFLICT_DEF; + + if (!net_conf_from_tags(mdev, nlp->tag_list, new_conf)) { + retcode = ERR_MANDATORY_TAG; + goto fail; + } + + if (new_conf->two_primaries + && (new_conf->wire_protocol != DRBD_PROT_C)) { + retcode = ERR_NOT_PROTO_C; + goto fail; + }; + + if (mdev->state.role == R_PRIMARY && new_conf->want_lose) { + retcode = ERR_DISCARD; + goto fail; + } + + retcode = NO_ERROR; + + new_my_addr = (struct sockaddr *)&new_conf->my_addr; + new_peer_addr = (struct sockaddr *)&new_conf->peer_addr; + for (i = 0; i < 
minor_count; i++) { + odev = minor_to_mdev(i); + if (!odev || odev == mdev) + continue; + if (get_net_conf(odev)) { + taken_addr = (struct sockaddr *)&odev->net_conf->my_addr; + if (new_conf->my_addr_len == odev->net_conf->my_addr_len && + !memcmp(new_my_addr, taken_addr, new_conf->my_addr_len)) + retcode = ERR_LOCAL_ADDR; + + taken_addr = (struct sockaddr *)&odev->net_conf->peer_addr; + if (new_conf->peer_addr_len == odev->net_conf->peer_addr_len && + !memcmp(new_peer_addr, taken_addr, new_conf->peer_addr_len)) + retcode = ERR_PEER_ADDR; + + put_net_conf(odev); + if (retcode != NO_ERROR) + goto fail; + } + } + + if (new_conf->cram_hmac_alg[0] != 0) { + snprintf(hmac_name, CRYPTO_MAX_ALG_NAME, "hmac(%s)", + new_conf->cram_hmac_alg); + tfm = crypto_alloc_hash(hmac_name, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(tfm)) { + tfm = NULL; + retcode = ERR_AUTH_ALG; + goto fail; + } + + if (crypto_tfm_alg_type(crypto_hash_tfm(tfm)) + != CRYPTO_ALG_TYPE_HASH) { + retcode = ERR_AUTH_ALG_ND; + goto fail; + } + } + + if (new_conf->integrity_alg[0]) { + integrity_w_tfm = crypto_alloc_hash(new_conf->integrity_alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(integrity_w_tfm)) { + integrity_w_tfm = NULL; + retcode=ERR_INTEGRITY_ALG; + goto fail; + } + + if (!drbd_crypto_is_hash(crypto_hash_tfm(integrity_w_tfm))) { + retcode=ERR_INTEGRITY_ALG_ND; + goto fail; + } + + integrity_r_tfm = crypto_alloc_hash(new_conf->integrity_alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(integrity_r_tfm)) { + integrity_r_tfm = NULL; + retcode=ERR_INTEGRITY_ALG; + goto fail; + } + } + + ns = new_conf->max_epoch_size/8; + if (mdev->tl_hash_s != ns) { + new_tl_hash = kzalloc(ns*sizeof(void *), GFP_KERNEL); + if (!new_tl_hash) { + retcode = ERR_NOMEM; + goto fail; + } + } + + ns = new_conf->max_buffers/8; + if (new_conf->two_primaries && (mdev->ee_hash_s != ns)) { + new_ee_hash = kzalloc(ns*sizeof(void *), GFP_KERNEL); + if (!new_ee_hash) { + retcode = ERR_NOMEM; + goto fail; + } + } + + ((char *)new_conf->shared_secret)[SHARED_SECRET_MAX-1] = 0; + + if (integrity_w_tfm) { + i = crypto_hash_digestsize(integrity_w_tfm); + int_dig_out = kmalloc(i, GFP_KERNEL); + if (!int_dig_out) { + retcode = ERR_NOMEM; + goto fail; + } + int_dig_in = kmalloc(i, GFP_KERNEL); + if (!int_dig_in) { + retcode = ERR_NOMEM; + goto fail; + } + int_dig_vv = kmalloc(i, GFP_KERNEL); + if (!int_dig_vv) { + retcode = ERR_NOMEM; + goto fail; + } + } + + if (!mdev->bitmap) { + if(drbd_bm_init(mdev)) { + retcode = ERR_NOMEM; + goto fail; + } + } + + spin_lock_irq(&mdev->req_lock); + if (mdev->net_conf != NULL) { + retcode = ERR_NET_CONFIGURED; + spin_unlock_irq(&mdev->req_lock); + goto fail; + } + mdev->net_conf = new_conf; + + mdev->send_cnt = 0; + mdev->recv_cnt = 0; + + if (new_tl_hash) { + kfree(mdev->tl_hash); + mdev->tl_hash_s = mdev->net_conf->max_epoch_size/8; + mdev->tl_hash = new_tl_hash; + } + + if (new_ee_hash) { + kfree(mdev->ee_hash); + mdev->ee_hash_s = mdev->net_conf->max_buffers/8; + mdev->ee_hash = new_ee_hash; + } + + crypto_free_hash(mdev->cram_hmac_tfm); + mdev->cram_hmac_tfm = tfm; + + crypto_free_hash(mdev->integrity_w_tfm); + mdev->integrity_w_tfm = integrity_w_tfm; + + crypto_free_hash(mdev->integrity_r_tfm); + mdev->integrity_r_tfm = integrity_r_tfm; + + kfree(mdev->int_dig_out); + kfree(mdev->int_dig_in); + kfree(mdev->int_dig_vv); + mdev->int_dig_out=int_dig_out; + mdev->int_dig_in=int_dig_in; + mdev->int_dig_vv=int_dig_vv; + spin_unlock_irq(&mdev->req_lock); + + retcode = _drbd_request_state(mdev, NS(conn, C_UNCONNECTED), CS_VERBOSE); + + 
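+	/* Note on the hand-over above: new_conf, the hash transforms and
+	 * the digest buffers were all allocated outside of the spinlock;
+	 * only the final pointer assignments happen under req_lock, so
+	 * concurrent readers see either the complete old configuration or
+	 * the complete new one, never a half-initialized mix. The fail:
+	 * path below only ever frees objects that were prepared but not
+	 * yet published. */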
kobject_uevent(&disk_to_dev(mdev->vdisk)->kobj, KOBJ_CHANGE); + reply->ret_code = retcode; + drbd_reconfig_done(mdev); + return 0; + +fail: + kfree(int_dig_out); + kfree(int_dig_in); + kfree(int_dig_vv); + crypto_free_hash(tfm); + crypto_free_hash(integrity_w_tfm); + crypto_free_hash(integrity_r_tfm); + kfree(new_tl_hash); + kfree(new_ee_hash); + kfree(new_conf); + + reply->ret_code = retcode; + drbd_reconfig_done(mdev); + return 0; +} + +static int drbd_nl_disconnect(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + int retcode; + + retcode = _drbd_request_state(mdev, NS(conn, C_DISCONNECTING), CS_ORDERED); + + if (retcode == SS_NOTHING_TO_DO) + goto done; + else if (retcode == SS_ALREADY_STANDALONE) + goto done; + else if (retcode == SS_PRIMARY_NOP) { + /* Our state checking code wants to see the peer outdated. */ + retcode = drbd_request_state(mdev, NS2(conn, C_DISCONNECTING, + pdsk, D_OUTDATED)); + } else if (retcode == SS_CW_FAILED_BY_PEER) { + /* The peer probably wants to see us outdated. */ + retcode = _drbd_request_state(mdev, NS2(conn, C_DISCONNECTING, + disk, D_OUTDATED), + CS_ORDERED); + if (retcode == SS_IS_DISKLESS || retcode == SS_LOWER_THAN_OUTDATED) { + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + retcode = SS_SUCCESS; + } + } + + if (retcode < SS_SUCCESS) + goto fail; + + if (wait_event_interruptible(mdev->state_wait, + mdev->state.conn != C_DISCONNECTING)) { + /* Do not test for mdev->state.conn == C_STANDALONE, since + someone else might connect us in the mean time! */ + retcode = ERR_INTR; + goto fail; + } + + done: + retcode = NO_ERROR; + fail: + drbd_md_sync(mdev); + reply->ret_code = retcode; + return 0; +} + +void resync_after_online_grow(struct drbd_conf *mdev) +{ + int iass; /* I am sync source */ + + dev_info(DEV, "Resync of new storage after online grow\n"); + if (mdev->state.role != mdev->state.peer) + iass = (mdev->state.role == R_PRIMARY); + else + iass = test_bit(DISCARD_CONCURRENT, &mdev->flags); + + if (iass) + drbd_start_resync(mdev, C_SYNC_SOURCE); + else + _drbd_request_state(mdev, NS(conn, C_WF_SYNC_UUID), CS_VERBOSE + CS_SERIALIZE); +} + +static int drbd_nl_resize(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + struct resize rs; + int retcode = NO_ERROR; + int ldsc = 0; /* local disk size changed */ + enum determine_dev_size dd; + + memset(&rs, 0, sizeof(struct resize)); + if (!resize_from_tags(mdev, nlp->tag_list, &rs)) { + retcode = ERR_MANDATORY_TAG; + goto fail; + } + + if (mdev->state.conn > C_CONNECTED) { + retcode = ERR_RESIZE_RESYNC; + goto fail; + } + + if (mdev->state.role == R_SECONDARY && + mdev->state.peer == R_SECONDARY) { + retcode = ERR_NO_PRIMARY; + goto fail; + } + + if (!get_ldev(mdev)) { + retcode = ERR_NO_DISK; + goto fail; + } + + if (mdev->ldev->known_size != drbd_get_capacity(mdev->ldev->backing_bdev)) { + mdev->ldev->known_size = drbd_get_capacity(mdev->ldev->backing_bdev); + ldsc = 1; + } + + mdev->ldev->dc.disk_size = (sector_t)rs.resize_size; + dd = drbd_determin_dev_size(mdev); + drbd_md_sync(mdev); + put_ldev(mdev); + if (dd == dev_size_error) { + retcode = ERR_NOMEM_BITMAP; + goto fail; + } + + if (mdev->state.conn == C_CONNECTED && (dd != unchanged || ldsc)) { + if (dd == grew) + set_bit(RESIZE_PENDING, &mdev->flags); + + drbd_send_uuids(mdev); + drbd_send_sizes(mdev, 1); + } + + fail: + reply->ret_code = retcode; + return 0; +} + +static int drbd_nl_syncer_conf(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct
drbd_nl_cfg_reply *reply) +{ + int retcode = NO_ERROR; + int err; + int ovr; /* online verify running */ + int rsr; /* re-sync running */ + struct crypto_hash *verify_tfm = NULL; + struct crypto_hash *csums_tfm = NULL; + struct syncer_conf sc; + cpumask_var_t new_cpu_mask; + + if (!zalloc_cpumask_var(&new_cpu_mask, GFP_KERNEL)) { + retcode = ERR_NOMEM; + goto fail; + } + + if (nlp->flags & DRBD_NL_SET_DEFAULTS) { + memset(&sc, 0, sizeof(struct syncer_conf)); + sc.rate = DRBD_RATE_DEF; + sc.after = DRBD_AFTER_DEF; + sc.al_extents = DRBD_AL_EXTENTS_DEF; + } else + memcpy(&sc, &mdev->sync_conf, sizeof(struct syncer_conf)); + + if (!syncer_conf_from_tags(mdev, nlp->tag_list, &sc)) { + retcode = ERR_MANDATORY_TAG; + goto fail; + } + + /* re-sync running */ + rsr = ( mdev->state.conn == C_SYNC_SOURCE || + mdev->state.conn == C_SYNC_TARGET || + mdev->state.conn == C_PAUSED_SYNC_S || + mdev->state.conn == C_PAUSED_SYNC_T ); + + if (rsr && strcmp(sc.csums_alg, mdev->sync_conf.csums_alg)) { + retcode = ERR_CSUMS_RESYNC_RUNNING; + goto fail; + } + + if (!rsr && sc.csums_alg[0]) { + csums_tfm = crypto_alloc_hash(sc.csums_alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(csums_tfm)) { + csums_tfm = NULL; + retcode = ERR_CSUMS_ALG; + goto fail; + } + + if (!drbd_crypto_is_hash(crypto_hash_tfm(csums_tfm))) { + retcode = ERR_CSUMS_ALG_ND; + goto fail; + } + } + + /* online verify running */ + ovr = (mdev->state.conn == C_VERIFY_S || mdev->state.conn == C_VERIFY_T); + + if (ovr) { + if (strcmp(sc.verify_alg, mdev->sync_conf.verify_alg)) { + retcode = ERR_VERIFY_RUNNING; + goto fail; + } + } + + if (!ovr && sc.verify_alg[0]) { + verify_tfm = crypto_alloc_hash(sc.verify_alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(verify_tfm)) { + verify_tfm = NULL; + retcode = ERR_VERIFY_ALG; + goto fail; + } + + if (!drbd_crypto_is_hash(crypto_hash_tfm(verify_tfm))) { + retcode = ERR_VERIFY_ALG_ND; + goto fail; + } + } + + /* silently ignore cpu mask on UP kernel */ + if (nr_cpu_ids > 1 && sc.cpu_mask[0] != 0) { + err = __bitmap_parse(sc.cpu_mask, 32, 0, + cpumask_bits(new_cpu_mask), nr_cpu_ids); + if (err) { + dev_warn(DEV, "__bitmap_parse() failed with %d\n", err); + retcode = ERR_CPU_MASK_PARSE; + goto fail; + } + } + + ERR_IF (sc.rate < 1) sc.rate = 1; + ERR_IF (sc.al_extents < 7) sc.al_extents = 127; /* arbitrary minimum */ +#define AL_MAX ((MD_AL_MAX_SIZE-1) * AL_EXTENTS_PT) + if (sc.al_extents > AL_MAX) { + dev_err(DEV, "sc.al_extents > %d\n", AL_MAX); + sc.al_extents = AL_MAX; + } +#undef AL_MAX + + /* most sanity checks done, try to assign the new sync-after + * dependency. need to hold the global lock in there, + * to avoid a race in the dependency loop check. */ + retcode = drbd_alter_sa(mdev, sc.after); + if (retcode != NO_ERROR) + goto fail; + + /* ok, assign the rest of it as well. 
+ * lock against receive_SyncParam() */ + spin_lock(&mdev->peer_seq_lock); + mdev->sync_conf = sc; + + if (!rsr) { + crypto_free_hash(mdev->csums_tfm); + mdev->csums_tfm = csums_tfm; + csums_tfm = NULL; + } + + if (!ovr) { + crypto_free_hash(mdev->verify_tfm); + mdev->verify_tfm = verify_tfm; + verify_tfm = NULL; + } + spin_unlock(&mdev->peer_seq_lock); + + if (get_ldev(mdev)) { + wait_event(mdev->al_wait, lc_try_lock(mdev->act_log)); + drbd_al_shrink(mdev); + err = drbd_check_al_size(mdev); + lc_unlock(mdev->act_log); + wake_up(&mdev->al_wait); + + put_ldev(mdev); + drbd_md_sync(mdev); + + if (err) { + retcode = ERR_NOMEM; + goto fail; + } + } + + if (mdev->state.conn >= C_CONNECTED) + drbd_send_sync_param(mdev, &sc); + + if (!cpumask_equal(mdev->cpu_mask, new_cpu_mask)) { + cpumask_copy(mdev->cpu_mask, new_cpu_mask); + drbd_calc_cpu_mask(mdev); + mdev->receiver.reset_cpu_mask = 1; + mdev->asender.reset_cpu_mask = 1; + mdev->worker.reset_cpu_mask = 1; + } + + kobject_uevent(&disk_to_dev(mdev->vdisk)->kobj, KOBJ_CHANGE); +fail: + free_cpumask_var(new_cpu_mask); + crypto_free_hash(csums_tfm); + crypto_free_hash(verify_tfm); + reply->ret_code = retcode; + return 0; +} + +static int drbd_nl_invalidate(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + int retcode; + + retcode = _drbd_request_state(mdev, NS(conn, C_STARTING_SYNC_T), CS_ORDERED); + + if (retcode < SS_SUCCESS && retcode != SS_NEED_CONNECTION) + retcode = drbd_request_state(mdev, NS(conn, C_STARTING_SYNC_T)); + + while (retcode == SS_NEED_CONNECTION) { + spin_lock_irq(&mdev->req_lock); + if (mdev->state.conn < C_CONNECTED) + retcode = _drbd_set_state(_NS(mdev, disk, D_INCONSISTENT), CS_VERBOSE, NULL); + spin_unlock_irq(&mdev->req_lock); + + if (retcode != SS_NEED_CONNECTION) + break; + + retcode = drbd_request_state(mdev, NS(conn, C_STARTING_SYNC_T)); + } + + reply->ret_code = retcode; + return 0; +} + +static int drbd_nl_invalidate_peer(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + + reply->ret_code = drbd_request_state(mdev, NS(conn, C_STARTING_SYNC_S)); + + return 0; +} + +static int drbd_nl_pause_sync(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + int retcode = NO_ERROR; + + if (drbd_request_state(mdev, NS(user_isp, 1)) == SS_NOTHING_TO_DO) + retcode = ERR_PAUSE_IS_SET; + + reply->ret_code = retcode; + return 0; +} + +static int drbd_nl_resume_sync(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + int retcode = NO_ERROR; + + if (drbd_request_state(mdev, NS(user_isp, 0)) == SS_NOTHING_TO_DO) + retcode = ERR_PAUSE_IS_CLEAR; + + reply->ret_code = retcode; + return 0; +} + +static int drbd_nl_suspend_io(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + reply->ret_code = drbd_request_state(mdev, NS(susp, 1)); + + return 0; +} + +static int drbd_nl_resume_io(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + reply->ret_code = drbd_request_state(mdev, NS(susp, 0)); + return 0; +} + +static int drbd_nl_outdate(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + reply->ret_code = drbd_request_state(mdev, NS(disk, D_OUTDATED)); + return 0; +} + +static int drbd_nl_get_config(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + unsigned short *tl; + + tl = reply->tag_list; + + if (get_ldev(mdev)) { 
+ tl = disk_conf_to_tags(mdev, &mdev->ldev->dc, tl); + put_ldev(mdev); + } + + if (get_net_conf(mdev)) { + tl = net_conf_to_tags(mdev, mdev->net_conf, tl); + put_net_conf(mdev); + } + tl = syncer_conf_to_tags(mdev, &mdev->sync_conf, tl); + + put_unaligned(TT_END, tl++); /* Close the tag list */ + + return (int)((char *)tl - (char *)reply->tag_list); +} + +static int drbd_nl_get_state(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + unsigned short *tl = reply->tag_list; + union drbd_state s = mdev->state; + unsigned long rs_left; + unsigned int res; + + tl = get_state_to_tags(mdev, (struct get_state *)&s, tl); + + /* no local ref, no bitmap, no syncer progress. */ + if (s.conn >= C_SYNC_SOURCE && s.conn <= C_PAUSED_SYNC_T) { + if (get_ldev(mdev)) { + drbd_get_syncer_progress(mdev, &rs_left, &res); + tl = tl_add_int(tl, T_sync_progress, &res); + put_ldev(mdev); + } + } + put_unaligned(TT_END, tl++); /* Close the tag list */ + + return (int)((char *)tl - (char *)reply->tag_list); +} + +static int drbd_nl_get_uuids(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + unsigned short *tl; + + tl = reply->tag_list; + + if (get_ldev(mdev)) { + tl = tl_add_blob(tl, T_uuids, mdev->ldev->md.uuid, UI_SIZE*sizeof(u64)); + tl = tl_add_int(tl, T_uuids_flags, &mdev->ldev->md.flags); + put_ldev(mdev); + } + put_unaligned(TT_END, tl++); /* Close the tag list */ + + return (int)((char *)tl - (char *)reply->tag_list); +} + +/** + * drbd_nl_get_timeout_flag() - Used by drbdsetup to find out which timeout value to use + * @mdev: DRBD device. + * @nlp: Netlink/connector packet from drbdsetup + * @reply: Reply packet for drbdsetup + */ +static int drbd_nl_get_timeout_flag(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + unsigned short *tl; + char rv; + + tl = reply->tag_list; + + rv = mdev->state.pdsk == D_OUTDATED ? UT_PEER_OUTDATED : + test_bit(USE_DEGR_WFC_T, &mdev->flags) ? UT_DEGRADED : UT_DEFAULT; + + tl = tl_add_blob(tl, T_use_degraded, &rv, sizeof(rv)); + put_unaligned(TT_END, tl++); /* Close the tag list */ + + return (int)((char *)tl - (char *)reply->tag_list); +} + +static int drbd_nl_start_ov(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + /* default to resume from last known position, if possible */ + struct start_ov args = + { .start_sector = mdev->ov_start_sector }; + + if (!start_ov_from_tags(mdev, nlp->tag_list, &args)) { + reply->ret_code = ERR_MANDATORY_TAG; + return 0; + } + /* w_make_ov_request expects position to be aligned */ + mdev->ov_start_sector = args.start_sector & ~BM_SECT_PER_BIT; + reply->ret_code = drbd_request_state(mdev,NS(conn,C_VERIFY_S)); + return 0; +} + + +static int drbd_nl_new_c_uuid(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp, + struct drbd_nl_cfg_reply *reply) +{ + int retcode = NO_ERROR; + int skip_initial_sync = 0; + int err; + + struct new_c_uuid args; + + memset(&args, 0, sizeof(struct new_c_uuid)); + if (!new_c_uuid_from_tags(mdev, nlp->tag_list, &args)) { + reply->ret_code = ERR_MANDATORY_TAG; + return 0; + } + + mutex_lock(&mdev->state_mutex); /* Protects us against serialized state changes. 
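+ While it is held, the UUID rotation and the optional bitmap clear
+ below cannot interleave with a concurrent role or connection
+ change, so the new current UUID and the bitmap stay consistent
+ with each other.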
*/ + + if (!get_ldev(mdev)) { + retcode = ERR_NO_DISK; + goto out; + } + + /* this is "skip initial sync", assume to be clean */ + if (mdev->state.conn == C_CONNECTED && mdev->agreed_pro_version >= 90 && + mdev->ldev->md.uuid[UI_CURRENT] == UUID_JUST_CREATED && args.clear_bm) { + dev_info(DEV, "Preparing to skip initial sync\n"); + skip_initial_sync = 1; + } else if (mdev->state.conn != C_STANDALONE) { + retcode = ERR_CONNECTED; + goto out_dec; + } + + drbd_uuid_set(mdev, UI_BITMAP, 0); /* Rotate UI_BITMAP to History 1, etc... */ + drbd_uuid_new_current(mdev); /* New current, previous to UI_BITMAP */ + + if (args.clear_bm) { + err = drbd_bitmap_io(mdev, &drbd_bmio_clear_n_write, "clear_n_write from new_c_uuid"); + if (err) { + dev_err(DEV, "Writing bitmap failed with %d\n",err); + retcode = ERR_IO_MD_DISK; + } + if (skip_initial_sync) { + drbd_send_uuids_skip_initial_sync(mdev); + _drbd_uuid_set(mdev, UI_BITMAP, 0); + spin_lock_irq(&mdev->req_lock); + _drbd_set_state(_NS2(mdev, disk, D_UP_TO_DATE, pdsk, D_UP_TO_DATE), + CS_VERBOSE, NULL); + spin_unlock_irq(&mdev->req_lock); + } + } + + drbd_md_sync(mdev); +out_dec: + put_ldev(mdev); +out: + mutex_unlock(&mdev->state_mutex); + + reply->ret_code = retcode; + return 0; +} + +static struct drbd_conf *ensure_mdev(struct drbd_nl_cfg_req *nlp) +{ + struct drbd_conf *mdev; + + if (nlp->drbd_minor >= minor_count) + return NULL; + + mdev = minor_to_mdev(nlp->drbd_minor); + + if (!mdev && (nlp->flags & DRBD_NL_CREATE_DEVICE)) { + struct gendisk *disk = NULL; + mdev = drbd_new_device(nlp->drbd_minor); + + spin_lock_irq(&drbd_pp_lock); + if (minor_table[nlp->drbd_minor] == NULL) { + minor_table[nlp->drbd_minor] = mdev; + disk = mdev->vdisk; + mdev = NULL; + } /* else: we lost the race */ + spin_unlock_irq(&drbd_pp_lock); + + if (disk) /* we won the race above */ + /* in case we ever add a drbd_delete_device(), + * don't forget the del_gendisk! 
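+ Note the two-phase creation: drbd_new_device() allocates outside
+ of the spinlock, and only the minor_table[] store under
+ drbd_pp_lock decides the race; the loser frees its candidate
+ device and re-reads the table, so exactly one mdev per minor
+ survives.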
*/ + add_disk(disk); + else /* we lost the race above */ + drbd_free_mdev(mdev); + + mdev = minor_to_mdev(nlp->drbd_minor); + } + + return mdev; +} + +struct cn_handler_struct { + int (*function)(struct drbd_conf *, + struct drbd_nl_cfg_req *, + struct drbd_nl_cfg_reply *); + int reply_body_size; +}; + +static struct cn_handler_struct cnd_table[] = { + [ P_primary ] = { &drbd_nl_primary, 0 }, + [ P_secondary ] = { &drbd_nl_secondary, 0 }, + [ P_disk_conf ] = { &drbd_nl_disk_conf, 0 }, + [ P_detach ] = { &drbd_nl_detach, 0 }, + [ P_net_conf ] = { &drbd_nl_net_conf, 0 }, + [ P_disconnect ] = { &drbd_nl_disconnect, 0 }, + [ P_resize ] = { &drbd_nl_resize, 0 }, + [ P_syncer_conf ] = { &drbd_nl_syncer_conf, 0 }, + [ P_invalidate ] = { &drbd_nl_invalidate, 0 }, + [ P_invalidate_peer ] = { &drbd_nl_invalidate_peer, 0 }, + [ P_pause_sync ] = { &drbd_nl_pause_sync, 0 }, + [ P_resume_sync ] = { &drbd_nl_resume_sync, 0 }, + [ P_suspend_io ] = { &drbd_nl_suspend_io, 0 }, + [ P_resume_io ] = { &drbd_nl_resume_io, 0 }, + [ P_outdate ] = { &drbd_nl_outdate, 0 }, + [ P_get_config ] = { &drbd_nl_get_config, + sizeof(struct syncer_conf_tag_len_struct) + + sizeof(struct disk_conf_tag_len_struct) + + sizeof(struct net_conf_tag_len_struct) }, + [ P_get_state ] = { &drbd_nl_get_state, + sizeof(struct get_state_tag_len_struct) + + sizeof(struct sync_progress_tag_len_struct) }, + [ P_get_uuids ] = { &drbd_nl_get_uuids, + sizeof(struct get_uuids_tag_len_struct) }, + [ P_get_timeout_flag ] = { &drbd_nl_get_timeout_flag, + sizeof(struct get_timeout_flag_tag_len_struct)}, + [ P_start_ov ] = { &drbd_nl_start_ov, 0 }, + [ P_new_c_uuid ] = { &drbd_nl_new_c_uuid, 0 }, +}; + +static void drbd_connector_callback(struct cn_msg *req) +{ + struct drbd_nl_cfg_req *nlp = (struct drbd_nl_cfg_req *)req->data; + struct cn_handler_struct *cm; + struct cn_msg *cn_reply; + struct drbd_nl_cfg_reply *reply; + struct drbd_conf *mdev; + int retcode, rr; + int reply_size = sizeof(struct cn_msg) + + sizeof(struct drbd_nl_cfg_reply) + + sizeof(short int); + + if (!try_module_get(THIS_MODULE)) { + printk(KERN_ERR "drbd: try_module_get() failed!\n"); + return; + } + + mdev = ensure_mdev(nlp); + if (!mdev) { + retcode = ERR_MINOR_INVALID; + goto fail; + } + + trace_drbd_netlink(req, 1); + + if (nlp->packet_type >= P_nl_after_last_packet) { + retcode = ERR_PACKET_NR; + goto fail; + } + + cm = cnd_table + nlp->packet_type; + + /* This may happen if packet number is 0: */ + if (cm->function == NULL) { + retcode = ERR_PACKET_NR; + goto fail; + } + + reply_size += cm->reply_body_size; + + /* allocation not in the IO path, cqueue thread context */ + cn_reply = kmalloc(reply_size, GFP_KERNEL); + if (!cn_reply) { + retcode = ERR_NOMEM; + goto fail; + } + reply = (struct drbd_nl_cfg_reply *) cn_reply->data; + + reply->packet_type = + cm->reply_body_size ? nlp->packet_type : P_nl_after_last_packet; + reply->minor = nlp->drbd_minor; + reply->ret_code = NO_ERROR; /* Might be modified by cm->function. */ + /* reply->tag_list; might be modified by cm->function.
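+ Handlers append (u16 tag, u16 len, len payload bytes) records to
+ reply->tag_list, closed by a single TT_END tag; see the
+ __tl_add_blob() and tl_add_int() helpers further down. A minimal
+ sketch of how a consumer could walk such a list; handle_tag() is
+ a hypothetical callback, not part of this patch:
+
+   unsigned short *tl = reply->tag_list;
+   unsigned short tag, len;
+   while ((tag = get_unaligned(tl++)) != TT_END) {
+           len = get_unaligned(tl++);
+           handle_tag(tag, (char *)tl, len);
+           tl = (unsigned short *)((char *)tl + len);
+   }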
*/ + + rr = cm->function(mdev, nlp, reply); + + cn_reply->id = req->id; + cn_reply->seq = req->seq; + cn_reply->ack = req->ack + 1; + cn_reply->len = sizeof(struct drbd_nl_cfg_reply) + rr; + cn_reply->flags = 0; + + trace_drbd_netlink(cn_reply, 0); + rr = cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_KERNEL); + if (rr && rr != -ESRCH) + printk(KERN_INFO "drbd: cn_netlink_send()=%d\n", rr); + + kfree(cn_reply); + module_put(THIS_MODULE); + return; + fail: + drbd_nl_send_reply(req, retcode); + module_put(THIS_MODULE); +} + +static atomic_t drbd_nl_seq = ATOMIC_INIT(2); /* two. */ + +static unsigned short * +__tl_add_blob(unsigned short *tl, enum drbd_tags tag, const void *data, + unsigned short len, int nul_terminated) +{ + unsigned short l = tag_descriptions[tag_number(tag)].max_len; + len = (len < l) ? len : l; + put_unaligned(tag, tl++); + put_unaligned(len, tl++); + memcpy(tl, data, len); + tl = (unsigned short*)((char*)tl + len); + if (nul_terminated) + *((char*)tl - 1) = 0; + return tl; +} + +static unsigned short * +tl_add_blob(unsigned short *tl, enum drbd_tags tag, const void *data, int len) +{ + return __tl_add_blob(tl, tag, data, len, 0); +} + +static unsigned short * +tl_add_str(unsigned short *tl, enum drbd_tags tag, const char *str) +{ + return __tl_add_blob(tl, tag, str, strlen(str)+1, 0); +} + +static unsigned short * +tl_add_int(unsigned short *tl, enum drbd_tags tag, const void *val) +{ + put_unaligned(tag, tl++); + switch(tag_type(tag)) { + case TT_INTEGER: + put_unaligned(sizeof(int), tl++); + put_unaligned(*(int *)val, (int *)tl); + tl = (unsigned short*)((char*)tl+sizeof(int)); + break; + case TT_INT64: + put_unaligned(sizeof(u64), tl++); + put_unaligned(*(u64 *)val, (u64 *)tl); + tl = (unsigned short*)((char*)tl+sizeof(u64)); + break; + default: + /* someone did something stupid. */ + ; + } + return tl; +} + +void drbd_bcast_state(struct drbd_conf *mdev, union drbd_state state) +{ + char buffer[sizeof(struct cn_msg)+ + sizeof(struct drbd_nl_cfg_reply)+ + sizeof(struct get_state_tag_len_struct)+ + sizeof(short int)]; + struct cn_msg *cn_reply = (struct cn_msg *) buffer; + struct drbd_nl_cfg_reply *reply = + (struct drbd_nl_cfg_reply *)cn_reply->data; + unsigned short *tl = reply->tag_list; + + /* dev_warn(DEV, "drbd_bcast_state() got called\n"); */ + + tl = get_state_to_tags(mdev, (struct get_state *)&state, tl); + + put_unaligned(TT_END, tl++); /* Close the tag list */ + + cn_reply->id.idx = CN_IDX_DRBD; + cn_reply->id.val = CN_VAL_DRBD; + + cn_reply->seq = atomic_add_return(1, &drbd_nl_seq); + cn_reply->ack = 0; /* not used here. 
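+ Broadcasts are unsolicited, so there is no request sequence
+ number to acknowledge; ordering is conveyed through the private
+ drbd_nl_seq counter incremented above instead.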
*/ + cn_reply->len = sizeof(struct drbd_nl_cfg_reply) + + (int)((char *)tl - (char *)reply->tag_list); + cn_reply->flags = 0; + + reply->packet_type = P_get_state; + reply->minor = mdev_to_minor(mdev); + reply->ret_code = NO_ERROR; + + trace_drbd_netlink(cn_reply, 0); + cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_NOIO); +} + +void drbd_bcast_ev_helper(struct drbd_conf *mdev, char *helper_name) +{ + char buffer[sizeof(struct cn_msg)+ + sizeof(struct drbd_nl_cfg_reply)+ + sizeof(struct call_helper_tag_len_struct)+ + sizeof(short int)]; + struct cn_msg *cn_reply = (struct cn_msg *) buffer; + struct drbd_nl_cfg_reply *reply = + (struct drbd_nl_cfg_reply *)cn_reply->data; + unsigned short *tl = reply->tag_list; + + /* dev_warn(DEV, "drbd_bcast_state() got called\n"); */ + + tl = tl_add_str(tl, T_helper, helper_name); + put_unaligned(TT_END, tl++); /* Close the tag list */ + + cn_reply->id.idx = CN_IDX_DRBD; + cn_reply->id.val = CN_VAL_DRBD; + + cn_reply->seq = atomic_add_return(1, &drbd_nl_seq); + cn_reply->ack = 0; /* not used here. */ + cn_reply->len = sizeof(struct drbd_nl_cfg_reply) + + (int)((char *)tl - (char *)reply->tag_list); + cn_reply->flags = 0; + + reply->packet_type = P_call_helper; + reply->minor = mdev_to_minor(mdev); + reply->ret_code = NO_ERROR; + + trace_drbd_netlink(cn_reply, 0); + cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_NOIO); +} + +void drbd_bcast_ee(struct drbd_conf *mdev, + const char *reason, const int dgs, + const char* seen_hash, const char* calc_hash, + const struct drbd_epoch_entry* e) +{ + struct cn_msg *cn_reply; + struct drbd_nl_cfg_reply *reply; + struct bio_vec *bvec; + unsigned short *tl; + int i; + + if (!e) + return; + if (!reason || !reason[0]) + return; + + /* apparently we have to memcpy twice, first to prepare the data for the + * struct cn_msg, then within cn_netlink_send from the cn_msg to the + * netlink skb. */ + /* receiver thread context, which is not in the writeout path (of this node), + * but may be in the writeout path of the _other_ node. + * GFP_NOIO to avoid potential "distributed deadlock". */ + cn_reply = kmalloc( + sizeof(struct cn_msg)+ + sizeof(struct drbd_nl_cfg_reply)+ + sizeof(struct dump_ee_tag_len_struct)+ + sizeof(short int), + GFP_NOIO); + + if (!cn_reply) { + dev_err(DEV, "could not kmalloc buffer for drbd_bcast_ee, sector %llu, size %u\n", + (unsigned long long)e->sector, e->size); + return; + } + + reply = (struct drbd_nl_cfg_reply*)cn_reply->data; + tl = reply->tag_list; + + tl = tl_add_str(tl, T_dump_ee_reason, reason); + tl = tl_add_blob(tl, T_seen_digest, seen_hash, dgs); + tl = tl_add_blob(tl, T_calc_digest, calc_hash, dgs); + tl = tl_add_int(tl, T_ee_sector, &e->sector); + tl = tl_add_int(tl, T_ee_block_id, &e->block_id); + + put_unaligned(T_ee_data, tl++); + put_unaligned(e->size, tl++); + + __bio_for_each_segment(bvec, e->private_bio, i, 0) { + void *d = kmap(bvec->bv_page); + memcpy(tl, d + bvec->bv_offset, bvec->bv_len); + kunmap(bvec->bv_page); + tl=(unsigned short*)((char*)tl + bvec->bv_len); + } + put_unaligned(TT_END, tl++); /* Close the tag list */ + + cn_reply->id.idx = CN_IDX_DRBD; + cn_reply->id.val = CN_VAL_DRBD; + + cn_reply->seq = atomic_add_return(1,&drbd_nl_seq); + cn_reply->ack = 0; // not used here. 
+ cn_reply->len = sizeof(struct drbd_nl_cfg_reply) + + (int)((char*)tl - (char*)reply->tag_list); + cn_reply->flags = 0; + + reply->packet_type = P_dump_ee; + reply->minor = mdev_to_minor(mdev); + reply->ret_code = NO_ERROR; + + trace_drbd_netlink(cn_reply, 0); + cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_NOIO); + kfree(cn_reply); +} + +void drbd_bcast_sync_progress(struct drbd_conf *mdev) +{ + char buffer[sizeof(struct cn_msg)+ + sizeof(struct drbd_nl_cfg_reply)+ + sizeof(struct sync_progress_tag_len_struct)+ + sizeof(short int)]; + struct cn_msg *cn_reply = (struct cn_msg *) buffer; + struct drbd_nl_cfg_reply *reply = + (struct drbd_nl_cfg_reply *)cn_reply->data; + unsigned short *tl = reply->tag_list; + unsigned long rs_left; + unsigned int res; + + /* no local ref, no bitmap, no syncer progress, no broadcast. */ + if (!get_ldev(mdev)) + return; + drbd_get_syncer_progress(mdev, &rs_left, &res); + put_ldev(mdev); + + tl = tl_add_int(tl, T_sync_progress, &res); + put_unaligned(TT_END, tl++); /* Close the tag list */ + + cn_reply->id.idx = CN_IDX_DRBD; + cn_reply->id.val = CN_VAL_DRBD; + + cn_reply->seq = atomic_add_return(1, &drbd_nl_seq); + cn_reply->ack = 0; /* not used here. */ + cn_reply->len = sizeof(struct drbd_nl_cfg_reply) + + (int)((char *)tl - (char *)reply->tag_list); + cn_reply->flags = 0; + + reply->packet_type = P_sync_progress; + reply->minor = mdev_to_minor(mdev); + reply->ret_code = NO_ERROR; + + trace_drbd_netlink(cn_reply, 0); + cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_NOIO); +} + +int __init drbd_nl_init(void) +{ + static struct cb_id cn_id_drbd; + int err, try=10; + + cn_id_drbd.val = CN_VAL_DRBD; + do { + cn_id_drbd.idx = cn_idx; + err = cn_add_callback(&cn_id_drbd, "cn_drbd", &drbd_connector_callback); + if (!err) + break; + cn_idx = (cn_idx + CN_IDX_STEP); + } while (try--); + + if (err) { + printk(KERN_ERR "drbd: cn_drbd failed to register\n"); + return err; + } + + return 0; +} + +void drbd_nl_cleanup(void) +{ + static struct cb_id cn_id_drbd; + + cn_id_drbd.idx = cn_idx; + cn_id_drbd.val = CN_VAL_DRBD; + + cn_del_callback(&cn_id_drbd); +} + +void drbd_nl_send_reply(struct cn_msg *req, int ret_code) +{ + char buffer[sizeof(struct cn_msg)+sizeof(struct drbd_nl_cfg_reply)]; + struct cn_msg *cn_reply = (struct cn_msg *) buffer; + struct drbd_nl_cfg_reply *reply = + (struct drbd_nl_cfg_reply *)cn_reply->data; + int rr; + + cn_reply->id = req->id; + + cn_reply->seq = req->seq; + cn_reply->ack = req->ack + 1; + cn_reply->len = sizeof(struct drbd_nl_cfg_reply); + cn_reply->flags = 0; + + reply->minor = ((struct drbd_nl_cfg_req *)req->data)->drbd_minor; + reply->ret_code = ret_code; + + trace_drbd_netlink(cn_reply, 0); + rr = cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_NOIO); + if (rr && rr != -ESRCH) + printk(KERN_INFO "drbd: cn_netlink_send()=%d\n", rr); +} + diff --git a/drivers/block/drbd/drbd_proc.c b/drivers/block/drbd/drbd_proc.c new file mode 100644 index 000000000000..98fcb7450c76 --- /dev/null +++ b/drivers/block/drbd/drbd_proc.c @@ -0,0 +1,266 @@ +/* + drbd_proc.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner . + Copyright (C) 2002-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. 
+ + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + + */ + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include "drbd_int.h" + +static int drbd_proc_open(struct inode *inode, struct file *file); + + +struct proc_dir_entry *drbd_proc; +struct file_operations drbd_proc_fops = { + .owner = THIS_MODULE, + .open = drbd_proc_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + + +/*lge + * progress bars shamelessly adapted from driver/md/md.c + * output looks like + * [=====>..............] 33.5% (23456/123456) + * finish: 2:20:20 speed: 6,345 (6,456) K/sec + */ +static void drbd_syncer_progress(struct drbd_conf *mdev, struct seq_file *seq) +{ + unsigned long db, dt, dbdt, rt, rs_left; + unsigned int res; + int i, x, y; + + drbd_get_syncer_progress(mdev, &rs_left, &res); + + x = res/50; + y = 20-x; + seq_printf(seq, "\t["); + for (i = 1; i < x; i++) + seq_printf(seq, "="); + seq_printf(seq, ">"); + for (i = 0; i < y; i++) + seq_printf(seq, "."); + seq_printf(seq, "] "); + + seq_printf(seq, "sync'ed:%3u.%u%% ", res / 10, res % 10); + /* if more than 1 GB display in MB */ + if (mdev->rs_total > 0x100000L) + seq_printf(seq, "(%lu/%lu)M\n\t", + (unsigned long) Bit2KB(rs_left >> 10), + (unsigned long) Bit2KB(mdev->rs_total >> 10)); + else + seq_printf(seq, "(%lu/%lu)K\n\t", + (unsigned long) Bit2KB(rs_left), + (unsigned long) Bit2KB(mdev->rs_total)); + + /* see drivers/md/md.c + * We do not want to overflow, so the order of operands and + * the * 100 / 100 trick are important. We do a +1 to be + * safe against division by zero. We only estimate anyway. + * + * dt: time from mark until now + * db: blocks written from mark until now + * rt: remaining time + */ + dt = (jiffies - mdev->rs_mark_time) / HZ; + + if (dt > 20) { + /* if we made no update to rs_mark_time for too long, + * we are stalled. show that. */ + seq_printf(seq, "stalled\n"); + return; + } + + if (!dt) + dt++; + db = mdev->rs_mark_left - rs_left; + rt = (dt * (rs_left / (db/100+1)))/100; /* seconds */ + + seq_printf(seq, "finish: %lu:%02lu:%02lu", + rt / 3600, (rt % 3600) / 60, rt % 60); + + /* current speed average over (SYNC_MARKS * SYNC_MARK_STEP) jiffies */ + dbdt = Bit2KB(db/dt); + if (dbdt > 1000) + seq_printf(seq, " speed: %ld,%03ld", + dbdt/1000, dbdt % 1000); + else + seq_printf(seq, " speed: %ld", dbdt); + + /* mean speed since syncer started + * we do account for PausedSync periods */ + dt = (jiffies - mdev->rs_start - mdev->rs_paused) / HZ; + if (dt <= 0) + dt = 1; + db = mdev->rs_total - rs_left; + dbdt = Bit2KB(db/dt); + if (dbdt > 1000) + seq_printf(seq, " (%ld,%03ld)", + dbdt/1000, dbdt % 1000); + else + seq_printf(seq, " (%ld)", dbdt); + + seq_printf(seq, " K/sec\n"); +} + +static void resync_dump_detail(struct seq_file *seq, struct lc_element *e) +{ + struct bm_extent *bme = lc_entry(e, struct bm_extent, lce); + + seq_printf(seq, "%5d %s %s\n", bme->rs_left, + bme->flags & BME_NO_WRITES ? "NO_WRITES" : "---------", + bme->flags & BME_LOCKED ? 
"LOCKED" : "------" + ); +} + +static int drbd_seq_show(struct seq_file *seq, void *v) +{ + int i, hole = 0; + const char *sn; + struct drbd_conf *mdev; + + static char write_ordering_chars[] = { + [WO_none] = 'n', + [WO_drain_io] = 'd', + [WO_bdev_flush] = 'f', + [WO_bio_barrier] = 'b', + }; + + seq_printf(seq, "version: " REL_VERSION " (api:%d/proto:%d-%d)\n%s\n", + API_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX, drbd_buildtag()); + + /* + cs .. connection state + ro .. node role (local/remote) + ds .. disk state (local/remote) + protocol + various flags + ns .. network send + nr .. network receive + dw .. disk write + dr .. disk read + al .. activity log write count + bm .. bitmap update write count + pe .. pending (waiting for ack or data reply) + ua .. unack'd (still need to send ack or data reply) + ap .. application requests accepted, but not yet completed + ep .. number of epochs currently "on the fly", P_BARRIER_ACK pending + wo .. write ordering mode currently in use + oos .. known out-of-sync kB + */ + + for (i = 0; i < minor_count; i++) { + mdev = minor_to_mdev(i); + if (!mdev) { + hole = 1; + continue; + } + if (hole) { + hole = 0; + seq_printf(seq, "\n"); + } + + sn = drbd_conn_str(mdev->state.conn); + + if (mdev->state.conn == C_STANDALONE && + mdev->state.disk == D_DISKLESS && + mdev->state.role == R_SECONDARY) { + seq_printf(seq, "%2d: cs:Unconfigured\n", i); + } else { + seq_printf(seq, + "%2d: cs:%s ro:%s/%s ds:%s/%s %c %c%c%c%c%c\n" + " ns:%u nr:%u dw:%u dr:%u al:%u bm:%u " + "lo:%d pe:%d ua:%d ap:%d ep:%d wo:%c", + i, sn, + drbd_role_str(mdev->state.role), + drbd_role_str(mdev->state.peer), + drbd_disk_str(mdev->state.disk), + drbd_disk_str(mdev->state.pdsk), + (mdev->net_conf == NULL ? ' ' : + (mdev->net_conf->wire_protocol - DRBD_PROT_A+'A')), + mdev->state.susp ? 's' : 'r', + mdev->state.aftr_isp ? 'a' : '-', + mdev->state.peer_isp ? 'p' : '-', + mdev->state.user_isp ? 
'u' : '-', + mdev->congestion_reason ?: '-', + mdev->send_cnt/2, + mdev->recv_cnt/2, + mdev->writ_cnt/2, + mdev->read_cnt/2, + mdev->al_writ_cnt, + mdev->bm_writ_cnt, + atomic_read(&mdev->local_cnt), + atomic_read(&mdev->ap_pending_cnt) + + atomic_read(&mdev->rs_pending_cnt), + atomic_read(&mdev->unacked_cnt), + atomic_read(&mdev->ap_bio_cnt), + mdev->epochs, + write_ordering_chars[mdev->write_ordering] + ); + seq_printf(seq, " oos:%lu\n", + Bit2KB(drbd_bm_total_weight(mdev))); + } + if (mdev->state.conn == C_SYNC_SOURCE || + mdev->state.conn == C_SYNC_TARGET) + drbd_syncer_progress(mdev, seq); + + if (mdev->state.conn == C_VERIFY_S || mdev->state.conn == C_VERIFY_T) + seq_printf(seq, "\t%3d%% %lu/%lu\n", + (int)((mdev->rs_total-mdev->ov_left) / + (mdev->rs_total/100+1)), + mdev->rs_total - mdev->ov_left, + mdev->rs_total); + + if (proc_details >= 1 && get_ldev_if_state(mdev, D_FAILED)) { + lc_seq_printf_stats(seq, mdev->resync); + lc_seq_printf_stats(seq, mdev->act_log); + put_ldev(mdev); + } + + if (proc_details >= 2) { + if (mdev->resync) { + lc_seq_dump_details(seq, mdev->resync, "rs_left", + resync_dump_detail); + } + } + } + + return 0; +} + +static int drbd_proc_open(struct inode *inode, struct file *file) +{ + return single_open(file, drbd_seq_show, PDE(inode)->data); +} + +/* PROC FS stuff end */ diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c new file mode 100644 index 000000000000..63686c4d85cf --- /dev/null +++ b/drivers/block/drbd/drbd_receiver.c @@ -0,0 +1,4456 @@ +/* + drbd_receiver.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner . + Copyright (C) 2002-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. 
+ */ + + +#include +#include + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#define __KERNEL_SYSCALLS__ +#include +#include +#include +#include +#include +#include +#include "drbd_int.h" +#include "drbd_tracing.h" +#include "drbd_req.h" + +#include "drbd_vli.h" + +struct flush_work { + struct drbd_work w; + struct drbd_epoch *epoch; +}; + +enum finish_epoch { + FE_STILL_LIVE, + FE_DESTROYED, + FE_RECYCLED, +}; + +static int drbd_do_handshake(struct drbd_conf *mdev); +static int drbd_do_auth(struct drbd_conf *mdev); + +static enum finish_epoch drbd_may_finish_epoch(struct drbd_conf *, struct drbd_epoch *, enum epoch_event); +static int e_end_block(struct drbd_conf *, struct drbd_work *, int); + +static struct drbd_epoch *previous_epoch(struct drbd_conf *mdev, struct drbd_epoch *epoch) +{ + struct drbd_epoch *prev; + spin_lock(&mdev->epoch_lock); + prev = list_entry(epoch->list.prev, struct drbd_epoch, list); + if (prev == epoch || prev == mdev->current_epoch) + prev = NULL; + spin_unlock(&mdev->epoch_lock); + return prev; +} + +#define GFP_TRY (__GFP_HIGHMEM | __GFP_NOWARN) + +static struct page *drbd_pp_first_page_or_try_alloc(struct drbd_conf *mdev) +{ + struct page *page = NULL; + + /* Yes, testing drbd_pp_vacant outside the lock is racy. + * So what. It saves a spin_lock. */ + if (drbd_pp_vacant > 0) { + spin_lock(&drbd_pp_lock); + page = drbd_pp_pool; + if (page) { + drbd_pp_pool = (struct page *)page_private(page); + set_page_private(page, 0); /* just to be polite */ + drbd_pp_vacant--; + } + spin_unlock(&drbd_pp_lock); + } + /* GFP_TRY, because we must not cause arbitrary write-out: in a DRBD + * "criss-cross" setup, that might cause write-out on some other DRBD, + * which in turn might block on the other node at this very place. */ + if (!page) + page = alloc_page(GFP_TRY); + if (page) + atomic_inc(&mdev->pp_in_use); + return page; +} + +/* kick lower level device, if we have more than (arbitrary number) + * reference counts on it, which typically are locally submitted io + * requests. don't use unacked_cnt, so we speed up proto A and B, too. */ +static void maybe_kick_lo(struct drbd_conf *mdev) +{ + if (atomic_read(&mdev->local_cnt) >= mdev->net_conf->unplug_watermark) + drbd_kick_lo(mdev); +} + +static void reclaim_net_ee(struct drbd_conf *mdev, struct list_head *to_be_freed) +{ + struct drbd_epoch_entry *e; + struct list_head *le, *tle; + + /* The EEs are always appended to the end of the list. Since + they are sent in order over the wire, they have to finish + in order. As soon as we see the first not finished we can + stop to examine the list... */ + + list_for_each_safe(le, tle, &mdev->net_ee) { + e = list_entry(le, struct drbd_epoch_entry, w.list); + if (drbd_bio_has_active_page(e->private_bio)) + break; + list_move(le, to_be_freed); + } +} + +static void drbd_kick_lo_and_reclaim_net(struct drbd_conf *mdev) +{ + LIST_HEAD(reclaimed); + struct drbd_epoch_entry *e, *t; + + maybe_kick_lo(mdev); + spin_lock_irq(&mdev->req_lock); + reclaim_net_ee(mdev, &reclaimed); + spin_unlock_irq(&mdev->req_lock); + + list_for_each_entry_safe(e, t, &reclaimed, w.list) + drbd_free_ee(mdev, e); +} + +/** + * drbd_pp_alloc() - Returns a page, fails only if a signal comes in + * @mdev: DRBD device. 
+ * @retry: whether or not to retry allocation forever (or until signalled) + * + * Tries to allocate a page, first from our own page pool, then from the + * kernel, unless this allocation would exceed the max_buffers setting. + * If @retry is non-zero, retry until DRBD frees a page somewhere else. + */ +static struct page *drbd_pp_alloc(struct drbd_conf *mdev, int retry) +{ + struct page *page = NULL; + DEFINE_WAIT(wait); + + if (atomic_read(&mdev->pp_in_use) < mdev->net_conf->max_buffers) { + page = drbd_pp_first_page_or_try_alloc(mdev); + if (page) + return page; + } + + for (;;) { + prepare_to_wait(&drbd_pp_wait, &wait, TASK_INTERRUPTIBLE); + + drbd_kick_lo_and_reclaim_net(mdev); + + if (atomic_read(&mdev->pp_in_use) < mdev->net_conf->max_buffers) { + page = drbd_pp_first_page_or_try_alloc(mdev); + if (page) + break; + } + + if (!retry) + break; + + if (signal_pending(current)) { + dev_warn(DEV, "drbd_pp_alloc interrupted!\n"); + break; + } + + schedule(); + } + finish_wait(&drbd_pp_wait, &wait); + + return page; +} + +/* Must not be used from irq, as that may deadlock: see drbd_pp_alloc. + * Is also used from inside an other spin_lock_irq(&mdev->req_lock) */ +static void drbd_pp_free(struct drbd_conf *mdev, struct page *page) +{ + int free_it; + + spin_lock(&drbd_pp_lock); + if (drbd_pp_vacant > (DRBD_MAX_SEGMENT_SIZE/PAGE_SIZE)*minor_count) { + free_it = 1; + } else { + set_page_private(page, (unsigned long)drbd_pp_pool); + drbd_pp_pool = page; + drbd_pp_vacant++; + free_it = 0; + } + spin_unlock(&drbd_pp_lock); + + atomic_dec(&mdev->pp_in_use); + + if (free_it) + __free_page(page); + + wake_up(&drbd_pp_wait); +} + +static void drbd_pp_free_bio_pages(struct drbd_conf *mdev, struct bio *bio) +{ + struct page *p_to_be_freed = NULL; + struct page *page; + struct bio_vec *bvec; + int i; + + spin_lock(&drbd_pp_lock); + __bio_for_each_segment(bvec, bio, i, 0) { + if (drbd_pp_vacant > (DRBD_MAX_SEGMENT_SIZE/PAGE_SIZE)*minor_count) { + set_page_private(bvec->bv_page, (unsigned long)p_to_be_freed); + p_to_be_freed = bvec->bv_page; + } else { + set_page_private(bvec->bv_page, (unsigned long)drbd_pp_pool); + drbd_pp_pool = bvec->bv_page; + drbd_pp_vacant++; + } + } + spin_unlock(&drbd_pp_lock); + atomic_sub(bio->bi_vcnt, &mdev->pp_in_use); + + while (p_to_be_freed) { + page = p_to_be_freed; + p_to_be_freed = (struct page *)page_private(page); + set_page_private(page, 0); /* just to be polite */ + put_page(page); + } + + wake_up(&drbd_pp_wait); +} + +/* +You need to hold the req_lock: + _drbd_wait_ee_list_empty() + +You must not have the req_lock: + drbd_free_ee() + drbd_alloc_ee() + drbd_init_ee() + drbd_release_ee() + drbd_ee_fix_bhs() + drbd_process_done_ee() + drbd_clear_done_ee() + drbd_wait_ee_list_empty() +*/ + +struct drbd_epoch_entry *drbd_alloc_ee(struct drbd_conf *mdev, + u64 id, + sector_t sector, + unsigned int data_size, + gfp_t gfp_mask) __must_hold(local) +{ + struct request_queue *q; + struct drbd_epoch_entry *e; + struct page *page; + struct bio *bio; + unsigned int ds; + + if (FAULT_ACTIVE(mdev, DRBD_FAULT_AL_EE)) + return NULL; + + e = mempool_alloc(drbd_ee_mempool, gfp_mask & ~__GFP_HIGHMEM); + if (!e) { + if (!(gfp_mask & __GFP_NOWARN)) + dev_err(DEV, "alloc_ee: Allocation of an EE failed\n"); + return NULL; + } + + bio = bio_alloc(gfp_mask & ~__GFP_HIGHMEM, div_ceil(data_size, PAGE_SIZE)); + if (!bio) { + if (!(gfp_mask & __GFP_NOWARN)) + dev_err(DEV, "alloc_ee: Allocation of a bio failed\n"); + goto fail1; + } + + bio->bi_bdev = mdev->ldev->backing_bdev; + bio->bi_sector 
= sector; + + ds = data_size; + while (ds) { + page = drbd_pp_alloc(mdev, (gfp_mask & __GFP_WAIT)); + if (!page) { + if (!(gfp_mask & __GFP_NOWARN)) + dev_err(DEV, "alloc_ee: Allocation of a page failed\n"); + goto fail2; + } + if (!bio_add_page(bio, page, min_t(int, ds, PAGE_SIZE), 0)) { + drbd_pp_free(mdev, page); + dev_err(DEV, "alloc_ee: bio_add_page(s=%llu," + "data_size=%u,ds=%u) failed\n", + (unsigned long long)sector, data_size, ds); + + q = bdev_get_queue(bio->bi_bdev); + if (q->merge_bvec_fn) { + struct bvec_merge_data bvm = { + .bi_bdev = bio->bi_bdev, + .bi_sector = bio->bi_sector, + .bi_size = bio->bi_size, + .bi_rw = bio->bi_rw, + }; + int l = q->merge_bvec_fn(q, &bvm, + &bio->bi_io_vec[bio->bi_vcnt]); + dev_err(DEV, "merge_bvec_fn() = %d\n", l); + } + + /* dump more of the bio. */ + dev_err(DEV, "bio->bi_max_vecs = %d\n", bio->bi_max_vecs); + dev_err(DEV, "bio->bi_vcnt = %d\n", bio->bi_vcnt); + dev_err(DEV, "bio->bi_size = %d\n", bio->bi_size); + dev_err(DEV, "bio->bi_phys_segments = %d\n", bio->bi_phys_segments); + + goto fail2; + break; + } + ds -= min_t(int, ds, PAGE_SIZE); + } + + D_ASSERT(data_size == bio->bi_size); + + bio->bi_private = e; + e->mdev = mdev; + e->sector = sector; + e->size = bio->bi_size; + + e->private_bio = bio; + e->block_id = id; + INIT_HLIST_NODE(&e->colision); + e->epoch = NULL; + e->flags = 0; + + trace_drbd_ee(mdev, e, "allocated"); + + return e; + + fail2: + drbd_pp_free_bio_pages(mdev, bio); + bio_put(bio); + fail1: + mempool_free(e, drbd_ee_mempool); + + return NULL; +} + +void drbd_free_ee(struct drbd_conf *mdev, struct drbd_epoch_entry *e) +{ + struct bio *bio = e->private_bio; + trace_drbd_ee(mdev, e, "freed"); + drbd_pp_free_bio_pages(mdev, bio); + bio_put(bio); + D_ASSERT(hlist_unhashed(&e->colision)); + mempool_free(e, drbd_ee_mempool); +} + +int drbd_release_ee(struct drbd_conf *mdev, struct list_head *list) +{ + LIST_HEAD(work_list); + struct drbd_epoch_entry *e, *t; + int count = 0; + + spin_lock_irq(&mdev->req_lock); + list_splice_init(list, &work_list); + spin_unlock_irq(&mdev->req_lock); + + list_for_each_entry_safe(e, t, &work_list, w.list) { + drbd_free_ee(mdev, e); + count++; + } + return count; +} + + +/* + * This function is called from _asender only_ + * but see also comments in _req_mod(,barrier_acked) + * and receive_Barrier. + * + * Move entries from net_ee to done_ee, if ready. + * Grab done_ee, call all callbacks, free the entries. + * The callbacks typically send out ACKs. + */ +static int drbd_process_done_ee(struct drbd_conf *mdev) +{ + LIST_HEAD(work_list); + LIST_HEAD(reclaimed); + struct drbd_epoch_entry *e, *t; + int ok = (mdev->state.conn >= C_WF_REPORT_PARAMS); + + spin_lock_irq(&mdev->req_lock); + reclaim_net_ee(mdev, &reclaimed); + list_splice_init(&mdev->done_ee, &work_list); + spin_unlock_irq(&mdev->req_lock); + + list_for_each_entry_safe(e, t, &reclaimed, w.list) + drbd_free_ee(mdev, e); + + /* possible callbacks here: + * e_end_block, and e_end_resync_block, e_send_discard_ack. + * all ignore the last argument. 
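+ The last argument passed below is the negated aggregate result:
+ as soon as one callback fails, every following entry is invoked
+ with it set, and the aggregate return value stays 0.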
+ */ + list_for_each_entry_safe(e, t, &work_list, w.list) { + trace_drbd_ee(mdev, e, "process_done_ee"); + /* list_del not necessary, next/prev members not touched */ + ok = e->w.cb(mdev, &e->w, !ok) && ok; + drbd_free_ee(mdev, e); + } + wake_up(&mdev->ee_wait); + + return ok; +} + +void _drbd_wait_ee_list_empty(struct drbd_conf *mdev, struct list_head *head) +{ + DEFINE_WAIT(wait); + + /* avoids spin_lock/unlock + * and calling prepare_to_wait in the fast path */ + while (!list_empty(head)) { + prepare_to_wait(&mdev->ee_wait, &wait, TASK_UNINTERRUPTIBLE); + spin_unlock_irq(&mdev->req_lock); + drbd_kick_lo(mdev); + schedule(); + finish_wait(&mdev->ee_wait, &wait); + spin_lock_irq(&mdev->req_lock); + } +} + +void drbd_wait_ee_list_empty(struct drbd_conf *mdev, struct list_head *head) +{ + spin_lock_irq(&mdev->req_lock); + _drbd_wait_ee_list_empty(mdev, head); + spin_unlock_irq(&mdev->req_lock); +} + +/* see also kernel_accept; which is only present since 2.6.18. + * also we want to log which part of it failed, exactly */ +static int drbd_accept(struct drbd_conf *mdev, const char **what, + struct socket *sock, struct socket **newsock) +{ + struct sock *sk = sock->sk; + int err = 0; + + *what = "listen"; + err = sock->ops->listen(sock, 5); + if (err < 0) + goto out; + + *what = "sock_create_lite"; + err = sock_create_lite(sk->sk_family, sk->sk_type, sk->sk_protocol, + newsock); + if (err < 0) + goto out; + + *what = "accept"; + err = sock->ops->accept(sock, *newsock, 0); + if (err < 0) { + sock_release(*newsock); + *newsock = NULL; + goto out; + } + (*newsock)->ops = sock->ops; + +out: + return err; +} + +static int drbd_recv_short(struct drbd_conf *mdev, struct socket *sock, + void *buf, size_t size, int flags) +{ + mm_segment_t oldfs; + struct kvec iov = { + .iov_base = buf, + .iov_len = size, + }; + struct msghdr msg = { + .msg_iovlen = 1, + .msg_iov = (struct iovec *)&iov, + .msg_flags = (flags ? 
flags : MSG_WAITALL | MSG_NOSIGNAL) + }; + int rv; + + oldfs = get_fs(); + set_fs(KERNEL_DS); + rv = sock_recvmsg(sock, &msg, size, msg.msg_flags); + set_fs(oldfs); + + return rv; +} + +static int drbd_recv(struct drbd_conf *mdev, void *buf, size_t size) +{ + mm_segment_t oldfs; + struct kvec iov = { + .iov_base = buf, + .iov_len = size, + }; + struct msghdr msg = { + .msg_iovlen = 1, + .msg_iov = (struct iovec *)&iov, + .msg_flags = MSG_WAITALL | MSG_NOSIGNAL + }; + int rv; + + oldfs = get_fs(); + set_fs(KERNEL_DS); + + for (;;) { + rv = sock_recvmsg(mdev->data.socket, &msg, size, msg.msg_flags); + if (rv == size) + break; + + /* Note: + * ECONNRESET other side closed the connection + * ERESTARTSYS (on sock) we got a signal + */ + + if (rv < 0) { + if (rv == -ECONNRESET) + dev_info(DEV, "sock was reset by peer\n"); + else if (rv != -ERESTARTSYS) + dev_err(DEV, "sock_recvmsg returned %d\n", rv); + break; + } else if (rv == 0) { + dev_info(DEV, "sock was shut down by peer\n"); + break; + } else { + /* signal came in, or peer/link went down, + * after we read a partial message + */ + /* D_ASSERT(signal_pending(current)); */ + break; + } + }; + + set_fs(oldfs); + + if (rv != size) + drbd_force_state(mdev, NS(conn, C_BROKEN_PIPE)); + + return rv; +} + +static struct socket *drbd_try_connect(struct drbd_conf *mdev) +{ + const char *what; + struct socket *sock; + struct sockaddr_in6 src_in6; + int err; + int disconnect_on_error = 1; + + if (!get_net_conf(mdev)) + return NULL; + + what = "sock_create_kern"; + err = sock_create_kern(((struct sockaddr *)mdev->net_conf->my_addr)->sa_family, + SOCK_STREAM, IPPROTO_TCP, &sock); + if (err < 0) { + sock = NULL; + goto out; + } + + sock->sk->sk_rcvtimeo = + sock->sk->sk_sndtimeo = mdev->net_conf->try_connect_int*HZ; + + /* explicitly bind to the configured IP as source IP + * for the outgoing connections. + * This is needed for multihomed hosts and to be + * able to use lo: interfaces for drbd. + * Make sure to use 0 as port number, so linux selects + * a free one dynamically. + */ + memcpy(&src_in6, mdev->net_conf->my_addr, + min_t(int, mdev->net_conf->my_addr_len, sizeof(src_in6))); + if (((struct sockaddr *)mdev->net_conf->my_addr)->sa_family == AF_INET6) + src_in6.sin6_port = 0; + else + ((struct sockaddr_in *)&src_in6)->sin_port = 0; /* AF_INET & AF_SCI */ + + what = "bind before connect"; + err = sock->ops->bind(sock, + (struct sockaddr *) &src_in6, + mdev->net_conf->my_addr_len); + if (err < 0) + goto out; + + /* connect may fail, peer not yet available. + * stay C_WF_CONNECTION, don't go Disconnecting! 
*/ + disconnect_on_error = 0; + what = "connect"; + err = sock->ops->connect(sock, + (struct sockaddr *)mdev->net_conf->peer_addr, + mdev->net_conf->peer_addr_len, 0); + +out: + if (err < 0) { + if (sock) { + sock_release(sock); + sock = NULL; + } + switch (-err) { + /* timeout, busy, signal pending */ + case ETIMEDOUT: case EAGAIN: case EINPROGRESS: + case EINTR: case ERESTARTSYS: + /* peer not (yet) available, network problem */ + case ECONNREFUSED: case ENETUNREACH: + case EHOSTDOWN: case EHOSTUNREACH: + disconnect_on_error = 0; + break; + default: + dev_err(DEV, "%s failed, err = %d\n", what, err); + } + if (disconnect_on_error) + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + } + put_net_conf(mdev); + return sock; +} + +static struct socket *drbd_wait_for_connect(struct drbd_conf *mdev) +{ + int timeo, err; + struct socket *s_estab = NULL, *s_listen; + const char *what; + + if (!get_net_conf(mdev)) + return NULL; + + what = "sock_create_kern"; + err = sock_create_kern(((struct sockaddr *)mdev->net_conf->my_addr)->sa_family, + SOCK_STREAM, IPPROTO_TCP, &s_listen); + if (err) { + s_listen = NULL; + goto out; + } + + timeo = mdev->net_conf->try_connect_int * HZ; + timeo += (random32() & 1) ? timeo / 7 : -timeo / 7; /* 28.5% random jitter */ + + s_listen->sk->sk_reuse = 1; /* SO_REUSEADDR */ + s_listen->sk->sk_rcvtimeo = timeo; + s_listen->sk->sk_sndtimeo = timeo; + + what = "bind before listen"; + err = s_listen->ops->bind(s_listen, + (struct sockaddr *) mdev->net_conf->my_addr, + mdev->net_conf->my_addr_len); + if (err < 0) + goto out; + + err = drbd_accept(mdev, &what, s_listen, &s_estab); + +out: + if (s_listen) + sock_release(s_listen); + if (err < 0) { + if (err != -EAGAIN && err != -EINTR && err != -ERESTARTSYS) { + dev_err(DEV, "%s failed, err = %d\n", what, err); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + } + } + put_net_conf(mdev); + + return s_estab; +} + +static int drbd_send_fp(struct drbd_conf *mdev, + struct socket *sock, enum drbd_packets cmd) +{ + struct p_header *h = (struct p_header *) &mdev->data.sbuf.header; + + return _drbd_send_cmd(mdev, sock, cmd, h, sizeof(*h), 0); +} + +static enum drbd_packets drbd_recv_fp(struct drbd_conf *mdev, struct socket *sock) +{ + struct p_header *h = (struct p_header *) &mdev->data.sbuf.header; + int rr; + + rr = drbd_recv_short(mdev, sock, h, sizeof(*h), 0); + + if (rr == sizeof(*h) && h->magic == BE_DRBD_MAGIC) + return be16_to_cpu(h->command); + + return 0xffff; +} + +/** + * drbd_socket_okay() - Free the socket if its connection is not okay + * @mdev: DRBD device. + * @sock: pointer to the pointer to the socket. + */ +static int drbd_socket_okay(struct drbd_conf *mdev, struct socket **sock) +{ + int rr; + char tb[4]; + + if (!*sock) + return FALSE; + + rr = drbd_recv_short(mdev, *sock, tb, 4, MSG_DONTWAIT | MSG_PEEK); + + if (rr > 0 || rr == -EAGAIN) { + return TRUE; + } else { + sock_release(*sock); + *sock = NULL; + return FALSE; + } +} + +/* + * return values: + * 1 yes, we have a valid connection + * 0 oops, did not work out, please try again + * -1 peer talks different language, + * no point in trying again, please go standalone. + * -2 We do not have a network config... 
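+
+ Schematically, the caller is expected to dispatch on these codes
+ roughly like this sketch (hypothetical helper names, not the
+ actual receiver loop; both negative codes end up standalone):
+
+   int h = drbd_connect(mdev);
+   if (h == 1)
+           start_receiving_packets(mdev);
+   else if (h == 0)
+           retry_connect(mdev);
+   else
+           go_standalone(mdev);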
+ */ +static int drbd_connect(struct drbd_conf *mdev) +{ + struct socket *s, *sock, *msock; + int try, h, ok; + + D_ASSERT(!mdev->data.socket); + + if (test_and_clear_bit(CREATE_BARRIER, &mdev->flags)) + dev_err(DEV, "CREATE_BARRIER flag was set in drbd_connect - now cleared!\n"); + + if (drbd_request_state(mdev, NS(conn, C_WF_CONNECTION)) < SS_SUCCESS) + return -2; + + clear_bit(DISCARD_CONCURRENT, &mdev->flags); + + sock = NULL; + msock = NULL; + + do { + for (try = 0;;) { + /* 3 tries, this should take less than a second! */ + s = drbd_try_connect(mdev); + if (s || ++try >= 3) + break; + /* give the other side time to call bind() & listen() */ + __set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(HZ / 10); + } + + if (s) { + if (!sock) { + drbd_send_fp(mdev, s, P_HAND_SHAKE_S); + sock = s; + s = NULL; + } else if (!msock) { + drbd_send_fp(mdev, s, P_HAND_SHAKE_M); + msock = s; + s = NULL; + } else { + dev_err(DEV, "Logic error in drbd_connect()\n"); + goto out_release_sockets; + } + } + + if (sock && msock) { + __set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(HZ / 10); + ok = drbd_socket_okay(mdev, &sock); + ok = drbd_socket_okay(mdev, &msock) && ok; + if (ok) + break; + } + +retry: + s = drbd_wait_for_connect(mdev); + if (s) { + try = drbd_recv_fp(mdev, s); + drbd_socket_okay(mdev, &sock); + drbd_socket_okay(mdev, &msock); + switch (try) { + case P_HAND_SHAKE_S: + if (sock) { + dev_warn(DEV, "initial packet S crossed\n"); + sock_release(sock); + } + sock = s; + break; + case P_HAND_SHAKE_M: + if (msock) { + dev_warn(DEV, "initial packet M crossed\n"); + sock_release(msock); + } + msock = s; + set_bit(DISCARD_CONCURRENT, &mdev->flags); + break; + default: + dev_warn(DEV, "Error receiving initial packet\n"); + sock_release(s); + if (random32() & 1) + goto retry; + } + } + + if (mdev->state.conn <= C_DISCONNECTING) + goto out_release_sockets; + if (signal_pending(current)) { + flush_signals(current); + smp_rmb(); + if (get_t_state(&mdev->receiver) == Exiting) + goto out_release_sockets; + } + + if (sock && msock) { + ok = drbd_socket_okay(mdev, &sock); + ok = drbd_socket_okay(mdev, &msock) && ok; + if (ok) + break; + } + } while (1); + + msock->sk->sk_reuse = 1; /* SO_REUSEADDR */ + sock->sk->sk_reuse = 1; /* SO_REUSEADDR */ + + sock->sk->sk_allocation = GFP_NOIO; + msock->sk->sk_allocation = GFP_NOIO; + + sock->sk->sk_priority = TC_PRIO_INTERACTIVE_BULK; + msock->sk->sk_priority = TC_PRIO_INTERACTIVE; + + if (mdev->net_conf->sndbuf_size) { + sock->sk->sk_sndbuf = mdev->net_conf->sndbuf_size; + sock->sk->sk_userlocks |= SOCK_SNDBUF_LOCK; + } + + if (mdev->net_conf->rcvbuf_size) { + sock->sk->sk_rcvbuf = mdev->net_conf->rcvbuf_size; + sock->sk->sk_userlocks |= SOCK_RCVBUF_LOCK; + } + + /* NOT YET ... + * sock->sk->sk_sndtimeo = mdev->net_conf->timeout*HZ/10; + * sock->sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT; + * first set it to the P_HAND_SHAKE timeout, + * which we set to 4x the configured ping_timeout. */ + sock->sk->sk_sndtimeo = + sock->sk->sk_rcvtimeo = mdev->net_conf->ping_timeo*4*HZ/10; + + msock->sk->sk_sndtimeo = mdev->net_conf->timeout*HZ/10; + msock->sk->sk_rcvtimeo = mdev->net_conf->ping_int*HZ; + + /* we don't want delays. 
+ * we use TCP_CORK where appropriate, though */
+ drbd_tcp_nodelay(sock);
+ drbd_tcp_nodelay(msock);
+
+ mdev->data.socket = sock;
+ mdev->meta.socket = msock;
+ mdev->last_received = jiffies;
+
+ D_ASSERT(mdev->asender.task == NULL);
+
+ h = drbd_do_handshake(mdev);
+ if (h <= 0)
+ return h;
+
+ if (mdev->cram_hmac_tfm) {
+ /* drbd_request_state(mdev, NS(conn, WFAuth)); */
+ if (!drbd_do_auth(mdev)) {
+ dev_err(DEV, "Authentication of peer failed\n");
+ return -1;
+ }
+ }
+
+ if (drbd_request_state(mdev, NS(conn, C_WF_REPORT_PARAMS)) < SS_SUCCESS)
+ return 0;
+
+ sock->sk->sk_sndtimeo = mdev->net_conf->timeout*HZ/10;
+ sock->sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT;
+
+ atomic_set(&mdev->packet_seq, 0);
+ mdev->peer_seq = 0;
+
+ drbd_thread_start(&mdev->asender);
+
+ drbd_send_protocol(mdev);
+ drbd_send_sync_param(mdev, &mdev->sync_conf);
+ drbd_send_sizes(mdev, 0);
+ drbd_send_uuids(mdev);
+ drbd_send_state(mdev);
+ clear_bit(USE_DEGR_WFC_T, &mdev->flags);
+ clear_bit(RESIZE_PENDING, &mdev->flags);
+
+ return 1;
+
+out_release_sockets:
+ if (sock)
+ sock_release(sock);
+ if (msock)
+ sock_release(msock);
+ return -1;
+}
+
+static int drbd_recv_header(struct drbd_conf *mdev, struct p_header *h)
+{
+ int r;
+
+ r = drbd_recv(mdev, h, sizeof(*h));
+
+ if (unlikely(r != sizeof(*h))) {
+ dev_err(DEV, "short read expecting header on sock: r=%d\n", r);
+ return FALSE;
+ }
+ h->command = be16_to_cpu(h->command);
+ h->length = be16_to_cpu(h->length);
+ if (unlikely(h->magic != BE_DRBD_MAGIC)) {
+ dev_err(DEV, "magic?? on data m: 0x%lx c: %d l: %d\n",
+ (long)be32_to_cpu(h->magic),
+ h->command, h->length);
+ return FALSE;
+ }
+ mdev->last_received = jiffies;
+
+ return TRUE;
+}
+
+static enum finish_epoch drbd_flush_after_epoch(struct drbd_conf *mdev, struct drbd_epoch *epoch)
+{
+ int rv;
+
+ if (mdev->write_ordering >= WO_bdev_flush && get_ldev(mdev)) {
+ rv = blkdev_issue_flush(mdev->ldev->backing_bdev, NULL);
+ if (rv) {
+ dev_err(DEV, "local disk flush failed with status %d\n", rv);
+ /* would rather check on EOPNOTSUPP, but that is not reliable.
+ * don't try again for ANY return value != 0
+ * if (rv == -EOPNOTSUPP) */
+ drbd_bump_write_ordering(mdev, WO_drain_io);
+ }
+ put_ldev(mdev);
+ }
+
+ return drbd_may_finish_epoch(mdev, epoch, EV_BARRIER_DONE);
+}
+
+static int w_flush(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct flush_work *fw = (struct flush_work *)w;
+ struct drbd_epoch *epoch = fw->epoch;
+
+ kfree(w);
+
+ if (!test_and_set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags))
+ drbd_flush_after_epoch(mdev, epoch);
+
+ drbd_may_finish_epoch(mdev, epoch, EV_PUT |
+ (mdev->state.conn < C_CONNECTED ? EV_CLEANUP : 0));
+
+ return 1;
+}
+
+/**
+ * drbd_may_finish_epoch() - Applies an epoch_event to the epoch's state, and possibly finishes it.
+ * @mdev: DRBD device.
+ * @epoch: Epoch object.
+ * @ev: Epoch event.
+ */
+static enum finish_epoch drbd_may_finish_epoch(struct drbd_conf *mdev,
+ struct drbd_epoch *epoch,
+ enum epoch_event ev)
+{
+ int finish, epoch_size;
+ struct drbd_epoch *next_epoch;
+ int schedule_flush = 0;
+ enum finish_epoch rv = FE_STILL_LIVE;
+
+ spin_lock(&mdev->epoch_lock);
+ do {
+ next_epoch = NULL;
+ finish = 0;
+
+ epoch_size = atomic_read(&epoch->epoch_size);
+
+ switch (ev & ~EV_CLEANUP) {
+ case EV_PUT:
+ atomic_dec(&epoch->active);
+ break;
+ case EV_GOT_BARRIER_NR:
+ set_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags);
+
+ /* Special case: If we just switched from WO_bio_barrier to
+ WO_bdev_flush we should not finish the current epoch */
+ if (test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags) && epoch_size == 1 &&
+ mdev->write_ordering != WO_bio_barrier &&
+ epoch == mdev->current_epoch)
+ clear_bit(DE_CONTAINS_A_BARRIER, &epoch->flags);
+ break;
+ case EV_BARRIER_DONE:
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags);
+ break;
+ case EV_BECAME_LAST:
+ /* nothing to do */
+ break;
+ }
+
+ trace_drbd_epoch(mdev, epoch, ev);
+
+ if (epoch_size != 0 &&
+ atomic_read(&epoch->active) == 0 &&
+ test_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags) &&
+ epoch->list.prev == &mdev->current_epoch->list &&
+ !test_bit(DE_IS_FINISHING, &epoch->flags)) {
+ /* Nearly all conditions are met to finish that epoch... */
+ if (test_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags) ||
+ mdev->write_ordering == WO_none ||
+ (epoch_size == 1 && test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags)) ||
+ ev & EV_CLEANUP) {
+ finish = 1;
+ set_bit(DE_IS_FINISHING, &epoch->flags);
+ } else if (!test_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags) &&
+ mdev->write_ordering == WO_bio_barrier) {
+ atomic_inc(&epoch->active);
+ schedule_flush = 1;
+ }
+ }
+ if (finish) {
+ if (!(ev & EV_CLEANUP)) {
+ spin_unlock(&mdev->epoch_lock);
+ drbd_send_b_ack(mdev, epoch->barrier_nr, epoch_size);
+ spin_lock(&mdev->epoch_lock);
+ }
+ dec_unacked(mdev);
+
+ if (mdev->current_epoch != epoch) {
+ next_epoch = list_entry(epoch->list.next, struct drbd_epoch, list);
+ list_del(&epoch->list);
+ ev = EV_BECAME_LAST | (ev & EV_CLEANUP);
+ mdev->epochs--;
+ trace_drbd_epoch(mdev, epoch, EV_TRACE_FREE);
+ kfree(epoch);
+
+ if (rv == FE_STILL_LIVE)
+ rv = FE_DESTROYED;
+ } else {
+ epoch->flags = 0;
+ atomic_set(&epoch->epoch_size, 0);
+ /* atomic_set(&epoch->active, 0); is already zero */
+ if (rv == FE_STILL_LIVE)
+ rv = FE_RECYCLED;
+ }
+ }
+
+ if (!next_epoch)
+ break;
+
+ epoch = next_epoch;
+ } while (1);
+
+ spin_unlock(&mdev->epoch_lock);
+
+ if (schedule_flush) {
+ struct flush_work *fw;
+ fw = kmalloc(sizeof(*fw), GFP_ATOMIC);
+ if (fw) {
+ trace_drbd_epoch(mdev, epoch, EV_TRACE_FLUSH);
+ fw->w.cb = w_flush;
+ fw->epoch = epoch;
+ drbd_queue_work(&mdev->data.work, &fw->w);
+ } else {
+ dev_warn(DEV, "Could not kmalloc a flush_work obj\n");
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags);
+ /* That is not a recursion, only one level */
+ drbd_may_finish_epoch(mdev, epoch, EV_BARRIER_DONE);
+ drbd_may_finish_epoch(mdev, epoch, EV_PUT);
+ }
+ }
+
+ return rv;
+}
+
+/**
+ * drbd_bump_write_ordering() - Fall back to another write ordering method
+ * @mdev: DRBD device.
+ * @wo: Write ordering method to try.
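+ *
+ * The effective method is the minimum of the current method and @wo,
+ * further degraded by the disk config flags, as the chain below shows:
+ * barrier falls back to flush if no_disk_barrier is set, flush to
+ * drain if no_disk_flush is set, and drain to none if no_disk_drain
+ * is set.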
+ */ +void drbd_bump_write_ordering(struct drbd_conf *mdev, enum write_ordering_e wo) __must_hold(local) +{ + enum write_ordering_e pwo; + static char *write_ordering_str[] = { + [WO_none] = "none", + [WO_drain_io] = "drain", + [WO_bdev_flush] = "flush", + [WO_bio_barrier] = "barrier", + }; + + pwo = mdev->write_ordering; + wo = min(pwo, wo); + if (wo == WO_bio_barrier && mdev->ldev->dc.no_disk_barrier) + wo = WO_bdev_flush; + if (wo == WO_bdev_flush && mdev->ldev->dc.no_disk_flush) + wo = WO_drain_io; + if (wo == WO_drain_io && mdev->ldev->dc.no_disk_drain) + wo = WO_none; + mdev->write_ordering = wo; + if (pwo != mdev->write_ordering || wo == WO_bio_barrier) + dev_info(DEV, "Method to ensure write ordering: %s\n", write_ordering_str[mdev->write_ordering]); +} + +/** + * w_e_reissue() - Worker callback; Resubmit a bio, without BIO_RW_BARRIER set + * @mdev: DRBD device. + * @w: work object. + * @cancel: The connection will be closed anyways (unused in this callback) + */ +int w_e_reissue(struct drbd_conf *mdev, struct drbd_work *w, int cancel) __releases(local) +{ + struct drbd_epoch_entry *e = (struct drbd_epoch_entry *)w; + struct bio *bio = e->private_bio; + + /* We leave DE_CONTAINS_A_BARRIER and EE_IS_BARRIER in place, + (and DE_BARRIER_IN_NEXT_EPOCH_ISSUED in the previous Epoch) + so that we can finish that epoch in drbd_may_finish_epoch(). + That is necessary if we already have a long chain of Epochs, before + we realize that BIO_RW_BARRIER is actually not supported */ + + /* As long as the -ENOTSUPP on the barrier is reported immediately + that will never trigger. If it is reported late, we will just + print that warning and continue correctly for all future requests + with WO_bdev_flush */ + if (previous_epoch(mdev, e->epoch)) + dev_warn(DEV, "Write ordering was not enforced (one time event)\n"); + + /* prepare bio for re-submit, + * re-init volatile members */ + /* we still have a local reference, + * get_ldev was done in receive_Data. */ + bio->bi_bdev = mdev->ldev->backing_bdev; + bio->bi_sector = e->sector; + bio->bi_size = e->size; + bio->bi_idx = 0; + + bio->bi_flags &= ~(BIO_POOL_MASK - 1); + bio->bi_flags |= 1 << BIO_UPTODATE; + + /* don't know whether this is necessary: */ + bio->bi_phys_segments = 0; + bio->bi_next = NULL; + + /* these should be unchanged: */ + /* bio->bi_end_io = drbd_endio_write_sec; */ + /* bio->bi_vcnt = whatever; */ + + e->w.cb = e_end_block; + + /* This is no longer a barrier request. */ + bio->bi_rw &= ~(1UL << BIO_RW_BARRIER); + + drbd_generic_make_request(mdev, DRBD_FAULT_DT_WR, bio); + + return 1; +} + +static int receive_Barrier(struct drbd_conf *mdev, struct p_header *h) +{ + int rv, issue_flush; + struct p_barrier *p = (struct p_barrier *)h; + struct drbd_epoch *epoch; + + ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE; + + rv = drbd_recv(mdev, h->payload, h->length); + ERR_IF(rv != h->length) return FALSE; + + inc_unacked(mdev); + + if (mdev->net_conf->wire_protocol != DRBD_PROT_C) + drbd_kick_lo(mdev); + + mdev->current_epoch->barrier_nr = p->barrier; + rv = drbd_may_finish_epoch(mdev, mdev->current_epoch, EV_GOT_BARRIER_NR); + + /* P_BARRIER_ACK may imply that the corresponding extent is dropped from + * the activity log, which means it would not be resynced in case the + * R_PRIMARY crashes now. + * Therefore we must send the barrier_ack after the barrier request was + * completed. 
*/
+ switch (mdev->write_ordering) {
+ case WO_bio_barrier:
+ case WO_none:
+ if (rv == FE_RECYCLED)
+ return TRUE;
+ break;
+
+ case WO_bdev_flush:
+ case WO_drain_io:
+ D_ASSERT(rv == FE_STILL_LIVE);
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &mdev->current_epoch->flags);
+ drbd_wait_ee_list_empty(mdev, &mdev->active_ee);
+ rv = drbd_flush_after_epoch(mdev, mdev->current_epoch);
+ if (rv == FE_RECYCLED)
+ return TRUE;
+
+ /* The asender will send all the ACKs and barrier ACKs out, since
+ all EEs moved from the active_ee to the done_ee. We need to
+ provide a new epoch object for the EEs that come in soon */
+ break;
+ }
+
+ /* receiver context, in the writeout path of the other node.
+ * avoid potential distributed deadlock */
+ epoch = kmalloc(sizeof(struct drbd_epoch), GFP_NOIO);
+ if (!epoch) {
+ dev_warn(DEV, "Allocation of an epoch failed, slowing down\n");
+ issue_flush = !test_and_set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &mdev->current_epoch->flags);
+ drbd_wait_ee_list_empty(mdev, &mdev->active_ee);
+ if (issue_flush) {
+ rv = drbd_flush_after_epoch(mdev, mdev->current_epoch);
+ if (rv == FE_RECYCLED)
+ return TRUE;
+ }
+
+ drbd_wait_ee_list_empty(mdev, &mdev->done_ee);
+
+ return TRUE;
+ }
+
+ epoch->flags = 0;
+ atomic_set(&epoch->epoch_size, 0);
+ atomic_set(&epoch->active, 0);
+
+ spin_lock(&mdev->epoch_lock);
+ if (atomic_read(&mdev->current_epoch->epoch_size)) {
+ list_add(&epoch->list, &mdev->current_epoch->list);
+ mdev->current_epoch = epoch;
+ mdev->epochs++;
+ trace_drbd_epoch(mdev, epoch, EV_TRACE_ALLOC);
+ } else {
+ /* The current_epoch got recycled while we allocated this one... */
+ kfree(epoch);
+ }
+ spin_unlock(&mdev->epoch_lock);
+
+ return TRUE;
+}
+
+/* used from receive_RSDataReply (recv_resync_read)
+ * and from receive_Data */
+static struct drbd_epoch_entry *
+read_in_block(struct drbd_conf *mdev, u64 id, sector_t sector, int data_size) __must_hold(local)
+{
+ struct drbd_epoch_entry *e;
+ struct bio_vec *bvec;
+ struct page *page;
+ struct bio *bio;
+ int dgs, ds, i, rr;
+ void *dig_in = mdev->int_dig_in;
+ void *dig_vv = mdev->int_dig_vv;
+
+ dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_r_tfm) ?
+ crypto_hash_digestsize(mdev->integrity_r_tfm) : 0;
+
+ if (dgs) {
+ rr = drbd_recv(mdev, dig_in, dgs);
+ if (rr != dgs) {
+ dev_warn(DEV, "short read receiving data digest: read %d expected %d\n",
+ rr, dgs);
+ return NULL;
+ }
+ }
+
+ data_size -= dgs;
+
+ ERR_IF(data_size & 0x1ff) return NULL;
+ ERR_IF(data_size > DRBD_MAX_SEGMENT_SIZE) return NULL;
+
+ /* GFP_NOIO, because we must not cause arbitrary write-out: in a DRBD
+ * "criss-cross" setup, that might cause write-out on some other DRBD,
+ * which in turn might block on the other node at this very place.
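+ * (Hence GFP_NOIO rather than GFP_KERNEL: a GFP_KERNEL allocation
+ * could recurse into memory reclaim and thus into arbitrary I/O.)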
*/ + e = drbd_alloc_ee(mdev, id, sector, data_size, GFP_NOIO); + if (!e) + return NULL; + bio = e->private_bio; + ds = data_size; + bio_for_each_segment(bvec, bio, i) { + page = bvec->bv_page; + rr = drbd_recv(mdev, kmap(page), min_t(int, ds, PAGE_SIZE)); + kunmap(page); + if (rr != min_t(int, ds, PAGE_SIZE)) { + drbd_free_ee(mdev, e); + dev_warn(DEV, "short read receiving data: read %d expected %d\n", + rr, min_t(int, ds, PAGE_SIZE)); + return NULL; + } + ds -= rr; + } + + if (dgs) { + drbd_csum(mdev, mdev->integrity_r_tfm, bio, dig_vv); + if (memcmp(dig_in, dig_vv, dgs)) { + dev_err(DEV, "Digest integrity check FAILED.\n"); + drbd_bcast_ee(mdev, "digest failed", + dgs, dig_in, dig_vv, e); + drbd_free_ee(mdev, e); + return NULL; + } + } + mdev->recv_cnt += data_size>>9; + return e; +} + +/* drbd_drain_block() just takes a data block + * out of the socket input buffer, and discards it. + */ +static int drbd_drain_block(struct drbd_conf *mdev, int data_size) +{ + struct page *page; + int rr, rv = 1; + void *data; + + page = drbd_pp_alloc(mdev, 1); + + data = kmap(page); + while (data_size) { + rr = drbd_recv(mdev, data, min_t(int, data_size, PAGE_SIZE)); + if (rr != min_t(int, data_size, PAGE_SIZE)) { + rv = 0; + dev_warn(DEV, "short read receiving data: read %d expected %d\n", + rr, min_t(int, data_size, PAGE_SIZE)); + break; + } + data_size -= rr; + } + kunmap(page); + drbd_pp_free(mdev, page); + return rv; +} + +static int recv_dless_read(struct drbd_conf *mdev, struct drbd_request *req, + sector_t sector, int data_size) +{ + struct bio_vec *bvec; + struct bio *bio; + int dgs, rr, i, expect; + void *dig_in = mdev->int_dig_in; + void *dig_vv = mdev->int_dig_vv; + + dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_r_tfm) ? + crypto_hash_digestsize(mdev->integrity_r_tfm) : 0; + + if (dgs) { + rr = drbd_recv(mdev, dig_in, dgs); + if (rr != dgs) { + dev_warn(DEV, "short read receiving data reply digest: read %d expected %d\n", + rr, dgs); + return 0; + } + } + + data_size -= dgs; + + /* optimistically update recv_cnt. if receiving fails below, + * we disconnect anyways, and counters will be reset. */ + mdev->recv_cnt += data_size>>9; + + bio = req->master_bio; + D_ASSERT(sector == bio->bi_sector); + + bio_for_each_segment(bvec, bio, i) { + expect = min_t(int, data_size, bvec->bv_len); + rr = drbd_recv(mdev, + kmap(bvec->bv_page)+bvec->bv_offset, + expect); + kunmap(bvec->bv_page); + if (rr != expect) { + dev_warn(DEV, "short read receiving data reply: " + "read %d expected %d\n", + rr, expect); + return 0; + } + data_size -= rr; + } + + if (dgs) { + drbd_csum(mdev, mdev->integrity_r_tfm, bio, dig_vv); + if (memcmp(dig_in, dig_vv, dgs)) { + dev_err(DEV, "Digest integrity check FAILED. 
Broken NICs?\n"); + return 0; + } + } + + D_ASSERT(data_size == 0); + return 1; +} + +/* e_end_resync_block() is called via + * drbd_process_done_ee() by asender only */ +static int e_end_resync_block(struct drbd_conf *mdev, struct drbd_work *w, int unused) +{ + struct drbd_epoch_entry *e = (struct drbd_epoch_entry *)w; + sector_t sector = e->sector; + int ok; + + D_ASSERT(hlist_unhashed(&e->colision)); + + if (likely(drbd_bio_uptodate(e->private_bio))) { + drbd_set_in_sync(mdev, sector, e->size); + ok = drbd_send_ack(mdev, P_RS_WRITE_ACK, e); + } else { + /* Record failure to sync */ + drbd_rs_failed_io(mdev, sector, e->size); + + ok = drbd_send_ack(mdev, P_NEG_ACK, e); + } + dec_unacked(mdev); + + return ok; +} + +static int recv_resync_read(struct drbd_conf *mdev, sector_t sector, int data_size) __releases(local) +{ + struct drbd_epoch_entry *e; + + e = read_in_block(mdev, ID_SYNCER, sector, data_size); + if (!e) { + put_ldev(mdev); + return FALSE; + } + + dec_rs_pending(mdev); + + e->private_bio->bi_end_io = drbd_endio_write_sec; + e->private_bio->bi_rw = WRITE; + e->w.cb = e_end_resync_block; + + inc_unacked(mdev); + /* corresponding dec_unacked() in e_end_resync_block() + * respective _drbd_clear_done_ee */ + + spin_lock_irq(&mdev->req_lock); + list_add(&e->w.list, &mdev->sync_ee); + spin_unlock_irq(&mdev->req_lock); + + trace_drbd_ee(mdev, e, "submitting for (rs)write"); + trace_drbd_bio(mdev, "Sec", e->private_bio, 0, NULL); + drbd_generic_make_request(mdev, DRBD_FAULT_RS_WR, e->private_bio); + /* accounting done in endio */ + + maybe_kick_lo(mdev); + return TRUE; +} + +static int receive_DataReply(struct drbd_conf *mdev, struct p_header *h) +{ + struct drbd_request *req; + sector_t sector; + unsigned int header_size, data_size; + int ok; + struct p_data *p = (struct p_data *)h; + + header_size = sizeof(*p) - sizeof(*h); + data_size = h->length - header_size; + + ERR_IF(data_size == 0) return FALSE; + + if (drbd_recv(mdev, h->payload, header_size) != header_size) + return FALSE; + + sector = be64_to_cpu(p->sector); + + spin_lock_irq(&mdev->req_lock); + req = _ar_id_to_req(mdev, p->block_id, sector); + spin_unlock_irq(&mdev->req_lock); + if (unlikely(!req)) { + dev_err(DEV, "Got a corrupt block_id/sector pair(1).\n"); + return FALSE; + } + + /* hlist_del(&req->colision) is done in _req_may_be_done, to avoid + * special casing it there for the various failure cases. + * still no race with drbd_fail_pending_reads */ + ok = recv_dless_read(mdev, req, sector, data_size); + + if (ok) + req_mod(req, data_received); + /* else: nothing. handled from drbd_disconnect... + * I don't think we may complete this just yet + * in case we are "on-disconnect: freeze" */ + + return ok; +} + +static int receive_RSDataReply(struct drbd_conf *mdev, struct p_header *h) +{ + sector_t sector; + unsigned int header_size, data_size; + int ok; + struct p_data *p = (struct p_data *)h; + + header_size = sizeof(*p) - sizeof(*h); + data_size = h->length - header_size; + + ERR_IF(data_size == 0) return FALSE; + + if (drbd_recv(mdev, h->payload, header_size) != header_size) + return FALSE; + + sector = be64_to_cpu(p->sector); + D_ASSERT(p->block_id == ID_SYNCER); + + if (get_ldev(mdev)) { + /* data is submitted to disk within recv_resync_read. + * corresponding put_ldev done below on error, + * or in drbd_endio_write_sec. 
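+ * (i.e. the get_ldev() reference taken here is handed over to the
+ * bio; on success the endio callback, not this function, drops it.)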
*/ + ok = recv_resync_read(mdev, sector, data_size); + } else { + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "Can not write resync data to local disk.\n"); + + ok = drbd_drain_block(mdev, data_size); + + drbd_send_ack_dp(mdev, P_NEG_ACK, p); + } + + return ok; +} + +/* e_end_block() is called via drbd_process_done_ee(). + * this means this function only runs in the asender thread + */ +static int e_end_block(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_epoch_entry *e = (struct drbd_epoch_entry *)w; + sector_t sector = e->sector; + struct drbd_epoch *epoch; + int ok = 1, pcmd; + + if (e->flags & EE_IS_BARRIER) { + epoch = previous_epoch(mdev, e->epoch); + if (epoch) + drbd_may_finish_epoch(mdev, epoch, EV_BARRIER_DONE + (cancel ? EV_CLEANUP : 0)); + } + + if (mdev->net_conf->wire_protocol == DRBD_PROT_C) { + if (likely(drbd_bio_uptodate(e->private_bio))) { + pcmd = (mdev->state.conn >= C_SYNC_SOURCE && + mdev->state.conn <= C_PAUSED_SYNC_T && + e->flags & EE_MAY_SET_IN_SYNC) ? + P_RS_WRITE_ACK : P_WRITE_ACK; + ok &= drbd_send_ack(mdev, pcmd, e); + if (pcmd == P_RS_WRITE_ACK) + drbd_set_in_sync(mdev, sector, e->size); + } else { + ok = drbd_send_ack(mdev, P_NEG_ACK, e); + /* we expect it to be marked out of sync anyways... + * maybe assert this? */ + } + dec_unacked(mdev); + } + /* we delete from the conflict detection hash _after_ we sent out the + * P_WRITE_ACK / P_NEG_ACK, to get the sequence number right. */ + if (mdev->net_conf->two_primaries) { + spin_lock_irq(&mdev->req_lock); + D_ASSERT(!hlist_unhashed(&e->colision)); + hlist_del_init(&e->colision); + spin_unlock_irq(&mdev->req_lock); + } else { + D_ASSERT(hlist_unhashed(&e->colision)); + } + + drbd_may_finish_epoch(mdev, e->epoch, EV_PUT + (cancel ? EV_CLEANUP : 0)); + + return ok; +} + +static int e_send_discard_ack(struct drbd_conf *mdev, struct drbd_work *w, int unused) +{ + struct drbd_epoch_entry *e = (struct drbd_epoch_entry *)w; + int ok = 1; + + D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C); + ok = drbd_send_ack(mdev, P_DISCARD_ACK, e); + + spin_lock_irq(&mdev->req_lock); + D_ASSERT(!hlist_unhashed(&e->colision)); + hlist_del_init(&e->colision); + spin_unlock_irq(&mdev->req_lock); + + dec_unacked(mdev); + + return ok; +} + +/* Called from receive_Data. + * Synchronize packets on sock with packets on msock. + * + * This is here so even when a P_DATA packet traveling via sock overtook an Ack + * packet traveling on msock, they are still processed in the order they have + * been sent. + * + * Note: we don't care for Ack packets overtaking P_DATA packets. + * + * In case packet_seq is larger than mdev->peer_seq number, there are + * outstanding packets on the msock. We wait for them to arrive. + * In case we are the logically next packet, we update mdev->peer_seq + * ourselves. Correctly handles 32bit wrap around. + * + * Assume we have a 10 GBit connection, that is about 1<<30 byte per second, + * about 1<<21 sectors per second. So "worst" case, we have 1<<3 == 8 seconds + * for the 24bit wrap (historical atomic_t guarantee on some archs), and we have + * 1<<9 == 512 seconds aka ages for the 32bit wrap around... + * + * returns 0 if we may process the packet, + * -ERESTARTSYS if we were interrupted (by disconnect signal). 
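+ *
+ * Illustrative sketch only (assuming seq_le() is implemented as a
+ * signed 32bit difference, one common way to get the wrap-around
+ * behaviour described above):
+ * seq_le(a, b) := (s32)(a - b) <= 0
+ * e.g. seq_le(0xfffffffe, 0x1) holds, since 0xfffffffe - 0x1
+ * interpreted as s32 is -3.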
*/
+static int drbd_wait_peer_seq(struct drbd_conf *mdev, const u32 packet_seq)
+{
+ DEFINE_WAIT(wait);
+ unsigned int p_seq;
+ long timeout;
+ int ret = 0;
+ spin_lock(&mdev->peer_seq_lock);
+ for (;;) {
+ prepare_to_wait(&mdev->seq_wait, &wait, TASK_INTERRUPTIBLE);
+ if (seq_le(packet_seq, mdev->peer_seq+1))
+ break;
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ p_seq = mdev->peer_seq;
+ spin_unlock(&mdev->peer_seq_lock);
+ timeout = schedule_timeout(30*HZ);
+ spin_lock(&mdev->peer_seq_lock);
+ if (timeout == 0 && p_seq == mdev->peer_seq) {
+ ret = -ETIMEDOUT;
+ dev_err(DEV, "ASSERT FAILED waited 30 seconds for sequence update, forcing reconnect\n");
+ break;
+ }
+ }
+ finish_wait(&mdev->seq_wait, &wait);
+ if (mdev->peer_seq+1 == packet_seq)
+ mdev->peer_seq++;
+ spin_unlock(&mdev->peer_seq_lock);
+ return ret;
+}
+
+/* mirrored write */
+static int receive_Data(struct drbd_conf *mdev, struct p_header *h)
+{
+ sector_t sector;
+ struct drbd_epoch_entry *e;
+ struct p_data *p = (struct p_data *)h;
+ int header_size, data_size;
+ int rw = WRITE;
+ u32 dp_flags;
+
+ header_size = sizeof(*p) - sizeof(*h);
+ data_size = h->length - header_size;
+
+ ERR_IF(data_size == 0) return FALSE;
+
+ if (drbd_recv(mdev, h->payload, header_size) != header_size)
+ return FALSE;
+
+ if (!get_ldev(mdev)) {
+ if (__ratelimit(&drbd_ratelimit_state))
+ dev_err(DEV, "Can not write mirrored data block "
+ "to local disk.\n");
+ spin_lock(&mdev->peer_seq_lock);
+ if (mdev->peer_seq+1 == be32_to_cpu(p->seq_num))
+ mdev->peer_seq++;
+ spin_unlock(&mdev->peer_seq_lock);
+
+ drbd_send_ack_dp(mdev, P_NEG_ACK, p);
+ atomic_inc(&mdev->current_epoch->epoch_size);
+ return drbd_drain_block(mdev, data_size);
+ }
+
+ /* get_ldev(mdev) successful.
+ * Corresponding put_ldev done either below (on various errors),
+ * or in drbd_endio_write_sec, if we successfully submit the data at
+ * the end of this function. */
+
+ sector = be64_to_cpu(p->sector);
+ e = read_in_block(mdev, p->block_id, sector, data_size);
+ if (!e) {
+ put_ldev(mdev);
+ return FALSE;
+ }
+
+ e->private_bio->bi_end_io = drbd_endio_write_sec;
+ e->w.cb = e_end_block;
+
+ spin_lock(&mdev->epoch_lock);
+ e->epoch = mdev->current_epoch;
+ atomic_inc(&e->epoch->epoch_size);
+ atomic_inc(&e->epoch->active);
+
+ if (mdev->write_ordering == WO_bio_barrier && atomic_read(&e->epoch->epoch_size) == 1) {
+ struct drbd_epoch *epoch;
+ /* Issue a barrier if we start a new epoch, and the previous epoch
+ was not an epoch containing a single request which already was
+ a Barrier. */
+ epoch = list_entry(e->epoch->list.prev, struct drbd_epoch, list);
+ if (epoch == e->epoch) {
+ set_bit(DE_CONTAINS_A_BARRIER, &e->epoch->flags);
+ trace_drbd_epoch(mdev, e->epoch, EV_TRACE_ADD_BARRIER);
+ rw |= (1 << BIO_RW_BARRIER);
+ e->flags |= EE_IS_BARRIER;
+ } else {
+ if (atomic_read(&epoch->epoch_size) > 1 ||
+ !test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags)) {
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags);
+ trace_drbd_epoch(mdev, epoch, EV_TRACE_SETTING_BI);
+ set_bit(DE_CONTAINS_A_BARRIER, &e->epoch->flags);
+ trace_drbd_epoch(mdev, e->epoch, EV_TRACE_ADD_BARRIER);
+ rw |= (1 << BIO_RW_BARRIER);
+ e->flags |= EE_IS_BARRIER;
+ }
+ }
+ }
+ spin_unlock(&mdev->epoch_lock);
+
+ dp_flags = be32_to_cpu(p->dp_flags);
+ if (dp_flags & DP_HARDBARRIER) {
+ dev_err(DEV, "ASSERT FAILED would have submitted barrier request\n");
+ /* rw |= (1 << BIO_RW_BARRIER); */
+ }
+ if (dp_flags & DP_RW_SYNC)
+ rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG);
+ if (dp_flags & DP_MAY_SET_IN_SYNC)
+ e->flags |= EE_MAY_SET_IN_SYNC;
+
+ /* I'm the receiver, I do hold a net_cnt reference.
*/ + if (!mdev->net_conf->two_primaries) { + spin_lock_irq(&mdev->req_lock); + } else { + /* don't get the req_lock yet, + * we may sleep in drbd_wait_peer_seq */ + const int size = e->size; + const int discard = test_bit(DISCARD_CONCURRENT, &mdev->flags); + DEFINE_WAIT(wait); + struct drbd_request *i; + struct hlist_node *n; + struct hlist_head *slot; + int first; + + D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C); + BUG_ON(mdev->ee_hash == NULL); + BUG_ON(mdev->tl_hash == NULL); + + /* conflict detection and handling: + * 1. wait on the sequence number, + * in case this data packet overtook ACK packets. + * 2. check our hash tables for conflicting requests. + * we only need to walk the tl_hash, since an ee can not + * have a conflict with an other ee: on the submitting + * node, the corresponding req had already been conflicting, + * and a conflicting req is never sent. + * + * Note: for two_primaries, we are protocol C, + * so there cannot be any request that is DONE + * but still on the transfer log. + * + * unconditionally add to the ee_hash. + * + * if no conflicting request is found: + * submit. + * + * if any conflicting request is found + * that has not yet been acked, + * AND I have the "discard concurrent writes" flag: + * queue (via done_ee) the P_DISCARD_ACK; OUT. + * + * if any conflicting request is found: + * block the receiver, waiting on misc_wait + * until no more conflicting requests are there, + * or we get interrupted (disconnect). + * + * we do not just write after local io completion of those + * requests, but only after req is done completely, i.e. + * we wait for the P_DISCARD_ACK to arrive! + * + * then proceed normally, i.e. submit. + */ + if (drbd_wait_peer_seq(mdev, be32_to_cpu(p->seq_num))) + goto out_interrupted; + + spin_lock_irq(&mdev->req_lock); + + hlist_add_head(&e->colision, ee_hash_slot(mdev, sector)); + +#define OVERLAPS overlaps(i->sector, i->size, sector, size) + slot = tl_hash_slot(mdev, sector); + first = 1; + for (;;) { + int have_unacked = 0; + int have_conflict = 0; + prepare_to_wait(&mdev->misc_wait, &wait, + TASK_INTERRUPTIBLE); + hlist_for_each_entry(i, n, slot, colision) { + if (OVERLAPS) { + /* only ALERT on first iteration, + * we may be woken up early... */ + if (first) + dev_alert(DEV, "%s[%u] Concurrent local write detected!" + " new: %llus +%u; pending: %llus +%u\n", + current->comm, current->pid, + (unsigned long long)sector, size, + (unsigned long long)i->sector, i->size); + if (i->rq_state & RQ_NET_PENDING) + ++have_unacked; + ++have_conflict; + } + } +#undef OVERLAPS + if (!have_conflict) + break; + + /* Discard Ack only for the _first_ iteration */ + if (first && discard && have_unacked) { + dev_alert(DEV, "Concurrent write! [DISCARD BY FLAG] sec=%llus\n", + (unsigned long long)sector); + inc_unacked(mdev); + e->w.cb = e_send_discard_ack; + list_add_tail(&e->w.list, &mdev->done_ee); + + spin_unlock_irq(&mdev->req_lock); + + /* we could probably send that P_DISCARD_ACK ourselves, + * but I don't like the receiver using the msock */ + + put_ldev(mdev); + wake_asender(mdev); + finish_wait(&mdev->misc_wait, &wait); + return TRUE; + } + + if (signal_pending(current)) { + hlist_del_init(&e->colision); + + spin_unlock_irq(&mdev->req_lock); + + finish_wait(&mdev->misc_wait, &wait); + goto out_interrupted; + } + + spin_unlock_irq(&mdev->req_lock); + if (first) { + first = 0; + dev_alert(DEV, "Concurrent write! 
[W AFTERWARDS] " + "sec=%llus\n", (unsigned long long)sector); + } else if (discard) { + /* we had none on the first iteration. + * there must be none now. */ + D_ASSERT(have_unacked == 0); + } + schedule(); + spin_lock_irq(&mdev->req_lock); + } + finish_wait(&mdev->misc_wait, &wait); + } + + list_add(&e->w.list, &mdev->active_ee); + spin_unlock_irq(&mdev->req_lock); + + switch (mdev->net_conf->wire_protocol) { + case DRBD_PROT_C: + inc_unacked(mdev); + /* corresponding dec_unacked() in e_end_block() + * respective _drbd_clear_done_ee */ + break; + case DRBD_PROT_B: + /* I really don't like it that the receiver thread + * sends on the msock, but anyways */ + drbd_send_ack(mdev, P_RECV_ACK, e); + break; + case DRBD_PROT_A: + /* nothing to do */ + break; + } + + if (mdev->state.pdsk == D_DISKLESS) { + /* In case we have the only disk of the cluster, */ + drbd_set_out_of_sync(mdev, e->sector, e->size); + e->flags |= EE_CALL_AL_COMPLETE_IO; + drbd_al_begin_io(mdev, e->sector); + } + + e->private_bio->bi_rw = rw; + trace_drbd_ee(mdev, e, "submitting for (data)write"); + trace_drbd_bio(mdev, "Sec", e->private_bio, 0, NULL); + drbd_generic_make_request(mdev, DRBD_FAULT_DT_WR, e->private_bio); + /* accounting done in endio */ + + maybe_kick_lo(mdev); + return TRUE; + +out_interrupted: + /* yes, the epoch_size now is imbalanced. + * but we drop the connection anyways, so we don't have a chance to + * receive a barrier... atomic_inc(&mdev->epoch_size); */ + put_ldev(mdev); + drbd_free_ee(mdev, e); + return FALSE; +} + +static int receive_DataRequest(struct drbd_conf *mdev, struct p_header *h) +{ + sector_t sector; + const sector_t capacity = drbd_get_capacity(mdev->this_bdev); + struct drbd_epoch_entry *e; + struct digest_info *di = NULL; + int size, digest_size; + unsigned int fault_type; + struct p_block_req *p = + (struct p_block_req *)h; + const int brps = sizeof(*p)-sizeof(*h); + + if (drbd_recv(mdev, h->payload, brps) != brps) + return FALSE; + + sector = be64_to_cpu(p->sector); + size = be32_to_cpu(p->blksize); + + if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) { + dev_err(DEV, "%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__, + (unsigned long long)sector, size); + return FALSE; + } + if (sector + (size>>9) > capacity) { + dev_err(DEV, "%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__, + (unsigned long long)sector, size); + return FALSE; + } + + if (!get_ldev_if_state(mdev, D_UP_TO_DATE)) { + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "Can not satisfy peer's read request, " + "no local data.\n"); + drbd_send_ack_rp(mdev, h->command == P_DATA_REQUEST ? P_NEG_DREPLY : + P_NEG_RS_DREPLY , p); + return TRUE; + } + + /* GFP_NOIO, because we must not cause arbitrary write-out: in a DRBD + * "criss-cross" setup, that might cause write-out on some other DRBD, + * which in turn might block on the other node at this very place. */ + e = drbd_alloc_ee(mdev, p->block_id, sector, size, GFP_NOIO); + if (!e) { + put_ldev(mdev); + return FALSE; + } + + e->private_bio->bi_rw = READ; + e->private_bio->bi_end_io = drbd_endio_read_sec; + + switch (h->command) { + case P_DATA_REQUEST: + e->w.cb = w_e_end_data_req; + fault_type = DRBD_FAULT_DT_RD; + break; + case P_RS_DATA_REQUEST: + e->w.cb = w_e_end_rsdata_req; + fault_type = DRBD_FAULT_RS_RD; + /* Eventually this should become asynchronously. Currently it + * blocks the whole receiver just to delay the reading of a + * resync data block. + * the drbd_work_queue mechanism is made for this... 
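+ * A possible shape, mirroring the w_flush() pattern further up
+ * (sketch only; w_e_read would be a new, hypothetical callback):
+ * e->w.cb = w_e_read;
+ * drbd_queue_work(&mdev->data.work, &e->w);
+ * so the receiver would not block in drbd_rs_begin_io().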
+ */ + if (!drbd_rs_begin_io(mdev, sector)) { + /* we have been interrupted, + * probably connection lost! */ + D_ASSERT(signal_pending(current)); + goto out_free_e; + } + break; + + case P_OV_REPLY: + case P_CSUM_RS_REQUEST: + fault_type = DRBD_FAULT_RS_RD; + digest_size = h->length - brps ; + di = kmalloc(sizeof(*di) + digest_size, GFP_NOIO); + if (!di) + goto out_free_e; + + di->digest_size = digest_size; + di->digest = (((char *)di)+sizeof(struct digest_info)); + + if (drbd_recv(mdev, di->digest, digest_size) != digest_size) + goto out_free_e; + + e->block_id = (u64)(unsigned long)di; + if (h->command == P_CSUM_RS_REQUEST) { + D_ASSERT(mdev->agreed_pro_version >= 89); + e->w.cb = w_e_end_csum_rs_req; + } else if (h->command == P_OV_REPLY) { + e->w.cb = w_e_end_ov_reply; + dec_rs_pending(mdev); + break; + } + + if (!drbd_rs_begin_io(mdev, sector)) { + /* we have been interrupted, probably connection lost! */ + D_ASSERT(signal_pending(current)); + goto out_free_e; + } + break; + + case P_OV_REQUEST: + if (mdev->state.conn >= C_CONNECTED && + mdev->state.conn != C_VERIFY_T) + dev_warn(DEV, "ASSERT FAILED: got P_OV_REQUEST while being %s\n", + drbd_conn_str(mdev->state.conn)); + if (mdev->ov_start_sector == ~(sector_t)0 && + mdev->agreed_pro_version >= 90) { + mdev->ov_start_sector = sector; + mdev->ov_position = sector; + mdev->ov_left = mdev->rs_total - BM_SECT_TO_BIT(sector); + dev_info(DEV, "Online Verify start sector: %llu\n", + (unsigned long long)sector); + } + e->w.cb = w_e_end_ov_req; + fault_type = DRBD_FAULT_RS_RD; + /* Eventually this should become asynchronous. Currently it + * blocks the whole receiver just to delay the reading of a + * resync data block. + * the drbd_work_queue mechanism is made for this... + */ + if (!drbd_rs_begin_io(mdev, sector)) { + /* we have been interrupted, + * probably connection lost! */ + D_ASSERT(signal_pending(current)); + goto out_free_e; + } + break; + + + default: + dev_err(DEV, "unexpected command (%s) in receive_DataRequest\n", + cmdname(h->command)); + fault_type = DRBD_FAULT_MAX; + } + + spin_lock_irq(&mdev->req_lock); + list_add(&e->w.list, &mdev->read_ee); + spin_unlock_irq(&mdev->req_lock); + + inc_unacked(mdev); + + trace_drbd_ee(mdev, e, "submitting for read"); + trace_drbd_bio(mdev, "Sec", e->private_bio, 0, NULL); + drbd_generic_make_request(mdev, fault_type, e->private_bio); + maybe_kick_lo(mdev); + + return TRUE; + +out_free_e: + kfree(di); + put_ldev(mdev); + drbd_free_ee(mdev, e); + return FALSE; +} + +static int drbd_asb_recover_0p(struct drbd_conf *mdev) __must_hold(local) +{ + int self, peer, rv = -100; + unsigned long ch_self, ch_peer; + + self = mdev->ldev->md.uuid[UI_BITMAP] & 1; + peer = mdev->p_uuid[UI_BITMAP] & 1; + + ch_peer = mdev->p_uuid[UI_SIZE]; + ch_self = mdev->comm_bm_set; + + switch (mdev->net_conf->after_sb_0p) { + case ASB_CONSENSUS: + case ASB_DISCARD_SECONDARY: + case ASB_CALL_HELPER: + dev_err(DEV, "Configuration error.\n"); + break; + case ASB_DISCONNECT: + break; + case ASB_DISCARD_YOUNGER_PRI: + if (self == 0 && peer == 1) { + rv = -1; + break; + } + if (self == 1 && peer == 0) { + rv = 1; + break; + } + /* Else fall through to one of the other strategies... */ + case ASB_DISCARD_OLDER_PRI: + if (self == 0 && peer == 1) { + rv = 1; + break; + } + if (self == 1 && peer == 0) { + rv = -1; + break; + } + /* Else fall through to one of the other strategies... 
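+ (Return convention of the recover functions: 1 means sync from
+ this node, i.e. the peer's changes are discarded; -1 means sync
+ from the peer, i.e. our changes are discarded; rv stays -100 if
+ no automatic decision is reached.)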
*/
+ dev_warn(DEV, "Discard younger/older primary did not find a decision\n"
+ "Using discard-least-changes instead\n");
+ case ASB_DISCARD_ZERO_CHG:
+ if (ch_peer == 0 && ch_self == 0) {
+ rv = test_bit(DISCARD_CONCURRENT, &mdev->flags)
+ ? -1 : 1;
+ break;
+ } else {
+ if (ch_peer == 0) { rv = 1; break; }
+ if (ch_self == 0) { rv = -1; break; }
+ }
+ if (mdev->net_conf->after_sb_0p == ASB_DISCARD_ZERO_CHG)
+ break;
+ case ASB_DISCARD_LEAST_CHG:
+ if (ch_self < ch_peer)
+ rv = -1;
+ else if (ch_self > ch_peer)
+ rv = 1;
+ else /* ( ch_self == ch_peer ) */
+ /* Well, then use something else. */
+ rv = test_bit(DISCARD_CONCURRENT, &mdev->flags)
+ ? -1 : 1;
+ break;
+ case ASB_DISCARD_LOCAL:
+ rv = -1;
+ break;
+ case ASB_DISCARD_REMOTE:
+ rv = 1;
+ }
+
+ return rv;
+}
+
+static int drbd_asb_recover_1p(struct drbd_conf *mdev) __must_hold(local)
+{
+ int self, peer, hg, rv = -100;
+
+ self = mdev->ldev->md.uuid[UI_BITMAP] & 1;
+ peer = mdev->p_uuid[UI_BITMAP] & 1;
+
+ switch (mdev->net_conf->after_sb_1p) {
+ case ASB_DISCARD_YOUNGER_PRI:
+ case ASB_DISCARD_OLDER_PRI:
+ case ASB_DISCARD_LEAST_CHG:
+ case ASB_DISCARD_LOCAL:
+ case ASB_DISCARD_REMOTE:
+ dev_err(DEV, "Configuration error.\n");
+ break;
+ case ASB_DISCONNECT:
+ break;
+ case ASB_CONSENSUS:
+ hg = drbd_asb_recover_0p(mdev);
+ if (hg == -1 && mdev->state.role == R_SECONDARY)
+ rv = hg;
+ if (hg == 1 && mdev->state.role == R_PRIMARY)
+ rv = hg;
+ break;
+ case ASB_VIOLENTLY:
+ rv = drbd_asb_recover_0p(mdev);
+ break;
+ case ASB_DISCARD_SECONDARY:
+ return mdev->state.role == R_PRIMARY ? 1 : -1;
+ case ASB_CALL_HELPER:
+ hg = drbd_asb_recover_0p(mdev);
+ if (hg == -1 && mdev->state.role == R_PRIMARY) {
+ self = drbd_set_role(mdev, R_SECONDARY, 0);
+ /* drbd_change_state() does not sleep while in SS_IN_TRANSIENT_STATE,
+ * we might be here in C_WF_REPORT_PARAMS which is transient.
+ * we do not need to wait for the after state change work either. */
+ self = drbd_change_state(mdev, CS_VERBOSE, NS(role, R_SECONDARY));
+ if (self != SS_SUCCESS) {
+ drbd_khelper(mdev, "pri-lost-after-sb");
+ } else {
+ dev_warn(DEV, "Successfully gave up primary role.\n");
+ rv = hg;
+ }
+ } else
+ rv = hg;
+ }
+
+ return rv;
+}
+
+static int drbd_asb_recover_2p(struct drbd_conf *mdev) __must_hold(local)
+{
+ int self, peer, hg, rv = -100;
+
+ self = mdev->ldev->md.uuid[UI_BITMAP] & 1;
+ peer = mdev->p_uuid[UI_BITMAP] & 1;
+
+ switch (mdev->net_conf->after_sb_2p) {
+ case ASB_DISCARD_YOUNGER_PRI:
+ case ASB_DISCARD_OLDER_PRI:
+ case ASB_DISCARD_LEAST_CHG:
+ case ASB_DISCARD_LOCAL:
+ case ASB_DISCARD_REMOTE:
+ case ASB_CONSENSUS:
+ case ASB_DISCARD_SECONDARY:
+ dev_err(DEV, "Configuration error.\n");
+ break;
+ case ASB_VIOLENTLY:
+ rv = drbd_asb_recover_0p(mdev);
+ break;
+ case ASB_DISCONNECT:
+ break;
+ case ASB_CALL_HELPER:
+ hg = drbd_asb_recover_0p(mdev);
+ if (hg == -1) {
+ /* drbd_change_state() does not sleep while in SS_IN_TRANSIENT_STATE,
+ * we might be here in C_WF_REPORT_PARAMS which is transient.
+ * we do not need to wait for the after state change work either.
*/ + self = drbd_change_state(mdev, CS_VERBOSE, NS(role, R_SECONDARY)); + if (self != SS_SUCCESS) { + drbd_khelper(mdev, "pri-lost-after-sb"); + } else { + dev_warn(DEV, "Successfully gave up primary role.\n"); + rv = hg; + } + } else + rv = hg; + } + + return rv; +} + +static void drbd_uuid_dump(struct drbd_conf *mdev, char *text, u64 *uuid, + u64 bits, u64 flags) +{ + if (!uuid) { + dev_info(DEV, "%s uuid info vanished while I was looking!\n", text); + return; + } + dev_info(DEV, "%s %016llX:%016llX:%016llX:%016llX bits:%llu flags:%llX\n", + text, + (unsigned long long)uuid[UI_CURRENT], + (unsigned long long)uuid[UI_BITMAP], + (unsigned long long)uuid[UI_HISTORY_START], + (unsigned long long)uuid[UI_HISTORY_END], + (unsigned long long)bits, + (unsigned long long)flags); +} + +/* + 100 after split brain try auto recover + 2 C_SYNC_SOURCE set BitMap + 1 C_SYNC_SOURCE use BitMap + 0 no Sync + -1 C_SYNC_TARGET use BitMap + -2 C_SYNC_TARGET set BitMap + -100 after split brain, disconnect +-1000 unrelated data + */ +static int drbd_uuid_compare(struct drbd_conf *mdev, int *rule_nr) __must_hold(local) +{ + u64 self, peer; + int i, j; + + self = mdev->ldev->md.uuid[UI_CURRENT] & ~((u64)1); + peer = mdev->p_uuid[UI_CURRENT] & ~((u64)1); + + *rule_nr = 10; + if (self == UUID_JUST_CREATED && peer == UUID_JUST_CREATED) + return 0; + + *rule_nr = 20; + if ((self == UUID_JUST_CREATED || self == (u64)0) && + peer != UUID_JUST_CREATED) + return -2; + + *rule_nr = 30; + if (self != UUID_JUST_CREATED && + (peer == UUID_JUST_CREATED || peer == (u64)0)) + return 2; + + if (self == peer) { + int rct, dc; /* roles at crash time */ + + if (mdev->p_uuid[UI_BITMAP] == (u64)0 && mdev->ldev->md.uuid[UI_BITMAP] != (u64)0) { + + if (mdev->agreed_pro_version < 91) + return -1001; + + if ((mdev->ldev->md.uuid[UI_BITMAP] & ~((u64)1)) == (mdev->p_uuid[UI_HISTORY_START] & ~((u64)1)) && + (mdev->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1)) == (mdev->p_uuid[UI_HISTORY_START + 1] & ~((u64)1))) { + dev_info(DEV, "was SyncSource, missed the resync finished event, corrected myself:\n"); + drbd_uuid_set_bm(mdev, 0UL); + + drbd_uuid_dump(mdev, "self", mdev->ldev->md.uuid, + mdev->state.disk >= D_NEGOTIATING ? drbd_bm_total_weight(mdev) : 0, 0); + *rule_nr = 34; + } else { + dev_info(DEV, "was SyncSource (peer failed to write sync_uuid)\n"); + *rule_nr = 36; + } + + return 1; + } + + if (mdev->ldev->md.uuid[UI_BITMAP] == (u64)0 && mdev->p_uuid[UI_BITMAP] != (u64)0) { + + if (mdev->agreed_pro_version < 91) + return -1001; + + if ((mdev->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1)) == (mdev->p_uuid[UI_BITMAP] & ~((u64)1)) && + (mdev->ldev->md.uuid[UI_HISTORY_START + 1] & ~((u64)1)) == (mdev->p_uuid[UI_HISTORY_START] & ~((u64)1))) { + dev_info(DEV, "was SyncTarget, peer missed the resync finished event, corrected peer:\n"); + + mdev->p_uuid[UI_HISTORY_START + 1] = mdev->p_uuid[UI_HISTORY_START]; + mdev->p_uuid[UI_HISTORY_START] = mdev->p_uuid[UI_BITMAP]; + mdev->p_uuid[UI_BITMAP] = 0UL; + + drbd_uuid_dump(mdev, "peer", mdev->p_uuid, mdev->p_uuid[UI_SIZE], mdev->p_uuid[UI_FLAGS]); + *rule_nr = 35; + } else { + dev_info(DEV, "was SyncTarget (failed to write sync_uuid)\n"); + *rule_nr = 37; + } + + return -1; + } + + /* Common power [off|failure] */ + rct = (test_bit(CRASHED_PRIMARY, &mdev->flags) ? 
1 : 0) + + (mdev->p_uuid[UI_FLAGS] & 2); + /* lowest bit is set when we were primary, + * next bit (weight 2) is set when peer was primary */ + *rule_nr = 40; + + switch (rct) { + case 0: /* !self_pri && !peer_pri */ return 0; + case 1: /* self_pri && !peer_pri */ return 1; + case 2: /* !self_pri && peer_pri */ return -1; + case 3: /* self_pri && peer_pri */ + dc = test_bit(DISCARD_CONCURRENT, &mdev->flags); + return dc ? -1 : 1; + } + } + + *rule_nr = 50; + peer = mdev->p_uuid[UI_BITMAP] & ~((u64)1); + if (self == peer) + return -1; + + *rule_nr = 51; + peer = mdev->p_uuid[UI_HISTORY_START] & ~((u64)1); + if (self == peer) { + self = mdev->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1); + peer = mdev->p_uuid[UI_HISTORY_START + 1] & ~((u64)1); + if (self == peer) { + /* The last P_SYNC_UUID did not get though. Undo the last start of + resync as sync source modifications of the peer's UUIDs. */ + + if (mdev->agreed_pro_version < 91) + return -1001; + + mdev->p_uuid[UI_BITMAP] = mdev->p_uuid[UI_HISTORY_START]; + mdev->p_uuid[UI_HISTORY_START] = mdev->p_uuid[UI_HISTORY_START + 1]; + return -1; + } + } + + *rule_nr = 60; + self = mdev->ldev->md.uuid[UI_CURRENT] & ~((u64)1); + for (i = UI_HISTORY_START; i <= UI_HISTORY_END; i++) { + peer = mdev->p_uuid[i] & ~((u64)1); + if (self == peer) + return -2; + } + + *rule_nr = 70; + self = mdev->ldev->md.uuid[UI_BITMAP] & ~((u64)1); + peer = mdev->p_uuid[UI_CURRENT] & ~((u64)1); + if (self == peer) + return 1; + + *rule_nr = 71; + self = mdev->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1); + if (self == peer) { + self = mdev->ldev->md.uuid[UI_HISTORY_START + 1] & ~((u64)1); + peer = mdev->p_uuid[UI_HISTORY_START] & ~((u64)1); + if (self == peer) { + /* The last P_SYNC_UUID did not get though. Undo the last start of + resync as sync source modifications of our UUIDs. */ + + if (mdev->agreed_pro_version < 91) + return -1001; + + _drbd_uuid_set(mdev, UI_BITMAP, mdev->ldev->md.uuid[UI_HISTORY_START]); + _drbd_uuid_set(mdev, UI_HISTORY_START, mdev->ldev->md.uuid[UI_HISTORY_START + 1]); + + dev_info(DEV, "Undid last start of resync:\n"); + + drbd_uuid_dump(mdev, "self", mdev->ldev->md.uuid, + mdev->state.disk >= D_NEGOTIATING ? drbd_bm_total_weight(mdev) : 0, 0); + + return 1; + } + } + + + *rule_nr = 80; + for (i = UI_HISTORY_START; i <= UI_HISTORY_END; i++) { + self = mdev->ldev->md.uuid[i] & ~((u64)1); + if (self == peer) + return 2; + } + + *rule_nr = 90; + self = mdev->ldev->md.uuid[UI_BITMAP] & ~((u64)1); + peer = mdev->p_uuid[UI_BITMAP] & ~((u64)1); + if (self == peer && self != ((u64)0)) + return 100; + + *rule_nr = 100; + for (i = UI_HISTORY_START; i <= UI_HISTORY_END; i++) { + self = mdev->ldev->md.uuid[i] & ~((u64)1); + for (j = UI_HISTORY_START; j <= UI_HISTORY_END; j++) { + peer = mdev->p_uuid[j] & ~((u64)1); + if (self == peer) + return -100; + } + } + + return -1000; +} + +/* drbd_sync_handshake() returns the new conn state on success, or + CONN_MASK (-1) on failure. 
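+ *
+ * The drbd_uuid_compare() result hg maps below roughly as follows:
+ * hg > 0 -> C_WF_BITMAP_S (become sync source),
+ * hg < 0 -> C_WF_BITMAP_T (become sync target),
+ * hg == 0 -> C_CONNECTED (no resync needed),
+ * where |hg| >= 2 additionally forces a full sync, and hg == +-100
+ * is resolved via the after-split-brain policies (drbd_asb_recover_*p).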
+ */
+static enum drbd_conns drbd_sync_handshake(struct drbd_conf *mdev, enum drbd_role peer_role,
+ enum drbd_disk_state peer_disk) __must_hold(local)
+{
+ int hg, rule_nr;
+ enum drbd_conns rv = C_MASK;
+ enum drbd_disk_state mydisk;
+
+ mydisk = mdev->state.disk;
+ if (mydisk == D_NEGOTIATING)
+ mydisk = mdev->new_state_tmp.disk;
+
+ dev_info(DEV, "drbd_sync_handshake:\n");
+ drbd_uuid_dump(mdev, "self", mdev->ldev->md.uuid, mdev->comm_bm_set, 0);
+ drbd_uuid_dump(mdev, "peer", mdev->p_uuid,
+ mdev->p_uuid[UI_SIZE], mdev->p_uuid[UI_FLAGS]);
+
+ hg = drbd_uuid_compare(mdev, &rule_nr);
+
+ dev_info(DEV, "uuid_compare()=%d by rule %d\n", hg, rule_nr);
+
+ if (hg == -1000) {
+ dev_alert(DEV, "Unrelated data, aborting!\n");
+ return C_MASK;
+ }
+ if (hg == -1001) {
+ dev_alert(DEV, "To resolve this both sides have to support at least protocol 91\n");
+ return C_MASK;
+ }
+
+ if ((mydisk == D_INCONSISTENT && peer_disk > D_INCONSISTENT) ||
+ (peer_disk == D_INCONSISTENT && mydisk > D_INCONSISTENT)) {
+ int f = (hg == -100) || abs(hg) == 2;
+ hg = mydisk > D_INCONSISTENT ? 1 : -1;
+ if (f)
+ hg = hg*2;
+ dev_info(DEV, "Becoming sync %s due to disk states.\n",
+ hg > 0 ? "source" : "target");
+ }
+
+ if (hg == 100 || (hg == -100 && mdev->net_conf->always_asbp)) {
+ int pcount = (mdev->state.role == R_PRIMARY)
+ + (peer_role == R_PRIMARY);
+ int forced = (hg == -100);
+
+ switch (pcount) {
+ case 0:
+ hg = drbd_asb_recover_0p(mdev);
+ break;
+ case 1:
+ hg = drbd_asb_recover_1p(mdev);
+ break;
+ case 2:
+ hg = drbd_asb_recover_2p(mdev);
+ break;
+ }
+ if (abs(hg) < 100) {
+ dev_warn(DEV, "Split-Brain detected, %d primaries, "
+ "automatically solved. Sync from %s node\n",
+ pcount, (hg < 0) ? "peer" : "this");
+ if (forced) {
+ dev_warn(DEV, "Doing a full sync, since"
+ " UUIDs were ambiguous.\n");
+ hg = hg*2;
+ }
+ }
+ }
+
+ if (hg == -100) {
+ if (mdev->net_conf->want_lose && !(mdev->p_uuid[UI_FLAGS]&1))
+ hg = -1;
+ if (!mdev->net_conf->want_lose && (mdev->p_uuid[UI_FLAGS]&1))
+ hg = 1;
+
+ if (abs(hg) < 100)
+ dev_warn(DEV, "Split-Brain detected, manually solved. "
+ "Sync from %s node\n",
+ (hg < 0) ? "peer" : "this");
+ }
+
+ if (hg == -100) {
+ dev_alert(DEV, "Split-Brain detected, dropping connection!\n");
+ drbd_khelper(mdev, "split-brain");
+ return C_MASK;
+ }
+
+ if (hg > 0 && mydisk <= D_INCONSISTENT) {
+ dev_err(DEV, "I shall become SyncSource, but I am inconsistent!\n");
+ return C_MASK;
+ }
+
+ if (hg < 0 && /* by intention we do not use mydisk here. */
+ mdev->state.role == R_PRIMARY && mdev->state.disk >= D_CONSISTENT) {
+ switch (mdev->net_conf->rr_conflict) {
+ case ASB_CALL_HELPER:
+ drbd_khelper(mdev, "pri-lost");
+ /* fall through */
+ case ASB_DISCONNECT:
+ dev_err(DEV, "I shall become SyncTarget, but I am primary!\n");
+ return C_MASK;
+ case ASB_VIOLENTLY:
+ dev_warn(DEV, "Becoming SyncTarget, violating the stable-data "
+ "assumption\n");
+ }
+ }
+
+ if (abs(hg) >= 2) {
+ dev_info(DEV, "Writing the whole bitmap, full sync required after drbd_sync_handshake.\n");
+ if (drbd_bitmap_io(mdev, &drbd_bmio_set_n_write, "set_n_write from sync_handshake"))
+ return C_MASK;
+ }
+
+ if (hg > 0) { /* become sync source.
*/
+ rv = C_WF_BITMAP_S;
+ } else if (hg < 0) { /* become sync target */
+ rv = C_WF_BITMAP_T;
+ } else {
+ rv = C_CONNECTED;
+ if (drbd_bm_total_weight(mdev)) {
+ dev_info(DEV, "No resync, but %lu bits in bitmap!\n",
+ drbd_bm_total_weight(mdev));
+ }
+ }
+
+ return rv;
+}
+
+/* returns 1 if invalid */
+static int cmp_after_sb(enum drbd_after_sb_p peer, enum drbd_after_sb_p self)
+{
+ /* ASB_DISCARD_REMOTE - ASB_DISCARD_LOCAL is valid */
+ if ((peer == ASB_DISCARD_REMOTE && self == ASB_DISCARD_LOCAL) ||
+ (self == ASB_DISCARD_REMOTE && peer == ASB_DISCARD_LOCAL))
+ return 0;
+
+ /* any other things with ASB_DISCARD_REMOTE or ASB_DISCARD_LOCAL are invalid */
+ if (peer == ASB_DISCARD_REMOTE || peer == ASB_DISCARD_LOCAL ||
+ self == ASB_DISCARD_REMOTE || self == ASB_DISCARD_LOCAL)
+ return 1;
+
+ /* everything else is valid if they are equal on both sides. */
+ if (peer == self)
+ return 0;
+
+ /* everything else is invalid. */
+ return 1;
+}
+
+static int receive_protocol(struct drbd_conf *mdev, struct p_header *h)
+{
+ struct p_protocol *p = (struct p_protocol *)h;
+ int header_size, data_size;
+ int p_proto, p_after_sb_0p, p_after_sb_1p, p_after_sb_2p;
+ int p_want_lose, p_two_primaries;
+ char p_integrity_alg[SHARED_SECRET_MAX] = "";
+
+ header_size = sizeof(*p) - sizeof(*h);
+ data_size = h->length - header_size;
+
+ if (drbd_recv(mdev, h->payload, header_size) != header_size)
+ return FALSE;
+
+ p_proto = be32_to_cpu(p->protocol);
+ p_after_sb_0p = be32_to_cpu(p->after_sb_0p);
+ p_after_sb_1p = be32_to_cpu(p->after_sb_1p);
+ p_after_sb_2p = be32_to_cpu(p->after_sb_2p);
+ p_want_lose = be32_to_cpu(p->want_lose);
+ p_two_primaries = be32_to_cpu(p->two_primaries);
+
+ if (p_proto != mdev->net_conf->wire_protocol) {
+ dev_err(DEV, "incompatible communication protocols\n");
+ goto disconnect;
+ }
+
+ if (cmp_after_sb(p_after_sb_0p, mdev->net_conf->after_sb_0p)) {
+ dev_err(DEV, "incompatible after-sb-0pri settings\n");
+ goto disconnect;
+ }
+
+ if (cmp_after_sb(p_after_sb_1p, mdev->net_conf->after_sb_1p)) {
+ dev_err(DEV, "incompatible after-sb-1pri settings\n");
+ goto disconnect;
+ }
+
+ if (cmp_after_sb(p_after_sb_2p, mdev->net_conf->after_sb_2p)) {
+ dev_err(DEV, "incompatible after-sb-2pri settings\n");
+ goto disconnect;
+ }
+
+ if (p_want_lose && mdev->net_conf->want_lose) {
+ dev_err(DEV, "both sides have the 'want_lose' flag set\n");
+ goto disconnect;
+ }
+
+ if (p_two_primaries != mdev->net_conf->two_primaries) {
+ dev_err(DEV, "incompatible setting of the two-primaries options\n");
+ goto disconnect;
+ }
+
+ if (mdev->agreed_pro_version >= 87) {
+ unsigned char *my_alg = mdev->net_conf->integrity_alg;
+
+ if (drbd_recv(mdev, p_integrity_alg, data_size) != data_size)
+ return FALSE;
+
+ p_integrity_alg[SHARED_SECRET_MAX-1] = 0;
+ if (strcmp(p_integrity_alg, my_alg)) {
+ dev_err(DEV, "incompatible setting of the data-integrity-alg\n");
+ goto disconnect;
+ }
+ dev_info(DEV, "data-integrity-alg: %s\n",
+ my_alg[0] ? my_alg : (unsigned char *)"<not-used>");
+ }
+
+ return TRUE;
+
+disconnect:
+ drbd_force_state(mdev, NS(conn, C_DISCONNECTING));
+ return FALSE;
+}
+
+/* helper function
+ * input: alg name, feature name
+ * return: NULL (alg name was "")
+ * ERR_PTR(error) if something goes wrong
+ * or the crypto hash ptr, if it worked out ok.
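+ *
+ * Typical use, as in receive_SyncParam() below (sketch):
+ * tfm = drbd_crypto_alloc_digest_safe(mdev, p->verify_alg, "verify-alg");
+ * if (IS_ERR(tfm))
+ * goto disconnect;
+ * with NULL simply meaning the peer sent an empty algorithm name.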
*/ +struct crypto_hash *drbd_crypto_alloc_digest_safe(const struct drbd_conf *mdev, + const char *alg, const char *name) +{ + struct crypto_hash *tfm; + + if (!alg[0]) + return NULL; + + tfm = crypto_alloc_hash(alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(tfm)) { + dev_err(DEV, "Can not allocate \"%s\" as %s (reason: %ld)\n", + alg, name, PTR_ERR(tfm)); + return tfm; + } + if (!drbd_crypto_is_hash(crypto_hash_tfm(tfm))) { + crypto_free_hash(tfm); + dev_err(DEV, "\"%s\" is not a digest (%s)\n", alg, name); + return ERR_PTR(-EINVAL); + } + return tfm; +} + +static int receive_SyncParam(struct drbd_conf *mdev, struct p_header *h) +{ + int ok = TRUE; + struct p_rs_param_89 *p = (struct p_rs_param_89 *)h; + unsigned int header_size, data_size, exp_max_sz; + struct crypto_hash *verify_tfm = NULL; + struct crypto_hash *csums_tfm = NULL; + const int apv = mdev->agreed_pro_version; + + exp_max_sz = apv <= 87 ? sizeof(struct p_rs_param) + : apv == 88 ? sizeof(struct p_rs_param) + + SHARED_SECRET_MAX + : /* 89 */ sizeof(struct p_rs_param_89); + + if (h->length > exp_max_sz) { + dev_err(DEV, "SyncParam packet too long: received %u, expected <= %u bytes\n", + h->length, exp_max_sz); + return FALSE; + } + + if (apv <= 88) { + header_size = sizeof(struct p_rs_param) - sizeof(*h); + data_size = h->length - header_size; + } else /* apv >= 89 */ { + header_size = sizeof(struct p_rs_param_89) - sizeof(*h); + data_size = h->length - header_size; + D_ASSERT(data_size == 0); + } + + /* initialize verify_alg and csums_alg */ + memset(p->verify_alg, 0, 2 * SHARED_SECRET_MAX); + + if (drbd_recv(mdev, h->payload, header_size) != header_size) + return FALSE; + + mdev->sync_conf.rate = be32_to_cpu(p->rate); + + if (apv >= 88) { + if (apv == 88) { + if (data_size > SHARED_SECRET_MAX) { + dev_err(DEV, "verify-alg too long, " + "peer wants %u, accepting only %u byte\n", + data_size, SHARED_SECRET_MAX); + return FALSE; + } + + if (drbd_recv(mdev, p->verify_alg, data_size) != data_size) + return FALSE; + + /* we expect NUL terminated string */ + /* but just in case someone tries to be evil */ + D_ASSERT(p->verify_alg[data_size-1] == 0); + p->verify_alg[data_size-1] = 0; + + } else /* apv >= 89 */ { + /* we still expect NUL terminated strings */ + /* but just in case someone tries to be evil */ + D_ASSERT(p->verify_alg[SHARED_SECRET_MAX-1] == 0); + D_ASSERT(p->csums_alg[SHARED_SECRET_MAX-1] == 0); + p->verify_alg[SHARED_SECRET_MAX-1] = 0; + p->csums_alg[SHARED_SECRET_MAX-1] = 0; + } + + if (strcmp(mdev->sync_conf.verify_alg, p->verify_alg)) { + if (mdev->state.conn == C_WF_REPORT_PARAMS) { + dev_err(DEV, "Different verify-alg settings. me=\"%s\" peer=\"%s\"\n", + mdev->sync_conf.verify_alg, p->verify_alg); + goto disconnect; + } + verify_tfm = drbd_crypto_alloc_digest_safe(mdev, + p->verify_alg, "verify-alg"); + if (IS_ERR(verify_tfm)) { + verify_tfm = NULL; + goto disconnect; + } + } + + if (apv >= 89 && strcmp(mdev->sync_conf.csums_alg, p->csums_alg)) { + if (mdev->state.conn == C_WF_REPORT_PARAMS) { + dev_err(DEV, "Different csums-alg settings. 
me=\"%s\" peer=\"%s\"\n", + mdev->sync_conf.csums_alg, p->csums_alg); + goto disconnect; + } + csums_tfm = drbd_crypto_alloc_digest_safe(mdev, + p->csums_alg, "csums-alg"); + if (IS_ERR(csums_tfm)) { + csums_tfm = NULL; + goto disconnect; + } + } + + + spin_lock(&mdev->peer_seq_lock); + /* lock against drbd_nl_syncer_conf() */ + if (verify_tfm) { + strcpy(mdev->sync_conf.verify_alg, p->verify_alg); + mdev->sync_conf.verify_alg_len = strlen(p->verify_alg) + 1; + crypto_free_hash(mdev->verify_tfm); + mdev->verify_tfm = verify_tfm; + dev_info(DEV, "using verify-alg: \"%s\"\n", p->verify_alg); + } + if (csums_tfm) { + strcpy(mdev->sync_conf.csums_alg, p->csums_alg); + mdev->sync_conf.csums_alg_len = strlen(p->csums_alg) + 1; + crypto_free_hash(mdev->csums_tfm); + mdev->csums_tfm = csums_tfm; + dev_info(DEV, "using csums-alg: \"%s\"\n", p->csums_alg); + } + spin_unlock(&mdev->peer_seq_lock); + } + + return ok; +disconnect: + /* just for completeness: actually not needed, + * as this is not reached if csums_tfm was ok. */ + crypto_free_hash(csums_tfm); + /* but free the verify_tfm again, if csums_tfm did not work out */ + crypto_free_hash(verify_tfm); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return FALSE; +} + +static void drbd_setup_order_type(struct drbd_conf *mdev, int peer) +{ + /* sorry, we currently have no working implementation + * of distributed TCQ */ +} + +/* warn if the arguments differ by more than 12.5% */ +static void warn_if_differ_considerably(struct drbd_conf *mdev, + const char *s, sector_t a, sector_t b) +{ + sector_t d; + if (a == 0 || b == 0) + return; + d = (a > b) ? (a - b) : (b - a); + if (d > (a>>3) || d > (b>>3)) + dev_warn(DEV, "Considerable difference in %s: %llus vs. %llus\n", s, + (unsigned long long)a, (unsigned long long)b); +} + +static int receive_sizes(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_sizes *p = (struct p_sizes *)h; + enum determine_dev_size dd = unchanged; + unsigned int max_seg_s; + sector_t p_size, p_usize, my_usize; + int ldsc = 0; /* local disk size changed */ + enum drbd_conns nconn; + + ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE; + if (drbd_recv(mdev, h->payload, h->length) != h->length) + return FALSE; + + p_size = be64_to_cpu(p->d_size); + p_usize = be64_to_cpu(p->u_size); + + if (p_size == 0 && mdev->state.disk == D_DISKLESS) { + dev_err(DEV, "some backing storage is needed\n"); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return FALSE; + } + + /* just store the peer's disk size for now. + * we still need to figure out whether we accept that. */ + mdev->p_size = p_size; + +#define min_not_zero(l, r) (l == 0) ? r : ((r == 0) ? l : min(l, r)) + if (get_ldev(mdev)) { + warn_if_differ_considerably(mdev, "lower level device sizes", + p_size, drbd_get_max_capacity(mdev->ldev)); + warn_if_differ_considerably(mdev, "user requested size", + p_usize, mdev->ldev->dc.disk_size); + + /* if this is the first connect, or an otherwise expected + * param exchange, choose the minimum */ + if (mdev->state.conn == C_WF_REPORT_PARAMS) + p_usize = min_not_zero((sector_t)mdev->ldev->dc.disk_size, + p_usize); + + my_usize = mdev->ldev->dc.disk_size; + + if (mdev->ldev->dc.disk_size != p_usize) { + mdev->ldev->dc.disk_size = p_usize; + dev_info(DEV, "Peer sets u_size to %lu sectors\n", + (unsigned long)mdev->ldev->dc.disk_size); + } + + /* Never shrink a device with usable data during connect. + But allow online shrinking if we are connected. 
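+ Example: if the peer's effective size would shrink us below our
+ current capacity while we still have usable data (disk state
+ D_OUTDATED or better) and are not yet connected, the connect is
+ refused below; the same size change arriving in C_CONNECTED
+ state would be applied as an online shrink.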
*/ + if (drbd_new_dev_size(mdev, mdev->ldev) < + drbd_get_capacity(mdev->this_bdev) && + mdev->state.disk >= D_OUTDATED && + mdev->state.conn < C_CONNECTED) { + dev_err(DEV, "The peer's disk size is too small!\n"); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + mdev->ldev->dc.disk_size = my_usize; + put_ldev(mdev); + return FALSE; + } + put_ldev(mdev); + } +#undef min_not_zero + + if (get_ldev(mdev)) { + dd = drbd_determin_dev_size(mdev); + put_ldev(mdev); + if (dd == dev_size_error) + return FALSE; + drbd_md_sync(mdev); + } else { + /* I am diskless, need to accept the peer's size. */ + drbd_set_my_capacity(mdev, p_size); + } + + if (mdev->p_uuid && mdev->state.conn <= C_CONNECTED && get_ldev(mdev)) { + nconn = drbd_sync_handshake(mdev, + mdev->state.peer, mdev->state.pdsk); + put_ldev(mdev); + + if (nconn == C_MASK) { + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return FALSE; + } + + if (drbd_request_state(mdev, NS(conn, nconn)) < SS_SUCCESS) { + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return FALSE; + } + } + + if (get_ldev(mdev)) { + if (mdev->ldev->known_size != drbd_get_capacity(mdev->ldev->backing_bdev)) { + mdev->ldev->known_size = drbd_get_capacity(mdev->ldev->backing_bdev); + ldsc = 1; + } + + max_seg_s = be32_to_cpu(p->max_segment_size); + if (max_seg_s != queue_max_segment_size(mdev->rq_queue)) + drbd_setup_queue_param(mdev, max_seg_s); + + drbd_setup_order_type(mdev, be32_to_cpu(p->queue_order_type)); + put_ldev(mdev); + } + + if (mdev->state.conn > C_WF_REPORT_PARAMS) { + if (be64_to_cpu(p->c_size) != + drbd_get_capacity(mdev->this_bdev) || ldsc) { + /* we have different sizes, probably peer + * needs to know my new size... */ + drbd_send_sizes(mdev, 0); + } + if (test_and_clear_bit(RESIZE_PENDING, &mdev->flags) || + (dd == grew && mdev->state.conn == C_CONNECTED)) { + if (mdev->state.pdsk >= D_INCONSISTENT && + mdev->state.disk >= D_INCONSISTENT) + resync_after_online_grow(mdev); + else + set_bit(RESYNC_AFTER_NEG, &mdev->flags); + } + } + + return TRUE; +} + +static int receive_uuids(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_uuids *p = (struct p_uuids *)h; + u64 *p_uuid; + int i; + + ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE; + if (drbd_recv(mdev, h->payload, h->length) != h->length) + return FALSE; + + p_uuid = kmalloc(sizeof(u64)*UI_EXTENDED_SIZE, GFP_NOIO); + if (!p_uuid) { + dev_err(DEV, "kmalloc of p_uuid failed\n"); + return FALSE; + } + + for (i = UI_CURRENT; i < UI_EXTENDED_SIZE; i++) + p_uuid[i] = be64_to_cpu(p->uuid[i]); + + kfree(mdev->p_uuid); + mdev->p_uuid = p_uuid; + + if (mdev->state.conn < C_CONNECTED && + mdev->state.disk < D_INCONSISTENT && + mdev->state.role == R_PRIMARY && + (mdev->ed_uuid & ~((u64)1)) != (p_uuid[UI_CURRENT] & ~((u64)1))) { + dev_err(DEV, "Can only connect to data with current UUID=%016llX\n", + (unsigned long long)mdev->ed_uuid); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return FALSE; + } + + if (get_ldev(mdev)) { + int skip_initial_sync = + mdev->state.conn == C_CONNECTED && + mdev->agreed_pro_version >= 90 && + mdev->ldev->md.uuid[UI_CURRENT] == UUID_JUST_CREATED && + (p_uuid[UI_FLAGS] & 8); + if (skip_initial_sync) { + dev_info(DEV, "Accepted new current UUID, preparing to skip initial sync\n"); + drbd_bitmap_io(mdev, &drbd_bmio_clear_n_write, + "clear_n_write from receive_uuids"); + _drbd_uuid_set(mdev, UI_CURRENT, p_uuid[UI_CURRENT]); + _drbd_uuid_set(mdev, UI_BITMAP, 0); + _drbd_set_state(_NS2(mdev, disk, D_UP_TO_DATE, pdsk, D_UP_TO_DATE), + CS_VERBOSE, NULL); + drbd_md_sync(mdev); + } + put_ldev(mdev); + } + + /* Before we test for the 
disk state, we should wait until any ongoing + cluster wide state change has finished. That is important if + we are primary and are detaching from our disk. We need to see the + new disk state... */ + wait_event(mdev->misc_wait, !test_bit(CLUSTER_ST_CHANGE, &mdev->flags)); + if (mdev->state.conn >= C_CONNECTED && mdev->state.disk < D_INCONSISTENT) + drbd_set_ed_uuid(mdev, p_uuid[UI_CURRENT]); + + return TRUE; +} + +/** + * convert_state() - Converts the peer's view of the cluster state to our point of view + * @ps: The state as seen by the peer. + */ +static union drbd_state convert_state(union drbd_state ps) +{ + union drbd_state ms; + + static enum drbd_conns c_tab[] = { + [C_CONNECTED] = C_CONNECTED, + + [C_STARTING_SYNC_S] = C_STARTING_SYNC_T, + [C_STARTING_SYNC_T] = C_STARTING_SYNC_S, + [C_DISCONNECTING] = C_TEAR_DOWN, /* C_NETWORK_FAILURE, */ + [C_VERIFY_S] = C_VERIFY_T, + [C_MASK] = C_MASK, + }; + + ms.i = ps.i; + + ms.conn = c_tab[ps.conn]; + ms.peer = ps.role; + ms.role = ps.peer; + ms.pdsk = ps.disk; + ms.disk = ps.pdsk; + ms.peer_isp = (ps.aftr_isp | ps.user_isp); + + return ms; +} + +static int receive_req_state(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_req_state *p = (struct p_req_state *)h; + union drbd_state mask, val; + int rv; + + ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE; + if (drbd_recv(mdev, h->payload, h->length) != h->length) + return FALSE; + + mask.i = be32_to_cpu(p->mask); + val.i = be32_to_cpu(p->val); + + if (test_bit(DISCARD_CONCURRENT, &mdev->flags) && + test_bit(CLUSTER_ST_CHANGE, &mdev->flags)) { + drbd_send_sr_reply(mdev, SS_CONCURRENT_ST_CHG); + return TRUE; + } + + mask = convert_state(mask); + val = convert_state(val); + + rv = drbd_change_state(mdev, CS_VERBOSE, mask, val); + + drbd_send_sr_reply(mdev, rv); + drbd_md_sync(mdev); + + return TRUE; +} + +static int receive_state(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_state *p = (struct p_state *)h; + enum drbd_conns nconn, oconn; + union drbd_state ns, peer_state; + enum drbd_disk_state real_peer_disk; + int rv; + + ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) + return FALSE; + + if (drbd_recv(mdev, h->payload, h->length) != h->length) + return FALSE; + + peer_state.i = be32_to_cpu(p->state); + + real_peer_disk = peer_state.disk; + if (peer_state.disk == D_NEGOTIATING) { + real_peer_disk = mdev->p_uuid[UI_FLAGS] & 4 ? 
D_INCONSISTENT : D_CONSISTENT; + dev_info(DEV, "real peer disk state = %s\n", drbd_disk_str(real_peer_disk)); + } + + spin_lock_irq(&mdev->req_lock); + retry: + oconn = nconn = mdev->state.conn; + spin_unlock_irq(&mdev->req_lock); + + if (nconn == C_WF_REPORT_PARAMS) + nconn = C_CONNECTED; + + if (mdev->p_uuid && peer_state.disk >= D_NEGOTIATING && + get_ldev_if_state(mdev, D_NEGOTIATING)) { + int cr; /* consider resync */ + + /* if we established a new connection */ + cr = (oconn < C_CONNECTED); + /* if we had an established connection + * and one of the nodes newly attaches a disk */ + cr |= (oconn == C_CONNECTED && + (peer_state.disk == D_NEGOTIATING || + mdev->state.disk == D_NEGOTIATING)); + /* if we have both been inconsistent, and the peer has been + * forced to be UpToDate with --overwrite-data */ + cr |= test_bit(CONSIDER_RESYNC, &mdev->flags); + /* if we had been plain connected, and the admin requested to + * start a sync by "invalidate" or "invalidate-remote" */ + cr |= (oconn == C_CONNECTED && + (peer_state.conn >= C_STARTING_SYNC_S && + peer_state.conn <= C_WF_BITMAP_T)); + + if (cr) + nconn = drbd_sync_handshake(mdev, peer_state.role, real_peer_disk); + + put_ldev(mdev); + if (nconn == C_MASK) { + if (mdev->state.disk == D_NEGOTIATING) { + drbd_force_state(mdev, NS(disk, D_DISKLESS)); + nconn = C_CONNECTED; + } else if (peer_state.disk == D_NEGOTIATING) { + dev_err(DEV, "Disk attach process on the peer node was aborted.\n"); + peer_state.disk = D_DISKLESS; + } else { + D_ASSERT(oconn == C_WF_REPORT_PARAMS); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return FALSE; + } + } + } + + spin_lock_irq(&mdev->req_lock); + if (mdev->state.conn != oconn) + goto retry; + clear_bit(CONSIDER_RESYNC, &mdev->flags); + ns.i = mdev->state.i; + ns.conn = nconn; + ns.peer = peer_state.role; + ns.pdsk = real_peer_disk; + ns.peer_isp = (peer_state.aftr_isp | peer_state.user_isp); + if ((nconn == C_CONNECTED || nconn == C_WF_BITMAP_S) && ns.disk == D_NEGOTIATING) + ns.disk = mdev->new_state_tmp.disk; + + rv = _drbd_set_state(mdev, ns, CS_VERBOSE | CS_HARD, NULL); + ns = mdev->state; + spin_unlock_irq(&mdev->req_lock); + + if (rv < SS_SUCCESS) { + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return FALSE; + } + + if (oconn > C_WF_REPORT_PARAMS) { + if (nconn > C_CONNECTED && peer_state.conn <= C_CONNECTED && + peer_state.disk != D_NEGOTIATING ) { + /* we want resync, peer has not yet decided to sync... */ + /* Nowadays only used when forcing a node into primary role and + setting its disk to UpToDate with that */ + drbd_send_uuids(mdev); + drbd_send_state(mdev); + } + } + + mdev->net_conf->want_lose = 0; + + drbd_md_sync(mdev); /* update connected indicator, la_size, ... 
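and write the meta data block out, so this survives a crash.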
*/ + + return TRUE; +} + +static int receive_sync_uuid(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_rs_uuid *p = (struct p_rs_uuid *)h; + + wait_event(mdev->misc_wait, + mdev->state.conn == C_WF_SYNC_UUID || + mdev->state.conn < C_CONNECTED || + mdev->state.disk < D_NEGOTIATING); + + /* D_ASSERT( mdev->state.conn == C_WF_SYNC_UUID ); */ + + ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE; + if (drbd_recv(mdev, h->payload, h->length) != h->length) + return FALSE; + + /* Here the _drbd_uuid_ functions are right, current should + _not_ be rotated into the history */ + if (get_ldev_if_state(mdev, D_NEGOTIATING)) { + _drbd_uuid_set(mdev, UI_CURRENT, be64_to_cpu(p->uuid)); + _drbd_uuid_set(mdev, UI_BITMAP, 0UL); + + drbd_start_resync(mdev, C_SYNC_TARGET); + + put_ldev(mdev); + } else + dev_err(DEV, "Ignoring SyncUUID packet!\n"); + + return TRUE; +} + +enum receive_bitmap_ret { OK, DONE, FAILED }; + +static enum receive_bitmap_ret +receive_bitmap_plain(struct drbd_conf *mdev, struct p_header *h, + unsigned long *buffer, struct bm_xfer_ctx *c) +{ + unsigned num_words = min_t(size_t, BM_PACKET_WORDS, c->bm_words - c->word_offset); + unsigned want = num_words * sizeof(long); + + if (want != h->length) { + dev_err(DEV, "%s:want (%u) != h->length (%u)\n", __func__, want, h->length); + return FAILED; + } + if (want == 0) + return DONE; + if (drbd_recv(mdev, buffer, want) != want) + return FAILED; + + drbd_bm_merge_lel(mdev, c->word_offset, num_words, buffer); + + c->word_offset += num_words; + c->bit_offset = c->word_offset * BITS_PER_LONG; + if (c->bit_offset > c->bm_bits) + c->bit_offset = c->bm_bits; + + return OK; +} + +static enum receive_bitmap_ret +recv_bm_rle_bits(struct drbd_conf *mdev, + struct p_compressed_bm *p, + struct bm_xfer_ctx *c) +{ + struct bitstream bs; + u64 look_ahead; + u64 rl; + u64 tmp; + unsigned long s = c->bit_offset; + unsigned long e; + int len = p->head.length - (sizeof(*p) - sizeof(p->head)); + int toggle = DCBP_get_start(p); + int have; + int bits; + + bitstream_init(&bs, p->code, len, DCBP_get_pad_bits(p)); + + bits = bitstream_get_bits(&bs, &look_ahead, 64); + if (bits < 0) + return FAILED; + + for (have = bits; have > 0; s += rl, toggle = !toggle) { + bits = vli_decode_bits(&rl, look_ahead); + if (bits <= 0) + return FAILED; + + if (toggle) { + e = s + rl -1; + if (e >= c->bm_bits) { + dev_err(DEV, "bitmap overflow (e:%lu) while decoding bm RLE packet\n", e); + return FAILED; + } + _drbd_bm_set_bits(mdev, s, e); + } + + if (have < bits) { + dev_err(DEV, "bitmap decoding error: h:%d b:%d la:0x%08llx l:%u/%u\n", + have, bits, look_ahead, + (unsigned int)(bs.cur.b - p->code), + (unsigned int)bs.buf_len); + return FAILED; + } + look_ahead >>= bits; + have -= bits; + + bits = bitstream_get_bits(&bs, &tmp, 64 - have); + if (bits < 0) + return FAILED; + look_ahead |= tmp << have; + have += bits; + } + + c->bit_offset = s; + bm_xfer_ctx_bit_to_word_offset(c); + + return (s == c->bm_bits) ? DONE : OK; +} + +static enum receive_bitmap_ret +decode_bitmap_c(struct drbd_conf *mdev, + struct p_compressed_bm *p, + struct bm_xfer_ctx *c) +{ + if (DCBP_get_code(p) == RLE_VLI_Bits) + return recv_bm_rle_bits(mdev, p, c); + + /* other variants had been implemented for evaluation, + * but have been dropped as this one turned out to be "best" + * during all our tests. 
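+ * + * RLE_VLI_Bits stores the bitmap as alternating runs of clear and set + * bits; each run length is transmitted as a variable length integer, + * which recv_bm_rle_bits() above decodes via vli_decode_bits().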
*/ + + dev_err(DEV, "receive_bitmap_c: unknown encoding %u\n", p->encoding); + drbd_force_state(mdev, NS(conn, C_PROTOCOL_ERROR)); + return FAILED; +} + +void INFO_bm_xfer_stats(struct drbd_conf *mdev, + const char *direction, struct bm_xfer_ctx *c) +{ + /* what would it take to transfer it "plaintext" */ + unsigned plain = sizeof(struct p_header) * + ((c->bm_words+BM_PACKET_WORDS-1)/BM_PACKET_WORDS+1) + + c->bm_words * sizeof(long); + unsigned total = c->bytes[0] + c->bytes[1]; + unsigned r; + + /* total cannot be zero. but just in case: */ + if (total == 0) + return; + + /* don't report if not compressed */ + if (total >= plain) + return; + + /* total < plain. check for overflow, still */ + r = (total > UINT_MAX/1000) ? (total / (plain/1000)) + : (1000 * total / plain); + + if (r > 1000) + r = 1000; + + r = 1000 - r; + dev_info(DEV, "%s bitmap stats [Bytes(packets)]: plain %u(%u), RLE %u(%u), " + "total %u; compression: %u.%u%%\n", + direction, + c->bytes[1], c->packets[1], + c->bytes[0], c->packets[0], + total, r/10, r % 10); +} + +/* Since we are processing the bitfield from lower addresses to higher, + it does not matter whether we process it in 32 bit or 64 bit chunks, + as long as it is little endian. (Understand it as a byte stream, + beginning with the lowest byte...) If we used big endian, + we would need to process it from the highest address to the lowest, + in order to be agnostic to the 32 vs 64 bit issue. + + returns 0 on failure, 1 if we successfully received it. */ +static int receive_bitmap(struct drbd_conf *mdev, struct p_header *h) +{ + struct bm_xfer_ctx c; + void *buffer; + enum receive_bitmap_ret ret; + int ok = FALSE; + + wait_event(mdev->misc_wait, !atomic_read(&mdev->ap_bio_cnt)); + + drbd_bm_lock(mdev, "receive bitmap"); + + /* maybe we should use some per thread scratch page, + * and allocate that during initial device creation? */ + buffer = (unsigned long *) __get_free_page(GFP_NOIO); + if (!buffer) { + dev_err(DEV, "failed to allocate one page buffer in %s\n", __func__); + goto out; + } + + c = (struct bm_xfer_ctx) { + .bm_bits = drbd_bm_bits(mdev), + .bm_words = drbd_bm_words(mdev), + }; + + do { + if (h->command == P_BITMAP) { + ret = receive_bitmap_plain(mdev, h, buffer, &c); + } else if (h->command == P_COMPRESSED_BITMAP) { + /* MAYBE: sanity check that we speak proto >= 90, + * and the feature is enabled! */ + struct p_compressed_bm *p; + + if (h->length > BM_PACKET_PAYLOAD_BYTES) { + dev_err(DEV, "ReportCBitmap packet too large\n"); + goto out; + } + /* use the page buff */ + p = buffer; + memcpy(p, h, sizeof(*h)); + if (drbd_recv(mdev, p->head.payload, h->length) != h->length) + goto out; + if (p->head.length <= (sizeof(*p) - sizeof(p->head))) { + dev_err(DEV, "ReportCBitmap packet too small (l:%u)\n", p->head.length); + goto out; + } + ret = decode_bitmap_c(mdev, p, &c); + } else { + dev_warn(DEV, "receive_bitmap: h->command neither ReportBitMap nor ReportCBitMap (is 0x%x)\n", h->command); + goto out; + } + + c.packets[h->command == P_BITMAP]++; + c.bytes[h->command == P_BITMAP] += sizeof(struct p_header) + h->length; + + if (ret != OK) + break; + + if (!drbd_recv_header(mdev, h)) + goto out; + } while (ret == OK); + if (ret == FAILED) + goto out; + + INFO_bm_xfer_stats(mdev, "receive", &c); + + if (mdev->state.conn == C_WF_BITMAP_T) { + ok = !drbd_send_bitmap(mdev); + if (!ok) + goto out; + /* Omit CS_ORDERED with this state transition to avoid deadlocks. 
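Presumably an ordered change would wait on the worker, which may in turn be waiting for the receiver, i.e. us.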
*/ + ok = _drbd_request_state(mdev, NS(conn, C_WF_SYNC_UUID), CS_VERBOSE); + D_ASSERT(ok == SS_SUCCESS); + } else if (mdev->state.conn != C_WF_BITMAP_S) { + /* admin may have requested C_DISCONNECTING, + * other threads may have noticed network errors */ + dev_info(DEV, "unexpected cstate (%s) in receive_bitmap\n", + drbd_conn_str(mdev->state.conn)); + } + + ok = TRUE; + out: + drbd_bm_unlock(mdev); + if (ok && mdev->state.conn == C_WF_BITMAP_S) + drbd_start_resync(mdev, C_SYNC_SOURCE); + free_page((unsigned long) buffer); + return ok; +} + +static int receive_skip(struct drbd_conf *mdev, struct p_header *h) +{ + /* TODO zero copy sink :) */ + static char sink[128]; + int size, want, r; + + dev_warn(DEV, "skipping unknown optional packet type %d, l: %d!\n", + h->command, h->length); + + size = h->length; + while (size > 0) { + want = min_t(int, size, sizeof(sink)); + r = drbd_recv(mdev, sink, want); + ERR_IF(r <= 0) break; + size -= r; + } + return size == 0; +} + +static int receive_UnplugRemote(struct drbd_conf *mdev, struct p_header *h) +{ + if (mdev->state.disk >= D_INCONSISTENT) + drbd_kick_lo(mdev); + + /* Make sure we've acked all the TCP data associated + * with the data requests being unplugged */ + drbd_tcp_quickack(mdev->data.socket); + + return TRUE; +} + +typedef int (*drbd_cmd_handler_f)(struct drbd_conf *, struct p_header *); + +static drbd_cmd_handler_f drbd_default_handler[] = { + [P_DATA] = receive_Data, + [P_DATA_REPLY] = receive_DataReply, + [P_RS_DATA_REPLY] = receive_RSDataReply, + [P_BARRIER] = receive_Barrier, + [P_BITMAP] = receive_bitmap, + [P_COMPRESSED_BITMAP] = receive_bitmap, + [P_UNPLUG_REMOTE] = receive_UnplugRemote, + [P_DATA_REQUEST] = receive_DataRequest, + [P_RS_DATA_REQUEST] = receive_DataRequest, + [P_SYNC_PARAM] = receive_SyncParam, + [P_SYNC_PARAM89] = receive_SyncParam, + [P_PROTOCOL] = receive_protocol, + [P_UUIDS] = receive_uuids, + [P_SIZES] = receive_sizes, + [P_STATE] = receive_state, + [P_STATE_CHG_REQ] = receive_req_state, + [P_SYNC_UUID] = receive_sync_uuid, + [P_OV_REQUEST] = receive_DataRequest, + [P_OV_REPLY] = receive_DataRequest, + [P_CSUM_RS_REQUEST] = receive_DataRequest, + /* anything missing from this table is in + * the asender_tbl, see get_asender_cmd */ + [P_MAX_CMD] = NULL, +}; + +static drbd_cmd_handler_f *drbd_cmd_handler = drbd_default_handler; +static drbd_cmd_handler_f *drbd_opt_cmd_handler; + +static void drbdd(struct drbd_conf *mdev) +{ + drbd_cmd_handler_f handler; + struct p_header *header = &mdev->data.rbuf.header; + + while (get_t_state(&mdev->receiver) == Running) { + drbd_thread_current_set_cpu(mdev); + if (!drbd_recv_header(mdev, header)) + break; + + if (header->command < P_MAX_CMD) + handler = drbd_cmd_handler[header->command]; + else if (P_MAY_IGNORE < header->command + && header->command < P_MAX_OPT_CMD) + handler = drbd_opt_cmd_handler[header->command-P_MAY_IGNORE]; + else if (header->command > P_MAX_OPT_CMD) + handler = receive_skip; + else + handler = NULL; + + if (unlikely(!handler)) { + dev_err(DEV, "unknown packet type %d, l: %d!\n", + header->command, header->length); + drbd_force_state(mdev, NS(conn, C_PROTOCOL_ERROR)); + break; + } + if (unlikely(!handler(mdev, header))) { + dev_err(DEV, "error receiving %s, l: %d!\n", + cmdname(header->command), header->length); + drbd_force_state(mdev, NS(conn, C_PROTOCOL_ERROR)); + break; + } + + trace_drbd_packet(mdev, mdev->data.socket, 2, &mdev->data.rbuf, + __FILE__, __LINE__); + } +} + +static void drbd_fail_pending_reads(struct drbd_conf *mdev) +{ + struct 
hlist_head *slot; + struct hlist_node *pos; + struct hlist_node *tmp; + struct drbd_request *req; + int i; + + /* + * Application READ requests + */ + spin_lock_irq(&mdev->req_lock); + for (i = 0; i < APP_R_HSIZE; i++) { + slot = mdev->app_reads_hash+i; + hlist_for_each_entry_safe(req, pos, tmp, slot, colision) { + /* it may (but should not any longer!) + * be on the work queue; if that assert triggers, + * we need to also grab the + * spin_lock_irq(&mdev->data.work.q_lock); + * and list_del_init here. */ + D_ASSERT(list_empty(&req->w.list)); + /* It would be nice to complete outside of spinlock. + * But this is easier for now. */ + _req_mod(req, connection_lost_while_pending); + } + } + for (i = 0; i < APP_R_HSIZE; i++) + if (!hlist_empty(mdev->app_reads_hash+i)) + dev_warn(DEV, "ASSERT FAILED: app_reads_hash[%d].first: " + "%p, should be NULL\n", i, mdev->app_reads_hash[i].first); + + memset(mdev->app_reads_hash, 0, APP_R_HSIZE*sizeof(void *)); + spin_unlock_irq(&mdev->req_lock); +} + +void drbd_flush_workqueue(struct drbd_conf *mdev) +{ + struct drbd_wq_barrier barr; + + barr.w.cb = w_prev_work_done; + init_completion(&barr.done); + drbd_queue_work(&mdev->data.work, &barr.w); + wait_for_completion(&barr.done); +} + +static void drbd_disconnect(struct drbd_conf *mdev) +{ + enum drbd_fencing_p fp; + union drbd_state os, ns; + int rv = SS_UNKNOWN_ERROR; + unsigned int i; + + if (mdev->state.conn == C_STANDALONE) + return; + if (mdev->state.conn >= C_WF_CONNECTION) + dev_err(DEV, "ASSERT FAILED cstate = %s, expected < WFConnection\n", + drbd_conn_str(mdev->state.conn)); + + /* asender does not clean up anything. it must not interfere, either */ + drbd_thread_stop(&mdev->asender); + + mutex_lock(&mdev->data.mutex); + drbd_free_sock(mdev); + mutex_unlock(&mdev->data.mutex); + + spin_lock_irq(&mdev->req_lock); + _drbd_wait_ee_list_empty(mdev, &mdev->active_ee); + _drbd_wait_ee_list_empty(mdev, &mdev->sync_ee); + _drbd_wait_ee_list_empty(mdev, &mdev->read_ee); + spin_unlock_irq(&mdev->req_lock); + + /* We do not have data structures that would allow us to + * get the rs_pending_cnt down to 0 again. + * * On C_SYNC_TARGET we do not have any data structures describing + * the pending RSDataRequest's we have sent. + * * On C_SYNC_SOURCE there is no data structure that tracks + * the P_RS_DATA_REPLY blocks that we sent to the SyncTarget. + * And no, it is not the sum of the reference counts in the + * resync_LRU. The resync_LRU tracks the whole operation including + * the disk-IO, while the rs_pending_cnt only tracks the blocks + * on the fly. */ + drbd_rs_cancel_all(mdev); + mdev->rs_total = 0; + mdev->rs_failed = 0; + atomic_set(&mdev->rs_pending_cnt, 0); + wake_up(&mdev->misc_wait); + + /* make sure syncer is stopped and w_resume_next_sg queued */ + del_timer_sync(&mdev->resync_timer); + set_bit(STOP_SYNC_TIMER, &mdev->flags); + resync_timer_fn((unsigned long)mdev); + + /* so we can be sure that all remote or resync reads + * made it at least to net_ee */ + wait_event(mdev->misc_wait, !atomic_read(&mdev->local_cnt)); + + /* wait for all w_e_end_data_req, w_e_end_rsdata_req, w_send_barrier, + * w_make_resync_request etc. which may still be on the worker queue + * to be "canceled" */ + drbd_flush_workqueue(mdev); + + /* This also does reclaim_net_ee(). 
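That gives back the pages of ee entries that are done with the network.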
If we do this too early, we might + * miss some resync ee and pages.*/ + drbd_process_done_ee(mdev); + + kfree(mdev->p_uuid); + mdev->p_uuid = NULL; + + if (!mdev->state.susp) + tl_clear(mdev); + + drbd_fail_pending_reads(mdev); + + dev_info(DEV, "Connection closed\n"); + + drbd_md_sync(mdev); + + fp = FP_DONT_CARE; + if (get_ldev(mdev)) { + fp = mdev->ldev->dc.fencing; + put_ldev(mdev); + } + + if (mdev->state.role == R_PRIMARY) { + if (fp >= FP_RESOURCE && mdev->state.pdsk >= D_UNKNOWN) { + enum drbd_disk_state nps = drbd_try_outdate_peer(mdev); + drbd_request_state(mdev, NS(pdsk, nps)); + } + } + + spin_lock_irq(&mdev->req_lock); + os = mdev->state; + if (os.conn >= C_UNCONNECTED) { + /* Do not restart in case we are C_DISCONNECTING */ + ns = os; + ns.conn = C_UNCONNECTED; + rv = _drbd_set_state(mdev, ns, CS_VERBOSE, NULL); + } + spin_unlock_irq(&mdev->req_lock); + + if (os.conn == C_DISCONNECTING) { + struct hlist_head *h; + wait_event(mdev->misc_wait, atomic_read(&mdev->net_cnt) == 0); + + /* we must not free the tl_hash + * while application io is still on the fly */ + wait_event(mdev->misc_wait, atomic_read(&mdev->ap_bio_cnt) == 0); + + spin_lock_irq(&mdev->req_lock); + /* paranoia code */ + for (h = mdev->ee_hash; h < mdev->ee_hash + mdev->ee_hash_s; h++) + if (h->first) + dev_err(DEV, "ASSERT FAILED ee_hash[%u].first == %p, expected NULL\n", + (int)(h - mdev->ee_hash), h->first); + kfree(mdev->ee_hash); + mdev->ee_hash = NULL; + mdev->ee_hash_s = 0; + + /* paranoia code */ + for (h = mdev->tl_hash; h < mdev->tl_hash + mdev->tl_hash_s; h++) + if (h->first) + dev_err(DEV, "ASSERT FAILED tl_hash[%u] == %p, expected NULL\n", + (int)(h - mdev->tl_hash), h->first); + kfree(mdev->tl_hash); + mdev->tl_hash = NULL; + mdev->tl_hash_s = 0; + spin_unlock_irq(&mdev->req_lock); + + crypto_free_hash(mdev->cram_hmac_tfm); + mdev->cram_hmac_tfm = NULL; + + kfree(mdev->net_conf); + mdev->net_conf = NULL; + drbd_request_state(mdev, NS(conn, C_STANDALONE)); + } + + /* tcp_close and release of sendpage pages can be deferred. I don't + * want to use SO_LINGER, because apparently it can be deferred for + * more than 20 seconds (longest time I checked). + * + * Actually we don't care for exactly when the network stack does its + * put_page(), but release our reference on these pages right here. + */ + i = drbd_release_ee(mdev, &mdev->net_ee); + if (i) + dev_info(DEV, "net_ee not empty, killed %u entries\n", i); + i = atomic_read(&mdev->pp_in_use); + if (i) + dev_info(DEV, "pp_in_use = %u, expected 0\n", i); + + D_ASSERT(list_empty(&mdev->read_ee)); + D_ASSERT(list_empty(&mdev->active_ee)); + D_ASSERT(list_empty(&mdev->sync_ee)); + D_ASSERT(list_empty(&mdev->done_ee)); + + /* ok, no more ee's on the fly, it is safe to reset the epoch_size */ + atomic_set(&mdev->current_epoch->epoch_size, 0); + D_ASSERT(list_empty(&mdev->current_epoch->list)); +} + +/* + * We support PRO_VERSION_MIN to PRO_VERSION_MAX. The protocol version + * we can agree on is stored in agreed_pro_version. + * + * feature flags and the reserved array should be enough room for future + * enhancements of the handshake protocol, and possible plugins... + * + * for now, they are expected to be zero, but ignored. + */ +static int drbd_send_handshake(struct drbd_conf *mdev) +{ + /* ASSERT current == mdev->receiver ... */ + struct p_handshake *p = &mdev->data.sbuf.handshake; + int ok; + + if (mutex_lock_interruptible(&mdev->data.mutex)) { + dev_err(DEV, "interrupted during initial handshake\n"); + return 0; /* interrupted. not ok. 
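the 0 return propagates up and drbdd_init() will simply retry the connect.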
*/ + } + + if (mdev->data.socket == NULL) { + mutex_unlock(&mdev->data.mutex); + return 0; + } + + memset(p, 0, sizeof(*p)); + p->protocol_min = cpu_to_be32(PRO_VERSION_MIN); + p->protocol_max = cpu_to_be32(PRO_VERSION_MAX); + ok = _drbd_send_cmd(mdev, mdev->data.socket, P_HAND_SHAKE, + (struct p_header *)p, sizeof(*p), 0); + mutex_unlock(&mdev->data.mutex); + return ok; +} + +/* + * return values: + * 1 yes, we have a valid connection + * 0 oops, did not work out, please try again + * -1 peer talks different language, + * no point in trying again, please go standalone. + */ +static int drbd_do_handshake(struct drbd_conf *mdev) +{ + /* ASSERT current == mdev->receiver ... */ + struct p_handshake *p = &mdev->data.rbuf.handshake; + const int expect = sizeof(struct p_handshake) + -sizeof(struct p_header); + int rv; + + rv = drbd_send_handshake(mdev); + if (!rv) + return 0; + + rv = drbd_recv_header(mdev, &p->head); + if (!rv) + return 0; + + if (p->head.command != P_HAND_SHAKE) { + dev_err(DEV, "expected HandShake packet, received: %s (0x%04x)\n", + cmdname(p->head.command), p->head.command); + return -1; + } + + if (p->head.length != expect) { + dev_err(DEV, "expected HandShake length: %u, received: %u\n", + expect, p->head.length); + return -1; + } + + rv = drbd_recv(mdev, &p->head.payload, expect); + + if (rv != expect) { + dev_err(DEV, "short read receiving handshake packet: l=%u\n", rv); + return 0; + } + + trace_drbd_packet(mdev, mdev->data.socket, 2, &mdev->data.rbuf, + __FILE__, __LINE__); + + p->protocol_min = be32_to_cpu(p->protocol_min); + p->protocol_max = be32_to_cpu(p->protocol_max); + if (p->protocol_max == 0) + p->protocol_max = p->protocol_min; + + if (PRO_VERSION_MAX < p->protocol_min || + PRO_VERSION_MIN > p->protocol_max) + goto incompat; + + mdev->agreed_pro_version = min_t(int, PRO_VERSION_MAX, p->protocol_max); + + dev_info(DEV, "Handshake successful: " + "Agreed network protocol version %d\n", mdev->agreed_pro_version); + + return 1; + + incompat: + dev_err(DEV, "incompatible DRBD dialects: " + "I support %d-%d, peer supports %d-%d\n", + PRO_VERSION_MIN, PRO_VERSION_MAX, + p->protocol_min, p->protocol_max); + return -1; +} + +#if !defined(CONFIG_CRYPTO_HMAC) && !defined(CONFIG_CRYPTO_HMAC_MODULE) +static int drbd_do_auth(struct drbd_conf *mdev) +{ + dev_err(DEV, "This kernel was built without CONFIG_CRYPTO_HMAC.\n"); + dev_err(DEV, "You need to disable 'cram-hmac-alg' in drbd.conf.\n"); + return 0; +} +#else +#define CHALLENGE_LEN 64 +static int drbd_do_auth(struct drbd_conf *mdev) +{ + char my_challenge[CHALLENGE_LEN]; /* 64 Bytes... 
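of random data; the peer proves knowledge of the shared secret by returning the HMAC over it.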
*/ + struct scatterlist sg; + char *response = NULL; + char *right_response = NULL; + char *peers_ch = NULL; + struct p_header p; + unsigned int key_len = strlen(mdev->net_conf->shared_secret); + unsigned int resp_size; + struct hash_desc desc; + int rv; + + desc.tfm = mdev->cram_hmac_tfm; + desc.flags = 0; + + rv = crypto_hash_setkey(mdev->cram_hmac_tfm, + (u8 *)mdev->net_conf->shared_secret, key_len); + if (rv) { + dev_err(DEV, "crypto_hash_setkey() failed with %d\n", rv); + rv = 0; + goto fail; + } + + get_random_bytes(my_challenge, CHALLENGE_LEN); + + rv = drbd_send_cmd2(mdev, P_AUTH_CHALLENGE, my_challenge, CHALLENGE_LEN); + if (!rv) + goto fail; + + rv = drbd_recv_header(mdev, &p); + if (!rv) + goto fail; + + if (p.command != P_AUTH_CHALLENGE) { + dev_err(DEV, "expected AuthChallenge packet, received: %s (0x%04x)\n", + cmdname(p.command), p.command); + rv = 0; + goto fail; + } + + if (p.length > CHALLENGE_LEN*2) { + dev_err(DEV, "AuthChallenge payload too big.\n"); + rv = 0; + goto fail; + } + + peers_ch = kmalloc(p.length, GFP_NOIO); + if (peers_ch == NULL) { + dev_err(DEV, "kmalloc of peers_ch failed\n"); + rv = 0; + goto fail; + } + + rv = drbd_recv(mdev, peers_ch, p.length); + + if (rv != p.length) { + dev_err(DEV, "short read AuthChallenge: l=%u\n", rv); + rv = 0; + goto fail; + } + + resp_size = crypto_hash_digestsize(mdev->cram_hmac_tfm); + response = kmalloc(resp_size, GFP_NOIO); + if (response == NULL) { + dev_err(DEV, "kmalloc of response failed\n"); + rv = 0; + goto fail; + } + + sg_init_table(&sg, 1); + sg_set_buf(&sg, peers_ch, p.length); + + rv = crypto_hash_digest(&desc, &sg, sg.length, response); + if (rv) { + dev_err(DEV, "crypto_hash_digest() failed with %d\n", rv); + rv = 0; + goto fail; + } + + rv = drbd_send_cmd2(mdev, P_AUTH_RESPONSE, response, resp_size); + if (!rv) + goto fail; + + rv = drbd_recv_header(mdev, &p); + if (!rv) + goto fail; + + if (p.command != P_AUTH_RESPONSE) { + dev_err(DEV, "expected AuthResponse packet, received: %s (0x%04x)\n", + cmdname(p.command), p.command); + rv = 0; + goto fail; + } + + if (p.length != resp_size) { + dev_err(DEV, "AuthResponse payload has wrong size\n"); + rv = 0; + goto fail; + } + + rv = drbd_recv(mdev, response, resp_size); + + if (rv != resp_size) { + dev_err(DEV, "short read receiving AuthResponse: l=%u\n", rv); + rv = 0; + goto fail; + } + + right_response = kmalloc(resp_size, GFP_NOIO); + if (right_response == NULL) { + dev_err(DEV, "kmalloc of right_response failed\n"); + rv = 0; + goto fail; + } + + sg_set_buf(&sg, my_challenge, CHALLENGE_LEN); + + rv = crypto_hash_digest(&desc, &sg, sg.length, right_response); + if (rv) { + dev_err(DEV, "crypto_hash_digest() failed with %d\n", rv); + rv = 0; + goto fail; + } + + rv = !memcmp(response, right_response, resp_size); + + if (rv) + dev_info(DEV, "Peer authenticated using %d bytes of '%s' HMAC\n", + resp_size, mdev->net_conf->cram_hmac_alg); + + fail: + kfree(peers_ch); + kfree(response); + kfree(right_response); + + return rv; +} +#endif + +int drbdd_init(struct drbd_thread *thi) +{ + struct drbd_conf *mdev = thi->mdev; + unsigned int minor = mdev_to_minor(mdev); + int h; + + sprintf(current->comm, "drbd%d_receiver", minor); + + dev_info(DEV, "receiver (re)started\n"); + + do { + h = drbd_connect(mdev); + if (h == 0) { + drbd_disconnect(mdev); + __set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(HZ); + } + if (h == -1) { + dev_warn(DEV, "Discarding network configuration.\n"); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + } + } while (h 
== 0); + + if (h > 0) { + if (get_net_conf(mdev)) { + drbdd(mdev); + put_net_conf(mdev); + } + } + + drbd_disconnect(mdev); + + dev_info(DEV, "receiver terminated\n"); + return 0; +} + +/* ********* acknowledge sender ******** */ + +static int got_RqSReply(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_req_state_reply *p = (struct p_req_state_reply *)h; + + int retcode = be32_to_cpu(p->retcode); + + if (retcode >= SS_SUCCESS) { + set_bit(CL_ST_CHG_SUCCESS, &mdev->flags); + } else { + set_bit(CL_ST_CHG_FAIL, &mdev->flags); + dev_err(DEV, "Requested state change failed by peer: %s (%d)\n", + drbd_set_st_err_str(retcode), retcode); + } + wake_up(&mdev->state_wait); + + return TRUE; +} + +static int got_Ping(struct drbd_conf *mdev, struct p_header *h) +{ + return drbd_send_ping_ack(mdev); + +} + +static int got_PingAck(struct drbd_conf *mdev, struct p_header *h) +{ + /* restore idle timeout */ + mdev->meta.socket->sk->sk_rcvtimeo = mdev->net_conf->ping_int*HZ; + + return TRUE; +} + +static int got_IsInSync(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_block_ack *p = (struct p_block_ack *)h; + sector_t sector = be64_to_cpu(p->sector); + int blksize = be32_to_cpu(p->blksize); + + D_ASSERT(mdev->agreed_pro_version >= 89); + + update_peer_seq(mdev, be32_to_cpu(p->seq_num)); + + drbd_rs_complete_io(mdev, sector); + drbd_set_in_sync(mdev, sector, blksize); + /* rs_same_csums is supposed to count in units of BM_BLOCK_SIZE */ + mdev->rs_same_csum += (blksize >> BM_BLOCK_SHIFT); + dec_rs_pending(mdev); + + return TRUE; +} + +/* when we receive the ACK for a write request, + * verify that we actually know about it */ +static struct drbd_request *_ack_id_to_req(struct drbd_conf *mdev, + u64 id, sector_t sector) +{ + struct hlist_head *slot = tl_hash_slot(mdev, sector); + struct hlist_node *n; + struct drbd_request *req; + + hlist_for_each_entry(req, n, slot, colision) { + if ((unsigned long)req == (unsigned long)id) { + if (req->sector != sector) { + dev_err(DEV, "_ack_id_to_req: found req %p but it has " + "wrong sector (%llus versus %llus)\n", req, + (unsigned long long)req->sector, + (unsigned long long)sector); + break; + } + return req; + } + } + dev_err(DEV, "_ack_id_to_req: failed to find req %p, sector %llus in list\n", + (void *)(unsigned long)id, (unsigned long long)sector); + return NULL; +} + +typedef struct drbd_request *(req_validator_fn) + (struct drbd_conf *mdev, u64 id, sector_t sector); + +static int validate_req_change_req_state(struct drbd_conf *mdev, + u64 id, sector_t sector, req_validator_fn validator, + const char *func, enum drbd_req_event what) +{ + struct drbd_request *req; + struct bio_and_error m; + + spin_lock_irq(&mdev->req_lock); + req = validator(mdev, id, sector); + if (unlikely(!req)) { + spin_unlock_irq(&mdev->req_lock); + dev_err(DEV, "%s: got a corrupt block_id/sector pair\n", func); + return FALSE; + } + __req_mod(req, what, &m); + spin_unlock_irq(&mdev->req_lock); + + if (m.bio) + complete_master_bio(mdev, &m); + return TRUE; +} + +static int got_BlockAck(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_block_ack *p = (struct p_block_ack *)h; + sector_t sector = be64_to_cpu(p->sector); + int blksize = be32_to_cpu(p->blksize); + enum drbd_req_event what; + + update_peer_seq(mdev, be32_to_cpu(p->seq_num)); + + if (is_syncer_block_id(p->block_id)) { + drbd_set_in_sync(mdev, sector, blksize); + dec_rs_pending(mdev); + return TRUE; + } + switch (be16_to_cpu(h->command)) { + case P_RS_WRITE_ACK: + D_ASSERT(mdev->net_conf->wire_protocol == 
DRBD_PROT_C); + what = write_acked_by_peer_and_sis; + break; + case P_WRITE_ACK: + D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C); + what = write_acked_by_peer; + break; + case P_RECV_ACK: + D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_B); + what = recv_acked_by_peer; + break; + case P_DISCARD_ACK: + D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C); + what = conflict_discarded_by_peer; + break; + default: + D_ASSERT(0); + return FALSE; + } + + return validate_req_change_req_state(mdev, p->block_id, sector, + _ack_id_to_req, __func__, what); +} + +static int got_NegAck(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_block_ack *p = (struct p_block_ack *)h; + sector_t sector = be64_to_cpu(p->sector); + + if (__ratelimit(&drbd_ratelimit_state)) + dev_warn(DEV, "Got NegAck packet. Peer is in trouble?\n"); + + update_peer_seq(mdev, be32_to_cpu(p->seq_num)); + + if (is_syncer_block_id(p->block_id)) { + int size = be32_to_cpu(p->blksize); + dec_rs_pending(mdev); + drbd_rs_failed_io(mdev, sector, size); + return TRUE; + } + return validate_req_change_req_state(mdev, p->block_id, sector, + _ack_id_to_req, __func__, neg_acked); +} + +static int got_NegDReply(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_block_ack *p = (struct p_block_ack *)h; + sector_t sector = be64_to_cpu(p->sector); + + update_peer_seq(mdev, be32_to_cpu(p->seq_num)); + dev_err(DEV, "Got NegDReply; Sector %llus, len %u; Fail original request.\n", + (unsigned long long)sector, be32_to_cpu(p->blksize)); + + return validate_req_change_req_state(mdev, p->block_id, sector, + _ar_id_to_req, __func__, neg_acked); +} + +static int got_NegRSDReply(struct drbd_conf *mdev, struct p_header *h) +{ + sector_t sector; + int size; + struct p_block_ack *p = (struct p_block_ack *)h; + + sector = be64_to_cpu(p->sector); + size = be32_to_cpu(p->blksize); + D_ASSERT(p->block_id == ID_SYNCER); + + update_peer_seq(mdev, be32_to_cpu(p->seq_num)); + + dec_rs_pending(mdev); + + if (get_ldev_if_state(mdev, D_FAILED)) { + drbd_rs_complete_io(mdev, sector); + drbd_rs_failed_io(mdev, sector, size); + put_ldev(mdev); + } + + return TRUE; +} + +static int got_BarrierAck(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_barrier_ack *p = (struct p_barrier_ack *)h; + + tl_release(mdev, p->barrier, be32_to_cpu(p->set_size)); + + return TRUE; +} + +static int got_OVResult(struct drbd_conf *mdev, struct p_header *h) +{ + struct p_block_ack *p = (struct p_block_ack *)h; + struct drbd_work *w; + sector_t sector; + int size; + + sector = be64_to_cpu(p->sector); + size = be32_to_cpu(p->blksize); + + update_peer_seq(mdev, be32_to_cpu(p->seq_num)); + + if (be64_to_cpu(p->block_id) == ID_OUT_OF_SYNC) + drbd_ov_oos_found(mdev, sector, size); + else + ov_oos_print(mdev); + + drbd_rs_complete_io(mdev, sector); + dec_rs_pending(mdev); + + if (--mdev->ov_left == 0) { + w = kmalloc(sizeof(*w), GFP_NOIO); + if (w) { + w->cb = w_ov_finished; + drbd_queue_work_front(&mdev->data.work, w); + } else { + dev_err(DEV, "kmalloc(w) failed.\n"); + ov_oos_print(mdev); + drbd_resync_finished(mdev); + } + } + return TRUE; +} + +struct asender_cmd { + size_t pkt_size; + int (*process)(struct drbd_conf *mdev, struct p_header *h); +}; + +static struct asender_cmd *get_asender_cmd(int cmd) +{ + static struct asender_cmd asender_tbl[] = { + /* anything missing from this table is in + * the drbd_cmd_handler (drbd_default_handler) table, + * see the beginning of drbdd() */ + [P_PING] = { sizeof(struct p_header), got_Ping }, + [P_PING_ACK] = { 
sizeof(struct p_header), got_PingAck }, + [P_RECV_ACK] = { sizeof(struct p_block_ack), got_BlockAck }, + [P_WRITE_ACK] = { sizeof(struct p_block_ack), got_BlockAck }, + [P_RS_WRITE_ACK] = { sizeof(struct p_block_ack), got_BlockAck }, + [P_DISCARD_ACK] = { sizeof(struct p_block_ack), got_BlockAck }, + [P_NEG_ACK] = { sizeof(struct p_block_ack), got_NegAck }, + [P_NEG_DREPLY] = { sizeof(struct p_block_ack), got_NegDReply }, + [P_NEG_RS_DREPLY] = { sizeof(struct p_block_ack), got_NegRSDReply}, + [P_OV_RESULT] = { sizeof(struct p_block_ack), got_OVResult }, + [P_BARRIER_ACK] = { sizeof(struct p_barrier_ack), got_BarrierAck }, + [P_STATE_CHG_REPLY] = { sizeof(struct p_req_state_reply), got_RqSReply }, + [P_RS_IS_IN_SYNC] = { sizeof(struct p_block_ack), got_IsInSync }, + [P_MAX_CMD] = { 0, NULL }, + }; + if (cmd > P_MAX_CMD || asender_tbl[cmd].process == NULL) + return NULL; + return &asender_tbl[cmd]; +} + +int drbd_asender(struct drbd_thread *thi) +{ + struct drbd_conf *mdev = thi->mdev; + struct p_header *h = &mdev->meta.rbuf.header; + struct asender_cmd *cmd = NULL; + + int rv, len; + void *buf = h; + int received = 0; + int expect = sizeof(struct p_header); + int empty; + + sprintf(current->comm, "drbd%d_asender", mdev_to_minor(mdev)); + + current->policy = SCHED_RR; /* Make this a realtime task! */ + current->rt_priority = 2; /* more important than all other tasks */ + + while (get_t_state(thi) == Running) { + drbd_thread_current_set_cpu(mdev); + if (test_and_clear_bit(SEND_PING, &mdev->flags)) { + ERR_IF(!drbd_send_ping(mdev)) goto reconnect; + mdev->meta.socket->sk->sk_rcvtimeo = + mdev->net_conf->ping_timeo*HZ/10; + } + + /* conditionally cork; + * it may hurt latency if we cork without much to send */ + if (!mdev->net_conf->no_cork && + 3 < atomic_read(&mdev->unacked_cnt)) + drbd_tcp_cork(mdev->meta.socket); + while (1) { + clear_bit(SIGNAL_ASENDER, &mdev->flags); + flush_signals(current); + if (!drbd_process_done_ee(mdev)) { + dev_err(DEV, "process_done_ee() = NOT_OK\n"); + goto reconnect; + } + /* to avoid race with newly queued ACKs */ + set_bit(SIGNAL_ASENDER, &mdev->flags); + spin_lock_irq(&mdev->req_lock); + empty = list_empty(&mdev->done_ee); + spin_unlock_irq(&mdev->req_lock); + /* new ack may have been queued right here, + * but then there is also a signal pending, + * and we start over... */ + if (empty) + break; + } + /* but unconditionally uncork unless disabled */ + if (!mdev->net_conf->no_cork) + drbd_tcp_uncork(mdev->meta.socket); + + /* short circuit, recv_msg would return EINTR anyways. */ + if (signal_pending(current)) + continue; + + rv = drbd_recv_short(mdev, mdev->meta.socket, + buf, expect-received, 0); + clear_bit(SIGNAL_ASENDER, &mdev->flags); + + flush_signals(current); + + /* Note: + * -EINTR (on meta) we got a signal + * -EAGAIN (on meta) rcvtimeo expired + * -ECONNRESET other side closed the connection + * -ERESTARTSYS (on data) we got a signal + * rv < 0 other than above: unexpected error! 
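+ * (we give up on the connection and go to the reconnect label)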
+ * rv == expected: full header or command + * rv < expected: "woken" by signal during receive + * rv == 0 : "connection shut down by peer" + */ + if (likely(rv > 0)) { + received += rv; + buf += rv; + } else if (rv == 0) { + dev_err(DEV, "meta connection shut down by peer.\n"); + goto reconnect; + } else if (rv == -EAGAIN) { + if (mdev->meta.socket->sk->sk_rcvtimeo == + mdev->net_conf->ping_timeo*HZ/10) { + dev_err(DEV, "PingAck did not arrive in time.\n"); + goto reconnect; + } + set_bit(SEND_PING, &mdev->flags); + continue; + } else if (rv == -EINTR) { + continue; + } else { + dev_err(DEV, "sock_recvmsg returned %d\n", rv); + goto reconnect; + } + + if (received == expect && cmd == NULL) { + if (unlikely(h->magic != BE_DRBD_MAGIC)) { + dev_err(DEV, "magic?? on meta m: 0x%lx c: %d l: %d\n", + (long)be32_to_cpu(h->magic), + h->command, h->length); + goto reconnect; + } + cmd = get_asender_cmd(be16_to_cpu(h->command)); + len = be16_to_cpu(h->length); + if (unlikely(cmd == NULL)) { + dev_err(DEV, "unknown command?? on meta m: 0x%lx c: %d l: %d\n", + (long)be32_to_cpu(h->magic), + h->command, h->length); + goto disconnect; + } + expect = cmd->pkt_size; + ERR_IF(len != expect-sizeof(struct p_header)) { + trace_drbd_packet(mdev, mdev->meta.socket, 1, (void *)h, __FILE__, __LINE__); + goto reconnect; + } + } + if (received == expect) { + D_ASSERT(cmd != NULL); + trace_drbd_packet(mdev, mdev->meta.socket, 1, (void *)h, __FILE__, __LINE__); + if (!cmd->process(mdev, h)) + goto reconnect; + + buf = h; + received = 0; + expect = sizeof(struct p_header); + cmd = NULL; + } + } + + if (0) { +reconnect: + drbd_force_state(mdev, NS(conn, C_NETWORK_FAILURE)); + } + if (0) { +disconnect: + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + } + clear_bit(SIGNAL_ASENDER, &mdev->flags); + + D_ASSERT(mdev->state.conn < C_CONNECTED); + dev_info(DEV, "asender terminated\n"); + + return 0; +} diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c new file mode 100644 index 000000000000..0656cf1edd57 --- /dev/null +++ b/drivers/block/drbd/drbd_req.c @@ -0,0 +1,1132 @@ +/* + drbd_req.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner . + Copyright (C) 2002-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. 
+ + */ + +#include <linux/module.h> +#include <linux/slab.h> + +#include <linux/drbd.h> +#include "drbd_int.h" +#include "drbd_tracing.h" +#include "drbd_req.h" + + +/* Update disk stats at start of I/O request */ +static void _drbd_start_io_acct(struct drbd_conf *mdev, struct drbd_request *req, struct bio *bio) +{ + const int rw = bio_data_dir(bio); + int cpu; + cpu = part_stat_lock(); + part_stat_inc(cpu, &mdev->vdisk->part0, ios[rw]); + part_stat_add(cpu, &mdev->vdisk->part0, sectors[rw], bio_sectors(bio)); + part_stat_unlock(); + mdev->vdisk->part0.in_flight[rw]++; +} + +/* Update disk stats when completing request upwards */ +static void _drbd_end_io_acct(struct drbd_conf *mdev, struct drbd_request *req) +{ + int rw = bio_data_dir(req->master_bio); + unsigned long duration = jiffies - req->start_time; + int cpu; + cpu = part_stat_lock(); + part_stat_add(cpu, &mdev->vdisk->part0, ticks[rw], duration); + part_round_stats(cpu, &mdev->vdisk->part0); + part_stat_unlock(); + mdev->vdisk->part0.in_flight[rw]--; +} + +static void _req_is_done(struct drbd_conf *mdev, struct drbd_request *req, const int rw) +{ + const unsigned long s = req->rq_state; + /* if it was a write, we may have to set the corresponding + * bit(s) out-of-sync first. If it had a local part, we need to + * release the reference to the activity log. */ + if (rw == WRITE) { + /* remove it from the transfer log. + * well, only if it had been there in the first + * place... if it had not (local only or conflicting + * and never sent), it should still be "empty" as + * initialized in drbd_req_new(), so we can list_del() it + * here unconditionally */ + list_del(&req->tl_requests); + /* Set out-of-sync unless both OK flags are set + * (local only or remote failed). + * Other places where we set out-of-sync: + * READ with local io-error */ + if (!(s & RQ_NET_OK) || !(s & RQ_LOCAL_OK)) + drbd_set_out_of_sync(mdev, req->sector, req->size); + + if ((s & RQ_NET_OK) && (s & RQ_LOCAL_OK) && (s & RQ_NET_SIS)) + drbd_set_in_sync(mdev, req->sector, req->size); + + /* one might be tempted to move the drbd_al_complete_io + * to the local io completion callback drbd_endio_pri. + * but, if this was a mirror write, we may only + * drbd_al_complete_io after this is RQ_NET_DONE, + * otherwise the extent could be dropped from the al + * before it has actually been written on the peer. + * if we crash before our peer knows about the request, + * but after the extent has been dropped from the al, + * we would forget to resync the corresponding extent. + */ + if (s & RQ_LOCAL_MASK) { + if (get_ldev_if_state(mdev, D_FAILED)) { + drbd_al_complete_io(mdev, req->sector); + put_ldev(mdev); + } else if (__ratelimit(&drbd_ratelimit_state)) { + dev_warn(DEV, "Should have called drbd_al_complete_io(, %llu), " + "but my Disk seems to have failed :(\n", + (unsigned long long) req->sector); + } + } + } + + /* if it was a local io error, we want to notify our + * peer about that, and see if we need to + * detach the disk and stuff. + * to avoid allocating some special work + * struct, reuse the request. */ + + /* THINK + * why do we do this not when we detect the error, + * but delay it until it is "done", i.e. possibly + * until the next barrier ack? 
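+ * Probably because we reuse req->w here (see above), and that is + * only safe once nothing else can still reference this request.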
*/ + + if (rw == WRITE && + ((s & RQ_LOCAL_MASK) && !(s & RQ_LOCAL_OK))) { + if (!(req->w.list.next == LIST_POISON1 || + list_empty(&req->w.list))) { + /* DEBUG ASSERT only; if this triggers, we + * probably corrupt the worker list here */ + dev_err(DEV, "req->w.list.next = %p\n", req->w.list.next); + dev_err(DEV, "req->w.list.prev = %p\n", req->w.list.prev); + } + req->w.cb = w_io_error; + drbd_queue_work(&mdev->data.work, &req->w); + /* drbd_req_free() is done in w_io_error */ + } else { + drbd_req_free(req); + } +} + +static void queue_barrier(struct drbd_conf *mdev) +{ + struct drbd_tl_epoch *b; + + /* We are within the req_lock. Once we queued the barrier for sending, + * we set the CREATE_BARRIER bit. It is cleared as soon as a new + * barrier/epoch object is added. This is the only place this bit is + * set. It indicates that the barrier for this epoch is already queued, + * and no new epoch has been created yet. */ + if (test_bit(CREATE_BARRIER, &mdev->flags)) + return; + + b = mdev->newest_tle; + b->w.cb = w_send_barrier; + /* inc_ap_pending done here, so we won't + * get imbalanced on connection loss. + * dec_ap_pending will be done in got_BarrierAck + * or (on connection loss) in tl_clear. */ + inc_ap_pending(mdev); + drbd_queue_work(&mdev->data.work, &b->w); + set_bit(CREATE_BARRIER, &mdev->flags); +} + +static void _about_to_complete_local_write(struct drbd_conf *mdev, + struct drbd_request *req) +{ + const unsigned long s = req->rq_state; + struct drbd_request *i; + struct drbd_epoch_entry *e; + struct hlist_node *n; + struct hlist_head *slot; + + /* before we can signal completion to the upper layers, + * we may need to close the current epoch */ + if (mdev->state.conn >= C_CONNECTED && + req->epoch == mdev->newest_tle->br_number) + queue_barrier(mdev); + + /* we need to do the conflict detection stuff, + * if we have the ee_hash (two_primaries) and + * this has been on the network */ + if ((s & RQ_NET_DONE) && mdev->ee_hash != NULL) { + const sector_t sector = req->sector; + const int size = req->size; + + /* ASSERT: + * there must be no conflicting requests, since + * they must have been failed on the spot */ +#define OVERLAPS overlaps(sector, size, i->sector, i->size) + slot = tl_hash_slot(mdev, sector); + hlist_for_each_entry(i, n, slot, colision) { + if (OVERLAPS) { + dev_alert(DEV, "LOGIC BUG: completed: %p %llus +%u; " + "other: %p %llus +%u\n", + req, (unsigned long long)sector, size, + i, (unsigned long long)i->sector, i->size); + } + } + + /* maybe "wake" those conflicting epoch entries + * that wait for this request to finish. + * + * currently, there can be only _one_ such ee + * (well, or some more, which would be pending + * P_DISCARD_ACK not yet sent by the asender...), + * since we block the receiver thread upon the + * first conflict detection, which will wait on + * misc_wait. maybe we want to assert that? + * + * anyways, if we found one, + * we just have to do a wake_up. */ +#undef OVERLAPS +#define OVERLAPS overlaps(sector, size, e->sector, e->size) + slot = ee_hash_slot(mdev, req->sector); + hlist_for_each_entry(e, n, slot, colision) { + if (OVERLAPS) { + wake_up(&mdev->misc_wait); + break; + } + } + } +#undef OVERLAPS +} + +void complete_master_bio(struct drbd_conf *mdev, + struct bio_and_error *m) +{ + trace_drbd_bio(mdev, "Rq", m->bio, 1, NULL); + bio_endio(m->bio, m->error); + dec_ap_bio(mdev); +} + +/* Helper for __req_mod(). 
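Called holding the req_lock.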
+ * Set m->bio to the master bio, if it is fit to be completed, + * or leave it alone (it is initialized to NULL in __req_mod), + * if it has already been completed, or cannot be completed yet. + * If m->bio is set, the error status to be returned is placed in m->error. + */ +void _req_may_be_done(struct drbd_request *req, struct bio_and_error *m) +{ + const unsigned long s = req->rq_state; + struct drbd_conf *mdev = req->mdev; + /* only WRITES may end up here without a master bio (on barrier ack) */ + int rw = req->master_bio ? bio_data_dir(req->master_bio) : WRITE; + + trace_drbd_req(req, nothing, "_req_may_be_done"); + + /* we must not complete the master bio, while it is + * still being processed by _drbd_send_zc_bio (drbd_send_dblock) + * not yet acknowledged by the peer + * not yet completed by the local io subsystem + * these flags may get cleared in any order by + * the worker, + * the receiver, + * the bio_endio completion callbacks. + */ + if (s & RQ_NET_QUEUED) + return; + if (s & RQ_NET_PENDING) + return; + if (s & RQ_LOCAL_PENDING) + return; + + if (req->master_bio) { + /* this is data_received (remote read) + * or protocol C P_WRITE_ACK + * or protocol B P_RECV_ACK + * or protocol A "handed_over_to_network" (SendAck) + * or canceled or failed, + * or killed from the transfer log due to connection loss. + */ + + /* + * figure out whether to report success or failure. + * + * report success when at least one of the operations succeeded. + * or, to put the other way, + * only report failure, when both operations failed. + * + * what to do about the failures is handled elsewhere. + * what we need to do here is just: complete the master_bio. + * + * local completion error, if any, has been stored as ERR_PTR + * in private_bio within drbd_endio_pri. + */ + int ok = (s & RQ_LOCAL_OK) || (s & RQ_NET_OK); + int error = PTR_ERR(req->private_bio); + + /* remove the request from the conflict detection + * respective block_id verification hash */ + if (!hlist_unhashed(&req->colision)) + hlist_del(&req->colision); + else + D_ASSERT((s & RQ_NET_MASK) == 0); + + /* for writes we need to do some extra housekeeping */ + if (rw == WRITE) + _about_to_complete_local_write(mdev, req); + + /* Update disk stats */ + _drbd_end_io_acct(mdev, req); + + m->error = ok ? 0 : (error ?: -EIO); + m->bio = req->master_bio; + req->master_bio = NULL; + } + + if ((s & RQ_NET_MASK) == 0 || (s & RQ_NET_DONE)) { + /* this is disconnected (local only) operation, + * or protocol C P_WRITE_ACK, + * or protocol A or B P_BARRIER_ACK, + * or killed from the transfer log due to connection loss. */ + _req_is_done(mdev, req, rw); + } + /* else: network part and not DONE yet. that is + * protocol A or B, barrier ack still pending... */ +} + +/* + * checks whether there was an overlapping request + * or ee already registered. + * + * if so, return 1, in which case this request is completed on the spot, + * without ever being submitted or send. + * + * return 0 if it is ok to submit this request. + * + * NOTE: + * paranoia: assume something above us is broken, and issues different write + * requests for the same block simultaneously... + * + * To ensure these won't be reordered differently on both nodes, resulting in + * diverging data sets, we discard the later one(s). Not that this is supposed + * to happen, but this is the rationale why we also have to check for + * conflicting requests with local origin, and why we have to do so regardless + * of whether we allowed multiple primaries. 
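+ * Discarding the later write at least keeps both data sets identical, + * which is the best we can do once such a bug has hit us.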
+ * + * BTW, in case we only have one primary, the ee_hash is empty anyways, and the + * second hlist_for_each_entry becomes a noop. This is even simpler than to + * grab a reference on the net_conf, and check for the two_primaries flag... + */ +static int _req_conflicts(struct drbd_request *req) +{ + struct drbd_conf *mdev = req->mdev; + const sector_t sector = req->sector; + const int size = req->size; + struct drbd_request *i; + struct drbd_epoch_entry *e; + struct hlist_node *n; + struct hlist_head *slot; + + D_ASSERT(hlist_unhashed(&req->colision)); + + if (!get_net_conf(mdev)) + return 0; + + /* BUG_ON */ + ERR_IF (mdev->tl_hash_s == 0) + goto out_no_conflict; + BUG_ON(mdev->tl_hash == NULL); + +#define OVERLAPS overlaps(i->sector, i->size, sector, size) + slot = tl_hash_slot(mdev, sector); + hlist_for_each_entry(i, n, slot, colision) { + if (OVERLAPS) { + dev_alert(DEV, "%s[%u] Concurrent local write detected! " + "[DISCARD L] new: %llus +%u; " + "pending: %llus +%u\n", + current->comm, current->pid, + (unsigned long long)sector, size, + (unsigned long long)i->sector, i->size); + goto out_conflict; + } + } + + if (mdev->ee_hash_s) { + /* now, check for overlapping requests with remote origin */ + BUG_ON(mdev->ee_hash == NULL); +#undef OVERLAPS +#define OVERLAPS overlaps(e->sector, e->size, sector, size) + slot = ee_hash_slot(mdev, sector); + hlist_for_each_entry(e, n, slot, colision) { + if (OVERLAPS) { + dev_alert(DEV, "%s[%u] Concurrent remote write detected!" + " [DISCARD L] new: %llus +%u; " + "pending: %llus +%u\n", + current->comm, current->pid, + (unsigned long long)sector, size, + (unsigned long long)e->sector, e->size); + goto out_conflict; + } + } + } +#undef OVERLAPS + +out_no_conflict: + /* this is like it should be, and what we expected. + * our users do behave after all... */ + put_net_conf(mdev); + return 0; + +out_conflict: + put_net_conf(mdev); + return 1; +} + +/* obviously this could be coded as many single functions + * instead of one huge switch, + * or by putting the code directly in the respective locations + * (as it has been before). + * + * but having it this way + * enforces that it is all in this one place, where it is easier to audit, + * it makes it obvious that whatever "event" "happens" to a request should + * happen "atomically" within the req_lock, + * and it enforces that we have to think in a very structured manner + * about the "events" that may happen to a request during its life time ... + */ +void __req_mod(struct drbd_request *req, enum drbd_req_event what, + struct bio_and_error *m) +{ + struct drbd_conf *mdev = req->mdev; + m->bio = NULL; + + trace_drbd_req(req, what, NULL); + + switch (what) { + default: + dev_err(DEV, "LOGIC BUG in %s:%u\n", __FILE__ , __LINE__); + break; + + /* does not happen... 
+ * initialization done in drbd_req_new + case created: + break; + */ + + case to_be_send: /* via network */ + /* reached via drbd_make_request_common + * and from w_read_retry_remote */ + D_ASSERT(!(req->rq_state & RQ_NET_MASK)); + req->rq_state |= RQ_NET_PENDING; + inc_ap_pending(mdev); + break; + + case to_be_submitted: /* locally */ + /* reached via drbd_make_request_common */ + D_ASSERT(!(req->rq_state & RQ_LOCAL_MASK)); + req->rq_state |= RQ_LOCAL_PENDING; + break; + + case completed_ok: + if (bio_data_dir(req->master_bio) == WRITE) + mdev->writ_cnt += req->size>>9; + else + mdev->read_cnt += req->size>>9; + + req->rq_state |= (RQ_LOCAL_COMPLETED|RQ_LOCAL_OK); + req->rq_state &= ~RQ_LOCAL_PENDING; + + _req_may_be_done(req, m); + put_ldev(mdev); + break; + + case write_completed_with_error: + req->rq_state |= RQ_LOCAL_COMPLETED; + req->rq_state &= ~RQ_LOCAL_PENDING; + + dev_alert(DEV, "Local WRITE failed sec=%llus size=%u\n", + (unsigned long long)req->sector, req->size); + /* and now: check how to handle local io error. */ + __drbd_chk_io_error(mdev, FALSE); + _req_may_be_done(req, m); + put_ldev(mdev); + break; + + case read_ahead_completed_with_error: + /* it is legal to fail READA */ + req->rq_state |= RQ_LOCAL_COMPLETED; + req->rq_state &= ~RQ_LOCAL_PENDING; + _req_may_be_done(req, m); + put_ldev(mdev); + break; + + case read_completed_with_error: + drbd_set_out_of_sync(mdev, req->sector, req->size); + + req->rq_state |= RQ_LOCAL_COMPLETED; + req->rq_state &= ~RQ_LOCAL_PENDING; + + dev_alert(DEV, "Local READ failed sec=%llus size=%u\n", + (unsigned long long)req->sector, req->size); + /* _req_mod(req,to_be_send); oops, recursion... */ + D_ASSERT(!(req->rq_state & RQ_NET_MASK)); + req->rq_state |= RQ_NET_PENDING; + inc_ap_pending(mdev); + + __drbd_chk_io_error(mdev, FALSE); + put_ldev(mdev); + /* NOTE: if we have no connection, + * or know the peer has no good data either, + * then we don't actually need to "queue_for_net_read", + * but we do so anyways, since the drbd_io_error() + * and the potential state change to "Diskless" + * needs to be done from process context */ + + /* fall through: _req_mod(req,queue_for_net_read); */ + + case queue_for_net_read: + /* READ or READA, and + * no local disk, + * or target area marked as invalid, + * or just got an io-error. */ + /* from drbd_make_request_common + * or from bio_endio during read io-error recovery */ + + /* so we can verify the handle in the answer packet + * corresponding hlist_del is in _req_may_be_done() */ + hlist_add_head(&req->colision, ar_hash_slot(mdev, req->sector)); + + set_bit(UNPLUG_REMOTE, &mdev->flags); /* why? */ + + D_ASSERT(req->rq_state & RQ_NET_PENDING); + req->rq_state |= RQ_NET_QUEUED; + req->w.cb = (req->rq_state & RQ_LOCAL_MASK) + ? w_read_retry_remote + : w_send_read_req; + drbd_queue_work(&mdev->data.work, &req->w); + break; + + case queue_for_net_write: + /* assert something? */ + /* from drbd_make_request_common only */ + + hlist_add_head(&req->colision, tl_hash_slot(mdev, req->sector)); + /* corresponding hlist_del is in _req_may_be_done() */ + + /* NOTE + * In case the req ended up on the transfer log before being + * queued on the worker, it could lead to this request being + * missed during cleanup after connection loss. + * So we have to do both operations here, + * within the same lock that protects the transfer log. 
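+         * (Hypothetical interleaving this prevents: req is linked into
+         * the transfer log, the lock is dropped, the receiver runs
+         * tl_clear() on a connection loss and skips req because it is
+         * not RQ_NET_QUEUED yet, and only then would req be queued on
+         * the worker, against an already dead connection.)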
+ * + * _req_add_to_epoch(req); this has to be after the + * _maybe_start_new_epoch(req); which happened in + * drbd_make_request_common, because we now may set the bit + * again ourselves to close the current epoch. + * + * Add req to the (now) current epoch (barrier). */ + + /* see drbd_make_request_common, + * just after it grabs the req_lock */ + D_ASSERT(test_bit(CREATE_BARRIER, &mdev->flags) == 0); + + req->epoch = mdev->newest_tle->br_number; + list_add_tail(&req->tl_requests, + &mdev->newest_tle->requests); + + /* increment size of current epoch */ + mdev->newest_tle->n_req++; + + /* queue work item to send data */ + D_ASSERT(req->rq_state & RQ_NET_PENDING); + req->rq_state |= RQ_NET_QUEUED; + req->w.cb = w_send_dblock; + drbd_queue_work(&mdev->data.work, &req->w); + + /* close the epoch, in case it outgrew the limit */ + if (mdev->newest_tle->n_req >= mdev->net_conf->max_epoch_size) + queue_barrier(mdev); + + break; + + case send_canceled: + /* treat it the same */ + case send_failed: + /* real cleanup will be done from tl_clear. just update flags + * so it is no longer marked as on the worker queue */ + req->rq_state &= ~RQ_NET_QUEUED; + /* if we did it right, tl_clear should be scheduled only after + * this, so this should not be necessary! */ + _req_may_be_done(req, m); + break; + + case handed_over_to_network: + /* assert something? */ + if (bio_data_dir(req->master_bio) == WRITE && + mdev->net_conf->wire_protocol == DRBD_PROT_A) { + /* this is what is dangerous about protocol A: + * pretend it was successfully written on the peer. */ + if (req->rq_state & RQ_NET_PENDING) { + dec_ap_pending(mdev); + req->rq_state &= ~RQ_NET_PENDING; + req->rq_state |= RQ_NET_OK; + } /* else: neg-ack was faster... */ + /* it is still not yet RQ_NET_DONE until the + * corresponding epoch barrier got acked as well, + * so we know what to dirty on connection loss */ + } + req->rq_state &= ~RQ_NET_QUEUED; + req->rq_state |= RQ_NET_SENT; + /* because _drbd_send_zc_bio could sleep, and may want to + * dereference the bio even after the "write_acked_by_peer" and + * "completed_ok" events came in, once we return from + * _drbd_send_zc_bio (drbd_send_dblock), we have to check + * whether it is done already, and end it. */ + _req_may_be_done(req, m); + break; + + case connection_lost_while_pending: + /* transfer log cleanup after connection loss */ + /* assert something? */ + if (req->rq_state & RQ_NET_PENDING) + dec_ap_pending(mdev); + req->rq_state &= ~(RQ_NET_OK|RQ_NET_PENDING); + req->rq_state |= RQ_NET_DONE; + /* if it is still queued, we may not complete it here. + * it will be canceled soon. */ + if (!(req->rq_state & RQ_NET_QUEUED)) + _req_may_be_done(req, m); + break; + + case write_acked_by_peer_and_sis: + req->rq_state |= RQ_NET_SIS; + case conflict_discarded_by_peer: + /* for discarded conflicting writes of multiple primaries, + * there is no need to keep anything in the tl, potential + * node crashes are covered by the activity log. */ + if (what == conflict_discarded_by_peer) + dev_alert(DEV, "Got DiscardAck packet %llus +%u!" + " DRBD is not a random data generator!\n", + (unsigned long long)req->sector, req->size); + req->rq_state |= RQ_NET_DONE; + /* fall through */ + case write_acked_by_peer: + /* protocol C; successfully written on peer. + * Nothing to do here. + * We want to keep the tl in place for all protocols, to cater + * for volatile write-back caches on lower level devices. 
+ * + * A barrier request is expected to have forced all prior + * requests onto stable storage, so completion of a barrier + * request could set NET_DONE right here, and not wait for the + * P_BARRIER_ACK, but that is an unnecessary optimization. */ + + /* this makes it effectively the same as for: */ + case recv_acked_by_peer: + /* protocol B; pretends to be successfully written on peer. + * see also notes above in handed_over_to_network about + * protocol != C */ + req->rq_state |= RQ_NET_OK; + D_ASSERT(req->rq_state & RQ_NET_PENDING); + dec_ap_pending(mdev); + req->rq_state &= ~RQ_NET_PENDING; + _req_may_be_done(req, m); + break; + + case neg_acked: + /* assert something? */ + if (req->rq_state & RQ_NET_PENDING) + dec_ap_pending(mdev); + req->rq_state &= ~(RQ_NET_OK|RQ_NET_PENDING); + + req->rq_state |= RQ_NET_DONE; + _req_may_be_done(req, m); + /* else: done by handed_over_to_network */ + break; + + case barrier_acked: + if (req->rq_state & RQ_NET_PENDING) { + /* barrier came in before all requests have been acked. + * this is bad, because if the connection is lost now, + * we won't be able to clean them up... */ + dev_err(DEV, "FIXME (barrier_acked but pending)\n"); + trace_drbd_req(req, nothing, "FIXME (barrier_acked but pending)"); + list_move(&req->tl_requests, &mdev->out_of_sequence_requests); + } + D_ASSERT(req->rq_state & RQ_NET_SENT); + req->rq_state |= RQ_NET_DONE; + _req_may_be_done(req, m); + break; + + case data_received: + D_ASSERT(req->rq_state & RQ_NET_PENDING); + dec_ap_pending(mdev); + req->rq_state &= ~RQ_NET_PENDING; + req->rq_state |= (RQ_NET_OK|RQ_NET_DONE); + _req_may_be_done(req, m); + break; + }; +} + +/* we may do a local read if: + * - we are consistent (of course), + * - or we are generally inconsistent, + * BUT we are still/already IN SYNC for this area. + * since size may be bigger than BM_BLOCK_SIZE, + * we may need to check several bits. + */ +static int drbd_may_do_local_read(struct drbd_conf *mdev, sector_t sector, int size) +{ + unsigned long sbnr, ebnr; + sector_t esector, nr_sectors; + + if (mdev->state.disk == D_UP_TO_DATE) + return 1; + if (mdev->state.disk >= D_OUTDATED) + return 0; + if (mdev->state.disk < D_INCONSISTENT) + return 0; + /* state.disk == D_INCONSISTENT We will have a look at the BitMap */ + nr_sectors = drbd_get_capacity(mdev->this_bdev); + esector = sector + (size >> 9) - 1; + + D_ASSERT(sector < nr_sectors); + D_ASSERT(esector < nr_sectors); + + sbnr = BM_SECT_TO_BIT(sector); + ebnr = BM_SECT_TO_BIT(esector); + + return 0 == drbd_bm_count_bits(mdev, sbnr, ebnr); +} + +static int drbd_make_request_common(struct drbd_conf *mdev, struct bio *bio) +{ + const int rw = bio_rw(bio); + const int size = bio->bi_size; + const sector_t sector = bio->bi_sector; + struct drbd_tl_epoch *b = NULL; + struct drbd_request *req; + int local, remote; + int err = -EIO; + + /* allocate outside of all locks; */ + req = drbd_req_new(mdev, bio); + if (!req) { + dec_ap_bio(mdev); + /* only pass the error to the upper layers. + * if user cannot handle io errors, that's not our business. 
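+     * (Note: the -ENOMEM reaches the submitter via bio_endio() below;
+     * the function itself still returns 0, like every other path in
+     * drbd_make_request_common.)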
*/ + dev_err(DEV, "could not kmalloc() req\n"); + bio_endio(bio, -ENOMEM); + return 0; + } + + trace_drbd_bio(mdev, "Rq", bio, 0, req); + + local = get_ldev(mdev); + if (!local) { + bio_put(req->private_bio); /* or we get a bio leak */ + req->private_bio = NULL; + } + if (rw == WRITE) { + remote = 1; + } else { + /* READ || READA */ + if (local) { + if (!drbd_may_do_local_read(mdev, sector, size)) { + /* we could kick the syncer to + * sync this extent asap, wait for + * it, then continue locally. + * Or just issue the request remotely. + */ + local = 0; + bio_put(req->private_bio); + req->private_bio = NULL; + put_ldev(mdev); + } + } + remote = !local && mdev->state.pdsk >= D_UP_TO_DATE; + } + + /* If we have a disk, but a READA request is mapped to remote, + * we are R_PRIMARY, D_INCONSISTENT, SyncTarget. + * Just fail that READA request right here. + * + * THINK: maybe fail all READA when not local? + * or make this configurable... + * if network is slow, READA won't do any good. + */ + if (rw == READA && mdev->state.disk >= D_INCONSISTENT && !local) { + err = -EWOULDBLOCK; + goto fail_and_free_req; + } + + /* For WRITES going to the local disk, grab a reference on the target + * extent. This waits for any resync activity in the corresponding + * resync extent to finish, and, if necessary, pulls in the target + * extent into the activity log, which involves further disk io because + * of transactional on-disk meta data updates. */ + if (rw == WRITE && local) + drbd_al_begin_io(mdev, sector); + + remote = remote && (mdev->state.pdsk == D_UP_TO_DATE || + (mdev->state.pdsk == D_INCONSISTENT && + mdev->state.conn >= C_CONNECTED)); + + if (!(local || remote)) { + dev_err(DEV, "IO ERROR: neither local nor remote disk\n"); + goto fail_free_complete; + } + + /* For WRITE request, we have to make sure that we have an + * unused_spare_tle, in case we need to start a new epoch. + * I try to be smart and avoid to pre-allocate always "just in case", + * but there is a race between testing the bit and pointer outside the + * spinlock, and grabbing the spinlock. + * if we lost that race, we retry. 
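+     * (Concretely: we may see CREATE_BARRIER clear and skip the
+     * pre-allocation, another context closes the current epoch and sets
+     * the bit before we take req_lock, and the re-test under the lock
+     * below sends us back to allocate_barrier.)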
+     */
+    if (rw == WRITE && remote &&
+        mdev->unused_spare_tle == NULL &&
+        test_bit(CREATE_BARRIER, &mdev->flags)) {
+allocate_barrier:
+        b = kmalloc(sizeof(struct drbd_tl_epoch), GFP_NOIO);
+        if (!b) {
+            dev_err(DEV, "Failed to alloc barrier.\n");
+            err = -ENOMEM;
+            goto fail_free_complete;
+        }
+    }
+
+    /* GOOD, everything prepared, grab the spin_lock */
+    spin_lock_irq(&mdev->req_lock);
+
+    if (remote) {
+        remote = (mdev->state.pdsk == D_UP_TO_DATE ||
+              (mdev->state.pdsk == D_INCONSISTENT &&
+               mdev->state.conn >= C_CONNECTED));
+        if (!remote)
+            dev_warn(DEV, "lost connection while grabbing the req_lock!\n");
+        if (!(local || remote)) {
+            dev_err(DEV, "IO ERROR: neither local nor remote disk\n");
+            spin_unlock_irq(&mdev->req_lock);
+            goto fail_free_complete;
+        }
+    }
+
+    if (b && mdev->unused_spare_tle == NULL) {
+        mdev->unused_spare_tle = b;
+        b = NULL;
+    }
+    if (rw == WRITE && remote &&
+        mdev->unused_spare_tle == NULL &&
+        test_bit(CREATE_BARRIER, &mdev->flags)) {
+        /* someone closed the current epoch
+         * while we were grabbing the spinlock */
+        spin_unlock_irq(&mdev->req_lock);
+        goto allocate_barrier;
+    }
+
+
+    /* Update disk stats */
+    _drbd_start_io_acct(mdev, req, bio);
+
+    /* _maybe_start_new_epoch(mdev);
+     * If we need to generate a write barrier packet, we have to add the
+     * new epoch (barrier) object, and queue the barrier packet for sending,
+     * and queue the req's data after it _within the same lock_, otherwise
+     * we have race conditions where the reorder domains could be mixed up.
+     *
+     * Even read requests may start a new epoch and queue the corresponding
+     * barrier packet. To get the write ordering right, we only have to
+     * make sure that, if this is a write request and it triggered a
+     * barrier packet, this request is queued within the same spinlock. */
+    if (remote && mdev->unused_spare_tle &&
+        test_and_clear_bit(CREATE_BARRIER, &mdev->flags)) {
+        _tl_add_barrier(mdev, mdev->unused_spare_tle);
+        mdev->unused_spare_tle = NULL;
+    } else {
+        D_ASSERT(!(remote && rw == WRITE &&
+               test_bit(CREATE_BARRIER, &mdev->flags)));
+    }
+
+    /* NOTE
+     * Actually, 'local' may be wrong here already, since we may have failed
+     * to write to the meta data, and may become wrong anytime because of
+     * local io-error for some other request, which would lead to us
+     * "detaching" the local disk.
+     *
+     * 'remote' may become wrong any time because the network could fail.
+     *
+     * This is a harmless race condition, though, since it is handled
+     * correctly at the appropriate places; so it just defers the failure
+     * of the respective operation.
+     */
+
+    /* mark them early for readability.
+     * this just sets some state flags. */
+    if (remote)
+        _req_mod(req, to_be_send);
+    if (local)
+        _req_mod(req, to_be_submitted);
+
+    /* check this request on the collision detection hash tables.
+     * if we have a conflict, just complete it here.
+     * THINK do we want to check reads, too? (I don't think so...) */
+    if (rw == WRITE && _req_conflicts(req)) {
+        /* this is a conflicting request.
+         * even though it may have been only _partially_
+         * overlapping with one of the currently pending requests,
+         * without even submitting or sending it, we will
+         * pretend that it was successfully served right now.
+         */
+        if (local) {
+            bio_put(req->private_bio);
+            req->private_bio = NULL;
+            drbd_al_complete_io(mdev, req->sector);
+            put_ldev(mdev);
+            local = 0;
+        }
+        if (remote)
+            dec_ap_pending(mdev);
+        _drbd_end_io_acct(mdev, req);
+        /* THINK: do we want to fail it (-EIO), or pretend success?
+         */
+        bio_endio(req->master_bio, 0);
+        req->master_bio = NULL;
+        dec_ap_bio(mdev);
+        drbd_req_free(req);
+        remote = 0;
+    }
+
+    /* NOTE remote first: to get the concurrent write detection right,
+     * we must register the request before start of local IO. */
+    if (remote) {
+        /* either WRITE and C_CONNECTED,
+         * or READ, and no local disk,
+         * or READ, but not in sync.
+         */
+        _req_mod(req, (rw == WRITE)
+                ? queue_for_net_write
+                : queue_for_net_read);
+    }
+    spin_unlock_irq(&mdev->req_lock);
+    kfree(b); /* if someone else has beaten us to it... */
+
+    if (local) {
+        req->private_bio->bi_bdev = mdev->ldev->backing_bdev;
+
+        trace_drbd_bio(mdev, "Pri", req->private_bio, 0, NULL);
+
+        if (FAULT_ACTIVE(mdev, rw == WRITE ? DRBD_FAULT_DT_WR
+                     : rw == READ ? DRBD_FAULT_DT_RD
+                     : DRBD_FAULT_DT_RA))
+            bio_endio(req->private_bio, -EIO);
+        else
+            generic_make_request(req->private_bio);
+    }
+
+    /* we need to plug ALWAYS since we possibly need to kick lo_dev.
+     * we plug after submit, so we won't miss an unplug event */
+    drbd_plug_device(mdev);
+
+    return 0;
+
+fail_free_complete:
+    if (rw == WRITE && local)
+        drbd_al_complete_io(mdev, sector);
+fail_and_free_req:
+    if (local) {
+        bio_put(req->private_bio);
+        req->private_bio = NULL;
+        put_ldev(mdev);
+    }
+    bio_endio(bio, err);
+    drbd_req_free(req);
+    dec_ap_bio(mdev);
+    kfree(b);
+
+    return 0;
+}
+
+/* helper function for drbd_make_request
+ * if we can determine just by the mdev (state) that this request will fail,
+ * return 1
+ * otherwise return 0
+ */
+static int drbd_fail_request_early(struct drbd_conf *mdev, int is_write)
+{
+    /* Unconfigured */
+    if (mdev->state.conn == C_DISCONNECTING &&
+        mdev->state.disk == D_DISKLESS)
+        return 1;
+
+    if (mdev->state.role != R_PRIMARY &&
+        (!allow_oos || is_write)) {
+        if (__ratelimit(&drbd_ratelimit_state)) {
+            dev_err(DEV, "Process %s[%u] tried to %s; "
+                "since we are not in Primary state, "
+                "we cannot allow this\n",
+                current->comm, current->pid,
+                is_write ? "WRITE" : "READ");
+        }
+        return 1;
+    }
+
+    /*
+     * Paranoia: we might have been primary, but sync target, or
+     * even diskless, then lost the connection.
+     * This should have been handled (panic? suspend?) somewhere
+     * else. But maybe it was not, so check again here.
+     * Caution: as long as we do not have a read/write lock on mdev,
+     * to serialize state changes, this is racy, since we may lose
+     * the connection *after* we test for the cstate.
+     */
+    if (mdev->state.disk < D_UP_TO_DATE && mdev->state.pdsk < D_UP_TO_DATE) {
+        if (__ratelimit(&drbd_ratelimit_state))
+            dev_err(DEV, "Sorry, I have no access to good data anymore.\n");
+        return 1;
+    }
+
+    return 0;
+}
+
+int drbd_make_request_26(struct request_queue *q, struct bio *bio)
+{
+    unsigned int s_enr, e_enr;
+    struct drbd_conf *mdev = (struct drbd_conf *) q->queuedata;
+
+    if (drbd_fail_request_early(mdev, bio_data_dir(bio) & WRITE)) {
+        bio_endio(bio, -EPERM);
+        return 0;
+    }
+
+    /* Reject barrier requests if we know the underlying device does
+     * not support them.
+     * XXX: Need to get this info from peer as well somehow so we
+     * XXX: reject if EITHER side/data/metadata area does not support them.
+     *
+     * because of those XXX, this is not yet enabled,
+     * i.e. in drbd_init_set_defaults we set the NO_BARRIER_SUPP bit.
+ */ + if (unlikely(bio_rw_flagged(bio, BIO_RW_BARRIER) && test_bit(NO_BARRIER_SUPP, &mdev->flags))) { + /* dev_warn(DEV, "Rejecting barrier request as underlying device does not support\n"); */ + bio_endio(bio, -EOPNOTSUPP); + return 0; + } + + /* + * what we "blindly" assume: + */ + D_ASSERT(bio->bi_size > 0); + D_ASSERT((bio->bi_size & 0x1ff) == 0); + D_ASSERT(bio->bi_idx == 0); + + /* to make some things easier, force alignment of requests within the + * granularity of our hash tables */ + s_enr = bio->bi_sector >> HT_SHIFT; + e_enr = (bio->bi_sector+(bio->bi_size>>9)-1) >> HT_SHIFT; + + if (likely(s_enr == e_enr)) { + inc_ap_bio(mdev, 1); + return drbd_make_request_common(mdev, bio); + } + + /* can this bio be split generically? + * Maybe add our own split-arbitrary-bios function. */ + if (bio->bi_vcnt != 1 || bio->bi_idx != 0 || bio->bi_size > DRBD_MAX_SEGMENT_SIZE) { + /* rather error out here than BUG in bio_split */ + dev_err(DEV, "bio would need to, but cannot, be split: " + "(vcnt=%u,idx=%u,size=%u,sector=%llu)\n", + bio->bi_vcnt, bio->bi_idx, bio->bi_size, + (unsigned long long)bio->bi_sector); + bio_endio(bio, -EINVAL); + } else { + /* This bio crosses some boundary, so we have to split it. */ + struct bio_pair *bp; + /* works for the "do not cross hash slot boundaries" case + * e.g. sector 262269, size 4096 + * s_enr = 262269 >> 6 = 4097 + * e_enr = (262269+8-1) >> 6 = 4098 + * HT_SHIFT = 6 + * sps = 64, mask = 63 + * first_sectors = 64 - (262269 & 63) = 3 + */ + const sector_t sect = bio->bi_sector; + const int sps = 1 << HT_SHIFT; /* sectors per slot */ + const int mask = sps - 1; + const sector_t first_sectors = sps - (sect & mask); + bp = bio_split(bio, +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28) + bio_split_pool, +#endif + first_sectors); + + /* we need to get a "reference count" (ap_bio_cnt) + * to avoid races with the disconnect/reconnect/suspend code. + * In case we need to split the bio here, we need to get two references + * atomically, otherwise we might deadlock when trying to submit the + * second one! */ + inc_ap_bio(mdev, 2); + + D_ASSERT(e_enr == s_enr + 1); + + drbd_make_request_common(mdev, &bp->bio1); + drbd_make_request_common(mdev, &bp->bio2); + bio_pair_release(bp); + } + return 0; +} + +/* This is called by bio_add_page(). With this function we reduce + * the number of BIOs that span over multiple DRBD_MAX_SEGMENT_SIZEs + * units (was AL_EXTENTs). + * + * we do the calculation within the lower 32bit of the byte offsets, + * since we don't care for actual offset, but only check whether it + * would cross "activity log extent" boundaries. + * + * As long as the BIO is empty we have to allow at least one bvec, + * regardless of size and offset. so the resulting bio may still + * cross extent boundaries. those are dealt with (bio_split) in + * drbd_make_request_26. 
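+ *
+ * Worked example (illustrative, assuming a DRBD_MAX_SEGMENT_SIZE of
+ * 32 KiB = 0x8000): a bio at byte offset 0x7e00 into its extent that
+ * already carries 0x400 bytes yields
+ *    limit = 0x8000 - (0x7e00 + 0x400) < 0,
+ * which is clamped to 0 below, so no bvec may be merged across the
+ * extent boundary.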
+ */
+int drbd_merge_bvec(struct request_queue *q, struct bvec_merge_data *bvm, struct bio_vec *bvec)
+{
+    struct drbd_conf *mdev = (struct drbd_conf *) q->queuedata;
+    unsigned int bio_offset =
+        (unsigned int)bvm->bi_sector << 9; /* 32 bit */
+    unsigned int bio_size = bvm->bi_size;
+    int limit, backing_limit;
+
+    limit = DRBD_MAX_SEGMENT_SIZE
+          - ((bio_offset & (DRBD_MAX_SEGMENT_SIZE-1)) + bio_size);
+    if (limit < 0)
+        limit = 0;
+    if (bio_size == 0) {
+        if (limit <= bvec->bv_len)
+            limit = bvec->bv_len;
+    } else if (limit && get_ldev(mdev)) {
+        struct request_queue * const b =
+            mdev->ldev->backing_bdev->bd_disk->queue;
+        if (b->merge_bvec_fn && mdev->ldev->dc.use_bmbv) {
+            backing_limit = b->merge_bvec_fn(b, bvm, bvec);
+            limit = min(limit, backing_limit);
+        }
+        put_ldev(mdev);
+    }
+    return limit;
+}
diff --git a/drivers/block/drbd/drbd_req.h b/drivers/block/drbd/drbd_req.h
new file mode 100644
index 000000000000..d37ab57f1209
--- /dev/null
+++ b/drivers/block/drbd/drbd_req.h
@@ -0,0 +1,327 @@
+/*
+   drbd_req.h
+
+   This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+   Copyright (C) 2006-2008, LINBIT Information Technologies GmbH.
+   Copyright (C) 2006-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.
+   Copyright (C) 2006-2008, Philipp Reisner <philipp.reisner@linbit.com>.
+
+   DRBD is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 2, or (at your option)
+   any later version.
+
+   DRBD is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with drbd; see the file COPYING.  If not, write to
+   the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _DRBD_REQ_H
+#define _DRBD_REQ_H
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+
+#include <linux/slab.h>
+#include <linux/drbd.h>
+#include "drbd_int.h"
+#include "drbd_wrappers.h"
+
+/* The request callbacks will be called in irq context by the IDE drivers,
+   and in Softirqs/Tasklets/BH context by the SCSI drivers,
+   and by the receiver and worker in kernel-thread context.
+   Try to get the locking right :) */
+
+/*
+ * Objects of type struct drbd_request only exist on a R_PRIMARY node, and are
+ * associated with IO requests originating from the block layer above us.
+ *
+ * There are quite a few things that may happen to a drbd request
+ * during its lifetime.
+ *
+ *  It will be created.
+ *  It will be marked with the intention to be
+ *    submitted to local disk and/or
+ *    sent via the network.
+ *
+ *  It has to be placed on the transfer log and other housekeeping lists,
+ *  in case we have a network connection.
+ *
+ *  It may be identified as a concurrent (write) request
+ *    and be handled accordingly.
+ *
+ *  It may be handed over to the local disk subsystem.
+ *  It may be completed by the local disk subsystem,
+ *    either successfully or with io-error.
+ *  In case it is a READ request, and it failed locally,
+ *    it may be retried remotely.
+ *
+ *  It may be queued for sending.
+ *  It may be handed over to the network stack,
+ *    which may fail.
+ *  It may be acknowledged by the "peer" according to the wire_protocol in use.
+ *    this may be a negative ack.
+ *  It may receive a faked ack when the network connection is lost and the
+ *  transfer log is cleaned up.
+ *  Sending may be canceled due to network connection loss.
+ * When it finally has outlived its time, + * corresponding dirty bits in the resync-bitmap may be cleared or set, + * it will be destroyed, + * and completion will be signalled to the originator, + * with or without "success". + */ + +enum drbd_req_event { + created, + to_be_send, + to_be_submitted, + + /* XXX yes, now I am inconsistent... + * these two are not "events" but "actions" + * oh, well... */ + queue_for_net_write, + queue_for_net_read, + + send_canceled, + send_failed, + handed_over_to_network, + connection_lost_while_pending, + recv_acked_by_peer, + write_acked_by_peer, + write_acked_by_peer_and_sis, /* and set_in_sync */ + conflict_discarded_by_peer, + neg_acked, + barrier_acked, /* in protocol A and B */ + data_received, /* (remote read) */ + + read_completed_with_error, + read_ahead_completed_with_error, + write_completed_with_error, + completed_ok, + nothing, /* for tracing only */ +}; + +/* encoding of request states for now. we don't actually need that many bits. + * we don't need to do atomic bit operations either, since most of the time we + * need to look at the connection state and/or manipulate some lists at the + * same time, so we should hold the request lock anyways. + */ +enum drbd_req_state_bits { + /* 210 + * 000: no local possible + * 001: to be submitted + * UNUSED, we could map: 011: submitted, completion still pending + * 110: completed ok + * 010: completed with error + */ + __RQ_LOCAL_PENDING, + __RQ_LOCAL_COMPLETED, + __RQ_LOCAL_OK, + + /* 76543 + * 00000: no network possible + * 00001: to be send + * 00011: to be send, on worker queue + * 00101: sent, expecting recv_ack (B) or write_ack (C) + * 11101: sent, + * recv_ack (B) or implicit "ack" (A), + * still waiting for the barrier ack. + * master_bio may already be completed and invalidated. + * 11100: write_acked (C), + * data_received (for remote read, any protocol) + * or finally the barrier ack has arrived (B,A)... + * request can be freed + * 01100: neg-acked (write, protocol C) + * or neg-d-acked (read, any protocol) + * or killed from the transfer log + * during cleanup after connection loss + * request can be freed + * 01000: canceled or send failed... + * request can be freed + */ + + /* if "SENT" is not set, yet, this can still fail or be canceled. + * if "SENT" is set already, we still wait for an Ack packet. + * when cleared, the master_bio may be completed. + * in (B,A) the request object may still linger on the transaction log + * until the corresponding barrier ack comes in */ + __RQ_NET_PENDING, + + /* If it is QUEUED, and it is a WRITE, it is also registered in the + * transfer log. Currently we need this flag to avoid conflicts between + * worker canceling the request and tl_clear_barrier killing it from + * transfer log. We should restructure the code so this conflict does + * no longer occur. */ + __RQ_NET_QUEUED, + + /* well, actually only "handed over to the network stack". + * + * TODO can potentially be dropped because of the similar meaning + * of RQ_NET_SENT and ~RQ_NET_QUEUED. + * however it is not exactly the same. before we drop it + * we must ensure that we can tell a request with network part + * from a request without, regardless of what happens to it. */ + __RQ_NET_SENT, + + /* when set, the request may be freed (if RQ_NET_QUEUED is clear). + * basically this means the corresponding P_BARRIER_ACK was received */ + __RQ_NET_DONE, + + /* whether or not we know (C) or pretend (B,A) that the write + * was successfully written on the peer. 
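+ * E.g. protocol A sets RQ_NET_OK already in handed_over_to_network,
+ * while protocol C keeps it clear until write_acked_by_peer arrives
+ * (see __req_mod in drbd_req.c).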
+ */ + __RQ_NET_OK, + + /* peer called drbd_set_in_sync() for this write */ + __RQ_NET_SIS, + + /* keep this last, its for the RQ_NET_MASK */ + __RQ_NET_MAX, +}; + +#define RQ_LOCAL_PENDING (1UL << __RQ_LOCAL_PENDING) +#define RQ_LOCAL_COMPLETED (1UL << __RQ_LOCAL_COMPLETED) +#define RQ_LOCAL_OK (1UL << __RQ_LOCAL_OK) + +#define RQ_LOCAL_MASK ((RQ_LOCAL_OK << 1)-1) /* 0x07 */ + +#define RQ_NET_PENDING (1UL << __RQ_NET_PENDING) +#define RQ_NET_QUEUED (1UL << __RQ_NET_QUEUED) +#define RQ_NET_SENT (1UL << __RQ_NET_SENT) +#define RQ_NET_DONE (1UL << __RQ_NET_DONE) +#define RQ_NET_OK (1UL << __RQ_NET_OK) +#define RQ_NET_SIS (1UL << __RQ_NET_SIS) + +/* 0x1f8 */ +#define RQ_NET_MASK (((1UL << __RQ_NET_MAX)-1) & ~RQ_LOCAL_MASK) + +/* epoch entries */ +static inline +struct hlist_head *ee_hash_slot(struct drbd_conf *mdev, sector_t sector) +{ + BUG_ON(mdev->ee_hash_s == 0); + return mdev->ee_hash + + ((unsigned int)(sector>>HT_SHIFT) % mdev->ee_hash_s); +} + +/* transfer log (drbd_request objects) */ +static inline +struct hlist_head *tl_hash_slot(struct drbd_conf *mdev, sector_t sector) +{ + BUG_ON(mdev->tl_hash_s == 0); + return mdev->tl_hash + + ((unsigned int)(sector>>HT_SHIFT) % mdev->tl_hash_s); +} + +/* application reads (drbd_request objects) */ +static struct hlist_head *ar_hash_slot(struct drbd_conf *mdev, sector_t sector) +{ + return mdev->app_reads_hash + + ((unsigned int)(sector) % APP_R_HSIZE); +} + +/* when we receive the answer for a read request, + * verify that we actually know about it */ +static inline struct drbd_request *_ar_id_to_req(struct drbd_conf *mdev, + u64 id, sector_t sector) +{ + struct hlist_head *slot = ar_hash_slot(mdev, sector); + struct hlist_node *n; + struct drbd_request *req; + + hlist_for_each_entry(req, n, slot, colision) { + if ((unsigned long)req == (unsigned long)id) { + D_ASSERT(req->sector == sector); + return req; + } + } + return NULL; +} + +static inline struct drbd_request *drbd_req_new(struct drbd_conf *mdev, + struct bio *bio_src) +{ + struct bio *bio; + struct drbd_request *req = + mempool_alloc(drbd_request_mempool, GFP_NOIO); + if (likely(req)) { + bio = bio_clone(bio_src, GFP_NOIO); /* XXX cannot fail?? */ + + req->rq_state = 0; + req->mdev = mdev; + req->master_bio = bio_src; + req->private_bio = bio; + req->epoch = 0; + req->sector = bio->bi_sector; + req->size = bio->bi_size; + req->start_time = jiffies; + INIT_HLIST_NODE(&req->colision); + INIT_LIST_HEAD(&req->tl_requests); + INIT_LIST_HEAD(&req->w.list); + + bio->bi_private = req; + bio->bi_end_io = drbd_endio_pri; + bio->bi_next = NULL; + } + return req; +} + +static inline void drbd_req_free(struct drbd_request *req) +{ + mempool_free(req, drbd_request_mempool); +} + +static inline int overlaps(sector_t s1, int l1, sector_t s2, int l2) +{ + return !((s1 + (l1>>9) <= s2) || (s1 >= s2 + (l2>>9))); +} + +/* Short lived temporary struct on the stack. + * We could squirrel the error to be returned into + * bio->bi_size, or similar. But that would be too ugly. */ +struct bio_and_error { + struct bio *bio; + int error; +}; + +extern void _req_may_be_done(struct drbd_request *req, + struct bio_and_error *m); +extern void __req_mod(struct drbd_request *req, enum drbd_req_event what, + struct bio_and_error *m); +extern void complete_master_bio(struct drbd_conf *mdev, + struct bio_and_error *m); + +/* use this if you don't want to deal with calling complete_master_bio() + * outside the spinlock, e.g. when walking some list on cleanup. 
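+ *
+ * Put differently: a caller that already holds req_lock (and accepts
+ * completing the master bio while still holding it) uses _req_mod();
+ * all other callers use req_mod(), which takes req_lock around
+ * __req_mod() and calls complete_master_bio() only after dropping it.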
+ */
+static inline void _req_mod(struct drbd_request *req, enum drbd_req_event what)
+{
+    struct drbd_conf *mdev = req->mdev;
+    struct bio_and_error m;
+
+    /* __req_mod possibly frees req, do not touch req after that! */
+    __req_mod(req, what, &m);
+    if (m.bio)
+        complete_master_bio(mdev, &m);
+}
+
+/* completion of master bio is outside of spinlock.
+ * If you need it irqsave, do it yourself! */
+static inline void req_mod(struct drbd_request *req,
+        enum drbd_req_event what)
+{
+    struct drbd_conf *mdev = req->mdev;
+    struct bio_and_error m;
+    spin_lock_irq(&mdev->req_lock);
+    __req_mod(req, what, &m);
+    spin_unlock_irq(&mdev->req_lock);
+
+    if (m.bio)
+        complete_master_bio(mdev, &m);
+}
+#endif
diff --git a/drivers/block/drbd/drbd_strings.c b/drivers/block/drbd/drbd_strings.c
new file mode 100644
index 000000000000..76863e3f05be
--- /dev/null
+++ b/drivers/block/drbd/drbd_strings.c
@@ -0,0 +1,113 @@
+/*
+   drbd_strings.c
+
+   This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+   Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
+   Copyright (C) 2003-2008, Philipp Reisner <philipp.reisner@linbit.com>.
+   Copyright (C) 2003-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.
+
+   drbd is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 2, or (at your option)
+   any later version.
+
+   drbd is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with drbd; see the file COPYING.  If not, write to
+   the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+*/
+
+#include <linux/drbd.h>
+
+static const char *drbd_conn_s_names[] = {
+    [C_STANDALONE]       = "StandAlone",
+    [C_DISCONNECTING]    = "Disconnecting",
+    [C_UNCONNECTED]      = "Unconnected",
+    [C_TIMEOUT]          = "Timeout",
+    [C_BROKEN_PIPE]      = "BrokenPipe",
+    [C_NETWORK_FAILURE]  = "NetworkFailure",
+    [C_PROTOCOL_ERROR]   = "ProtocolError",
+    [C_WF_CONNECTION]    = "WFConnection",
+    [C_WF_REPORT_PARAMS] = "WFReportParams",
+    [C_TEAR_DOWN]        = "TearDown",
+    [C_CONNECTED]        = "Connected",
+    [C_STARTING_SYNC_S]  = "StartingSyncS",
+    [C_STARTING_SYNC_T]  = "StartingSyncT",
+    [C_WF_BITMAP_S]      = "WFBitMapS",
+    [C_WF_BITMAP_T]      = "WFBitMapT",
+    [C_WF_SYNC_UUID]     = "WFSyncUUID",
+    [C_SYNC_SOURCE]      = "SyncSource",
+    [C_SYNC_TARGET]      = "SyncTarget",
+    [C_PAUSED_SYNC_S]    = "PausedSyncS",
+    [C_PAUSED_SYNC_T]    = "PausedSyncT",
+    [C_VERIFY_S]         = "VerifyS",
+    [C_VERIFY_T]         = "VerifyT",
+};
+
+static const char *drbd_role_s_names[] = {
+    [R_PRIMARY]   = "Primary",
+    [R_SECONDARY] = "Secondary",
+    [R_UNKNOWN]   = "Unknown"
+};
+
+static const char *drbd_disk_s_names[] = {
+    [D_DISKLESS]     = "Diskless",
+    [D_ATTACHING]    = "Attaching",
+    [D_FAILED]       = "Failed",
+    [D_NEGOTIATING]  = "Negotiating",
+    [D_INCONSISTENT] = "Inconsistent",
+    [D_OUTDATED]     = "Outdated",
+    [D_UNKNOWN]      = "DUnknown",
+    [D_CONSISTENT]   = "Consistent",
+    [D_UP_TO_DATE]   = "UpToDate",
+};
+
+static const char *drbd_state_sw_errors[] = {
+    [-SS_TWO_PRIMARIES] = "Multiple primaries not allowed by config",
+    [-SS_NO_UP_TO_DATE_DISK] = "Refusing to be Primary without at least one UpToDate disk",
+    [-SS_NO_LOCAL_DISK] = "Can not resync without local disk",
+    [-SS_NO_REMOTE_DISK] = "Can not resync without remote disk",
+    [-SS_CONNECTED_OUTDATES] = "Refusing to be Outdated while Connected",
+    [-SS_PRIMARY_NOP] = "Refusing to be Primary while peer is not outdated",
+    [-SS_RESYNC_RUNNING] = "Can not start OV/resync since it is already active",
+    [-SS_ALREADY_STANDALONE] = "Can not disconnect a StandAlone device",
+    [-SS_CW_FAILED_BY_PEER] = "State change was refused by peer node",
+    [-SS_IS_DISKLESS] = "Device is diskless, the requested operation requires a disk",
+    [-SS_DEVICE_IN_USE] = "Device is held open by someone",
+    [-SS_NO_NET_CONFIG] = "Have no net/connection configuration",
+    [-SS_NO_VERIFY_ALG] = "Need a verify algorithm to start online verify",
+    [-SS_NEED_CONNECTION] = "Need a connection to start verify or resync",
+    [-SS_NOT_SUPPORTED] = "Peer does not support protocol",
+    [-SS_LOWER_THAN_OUTDATED] = "Disk state is lower than outdated",
+    [-SS_IN_TRANSIENT_STATE] = "In transient state, retry after next state change",
+    [-SS_CONCURRENT_ST_CHG] = "Concurrent state changes detected and aborted",
+};
+
+const char *drbd_conn_str(enum drbd_conns s)
+{
+    /* enums are unsigned... */
+    return s > C_PAUSED_SYNC_T ? "TOO_LARGE" : drbd_conn_s_names[s];
+}
+
+const char *drbd_role_str(enum drbd_role s)
+{
+    return s > R_SECONDARY ? "TOO_LARGE" : drbd_role_s_names[s];
+}
+
+const char *drbd_disk_str(enum drbd_disk_state s)
+{
+    return s > D_UP_TO_DATE ? "TOO_LARGE" : drbd_disk_s_names[s];
+}
+
+const char *drbd_set_st_err_str(enum drbd_state_ret_codes err)
+{
+    return err <= SS_AFTER_LAST_ERROR ? "TOO_SMALL" :
+           err > SS_TWO_PRIMARIES ? "TOO_LARGE"
+           : drbd_state_sw_errors[-err];
+}
diff --git a/drivers/block/drbd/drbd_tracing.c b/drivers/block/drbd/drbd_tracing.c
new file mode 100644
index 000000000000..d18d4f7b4bef
--- /dev/null
+++ b/drivers/block/drbd/drbd_tracing.c
@@ -0,0 +1,752 @@
+/*
+   drbd_tracing.c
+
+   This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+   Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
+   Copyright (C) 2003-2008, Philipp Reisner <philipp.reisner@linbit.com>.
+   Copyright (C) 2003-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.
+
+   drbd is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 2, or (at your option)
+   any later version.
+
+   drbd is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with drbd; see the file COPYING.  If not, write to
+   the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/module.h>
+#include <linux/drbd.h>
+#include <linux/ctype.h>
+#include "drbd_int.h"
+#include "drbd_tracing.h"
+#include <linux/drbd_tag_magic.h>
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Philipp Reisner, Lars Ellenberg");
+MODULE_DESCRIPTION("DRBD tracepoint probes");
+MODULE_PARM_DESC(trace_mask, "Bitmap of events to trace; see drbd_tracing.c");
+MODULE_PARM_DESC(trace_level, "Current tracing level (changeable in /sys)");
+MODULE_PARM_DESC(trace_devs, "Bitmap of devices to trace (changeable in /sys)");
+
+unsigned int trace_mask = 0;  /* Bitmap of events to trace */
+int trace_level;              /* Current trace level */
+int trace_devs;               /* Bitmap of devices to trace */
+
+/* e.g. loading with trace_mask=0x0101 enables TRACE_PACKET|TRACE_INT_RQ */
+module_param(trace_mask, uint, 0444);
+module_param(trace_level, int, 0644);
+module_param(trace_devs, int, 0644);
+
+enum {
+    TRACE_PACKET = 0x0001,
+    TRACE_RQ     = 0x0002,
+    TRACE_UUID   = 0x0004,
+    TRACE_RESYNC = 0x0008,
+    TRACE_EE     = 0x0010,
+    TRACE_UNPLUG = 0x0020,
+    TRACE_NL     = 0x0040,
+    TRACE_AL_EXT = 0x0080,
+    TRACE_INT_RQ = 0x0100,
+    TRACE_MD_IO  = 0x0200,
+    TRACE_EPOCH  = 0x0400,
+};
+
+/* Buffer printing support
+ * dbg_print_flags: used for Flags arg to drbd_print_buffer
+ * - DBGPRINT_BUFFADDR; if set, each line starts with the
+ *   virtual address of the line being output. If clear,
+ *   each line starts with the offset from the beginning
+ *   of the buffer. */
+enum dbg_print_flags {
+    DBGPRINT_BUFFADDR = 0x0001,
+};
+
+/* Macro stuff */
+static char *nl_packet_name(int packet_type)
+{
+/* Generate packet type strings */
+#define NL_PACKET(name, number, fields) \
+    [P_ ## name] = # name,
+#define NL_INTEGER Argh!
+#define NL_BIT Argh!
+#define NL_INT64 Argh!
+#define NL_STRING Argh!
+
+    static char *nl_tag_name[P_nl_after_last_packet] = {
+#include "linux/drbd_nl.h"
+    };
+
+    return (packet_type < sizeof(nl_tag_name)/sizeof(nl_tag_name[0])) ?
+ nl_tag_name[packet_type] : "*Unknown*"; +} +/* /Macro stuff */ + +static inline int is_mdev_trace(struct drbd_conf *mdev, unsigned int level) +{ + return trace_level >= level && ((1 << mdev_to_minor(mdev)) & trace_devs); +} + +static void probe_drbd_unplug(struct drbd_conf *mdev, char *msg) +{ + if (!is_mdev_trace(mdev, TRACE_LVL_ALWAYS)) + return; + + dev_info(DEV, "%s, ap_bio_count=%d\n", msg, atomic_read(&mdev->ap_bio_cnt)); +} + +static void probe_drbd_uuid(struct drbd_conf *mdev, enum drbd_uuid_index index) +{ + static char *uuid_str[UI_EXTENDED_SIZE] = { + [UI_CURRENT] = "CURRENT", + [UI_BITMAP] = "BITMAP", + [UI_HISTORY_START] = "HISTORY_START", + [UI_HISTORY_END] = "HISTORY_END", + [UI_SIZE] = "SIZE", + [UI_FLAGS] = "FLAGS", + }; + + if (!is_mdev_trace(mdev, TRACE_LVL_ALWAYS)) + return; + + if (index >= UI_EXTENDED_SIZE) { + dev_warn(DEV, " uuid_index >= EXTENDED_SIZE\n"); + return; + } + + dev_info(DEV, " uuid[%s] now %016llX\n", + uuid_str[index], + (unsigned long long)mdev->ldev->md.uuid[index]); +} + +static void probe_drbd_md_io(struct drbd_conf *mdev, int rw, + struct drbd_backing_dev *bdev) +{ + if (!is_mdev_trace(mdev, TRACE_LVL_ALWAYS)) + return; + + dev_info(DEV, " %s metadata superblock now\n", + rw == READ ? "Reading" : "Writing"); +} + +static void probe_drbd_ee(struct drbd_conf *mdev, struct drbd_epoch_entry *e, char* msg) +{ + if (!is_mdev_trace(mdev, TRACE_LVL_ALWAYS)) + return; + + dev_info(DEV, "EE %s sec=%llus size=%u e=%p\n", + msg, (unsigned long long)e->sector, e->size, e); +} + +static void probe_drbd_epoch(struct drbd_conf *mdev, struct drbd_epoch *epoch, + enum epoch_event ev) +{ + static char *epoch_event_str[] = { + [EV_PUT] = "put", + [EV_GOT_BARRIER_NR] = "got_barrier_nr", + [EV_BARRIER_DONE] = "barrier_done", + [EV_BECAME_LAST] = "became_last", + [EV_TRACE_FLUSH] = "issuing_flush", + [EV_TRACE_ADD_BARRIER] = "added_barrier", + [EV_TRACE_SETTING_BI] = "just set barrier_in_next_epoch", + }; + + if (!is_mdev_trace(mdev, TRACE_LVL_ALWAYS)) + return; + + ev &= ~EV_CLEANUP; + + switch (ev) { + case EV_TRACE_ALLOC: + dev_info(DEV, "Allocate epoch %p/xxxx { } nr_epochs=%d\n", epoch, mdev->epochs); + break; + case EV_TRACE_FREE: + dev_info(DEV, "Freeing epoch %p/%d { size=%d } nr_epochs=%d\n", + epoch, epoch->barrier_nr, atomic_read(&epoch->epoch_size), + mdev->epochs); + break; + default: + dev_info(DEV, "Update epoch %p/%d { size=%d active=%d %c%c n%c%c } ev=%s\n", + epoch, epoch->barrier_nr, atomic_read(&epoch->epoch_size), + atomic_read(&epoch->active), + test_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags) ? 'n' : '-', + test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags) ? 'b' : '-', + test_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags) ? 'i' : '-', + test_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags) ? 'd' : '-', + epoch_event_str[ev]); + } +} + +static void probe_drbd_netlink(void *data, int is_req) +{ + struct cn_msg *msg = data; + + if (is_req) { + struct drbd_nl_cfg_req *nlp = (struct drbd_nl_cfg_req *)msg->data; + + printk(KERN_INFO "drbd%d: " + "Netlink: << %s (%d) - seq: %x, ack: %x, len: %x\n", + nlp->drbd_minor, + nl_packet_name(nlp->packet_type), + nlp->packet_type, + msg->seq, msg->ack, msg->len); + } else { + struct drbd_nl_cfg_reply *nlp = (struct drbd_nl_cfg_reply *)msg->data; + + printk(KERN_INFO "drbd%d: " + "Netlink: >> %s (%d) - seq: %x, ack: %x, len: %x\n", + nlp->minor, + nlp->packet_type == P_nl_after_last_packet ? 
+ "Empty-Reply" : nl_packet_name(nlp->packet_type), + nlp->packet_type, + msg->seq, msg->ack, msg->len); + } +} + +static void probe_drbd_actlog(struct drbd_conf *mdev, sector_t sector, char* msg) +{ + unsigned int enr = (sector >> (AL_EXTENT_SHIFT-9)); + + if (!is_mdev_trace(mdev, TRACE_LVL_ALWAYS)) + return; + + dev_info(DEV, "%s (sec=%llus, al_enr=%u, rs_enr=%d)\n", + msg, (unsigned long long) sector, enr, + (int)BM_SECT_TO_EXT(sector)); +} + +/** + * drbd_print_buffer() - Hexdump arbitrary binary data into a buffer + * @prefix: String is output at the beginning of each line output. + * @flags: Currently only defined flag: DBGPRINT_BUFFADDR; if set, each + * line starts with the virtual address of the line being + * output. If clear, each line starts with the offset from the + * beginning of the buffer. + * @size: Indicates the size of each entry in the buffer. Supported + * values are sizeof(char), sizeof(short) and sizeof(int) + * @buffer: Start address of buffer + * @buffer_va: Virtual address of start of buffer (normally the same + * as Buffer, but having it separate allows it to hold + * file address for example) + * @length: length of buffer + */ +static void drbd_print_buffer(const char *prefix, unsigned int flags, int size, + const void *buffer, const void *buffer_va, + unsigned int length) + +#define LINE_SIZE 16 +#define LINE_ENTRIES (int)(LINE_SIZE/size) +{ + const unsigned char *pstart; + const unsigned char *pstart_va; + const unsigned char *pend; + char bytes_str[LINE_SIZE*3+8], ascii_str[LINE_SIZE+8]; + char *pbytes = bytes_str, *pascii = ascii_str; + int offset = 0; + long sizemask; + int field_width; + int index; + const unsigned char *pend_str; + const unsigned char *p; + int count; + + /* verify size parameter */ + if (size != sizeof(char) && + size != sizeof(short) && + size != sizeof(int)) { + printk(KERN_DEBUG "drbd_print_buffer: " + "ERROR invalid size %d\n", size); + return; + } + + sizemask = size-1; + field_width = size*2; + + /* Adjust start/end to be on appropriate boundary for size */ + buffer = (const char *)((long)buffer & ~sizemask); + pend = (const unsigned char *) + (((long)buffer + length + sizemask) & ~sizemask); + + if (flags & DBGPRINT_BUFFADDR) { + /* Move start back to nearest multiple of line size, + * if printing address. 
This results in nicely formatted output
+     * with addresses being on line size (16) byte boundaries */
+        pstart = (const unsigned char *)((long)buffer & ~(LINE_SIZE-1));
+    } else {
+        pstart = (const unsigned char *)buffer;
+    }
+
+    /* Set value of start VA to print if addresses asked for */
+    pstart_va = (const unsigned char *)buffer_va
+         - ((const unsigned char *)buffer-pstart);
+
+    /* Calculate end position to nicely align right hand side */
+    pend_str = pstart + (((pend-pstart) + LINE_SIZE-1) & ~(LINE_SIZE-1));
+
+    /* Init strings */
+    *pbytes = *pascii = '\0';
+
+    /* Start at beginning of first line */
+    p = pstart;
+    count = 0;
+
+    while (p < pend_str) {
+        if (p < (const unsigned char *)buffer || p >= pend) {
+            /* Before start of buffer or after end- print spaces */
+            pbytes += sprintf(pbytes, "%*c ", field_width, ' ');
+            pascii += sprintf(pascii, "%*c", size, ' ');
+            p += size;
+        } else {
+            /* Add hex and ascii to strings */
+            int val;
+            switch (size) {
+            default:
+            case 1:
+                val = *(unsigned char *)p;
+                break;
+            case 2:
+                val = *(unsigned short *)p;
+                break;
+            case 4:
+                val = *(unsigned int *)p;
+                break;
+            }
+
+            pbytes += sprintf(pbytes, "%0*x ", field_width, val);
+
+            for (index = size; index; index--) {
+                *pascii++ = isprint(*p) ? *p : '.';
+                p++;
+            }
+        }
+
+        count++;
+
+        if (count == LINE_ENTRIES || p >= pend_str) {
+            /* Null terminate and print record */
+            *pascii = '\0';
+            printk(KERN_DEBUG "%s%8.8lx: %*s|%*s|\n",
+                   prefix,
+                   (flags & DBGPRINT_BUFFADDR)
+                   ? (long)pstart_va:(long)offset,
+                   LINE_ENTRIES*(field_width+1), bytes_str,
+                   LINE_SIZE, ascii_str);
+
+            /* Move onto next line */
+            pstart_va += (p-pstart);
+            pstart = p;
+            count = 0;
+            offset += LINE_SIZE;
+
+            /* Re-init strings */
+            pbytes = bytes_str;
+            pascii = ascii_str;
+            *pbytes = *pascii = '\0';
+        }
+    }
+}
+
+static void probe_drbd_resync(struct drbd_conf *mdev, int level, const char *fmt, va_list args)
+{
+    char str[256];
+
+    if (!is_mdev_trace(mdev, level))
+        return;
+
+    if (vsnprintf(str, 256, fmt, args) >= 256)
+        str[255] = 0;
+
+    printk(KERN_INFO "%s %s: %s", dev_driver_string(disk_to_dev(mdev->vdisk)),
+           dev_name(disk_to_dev(mdev->vdisk)), str);
+}
+
+static void probe_drbd_bio(struct drbd_conf *mdev, const char *pfx, struct bio *bio, int complete,
+               struct drbd_request *r)
+{
+#if defined(CONFIG_LBDAF) || defined(CONFIG_LBD)
+#define SECTOR_FORMAT "%Lx"
+#else
+#define SECTOR_FORMAT "%lx"
+#endif
+#define SECTOR_SHIFT 9
+
+    unsigned long lowaddr = (unsigned long)(bio->bi_sector << SECTOR_SHIFT);
+    char *faddr = (char *)(lowaddr);
+    char rb[sizeof(void *)*2+6] = { 0, };
+    struct bio_vec *bvec;
+    int segno;
+
+    const int rw = bio->bi_rw;
+    const int biorw = (rw & (RW_MASK|RWA_MASK));
+    const int biobarrier = (rw & (1<<BIO_RW_BARRIER));
+    const int biosync = (rw & ((1<<BIO_RW_UNPLUG) | (1<<BIO_RW_SYNCIO)));
+
+    if (!is_mdev_trace(mdev, TRACE_LVL_ALWAYS))
+        return;
+
+    if (r)
+        sprintf(rb, "Req:%p ", r);
+
+    dev_info(DEV, "%s %s:%s%s Bio:%p %s- %soffset " SECTOR_FORMAT ", size %x >>>",
+         pfx,
+         biorw == WRITE ? "Write" : "Read",
+         biobarrier ? " : B" : "",
+         biosync ? " : S" : "",
+         bio,
+         rb,
+         complete ? (bio_flagged(bio, BIO_UPTODATE) ? "Success, " : "Failed, ") : "",
+         bio->bi_sector << SECTOR_SHIFT,
+         bio->bi_size);
+
+    if (trace_level >= TRACE_LVL_METRICS &&
+        ((biorw == WRITE) ^ complete)) {
+        printk(KERN_DEBUG "  ind     page   offset   length\n");
+        __bio_for_each_segment(bvec, bio, segno, 0) {
+            printk(KERN_DEBUG "  [%d] %p %8.8x %8.8x\n", segno,
+                   bvec->bv_page, bvec->bv_offset, bvec->bv_len);
+
+            if (trace_level >= TRACE_LVL_ALL) {
+                char *bvec_buf;
+                unsigned long flags;
+
+                bvec_buf = bvec_kmap_irq(bvec, &flags);
+
+                drbd_print_buffer("    ", DBGPRINT_BUFFADDR, 1,
+                          bvec_buf,
+                          faddr,
+                          (bvec->bv_len <= 0x80)
+                          ?
bvec->bv_len : 0x80); + + bvec_kunmap_irq(bvec_buf, &flags); + + if (bvec->bv_len > 0x40) + printk(KERN_DEBUG " ....\n"); + + faddr += bvec->bv_len; + } + } + } +} + +static void probe_drbd_req(struct drbd_request *req, enum drbd_req_event what, char *msg) +{ + static const char *rq_event_names[] = { + [created] = "created", + [to_be_send] = "to_be_send", + [to_be_submitted] = "to_be_submitted", + [queue_for_net_write] = "queue_for_net_write", + [queue_for_net_read] = "queue_for_net_read", + [send_canceled] = "send_canceled", + [send_failed] = "send_failed", + [handed_over_to_network] = "handed_over_to_network", + [connection_lost_while_pending] = + "connection_lost_while_pending", + [recv_acked_by_peer] = "recv_acked_by_peer", + [write_acked_by_peer] = "write_acked_by_peer", + [neg_acked] = "neg_acked", + [conflict_discarded_by_peer] = "conflict_discarded_by_peer", + [barrier_acked] = "barrier_acked", + [data_received] = "data_received", + [read_completed_with_error] = "read_completed_with_error", + [read_ahead_completed_with_error] = "reada_completed_with_error", + [write_completed_with_error] = "write_completed_with_error", + [completed_ok] = "completed_ok", + }; + + struct drbd_conf *mdev = req->mdev; + + const int rw = (req->master_bio == NULL || + bio_data_dir(req->master_bio) == WRITE) ? + 'W' : 'R'; + const unsigned long s = req->rq_state; + + if (what != nothing) { + dev_info(DEV, "__req_mod(%p %c ,%s)\n", req, rw, rq_event_names[what]); + } else { + dev_info(DEV, "%s %p %c L%c%c%cN%c%c%c%c%c %u (%llus +%u) %s\n", + msg, req, rw, + s & RQ_LOCAL_PENDING ? 'p' : '-', + s & RQ_LOCAL_COMPLETED ? 'c' : '-', + s & RQ_LOCAL_OK ? 'o' : '-', + s & RQ_NET_PENDING ? 'p' : '-', + s & RQ_NET_QUEUED ? 'q' : '-', + s & RQ_NET_SENT ? 's' : '-', + s & RQ_NET_DONE ? 'd' : '-', + s & RQ_NET_OK ? 'o' : '-', + req->epoch, + (unsigned long long)req->sector, + req->size, + drbd_conn_str(mdev->state.conn)); + } +} + + +#define drbd_peer_str drbd_role_str +#define drbd_pdsk_str drbd_disk_str + +#define PSM(A) \ +do { \ + if (mask.A) { \ + int i = snprintf(p, len, " " #A "( %s )", \ + drbd_##A##_str(val.A)); \ + if (i >= len) \ + return op; \ + p += i; \ + len -= i; \ + } \ +} while (0) + +static char *dump_st(char *p, int len, union drbd_state mask, union drbd_state val) +{ + char *op = p; + *p = '\0'; + PSM(role); + PSM(peer); + PSM(conn); + PSM(disk); + PSM(pdsk); + + return op; +} + +#define INFOP(fmt, args...) \ +do { \ + if (trace_level >= TRACE_LVL_ALL) { \ + dev_info(DEV, "%s:%d: %s [%d] %s %s " fmt , \ + file, line, current->comm, current->pid, \ + sockname, recv ? "<<<" : ">>>" , \ + ## args); \ + } else { \ + dev_info(DEV, "%s %s " fmt, sockname, \ + recv ? "<<<" : ">>>" , \ + ## args); \ + } \ +} while (0) + +static char *_dump_block_id(u64 block_id, char *buff) +{ + if (is_syncer_block_id(block_id)) + strcpy(buff, "SyncerId"); + else + sprintf(buff, "%llx", (unsigned long long)block_id); + + return buff; +} + +static void probe_drbd_packet(struct drbd_conf *mdev, struct socket *sock, + int recv, union p_polymorph *p, char *file, int line) +{ + char *sockname = sock == mdev->meta.socket ? "meta" : "data"; + int cmd = (recv == 2) ? 
p->header.command : be16_to_cpu(p->header.command); + char tmp[300]; + union drbd_state m, v; + + switch (cmd) { + case P_HAND_SHAKE: + INFOP("%s (protocol %u-%u)\n", cmdname(cmd), + be32_to_cpu(p->handshake.protocol_min), + be32_to_cpu(p->handshake.protocol_max)); + break; + + case P_BITMAP: /* don't report this */ + case P_COMPRESSED_BITMAP: /* don't report this */ + break; + + case P_DATA: + INFOP("%s (sector %llus, id %s, seq %u, f %x)\n", cmdname(cmd), + (unsigned long long)be64_to_cpu(p->data.sector), + _dump_block_id(p->data.block_id, tmp), + be32_to_cpu(p->data.seq_num), + be32_to_cpu(p->data.dp_flags) + ); + break; + + case P_DATA_REPLY: + case P_RS_DATA_REPLY: + INFOP("%s (sector %llus, id %s)\n", cmdname(cmd), + (unsigned long long)be64_to_cpu(p->data.sector), + _dump_block_id(p->data.block_id, tmp) + ); + break; + + case P_RECV_ACK: + case P_WRITE_ACK: + case P_RS_WRITE_ACK: + case P_DISCARD_ACK: + case P_NEG_ACK: + case P_NEG_RS_DREPLY: + INFOP("%s (sector %llus, size %u, id %s, seq %u)\n", + cmdname(cmd), + (long long)be64_to_cpu(p->block_ack.sector), + be32_to_cpu(p->block_ack.blksize), + _dump_block_id(p->block_ack.block_id, tmp), + be32_to_cpu(p->block_ack.seq_num) + ); + break; + + case P_DATA_REQUEST: + case P_RS_DATA_REQUEST: + INFOP("%s (sector %llus, size %u, id %s)\n", cmdname(cmd), + (long long)be64_to_cpu(p->block_req.sector), + be32_to_cpu(p->block_req.blksize), + _dump_block_id(p->block_req.block_id, tmp) + ); + break; + + case P_BARRIER: + case P_BARRIER_ACK: + INFOP("%s (barrier %u)\n", cmdname(cmd), p->barrier.barrier); + break; + + case P_SYNC_PARAM: + case P_SYNC_PARAM89: + INFOP("%s (rate %u, verify-alg \"%.64s\", csums-alg \"%.64s\")\n", + cmdname(cmd), be32_to_cpu(p->rs_param_89.rate), + p->rs_param_89.verify_alg, p->rs_param_89.csums_alg); + break; + + case P_UUIDS: + INFOP("%s Curr:%016llX, Bitmap:%016llX, " + "HisSt:%016llX, HisEnd:%016llX\n", + cmdname(cmd), + (unsigned long long)be64_to_cpu(p->uuids.uuid[UI_CURRENT]), + (unsigned long long)be64_to_cpu(p->uuids.uuid[UI_BITMAP]), + (unsigned long long)be64_to_cpu(p->uuids.uuid[UI_HISTORY_START]), + (unsigned long long)be64_to_cpu(p->uuids.uuid[UI_HISTORY_END])); + break; + + case P_SIZES: + INFOP("%s (d %lluMiB, u %lluMiB, c %lldMiB, " + "max bio %x, q order %x)\n", + cmdname(cmd), + (long long)(be64_to_cpu(p->sizes.d_size)>>(20-9)), + (long long)(be64_to_cpu(p->sizes.u_size)>>(20-9)), + (long long)(be64_to_cpu(p->sizes.c_size)>>(20-9)), + be32_to_cpu(p->sizes.max_segment_size), + be32_to_cpu(p->sizes.queue_order_type)); + break; + + case P_STATE: + v.i = be32_to_cpu(p->state.state); + m.i = 0xffffffff; + dump_st(tmp, sizeof(tmp), m, v); + INFOP("%s (s %x {%s})\n", cmdname(cmd), v.i, tmp); + break; + + case P_STATE_CHG_REQ: + m.i = be32_to_cpu(p->req_state.mask); + v.i = be32_to_cpu(p->req_state.val); + dump_st(tmp, sizeof(tmp), m, v); + INFOP("%s (m %x v %x {%s})\n", cmdname(cmd), m.i, v.i, tmp); + break; + + case P_STATE_CHG_REPLY: + INFOP("%s (ret %x)\n", cmdname(cmd), + be32_to_cpu(p->req_state_reply.retcode)); + break; + + case P_PING: + case P_PING_ACK: + /* + * Dont trace pings at summary level + */ + if (trace_level < TRACE_LVL_ALL) + break; + /* fall through... 
*/ + default: + INFOP("%s (%u)\n", cmdname(cmd), cmd); + break; + } +} + + +static int __init drbd_trace_init(void) +{ + int ret; + + if (trace_mask & TRACE_UNPLUG) { + ret = register_trace_drbd_unplug(probe_drbd_unplug); + WARN_ON(ret); + } + if (trace_mask & TRACE_UUID) { + ret = register_trace_drbd_uuid(probe_drbd_uuid); + WARN_ON(ret); + } + if (trace_mask & TRACE_EE) { + ret = register_trace_drbd_ee(probe_drbd_ee); + WARN_ON(ret); + } + if (trace_mask & TRACE_PACKET) { + ret = register_trace_drbd_packet(probe_drbd_packet); + WARN_ON(ret); + } + if (trace_mask & TRACE_MD_IO) { + ret = register_trace_drbd_md_io(probe_drbd_md_io); + WARN_ON(ret); + } + if (trace_mask & TRACE_EPOCH) { + ret = register_trace_drbd_epoch(probe_drbd_epoch); + WARN_ON(ret); + } + if (trace_mask & TRACE_NL) { + ret = register_trace_drbd_netlink(probe_drbd_netlink); + WARN_ON(ret); + } + if (trace_mask & TRACE_AL_EXT) { + ret = register_trace_drbd_actlog(probe_drbd_actlog); + WARN_ON(ret); + } + if (trace_mask & TRACE_RQ) { + ret = register_trace_drbd_bio(probe_drbd_bio); + WARN_ON(ret); + } + if (trace_mask & TRACE_INT_RQ) { + ret = register_trace_drbd_req(probe_drbd_req); + WARN_ON(ret); + } + if (trace_mask & TRACE_RESYNC) { + ret = register_trace__drbd_resync(probe_drbd_resync); + WARN_ON(ret); + } + return 0; +} + +module_init(drbd_trace_init); + +static void __exit drbd_trace_exit(void) +{ + if (trace_mask & TRACE_UNPLUG) + unregister_trace_drbd_unplug(probe_drbd_unplug); + if (trace_mask & TRACE_UUID) + unregister_trace_drbd_uuid(probe_drbd_uuid); + if (trace_mask & TRACE_EE) + unregister_trace_drbd_ee(probe_drbd_ee); + if (trace_mask & TRACE_PACKET) + unregister_trace_drbd_packet(probe_drbd_packet); + if (trace_mask & TRACE_MD_IO) + unregister_trace_drbd_md_io(probe_drbd_md_io); + if (trace_mask & TRACE_EPOCH) + unregister_trace_drbd_epoch(probe_drbd_epoch); + if (trace_mask & TRACE_NL) + unregister_trace_drbd_netlink(probe_drbd_netlink); + if (trace_mask & TRACE_AL_EXT) + unregister_trace_drbd_actlog(probe_drbd_actlog); + if (trace_mask & TRACE_RQ) + unregister_trace_drbd_bio(probe_drbd_bio); + if (trace_mask & TRACE_INT_RQ) + unregister_trace_drbd_req(probe_drbd_req); + if (trace_mask & TRACE_RESYNC) + unregister_trace__drbd_resync(probe_drbd_resync); + + tracepoint_synchronize_unregister(); +} + +module_exit(drbd_trace_exit); diff --git a/drivers/block/drbd/drbd_tracing.h b/drivers/block/drbd/drbd_tracing.h new file mode 100644 index 000000000000..c4531a137f65 --- /dev/null +++ b/drivers/block/drbd/drbd_tracing.h @@ -0,0 +1,87 @@ +/* + drbd_tracing.h + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2003-2008, LINBIT Information Technologies GmbH. + Copyright (C) 2003-2008, Philipp Reisner . + Copyright (C) 2003-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. 
+ + */ + +#ifndef DRBD_TRACING_H +#define DRBD_TRACING_H + +#include +#include "drbd_int.h" +#include "drbd_req.h" + +enum { + TRACE_LVL_ALWAYS = 0, + TRACE_LVL_SUMMARY, + TRACE_LVL_METRICS, + TRACE_LVL_ALL, + TRACE_LVL_MAX +}; + +DECLARE_TRACE(drbd_unplug, + TP_PROTO(struct drbd_conf *mdev, char* msg), + TP_ARGS(mdev, msg)); + +DECLARE_TRACE(drbd_uuid, + TP_PROTO(struct drbd_conf *mdev, enum drbd_uuid_index index), + TP_ARGS(mdev, index)); + +DECLARE_TRACE(drbd_ee, + TP_PROTO(struct drbd_conf *mdev, struct drbd_epoch_entry *e, char* msg), + TP_ARGS(mdev, e, msg)); + +DECLARE_TRACE(drbd_md_io, + TP_PROTO(struct drbd_conf *mdev, int rw, struct drbd_backing_dev *bdev), + TP_ARGS(mdev, rw, bdev)); + +DECLARE_TRACE(drbd_epoch, + TP_PROTO(struct drbd_conf *mdev, struct drbd_epoch *epoch, enum epoch_event ev), + TP_ARGS(mdev, epoch, ev)); + +DECLARE_TRACE(drbd_netlink, + TP_PROTO(void *data, int is_req), + TP_ARGS(data, is_req)); + +DECLARE_TRACE(drbd_actlog, + TP_PROTO(struct drbd_conf *mdev, sector_t sector, char* msg), + TP_ARGS(mdev, sector, msg)); + +DECLARE_TRACE(drbd_bio, + TP_PROTO(struct drbd_conf *mdev, const char *pfx, struct bio *bio, int complete, + struct drbd_request *r), + TP_ARGS(mdev, pfx, bio, complete, r)); + +DECLARE_TRACE(drbd_req, + TP_PROTO(struct drbd_request *req, enum drbd_req_event what, char *msg), + TP_ARGS(req, what, msg)); + +DECLARE_TRACE(drbd_packet, + TP_PROTO(struct drbd_conf *mdev, struct socket *sock, + int recv, union p_polymorph *p, char *file, int line), + TP_ARGS(mdev, sock, recv, p, file, line)); + +DECLARE_TRACE(_drbd_resync, + TP_PROTO(struct drbd_conf *mdev, int level, const char *fmt, va_list args), + TP_ARGS(mdev, level, fmt, args)); + +#endif diff --git a/drivers/block/drbd/drbd_vli.h b/drivers/block/drbd/drbd_vli.h new file mode 100644 index 000000000000..fc824006e721 --- /dev/null +++ b/drivers/block/drbd/drbd_vli.h @@ -0,0 +1,351 @@ +/* +-*- linux-c -*- + drbd_receiver.c + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner . + Copyright (C) 2002-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + */ + +#ifndef _DRBD_VLI_H +#define _DRBD_VLI_H + +/* + * At a granularity of 4KiB storage represented per bit, + * and stroage sizes of several TiB, + * and possibly small-bandwidth replication, + * the bitmap transfer time can take much too long, + * if transmitted in plain text. + * + * We try to reduce the transfered bitmap information + * by encoding runlengths of bit polarity. + * + * We never actually need to encode a "zero" (runlengths are positive). + * But then we have to store the value of the first bit. + * The first bit of information thus shall encode if the first runlength + * gives the number of set or unset bits. 
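+ *
+ * Illustrative example (not part of the code): for the bitmap
+ * prefix 0000011110110..., the transfer would start with "first
+ * run counts unset bits" followed by the runlengths 5, 4, 1, 2, ...;
+ * large uniform areas thus collapse into single runlength values.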
+ * + * We assume that large areas are either completely set or unset, + * which gives good compression with any runlength method, + * even when encoding the runlength as fixed size 32bit/64bit integers. + * + * Still, there may be areas where the polarity flips every few bits, + * and encoding the runlength sequence of those areas with fix size + * integers would be much worse than plaintext. + * + * We want to encode small runlength values with minimum code length, + * while still being able to encode a Huge run of all zeros. + * + * Thus we need a Variable Length Integer encoding, VLI. + * + * For some cases, we produce more code bits than plaintext input. + * We need to send incompressible chunks as plaintext, skip over them + * and then see if the next chunk compresses better. + * + * We don't care too much about "excellent" compression ratio for large + * runlengths (all set/all clear): whether we achieve a factor of 100 + * or 1000 is not that much of an issue. + * We do not want to waste too much on short runlengths in the "noisy" + * parts of the bitmap, though. + * + * There are endless variants of VLI, we experimented with: + * * simple byte-based + * * various bit based with different code word length. + * + * To avoid yet an other configuration parameter (choice of bitmap compression + * algorithm) which was difficult to explain and tune, we just chose the one + * variant that turned out best in all test cases. + * Based on real world usage patterns, with device sizes ranging from a few GiB + * to several TiB, file server/mailserver/webserver/mysql/postgress, + * mostly idle to really busy, the all time winner (though sometimes only + * marginally better) is: + */ + +/* + * encoding is "visualised" as + * __little endian__ bitstream, least significant bit first (left most) + * + * this particular encoding is chosen so that the prefix code + * starts as unary encoding the level, then modified so that + * 10 levels can be described in 8bit, with minimal overhead + * for the smaller levels. + * + * Number of data bits follow fibonacci sequence, with the exception of the + * last level (+1 data bit, so it makes 64bit total). The only worse code when + * encoding bit polarity runlength is 1 plain bits => 2 code bits. +prefix data bits max val Nº data bits +0 x 0x2 1 +10 x 0x4 1 +110 xx 0x8 2 +1110 xxx 0x10 3 +11110 xxx xx 0x30 5 +111110 xx xxxxxx 0x130 8 +11111100 xxxxxxxx xxxxx 0x2130 13 +11111110 xxxxxxxx xxxxxxxx xxxxx 0x202130 21 +11111101 xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xx 0x400202130 34 +11111111 xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 56 + * maximum encodable value: 0x100000400202130 == 2**56 + some */ + +/* compression "table": + transmitted x 0.29 + as plaintext x ........................ + x ........................ + x ........................ + x 0.59 0.21........................ + x ........................................................ + x .. c ................................................... + x 0.44.. o ................................................... + x .......... d ................................................... + x .......... e ................................................... + X............. ................................................... + x.............. b ................................................... +2.0x............... i ................................................... + #X................ t ................................................... + #................. s ........................... 
plain bits .......... +-+----------------------------------------------------------------------- + 1 16 32 64 +*/ + +/* LEVEL: (total bits, prefix bits, prefix value), + * sorted ascending by number of total bits. + * The rest of the code table is calculated at compiletime from this. */ + +/* fibonacci data 1, 1, ... */ +#define VLI_L_1_1() do { \ + LEVEL( 2, 1, 0x00); \ + LEVEL( 3, 2, 0x01); \ + LEVEL( 5, 3, 0x03); \ + LEVEL( 7, 4, 0x07); \ + LEVEL(10, 5, 0x0f); \ + LEVEL(14, 6, 0x1f); \ + LEVEL(21, 8, 0x3f); \ + LEVEL(29, 8, 0x7f); \ + LEVEL(42, 8, 0xbf); \ + LEVEL(64, 8, 0xff); \ + } while (0) + +/* finds a suitable level to decode the least significant part of in. + * returns number of bits consumed. + * + * BUG() for bad input, as that would mean a buggy code table. */ +static inline int vli_decode_bits(u64 *out, const u64 in) +{ + u64 adj = 1; + +#define LEVEL(t,b,v) \ + do { \ + if ((in & ((1 << b) -1)) == v) { \ + *out = ((in & ((~0ULL) >> (64-t))) >> b) + adj; \ + return t; \ + } \ + adj += 1ULL << (t - b); \ + } while (0) + + VLI_L_1_1(); + + /* NOT REACHED, if VLI_LEVELS code table is defined properly */ + BUG(); +#undef LEVEL +} + +/* return number of code bits needed, + * or negative error number */ +static inline int __vli_encode_bits(u64 *out, const u64 in) +{ + u64 max = 0; + u64 adj = 1; + + if (in == 0) + return -EINVAL; + +#define LEVEL(t,b,v) do { \ + max += 1ULL << (t - b); \ + if (in <= max) { \ + if (out) \ + *out = ((in - adj) << b) | v; \ + return t; \ + } \ + adj = max + 1; \ + } while (0) + + VLI_L_1_1(); + + return -EOVERFLOW; +#undef LEVEL +} + +#undef VLI_L_1_1 + +/* code from here down is independend of actually used bit code */ + +/* + * Code length is determined by some unique (e.g. unary) prefix. + * This encodes arbitrary bit length, not whole bytes: we have a bit-stream, + * not a byte stream. + */ + +/* for the bitstream, we need a cursor */ +struct bitstream_cursor { + /* the current byte */ + u8 *b; + /* the current bit within *b, nomalized: 0..7 */ + unsigned int bit; +}; + +/* initialize cursor to point to first bit of stream */ +static inline void bitstream_cursor_reset(struct bitstream_cursor *cur, void *s) +{ + cur->b = s; + cur->bit = 0; +} + +/* advance cursor by that many bits; maximum expected input value: 64, + * but depending on VLI implementation, it may be more. */ +static inline void bitstream_cursor_advance(struct bitstream_cursor *cur, unsigned int bits) +{ + bits += cur->bit; + cur->b = cur->b + (bits >> 3); + cur->bit = bits & 7; +} + +/* the bitstream itself knows its length */ +struct bitstream { + struct bitstream_cursor cur; + unsigned char *buf; + size_t buf_len; /* in bytes */ + + /* for input stream: + * number of trailing 0 bits for padding + * total number of valid bits in stream: buf_len * 8 - pad_bits */ + unsigned int pad_bits; +}; + +static inline void bitstream_init(struct bitstream *bs, void *s, size_t len, unsigned int pad_bits) +{ + bs->buf = s; + bs->buf_len = len; + bs->pad_bits = pad_bits; + bitstream_cursor_reset(&bs->cur, bs->buf); +} + +static inline void bitstream_rewind(struct bitstream *bs) +{ + bitstream_cursor_reset(&bs->cur, bs->buf); + memset(bs->buf, 0, bs->buf_len); +} + +/* Put (at most 64) least significant bits of val into bitstream, and advance cursor. + * Ignores "pad_bits". + * Returns zero if bits == 0 (nothing to do). + * Returns number of bits used if successful. + * + * If there is not enough room left in bitstream, + * leaves bitstream unchanged and returns -ENOBUFS. 
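+ *
+ * Minimal usage sketch (illustrative only; the buffer must be
+ * zeroed, since we only ever OR bits in):
+ *
+ *	unsigned char buf[128] = { 0 };
+ *	struct bitstream bs;
+ *	u64 code;
+ *	int bits = __vli_encode_bits(&code, 42);
+ *
+ *	bitstream_init(&bs, buf, sizeof(buf), 0);
+ *	if (bits > 0)
+ *		bitstream_put_bits(&bs, code, bits);
+ *
+ * vli_encode_bits() below combines the encode and put steps.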
+ */ +static inline int bitstream_put_bits(struct bitstream *bs, u64 val, const unsigned int bits) +{ + unsigned char *b = bs->cur.b; + unsigned int tmp; + + if (bits == 0) + return 0; + + if ((bs->cur.b + ((bs->cur.bit + bits -1) >> 3)) - bs->buf >= bs->buf_len) + return -ENOBUFS; + + /* paranoia: strip off hi bits; they should not be set anyways. */ + if (bits < 64) + val &= ~0ULL >> (64 - bits); + + *b++ |= (val & 0xff) << bs->cur.bit; + + for (tmp = 8 - bs->cur.bit; tmp < bits; tmp += 8) + *b++ |= (val >> tmp) & 0xff; + + bitstream_cursor_advance(&bs->cur, bits); + return bits; +} + +/* Fetch (at most 64) bits from bitstream into *out, and advance cursor. + * + * If more than 64 bits are requested, returns -EINVAL and leave *out unchanged. + * + * If there are less than the requested number of valid bits left in the + * bitstream, still fetches all available bits. + * + * Returns number of actually fetched bits. + */ +static inline int bitstream_get_bits(struct bitstream *bs, u64 *out, int bits) +{ + u64 val; + unsigned int n; + + if (bits > 64) + return -EINVAL; + + if (bs->cur.b + ((bs->cur.bit + bs->pad_bits + bits -1) >> 3) - bs->buf >= bs->buf_len) + bits = ((bs->buf_len - (bs->cur.b - bs->buf)) << 3) + - bs->cur.bit - bs->pad_bits; + + if (bits == 0) { + *out = 0; + return 0; + } + + /* get the high bits */ + val = 0; + n = (bs->cur.bit + bits + 7) >> 3; + /* n may be at most 9, if cur.bit + bits > 64 */ + /* which means this copies at most 8 byte */ + if (n) { + memcpy(&val, bs->cur.b+1, n - 1); + val = le64_to_cpu(val) << (8 - bs->cur.bit); + } + + /* we still need the low bits */ + val |= bs->cur.b[0] >> bs->cur.bit; + + /* and mask out bits we don't want */ + val &= ~0ULL >> (64 - bits); + + bitstream_cursor_advance(&bs->cur, bits); + *out = val; + + return bits; +} + +/* encodes @in as vli into @bs; + + * return values + * > 0: number of bits successfully stored in bitstream + * -ENOBUFS @bs is full + * -EINVAL input zero (invalid) + * -EOVERFLOW input too large for this vli code (invalid) + */ +static inline int vli_encode_bits(struct bitstream *bs, u64 in) +{ + u64 code = code; + int bits = __vli_encode_bits(&code, in); + + if (bits <= 0) + return bits; + + return bitstream_put_bits(bs, code, bits); +} + +#endif diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c new file mode 100644 index 000000000000..212e9545e634 --- /dev/null +++ b/drivers/block/drbd/drbd_worker.c @@ -0,0 +1,1529 @@ +/* + drbd_worker.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 1999-2008, Philipp Reisner . + Copyright (C) 2002-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. 
+ + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "drbd_int.h" +#include "drbd_req.h" +#include "drbd_tracing.h" + +#define SLEEP_TIME (HZ/10) + +static int w_make_ov_request(struct drbd_conf *mdev, struct drbd_work *w, int cancel); + + + +/* defined here: + drbd_md_io_complete + drbd_endio_write_sec + drbd_endio_read_sec + drbd_endio_pri + + * more endio handlers: + atodb_endio in drbd_actlog.c + drbd_bm_async_io_complete in drbd_bitmap.c + + * For all these callbacks, note the following: + * The callbacks will be called in irq context by the IDE drivers, + * and in Softirqs/Tasklets/BH context by the SCSI drivers. + * Try to get the locking right :) + * + */ + + +/* About the global_state_lock + Each state transition on an device holds a read lock. In case we have + to evaluate the sync after dependencies, we grab a write lock, because + we need stable states on all devices for that. */ +rwlock_t global_state_lock; + +/* used for synchronous meta data and bitmap IO + * submitted by drbd_md_sync_page_io() + */ +void drbd_md_io_complete(struct bio *bio, int error) +{ + struct drbd_md_io *md_io; + + md_io = (struct drbd_md_io *)bio->bi_private; + md_io->error = error; + + trace_drbd_bio(md_io->mdev, "Md", bio, 1, NULL); + + complete(&md_io->event); +} + +/* reads on behalf of the partner, + * "submitted" by the receiver + */ +void drbd_endio_read_sec(struct bio *bio, int error) __releases(local) +{ + unsigned long flags = 0; + struct drbd_epoch_entry *e = NULL; + struct drbd_conf *mdev; + int uptodate = bio_flagged(bio, BIO_UPTODATE); + + e = bio->bi_private; + mdev = e->mdev; + + if (error) + dev_warn(DEV, "read: error=%d s=%llus\n", error, + (unsigned long long)e->sector); + if (!error && !uptodate) { + dev_warn(DEV, "read: setting error to -EIO s=%llus\n", + (unsigned long long)e->sector); + /* strange behavior of some lower level drivers... + * fail the request by clearing the uptodate flag, + * but do not return any error?! */ + error = -EIO; + } + + D_ASSERT(e->block_id != ID_VACANT); + + trace_drbd_bio(mdev, "Sec", bio, 1, NULL); + + spin_lock_irqsave(&mdev->req_lock, flags); + mdev->read_cnt += e->size >> 9; + list_del(&e->w.list); + if (list_empty(&mdev->read_ee)) + wake_up(&mdev->ee_wait); + spin_unlock_irqrestore(&mdev->req_lock, flags); + + drbd_chk_io_error(mdev, error, FALSE); + drbd_queue_work(&mdev->data.work, &e->w); + put_ldev(mdev); + + trace_drbd_ee(mdev, e, "read completed"); +} + +/* writes on behalf of the partner, or resync writes, + * "submitted" by the receiver. + */ +void drbd_endio_write_sec(struct bio *bio, int error) __releases(local) +{ + unsigned long flags = 0; + struct drbd_epoch_entry *e = NULL; + struct drbd_conf *mdev; + sector_t e_sector; + int do_wake; + int is_syncer_req; + int do_al_complete_io; + int uptodate = bio_flagged(bio, BIO_UPTODATE); + int is_barrier = bio_rw_flagged(bio, BIO_RW_BARRIER); + + e = bio->bi_private; + mdev = e->mdev; + + if (error) + dev_warn(DEV, "write: error=%d s=%llus\n", error, + (unsigned long long)e->sector); + if (!error && !uptodate) { + dev_warn(DEV, "write: setting error to -EIO s=%llus\n", + (unsigned long long)e->sector); + /* strange behavior of some lower level drivers... + * fail the request by clearing the uptodate flag, + * but do not return any error?! 
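+	 * normalizing this case to -EIO keeps the error handling
+	 * below uniform with the read and primary completion paths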
*/ + error = -EIO; + } + + /* error == -ENOTSUPP would be a better test, + * alas it is not reliable */ + if (error && is_barrier && e->flags & EE_IS_BARRIER) { + drbd_bump_write_ordering(mdev, WO_bdev_flush); + spin_lock_irqsave(&mdev->req_lock, flags); + list_del(&e->w.list); + e->w.cb = w_e_reissue; + /* put_ldev actually happens below, once we come here again. */ + __release(local); + spin_unlock_irqrestore(&mdev->req_lock, flags); + drbd_queue_work(&mdev->data.work, &e->w); + return; + } + + D_ASSERT(e->block_id != ID_VACANT); + + trace_drbd_bio(mdev, "Sec", bio, 1, NULL); + + spin_lock_irqsave(&mdev->req_lock, flags); + mdev->writ_cnt += e->size >> 9; + is_syncer_req = is_syncer_block_id(e->block_id); + + /* after we moved e to done_ee, + * we may no longer access it, + * it may be freed/reused already! + * (as soon as we release the req_lock) */ + e_sector = e->sector; + do_al_complete_io = e->flags & EE_CALL_AL_COMPLETE_IO; + + list_del(&e->w.list); /* has been on active_ee or sync_ee */ + list_add_tail(&e->w.list, &mdev->done_ee); + + trace_drbd_ee(mdev, e, "write completed"); + + /* No hlist_del_init(&e->colision) here, we did not send the Ack yet, + * neither did we wake possibly waiting conflicting requests. + * done from "drbd_process_done_ee" within the appropriate w.cb + * (e_end_block/e_end_resync_block) or from _drbd_clear_done_ee */ + + do_wake = is_syncer_req + ? list_empty(&mdev->sync_ee) + : list_empty(&mdev->active_ee); + + if (error) + __drbd_chk_io_error(mdev, FALSE); + spin_unlock_irqrestore(&mdev->req_lock, flags); + + if (is_syncer_req) + drbd_rs_complete_io(mdev, e_sector); + + if (do_wake) + wake_up(&mdev->ee_wait); + + if (do_al_complete_io) + drbd_al_complete_io(mdev, e_sector); + + wake_asender(mdev); + put_ldev(mdev); + +} + +/* read, readA or write requests on R_PRIMARY coming from drbd_make_request + */ +void drbd_endio_pri(struct bio *bio, int error) +{ + unsigned long flags; + struct drbd_request *req = bio->bi_private; + struct drbd_conf *mdev = req->mdev; + struct bio_and_error m; + enum drbd_req_event what; + int uptodate = bio_flagged(bio, BIO_UPTODATE); + + if (error) + dev_warn(DEV, "p %s: error=%d\n", + bio_data_dir(bio) == WRITE ? "write" : "read", error); + if (!error && !uptodate) { + dev_warn(DEV, "p %s: setting error to -EIO\n", + bio_data_dir(bio) == WRITE ? "write" : "read"); + /* strange behavior of some lower level drivers... + * fail the request by clearing the uptodate flag, + * but do not return any error?! */ + error = -EIO; + } + + trace_drbd_bio(mdev, "Pri", bio, 1, NULL); + + /* to avoid recursion in __req_mod */ + if (unlikely(error)) { + what = (bio_data_dir(bio) == WRITE) + ? write_completed_with_error + : (bio_rw(bio) == READA) + ? read_completed_with_error + : read_ahead_completed_with_error; + } else + what = completed_ok; + + bio_put(req->private_bio); + req->private_bio = ERR_PTR(error); + + spin_lock_irqsave(&mdev->req_lock, flags); + __req_mod(req, what, &m); + spin_unlock_irqrestore(&mdev->req_lock, flags); + + if (m.bio) + complete_master_bio(mdev, &m); +} + +int w_io_error(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_request *req = container_of(w, struct drbd_request, w); + + /* NOTE: mdev->ldev can be NULL by the time we get here! 
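+	 * (the disk may have been detached between queueing this
+	 * work item and running it, hence the disabled assert below)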
*/ + /* D_ASSERT(mdev->ldev->dc.on_io_error != EP_PASS_ON); */ + + /* the only way this callback is scheduled is from _req_may_be_done, + * when it is done and had a local write error, see comments there */ + drbd_req_free(req); + + return TRUE; +} + +int w_read_retry_remote(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_request *req = container_of(w, struct drbd_request, w); + + /* We should not detach for read io-error, + * but try to WRITE the P_DATA_REPLY to the failed location, + * to give the disk the chance to relocate that block */ + + spin_lock_irq(&mdev->req_lock); + if (cancel || + mdev->state.conn < C_CONNECTED || + mdev->state.pdsk <= D_INCONSISTENT) { + _req_mod(req, send_canceled); + spin_unlock_irq(&mdev->req_lock); + dev_alert(DEV, "WE ARE LOST. Local IO failure, no peer.\n"); + return 1; + } + spin_unlock_irq(&mdev->req_lock); + + return w_send_read_req(mdev, w, 0); +} + +int w_resync_inactive(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + ERR_IF(cancel) return 1; + dev_err(DEV, "resync inactive, but callback triggered??\n"); + return 1; /* Simply ignore this! */ +} + +void drbd_csum(struct drbd_conf *mdev, struct crypto_hash *tfm, struct bio *bio, void *digest) +{ + struct hash_desc desc; + struct scatterlist sg; + struct bio_vec *bvec; + int i; + + desc.tfm = tfm; + desc.flags = 0; + + sg_init_table(&sg, 1); + crypto_hash_init(&desc); + + __bio_for_each_segment(bvec, bio, i, 0) { + sg_set_page(&sg, bvec->bv_page, bvec->bv_len, bvec->bv_offset); + crypto_hash_update(&desc, &sg, sg.length); + } + crypto_hash_final(&desc, digest); +} + +static int w_e_send_csum(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_epoch_entry *e = container_of(w, struct drbd_epoch_entry, w); + int digest_size; + void *digest; + int ok; + + D_ASSERT(e->block_id == DRBD_MAGIC + 0xbeef); + + if (unlikely(cancel)) { + drbd_free_ee(mdev, e); + return 1; + } + + if (likely(drbd_bio_uptodate(e->private_bio))) { + digest_size = crypto_hash_digestsize(mdev->csums_tfm); + digest = kmalloc(digest_size, GFP_NOIO); + if (digest) { + drbd_csum(mdev, mdev->csums_tfm, e->private_bio, digest); + + inc_rs_pending(mdev); + ok = drbd_send_drequest_csum(mdev, + e->sector, + e->size, + digest, + digest_size, + P_CSUM_RS_REQUEST); + kfree(digest); + } else { + dev_err(DEV, "kmalloc() of digest failed.\n"); + ok = 0; + } + } else + ok = 1; + + drbd_free_ee(mdev, e); + + if (unlikely(!ok)) + dev_err(DEV, "drbd_send_drequest(..., csum) failed\n"); + return ok; +} + +#define GFP_TRY (__GFP_HIGHMEM | __GFP_NOWARN) + +static int read_for_csum(struct drbd_conf *mdev, sector_t sector, int size) +{ + struct drbd_epoch_entry *e; + + if (!get_ldev(mdev)) + return 0; + + /* GFP_TRY, because if there is no memory available right now, this may + * be rescheduled for later. It is "only" background resync, after all. 
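+	 * GFP_TRY is __GFP_HIGHMEM | __GFP_NOWARN (see above), so the
+	 * allocation fails quietly instead of blocking or logging.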
*/ + e = drbd_alloc_ee(mdev, DRBD_MAGIC+0xbeef, sector, size, GFP_TRY); + if (!e) { + put_ldev(mdev); + return 2; + } + + spin_lock_irq(&mdev->req_lock); + list_add(&e->w.list, &mdev->read_ee); + spin_unlock_irq(&mdev->req_lock); + + e->private_bio->bi_end_io = drbd_endio_read_sec; + e->private_bio->bi_rw = READ; + e->w.cb = w_e_send_csum; + + mdev->read_cnt += size >> 9; + drbd_generic_make_request(mdev, DRBD_FAULT_RS_RD, e->private_bio); + + return 1; +} + +void resync_timer_fn(unsigned long data) +{ + unsigned long flags; + struct drbd_conf *mdev = (struct drbd_conf *) data; + int queue; + + spin_lock_irqsave(&mdev->req_lock, flags); + + if (likely(!test_and_clear_bit(STOP_SYNC_TIMER, &mdev->flags))) { + queue = 1; + if (mdev->state.conn == C_VERIFY_S) + mdev->resync_work.cb = w_make_ov_request; + else + mdev->resync_work.cb = w_make_resync_request; + } else { + queue = 0; + mdev->resync_work.cb = w_resync_inactive; + } + + spin_unlock_irqrestore(&mdev->req_lock, flags); + + /* harmless race: list_empty outside data.work.q_lock */ + if (list_empty(&mdev->resync_work.list) && queue) + drbd_queue_work(&mdev->data.work, &mdev->resync_work); +} + +int w_make_resync_request(struct drbd_conf *mdev, + struct drbd_work *w, int cancel) +{ + unsigned long bit; + sector_t sector; + const sector_t capacity = drbd_get_capacity(mdev->this_bdev); + int max_segment_size = queue_max_segment_size(mdev->rq_queue); + int number, i, size, pe, mx; + int align, queued, sndbuf; + + if (unlikely(cancel)) + return 1; + + if (unlikely(mdev->state.conn < C_CONNECTED)) { + dev_err(DEV, "Confused in w_make_resync_request()! cstate < Connected"); + return 0; + } + + if (mdev->state.conn != C_SYNC_TARGET) + dev_err(DEV, "%s in w_make_resync_request\n", + drbd_conn_str(mdev->state.conn)); + + if (!get_ldev(mdev)) { + /* Since we only need to access mdev->rsync a + get_ldev_if_state(mdev,D_FAILED) would be sufficient, but + to continue resync with a broken disk makes no sense at + all */ + dev_err(DEV, "Disk broke down during resync!\n"); + mdev->resync_work.cb = w_resync_inactive; + return 1; + } + + number = SLEEP_TIME * mdev->sync_conf.rate / ((BM_BLOCK_SIZE/1024)*HZ); + pe = atomic_read(&mdev->rs_pending_cnt); + + mutex_lock(&mdev->data.mutex); + if (mdev->data.socket) + mx = mdev->data.socket->sk->sk_rcvbuf / sizeof(struct p_block_req); + else + mx = 1; + mutex_unlock(&mdev->data.mutex); + + /* For resync rates >160MB/sec, allow more pending RS requests */ + if (number > mx) + mx = number; + + /* Limit the number of pending RS requests to no more than the peer's receive buffer */ + if ((pe + number) > mx) { + number = mx - pe; + } + + for (i = 0; i < number; i++) { + /* Stop generating RS requests, when half of the send buffer is filled */ + mutex_lock(&mdev->data.mutex); + if (mdev->data.socket) { + queued = mdev->data.socket->sk->sk_wmem_queued; + sndbuf = mdev->data.socket->sk->sk_sndbuf; + } else { + queued = 1; + sndbuf = 0; + } + mutex_unlock(&mdev->data.mutex); + if (queued > sndbuf / 2) + goto requeue; + +next_sector: + size = BM_BLOCK_SIZE; + bit = drbd_bm_find_next(mdev, mdev->bm_resync_fo); + + if (bit == -1UL) { + mdev->bm_resync_fo = drbd_bm_bits(mdev); + mdev->resync_work.cb = w_resync_inactive; + put_ldev(mdev); + return 1; + } + + sector = BM_BIT_TO_SECT(bit); + + if (drbd_try_rs_begin_io(mdev, sector)) { + mdev->bm_resync_fo = bit; + goto requeue; + } + mdev->bm_resync_fo = bit + 1; + + if (unlikely(drbd_bm_test_bit(mdev, bit) == 0)) { + drbd_rs_complete_io(mdev, sector); + goto next_sector; + } + 
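+		/* the bit was dirty; next, try to extend the request
+		 * beyond a single 4KiB bitmap block where possible: */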
+#if DRBD_MAX_SEGMENT_SIZE > BM_BLOCK_SIZE + /* try to find some adjacent bits. + * we stop if we have already the maximum req size. + * + * Additionally always align bigger requests, in order to + * be prepared for all stripe sizes of software RAIDs. + * + * we _do_ care about the agreed-upon q->max_segment_size + * here, as splitting up the requests on the other side is more + * difficult. the consequence is, that on lvm and md and other + * "indirect" devices, this is dead code, since + * q->max_segment_size will be PAGE_SIZE. + */ + align = 1; + for (;;) { + if (size + BM_BLOCK_SIZE > max_segment_size) + break; + + /* Be always aligned */ + if (sector & ((1<<(align+3))-1)) + break; + + /* do not cross extent boundaries */ + if (((bit+1) & BM_BLOCKS_PER_BM_EXT_MASK) == 0) + break; + /* now, is it actually dirty, after all? + * caution, drbd_bm_test_bit is tri-state for some + * obscure reason; ( b == 0 ) would get the out-of-band + * only accidentally right because of the "oddly sized" + * adjustment below */ + if (drbd_bm_test_bit(mdev, bit+1) != 1) + break; + bit++; + size += BM_BLOCK_SIZE; + if ((BM_BLOCK_SIZE << align) <= size) + align++; + i++; + } + /* if we merged some, + * reset the offset to start the next drbd_bm_find_next from */ + if (size > BM_BLOCK_SIZE) + mdev->bm_resync_fo = bit + 1; +#endif + + /* adjust very last sectors, in case we are oddly sized */ + if (sector + (size>>9) > capacity) + size = (capacity-sector)<<9; + if (mdev->agreed_pro_version >= 89 && mdev->csums_tfm) { + switch (read_for_csum(mdev, sector, size)) { + case 0: /* Disk failure*/ + put_ldev(mdev); + return 0; + case 2: /* Allocation failed */ + drbd_rs_complete_io(mdev, sector); + mdev->bm_resync_fo = BM_SECT_TO_BIT(sector); + goto requeue; + /* case 1: everything ok */ + } + } else { + inc_rs_pending(mdev); + if (!drbd_send_drequest(mdev, P_RS_DATA_REQUEST, + sector, size, ID_SYNCER)) { + dev_err(DEV, "drbd_send_drequest() failed, aborting...\n"); + dec_rs_pending(mdev); + put_ldev(mdev); + return 0; + } + } + } + + if (mdev->bm_resync_fo >= drbd_bm_bits(mdev)) { + /* last syncer _request_ was sent, + * but the P_RS_DATA_REPLY not yet received. sync will end (and + * next sync group will resume), as soon as we receive the last + * resync data block, and the last bit is cleared. + * until then resync "work" is "inactive" ... + */ + mdev->resync_work.cb = w_resync_inactive; + put_ldev(mdev); + return 1; + } + + requeue: + mod_timer(&mdev->resync_timer, jiffies + SLEEP_TIME); + put_ldev(mdev); + return 1; +} + +static int w_make_ov_request(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + int number, i, size; + sector_t sector; + const sector_t capacity = drbd_get_capacity(mdev->this_bdev); + + if (unlikely(cancel)) + return 1; + + if (unlikely(mdev->state.conn < C_CONNECTED)) { + dev_err(DEV, "Confused in w_make_ov_request()! 
cstate < Connected"); + return 0; + } + + number = SLEEP_TIME*mdev->sync_conf.rate / ((BM_BLOCK_SIZE/1024)*HZ); + if (atomic_read(&mdev->rs_pending_cnt) > number) + goto requeue; + + number -= atomic_read(&mdev->rs_pending_cnt); + + sector = mdev->ov_position; + for (i = 0; i < number; i++) { + if (sector >= capacity) { + mdev->resync_work.cb = w_resync_inactive; + return 1; + } + + size = BM_BLOCK_SIZE; + + if (drbd_try_rs_begin_io(mdev, sector)) { + mdev->ov_position = sector; + goto requeue; + } + + if (sector + (size>>9) > capacity) + size = (capacity-sector)<<9; + + inc_rs_pending(mdev); + if (!drbd_send_ov_request(mdev, sector, size)) { + dec_rs_pending(mdev); + return 0; + } + sector += BM_SECT_PER_BIT; + } + mdev->ov_position = sector; + + requeue: + mod_timer(&mdev->resync_timer, jiffies + SLEEP_TIME); + return 1; +} + + +int w_ov_finished(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + kfree(w); + ov_oos_print(mdev); + drbd_resync_finished(mdev); + + return 1; +} + +static int w_resync_finished(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + kfree(w); + + drbd_resync_finished(mdev); + + return 1; +} + +int drbd_resync_finished(struct drbd_conf *mdev) +{ + unsigned long db, dt, dbdt; + unsigned long n_oos; + union drbd_state os, ns; + struct drbd_work *w; + char *khelper_cmd = NULL; + + /* Remove all elements from the resync LRU. Since future actions + * might set bits in the (main) bitmap, then the entries in the + * resync LRU would be wrong. */ + if (drbd_rs_del_all(mdev)) { + /* In case this is not possible now, most probably because + * there are P_RS_DATA_REPLY Packets lingering on the worker's + * queue (or even the read operations for those packets + * is not finished by now). Retry in 100ms. */ + + drbd_kick_lo(mdev); + __set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(HZ / 10); + w = kmalloc(sizeof(struct drbd_work), GFP_ATOMIC); + if (w) { + w->cb = w_resync_finished; + drbd_queue_work(&mdev->data.work, w); + return 1; + } + dev_err(DEV, "Warn failed to drbd_rs_del_all() and to kmalloc(w).\n"); + } + + dt = (jiffies - mdev->rs_start - mdev->rs_paused) / HZ; + if (dt <= 0) + dt = 1; + db = mdev->rs_total; + dbdt = Bit2KB(db/dt); + mdev->rs_paused /= HZ; + + if (!get_ldev(mdev)) + goto out; + + spin_lock_irq(&mdev->req_lock); + os = mdev->state; + + /* This protects us against multiple calls (that can happen in the presence + of application IO), and against connectivity loss just before we arrive here. */ + if (os.conn <= C_CONNECTED) + goto out_unlock; + + ns = os; + ns.conn = C_CONNECTED; + + dev_info(DEV, "%s done (total %lu sec; paused %lu sec; %lu K/sec)\n", + (os.conn == C_VERIFY_S || os.conn == C_VERIFY_T) ? + "Online verify " : "Resync", + dt + mdev->rs_paused, mdev->rs_paused, dbdt); + + n_oos = drbd_bm_total_weight(mdev); + + if (os.conn == C_VERIFY_S || os.conn == C_VERIFY_T) { + if (n_oos) { + dev_alert(DEV, "Online verify found %lu %dk block out of sync!\n", + n_oos, Bit2KB(1)); + khelper_cmd = "out-of-sync"; + } + } else { + D_ASSERT((n_oos - mdev->rs_failed) == 0); + + if (os.conn == C_SYNC_TARGET || os.conn == C_PAUSED_SYNC_T) + khelper_cmd = "after-resync-target"; + + if (mdev->csums_tfm && mdev->rs_total) { + const unsigned long s = mdev->rs_same_csum; + const unsigned long t = mdev->rs_total; + const int ratio = + (t == 0) ? 0 : + (t < 100000) ? 
((s*100)/t) : (s/(t/100)); + dev_info(DEV, "%u %% had equal check sums, eliminated: %luK; " + "transferred %luK total %luK\n", + ratio, + Bit2KB(mdev->rs_same_csum), + Bit2KB(mdev->rs_total - mdev->rs_same_csum), + Bit2KB(mdev->rs_total)); + } + } + + if (mdev->rs_failed) { + dev_info(DEV, " %lu failed blocks\n", mdev->rs_failed); + + if (os.conn == C_SYNC_TARGET || os.conn == C_PAUSED_SYNC_T) { + ns.disk = D_INCONSISTENT; + ns.pdsk = D_UP_TO_DATE; + } else { + ns.disk = D_UP_TO_DATE; + ns.pdsk = D_INCONSISTENT; + } + } else { + ns.disk = D_UP_TO_DATE; + ns.pdsk = D_UP_TO_DATE; + + if (os.conn == C_SYNC_TARGET || os.conn == C_PAUSED_SYNC_T) { + if (mdev->p_uuid) { + int i; + for (i = UI_BITMAP ; i <= UI_HISTORY_END ; i++) + _drbd_uuid_set(mdev, i, mdev->p_uuid[i]); + drbd_uuid_set(mdev, UI_BITMAP, mdev->ldev->md.uuid[UI_CURRENT]); + _drbd_uuid_set(mdev, UI_CURRENT, mdev->p_uuid[UI_CURRENT]); + } else { + dev_err(DEV, "mdev->p_uuid is NULL! BUG\n"); + } + } + + drbd_uuid_set_bm(mdev, 0UL); + + if (mdev->p_uuid) { + /* Now the two UUID sets are equal, update what we + * know of the peer. */ + int i; + for (i = UI_CURRENT ; i <= UI_HISTORY_END ; i++) + mdev->p_uuid[i] = mdev->ldev->md.uuid[i]; + } + } + + _drbd_set_state(mdev, ns, CS_VERBOSE, NULL); +out_unlock: + spin_unlock_irq(&mdev->req_lock); + put_ldev(mdev); +out: + mdev->rs_total = 0; + mdev->rs_failed = 0; + mdev->rs_paused = 0; + mdev->ov_start_sector = 0; + + if (test_and_clear_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags)) { + dev_warn(DEV, "Writing the whole bitmap, due to failed kmalloc\n"); + drbd_queue_bitmap_io(mdev, &drbd_bm_write, NULL, "write from resync_finished"); + } + + if (khelper_cmd) + drbd_khelper(mdev, khelper_cmd); + + return 1; +} + +/* helper */ +static void move_to_net_ee_or_free(struct drbd_conf *mdev, struct drbd_epoch_entry *e) +{ + if (drbd_bio_has_active_page(e->private_bio)) { + /* This might happen if sendpage() has not finished */ + spin_lock_irq(&mdev->req_lock); + list_add_tail(&e->w.list, &mdev->net_ee); + spin_unlock_irq(&mdev->req_lock); + } else + drbd_free_ee(mdev, e); +} + +/** + * w_e_end_data_req() - Worker callback, to send a P_DATA_REPLY packet in response to a P_DATA_REQUEST + * @mdev: DRBD device. + * @w: work object. + * @cancel: The connection will be closed anyways + */ +int w_e_end_data_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_epoch_entry *e = container_of(w, struct drbd_epoch_entry, w); + int ok; + + if (unlikely(cancel)) { + drbd_free_ee(mdev, e); + dec_unacked(mdev); + return 1; + } + + if (likely(drbd_bio_uptodate(e->private_bio))) { + ok = drbd_send_block(mdev, P_DATA_REPLY, e); + } else { + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "Sending NegDReply. sector=%llus.\n", + (unsigned long long)e->sector); + + ok = drbd_send_ack(mdev, P_NEG_DREPLY, e); + } + + dec_unacked(mdev); + + move_to_net_ee_or_free(mdev, e); + + if (unlikely(!ok)) + dev_err(DEV, "drbd_send_block() failed\n"); + return ok; +} + +/** + * w_e_end_rsdata_req() - Worker callback to send a P_RS_DATA_REPLY packet in response to a P_RS_DATA_REQUESTRS + * @mdev: DRBD device. + * @w: work object. 
+ * @cancel: The connection will be closed anyways + */ +int w_e_end_rsdata_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_epoch_entry *e = container_of(w, struct drbd_epoch_entry, w); + int ok; + + if (unlikely(cancel)) { + drbd_free_ee(mdev, e); + dec_unacked(mdev); + return 1; + } + + if (get_ldev_if_state(mdev, D_FAILED)) { + drbd_rs_complete_io(mdev, e->sector); + put_ldev(mdev); + } + + if (likely(drbd_bio_uptodate(e->private_bio))) { + if (likely(mdev->state.pdsk >= D_INCONSISTENT)) { + inc_rs_pending(mdev); + ok = drbd_send_block(mdev, P_RS_DATA_REPLY, e); + } else { + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "Not sending RSDataReply, " + "partner DISKLESS!\n"); + ok = 1; + } + } else { + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "Sending NegRSDReply. sector %llus.\n", + (unsigned long long)e->sector); + + ok = drbd_send_ack(mdev, P_NEG_RS_DREPLY, e); + + /* update resync data with failure */ + drbd_rs_failed_io(mdev, e->sector, e->size); + } + + dec_unacked(mdev); + + move_to_net_ee_or_free(mdev, e); + + if (unlikely(!ok)) + dev_err(DEV, "drbd_send_block() failed\n"); + return ok; +} + +int w_e_end_csum_rs_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_epoch_entry *e = container_of(w, struct drbd_epoch_entry, w); + struct digest_info *di; + int digest_size; + void *digest = NULL; + int ok, eq = 0; + + if (unlikely(cancel)) { + drbd_free_ee(mdev, e); + dec_unacked(mdev); + return 1; + } + + drbd_rs_complete_io(mdev, e->sector); + + di = (struct digest_info *)(unsigned long)e->block_id; + + if (likely(drbd_bio_uptodate(e->private_bio))) { + /* quick hack to try to avoid a race against reconfiguration. + * a real fix would be much more involved, + * introducing more locking mechanisms */ + if (mdev->csums_tfm) { + digest_size = crypto_hash_digestsize(mdev->csums_tfm); + D_ASSERT(digest_size == di->digest_size); + digest = kmalloc(digest_size, GFP_NOIO); + } + if (digest) { + drbd_csum(mdev, mdev->csums_tfm, e->private_bio, digest); + eq = !memcmp(digest, di->digest, digest_size); + kfree(digest); + } + + if (eq) { + drbd_set_in_sync(mdev, e->sector, e->size); + mdev->rs_same_csum++; + ok = drbd_send_ack(mdev, P_RS_IS_IN_SYNC, e); + } else { + inc_rs_pending(mdev); + e->block_id = ID_SYNCER; + ok = drbd_send_block(mdev, P_RS_DATA_REPLY, e); + } + } else { + ok = drbd_send_ack(mdev, P_NEG_RS_DREPLY, e); + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "Sending NegDReply. I guess it gets messy.\n"); + } + + dec_unacked(mdev); + + kfree(di); + + move_to_net_ee_or_free(mdev, e); + + if (unlikely(!ok)) + dev_err(DEV, "drbd_send_block/ack() failed\n"); + return ok; +} + +int w_e_end_ov_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_epoch_entry *e = container_of(w, struct drbd_epoch_entry, w); + int digest_size; + void *digest; + int ok = 1; + + if (unlikely(cancel)) + goto out; + + if (unlikely(!drbd_bio_uptodate(e->private_bio))) + goto out; + + digest_size = crypto_hash_digestsize(mdev->verify_tfm); + /* FIXME if this allocation fails, online verify will not terminate! 
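+	 * (no P_OV_REPLY is sent in that case, so the peer keeps waiting)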
*/ + digest = kmalloc(digest_size, GFP_NOIO); + if (digest) { + drbd_csum(mdev, mdev->verify_tfm, e->private_bio, digest); + inc_rs_pending(mdev); + ok = drbd_send_drequest_csum(mdev, e->sector, e->size, + digest, digest_size, P_OV_REPLY); + if (!ok) + dec_rs_pending(mdev); + kfree(digest); + } + +out: + drbd_free_ee(mdev, e); + + dec_unacked(mdev); + + return ok; +} + +void drbd_ov_oos_found(struct drbd_conf *mdev, sector_t sector, int size) +{ + if (mdev->ov_last_oos_start + mdev->ov_last_oos_size == sector) { + mdev->ov_last_oos_size += size>>9; + } else { + mdev->ov_last_oos_start = sector; + mdev->ov_last_oos_size = size>>9; + } + drbd_set_out_of_sync(mdev, sector, size); + set_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags); +} + +int w_e_end_ov_reply(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_epoch_entry *e = container_of(w, struct drbd_epoch_entry, w); + struct digest_info *di; + int digest_size; + void *digest; + int ok, eq = 0; + + if (unlikely(cancel)) { + drbd_free_ee(mdev, e); + dec_unacked(mdev); + return 1; + } + + /* after "cancel", because after drbd_disconnect/drbd_rs_cancel_all + * the resync lru has been cleaned up already */ + drbd_rs_complete_io(mdev, e->sector); + + di = (struct digest_info *)(unsigned long)e->block_id; + + if (likely(drbd_bio_uptodate(e->private_bio))) { + digest_size = crypto_hash_digestsize(mdev->verify_tfm); + digest = kmalloc(digest_size, GFP_NOIO); + if (digest) { + drbd_csum(mdev, mdev->verify_tfm, e->private_bio, digest); + + D_ASSERT(digest_size == di->digest_size); + eq = !memcmp(digest, di->digest, digest_size); + kfree(digest); + } + } else { + ok = drbd_send_ack(mdev, P_NEG_RS_DREPLY, e); + if (__ratelimit(&drbd_ratelimit_state)) + dev_err(DEV, "Sending NegDReply. I guess it gets messy.\n"); + } + + dec_unacked(mdev); + + kfree(di); + + if (!eq) + drbd_ov_oos_found(mdev, e->sector, e->size); + else + ov_oos_print(mdev); + + ok = drbd_send_ack_ex(mdev, P_OV_RESULT, e->sector, e->size, + eq ? ID_IN_SYNC : ID_OUT_OF_SYNC); + + drbd_free_ee(mdev, e); + + if (--mdev->ov_left == 0) { + ov_oos_print(mdev); + drbd_resync_finished(mdev); + } + + return ok; +} + +int w_prev_work_done(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_wq_barrier *b = container_of(w, struct drbd_wq_barrier, w); + complete(&b->done); + return 1; +} + +int w_send_barrier(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_tl_epoch *b = container_of(w, struct drbd_tl_epoch, w); + struct p_barrier *p = &mdev->data.sbuf.barrier; + int ok = 1; + + /* really avoid racing with tl_clear. w.cb may have been referenced + * just before it was reassigned and re-queued, so double check that. + * actually, this race was harmless, since we only try to send the + * barrier packet here, and otherwise do nothing with the object. + * but compare with the head of w_clear_epoch */ + spin_lock_irq(&mdev->req_lock); + if (w->cb != w_send_barrier || mdev->state.conn < C_CONNECTED) + cancel = 1; + spin_unlock_irq(&mdev->req_lock); + if (cancel) + return 1; + + if (!drbd_get_data_sock(mdev)) + return 0; + p->barrier = b->br_number; + /* inc_ap_pending was done where this was queued. + * dec_ap_pending will be done in got_BarrierAck + * or (on connection loss) in w_clear_epoch. 
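+	 * either way the ap_pending count stays balanced, whether or
+	 * not the P_BARRIER_ACK ever arrives.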
*/ + ok = _drbd_send_cmd(mdev, mdev->data.socket, P_BARRIER, + (struct p_header *)p, sizeof(*p), 0); + drbd_put_data_sock(mdev); + + return ok; +} + +int w_send_write_hint(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + if (cancel) + return 1; + return drbd_send_short_cmd(mdev, P_UNPLUG_REMOTE); +} + +/** + * w_send_dblock() - Worker callback to send a P_DATA packet in order to mirror a write request + * @mdev: DRBD device. + * @w: work object. + * @cancel: The connection will be closed anyways + */ +int w_send_dblock(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_request *req = container_of(w, struct drbd_request, w); + int ok; + + if (unlikely(cancel)) { + req_mod(req, send_canceled); + return 1; + } + + ok = drbd_send_dblock(mdev, req); + req_mod(req, ok ? handed_over_to_network : send_failed); + + return ok; +} + +/** + * w_send_read_req() - Worker callback to send a read request (P_DATA_REQUEST) packet + * @mdev: DRBD device. + * @w: work object. + * @cancel: The connection will be closed anyways + */ +int w_send_read_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel) +{ + struct drbd_request *req = container_of(w, struct drbd_request, w); + int ok; + + if (unlikely(cancel)) { + req_mod(req, send_canceled); + return 1; + } + + ok = drbd_send_drequest(mdev, P_DATA_REQUEST, req->sector, req->size, + (unsigned long)req); + + if (!ok) { + /* ?? we set C_TIMEOUT or C_BROKEN_PIPE in drbd_send(); + * so this is probably redundant */ + if (mdev->state.conn >= C_CONNECTED) + drbd_force_state(mdev, NS(conn, C_NETWORK_FAILURE)); + } + req_mod(req, ok ? handed_over_to_network : send_failed); + + return ok; +} + +static int _drbd_may_sync_now(struct drbd_conf *mdev) +{ + struct drbd_conf *odev = mdev; + + while (1) { + if (odev->sync_conf.after == -1) + return 1; + odev = minor_to_mdev(odev->sync_conf.after); + ERR_IF(!odev) return 1; + if ((odev->state.conn >= C_SYNC_SOURCE && + odev->state.conn <= C_PAUSED_SYNC_T) || + odev->state.aftr_isp || odev->state.peer_isp || + odev->state.user_isp) + return 0; + } +} + +/** + * _drbd_pause_after() - Pause resync on all devices that may not resync now + * @mdev: DRBD device. + * + * Called from process context only (admin command and after_state_ch). + */ +static int _drbd_pause_after(struct drbd_conf *mdev) +{ + struct drbd_conf *odev; + int i, rv = 0; + + for (i = 0; i < minor_count; i++) { + odev = minor_to_mdev(i); + if (!odev) + continue; + if (odev->state.conn == C_STANDALONE && odev->state.disk == D_DISKLESS) + continue; + if (!_drbd_may_sync_now(odev)) + rv |= (__drbd_set_state(_NS(odev, aftr_isp, 1), CS_HARD, NULL) + != SS_NOTHING_TO_DO); + } + + return rv; +} + +/** + * _drbd_resume_next() - Resume resync on all devices that may resync now + * @mdev: DRBD device. + * + * Called from process context only (admin command and worker). 
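+ *
+ * All callers in this file take global_state_lock for writing first.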
+ */ +static int _drbd_resume_next(struct drbd_conf *mdev) +{ + struct drbd_conf *odev; + int i, rv = 0; + + for (i = 0; i < minor_count; i++) { + odev = minor_to_mdev(i); + if (!odev) + continue; + if (odev->state.conn == C_STANDALONE && odev->state.disk == D_DISKLESS) + continue; + if (odev->state.aftr_isp) { + if (_drbd_may_sync_now(odev)) + rv |= (__drbd_set_state(_NS(odev, aftr_isp, 0), + CS_HARD, NULL) + != SS_NOTHING_TO_DO) ; + } + } + return rv; +} + +void resume_next_sg(struct drbd_conf *mdev) +{ + write_lock_irq(&global_state_lock); + _drbd_resume_next(mdev); + write_unlock_irq(&global_state_lock); +} + +void suspend_other_sg(struct drbd_conf *mdev) +{ + write_lock_irq(&global_state_lock); + _drbd_pause_after(mdev); + write_unlock_irq(&global_state_lock); +} + +static int sync_after_error(struct drbd_conf *mdev, int o_minor) +{ + struct drbd_conf *odev; + + if (o_minor == -1) + return NO_ERROR; + if (o_minor < -1 || minor_to_mdev(o_minor) == NULL) + return ERR_SYNC_AFTER; + + /* check for loops */ + odev = minor_to_mdev(o_minor); + while (1) { + if (odev == mdev) + return ERR_SYNC_AFTER_CYCLE; + + /* dependency chain ends here, no cycles. */ + if (odev->sync_conf.after == -1) + return NO_ERROR; + + /* follow the dependency chain */ + odev = minor_to_mdev(odev->sync_conf.after); + } +} + +int drbd_alter_sa(struct drbd_conf *mdev, int na) +{ + int changes; + int retcode; + + write_lock_irq(&global_state_lock); + retcode = sync_after_error(mdev, na); + if (retcode == NO_ERROR) { + mdev->sync_conf.after = na; + do { + changes = _drbd_pause_after(mdev); + changes |= _drbd_resume_next(mdev); + } while (changes); + } + write_unlock_irq(&global_state_lock); + return retcode; +} + +/** + * drbd_start_resync() - Start the resync process + * @mdev: DRBD device. + * @side: Either C_SYNC_SOURCE or C_SYNC_TARGET + * + * This function might bring you directly into one of the + * C_PAUSED_SYNC_* states. + */ +void drbd_start_resync(struct drbd_conf *mdev, enum drbd_conns side) +{ + union drbd_state ns; + int r; + + if (mdev->state.conn >= C_SYNC_SOURCE) { + dev_err(DEV, "Resync already running!\n"); + return; + } + + trace_drbd_resync(mdev, TRACE_LVL_SUMMARY, "Resync starting: side=%s\n", + side == C_SYNC_TARGET ? "SyncTarget" : "SyncSource"); + + /* In case a previous resync run was aborted by an IO error/detach on the peer. */ + drbd_rs_cancel_all(mdev); + + if (side == C_SYNC_TARGET) { + /* Since application IO was locked out during C_WF_BITMAP_T and + C_WF_SYNC_UUID we are still unmodified. Before going to C_SYNC_TARGET + we check that we might make the data inconsistent. 
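+	   The before-resync-target handler may veto the resync by
+	   exiting non-zero; its exit status is extracted from the
+	   wait()-style return value below.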
*/ + r = drbd_khelper(mdev, "before-resync-target"); + r = (r >> 8) & 0xff; + if (r > 0) { + dev_info(DEV, "before-resync-target handler returned %d, " + "dropping connection.\n", r); + drbd_force_state(mdev, NS(conn, C_DISCONNECTING)); + return; + } + } + + drbd_state_lock(mdev); + + if (!get_ldev_if_state(mdev, D_NEGOTIATING)) { + drbd_state_unlock(mdev); + return; + } + + if (side == C_SYNC_TARGET) { + mdev->bm_resync_fo = 0; + } else /* side == C_SYNC_SOURCE */ { + u64 uuid; + + get_random_bytes(&uuid, sizeof(u64)); + drbd_uuid_set(mdev, UI_BITMAP, uuid); + drbd_send_sync_uuid(mdev, uuid); + + D_ASSERT(mdev->state.disk == D_UP_TO_DATE); + } + + write_lock_irq(&global_state_lock); + ns = mdev->state; + + ns.aftr_isp = !_drbd_may_sync_now(mdev); + + ns.conn = side; + + if (side == C_SYNC_TARGET) + ns.disk = D_INCONSISTENT; + else /* side == C_SYNC_SOURCE */ + ns.pdsk = D_INCONSISTENT; + + r = __drbd_set_state(mdev, ns, CS_VERBOSE, NULL); + ns = mdev->state; + + if (ns.conn < C_CONNECTED) + r = SS_UNKNOWN_ERROR; + + if (r == SS_SUCCESS) { + mdev->rs_total = + mdev->rs_mark_left = drbd_bm_total_weight(mdev); + mdev->rs_failed = 0; + mdev->rs_paused = 0; + mdev->rs_start = + mdev->rs_mark_time = jiffies; + mdev->rs_same_csum = 0; + _drbd_pause_after(mdev); + } + write_unlock_irq(&global_state_lock); + drbd_state_unlock(mdev); + put_ldev(mdev); + + if (r == SS_SUCCESS) { + dev_info(DEV, "Began resync as %s (will sync %lu KB [%lu bits set]).\n", + drbd_conn_str(ns.conn), + (unsigned long) mdev->rs_total << (BM_BLOCK_SHIFT-10), + (unsigned long) mdev->rs_total); + + if (mdev->rs_total == 0) { + /* Peer still reachable? Beware of failing before-resync-target handlers! */ + request_ping(mdev); + __set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(mdev->net_conf->ping_timeo*HZ/9); /* 9 instead 10 */ + drbd_resync_finished(mdev); + return; + } + + /* ns.conn may already be != mdev->state.conn, + * we may have been paused in between, or become paused until + * the timer triggers. + * No matter, that is handled in resync_timer_fn() */ + if (ns.conn == C_SYNC_TARGET) + mod_timer(&mdev->resync_timer, jiffies); + + drbd_md_sync(mdev); + } +} + +int drbd_worker(struct drbd_thread *thi) +{ + struct drbd_conf *mdev = thi->mdev; + struct drbd_work *w = NULL; + LIST_HEAD(work_list); + int intr = 0, i; + + sprintf(current->comm, "drbd%d_worker", mdev_to_minor(mdev)); + + while (get_t_state(thi) == Running) { + drbd_thread_current_set_cpu(mdev); + + if (down_trylock(&mdev->data.work.s)) { + mutex_lock(&mdev->data.mutex); + if (mdev->data.socket && !mdev->net_conf->no_cork) + drbd_tcp_uncork(mdev->data.socket); + mutex_unlock(&mdev->data.mutex); + + intr = down_interruptible(&mdev->data.work.s); + + mutex_lock(&mdev->data.mutex); + if (mdev->data.socket && !mdev->net_conf->no_cork) + drbd_tcp_cork(mdev->data.socket); + mutex_unlock(&mdev->data.mutex); + } + + if (intr) { + D_ASSERT(intr == -EINTR); + flush_signals(current); + ERR_IF (get_t_state(thi) == Running) + continue; + break; + } + + if (get_t_state(thi) != Running) + break; + /* With this break, we have done a down() but not consumed + the entry from the list. The cleanup code takes care of + this... */ + + w = NULL; + spin_lock_irq(&mdev->data.work.q_lock); + ERR_IF(list_empty(&mdev->data.work.q)) { + /* something terribly wrong in our logic. + * we were able to down() the semaphore, + * but the list is empty... doh. + * + * what is the best thing to do now? + * try again from scratch, restarting the receiver, + * asender, whatnot? 
could break even more ugly, + * e.g. when we are primary, but no good local data. + * + * I'll try to get away just starting over this loop. + */ + spin_unlock_irq(&mdev->data.work.q_lock); + continue; + } + w = list_entry(mdev->data.work.q.next, struct drbd_work, list); + list_del_init(&w->list); + spin_unlock_irq(&mdev->data.work.q_lock); + + if (!w->cb(mdev, w, mdev->state.conn < C_CONNECTED)) { + /* dev_warn(DEV, "worker: a callback failed! \n"); */ + if (mdev->state.conn >= C_CONNECTED) + drbd_force_state(mdev, + NS(conn, C_NETWORK_FAILURE)); + } + } + D_ASSERT(test_bit(DEVICE_DYING, &mdev->flags)); + D_ASSERT(test_bit(CONFIG_PENDING, &mdev->flags)); + + spin_lock_irq(&mdev->data.work.q_lock); + i = 0; + while (!list_empty(&mdev->data.work.q)) { + list_splice_init(&mdev->data.work.q, &work_list); + spin_unlock_irq(&mdev->data.work.q_lock); + + while (!list_empty(&work_list)) { + w = list_entry(work_list.next, struct drbd_work, list); + list_del_init(&w->list); + w->cb(mdev, w, 1); + i++; /* dead debugging code */ + } + + spin_lock_irq(&mdev->data.work.q_lock); + } + sema_init(&mdev->data.work.s, 0); + /* DANGEROUS race: if someone did queue his work within the spinlock, + * but up() ed outside the spinlock, we could get an up() on the + * semaphore without corresponding list entry. + * So don't do that. + */ + spin_unlock_irq(&mdev->data.work.q_lock); + + D_ASSERT(mdev->state.disk == D_DISKLESS && mdev->state.conn == C_STANDALONE); + /* _drbd_set_state only uses stop_nowait. + * wait here for the Exiting receiver. */ + drbd_thread_stop(&mdev->receiver); + drbd_mdev_cleanup(mdev); + + dev_info(DEV, "worker terminated\n"); + + clear_bit(DEVICE_DYING, &mdev->flags); + clear_bit(CONFIG_PENDING, &mdev->flags); + wake_up(&mdev->state_wait); + + return 0; +} diff --git a/drivers/block/drbd/drbd_wrappers.h b/drivers/block/drbd/drbd_wrappers.h new file mode 100644 index 000000000000..f93fa111ce50 --- /dev/null +++ b/drivers/block/drbd/drbd_wrappers.h @@ -0,0 +1,91 @@ +#ifndef _DRBD_WRAPPERS_H +#define _DRBD_WRAPPERS_H + +#include +#include + +/* see get_sb_bdev and bd_claim */ +extern char *drbd_sec_holder; + +/* sets the number of 512 byte sectors of our virtual device */ +static inline void drbd_set_my_capacity(struct drbd_conf *mdev, + sector_t size) +{ + /* set_capacity(mdev->this_bdev->bd_disk, size); */ + set_capacity(mdev->vdisk, size); + mdev->this_bdev->bd_inode->i_size = (loff_t)size << 9; +} + +#define drbd_bio_uptodate(bio) bio_flagged(bio, BIO_UPTODATE) + +static inline int drbd_bio_has_active_page(struct bio *bio) +{ + struct bio_vec *bvec; + int i; + + __bio_for_each_segment(bvec, bio, i, 0) { + if (page_count(bvec->bv_page) > 1) + return 1; + } + + return 0; +} + +/* bi_end_io handlers */ +extern void drbd_md_io_complete(struct bio *bio, int error); +extern void drbd_endio_read_sec(struct bio *bio, int error); +extern void drbd_endio_write_sec(struct bio *bio, int error); +extern void drbd_endio_pri(struct bio *bio, int error); + +/* + * used to submit our private bio + */ +static inline void drbd_generic_make_request(struct drbd_conf *mdev, + int fault_type, struct bio *bio) +{ + __release(local); + if (!bio->bi_bdev) { + printk(KERN_ERR "drbd%d: drbd_generic_make_request: " + "bio->bi_bdev == NULL\n", + mdev_to_minor(mdev)); + dump_stack(); + bio_endio(bio, -ENODEV); + return; + } + + if (FAULT_ACTIVE(mdev, fault_type)) + bio_endio(bio, -EIO); + else + generic_make_request(bio); +} + +static inline void drbd_plug_device(struct drbd_conf *mdev) +{ + struct request_queue *q; + 
q = bdev_get_queue(mdev->this_bdev); + + spin_lock_irq(q->queue_lock); + +/* XXX the check on !blk_queue_plugged is redundant, + * implicitly checked in blk_plug_device */ + + if (!blk_queue_plugged(q)) { + blk_plug_device(q); + del_timer(&q->unplug_timer); + /* unplugging should not happen automatically... */ + } + spin_unlock_irq(q->queue_lock); +} + +static inline int drbd_crypto_is_hash(struct crypto_tfm *tfm) +{ + return (crypto_tfm_alg_type(tfm) & CRYPTO_ALG_TYPE_HASH_MASK) + == CRYPTO_ALG_TYPE_HASH; +} + +#ifndef __CHECKER__ +# undef __cond_lock +# define __cond_lock(x,c) (c) +#endif + +#endif diff --git a/include/linux/drbd.h b/include/linux/drbd.h new file mode 100644 index 000000000000..69dc711f37b3 --- /dev/null +++ b/include/linux/drbd.h @@ -0,0 +1,349 @@ +/* + drbd.h + Kernel module for 2.6.x Kernels + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2001-2008, LINBIT Information Technologies GmbH. + Copyright (C) 2001-2008, Philipp Reisner . + Copyright (C) 2001-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + +*/ +#ifndef DRBD_H +#define DRBD_H +#include +#include + +#ifdef __KERNEL__ +#include +#include +#else +#include +#include +#include + +/* Altough the Linux source code makes a difference between + generic endianness and the bitfields' endianness, there is no + architecture as of Linux-2.6.24-rc4 where the bitfileds' endianness + does not match the generic endianness. */ + +#if __BYTE_ORDER == __LITTLE_ENDIAN +#define __LITTLE_ENDIAN_BITFIELD +#elif __BYTE_ORDER == __BIG_ENDIAN +#define __BIG_ENDIAN_BITFIELD +#else +# error "sorry, weird endianness on this box" +#endif + +#endif + + +extern const char *drbd_buildtag(void); +#define REL_VERSION "8.3.3rc2" +#define API_VERSION 88 +#define PRO_VERSION_MIN 86 +#define PRO_VERSION_MAX 91 + + +enum drbd_io_error_p { + EP_PASS_ON, /* FIXME should the better be named "Ignore"? */ + EP_CALL_HELPER, + EP_DETACH +}; + +enum drbd_fencing_p { + FP_DONT_CARE, + FP_RESOURCE, + FP_STONITH +}; + +enum drbd_disconnect_p { + DP_RECONNECT, + DP_DROP_NET_CONF, + DP_FREEZE_IO +}; + +enum drbd_after_sb_p { + ASB_DISCONNECT, + ASB_DISCARD_YOUNGER_PRI, + ASB_DISCARD_OLDER_PRI, + ASB_DISCARD_ZERO_CHG, + ASB_DISCARD_LEAST_CHG, + ASB_DISCARD_LOCAL, + ASB_DISCARD_REMOTE, + ASB_CONSENSUS, + ASB_DISCARD_SECONDARY, + ASB_CALL_HELPER, + ASB_VIOLENTLY +}; + +/* KEEP the order, do not delete or insert. Only append. 
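+ * These values travel between kernel and userspace tools, so
+ * renumbering existing entries would break the interface.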
*/ +enum drbd_ret_codes { + ERR_CODE_BASE = 100, + NO_ERROR = 101, + ERR_LOCAL_ADDR = 102, + ERR_PEER_ADDR = 103, + ERR_OPEN_DISK = 104, + ERR_OPEN_MD_DISK = 105, + ERR_DISK_NOT_BDEV = 107, + ERR_MD_NOT_BDEV = 108, + ERR_DISK_TO_SMALL = 111, + ERR_MD_DISK_TO_SMALL = 112, + ERR_BDCLAIM_DISK = 114, + ERR_BDCLAIM_MD_DISK = 115, + ERR_MD_IDX_INVALID = 116, + ERR_IO_MD_DISK = 118, + ERR_MD_INVALID = 119, + ERR_AUTH_ALG = 120, + ERR_AUTH_ALG_ND = 121, + ERR_NOMEM = 122, + ERR_DISCARD = 123, + ERR_DISK_CONFIGURED = 124, + ERR_NET_CONFIGURED = 125, + ERR_MANDATORY_TAG = 126, + ERR_MINOR_INVALID = 127, + ERR_INTR = 129, /* EINTR */ + ERR_RESIZE_RESYNC = 130, + ERR_NO_PRIMARY = 131, + ERR_SYNC_AFTER = 132, + ERR_SYNC_AFTER_CYCLE = 133, + ERR_PAUSE_IS_SET = 134, + ERR_PAUSE_IS_CLEAR = 135, + ERR_PACKET_NR = 137, + ERR_NO_DISK = 138, + ERR_NOT_PROTO_C = 139, + ERR_NOMEM_BITMAP = 140, + ERR_INTEGRITY_ALG = 141, /* DRBD 8.2 only */ + ERR_INTEGRITY_ALG_ND = 142, /* DRBD 8.2 only */ + ERR_CPU_MASK_PARSE = 143, /* DRBD 8.2 only */ + ERR_CSUMS_ALG = 144, /* DRBD 8.2 only */ + ERR_CSUMS_ALG_ND = 145, /* DRBD 8.2 only */ + ERR_VERIFY_ALG = 146, /* DRBD 8.2 only */ + ERR_VERIFY_ALG_ND = 147, /* DRBD 8.2 only */ + ERR_CSUMS_RESYNC_RUNNING = 148, /* DRBD 8.2 only */ + ERR_VERIFY_RUNNING = 149, /* DRBD 8.2 only */ + ERR_DATA_NOT_CURRENT = 150, + ERR_CONNECTED = 151, /* DRBD 8.3 only */ + + /* insert new ones above this line */ + AFTER_LAST_ERR_CODE +}; + +#define DRBD_PROT_A 1 +#define DRBD_PROT_B 2 +#define DRBD_PROT_C 3 + +enum drbd_role { + R_UNKNOWN = 0, + R_PRIMARY = 1, /* role */ + R_SECONDARY = 2, /* role */ + R_MASK = 3, +}; + +/* The order of these constants is important. + * The lower ones (<C_WF_REPORT_PARAMS) indicate that there is no socket! + * >=C_WF_REPORT_PARAMS ==> There is a socket + */ +enum drbd_conns { + C_STANDALONE, + C_DISCONNECTING, /* Transient state on the way to StandAlone. */ + C_UNCONNECTED, /* >= C_UNCONNECTED -> inc_net() succeeds */ + + /* These transient states are all used on the way + * from >= C_CONNECTED to Unconnected. + * They are the 'disconnect reason' states; + * we do not allow changing directly between them. */ + C_TIMEOUT, + C_BROKEN_PIPE, + C_NETWORK_FAILURE, + C_PROTOCOL_ERROR, + C_TEAR_DOWN, + + C_WF_CONNECTION, + C_WF_REPORT_PARAMS, /* we have a socket */ + C_CONNECTED, /* we have introduced each other */ + C_STARTING_SYNC_S, /* starting full sync by admin request. */ + C_STARTING_SYNC_T, /* starting full sync by admin request. */ + C_WF_BITMAP_S, + C_WF_BITMAP_T, + C_WF_SYNC_UUID, + + /* All SyncStates are tested with this comparison + * xx >= C_SYNC_SOURCE && xx <= C_PAUSED_SYNC_T */ + C_SYNC_SOURCE, + C_SYNC_TARGET, + C_VERIFY_S, + C_VERIFY_T, + C_PAUSED_SYNC_S, + C_PAUSED_SYNC_T, + C_MASK = 31 +}; + +enum drbd_disk_state { + D_DISKLESS, + D_ATTACHING, /* In the process of reading the meta-data */ + D_FAILED, /* Becomes D_DISKLESS as soon as we have told the peer */ + /* when >= D_FAILED it is legal to access mdev->bc */ + D_NEGOTIATING, /* Late attaching state, we need to talk to the peer */ + D_INCONSISTENT, + D_OUTDATED, + D_UNKNOWN, /* Only used for the peer, never for myself */ + D_CONSISTENT, /* Might be D_OUTDATED, might be D_UP_TO_DATE ... */ + D_UP_TO_DATE, /* Only this disk state allows applications' IO ! */ + D_MASK = 15 +}; + +union drbd_state { +/* According to gcc's docs, the order of allocation of bit-fields within + * a unit (C90 6.5.2.1, C99 6.7.2.1) is determined by ABI.
+ * As pointed out by Maxim Uvarov: even though we transmit as + * "cpu_to_be32(state)", the offsets of the bitfields still need to be + * swapped on different endianness. + */ + struct { +#if defined(__LITTLE_ENDIAN_BITFIELD) + unsigned role:2 ; /* 3/4 primary/secondary/unknown */ + unsigned peer:2 ; /* 3/4 primary/secondary/unknown */ + unsigned conn:5 ; /* 17/32 cstates */ + unsigned disk:4 ; /* 8/16 from D_DISKLESS to D_UP_TO_DATE */ + unsigned pdsk:4 ; /* 8/16 from D_DISKLESS to D_UP_TO_DATE */ + unsigned susp:1 ; /* 2/2 IO suspended no/yes */ + unsigned aftr_isp:1 ; /* isp .. imposed sync pause */ + unsigned peer_isp:1 ; + unsigned user_isp:1 ; + unsigned _pad:11; /* 0 unused */ +#elif defined(__BIG_ENDIAN_BITFIELD) + unsigned _pad:11; /* 0 unused */ + unsigned user_isp:1 ; + unsigned peer_isp:1 ; + unsigned aftr_isp:1 ; /* isp .. imposed sync pause */ + unsigned susp:1 ; /* 2/2 IO suspended no/yes */ + unsigned pdsk:4 ; /* 8/16 from D_DISKLESS to D_UP_TO_DATE */ + unsigned disk:4 ; /* 8/16 from D_DISKLESS to D_UP_TO_DATE */ + unsigned conn:5 ; /* 17/32 cstates */ + unsigned peer:2 ; /* 3/4 primary/secondary/unknown */ + unsigned role:2 ; /* 3/4 primary/secondary/unknown */ +#else +# error "this endianness is not supported" +#endif + }; + unsigned int i; +}; + +enum drbd_state_ret_codes { + SS_CW_NO_NEED = 4, + SS_CW_SUCCESS = 3, + SS_NOTHING_TO_DO = 2, + SS_SUCCESS = 1, + SS_UNKNOWN_ERROR = 0, /* Used to sleep longer in _drbd_request_state */ + SS_TWO_PRIMARIES = -1, + SS_NO_UP_TO_DATE_DISK = -2, + SS_NO_LOCAL_DISK = -4, + SS_NO_REMOTE_DISK = -5, + SS_CONNECTED_OUTDATES = -6, + SS_PRIMARY_NOP = -7, + SS_RESYNC_RUNNING = -8, + SS_ALREADY_STANDALONE = -9, + SS_CW_FAILED_BY_PEER = -10, + SS_IS_DISKLESS = -11, + SS_DEVICE_IN_USE = -12, + SS_NO_NET_CONFIG = -13, + SS_NO_VERIFY_ALG = -14, /* drbd-8.2 only */ + SS_NEED_CONNECTION = -15, /* drbd-8.2 only */ + SS_LOWER_THAN_OUTDATED = -16, + SS_NOT_SUPPORTED = -17, /* drbd-8.2 only */ + SS_IN_TRANSIENT_STATE = -18, /* Retry after the next state change */ + SS_CONCURRENT_ST_CHG = -19, /* Concurrent cluster side state change! */ + SS_AFTER_LAST_ERROR = -20, /* Keep this at bottom */ +}; + +/* from drbd_strings.c */ +extern const char *drbd_conn_str(enum drbd_conns); +extern const char *drbd_role_str(enum drbd_role); +extern const char *drbd_disk_str(enum drbd_disk_state); +extern const char *drbd_set_st_err_str(enum drbd_state_ret_codes); + +#define SHARED_SECRET_MAX 64 + +#define MDF_CONSISTENT (1 << 0) +#define MDF_PRIMARY_IND (1 << 1) +#define MDF_CONNECTED_IND (1 << 2) +#define MDF_FULL_SYNC (1 << 3) +#define MDF_WAS_UP_TO_DATE (1 << 4) +#define MDF_PEER_OUT_DATED (1 << 5) +#define MDF_CRASHED_PRIMARY (1 << 6) + +enum drbd_uuid_index { + UI_CURRENT, + UI_BITMAP, + UI_HISTORY_START, + UI_HISTORY_END, + UI_SIZE, /* nl-packet: number of dirty bits */ + UI_FLAGS, /* nl-packet: flags */ + UI_EXTENDED_SIZE /* Everything.
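 That is, the four UUID slots above plus + the two nl-packet-only slots UI_SIZE and UI_FLAGS.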
*/ +}; + +enum drbd_timeout_flag { + UT_DEFAULT = 0, + UT_DEGRADED = 1, + UT_PEER_OUTDATED = 2, +}; + +#define UUID_JUST_CREATED ((__u64)4) + +#define DRBD_MAGIC 0x83740267 +#define BE_DRBD_MAGIC __constant_cpu_to_be32(DRBD_MAGIC) + +/* these are of type "int" */ +#define DRBD_MD_INDEX_INTERNAL -1 +#define DRBD_MD_INDEX_FLEX_EXT -2 +#define DRBD_MD_INDEX_FLEX_INT -3 + +/* Start of the new netlink/connector stuff */ + +#define DRBD_NL_CREATE_DEVICE 0x01 +#define DRBD_NL_SET_DEFAULTS 0x02 + +/* The following line should be moved over to linux/connector.h + * when the time comes */ +#ifndef CN_IDX_DRBD +# define CN_IDX_DRBD 0x4 +/* Ubuntu "intrepid ibex" release defined CN_IDX_DRBD as 0x6 */ +#endif +#define CN_VAL_DRBD 0x1 + +/* For searching a vacant cn_idx value */ +#define CN_IDX_STEP 6977 + +struct drbd_nl_cfg_req { + int packet_type; + unsigned int drbd_minor; + int flags; + unsigned short tag_list[]; +}; + +struct drbd_nl_cfg_reply { + int packet_type; + unsigned int minor; + int ret_code; /* enum ret_code or set_st_err_t */ + unsigned short tag_list[]; /* only used with get_* calls */ +}; + +#endif diff --git a/include/linux/drbd_limits.h b/include/linux/drbd_limits.h new file mode 100644 index 000000000000..9d067ce46960 --- /dev/null +++ b/include/linux/drbd_limits.h @@ -0,0 +1,137 @@ +/* + drbd_limits.h + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. +*/ + +/* + * Our current limitations. + * Some of them are hard limits, + * some of them are arbitrary range limits, that make it easier to provide + * feedback about nonsense settings for certain configurable values. + */ + +#ifndef DRBD_LIMITS_H +#define DRBD_LIMITS_H 1 + +#define DEBUG_RANGE_CHECK 0 + +#define DRBD_MINOR_COUNT_MIN 1 +#define DRBD_MINOR_COUNT_MAX 255 + +#define DRBD_DIALOG_REFRESH_MIN 0 +#define DRBD_DIALOG_REFRESH_MAX 600 + +/* valid port number */ +#define DRBD_PORT_MIN 1 +#define DRBD_PORT_MAX 0xffff + +/* startup { */ + /* if you want more than 3.4 days, disable */ +#define DRBD_WFC_TIMEOUT_MIN 0 +#define DRBD_WFC_TIMEOUT_MAX 300000 +#define DRBD_WFC_TIMEOUT_DEF 0 + +#define DRBD_DEGR_WFC_TIMEOUT_MIN 0 +#define DRBD_DEGR_WFC_TIMEOUT_MAX 300000 +#define DRBD_DEGR_WFC_TIMEOUT_DEF 0 + +#define DRBD_OUTDATED_WFC_TIMEOUT_MIN 0 +#define DRBD_OUTDATED_WFC_TIMEOUT_MAX 300000 +#define DRBD_OUTDATED_WFC_TIMEOUT_DEF 0 +/* }*/ + +/* net { */ + /* timeout, unit centi seconds + * more than one minute timeout is not usefull */ +#define DRBD_TIMEOUT_MIN 1 +#define DRBD_TIMEOUT_MAX 600 +#define DRBD_TIMEOUT_DEF 60 /* 6 seconds */ + + /* active connection retries when C_WF_CONNECTION */ +#define DRBD_CONNECT_INT_MIN 1 +#define DRBD_CONNECT_INT_MAX 120 +#define DRBD_CONNECT_INT_DEF 10 /* seconds */ + + /* keep-alive probes when idle */ +#define DRBD_PING_INT_MIN 1 +#define DRBD_PING_INT_MAX 120 +#define DRBD_PING_INT_DEF 10 + + /* timeout for the ping packets.*/ +#define DRBD_PING_TIMEO_MIN 1 +#define DRBD_PING_TIMEO_MAX 100 +#define DRBD_PING_TIMEO_DEF 5 + + /* max number of write requests between write barriers */ +#define DRBD_MAX_EPOCH_SIZE_MIN 1 +#define DRBD_MAX_EPOCH_SIZE_MAX 20000 +#define DRBD_MAX_EPOCH_SIZE_DEF 2048 + + /* I don't think that a tcp send buffer of more than 10M is usefull */ +#define DRBD_SNDBUF_SIZE_MIN 0 +#define DRBD_SNDBUF_SIZE_MAX (10<<20) +#define DRBD_SNDBUF_SIZE_DEF (2*65535) + +#define DRBD_RCVBUF_SIZE_MIN 0 +#define DRBD_RCVBUF_SIZE_MAX (10<<20) +#define DRBD_RCVBUF_SIZE_DEF (2*65535) + + /* @4k PageSize -> 128kB - 512MB */ +#define DRBD_MAX_BUFFERS_MIN 32 +#define 
DRBD_MAX_BUFFERS_MAX 131072 +#define DRBD_MAX_BUFFERS_DEF 2048 + + /* @4k PageSize -> 4kB - 512MB */ +#define DRBD_UNPLUG_WATERMARK_MIN 1 +#define DRBD_UNPLUG_WATERMARK_MAX 131072 +#define DRBD_UNPLUG_WATERMARK_DEF (DRBD_MAX_BUFFERS_DEF/16) + + /* 0 is disabled. + * 200 should be more than enough even for very short timeouts */ +#define DRBD_KO_COUNT_MIN 0 +#define DRBD_KO_COUNT_MAX 200 +#define DRBD_KO_COUNT_DEF 0 +/* } */ + +/* syncer { */ + /* FIXME allow rate to be zero? */ +#define DRBD_RATE_MIN 1 +/* channel bonding 10 GbE, or other hardware */ +#define DRBD_RATE_MAX (4 << 20) +#define DRBD_RATE_DEF 250 /* kb/second */ + + /* less than 7 would hit performance unneccessarily. + * 3833 is the largest prime that still does fit + * into 64 sectors of activity log */ +#define DRBD_AL_EXTENTS_MIN 7 +#define DRBD_AL_EXTENTS_MAX 3833 +#define DRBD_AL_EXTENTS_DEF 127 + +#define DRBD_AFTER_MIN -1 +#define DRBD_AFTER_MAX 255 +#define DRBD_AFTER_DEF -1 + +/* } */ + +/* drbdsetup XY resize -d Z + * you are free to reduce the device size to nothing, if you want to. + * the upper limit with 64bit kernel, enough ram and flexible meta data + * is 16 TB, currently. */ +/* DRBD_MAX_SECTORS */ +#define DRBD_DISK_SIZE_SECT_MIN 0 +#define DRBD_DISK_SIZE_SECT_MAX (16 * (2LLU << 30)) +#define DRBD_DISK_SIZE_SECT_DEF 0 /* = disabled = no user size... */ + +#define DRBD_ON_IO_ERROR_DEF EP_PASS_ON +#define DRBD_FENCING_DEF FP_DONT_CARE +#define DRBD_AFTER_SB_0P_DEF ASB_DISCONNECT +#define DRBD_AFTER_SB_1P_DEF ASB_DISCONNECT +#define DRBD_AFTER_SB_2P_DEF ASB_DISCONNECT +#define DRBD_RR_CONFLICT_DEF ASB_DISCONNECT + +#define DRBD_MAX_BIO_BVECS_MIN 0 +#define DRBD_MAX_BIO_BVECS_MAX 128 +#define DRBD_MAX_BIO_BVECS_DEF 0 + +#undef RANGE +#endif diff --git a/include/linux/drbd_nl.h b/include/linux/drbd_nl.h new file mode 100644 index 000000000000..db5721ad50d1 --- /dev/null +++ b/include/linux/drbd_nl.h @@ -0,0 +1,137 @@ +/* + PAKET( name, + TYPE ( pn, pr, member ) + ... 
+ ) + + You may never reissue one of the pn arguments +*/ + +#if !defined(NL_PACKET) || !defined(NL_STRING) || !defined(NL_INTEGER) || !defined(NL_BIT) || !defined(NL_INT64) +#error "The macros NL_PACKET, NL_STRING, NL_INTEGER, NL_INT64 and NL_BIT needs to be defined" +#endif + +NL_PACKET(primary, 1, + NL_BIT( 1, T_MAY_IGNORE, overwrite_peer) +) + +NL_PACKET(secondary, 2, ) + +NL_PACKET(disk_conf, 3, + NL_INT64( 2, T_MAY_IGNORE, disk_size) + NL_STRING( 3, T_MANDATORY, backing_dev, 128) + NL_STRING( 4, T_MANDATORY, meta_dev, 128) + NL_INTEGER( 5, T_MANDATORY, meta_dev_idx) + NL_INTEGER( 6, T_MAY_IGNORE, on_io_error) + NL_INTEGER( 7, T_MAY_IGNORE, fencing) + NL_BIT( 37, T_MAY_IGNORE, use_bmbv) + NL_BIT( 53, T_MAY_IGNORE, no_disk_flush) + NL_BIT( 54, T_MAY_IGNORE, no_md_flush) + /* 55 max_bio_size was available in 8.2.6rc2 */ + NL_INTEGER( 56, T_MAY_IGNORE, max_bio_bvecs) + NL_BIT( 57, T_MAY_IGNORE, no_disk_barrier) + NL_BIT( 58, T_MAY_IGNORE, no_disk_drain) +) + +NL_PACKET(detach, 4, ) + +NL_PACKET(net_conf, 5, + NL_STRING( 8, T_MANDATORY, my_addr, 128) + NL_STRING( 9, T_MANDATORY, peer_addr, 128) + NL_STRING( 10, T_MAY_IGNORE, shared_secret, SHARED_SECRET_MAX) + NL_STRING( 11, T_MAY_IGNORE, cram_hmac_alg, SHARED_SECRET_MAX) + NL_STRING( 44, T_MAY_IGNORE, integrity_alg, SHARED_SECRET_MAX) + NL_INTEGER( 14, T_MAY_IGNORE, timeout) + NL_INTEGER( 15, T_MANDATORY, wire_protocol) + NL_INTEGER( 16, T_MAY_IGNORE, try_connect_int) + NL_INTEGER( 17, T_MAY_IGNORE, ping_int) + NL_INTEGER( 18, T_MAY_IGNORE, max_epoch_size) + NL_INTEGER( 19, T_MAY_IGNORE, max_buffers) + NL_INTEGER( 20, T_MAY_IGNORE, unplug_watermark) + NL_INTEGER( 21, T_MAY_IGNORE, sndbuf_size) + NL_INTEGER( 22, T_MAY_IGNORE, ko_count) + NL_INTEGER( 24, T_MAY_IGNORE, after_sb_0p) + NL_INTEGER( 25, T_MAY_IGNORE, after_sb_1p) + NL_INTEGER( 26, T_MAY_IGNORE, after_sb_2p) + NL_INTEGER( 39, T_MAY_IGNORE, rr_conflict) + NL_INTEGER( 40, T_MAY_IGNORE, ping_timeo) + NL_INTEGER( 67, T_MAY_IGNORE, rcvbuf_size) + /* 59 addr_family was available in GIT, never released */ + NL_BIT( 60, T_MANDATORY, mind_af) + NL_BIT( 27, T_MAY_IGNORE, want_lose) + NL_BIT( 28, T_MAY_IGNORE, two_primaries) + NL_BIT( 41, T_MAY_IGNORE, always_asbp) + NL_BIT( 61, T_MAY_IGNORE, no_cork) + NL_BIT( 62, T_MANDATORY, auto_sndbuf_size) +) + +NL_PACKET(disconnect, 6, ) + +NL_PACKET(resize, 7, + NL_INT64( 29, T_MAY_IGNORE, resize_size) +) + +NL_PACKET(syncer_conf, 8, + NL_INTEGER( 30, T_MAY_IGNORE, rate) + NL_INTEGER( 31, T_MAY_IGNORE, after) + NL_INTEGER( 32, T_MAY_IGNORE, al_extents) + NL_STRING( 52, T_MAY_IGNORE, verify_alg, SHARED_SECRET_MAX) + NL_STRING( 51, T_MAY_IGNORE, cpu_mask, 32) + NL_STRING( 64, T_MAY_IGNORE, csums_alg, SHARED_SECRET_MAX) + NL_BIT( 65, T_MAY_IGNORE, use_rle) +) + +NL_PACKET(invalidate, 9, ) +NL_PACKET(invalidate_peer, 10, ) +NL_PACKET(pause_sync, 11, ) +NL_PACKET(resume_sync, 12, ) +NL_PACKET(suspend_io, 13, ) +NL_PACKET(resume_io, 14, ) +NL_PACKET(outdate, 15, ) +NL_PACKET(get_config, 16, ) +NL_PACKET(get_state, 17, + NL_INTEGER( 33, T_MAY_IGNORE, state_i) +) + +NL_PACKET(get_uuids, 18, + NL_STRING( 34, T_MAY_IGNORE, uuids, (UI_SIZE*sizeof(__u64))) + NL_INTEGER( 35, T_MAY_IGNORE, uuids_flags) +) + +NL_PACKET(get_timeout_flag, 19, + NL_BIT( 36, T_MAY_IGNORE, use_degraded) +) + +NL_PACKET(call_helper, 20, + NL_STRING( 38, T_MAY_IGNORE, helper, 32) +) + +/* Tag nr 42 already allocated in drbd-8.1 development. 
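 + * + * Each pn above is ORed with its type and mandatory bit into a 16-bit wire + * tag (see drbd_tag_magic.h below), e.g. + * T_timeout = 14 | TT_INTEGER | T_MAY_IGNORE == 0x000e, + * T_my_addr = 8 | TT_STRING | T_MANDATORY == 0xe008.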
*/ + +NL_PACKET(sync_progress, 23, + NL_INTEGER( 43, T_MAY_IGNORE, sync_progress) +) + +NL_PACKET(dump_ee, 24, + NL_STRING( 45, T_MAY_IGNORE, dump_ee_reason, 32) + NL_STRING( 46, T_MAY_IGNORE, seen_digest, SHARED_SECRET_MAX) + NL_STRING( 47, T_MAY_IGNORE, calc_digest, SHARED_SECRET_MAX) + NL_INT64( 48, T_MAY_IGNORE, ee_sector) + NL_INT64( 49, T_MAY_IGNORE, ee_block_id) + NL_STRING( 50, T_MAY_IGNORE, ee_data, 32 << 10) +) + +NL_PACKET(start_ov, 25, + NL_INT64( 66, T_MAY_IGNORE, start_sector) +) + +NL_PACKET(new_c_uuid, 26, + NL_BIT( 63, T_MANDATORY, clear_bm) +) + +#undef NL_PACKET +#undef NL_INTEGER +#undef NL_INT64 +#undef NL_BIT +#undef NL_STRING + diff --git a/include/linux/drbd_tag_magic.h b/include/linux/drbd_tag_magic.h new file mode 100644 index 000000000000..fcdff8410e99 --- /dev/null +++ b/include/linux/drbd_tag_magic.h @@ -0,0 +1,83 @@ +#ifndef DRBD_TAG_MAGIC_H +#define DRBD_TAG_MAGIC_H + +#define TT_END 0 +#define TT_REMOVED 0xE000 + +/* declare packet_type enums */ +enum packet_types { +#define NL_PACKET(name, number, fields) P_ ## name = number, +#define NL_INTEGER(pn, pr, member) +#define NL_INT64(pn, pr, member) +#define NL_BIT(pn, pr, member) +#define NL_STRING(pn, pr, member, len) +#include "drbd_nl.h" + P_nl_after_last_packet, +}; + +/* These struct are used to deduce the size of the tag lists: */ +#define NL_PACKET(name, number, fields) \ + struct name ## _tag_len_struct { fields }; +#define NL_INTEGER(pn, pr, member) \ + int member; int tag_and_len ## member; +#define NL_INT64(pn, pr, member) \ + __u64 member; int tag_and_len ## member; +#define NL_BIT(pn, pr, member) \ + unsigned char member:1; int tag_and_len ## member; +#define NL_STRING(pn, pr, member, len) \ + unsigned char member[len]; int member ## _len; \ + int tag_and_len ## member; +#include "linux/drbd_nl.h" + +/* declate tag-list-sizes */ +static const int tag_list_sizes[] = { +#define NL_PACKET(name, number, fields) 2 fields , +#define NL_INTEGER(pn, pr, member) + 4 + 4 +#define NL_INT64(pn, pr, member) + 4 + 8 +#define NL_BIT(pn, pr, member) + 4 + 1 +#define NL_STRING(pn, pr, member, len) + 4 + (len) +#include "drbd_nl.h" +}; + +/* The two highest bits are used for the tag type */ +#define TT_MASK 0xC000 +#define TT_INTEGER 0x0000 +#define TT_INT64 0x4000 +#define TT_BIT 0x8000 +#define TT_STRING 0xC000 +/* The next bit indicates if processing of the tag is mandatory */ +#define T_MANDATORY 0x2000 +#define T_MAY_IGNORE 0x0000 +#define TN_MASK 0x1fff +/* The remaining 13 bits are used to enumerate the tags */ + +#define tag_type(T) ((T) & TT_MASK) +#define tag_number(T) ((T) & TN_MASK) + +/* declare tag enums */ +#define NL_PACKET(name, number, fields) fields +enum drbd_tags { +#define NL_INTEGER(pn, pr, member) T_ ## member = pn | TT_INTEGER | pr , +#define NL_INT64(pn, pr, member) T_ ## member = pn | TT_INT64 | pr , +#define NL_BIT(pn, pr, member) T_ ## member = pn | TT_BIT | pr , +#define NL_STRING(pn, pr, member, len) T_ ## member = pn | TT_STRING | pr , +#include "drbd_nl.h" +}; + +struct tag { + const char *name; + int type_n_flags; + int max_len; +}; + +/* declare tag names */ +#define NL_PACKET(name, number, fields) fields +static const struct tag tag_descriptions[] = { +#define NL_INTEGER(pn, pr, member) [ pn ] = { #member, TT_INTEGER | pr, sizeof(int) }, +#define NL_INT64(pn, pr, member) [ pn ] = { #member, TT_INT64 | pr, sizeof(__u64) }, +#define NL_BIT(pn, pr, member) [ pn ] = { #member, TT_BIT | pr, sizeof(int) }, +#define NL_STRING(pn, pr, member, len) [ pn ] = { #member, TT_STRING | pr, (len) 
}, +#include "drbd_nl.h" +}; + +#endif diff --git a/include/linux/lru_cache.h b/include/linux/lru_cache.h new file mode 100644 index 000000000000..3a2b2d9b0472 --- /dev/null +++ b/include/linux/lru_cache.h @@ -0,0 +1,294 @@ +/* + lru_cache.h + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2003-2008, LINBIT Information Technologies GmbH. + Copyright (C) 2003-2008, Philipp Reisner . + Copyright (C) 2003-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + + */ + +#ifndef LRU_CACHE_H +#define LRU_CACHE_H + +#include <linux/list.h> +#include <linux/slab.h> +#include <linux/bitops.h> +#include <linux/string.h> /* for memset */ +#include <linux/seq_file.h> + +/* +This header file (and its .c file; see there for kernel-doc of the functions) + defines a helper framework to easily keep track of index:label associations, + and changes to an "active set" of objects, as well as pending transactions, + to persistently record those changes. + + We use an LRU policy if it is necessary to "cool down" a region currently in + the active set before we can "heat" a previously unused region. + + Because of this latter property, it is called "lru_cache". + As it actually Tracks Objects in an Active SeT, we could also call it + toast (incidentally that is what may happen to the data on the + backend storage upon next resync, if we don't get it right). + +What for? + +We replicate IO (more or less synchronously) to local and remote disk. + +For crash recovery after replication node failure, + we need to resync all regions that have been the target of in-flight WRITE IO + (in use, or "hot", regions), as we don't know whether or not those WRITEs + have made it to stable storage. + + To avoid a "full resync", we need to persistently track these regions. + + This is known as a "write intent log", and can be implemented as an on-disk + (coarse or fine grained) bitmap, or other meta data. + + To avoid the overhead of frequent extra writes to this meta data area, + usually the condition is softened to regions that _may_ have been the target + of in-flight WRITE IO, e.g. by only lazily clearing the on-disk write-intent + bitmap, trading frequency of meta data transactions against amount of + (possibly unnecessary) resync traffic. + + If we set a hard limit on the area that may be "hot" at any given time, we + limit the amount of resync traffic needed for crash recovery. + +For recovery after replication link failure, + we need to resync all blocks that have been changed on the other replica + in the meantime, or, if both replicas have been changed independently [*], + all blocks that have been changed on either replica in the meantime. + [*] usually as a result of a cluster split-brain and insufficient protection, + but there are valid use cases to do this on purpose. + + Tracking those blocks can be implemented as a "dirty bitmap". + Having it fine-grained reduces the amount of resync traffic.
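 + (As a rough sizing example: at a granularity of one bit per 4 KiB block, + a 1 TiB device needs 2^28 bits, i.e. 32 MiB of bitmap.)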
+ It should also be persistent, to allow for reboots (or crashes) + while the replication link is down. + +There are various possible implementations for persistently storing +write intent log information, three of which are mentioned here. + +"Chunk dirtying" + The on-disk "dirty bitmap" may be re-used as a "write-intent" bitmap as well. + To reduce the frequency of bitmap updates for write-intent log purposes, + one could dirty "chunks" (of some size) of the (fine grained) on-disk bitmap + at a time, while keeping the in-memory "dirty" bitmap as clean as + possible, flushing it to disk again when a previously "hot" (and on-disk + dirtied as full chunk) area "cools down" again (no IO in flight anymore, + and none expected in the near future either). + +"Explicit (coarse) write intent bitmap" + Another implementation could choose a (probably coarse) explicit bitmap + for write-intent log purposes, in addition to the fine grained dirty bitmap. + +"Activity log" + Yet another implementation may keep track of the hot regions by starting + with an empty set, and writing down a journal of region numbers that have + become "hot", or have "cooled down" again. + + To be able to use a ring buffer for this journal of changes to the active + set, we not only record the actual changes to that set, but also record the + unchanged members of the set in a round robin fashion. To do so, we use a + fixed (but configurable) number of slots which we can identify by index, and + associate region numbers (labels) with these indices. + For each transaction recording a change to the active set, we record the + change itself (index: -old_label, +new_label), and which index is associated + with which label (index: current_label) within a certain sliding window that + is moved further over the available indices with each such transaction. + + Thus, for crash recovery, if the ring buffer is sufficiently large, we can + accurately reconstruct the active set. + + "Sufficiently large" depends only on the maximum number of active objects, + and the size of the sliding window recording "index: current_label" + associations within each transaction. + + This is what we call the "activity log". + + Currently we need one activity log transaction per single label change, which + does not give much benefit over the "dirty chunks of bitmap" approach, other + than potentially fewer seeks. + + We plan to change the transaction format to support multiple changes per + transaction, which would then reduce several (disjoint, "random") updates to + the bitmap into one transaction to the activity log ring buffer. +*/ + +/* this defines an element in a tracked set + * .colision is for hash table lookup. + * When we process a new IO request, we know its sector, thus can deduce the + * region number (label) easily. To do the label -> object lookup without a + * full list walk, we use a simple hash table. + * + * .list is on one of three lists: + * in_use: currently in use (refcnt > 0, lc_number != LC_FREE) + * lru: unused but ready to be reused or recycled + * (refcnt == 0, lc_number != LC_FREE), + * free: unused but ready to be recycled + * (refcnt == 0, lc_number == LC_FREE), + * + * an element is said to be "in the active set", + * if either on "in_use" or "lru", i.e. lc_number != LC_FREE. + * + * DRBD currently (May 2009) only uses 61 elements on the resync lru_cache + * (total memory usage 2 pages), and up to 3833 elements on the act_log + * lru_cache, totaling ~215 kB on a 64-bit architecture, ~53 pages.
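 + * + * A minimal sketch of the lc_get()/lc_changed()/lc_put() protocol described + * in lru_cache.c (hypothetical caller; NULL returns, locking and retries + * omitted): + * + *	e = lc_get(lc, enr); + *	if (e && e->lc_number != enr) { + *		write_transaction(e);	(caller-supplied persistence step) + *		lc_changed(lc, e); + *	} + *	... submit IO on the region labelled enr ... + *	lc_put(lc, e);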
+ * + * We usually do not actually free these objects again, but only "recycle" + * them, as the change "index: -old_label, +LC_FREE" would need a transaction + * as well. This also means that using a kmem_cache to allocate the objects + * from wastes some resources, but it avoids high-order page allocations in + * kmalloc. + */ +struct lc_element { + struct hlist_node colision; + struct list_head list; /* LRU list or free list */ + unsigned refcnt; + /* back "pointer" into the owning lru_cache's ->lc_element[index] array, + * for paranoia, and for lc_index_of() */ + unsigned lc_index; + /* if we want to track a larger set of objects, + * it needs to become an arch independent u64 */ + unsigned lc_number; + + /* special label when on free list */ +#define LC_FREE (~0U) +}; + +struct lru_cache { + /* the least recently used item is kept at lru->prev */ + struct list_head lru; + struct list_head free; + struct list_head in_use; + + /* the pre-created kmem cache to allocate the objects from */ + struct kmem_cache *lc_cache; + + /* size of tracked objects, used to memset(,0,) them in lc_reset */ + size_t element_size; + /* offset of struct lc_element member in the tracked object */ + size_t element_off; + + /* number of elements (indices) */ + unsigned int nr_elements; + /* Arbitrary limit on maximum tracked objects. Practical limit is much + * lower due to allocation failures, probably. For typical use cases, + * nr_elements should be a few thousand at most. + * This also limits the maximum value of lc_element.lc_index, allowing the + * 8 high bits of .lc_index to be overloaded with flags in the future. */ +#define LC_MAX_ACTIVE (1<<24) + + /* statistics */ + unsigned used; /* number of elements currently on the in_use list */ + unsigned long hits, misses, starving, dirty, changed; + + /* see below: flag-bits for lru_cache */ + unsigned long flags; + + /* when changing the label of an index element */ + unsigned int new_number; + + /* for paranoia when changing the label of an index element */ + struct lc_element *changing_element; + + void *lc_private; + const char *name; + + /* nr_elements there */ + struct hlist_head *lc_slot; + struct lc_element **lc_element; +}; + + +/* flag-bits for lru_cache */ +enum { + /* debugging aid, to catch concurrent access early. + * user needs to guarantee exclusive access by proper locking! */ + __LC_PARANOIA, + /* if we need to change the set, but currently there is a changing + * transaction pending, we are "dirty", and must defer further + * changing requests */ + __LC_DIRTY, + /* if we need to change the set, but currently there is no free nor + * unused element available, we are "starving", and must not give out + * further references, to guarantee that eventually some refcnt will + * drop to zero and we will be able to make progress again, changing + * the set, writing the transaction. + * if the statistics say we are frequently starving, + * nr_elements is too small.
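 + * The flag is cleared again when lc_put() drops a refcnt to zero, or once + * lc_get() manages to recycle an element; see lru_cache.c below.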
*/ + __LC_STARVING, +}; +#define LC_PARANOIA (1<<__LC_PARANOIA) +#define LC_DIRTY (1<<__LC_DIRTY) +#define LC_STARVING (1<<__LC_STARVING) + +extern struct lru_cache *lc_create(const char *name, struct kmem_cache *cache, + unsigned e_count, size_t e_size, size_t e_off); +extern void lc_reset(struct lru_cache *lc); +extern void lc_destroy(struct lru_cache *lc); +extern void lc_set(struct lru_cache *lc, unsigned int enr, int index); +extern void lc_del(struct lru_cache *lc, struct lc_element *element); + +extern struct lc_element *lc_try_get(struct lru_cache *lc, unsigned int enr); +extern struct lc_element *lc_find(struct lru_cache *lc, unsigned int enr); +extern struct lc_element *lc_get(struct lru_cache *lc, unsigned int enr); +extern unsigned int lc_put(struct lru_cache *lc, struct lc_element *e); +extern void lc_changed(struct lru_cache *lc, struct lc_element *e); + +struct seq_file; +extern size_t lc_seq_printf_stats(struct seq_file *seq, struct lru_cache *lc); + +extern void lc_seq_dump_details(struct seq_file *seq, struct lru_cache *lc, char *utext, + void (*detail) (struct seq_file *, struct lc_element *)); + +/** + * lc_try_lock - can be used to stop lc_get() from changing the tracked set + * @lc: the lru cache to operate on + * + * Note that the reference counts and order on the active and lru lists may + * still change. Returns true if we aquired the lock. + */ +static inline int lc_try_lock(struct lru_cache *lc) +{ + return !test_and_set_bit(__LC_DIRTY, &lc->flags); +} + +/** + * lc_unlock - unlock @lc, allow lc_get() to change the set again + * @lc: the lru cache to operate on + */ +static inline void lc_unlock(struct lru_cache *lc) +{ + clear_bit(__LC_DIRTY, &lc->flags); + smp_mb__after_clear_bit(); +} + +static inline int lc_is_used(struct lru_cache *lc, unsigned int enr) +{ + struct lc_element *e = lc_find(lc, enr); + return e && e->refcnt; +} + +#define lc_entry(ptr, type, member) \ + container_of(ptr, type, member) + +extern struct lc_element *lc_element_by_index(struct lru_cache *lc, unsigned i); +extern unsigned int lc_index_of(struct lru_cache *lc, struct lc_element *e); + +#endif diff --git a/lib/Kconfig b/lib/Kconfig index bb1326d3839c..1cfe51628e1b 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -200,4 +200,7 @@ config NLATTR config GENERIC_ATOMIC64 bool +config LRU_CACHE + tristate + endmenu diff --git a/lib/Makefile b/lib/Makefile index 2e78277eff9d..347ad8db29d3 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -91,6 +91,8 @@ obj-$(CONFIG_DYNAMIC_DEBUG) += dynamic_debug.o obj-$(CONFIG_NLATTR) += nlattr.o +obj-$(CONFIG_LRU_CACHE) += lru_cache.o + obj-$(CONFIG_DMA_API_DEBUG) += dma-debug.o obj-$(CONFIG_GENERIC_CSUM) += checksum.o diff --git a/lib/lru_cache.c b/lib/lru_cache.c new file mode 100644 index 000000000000..270de9d31b8c --- /dev/null +++ b/lib/lru_cache.c @@ -0,0 +1,560 @@ +/* + lru_cache.c + + This file is part of DRBD by Philipp Reisner and Lars Ellenberg. + + Copyright (C) 2003-2008, LINBIT Information Technologies GmbH. + Copyright (C) 2003-2008, Philipp Reisner . + Copyright (C) 2003-2008, Lars Ellenberg . + + drbd is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2, or (at your option) + any later version. + + drbd is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with drbd; see the file COPYING. If not, write to + the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. + + */ + +#include +#include +#include +#include /* for memset */ +#include /* for seq_printf */ +#include + +MODULE_AUTHOR("Philipp Reisner , " + "Lars Ellenberg "); +MODULE_DESCRIPTION("lru_cache - Track sets of hot objects"); +MODULE_LICENSE("GPL"); + +/* this is developers aid only. + * it catches concurrent access (lack of locking on the users part) */ +#define PARANOIA_ENTRY() do { \ + BUG_ON(!lc); \ + BUG_ON(!lc->nr_elements); \ + BUG_ON(test_and_set_bit(__LC_PARANOIA, &lc->flags)); \ +} while (0) + +#define RETURN(x...) do { \ + clear_bit(__LC_PARANOIA, &lc->flags); \ + smp_mb__after_clear_bit(); return x ; } while (0) + +/* BUG() if e is not one of the elements tracked by lc */ +#define PARANOIA_LC_ELEMENT(lc, e) do { \ + struct lru_cache *lc_ = (lc); \ + struct lc_element *e_ = (e); \ + unsigned i = e_->lc_index; \ + BUG_ON(i >= lc_->nr_elements); \ + BUG_ON(lc_->lc_element[i] != e_); } while (0) + +/** + * lc_create - prepares to track objects in an active set + * @name: descriptive name only used in lc_seq_printf_stats and lc_seq_dump_details + * @e_count: number of elements allowed to be active simultaneously + * @e_size: size of the tracked objects + * @e_off: offset to the &struct lc_element member in a tracked object + * + * Returns a pointer to a newly initialized struct lru_cache on success, + * or NULL on (allocation) failure. + */ +struct lru_cache *lc_create(const char *name, struct kmem_cache *cache, + unsigned e_count, size_t e_size, size_t e_off) +{ + struct hlist_head *slot = NULL; + struct lc_element **element = NULL; + struct lru_cache *lc; + struct lc_element *e; + unsigned cache_obj_size = kmem_cache_size(cache); + unsigned i; + + WARN_ON(cache_obj_size < e_size); + if (cache_obj_size < e_size) + return NULL; + + /* e_count too big; would probably fail the allocation below anyways. + * for typical use cases, e_count should be few thousand at most. 
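 + * All e_count objects are preallocated below with GFP_KERNEL from the + * kmem_cache, so an oversized e_count wastes memory before it fails.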
*/ + if (e_count > LC_MAX_ACTIVE) + return NULL; + + slot = kzalloc(e_count * sizeof(struct hlist_head*), GFP_KERNEL); + if (!slot) + goto out_fail; + element = kzalloc(e_count * sizeof(struct lc_element *), GFP_KERNEL); + if (!element) + goto out_fail; + + lc = kzalloc(sizeof(*lc), GFP_KERNEL); + if (!lc) + goto out_fail; + + INIT_LIST_HEAD(&lc->in_use); + INIT_LIST_HEAD(&lc->lru); + INIT_LIST_HEAD(&lc->free); + + lc->name = name; + lc->element_size = e_size; + lc->element_off = e_off; + lc->nr_elements = e_count; + lc->new_number = LC_FREE; + lc->lc_cache = cache; + lc->lc_element = element; + lc->lc_slot = slot; + + /* preallocate all objects */ + for (i = 0; i < e_count; i++) { + void *p = kmem_cache_alloc(cache, GFP_KERNEL); + if (!p) + break; + memset(p, 0, lc->element_size); + e = p + e_off; + e->lc_index = i; + e->lc_number = LC_FREE; + list_add(&e->list, &lc->free); + element[i] = e; + } + if (i == e_count) + return lc; + + /* else: could not allocate all elements, give up */ + for (i--; i; i--) { + void *p = element[i]; + kmem_cache_free(cache, p - e_off); + } + kfree(lc); +out_fail: + kfree(element); + kfree(slot); + return NULL; +} + +void lc_free_by_index(struct lru_cache *lc, unsigned i) +{ + void *p = lc->lc_element[i]; + WARN_ON(!p); + if (p) { + p -= lc->element_off; + kmem_cache_free(lc->lc_cache, p); + } +} + +/** + * lc_destroy - frees memory allocated by lc_create() + * @lc: the lru cache to destroy + */ +void lc_destroy(struct lru_cache *lc) +{ + unsigned i; + if (!lc) + return; + for (i = 0; i < lc->nr_elements; i++) + lc_free_by_index(lc, i); + kfree(lc->lc_element); + kfree(lc->lc_slot); + kfree(lc); +} + +/** + * lc_reset - does a full reset for @lc and the hash table slots. + * @lc: the lru cache to operate on + * + * It is roughly the equivalent of re-allocating a fresh lru_cache object, + * basically a short cut to lc_destroy(lc); lc = lc_create(...); + */ +void lc_reset(struct lru_cache *lc) +{ + unsigned i; + + INIT_LIST_HEAD(&lc->in_use); + INIT_LIST_HEAD(&lc->lru); + INIT_LIST_HEAD(&lc->free); + lc->used = 0; + lc->hits = 0; + lc->misses = 0; + lc->starving = 0; + lc->dirty = 0; + lc->changed = 0; + lc->flags = 0; + lc->changing_element = NULL; + lc->new_number = LC_FREE; + memset(lc->lc_slot, 0, sizeof(struct hlist_head) * lc->nr_elements); + + for (i = 0; i < lc->nr_elements; i++) { + struct lc_element *e = lc->lc_element[i]; + void *p = e; + p -= lc->element_off; + memset(p, 0, lc->element_size); + /* re-init it */ + e->lc_index = i; + e->lc_number = LC_FREE; + list_add(&e->list, &lc->free); + } +} + +/** + * lc_seq_printf_stats - print stats about @lc into @seq + * @seq: the seq_file to print into + * @lc: the lru cache to print statistics of + */ +size_t lc_seq_printf_stats(struct seq_file *seq, struct lru_cache *lc) +{ + /* NOTE: + * total calls to lc_get are + * (starving + hits + misses) + * misses include "dirty" count (update from an other thread in + * progress) and "changed", when this in fact lead to an successful + * update of the cache. 
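 + * An lc_get() issued while the set is already starving returns NULL before + * the hash lookup, so it is counted in "starving" only, not in hits/misses.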
+ */ + return seq_printf(seq, "\t%s: used:%u/%u " + "hits:%lu misses:%lu starving:%lu dirty:%lu changed:%lu\n", + lc->name, lc->used, lc->nr_elements, + lc->hits, lc->misses, lc->starving, lc->dirty, lc->changed); +} + +static struct hlist_head *lc_hash_slot(struct lru_cache *lc, unsigned int enr) +{ + return lc->lc_slot + (enr % lc->nr_elements); +} + + +/** + * lc_find - find element by label, if present in the hash table + * @lc: The lru_cache object + * @enr: element number + * + * Returns the pointer to an element, if the element with the requested + * "label" or element number is present in the hash table, + * or NULL if not found. Does not change the refcnt. + */ +struct lc_element *lc_find(struct lru_cache *lc, unsigned int enr) +{ + struct hlist_node *n; + struct lc_element *e; + + BUG_ON(!lc); + BUG_ON(!lc->nr_elements); + hlist_for_each_entry(e, n, lc_hash_slot(lc, enr), colision) { + if (e->lc_number == enr) + return e; + } + return NULL; +} + +/* returned element will be "recycled" immediately */ +static struct lc_element *lc_evict(struct lru_cache *lc) +{ + struct list_head *n; + struct lc_element *e; + + if (list_empty(&lc->lru)) + return NULL; + + n = lc->lru.prev; + e = list_entry(n, struct lc_element, list); + + PARANOIA_LC_ELEMENT(lc, e); + + list_del(&e->list); + hlist_del(&e->colision); + return e; +} + +/** + * lc_del - removes an element from the cache + * @lc: The lru_cache object + * @e: The element to remove + * + * @e must be unused (refcnt == 0). Moves @e from "lru" to "free" list, + * sets @e->enr to %LC_FREE. + */ +void lc_del(struct lru_cache *lc, struct lc_element *e) +{ + PARANOIA_ENTRY(); + PARANOIA_LC_ELEMENT(lc, e); + BUG_ON(e->refcnt); + + e->lc_number = LC_FREE; + hlist_del_init(&e->colision); + list_move(&e->list, &lc->free); + RETURN(); +} + +static struct lc_element *lc_get_unused_element(struct lru_cache *lc) +{ + struct list_head *n; + + if (list_empty(&lc->free)) + return lc_evict(lc); + + n = lc->free.next; + list_del(n); + return list_entry(n, struct lc_element, list); +} + +static int lc_unused_element_available(struct lru_cache *lc) +{ + if (!list_empty(&lc->free)) + return 1; /* something on the free list */ + if (!list_empty(&lc->lru)) + return 1; /* something to evict */ + + return 0; +} + + +/** + * lc_get - get element by label, maybe change the active set + * @lc: the lru cache to operate on + * @enr: the label to look up + * + * Finds an element in the cache, increases its usage count, + * "touches" and returns it. + * + * In case the requested number is not present, it needs to be added to the + * cache. Therefore it is possible that an other element becomes evicted from + * the cache. In either case, the user is notified so he is able to e.g. keep + * a persistent log of the cache changes, and therefore the objects in use. + * + * Return values: + * NULL + * The cache was marked %LC_STARVING, + * or the requested label was not in the active set + * and a changing transaction is still pending (@lc was marked %LC_DIRTY). + * Or no unused or free element could be recycled (@lc will be marked as + * %LC_STARVING, blocking further lc_get() operations). + * + * pointer to the element with the REQUESTED element number. + * In this case, it can be used right away + * + * pointer to an UNUSED element with some different element number, + * where that different number may also be %LC_FREE. 
+ * + * In this case, the cache is marked %LC_DIRTY (blocking further changes), + * and the returned element pointer is removed from the lru list and + * hash collision chains. The user now should do whatever housekeeping + * is necessary. + * Then he must call lc_changed(lc,element_pointer), to finish + * the change. + * + * NOTE: The user needs to check the lc_number on EACH use, so he recognizes + * any cache set change. + */ +struct lc_element *lc_get(struct lru_cache *lc, unsigned int enr) +{ + struct lc_element *e; + + PARANOIA_ENTRY(); + if (lc->flags & LC_STARVING) { + ++lc->starving; + RETURN(NULL); + } + + e = lc_find(lc, enr); + if (e) { + ++lc->hits; + if (e->refcnt++ == 0) + lc->used++; + list_move(&e->list, &lc->in_use); /* Not evictable... */ + RETURN(e); + } + + ++lc->misses; + + /* In case there is nothing available and we can not kick out + * the LRU element, we have to wait ... + */ + if (!lc_unused_element_available(lc)) { + __set_bit(__LC_STARVING, &lc->flags); + RETURN(NULL); + } + + /* it was not present in the active set. + * we are going to recycle an unused (or even "free") element. + * user may need to commit a transaction to record that change. + * we serialize on flags & TF_DIRTY */ + if (test_and_set_bit(__LC_DIRTY, &lc->flags)) { + ++lc->dirty; + RETURN(NULL); + } + + e = lc_get_unused_element(lc); + BUG_ON(!e); + + clear_bit(__LC_STARVING, &lc->flags); + BUG_ON(++e->refcnt != 1); + lc->used++; + + lc->changing_element = e; + lc->new_number = enr; + + RETURN(e); +} + +/* similar to lc_get, + * but only gets a new reference on an existing element. + * you either get the requested element, or NULL. + * will be consolidated into one function. + */ +struct lc_element *lc_try_get(struct lru_cache *lc, unsigned int enr) +{ + struct lc_element *e; + + PARANOIA_ENTRY(); + if (lc->flags & LC_STARVING) { + ++lc->starving; + RETURN(NULL); + } + + e = lc_find(lc, enr); + if (e) { + ++lc->hits; + if (e->refcnt++ == 0) + lc->used++; + list_move(&e->list, &lc->in_use); /* Not evictable... */ + } + RETURN(e); +} + +/** + * lc_changed - tell @lc that the change has been recorded + * @lc: the lru cache to operate on + * @e: the element pending label change + */ +void lc_changed(struct lru_cache *lc, struct lc_element *e) +{ + PARANOIA_ENTRY(); + BUG_ON(e != lc->changing_element); + PARANOIA_LC_ELEMENT(lc, e); + ++lc->changed; + e->lc_number = lc->new_number; + list_add(&e->list, &lc->in_use); + hlist_add_head(&e->colision, lc_hash_slot(lc, lc->new_number)); + lc->changing_element = NULL; + lc->new_number = LC_FREE; + clear_bit(__LC_DIRTY, &lc->flags); + smp_mb__after_clear_bit(); + RETURN(); +} + + +/** + * lc_put - give up refcnt of @e + * @lc: the lru cache to operate on + * @e: the element to put + * + * If refcnt reaches zero, the element is moved to the lru list, + * and a %LC_STARVING (if set) is cleared. + * Returns the new (post-decrement) refcnt. + */ +unsigned int lc_put(struct lru_cache *lc, struct lc_element *e) +{ + PARANOIA_ENTRY(); + PARANOIA_LC_ELEMENT(lc, e); + BUG_ON(e->refcnt == 0); + BUG_ON(e == lc->changing_element); + if (--e->refcnt == 0) { + /* move it to the front of LRU. 
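 The least recently used + * element then sits at lru.prev, which is what lc_evict() reclaims next.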
*/ + list_move(&e->list, &lc->lru); + lc->used--; + clear_bit(__LC_STARVING, &lc->flags); + smp_mb__after_clear_bit(); + } + RETURN(e->refcnt); +} + +/** + * lc_element_by_index + * @lc: the lru cache to operate on + * @i: the index of the element to return + */ +struct lc_element *lc_element_by_index(struct lru_cache *lc, unsigned i) +{ + BUG_ON(i >= lc->nr_elements); + BUG_ON(lc->lc_element[i] == NULL); + BUG_ON(lc->lc_element[i]->lc_index != i); + return lc->lc_element[i]; +} + +/** + * lc_index_of + * @lc: the lru cache to operate on + * @e: the element to query for its index position in lc->element + */ +unsigned int lc_index_of(struct lru_cache *lc, struct lc_element *e) +{ + PARANOIA_LC_ELEMENT(lc, e); + return e->lc_index; +} + +/** + * lc_set - associate index with label + * @lc: the lru cache to operate on + * @enr: the label to set + * @index: the element index to associate label with. + * + * Used to initialize the active set to some previously recorded state. + */ +void lc_set(struct lru_cache *lc, unsigned int enr, int index) +{ + struct lc_element *e; + + if (index < 0 || index >= lc->nr_elements) + return; + + e = lc_element_by_index(lc, index); + e->lc_number = enr; + + hlist_del_init(&e->colision); + hlist_add_head(&e->colision, lc_hash_slot(lc, enr)); + list_move(&e->list, e->refcnt ? &lc->in_use : &lc->lru); +} + +/** + * lc_dump - Dump a complete LRU cache to seq in textual form. + * @lc: the lru cache to operate on + * @seq: the &struct seq_file pointer to seq_printf into + * @utext: user supplied "heading" or other info + * @detail: function pointer the user may provide to dump further details + * of the object the lc_element is embedded in. + */ +void lc_seq_dump_details(struct seq_file *seq, struct lru_cache *lc, char *utext, + void (*detail) (struct seq_file *, struct lc_element *)) +{ + unsigned int nr_elements = lc->nr_elements; + struct lc_element *e; + int i; + + seq_printf(seq, "\tnn: lc_number refcnt %s\n ", utext); + for (i = 0; i < nr_elements; i++) { + e = lc_element_by_index(lc, i); + if (e->lc_number == LC_FREE) { + seq_printf(seq, "\t%2d: FREE\n", i); + } else { + seq_printf(seq, "\t%2d: %4u %4u ", i, + e->lc_number, e->refcnt); + detail(seq, e); + } + } +} + +EXPORT_SYMBOL(lc_create); +EXPORT_SYMBOL(lc_reset); +EXPORT_SYMBOL(lc_destroy); +EXPORT_SYMBOL(lc_set); +EXPORT_SYMBOL(lc_del); +EXPORT_SYMBOL(lc_try_get); +EXPORT_SYMBOL(lc_find); +EXPORT_SYMBOL(lc_get); +EXPORT_SYMBOL(lc_put); +EXPORT_SYMBOL(lc_changed); +EXPORT_SYMBOL(lc_element_by_index); +EXPORT_SYMBOL(lc_index_of); +EXPORT_SYMBOL(lc_seq_printf_stats); +EXPORT_SYMBOL(lc_seq_dump_details); -- cgit v1.2.3 From 933d35f00cfaf99985515c7c212d9dee30b1a1fb Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Thu, 1 Oct 2009 15:43:59 -0700 Subject: MAINTAINERS: ARM/Palm file patterns Signed-off-by: Joe Perches Acked-by: Marek Vasut Acked-by: Tomas Cech Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c450f3abb8c9..f8f9c4f5f7d6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -741,23 +741,36 @@ M: Dirk Opfer S: Maintained ARM/PALMTX,PALMT5,PALMLD,PALMTE2,PALMTC SUPPORT -P: Marek Vasut -M: marek.vasut@gmail.com +M: Marek Vasut L: linux-arm-kernel@lists.infradead.org W: http://hackndev.com S: Maintained +F: arch/arm/mach-pxa/include/mach/palmtx.h +F: arch/arm/mach-pxa/palmtx.c +F: arch/arm/mach-pxa/include/mach/palmt5.h +F: 
arch/arm/mach-pxa/palmt5.c +F: arch/arm/mach-pxa/include/mach/palmld.h +F: arch/arm/mach-pxa/palmld.c +F: arch/arm/mach-pxa/include/mach/palmte2.h +F: arch/arm/mach-pxa/palmte2.c +F: arch/arm/mach-pxa/include/mach/palmtc.h +F: arch/arm/mach-pxa/palmtc.c ARM/PALM TREO 680 SUPPORT M: Tomas Cech L: linux-arm-kernel@lists.infradead.org W: http://hackndev.com S: Maintained +F: arch/arm/mach-pxa/include/mach/treo680.h +F: arch/arm/mach-pxa/treo680.c ARM/PALMZ72 SUPPORT M: Sergey Lapin L: linux-arm-kernel@lists.infradead.org W: http://hackndev.com S: Maintained +F: arch/arm/mach-pxa/include/mach/palmz72.h +F: arch/arm/mach-pxa/palmz72.c ARM/PLEB SUPPORT M: Peter Chubb -- cgit v1.2.3 From 7725ccfda59715ecf8f99e3b520a0b84cc2ea79e Mon Sep 17 00:00:00 2001 From: Jing Huang Date: Wed, 23 Sep 2009 17:46:15 -0700 Subject: [SCSI] bfa: Brocade BFA FC SCSI driver Add new driver for Brocade Hardware Signed-off-by: Jing Huang Signed-off-by: James Bottomley --- MAINTAINERS | 7 + drivers/scsi/Kconfig | 10 + drivers/scsi/Makefile | 1 + drivers/scsi/bfa/Makefile | 15 + drivers/scsi/bfa/bfa_callback_priv.h | 57 + drivers/scsi/bfa/bfa_cb_ioim_macros.h | 205 ++ drivers/scsi/bfa/bfa_cee.c | 492 ++++ drivers/scsi/bfa/bfa_core.c | 402 +++ drivers/scsi/bfa/bfa_csdebug.c | 58 + drivers/scsi/bfa/bfa_fcpim.c | 175 ++ drivers/scsi/bfa/bfa_fcpim_priv.h | 188 ++ drivers/scsi/bfa/bfa_fcport.c | 1671 +++++++++++++ drivers/scsi/bfa/bfa_fcs.c | 182 ++ drivers/scsi/bfa/bfa_fcs_lport.c | 940 +++++++ drivers/scsi/bfa/bfa_fcs_port.c | 68 + drivers/scsi/bfa/bfa_fcs_uf.c | 105 + drivers/scsi/bfa/bfa_fcxp.c | 782 ++++++ drivers/scsi/bfa/bfa_fcxp_priv.h | 138 ++ drivers/scsi/bfa/bfa_fwimg_priv.h | 31 + drivers/scsi/bfa/bfa_hw_cb.c | 142 ++ drivers/scsi/bfa/bfa_hw_ct.c | 162 ++ drivers/scsi/bfa/bfa_intr.c | 218 ++ drivers/scsi/bfa/bfa_intr_priv.h | 115 + drivers/scsi/bfa/bfa_ioc.c | 2382 ++++++++++++++++++ drivers/scsi/bfa/bfa_ioc.h | 259 ++ drivers/scsi/bfa/bfa_iocfc.c | 872 +++++++ drivers/scsi/bfa/bfa_iocfc.h | 168 ++ drivers/scsi/bfa/bfa_iocfc_q.c | 44 + drivers/scsi/bfa/bfa_ioim.c | 1311 ++++++++++ drivers/scsi/bfa/bfa_itnim.c | 1088 ++++++++ drivers/scsi/bfa/bfa_log.c | 346 +++ drivers/scsi/bfa/bfa_log_module.c | 451 ++++ drivers/scsi/bfa/bfa_lps.c | 782 ++++++ drivers/scsi/bfa/bfa_lps_priv.h | 38 + drivers/scsi/bfa/bfa_module.c | 90 + drivers/scsi/bfa/bfa_modules_priv.h | 43 + drivers/scsi/bfa/bfa_os_inc.h | 222 ++ drivers/scsi/bfa/bfa_port.c | 460 ++++ drivers/scsi/bfa/bfa_port_priv.h | 90 + drivers/scsi/bfa/bfa_priv.h | 113 + drivers/scsi/bfa/bfa_rport.c | 911 +++++++ drivers/scsi/bfa/bfa_rport_priv.h | 45 + drivers/scsi/bfa/bfa_sgpg.c | 231 ++ drivers/scsi/bfa/bfa_sgpg_priv.h | 79 + drivers/scsi/bfa/bfa_sm.c | 38 + drivers/scsi/bfa/bfa_timer.c | 90 + drivers/scsi/bfa/bfa_trcmod_priv.h | 66 + drivers/scsi/bfa/bfa_tskim.c | 689 ++++++ drivers/scsi/bfa/bfa_uf.c | 345 +++ drivers/scsi/bfa/bfa_uf_priv.h | 47 + drivers/scsi/bfa/bfad.c | 1182 +++++++++ drivers/scsi/bfa/bfad_attr.c | 649 +++++ drivers/scsi/bfa/bfad_attr.h | 65 + drivers/scsi/bfa/bfad_drv.h | 295 +++ drivers/scsi/bfa/bfad_fwimg.c | 95 + drivers/scsi/bfa/bfad_im.c | 1230 +++++++++ drivers/scsi/bfa/bfad_im.h | 150 ++ drivers/scsi/bfa/bfad_im_compat.h | 46 + drivers/scsi/bfa/bfad_intr.c | 214 ++ drivers/scsi/bfa/bfad_ipfc.h | 42 + drivers/scsi/bfa/bfad_os.c | 50 + drivers/scsi/bfa/bfad_tm.h | 59 + drivers/scsi/bfa/bfad_trcmod.h | 52 + drivers/scsi/bfa/fab.c | 62 + drivers/scsi/bfa/fabric.c | 1278 ++++++++++ drivers/scsi/bfa/fcbuild.c | 1449 +++++++++++ 
drivers/scsi/bfa/fcbuild.h | 273 ++ drivers/scsi/bfa/fcpim.c | 844 +++++++ drivers/scsi/bfa/fcptm.c | 68 + drivers/scsi/bfa/fcs.h | 30 + drivers/scsi/bfa/fcs_auth.h | 37 + drivers/scsi/bfa/fcs_fabric.h | 61 + drivers/scsi/bfa/fcs_fcpim.h | 44 + drivers/scsi/bfa/fcs_fcptm.h | 45 + drivers/scsi/bfa/fcs_fcxp.h | 29 + drivers/scsi/bfa/fcs_lport.h | 117 + drivers/scsi/bfa/fcs_ms.h | 35 + drivers/scsi/bfa/fcs_port.h | 32 + drivers/scsi/bfa/fcs_rport.h | 61 + drivers/scsi/bfa/fcs_trcmod.h | 56 + drivers/scsi/bfa/fcs_uf.h | 32 + drivers/scsi/bfa/fcs_vport.h | 39 + drivers/scsi/bfa/fdmi.c | 1223 +++++++++ drivers/scsi/bfa/include/aen/bfa_aen.h | 92 + drivers/scsi/bfa/include/aen/bfa_aen_adapter.h | 31 + drivers/scsi/bfa/include/aen/bfa_aen_audit.h | 31 + drivers/scsi/bfa/include/aen/bfa_aen_ethport.h | 35 + drivers/scsi/bfa/include/aen/bfa_aen_ioc.h | 37 + drivers/scsi/bfa/include/aen/bfa_aen_itnim.h | 33 + drivers/scsi/bfa/include/aen/bfa_aen_lport.h | 51 + drivers/scsi/bfa/include/aen/bfa_aen_port.h | 57 + drivers/scsi/bfa/include/aen/bfa_aen_rport.h | 37 + drivers/scsi/bfa/include/bfa.h | 177 ++ drivers/scsi/bfa/include/bfa_fcpim.h | 159 ++ drivers/scsi/bfa/include/bfa_fcptm.h | 47 + drivers/scsi/bfa/include/bfa_svc.h | 324 +++ drivers/scsi/bfa/include/bfa_timer.h | 53 + drivers/scsi/bfa/include/bfi/bfi.h | 174 ++ drivers/scsi/bfa/include/bfi/bfi_boot.h | 34 + drivers/scsi/bfa/include/bfi/bfi_cbreg.h | 305 +++ drivers/scsi/bfa/include/bfi/bfi_cee.h | 119 + drivers/scsi/bfa/include/bfi/bfi_ctreg.h | 611 +++++ drivers/scsi/bfa/include/bfi/bfi_fabric.h | 92 + drivers/scsi/bfa/include/bfi/bfi_fcpim.h | 301 +++ drivers/scsi/bfa/include/bfi/bfi_fcxp.h | 71 + drivers/scsi/bfa/include/bfi/bfi_ioc.h | 202 ++ drivers/scsi/bfa/include/bfi/bfi_iocfc.h | 177 ++ drivers/scsi/bfa/include/bfi/bfi_lport.h | 89 + drivers/scsi/bfa/include/bfi/bfi_lps.h | 96 + drivers/scsi/bfa/include/bfi/bfi_port.h | 115 + drivers/scsi/bfa/include/bfi/bfi_pport.h | 184 ++ drivers/scsi/bfa/include/bfi/bfi_rport.h | 104 + drivers/scsi/bfa/include/bfi/bfi_uf.h | 52 + drivers/scsi/bfa/include/cna/bfa_cna_trcmod.h | 36 + drivers/scsi/bfa/include/cna/cee/bfa_cee.h | 77 + drivers/scsi/bfa/include/cna/port/bfa_port.h | 69 + drivers/scsi/bfa/include/cna/pstats/ethport_defs.h | 36 + drivers/scsi/bfa/include/cna/pstats/phyport_defs.h | 218 ++ drivers/scsi/bfa/include/cs/bfa_checksum.h | 60 + drivers/scsi/bfa/include/cs/bfa_debug.h | 44 + drivers/scsi/bfa/include/cs/bfa_log.h | 184 ++ drivers/scsi/bfa/include/cs/bfa_perf.h | 34 + drivers/scsi/bfa/include/cs/bfa_plog.h | 162 ++ drivers/scsi/bfa/include/cs/bfa_q.h | 81 + drivers/scsi/bfa/include/cs/bfa_sm.h | 69 + drivers/scsi/bfa/include/cs/bfa_trc.h | 176 ++ drivers/scsi/bfa/include/cs/bfa_wc.h | 68 + drivers/scsi/bfa/include/defs/bfa_defs_adapter.h | 82 + drivers/scsi/bfa/include/defs/bfa_defs_aen.h | 73 + drivers/scsi/bfa/include/defs/bfa_defs_audit.h | 38 + drivers/scsi/bfa/include/defs/bfa_defs_auth.h | 112 + drivers/scsi/bfa/include/defs/bfa_defs_boot.h | 71 + drivers/scsi/bfa/include/defs/bfa_defs_cee.h | 159 ++ drivers/scsi/bfa/include/defs/bfa_defs_driver.h | 40 + drivers/scsi/bfa/include/defs/bfa_defs_ethport.h | 98 + drivers/scsi/bfa/include/defs/bfa_defs_fcpim.h | 45 + drivers/scsi/bfa/include/defs/bfa_defs_im_common.h | 32 + drivers/scsi/bfa/include/defs/bfa_defs_im_team.h | 72 + drivers/scsi/bfa/include/defs/bfa_defs_ioc.h | 152 ++ drivers/scsi/bfa/include/defs/bfa_defs_iocfc.h | 310 +++ drivers/scsi/bfa/include/defs/bfa_defs_ipfc.h | 70 + 
drivers/scsi/bfa/include/defs/bfa_defs_itnim.h | 126 + drivers/scsi/bfa/include/defs/bfa_defs_led.h | 35 + drivers/scsi/bfa/include/defs/bfa_defs_lport.h | 68 + drivers/scsi/bfa/include/defs/bfa_defs_mfg.h | 58 + drivers/scsi/bfa/include/defs/bfa_defs_pci.h | 41 + drivers/scsi/bfa/include/defs/bfa_defs_pm.h | 33 + drivers/scsi/bfa/include/defs/bfa_defs_pom.h | 56 + drivers/scsi/bfa/include/defs/bfa_defs_port.h | 245 ++ drivers/scsi/bfa/include/defs/bfa_defs_pport.h | 383 +++ drivers/scsi/bfa/include/defs/bfa_defs_qos.h | 99 + drivers/scsi/bfa/include/defs/bfa_defs_rport.h | 199 ++ drivers/scsi/bfa/include/defs/bfa_defs_status.h | 255 ++ drivers/scsi/bfa/include/defs/bfa_defs_tin.h | 118 + drivers/scsi/bfa/include/defs/bfa_defs_tsensor.h | 43 + drivers/scsi/bfa/include/defs/bfa_defs_types.h | 30 + drivers/scsi/bfa/include/defs/bfa_defs_version.h | 22 + drivers/scsi/bfa/include/defs/bfa_defs_vf.h | 74 + drivers/scsi/bfa/include/defs/bfa_defs_vport.h | 91 + drivers/scsi/bfa/include/fcb/bfa_fcb.h | 33 + drivers/scsi/bfa/include/fcb/bfa_fcb_fcpim.h | 76 + drivers/scsi/bfa/include/fcb/bfa_fcb_port.h | 113 + drivers/scsi/bfa/include/fcb/bfa_fcb_rport.h | 80 + drivers/scsi/bfa/include/fcb/bfa_fcb_vf.h | 47 + drivers/scsi/bfa/include/fcb/bfa_fcb_vport.h | 47 + drivers/scsi/bfa/include/fcs/bfa_fcs.h | 73 + drivers/scsi/bfa/include/fcs/bfa_fcs_auth.h | 82 + drivers/scsi/bfa/include/fcs/bfa_fcs_fabric.h | 112 + drivers/scsi/bfa/include/fcs/bfa_fcs_fcpim.h | 131 + drivers/scsi/bfa/include/fcs/bfa_fcs_fdmi.h | 63 + drivers/scsi/bfa/include/fcs/bfa_fcs_lport.h | 226 ++ drivers/scsi/bfa/include/fcs/bfa_fcs_rport.h | 104 + drivers/scsi/bfa/include/fcs/bfa_fcs_vport.h | 63 + drivers/scsi/bfa/include/log/bfa_log_fcs.h | 28 + drivers/scsi/bfa/include/log/bfa_log_hal.h | 30 + drivers/scsi/bfa/include/log/bfa_log_linux.h | 44 + drivers/scsi/bfa/include/log/bfa_log_wdrv.h | 36 + drivers/scsi/bfa/include/protocol/ct.h | 492 ++++ drivers/scsi/bfa/include/protocol/fc.h | 1105 +++++++++ drivers/scsi/bfa/include/protocol/fc_sp.h | 224 ++ drivers/scsi/bfa/include/protocol/fcp.h | 186 ++ drivers/scsi/bfa/include/protocol/fdmi.h | 163 ++ drivers/scsi/bfa/include/protocol/pcifw.h | 75 + drivers/scsi/bfa/include/protocol/scsi.h | 1648 ++++++++++++ drivers/scsi/bfa/include/protocol/types.h | 42 + drivers/scsi/bfa/loop.c | 422 ++++ drivers/scsi/bfa/lport_api.c | 291 +++ drivers/scsi/bfa/lport_priv.h | 82 + drivers/scsi/bfa/ms.c | 759 ++++++ drivers/scsi/bfa/n2n.c | 105 + drivers/scsi/bfa/ns.c | 1243 ++++++++++ drivers/scsi/bfa/plog.c | 184 ++ drivers/scsi/bfa/rport.c | 2618 ++++++++++++++++++++ drivers/scsi/bfa/rport_api.c | 180 ++ drivers/scsi/bfa/rport_ftrs.c | 375 +++ drivers/scsi/bfa/scn.c | 482 ++++ drivers/scsi/bfa/vfapi.c | 292 +++ drivers/scsi/bfa/vport.c | 891 +++++++ 198 files changed, 49189 insertions(+) create mode 100644 drivers/scsi/bfa/Makefile create mode 100644 drivers/scsi/bfa/bfa_callback_priv.h create mode 100644 drivers/scsi/bfa/bfa_cb_ioim_macros.h create mode 100644 drivers/scsi/bfa/bfa_cee.c create mode 100644 drivers/scsi/bfa/bfa_core.c create mode 100644 drivers/scsi/bfa/bfa_csdebug.c create mode 100644 drivers/scsi/bfa/bfa_fcpim.c create mode 100644 drivers/scsi/bfa/bfa_fcpim_priv.h create mode 100644 drivers/scsi/bfa/bfa_fcport.c create mode 100644 drivers/scsi/bfa/bfa_fcs.c create mode 100644 drivers/scsi/bfa/bfa_fcs_lport.c create mode 100644 drivers/scsi/bfa/bfa_fcs_port.c create mode 100644 drivers/scsi/bfa/bfa_fcs_uf.c create mode 100644 drivers/scsi/bfa/bfa_fcxp.c create mode 100644 
drivers/scsi/bfa/bfa_fcxp_priv.h create mode 100644 drivers/scsi/bfa/bfa_fwimg_priv.h create mode 100644 drivers/scsi/bfa/bfa_hw_cb.c create mode 100644 drivers/scsi/bfa/bfa_hw_ct.c create mode 100644 drivers/scsi/bfa/bfa_intr.c create mode 100644 drivers/scsi/bfa/bfa_intr_priv.h create mode 100644 drivers/scsi/bfa/bfa_ioc.c create mode 100644 drivers/scsi/bfa/bfa_ioc.h create mode 100644 drivers/scsi/bfa/bfa_iocfc.c create mode 100644 drivers/scsi/bfa/bfa_iocfc.h create mode 100644 drivers/scsi/bfa/bfa_iocfc_q.c create mode 100644 drivers/scsi/bfa/bfa_ioim.c create mode 100644 drivers/scsi/bfa/bfa_itnim.c create mode 100644 drivers/scsi/bfa/bfa_log.c create mode 100644 drivers/scsi/bfa/bfa_log_module.c create mode 100644 drivers/scsi/bfa/bfa_lps.c create mode 100644 drivers/scsi/bfa/bfa_lps_priv.h create mode 100644 drivers/scsi/bfa/bfa_module.c create mode 100644 drivers/scsi/bfa/bfa_modules_priv.h create mode 100644 drivers/scsi/bfa/bfa_os_inc.h create mode 100644 drivers/scsi/bfa/bfa_port.c create mode 100644 drivers/scsi/bfa/bfa_port_priv.h create mode 100644 drivers/scsi/bfa/bfa_priv.h create mode 100644 drivers/scsi/bfa/bfa_rport.c create mode 100644 drivers/scsi/bfa/bfa_rport_priv.h create mode 100644 drivers/scsi/bfa/bfa_sgpg.c create mode 100644 drivers/scsi/bfa/bfa_sgpg_priv.h create mode 100644 drivers/scsi/bfa/bfa_sm.c create mode 100644 drivers/scsi/bfa/bfa_timer.c create mode 100644 drivers/scsi/bfa/bfa_trcmod_priv.h create mode 100644 drivers/scsi/bfa/bfa_tskim.c create mode 100644 drivers/scsi/bfa/bfa_uf.c create mode 100644 drivers/scsi/bfa/bfa_uf_priv.h create mode 100644 drivers/scsi/bfa/bfad.c create mode 100644 drivers/scsi/bfa/bfad_attr.c create mode 100644 drivers/scsi/bfa/bfad_attr.h create mode 100644 drivers/scsi/bfa/bfad_drv.h create mode 100644 drivers/scsi/bfa/bfad_fwimg.c create mode 100644 drivers/scsi/bfa/bfad_im.c create mode 100644 drivers/scsi/bfa/bfad_im.h create mode 100644 drivers/scsi/bfa/bfad_im_compat.h create mode 100644 drivers/scsi/bfa/bfad_intr.c create mode 100644 drivers/scsi/bfa/bfad_ipfc.h create mode 100644 drivers/scsi/bfa/bfad_os.c create mode 100644 drivers/scsi/bfa/bfad_tm.h create mode 100644 drivers/scsi/bfa/bfad_trcmod.h create mode 100644 drivers/scsi/bfa/fab.c create mode 100644 drivers/scsi/bfa/fabric.c create mode 100644 drivers/scsi/bfa/fcbuild.c create mode 100644 drivers/scsi/bfa/fcbuild.h create mode 100644 drivers/scsi/bfa/fcpim.c create mode 100644 drivers/scsi/bfa/fcptm.c create mode 100644 drivers/scsi/bfa/fcs.h create mode 100644 drivers/scsi/bfa/fcs_auth.h create mode 100644 drivers/scsi/bfa/fcs_fabric.h create mode 100644 drivers/scsi/bfa/fcs_fcpim.h create mode 100644 drivers/scsi/bfa/fcs_fcptm.h create mode 100644 drivers/scsi/bfa/fcs_fcxp.h create mode 100644 drivers/scsi/bfa/fcs_lport.h create mode 100644 drivers/scsi/bfa/fcs_ms.h create mode 100644 drivers/scsi/bfa/fcs_port.h create mode 100644 drivers/scsi/bfa/fcs_rport.h create mode 100644 drivers/scsi/bfa/fcs_trcmod.h create mode 100644 drivers/scsi/bfa/fcs_uf.h create mode 100644 drivers/scsi/bfa/fcs_vport.h create mode 100644 drivers/scsi/bfa/fdmi.c create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen.h create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen_adapter.h create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen_audit.h create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen_ethport.h create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen_ioc.h create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen_itnim.h create mode 100644 
drivers/scsi/bfa/include/aen/bfa_aen_lport.h create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen_port.h create mode 100644 drivers/scsi/bfa/include/aen/bfa_aen_rport.h create mode 100644 drivers/scsi/bfa/include/bfa.h create mode 100644 drivers/scsi/bfa/include/bfa_fcpim.h create mode 100644 drivers/scsi/bfa/include/bfa_fcptm.h create mode 100644 drivers/scsi/bfa/include/bfa_svc.h create mode 100644 drivers/scsi/bfa/include/bfa_timer.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_boot.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_cbreg.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_cee.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_ctreg.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_fabric.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_fcpim.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_fcxp.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_ioc.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_iocfc.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_lport.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_lps.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_port.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_pport.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_rport.h create mode 100644 drivers/scsi/bfa/include/bfi/bfi_uf.h create mode 100644 drivers/scsi/bfa/include/cna/bfa_cna_trcmod.h create mode 100644 drivers/scsi/bfa/include/cna/cee/bfa_cee.h create mode 100644 drivers/scsi/bfa/include/cna/port/bfa_port.h create mode 100644 drivers/scsi/bfa/include/cna/pstats/ethport_defs.h create mode 100644 drivers/scsi/bfa/include/cna/pstats/phyport_defs.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_checksum.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_debug.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_log.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_perf.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_plog.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_q.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_sm.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_trc.h create mode 100644 drivers/scsi/bfa/include/cs/bfa_wc.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_adapter.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_aen.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_audit.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_auth.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_boot.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_cee.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_driver.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_ethport.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_fcpim.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_im_common.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_im_team.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_ioc.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_iocfc.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_ipfc.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_itnim.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_led.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_lport.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_mfg.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_pci.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_pm.h 
create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_pom.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_port.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_pport.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_qos.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_rport.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_status.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_tin.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_tsensor.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_types.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_version.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_vf.h create mode 100644 drivers/scsi/bfa/include/defs/bfa_defs_vport.h create mode 100644 drivers/scsi/bfa/include/fcb/bfa_fcb.h create mode 100644 drivers/scsi/bfa/include/fcb/bfa_fcb_fcpim.h create mode 100644 drivers/scsi/bfa/include/fcb/bfa_fcb_port.h create mode 100644 drivers/scsi/bfa/include/fcb/bfa_fcb_rport.h create mode 100644 drivers/scsi/bfa/include/fcb/bfa_fcb_vf.h create mode 100644 drivers/scsi/bfa/include/fcb/bfa_fcb_vport.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs_auth.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs_fabric.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs_fcpim.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs_fdmi.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs_lport.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs_rport.h create mode 100644 drivers/scsi/bfa/include/fcs/bfa_fcs_vport.h create mode 100644 drivers/scsi/bfa/include/log/bfa_log_fcs.h create mode 100644 drivers/scsi/bfa/include/log/bfa_log_hal.h create mode 100644 drivers/scsi/bfa/include/log/bfa_log_linux.h create mode 100644 drivers/scsi/bfa/include/log/bfa_log_wdrv.h create mode 100644 drivers/scsi/bfa/include/protocol/ct.h create mode 100644 drivers/scsi/bfa/include/protocol/fc.h create mode 100644 drivers/scsi/bfa/include/protocol/fc_sp.h create mode 100644 drivers/scsi/bfa/include/protocol/fcp.h create mode 100644 drivers/scsi/bfa/include/protocol/fdmi.h create mode 100644 drivers/scsi/bfa/include/protocol/pcifw.h create mode 100644 drivers/scsi/bfa/include/protocol/scsi.h create mode 100644 drivers/scsi/bfa/include/protocol/types.h create mode 100644 drivers/scsi/bfa/loop.c create mode 100644 drivers/scsi/bfa/lport_api.c create mode 100644 drivers/scsi/bfa/lport_priv.h create mode 100644 drivers/scsi/bfa/ms.c create mode 100644 drivers/scsi/bfa/n2n.c create mode 100644 drivers/scsi/bfa/ns.c create mode 100644 drivers/scsi/bfa/plog.c create mode 100644 drivers/scsi/bfa/rport.c create mode 100644 drivers/scsi/bfa/rport_api.c create mode 100644 drivers/scsi/bfa/rport_ftrs.c create mode 100644 drivers/scsi/bfa/scn.c create mode 100644 drivers/scsi/bfa/vfapi.c create mode 100644 drivers/scsi/bfa/vport.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c450f3abb8c9..c233f6fe0fff 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1211,6 +1211,13 @@ L: netdev@vger.kernel.org S: Supported F: drivers/net/tg3.* +BROCADE BFA FC SCSI DRIVER +P: Jing Huang +M: huangj@brocade.com +L: linux-scsi@vger.kernel.org +S: Supported +F: drivers/scsi/bfa/ + BSG (block layer generic sg v4 driver) M: FUJITA Tomonori L: linux-scsi@vger.kernel.org diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 82bb3b2d207a..a8ea01b07868 100644 --- a/drivers/scsi/Kconfig +++ 
b/drivers/scsi/Kconfig @@ -1827,6 +1827,16 @@ config SCSI_SRP To compile this driver as a module, choose M here: the module will be called libsrp. +config SCSI_BFA_FC + tristate "Brocade BFA Fibre Channel Support" + depends on PCI && SCSI + select SCSI_FC_ATTRS + help + This bfa driver supports all Brocade PCIe FC/FCOE host adapters. + + To compile this driver as a module, choose M here. The module will + be called bfa. + endif # SCSI_LOWLEVEL source "drivers/scsi/pcmcia/Kconfig" diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index 61a94af3cee7..316f0a1a7986 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -86,6 +86,7 @@ obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/ obj-$(CONFIG_SCSI_QLA_ISCSI) += qla4xxx/ obj-$(CONFIG_SCSI_LPFC) += lpfc/ +obj-$(CONFIG_SCSI_BFA_FC) += bfa/ obj-$(CONFIG_SCSI_PAS16) += pas16.o obj-$(CONFIG_SCSI_T128) += t128.o obj-$(CONFIG_SCSI_DMX3191D) += dmx3191d.o diff --git a/drivers/scsi/bfa/Makefile b/drivers/scsi/bfa/Makefile new file mode 100644 index 000000000000..1d6009490d1c --- /dev/null +++ b/drivers/scsi/bfa/Makefile @@ -0,0 +1,15 @@ +obj-$(CONFIG_SCSI_BFA_FC) := bfa.o + +bfa-y := bfad.o bfad_intr.o bfad_os.o bfad_im.o bfad_attr.o bfad_fwimg.o + +bfa-y += bfa_core.o bfa_ioc.o bfa_iocfc.o bfa_fcxp.o bfa_lps.o +bfa-y += bfa_hw_cb.o bfa_hw_ct.o bfa_intr.o bfa_timer.o bfa_rport.o +bfa-y += bfa_fcport.o bfa_port.o bfa_uf.o bfa_sgpg.o bfa_module.o bfa_ioim.o +bfa-y += bfa_itnim.o bfa_fcpim.o bfa_tskim.o bfa_log.o bfa_log_module.o +bfa-y += bfa_csdebug.o bfa_sm.o plog.o + +bfa-y += fcbuild.o fabric.o fcpim.o vfapi.o fcptm.o bfa_fcs.o bfa_fcs_port.o +bfa-y += bfa_fcs_uf.o bfa_fcs_lport.o fab.o fdmi.o ms.o ns.o scn.o loop.o +bfa-y += lport_api.o n2n.o rport.o rport_api.o rport_ftrs.o vport.o + +ccflags-y := -I$(obj) -I$(obj)/include -I$(obj)/include/cna diff --git a/drivers/scsi/bfa/bfa_callback_priv.h b/drivers/scsi/bfa/bfa_callback_priv.h new file mode 100644 index 000000000000..1e3265c9f7d4 --- /dev/null +++ b/drivers/scsi/bfa/bfa_callback_priv.h @@ -0,0 +1,57 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_CALLBACK_PRIV_H__ +#define __BFA_CALLBACK_PRIV_H__ + +#include + +typedef void (*bfa_cb_cbfn_t) (void *cbarg, bfa_boolean_t complete); + +/** + * Generic BFA callback element. 
+ */ +struct bfa_cb_qe_s { + struct list_head qe; + bfa_cb_cbfn_t cbfn; + bfa_boolean_t once; + u32 rsvd; + void *cbarg; +}; + +#define bfa_cb_queue(__bfa, __hcb_qe, __cbfn, __cbarg) do { \ + (__hcb_qe)->cbfn = (__cbfn); \ + (__hcb_qe)->cbarg = (__cbarg); \ + list_add_tail(&(__hcb_qe)->qe, &(__bfa)->comp_q); \ +} while (0) + +#define bfa_cb_dequeue(__hcb_qe) list_del(&(__hcb_qe)->qe) + +#define bfa_cb_queue_once(__bfa, __hcb_qe, __cbfn, __cbarg) do { \ + (__hcb_qe)->cbfn = (__cbfn); \ + (__hcb_qe)->cbarg = (__cbarg); \ + if (!(__hcb_qe)->once) { \ + list_add_tail((__hcb_qe), &(__bfa)->comp_q); \ + (__hcb_qe)->once = BFA_TRUE; \ + } \ +} while (0) + +#define bfa_cb_queue_done(__hcb_qe) do { \ + (__hcb_qe)->once = BFA_FALSE; \ +} while (0) + +#endif /* __BFA_CALLBACK_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_cb_ioim_macros.h b/drivers/scsi/bfa/bfa_cb_ioim_macros.h new file mode 100644 index 000000000000..0050c838c358 --- /dev/null +++ b/drivers/scsi/bfa/bfa_cb_ioim_macros.h @@ -0,0 +1,205 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_cb_ioim_macros.h BFA IOIM driver interface macros. + */ + +#ifndef __BFA_HCB_IOIM_MACROS_H__ +#define __BFA_HCB_IOIM_MACROS_H__ + +#include +/* + * #include + * + * #include #include #include + * #include + */ +#include "bfad_im_compat.h" + +/* + * task attribute values in FCP-2 FCP_CMND IU + */ +#define SIMPLE_Q 0 +#define HEAD_OF_Q 1 +#define ORDERED_Q 2 +#define ACA_Q 4 +#define UNTAGGED 5 + +static inline lun_t +bfad_int_to_lun(u32 luno) +{ + union { + u16 scsi_lun[4]; + lun_t bfa_lun; + } lun; + + lun.bfa_lun = 0; + lun.scsi_lun[0] = bfa_os_htons(luno); + + return (lun.bfa_lun); +} + +/** + * Get LUN for the I/O request + */ +#define bfa_cb_ioim_get_lun(__dio) \ + bfad_int_to_lun(((struct scsi_cmnd *)__dio)->device->lun) + +/** + * Get CDB for the I/O request + */ +static inline u8 * +bfa_cb_ioim_get_cdb(struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + + return ((u8 *) cmnd->cmnd); +} + +/** + * Get I/O direction (read/write) for the I/O request + */ +static inline enum fcp_iodir +bfa_cb_ioim_get_iodir(struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + enum dma_data_direction dmadir; + + dmadir = cmnd->sc_data_direction; + if (dmadir == DMA_TO_DEVICE) + return FCP_IODIR_WRITE; + else if (dmadir == DMA_FROM_DEVICE) + return FCP_IODIR_READ; + else + return FCP_IODIR_NONE; +} + +/** + * Get IO size in bytes for the I/O request + */ +static inline u32 +bfa_cb_ioim_get_size(struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + + return (scsi_bufflen(cmnd)); +} + +/** + * Get timeout for the I/O request + */ +static inline u8 +bfa_cb_ioim_get_timeout(struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + /* + * TBD: need a timeout for scsi passthru + */ + if (cmnd->device->host == NULL) + return 4; + + return 0; +} + +/** + * Get SG element for 
the I/O request given the SG element index + */ +static inline union bfi_addr_u +bfa_cb_ioim_get_sgaddr(struct bfad_ioim_s *dio, int sgeid) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + struct scatterlist *sge; + u64 addr; + + sge = (struct scatterlist *)scsi_sglist(cmnd) + sgeid; + addr = (u64) sg_dma_address(sge); + + return (*(union bfi_addr_u *) &addr); +} + +static inline u32 +bfa_cb_ioim_get_sglen(struct bfad_ioim_s *dio, int sgeid) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + struct scatterlist *sge; + u32 len; + + sge = (struct scatterlist *)scsi_sglist(cmnd) + sgeid; + len = sg_dma_len(sge); + + return len; +} + +/** + * Get Command Reference Number for the I/O request. 0 if none. + */ +static inline u8 +bfa_cb_ioim_get_crn(struct bfad_ioim_s *dio) +{ + return 0; +} + +/** + * Get SAM-3 priority for the I/O request. 0 is default. + */ +static inline u8 +bfa_cb_ioim_get_priority(struct bfad_ioim_s *dio) +{ + return 0; +} + +/** + * Get task attributes for the I/O request. Default is FCP_TASK_ATTR_SIMPLE(0). + */ +static inline u8 +bfa_cb_ioim_get_taskattr(struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + u8 task_attr = UNTAGGED; + + if (cmnd->device->tagged_supported) { + switch (cmnd->tag) { + case HEAD_OF_QUEUE_TAG: + task_attr = HEAD_OF_Q; + break; + case ORDERED_QUEUE_TAG: + task_attr = ORDERED_Q; + break; + default: + task_attr = SIMPLE_Q; + break; + } + } + + return task_attr; +} + +/** + * Get CDB length in bytes for the I/O request. Default is FCP_CMND_CDB_LEN(16). + */ +static inline u8 +bfa_cb_ioim_get_cdblen(struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + + return (cmnd->cmd_len); +} + + + +#endif /* __BFA_HCB_IOIM_MACROS_H__ */ diff --git a/drivers/scsi/bfa/bfa_cee.c b/drivers/scsi/bfa/bfa_cee.c new file mode 100644 index 000000000000..7a959c34e789 --- /dev/null +++ b/drivers/scsi/bfa/bfa_cee.c @@ -0,0 +1,492 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
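The accessor macros above translate the opaque struct bfad_ioim_s handle back into the Linux struct scsi_cmnd it wraps. The byte-order trick in bfad_int_to_lun() is worth a worked example; the following userspace sketch substitutes stand-ins for the driver's lun_t and bfa_os_htons() (assumptions for illustration only, not the driver's headers):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>              /* htons() */

typedef uint64_t lun_t;             /* stand-in for the driver's lun_t */
#define bfa_os_htons(x) htons(x)    /* stand-in for the driver's wrapper */

static lun_t
bfad_int_to_lun(uint32_t luno)
{
        union {
                uint16_t scsi_lun[4];
                lun_t bfa_lun;
        } lun;

        lun.bfa_lun = 0;
        lun.scsi_lun[0] = bfa_os_htons(luno);  /* level 0 of the FCP LUN */
        return lun.bfa_lun;
}

int
main(void)
{
        /* On a little-endian host this prints 0x0000000000000500: the
         * first two bytes in memory are 00 05, i.e. LUN 5 in big-endian
         * order in the first 16-bit level of the 8-byte FCP LUN field. */
        printf("FCP LUN for 5: 0x%016llx\n",
               (unsigned long long)bfad_int_to_lun(5));
        return 0;
}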
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +BFA_TRC_FILE(CNA, CEE); + +#define bfa_ioc_portid(__ioc) ((__ioc)->port_id) +#define bfa_lpuid(__arg) bfa_ioc_portid(&(__arg)->ioc) + +static void bfa_cee_format_lldp_cfg(struct bfa_cee_lldp_cfg_s *lldp_cfg); +static void bfa_cee_format_dcbcx_stats(struct bfa_cee_dcbx_stats_s + *dcbcx_stats); +static void bfa_cee_format_lldp_stats(struct bfa_cee_lldp_stats_s + *lldp_stats); +static void bfa_cee_format_cfg_stats(struct bfa_cee_cfg_stats_s *cfg_stats); +static void bfa_cee_format_cee_cfg(void *buffer); +static void bfa_cee_format_cee_stats(void *buffer); + +static void +bfa_cee_format_cee_stats(void *buffer) +{ + struct bfa_cee_stats_s *cee_stats = buffer; + bfa_cee_format_dcbcx_stats(&cee_stats->dcbx_stats); + bfa_cee_format_lldp_stats(&cee_stats->lldp_stats); + bfa_cee_format_cfg_stats(&cee_stats->cfg_stats); +} + +static void +bfa_cee_format_cee_cfg(void *buffer) +{ + struct bfa_cee_attr_s *cee_cfg = buffer; + bfa_cee_format_lldp_cfg(&cee_cfg->lldp_remote); +} + +static void +bfa_cee_format_dcbcx_stats(struct bfa_cee_dcbx_stats_s *dcbcx_stats) +{ + dcbcx_stats->subtlvs_unrecognized = + bfa_os_ntohl(dcbcx_stats->subtlvs_unrecognized); + dcbcx_stats->negotiation_failed = + bfa_os_ntohl(dcbcx_stats->negotiation_failed); + dcbcx_stats->remote_cfg_changed = + bfa_os_ntohl(dcbcx_stats->remote_cfg_changed); + dcbcx_stats->tlvs_received = bfa_os_ntohl(dcbcx_stats->tlvs_received); + dcbcx_stats->tlvs_invalid = bfa_os_ntohl(dcbcx_stats->tlvs_invalid); + dcbcx_stats->seqno = bfa_os_ntohl(dcbcx_stats->seqno); + dcbcx_stats->ackno = bfa_os_ntohl(dcbcx_stats->ackno); + dcbcx_stats->recvd_seqno = bfa_os_ntohl(dcbcx_stats->recvd_seqno); + dcbcx_stats->recvd_ackno = bfa_os_ntohl(dcbcx_stats->recvd_ackno); +} + +static void +bfa_cee_format_lldp_stats(struct bfa_cee_lldp_stats_s *lldp_stats) +{ + lldp_stats->frames_transmitted = + bfa_os_ntohl(lldp_stats->frames_transmitted); + lldp_stats->frames_aged_out = bfa_os_ntohl(lldp_stats->frames_aged_out); + lldp_stats->frames_discarded = + bfa_os_ntohl(lldp_stats->frames_discarded); + lldp_stats->frames_in_error = bfa_os_ntohl(lldp_stats->frames_in_error); + lldp_stats->frames_rcvd = bfa_os_ntohl(lldp_stats->frames_rcvd); + lldp_stats->tlvs_discarded = bfa_os_ntohl(lldp_stats->tlvs_discarded); + lldp_stats->tlvs_unrecognized = + bfa_os_ntohl(lldp_stats->tlvs_unrecognized); +} + +static void +bfa_cee_format_cfg_stats(struct bfa_cee_cfg_stats_s *cfg_stats) +{ + cfg_stats->cee_status_down = bfa_os_ntohl(cfg_stats->cee_status_down); + cfg_stats->cee_status_up = bfa_os_ntohl(cfg_stats->cee_status_up); + cfg_stats->cee_hw_cfg_changed = + bfa_os_ntohl(cfg_stats->cee_hw_cfg_changed); + cfg_stats->recvd_invalid_cfg = + bfa_os_ntohl(cfg_stats->recvd_invalid_cfg); +} + +static void +bfa_cee_format_lldp_cfg(struct bfa_cee_lldp_cfg_s *lldp_cfg) +{ + lldp_cfg->time_to_interval = bfa_os_ntohs(lldp_cfg->time_to_interval); + lldp_cfg->enabled_system_cap = + bfa_os_ntohs(lldp_cfg->enabled_system_cap); +} + +/** + * bfa_cee_attr_meminfo() + * + * + * @param[in] void + * + * @return Size of DMA region + */ +static u32 +bfa_cee_attr_meminfo(void) +{ + return BFA_ROUNDUP(sizeof(struct bfa_cee_attr_s), BFA_DMA_ALIGN_SZ); +} + +/** + * bfa_cee_stats_meminfo() + * + * + * @param[in] void + * + * @return Size of DMA region + */ +static u32 +bfa_cee_stats_meminfo(void) +{ + return BFA_ROUNDUP(sizeof(struct bfa_cee_stats_s), BFA_DMA_ALIGN_SZ); +} + +/** + * bfa_cee_get_attr_isr() + * + * + * 
@param[in] cee - Pointer to the CEE module + * status - Return status from the f/w + * + * @return void + */ +static void +bfa_cee_get_attr_isr(struct bfa_cee_s *cee, bfa_status_t status) +{ + cee->get_attr_status = status; + bfa_trc(cee, 0); + if (status == BFA_STATUS_OK) { + bfa_trc(cee, 0); + /* + * The requested data has been copied to the DMA area, process + * it. + */ + memcpy(cee->attr, cee->attr_dma.kva, + sizeof(struct bfa_cee_attr_s)); + bfa_cee_format_cee_cfg(cee->attr); + } + cee->get_attr_pending = BFA_FALSE; + if (cee->cbfn.get_attr_cbfn) { + bfa_trc(cee, 0); + cee->cbfn.get_attr_cbfn(cee->cbfn.get_attr_cbarg, status); + } + bfa_trc(cee, 0); +} + +/** + * bfa_cee_get_stats_isr() + * + * + * @param[in] cee - Pointer to the CEE module + * status - Return status from the f/w + * + * @return void + */ +static void +bfa_cee_get_stats_isr(struct bfa_cee_s *cee, bfa_status_t status) +{ + cee->get_stats_status = status; + bfa_trc(cee, 0); + if (status == BFA_STATUS_OK) { + bfa_trc(cee, 0); + /* + * The requested data has been copied to the DMA area, process + * it. + */ + memcpy(cee->stats, cee->stats_dma.kva, + sizeof(struct bfa_cee_stats_s)); + bfa_cee_format_cee_stats(cee->stats); + } + cee->get_stats_pending = BFA_FALSE; + bfa_trc(cee, 0); + if (cee->cbfn.get_stats_cbfn) { + bfa_trc(cee, 0); + cee->cbfn.get_stats_cbfn(cee->cbfn.get_stats_cbarg, status); + } + bfa_trc(cee, 0); +} + +/** + * bfa_cee_reset_stats_isr() + * + * + * @param[in] cee - Pointer to the CEE module + * status - Return status from the f/w + * + * @return void + */ +static void +bfa_cee_reset_stats_isr(struct bfa_cee_s *cee, bfa_status_t status) +{ + cee->reset_stats_status = status; + cee->reset_stats_pending = BFA_FALSE; + if (cee->cbfn.reset_stats_cbfn) + cee->cbfn.reset_stats_cbfn(cee->cbfn.reset_stats_cbarg, status); +} + +/** + * bfa_cee_meminfo() + * + * + * @param[in] void + * + * @return Size of DMA region + */ +u32 +bfa_cee_meminfo(void) +{ + return (bfa_cee_attr_meminfo() + bfa_cee_stats_meminfo()); +} + +/** + * bfa_cee_mem_claim() + * + * + * @param[in] cee CEE module pointer + * dma_kva Kernel Virtual Address of CEE DMA Memory + * dma_pa Physical Address of CEE DMA Memory + * + * @return void + */ +void +bfa_cee_mem_claim(struct bfa_cee_s *cee, u8 *dma_kva, u64 dma_pa) +{ + cee->attr_dma.kva = dma_kva; + cee->attr_dma.pa = dma_pa; + cee->stats_dma.kva = dma_kva + bfa_cee_attr_meminfo(); + cee->stats_dma.pa = dma_pa + bfa_cee_attr_meminfo(); + cee->attr = (struct bfa_cee_attr_s *)dma_kva; + cee->stats = + (struct bfa_cee_stats_s *)(dma_kva + bfa_cee_attr_meminfo()); +} + +/** + * bfa_cee_get_attr() + * + * Send the request to the f/w to fetch CEE attributes. + * + * @param[in] Pointer to the CEE module data structure.
+ * + * @return Status + */ + +bfa_status_t +bfa_cee_get_attr(struct bfa_cee_s *cee, struct bfa_cee_attr_s *attr, + bfa_cee_get_attr_cbfn_t cbfn, void *cbarg) +{ + struct bfi_cee_get_req_s *cmd; + + bfa_assert((cee != NULL) && (cee->ioc != NULL)); + bfa_trc(cee, 0); + if (!bfa_ioc_is_operational(cee->ioc)) { + bfa_trc(cee, 0); + return BFA_STATUS_IOC_FAILURE; + } + if (cee->get_attr_pending == BFA_TRUE) { + bfa_trc(cee, 0); + return BFA_STATUS_DEVBUSY; + } + cee->get_attr_pending = BFA_TRUE; + cmd = (struct bfi_cee_get_req_s *)cee->get_cfg_mb.msg; + cee->attr = attr; + cee->cbfn.get_attr_cbfn = cbfn; + cee->cbfn.get_attr_cbarg = cbarg; + bfi_h2i_set(cmd->mh, BFI_MC_CEE, BFI_CEE_H2I_GET_CFG_REQ, + bfa_ioc_portid(cee->ioc)); + bfa_dma_be_addr_set(cmd->dma_addr, cee->attr_dma.pa); + bfa_ioc_mbox_queue(cee->ioc, &cee->get_cfg_mb); + bfa_trc(cee, 0); + + return BFA_STATUS_OK; +} + +/** + * bfa_cee_get_stats() + * + * Send the request to the f/w to fetch CEE statistics. + * + * @param[in] Pointer to the CEE module data structure. + * + * @return Status + */ + +bfa_status_t +bfa_cee_get_stats(struct bfa_cee_s *cee, struct bfa_cee_stats_s *stats, + bfa_cee_get_stats_cbfn_t cbfn, void *cbarg) +{ + struct bfi_cee_get_req_s *cmd; + + bfa_assert((cee != NULL) && (cee->ioc != NULL)); + + if (!bfa_ioc_is_operational(cee->ioc)) { + bfa_trc(cee, 0); + return BFA_STATUS_IOC_FAILURE; + } + if (cee->get_stats_pending == BFA_TRUE) { + bfa_trc(cee, 0); + return BFA_STATUS_DEVBUSY; + } + cee->get_stats_pending = BFA_TRUE; + cmd = (struct bfi_cee_get_req_s *)cee->get_stats_mb.msg; + cee->stats = stats; + cee->cbfn.get_stats_cbfn = cbfn; + cee->cbfn.get_stats_cbarg = cbarg; + bfi_h2i_set(cmd->mh, BFI_MC_CEE, BFI_CEE_H2I_GET_STATS_REQ, + bfa_ioc_portid(cee->ioc)); + bfa_dma_be_addr_set(cmd->dma_addr, cee->stats_dma.pa); + bfa_ioc_mbox_queue(cee->ioc, &cee->get_stats_mb); + bfa_trc(cee, 0); + + return BFA_STATUS_OK; +} + +/** + * bfa_cee_reset_stats() + * + * + * @param[in] Pointer to the CEE module data structure. + * + * @return Status + */ + +bfa_status_t +bfa_cee_reset_stats(struct bfa_cee_s *cee, bfa_cee_reset_stats_cbfn_t cbfn, + void *cbarg) +{ + struct bfi_cee_reset_stats_s *cmd; + + bfa_assert((cee != NULL) && (cee->ioc != NULL)); + if (!bfa_ioc_is_operational(cee->ioc)) { + bfa_trc(cee, 0); + return BFA_STATUS_IOC_FAILURE; + } + if (cee->reset_stats_pending == BFA_TRUE) { + bfa_trc(cee, 0); + return BFA_STATUS_DEVBUSY; + } + cee->reset_stats_pending = BFA_TRUE; + cmd = (struct bfi_cee_reset_stats_s *)cee->reset_stats_mb.msg; + cee->cbfn.reset_stats_cbfn = cbfn; + cee->cbfn.reset_stats_cbarg = cbarg; + bfi_h2i_set(cmd->mh, BFI_MC_CEE, BFI_CEE_H2I_RESET_STATS, + bfa_ioc_portid(cee->ioc)); + bfa_ioc_mbox_queue(cee->ioc, &cee->reset_stats_mb); + bfa_trc(cee, 0); + return BFA_STATUS_OK; +} + +/** + * bfa_cee_isr() + * + * + * @param[in] Pointer to the CEE module data structure.
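The three request functions above follow one pattern: reject if the IOC is not operational, reject with BFA_STATUS_DEVBUSY if a request of the same kind is already pending, otherwise build a mailbox command pointing at the claimed DMA area and let the ISR run the caller's callback. A minimal caller might look like this sketch (driver-context pseudocode against the API introduced in this patch; the my_* names and the retry policy are illustrative assumptions):

/* Assumes a bfa_cee_s that has been attached and mem-claimed as above. */
static void
my_cee_attr_done(void *cbarg, bfa_status_t status)
{
        struct bfa_cee_attr_s *attr = cbarg;

        if (status != BFA_STATUS_OK)
                return;         /* IOC failure or heartbeat loss */
        /* attr->lldp_remote has already been byte-swapped by
         * bfa_cee_format_cee_cfg() on the ISR path; use it directly. */
}

static bfa_status_t
my_query_cee(struct bfa_cee_s *cee, struct bfa_cee_attr_s *attr)
{
        bfa_status_t rc;

        rc = bfa_cee_get_attr(cee, attr, my_cee_attr_done, attr);
        if (rc == BFA_STATUS_DEVBUSY) {
                /* a previous query is still in flight -- retry later */
        }
        return rc;
}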
+ * + * @return void + */ + +void +bfa_cee_isr(void *cbarg, struct bfi_mbmsg_s *m) +{ + union bfi_cee_i2h_msg_u *msg; + struct bfi_cee_get_rsp_s *get_rsp; + struct bfa_cee_s *cee = (struct bfa_cee_s *)cbarg; + msg = (union bfi_cee_i2h_msg_u *)m; + get_rsp = (struct bfi_cee_get_rsp_s *)m; + bfa_trc(cee, msg->mh.msg_id); + switch (msg->mh.msg_id) { + case BFI_CEE_I2H_GET_CFG_RSP: + bfa_trc(cee, get_rsp->cmd_status); + bfa_cee_get_attr_isr(cee, get_rsp->cmd_status); + break; + case BFI_CEE_I2H_GET_STATS_RSP: + bfa_cee_get_stats_isr(cee, get_rsp->cmd_status); + break; + case BFI_CEE_I2H_RESET_STATS_RSP: + bfa_cee_reset_stats_isr(cee, get_rsp->cmd_status); + break; + default: + bfa_assert(0); + } +} + +/** + * bfa_cee_hbfail() + * + * + * @param[in] Pointer to the CEE module data structure. + * + * @return void + */ + +void +bfa_cee_hbfail(void *arg) +{ + struct bfa_cee_s *cee; + cee = (struct bfa_cee_s *)arg; + + if (cee->get_attr_pending == BFA_TRUE) { + cee->get_attr_status = BFA_STATUS_FAILED; + cee->get_attr_pending = BFA_FALSE; + if (cee->cbfn.get_attr_cbfn) { + cee->cbfn.get_attr_cbfn(cee->cbfn.get_attr_cbarg, + BFA_STATUS_FAILED); + } + } + if (cee->get_stats_pending == BFA_TRUE) { + cee->get_stats_status = BFA_STATUS_FAILED; + cee->get_stats_pending = BFA_FALSE; + if (cee->cbfn.get_stats_cbfn) { + cee->cbfn.get_stats_cbfn(cee->cbfn.get_stats_cbarg, + BFA_STATUS_FAILED); + } + } + if (cee->reset_stats_pending == BFA_TRUE) { + cee->reset_stats_status = BFA_STATUS_FAILED; + cee->reset_stats_pending = BFA_FALSE; + if (cee->cbfn.reset_stats_cbfn) { + cee->cbfn.reset_stats_cbfn(cee->cbfn.reset_stats_cbarg, + BFA_STATUS_FAILED); + } + } +} + +/** + * bfa_cee_attach() + * + * + * @param[in] cee - Pointer to the CEE module data structure + * ioc - Pointer to the ioc module data structure + * dev - Pointer to the device driver module data structure + * The device driver specific mbox ISR functions have + * this pointer as one of the parameters. + * trcmod - + * logmod - + * + * @return void + */ +void +bfa_cee_attach(struct bfa_cee_s *cee, struct bfa_ioc_s *ioc, void *dev, + struct bfa_trc_mod_s *trcmod, struct bfa_log_mod_s *logmod) +{ + bfa_assert(cee != NULL); + cee->dev = dev; + cee->trcmod = trcmod; + cee->logmod = logmod; + cee->ioc = ioc; + + bfa_ioc_mbox_regisr(cee->ioc, BFI_MC_CEE, bfa_cee_isr, cee); + bfa_ioc_hbfail_init(&cee->hbfail, bfa_cee_hbfail, cee); + bfa_ioc_hbfail_register(cee->ioc, &cee->hbfail); + bfa_trc(cee, 0); +} + +/** + * bfa_cee_detach() + * + * + * @param[in] cee - Pointer to the CEE module data structure + * + * @return void + */ +void +bfa_cee_detach(struct bfa_cee_s *cee) +{ + /* + * For now, just check if there is some ioctl pending and mark that as + * failed? + */ + /* bfa_cee_hbfail(cee); */ +} diff --git a/drivers/scsi/bfa/bfa_core.c b/drivers/scsi/bfa/bfa_core.c new file mode 100644 index 000000000000..44e2d1155c51 --- /dev/null +++ b/drivers/scsi/bfa/bfa_core.c @@ -0,0 +1,402 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU + * General Public License for more details. + */ + +#include +#include +#include +#include + +#define DEF_CFG_NUM_FABRICS 1 +#define DEF_CFG_NUM_LPORTS 256 +#define DEF_CFG_NUM_CQS 4 +#define DEF_CFG_NUM_IOIM_REQS (BFA_IOIM_MAX) +#define DEF_CFG_NUM_TSKIM_REQS 128 +#define DEF_CFG_NUM_FCXP_REQS 64 +#define DEF_CFG_NUM_UF_BUFS 64 +#define DEF_CFG_NUM_RPORTS 1024 +#define DEF_CFG_NUM_ITNIMS (DEF_CFG_NUM_RPORTS) +#define DEF_CFG_NUM_TINS 256 + +#define DEF_CFG_NUM_SGPGS 2048 +#define DEF_CFG_NUM_REQQ_ELEMS 256 +#define DEF_CFG_NUM_RSPQ_ELEMS 64 +#define DEF_CFG_NUM_SBOOT_TGTS 16 +#define DEF_CFG_NUM_SBOOT_LUNS 16 + +/** + * Use this function to query the memory requirement of the BFA library. + * This function needs to be called before bfa_attach() to get the + * memory required of the BFA layer for a given driver configuration. + * + * This call will fail if the cap is out of range compared to pre-defined + * values within the BFA library. + * + * @param[in] cfg - pointer to bfa_ioc_cfg_t. Driver layer should indicate + * its configuration in this structure. + * The default values for struct bfa_iocfc_cfg_s can be + * fetched using the bfa_cfg_get_default() API. + * + * If cap's boundary check fails, the library will use + * the default bfa_cap_t values (and log a warning msg). + * + * @param[out] meminfo - pointer to bfa_meminfo_t. This content + * indicates the memory type (see bfa_mem_type_t) and + * amount of memory required. + * + * Driver should allocate the memory, populate the + * starting address for each block and provide the same + * structure as input parameter to bfa_attach() call. + * + * @return void + * + * Special Considerations: @note + */ +void +bfa_cfg_get_meminfo(struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *meminfo) +{ + int i; + u32 km_len = 0, dm_len = 0; + + bfa_assert((cfg != NULL) && (meminfo != NULL)); + + bfa_os_memset((void *)meminfo, 0, sizeof(struct bfa_meminfo_s)); + meminfo->meminfo[BFA_MEM_TYPE_KVA - 1].mem_type = + BFA_MEM_TYPE_KVA; + meminfo->meminfo[BFA_MEM_TYPE_DMA - 1].mem_type = + BFA_MEM_TYPE_DMA; + + bfa_iocfc_meminfo(cfg, &km_len, &dm_len); + + for (i = 0; hal_mods[i]; i++) + hal_mods[i]->meminfo(cfg, &km_len, &dm_len); + + + meminfo->meminfo[BFA_MEM_TYPE_KVA - 1].mem_len = km_len; + meminfo->meminfo[BFA_MEM_TYPE_DMA - 1].mem_len = dm_len; +} + +/** + * Use this function to attach the driver instance with the BFA + * library. This function will not trigger any HW initialization + * process (which will be done in bfa_init() call) + * + * This call will fail if the cap is out of range compared to + * pre-defined values within the BFA library. + * + * @param[out] bfa Pointer to bfa_t. + * @param[in] bfad Opaque handle back to the driver's IOC structure + * @param[in] cfg Pointer to bfa_ioc_cfg_t. Should be same structure + * that was used in bfa_cfg_get_meminfo(). + * @param[in] meminfo Pointer to bfa_meminfo_t. The driver should + * use the bfa_cfg_get_meminfo() call to + * find the memory blocks required, allocate the + * required memory and provide the starting addresses.
+ * @param[in] pcidev pointer to struct bfa_pcidev_s + * + * @return + * void + * + * Special Considerations: + * + * @note + * + */ +void +bfa_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev) +{ + int i; + struct bfa_mem_elem_s *melem; + + bfa->fcs = BFA_FALSE; + + bfa_assert((cfg != NULL) && (meminfo != NULL)); + + /** + * initialize all memory pointers for iterative allocation + */ + for (i = 0; i < BFA_MEM_TYPE_MAX; i++) { + melem = meminfo->meminfo + i; + melem->kva_curp = melem->kva; + melem->dma_curp = melem->dma; + } + + bfa_iocfc_attach(bfa, bfad, cfg, meminfo, pcidev); + + for (i = 0; hal_mods[i]; i++) + hal_mods[i]->attach(bfa, bfad, cfg, meminfo, pcidev); + +} + +/** + * Use this function to delete a BFA IOC. IOC should be stopped (by + * calling bfa_stop()) before this function call. + * + * @param[in] bfa - pointer to bfa_t. + * + * @return + * void + * + * Special Considerations: + * + * @note + */ +void +bfa_detach(struct bfa_s *bfa) +{ + int i; + + for (i = 0; hal_mods[i]; i++) + hal_mods[i]->detach(bfa); + + bfa_iocfc_detach(bfa); +} + + +void +bfa_init_trc(struct bfa_s *bfa, struct bfa_trc_mod_s *trcmod) +{ + bfa->trcmod = trcmod; +} + + +void +bfa_init_log(struct bfa_s *bfa, struct bfa_log_mod_s *logmod) +{ + bfa->logm = logmod; +} + + +void +bfa_init_aen(struct bfa_s *bfa, struct bfa_aen_s *aen) +{ + bfa->aen = aen; +} + +void +bfa_init_plog(struct bfa_s *bfa, struct bfa_plog_s *plog) +{ + bfa->plog = plog; +} + +/** + * Initialize IOC. + * + * This function will return immediately; when the IOC initialization is + * completed, bfa_cb_init() will be called. + * + * @param[in] bfa instance + * + * @return void + * + * Special Considerations: + * + * @note + * When this function returns, the driver should register the interrupt service + * routine(s) and enable the device interrupts. If this is not done, + * bfa_cb_init() will never get called + */ +void +bfa_init(struct bfa_s *bfa) +{ + bfa_iocfc_init(bfa); +} + +/** + * Use this function to initiate the IOC configuration setup. This function + * will return immediately. + * + * @param[in] bfa instance + * + * @return None + */ +void +bfa_start(struct bfa_s *bfa) +{ + bfa_iocfc_start(bfa); +} + +/** + * Use this function to quiesce the IOC. This function will return immediately; + * when the IOC is actually stopped, bfa_cb_stop() will be called. + * + * @param[in] bfa - pointer to bfa_t. + * + * @return None + * + * Special Considerations: + * bfa_cb_stop() could be called before or after bfa_stop() returns. + * + * @note + * In case of any failure, we could handle it automatically by doing a + * reset and then succeed the bfa_stop() call.
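Read together, these comments prescribe a strict bring-up order. A condensed sketch of the sequence (allocation and error handling elided; bfad and pcidev stand for the driver's own instance data, assumed here for illustration):

static void
bfad_bringup_sketch(struct bfa_s *bfa, void *bfad,
                    struct bfa_pcidev_s *pcidev)
{
        struct bfa_iocfc_cfg_s cfg;
        struct bfa_meminfo_s meminfo;

        bfa_cfg_get_default(&cfg);           /* compiled-in defaults */
        bfa_cfg_get_meminfo(&cfg, &meminfo); /* how much KVA/DMA memory? */
        /* ... allocate every meminfo block and fill in the starting
         * addresses before going any further ... */
        bfa_attach(bfa, bfad, &cfg, &meminfo, pcidev);
        bfa_init(bfa);  /* returns at once; now register ISRs and enable
                         * interrupts, or bfa_cb_init() never fires */
        /* once bfa_cb_init() reports success: */
        bfa_start(bfa);
        /* teardown mirrors bring-up: bfa_stop() (wait for bfa_cb_stop()),
         * then bfa_detach() */
}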
+ */ +void +bfa_stop(struct bfa_s *bfa) +{ + bfa_iocfc_stop(bfa); +} + +void +bfa_comp_deq(struct bfa_s *bfa, struct list_head *comp_q) +{ + INIT_LIST_HEAD(comp_q); + list_splice_tail_init(&bfa->comp_q, comp_q); +} + +void +bfa_comp_process(struct bfa_s *bfa, struct list_head *comp_q) +{ + struct list_head *qe; + struct list_head *qen; + struct bfa_cb_qe_s *hcb_qe; + + list_for_each_safe(qe, qen, comp_q) { + hcb_qe = (struct bfa_cb_qe_s *) qe; + hcb_qe->cbfn(hcb_qe->cbarg, BFA_TRUE); + } +} + +void +bfa_comp_free(struct bfa_s *bfa, struct list_head *comp_q) +{ + struct list_head *qe; + struct bfa_cb_qe_s *hcb_qe; + + while (!list_empty(comp_q)) { + bfa_q_deq(comp_q, &qe); + hcb_qe = (struct bfa_cb_qe_s *) qe; + hcb_qe->cbfn(hcb_qe->cbarg, BFA_FALSE); + } +} + +void +bfa_attach_fcs(struct bfa_s *bfa) +{ + bfa->fcs = BFA_TRUE; +} + +/** + * Periodic timer heart beat from driver + */ +void +bfa_timer_tick(struct bfa_s *bfa) +{ + bfa_timer_beat(&bfa->timer_mod); +} + +#ifndef BFA_BIOS_BUILD +/** + * Return the list of PCI vendor/device id lists supported by this + * BFA instance. + */ +void +bfa_get_pciids(struct bfa_pciid_s **pciids, int *npciids) +{ + static struct bfa_pciid_s __pciids[] = { + {BFA_PCI_VENDOR_ID_BROCADE, BFA_PCI_DEVICE_ID_FC_8G2P}, + {BFA_PCI_VENDOR_ID_BROCADE, BFA_PCI_DEVICE_ID_FC_8G1P}, + {BFA_PCI_VENDOR_ID_BROCADE, BFA_PCI_DEVICE_ID_CT}, + }; + + *npciids = sizeof(__pciids) / sizeof(__pciids[0]); + *pciids = __pciids; +} + +/** + * Use this function to query the default struct bfa_iocfc_cfg_s value (compiled + * into BFA layer). The OS driver can then go back and overwrite entries that + * have been configured by the user. + * + * @param[in] cfg - pointer to bfa_ioc_cfg_t + * + * @return + * void + * + * Special Considerations: + * @note + */ +void +bfa_cfg_get_default(struct bfa_iocfc_cfg_s *cfg) +{ + cfg->fwcfg.num_fabrics = DEF_CFG_NUM_FABRICS; + cfg->fwcfg.num_lports = DEF_CFG_NUM_LPORTS; + cfg->fwcfg.num_rports = DEF_CFG_NUM_RPORTS; + cfg->fwcfg.num_ioim_reqs = DEF_CFG_NUM_IOIM_REQS; + cfg->fwcfg.num_tskim_reqs = DEF_CFG_NUM_TSKIM_REQS; + cfg->fwcfg.num_fcxp_reqs = DEF_CFG_NUM_FCXP_REQS; + cfg->fwcfg.num_uf_bufs = DEF_CFG_NUM_UF_BUFS; + cfg->fwcfg.num_cqs = DEF_CFG_NUM_CQS; + + cfg->drvcfg.num_reqq_elems = DEF_CFG_NUM_REQQ_ELEMS; + cfg->drvcfg.num_rspq_elems = DEF_CFG_NUM_RSPQ_ELEMS; + cfg->drvcfg.num_sgpgs = DEF_CFG_NUM_SGPGS; + cfg->drvcfg.num_sboot_tgts = DEF_CFG_NUM_SBOOT_TGTS; + cfg->drvcfg.num_sboot_luns = DEF_CFG_NUM_SBOOT_LUNS; + cfg->drvcfg.path_tov = BFA_FCPIM_PATHTOV_DEF; + cfg->drvcfg.ioc_recover = BFA_FALSE; + cfg->drvcfg.delay_comp = BFA_FALSE; + +} + +void +bfa_cfg_get_min(struct bfa_iocfc_cfg_s *cfg) +{ + bfa_cfg_get_default(cfg); + cfg->fwcfg.num_ioim_reqs = BFA_IOIM_MIN; + cfg->fwcfg.num_tskim_reqs = BFA_TSKIM_MIN; + cfg->fwcfg.num_fcxp_reqs = BFA_FCXP_MIN; + cfg->fwcfg.num_uf_bufs = BFA_UF_MIN; + cfg->fwcfg.num_rports = BFA_RPORT_MIN; + + cfg->drvcfg.num_sgpgs = BFA_SGPG_MIN; + cfg->drvcfg.num_reqq_elems = BFA_REQQ_NELEMS_MIN; + cfg->drvcfg.num_rspq_elems = BFA_RSPQ_NELEMS_MIN; + cfg->drvcfg.min_cfg = BFA_TRUE; +} + +void +bfa_get_attr(struct bfa_s *bfa, struct bfa_ioc_attr_s *ioc_attr) +{ + bfa_ioc_get_attr(&bfa->ioc, ioc_attr); +} + +/** + * Retrieve firmware trace information on IOC failure. + */ +bfa_status_t +bfa_debug_fwsave(struct bfa_s *bfa, void *trcdata, int *trclen) +{ + return bfa_ioc_debug_fwsave(&bfa->ioc, trcdata, trclen); +} + +/** + * Fetch firmware trace data.
+ * + * @param[in] bfa BFA instance + * @param[out] trcdata Firmware trace buffer + * @param[in,out] trclen Firmware trace buffer len + * + * @retval BFA_STATUS_OK Firmware trace is fetched. + * @retval BFA_STATUS_INPROGRESS Firmware trace fetch is in progress. + */ +bfa_status_t +bfa_debug_fwtrc(struct bfa_s *bfa, void *trcdata, int *trclen) +{ + return bfa_ioc_debug_fwtrc(&bfa->ioc, trcdata, trclen); +} +#endif diff --git a/drivers/scsi/bfa/bfa_csdebug.c b/drivers/scsi/bfa/bfa_csdebug.c new file mode 100644 index 000000000000..1b71d349451a --- /dev/null +++ b/drivers/scsi/bfa/bfa_csdebug.c @@ -0,0 +1,58 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include +#include +#include + +/** + * cs_debug_api + */ + + +void +bfa_panic(int line, char *file, char *panicstr) +{ + bfa_log(NULL, BFA_LOG_HAL_ASSERT, file, line, panicstr); + bfa_os_panic(); +} + +void +bfa_sm_panic(struct bfa_log_mod_s *logm, int line, char *file, int event) +{ + bfa_log(logm, BFA_LOG_HAL_SM_ASSERT, file, line, event); + bfa_os_panic(); +} + +int +bfa_q_is_on_q_func(struct list_head *q, struct list_head *qe) +{ + struct list_head *tqe; + + tqe = bfa_q_next(q); + while (tqe != q) { + if (tqe == qe) + return (1); + tqe = bfa_q_next(tqe); + if (tqe == NULL) + break; + } + return (0); +} + + diff --git a/drivers/scsi/bfa/bfa_fcpim.c b/drivers/scsi/bfa/bfa_fcpim.c new file mode 100644 index 000000000000..401babe3494e --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcpim.c @@ -0,0 +1,175 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include + +BFA_TRC_FILE(HAL, FCPIM); +BFA_MODULE(fcpim); + +/** + * hal_fcpim_mod BFA FCP Initiator Mode module + */ + +/** + * Compute and return memory needed by FCP(im) module. 
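Since the accounting in bfa_fcpim_meminfo() below is easy to misread, here is a worked example of what it adds to the two running totals (constants from bfa_fcpim_priv.h, introduced later in this patch; struct sizes are left symbolic because they depend on the build):

/*
 * A driver requesting num_ioim_reqs = 5000 is clamped to BFA_IOIM_MAX
 * (2000), and a request of 2 is raised to BFA_IOIM_MIN (8). For the
 * clamped count n, the module then accounts
 *
 *     km_len += n * (sizeof(struct bfa_ioim_s) +
 *                    sizeof(struct bfa_ioim_sp_s));
 *     dm_len += n * BFI_IOIM_SNSLEN;    -- per-IO sense buffer, DMA-able
 *
 * Task management commands only consume kernel memory:
 *
 *     km_len += num_tskim_reqs * sizeof(struct bfa_tskim_s);
 */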
+ */ +static void +bfa_fcpim_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *km_len, + u32 *dm_len) +{ + bfa_itnim_meminfo(cfg, km_len, dm_len); + + /** + * IO memory + */ + if (cfg->fwcfg.num_ioim_reqs < BFA_IOIM_MIN) + cfg->fwcfg.num_ioim_reqs = BFA_IOIM_MIN; + else if (cfg->fwcfg.num_ioim_reqs > BFA_IOIM_MAX) + cfg->fwcfg.num_ioim_reqs = BFA_IOIM_MAX; + + *km_len += cfg->fwcfg.num_ioim_reqs * + (sizeof(struct bfa_ioim_s) + sizeof(struct bfa_ioim_sp_s)); + + *dm_len += cfg->fwcfg.num_ioim_reqs * BFI_IOIM_SNSLEN; + + /** + * task management command memory + */ + if (cfg->fwcfg.num_tskim_reqs < BFA_TSKIM_MIN) + cfg->fwcfg.num_tskim_reqs = BFA_TSKIM_MIN; + *km_len += cfg->fwcfg.num_tskim_reqs * sizeof(struct bfa_tskim_s); +} + + +static void +bfa_fcpim_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + bfa_trc(bfa, cfg->drvcfg.path_tov); + bfa_trc(bfa, cfg->fwcfg.num_rports); + bfa_trc(bfa, cfg->fwcfg.num_ioim_reqs); + bfa_trc(bfa, cfg->fwcfg.num_tskim_reqs); + + fcpim->bfa = bfa; + fcpim->num_itnims = cfg->fwcfg.num_rports; + fcpim->num_ioim_reqs = cfg->fwcfg.num_ioim_reqs; + fcpim->num_tskim_reqs = cfg->fwcfg.num_tskim_reqs; + fcpim->path_tov = cfg->drvcfg.path_tov; + fcpim->delay_comp = cfg->drvcfg.delay_comp; + + bfa_itnim_attach(fcpim, meminfo); + bfa_tskim_attach(fcpim, meminfo); + bfa_ioim_attach(fcpim, meminfo); +} + +static void +bfa_fcpim_initdone(struct bfa_s *bfa) +{ +} + +static void +bfa_fcpim_detach(struct bfa_s *bfa) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + bfa_ioim_detach(fcpim); + bfa_tskim_detach(fcpim); +} + +static void +bfa_fcpim_start(struct bfa_s *bfa) +{ +} + +static void +bfa_fcpim_stop(struct bfa_s *bfa) +{ +} + +static void +bfa_fcpim_iocdisable(struct bfa_s *bfa) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + struct bfa_itnim_s *itnim; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &fcpim->itnim_q) { + itnim = (struct bfa_itnim_s *) qe; + bfa_itnim_iocdisable(itnim); + } +} + +void +bfa_fcpim_path_tov_set(struct bfa_s *bfa, u16 path_tov) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + fcpim->path_tov = path_tov * 1000; + if (fcpim->path_tov > BFA_FCPIM_PATHTOV_MAX) + fcpim->path_tov = BFA_FCPIM_PATHTOV_MAX; +} + +u16 +bfa_fcpim_path_tov_get(struct bfa_s *bfa) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + return (fcpim->path_tov / 1000); +} + +bfa_status_t +bfa_fcpim_get_modstats(struct bfa_s *bfa, struct bfa_fcpim_stats_s *modstats) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + *modstats = fcpim->stats; + + return BFA_STATUS_OK; +} + +bfa_status_t +bfa_fcpim_clr_modstats(struct bfa_s *bfa) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + memset(&fcpim->stats, 0, sizeof(struct bfa_fcpim_stats_s)); + + return BFA_STATUS_OK; +} + +void +bfa_fcpim_qdepth_set(struct bfa_s *bfa, u16 q_depth) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + bfa_assert(q_depth <= BFA_IOCFC_QDEPTH_MAX); + + fcpim->q_depth = q_depth; +} + +u16 +bfa_fcpim_qdepth_get(struct bfa_s *bfa) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + + return (fcpim->q_depth); +} + + diff --git a/drivers/scsi/bfa/bfa_fcpim_priv.h b/drivers/scsi/bfa/bfa_fcpim_priv.h new file mode 100644 index 000000000000..153206cfb37a --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcpim_priv.h @@ -0,0 +1,188 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, 
Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_FCPIM_PRIV_H__ +#define __BFA_FCPIM_PRIV_H__ + +#include +#include +#include +#include "bfa_sgpg_priv.h" + +#define BFA_ITNIM_MIN 32 +#define BFA_ITNIM_MAX 1024 + +#define BFA_IOIM_MIN 8 +#define BFA_IOIM_MAX 2000 + +#define BFA_TSKIM_MIN 4 +#define BFA_TSKIM_MAX 512 +#define BFA_FCPIM_PATHTOV_DEF (30 * 1000) /* in millisecs */ +#define BFA_FCPIM_PATHTOV_MAX (90 * 1000) /* in millisecs */ + +#define bfa_fcpim_stats(__fcpim, __stats) \ + (__fcpim)->stats.__stats ++ + +struct bfa_fcpim_mod_s { + struct bfa_s *bfa; + struct bfa_itnim_s *itnim_arr; + struct bfa_ioim_s *ioim_arr; + struct bfa_ioim_sp_s *ioim_sp_arr; + struct bfa_tskim_s *tskim_arr; + struct bfa_dma_s snsbase; + int num_itnims; + int num_ioim_reqs; + int num_tskim_reqs; + u32 path_tov; + u16 q_depth; + u16 rsvd; + struct list_head itnim_q; /* queue of active itnim */ + struct list_head ioim_free_q; /* free IO resources */ + struct list_head ioim_resfree_q; /* IOs waiting for f/w */ + struct list_head ioim_comp_q; /* IO global comp Q */ + struct list_head tskim_free_q; + u32 ios_active; /* current active IOs */ + u32 delay_comp; + struct bfa_fcpim_stats_s stats; +}; + +struct bfa_ioim_s; +struct bfa_tskim_s; + +/** + * BFA IO (initiator mode) + */ +struct bfa_ioim_s { + struct list_head qe; /* queue element */ + bfa_sm_t sm; /* BFA ioim state machine */ + struct bfa_s *bfa; /* BFA module */ + struct bfa_fcpim_mod_s *fcpim; /* parent fcpim module */ + struct bfa_itnim_s *itnim; /* i-t-n nexus for this IO */ + struct bfad_ioim_s *dio; /* driver IO handle */ + u16 iotag; /* FWI IO tag */ + u16 abort_tag; /* unique abort request tag */ + u16 nsges; /* number of SG elements */ + u16 nsgpgs; /* number of SG pages */ + struct bfa_sgpg_s *sgpg; /* first SG page */ + struct list_head sgpg_q; /* allocated SG pages */ + struct bfa_cb_qe_s hcb_qe; /* bfa callback qelem */ + bfa_cb_cbfn_t io_cbfn; /* IO completion handler */ + struct bfa_ioim_sp_s *iosp; /* slow-path IO handling */ +}; + +struct bfa_ioim_sp_s { + struct bfi_msg_s comp_rspmsg; /* IO comp f/w response */ + u8 *snsinfo; /* sense info for this IO */ + struct bfa_sgpg_wqe_s sgpg_wqe; /* waitq elem for sgpg */ + struct bfa_reqq_wait_s reqq_wait; /* to wait for room in reqq */ + bfa_boolean_t abort_explicit; /* aborted by OS */ + struct bfa_tskim_s *tskim; /* Relevant TM cmd */ +}; + +/** + * BFA Task management command (initiator mode) + */ +struct bfa_tskim_s { + struct list_head qe; + bfa_sm_t sm; + struct bfa_s *bfa; /* BFA module */ + struct bfa_fcpim_mod_s *fcpim; /* parent fcpim module */ + struct bfa_itnim_s *itnim; /* i-t-n nexus for this IO */ + struct bfad_tskim_s *dtsk; /* driver task mgmt cmnd */ + bfa_boolean_t notify; /* notify itnim on TM comp */ + lun_t lun; /* lun if applicable */ + enum fcp_tm_cmnd tm_cmnd; /* task management command */ + u16 tsk_tag; /* FWI IO tag */ + u8 tsecs; /* timeout in seconds */ + struct bfa_reqq_wait_s reqq_wait; /* to wait for room in reqq */ + struct
list_head io_q; /* queue of affected IOs */ + struct bfa_wc_s wc; /* waiting counter */ + struct bfa_cb_qe_s hcb_qe; /* bfa callback qelem */ + enum bfi_tskim_status tsk_status; /* TM status */ +}; + +/** + * BFA i-t-n (initiator mode) + */ +struct bfa_itnim_s { + struct list_head qe; /* queue element */ + bfa_sm_t sm; /* i-t-n im BFA state machine */ + struct bfa_s *bfa; /* bfa instance */ + struct bfa_rport_s *rport; /* bfa rport */ + void *ditn; /* driver i-t-n structure */ + struct bfi_mhdr_s mhdr; /* pre-built mhdr */ + u8 msg_no; /* itnim/rport firmware handle */ + u8 reqq; /* CQ for requests */ + struct bfa_cb_qe_s hcb_qe; /* bfa callback qelem */ + struct list_head pending_q; /* queue of pending IO requests*/ + struct list_head io_q; /* queue of active IO requests */ + struct list_head io_cleanup_q; /* IO being cleaned up */ + struct list_head tsk_q; /* queue of active TM commands */ + struct list_head delay_comp_q;/* queue of failed inflight cmds */ + bfa_boolean_t seq_rec; /* SQER supported */ + bfa_boolean_t is_online; /* itnim is ONLINE for IO */ + bfa_boolean_t iotov_active; /* IO TOV timer is active */ + struct bfa_wc_s wc; /* waiting counter */ + struct bfa_timer_s timer; /* pending IO TOV */ + struct bfa_reqq_wait_s reqq_wait; /* to wait for room in reqq */ + struct bfa_fcpim_mod_s *fcpim; /* fcpim module */ + struct bfa_itnim_hal_stats_s stats; +}; + +#define bfa_itnim_is_online(_itnim) (_itnim)->is_online +#define BFA_FCPIM_MOD(_hal) (&(_hal)->modules.fcpim_mod) +#define BFA_IOIM_FROM_TAG(_fcpim, _iotag) \ + (&fcpim->ioim_arr[_iotag]) +#define BFA_TSKIM_FROM_TAG(_fcpim, _tmtag) \ + (&fcpim->tskim_arr[_tmtag & (fcpim->num_tskim_reqs - 1)]) + +/* + * function prototypes + */ +void bfa_ioim_attach(struct bfa_fcpim_mod_s *fcpim, + struct bfa_meminfo_s *minfo); +void bfa_ioim_detach(struct bfa_fcpim_mod_s *fcpim); +void bfa_ioim_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); +void bfa_ioim_good_comp_isr(struct bfa_s *bfa, + struct bfi_msg_s *msg); +void bfa_ioim_cleanup(struct bfa_ioim_s *ioim); +void bfa_ioim_cleanup_tm(struct bfa_ioim_s *ioim, + struct bfa_tskim_s *tskim); +void bfa_ioim_iocdisable(struct bfa_ioim_s *ioim); +void bfa_ioim_tov(struct bfa_ioim_s *ioim); + +void bfa_tskim_attach(struct bfa_fcpim_mod_s *fcpim, + struct bfa_meminfo_s *minfo); +void bfa_tskim_detach(struct bfa_fcpim_mod_s *fcpim); +void bfa_tskim_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); +void bfa_tskim_iodone(struct bfa_tskim_s *tskim); +void bfa_tskim_iocdisable(struct bfa_tskim_s *tskim); +void bfa_tskim_cleanup(struct bfa_tskim_s *tskim); + +void bfa_itnim_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *km_len, + u32 *dm_len); +void bfa_itnim_attach(struct bfa_fcpim_mod_s *fcpim, + struct bfa_meminfo_s *minfo); +void bfa_itnim_detach(struct bfa_fcpim_mod_s *fcpim); +void bfa_itnim_iocdisable(struct bfa_itnim_s *itnim); +void bfa_itnim_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); +void bfa_itnim_iodone(struct bfa_itnim_s *itnim); +void bfa_itnim_tskdone(struct bfa_itnim_s *itnim); +bfa_boolean_t bfa_itnim_hold_io(struct bfa_itnim_s *itnim); + +#endif /* __BFA_FCPIM_PRIV_H__ */ + diff --git a/drivers/scsi/bfa/bfa_fcport.c b/drivers/scsi/bfa/bfa_fcport.c new file mode 100644 index 000000000000..992435987deb --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcport.c @@ -0,0 +1,1671 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
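One detail of the lookup macros above deserves a note: BFA_TSKIM_FROM_TAG masks the tag with (num_tskim_reqs - 1), which only behaves like a modulo when the count is a power of two -- as the min/max bounds (4/512) and the DEF_CFG_NUM_TSKIM_REQS default of 128 in bfa_core.c all are. A tiny standalone check of that assumption (values illustrative):

#include <assert.h>

int
main(void)
{
        unsigned int num_tskim_reqs = 128;    /* DEF_CFG_NUM_TSKIM_REQS */
        unsigned int tsk_tag = 0x2185;

        /* mask-as-modulo: valid only because 128 is a power of two */
        assert((tsk_tag & (num_tskim_reqs - 1)) == tsk_tag % num_tskim_reqs);
        /* both sides evaluate to 5 here */
        return 0;
}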
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include +#include +#include +#include +#include +#include + +BFA_TRC_FILE(HAL, PPORT); +BFA_MODULE(pport); + +#define bfa_pport_callback(__pport, __event) do { \ + if ((__pport)->bfa->fcs) { \ + (__pport)->event_cbfn((__pport)->event_cbarg, (__event)); \ + } else { \ + (__pport)->hcb_event = (__event); \ + bfa_cb_queue((__pport)->bfa, &(__pport)->hcb_qe, \ + __bfa_cb_port_event, (__pport)); \ + } \ +} while (0) + +/* + * The port is considered disabled if the corresponding physical port or IOC + * is disabled explicitly + */ +#define BFA_PORT_IS_DISABLED(bfa) \ + ((bfa_pport_is_disabled(bfa) == BFA_TRUE) || \ + (bfa_ioc_is_disabled(&bfa->ioc) == BFA_TRUE)) + +/* + * forward declarations + */ +static bfa_boolean_t bfa_pport_send_enable(struct bfa_pport_s *port); +static bfa_boolean_t bfa_pport_send_disable(struct bfa_pport_s *port); +static void bfa_pport_update_linkinfo(struct bfa_pport_s *pport); +static void bfa_pport_reset_linkinfo(struct bfa_pport_s *pport); +static void bfa_pport_set_wwns(struct bfa_pport_s *port); +static void __bfa_cb_port_event(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_port_stats(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_port_stats_clr(void *cbarg, bfa_boolean_t complete); +static void bfa_port_stats_timeout(void *cbarg); +static void bfa_port_stats_clr_timeout(void *cbarg); + +/** + * bfa_pport_private + */ + +/** + * BFA port state machine events + */ +enum bfa_pport_sm_event { + BFA_PPORT_SM_START = 1, /* start port state machine */ + BFA_PPORT_SM_STOP = 2, /* stop port state machine */ + BFA_PPORT_SM_ENABLE = 3, /* enable port */ + BFA_PPORT_SM_DISABLE = 4, /* disable port state machine */ + BFA_PPORT_SM_FWRSP = 5, /* firmware enable/disable rsp */ + BFA_PPORT_SM_LINKUP = 6, /* firmware linkup event */ + BFA_PPORT_SM_LINKDOWN = 7, /* firmware linkdown event */ + BFA_PPORT_SM_QRESUME = 8, /* CQ space available */ + BFA_PPORT_SM_HWFAIL = 9, /* IOC h/w failure */ +}; + +static void bfa_pport_sm_uninit(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_enabling_qwait(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_enabling(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_linkdown(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_linkup(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_disabling(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_disabling_qwait(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_disabled(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_stopped(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_iocdown(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); +static void bfa_pport_sm_iocfail(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event); + +static struct bfa_sm_table_s hal_pport_sm_table[] = { +
{BFA_SM(bfa_pport_sm_uninit), BFA_PPORT_ST_UNINIT},
+ {BFA_SM(bfa_pport_sm_enabling_qwait), BFA_PPORT_ST_ENABLING_QWAIT},
+ {BFA_SM(bfa_pport_sm_enabling), BFA_PPORT_ST_ENABLING},
+ {BFA_SM(bfa_pport_sm_linkdown), BFA_PPORT_ST_LINKDOWN},
+ {BFA_SM(bfa_pport_sm_linkup), BFA_PPORT_ST_LINKUP},
+ {BFA_SM(bfa_pport_sm_disabling_qwait),
+ BFA_PPORT_ST_DISABLING_QWAIT},
+ {BFA_SM(bfa_pport_sm_disabling), BFA_PPORT_ST_DISABLING},
+ {BFA_SM(bfa_pport_sm_disabled), BFA_PPORT_ST_DISABLED},
+ {BFA_SM(bfa_pport_sm_stopped), BFA_PPORT_ST_STOPPED},
+ {BFA_SM(bfa_pport_sm_iocdown), BFA_PPORT_ST_IOCDOWN},
+ {BFA_SM(bfa_pport_sm_iocfail), BFA_PPORT_ST_IOCDOWN},
+};
+
+static void
+bfa_pport_aen_post(struct bfa_pport_s *pport, enum bfa_port_aen_event event)
+{
+ union bfa_aen_data_u aen_data;
+ struct bfa_log_mod_s *logmod = pport->bfa->logm;
+ wwn_t pwwn = pport->pwwn;
+ char pwwn_ptr[BFA_STRING_32];
+ struct bfa_ioc_attr_s ioc_attr;
+
+ wwn2str(pwwn_ptr, pwwn);
+ switch (event) {
+ case BFA_PORT_AEN_ONLINE:
+ bfa_log(logmod, BFA_AEN_PORT_ONLINE, pwwn_ptr);
+ break;
+ case BFA_PORT_AEN_OFFLINE:
+ bfa_log(logmod, BFA_AEN_PORT_OFFLINE, pwwn_ptr);
+ break;
+ case BFA_PORT_AEN_ENABLE:
+ bfa_log(logmod, BFA_AEN_PORT_ENABLE, pwwn_ptr);
+ break;
+ case BFA_PORT_AEN_DISABLE:
+ bfa_log(logmod, BFA_AEN_PORT_DISABLE, pwwn_ptr);
+ break;
+ case BFA_PORT_AEN_DISCONNECT:
+ bfa_log(logmod, BFA_AEN_PORT_DISCONNECT, pwwn_ptr);
+ break;
+ case BFA_PORT_AEN_QOS_NEG:
+ bfa_log(logmod, BFA_AEN_PORT_QOS_NEG, pwwn_ptr);
+ break;
+ default:
+ break;
+ }
+
+ bfa_ioc_get_attr(&pport->bfa->ioc, &ioc_attr);
+ aen_data.port.ioc_type = ioc_attr.ioc_type;
+ aen_data.port.pwwn = pwwn;
+}
+
+static void
+bfa_pport_sm_uninit(struct bfa_pport_s *pport, enum bfa_pport_sm_event event)
+{
+ bfa_trc(pport->bfa, event);
+
+ switch (event) {
+ case BFA_PPORT_SM_START:
+ /**
+ * Start event after IOC is configured and BFA is started.
+ */
+ if (bfa_pport_send_enable(pport))
+ bfa_sm_set_state(pport, bfa_pport_sm_enabling);
+ else
+ bfa_sm_set_state(pport, bfa_pport_sm_enabling_qwait);
+ break;
+
+ case BFA_PPORT_SM_ENABLE:
+ /**
+ * Port is persistently configured to be in enabled state. Do
+ * not change state. Port enabling is done when START event is
+ * received.
+ */
+ break;
+
+ case BFA_PPORT_SM_DISABLE:
+ /**
+ * If a port is persistently configured to be disabled, the
+ * first event will be a port disable request.
+ */
+ bfa_sm_set_state(pport, bfa_pport_sm_disabled);
+ break;
+
+ case BFA_PPORT_SM_HWFAIL:
+ bfa_sm_set_state(pport, bfa_pport_sm_iocdown);
+ break;
+
+ default:
+ bfa_sm_fault(pport->bfa, event);
+ }
+}
+
+static void
+bfa_pport_sm_enabling_qwait(struct bfa_pport_s *pport,
+ enum bfa_pport_sm_event event)
+{
+ bfa_trc(pport->bfa, event);
+
+ switch (event) {
+ case BFA_PPORT_SM_QRESUME:
+ bfa_sm_set_state(pport, bfa_pport_sm_enabling);
+ bfa_pport_send_enable(pport);
+ break;
+
+ case BFA_PPORT_SM_STOP:
+ bfa_reqq_wcancel(&pport->reqq_wait);
+ bfa_sm_set_state(pport, bfa_pport_sm_stopped);
+ break;
+
+ case BFA_PPORT_SM_ENABLE:
+ /**
+ * An enable is already in progress.
+ */
+ break;
+
+ case BFA_PPORT_SM_DISABLE:
+ /**
+ * The enable request was never queued to firmware, so no
+ * disable needs to be sent: cancel the pending queue wait
+ * and mark the port disabled.
+ */ + bfa_sm_set_state(pport, bfa_pport_sm_disabled); + bfa_reqq_wcancel(&pport->reqq_wait); + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_DISABLE, 0, "Port Disable"); + bfa_pport_aen_post(pport, BFA_PORT_AEN_DISABLE); + break; + + case BFA_PPORT_SM_LINKUP: + case BFA_PPORT_SM_LINKDOWN: + /** + * Possible to get link events when doing back-to-back + * enable/disables. + */ + break; + + case BFA_PPORT_SM_HWFAIL: + bfa_reqq_wcancel(&pport->reqq_wait); + bfa_sm_set_state(pport, bfa_pport_sm_iocdown); + break; + + default: + bfa_sm_fault(pport->bfa, event); + } +} + +static void +bfa_pport_sm_enabling(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_FWRSP: + case BFA_PPORT_SM_LINKDOWN: + bfa_sm_set_state(pport, bfa_pport_sm_linkdown); + break; + + case BFA_PPORT_SM_LINKUP: + bfa_pport_update_linkinfo(pport); + bfa_sm_set_state(pport, bfa_pport_sm_linkup); + + bfa_assert(pport->event_cbfn); + bfa_pport_callback(pport, BFA_PPORT_LINKUP); + break; + + case BFA_PPORT_SM_ENABLE: + /** + * Already being enabled. + */ + break; + + case BFA_PPORT_SM_DISABLE: + if (bfa_pport_send_disable(pport)) + bfa_sm_set_state(pport, bfa_pport_sm_disabling); + else + bfa_sm_set_state(pport, bfa_pport_sm_disabling_qwait); + + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_DISABLE, 0, "Port Disable"); + bfa_pport_aen_post(pport, BFA_PORT_AEN_DISABLE); + break; + + case BFA_PPORT_SM_STOP: + bfa_sm_set_state(pport, bfa_pport_sm_stopped); + break; + + case BFA_PPORT_SM_HWFAIL: + bfa_sm_set_state(pport, bfa_pport_sm_iocdown); + break; + + default: + bfa_sm_fault(pport->bfa, event); + } +} + +static void +bfa_pport_sm_linkdown(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_LINKUP: + bfa_pport_update_linkinfo(pport); + bfa_sm_set_state(pport, bfa_pport_sm_linkup); + bfa_assert(pport->event_cbfn); + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_ST_CHANGE, 0, "Port Linkup"); + bfa_pport_callback(pport, BFA_PPORT_LINKUP); + bfa_pport_aen_post(pport, BFA_PORT_AEN_ONLINE); + /** + * If QoS is enabled and it is not online, + * Send a separate event. + */ + if ((pport->cfg.qos_enabled) + && (bfa_os_ntohl(pport->qos_attr.state) != BFA_QOS_ONLINE)) + bfa_pport_aen_post(pport, BFA_PORT_AEN_QOS_NEG); + + break; + + case BFA_PPORT_SM_LINKDOWN: + /** + * Possible to get link down event. + */ + break; + + case BFA_PPORT_SM_ENABLE: + /** + * Already enabled. + */ + break; + + case BFA_PPORT_SM_DISABLE: + if (bfa_pport_send_disable(pport)) + bfa_sm_set_state(pport, bfa_pport_sm_disabling); + else + bfa_sm_set_state(pport, bfa_pport_sm_disabling_qwait); + + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_DISABLE, 0, "Port Disable"); + bfa_pport_aen_post(pport, BFA_PORT_AEN_DISABLE); + break; + + case BFA_PPORT_SM_STOP: + bfa_sm_set_state(pport, bfa_pport_sm_stopped); + break; + + case BFA_PPORT_SM_HWFAIL: + bfa_sm_set_state(pport, bfa_pport_sm_iocdown); + break; + + default: + bfa_sm_fault(pport->bfa, event); + } +} + +static void +bfa_pport_sm_linkup(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_ENABLE: + /** + * Already enabled. 
+ */ + break; + + case BFA_PPORT_SM_DISABLE: + if (bfa_pport_send_disable(pport)) + bfa_sm_set_state(pport, bfa_pport_sm_disabling); + else + bfa_sm_set_state(pport, bfa_pport_sm_disabling_qwait); + + bfa_pport_reset_linkinfo(pport); + bfa_pport_callback(pport, BFA_PPORT_LINKDOWN); + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_DISABLE, 0, "Port Disable"); + bfa_pport_aen_post(pport, BFA_PORT_AEN_OFFLINE); + bfa_pport_aen_post(pport, BFA_PORT_AEN_DISABLE); + break; + + case BFA_PPORT_SM_LINKDOWN: + bfa_sm_set_state(pport, bfa_pport_sm_linkdown); + bfa_pport_reset_linkinfo(pport); + bfa_pport_callback(pport, BFA_PPORT_LINKDOWN); + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_ST_CHANGE, 0, "Port Linkdown"); + if (BFA_PORT_IS_DISABLED(pport->bfa)) { + bfa_pport_aen_post(pport, BFA_PORT_AEN_OFFLINE); + } else { + bfa_pport_aen_post(pport, BFA_PORT_AEN_DISCONNECT); + } + break; + + case BFA_PPORT_SM_STOP: + bfa_sm_set_state(pport, bfa_pport_sm_stopped); + bfa_pport_reset_linkinfo(pport); + if (BFA_PORT_IS_DISABLED(pport->bfa)) { + bfa_pport_aen_post(pport, BFA_PORT_AEN_OFFLINE); + } else { + bfa_pport_aen_post(pport, BFA_PORT_AEN_DISCONNECT); + } + break; + + case BFA_PPORT_SM_HWFAIL: + bfa_sm_set_state(pport, bfa_pport_sm_iocdown); + bfa_pport_reset_linkinfo(pport); + bfa_pport_callback(pport, BFA_PPORT_LINKDOWN); + if (BFA_PORT_IS_DISABLED(pport->bfa)) { + bfa_pport_aen_post(pport, BFA_PORT_AEN_OFFLINE); + } else { + bfa_pport_aen_post(pport, BFA_PORT_AEN_DISCONNECT); + } + break; + + default: + bfa_sm_fault(pport->bfa, event); + } +} + +static void +bfa_pport_sm_disabling_qwait(struct bfa_pport_s *pport, + enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_QRESUME: + bfa_sm_set_state(pport, bfa_pport_sm_disabling); + bfa_pport_send_disable(pport); + break; + + case BFA_PPORT_SM_STOP: + bfa_sm_set_state(pport, bfa_pport_sm_stopped); + bfa_reqq_wcancel(&pport->reqq_wait); + break; + + case BFA_PPORT_SM_DISABLE: + /** + * Already being disabled. + */ + break; + + case BFA_PPORT_SM_LINKUP: + case BFA_PPORT_SM_LINKDOWN: + /** + * Possible to get link events when doing back-to-back + * enable/disables. + */ + break; + + case BFA_PPORT_SM_HWFAIL: + bfa_sm_set_state(pport, bfa_pport_sm_iocfail); + bfa_reqq_wcancel(&pport->reqq_wait); + break; + + default: + bfa_sm_fault(pport->bfa, event); + } +} + +static void +bfa_pport_sm_disabling(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_FWRSP: + bfa_sm_set_state(pport, bfa_pport_sm_disabled); + break; + + case BFA_PPORT_SM_DISABLE: + /** + * Already being disabled. + */ + break; + + case BFA_PPORT_SM_ENABLE: + if (bfa_pport_send_enable(pport)) + bfa_sm_set_state(pport, bfa_pport_sm_enabling); + else + bfa_sm_set_state(pport, bfa_pport_sm_enabling_qwait); + + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_ENABLE, 0, "Port Enable"); + bfa_pport_aen_post(pport, BFA_PORT_AEN_ENABLE); + break; + + case BFA_PPORT_SM_STOP: + bfa_sm_set_state(pport, bfa_pport_sm_stopped); + break; + + case BFA_PPORT_SM_LINKUP: + case BFA_PPORT_SM_LINKDOWN: + /** + * Possible to get link events when doing back-to-back + * enable/disables. 
+ */ + break; + + case BFA_PPORT_SM_HWFAIL: + bfa_sm_set_state(pport, bfa_pport_sm_iocfail); + break; + + default: + bfa_sm_fault(pport->bfa, event); + } +} + +static void +bfa_pport_sm_disabled(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_START: + /** + * Ignore start event for a port that is disabled. + */ + break; + + case BFA_PPORT_SM_STOP: + bfa_sm_set_state(pport, bfa_pport_sm_stopped); + break; + + case BFA_PPORT_SM_ENABLE: + if (bfa_pport_send_enable(pport)) + bfa_sm_set_state(pport, bfa_pport_sm_enabling); + else + bfa_sm_set_state(pport, bfa_pport_sm_enabling_qwait); + + bfa_plog_str(pport->bfa->plog, BFA_PL_MID_HAL, + BFA_PL_EID_PORT_ENABLE, 0, "Port Enable"); + bfa_pport_aen_post(pport, BFA_PORT_AEN_ENABLE); + break; + + case BFA_PPORT_SM_DISABLE: + /** + * Already disabled. + */ + break; + + case BFA_PPORT_SM_HWFAIL: + bfa_sm_set_state(pport, bfa_pport_sm_iocfail); + break; + + default: + bfa_sm_fault(pport->bfa, event); + } +} + +static void +bfa_pport_sm_stopped(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_START: + if (bfa_pport_send_enable(pport)) + bfa_sm_set_state(pport, bfa_pport_sm_enabling); + else + bfa_sm_set_state(pport, bfa_pport_sm_enabling_qwait); + break; + + default: + /** + * Ignore all other events. + */ + ; + } +} + +/** + * Port is enabled. IOC is down/failed. + */ +static void +bfa_pport_sm_iocdown(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_START: + if (bfa_pport_send_enable(pport)) + bfa_sm_set_state(pport, bfa_pport_sm_enabling); + else + bfa_sm_set_state(pport, bfa_pport_sm_enabling_qwait); + break; + + default: + /** + * Ignore all events. + */ + ; + } +} + +/** + * Port is disabled. IOC is down/failed. + */ +static void +bfa_pport_sm_iocfail(struct bfa_pport_s *pport, enum bfa_pport_sm_event event) +{ + bfa_trc(pport->bfa, event); + + switch (event) { + case BFA_PPORT_SM_START: + bfa_sm_set_state(pport, bfa_pport_sm_disabled); + break; + + case BFA_PPORT_SM_ENABLE: + bfa_sm_set_state(pport, bfa_pport_sm_iocdown); + break; + + default: + /** + * Ignore all events. + */ + ; + } +} + + + +/** + * bfa_pport_private + */ + +static void +__bfa_cb_port_event(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_pport_s *pport = cbarg; + + if (complete) + pport->event_cbfn(pport->event_cbarg, pport->hcb_event); +} + +#define PPORT_STATS_DMA_SZ (BFA_ROUNDUP(sizeof(union bfa_pport_stats_u), \ + BFA_CACHELINE_SZ)) + +static void +bfa_pport_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *ndm_len, + u32 *dm_len) +{ + *dm_len += PPORT_STATS_DMA_SZ; +} + +static void +bfa_pport_qresume(void *cbarg) +{ + struct bfa_pport_s *port = cbarg; + + bfa_sm_send_event(port, BFA_PPORT_SM_QRESUME); +} + +static void +bfa_pport_mem_claim(struct bfa_pport_s *pport, struct bfa_meminfo_s *meminfo) +{ + u8 *dm_kva; + u64 dm_pa; + + dm_kva = bfa_meminfo_dma_virt(meminfo); + dm_pa = bfa_meminfo_dma_phys(meminfo); + + pport->stats_kva = dm_kva; + pport->stats_pa = dm_pa; + pport->stats = (union bfa_pport_stats_u *)dm_kva; + + dm_kva += PPORT_STATS_DMA_SZ; + dm_pa += PPORT_STATS_DMA_SZ; + + bfa_meminfo_dma_virt(meminfo) = dm_kva; + bfa_meminfo_dma_phys(meminfo) = dm_pa; +} + +/** + * Memory initialization. 
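+ * bfa_pport_attach() claims the DMA area reserved above for port
+ * statistics and seeds the default configuration (P2P topology, auto
+ * speed) before putting the state machine in uninit.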
+ */ +static void +bfa_pport_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + struct bfa_pport_cfg_s *port_cfg = &pport->cfg; + + bfa_os_memset(pport, 0, sizeof(struct bfa_pport_s)); + pport->bfa = bfa; + + bfa_pport_mem_claim(pport, meminfo); + + bfa_sm_set_state(pport, bfa_pport_sm_uninit); + + /** + * initialize and set default configuration + */ + port_cfg->topology = BFA_PPORT_TOPOLOGY_P2P; + port_cfg->speed = BFA_PPORT_SPEED_AUTO; + port_cfg->trunked = BFA_FALSE; + port_cfg->maxfrsize = 0; + + port_cfg->trl_def_speed = BFA_PPORT_SPEED_1GBPS; + + bfa_reqq_winit(&pport->reqq_wait, bfa_pport_qresume, pport); +} + +static void +bfa_pport_initdone(struct bfa_s *bfa) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + /** + * Initialize port attributes from IOC hardware data. + */ + bfa_pport_set_wwns(pport); + if (pport->cfg.maxfrsize == 0) + pport->cfg.maxfrsize = bfa_ioc_maxfrsize(&bfa->ioc); + pport->cfg.rx_bbcredit = bfa_ioc_rx_bbcredit(&bfa->ioc); + pport->speed_sup = bfa_ioc_speed_sup(&bfa->ioc); + + bfa_assert(pport->cfg.maxfrsize); + bfa_assert(pport->cfg.rx_bbcredit); + bfa_assert(pport->speed_sup); +} + +static void +bfa_pport_detach(struct bfa_s *bfa) +{ +} + +/** + * Called when IOC is ready. + */ +static void +bfa_pport_start(struct bfa_s *bfa) +{ + bfa_sm_send_event(BFA_PORT_MOD(bfa), BFA_PPORT_SM_START); +} + +/** + * Called before IOC is stopped. + */ +static void +bfa_pport_stop(struct bfa_s *bfa) +{ + bfa_sm_send_event(BFA_PORT_MOD(bfa), BFA_PPORT_SM_STOP); +} + +/** + * Called when IOC failure is detected. + */ +static void +bfa_pport_iocdisable(struct bfa_s *bfa) +{ + bfa_sm_send_event(BFA_PORT_MOD(bfa), BFA_PPORT_SM_HWFAIL); +} + +static void +bfa_pport_update_linkinfo(struct bfa_pport_s *pport) +{ + struct bfi_pport_event_s *pevent = pport->event_arg.i2hmsg.event; + + pport->speed = pevent->link_state.speed; + pport->topology = pevent->link_state.topology; + + if (pport->topology == BFA_PPORT_TOPOLOGY_LOOP) + pport->myalpa = pevent->link_state.tl.loop_info.myalpa; + + /* + * QoS Details + */ + bfa_os_assign(pport->qos_attr, pevent->link_state.qos_attr); + bfa_os_assign(pport->qos_vc_attr, pevent->link_state.qos_vc_attr); + + bfa_trc(pport->bfa, pport->speed); + bfa_trc(pport->bfa, pport->topology); +} + +static void +bfa_pport_reset_linkinfo(struct bfa_pport_s *pport) +{ + pport->speed = BFA_PPORT_SPEED_UNKNOWN; + pport->topology = BFA_PPORT_TOPOLOGY_NONE; +} + +/** + * Send port enable message to firmware. + */ +static bfa_boolean_t +bfa_pport_send_enable(struct bfa_pport_s *port) +{ + struct bfi_pport_enable_req_s *m; + + /** + * Increment message tag before queue check, so that responses to old + * requests are discarded. 
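+ * The response path relies on this; bfa_pport_isr() acts on a reply
+ * only when its tag matches:
+ *
+ * if (pport->msgtag == i2hmsg.enable_rsp->msgtag)
+ * bfa_sm_send_event(pport, BFA_PPORT_SM_FWRSP);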
+ */ + port->msgtag++; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(port->bfa, BFA_REQQ_PORT); + if (!m) { + bfa_reqq_wait(port->bfa, BFA_REQQ_PORT, &port->reqq_wait); + return BFA_FALSE; + } + + bfi_h2i_set(m->mh, BFI_MC_FC_PORT, BFI_PPORT_H2I_ENABLE_REQ, + bfa_lpuid(port->bfa)); + m->nwwn = port->nwwn; + m->pwwn = port->pwwn; + m->port_cfg = port->cfg; + m->msgtag = port->msgtag; + m->port_cfg.maxfrsize = bfa_os_htons(port->cfg.maxfrsize); + bfa_dma_be_addr_set(m->stats_dma_addr, port->stats_pa); + bfa_trc(port->bfa, m->stats_dma_addr.a32.addr_lo); + bfa_trc(port->bfa, m->stats_dma_addr.a32.addr_hi); + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(port->bfa, BFA_REQQ_PORT); + return BFA_TRUE; +} + +/** + * Send port disable message to firmware. + */ +static bfa_boolean_t +bfa_pport_send_disable(struct bfa_pport_s *port) +{ + bfi_pport_disable_req_t *m; + + /** + * Increment message tag before queue check, so that responses to old + * requests are discarded. + */ + port->msgtag++; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(port->bfa, BFA_REQQ_PORT); + if (!m) { + bfa_reqq_wait(port->bfa, BFA_REQQ_PORT, &port->reqq_wait); + return BFA_FALSE; + } + + bfi_h2i_set(m->mh, BFI_MC_FC_PORT, BFI_PPORT_H2I_DISABLE_REQ, + bfa_lpuid(port->bfa)); + m->msgtag = port->msgtag; + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(port->bfa, BFA_REQQ_PORT); + + return BFA_TRUE; +} + +static void +bfa_pport_set_wwns(struct bfa_pport_s *port) +{ + port->pwwn = bfa_ioc_get_pwwn(&port->bfa->ioc); + port->nwwn = bfa_ioc_get_nwwn(&port->bfa->ioc); + + bfa_trc(port->bfa, port->pwwn); + bfa_trc(port->bfa, port->nwwn); +} + +static void +bfa_port_send_txcredit(void *port_cbarg) +{ + + struct bfa_pport_s *port = port_cbarg; + struct bfi_pport_set_svc_params_req_s *m; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(port->bfa, BFA_REQQ_PORT); + if (!m) { + bfa_trc(port->bfa, port->cfg.tx_bbcredit); + return; + } + + bfi_h2i_set(m->mh, BFI_MC_FC_PORT, BFI_PPORT_H2I_SET_SVC_PARAMS_REQ, + bfa_lpuid(port->bfa)); + m->tx_bbcredit = bfa_os_htons((u16) port->cfg.tx_bbcredit); + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(port->bfa, BFA_REQQ_PORT); +} + + + +/** + * bfa_pport_public + */ + +/** + * Firmware message handler. 
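+ * Dispatches on mhdr.msg_id: enable/disable responses become FWRSP
+ * state machine events, link state events map to LINKUP/LINKDOWN, and
+ * stats responses complete the pending request unless the guard timer
+ * has already popped.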
+ */ +void +bfa_pport_isr(struct bfa_s *bfa, struct bfi_msg_s *msg) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + union bfi_pport_i2h_msg_u i2hmsg; + + i2hmsg.msg = msg; + pport->event_arg.i2hmsg = i2hmsg; + + switch (msg->mhdr.msg_id) { + case BFI_PPORT_I2H_ENABLE_RSP: + if (pport->msgtag == i2hmsg.enable_rsp->msgtag) + bfa_sm_send_event(pport, BFA_PPORT_SM_FWRSP); + break; + + case BFI_PPORT_I2H_DISABLE_RSP: + if (pport->msgtag == i2hmsg.enable_rsp->msgtag) + bfa_sm_send_event(pport, BFA_PPORT_SM_FWRSP); + break; + + case BFI_PPORT_I2H_EVENT: + switch (i2hmsg.event->link_state.linkstate) { + case BFA_PPORT_LINKUP: + bfa_sm_send_event(pport, BFA_PPORT_SM_LINKUP); + break; + case BFA_PPORT_LINKDOWN: + bfa_sm_send_event(pport, BFA_PPORT_SM_LINKDOWN); + break; + case BFA_PPORT_TRUNK_LINKDOWN: + /** todo: event notification */ + break; + } + break; + + case BFI_PPORT_I2H_GET_STATS_RSP: + case BFI_PPORT_I2H_GET_QOS_STATS_RSP: + /* + * check for timer pop before processing the rsp + */ + if (pport->stats_busy == BFA_FALSE + || pport->stats_status == BFA_STATUS_ETIMER) + break; + + bfa_timer_stop(&pport->timer); + pport->stats_status = i2hmsg.getstats_rsp->status; + bfa_cb_queue(pport->bfa, &pport->hcb_qe, __bfa_cb_port_stats, + pport); + break; + case BFI_PPORT_I2H_CLEAR_STATS_RSP: + case BFI_PPORT_I2H_CLEAR_QOS_STATS_RSP: + /* + * check for timer pop before processing the rsp + */ + if (pport->stats_busy == BFA_FALSE + || pport->stats_status == BFA_STATUS_ETIMER) + break; + + bfa_timer_stop(&pport->timer); + pport->stats_status = BFA_STATUS_OK; + bfa_cb_queue(pport->bfa, &pport->hcb_qe, + __bfa_cb_port_stats_clr, pport); + break; + + default: + bfa_assert(0); + } +} + + + +/** + * bfa_pport_api + */ + +/** + * Registered callback for port events. + */ +void +bfa_pport_event_register(struct bfa_s *bfa, + void (*cbfn) (void *cbarg, bfa_pport_event_t event), + void *cbarg) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + pport->event_cbfn = cbfn; + pport->event_cbarg = cbarg; +} + +bfa_status_t +bfa_pport_enable(struct bfa_s *bfa) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + if (pport->diag_busy) + return (BFA_STATUS_DIAG_BUSY); + else if (bfa_sm_cmp_state + (BFA_PORT_MOD(bfa), bfa_pport_sm_disabling_qwait)) + return (BFA_STATUS_DEVBUSY); + + bfa_sm_send_event(BFA_PORT_MOD(bfa), BFA_PPORT_SM_ENABLE); + return BFA_STATUS_OK; +} + +bfa_status_t +bfa_pport_disable(struct bfa_s *bfa) +{ + bfa_sm_send_event(BFA_PORT_MOD(bfa), BFA_PPORT_SM_DISABLE); + return BFA_STATUS_OK; +} + +/** + * Configure port speed. + */ +bfa_status_t +bfa_pport_cfg_speed(struct bfa_s *bfa, enum bfa_pport_speed speed) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, speed); + + if ((speed != BFA_PPORT_SPEED_AUTO) && (speed > pport->speed_sup)) { + bfa_trc(bfa, pport->speed_sup); + return BFA_STATUS_UNSUPP_SPEED; + } + + pport->cfg.speed = speed; + + return (BFA_STATUS_OK); +} + +/** + * Get current speed. + */ +enum bfa_pport_speed +bfa_pport_get_speed(struct bfa_s *bfa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + return port->speed; +} + +/** + * Configure port topology. 
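+ * Only P2P, LOOP and AUTO are accepted; any other value is rejected
+ * with BFA_STATUS_EINVAL. The setting takes effect when the next
+ * enable request snapshots port->cfg. A caller-side sketch (purely
+ * illustrative, not part of this patch):
+ *
+ * if (bfa_pport_cfg_topology(bfa, BFA_PPORT_TOPOLOGY_AUTO) !=
+ * BFA_STATUS_OK)
+ * bfa_trc(bfa, BFA_STATUS_EINVAL);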
+ */ +bfa_status_t +bfa_pport_cfg_topology(struct bfa_s *bfa, enum bfa_pport_topology topology) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, topology); + bfa_trc(bfa, pport->cfg.topology); + + switch (topology) { + case BFA_PPORT_TOPOLOGY_P2P: + case BFA_PPORT_TOPOLOGY_LOOP: + case BFA_PPORT_TOPOLOGY_AUTO: + break; + + default: + return BFA_STATUS_EINVAL; + } + + pport->cfg.topology = topology; + return (BFA_STATUS_OK); +} + +/** + * Get current topology. + */ +enum bfa_pport_topology +bfa_pport_get_topology(struct bfa_s *bfa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + return port->topology; +} + +bfa_status_t +bfa_pport_cfg_hardalpa(struct bfa_s *bfa, u8 alpa) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, alpa); + bfa_trc(bfa, pport->cfg.cfg_hardalpa); + bfa_trc(bfa, pport->cfg.hardalpa); + + pport->cfg.cfg_hardalpa = BFA_TRUE; + pport->cfg.hardalpa = alpa; + + return (BFA_STATUS_OK); +} + +bfa_status_t +bfa_pport_clr_hardalpa(struct bfa_s *bfa) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, pport->cfg.cfg_hardalpa); + bfa_trc(bfa, pport->cfg.hardalpa); + + pport->cfg.cfg_hardalpa = BFA_FALSE; + return (BFA_STATUS_OK); +} + +bfa_boolean_t +bfa_pport_get_hardalpa(struct bfa_s *bfa, u8 *alpa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + *alpa = port->cfg.hardalpa; + return port->cfg.cfg_hardalpa; +} + +u8 +bfa_pport_get_myalpa(struct bfa_s *bfa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + return port->myalpa; +} + +bfa_status_t +bfa_pport_cfg_maxfrsize(struct bfa_s *bfa, u16 maxfrsize) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, maxfrsize); + bfa_trc(bfa, pport->cfg.maxfrsize); + + /* + * with in range + */ + if ((maxfrsize > FC_MAX_PDUSZ) || (maxfrsize < FC_MIN_PDUSZ)) + return (BFA_STATUS_INVLD_DFSZ); + + /* + * power of 2, if not the max frame size of 2112 + */ + if ((maxfrsize != FC_MAX_PDUSZ) && (maxfrsize & (maxfrsize - 1))) + return (BFA_STATUS_INVLD_DFSZ); + + pport->cfg.maxfrsize = maxfrsize; + return (BFA_STATUS_OK); +} + +u16 +bfa_pport_get_maxfrsize(struct bfa_s *bfa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + return port->cfg.maxfrsize; +} + +u32 +bfa_pport_mypid(struct bfa_s *bfa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + return port->mypid; +} + +u8 +bfa_pport_get_rx_bbcredit(struct bfa_s *bfa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + return port->cfg.rx_bbcredit; +} + +void +bfa_pport_set_tx_bbcredit(struct bfa_s *bfa, u16 tx_bbcredit) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + port->cfg.tx_bbcredit = (u8) tx_bbcredit; + bfa_port_send_txcredit(port); +} + +/** + * Get port attributes. 
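+ * bfa_pport_get_attr() below reports port_state from the state machine
+ * via hal_pport_sm_table, overriding it with IOCDIS or FWMISMATCH when
+ * the IOC is disabled or running mismatched firmware.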
+ */ + +wwn_t +bfa_pport_get_wwn(struct bfa_s *bfa, bfa_boolean_t node) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + if (node) + return pport->nwwn; + else + return pport->pwwn; +} + +void +bfa_pport_get_attr(struct bfa_s *bfa, struct bfa_pport_attr_s *attr) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_os_memset(attr, 0, sizeof(struct bfa_pport_attr_s)); + + attr->nwwn = pport->nwwn; + attr->pwwn = pport->pwwn; + + bfa_os_memcpy(&attr->pport_cfg, &pport->cfg, + sizeof(struct bfa_pport_cfg_s)); + /* + * speed attributes + */ + attr->pport_cfg.speed = pport->cfg.speed; + attr->speed_supported = pport->speed_sup; + attr->speed = pport->speed; + attr->cos_supported = FC_CLASS_3; + + /* + * topology attributes + */ + attr->pport_cfg.topology = pport->cfg.topology; + attr->topology = pport->topology; + + /* + * beacon attributes + */ + attr->beacon = pport->beacon; + attr->link_e2e_beacon = pport->link_e2e_beacon; + attr->plog_enabled = bfa_plog_get_setting(pport->bfa->plog); + + attr->pport_cfg.path_tov = bfa_fcpim_path_tov_get(bfa); + attr->pport_cfg.q_depth = bfa_fcpim_qdepth_get(bfa); + attr->port_state = bfa_sm_to_state(hal_pport_sm_table, pport->sm); + if (bfa_ioc_is_disabled(&pport->bfa->ioc)) + attr->port_state = BFA_PPORT_ST_IOCDIS; + else if (bfa_ioc_fw_mismatch(&pport->bfa->ioc)) + attr->port_state = BFA_PPORT_ST_FWMISMATCH; +} + +static void +bfa_port_stats_query(void *cbarg) +{ + struct bfa_pport_s *port = (struct bfa_pport_s *)cbarg; + bfi_pport_get_stats_req_t *msg; + + msg = bfa_reqq_next(port->bfa, BFA_REQQ_PORT); + + if (!msg) { + port->stats_qfull = BFA_TRUE; + bfa_reqq_winit(&port->stats_reqq_wait, bfa_port_stats_query, + port); + bfa_reqq_wait(port->bfa, BFA_REQQ_PORT, &port->stats_reqq_wait); + return; + } + port->stats_qfull = BFA_FALSE; + + bfa_os_memset(msg, 0, sizeof(bfi_pport_get_stats_req_t)); + bfi_h2i_set(msg->mh, BFI_MC_FC_PORT, BFI_PPORT_H2I_GET_STATS_REQ, + bfa_lpuid(port->bfa)); + bfa_reqq_produce(port->bfa, BFA_REQQ_PORT); + + return; +} + +static void +bfa_port_stats_clear(void *cbarg) +{ + struct bfa_pport_s *port = (struct bfa_pport_s *)cbarg; + bfi_pport_clear_stats_req_t *msg; + + msg = bfa_reqq_next(port->bfa, BFA_REQQ_PORT); + + if (!msg) { + port->stats_qfull = BFA_TRUE; + bfa_reqq_winit(&port->stats_reqq_wait, bfa_port_stats_clear, + port); + bfa_reqq_wait(port->bfa, BFA_REQQ_PORT, &port->stats_reqq_wait); + return; + } + port->stats_qfull = BFA_FALSE; + + bfa_os_memset(msg, 0, sizeof(bfi_pport_clear_stats_req_t)); + bfi_h2i_set(msg->mh, BFI_MC_FC_PORT, BFI_PPORT_H2I_CLEAR_STATS_REQ, + bfa_lpuid(port->bfa)); + bfa_reqq_produce(port->bfa, BFA_REQQ_PORT); + return; +} + +static void +bfa_port_qos_stats_clear(void *cbarg) +{ + struct bfa_pport_s *port = (struct bfa_pport_s *)cbarg; + bfi_pport_clear_qos_stats_req_t *msg; + + msg = bfa_reqq_next(port->bfa, BFA_REQQ_PORT); + + if (!msg) { + port->stats_qfull = BFA_TRUE; + bfa_reqq_winit(&port->stats_reqq_wait, bfa_port_qos_stats_clear, + port); + bfa_reqq_wait(port->bfa, BFA_REQQ_PORT, &port->stats_reqq_wait); + return; + } + port->stats_qfull = BFA_FALSE; + + bfa_os_memset(msg, 0, sizeof(bfi_pport_clear_qos_stats_req_t)); + bfi_h2i_set(msg->mh, BFI_MC_FC_PORT, BFI_PPORT_H2I_CLEAR_QOS_STATS_REQ, + bfa_lpuid(port->bfa)); + bfa_reqq_produce(port->bfa, BFA_REQQ_PORT); + return; +} + +static void +bfa_pport_stats_swap(union bfa_pport_stats_u *d, union bfa_pport_stats_u *s) +{ + u32 *dip = (u32 *) d; + u32 *sip = (u32 *) s; + int i; + + /* + * Do 64 bit fields swap first + */ + for (i = 
0; + i < + ((sizeof(union bfa_pport_stats_u) - + sizeof(struct bfa_qos_stats_s)) / sizeof(u32)); i = i + 2) { +#ifdef __BIGENDIAN + dip[i] = bfa_os_ntohl(sip[i]); + dip[i + 1] = bfa_os_ntohl(sip[i + 1]); +#else + dip[i] = bfa_os_ntohl(sip[i + 1]); + dip[i + 1] = bfa_os_ntohl(sip[i]); +#endif + } + + /* + * Now swap the 32 bit fields + */ + for (; i < (sizeof(union bfa_pport_stats_u) / sizeof(u32)); ++i) + dip[i] = bfa_os_ntohl(sip[i]); +} + +static void +__bfa_cb_port_stats_clr(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_pport_s *port = cbarg; + + if (complete) { + port->stats_cbfn(port->stats_cbarg, port->stats_status); + } else { + port->stats_busy = BFA_FALSE; + port->stats_status = BFA_STATUS_OK; + } +} + +static void +bfa_port_stats_clr_timeout(void *cbarg) +{ + struct bfa_pport_s *port = (struct bfa_pport_s *)cbarg; + + bfa_trc(port->bfa, port->stats_qfull); + + if (port->stats_qfull) { + bfa_reqq_wcancel(&port->stats_reqq_wait); + port->stats_qfull = BFA_FALSE; + } + + port->stats_status = BFA_STATUS_ETIMER; + bfa_cb_queue(port->bfa, &port->hcb_qe, __bfa_cb_port_stats_clr, port); +} + +static void +__bfa_cb_port_stats(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_pport_s *port = cbarg; + + if (complete) { + if (port->stats_status == BFA_STATUS_OK) + bfa_pport_stats_swap(port->stats_ret, port->stats); + port->stats_cbfn(port->stats_cbarg, port->stats_status); + } else { + port->stats_busy = BFA_FALSE; + port->stats_status = BFA_STATUS_OK; + } +} + +static void +bfa_port_stats_timeout(void *cbarg) +{ + struct bfa_pport_s *port = (struct bfa_pport_s *)cbarg; + + bfa_trc(port->bfa, port->stats_qfull); + + if (port->stats_qfull) { + bfa_reqq_wcancel(&port->stats_reqq_wait); + port->stats_qfull = BFA_FALSE; + } + + port->stats_status = BFA_STATUS_ETIMER; + bfa_cb_queue(port->bfa, &port->hcb_qe, __bfa_cb_port_stats, port); +} + +#define BFA_PORT_STATS_TOV 1000 + +/** + * Fetch port attributes. 
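+ * (More precisely: this fetches the DMAed port statistics; the request
+ * is queued to firmware and guarded by a BFA_PORT_STATS_TOV millisecond
+ * timer.) A hypothetical caller, with illustrative names:
+ *
+ * static void my_stats_cb(void *drv, bfa_status_t status) { ... }
+ *
+ * bfa_pport_get_stats(bfa, &stats_buf, my_stats_cb, drv);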
+ */ +bfa_status_t +bfa_pport_get_stats(struct bfa_s *bfa, union bfa_pport_stats_u *stats, + bfa_cb_pport_t cbfn, void *cbarg) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + if (port->stats_busy) { + bfa_trc(bfa, port->stats_busy); + return (BFA_STATUS_DEVBUSY); + } + + port->stats_busy = BFA_TRUE; + port->stats_ret = stats; + port->stats_cbfn = cbfn; + port->stats_cbarg = cbarg; + + bfa_port_stats_query(port); + + bfa_timer_start(bfa, &port->timer, bfa_port_stats_timeout, port, + BFA_PORT_STATS_TOV); + return (BFA_STATUS_OK); +} + +bfa_status_t +bfa_pport_clear_stats(struct bfa_s *bfa, bfa_cb_pport_t cbfn, void *cbarg) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + if (port->stats_busy) { + bfa_trc(bfa, port->stats_busy); + return (BFA_STATUS_DEVBUSY); + } + + port->stats_busy = BFA_TRUE; + port->stats_cbfn = cbfn; + port->stats_cbarg = cbarg; + + bfa_port_stats_clear(port); + + bfa_timer_start(bfa, &port->timer, bfa_port_stats_clr_timeout, port, + BFA_PORT_STATS_TOV); + return (BFA_STATUS_OK); +} + +bfa_status_t +bfa_pport_trunk_enable(struct bfa_s *bfa, u8 bitmap) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, bitmap); + bfa_trc(bfa, pport->cfg.trunked); + bfa_trc(bfa, pport->cfg.trunk_ports); + + if (!bitmap || (bitmap & (bitmap - 1))) + return BFA_STATUS_EINVAL; + + pport->cfg.trunked = BFA_TRUE; + pport->cfg.trunk_ports = bitmap; + + return BFA_STATUS_OK; +} + +void +bfa_pport_qos_get_attr(struct bfa_s *bfa, struct bfa_qos_attr_s *qos_attr) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + qos_attr->state = bfa_os_ntohl(pport->qos_attr.state); + qos_attr->total_bb_cr = bfa_os_ntohl(pport->qos_attr.total_bb_cr); +} + +void +bfa_pport_qos_get_vc_attr(struct bfa_s *bfa, + struct bfa_qos_vc_attr_s *qos_vc_attr) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + struct bfa_qos_vc_attr_s *bfa_vc_attr = &pport->qos_vc_attr; + u32 i = 0; + + qos_vc_attr->total_vc_count = bfa_os_ntohs(bfa_vc_attr->total_vc_count); + qos_vc_attr->shared_credit = bfa_os_ntohs(bfa_vc_attr->shared_credit); + qos_vc_attr->elp_opmode_flags = + bfa_os_ntohl(bfa_vc_attr->elp_opmode_flags); + + /* + * Individual VC info + */ + while (i < qos_vc_attr->total_vc_count) { + qos_vc_attr->vc_info[i].vc_credit = + bfa_vc_attr->vc_info[i].vc_credit; + qos_vc_attr->vc_info[i].borrow_credit = + bfa_vc_attr->vc_info[i].borrow_credit; + qos_vc_attr->vc_info[i].priority = + bfa_vc_attr->vc_info[i].priority; + ++i; + } +} + +/** + * Fetch QoS Stats. + */ +bfa_status_t +bfa_pport_get_qos_stats(struct bfa_s *bfa, union bfa_pport_stats_u *stats, + bfa_cb_pport_t cbfn, void *cbarg) +{ + /* + * QoS stats is embedded in port stats + */ + return (bfa_pport_get_stats(bfa, stats, cbfn, cbarg)); +} + +bfa_status_t +bfa_pport_clear_qos_stats(struct bfa_s *bfa, bfa_cb_pport_t cbfn, void *cbarg) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + if (port->stats_busy) { + bfa_trc(bfa, port->stats_busy); + return (BFA_STATUS_DEVBUSY); + } + + port->stats_busy = BFA_TRUE; + port->stats_cbfn = cbfn; + port->stats_cbarg = cbarg; + + bfa_port_qos_stats_clear(port); + + bfa_timer_start(bfa, &port->timer, bfa_port_stats_clr_timeout, port, + BFA_PORT_STATS_TOV); + return (BFA_STATUS_OK); +} + +/** + * Fetch port attributes. 
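+ * (The routines below actually cover trunking control and port state
+ * queries rather than an attribute fetch.)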
+ */ +bfa_status_t +bfa_pport_trunk_disable(struct bfa_s *bfa) +{ + return (BFA_STATUS_OK); +} + +bfa_boolean_t +bfa_pport_trunk_query(struct bfa_s *bfa, u32 *bitmap) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + *bitmap = port->cfg.trunk_ports; + return port->cfg.trunked; +} + +bfa_boolean_t +bfa_pport_is_disabled(struct bfa_s *bfa) +{ + struct bfa_pport_s *port = BFA_PORT_MOD(bfa); + + return (bfa_sm_to_state(hal_pport_sm_table, port->sm) == + BFA_PPORT_ST_DISABLED); + +} + +bfa_boolean_t +bfa_pport_is_ratelim(struct bfa_s *bfa) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + +return (pport->cfg.ratelimit ? BFA_TRUE : BFA_FALSE); + +} + +void +bfa_pport_cfg_qos(struct bfa_s *bfa, bfa_boolean_t on_off) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, on_off); + bfa_trc(bfa, pport->cfg.qos_enabled); + + pport->cfg.qos_enabled = on_off; +} + +void +bfa_pport_cfg_ratelim(struct bfa_s *bfa, bfa_boolean_t on_off) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, on_off); + bfa_trc(bfa, pport->cfg.ratelimit); + + pport->cfg.ratelimit = on_off; + if (pport->cfg.trl_def_speed == BFA_PPORT_SPEED_UNKNOWN) + pport->cfg.trl_def_speed = BFA_PPORT_SPEED_1GBPS; +} + +/** + * Configure default minimum ratelim speed + */ +bfa_status_t +bfa_pport_cfg_ratelim_speed(struct bfa_s *bfa, enum bfa_pport_speed speed) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, speed); + + /* + * Auto and speeds greater than the supported speed, are invalid + */ + if ((speed == BFA_PPORT_SPEED_AUTO) || (speed > pport->speed_sup)) { + bfa_trc(bfa, pport->speed_sup); + return BFA_STATUS_UNSUPP_SPEED; + } + + pport->cfg.trl_def_speed = speed; + + return (BFA_STATUS_OK); +} + +/** + * Get default minimum ratelim speed + */ +enum bfa_pport_speed +bfa_pport_get_ratelim_speed(struct bfa_s *bfa) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, pport->cfg.trl_def_speed); + return (pport->cfg.trl_def_speed); + +} + +void +bfa_pport_busy(struct bfa_s *bfa, bfa_boolean_t status) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, status); + bfa_trc(bfa, pport->diag_busy); + + pport->diag_busy = status; +} + +void +bfa_pport_beacon(struct bfa_s *bfa, bfa_boolean_t beacon, + bfa_boolean_t link_e2e_beacon) +{ + struct bfa_pport_s *pport = BFA_PORT_MOD(bfa); + + bfa_trc(bfa, beacon); + bfa_trc(bfa, link_e2e_beacon); + bfa_trc(bfa, pport->beacon); + bfa_trc(bfa, pport->link_e2e_beacon); + + pport->beacon = beacon; + pport->link_e2e_beacon = link_e2e_beacon; +} + +bfa_boolean_t +bfa_pport_is_linkup(struct bfa_s *bfa) +{ + return bfa_sm_cmp_state(BFA_PORT_MOD(bfa), bfa_pport_sm_linkup); +} + + diff --git a/drivers/scsi/bfa/bfa_fcs.c b/drivers/scsi/bfa/bfa_fcs.c new file mode 100644 index 000000000000..7cb39a306ea9 --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcs.c @@ -0,0 +1,182 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +/** + * bfa_fcs.c BFA FCS main + */ + +#include +#include "fcs_port.h" +#include "fcs_uf.h" +#include "fcs_vport.h" +#include "fcs_rport.h" +#include "fcs_fabric.h" +#include "fcs_fcpim.h" +#include "fcs_fcptm.h" +#include "fcbuild.h" +#include "fcs.h" +#include "bfad_drv.h" +#include + +/** + * FCS sub-modules + */ +struct bfa_fcs_mod_s { + void (*modinit) (struct bfa_fcs_s *fcs); + void (*modexit) (struct bfa_fcs_s *fcs); +}; + +#define BFA_FCS_MODULE(_mod) { _mod ## _modinit, _mod ## _modexit } + +static struct bfa_fcs_mod_s fcs_modules[] = { + BFA_FCS_MODULE(bfa_fcs_pport), + BFA_FCS_MODULE(bfa_fcs_uf), + BFA_FCS_MODULE(bfa_fcs_fabric), + BFA_FCS_MODULE(bfa_fcs_vport), + BFA_FCS_MODULE(bfa_fcs_rport), + BFA_FCS_MODULE(bfa_fcs_fcpim), +}; + +/** + * fcs_api BFA FCS API + */ + +static void +bfa_fcs_exit_comp(void *fcs_cbarg) +{ + struct bfa_fcs_s *fcs = fcs_cbarg; + struct bfad_s *bfad = fcs->bfad; + + complete(&bfad->comp); +} + + + +/** + * fcs_api BFA FCS API + */ + +/** + * FCS instance initialization. + * + * param[in] fcs FCS instance + * param[in] bfa BFA instance + * param[in] bfad BFA driver instance + * + * return None + */ +void +bfa_fcs_init(struct bfa_fcs_s *fcs, struct bfa_s *bfa, struct bfad_s *bfad, + bfa_boolean_t min_cfg) +{ + int i; + struct bfa_fcs_mod_s *mod; + + fcs->bfa = bfa; + fcs->bfad = bfad; + fcs->min_cfg = min_cfg; + + bfa_attach_fcs(bfa); + fcbuild_init(); + + for (i = 0; i < sizeof(fcs_modules) / sizeof(fcs_modules[0]); i++) { + mod = &fcs_modules[i]; + mod->modinit(fcs); + } +} + +/** + * Start FCS operations. + */ +void +bfa_fcs_start(struct bfa_fcs_s *fcs) +{ + bfa_fcs_fabric_modstart(fcs); +} + +/** + * FCS driver details initialization. + * + * param[in] fcs FCS instance + * param[in] driver_info Driver Details + * + * return None + */ +void +bfa_fcs_driver_info_init(struct bfa_fcs_s *fcs, + struct bfa_fcs_driver_info_s *driver_info) +{ + + fcs->driver_info = *driver_info; + + bfa_fcs_fabric_psymb_init(&fcs->fabric); +} + +/** + * FCS instance cleanup and exit. + * + * param[in] fcs FCS instance + * return None + */ +void +bfa_fcs_exit(struct bfa_fcs_s *fcs) +{ + struct bfa_fcs_mod_s *mod; + int nmods, i; + + bfa_wc_init(&fcs->wc, bfa_fcs_exit_comp, fcs); + + nmods = sizeof(fcs_modules) / sizeof(fcs_modules[0]); + + for (i = 0; i < nmods; i++) { + bfa_wc_up(&fcs->wc); + + mod = &fcs_modules[i]; + mod->modexit(fcs); + } + + bfa_wc_wait(&fcs->wc); +} + + +void +bfa_fcs_trc_init(struct bfa_fcs_s *fcs, struct bfa_trc_mod_s *trcmod) +{ + fcs->trcmod = trcmod; +} + + +void +bfa_fcs_log_init(struct bfa_fcs_s *fcs, struct bfa_log_mod_s *logmod) +{ + fcs->logm = logmod; +} + + +void +bfa_fcs_aen_init(struct bfa_fcs_s *fcs, struct bfa_aen_s *aen) +{ + fcs->aen = aen; +} + +void +bfa_fcs_modexit_comp(struct bfa_fcs_s *fcs) +{ + bfa_wc_down(&fcs->wc); +} + + diff --git a/drivers/scsi/bfa/bfa_fcs_lport.c b/drivers/scsi/bfa/bfa_fcs_lport.c new file mode 100644 index 000000000000..8975ed041dc0 --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcs_lport.c @@ -0,0 +1,940 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcs_port.c BFA FCS port + */ + +#include +#include +#include +#include +#include +#include +#include "fcs.h" +#include "fcs_lport.h" +#include "fcs_vport.h" +#include "fcs_rport.h" +#include "fcs_fcxp.h" +#include "fcs_trcmod.h" +#include "lport_priv.h" +#include + +BFA_TRC_FILE(FCS, PORT); + +/** + * Forward declarations + */ + +static void bfa_fcs_port_aen_post(struct bfa_fcs_port_s *port, + enum bfa_lport_aen_event event); +static void bfa_fcs_port_send_ls_rjt(struct bfa_fcs_port_s *port, + struct fchs_s *rx_fchs, u8 reason_code, + u8 reason_code_expl); +static void bfa_fcs_port_plogi(struct bfa_fcs_port_s *port, + struct fchs_s *rx_fchs, + struct fc_logi_s *plogi); +static void bfa_fcs_port_online_actions(struct bfa_fcs_port_s *port); +static void bfa_fcs_port_offline_actions(struct bfa_fcs_port_s *port); +static void bfa_fcs_port_unknown_init(struct bfa_fcs_port_s *port); +static void bfa_fcs_port_unknown_online(struct bfa_fcs_port_s *port); +static void bfa_fcs_port_unknown_offline(struct bfa_fcs_port_s *port); +static void bfa_fcs_port_deleted(struct bfa_fcs_port_s *port); +static void bfa_fcs_port_echo(struct bfa_fcs_port_s *port, + struct fchs_s *rx_fchs, + struct fc_echo_s *echo, u16 len); +static void bfa_fcs_port_rnid(struct bfa_fcs_port_s *port, + struct fchs_s *rx_fchs, + struct fc_rnid_cmd_s *rnid, u16 len); +static void bfa_fs_port_get_gen_topo_data(struct bfa_fcs_port_s *port, + struct fc_rnid_general_topology_data_s *gen_topo_data); + +static struct { + void (*init) (struct bfa_fcs_port_s *port); + void (*online) (struct bfa_fcs_port_s *port); + void (*offline) (struct bfa_fcs_port_s *port); +} __port_action[] = { + { + bfa_fcs_port_unknown_init, bfa_fcs_port_unknown_online, + bfa_fcs_port_unknown_offline}, { + bfa_fcs_port_fab_init, bfa_fcs_port_fab_online, + bfa_fcs_port_fab_offline}, { + bfa_fcs_port_loop_init, bfa_fcs_port_loop_online, + bfa_fcs_port_loop_offline}, { +bfa_fcs_port_n2n_init, bfa_fcs_port_n2n_online, + bfa_fcs_port_n2n_offline},}; + +/** + * fcs_port_sm FCS logical port state machine + */ + +enum bfa_fcs_port_event { + BFA_FCS_PORT_SM_CREATE = 1, + BFA_FCS_PORT_SM_ONLINE = 2, + BFA_FCS_PORT_SM_OFFLINE = 3, + BFA_FCS_PORT_SM_DELETE = 4, + BFA_FCS_PORT_SM_DELRPORT = 5, +}; + +static void bfa_fcs_port_sm_uninit(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event); +static void bfa_fcs_port_sm_init(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event); +static void bfa_fcs_port_sm_online(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event); +static void bfa_fcs_port_sm_offline(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event); +static void bfa_fcs_port_sm_deleting(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event); + +static void +bfa_fcs_port_sm_uninit(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event) +{ + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case BFA_FCS_PORT_SM_CREATE: + bfa_sm_set_state(port, bfa_fcs_port_sm_init); + break; + + default: + bfa_assert(0); + } +} + +static void 
+bfa_fcs_port_sm_init(struct bfa_fcs_port_s *port, enum bfa_fcs_port_event event) +{ + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case BFA_FCS_PORT_SM_ONLINE: + bfa_sm_set_state(port, bfa_fcs_port_sm_online); + bfa_fcs_port_online_actions(port); + break; + + case BFA_FCS_PORT_SM_DELETE: + bfa_sm_set_state(port, bfa_fcs_port_sm_uninit); + bfa_fcs_port_deleted(port); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_sm_online(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event) +{ + struct bfa_fcs_rport_s *rport; + struct list_head *qe, *qen; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case BFA_FCS_PORT_SM_OFFLINE: + bfa_sm_set_state(port, bfa_fcs_port_sm_offline); + bfa_fcs_port_offline_actions(port); + break; + + case BFA_FCS_PORT_SM_DELETE: + + __port_action[port->fabric->fab_type].offline(port); + + if (port->num_rports == 0) { + bfa_sm_set_state(port, bfa_fcs_port_sm_uninit); + bfa_fcs_port_deleted(port); + } else { + bfa_sm_set_state(port, bfa_fcs_port_sm_deleting); + list_for_each_safe(qe, qen, &port->rport_q) { + rport = (struct bfa_fcs_rport_s *)qe; + bfa_fcs_rport_delete(rport); + } + } + break; + + case BFA_FCS_PORT_SM_DELRPORT: + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_sm_offline(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event) +{ + struct bfa_fcs_rport_s *rport; + struct list_head *qe, *qen; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case BFA_FCS_PORT_SM_ONLINE: + bfa_sm_set_state(port, bfa_fcs_port_sm_online); + bfa_fcs_port_online_actions(port); + break; + + case BFA_FCS_PORT_SM_DELETE: + if (port->num_rports == 0) { + bfa_sm_set_state(port, bfa_fcs_port_sm_uninit); + bfa_fcs_port_deleted(port); + } else { + bfa_sm_set_state(port, bfa_fcs_port_sm_deleting); + list_for_each_safe(qe, qen, &port->rport_q) { + rport = (struct bfa_fcs_rport_s *)qe; + bfa_fcs_rport_delete(rport); + } + } + break; + + case BFA_FCS_PORT_SM_DELRPORT: + case BFA_FCS_PORT_SM_OFFLINE: + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_sm_deleting(struct bfa_fcs_port_s *port, + enum bfa_fcs_port_event event) +{ + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case BFA_FCS_PORT_SM_DELRPORT: + if (port->num_rports == 0) { + bfa_sm_set_state(port, bfa_fcs_port_sm_uninit); + bfa_fcs_port_deleted(port); + } + break; + + default: + bfa_assert(0); + } +} + + + +/** + * fcs_port_pvt + */ + +/** + * Send AEN notification + */ +static void +bfa_fcs_port_aen_post(struct bfa_fcs_port_s *port, + enum bfa_lport_aen_event event) +{ + union bfa_aen_data_u aen_data; + struct bfa_log_mod_s *logmod = port->fcs->logm; + enum bfa_port_role role = port->port_cfg.roles; + wwn_t lpwwn = bfa_fcs_port_get_pwwn(port); + char lpwwn_ptr[BFA_STRING_32]; + char *role_str[BFA_PORT_ROLE_FCP_MAX / 2 + 1] = + { "Initiator", "Target", "IPFC" }; + + wwn2str(lpwwn_ptr, lpwwn); + + bfa_assert(role <= BFA_PORT_ROLE_FCP_MAX); + + switch (event) { + case BFA_LPORT_AEN_ONLINE: + bfa_log(logmod, BFA_AEN_LPORT_ONLINE, lpwwn_ptr, + role_str[role / 2]); + break; + case BFA_LPORT_AEN_OFFLINE: + bfa_log(logmod, BFA_AEN_LPORT_OFFLINE, lpwwn_ptr, + role_str[role / 2]); + break; + case BFA_LPORT_AEN_NEW: + bfa_log(logmod, BFA_AEN_LPORT_NEW, lpwwn_ptr, + role_str[role / 2]); + break; + case BFA_LPORT_AEN_DELETE: + bfa_log(logmod, BFA_AEN_LPORT_DELETE, 
lpwwn_ptr, + role_str[role / 2]); + break; + case BFA_LPORT_AEN_DISCONNECT: + bfa_log(logmod, BFA_AEN_LPORT_DISCONNECT, lpwwn_ptr, + role_str[role / 2]); + break; + default: + break; + } + + aen_data.lport.vf_id = port->fabric->vf_id; + aen_data.lport.roles = role; + aen_data.lport.ppwwn = + bfa_fcs_port_get_pwwn(bfa_fcs_get_base_port(port->fcs)); + aen_data.lport.lpwwn = lpwwn; +} + +/* + * Send a LS reject + */ +static void +bfa_fcs_port_send_ls_rjt(struct bfa_fcs_port_s *port, struct fchs_s *rx_fchs, + u8 reason_code, u8 reason_code_expl) +{ + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + struct bfa_rport_s *bfa_rport = NULL; + int len; + + bfa_trc(port->fcs, rx_fchs->s_id); + + fcxp = bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) + return; + + len = fc_ls_rjt_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rx_fchs->s_id, + bfa_fcs_port_get_fcid(port), rx_fchs->ox_id, + reason_code, reason_code_expl); + + bfa_fcxp_send(fcxp, bfa_rport, port->fabric->vf_id, port->lp_tag, + BFA_FALSE, FC_CLASS_3, len, &fchs, NULL, NULL, + FC_MAX_PDUSZ, 0); +} + +/** + * Process incoming plogi from a remote port. + */ +static void +bfa_fcs_port_plogi(struct bfa_fcs_port_s *port, struct fchs_s *rx_fchs, + struct fc_logi_s *plogi) +{ + struct bfa_fcs_rport_s *rport; + + bfa_trc(port->fcs, rx_fchs->d_id); + bfa_trc(port->fcs, rx_fchs->s_id); + + /* + * If min cfg mode is enabled, drop any incoming PLOGIs + */ + if (__fcs_min_cfg(port->fcs)) { + bfa_trc(port->fcs, rx_fchs->s_id); + return; + } + + if (fc_plogi_parse(rx_fchs) != FC_PARSE_OK) { + bfa_trc(port->fcs, rx_fchs->s_id); + /* + * send a LS reject + */ + bfa_fcs_port_send_ls_rjt(port, rx_fchs, + FC_LS_RJT_RSN_PROTOCOL_ERROR, + FC_LS_RJT_EXP_SPARMS_ERR_OPTIONS); + return; + } + + /** +* Direct Attach P2P mode : verify address assigned by the r-port. + */ + if ((!bfa_fcs_fabric_is_switched(port->fabric)) + && + (memcmp + ((void *)&bfa_fcs_port_get_pwwn(port), (void *)&plogi->port_name, + sizeof(wwn_t)) < 0)) { + if (BFA_FCS_PID_IS_WKA(rx_fchs->d_id)) { + /* + * Address assigned to us cannot be a WKA + */ + bfa_fcs_port_send_ls_rjt(port, rx_fchs, + FC_LS_RJT_RSN_PROTOCOL_ERROR, + FC_LS_RJT_EXP_INVALID_NPORT_ID); + return; + } + port->pid = rx_fchs->d_id; + } + + /** + * First, check if we know the device by pwwn. + */ + rport = bfa_fcs_port_get_rport_by_pwwn(port, plogi->port_name); + if (rport) { + /** + * Direct Attach P2P mode: handle address assigned by the rport. + */ + if ((!bfa_fcs_fabric_is_switched(port->fabric)) + && + (memcmp + ((void *)&bfa_fcs_port_get_pwwn(port), + (void *)&plogi->port_name, sizeof(wwn_t)) < 0)) { + port->pid = rx_fchs->d_id; + rport->pid = rx_fchs->s_id; + } + bfa_fcs_rport_plogi(rport, rx_fchs, plogi); + return; + } + + /** + * Next, lookup rport by PID. + */ + rport = bfa_fcs_port_get_rport_by_pid(port, rx_fchs->s_id); + if (!rport) { + /** + * Inbound PLOGI from a new device. + */ + bfa_fcs_rport_plogi_create(port, rx_fchs, plogi); + return; + } + + /** + * Rport is known only by PID. + */ + if (rport->pwwn) { + /** + * This is a different device with the same pid. Old device + * disappeared. Send implicit LOGO to old device. + */ + bfa_assert(rport->pwwn != plogi->port_name); + bfa_fcs_rport_logo_imp(rport); + + /** + * Inbound PLOGI from a new device (with old PID). + */ + bfa_fcs_rport_plogi_create(port, rx_fchs, plogi); + return; + } + + /** + * PLOGI crossing each other. + */ + bfa_assert(rport->pwwn == WWN_NULL); + bfa_fcs_rport_plogi(rport, rx_fchs, plogi); +} + +/* + * Process incoming ECHO. 
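+ * The LS_ACC built below returns the sender's echo payload verbatim.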
+ * Since it does not require a login, it is processed here.
+ */
+static void
+bfa_fcs_port_echo(struct bfa_fcs_port_s *port, struct fchs_s *rx_fchs,
+ struct fc_echo_s *echo, u16 rx_len)
+{
+ struct fchs_s fchs;
+ struct bfa_fcxp_s *fcxp;
+ struct bfa_rport_s *bfa_rport = NULL;
+ int len, pyld_len;
+
+ bfa_trc(port->fcs, rx_fchs->s_id);
+ bfa_trc(port->fcs, rx_fchs->d_id);
+ bfa_trc(port->fcs, rx_len);
+
+ fcxp = bfa_fcs_fcxp_alloc(port->fcs);
+ if (!fcxp)
+ return;
+
+ len = fc_ls_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rx_fchs->s_id,
+ bfa_fcs_port_get_fcid(port), rx_fchs->ox_id);
+
+ /*
+ * Copy the payload (if any) from the echo frame
+ */
+ pyld_len = rx_len - sizeof(struct fchs_s);
+ bfa_trc(port->fcs, pyld_len);
+
+ if (pyld_len > len)
+ memcpy(((u8 *) bfa_fcxp_get_reqbuf(fcxp)) +
+ sizeof(struct fc_echo_s), (echo + 1),
+ (pyld_len - sizeof(struct fc_echo_s)));
+
+ bfa_fcxp_send(fcxp, bfa_rport, port->fabric->vf_id, port->lp_tag,
+ BFA_FALSE, FC_CLASS_3, pyld_len, &fchs, NULL, NULL,
+ FC_MAX_PDUSZ, 0);
+}
+
+/*
+ * Process incoming RNID.
+ * Since it does not require a login, it is processed here.
+ */
+static void
+bfa_fcs_port_rnid(struct bfa_fcs_port_s *port, struct fchs_s *rx_fchs,
+ struct fc_rnid_cmd_s *rnid, u16 rx_len)
+{
+ struct fc_rnid_common_id_data_s common_id_data;
+ struct fc_rnid_general_topology_data_s gen_topo_data;
+ struct fchs_s fchs;
+ struct bfa_fcxp_s *fcxp;
+ struct bfa_rport_s *bfa_rport = NULL;
+ u16 len;
+ u32 data_format;
+
+ bfa_trc(port->fcs, rx_fchs->s_id);
+ bfa_trc(port->fcs, rx_fchs->d_id);
+ bfa_trc(port->fcs, rx_len);
+
+ fcxp = bfa_fcs_fcxp_alloc(port->fcs);
+ if (!fcxp)
+ return;
+
+ /*
+ * Check Node Identification Data Format
+ * We only support General Topology Discovery Format.
+ * For any other requested Data Formats, we return Common Node Id Data
+ * only, as per FC-LS.
+ */
+ bfa_trc(port->fcs, rnid->node_id_data_format);
+ if (rnid->node_id_data_format == RNID_NODEID_DATA_FORMAT_DISCOVERY) {
+ data_format = RNID_NODEID_DATA_FORMAT_DISCOVERY;
+ /*
+ * Get General topology data for this port
+ */
+ bfa_fs_port_get_gen_topo_data(port, &gen_topo_data);
+ } else {
+ data_format = RNID_NODEID_DATA_FORMAT_COMMON;
+ }
+
+ /*
+ * Copy the Node Id Info
+ */
+ common_id_data.port_name = bfa_fcs_port_get_pwwn(port);
+ common_id_data.node_name = bfa_fcs_port_get_nwwn(port);
+
+ len = fc_rnid_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rx_fchs->s_id,
+ bfa_fcs_port_get_fcid(port), rx_fchs->ox_id,
+ data_format, &common_id_data, &gen_topo_data);
+
+ bfa_fcxp_send(fcxp, bfa_rport, port->fabric->vf_id, port->lp_tag,
+ BFA_FALSE, FC_CLASS_3, len, &fchs, NULL, NULL,
+ FC_MAX_PDUSZ, 0);
+
+ return;
+}
+
+/*
+ * Fill out General Topology Discovery Data for RNID ELS.
+ */
+static void
+bfa_fs_port_get_gen_topo_data(struct bfa_fcs_port_s *port,
+ struct fc_rnid_general_topology_data_s *gen_topo_data)
+{
+
+ bfa_os_memset(gen_topo_data, 0,
+ sizeof(struct fc_rnid_general_topology_data_s));
+
+ gen_topo_data->asso_type = bfa_os_htonl(RNID_ASSOCIATED_TYPE_HOST);
+ gen_topo_data->phy_port_num = 0; /* @todo */
+ gen_topo_data->num_attached_nodes = bfa_os_htonl(1);
+}
+
+static void
+bfa_fcs_port_online_actions(struct bfa_fcs_port_s *port)
+{
+ bfa_trc(port->fcs, port->fabric->oper_type);
+
+ __port_action[port->fabric->fab_type].init(port);
+ __port_action[port->fabric->fab_type].online(port);
+
+ bfa_fcs_port_aen_post(port, BFA_LPORT_AEN_ONLINE);
+ bfa_fcb_port_online(port->fcs->bfad, port->port_cfg.roles,
+ port->fabric->vf_drv, (port->vport == NULL) ?
+ NULL : port->vport->vport_drv);
+}
+
+static void
+bfa_fcs_port_offline_actions(struct bfa_fcs_port_s *port)
+{
+ struct list_head *qe, *qen;
+ struct bfa_fcs_rport_s *rport;
+
+ bfa_trc(port->fcs, port->fabric->oper_type);
+
+ __port_action[port->fabric->fab_type].offline(port);
+
+ if (bfa_fcs_fabric_is_online(port->fabric) == BFA_TRUE) {
+ bfa_fcs_port_aen_post(port, BFA_LPORT_AEN_DISCONNECT);
+ } else {
+ bfa_fcs_port_aen_post(port, BFA_LPORT_AEN_OFFLINE);
+ }
+ bfa_fcb_port_offline(port->fcs->bfad, port->port_cfg.roles,
+ port->fabric->vf_drv,
+ (port->vport == NULL) ? NULL : port->vport->vport_drv);
+
+ list_for_each_safe(qe, qen, &port->rport_q) {
+ rport = (struct bfa_fcs_rport_s *)qe;
+ bfa_fcs_rport_offline(rport);
+ }
+}
+
+static void
+bfa_fcs_port_unknown_init(struct bfa_fcs_port_s *port)
+{
+ bfa_assert(0);
+}
+
+static void
+bfa_fcs_port_unknown_online(struct bfa_fcs_port_s *port)
+{
+ bfa_assert(0);
+}
+
+static void
+bfa_fcs_port_unknown_offline(struct bfa_fcs_port_s *port)
+{
+ bfa_assert(0);
+}
+
+static void
+bfa_fcs_port_deleted(struct bfa_fcs_port_s *port)
+{
+ bfa_fcs_port_aen_post(port, BFA_LPORT_AEN_DELETE);
+
+ /*
+ * Base port will be deleted by the OS driver
+ */
+ if (port->vport) {
+ bfa_fcb_port_delete(port->fcs->bfad, port->port_cfg.roles,
+ port->fabric->vf_drv,
+ port->vport ? port->vport->vport_drv : NULL);
+ bfa_fcs_vport_delete_comp(port->vport);
+ } else {
+ bfa_fcs_fabric_port_delete_comp(port->fabric);
+ }
+}
+
+
+
+/**
+ * fcs_lport_api BFA FCS port API
+ */
+/**
+ * Module initialization
+ */
+void
+bfa_fcs_port_modinit(struct bfa_fcs_s *fcs)
+{
+
+}
+
+/**
+ * Module cleanup
+ */
+void
+bfa_fcs_port_modexit(struct bfa_fcs_s *fcs)
+{
+ bfa_fcs_modexit_comp(fcs);
+}
+
+/**
+ * Unsolicited frame receive handling.
+ */
+void
+bfa_fcs_port_uf_recv(struct bfa_fcs_port_s *lport, struct fchs_s *fchs,
+ u16 len)
+{
+ u32 pid = fchs->s_id;
+ struct bfa_fcs_rport_s *rport = NULL;
+ struct fc_els_cmd_s *els_cmd = (struct fc_els_cmd_s *) (fchs + 1);
+
+ bfa_stats(lport, uf_recvs);
+
+ if (!bfa_fcs_port_is_online(lport)) {
+ bfa_stats(lport, uf_recv_drops);
+ return;
+ }
+
+ /**
+ * First, handle ELSs that do not require a login.
+ */
+ /*
+ * Handle PLOGI first
+ */
+ if ((fchs->type == FC_TYPE_ELS) &&
+ (els_cmd->els_code == FC_ELS_PLOGI)) {
+ bfa_fcs_port_plogi(lport, fchs, (struct fc_logi_s *) els_cmd);
+ return;
+ }
+
+ /*
+ * Handle ECHO separately.
+ */
+ if ((fchs->type == FC_TYPE_ELS) && (els_cmd->els_code == FC_ELS_ECHO)) {
+ bfa_fcs_port_echo(lport, fchs,
+ (struct fc_echo_s *) els_cmd, len);
+ return;
+ }
+
+ /*
+ * Handle RNID separately.
+ */
+ if ((fchs->type == FC_TYPE_ELS) && (els_cmd->els_code == FC_ELS_RNID)) {
+ bfa_fcs_port_rnid(lport, fchs,
+ (struct fc_rnid_cmd_s *) els_cmd, len);
+ return;
+ }
+
+ /**
+ * look for a matching remote port ID
+ */
+ rport = bfa_fcs_port_get_rport_by_pid(lport, pid);
+ if (rport) {
+ bfa_trc(rport->fcs, fchs->s_id);
+ bfa_trc(rport->fcs, fchs->d_id);
+ bfa_trc(rport->fcs, fchs->type);
+
+ bfa_fcs_rport_uf_recv(rport, fchs, len);
+ return;
+ }
+
+ /**
+ * Only handles ELS frames for now.
+ */
+ if (fchs->type != FC_TYPE_ELS) {
+ bfa_trc(lport->fcs, fchs->type);
+ bfa_assert(0);
+ return;
+ }
+
+ bfa_trc(lport->fcs, els_cmd->els_code);
+ if (els_cmd->els_code == FC_ELS_RSCN) {
+ bfa_fcs_port_scn_process_rscn(lport, fchs, len);
+ return;
+ }
+
+ if (els_cmd->els_code == FC_ELS_LOGO) {
+ /**
+ * @todo Handle LOGO frames received.
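+ * (For now the LOGO is only traced and ignored.)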
+ */
+ bfa_trc(lport->fcs, els_cmd->els_code);
+ return;
+ }
+
+ if (els_cmd->els_code == FC_ELS_PRLI) {
+ /**
+ * @todo Handle PRLI frames received.
+ */
+ bfa_trc(lport->fcs, els_cmd->els_code);
+ return;
+ }
+
+ /**
+ * Unhandled ELS frames. Send a LS_RJT.
+ */
+ bfa_fcs_port_send_ls_rjt(lport, fchs, FC_LS_RJT_RSN_CMD_NOT_SUPP,
+ FC_LS_RJT_EXP_NO_ADDL_INFO);
+
+}
+
+/**
+ * PID based Lookup for a R-Port in the Port R-Port Queue
+ */
+struct bfa_fcs_rport_s *
+bfa_fcs_port_get_rport_by_pid(struct bfa_fcs_port_s *port, u32 pid)
+{
+ struct bfa_fcs_rport_s *rport;
+ struct list_head *qe;
+
+ list_for_each(qe, &port->rport_q) {
+ rport = (struct bfa_fcs_rport_s *)qe;
+ if (rport->pid == pid)
+ return rport;
+ }
+
+ bfa_trc(port->fcs, pid);
+ return NULL;
+}
+
+/**
+ * PWWN based Lookup for a R-Port in the Port R-Port Queue
+ */
+struct bfa_fcs_rport_s *
+bfa_fcs_port_get_rport_by_pwwn(struct bfa_fcs_port_s *port, wwn_t pwwn)
+{
+ struct bfa_fcs_rport_s *rport;
+ struct list_head *qe;
+
+ list_for_each(qe, &port->rport_q) {
+ rport = (struct bfa_fcs_rport_s *)qe;
+ if (wwn_is_equal(rport->pwwn, pwwn))
+ return rport;
+ }
+
+ bfa_trc(port->fcs, pwwn);
+ return (NULL);
+}
+
+/**
+ * NWWN based Lookup for a R-Port in the Port R-Port Queue
+ */
+struct bfa_fcs_rport_s *
+bfa_fcs_port_get_rport_by_nwwn(struct bfa_fcs_port_s *port, wwn_t nwwn)
+{
+ struct bfa_fcs_rport_s *rport;
+ struct list_head *qe;
+
+ list_for_each(qe, &port->rport_q) {
+ rport = (struct bfa_fcs_rport_s *)qe;
+ if (wwn_is_equal(rport->nwwn, nwwn))
+ return rport;
+ }
+
+ bfa_trc(port->fcs, nwwn);
+ return (NULL);
+}
+
+/**
+ * Called by rport module when new rports are discovered.
+ */
+void
+bfa_fcs_port_add_rport(struct bfa_fcs_port_s *port,
+ struct bfa_fcs_rport_s *rport)
+{
+ list_add_tail(&rport->qe, &port->rport_q);
+ port->num_rports++;
+}
+
+/**
+ * Called by rport module when rports are deleted.
+ */
+void
+bfa_fcs_port_del_rport(struct bfa_fcs_port_s *port,
+ struct bfa_fcs_rport_s *rport)
+{
+ bfa_assert(bfa_q_is_on_q(&port->rport_q, rport));
+ list_del(&rport->qe);
+ port->num_rports--;
+
+ bfa_sm_send_event(port, BFA_FCS_PORT_SM_DELRPORT);
+}
+
+/**
+ * Called by fabric for base port when fabric login is complete.
+ * Called by vport for virtual ports when FDISC is complete.
+ */
+void
+bfa_fcs_port_online(struct bfa_fcs_port_s *port)
+{
+ bfa_sm_send_event(port, BFA_FCS_PORT_SM_ONLINE);
+}
+
+/**
+ * Called by fabric for base port when fabric goes offline.
+ * Called by vport for virtual ports when virtual port becomes offline.
+ */
+void
+bfa_fcs_port_offline(struct bfa_fcs_port_s *port)
+{
+ bfa_sm_send_event(port, BFA_FCS_PORT_SM_OFFLINE);
+}
+
+/**
+ * Called by fabric to delete base lport and associated resources.
+ *
+ * Called by vport to delete lport and associated resources. Should call
+ * bfa_fcs_vport_delete_comp() for vports on completion.
+ */
+void
+bfa_fcs_port_delete(struct bfa_fcs_port_s *port)
+{
+ bfa_sm_send_event(port, BFA_FCS_PORT_SM_DELETE);
+}
+
+/**
+ * Called by fabric in private loop topology to process LIP event.
+ */
+void
+bfa_fcs_port_lip(struct bfa_fcs_port_s *port)
+{
+}
+
+/**
+ * Return TRUE if port is online, else return FALSE
+ */
+bfa_boolean_t
+bfa_fcs_port_is_online(struct bfa_fcs_port_s *port)
+{
+ return (bfa_sm_cmp_state(port, bfa_fcs_port_sm_online));
+}
+
+/**
+ * Logical port initialization of base or virtual port.
+ * Called by fabric for base port or by vport for virtual ports.
+ */ +void +bfa_fcs_lport_init(struct bfa_fcs_port_s *lport, struct bfa_fcs_s *fcs, + u16 vf_id, struct bfa_port_cfg_s *port_cfg, + struct bfa_fcs_vport_s *vport) +{ + lport->fcs = fcs; + lport->fabric = bfa_fcs_vf_lookup(fcs, vf_id); + bfa_os_assign(lport->port_cfg, *port_cfg); + lport->vport = vport; + lport->lp_tag = (vport) ? bfa_lps_get_tag(vport->lps) : + bfa_lps_get_tag(lport->fabric->lps); + + INIT_LIST_HEAD(&lport->rport_q); + lport->num_rports = 0; + + lport->bfad_port = + bfa_fcb_port_new(fcs->bfad, lport, lport->port_cfg.roles, + lport->fabric->vf_drv, + vport ? vport->vport_drv : NULL); + bfa_fcs_port_aen_post(lport, BFA_LPORT_AEN_NEW); + + bfa_sm_set_state(lport, bfa_fcs_port_sm_uninit); + bfa_sm_send_event(lport, BFA_FCS_PORT_SM_CREATE); +} + + + +/** + * fcs_lport_api + */ + +void +bfa_fcs_port_get_attr(struct bfa_fcs_port_s *port, + struct bfa_port_attr_s *port_attr) +{ + if (bfa_sm_cmp_state(port, bfa_fcs_port_sm_online)) + port_attr->pid = port->pid; + else + port_attr->pid = 0; + + port_attr->port_cfg = port->port_cfg; + + if (port->fabric) { + port_attr->port_type = bfa_fcs_fabric_port_type(port->fabric); + port_attr->loopback = bfa_fcs_fabric_is_loopback(port->fabric); + port_attr->fabric_name = bfa_fcs_port_get_fabric_name(port); + memcpy(port_attr->fabric_ip_addr, + bfa_fcs_port_get_fabric_ipaddr(port), + BFA_FCS_FABRIC_IPADDR_SZ); + + if (port->vport != NULL) + port_attr->port_type = BFA_PPORT_TYPE_VPORT; + + } else { + port_attr->port_type = BFA_PPORT_TYPE_UNKNOWN; + port_attr->state = BFA_PORT_UNINIT; + } + +} + + diff --git a/drivers/scsi/bfa/bfa_fcs_port.c b/drivers/scsi/bfa/bfa_fcs_port.c new file mode 100644 index 000000000000..9c4b24e62de1 --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcs_port.c @@ -0,0 +1,68 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcs_pport.c BFA FCS PPORT ( physical port) + */ + +#include +#include +#include +#include "fcs_trcmod.h" +#include "fcs.h" +#include "fcs_fabric.h" +#include "fcs_port.h" + +BFA_TRC_FILE(FCS, PPORT); + +static void +bfa_fcs_pport_event_handler(void *cbarg, bfa_pport_event_t event) +{ + struct bfa_fcs_s *fcs = cbarg; + + bfa_trc(fcs, event); + + switch (event) { + case BFA_PPORT_LINKUP: + bfa_fcs_fabric_link_up(&fcs->fabric); + break; + + case BFA_PPORT_LINKDOWN: + bfa_fcs_fabric_link_down(&fcs->fabric); + break; + + case BFA_PPORT_TRUNK_LINKDOWN: + bfa_assert(0); + break; + + default: + bfa_assert(0); + } +} + +void +bfa_fcs_pport_modinit(struct bfa_fcs_s *fcs) +{ + bfa_pport_event_register(fcs->bfa, bfa_fcs_pport_event_handler, + fcs); +} + +void +bfa_fcs_pport_modexit(struct bfa_fcs_s *fcs) +{ + bfa_fcs_modexit_comp(fcs); +} diff --git a/drivers/scsi/bfa/bfa_fcs_uf.c b/drivers/scsi/bfa/bfa_fcs_uf.c new file mode 100644 index 000000000000..ad01db6444b2 --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcs_uf.c @@ -0,0 +1,105 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. 
+ * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcs_uf.c BFA FCS UF ( Unsolicited Frames) + */ + +#include +#include +#include +#include "fcs.h" +#include "fcs_trcmod.h" +#include "fcs_fabric.h" +#include "fcs_uf.h" + +BFA_TRC_FILE(FCS, UF); + +/** + * BFA callback for unsolicited frame receive handler. + * + * @param[in] cbarg callback arg for receive handler + * @param[in] uf unsolicited frame descriptor + * + * @return None + */ +static void +bfa_fcs_uf_recv(void *cbarg, struct bfa_uf_s *uf) +{ + struct bfa_fcs_s *fcs = (struct bfa_fcs_s *) cbarg; + struct fchs_s *fchs = bfa_uf_get_frmbuf(uf); + u16 len = bfa_uf_get_frmlen(uf); + struct fc_vft_s *vft; + struct bfa_fcs_fabric_s *fabric; + + /** + * check for VFT header + */ + if (fchs->routing == FC_RTG_EXT_HDR && + fchs->cat_info == FC_CAT_VFT_HDR) { + bfa_stats(fcs, uf.tagged); + vft = bfa_uf_get_frmbuf(uf); + if (fcs->port_vfid == vft->vf_id) + fabric = &fcs->fabric; + else + fabric = bfa_fcs_vf_lookup(fcs, (u16) vft->vf_id); + + /** + * drop frame if vfid is unknown + */ + if (!fabric) { + bfa_assert(0); + bfa_stats(fcs, uf.vfid_unknown); + bfa_uf_free(uf); + return; + } + + /** + * skip vft header + */ + fchs = (struct fchs_s *) (vft + 1); + len -= sizeof(struct fc_vft_s); + + bfa_trc(fcs, vft->vf_id); + } else { + bfa_stats(fcs, uf.untagged); + fabric = &fcs->fabric; + } + + bfa_trc(fcs, ((u32 *) fchs)[0]); + bfa_trc(fcs, ((u32 *) fchs)[1]); + bfa_trc(fcs, ((u32 *) fchs)[2]); + bfa_trc(fcs, ((u32 *) fchs)[3]); + bfa_trc(fcs, ((u32 *) fchs)[4]); + bfa_trc(fcs, ((u32 *) fchs)[5]); + bfa_trc(fcs, len); + + bfa_fcs_fabric_uf_recv(fabric, fchs, len); + bfa_uf_free(uf); +} + +void +bfa_fcs_uf_modinit(struct bfa_fcs_s *fcs) +{ + bfa_uf_recv_register(fcs->bfa, bfa_fcs_uf_recv, fcs); +} + +void +bfa_fcs_uf_modexit(struct bfa_fcs_s *fcs) +{ + bfa_fcs_modexit_comp(fcs); +} diff --git a/drivers/scsi/bfa/bfa_fcxp.c b/drivers/scsi/bfa/bfa_fcxp.c new file mode 100644 index 000000000000..4754a0e9006a --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcxp.c @@ -0,0 +1,782 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include +#include + +BFA_TRC_FILE(HAL, FCXP); +BFA_MODULE(fcxp); + +/** + * forward declarations + */ +static void __bfa_fcxp_send_cbfn(void *cbarg, bfa_boolean_t complete); +static void hal_fcxp_rx_plog(struct bfa_s *bfa, struct bfa_fcxp_s *fcxp, + struct bfi_fcxp_send_rsp_s *fcxp_rsp); +static void hal_fcxp_tx_plog(struct bfa_s *bfa, u32 reqlen, + struct bfa_fcxp_s *fcxp, struct fchs_s *fchs); +static void bfa_fcxp_qresume(void *cbarg); +static void bfa_fcxp_queue(struct bfa_fcxp_s *fcxp, + struct bfi_fcxp_send_req_s *send_req); + +/** + * fcxp_pvt BFA FCXP private functions + */ + +static void +claim_fcxp_req_rsp_mem(struct bfa_fcxp_mod_s *mod, struct bfa_meminfo_s *mi) +{ + u8 *dm_kva = NULL; + u64 dm_pa; + u32 buf_pool_sz; + + dm_kva = bfa_meminfo_dma_virt(mi); + dm_pa = bfa_meminfo_dma_phys(mi); + + buf_pool_sz = mod->req_pld_sz * mod->num_fcxps; + + /* + * Initialize the fcxp req payload list + */ + mod->req_pld_list_kva = dm_kva; + mod->req_pld_list_pa = dm_pa; + dm_kva += buf_pool_sz; + dm_pa += buf_pool_sz; + bfa_os_memset(mod->req_pld_list_kva, 0, buf_pool_sz); + + /* + * Initialize the fcxp rsp payload list + */ + buf_pool_sz = mod->rsp_pld_sz * mod->num_fcxps; + mod->rsp_pld_list_kva = dm_kva; + mod->rsp_pld_list_pa = dm_pa; + dm_kva += buf_pool_sz; + dm_pa += buf_pool_sz; + bfa_os_memset(mod->rsp_pld_list_kva, 0, buf_pool_sz); + + bfa_meminfo_dma_virt(mi) = dm_kva; + bfa_meminfo_dma_phys(mi) = dm_pa; +} + +static void +claim_fcxps_mem(struct bfa_fcxp_mod_s *mod, struct bfa_meminfo_s *mi) +{ + u16 i; + struct bfa_fcxp_s *fcxp; + + fcxp = (struct bfa_fcxp_s *) bfa_meminfo_kva(mi); + bfa_os_memset(fcxp, 0, sizeof(struct bfa_fcxp_s) * mod->num_fcxps); + + INIT_LIST_HEAD(&mod->fcxp_free_q); + INIT_LIST_HEAD(&mod->fcxp_active_q); + + mod->fcxp_list = fcxp; + + for (i = 0; i < mod->num_fcxps; i++) { + fcxp->fcxp_mod = mod; + fcxp->fcxp_tag = i; + + list_add_tail(&fcxp->qe, &mod->fcxp_free_q); + bfa_reqq_winit(&fcxp->reqq_wqe, bfa_fcxp_qresume, fcxp); + fcxp->reqq_waiting = BFA_FALSE; + + fcxp = fcxp + 1; + } + + bfa_meminfo_kva(mi) = (void *)fcxp; +} + +static void +bfa_fcxp_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *ndm_len, + u32 *dm_len) +{ + u16 num_fcxp_reqs = cfg->fwcfg.num_fcxp_reqs; + + if (num_fcxp_reqs == 0) + return; + + /* + * Account for req/rsp payload + */ + *dm_len += BFA_FCXP_MAX_IBUF_SZ * num_fcxp_reqs; + if (cfg->drvcfg.min_cfg) + *dm_len += BFA_FCXP_MAX_IBUF_SZ * num_fcxp_reqs; + else + *dm_len += BFA_FCXP_MAX_LBUF_SZ * num_fcxp_reqs; + + /* + * Account for fcxp structs + */ + *ndm_len += sizeof(struct bfa_fcxp_s) * num_fcxp_reqs; +} + +static void +bfa_fcxp_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev) +{ + struct bfa_fcxp_mod_s *mod = BFA_FCXP_MOD(bfa); + + bfa_os_memset(mod, 0, sizeof(struct bfa_fcxp_mod_s)); + mod->bfa = bfa; + mod->num_fcxps = cfg->fwcfg.num_fcxp_reqs; + + /** + * Initialize FCXP request and response payload sizes. 
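+ * In the minimum-configuration case both use the small buffer size;
+ * otherwise responses get the large buffer (see the assignments below).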
+ */ + mod->req_pld_sz = mod->rsp_pld_sz = BFA_FCXP_MAX_IBUF_SZ; + if (!cfg->drvcfg.min_cfg) + mod->rsp_pld_sz = BFA_FCXP_MAX_LBUF_SZ; + + INIT_LIST_HEAD(&mod->wait_q); + + claim_fcxp_req_rsp_mem(mod, meminfo); + claim_fcxps_mem(mod, meminfo); +} + +static void +bfa_fcxp_initdone(struct bfa_s *bfa) +{ +} + +static void +bfa_fcxp_detach(struct bfa_s *bfa) +{ +} + +static void +bfa_fcxp_start(struct bfa_s *bfa) +{ +} + +static void +bfa_fcxp_stop(struct bfa_s *bfa) +{ +} + +static void +bfa_fcxp_iocdisable(struct bfa_s *bfa) +{ + struct bfa_fcxp_mod_s *mod = BFA_FCXP_MOD(bfa); + struct bfa_fcxp_s *fcxp; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &mod->fcxp_active_q) { + fcxp = (struct bfa_fcxp_s *) qe; + if (fcxp->caller == NULL) { + fcxp->send_cbfn(fcxp->caller, fcxp, fcxp->send_cbarg, + BFA_STATUS_IOC_FAILURE, 0, 0, NULL); + bfa_fcxp_free(fcxp); + } else { + fcxp->rsp_status = BFA_STATUS_IOC_FAILURE; + bfa_cb_queue(bfa, &fcxp->hcb_qe, + __bfa_fcxp_send_cbfn, fcxp); + } + } +} + +static struct bfa_fcxp_s * +bfa_fcxp_get(struct bfa_fcxp_mod_s *fm) +{ + struct bfa_fcxp_s *fcxp; + + bfa_q_deq(&fm->fcxp_free_q, &fcxp); + + if (fcxp) + list_add_tail(&fcxp->qe, &fm->fcxp_active_q); + + return (fcxp); +} + +static void +bfa_fcxp_put(struct bfa_fcxp_s *fcxp) +{ + struct bfa_fcxp_mod_s *mod = fcxp->fcxp_mod; + struct bfa_fcxp_wqe_s *wqe; + + bfa_q_deq(&mod->wait_q, &wqe); + if (wqe) { + bfa_trc(mod->bfa, fcxp->fcxp_tag); + wqe->alloc_cbfn(wqe->alloc_cbarg, fcxp); + return; + } + + bfa_assert(bfa_q_is_on_q(&mod->fcxp_active_q, fcxp)); + list_del(&fcxp->qe); + list_add_tail(&fcxp->qe, &mod->fcxp_free_q); +} + +static void +bfa_fcxp_null_comp(void *bfad_fcxp, struct bfa_fcxp_s *fcxp, void *cbarg, + bfa_status_t req_status, u32 rsp_len, + u32 resid_len, struct fchs_s *rsp_fchs) +{ + /**discarded fcxp completion */ +} + +static void +__bfa_fcxp_send_cbfn(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_fcxp_s *fcxp = cbarg; + + if (complete) { + fcxp->send_cbfn(fcxp->caller, fcxp, fcxp->send_cbarg, + fcxp->rsp_status, fcxp->rsp_len, + fcxp->residue_len, &fcxp->rsp_fchs); + } else { + bfa_fcxp_free(fcxp); + } +} + +static void +hal_fcxp_send_comp(struct bfa_s *bfa, struct bfi_fcxp_send_rsp_s *fcxp_rsp) +{ + struct bfa_fcxp_mod_s *mod = BFA_FCXP_MOD(bfa); + struct bfa_fcxp_s *fcxp; + u16 fcxp_tag = bfa_os_ntohs(fcxp_rsp->fcxp_tag); + + bfa_trc(bfa, fcxp_tag); + + fcxp_rsp->rsp_len = bfa_os_ntohl(fcxp_rsp->rsp_len); + + /** + * @todo f/w should not set residue to non-0 when everything + * is received. 
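+ * Until that is fixed, residue_len is forced to zero below whenever
+ * req_status is BFA_STATUS_OK.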
+ */
+ if (fcxp_rsp->req_status == BFA_STATUS_OK)
+ fcxp_rsp->residue_len = 0;
+ else
+ fcxp_rsp->residue_len = bfa_os_ntohl(fcxp_rsp->residue_len);
+
+ fcxp = BFA_FCXP_FROM_TAG(mod, fcxp_tag);
+
+ bfa_assert(fcxp->send_cbfn != NULL);
+
+ hal_fcxp_rx_plog(mod->bfa, fcxp, fcxp_rsp);
+
+ if (fcxp->send_cbfn != NULL) {
+ if (fcxp->caller == NULL) {
+ bfa_trc(mod->bfa, fcxp->fcxp_tag);
+
+ fcxp->send_cbfn(fcxp->caller, fcxp, fcxp->send_cbarg,
+ fcxp_rsp->req_status, fcxp_rsp->rsp_len,
+ fcxp_rsp->residue_len, &fcxp_rsp->fchs);
+ /*
+ * fcxp automatically freed on return from the callback
+ */
+ bfa_fcxp_free(fcxp);
+ } else {
+ bfa_trc(mod->bfa, fcxp->fcxp_tag);
+ fcxp->rsp_status = fcxp_rsp->req_status;
+ fcxp->rsp_len = fcxp_rsp->rsp_len;
+ fcxp->residue_len = fcxp_rsp->residue_len;
+ fcxp->rsp_fchs = fcxp_rsp->fchs;
+
+ bfa_cb_queue(bfa, &fcxp->hcb_qe,
+ __bfa_fcxp_send_cbfn, fcxp);
+ }
+ } else {
+ bfa_trc(bfa, fcxp_tag);
+ }
+}
+
+static void
+hal_fcxp_set_local_sges(struct bfi_sge_s *sge, u32 reqlen, u64 req_pa)
+{
+ union bfi_addr_u sga_zero = { {0} };
+
+ sge->sg_len = reqlen;
+ sge->flags = BFI_SGE_DATA_LAST;
+ bfa_dma_addr_set(sge[0].sga, req_pa);
+ bfa_sge_to_be(sge);
+ sge++;
+
+ sge->sga = sga_zero;
+ sge->sg_len = reqlen;
+ sge->flags = BFI_SGE_PGDLEN;
+ bfa_sge_to_be(sge);
+}
+
+static void
+hal_fcxp_tx_plog(struct bfa_s *bfa, u32 reqlen, struct bfa_fcxp_s *fcxp,
+ struct fchs_s *fchs)
+{
+ /*
+ * TODO: TX ox_id
+ */
+ if (reqlen > 0) {
+ if (fcxp->use_ireqbuf) {
+ u32 pld_w0 =
+ *((u32 *) BFA_FCXP_REQ_PLD(fcxp));
+
+ bfa_plog_fchdr_and_pl(bfa->plog, BFA_PL_MID_HAL_FCXP,
+ BFA_PL_EID_TX,
+ reqlen + sizeof(struct fchs_s), fchs, pld_w0);
+ } else {
+ bfa_plog_fchdr(bfa->plog, BFA_PL_MID_HAL_FCXP,
+ BFA_PL_EID_TX, reqlen + sizeof(struct fchs_s),
+ fchs);
+ }
+ } else {
+ bfa_plog_fchdr(bfa->plog, BFA_PL_MID_HAL_FCXP, BFA_PL_EID_TX,
+ reqlen + sizeof(struct fchs_s), fchs);
+ }
+}
+
+static void
+hal_fcxp_rx_plog(struct bfa_s *bfa, struct bfa_fcxp_s *fcxp,
+ struct bfi_fcxp_send_rsp_s *fcxp_rsp)
+{
+ if (fcxp_rsp->rsp_len > 0) {
+ if (fcxp->use_irspbuf) {
+ u32 pld_w0 =
+ *((u32 *) BFA_FCXP_RSP_PLD(fcxp));
+
+ bfa_plog_fchdr_and_pl(bfa->plog, BFA_PL_MID_HAL_FCXP,
+ BFA_PL_EID_RX,
+ (u16) fcxp_rsp->rsp_len,
+ &fcxp_rsp->fchs, pld_w0);
+ } else {
+ bfa_plog_fchdr(bfa->plog, BFA_PL_MID_HAL_FCXP,
+ BFA_PL_EID_RX,
+ (u16) fcxp_rsp->rsp_len,
+ &fcxp_rsp->fchs);
+ }
+ } else {
+ bfa_plog_fchdr(bfa->plog, BFA_PL_MID_HAL_FCXP, BFA_PL_EID_RX,
+ (u16) fcxp_rsp->rsp_len, &fcxp_rsp->fchs);
+ }
+}
+
+/**
+ * Handler to resume sending fcxp when space is available in the CPE queue.
+ */
+static void
+bfa_fcxp_qresume(void *cbarg)
+{
+ struct bfa_fcxp_s *fcxp = cbarg;
+ struct bfa_s *bfa = fcxp->fcxp_mod->bfa;
+ struct bfi_fcxp_send_req_s *send_req;
+
+ fcxp->reqq_waiting = BFA_FALSE;
+ send_req = bfa_reqq_next(bfa, BFA_REQQ_FCXP);
+ bfa_fcxp_queue(fcxp, send_req);
+}
+
+/**
+ * Queue fcxp send request to firmware.
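+ * Builds the BFI send request (rport handle, max frame size, request and
+ * response SGEs) and bumps the CPE producer index via bfa_reqq_produce().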
+ */ +static void +bfa_fcxp_queue(struct bfa_fcxp_s *fcxp, struct bfi_fcxp_send_req_s *send_req) +{ + struct bfa_s *bfa = fcxp->fcxp_mod->bfa; + struct bfa_fcxp_req_info_s *reqi = &fcxp->req_info; + struct bfa_fcxp_rsp_info_s *rspi = &fcxp->rsp_info; + struct bfa_rport_s *rport = reqi->bfa_rport; + + bfi_h2i_set(send_req->mh, BFI_MC_FCXP, BFI_FCXP_H2I_SEND_REQ, + bfa_lpuid(bfa)); + + send_req->fcxp_tag = bfa_os_htons(fcxp->fcxp_tag); + if (rport) { + send_req->rport_fw_hndl = rport->fw_handle; + send_req->max_frmsz = bfa_os_htons(rport->rport_info.max_frmsz); + if (send_req->max_frmsz == 0) + send_req->max_frmsz = bfa_os_htons(FC_MAX_PDUSZ); + } else { + send_req->rport_fw_hndl = 0; + send_req->max_frmsz = bfa_os_htons(FC_MAX_PDUSZ); + } + + send_req->vf_id = bfa_os_htons(reqi->vf_id); + send_req->lp_tag = reqi->lp_tag; + send_req->class = reqi->class; + send_req->rsp_timeout = rspi->rsp_timeout; + send_req->cts = reqi->cts; + send_req->fchs = reqi->fchs; + + send_req->req_len = bfa_os_htonl(reqi->req_tot_len); + send_req->rsp_maxlen = bfa_os_htonl(rspi->rsp_maxlen); + + /* + * setup req sgles + */ + if (fcxp->use_ireqbuf == 1) { + hal_fcxp_set_local_sges(send_req->req_sge, reqi->req_tot_len, + BFA_FCXP_REQ_PLD_PA(fcxp)); + } else { + if (fcxp->nreq_sgles > 0) { + bfa_assert(fcxp->nreq_sgles == 1); + hal_fcxp_set_local_sges(send_req->req_sge, + reqi->req_tot_len, + fcxp->req_sga_cbfn(fcxp->caller, + 0)); + } else { + bfa_assert(reqi->req_tot_len == 0); + hal_fcxp_set_local_sges(send_req->rsp_sge, 0, 0); + } + } + + /* + * setup rsp sgles + */ + if (fcxp->use_irspbuf == 1) { + bfa_assert(rspi->rsp_maxlen <= BFA_FCXP_MAX_LBUF_SZ); + + hal_fcxp_set_local_sges(send_req->rsp_sge, rspi->rsp_maxlen, + BFA_FCXP_RSP_PLD_PA(fcxp)); + + } else { + if (fcxp->nrsp_sgles > 0) { + bfa_assert(fcxp->nrsp_sgles == 1); + hal_fcxp_set_local_sges(send_req->rsp_sge, + rspi->rsp_maxlen, + fcxp->rsp_sga_cbfn(fcxp->caller, + 0)); + } else { + bfa_assert(rspi->rsp_maxlen == 0); + hal_fcxp_set_local_sges(send_req->rsp_sge, 0, 0); + } + } + + hal_fcxp_tx_plog(bfa, reqi->req_tot_len, fcxp, &reqi->fchs); + + bfa_reqq_produce(bfa, BFA_REQQ_FCXP); + + bfa_trc(bfa, bfa_reqq_pi(bfa, BFA_REQQ_FCXP)); + bfa_trc(bfa, bfa_reqq_ci(bfa, BFA_REQQ_FCXP)); +} + + +/** + * hal_fcxp_api BFA FCXP API + */ + +/** + * Allocate an FCXP instance to send a response or to send a request + * that has a response. Request/response buffers are allocated by caller. + * + * @param[in] bfa BFA bfa instance + * @param[in] nreq_sgles Number of SG elements required for request + * buffer. 0, if fcxp internal buffers are used. + * Use bfa_fcxp_get_reqbuf() to get the + * internal req buffer. + * @param[in] req_sgles SG elements describing request buffer. Will be + * copied in by BFA and hence can be freed on + * return from this function. + * @param[in] get_req_sga function ptr to be called to get a request SG + * Address (given the sge index). + * @param[in] get_req_sglen function ptr to be called to get a request SG + * len (given the sge index). + * @param[in] get_rsp_sga function ptr to be called to get a response SG + * Address (given the sge index). + * @param[in] get_rsp_sglen function ptr to be called to get a response SG + * len (given the sge index). + * + * @return FCXP instance. NULL on failure. 
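+ *
+ * Typical usage with internal buffers (an illustrative sketch only;
+ * error handling elided and names hypothetical):
+ *
+ *     fcxp = bfa_fcxp_alloc(drv, bfa, 0, 0, NULL, NULL, NULL, NULL);
+ *     reqbuf = bfa_fcxp_get_reqbuf(fcxp);
+ *     ... build the request payload in reqbuf ...
+ *     bfa_fcxp_send(fcxp, rport, vf_id, lp_tag, BFA_FALSE, FC_CLASS_3,
+ *         reqlen, &fchs, cbfn, cbarg, rsp_maxlen, rsp_tov);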
+ */ +struct bfa_fcxp_s * +bfa_fcxp_alloc(void *caller, struct bfa_s *bfa, int nreq_sgles, + int nrsp_sgles, bfa_fcxp_get_sgaddr_t req_sga_cbfn, + bfa_fcxp_get_sglen_t req_sglen_cbfn, + bfa_fcxp_get_sgaddr_t rsp_sga_cbfn, + bfa_fcxp_get_sglen_t rsp_sglen_cbfn) +{ + struct bfa_fcxp_s *fcxp = NULL; + u32 nreq_sgpg, nrsp_sgpg; + + bfa_assert(bfa != NULL); + + fcxp = bfa_fcxp_get(BFA_FCXP_MOD(bfa)); + if (fcxp == NULL) + return (NULL); + + bfa_trc(bfa, fcxp->fcxp_tag); + + fcxp->caller = caller; + + if (nreq_sgles == 0) { + fcxp->use_ireqbuf = 1; + } else { + bfa_assert(req_sga_cbfn != NULL); + bfa_assert(req_sglen_cbfn != NULL); + + fcxp->use_ireqbuf = 0; + fcxp->req_sga_cbfn = req_sga_cbfn; + fcxp->req_sglen_cbfn = req_sglen_cbfn; + + fcxp->nreq_sgles = nreq_sgles; + + /* + * alloc required sgpgs + */ + if (nreq_sgles > BFI_SGE_INLINE) { + nreq_sgpg = BFA_SGPG_NPAGE(nreq_sgles); + + if (bfa_sgpg_malloc + (bfa, &fcxp->req_sgpg_q, nreq_sgpg) + != BFA_STATUS_OK) { + /* bfa_sgpg_wait(bfa, &fcxp->req_sgpg_wqe, + nreq_sgpg); */ + /* + * TODO + */ + } + } + } + + if (nrsp_sgles == 0) { + fcxp->use_irspbuf = 1; + } else { + bfa_assert(rsp_sga_cbfn != NULL); + bfa_assert(rsp_sglen_cbfn != NULL); + + fcxp->use_irspbuf = 0; + fcxp->rsp_sga_cbfn = rsp_sga_cbfn; + fcxp->rsp_sglen_cbfn = rsp_sglen_cbfn; + + fcxp->nrsp_sgles = nrsp_sgles; + /* + * alloc required sgpgs + */ + if (nrsp_sgles > BFI_SGE_INLINE) { + nrsp_sgpg = BFA_SGPG_NPAGE(nreq_sgles); + + if (bfa_sgpg_malloc + (bfa, &fcxp->rsp_sgpg_q, nrsp_sgpg) + != BFA_STATUS_OK) { + /* bfa_sgpg_wait(bfa, &fcxp->rsp_sgpg_wqe, + nrsp_sgpg); */ + /* + * TODO + */ + } + } + } + + return (fcxp); +} + +/** + * Get the internal request buffer pointer + * + * @param[in] fcxp BFA fcxp pointer + * + * @return pointer to the internal request buffer + */ +void * +bfa_fcxp_get_reqbuf(struct bfa_fcxp_s *fcxp) +{ + struct bfa_fcxp_mod_s *mod = fcxp->fcxp_mod; + void *reqbuf; + + bfa_assert(fcxp->use_ireqbuf == 1); + reqbuf = ((u8 *)mod->req_pld_list_kva) + + fcxp->fcxp_tag * mod->req_pld_sz; + return reqbuf; +} + +u32 +bfa_fcxp_get_reqbufsz(struct bfa_fcxp_s *fcxp) +{ + struct bfa_fcxp_mod_s *mod = fcxp->fcxp_mod; + + return mod->req_pld_sz; +} + +/** + * Get the internal response buffer pointer + * + * @param[in] fcxp BFA fcxp pointer + * + * @return pointer to the internal request buffer + */ +void * +bfa_fcxp_get_rspbuf(struct bfa_fcxp_s *fcxp) +{ + struct bfa_fcxp_mod_s *mod = fcxp->fcxp_mod; + void *rspbuf; + + bfa_assert(fcxp->use_irspbuf == 1); + + rspbuf = ((u8 *)mod->rsp_pld_list_kva) + + fcxp->fcxp_tag * mod->rsp_pld_sz; + return rspbuf; +} + +/** + * Free the BFA FCXP + * + * @param[in] fcxp BFA fcxp pointer + * + * @return void + */ +void +bfa_fcxp_free(struct bfa_fcxp_s *fcxp) +{ + struct bfa_fcxp_mod_s *mod = fcxp->fcxp_mod; + + bfa_assert(fcxp != NULL); + bfa_trc(mod->bfa, fcxp->fcxp_tag); + bfa_fcxp_put(fcxp); +} + +/** + * Send a FCXP request + * + * @param[in] fcxp BFA fcxp pointer + * @param[in] rport BFA rport pointer. Could be left NULL for WKA rports + * @param[in] vf_id virtual Fabric ID + * @param[in] lp_tag lport tag + * @param[in] cts use Continous sequence + * @param[in] cos fc Class of Service + * @param[in] reqlen request length, does not include FCHS length + * @param[in] fchs fc Header Pointer. The header content will be copied + * in by BFA. 
+ * + * @param[in] cbfn call back function to be called on receiving + * the response + * @param[in] cbarg arg for cbfn + * @param[in] rsp_timeout + * response timeout + * + * @return bfa_status_t + */ +void +bfa_fcxp_send(struct bfa_fcxp_s *fcxp, struct bfa_rport_s *rport, + u16 vf_id, u8 lp_tag, bfa_boolean_t cts, enum fc_cos cos, + u32 reqlen, struct fchs_s *fchs, bfa_cb_fcxp_send_t cbfn, + void *cbarg, u32 rsp_maxlen, u8 rsp_timeout) +{ + struct bfa_s *bfa = fcxp->fcxp_mod->bfa; + struct bfa_fcxp_req_info_s *reqi = &fcxp->req_info; + struct bfa_fcxp_rsp_info_s *rspi = &fcxp->rsp_info; + struct bfi_fcxp_send_req_s *send_req; + + bfa_trc(bfa, fcxp->fcxp_tag); + + /** + * setup request/response info + */ + reqi->bfa_rport = rport; + reqi->vf_id = vf_id; + reqi->lp_tag = lp_tag; + reqi->class = cos; + rspi->rsp_timeout = rsp_timeout; + reqi->cts = cts; + reqi->fchs = *fchs; + reqi->req_tot_len = reqlen; + rspi->rsp_maxlen = rsp_maxlen; + fcxp->send_cbfn = cbfn ? cbfn : bfa_fcxp_null_comp; + fcxp->send_cbarg = cbarg; + + /** + * If no room in CPE queue, wait for + */ + send_req = bfa_reqq_next(bfa, BFA_REQQ_FCXP); + if (!send_req) { + bfa_trc(bfa, fcxp->fcxp_tag); + fcxp->reqq_waiting = BFA_TRUE; + bfa_reqq_wait(bfa, BFA_REQQ_FCXP, &fcxp->reqq_wqe); + return; + } + + bfa_fcxp_queue(fcxp, send_req); +} + +/** + * Abort a BFA FCXP + * + * @param[in] fcxp BFA fcxp pointer + * + * @return void + */ +bfa_status_t +bfa_fcxp_abort(struct bfa_fcxp_s *fcxp) +{ + bfa_assert(0); + return (BFA_STATUS_OK); +} + +void +bfa_fcxp_alloc_wait(struct bfa_s *bfa, struct bfa_fcxp_wqe_s *wqe, + bfa_fcxp_alloc_cbfn_t alloc_cbfn, void *alloc_cbarg) +{ + struct bfa_fcxp_mod_s *mod = BFA_FCXP_MOD(bfa); + + bfa_assert(list_empty(&mod->fcxp_free_q)); + + wqe->alloc_cbfn = alloc_cbfn; + wqe->alloc_cbarg = alloc_cbarg; + list_add_tail(&wqe->qe, &mod->wait_q); +} + +void +bfa_fcxp_walloc_cancel(struct bfa_s *bfa, struct bfa_fcxp_wqe_s *wqe) +{ + struct bfa_fcxp_mod_s *mod = BFA_FCXP_MOD(bfa); + + bfa_assert(bfa_q_is_on_q(&mod->wait_q, wqe)); + list_del(&wqe->qe); +} + +void +bfa_fcxp_discard(struct bfa_fcxp_s *fcxp) +{ + /** + * If waiting for room in request queue, cancel reqq wait + * and free fcxp. + */ + if (fcxp->reqq_waiting) { + fcxp->reqq_waiting = BFA_FALSE; + bfa_reqq_wcancel(&fcxp->reqq_wqe); + bfa_fcxp_free(fcxp); + return; + } + + fcxp->send_cbfn = bfa_fcxp_null_comp; +} + + + +/** + * hal_fcxp_public BFA FCXP public functions + */ + +void +bfa_fcxp_isr(struct bfa_s *bfa, struct bfi_msg_s *msg) +{ + switch (msg->mhdr.msg_id) { + case BFI_FCXP_I2H_SEND_RSP: + hal_fcxp_send_comp(bfa, (struct bfi_fcxp_send_rsp_s *) msg); + break; + + default: + bfa_trc(bfa, msg->mhdr.msg_id); + bfa_assert(0); + } +} + +u32 +bfa_fcxp_get_maxrsp(struct bfa_s *bfa) +{ + struct bfa_fcxp_mod_s *mod = BFA_FCXP_MOD(bfa); + + return mod->rsp_pld_sz; +} + + diff --git a/drivers/scsi/bfa/bfa_fcxp_priv.h b/drivers/scsi/bfa/bfa_fcxp_priv.h new file mode 100644 index 000000000000..4cda49397da0 --- /dev/null +++ b/drivers/scsi/bfa/bfa_fcxp_priv.h @@ -0,0 +1,138 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_FCXP_PRIV_H__ +#define __BFA_FCXP_PRIV_H__ + +#include +#include +#include +#include + +#define BFA_FCXP_MIN (1) +#define BFA_FCXP_MAX_IBUF_SZ (2 * 1024 + 256) +#define BFA_FCXP_MAX_LBUF_SZ (4 * 1024 + 256) + +struct bfa_fcxp_mod_s { + struct bfa_s *bfa; /* backpointer to BFA */ + struct bfa_fcxp_s *fcxp_list; /* array of FCXPs */ + u16 num_fcxps; /* max num FCXP requests */ + struct list_head fcxp_free_q; /* free FCXPs */ + struct list_head fcxp_active_q; /* active FCXPs */ + void *req_pld_list_kva; /* list of FCXP req pld */ + u64 req_pld_list_pa; /* list of FCXP req pld */ + void *rsp_pld_list_kva; /* list of FCXP resp pld */ + u64 rsp_pld_list_pa; /* list of FCXP resp pld */ + struct list_head wait_q; /* wait queue for free fcxp */ + u32 req_pld_sz; + u32 rsp_pld_sz; +}; + +#define BFA_FCXP_MOD(__bfa) (&(__bfa)->modules.fcxp_mod) +#define BFA_FCXP_FROM_TAG(__mod, __tag) (&(__mod)->fcxp_list[__tag]) + +typedef void (*fcxp_send_cb_t) (struct bfa_s *ioc, struct bfa_fcxp_s *fcxp, + void *cb_arg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs); + +/** + * Information needed for a FCXP request + */ +struct bfa_fcxp_req_info_s { + struct bfa_rport_s *bfa_rport; /* Pointer to the bfa rport that was + *returned from bfa_rport_create(). + *This could be left NULL for WKA or for + *FCXP interactions before the rport + *nexus is established + */ + struct fchs_s fchs; /* request FC header structure */ + u8 cts; /* continous sequence */ + u8 class; /* FC class for the request/response */ + u16 max_frmsz; /* max send frame size */ + u16 vf_id; /* vsan tag if applicable */ + u8 lp_tag; /* lport tag */ + u32 req_tot_len; /* request payload total length */ +}; + +struct bfa_fcxp_rsp_info_s { + struct fchs_s rsp_fchs; /* Response frame's FC header will + * be *sent back in this field */ + u8 rsp_timeout; /* timeout in seconds, 0-no response + */ + u8 rsvd2[3]; + u32 rsp_maxlen; /* max response length expected */ +}; + +struct bfa_fcxp_s { + struct list_head qe; /* fcxp queue element */ + bfa_sm_t sm; /* state machine */ + void *caller; /* driver or fcs */ + struct bfa_fcxp_mod_s *fcxp_mod; + /* back pointer to fcxp mod */ + u16 fcxp_tag; /* internal tag */ + struct bfa_fcxp_req_info_s req_info; + /* request info */ + struct bfa_fcxp_rsp_info_s rsp_info; + /* response info */ + u8 use_ireqbuf; /* use internal req buf */ + u8 use_irspbuf; /* use internal rsp buf */ + u32 nreq_sgles; /* num request SGLEs */ + u32 nrsp_sgles; /* num response SGLEs */ + struct list_head req_sgpg_q; /* SG pages for request buf */ + struct list_head req_sgpg_wqe; /* wait queue for req SG page */ + struct list_head rsp_sgpg_q; /* SG pages for response buf */ + struct list_head rsp_sgpg_wqe; /* wait queue for rsp SG page */ + + bfa_fcxp_get_sgaddr_t req_sga_cbfn; + /* SG elem addr user function */ + bfa_fcxp_get_sglen_t req_sglen_cbfn; + /* SG elem len user function */ + bfa_fcxp_get_sgaddr_t rsp_sga_cbfn; + /* SG elem addr user function */ + bfa_fcxp_get_sglen_t rsp_sglen_cbfn; + /* SG elem len user function */ + 
bfa_cb_fcxp_send_t send_cbfn; /* send completion callback */ + void *send_cbarg; /* callback arg */ + struct bfa_sge_s req_sge[BFA_FCXP_MAX_SGES]; + /* req SG elems */ + struct bfa_sge_s rsp_sge[BFA_FCXP_MAX_SGES]; + /* rsp SG elems */ + u8 rsp_status; /* comp: rsp status */ + u32 rsp_len; /* comp: actual response len */ + u32 residue_len; /* comp: residual rsp length */ + struct fchs_s rsp_fchs; /* comp: response fchs */ + struct bfa_cb_qe_s hcb_qe; /* comp: callback qelem */ + struct bfa_reqq_wait_s reqq_wqe; + bfa_boolean_t reqq_waiting; +}; + +#define BFA_FCXP_REQ_PLD(_fcxp) (bfa_fcxp_get_reqbuf(_fcxp)) + +#define BFA_FCXP_RSP_FCHS(_fcxp) (&((_fcxp)->rsp_info.fchs)) +#define BFA_FCXP_RSP_PLD(_fcxp) (bfa_fcxp_get_rspbuf(_fcxp)) + +#define BFA_FCXP_REQ_PLD_PA(_fcxp) \ + ((_fcxp)->fcxp_mod->req_pld_list_pa + \ + ((_fcxp)->fcxp_mod->req_pld_sz * (_fcxp)->fcxp_tag)) + +#define BFA_FCXP_RSP_PLD_PA(_fcxp) \ + ((_fcxp)->fcxp_mod->rsp_pld_list_pa + \ + ((_fcxp)->fcxp_mod->rsp_pld_sz * (_fcxp)->fcxp_tag)) + +void bfa_fcxp_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); +#endif /* __BFA_FCXP_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_fwimg_priv.h b/drivers/scsi/bfa/bfa_fwimg_priv.h new file mode 100644 index 000000000000..1ec1355924d9 --- /dev/null +++ b/drivers/scsi/bfa/bfa_fwimg_priv.h @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_FWIMG_PRIV_H__ +#define __BFA_FWIMG_PRIV_H__ + +#define BFI_FLASH_CHUNK_SZ 256 /* Flash chunk size */ +#define BFI_FLASH_CHUNK_SZ_WORDS (BFI_FLASH_CHUNK_SZ/sizeof(u32)) + +extern u32 *bfi_image_ct_get_chunk(u32 off); +extern u32 bfi_image_ct_size; +extern u32 *bfi_image_cb_get_chunk(u32 off); +extern u32 bfi_image_cb_size; +extern u32 *bfi_image_cb; +extern u32 *bfi_image_ct; + +#endif /* __BFA_FWIMG_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_hw_cb.c b/drivers/scsi/bfa/bfa_hw_cb.c new file mode 100644 index 000000000000..ede1438619e2 --- /dev/null +++ b/drivers/scsi/bfa/bfa_hw_cb.c @@ -0,0 +1,142 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include + +void +bfa_hwcb_reginit(struct bfa_s *bfa) +{ + struct bfa_iocfc_regs_s *bfa_regs = &bfa->iocfc.bfa_regs; + bfa_os_addr_t kva = bfa_ioc_bar0(&bfa->ioc); + int i, q, fn = bfa_ioc_pcifn(&bfa->ioc); + + if (fn == 0) { + bfa_regs->intr_status = (kva + HOSTFN0_INT_STATUS); + bfa_regs->intr_mask = (kva + HOSTFN0_INT_MSK); + } else { + bfa_regs->intr_status = (kva + HOSTFN1_INT_STATUS); + bfa_regs->intr_mask = (kva + HOSTFN1_INT_MSK); + } + + for (i = 0; i < BFI_IOC_MAX_CQS; i++) { + /* + * CPE registers + */ + q = CPE_Q_NUM(fn, i); + bfa_regs->cpe_q_pi[i] = (kva + CPE_Q_PI(q)); + bfa_regs->cpe_q_ci[i] = (kva + CPE_Q_CI(q)); + bfa_regs->cpe_q_depth[i] = (kva + CPE_Q_DEPTH(q)); + + /* + * RME registers + */ + q = CPE_Q_NUM(fn, i); + bfa_regs->rme_q_pi[i] = (kva + RME_Q_PI(q)); + bfa_regs->rme_q_ci[i] = (kva + RME_Q_CI(q)); + bfa_regs->rme_q_depth[i] = (kva + RME_Q_DEPTH(q)); + } +} + +void +bfa_hwcb_rspq_ack(struct bfa_s *bfa, int rspq) +{ +} + +static void +bfa_hwcb_rspq_ack_msix(struct bfa_s *bfa, int rspq) +{ + bfa_reg_write(bfa->iocfc.bfa_regs.intr_status, + __HFN_INT_RME_Q0 << RME_Q_NUM(bfa_ioc_pcifn(&bfa->ioc), rspq)); +} + +void +bfa_hwcb_msix_getvecs(struct bfa_s *bfa, u32 *msix_vecs_bmap, + u32 *num_vecs, u32 *max_vec_bit) +{ +#define __HFN_NUMINTS 13 + if (bfa_ioc_pcifn(&bfa->ioc) == 0) { + *msix_vecs_bmap = (__HFN_INT_CPE_Q0 | __HFN_INT_CPE_Q1 | + __HFN_INT_CPE_Q2 | __HFN_INT_CPE_Q3 | + __HFN_INT_RME_Q0 | __HFN_INT_RME_Q1 | + __HFN_INT_RME_Q2 | __HFN_INT_RME_Q3 | + __HFN_INT_MBOX_LPU0); + *max_vec_bit = __HFN_INT_MBOX_LPU0; + } else { + *msix_vecs_bmap = (__HFN_INT_CPE_Q4 | __HFN_INT_CPE_Q5 | + __HFN_INT_CPE_Q6 | __HFN_INT_CPE_Q7 | + __HFN_INT_RME_Q4 | __HFN_INT_RME_Q5 | + __HFN_INT_RME_Q6 | __HFN_INT_RME_Q7 | + __HFN_INT_MBOX_LPU1); + *max_vec_bit = __HFN_INT_MBOX_LPU1; + } + + *msix_vecs_bmap |= (__HFN_INT_ERR_EMC | __HFN_INT_ERR_LPU0 | + __HFN_INT_ERR_LPU1 | __HFN_INT_ERR_PSS); + *num_vecs = __HFN_NUMINTS; +} + +/** + * No special setup required for crossbow -- vector assignments are implicit. + */ +void +bfa_hwcb_msix_init(struct bfa_s *bfa, int nvecs) +{ + int i; + + bfa_assert((nvecs == 1) || (nvecs == __HFN_NUMINTS)); + + bfa->msix.nvecs = nvecs; + if (nvecs == 1) { + for (i = 0; i < BFA_MSIX_CB_MAX; i++) + bfa->msix.handler[i] = bfa_msix_all; + return; + } + + for (i = BFA_MSIX_CPE_Q0; i <= BFA_MSIX_CPE_Q7; i++) + bfa->msix.handler[i] = bfa_msix_reqq; + + for (i = BFA_MSIX_RME_Q0; i <= BFA_MSIX_RME_Q7; i++) + bfa->msix.handler[i] = bfa_msix_rspq; + + for (; i < BFA_MSIX_CB_MAX; i++) + bfa->msix.handler[i] = bfa_msix_lpu_err; +} + +/** + * Crossbow -- dummy, interrupts are masked + */ +void +bfa_hwcb_msix_install(struct bfa_s *bfa) +{ +} + +void +bfa_hwcb_msix_uninstall(struct bfa_s *bfa) +{ +} + +/** + * No special enable/disable -- vector assignments are implicit. + */ +void +bfa_hwcb_isr_mode_set(struct bfa_s *bfa, bfa_boolean_t msix) +{ + bfa->iocfc.hwif.hw_rspq_ack = bfa_hwcb_rspq_ack_msix; +} + + diff --git a/drivers/scsi/bfa/bfa_hw_ct.c b/drivers/scsi/bfa/bfa_hw_ct.c new file mode 100644 index 000000000000..51ae5740e6e9 --- /dev/null +++ b/drivers/scsi/bfa/bfa_hw_ct.c @@ -0,0 +1,162 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include +#include + +BFA_TRC_FILE(HAL, IOCFC_CT); + +static u32 __ct_msix_err_vec_reg[] = { + HOST_MSIX_ERR_INDEX_FN0, + HOST_MSIX_ERR_INDEX_FN1, + HOST_MSIX_ERR_INDEX_FN2, + HOST_MSIX_ERR_INDEX_FN3, +}; + +static void +bfa_hwct_msix_lpu_err_set(struct bfa_s *bfa, bfa_boolean_t msix, int vec) +{ + int fn = bfa_ioc_pcifn(&bfa->ioc); + bfa_os_addr_t kva = bfa_ioc_bar0(&bfa->ioc); + + if (msix) + bfa_reg_write(kva + __ct_msix_err_vec_reg[fn], vec); + else + bfa_reg_write(kva + __ct_msix_err_vec_reg[fn], 0); +} + +/** + * Dummy interrupt handler for handling spurious interrupt during chip-reinit. + */ +static void +bfa_hwct_msix_dummy(struct bfa_s *bfa, int vec) +{ +} + +void +bfa_hwct_reginit(struct bfa_s *bfa) +{ + struct bfa_iocfc_regs_s *bfa_regs = &bfa->iocfc.bfa_regs; + bfa_os_addr_t kva = bfa_ioc_bar0(&bfa->ioc); + int i, q, fn = bfa_ioc_pcifn(&bfa->ioc); + + if (fn == 0) { + bfa_regs->intr_status = (kva + HOSTFN0_INT_STATUS); + bfa_regs->intr_mask = (kva + HOSTFN0_INT_MSK); + } else { + bfa_regs->intr_status = (kva + HOSTFN1_INT_STATUS); + bfa_regs->intr_mask = (kva + HOSTFN1_INT_MSK); + } + + for (i = 0; i < BFI_IOC_MAX_CQS; i++) { + /* + * CPE registers + */ + q = CPE_Q_NUM(fn, i); + bfa_regs->cpe_q_pi[i] = (kva + CPE_PI_PTR_Q(q << 5)); + bfa_regs->cpe_q_ci[i] = (kva + CPE_CI_PTR_Q(q << 5)); + bfa_regs->cpe_q_depth[i] = (kva + CPE_DEPTH_Q(q << 5)); + bfa_regs->cpe_q_ctrl[i] = (kva + CPE_QCTRL_Q(q << 5)); + + /* + * RME registers + */ + q = CPE_Q_NUM(fn, i); + bfa_regs->rme_q_pi[i] = (kva + RME_PI_PTR_Q(q << 5)); + bfa_regs->rme_q_ci[i] = (kva + RME_CI_PTR_Q(q << 5)); + bfa_regs->rme_q_depth[i] = (kva + RME_DEPTH_Q(q << 5)); + bfa_regs->rme_q_ctrl[i] = (kva + RME_QCTRL_Q(q << 5)); + } +} + +void +bfa_hwct_rspq_ack(struct bfa_s *bfa, int rspq) +{ + u32 r32; + + r32 = bfa_reg_read(bfa->iocfc.bfa_regs.rme_q_ctrl[rspq]); + bfa_reg_write(bfa->iocfc.bfa_regs.rme_q_ctrl[rspq], r32); +} + +void +bfa_hwct_msix_getvecs(struct bfa_s *bfa, u32 *msix_vecs_bmap, + u32 *num_vecs, u32 *max_vec_bit) +{ + *msix_vecs_bmap = (1 << BFA_MSIX_CT_MAX) - 1; + *max_vec_bit = (1 << (BFA_MSIX_CT_MAX - 1)); + *num_vecs = BFA_MSIX_CT_MAX; +} + +/** + * Setup MSI-X vector for catapult + */ +void +bfa_hwct_msix_init(struct bfa_s *bfa, int nvecs) +{ + bfa_assert((nvecs == 1) || (nvecs == BFA_MSIX_CT_MAX)); + bfa_trc(bfa, nvecs); + + bfa->msix.nvecs = nvecs; + bfa_hwct_msix_uninstall(bfa); +} + +void +bfa_hwct_msix_install(struct bfa_s *bfa) +{ + int i; + + if (bfa->msix.nvecs == 0) + return; + + if (bfa->msix.nvecs == 1) { + for (i = 0; i < BFA_MSIX_CT_MAX; i++) + bfa->msix.handler[i] = bfa_msix_all; + return; + } + + for (i = BFA_MSIX_CPE_Q0; i <= BFA_MSIX_CPE_Q3; i++) + bfa->msix.handler[i] = bfa_msix_reqq; + + for (; i <= BFA_MSIX_RME_Q3; i++) + bfa->msix.handler[i] = bfa_msix_rspq; + + bfa_assert(i == BFA_MSIX_LPU_ERR); + bfa->msix.handler[BFA_MSIX_LPU_ERR] = bfa_msix_lpu_err; +} + +void +bfa_hwct_msix_uninstall(struct bfa_s *bfa) +{ + int i; + + for (i = 0; i < BFA_MSIX_CT_MAX; i++) + bfa->msix.handler[i] = bfa_hwct_msix_dummy; +} + +/** + * Enable 
MSI-X vectors + */ +void +bfa_hwct_isr_mode_set(struct bfa_s *bfa, bfa_boolean_t msix) +{ + bfa_trc(bfa, 0); + bfa_hwct_msix_lpu_err_set(bfa, msix, BFA_MSIX_LPU_ERR); + bfa_ioc_isr_mode_set(&bfa->ioc, msix); +} + + diff --git a/drivers/scsi/bfa/bfa_intr.c b/drivers/scsi/bfa/bfa_intr.c new file mode 100644 index 000000000000..0ca125712a04 --- /dev/null +++ b/drivers/scsi/bfa/bfa_intr.c @@ -0,0 +1,218 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#include +#include +#include +#include +#include + +BFA_TRC_FILE(HAL, INTR); + +static void +bfa_msix_errint(struct bfa_s *bfa, u32 intr) +{ + bfa_ioc_error_isr(&bfa->ioc); +} + +static void +bfa_msix_lpu(struct bfa_s *bfa) +{ + bfa_ioc_mbox_isr(&bfa->ioc); +} + +void +bfa_msix_all(struct bfa_s *bfa, int vec) +{ + bfa_intx(bfa); +} + +/** + * hal_intr_api + */ +bfa_boolean_t +bfa_intx(struct bfa_s *bfa) +{ + u32 intr, qintr; + int queue; + + intr = bfa_reg_read(bfa->iocfc.bfa_regs.intr_status); + if (!intr) + return BFA_FALSE; + + /** + * RME completion queue interrupt + */ + qintr = intr & __HFN_INT_RME_MASK; + bfa_reg_write(bfa->iocfc.bfa_regs.intr_status, qintr); + + for (queue = 0; queue < BFI_IOC_MAX_CQS_ASIC; queue ++) { + if (intr & (__HFN_INT_RME_Q0 << queue)) + bfa_msix_rspq(bfa, queue & (BFI_IOC_MAX_CQS - 1)); + } + intr &= ~qintr; + if (!intr) + return BFA_TRUE; + + /** + * CPE completion queue interrupt + */ + qintr = intr & __HFN_INT_CPE_MASK; + bfa_reg_write(bfa->iocfc.bfa_regs.intr_status, qintr); + + for (queue = 0; queue < BFI_IOC_MAX_CQS_ASIC; queue++) { + if (intr & (__HFN_INT_CPE_Q0 << queue)) + bfa_msix_reqq(bfa, queue & (BFI_IOC_MAX_CQS - 1)); + } + intr &= ~qintr; + if (!intr) + return BFA_TRUE; + + bfa_msix_lpu_err(bfa, intr); + + return BFA_TRUE; +} + +void +bfa_isr_enable(struct bfa_s *bfa) +{ + u32 intr_unmask; + int pci_func = bfa_ioc_pcifn(&bfa->ioc); + + bfa_trc(bfa, pci_func); + + bfa_msix_install(bfa); + intr_unmask = (__HFN_INT_ERR_EMC | __HFN_INT_ERR_LPU0 | + __HFN_INT_ERR_LPU1 | __HFN_INT_ERR_PSS); + + if (pci_func == 0) + intr_unmask |= (__HFN_INT_CPE_Q0 | __HFN_INT_CPE_Q1 | + __HFN_INT_CPE_Q2 | __HFN_INT_CPE_Q3 | + __HFN_INT_RME_Q0 | __HFN_INT_RME_Q1 | + __HFN_INT_RME_Q2 | __HFN_INT_RME_Q3 | + __HFN_INT_MBOX_LPU0); + else + intr_unmask |= (__HFN_INT_CPE_Q4 | __HFN_INT_CPE_Q5 | + __HFN_INT_CPE_Q6 | __HFN_INT_CPE_Q7 | + __HFN_INT_RME_Q4 | __HFN_INT_RME_Q5 | + __HFN_INT_RME_Q6 | __HFN_INT_RME_Q7 | + __HFN_INT_MBOX_LPU1); + + bfa_reg_write(bfa->iocfc.bfa_regs.intr_status, intr_unmask); + bfa_reg_write(bfa->iocfc.bfa_regs.intr_mask, ~intr_unmask); + bfa_isr_mode_set(bfa, bfa->msix.nvecs != 0); +} + +void +bfa_isr_disable(struct bfa_s *bfa) +{ + bfa_isr_mode_set(bfa, BFA_FALSE); + bfa_reg_write(bfa->iocfc.bfa_regs.intr_mask, -1L); + bfa_msix_uninstall(bfa); +} + +void +bfa_msix_reqq(struct bfa_s *bfa, int qid) +{ + struct list_head *waitq, *qe, *qen; + struct bfa_reqq_wait_s *wqe; + + qid &= (BFI_IOC_MAX_CQS - 1); + + waitq = 
bfa_reqq(bfa, qid); + list_for_each_safe(qe, qen, waitq) { + /** + * Callback only as long as there is room in request queue + */ + if (bfa_reqq_full(bfa, qid)) + break; + + list_del(qe); + wqe = (struct bfa_reqq_wait_s *) qe; + wqe->qresume(wqe->cbarg); + } +} + +void +bfa_isr_unhandled(struct bfa_s *bfa, struct bfi_msg_s *m) +{ + bfa_trc(bfa, m->mhdr.msg_class); + bfa_trc(bfa, m->mhdr.msg_id); + bfa_trc(bfa, m->mhdr.mtag.i2htok); + bfa_assert(0); + bfa_trc_stop(bfa->trcmod); +} + +void +bfa_msix_rspq(struct bfa_s *bfa, int rsp_qid) +{ + struct bfi_msg_s *m; + u32 pi, ci; + + bfa_trc_fp(bfa, rsp_qid); + + rsp_qid &= (BFI_IOC_MAX_CQS - 1); + + bfa->iocfc.hwif.hw_rspq_ack(bfa, rsp_qid); + + ci = bfa_rspq_ci(bfa, rsp_qid); + pi = bfa_rspq_pi(bfa, rsp_qid); + + bfa_trc_fp(bfa, ci); + bfa_trc_fp(bfa, pi); + + if (bfa->rme_process) { + while (ci != pi) { + m = bfa_rspq_elem(bfa, rsp_qid, ci); + bfa_assert_fp(m->mhdr.msg_class < BFI_MC_MAX); + + bfa_isrs[m->mhdr.msg_class] (bfa, m); + + CQ_INCR(ci, bfa->iocfc.cfg.drvcfg.num_rspq_elems); + } + } + + /** + * update CI + */ + bfa_rspq_ci(bfa, rsp_qid) = pi; + bfa_reg_write(bfa->iocfc.bfa_regs.rme_q_ci[rsp_qid], pi); + bfa_os_mmiowb(); +} + +void +bfa_msix_lpu_err(struct bfa_s *bfa, int vec) +{ + u32 intr; + + intr = bfa_reg_read(bfa->iocfc.bfa_regs.intr_status); + + if (intr & (__HFN_INT_MBOX_LPU0 | __HFN_INT_MBOX_LPU1)) + bfa_msix_lpu(bfa); + + if (intr & (__HFN_INT_ERR_EMC | + __HFN_INT_ERR_LPU0 | __HFN_INT_ERR_LPU1 | + __HFN_INT_ERR_PSS)) + bfa_msix_errint(bfa, intr); +} + +void +bfa_isr_bind(enum bfi_mclass mc, bfa_isr_func_t isr_func) +{ + bfa_isrs[mc] = isr_func; +} + + diff --git a/drivers/scsi/bfa/bfa_intr_priv.h b/drivers/scsi/bfa/bfa_intr_priv.h new file mode 100644 index 000000000000..8ce6e6b105c8 --- /dev/null +++ b/drivers/scsi/bfa/bfa_intr_priv.h @@ -0,0 +1,115 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_INTR_PRIV_H__ +#define __BFA_INTR_PRIV_H__ + +/** + * Message handler + */ +typedef void (*bfa_isr_func_t) (struct bfa_s *bfa, struct bfi_msg_s *m); +void bfa_isr_unhandled(struct bfa_s *bfa, struct bfi_msg_s *m); +void bfa_isr_bind(enum bfi_mclass mc, bfa_isr_func_t isr_func); + + +#define bfa_reqq_pi(__bfa, __reqq) (__bfa)->iocfc.req_cq_pi[__reqq] +#define bfa_reqq_ci(__bfa, __reqq) \ + *(u32 *)((__bfa)->iocfc.req_cq_shadow_ci[__reqq].kva) + +#define bfa_reqq_full(__bfa, __reqq) \ + (((bfa_reqq_pi(__bfa, __reqq) + 1) & \ + ((__bfa)->iocfc.cfg.drvcfg.num_reqq_elems - 1)) == \ + bfa_reqq_ci(__bfa, __reqq)) + +#define bfa_reqq_next(__bfa, __reqq) \ + (bfa_reqq_full(__bfa, __reqq) ? 
NULL : \ + ((void *)((struct bfi_msg_s *)((__bfa)->iocfc.req_cq_ba[__reqq].kva) \ + + bfa_reqq_pi((__bfa), (__reqq))))) + +#define bfa_reqq_produce(__bfa, __reqq) do { \ + (__bfa)->iocfc.req_cq_pi[__reqq]++; \ + (__bfa)->iocfc.req_cq_pi[__reqq] &= \ + ((__bfa)->iocfc.cfg.drvcfg.num_reqq_elems - 1); \ + bfa_reg_write((__bfa)->iocfc.bfa_regs.cpe_q_pi[__reqq], \ + (__bfa)->iocfc.req_cq_pi[__reqq]); \ + bfa_os_mmiowb(); \ +} while (0) + +#define bfa_rspq_pi(__bfa, __rspq) \ + *(u32 *)((__bfa)->iocfc.rsp_cq_shadow_pi[__rspq].kva) + +#define bfa_rspq_ci(__bfa, __rspq) (__bfa)->iocfc.rsp_cq_ci[__rspq] +#define bfa_rspq_elem(__bfa, __rspq, __ci) \ + &((struct bfi_msg_s *)((__bfa)->iocfc.rsp_cq_ba[__rspq].kva))[__ci] + +#define CQ_INCR(__index, __size) \ + (__index)++; (__index) &= ((__size) - 1) + +/** + * Queue element to wait for room in request queue. FIFO order is + * maintained when fullfilling requests. + */ +struct bfa_reqq_wait_s { + struct list_head qe; + void (*qresume) (void *cbarg); + void *cbarg; +}; + +/** + * Circular queue usage assignments + */ +enum { + BFA_REQQ_IOC = 0, /* all low-priority IOC msgs */ + BFA_REQQ_FCXP = 0, /* all FCXP messages */ + BFA_REQQ_LPS = 0, /* all lport service msgs */ + BFA_REQQ_PORT = 0, /* all port messages */ + BFA_REQQ_FLASH = 0, /* for flash module */ + BFA_REQQ_DIAG = 0, /* for diag module */ + BFA_REQQ_RPORT = 0, /* all port messages */ + BFA_REQQ_SBOOT = 0, /* all san boot messages */ + BFA_REQQ_QOS_LO = 1, /* all low priority IO */ + BFA_REQQ_QOS_MD = 2, /* all medium priority IO */ + BFA_REQQ_QOS_HI = 3, /* all high priority IO */ +}; + +static inline void +bfa_reqq_winit(struct bfa_reqq_wait_s *wqe, void (*qresume) (void *cbarg), + void *cbarg) +{ + wqe->qresume = qresume; + wqe->cbarg = cbarg; +} + +#define bfa_reqq(__bfa, __reqq) &(__bfa)->reqq_waitq[__reqq] + +/** + * static inline void + * bfa_reqq_wait(struct bfa_s *bfa, int reqq, struct bfa_reqq_wait_s *wqe) + */ +#define bfa_reqq_wait(__bfa, __reqq, __wqe) do { \ + \ + struct list_head *waitq = bfa_reqq(__bfa, __reqq); \ + \ + bfa_assert(((__reqq) < BFI_IOC_MAX_CQS)); \ + bfa_assert((__wqe)->qresume && (__wqe)->cbarg); \ + \ + list_add_tail(&(__wqe)->qe, waitq); \ +} while (0) + +#define bfa_reqq_wcancel(__wqe) list_del(&(__wqe)->qe) + +#endif /* __BFA_INTR_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_ioc.c b/drivers/scsi/bfa/bfa_ioc.c new file mode 100644 index 000000000000..149348934ce3 --- /dev/null +++ b/drivers/scsi/bfa/bfa_ioc.c @@ -0,0 +1,2382 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +BFA_TRC_FILE(HAL, IOC); + +/** + * IOC local definitions + */ +#define BFA_IOC_TOV 2000 /* msecs */ +#define BFA_IOC_HB_TOV 1000 /* msecs */ +#define BFA_IOC_HB_FAIL_MAX 4 +#define BFA_IOC_HWINIT_MAX 2 +#define BFA_IOC_FWIMG_MINSZ (16 * 1024) +#define BFA_IOC_TOV_RECOVER (BFA_IOC_HB_FAIL_MAX * BFA_IOC_HB_TOV \ + + BFA_IOC_TOV) + +#define bfa_ioc_timer_start(__ioc) \ + bfa_timer_begin((__ioc)->timer_mod, &(__ioc)->ioc_timer, \ + bfa_ioc_timeout, (__ioc), BFA_IOC_TOV) +#define bfa_ioc_timer_stop(__ioc) bfa_timer_stop(&(__ioc)->ioc_timer) + +#define BFA_DBG_FWTRC_ENTS (BFI_IOC_TRC_ENTS) +#define BFA_DBG_FWTRC_LEN \ + (BFA_DBG_FWTRC_ENTS * sizeof(struct bfa_trc_s) + \ + (sizeof(struct bfa_trc_mod_s) - \ + BFA_TRC_MAX * sizeof(struct bfa_trc_s))) +#define BFA_DBG_FWTRC_OFF(_fn) (BFI_IOC_TRC_OFF + BFA_DBG_FWTRC_LEN * (_fn)) +#define bfa_ioc_stats(_ioc, _stats) (_ioc)->stats._stats ++ + +#define BFA_FLASH_CHUNK_NO(off) (off / BFI_FLASH_CHUNK_SZ_WORDS) +#define BFA_FLASH_OFFSET_IN_CHUNK(off) (off % BFI_FLASH_CHUNK_SZ_WORDS) +#define BFA_FLASH_CHUNK_ADDR(chunkno) (chunkno * BFI_FLASH_CHUNK_SZ_WORDS) +bfa_boolean_t bfa_auto_recover = BFA_FALSE; + +/* + * forward declarations + */ +static void bfa_ioc_aen_post(struct bfa_ioc_s *bfa, + enum bfa_ioc_aen_event event); +static void bfa_ioc_hw_sem_get(struct bfa_ioc_s *ioc); +static void bfa_ioc_hw_sem_release(struct bfa_ioc_s *ioc); +static void bfa_ioc_hw_sem_get_cancel(struct bfa_ioc_s *ioc); +static void bfa_ioc_hwinit(struct bfa_ioc_s *ioc, bfa_boolean_t force); +static void bfa_ioc_timeout(void *ioc); +static void bfa_ioc_send_enable(struct bfa_ioc_s *ioc); +static void bfa_ioc_send_disable(struct bfa_ioc_s *ioc); +static void bfa_ioc_send_getattr(struct bfa_ioc_s *ioc); +static void bfa_ioc_hb_monitor(struct bfa_ioc_s *ioc); +static void bfa_ioc_hb_stop(struct bfa_ioc_s *ioc); +static void bfa_ioc_reset(struct bfa_ioc_s *ioc, bfa_boolean_t force); +static void bfa_ioc_mbox_poll(struct bfa_ioc_s *ioc); +static void bfa_ioc_mbox_hbfail(struct bfa_ioc_s *ioc); +static void bfa_ioc_recover(struct bfa_ioc_s *ioc); +static bfa_boolean_t bfa_ioc_firmware_lock(struct bfa_ioc_s *ioc); +static void bfa_ioc_firmware_unlock(struct bfa_ioc_s *ioc); +static void bfa_ioc_disable_comp(struct bfa_ioc_s *ioc); +static void bfa_ioc_lpu_stop(struct bfa_ioc_s *ioc); + +/** + * bfa_ioc_sm + */ + +/** + * IOC state machine events + */ +enum ioc_event { + IOC_E_ENABLE = 1, /* IOC enable request */ + IOC_E_DISABLE = 2, /* IOC disable request */ + IOC_E_TIMEOUT = 3, /* f/w response timeout */ + IOC_E_FWREADY = 4, /* f/w initialization done */ + IOC_E_FWRSP_GETATTR = 5, /* IOC get attribute response */ + IOC_E_FWRSP_ENABLE = 6, /* enable f/w response */ + IOC_E_FWRSP_DISABLE = 7, /* disable f/w response */ + IOC_E_HBFAIL = 8, /* heartbeat failure */ + IOC_E_HWERROR = 9, /* hardware error interrupt */ + IOC_E_SEMLOCKED = 10, /* h/w semaphore is locked */ + IOC_E_DETACH = 11, /* driver detach cleanup */ +}; + +bfa_fsm_state_decl(bfa_ioc, reset, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, fwcheck, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, mismatch, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, semwait, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, hwinit, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, enabling, struct bfa_ioc_s, enum ioc_event); 
+bfa_fsm_state_decl(bfa_ioc, getattr, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, op, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, initfail, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, hbfail, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, disabling, struct bfa_ioc_s, enum ioc_event); +bfa_fsm_state_decl(bfa_ioc, disabled, struct bfa_ioc_s, enum ioc_event); + +static struct bfa_sm_table_s ioc_sm_table[] = { + {BFA_SM(bfa_ioc_sm_reset), BFA_IOC_RESET}, + {BFA_SM(bfa_ioc_sm_fwcheck), BFA_IOC_FWMISMATCH}, + {BFA_SM(bfa_ioc_sm_mismatch), BFA_IOC_FWMISMATCH}, + {BFA_SM(bfa_ioc_sm_semwait), BFA_IOC_SEMWAIT}, + {BFA_SM(bfa_ioc_sm_hwinit), BFA_IOC_HWINIT}, + {BFA_SM(bfa_ioc_sm_enabling), BFA_IOC_HWINIT}, + {BFA_SM(bfa_ioc_sm_getattr), BFA_IOC_GETATTR}, + {BFA_SM(bfa_ioc_sm_op), BFA_IOC_OPERATIONAL}, + {BFA_SM(bfa_ioc_sm_initfail), BFA_IOC_INITFAIL}, + {BFA_SM(bfa_ioc_sm_hbfail), BFA_IOC_HBFAIL}, + {BFA_SM(bfa_ioc_sm_disabling), BFA_IOC_DISABLING}, + {BFA_SM(bfa_ioc_sm_disabled), BFA_IOC_DISABLED}, +}; + +/** + * Reset entry actions -- initialize state machine + */ +static void +bfa_ioc_sm_reset_entry(struct bfa_ioc_s *ioc) +{ + ioc->retry_count = 0; + ioc->auto_recover = bfa_auto_recover; +} + +/** + * Beginning state. IOC is in reset state. + */ +static void +bfa_ioc_sm_reset(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_ENABLE: + bfa_fsm_set_state(ioc, bfa_ioc_sm_fwcheck); + break; + + case IOC_E_DISABLE: + bfa_ioc_disable_comp(ioc); + break; + + case IOC_E_DETACH: + break; + + default: + bfa_sm_fault(ioc, event); + } +} + +/** + * Semaphore should be acquired for version check. + */ +static void +bfa_ioc_sm_fwcheck_entry(struct bfa_ioc_s *ioc) +{ + bfa_ioc_hw_sem_get(ioc); +} + +/** + * Awaiting h/w semaphore to continue with version check. + */ +static void +bfa_ioc_sm_fwcheck(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_SEMLOCKED: + if (bfa_ioc_firmware_lock(ioc)) { + ioc->retry_count = 0; + bfa_fsm_set_state(ioc, bfa_ioc_sm_hwinit); + } else { + bfa_ioc_hw_sem_release(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_mismatch); + } + break; + + case IOC_E_DISABLE: + bfa_ioc_disable_comp(ioc); + /* + * fall through + */ + + case IOC_E_DETACH: + bfa_ioc_hw_sem_get_cancel(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_reset); + break; + + case IOC_E_FWREADY: + break; + + default: + bfa_sm_fault(ioc, event); + } +} + +/** + * Notify enable completion callback and generate mismatch AEN. + */ +static void +bfa_ioc_sm_mismatch_entry(struct bfa_ioc_s *ioc) +{ + /** + * Provide enable completion callback and AEN notification only once. + */ + if (ioc->retry_count == 0) { + ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE); + bfa_ioc_aen_post(ioc, BFA_IOC_AEN_FWMISMATCH); + } + ioc->retry_count++; + bfa_ioc_timer_start(ioc); +} + +/** + * Awaiting firmware version match. + */ +static void +bfa_ioc_sm_mismatch(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_TIMEOUT: + bfa_fsm_set_state(ioc, bfa_ioc_sm_fwcheck); + break; + + case IOC_E_DISABLE: + bfa_ioc_disable_comp(ioc); + /* + * fall through + */ + + case IOC_E_DETACH: + bfa_ioc_timer_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_reset); + break; + + case IOC_E_FWREADY: + break; + + default: + bfa_sm_fault(ioc, event); + } +} + +/** + * Request for semaphore. 
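+ * Once the h/w semaphore is granted (IOC_E_SEMLOCKED), hardware
+ * initialization proceeds.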
+ */ +static void +bfa_ioc_sm_semwait_entry(struct bfa_ioc_s *ioc) +{ + bfa_ioc_hw_sem_get(ioc); +} + +/** + * Awaiting semaphore for h/w initialzation. + */ +static void +bfa_ioc_sm_semwait(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_SEMLOCKED: + ioc->retry_count = 0; + bfa_fsm_set_state(ioc, bfa_ioc_sm_hwinit); + break; + + case IOC_E_DISABLE: + bfa_ioc_hw_sem_get_cancel(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabled); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + +static void +bfa_ioc_sm_hwinit_entry(struct bfa_ioc_s *ioc) +{ + bfa_ioc_timer_start(ioc); + bfa_ioc_reset(ioc, BFA_FALSE); +} + +/** + * Hardware is being initialized. Interrupts are enabled. + * Holding hardware semaphore lock. + */ +static void +bfa_ioc_sm_hwinit(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_FWREADY: + bfa_ioc_timer_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_enabling); + break; + + case IOC_E_HWERROR: + bfa_ioc_timer_stop(ioc); + /* + * fall through + */ + + case IOC_E_TIMEOUT: + ioc->retry_count++; + if (ioc->retry_count < BFA_IOC_HWINIT_MAX) { + bfa_ioc_timer_start(ioc); + bfa_ioc_reset(ioc, BFA_TRUE); + break; + } + + bfa_ioc_hw_sem_release(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_initfail); + break; + + case IOC_E_DISABLE: + bfa_ioc_hw_sem_release(ioc); + bfa_ioc_timer_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabled); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + +static void +bfa_ioc_sm_enabling_entry(struct bfa_ioc_s *ioc) +{ + bfa_ioc_timer_start(ioc); + bfa_ioc_send_enable(ioc); +} + +/** + * Host IOC function is being enabled, awaiting response from firmware. + * Semaphore is acquired. + */ +static void +bfa_ioc_sm_enabling(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_FWRSP_ENABLE: + bfa_ioc_timer_stop(ioc); + bfa_ioc_hw_sem_release(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_getattr); + break; + + case IOC_E_HWERROR: + bfa_ioc_timer_stop(ioc); + /* + * fall through + */ + + case IOC_E_TIMEOUT: + ioc->retry_count++; + if (ioc->retry_count < BFA_IOC_HWINIT_MAX) { + bfa_reg_write(ioc->ioc_regs.ioc_fwstate, + BFI_IOC_UNINIT); + bfa_fsm_set_state(ioc, bfa_ioc_sm_hwinit); + break; + } + + bfa_ioc_hw_sem_release(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_initfail); + break; + + case IOC_E_DISABLE: + bfa_ioc_timer_stop(ioc); + bfa_ioc_hw_sem_release(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabled); + break; + + case IOC_E_FWREADY: + bfa_ioc_send_enable(ioc); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + +static void +bfa_ioc_sm_getattr_entry(struct bfa_ioc_s *ioc) +{ + bfa_ioc_timer_start(ioc); + bfa_ioc_send_getattr(ioc); +} + +/** + * IOC configuration in progress. Timer is active. 
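+ * A get-attribute request has been posted to firmware;
+ * IOC_E_FWRSP_GETATTR completes the enable sequence, while a timeout or
+ * hardware error drops the IOC into the initfail state.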
+ */ +static void +bfa_ioc_sm_getattr(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_FWRSP_GETATTR: + bfa_ioc_timer_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_op); + break; + + case IOC_E_HWERROR: + bfa_ioc_timer_stop(ioc); + /* + * fall through + */ + + case IOC_E_TIMEOUT: + bfa_fsm_set_state(ioc, bfa_ioc_sm_initfail); + break; + + case IOC_E_DISABLE: + bfa_ioc_timer_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabled); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + +static void +bfa_ioc_sm_op_entry(struct bfa_ioc_s *ioc) +{ + ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_OK); + bfa_ioc_hb_monitor(ioc); + bfa_ioc_aen_post(ioc, BFA_IOC_AEN_ENABLE); +} + +static void +bfa_ioc_sm_op(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_ENABLE: + break; + + case IOC_E_DISABLE: + bfa_ioc_hb_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabling); + break; + + case IOC_E_HWERROR: + case IOC_E_FWREADY: + /** + * Hard error or IOC recovery by other function. + * Treat it same as heartbeat failure. + */ + bfa_ioc_hb_stop(ioc); + /* + * !!! fall through !!! + */ + + case IOC_E_HBFAIL: + bfa_fsm_set_state(ioc, bfa_ioc_sm_hbfail); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + +static void +bfa_ioc_sm_disabling_entry(struct bfa_ioc_s *ioc) +{ + bfa_ioc_aen_post(ioc, BFA_IOC_AEN_DISABLE); + bfa_ioc_timer_start(ioc); + bfa_ioc_send_disable(ioc); +} + +/** + * IOC is being disabled + */ +static void +bfa_ioc_sm_disabling(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_HWERROR: + case IOC_E_FWRSP_DISABLE: + bfa_ioc_timer_stop(ioc); + /* + * !!! fall through !!! + */ + + case IOC_E_TIMEOUT: + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabled); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + +/** + * IOC disable completion entry. + */ +static void +bfa_ioc_sm_disabled_entry(struct bfa_ioc_s *ioc) +{ + bfa_ioc_disable_comp(ioc); +} + +static void +bfa_ioc_sm_disabled(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_ENABLE: + bfa_fsm_set_state(ioc, bfa_ioc_sm_semwait); + break; + + case IOC_E_DISABLE: + ioc->cbfn->disable_cbfn(ioc->bfa); + break; + + case IOC_E_FWREADY: + break; + + case IOC_E_DETACH: + bfa_ioc_firmware_unlock(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_reset); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + +static void +bfa_ioc_sm_initfail_entry(struct bfa_ioc_s *ioc) +{ + ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE); + bfa_ioc_timer_start(ioc); +} + +/** + * Hardware initialization failed. + */ +static void +bfa_ioc_sm_initfail(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + case IOC_E_DISABLE: + bfa_ioc_timer_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabled); + break; + + case IOC_E_DETACH: + bfa_ioc_timer_stop(ioc); + bfa_ioc_firmware_unlock(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_reset); + break; + + case IOC_E_TIMEOUT: + bfa_fsm_set_state(ioc, bfa_ioc_sm_semwait); + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + +static void +bfa_ioc_sm_hbfail_entry(struct bfa_ioc_s *ioc) +{ + struct list_head *qe; + struct bfa_ioc_hbfail_notify_s *notify; + + /** + * Mark IOC as failed in hardware and stop firmware. 
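+ * The LPUs are put back into reset and, on Catapult (CT) adapters, the
+ * firmware is explicitly halted before registered modules are notified.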
+ */ + bfa_ioc_lpu_stop(ioc); + bfa_reg_write(ioc->ioc_regs.ioc_fwstate, BFI_IOC_HBFAIL); + + if (ioc->pcidev.device_id == BFA_PCI_DEVICE_ID_CT) { + bfa_reg_write(ioc->ioc_regs.ll_halt, __FW_INIT_HALT_P); + /* + * Wait for halt to take effect + */ + bfa_reg_read(ioc->ioc_regs.ll_halt); + } + + /** + * Notify driver and common modules registered for notification. + */ + ioc->cbfn->hbfail_cbfn(ioc->bfa); + list_for_each(qe, &ioc->hb_notify_q) { + notify = (struct bfa_ioc_hbfail_notify_s *)qe; + notify->cbfn(notify->cbarg); + } + + /** + * Flush any queued up mailbox requests. + */ + bfa_ioc_mbox_hbfail(ioc); + bfa_ioc_aen_post(ioc, BFA_IOC_AEN_HBFAIL); + + /** + * Trigger auto-recovery after a delay. + */ + if (ioc->auto_recover) { + bfa_timer_begin(ioc->timer_mod, &ioc->ioc_timer, + bfa_ioc_timeout, ioc, BFA_IOC_TOV_RECOVER); + } +} + +/** + * IOC heartbeat failure. + */ +static void +bfa_ioc_sm_hbfail(struct bfa_ioc_s *ioc, enum ioc_event event) +{ + bfa_trc(ioc, event); + + switch (event) { + + case IOC_E_ENABLE: + ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE); + break; + + case IOC_E_DISABLE: + if (ioc->auto_recover) + bfa_ioc_timer_stop(ioc); + bfa_fsm_set_state(ioc, bfa_ioc_sm_disabled); + break; + + case IOC_E_TIMEOUT: + bfa_fsm_set_state(ioc, bfa_ioc_sm_semwait); + break; + + case IOC_E_FWREADY: + /** + * Recovery is already initiated by other function. + */ + break; + + default: + bfa_sm_fault(ioc, event); + } +} + + + +/** + * bfa_ioc_pvt BFA IOC private functions + */ + +static void +bfa_ioc_disable_comp(struct bfa_ioc_s *ioc) +{ + struct list_head *qe; + struct bfa_ioc_hbfail_notify_s *notify; + + ioc->cbfn->disable_cbfn(ioc->bfa); + + /** + * Notify common modules registered for notification. + */ + list_for_each(qe, &ioc->hb_notify_q) { + notify = (struct bfa_ioc_hbfail_notify_s *)qe; + notify->cbfn(notify->cbarg); + } +} + +static void +bfa_ioc_sem_timeout(void *ioc_arg) +{ + struct bfa_ioc_s *ioc = (struct bfa_ioc_s *)ioc_arg; + + bfa_ioc_hw_sem_get(ioc); +} + +static void +bfa_ioc_usage_sem_get(struct bfa_ioc_s *ioc) +{ + u32 r32; + int cnt = 0; +#define BFA_SEM_SPINCNT 1000 + + do { + r32 = bfa_reg_read(ioc->ioc_regs.ioc_usage_sem_reg); + cnt++; + if (cnt > BFA_SEM_SPINCNT) + break; + } while (r32 != 0); + bfa_assert(cnt < BFA_SEM_SPINCNT); +} + +static void +bfa_ioc_usage_sem_release(struct bfa_ioc_s *ioc) +{ + bfa_reg_write(ioc->ioc_regs.ioc_usage_sem_reg, 1); +} + +static void +bfa_ioc_hw_sem_get(struct bfa_ioc_s *ioc) +{ + u32 r32; + + /** + * First read to the semaphore register will return 0, subsequent reads + * will return 1. 
Semaphore is released by writing 0 to the register + */ + r32 = bfa_reg_read(ioc->ioc_regs.ioc_sem_reg); + if (r32 == 0) { + bfa_fsm_send_event(ioc, IOC_E_SEMLOCKED); + return; + } + + bfa_timer_begin(ioc->timer_mod, &ioc->sem_timer, bfa_ioc_sem_timeout, + ioc, BFA_IOC_TOV); +} + +static void +bfa_ioc_hw_sem_release(struct bfa_ioc_s *ioc) +{ + bfa_reg_write(ioc->ioc_regs.ioc_sem_reg, 1); +} + +static void +bfa_ioc_hw_sem_get_cancel(struct bfa_ioc_s *ioc) +{ + bfa_timer_stop(&ioc->sem_timer); +} + +/** + * Initialize LPU local memory (aka secondary memory / SRAM) + */ +static void +bfa_ioc_lmem_init(struct bfa_ioc_s *ioc) +{ + u32 pss_ctl; + int i; +#define PSS_LMEM_INIT_TIME 10000 + + pss_ctl = bfa_reg_read(ioc->ioc_regs.pss_ctl_reg); + pss_ctl &= ~__PSS_LMEM_RESET; + pss_ctl |= __PSS_LMEM_INIT_EN; + pss_ctl |= __PSS_I2C_CLK_DIV(3UL); /* i2c workaround 12.5khz clock */ + bfa_reg_write(ioc->ioc_regs.pss_ctl_reg, pss_ctl); + + /** + * wait for memory initialization to be complete + */ + i = 0; + do { + pss_ctl = bfa_reg_read(ioc->ioc_regs.pss_ctl_reg); + i++; + } while (!(pss_ctl & __PSS_LMEM_INIT_DONE) && (i < PSS_LMEM_INIT_TIME)); + + /** + * If memory initialization is not successful, IOC timeout will catch + * such failures. + */ + bfa_assert(pss_ctl & __PSS_LMEM_INIT_DONE); + bfa_trc(ioc, pss_ctl); + + pss_ctl &= ~(__PSS_LMEM_INIT_DONE | __PSS_LMEM_INIT_EN); + bfa_reg_write(ioc->ioc_regs.pss_ctl_reg, pss_ctl); +} + +static void +bfa_ioc_lpu_start(struct bfa_ioc_s *ioc) +{ + u32 pss_ctl; + + /** + * Take processor out of reset. + */ + pss_ctl = bfa_reg_read(ioc->ioc_regs.pss_ctl_reg); + pss_ctl &= ~__PSS_LPU0_RESET; + + bfa_reg_write(ioc->ioc_regs.pss_ctl_reg, pss_ctl); +} + +static void +bfa_ioc_lpu_stop(struct bfa_ioc_s *ioc) +{ + u32 pss_ctl; + + /** + * Put processors in reset. + */ + pss_ctl = bfa_reg_read(ioc->ioc_regs.pss_ctl_reg); + pss_ctl |= (__PSS_LPU0_RESET | __PSS_LPU1_RESET); + + bfa_reg_write(ioc->ioc_regs.pss_ctl_reg, pss_ctl); +} + +/** + * Get driver and firmware versions. + */ +static void +bfa_ioc_fwver_get(struct bfa_ioc_s *ioc, struct bfi_ioc_image_hdr_s *fwhdr) +{ + u32 pgnum, pgoff; + u32 loff = 0; + int i; + u32 *fwsig = (u32 *) fwhdr; + + pgnum = bfa_ioc_smem_pgnum(ioc, loff); + pgoff = bfa_ioc_smem_pgoff(ioc, loff); + bfa_reg_write(ioc->ioc_regs.host_page_num_fn, pgnum); + + for (i = 0; i < (sizeof(struct bfi_ioc_image_hdr_s) / sizeof(u32)); + i++) { + fwsig[i] = bfa_mem_read(ioc->ioc_regs.smem_page_start, loff); + loff += sizeof(u32); + } +} + +static u32 * +bfa_ioc_fwimg_get_chunk(struct bfa_ioc_s *ioc, u32 off) +{ + if (ioc->ctdev) + return bfi_image_ct_get_chunk(off); + return bfi_image_cb_get_chunk(off); +} + +static u32 +bfa_ioc_fwimg_get_size(struct bfa_ioc_s *ioc) +{ +return (ioc->ctdev) ? bfi_image_ct_size : bfi_image_cb_size; +} + +/** + * Returns TRUE if same. + */ +static bfa_boolean_t +bfa_ioc_fwver_cmp(struct bfa_ioc_s *ioc, struct bfi_ioc_image_hdr_s *fwhdr) +{ + struct bfi_ioc_image_hdr_s *drv_fwhdr; + int i; + + drv_fwhdr = + (struct bfi_ioc_image_hdr_s *)bfa_ioc_fwimg_get_chunk(ioc, 0); + + for (i = 0; i < BFI_IOC_MD5SUM_SZ; i++) { + if (fwhdr->md5sum[i] != drv_fwhdr->md5sum[i]) { + bfa_trc(ioc, i); + bfa_trc(ioc, fwhdr->md5sum[i]); + bfa_trc(ioc, drv_fwhdr->md5sum[i]); + return BFA_FALSE; + } + } + + bfa_trc(ioc, fwhdr->md5sum[0]); + return BFA_TRUE; +} + +/** + * Return true if current running version is valid. Firmware signature and + * execution context (driver/bios) must match. 
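+ * A flash-based (BIOS/EFI) boot image is always treated as valid; for a
+ * driver-resident image the md5 checksum of the running firmware must
+ * also match (bfa_ioc_fwver_cmp() above).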
+ */ +static bfa_boolean_t +bfa_ioc_fwver_valid(struct bfa_ioc_s *ioc) +{ + struct bfi_ioc_image_hdr_s fwhdr, *drv_fwhdr; + + /** + * If bios/efi boot (flash based) -- return true + */ + if (bfa_ioc_fwimg_get_size(ioc) < BFA_IOC_FWIMG_MINSZ) + return BFA_TRUE; + + bfa_ioc_fwver_get(ioc, &fwhdr); + drv_fwhdr = + (struct bfi_ioc_image_hdr_s *)bfa_ioc_fwimg_get_chunk(ioc, 0); + + if (fwhdr.signature != drv_fwhdr->signature) { + bfa_trc(ioc, fwhdr.signature); + bfa_trc(ioc, drv_fwhdr->signature); + return BFA_FALSE; + } + + if (fwhdr.exec != drv_fwhdr->exec) { + bfa_trc(ioc, fwhdr.exec); + bfa_trc(ioc, drv_fwhdr->exec); + return BFA_FALSE; + } + + return bfa_ioc_fwver_cmp(ioc, &fwhdr); +} + +/** + * Return true if firmware of current driver matches the running firmware. + */ +static bfa_boolean_t +bfa_ioc_firmware_lock(struct bfa_ioc_s *ioc) +{ + enum bfi_ioc_state ioc_fwstate; + u32 usecnt; + struct bfi_ioc_image_hdr_s fwhdr; + + /** + * Firmware match check is relevant only for CNA. + */ + if (!ioc->cna) + return BFA_TRUE; + + /** + * If bios boot (flash based) -- do not increment usage count + */ + if (bfa_ioc_fwimg_get_size(ioc) < BFA_IOC_FWIMG_MINSZ) + return BFA_TRUE; + + bfa_ioc_usage_sem_get(ioc); + usecnt = bfa_reg_read(ioc->ioc_regs.ioc_usage_reg); + + /** + * If usage count is 0, always return TRUE. + */ + if (usecnt == 0) { + bfa_reg_write(ioc->ioc_regs.ioc_usage_reg, 1); + bfa_ioc_usage_sem_release(ioc); + bfa_trc(ioc, usecnt); + return BFA_TRUE; + } + + ioc_fwstate = bfa_reg_read(ioc->ioc_regs.ioc_fwstate); + bfa_trc(ioc, ioc_fwstate); + + /** + * Use count cannot be non-zero and chip in uninitialized state. + */ + bfa_assert(ioc_fwstate != BFI_IOC_UNINIT); + + /** + * Check if another driver with a different firmware is active + */ + bfa_ioc_fwver_get(ioc, &fwhdr); + if (!bfa_ioc_fwver_cmp(ioc, &fwhdr)) { + bfa_ioc_usage_sem_release(ioc); + bfa_trc(ioc, usecnt); + return BFA_FALSE; + } + + /** + * Same firmware version. Increment the reference count. + */ + usecnt++; + bfa_reg_write(ioc->ioc_regs.ioc_usage_reg, usecnt); + bfa_ioc_usage_sem_release(ioc); + bfa_trc(ioc, usecnt); + return BFA_TRUE; +} + +static void +bfa_ioc_firmware_unlock(struct bfa_ioc_s *ioc) +{ + u32 usecnt; + + /** + * Firmware lock is relevant only for CNA. + * If bios boot (flash based) -- do not decrement usage count + */ + if (!ioc->cna || (bfa_ioc_fwimg_get_size(ioc) < BFA_IOC_FWIMG_MINSZ)) + return; + + /** + * decrement usage count + */ + bfa_ioc_usage_sem_get(ioc); + usecnt = bfa_reg_read(ioc->ioc_regs.ioc_usage_reg); + bfa_assert(usecnt > 0); + + usecnt--; + bfa_reg_write(ioc->ioc_regs.ioc_usage_reg, usecnt); + bfa_trc(ioc, usecnt); + + bfa_ioc_usage_sem_release(ioc); +} + +/** + * Conditionally flush any pending message from firmware at start. + */ +static void +bfa_ioc_msgflush(struct bfa_ioc_s *ioc) +{ + u32 r32; + + r32 = bfa_reg_read(ioc->ioc_regs.lpu_mbox_cmd); + if (r32) + bfa_reg_write(ioc->ioc_regs.lpu_mbox_cmd, 1); +} + + +static void +bfa_ioc_hwinit(struct bfa_ioc_s *ioc, bfa_boolean_t force) +{ + enum bfi_ioc_state ioc_fwstate; + bfa_boolean_t fwvalid; + + ioc_fwstate = bfa_reg_read(ioc->ioc_regs.ioc_fwstate); + + if (force) + ioc_fwstate = BFI_IOC_UNINIT; + + bfa_trc(ioc, ioc_fwstate); + + /** + * check if firmware is valid + */ + fwvalid = (ioc_fwstate == BFI_IOC_UNINIT) ? 
+ BFA_FALSE : bfa_ioc_fwver_valid(ioc); + + if (!fwvalid) { + bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, ioc->pcidev.device_id); + return; + } + + /** + * If hardware initialization is in progress (initialized by other IOC), + * just wait for an initialization completion interrupt. + */ + if (ioc_fwstate == BFI_IOC_INITING) { + bfa_trc(ioc, ioc_fwstate); + ioc->cbfn->reset_cbfn(ioc->bfa); + return; + } + + /** + * If IOC function is disabled and firmware version is same, + * just re-enable IOC. + */ + if (ioc_fwstate == BFI_IOC_DISABLED || ioc_fwstate == BFI_IOC_OP) { + bfa_trc(ioc, ioc_fwstate); + + /** + * When using MSI-X any pending firmware ready event should + * be flushed. Otherwise MSI-X interrupts are not delivered. + */ + bfa_ioc_msgflush(ioc); + ioc->cbfn->reset_cbfn(ioc->bfa); + bfa_fsm_send_event(ioc, IOC_E_FWREADY); + return; + } + + /** + * Initialize the h/w for any other states. + */ + bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, ioc->pcidev.device_id); +} + +static void +bfa_ioc_timeout(void *ioc_arg) +{ + struct bfa_ioc_s *ioc = (struct bfa_ioc_s *)ioc_arg; + + bfa_trc(ioc, 0); + bfa_fsm_send_event(ioc, IOC_E_TIMEOUT); +} + +void +bfa_ioc_mbox_send(struct bfa_ioc_s *ioc, void *ioc_msg, int len) +{ + u32 *msgp = (u32 *) ioc_msg; + u32 i; + + bfa_trc(ioc, msgp[0]); + bfa_trc(ioc, len); + + bfa_assert(len <= BFI_IOC_MSGLEN_MAX); + + /* + * first write msg to mailbox registers + */ + for (i = 0; i < len / sizeof(u32); i++) + bfa_reg_write(ioc->ioc_regs.hfn_mbox + i * sizeof(u32), + bfa_os_wtole(msgp[i])); + + for (; i < BFI_IOC_MSGLEN_MAX / sizeof(u32); i++) + bfa_reg_write(ioc->ioc_regs.hfn_mbox + i * sizeof(u32), 0); + + /* + * write 1 to mailbox CMD to trigger LPU event + */ + bfa_reg_write(ioc->ioc_regs.hfn_mbox_cmd, 1); + (void)bfa_reg_read(ioc->ioc_regs.hfn_mbox_cmd); +} + +static void +bfa_ioc_send_enable(struct bfa_ioc_s *ioc) +{ + struct bfi_ioc_ctrl_req_s enable_req; + + bfi_h2i_set(enable_req.mh, BFI_MC_IOC, BFI_IOC_H2I_ENABLE_REQ, + bfa_ioc_portid(ioc)); + enable_req.ioc_class = ioc->ioc_mc; + bfa_ioc_mbox_send(ioc, &enable_req, sizeof(struct bfi_ioc_ctrl_req_s)); +} + +static void +bfa_ioc_send_disable(struct bfa_ioc_s *ioc) +{ + struct bfi_ioc_ctrl_req_s disable_req; + + bfi_h2i_set(disable_req.mh, BFI_MC_IOC, BFI_IOC_H2I_DISABLE_REQ, + bfa_ioc_portid(ioc)); + bfa_ioc_mbox_send(ioc, &disable_req, sizeof(struct bfi_ioc_ctrl_req_s)); +} + +static void +bfa_ioc_send_getattr(struct bfa_ioc_s *ioc) +{ + struct bfi_ioc_getattr_req_s attr_req; + + bfi_h2i_set(attr_req.mh, BFI_MC_IOC, BFI_IOC_H2I_GETATTR_REQ, + bfa_ioc_portid(ioc)); + bfa_dma_be_addr_set(attr_req.attr_addr, ioc->attr_dma.pa); + bfa_ioc_mbox_send(ioc, &attr_req, sizeof(attr_req)); +} + +static void +bfa_ioc_hb_check(void *cbarg) +{ + struct bfa_ioc_s *ioc = cbarg; + u32 hb_count; + + hb_count = bfa_reg_read(ioc->ioc_regs.heartbeat); + if (ioc->hb_count == hb_count) { + ioc->hb_fail++; + } else { + ioc->hb_count = hb_count; + ioc->hb_fail = 0; + } + + if (ioc->hb_fail >= BFA_IOC_HB_FAIL_MAX) { + bfa_log(ioc->logm, BFA_LOG_HAL_HEARTBEAT_FAILURE, hb_count); + ioc->hb_fail = 0; + bfa_ioc_recover(ioc); + return; + } + + bfa_ioc_mbox_poll(ioc); + bfa_timer_begin(ioc->timer_mod, &ioc->ioc_timer, bfa_ioc_hb_check, ioc, + BFA_IOC_HB_TOV); +} + +static void +bfa_ioc_hb_monitor(struct bfa_ioc_s *ioc) +{ + ioc->hb_fail = 0; + ioc->hb_count = bfa_reg_read(ioc->ioc_regs.heartbeat); + bfa_timer_begin(ioc->timer_mod, &ioc->ioc_timer, bfa_ioc_hb_check, ioc, + BFA_IOC_HB_TOV); +} + +static void +bfa_ioc_hb_stop(struct bfa_ioc_s 
*ioc) +{ + bfa_timer_stop(&ioc->ioc_timer); +} + +/** + * Host to LPU mailbox message addresses + */ +static struct { + u32 hfn_mbox, lpu_mbox, hfn_pgn; +} iocreg_fnreg[] = { + { + HOSTFN0_LPU_MBOX0_0, LPU_HOSTFN0_MBOX0_0, HOST_PAGE_NUM_FN0}, { + HOSTFN1_LPU_MBOX0_8, LPU_HOSTFN1_MBOX0_8, HOST_PAGE_NUM_FN1}, { + HOSTFN2_LPU_MBOX0_0, LPU_HOSTFN2_MBOX0_0, HOST_PAGE_NUM_FN2}, { + HOSTFN3_LPU_MBOX0_8, LPU_HOSTFN3_MBOX0_8, HOST_PAGE_NUM_FN3} +}; + +/** + * Host <-> LPU mailbox command/status registers - port 0 + */ +static struct { + u32 hfn, lpu; +} iocreg_mbcmd_p0[] = { + { + HOSTFN0_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN0_MBOX0_CMD_STAT}, { + HOSTFN1_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN1_MBOX0_CMD_STAT}, { + HOSTFN2_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN2_MBOX0_CMD_STAT}, { + HOSTFN3_LPU0_MBOX0_CMD_STAT, LPU0_HOSTFN3_MBOX0_CMD_STAT} +}; + +/** + * Host <-> LPU mailbox command/status registers - port 1 + */ +static struct { + u32 hfn, lpu; +} iocreg_mbcmd_p1[] = { + { + HOSTFN0_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN0_MBOX0_CMD_STAT}, { + HOSTFN1_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN1_MBOX0_CMD_STAT}, { + HOSTFN2_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN2_MBOX0_CMD_STAT}, { + HOSTFN3_LPU1_MBOX0_CMD_STAT, LPU1_HOSTFN3_MBOX0_CMD_STAT} +}; + +/** + * Shared IRQ handling in INTX mode + */ +static struct { + u32 isr, msk; +} iocreg_shirq_next[] = { + { + HOSTFN1_INT_STATUS, HOSTFN1_INT_MSK}, { + HOSTFN2_INT_STATUS, HOSTFN2_INT_MSK}, { + HOSTFN3_INT_STATUS, HOSTFN3_INT_MSK}, { +HOSTFN0_INT_STATUS, HOSTFN0_INT_MSK},}; + +static void +bfa_ioc_reg_init(struct bfa_ioc_s *ioc) +{ + bfa_os_addr_t rb; + int pcifn = bfa_ioc_pcifn(ioc); + + rb = bfa_ioc_bar0(ioc); + + ioc->ioc_regs.hfn_mbox = rb + iocreg_fnreg[pcifn].hfn_mbox; + ioc->ioc_regs.lpu_mbox = rb + iocreg_fnreg[pcifn].lpu_mbox; + ioc->ioc_regs.host_page_num_fn = rb + iocreg_fnreg[pcifn].hfn_pgn; + + if (ioc->port_id == 0) { + ioc->ioc_regs.heartbeat = rb + BFA_IOC0_HBEAT_REG; + ioc->ioc_regs.ioc_fwstate = rb + BFA_IOC0_STATE_REG; + ioc->ioc_regs.hfn_mbox_cmd = rb + iocreg_mbcmd_p0[pcifn].hfn; + ioc->ioc_regs.lpu_mbox_cmd = rb + iocreg_mbcmd_p0[pcifn].lpu; + ioc->ioc_regs.ll_halt = rb + FW_INIT_HALT_P0; + } else { + ioc->ioc_regs.heartbeat = (rb + BFA_IOC1_HBEAT_REG); + ioc->ioc_regs.ioc_fwstate = (rb + BFA_IOC1_STATE_REG); + ioc->ioc_regs.hfn_mbox_cmd = rb + iocreg_mbcmd_p1[pcifn].hfn; + ioc->ioc_regs.lpu_mbox_cmd = rb + iocreg_mbcmd_p1[pcifn].lpu; + ioc->ioc_regs.ll_halt = rb + FW_INIT_HALT_P1; + } + + /** + * Shared IRQ handling in INTX mode + */ + ioc->ioc_regs.shirq_isr_next = rb + iocreg_shirq_next[pcifn].isr; + ioc->ioc_regs.shirq_msk_next = rb + iocreg_shirq_next[pcifn].msk; + + /* + * PSS control registers + */ + ioc->ioc_regs.pss_ctl_reg = (rb + PSS_CTL_REG); + ioc->ioc_regs.app_pll_fast_ctl_reg = (rb + APP_PLL_425_CTL_REG); + ioc->ioc_regs.app_pll_slow_ctl_reg = (rb + APP_PLL_312_CTL_REG); + + /* + * IOC semaphore registers and serialization + */ + ioc->ioc_regs.ioc_sem_reg = (rb + HOST_SEM0_REG); + ioc->ioc_regs.ioc_usage_sem_reg = (rb + HOST_SEM1_REG); + ioc->ioc_regs.ioc_usage_reg = (rb + BFA_FW_USE_COUNT); + + /** + * sram memory access + */ + ioc->ioc_regs.smem_page_start = (rb + PSS_SMEM_PAGE_START); + ioc->ioc_regs.smem_pg0 = BFI_IOC_SMEM_PG0_CB; + if (ioc->pcidev.device_id == BFA_PCI_DEVICE_ID_CT) + ioc->ioc_regs.smem_pg0 = BFI_IOC_SMEM_PG0_CT; +} + +/** + * Initiate a full firmware download. 
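+ * The boot type and boot parameter are patched into the first image
+ * chunk, then the image is copied word by word into SMEM through the
+ * host page window, re-programming the page number on every wrap.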
+ */ +static void +bfa_ioc_download_fw(struct bfa_ioc_s *ioc, u32 boot_type, + u32 boot_param) +{ + u32 *fwimg; + u32 pgnum, pgoff; + u32 loff = 0; + u32 chunkno = 0; + u32 i; + + /** + * Initialize LMEM first before code download + */ + bfa_ioc_lmem_init(ioc); + + /** + * Flash based firmware boot + */ + bfa_trc(ioc, bfa_ioc_fwimg_get_size(ioc)); + if (bfa_ioc_fwimg_get_size(ioc) < BFA_IOC_FWIMG_MINSZ) + boot_type = BFI_BOOT_TYPE_FLASH; + fwimg = bfa_ioc_fwimg_get_chunk(ioc, chunkno); + fwimg[BFI_BOOT_TYPE_OFF / sizeof(u32)] = bfa_os_swap32(boot_type); + fwimg[BFI_BOOT_PARAM_OFF / sizeof(u32)] = + bfa_os_swap32(boot_param); + + pgnum = bfa_ioc_smem_pgnum(ioc, loff); + pgoff = bfa_ioc_smem_pgoff(ioc, loff); + + bfa_reg_write(ioc->ioc_regs.host_page_num_fn, pgnum); + + for (i = 0; i < bfa_ioc_fwimg_get_size(ioc); i++) { + + if (BFA_FLASH_CHUNK_NO(i) != chunkno) { + chunkno = BFA_FLASH_CHUNK_NO(i); + fwimg = bfa_ioc_fwimg_get_chunk(ioc, + BFA_FLASH_CHUNK_ADDR(chunkno)); + } + + /** + * write smem + */ + bfa_mem_write(ioc->ioc_regs.smem_page_start, loff, + fwimg[BFA_FLASH_OFFSET_IN_CHUNK(i)]); + + loff += sizeof(u32); + + /** + * handle page offset wrap around + */ + loff = PSS_SMEM_PGOFF(loff); + if (loff == 0) { + pgnum++; + bfa_reg_write(ioc->ioc_regs.host_page_num_fn, pgnum); + } + } + + bfa_reg_write(ioc->ioc_regs.host_page_num_fn, + bfa_ioc_smem_pgnum(ioc, 0)); +} + +static void +bfa_ioc_reset(struct bfa_ioc_s *ioc, bfa_boolean_t force) +{ + bfa_ioc_hwinit(ioc, force); +} + +/** + * Update BFA configuration from firmware configuration. + */ +static void +bfa_ioc_getattr_reply(struct bfa_ioc_s *ioc) +{ + struct bfi_ioc_attr_s *attr = ioc->attr; + + attr->adapter_prop = bfa_os_ntohl(attr->adapter_prop); + attr->maxfrsize = bfa_os_ntohs(attr->maxfrsize); + + bfa_fsm_send_event(ioc, IOC_E_FWRSP_GETATTR); +} + +/** + * Attach time initialization of mbox logic. + */ +static void +bfa_ioc_mbox_attach(struct bfa_ioc_s *ioc) +{ + struct bfa_ioc_mbox_mod_s *mod = &ioc->mbox_mod; + int mc; + + INIT_LIST_HEAD(&mod->cmd_q); + for (mc = 0; mc < BFI_MC_MAX; mc++) { + mod->mbhdlr[mc].cbfn = NULL; + mod->mbhdlr[mc].cbarg = ioc->bfa; + } +} + +/** + * Mbox poll timer -- restarts any pending mailbox requests. + */ +static void +bfa_ioc_mbox_poll(struct bfa_ioc_s *ioc) +{ + struct bfa_ioc_mbox_mod_s *mod = &ioc->mbox_mod; + struct bfa_mbox_cmd_s *cmd; + u32 stat; + + /** + * If no command pending, do nothing + */ + if (list_empty(&mod->cmd_q)) + return; + + /** + * If previous command is not yet fetched by firmware, do nothing + */ + stat = bfa_reg_read(ioc->ioc_regs.hfn_mbox_cmd); + if (stat) + return; + + /** + * Enqueue command to firmware. + */ + bfa_q_deq(&mod->cmd_q, &cmd); + bfa_ioc_mbox_send(ioc, cmd->msg, sizeof(cmd->msg)); +} + +/** + * Cleanup any pending requests. + */ +static void +bfa_ioc_mbox_hbfail(struct bfa_ioc_s *ioc) +{ + struct bfa_ioc_mbox_mod_s *mod = &ioc->mbox_mod; + struct bfa_mbox_cmd_s *cmd; + + while (!list_empty(&mod->cmd_q)) + bfa_q_deq(&mod->cmd_q, &cmd); +} + +/** + * Initialize IOC to port mapping. + */ + +#define FNC_PERS_FN_SHIFT(__fn) ((__fn) * 8) +static void +bfa_ioc_map_port(struct bfa_ioc_s *ioc) +{ + bfa_os_addr_t rb = ioc->pcidev.pci_bar_kva; + u32 r32; + + /** + * For crossbow, port id is same as pci function. 
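+ * On catapult the port id is instead derived from the personality
+ * register (FNC_PERS_REG) and the IOC type.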
+ */ + if (ioc->pcidev.device_id != BFA_PCI_DEVICE_ID_CT) { + ioc->port_id = bfa_ioc_pcifn(ioc); + return; + } + + /** + * For catapult, base port id on personality register and IOC type + */ + r32 = bfa_reg_read(rb + FNC_PERS_REG); + r32 >>= FNC_PERS_FN_SHIFT(bfa_ioc_pcifn(ioc)); + ioc->port_id = (r32 & __F0_PORT_MAP_MK) >> __F0_PORT_MAP_SH; + + bfa_trc(ioc, bfa_ioc_pcifn(ioc)); + bfa_trc(ioc, ioc->port_id); +} + + + +/** + * bfa_ioc_public + */ + +/** +* Set interrupt mode for a function: INTX or MSIX + */ +void +bfa_ioc_isr_mode_set(struct bfa_ioc_s *ioc, bfa_boolean_t msix) +{ + bfa_os_addr_t rb = ioc->pcidev.pci_bar_kva; + u32 r32, mode; + + r32 = bfa_reg_read(rb + FNC_PERS_REG); + bfa_trc(ioc, r32); + + mode = (r32 >> FNC_PERS_FN_SHIFT(bfa_ioc_pcifn(ioc))) & + __F0_INTX_STATUS; + + /** + * If already in desired mode, do not change anything + */ + if (!msix && mode) + return; + + if (msix) + mode = __F0_INTX_STATUS_MSIX; + else + mode = __F0_INTX_STATUS_INTA; + + r32 &= ~(__F0_INTX_STATUS << FNC_PERS_FN_SHIFT(bfa_ioc_pcifn(ioc))); + r32 |= (mode << FNC_PERS_FN_SHIFT(bfa_ioc_pcifn(ioc))); + bfa_trc(ioc, r32); + + bfa_reg_write(rb + FNC_PERS_REG, r32); +} + +bfa_status_t +bfa_ioc_pll_init(struct bfa_ioc_s *ioc) +{ + bfa_os_addr_t rb = ioc->pcidev.pci_bar_kva; + u32 pll_sclk, pll_fclk, r32; + + if (ioc->pcidev.device_id == BFA_PCI_DEVICE_ID_CT) { + pll_sclk = + __APP_PLL_312_ENABLE | __APP_PLL_312_LRESETN | + __APP_PLL_312_RSEL200500 | __APP_PLL_312_P0_1(0U) | + __APP_PLL_312_JITLMT0_1(3U) | + __APP_PLL_312_CNTLMT0_1(1U); + pll_fclk = + __APP_PLL_425_ENABLE | __APP_PLL_425_LRESETN | + __APP_PLL_425_RSEL200500 | __APP_PLL_425_P0_1(0U) | + __APP_PLL_425_JITLMT0_1(3U) | + __APP_PLL_425_CNTLMT0_1(1U); + + /** + * For catapult, choose operational mode FC/FCoE + */ + if (ioc->fcmode) { + bfa_reg_write((rb + OP_MODE), 0); + bfa_reg_write((rb + ETH_MAC_SER_REG), + __APP_EMS_CMLCKSEL | __APP_EMS_REFCKBUFEN2 + | __APP_EMS_CHANNEL_SEL); + } else { + ioc->pllinit = BFA_TRUE; + bfa_reg_write((rb + OP_MODE), __GLOBAL_FCOE_MODE); + bfa_reg_write((rb + ETH_MAC_SER_REG), + __APP_EMS_REFCKBUFEN1); + } + } else { + pll_sclk = + __APP_PLL_312_ENABLE | __APP_PLL_312_LRESETN | + __APP_PLL_312_P0_1(3U) | __APP_PLL_312_JITLMT0_1(3U) | + __APP_PLL_312_CNTLMT0_1(3U); + pll_fclk = + __APP_PLL_425_ENABLE | __APP_PLL_425_LRESETN | + __APP_PLL_425_RSEL200500 | __APP_PLL_425_P0_1(3U) | + __APP_PLL_425_JITLMT0_1(3U) | + __APP_PLL_425_CNTLMT0_1(3U); + } + + bfa_reg_write((rb + BFA_IOC0_STATE_REG), BFI_IOC_UNINIT); + bfa_reg_write((rb + BFA_IOC1_STATE_REG), BFI_IOC_UNINIT); + + bfa_reg_write((rb + HOSTFN0_INT_MSK), 0xffffffffU); + bfa_reg_write((rb + HOSTFN1_INT_MSK), 0xffffffffU); + bfa_reg_write((rb + HOSTFN0_INT_STATUS), 0xffffffffU); + bfa_reg_write((rb + HOSTFN1_INT_STATUS), 0xffffffffU); + bfa_reg_write((rb + HOSTFN0_INT_MSK), 0xffffffffU); + bfa_reg_write((rb + HOSTFN1_INT_MSK), 0xffffffffU); + + bfa_reg_write(ioc->ioc_regs.app_pll_slow_ctl_reg, + __APP_PLL_312_LOGIC_SOFT_RESET); + bfa_reg_write(ioc->ioc_regs.app_pll_slow_ctl_reg, + __APP_PLL_312_BYPASS | __APP_PLL_312_LOGIC_SOFT_RESET); + bfa_reg_write(ioc->ioc_regs.app_pll_fast_ctl_reg, + __APP_PLL_425_LOGIC_SOFT_RESET); + bfa_reg_write(ioc->ioc_regs.app_pll_fast_ctl_reg, + __APP_PLL_425_BYPASS | __APP_PLL_425_LOGIC_SOFT_RESET); + bfa_os_udelay(2); + bfa_reg_write(ioc->ioc_regs.app_pll_slow_ctl_reg, + __APP_PLL_312_LOGIC_SOFT_RESET); + bfa_reg_write(ioc->ioc_regs.app_pll_fast_ctl_reg, + __APP_PLL_425_LOGIC_SOFT_RESET); + + 
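+	/*
+	 * Program the PLL dividers while the logic soft reset is still
+	 * asserted; the reset bits are dropped below once the PLLs have
+	 * had time to lock.
+	 */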
bfa_reg_write(ioc->ioc_regs.app_pll_slow_ctl_reg, + pll_sclk | __APP_PLL_312_LOGIC_SOFT_RESET); + bfa_reg_write(ioc->ioc_regs.app_pll_fast_ctl_reg, + pll_fclk | __APP_PLL_425_LOGIC_SOFT_RESET); + + /** + * Wait for PLLs to lock. + */ + bfa_os_udelay(2000); + bfa_reg_write((rb + HOSTFN0_INT_STATUS), 0xffffffffU); + bfa_reg_write((rb + HOSTFN1_INT_STATUS), 0xffffffffU); + + bfa_reg_write(ioc->ioc_regs.app_pll_slow_ctl_reg, pll_sclk); + bfa_reg_write(ioc->ioc_regs.app_pll_fast_ctl_reg, pll_fclk); + + if (ioc->pcidev.device_id == BFA_PCI_DEVICE_ID_CT) { + bfa_reg_write((rb + MBIST_CTL_REG), __EDRAM_BISTR_START); + bfa_os_udelay(1000); + r32 = bfa_reg_read((rb + MBIST_STAT_REG)); + bfa_trc(ioc, r32); + } + + return BFA_STATUS_OK; +} + +/** + * Interface used by diag module to do firmware boot with memory test + * as the entry vector. + */ +void +bfa_ioc_boot(struct bfa_ioc_s *ioc, u32 boot_type, u32 boot_param) +{ + bfa_os_addr_t rb; + + bfa_ioc_stats(ioc, ioc_boots); + + if (bfa_ioc_pll_init(ioc) != BFA_STATUS_OK) + return; + + /** + * Initialize IOC state of all functions on a chip reset. + */ + rb = ioc->pcidev.pci_bar_kva; + if (boot_param == BFI_BOOT_TYPE_MEMTEST) { + bfa_reg_write((rb + BFA_IOC0_STATE_REG), BFI_IOC_MEMTEST); + bfa_reg_write((rb + BFA_IOC1_STATE_REG), BFI_IOC_MEMTEST); + } else { + bfa_reg_write((rb + BFA_IOC0_STATE_REG), BFI_IOC_INITING); + bfa_reg_write((rb + BFA_IOC1_STATE_REG), BFI_IOC_INITING); + } + + bfa_ioc_download_fw(ioc, boot_type, boot_param); + + /** + * Enable interrupts just before starting LPU + */ + ioc->cbfn->reset_cbfn(ioc->bfa); + bfa_ioc_lpu_start(ioc); +} + +/** + * Enable/disable IOC failure auto recovery. + */ +void +bfa_ioc_auto_recover(bfa_boolean_t auto_recover) +{ + bfa_auto_recover = BFA_FALSE; +} + + +bfa_boolean_t +bfa_ioc_is_operational(struct bfa_ioc_s *ioc) +{ + return bfa_fsm_cmp_state(ioc, bfa_ioc_sm_op); +} + +void +bfa_ioc_msgget(struct bfa_ioc_s *ioc, void *mbmsg) +{ + u32 *msgp = mbmsg; + u32 r32; + int i; + + /** + * read the MBOX msg + */ + for (i = 0; i < (sizeof(union bfi_ioc_i2h_msg_u) / sizeof(u32)); + i++) { + r32 = bfa_reg_read(ioc->ioc_regs.lpu_mbox + + i * sizeof(u32)); + msgp[i] = bfa_os_htonl(r32); + } + + /** + * turn off mailbox interrupt by clearing mailbox status + */ + bfa_reg_write(ioc->ioc_regs.lpu_mbox_cmd, 1); + bfa_reg_read(ioc->ioc_regs.lpu_mbox_cmd); +} + +void +bfa_ioc_isr(struct bfa_ioc_s *ioc, struct bfi_mbmsg_s *m) +{ + union bfi_ioc_i2h_msg_u *msg; + + msg = (union bfi_ioc_i2h_msg_u *)m; + + bfa_ioc_stats(ioc, ioc_isrs); + + switch (msg->mh.msg_id) { + case BFI_IOC_I2H_HBEAT: + break; + + case BFI_IOC_I2H_READY_EVENT: + bfa_fsm_send_event(ioc, IOC_E_FWREADY); + break; + + case BFI_IOC_I2H_ENABLE_REPLY: + bfa_fsm_send_event(ioc, IOC_E_FWRSP_ENABLE); + break; + + case BFI_IOC_I2H_DISABLE_REPLY: + bfa_fsm_send_event(ioc, IOC_E_FWRSP_DISABLE); + break; + + case BFI_IOC_I2H_GETATTR_REPLY: + bfa_ioc_getattr_reply(ioc); + break; + + default: + bfa_trc(ioc, msg->mh.msg_id); + bfa_assert(0); + } +} + +/** + * IOC attach time initialization and setup. 
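+ * No hardware is touched here; the FSM is simply parked in the reset
+ * state until bfa_ioc_enable() is called.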
+ * + * @param[in] ioc memory for IOC + * @param[in] bfa driver instance structure + * @param[in] trcmod kernel trace module + * @param[in] aen kernel aen event module + * @param[in] logm kernel logging module + */ +void +bfa_ioc_attach(struct bfa_ioc_s *ioc, void *bfa, struct bfa_ioc_cbfn_s *cbfn, + struct bfa_timer_mod_s *timer_mod, struct bfa_trc_mod_s *trcmod, + struct bfa_aen_s *aen, struct bfa_log_mod_s *logm) +{ + ioc->bfa = bfa; + ioc->cbfn = cbfn; + ioc->timer_mod = timer_mod; + ioc->trcmod = trcmod; + ioc->aen = aen; + ioc->logm = logm; + ioc->fcmode = BFA_FALSE; + ioc->pllinit = BFA_FALSE; + ioc->dbg_fwsave_once = BFA_TRUE; + + bfa_ioc_mbox_attach(ioc); + INIT_LIST_HEAD(&ioc->hb_notify_q); + + bfa_fsm_set_state(ioc, bfa_ioc_sm_reset); +} + +/** + * Driver detach time IOC cleanup. + */ +void +bfa_ioc_detach(struct bfa_ioc_s *ioc) +{ + bfa_fsm_send_event(ioc, IOC_E_DETACH); +} + +/** + * Setup IOC PCI properties. + * + * @param[in] pcidev PCI device information for this IOC + */ +void +bfa_ioc_pci_init(struct bfa_ioc_s *ioc, struct bfa_pcidev_s *pcidev, + enum bfi_mclass mc) +{ + ioc->ioc_mc = mc; + ioc->pcidev = *pcidev; + ioc->ctdev = (ioc->pcidev.device_id == BFA_PCI_DEVICE_ID_CT); + ioc->cna = ioc->ctdev && !ioc->fcmode; + + bfa_ioc_map_port(ioc); + bfa_ioc_reg_init(ioc); +} + +/** + * Initialize IOC dma memory + * + * @param[in] dm_kva kernel virtual address of IOC dma memory + * @param[in] dm_pa physical address of IOC dma memory + */ +void +bfa_ioc_mem_claim(struct bfa_ioc_s *ioc, u8 *dm_kva, u64 dm_pa) +{ + /** + * dma memory for firmware attribute + */ + ioc->attr_dma.kva = dm_kva; + ioc->attr_dma.pa = dm_pa; + ioc->attr = (struct bfi_ioc_attr_s *)dm_kva; +} + +/** + * Return size of dma memory required. + */ +u32 +bfa_ioc_meminfo(void) +{ + return BFA_ROUNDUP(sizeof(struct bfi_ioc_attr_s), BFA_DMA_ALIGN_SZ); +} + +void +bfa_ioc_enable(struct bfa_ioc_s *ioc) +{ + bfa_ioc_stats(ioc, ioc_enables); + ioc->dbg_fwsave_once = BFA_TRUE; + + bfa_fsm_send_event(ioc, IOC_E_ENABLE); +} + +void +bfa_ioc_disable(struct bfa_ioc_s *ioc) +{ + bfa_ioc_stats(ioc, ioc_disables); + bfa_fsm_send_event(ioc, IOC_E_DISABLE); +} + +/** + * Returns memory required for saving firmware trace in case of crash. + * Driver must call this interface to allocate memory required for + * automatic saving of firmware trace. Driver should call + * bfa_ioc_debug_memclaim() right after bfa_ioc_attach() to setup this + * trace memory. + */ +int +bfa_ioc_debug_trcsz(bfa_boolean_t auto_recover) +{ +return (auto_recover) ? BFA_DBG_FWTRC_LEN : 0; +} + +/** + * Initialize memory for saving firmware trace. Driver must initialize + * trace memory before call bfa_ioc_enable(). 
+ */ +void +bfa_ioc_debug_memclaim(struct bfa_ioc_s *ioc, void *dbg_fwsave) +{ + bfa_assert(ioc->auto_recover); + ioc->dbg_fwsave = dbg_fwsave; + ioc->dbg_fwsave_len = bfa_ioc_debug_trcsz(ioc->auto_recover); +} + +u32 +bfa_ioc_smem_pgnum(struct bfa_ioc_s *ioc, u32 fmaddr) +{ + return PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, fmaddr); +} + +u32 +bfa_ioc_smem_pgoff(struct bfa_ioc_s *ioc, u32 fmaddr) +{ + return PSS_SMEM_PGOFF(fmaddr); +} + +/** + * Register mailbox message handler functions + * + * @param[in] ioc IOC instance + * @param[in] mcfuncs message class handler functions + */ +void +bfa_ioc_mbox_register(struct bfa_ioc_s *ioc, bfa_ioc_mbox_mcfunc_t *mcfuncs) +{ + struct bfa_ioc_mbox_mod_s *mod = &ioc->mbox_mod; + int mc; + + for (mc = 0; mc < BFI_MC_MAX; mc++) + mod->mbhdlr[mc].cbfn = mcfuncs[mc]; +} + +/** + * Register mailbox message handler function, to be called by common modules + */ +void +bfa_ioc_mbox_regisr(struct bfa_ioc_s *ioc, enum bfi_mclass mc, + bfa_ioc_mbox_mcfunc_t cbfn, void *cbarg) +{ + struct bfa_ioc_mbox_mod_s *mod = &ioc->mbox_mod; + + mod->mbhdlr[mc].cbfn = cbfn; + mod->mbhdlr[mc].cbarg = cbarg; +} + +/** + * Queue a mailbox command request to firmware. Waits if mailbox is busy. + * Responsibility of caller to serialize + * + * @param[in] ioc IOC instance + * @param[i] cmd Mailbox command + */ +void +bfa_ioc_mbox_queue(struct bfa_ioc_s *ioc, struct bfa_mbox_cmd_s *cmd) +{ + struct bfa_ioc_mbox_mod_s *mod = &ioc->mbox_mod; + u32 stat; + + /** + * If a previous command is pending, queue new command + */ + if (!list_empty(&mod->cmd_q)) { + list_add_tail(&cmd->qe, &mod->cmd_q); + return; + } + + /** + * If mailbox is busy, queue command for poll timer + */ + stat = bfa_reg_read(ioc->ioc_regs.hfn_mbox_cmd); + if (stat) { + list_add_tail(&cmd->qe, &mod->cmd_q); + return; + } + + /** + * mailbox is free -- queue command to firmware + */ + bfa_ioc_mbox_send(ioc, cmd->msg, sizeof(cmd->msg)); +} + +/** + * Handle mailbox interrupts + */ +void +bfa_ioc_mbox_isr(struct bfa_ioc_s *ioc) +{ + struct bfa_ioc_mbox_mod_s *mod = &ioc->mbox_mod; + struct bfi_mbmsg_s m; + int mc; + + bfa_ioc_msgget(ioc, &m); + + /** + * Treat IOC message class as special. + */ + mc = m.mh.msg_class; + if (mc == BFI_MC_IOC) { + bfa_ioc_isr(ioc, &m); + return; + } + + if ((mc > BFI_MC_MAX) || (mod->mbhdlr[mc].cbfn == NULL)) + return; + + mod->mbhdlr[mc].cbfn(mod->mbhdlr[mc].cbarg, &m); +} + +void +bfa_ioc_error_isr(struct bfa_ioc_s *ioc) +{ + bfa_fsm_send_event(ioc, IOC_E_HWERROR); +} + +#ifndef BFA_BIOS_BUILD + +/** + * return true if IOC is disabled + */ +bfa_boolean_t +bfa_ioc_is_disabled(struct bfa_ioc_s *ioc) +{ + return (bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabling) + || bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabled)); +} + +/** + * return true if IOC firmware is different. + */ +bfa_boolean_t +bfa_ioc_fw_mismatch(struct bfa_ioc_s *ioc) +{ + return (bfa_fsm_cmp_state(ioc, bfa_ioc_sm_reset) + || bfa_fsm_cmp_state(ioc, bfa_ioc_sm_fwcheck) + || bfa_fsm_cmp_state(ioc, bfa_ioc_sm_mismatch)); +} + +#define bfa_ioc_state_disabled(__sm) \ + (((__sm) == BFI_IOC_UNINIT) || \ + ((__sm) == BFI_IOC_INITING) || \ + ((__sm) == BFI_IOC_HWINIT) || \ + ((__sm) == BFI_IOC_DISABLED) || \ + ((__sm) == BFI_IOC_HBFAIL) || \ + ((__sm) == BFI_IOC_CFG_DISABLED)) + +/** + * Check if adapter is disabled -- both IOCs should be in a disabled + * state. 
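+ * The local IOC must be in the disabled FSM state and both firmware
+ * state registers must read back a bfa_ioc_state_disabled() value.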
+ */ +bfa_boolean_t +bfa_ioc_adapter_is_disabled(struct bfa_ioc_s *ioc) +{ + u32 ioc_state; + bfa_os_addr_t rb = ioc->pcidev.pci_bar_kva; + + if (!bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabled)) + return BFA_FALSE; + + ioc_state = bfa_reg_read(rb + BFA_IOC0_STATE_REG); + if (!bfa_ioc_state_disabled(ioc_state)) + return BFA_FALSE; + + ioc_state = bfa_reg_read(rb + BFA_IOC1_STATE_REG); + if (!bfa_ioc_state_disabled(ioc_state)) + return BFA_FALSE; + + return BFA_TRUE; +} + +/** + * Add to IOC heartbeat failure notification queue. To be used by common + * modules such as + */ +void +bfa_ioc_hbfail_register(struct bfa_ioc_s *ioc, + struct bfa_ioc_hbfail_notify_s *notify) +{ + list_add_tail(¬ify->qe, &ioc->hb_notify_q); +} + +#define BFA_MFG_NAME "Brocade" +void +bfa_ioc_get_adapter_attr(struct bfa_ioc_s *ioc, + struct bfa_adapter_attr_s *ad_attr) +{ + struct bfi_ioc_attr_s *ioc_attr; + char model[BFA_ADAPTER_MODEL_NAME_LEN]; + + ioc_attr = ioc->attr; + bfa_os_memcpy((void *)&ad_attr->serial_num, + (void *)ioc_attr->brcd_serialnum, + BFA_ADAPTER_SERIAL_NUM_LEN); + + bfa_os_memcpy(&ad_attr->fw_ver, ioc_attr->fw_version, BFA_VERSION_LEN); + bfa_os_memcpy(&ad_attr->optrom_ver, ioc_attr->optrom_version, + BFA_VERSION_LEN); + bfa_os_memcpy(&ad_attr->manufacturer, BFA_MFG_NAME, + BFA_ADAPTER_MFG_NAME_LEN); + bfa_os_memcpy(&ad_attr->vpd, &ioc_attr->vpd, + sizeof(struct bfa_mfg_vpd_s)); + + ad_attr->nports = BFI_ADAPTER_GETP(NPORTS, ioc_attr->adapter_prop); + ad_attr->max_speed = BFI_ADAPTER_GETP(SPEED, ioc_attr->adapter_prop); + + /** + * model name + */ + if (BFI_ADAPTER_GETP(SPEED, ioc_attr->adapter_prop) == 10) { + strcpy(model, "BR-10?0"); + model[5] = '0' + ad_attr->nports; + } else { + strcpy(model, "Brocade-??5"); + model[8] = + '0' + BFI_ADAPTER_GETP(SPEED, ioc_attr->adapter_prop); + model[9] = '0' + ad_attr->nports; + } + + if (BFI_ADAPTER_IS_SPECIAL(ioc_attr->adapter_prop)) + ad_attr->prototype = 1; + else + ad_attr->prototype = 0; + + bfa_os_memcpy(&ad_attr->model, model, BFA_ADAPTER_MODEL_NAME_LEN); + bfa_os_memcpy(&ad_attr->model_descr, &ad_attr->model, + BFA_ADAPTER_MODEL_NAME_LEN); + + ad_attr->pwwn = bfa_ioc_get_pwwn(ioc); + ad_attr->mac = bfa_ioc_get_mac(ioc); + + ad_attr->pcie_gen = ioc_attr->pcie_gen; + ad_attr->pcie_lanes = ioc_attr->pcie_lanes; + ad_attr->pcie_lanes_orig = ioc_attr->pcie_lanes_orig; + ad_attr->asic_rev = ioc_attr->asic_rev; + ad_attr->hw_ver[0] = 'R'; + ad_attr->hw_ver[1] = 'e'; + ad_attr->hw_ver[2] = 'v'; + ad_attr->hw_ver[3] = '-'; + ad_attr->hw_ver[4] = ioc_attr->asic_rev; + ad_attr->hw_ver[5] = '\0'; + + ad_attr->cna_capable = ioc->cna; +} + +void +bfa_ioc_get_attr(struct bfa_ioc_s *ioc, struct bfa_ioc_attr_s *ioc_attr) +{ + bfa_os_memset((void *)ioc_attr, 0, sizeof(struct bfa_ioc_attr_s)); + + ioc_attr->state = bfa_sm_to_state(ioc_sm_table, ioc->fsm); + ioc_attr->port_id = ioc->port_id; + + if (!ioc->ctdev) + ioc_attr->ioc_type = BFA_IOC_TYPE_FC; + else if (ioc->ioc_mc == BFI_MC_IOCFC) + ioc_attr->ioc_type = BFA_IOC_TYPE_FCoE; + else if (ioc->ioc_mc == BFI_MC_LL) + ioc_attr->ioc_type = BFA_IOC_TYPE_LL; + + bfa_ioc_get_adapter_attr(ioc, &ioc_attr->adapter_attr); + + ioc_attr->pci_attr.device_id = ioc->pcidev.device_id; + ioc_attr->pci_attr.pcifn = ioc->pcidev.pci_func; + ioc_attr->pci_attr.chip_rev[0] = 'R'; + ioc_attr->pci_attr.chip_rev[1] = 'e'; + ioc_attr->pci_attr.chip_rev[2] = 'v'; + ioc_attr->pci_attr.chip_rev[3] = '-'; + ioc_attr->pci_attr.chip_rev[4] = ioc_attr->adapter_attr.asic_rev; + ioc_attr->pci_attr.chip_rev[5] = '\0'; +} + +/** + * hal_wwn_public + 
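+ * All WWNs are derived from the manufacturing WWN held in the IOC
+ * attributes: the second port increments the final byte, the node WWN
+ * additionally forces the leading byte to 0x20, and
+ * bfa_ioc_get_wwn_naa5() repacks the low 48 bits together with a 12-bit
+ * instance number under the NAA-5 prefix nibble.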
*/ +wwn_t +bfa_ioc_get_pwwn(struct bfa_ioc_s *ioc) +{ + union { + wwn_t wwn; + u8 byte[sizeof(wwn_t)]; + } + w; + + w.wwn = ioc->attr->mfg_wwn; + + if (bfa_ioc_portid(ioc) == 1) + w.byte[7]++; + + return w.wwn; +} + +wwn_t +bfa_ioc_get_nwwn(struct bfa_ioc_s *ioc) +{ + union { + wwn_t wwn; + u8 byte[sizeof(wwn_t)]; + } + w; + + w.wwn = ioc->attr->mfg_wwn; + + if (bfa_ioc_portid(ioc) == 1) + w.byte[7]++; + + w.byte[0] = 0x20; + + return w.wwn; +} + +wwn_t +bfa_ioc_get_wwn_naa5(struct bfa_ioc_s *ioc, u16 inst) +{ + union { + wwn_t wwn; + u8 byte[sizeof(wwn_t)]; + } + w , w5; + + bfa_trc(ioc, inst); + + w.wwn = ioc->attr->mfg_wwn; + w5.byte[0] = 0x50 | w.byte[2] >> 4; + w5.byte[1] = w.byte[2] << 4 | w.byte[3] >> 4; + w5.byte[2] = w.byte[3] << 4 | w.byte[4] >> 4; + w5.byte[3] = w.byte[4] << 4 | w.byte[5] >> 4; + w5.byte[4] = w.byte[5] << 4 | w.byte[6] >> 4; + w5.byte[5] = w.byte[6] << 4 | w.byte[7] >> 4; + w5.byte[6] = w.byte[7] << 4 | (inst & 0x0f00) >> 8; + w5.byte[7] = (inst & 0xff); + + return w5.wwn; +} + +u64 +bfa_ioc_get_adid(struct bfa_ioc_s *ioc) +{ + return ioc->attr->mfg_wwn; +} + +mac_t +bfa_ioc_get_mac(struct bfa_ioc_s *ioc) +{ + mac_t mac; + + mac = ioc->attr->mfg_mac; + mac.mac[MAC_ADDRLEN - 1] += bfa_ioc_pcifn(ioc); + + return mac; +} + +void +bfa_ioc_set_fcmode(struct bfa_ioc_s *ioc) +{ + ioc->fcmode = BFA_TRUE; + ioc->port_id = bfa_ioc_pcifn(ioc); +} + +bfa_boolean_t +bfa_ioc_get_fcmode(struct bfa_ioc_s *ioc) +{ + return ioc->fcmode || (ioc->pcidev.device_id != BFA_PCI_DEVICE_ID_CT); +} + +/** + * Return true if interrupt should be claimed. + */ +bfa_boolean_t +bfa_ioc_intx_claim(struct bfa_ioc_s *ioc) +{ + u32 isr, msk; + + /** + * Always claim if not catapult. + */ + if (!ioc->ctdev) + return BFA_TRUE; + + /** + * FALSE if next device is claiming interrupt. + * TRUE if next device is not interrupting or not present. + */ + msk = bfa_reg_read(ioc->ioc_regs.shirq_msk_next); + isr = bfa_reg_read(ioc->ioc_regs.shirq_isr_next); + return !(isr & ~msk); +} + +/** + * Send AEN notification + */ +static void +bfa_ioc_aen_post(struct bfa_ioc_s *ioc, enum bfa_ioc_aen_event event) +{ + union bfa_aen_data_u aen_data; + struct bfa_log_mod_s *logmod = ioc->logm; + s32 inst_num = 0; + struct bfa_ioc_attr_s ioc_attr; + + switch (event) { + case BFA_IOC_AEN_HBGOOD: + bfa_log(logmod, BFA_AEN_IOC_HBGOOD, inst_num); + break; + case BFA_IOC_AEN_HBFAIL: + bfa_log(logmod, BFA_AEN_IOC_HBFAIL, inst_num); + break; + case BFA_IOC_AEN_ENABLE: + bfa_log(logmod, BFA_AEN_IOC_ENABLE, inst_num); + break; + case BFA_IOC_AEN_DISABLE: + bfa_log(logmod, BFA_AEN_IOC_DISABLE, inst_num); + break; + case BFA_IOC_AEN_FWMISMATCH: + bfa_log(logmod, BFA_AEN_IOC_FWMISMATCH, inst_num); + break; + default: + break; + } + + memset(&aen_data.ioc.pwwn, 0, sizeof(aen_data.ioc.pwwn)); + memset(&aen_data.ioc.mac, 0, sizeof(aen_data.ioc.mac)); + bfa_ioc_get_attr(ioc, &ioc_attr); + switch (ioc_attr.ioc_type) { + case BFA_IOC_TYPE_FC: + aen_data.ioc.pwwn = bfa_ioc_get_pwwn(ioc); + break; + case BFA_IOC_TYPE_FCoE: + aen_data.ioc.pwwn = bfa_ioc_get_pwwn(ioc); + aen_data.ioc.mac = bfa_ioc_get_mac(ioc); + break; + case BFA_IOC_TYPE_LL: + aen_data.ioc.mac = bfa_ioc_get_mac(ioc); + break; + default: + bfa_assert(ioc_attr.ioc_type == BFA_IOC_TYPE_FC); + break; + } + aen_data.ioc.ioc_type = ioc_attr.ioc_type; +} + +/** + * Retrieve saved firmware trace from a prior IOC failure. 
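+ * Returns BFA_STATUS_ENOFSAVE if no trace was saved; otherwise at most
+ * dbg_fwsave_len bytes are copied out and *trclen is updated.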
+ */ +bfa_status_t +bfa_ioc_debug_fwsave(struct bfa_ioc_s *ioc, void *trcdata, int *trclen) +{ + int tlen; + + if (ioc->dbg_fwsave_len == 0) + return BFA_STATUS_ENOFSAVE; + + tlen = *trclen; + if (tlen > ioc->dbg_fwsave_len) + tlen = ioc->dbg_fwsave_len; + + bfa_os_memcpy(trcdata, ioc->dbg_fwsave, tlen); + *trclen = tlen; + return BFA_STATUS_OK; +} + +/** + * Retrieve saved firmware trace from a prior IOC failure. + */ +bfa_status_t +bfa_ioc_debug_fwtrc(struct bfa_ioc_s *ioc, void *trcdata, int *trclen) +{ + u32 pgnum; + u32 loff = BFA_DBG_FWTRC_OFF(bfa_ioc_portid(ioc)); + int i, tlen; + u32 *tbuf = trcdata, r32; + + bfa_trc(ioc, *trclen); + + pgnum = bfa_ioc_smem_pgnum(ioc, loff); + loff = bfa_ioc_smem_pgoff(ioc, loff); + bfa_reg_write(ioc->ioc_regs.host_page_num_fn, pgnum); + + tlen = *trclen; + if (tlen > BFA_DBG_FWTRC_LEN) + tlen = BFA_DBG_FWTRC_LEN; + tlen /= sizeof(u32); + + bfa_trc(ioc, tlen); + + for (i = 0; i < tlen; i++) { + r32 = bfa_mem_read(ioc->ioc_regs.smem_page_start, loff); + tbuf[i] = bfa_os_ntohl(r32); + loff += sizeof(u32); + + /** + * handle page offset wrap around + */ + loff = PSS_SMEM_PGOFF(loff); + if (loff == 0) { + pgnum++; + bfa_reg_write(ioc->ioc_regs.host_page_num_fn, pgnum); + } + } + bfa_reg_write(ioc->ioc_regs.host_page_num_fn, + bfa_ioc_smem_pgnum(ioc, 0)); + bfa_trc(ioc, pgnum); + + *trclen = tlen * sizeof(u32); + return BFA_STATUS_OK; +} + +/** + * Save firmware trace if configured. + */ +static void +bfa_ioc_debug_save(struct bfa_ioc_s *ioc) +{ + int tlen; + + if (ioc->dbg_fwsave_len) { + tlen = ioc->dbg_fwsave_len; + bfa_ioc_debug_fwtrc(ioc, ioc->dbg_fwsave, &tlen); + } +} + +/** + * Firmware failure detected. Start recovery actions. + */ +static void +bfa_ioc_recover(struct bfa_ioc_s *ioc) +{ + if (ioc->dbg_fwsave_once) { + ioc->dbg_fwsave_once = BFA_FALSE; + bfa_ioc_debug_save(ioc); + } + + bfa_ioc_stats(ioc, ioc_hbfails); + bfa_fsm_send_event(ioc, IOC_E_HBFAIL); +} + +#else + +static void +bfa_ioc_aen_post(struct bfa_ioc_s *ioc, enum bfa_ioc_aen_event event) +{ +} + +static void +bfa_ioc_recover(struct bfa_ioc_s *ioc) +{ + bfa_assert(0); +} + +#endif + + diff --git a/drivers/scsi/bfa/bfa_ioc.h b/drivers/scsi/bfa/bfa_ioc.h new file mode 100644 index 000000000000..58efd4b13143 --- /dev/null +++ b/drivers/scsi/bfa/bfa_ioc.h @@ -0,0 +1,259 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_IOC_H__ +#define __BFA_IOC_H__ + +#include +#include +#include +#include +#include + +/** + * PCI device information required by IOC + */ +struct bfa_pcidev_s { + int pci_slot; + u8 pci_func; + u16 device_id; + bfa_os_addr_t pci_bar_kva; +}; + +/** + * Structure used to remember the DMA-able memory block's KVA and Physical + * Address + */ +struct bfa_dma_s { + void *kva; /*! Kernel virtual address */ + u64 pa; /*! 
Physical address */ +}; + +#define BFA_DMA_ALIGN_SZ 256 +#define BFA_ROUNDUP(_l, _s) (((_l) + ((_s) - 1)) & ~((_s) - 1)) + + + +#define bfa_dma_addr_set(dma_addr, pa) \ + __bfa_dma_addr_set(&dma_addr, (u64)pa) + +static inline void +__bfa_dma_addr_set(union bfi_addr_u *dma_addr, u64 pa) +{ + dma_addr->a32.addr_lo = (u32) pa; + dma_addr->a32.addr_hi = (u32) (bfa_os_u32(pa)); +} + + +#define bfa_dma_be_addr_set(dma_addr, pa) \ + __bfa_dma_be_addr_set(&dma_addr, (u64)pa) +static inline void +__bfa_dma_be_addr_set(union bfi_addr_u *dma_addr, u64 pa) +{ + dma_addr->a32.addr_lo = (u32) bfa_os_htonl(pa); + dma_addr->a32.addr_hi = (u32) bfa_os_htonl(bfa_os_u32(pa)); +} + +struct bfa_ioc_regs_s { + bfa_os_addr_t hfn_mbox_cmd; + bfa_os_addr_t hfn_mbox; + bfa_os_addr_t lpu_mbox_cmd; + bfa_os_addr_t lpu_mbox; + bfa_os_addr_t pss_ctl_reg; + bfa_os_addr_t app_pll_fast_ctl_reg; + bfa_os_addr_t app_pll_slow_ctl_reg; + bfa_os_addr_t ioc_sem_reg; + bfa_os_addr_t ioc_usage_sem_reg; + bfa_os_addr_t ioc_usage_reg; + bfa_os_addr_t host_page_num_fn; + bfa_os_addr_t heartbeat; + bfa_os_addr_t ioc_fwstate; + bfa_os_addr_t ll_halt; + bfa_os_addr_t shirq_isr_next; + bfa_os_addr_t shirq_msk_next; + bfa_os_addr_t smem_page_start; + u32 smem_pg0; +}; + +#define bfa_reg_read(_raddr) bfa_os_reg_read(_raddr) +#define bfa_reg_write(_raddr, _val) bfa_os_reg_write(_raddr, _val) +#define bfa_mem_read(_raddr, _off) bfa_os_mem_read(_raddr, _off) +#define bfa_mem_write(_raddr, _off, _val) \ + bfa_os_mem_write(_raddr, _off, _val) +/** + * IOC Mailbox structures + */ +struct bfa_mbox_cmd_s { + struct list_head qe; + u32 msg[BFI_IOC_MSGSZ]; +}; + +/** + * IOC mailbox module + */ +typedef void (*bfa_ioc_mbox_mcfunc_t)(void *cbarg, struct bfi_mbmsg_s *m); +struct bfa_ioc_mbox_mod_s { + struct list_head cmd_q; /* pending mbox queue */ + int nmclass; /* number of handlers */ + struct { + bfa_ioc_mbox_mcfunc_t cbfn; /* message handlers */ + void *cbarg; + } mbhdlr[BFI_MC_MAX]; +}; + +/** + * IOC callback function interfaces + */ +typedef void (*bfa_ioc_enable_cbfn_t)(void *bfa, enum bfa_status status); +typedef void (*bfa_ioc_disable_cbfn_t)(void *bfa); +typedef void (*bfa_ioc_hbfail_cbfn_t)(void *bfa); +typedef void (*bfa_ioc_reset_cbfn_t)(void *bfa); +struct bfa_ioc_cbfn_s { + bfa_ioc_enable_cbfn_t enable_cbfn; + bfa_ioc_disable_cbfn_t disable_cbfn; + bfa_ioc_hbfail_cbfn_t hbfail_cbfn; + bfa_ioc_reset_cbfn_t reset_cbfn; +}; + +/** + * Heartbeat failure notification queue element. 
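+ *
+ * A minimal usage sketch for a common module (mymod/mymod_hbfail are
+ * hypothetical names):
+ *
+ *	struct bfa_ioc_hbfail_notify_s hbfail;
+ *
+ *	bfa_ioc_hbfail_init(&hbfail, mymod_hbfail, mymod);
+ *	bfa_ioc_hbfail_register(ioc, &hbfail);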
+ */ +struct bfa_ioc_hbfail_notify_s { + struct list_head qe; + bfa_ioc_hbfail_cbfn_t cbfn; + void *cbarg; +}; + +/** + * Initialize a heartbeat failure notification structure + */ +#define bfa_ioc_hbfail_init(__notify, __cbfn, __cbarg) do { \ + (__notify)->cbfn = (__cbfn); \ + (__notify)->cbarg = (__cbarg); \ +} while (0) + +struct bfa_ioc_s { + bfa_fsm_t fsm; + struct bfa_s *bfa; + struct bfa_pcidev_s pcidev; + struct bfa_timer_mod_s *timer_mod; + struct bfa_timer_s ioc_timer; + struct bfa_timer_s sem_timer; + u32 hb_count; + u32 hb_fail; + u32 retry_count; + struct list_head hb_notify_q; + void *dbg_fwsave; + int dbg_fwsave_len; + bfa_boolean_t dbg_fwsave_once; + enum bfi_mclass ioc_mc; + struct bfa_ioc_regs_s ioc_regs; + struct bfa_trc_mod_s *trcmod; + struct bfa_aen_s *aen; + struct bfa_log_mod_s *logm; + struct bfa_ioc_drv_stats_s stats; + bfa_boolean_t auto_recover; + bfa_boolean_t fcmode; + bfa_boolean_t ctdev; + bfa_boolean_t cna; + bfa_boolean_t pllinit; + u8 port_id; + + struct bfa_dma_s attr_dma; + struct bfi_ioc_attr_s *attr; + struct bfa_ioc_cbfn_s *cbfn; + struct bfa_ioc_mbox_mod_s mbox_mod; +}; + +#define bfa_ioc_pcifn(__ioc) (__ioc)->pcidev.pci_func +#define bfa_ioc_devid(__ioc) (__ioc)->pcidev.device_id +#define bfa_ioc_bar0(__ioc) (__ioc)->pcidev.pci_bar_kva +#define bfa_ioc_portid(__ioc) ((__ioc)->port_id) +#define bfa_ioc_fetch_stats(__ioc, __stats) \ + ((__stats)->drv_stats) = (__ioc)->stats +#define bfa_ioc_clr_stats(__ioc) \ + bfa_os_memset(&(__ioc)->stats, 0, sizeof((__ioc)->stats)) +#define bfa_ioc_maxfrsize(__ioc) (__ioc)->attr->maxfrsize +#define bfa_ioc_rx_bbcredit(__ioc) (__ioc)->attr->rx_bbcredit +#define bfa_ioc_speed_sup(__ioc) \ + BFI_ADAPTER_GETP(SPEED, (__ioc)->attr->adapter_prop) + +/** + * IOC mailbox interface + */ +void bfa_ioc_mbox_queue(struct bfa_ioc_s *ioc, struct bfa_mbox_cmd_s *cmd); +void bfa_ioc_mbox_register(struct bfa_ioc_s *ioc, + bfa_ioc_mbox_mcfunc_t *mcfuncs); +void bfa_ioc_mbox_isr(struct bfa_ioc_s *ioc); +void bfa_ioc_mbox_send(struct bfa_ioc_s *ioc, void *ioc_msg, int len); +void bfa_ioc_msgget(struct bfa_ioc_s *ioc, void *mbmsg); +void bfa_ioc_mbox_regisr(struct bfa_ioc_s *ioc, enum bfi_mclass mc, + bfa_ioc_mbox_mcfunc_t cbfn, void *cbarg); + +/** + * IOC interfaces + */ +void bfa_ioc_attach(struct bfa_ioc_s *ioc, void *bfa, + struct bfa_ioc_cbfn_s *cbfn, struct bfa_timer_mod_s *timer_mod, + struct bfa_trc_mod_s *trcmod, + struct bfa_aen_s *aen, struct bfa_log_mod_s *logm); +void bfa_ioc_detach(struct bfa_ioc_s *ioc); +void bfa_ioc_pci_init(struct bfa_ioc_s *ioc, struct bfa_pcidev_s *pcidev, + enum bfi_mclass mc); +u32 bfa_ioc_meminfo(void); +void bfa_ioc_mem_claim(struct bfa_ioc_s *ioc, u8 *dm_kva, u64 dm_pa); +void bfa_ioc_enable(struct bfa_ioc_s *ioc); +void bfa_ioc_disable(struct bfa_ioc_s *ioc); +bfa_boolean_t bfa_ioc_intx_claim(struct bfa_ioc_s *ioc); + +void bfa_ioc_boot(struct bfa_ioc_s *ioc, u32 boot_type, u32 boot_param); +void bfa_ioc_isr(struct bfa_ioc_s *ioc, struct bfi_mbmsg_s *msg); +void bfa_ioc_error_isr(struct bfa_ioc_s *ioc); +void bfa_ioc_isr_mode_set(struct bfa_ioc_s *ioc, bfa_boolean_t intx); +bfa_status_t bfa_ioc_pll_init(struct bfa_ioc_s *ioc); +bfa_boolean_t bfa_ioc_is_operational(struct bfa_ioc_s *ioc); +bfa_boolean_t bfa_ioc_is_disabled(struct bfa_ioc_s *ioc); +bfa_boolean_t bfa_ioc_fw_mismatch(struct bfa_ioc_s *ioc); +bfa_boolean_t bfa_ioc_adapter_is_disabled(struct bfa_ioc_s *ioc); +void bfa_ioc_cfg_complete(struct bfa_ioc_s *ioc); +void bfa_ioc_get_attr(struct bfa_ioc_s *ioc, struct bfa_ioc_attr_s 
*ioc_attr);
+void bfa_ioc_get_adapter_attr(struct bfa_ioc_s *ioc,
+		struct bfa_adapter_attr_s *ad_attr);
+int bfa_ioc_debug_trcsz(bfa_boolean_t auto_recover);
+void bfa_ioc_debug_memclaim(struct bfa_ioc_s *ioc, void *dbg_fwsave);
+bfa_status_t bfa_ioc_debug_fwsave(struct bfa_ioc_s *ioc, void *trcdata,
+		int *trclen);
+bfa_status_t bfa_ioc_debug_fwtrc(struct bfa_ioc_s *ioc, void *trcdata,
+		int *trclen);
+u32 bfa_ioc_smem_pgnum(struct bfa_ioc_s *ioc, u32 fmaddr);
+u32 bfa_ioc_smem_pgoff(struct bfa_ioc_s *ioc, u32 fmaddr);
+void bfa_ioc_set_fcmode(struct bfa_ioc_s *ioc);
+bfa_boolean_t bfa_ioc_get_fcmode(struct bfa_ioc_s *ioc);
+void bfa_ioc_hbfail_register(struct bfa_ioc_s *ioc,
+		struct bfa_ioc_hbfail_notify_s *notify);
+
+/*
+ * bfa mfg wwn API functions
+ */
+wwn_t bfa_ioc_get_pwwn(struct bfa_ioc_s *ioc);
+wwn_t bfa_ioc_get_nwwn(struct bfa_ioc_s *ioc);
+wwn_t bfa_ioc_get_wwn_naa5(struct bfa_ioc_s *ioc, u16 inst);
+mac_t bfa_ioc_get_mac(struct bfa_ioc_s *ioc);
+u64 bfa_ioc_get_adid(struct bfa_ioc_s *ioc);
+
+#endif /* __BFA_IOC_H__ */
+
diff --git a/drivers/scsi/bfa/bfa_iocfc.c b/drivers/scsi/bfa/bfa_iocfc.c
new file mode 100644
index 000000000000..12350b022d63
--- /dev/null
+++ b/drivers/scsi/bfa/bfa_iocfc.c
@@ -0,0 +1,872 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include "bfa_callback_priv.h" +#include "bfad_drv.h" + +BFA_TRC_FILE(HAL, IOCFC); + +/** + * IOC local definitions + */ +#define BFA_IOCFC_TOV 5000 /* msecs */ + +enum { + BFA_IOCFC_ACT_NONE = 0, + BFA_IOCFC_ACT_INIT = 1, + BFA_IOCFC_ACT_STOP = 2, + BFA_IOCFC_ACT_DISABLE = 3, +}; + +/* + * forward declarations + */ +static void bfa_iocfc_enable_cbfn(void *bfa_arg, enum bfa_status status); +static void bfa_iocfc_disable_cbfn(void *bfa_arg); +static void bfa_iocfc_hbfail_cbfn(void *bfa_arg); +static void bfa_iocfc_reset_cbfn(void *bfa_arg); +static void bfa_iocfc_stats_clear(void *bfa_arg); +static void bfa_iocfc_stats_swap(struct bfa_fw_stats_s *d, + struct bfa_fw_stats_s *s); +static void bfa_iocfc_stats_clr_cb(void *bfa_arg, bfa_boolean_t complete); +static void bfa_iocfc_stats_clr_timeout(void *bfa_arg); +static void bfa_iocfc_stats_cb(void *bfa_arg, bfa_boolean_t complete); +static void bfa_iocfc_stats_timeout(void *bfa_arg); + +static struct bfa_ioc_cbfn_s bfa_iocfc_cbfn; + +/** + * bfa_ioc_pvt BFA IOC private functions + */ + +static void +bfa_iocfc_cqs_sz(struct bfa_iocfc_cfg_s *cfg, u32 *dm_len) +{ + int i, per_reqq_sz, per_rspq_sz; + + per_reqq_sz = BFA_ROUNDUP((cfg->drvcfg.num_reqq_elems * BFI_LMSG_SZ), + BFA_DMA_ALIGN_SZ); + per_rspq_sz = BFA_ROUNDUP((cfg->drvcfg.num_rspq_elems * BFI_LMSG_SZ), + BFA_DMA_ALIGN_SZ); + + /* + * Calculate CQ size + */ + for (i = 0; i < cfg->fwcfg.num_cqs; i++) { + *dm_len = *dm_len + per_reqq_sz; + *dm_len = *dm_len + per_rspq_sz; + } + + /* + * Calculate Shadow CI/PI size + */ + for (i = 0; i < cfg->fwcfg.num_cqs; i++) + *dm_len += (2 * BFA_CACHELINE_SZ); +} + +static void +bfa_iocfc_fw_cfg_sz(struct bfa_iocfc_cfg_s *cfg, u32 *dm_len) +{ + *dm_len += + BFA_ROUNDUP(sizeof(struct bfi_iocfc_cfg_s), BFA_CACHELINE_SZ); + *dm_len += + BFA_ROUNDUP(sizeof(struct bfi_iocfc_cfgrsp_s), + BFA_CACHELINE_SZ); + *dm_len += BFA_ROUNDUP(sizeof(struct bfa_fw_stats_s), BFA_CACHELINE_SZ); +} + +/** + * Use the Mailbox interface to send BFI_IOCFC_H2I_CFG_REQ + */ +static void +bfa_iocfc_send_cfg(void *bfa_arg) +{ + struct bfa_s *bfa = bfa_arg; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + struct bfi_iocfc_cfg_req_s cfg_req; + struct bfi_iocfc_cfg_s *cfg_info = iocfc->cfginfo; + struct bfa_iocfc_cfg_s *cfg = &iocfc->cfg; + int i; + + bfa_assert(cfg->fwcfg.num_cqs <= BFI_IOC_MAX_CQS); + bfa_trc(bfa, cfg->fwcfg.num_cqs); + + iocfc->cfgdone = BFA_FALSE; + bfa_iocfc_reset_queues(bfa); + + /** + * initialize IOC configuration info + */ + cfg_info->endian_sig = BFI_IOC_ENDIAN_SIG; + cfg_info->num_cqs = cfg->fwcfg.num_cqs; + + bfa_dma_be_addr_set(cfg_info->cfgrsp_addr, iocfc->cfgrsp_dma.pa); + bfa_dma_be_addr_set(cfg_info->stats_addr, iocfc->stats_pa); + + /** + * dma map REQ and RSP circular queues and shadow pointers + */ + for (i = 0; i < cfg->fwcfg.num_cqs; i++) { + bfa_dma_be_addr_set(cfg_info->req_cq_ba[i], + iocfc->req_cq_ba[i].pa); + bfa_dma_be_addr_set(cfg_info->req_shadow_ci[i], + iocfc->req_cq_shadow_ci[i].pa); + cfg_info->req_cq_elems[i] = + bfa_os_htons(cfg->drvcfg.num_reqq_elems); + + bfa_dma_be_addr_set(cfg_info->rsp_cq_ba[i], + iocfc->rsp_cq_ba[i].pa); + bfa_dma_be_addr_set(cfg_info->rsp_shadow_pi[i], + iocfc->rsp_cq_shadow_pi[i].pa); + cfg_info->rsp_cq_elems[i] = + bfa_os_htons(cfg->drvcfg.num_rspq_elems); + } + + /** + * dma map IOC configuration itself + */ + bfi_h2i_set(cfg_req.mh, BFI_MC_IOCFC, BFI_IOCFC_H2I_CFG_REQ, + bfa_lpuid(bfa)); + 
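+	/*
+	 * The mailbox request itself carries only the DMA address of the
+	 * config info page filled in above; firmware is expected to fetch
+	 * the page contents on its own.
+	 */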
bfa_dma_be_addr_set(cfg_req.ioc_cfg_dma_addr, iocfc->cfg_info.pa); + + bfa_ioc_mbox_send(&bfa->ioc, &cfg_req, + sizeof(struct bfi_iocfc_cfg_req_s)); +} + +static void +bfa_iocfc_init_mem(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_pcidev_s *pcidev) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + bfa->bfad = bfad; + iocfc->bfa = bfa; + iocfc->action = BFA_IOCFC_ACT_NONE; + + bfa_os_assign(iocfc->cfg, *cfg); + + /** + * Initialize chip specific handlers. + */ + if (bfa_ioc_devid(&bfa->ioc) == BFA_PCI_DEVICE_ID_CT) { + iocfc->hwif.hw_reginit = bfa_hwct_reginit; + iocfc->hwif.hw_rspq_ack = bfa_hwct_rspq_ack; + iocfc->hwif.hw_msix_init = bfa_hwct_msix_init; + iocfc->hwif.hw_msix_install = bfa_hwct_msix_install; + iocfc->hwif.hw_msix_uninstall = bfa_hwct_msix_uninstall; + iocfc->hwif.hw_isr_mode_set = bfa_hwct_isr_mode_set; + iocfc->hwif.hw_msix_getvecs = bfa_hwct_msix_getvecs; + } else { + iocfc->hwif.hw_reginit = bfa_hwcb_reginit; + iocfc->hwif.hw_rspq_ack = bfa_hwcb_rspq_ack; + iocfc->hwif.hw_msix_init = bfa_hwcb_msix_init; + iocfc->hwif.hw_msix_install = bfa_hwcb_msix_install; + iocfc->hwif.hw_msix_uninstall = bfa_hwcb_msix_uninstall; + iocfc->hwif.hw_isr_mode_set = bfa_hwcb_isr_mode_set; + iocfc->hwif.hw_msix_getvecs = bfa_hwcb_msix_getvecs; + } + + iocfc->hwif.hw_reginit(bfa); + bfa->msix.nvecs = 0; +} + +static void +bfa_iocfc_mem_claim(struct bfa_s *bfa, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo) +{ + u8 *dm_kva; + u64 dm_pa; + int i, per_reqq_sz, per_rspq_sz; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + int dbgsz; + + dm_kva = bfa_meminfo_dma_virt(meminfo); + dm_pa = bfa_meminfo_dma_phys(meminfo); + + /* + * First allocate dma memory for IOC. + */ + bfa_ioc_mem_claim(&bfa->ioc, dm_kva, dm_pa); + dm_kva += bfa_ioc_meminfo(); + dm_pa += bfa_ioc_meminfo(); + + /* + * Claim DMA-able memory for the request/response queues and for shadow + * ci/pi registers + */ + per_reqq_sz = BFA_ROUNDUP((cfg->drvcfg.num_reqq_elems * BFI_LMSG_SZ), + BFA_DMA_ALIGN_SZ); + per_rspq_sz = BFA_ROUNDUP((cfg->drvcfg.num_rspq_elems * BFI_LMSG_SZ), + BFA_DMA_ALIGN_SZ); + + for (i = 0; i < cfg->fwcfg.num_cqs; i++) { + iocfc->req_cq_ba[i].kva = dm_kva; + iocfc->req_cq_ba[i].pa = dm_pa; + bfa_os_memset(dm_kva, 0, per_reqq_sz); + dm_kva += per_reqq_sz; + dm_pa += per_reqq_sz; + + iocfc->rsp_cq_ba[i].kva = dm_kva; + iocfc->rsp_cq_ba[i].pa = dm_pa; + bfa_os_memset(dm_kva, 0, per_rspq_sz); + dm_kva += per_rspq_sz; + dm_pa += per_rspq_sz; + } + + for (i = 0; i < cfg->fwcfg.num_cqs; i++) { + iocfc->req_cq_shadow_ci[i].kva = dm_kva; + iocfc->req_cq_shadow_ci[i].pa = dm_pa; + dm_kva += BFA_CACHELINE_SZ; + dm_pa += BFA_CACHELINE_SZ; + + iocfc->rsp_cq_shadow_pi[i].kva = dm_kva; + iocfc->rsp_cq_shadow_pi[i].pa = dm_pa; + dm_kva += BFA_CACHELINE_SZ; + dm_pa += BFA_CACHELINE_SZ; + } + + /* + * Claim DMA-able memory for the config info page + */ + bfa->iocfc.cfg_info.kva = dm_kva; + bfa->iocfc.cfg_info.pa = dm_pa; + bfa->iocfc.cfginfo = (struct bfi_iocfc_cfg_s *) dm_kva; + dm_kva += BFA_ROUNDUP(sizeof(struct bfi_iocfc_cfg_s), BFA_CACHELINE_SZ); + dm_pa += BFA_ROUNDUP(sizeof(struct bfi_iocfc_cfg_s), BFA_CACHELINE_SZ); + + /* + * Claim DMA-able memory for the config response + */ + bfa->iocfc.cfgrsp_dma.kva = dm_kva; + bfa->iocfc.cfgrsp_dma.pa = dm_pa; + bfa->iocfc.cfgrsp = (struct bfi_iocfc_cfgrsp_s *) dm_kva; + + dm_kva += + BFA_ROUNDUP(sizeof(struct bfi_iocfc_cfgrsp_s), + BFA_CACHELINE_SZ); + dm_pa += BFA_ROUNDUP(sizeof(struct bfi_iocfc_cfgrsp_s), + BFA_CACHELINE_SZ); + + /* + * 
Claim DMA-able memory for iocfc stats + */ + bfa->iocfc.stats_kva = dm_kva; + bfa->iocfc.stats_pa = dm_pa; + bfa->iocfc.fw_stats = (struct bfa_fw_stats_s *) dm_kva; + dm_kva += BFA_ROUNDUP(sizeof(struct bfa_fw_stats_s), BFA_CACHELINE_SZ); + dm_pa += BFA_ROUNDUP(sizeof(struct bfa_fw_stats_s), BFA_CACHELINE_SZ); + + bfa_meminfo_dma_virt(meminfo) = dm_kva; + bfa_meminfo_dma_phys(meminfo) = dm_pa; + + dbgsz = bfa_ioc_debug_trcsz(bfa_auto_recover); + if (dbgsz > 0) { + bfa_ioc_debug_memclaim(&bfa->ioc, bfa_meminfo_kva(meminfo)); + bfa_meminfo_kva(meminfo) += dbgsz; + } +} + +/** + * BFA submodules initialization completion notification. + */ +static void +bfa_iocfc_initdone_submod(struct bfa_s *bfa) +{ + int i; + + for (i = 0; hal_mods[i]; i++) + hal_mods[i]->initdone(bfa); +} + +/** + * Start BFA submodules. + */ +static void +bfa_iocfc_start_submod(struct bfa_s *bfa) +{ + int i; + + bfa->rme_process = BFA_TRUE; + + for (i = 0; hal_mods[i]; i++) + hal_mods[i]->start(bfa); +} + +/** + * Disable BFA submodules. + */ +static void +bfa_iocfc_disable_submod(struct bfa_s *bfa) +{ + int i; + + for (i = 0; hal_mods[i]; i++) + hal_mods[i]->iocdisable(bfa); +} + +static void +bfa_iocfc_init_cb(void *bfa_arg, bfa_boolean_t complete) +{ + struct bfa_s *bfa = bfa_arg; + + if (complete) { + if (bfa->iocfc.cfgdone) + bfa_cb_init(bfa->bfad, BFA_STATUS_OK); + else + bfa_cb_init(bfa->bfad, BFA_STATUS_FAILED); + } else + bfa->iocfc.action = BFA_IOCFC_ACT_NONE; +} + +static void +bfa_iocfc_stop_cb(void *bfa_arg, bfa_boolean_t compl) +{ + struct bfa_s *bfa = bfa_arg; + struct bfad_s *bfad = bfa->bfad; + + if (compl) + complete(&bfad->comp); + + else + bfa->iocfc.action = BFA_IOCFC_ACT_NONE; +} + +static void +bfa_iocfc_disable_cb(void *bfa_arg, bfa_boolean_t compl) +{ + struct bfa_s *bfa = bfa_arg; + struct bfad_s *bfad = bfa->bfad; + + if (compl) + complete(&bfad->disable_comp); +} + +/** + * Update BFA configuration from firmware configuration. 
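+ *
+ * Multi-byte resource counts and interrupt attributes arrive from
+ * firmware in network byte order and are swapped in place below;
+ * num_cqs is a single byte, so the apparent self-assignment is a
+ * no-op kept for symmetry with the other fields.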
+ */ +static void +bfa_iocfc_cfgrsp(struct bfa_s *bfa) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + struct bfi_iocfc_cfgrsp_s *cfgrsp = iocfc->cfgrsp; + struct bfa_iocfc_fwcfg_s *fwcfg = &cfgrsp->fwcfg; + struct bfi_iocfc_cfg_s *cfginfo = iocfc->cfginfo; + + fwcfg->num_cqs = fwcfg->num_cqs; + fwcfg->num_ioim_reqs = bfa_os_ntohs(fwcfg->num_ioim_reqs); + fwcfg->num_tskim_reqs = bfa_os_ntohs(fwcfg->num_tskim_reqs); + fwcfg->num_fcxp_reqs = bfa_os_ntohs(fwcfg->num_fcxp_reqs); + fwcfg->num_uf_bufs = bfa_os_ntohs(fwcfg->num_uf_bufs); + fwcfg->num_rports = bfa_os_ntohs(fwcfg->num_rports); + + cfginfo->intr_attr.coalesce = cfgrsp->intr_attr.coalesce; + cfginfo->intr_attr.delay = bfa_os_ntohs(cfgrsp->intr_attr.delay); + cfginfo->intr_attr.latency = bfa_os_ntohs(cfgrsp->intr_attr.latency); + + iocfc->cfgdone = BFA_TRUE; + + /** + * Configuration is complete - initialize/start submodules + */ + if (iocfc->action == BFA_IOCFC_ACT_INIT) + bfa_cb_queue(bfa, &iocfc->init_hcb_qe, bfa_iocfc_init_cb, bfa); + else + bfa_iocfc_start_submod(bfa); +} + +static void +bfa_iocfc_stats_clear(void *bfa_arg) +{ + struct bfa_s *bfa = bfa_arg; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + struct bfi_iocfc_stats_req_s stats_req; + + bfa_timer_start(bfa, &iocfc->stats_timer, + bfa_iocfc_stats_clr_timeout, bfa, + BFA_IOCFC_TOV); + + bfi_h2i_set(stats_req.mh, BFI_MC_IOCFC, BFI_IOCFC_H2I_CLEAR_STATS_REQ, + bfa_lpuid(bfa)); + bfa_ioc_mbox_send(&bfa->ioc, &stats_req, + sizeof(struct bfi_iocfc_stats_req_s)); +} + +static void +bfa_iocfc_stats_swap(struct bfa_fw_stats_s *d, struct bfa_fw_stats_s *s) +{ + u32 *dip = (u32 *) d; + u32 *sip = (u32 *) s; + int i; + + for (i = 0; i < (sizeof(struct bfa_fw_stats_s) / sizeof(u32)); i++) + dip[i] = bfa_os_ntohl(sip[i]); +} + +static void +bfa_iocfc_stats_clr_cb(void *bfa_arg, bfa_boolean_t complete) +{ + struct bfa_s *bfa = bfa_arg; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + if (complete) { + bfa_ioc_clr_stats(&bfa->ioc); + iocfc->stats_cbfn(iocfc->stats_cbarg, iocfc->stats_status); + } else { + iocfc->stats_busy = BFA_FALSE; + iocfc->stats_status = BFA_STATUS_OK; + } +} + +static void +bfa_iocfc_stats_clr_timeout(void *bfa_arg) +{ + struct bfa_s *bfa = bfa_arg; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + bfa_trc(bfa, 0); + + iocfc->stats_status = BFA_STATUS_ETIMER; + bfa_cb_queue(bfa, &iocfc->stats_hcb_qe, bfa_iocfc_stats_clr_cb, bfa); +} + +static void +bfa_iocfc_stats_cb(void *bfa_arg, bfa_boolean_t complete) +{ + struct bfa_s *bfa = bfa_arg; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + if (complete) { + if (iocfc->stats_status == BFA_STATUS_OK) { + bfa_os_memset(iocfc->stats_ret, 0, + sizeof(*iocfc->stats_ret)); + bfa_iocfc_stats_swap(&iocfc->stats_ret->fw_stats, + iocfc->fw_stats); + } + iocfc->stats_cbfn(iocfc->stats_cbarg, iocfc->stats_status); + } else { + iocfc->stats_busy = BFA_FALSE; + iocfc->stats_status = BFA_STATUS_OK; + } +} + +static void +bfa_iocfc_stats_timeout(void *bfa_arg) +{ + struct bfa_s *bfa = bfa_arg; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + bfa_trc(bfa, 0); + + iocfc->stats_status = BFA_STATUS_ETIMER; + bfa_cb_queue(bfa, &iocfc->stats_hcb_qe, bfa_iocfc_stats_cb, bfa); +} + +static void +bfa_iocfc_stats_query(struct bfa_s *bfa) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + struct bfi_iocfc_stats_req_s stats_req; + + bfa_timer_start(bfa, &iocfc->stats_timer, + bfa_iocfc_stats_timeout, bfa, BFA_IOCFC_TOV); + + bfi_h2i_set(stats_req.mh, BFI_MC_IOCFC, BFI_IOCFC_H2I_GET_STATS_REQ, + bfa_lpuid(bfa)); + bfa_ioc_mbox_send(&bfa->ioc, &stats_req, + 
+			  sizeof(struct bfi_iocfc_stats_req_s));
+}
+
+void
+bfa_iocfc_reset_queues(struct bfa_s *bfa)
+{
+	int		q;
+
+	for (q = 0; q < BFI_IOC_MAX_CQS; q++) {
+		bfa_reqq_ci(bfa, q) = 0;
+		bfa_reqq_pi(bfa, q) = 0;
+		bfa_rspq_ci(bfa, q) = 0;
+		bfa_rspq_pi(bfa, q) = 0;
+	}
+}
+
+/**
+ * IOC enable request is complete
+ */
+static void
+bfa_iocfc_enable_cbfn(void *bfa_arg, enum bfa_status status)
+{
+	struct bfa_s	*bfa = bfa_arg;
+
+	if (status != BFA_STATUS_OK) {
+		bfa_isr_disable(bfa);
+		if (bfa->iocfc.action == BFA_IOCFC_ACT_INIT)
+			bfa_cb_queue(bfa, &bfa->iocfc.init_hcb_qe,
+				     bfa_iocfc_init_cb, bfa);
+		return;
+	}
+
+	bfa_iocfc_initdone_submod(bfa);
+	bfa_iocfc_send_cfg(bfa);
+}
+
+/**
+ * IOC disable request is complete
+ */
+static void
+bfa_iocfc_disable_cbfn(void *bfa_arg)
+{
+	struct bfa_s	*bfa = bfa_arg;
+
+	bfa_isr_disable(bfa);
+	bfa_iocfc_disable_submod(bfa);
+
+	if (bfa->iocfc.action == BFA_IOCFC_ACT_STOP)
+		bfa_cb_queue(bfa, &bfa->iocfc.stop_hcb_qe, bfa_iocfc_stop_cb,
+			     bfa);
+	else {
+		bfa_assert(bfa->iocfc.action == BFA_IOCFC_ACT_DISABLE);
+		bfa_cb_queue(bfa, &bfa->iocfc.dis_hcb_qe, bfa_iocfc_disable_cb,
+			     bfa);
+	}
+}
+
+/**
+ * Notify sub-modules of hardware failure.
+ */
+static void
+bfa_iocfc_hbfail_cbfn(void *bfa_arg)
+{
+	struct bfa_s	*bfa = bfa_arg;
+
+	bfa->rme_process = BFA_FALSE;
+
+	bfa_isr_disable(bfa);
+	bfa_iocfc_disable_submod(bfa);
+
+	if (bfa->iocfc.action == BFA_IOCFC_ACT_INIT)
+		bfa_cb_queue(bfa, &bfa->iocfc.init_hcb_qe, bfa_iocfc_init_cb,
+			     bfa);
+}
+
+/**
+ * Actions on chip-reset completion.
+ */
+static void
+bfa_iocfc_reset_cbfn(void *bfa_arg)
+{
+	struct bfa_s	*bfa = bfa_arg;
+
+	bfa_iocfc_reset_queues(bfa);
+	bfa_isr_enable(bfa);
+}
+
+
+
+/**
+ * bfa_ioc_public
+ */
+
+/**
+ * Query IOC memory requirement information.
+ */
+void
+bfa_iocfc_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *km_len,
+		  u32 *dm_len)
+{
+	/* dma memory for IOC */
+	*dm_len += bfa_ioc_meminfo();
+
+	bfa_iocfc_fw_cfg_sz(cfg, dm_len);
+	bfa_iocfc_cqs_sz(cfg, dm_len);
+	*km_len += bfa_ioc_debug_trcsz(bfa_auto_recover);
+}
+
+/**
+ * Attach the IOCFC module: register IOC callbacks and claim the
+ * memory set aside by bfa_iocfc_meminfo().
+ */
+void
+bfa_iocfc_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg,
+		 struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev)
+{
+	int		i;
+
+	bfa_iocfc_cbfn.enable_cbfn = bfa_iocfc_enable_cbfn;
+	bfa_iocfc_cbfn.disable_cbfn = bfa_iocfc_disable_cbfn;
+	bfa_iocfc_cbfn.hbfail_cbfn = bfa_iocfc_hbfail_cbfn;
+	bfa_iocfc_cbfn.reset_cbfn = bfa_iocfc_reset_cbfn;
+
+	bfa_ioc_attach(&bfa->ioc, bfa, &bfa_iocfc_cbfn, &bfa->timer_mod,
+		       bfa->trcmod, bfa->aen, bfa->logm);
+	bfa_ioc_pci_init(&bfa->ioc, pcidev, BFI_MC_IOCFC);
+	bfa_ioc_mbox_register(&bfa->ioc, bfa_mbox_isrs);
+
+	/**
+	 * Choose FC (ssid: 0x1C) v/s FCoE (ssid: 0x14) mode.
+	 */
+	if (0)
+		bfa_ioc_set_fcmode(&bfa->ioc);
+
+	bfa_iocfc_init_mem(bfa, bfad, cfg, pcidev);
+	bfa_iocfc_mem_claim(bfa, cfg, meminfo);
+	bfa_timer_init(&bfa->timer_mod);
+
+	INIT_LIST_HEAD(&bfa->comp_q);
+	for (i = 0; i < BFI_IOC_MAX_CQS; i++)
+		INIT_LIST_HEAD(&bfa->reqq_waitq[i]);
+}
+
+/**
+ * Detach the IOC from this BFA instance.
+ */
+void
+bfa_iocfc_detach(struct bfa_s *bfa)
+{
+	bfa_ioc_detach(&bfa->ioc);
+}
+
+/**
+ * Initialize the IOC: enable it and install the interrupt vector
+ * handlers.
+ */
+void
+bfa_iocfc_init(struct bfa_s *bfa)
+{
+	bfa->iocfc.action = BFA_IOCFC_ACT_INIT;
+	bfa_ioc_enable(&bfa->ioc);
+	bfa_msix_install(bfa);
+}
+
+/**
+ * IOC start called from bfa_start(). Called to start IOC operations
+ * at driver instantiation for this instance.
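+ *
+ * A typical bring-up sequence on the driver side looks roughly like
+ * this (sketch only; completion waiting and error handling omitted):
+ *
+ *	bfa_iocfc_attach(bfa, bfad, cfg, meminfo, pcidev);
+ *	bfa_iocfc_init(bfa);	-- completion reported via bfa_cb_init()
+ *	bfa_iocfc_start(bfa);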
+ */ +void +bfa_iocfc_start(struct bfa_s *bfa) +{ + if (bfa->iocfc.cfgdone) + bfa_iocfc_start_submod(bfa); +} + +/** + * IOC stop called from bfa_stop(). Called only when driver is unloaded + * for this instance. + */ +void +bfa_iocfc_stop(struct bfa_s *bfa) +{ + bfa->iocfc.action = BFA_IOCFC_ACT_STOP; + + bfa->rme_process = BFA_FALSE; + bfa_ioc_disable(&bfa->ioc); +} + +void +bfa_iocfc_isr(void *bfaarg, struct bfi_mbmsg_s *m) +{ + struct bfa_s *bfa = bfaarg; + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + union bfi_iocfc_i2h_msg_u *msg; + + msg = (union bfi_iocfc_i2h_msg_u *) m; + bfa_trc(bfa, msg->mh.msg_id); + + switch (msg->mh.msg_id) { + case BFI_IOCFC_I2H_CFG_REPLY: + iocfc->cfg_reply = &msg->cfg_reply; + bfa_iocfc_cfgrsp(bfa); + break; + + case BFI_IOCFC_I2H_GET_STATS_RSP: + if (iocfc->stats_busy == BFA_FALSE + || iocfc->stats_status == BFA_STATUS_ETIMER) + break; + + bfa_timer_stop(&iocfc->stats_timer); + iocfc->stats_status = BFA_STATUS_OK; + bfa_cb_queue(bfa, &iocfc->stats_hcb_qe, bfa_iocfc_stats_cb, + bfa); + break; + case BFI_IOCFC_I2H_CLEAR_STATS_RSP: + /* + * check for timer pop before processing the rsp + */ + if (iocfc->stats_busy == BFA_FALSE + || iocfc->stats_status == BFA_STATUS_ETIMER) + break; + + bfa_timer_stop(&iocfc->stats_timer); + iocfc->stats_status = BFA_STATUS_OK; + bfa_cb_queue(bfa, &iocfc->stats_hcb_qe, + bfa_iocfc_stats_clr_cb, bfa); + break; + case BFI_IOCFC_I2H_UPDATEQ_RSP: + iocfc->updateq_cbfn(iocfc->updateq_cbarg, BFA_STATUS_OK); + break; + default: + bfa_assert(0); + } +} + +#ifndef BFA_BIOS_BUILD +void +bfa_adapter_get_attr(struct bfa_s *bfa, struct bfa_adapter_attr_s *ad_attr) +{ + bfa_ioc_get_adapter_attr(&bfa->ioc, ad_attr); +} + +u64 +bfa_adapter_get_id(struct bfa_s *bfa) +{ + return bfa_ioc_get_adid(&bfa->ioc); +} + +void +bfa_iocfc_get_attr(struct bfa_s *bfa, struct bfa_iocfc_attr_s *attr) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + attr->intr_attr = iocfc->cfginfo->intr_attr; + attr->config = iocfc->cfg; +} + +bfa_status_t +bfa_iocfc_israttr_set(struct bfa_s *bfa, struct bfa_iocfc_intr_attr_s *attr) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + struct bfi_iocfc_set_intr_req_s *m; + + iocfc->cfginfo->intr_attr = *attr; + if (!bfa_iocfc_is_operational(bfa)) + return BFA_STATUS_OK; + + m = bfa_reqq_next(bfa, BFA_REQQ_IOC); + if (!m) + return BFA_STATUS_DEVBUSY; + + bfi_h2i_set(m->mh, BFI_MC_IOCFC, BFI_IOCFC_H2I_SET_INTR_REQ, + bfa_lpuid(bfa)); + m->coalesce = attr->coalesce; + m->delay = bfa_os_htons(attr->delay); + m->latency = bfa_os_htons(attr->latency); + + bfa_trc(bfa, attr->delay); + bfa_trc(bfa, attr->latency); + + bfa_reqq_produce(bfa, BFA_REQQ_IOC); + return BFA_STATUS_OK; +} + +void +bfa_iocfc_set_snsbase(struct bfa_s *bfa, u64 snsbase_pa) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + iocfc->cfginfo->sense_buf_len = (BFI_IOIM_SNSLEN - 1); + bfa_dma_be_addr_set(iocfc->cfginfo->ioim_snsbase, snsbase_pa); +} + +bfa_status_t +bfa_iocfc_get_stats(struct bfa_s *bfa, struct bfa_iocfc_stats_s *stats, + bfa_cb_ioc_t cbfn, void *cbarg) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + if (iocfc->stats_busy) { + bfa_trc(bfa, iocfc->stats_busy); + return (BFA_STATUS_DEVBUSY); + } + + iocfc->stats_busy = BFA_TRUE; + iocfc->stats_ret = stats; + iocfc->stats_cbfn = cbfn; + iocfc->stats_cbarg = cbarg; + + bfa_iocfc_stats_query(bfa); + + return (BFA_STATUS_OK); +} + +bfa_status_t +bfa_iocfc_clear_stats(struct bfa_s *bfa, bfa_cb_ioc_t cbfn, void *cbarg) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + + if (iocfc->stats_busy) { + bfa_trc(bfa, 
iocfc->stats_busy); + return (BFA_STATUS_DEVBUSY); + } + + iocfc->stats_busy = BFA_TRUE; + iocfc->stats_cbfn = cbfn; + iocfc->stats_cbarg = cbarg; + + bfa_iocfc_stats_clear(bfa); + return (BFA_STATUS_OK); +} + +/** + * Enable IOC after it is disabled. + */ +void +bfa_iocfc_enable(struct bfa_s *bfa) +{ + bfa_plog_str(bfa->plog, BFA_PL_MID_HAL, BFA_PL_EID_MISC, 0, + "IOC Enable"); + bfa_ioc_enable(&bfa->ioc); +} + +void +bfa_iocfc_disable(struct bfa_s *bfa) +{ + bfa_plog_str(bfa->plog, BFA_PL_MID_HAL, BFA_PL_EID_MISC, 0, + "IOC Disable"); + bfa->iocfc.action = BFA_IOCFC_ACT_DISABLE; + + bfa->rme_process = BFA_FALSE; + bfa_ioc_disable(&bfa->ioc); +} + + +bfa_boolean_t +bfa_iocfc_is_operational(struct bfa_s *bfa) +{ + return bfa_ioc_is_operational(&bfa->ioc) && bfa->iocfc.cfgdone; +} + +/** + * Return boot target port wwns -- read from boot information in flash. + */ +void +bfa_iocfc_get_bootwwns(struct bfa_s *bfa, u8 *nwwns, wwn_t **wwns) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + struct bfi_iocfc_cfgrsp_s *cfgrsp = iocfc->cfgrsp; + + *nwwns = cfgrsp->bootwwns.nwwns; + *wwns = cfgrsp->bootwwns.wwn; +} + +#endif + + diff --git a/drivers/scsi/bfa/bfa_iocfc.h b/drivers/scsi/bfa/bfa_iocfc.h new file mode 100644 index 000000000000..7ad177ed4cfc --- /dev/null +++ b/drivers/scsi/bfa/bfa_iocfc.h @@ -0,0 +1,168 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __BFA_IOCFC_H__ +#define __BFA_IOCFC_H__ + +#include +#include +#include +#include + +#define BFA_REQQ_NELEMS_MIN (4) +#define BFA_RSPQ_NELEMS_MIN (4) + +struct bfa_iocfc_regs_s { + bfa_os_addr_t intr_status; + bfa_os_addr_t intr_mask; + bfa_os_addr_t cpe_q_pi[BFI_IOC_MAX_CQS]; + bfa_os_addr_t cpe_q_ci[BFI_IOC_MAX_CQS]; + bfa_os_addr_t cpe_q_depth[BFI_IOC_MAX_CQS]; + bfa_os_addr_t cpe_q_ctrl[BFI_IOC_MAX_CQS]; + bfa_os_addr_t rme_q_ci[BFI_IOC_MAX_CQS]; + bfa_os_addr_t rme_q_pi[BFI_IOC_MAX_CQS]; + bfa_os_addr_t rme_q_depth[BFI_IOC_MAX_CQS]; + bfa_os_addr_t rme_q_ctrl[BFI_IOC_MAX_CQS]; +}; + +/** + * MSIX vector handlers + */ +#define BFA_MSIX_MAX_VECTORS 22 +typedef void (*bfa_msix_handler_t)(struct bfa_s *bfa, int vec); +struct bfa_msix_s { + int nvecs; + bfa_msix_handler_t handler[BFA_MSIX_MAX_VECTORS]; +}; + +/** + * Chip specific interfaces + */ +struct bfa_hwif_s { + void (*hw_reginit)(struct bfa_s *bfa); + void (*hw_rspq_ack)(struct bfa_s *bfa, int rspq); + void (*hw_msix_init)(struct bfa_s *bfa, int nvecs); + void (*hw_msix_install)(struct bfa_s *bfa); + void (*hw_msix_uninstall)(struct bfa_s *bfa); + void (*hw_isr_mode_set)(struct bfa_s *bfa, bfa_boolean_t msix); + void (*hw_msix_getvecs)(struct bfa_s *bfa, u32 *vecmap, + u32 *nvecs, u32 *maxvec); +}; +typedef void (*bfa_cb_iocfc_t) (void *cbarg, enum bfa_status status); + +struct bfa_iocfc_s { + struct bfa_s *bfa; + struct bfa_iocfc_cfg_s cfg; + int action; + + u32 req_cq_pi[BFI_IOC_MAX_CQS]; + u32 rsp_cq_ci[BFI_IOC_MAX_CQS]; + + struct bfa_cb_qe_s init_hcb_qe; + struct bfa_cb_qe_s stop_hcb_qe; + struct bfa_cb_qe_s dis_hcb_qe; + struct bfa_cb_qe_s stats_hcb_qe; + bfa_boolean_t cfgdone; + + struct bfa_dma_s cfg_info; + struct bfi_iocfc_cfg_s *cfginfo; + struct bfa_dma_s cfgrsp_dma; + struct bfi_iocfc_cfgrsp_s *cfgrsp; + struct bfi_iocfc_cfg_reply_s *cfg_reply; + + u8 *stats_kva; + u64 stats_pa; + struct bfa_fw_stats_s *fw_stats; + struct bfa_timer_s stats_timer; /* timer */ + struct bfa_iocfc_stats_s *stats_ret; /* driver stats location */ + bfa_status_t stats_status; /* stats/statsclr status */ + bfa_boolean_t stats_busy; /* outstanding stats */ + bfa_cb_ioc_t stats_cbfn; /* driver callback function */ + void *stats_cbarg; /* user callback arg */ + + struct bfa_dma_s req_cq_ba[BFI_IOC_MAX_CQS]; + struct bfa_dma_s req_cq_shadow_ci[BFI_IOC_MAX_CQS]; + struct bfa_dma_s rsp_cq_ba[BFI_IOC_MAX_CQS]; + struct bfa_dma_s rsp_cq_shadow_pi[BFI_IOC_MAX_CQS]; + struct bfa_iocfc_regs_s bfa_regs; /* BFA device registers */ + struct bfa_hwif_s hwif; + + bfa_cb_iocfc_t updateq_cbfn; /* bios callback function */ + void *updateq_cbarg; /* bios callback arg */ +}; + +#define bfa_lpuid(__bfa) bfa_ioc_portid(&(__bfa)->ioc) +#define bfa_msix_init(__bfa, __nvecs) \ + (__bfa)->iocfc.hwif.hw_msix_init(__bfa, __nvecs) +#define bfa_msix_install(__bfa) \ + (__bfa)->iocfc.hwif.hw_msix_install(__bfa) +#define bfa_msix_uninstall(__bfa) \ + (__bfa)->iocfc.hwif.hw_msix_uninstall(__bfa) +#define bfa_isr_mode_set(__bfa, __msix) \ + (__bfa)->iocfc.hwif.hw_isr_mode_set(__bfa, __msix) +#define bfa_msix_getvecs(__bfa, __vecmap, __nvecs, __maxvec) \ + (__bfa)->iocfc.hwif.hw_msix_getvecs(__bfa, __vecmap, __nvecs, __maxvec) + +/* + * FC specific IOC functions. 
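+ * The bfa_hwcb_* and bfa_hwct_* prototypes further below are the two
+ * chip-specific implementations behind the hwif function table, which
+ * is selected by PCI device id when the IOCFC module is attached.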
+ */ +void bfa_iocfc_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *km_len, + u32 *dm_len); +void bfa_iocfc_attach(struct bfa_s *bfa, void *bfad, + struct bfa_iocfc_cfg_s *cfg, struct bfa_meminfo_s *meminfo, + struct bfa_pcidev_s *pcidev); +void bfa_iocfc_detach(struct bfa_s *bfa); +void bfa_iocfc_init(struct bfa_s *bfa); +void bfa_iocfc_start(struct bfa_s *bfa); +void bfa_iocfc_stop(struct bfa_s *bfa); +void bfa_iocfc_isr(void *bfa, struct bfi_mbmsg_s *msg); +void bfa_iocfc_set_snsbase(struct bfa_s *bfa, u64 snsbase_pa); +bfa_boolean_t bfa_iocfc_is_operational(struct bfa_s *bfa); +void bfa_iocfc_reset_queues(struct bfa_s *bfa); +void bfa_iocfc_updateq(struct bfa_s *bfa, u32 reqq_ba, u32 rspq_ba, + u32 reqq_sci, u32 rspq_spi, + bfa_cb_iocfc_t cbfn, void *cbarg); + +void bfa_msix_all(struct bfa_s *bfa, int vec); +void bfa_msix_reqq(struct bfa_s *bfa, int vec); +void bfa_msix_rspq(struct bfa_s *bfa, int vec); +void bfa_msix_lpu_err(struct bfa_s *bfa, int vec); + +void bfa_hwcb_reginit(struct bfa_s *bfa); +void bfa_hwcb_rspq_ack(struct bfa_s *bfa, int rspq); +void bfa_hwcb_msix_init(struct bfa_s *bfa, int nvecs); +void bfa_hwcb_msix_install(struct bfa_s *bfa); +void bfa_hwcb_msix_uninstall(struct bfa_s *bfa); +void bfa_hwcb_isr_mode_set(struct bfa_s *bfa, bfa_boolean_t msix); +void bfa_hwcb_msix_getvecs(struct bfa_s *bfa, u32 *vecmap, + u32 *nvecs, u32 *maxvec); +void bfa_hwct_reginit(struct bfa_s *bfa); +void bfa_hwct_rspq_ack(struct bfa_s *bfa, int rspq); +void bfa_hwct_msix_init(struct bfa_s *bfa, int nvecs); +void bfa_hwct_msix_install(struct bfa_s *bfa); +void bfa_hwct_msix_uninstall(struct bfa_s *bfa); +void bfa_hwct_isr_mode_set(struct bfa_s *bfa, bfa_boolean_t msix); +void bfa_hwct_msix_getvecs(struct bfa_s *bfa, u32 *vecmap, + u32 *nvecs, u32 *maxvec); + +void bfa_com_meminfo(bfa_boolean_t mincfg, u32 *dm_len); +void bfa_com_attach(struct bfa_s *bfa, struct bfa_meminfo_s *mi, + bfa_boolean_t mincfg); +void bfa_iocfc_get_bootwwns(struct bfa_s *bfa, u8 *nwwns, wwn_t **wwns); + +#endif /* __BFA_IOCFC_H__ */ + diff --git a/drivers/scsi/bfa/bfa_iocfc_q.c b/drivers/scsi/bfa/bfa_iocfc_q.c new file mode 100644 index 000000000000..500a17df40b2 --- /dev/null +++ b/drivers/scsi/bfa/bfa_iocfc_q.c @@ -0,0 +1,44 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include "bfa_intr_priv.h" + +BFA_TRC_FILE(HAL, IOCFC_Q); + +void +bfa_iocfc_updateq(struct bfa_s *bfa, u32 reqq_ba, u32 rspq_ba, + u32 reqq_sci, u32 rspq_spi, bfa_cb_iocfc_t cbfn, + void *cbarg) +{ + struct bfa_iocfc_s *iocfc = &bfa->iocfc; + struct bfi_iocfc_updateq_req_s updateq_req; + + iocfc->updateq_cbfn = cbfn; + iocfc->updateq_cbarg = cbarg; + + bfi_h2i_set(updateq_req.mh, BFI_MC_IOCFC, BFI_IOCFC_H2I_UPDATEQ_REQ, + bfa_lpuid(bfa)); + + updateq_req.reqq_ba = bfa_os_htonl(reqq_ba); + updateq_req.rspq_ba = bfa_os_htonl(rspq_ba); + updateq_req.reqq_sci = bfa_os_htonl(reqq_sci); + updateq_req.rspq_spi = bfa_os_htonl(rspq_spi); + + bfa_ioc_mbox_send(&bfa->ioc, &updateq_req, + sizeof(struct bfi_iocfc_updateq_req_s)); +} diff --git a/drivers/scsi/bfa/bfa_ioim.c b/drivers/scsi/bfa/bfa_ioim.c new file mode 100644 index 000000000000..7ae2552e1e14 --- /dev/null +++ b/drivers/scsi/bfa/bfa_ioim.c @@ -0,0 +1,1311 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include +#include + +BFA_TRC_FILE(HAL, IOIM); + +/* + * forward declarations. + */ +static bfa_boolean_t bfa_ioim_send_ioreq(struct bfa_ioim_s *ioim); +static bfa_boolean_t bfa_ioim_sge_setup(struct bfa_ioim_s *ioim); +static void bfa_ioim_sgpg_setup(struct bfa_ioim_s *ioim); +static bfa_boolean_t bfa_ioim_send_abort(struct bfa_ioim_s *ioim); +static void bfa_ioim_notify_cleanup(struct bfa_ioim_s *ioim); +static void __bfa_cb_ioim_good_comp(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_ioim_comp(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_ioim_abort(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_ioim_failed(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_ioim_pathtov(void *cbarg, bfa_boolean_t complete); + +/** + * bfa_ioim_sm + */ + +/** + * IO state machine events + */ +enum bfa_ioim_event { + BFA_IOIM_SM_START = 1, /* io start request from host */ + BFA_IOIM_SM_COMP_GOOD = 2, /* io good comp, resource free */ + BFA_IOIM_SM_COMP = 3, /* io comp, resource is free */ + BFA_IOIM_SM_COMP_UTAG = 4, /* io comp, resource is free */ + BFA_IOIM_SM_DONE = 5, /* io comp, resource not free */ + BFA_IOIM_SM_FREE = 6, /* io resource is freed */ + BFA_IOIM_SM_ABORT = 7, /* abort request from scsi stack */ + BFA_IOIM_SM_ABORT_COMP = 8, /* abort from f/w */ + BFA_IOIM_SM_ABORT_DONE = 9, /* abort completion from f/w */ + BFA_IOIM_SM_QRESUME = 10, /* CQ space available to queue IO */ + BFA_IOIM_SM_SGALLOCED = 11, /* SG page allocation successful */ + BFA_IOIM_SM_SQRETRY = 12, /* sequence recovery retry */ + BFA_IOIM_SM_HCB = 13, /* bfa callback complete */ + BFA_IOIM_SM_CLEANUP = 14, /* IO cleanup from itnim */ + BFA_IOIM_SM_TMSTART = 15, /* IO cleanup from tskim */ + BFA_IOIM_SM_TMDONE = 16, /* IO cleanup from tskim */ + BFA_IOIM_SM_HWFAIL = 17, /* IOC h/w failure event */ + BFA_IOIM_SM_IOTOV = 18, /* ITN offline TOV */ +}; + +/* + * forward declaration of IO state machine + */ +static void 
bfa_ioim_sm_uninit(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_sgalloc(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_active(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_abort(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_cleanup(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_qfull(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_abort_qfull(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_cleanup_qfull(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_hcb(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_hcb_free(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); +static void bfa_ioim_sm_resfree(struct bfa_ioim_s *ioim, + enum bfa_ioim_event event); + +/** + * IO is not started (unallocated). + */ +static void +bfa_ioim_sm_uninit(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc_fp(ioim->bfa, ioim->iotag); + bfa_trc_fp(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_START: + if (!bfa_itnim_is_online(ioim->itnim)) { + if (!bfa_itnim_hold_io(ioim->itnim)) { + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + list_del(&ioim->qe); + list_add_tail(&ioim->qe, + &ioim->fcpim->ioim_comp_q); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, + __bfa_cb_ioim_pathtov, ioim); + } else { + list_del(&ioim->qe); + list_add_tail(&ioim->qe, + &ioim->itnim->pending_q); + } + break; + } + + if (ioim->nsges > BFI_SGE_INLINE) { + if (!bfa_ioim_sge_setup(ioim)) { + bfa_sm_set_state(ioim, bfa_ioim_sm_sgalloc); + return; + } + } + + if (!bfa_ioim_send_ioreq(ioim)) { + bfa_sm_set_state(ioim, bfa_ioim_sm_qfull); + break; + } + + bfa_sm_set_state(ioim, bfa_ioim_sm_active); + break; + + case BFA_IOIM_SM_IOTOV: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, + __bfa_cb_ioim_pathtov, ioim); + break; + + case BFA_IOIM_SM_ABORT: + /** + * IO in pending queue can get abort requests. Complete abort + * requests immediately. + */ + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_assert(bfa_q_is_on_q(&ioim->itnim->pending_q, ioim)); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort, + ioim); + break; + + default: + bfa_assert(0); + } +} + +/** + * IO is waiting for SG pages. + */ +static void +bfa_ioim_sm_sgalloc(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_trc(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_SGALLOCED: + if (!bfa_ioim_send_ioreq(ioim)) { + bfa_sm_set_state(ioim, bfa_ioim_sm_qfull); + break; + } + bfa_sm_set_state(ioim, bfa_ioim_sm_active); + break; + + case BFA_IOIM_SM_CLEANUP: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_sgpg_wcancel(ioim->bfa, &ioim->iosp->sgpg_wqe); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed, + ioim); + bfa_ioim_notify_cleanup(ioim); + break; + + case BFA_IOIM_SM_ABORT: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_sgpg_wcancel(ioim->bfa, &ioim->iosp->sgpg_wqe); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort, + ioim); + break; + + case BFA_IOIM_SM_HWFAIL: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_sgpg_wcancel(ioim->bfa, &ioim->iosp->sgpg_wqe); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed, + ioim); + break; + + default: + bfa_assert(0); + } +} + +/** + * IO is active. 
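+ * Completions unwind through the hcb states; aborts and cleanups that
+ * cannot get request-queue space park the IO in the abort_qfull or
+ * cleanup_qfull state until a QRESUME event fires.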
+ */ +static void +bfa_ioim_sm_active(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc_fp(ioim->bfa, ioim->iotag); + bfa_trc_fp(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_COMP_GOOD: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, + __bfa_cb_ioim_good_comp, ioim); + break; + + case BFA_IOIM_SM_COMP: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_comp, + ioim); + break; + + case BFA_IOIM_SM_DONE: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb_free); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_comp, + ioim); + break; + + case BFA_IOIM_SM_ABORT: + ioim->iosp->abort_explicit = BFA_TRUE; + ioim->io_cbfn = __bfa_cb_ioim_abort; + + if (bfa_ioim_send_abort(ioim)) + bfa_sm_set_state(ioim, bfa_ioim_sm_abort); + else { + bfa_sm_set_state(ioim, bfa_ioim_sm_abort_qfull); + bfa_reqq_wait(ioim->bfa, ioim->itnim->reqq, + &ioim->iosp->reqq_wait); + } + break; + + case BFA_IOIM_SM_CLEANUP: + ioim->iosp->abort_explicit = BFA_FALSE; + ioim->io_cbfn = __bfa_cb_ioim_failed; + + if (bfa_ioim_send_abort(ioim)) + bfa_sm_set_state(ioim, bfa_ioim_sm_cleanup); + else { + bfa_sm_set_state(ioim, bfa_ioim_sm_cleanup_qfull); + bfa_reqq_wait(ioim->bfa, ioim->itnim->reqq, + &ioim->iosp->reqq_wait); + } + break; + + case BFA_IOIM_SM_HWFAIL: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed, + ioim); + break; + + default: + bfa_assert(0); + } +} + +/** + * IO is being aborted, waiting for completion from firmware. + */ +static void +bfa_ioim_sm_abort(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_trc(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_COMP_GOOD: + case BFA_IOIM_SM_COMP: + case BFA_IOIM_SM_DONE: + case BFA_IOIM_SM_FREE: + break; + + case BFA_IOIM_SM_ABORT_DONE: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb_free); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort, + ioim); + break; + + case BFA_IOIM_SM_ABORT_COMP: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort, + ioim); + break; + + case BFA_IOIM_SM_COMP_UTAG: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort, + ioim); + break; + + case BFA_IOIM_SM_CLEANUP: + bfa_assert(ioim->iosp->abort_explicit == BFA_TRUE); + ioim->iosp->abort_explicit = BFA_FALSE; + + if (bfa_ioim_send_abort(ioim)) + bfa_sm_set_state(ioim, bfa_ioim_sm_cleanup); + else { + bfa_sm_set_state(ioim, bfa_ioim_sm_cleanup_qfull); + bfa_reqq_wait(ioim->bfa, ioim->itnim->reqq, + &ioim->iosp->reqq_wait); + } + break; + + case BFA_IOIM_SM_HWFAIL: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed, + ioim); + break; + + default: + bfa_assert(0); + } +} + +/** + * IO is being cleaned up (implicit abort), waiting for completion from + * firmware. 
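+ * A driver abort that races with cleanup only swaps the completion
+ * callback to the abort flavor; no second request is sent to
+ * firmware.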
+ */ +static void +bfa_ioim_sm_cleanup(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_trc(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_COMP_GOOD: + case BFA_IOIM_SM_COMP: + case BFA_IOIM_SM_DONE: + case BFA_IOIM_SM_FREE: + break; + + case BFA_IOIM_SM_ABORT: + /** + * IO is already being aborted implicitly + */ + ioim->io_cbfn = __bfa_cb_ioim_abort; + break; + + case BFA_IOIM_SM_ABORT_DONE: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb_free); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, ioim->io_cbfn, ioim); + bfa_ioim_notify_cleanup(ioim); + break; + + case BFA_IOIM_SM_ABORT_COMP: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, ioim->io_cbfn, ioim); + bfa_ioim_notify_cleanup(ioim); + break; + + case BFA_IOIM_SM_COMP_UTAG: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, ioim->io_cbfn, ioim); + bfa_ioim_notify_cleanup(ioim); + break; + + case BFA_IOIM_SM_HWFAIL: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed, + ioim); + break; + + case BFA_IOIM_SM_CLEANUP: + /** + * IO can be in cleanup state already due to TM command. 2nd cleanup + * request comes from ITN offline event. + */ + break; + + default: + bfa_assert(0); + } +} + +/** + * IO is waiting for room in request CQ + */ +static void +bfa_ioim_sm_qfull(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_trc(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_QRESUME: + bfa_sm_set_state(ioim, bfa_ioim_sm_active); + bfa_ioim_send_ioreq(ioim); + break; + + case BFA_IOIM_SM_ABORT: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_reqq_wcancel(&ioim->iosp->reqq_wait); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort, + ioim); + break; + + case BFA_IOIM_SM_CLEANUP: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_reqq_wcancel(&ioim->iosp->reqq_wait); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed, + ioim); + bfa_ioim_notify_cleanup(ioim); + break; + + case BFA_IOIM_SM_HWFAIL: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + bfa_reqq_wcancel(&ioim->iosp->reqq_wait); + bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed, + ioim); + break; + + default: + bfa_assert(0); + } +} + +/** + * Active IO is being aborted, waiting for room in request CQ. 
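+ * If the IO completes before the abort could be queued, the wait
+ * element is cancelled and the abort callback is delivered right
+ * away.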
+ */
+static void
+bfa_ioim_sm_abort_qfull(struct bfa_ioim_s *ioim, enum bfa_ioim_event event)
+{
+	bfa_trc(ioim->bfa, ioim->iotag);
+	bfa_trc(ioim->bfa, event);
+
+	switch (event) {
+	case BFA_IOIM_SM_QRESUME:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_abort);
+		bfa_ioim_send_abort(ioim);
+		break;
+
+	case BFA_IOIM_SM_CLEANUP:
+		bfa_assert(ioim->iosp->abort_explicit == BFA_TRUE);
+		ioim->iosp->abort_explicit = BFA_FALSE;
+		bfa_sm_set_state(ioim, bfa_ioim_sm_cleanup_qfull);
+		break;
+
+	case BFA_IOIM_SM_COMP_GOOD:
+	case BFA_IOIM_SM_COMP:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_hcb);
+		bfa_reqq_wcancel(&ioim->iosp->reqq_wait);
+		bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort,
+			     ioim);
+		break;
+
+	case BFA_IOIM_SM_DONE:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_hcb_free);
+		bfa_reqq_wcancel(&ioim->iosp->reqq_wait);
+		bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_abort,
+			     ioim);
+		break;
+
+	case BFA_IOIM_SM_HWFAIL:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_hcb);
+		bfa_reqq_wcancel(&ioim->iosp->reqq_wait);
+		bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed,
+			     ioim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Active IO is being cleaned up, waiting for room in request CQ.
+ */
+static void
+bfa_ioim_sm_cleanup_qfull(struct bfa_ioim_s *ioim, enum bfa_ioim_event event)
+{
+	bfa_trc(ioim->bfa, ioim->iotag);
+	bfa_trc(ioim->bfa, event);
+
+	switch (event) {
+	case BFA_IOIM_SM_QRESUME:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_cleanup);
+		bfa_ioim_send_abort(ioim);
+		break;
+
+	case BFA_IOIM_SM_ABORT:
+		/**
+		 * IO is already being cleaned up implicitly
+		 */
+		ioim->io_cbfn = __bfa_cb_ioim_abort;
+		break;
+
+	case BFA_IOIM_SM_COMP_GOOD:
+	case BFA_IOIM_SM_COMP:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_hcb);
+		bfa_reqq_wcancel(&ioim->iosp->reqq_wait);
+		bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, ioim->io_cbfn, ioim);
+		bfa_ioim_notify_cleanup(ioim);
+		break;
+
+	case BFA_IOIM_SM_DONE:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_hcb_free);
+		bfa_reqq_wcancel(&ioim->iosp->reqq_wait);
+		bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, ioim->io_cbfn, ioim);
+		bfa_ioim_notify_cleanup(ioim);
+		break;
+
+	case BFA_IOIM_SM_HWFAIL:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_hcb);
+		bfa_reqq_wcancel(&ioim->iosp->reqq_wait);
+		bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, __bfa_cb_ioim_failed,
+			     ioim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * IO bfa callback is pending.
+ */
+static void
+bfa_ioim_sm_hcb(struct bfa_ioim_s *ioim, enum bfa_ioim_event event)
+{
+	bfa_trc_fp(ioim->bfa, ioim->iotag);
+	bfa_trc_fp(ioim->bfa, event);
+
+	switch (event) {
+	case BFA_IOIM_SM_HCB:
+		bfa_sm_set_state(ioim, bfa_ioim_sm_uninit);
+		bfa_ioim_free(ioim);
+		bfa_cb_ioim_resfree(ioim->bfa->bfad);
+		break;
+
+	case BFA_IOIM_SM_CLEANUP:
+		bfa_ioim_notify_cleanup(ioim);
+		break;
+
+	case BFA_IOIM_SM_HWFAIL:
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * IO bfa callback is pending. IO resource cannot be freed.
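+ * Firmware still owns the IO tag at this point, so the host callback
+ * and the tag release are tracked separately (hcb_free, then
+ * resfree).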
+ */ +static void +bfa_ioim_sm_hcb_free(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_trc(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_HCB: + bfa_sm_set_state(ioim, bfa_ioim_sm_resfree); + list_del(&ioim->qe); + list_add_tail(&ioim->qe, &ioim->fcpim->ioim_resfree_q); + break; + + case BFA_IOIM_SM_FREE: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + break; + + case BFA_IOIM_SM_CLEANUP: + bfa_ioim_notify_cleanup(ioim); + break; + + case BFA_IOIM_SM_HWFAIL: + bfa_sm_set_state(ioim, bfa_ioim_sm_hcb); + break; + + default: + bfa_assert(0); + } +} + +/** + * IO is completed, waiting resource free from firmware. + */ +static void +bfa_ioim_sm_resfree(struct bfa_ioim_s *ioim, enum bfa_ioim_event event) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_trc(ioim->bfa, event); + + switch (event) { + case BFA_IOIM_SM_FREE: + bfa_sm_set_state(ioim, bfa_ioim_sm_uninit); + bfa_ioim_free(ioim); + bfa_cb_ioim_resfree(ioim->bfa->bfad); + break; + + case BFA_IOIM_SM_CLEANUP: + bfa_ioim_notify_cleanup(ioim); + break; + + case BFA_IOIM_SM_HWFAIL: + break; + + default: + bfa_assert(0); + } +} + + + +/** + * bfa_ioim_private + */ + +static void +__bfa_cb_ioim_good_comp(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_ioim_s *ioim = cbarg; + + if (!complete) { + bfa_sm_send_event(ioim, BFA_IOIM_SM_HCB); + return; + } + + bfa_cb_ioim_good_comp(ioim->bfa->bfad, ioim->dio); +} + +static void +__bfa_cb_ioim_comp(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_ioim_s *ioim = cbarg; + struct bfi_ioim_rsp_s *m; + u8 *snsinfo = NULL; + u8 sns_len = 0; + s32 residue = 0; + + if (!complete) { + bfa_sm_send_event(ioim, BFA_IOIM_SM_HCB); + return; + } + + m = (struct bfi_ioim_rsp_s *) &ioim->iosp->comp_rspmsg; + if (m->io_status == BFI_IOIM_STS_OK) { + /** + * setup sense information, if present + */ + if (m->scsi_status == SCSI_STATUS_CHECK_CONDITION + && m->sns_len) { + sns_len = m->sns_len; + snsinfo = ioim->iosp->snsinfo; + } + + /** + * setup residue value correctly for normal completions + */ + if (m->resid_flags == FCP_RESID_UNDER) + residue = bfa_os_ntohl(m->residue); + if (m->resid_flags == FCP_RESID_OVER) { + residue = bfa_os_ntohl(m->residue); + residue = -residue; + } + } + + bfa_cb_ioim_done(ioim->bfa->bfad, ioim->dio, m->io_status, + m->scsi_status, sns_len, snsinfo, residue); +} + +static void +__bfa_cb_ioim_failed(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_ioim_s *ioim = cbarg; + + if (!complete) { + bfa_sm_send_event(ioim, BFA_IOIM_SM_HCB); + return; + } + + bfa_cb_ioim_done(ioim->bfa->bfad, ioim->dio, BFI_IOIM_STS_ABORTED, + 0, 0, NULL, 0); +} + +static void +__bfa_cb_ioim_pathtov(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_ioim_s *ioim = cbarg; + + if (!complete) { + bfa_sm_send_event(ioim, BFA_IOIM_SM_HCB); + return; + } + + bfa_cb_ioim_done(ioim->bfa->bfad, ioim->dio, BFI_IOIM_STS_PATHTOV, + 0, 0, NULL, 0); +} + +static void +__bfa_cb_ioim_abort(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_ioim_s *ioim = cbarg; + + if (!complete) { + bfa_sm_send_event(ioim, BFA_IOIM_SM_HCB); + return; + } + + bfa_cb_ioim_abort(ioim->bfa->bfad, ioim->dio); +} + +static void +bfa_ioim_sgpg_alloced(void *cbarg) +{ + struct bfa_ioim_s *ioim = cbarg; + + ioim->nsgpgs = BFA_SGPG_NPAGE(ioim->nsges); + list_splice_tail_init(&ioim->iosp->sgpg_wqe.sgpg_q, &ioim->sgpg_q); + bfa_ioim_sgpg_setup(ioim); + bfa_sm_send_event(ioim, BFA_IOIM_SM_SGALLOCED); +} + +/** + * Send I/O request to firmware. 
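+ * Returns BFA_FALSE and leaves a wait element queued when the
+ * request queue is full; bfa_ioim_qresume() retries the send once
+ * space frees up.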
+ */
+static bfa_boolean_t
+bfa_ioim_send_ioreq(struct bfa_ioim_s *ioim)
+{
+	struct bfa_itnim_s *itnim = ioim->itnim;
+	struct bfi_ioim_req_s *m;
+	static struct fcp_cmnd_s cmnd_z0 = { 0 };
+	struct bfi_sge_s *sge;
+	u32	pgdlen = 0;
+
+	/**
+	 * check for room in queue to send request now
+	 */
+	m = bfa_reqq_next(ioim->bfa, itnim->reqq);
+	if (!m) {
+		bfa_reqq_wait(ioim->bfa, ioim->itnim->reqq,
+			      &ioim->iosp->reqq_wait);
+		return BFA_FALSE;
+	}
+
+	/**
+	 * build i/o request message next
+	 */
+	m->io_tag = bfa_os_htons(ioim->iotag);
+	m->rport_hdl = ioim->itnim->rport->fw_handle;
+	m->io_timeout = bfa_cb_ioim_get_timeout(ioim->dio);
+
+	/**
+	 * build inline IO SG element here
+	 */
+	sge = &m->sges[0];
+	if (ioim->nsges) {
+		sge->sga = bfa_cb_ioim_get_sgaddr(ioim->dio, 0);
+		pgdlen = bfa_cb_ioim_get_sglen(ioim->dio, 0);
+		sge->sg_len = pgdlen;
+		sge->flags = (ioim->nsges > BFI_SGE_INLINE) ?
+					BFI_SGE_DATA_CPL : BFI_SGE_DATA_LAST;
+		bfa_sge_to_be(sge);
+		sge++;
+	}
+
+	if (ioim->nsges > BFI_SGE_INLINE) {
+		sge->sga = ioim->sgpg->sgpg_pa;
+	} else {
+		sge->sga.a32.addr_lo = 0;
+		sge->sga.a32.addr_hi = 0;
+	}
+	sge->sg_len = pgdlen;
+	sge->flags = BFI_SGE_PGDLEN;
+	bfa_sge_to_be(sge);
+
+	/**
+	 * set up I/O command parameters
+	 */
+	bfa_os_assign(m->cmnd, cmnd_z0);
+	m->cmnd.lun = bfa_cb_ioim_get_lun(ioim->dio);
+	m->cmnd.iodir = bfa_cb_ioim_get_iodir(ioim->dio);
+	bfa_os_assign(m->cmnd.cdb,
+			*(struct scsi_cdb_s *)bfa_cb_ioim_get_cdb(ioim->dio));
+	m->cmnd.fcp_dl = bfa_os_htonl(bfa_cb_ioim_get_size(ioim->dio));
+
+	/**
+	 * set up I/O message header
+	 */
+	switch (m->cmnd.iodir) {
+	case FCP_IODIR_READ:
+		bfi_h2i_set(m->mh, BFI_MC_IOIM_READ, 0, bfa_lpuid(ioim->bfa));
+		bfa_stats(itnim, input_reqs);
+		break;
+	case FCP_IODIR_WRITE:
+		bfi_h2i_set(m->mh, BFI_MC_IOIM_WRITE, 0, bfa_lpuid(ioim->bfa));
+		bfa_stats(itnim, output_reqs);
+		break;
+	case FCP_IODIR_RW:
+		bfa_stats(itnim, input_reqs);
+		bfa_stats(itnim, output_reqs);
+		/* fall through: RW uses the generic I/O message class */
+	default:
+		bfi_h2i_set(m->mh, BFI_MC_IOIM_IO, 0, bfa_lpuid(ioim->bfa));
+	}
+	if (itnim->seq_rec ||
+	    (bfa_cb_ioim_get_size(ioim->dio) & (sizeof(u32) - 1)))
+		bfi_h2i_set(m->mh, BFI_MC_IOIM_IO, 0, bfa_lpuid(ioim->bfa));
+
+#ifdef IOIM_ADVANCED
+	m->cmnd.crn = bfa_cb_ioim_get_crn(ioim->dio);
+	m->cmnd.priority = bfa_cb_ioim_get_priority(ioim->dio);
+	m->cmnd.taskattr = bfa_cb_ioim_get_taskattr(ioim->dio);
+
+	/**
+	 * Handle large CDB (>16 bytes).
+	 */
+	m->cmnd.addl_cdb_len = (bfa_cb_ioim_get_cdblen(ioim->dio) -
+					FCP_CMND_CDB_LEN) / sizeof(u32);
+	if (m->cmnd.addl_cdb_len) {
+		bfa_os_memcpy(&m->cmnd.cdb + 1, (struct scsi_cdb_s *)
+				bfa_cb_ioim_get_cdb(ioim->dio) + 1,
+				m->cmnd.addl_cdb_len * sizeof(u32));
+		fcp_cmnd_fcpdl(&m->cmnd) =
+				bfa_os_htonl(bfa_cb_ioim_get_size(ioim->dio));
+	}
+#endif
+
+	/**
+	 * queue I/O message to firmware
+	 */
+	bfa_reqq_produce(ioim->bfa, itnim->reqq);
+	return BFA_TRUE;
+}
+
+/**
+ * Setup any additional SG pages needed. Inline SG element is setup
+ * at queuing time.
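+ * Returns BFA_TRUE if no extra pages are needed or the allocation
+ * succeeded synchronously; otherwise the IO waits in the sgalloc
+ * state for bfa_ioim_sgpg_alloced() to resume it.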
+ */ +static bfa_boolean_t +bfa_ioim_sge_setup(struct bfa_ioim_s *ioim) +{ + u16 nsgpgs; + + bfa_assert(ioim->nsges > BFI_SGE_INLINE); + + /** + * allocate SG pages needed + */ + nsgpgs = BFA_SGPG_NPAGE(ioim->nsges); + if (!nsgpgs) + return BFA_TRUE; + + if (bfa_sgpg_malloc(ioim->bfa, &ioim->sgpg_q, nsgpgs) + != BFA_STATUS_OK) { + bfa_sgpg_wait(ioim->bfa, &ioim->iosp->sgpg_wqe, nsgpgs); + return BFA_FALSE; + } + + ioim->nsgpgs = nsgpgs; + bfa_ioim_sgpg_setup(ioim); + + return BFA_TRUE; +} + +static void +bfa_ioim_sgpg_setup(struct bfa_ioim_s *ioim) +{ + int sgeid, nsges, i; + struct bfi_sge_s *sge; + struct bfa_sgpg_s *sgpg; + u32 pgcumsz; + + sgeid = BFI_SGE_INLINE; + ioim->sgpg = sgpg = bfa_q_first(&ioim->sgpg_q); + + do { + sge = sgpg->sgpg->sges; + nsges = ioim->nsges - sgeid; + if (nsges > BFI_SGPG_DATA_SGES) + nsges = BFI_SGPG_DATA_SGES; + + pgcumsz = 0; + for (i = 0; i < nsges; i++, sge++, sgeid++) { + sge->sga = bfa_cb_ioim_get_sgaddr(ioim->dio, sgeid); + sge->sg_len = bfa_cb_ioim_get_sglen(ioim->dio, sgeid); + pgcumsz += sge->sg_len; + + /** + * set flags + */ + if (i < (nsges - 1)) + sge->flags = BFI_SGE_DATA; + else if (sgeid < (ioim->nsges - 1)) + sge->flags = BFI_SGE_DATA_CPL; + else + sge->flags = BFI_SGE_DATA_LAST; + } + + sgpg = (struct bfa_sgpg_s *) bfa_q_next(sgpg); + + /** + * set the link element of each page + */ + if (sgeid == ioim->nsges) { + sge->flags = BFI_SGE_PGDLEN; + sge->sga.a32.addr_lo = 0; + sge->sga.a32.addr_hi = 0; + } else { + sge->flags = BFI_SGE_LINK; + sge->sga = sgpg->sgpg_pa; + } + sge->sg_len = pgcumsz; + } while (sgeid < ioim->nsges); +} + +/** + * Send I/O abort request to firmware. + */ +static bfa_boolean_t +bfa_ioim_send_abort(struct bfa_ioim_s *ioim) +{ + struct bfa_itnim_s *itnim = ioim->itnim; + struct bfi_ioim_abort_req_s *m; + enum bfi_ioim_h2i msgop; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(ioim->bfa, itnim->reqq); + if (!m) + return BFA_FALSE; + + /** + * build i/o request message next + */ + if (ioim->iosp->abort_explicit) + msgop = BFI_IOIM_H2I_IOABORT_REQ; + else + msgop = BFI_IOIM_H2I_IOCLEANUP_REQ; + + bfi_h2i_set(m->mh, BFI_MC_IOIM, msgop, bfa_lpuid(ioim->bfa)); + m->io_tag = bfa_os_htons(ioim->iotag); + m->abort_tag = ++ioim->abort_tag; + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(ioim->bfa, itnim->reqq); + return BFA_TRUE; +} + +/** + * Call to resume any I/O requests waiting for room in request queue. + */ +static void +bfa_ioim_qresume(void *cbarg) +{ + struct bfa_ioim_s *ioim = cbarg; + + bfa_fcpim_stats(ioim->fcpim, qresumes); + bfa_sm_send_event(ioim, BFA_IOIM_SM_QRESUME); +} + + +static void +bfa_ioim_notify_cleanup(struct bfa_ioim_s *ioim) +{ + /** + * Move IO from itnim queue to fcpim global queue since itnim will be + * freed. + */ + list_del(&ioim->qe); + list_add_tail(&ioim->qe, &ioim->fcpim->ioim_comp_q); + + if (!ioim->iosp->tskim) { + if (ioim->fcpim->delay_comp && ioim->itnim->iotov_active) { + bfa_cb_dequeue(&ioim->hcb_qe); + list_del(&ioim->qe); + list_add_tail(&ioim->qe, &ioim->itnim->delay_comp_q); + } + bfa_itnim_iodone(ioim->itnim); + } else + bfa_tskim_iodone(ioim->iosp->tskim); +} + +/** + * or after the link comes back. + */ +void +bfa_ioim_delayed_comp(struct bfa_ioim_s *ioim, bfa_boolean_t iotov) +{ + /** + * If path tov timer expired, failback with PATHTOV status - these + * IO requests are not normally retried by IO stack. 
+	 *
+	 * Otherwise the device came back online; fail the IO with normal
+	 * failed status so that the IO stack retries it.
+	 */
+	if (iotov)
+		ioim->io_cbfn = __bfa_cb_ioim_pathtov;
+	else
+		ioim->io_cbfn = __bfa_cb_ioim_failed;
+
+	bfa_cb_queue(ioim->bfa, &ioim->hcb_qe, ioim->io_cbfn, ioim);
+
+	/**
+	 * Move IO to fcpim global queue since itnim will be
+	 * freed.
+	 */
+	list_del(&ioim->qe);
+	list_add_tail(&ioim->qe, &ioim->fcpim->ioim_comp_q);
+}
+
+
+
+/**
+ * bfa_ioim_friend
+ */
+
+/**
+ * Memory allocation and initialization.
+ */
+void
+bfa_ioim_attach(struct bfa_fcpim_mod_s *fcpim, struct bfa_meminfo_s *minfo)
+{
+	struct bfa_ioim_s		*ioim;
+	struct bfa_ioim_sp_s	*iosp;
+	u16		i;
+	u8		*snsinfo;
+	u32		snsbufsz;
+
+	/**
+	 * claim memory first
+	 */
+	ioim = (struct bfa_ioim_s *) bfa_meminfo_kva(minfo);
+	fcpim->ioim_arr = ioim;
+	bfa_meminfo_kva(minfo) = (u8 *) (ioim + fcpim->num_ioim_reqs);
+
+	iosp = (struct bfa_ioim_sp_s *) bfa_meminfo_kva(minfo);
+	fcpim->ioim_sp_arr = iosp;
+	bfa_meminfo_kva(minfo) = (u8 *) (iosp + fcpim->num_ioim_reqs);
+
+	/**
+	 * Claim DMA memory for per IO sense data.
+	 */
+	snsbufsz = fcpim->num_ioim_reqs * BFI_IOIM_SNSLEN;
+	fcpim->snsbase.pa = bfa_meminfo_dma_phys(minfo);
+	bfa_meminfo_dma_phys(minfo) += snsbufsz;
+
+	fcpim->snsbase.kva = bfa_meminfo_dma_virt(minfo);
+	bfa_meminfo_dma_virt(minfo) += snsbufsz;
+	snsinfo = fcpim->snsbase.kva;
+	bfa_iocfc_set_snsbase(fcpim->bfa, fcpim->snsbase.pa);
+
+	/**
+	 * Initialize ioim free queues
+	 */
+	INIT_LIST_HEAD(&fcpim->ioim_free_q);
+	INIT_LIST_HEAD(&fcpim->ioim_resfree_q);
+	INIT_LIST_HEAD(&fcpim->ioim_comp_q);
+
+	for (i = 0; i < fcpim->num_ioim_reqs;
+	     i++, ioim++, iosp++, snsinfo += BFI_IOIM_SNSLEN) {
+		/*
+		 * initialize IOIM
+		 */
+		bfa_os_memset(ioim, 0, sizeof(struct bfa_ioim_s));
+		ioim->iotag = i;
+		ioim->bfa = fcpim->bfa;
+		ioim->fcpim = fcpim;
+		ioim->iosp = iosp;
+		iosp->snsinfo = snsinfo;
+		INIT_LIST_HEAD(&ioim->sgpg_q);
+		bfa_reqq_winit(&ioim->iosp->reqq_wait,
+			       bfa_ioim_qresume, ioim);
+		bfa_sgpg_winit(&ioim->iosp->sgpg_wqe,
+			       bfa_ioim_sgpg_alloced, ioim);
+		bfa_sm_set_state(ioim, bfa_ioim_sm_uninit);
+
+		list_add_tail(&ioim->qe, &fcpim->ioim_free_q);
+	}
+}
+
+/**
+ * Driver detach time call.
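+ * Intentionally empty today: ioim memory is carved out of the
+ * driver-owned meminfo pools, which are released by their owner.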
+ */ +void +bfa_ioim_detach(struct bfa_fcpim_mod_s *fcpim) +{ +} + +void +bfa_ioim_isr(struct bfa_s *bfa, struct bfi_msg_s *m) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + struct bfi_ioim_rsp_s *rsp = (struct bfi_ioim_rsp_s *) m; + struct bfa_ioim_s *ioim; + u16 iotag; + enum bfa_ioim_event evt = BFA_IOIM_SM_COMP; + + iotag = bfa_os_ntohs(rsp->io_tag); + + ioim = BFA_IOIM_FROM_TAG(fcpim, iotag); + bfa_assert(ioim->iotag == iotag); + + bfa_trc(ioim->bfa, ioim->iotag); + bfa_trc(ioim->bfa, rsp->io_status); + bfa_trc(ioim->bfa, rsp->reuse_io_tag); + + if (bfa_sm_cmp_state(ioim, bfa_ioim_sm_active)) + bfa_os_assign(ioim->iosp->comp_rspmsg, *m); + + switch (rsp->io_status) { + case BFI_IOIM_STS_OK: + bfa_fcpim_stats(fcpim, iocomp_ok); + if (rsp->reuse_io_tag == 0) + evt = BFA_IOIM_SM_DONE; + else + evt = BFA_IOIM_SM_COMP; + break; + + case BFI_IOIM_STS_TIMEDOUT: + case BFI_IOIM_STS_ABORTED: + rsp->io_status = BFI_IOIM_STS_ABORTED; + bfa_fcpim_stats(fcpim, iocomp_aborted); + if (rsp->reuse_io_tag == 0) + evt = BFA_IOIM_SM_DONE; + else + evt = BFA_IOIM_SM_COMP; + break; + + case BFI_IOIM_STS_PROTO_ERR: + bfa_fcpim_stats(fcpim, iocom_proto_err); + bfa_assert(rsp->reuse_io_tag); + evt = BFA_IOIM_SM_COMP; + break; + + case BFI_IOIM_STS_SQER_NEEDED: + bfa_fcpim_stats(fcpim, iocom_sqer_needed); + bfa_assert(rsp->reuse_io_tag == 0); + evt = BFA_IOIM_SM_SQRETRY; + break; + + case BFI_IOIM_STS_RES_FREE: + bfa_fcpim_stats(fcpim, iocom_res_free); + evt = BFA_IOIM_SM_FREE; + break; + + case BFI_IOIM_STS_HOST_ABORTED: + bfa_fcpim_stats(fcpim, iocom_hostabrts); + if (rsp->abort_tag != ioim->abort_tag) { + bfa_trc(ioim->bfa, rsp->abort_tag); + bfa_trc(ioim->bfa, ioim->abort_tag); + return; + } + + if (rsp->reuse_io_tag) + evt = BFA_IOIM_SM_ABORT_COMP; + else + evt = BFA_IOIM_SM_ABORT_DONE; + break; + + case BFI_IOIM_STS_UTAG: + bfa_fcpim_stats(fcpim, iocom_utags); + evt = BFA_IOIM_SM_COMP_UTAG; + break; + + default: + bfa_assert(0); + } + + bfa_sm_send_event(ioim, evt); +} + +void +bfa_ioim_good_comp_isr(struct bfa_s *bfa, struct bfi_msg_s *m) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + struct bfi_ioim_rsp_s *rsp = (struct bfi_ioim_rsp_s *) m; + struct bfa_ioim_s *ioim; + u16 iotag; + + iotag = bfa_os_ntohs(rsp->io_tag); + + ioim = BFA_IOIM_FROM_TAG(fcpim, iotag); + bfa_assert(ioim->iotag == iotag); + + bfa_trc_fp(ioim->bfa, ioim->iotag); + bfa_sm_send_event(ioim, BFA_IOIM_SM_COMP_GOOD); +} + +/** + * Called by itnim to clean up IO while going offline. + */ +void +bfa_ioim_cleanup(struct bfa_ioim_s *ioim) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_fcpim_stats(ioim->fcpim, io_cleanups); + + ioim->iosp->tskim = NULL; + bfa_sm_send_event(ioim, BFA_IOIM_SM_CLEANUP); +} + +void +bfa_ioim_cleanup_tm(struct bfa_ioim_s *ioim, struct bfa_tskim_s *tskim) +{ + bfa_trc(ioim->bfa, ioim->iotag); + bfa_fcpim_stats(ioim->fcpim, io_tmaborts); + + ioim->iosp->tskim = tskim; + bfa_sm_send_event(ioim, BFA_IOIM_SM_CLEANUP); +} + +/** + * IOC failure handling. + */ +void +bfa_ioim_iocdisable(struct bfa_ioim_s *ioim) +{ + bfa_sm_send_event(ioim, BFA_IOIM_SM_HWFAIL); +} + +/** + * IO offline TOV popped. Fail the pending IO. + */ +void +bfa_ioim_tov(struct bfa_ioim_s *ioim) +{ + bfa_sm_send_event(ioim, BFA_IOIM_SM_IOTOV); +} + + + +/** + * bfa_ioim_api + */ + +/** + * Allocate IOIM resource for initiator mode I/O request. 
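+ * Returns NULL (and counts a no_iotags drop) when the pre-carved
+ * pool is empty; the caller is expected to fail or retry the IO.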
+ */
+struct bfa_ioim_s *
+bfa_ioim_alloc(struct bfa_s *bfa, struct bfad_ioim_s *dio,
+	       struct bfa_itnim_s *itnim, u16 nsges)
+{
+	struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa);
+	struct bfa_ioim_s *ioim;
+
+	/**
+	 * allocate IOIM resource
+	 */
+	bfa_q_deq(&fcpim->ioim_free_q, &ioim);
+	if (!ioim) {
+		bfa_fcpim_stats(fcpim, no_iotags);
+		return NULL;
+	}
+
+	ioim->dio = dio;
+	ioim->itnim = itnim;
+	ioim->nsges = nsges;
+	ioim->nsgpgs = 0;
+
+	bfa_stats(fcpim, total_ios);
+	bfa_stats(itnim, ios);
+	fcpim->ios_active++;
+
+	list_add_tail(&ioim->qe, &itnim->io_q);
+	bfa_trc_fp(ioim->bfa, ioim->iotag);
+
+	return ioim;
+}
+
+void
+bfa_ioim_free(struct bfa_ioim_s *ioim)
+{
+	struct bfa_fcpim_mod_s *fcpim = ioim->fcpim;
+
+	bfa_trc_fp(ioim->bfa, ioim->iotag);
+	bfa_assert_fp(bfa_sm_cmp_state(ioim, bfa_ioim_sm_uninit));
+
+	bfa_assert_fp(list_empty(&ioim->sgpg_q)
+		      || (ioim->nsges > BFI_SGE_INLINE));
+
+	if (ioim->nsgpgs > 0)
+		bfa_sgpg_mfree(ioim->bfa, &ioim->sgpg_q, ioim->nsgpgs);
+
+	bfa_stats(ioim->itnim, io_comps);
+	fcpim->ios_active--;
+
+	list_del(&ioim->qe);
+	list_add_tail(&ioim->qe, &fcpim->ioim_free_q);
+}
+
+void
+bfa_ioim_start(struct bfa_ioim_s *ioim)
+{
+	bfa_trc_fp(ioim->bfa, ioim->iotag);
+	bfa_sm_send_event(ioim, BFA_IOIM_SM_START);
+}
+
+/**
+ * Driver I/O abort request.
+ */
+void
+bfa_ioim_abort(struct bfa_ioim_s *ioim)
+{
+	bfa_trc(ioim->bfa, ioim->iotag);
+	bfa_fcpim_stats(ioim->fcpim, io_aborts);
+	bfa_sm_send_event(ioim, BFA_IOIM_SM_ABORT);
+}
+
+
diff --git a/drivers/scsi/bfa/bfa_itnim.c b/drivers/scsi/bfa/bfa_itnim.c
new file mode 100644
index 000000000000..4d5c61a4f85c
--- /dev/null
+++ b/drivers/scsi/bfa/bfa_itnim.c
@@ -0,0 +1,1088 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */ + +#include +#include +#include "bfa_fcpim_priv.h" + +BFA_TRC_FILE(HAL, ITNIM); + +#define BFA_ITNIM_FROM_TAG(_fcpim, _tag) \ + ((_fcpim)->itnim_arr + ((_tag) & ((_fcpim)->num_itnims - 1))) + +#define bfa_fcpim_additn(__itnim) \ + list_add_tail(&(__itnim)->qe, &(__itnim)->fcpim->itnim_q) +#define bfa_fcpim_delitn(__itnim) do { \ + bfa_assert(bfa_q_is_on_q(&(__itnim)->fcpim->itnim_q, __itnim)); \ + list_del(&(__itnim)->qe); \ + bfa_assert(list_empty(&(__itnim)->io_q)); \ + bfa_assert(list_empty(&(__itnim)->io_cleanup_q)); \ + bfa_assert(list_empty(&(__itnim)->pending_q)); \ +} while (0) + +#define bfa_itnim_online_cb(__itnim) do { \ + if ((__itnim)->bfa->fcs) \ + bfa_cb_itnim_online((__itnim)->ditn); \ + else { \ + bfa_cb_queue((__itnim)->bfa, &(__itnim)->hcb_qe, \ + __bfa_cb_itnim_online, (__itnim)); \ + } \ +} while (0) + +#define bfa_itnim_offline_cb(__itnim) do { \ + if ((__itnim)->bfa->fcs) \ + bfa_cb_itnim_offline((__itnim)->ditn); \ + else { \ + bfa_cb_queue((__itnim)->bfa, &(__itnim)->hcb_qe, \ + __bfa_cb_itnim_offline, (__itnim)); \ + } \ +} while (0) + +#define bfa_itnim_sler_cb(__itnim) do { \ + if ((__itnim)->bfa->fcs) \ + bfa_cb_itnim_sler((__itnim)->ditn); \ + else { \ + bfa_cb_queue((__itnim)->bfa, &(__itnim)->hcb_qe, \ + __bfa_cb_itnim_sler, (__itnim)); \ + } \ +} while (0) + +/* + * forward declarations + */ +static void bfa_itnim_iocdisable_cleanup(struct bfa_itnim_s *itnim); +static bfa_boolean_t bfa_itnim_send_fwcreate(struct bfa_itnim_s *itnim); +static bfa_boolean_t bfa_itnim_send_fwdelete(struct bfa_itnim_s *itnim); +static void bfa_itnim_cleanp_comp(void *itnim_cbarg); +static void bfa_itnim_cleanup(struct bfa_itnim_s *itnim); +static void __bfa_cb_itnim_online(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_itnim_offline(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_itnim_sler(void *cbarg, bfa_boolean_t complete); +static void bfa_itnim_iotov_online(struct bfa_itnim_s *itnim); +static void bfa_itnim_iotov_cleanup(struct bfa_itnim_s *itnim); +static void bfa_itnim_iotov(void *itnim_arg); +static void bfa_itnim_iotov_start(struct bfa_itnim_s *itnim); +static void bfa_itnim_iotov_stop(struct bfa_itnim_s *itnim); +static void bfa_itnim_iotov_delete(struct bfa_itnim_s *itnim); + +/** + * bfa_itnim_sm BFA itnim state machine + */ + + +enum bfa_itnim_event { + BFA_ITNIM_SM_CREATE = 1, /* itnim is created */ + BFA_ITNIM_SM_ONLINE = 2, /* itnim is online */ + BFA_ITNIM_SM_OFFLINE = 3, /* itnim is offline */ + BFA_ITNIM_SM_FWRSP = 4, /* firmware response */ + BFA_ITNIM_SM_DELETE = 5, /* deleting an existing itnim */ + BFA_ITNIM_SM_CLEANUP = 6, /* IO cleanup completion */ + BFA_ITNIM_SM_SLER = 7, /* second level error recovery */ + BFA_ITNIM_SM_HWFAIL = 8, /* IOC h/w failure event */ + BFA_ITNIM_SM_QRESUME = 9, /* queue space available */ +}; + +static void bfa_itnim_sm_uninit(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_created(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_fwcreate(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_delete_pending(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_online(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_sler(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_cleanup_offline(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_cleanup_delete(struct bfa_itnim_s 
*itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_fwdelete(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_offline(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_iocdisable(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_deleting(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_fwcreate_qfull(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_fwdelete_qfull(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); +static void bfa_itnim_sm_deleting_qfull(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event); + +/** + * Beginning/unallocated state - no events expected. + */ +static void +bfa_itnim_sm_uninit(struct bfa_itnim_s *itnim, enum bfa_itnim_event event) +{ + bfa_trc(itnim->bfa, itnim->rport->rport_tag); + bfa_trc(itnim->bfa, event); + + switch (event) { + case BFA_ITNIM_SM_CREATE: + bfa_sm_set_state(itnim, bfa_itnim_sm_created); + itnim->is_online = BFA_FALSE; + bfa_fcpim_additn(itnim); + break; + + default: + bfa_assert(0); + } +} + +/** + * Beginning state, only online event expected. + */ +static void +bfa_itnim_sm_created(struct bfa_itnim_s *itnim, enum bfa_itnim_event event) +{ + bfa_trc(itnim->bfa, itnim->rport->rport_tag); + bfa_trc(itnim->bfa, event); + + switch (event) { + case BFA_ITNIM_SM_ONLINE: + if (bfa_itnim_send_fwcreate(itnim)) + bfa_sm_set_state(itnim, bfa_itnim_sm_fwcreate); + else + bfa_sm_set_state(itnim, bfa_itnim_sm_fwcreate_qfull); + break; + + case BFA_ITNIM_SM_DELETE: + bfa_sm_set_state(itnim, bfa_itnim_sm_uninit); + bfa_fcpim_delitn(itnim); + break; + + case BFA_ITNIM_SM_HWFAIL: + bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable); + break; + + default: + bfa_assert(0); + } +} + +/** + * Waiting for itnim create response from firmware. 
+ */
+static void
+bfa_itnim_sm_fwcreate(struct bfa_itnim_s *itnim, enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_FWRSP:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_online);
+		itnim->is_online = BFA_TRUE;
+		bfa_itnim_iotov_online(itnim);
+		bfa_itnim_online_cb(itnim);
+		break;
+
+	case BFA_ITNIM_SM_DELETE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_delete_pending);
+		break;
+
+	case BFA_ITNIM_SM_OFFLINE:
+		if (bfa_itnim_send_fwdelete(itnim))
+			bfa_sm_set_state(itnim, bfa_itnim_sm_fwdelete);
+		else
+			bfa_sm_set_state(itnim, bfa_itnim_sm_fwdelete_qfull);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+static void
+bfa_itnim_sm_fwcreate_qfull(struct bfa_itnim_s *itnim,
+		enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_QRESUME:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_fwcreate);
+		bfa_itnim_send_fwcreate(itnim);
+		break;
+
+	case BFA_ITNIM_SM_DELETE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_uninit);
+		bfa_reqq_wcancel(&itnim->reqq_wait);
+		bfa_fcpim_delitn(itnim);
+		break;
+
+	case BFA_ITNIM_SM_OFFLINE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_offline);
+		bfa_reqq_wcancel(&itnim->reqq_wait);
+		bfa_itnim_offline_cb(itnim);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		bfa_reqq_wcancel(&itnim->reqq_wait);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Waiting for itnim create response from firmware, a delete is pending.
+ */
+static void
+bfa_itnim_sm_delete_pending(struct bfa_itnim_s *itnim,
+		enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_FWRSP:
+		if (bfa_itnim_send_fwdelete(itnim))
+			bfa_sm_set_state(itnim, bfa_itnim_sm_deleting);
+		else
+			bfa_sm_set_state(itnim, bfa_itnim_sm_deleting_qfull);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_uninit);
+		bfa_fcpim_delitn(itnim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Online state - normal parking state.
+ */
+static void
+bfa_itnim_sm_online(struct bfa_itnim_s *itnim, enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_OFFLINE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_cleanup_offline);
+		itnim->is_online = BFA_FALSE;
+		bfa_itnim_iotov_start(itnim);
+		bfa_itnim_cleanup(itnim);
+		break;
+
+	case BFA_ITNIM_SM_DELETE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_cleanup_delete);
+		itnim->is_online = BFA_FALSE;
+		bfa_itnim_cleanup(itnim);
+		break;
+
+	case BFA_ITNIM_SM_SLER:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_sler);
+		itnim->is_online = BFA_FALSE;
+		bfa_itnim_iotov_start(itnim);
+		bfa_itnim_sler_cb(itnim);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		itnim->is_online = BFA_FALSE;
+		bfa_itnim_iotov_start(itnim);
+		bfa_itnim_iocdisable_cleanup(itnim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Second level error recovery needed.
+ */
+static void
+bfa_itnim_sm_sler(struct bfa_itnim_s *itnim, enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_OFFLINE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_cleanup_offline);
+		bfa_itnim_cleanup(itnim);
+		break;
+
+	case BFA_ITNIM_SM_DELETE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_cleanup_delete);
+		bfa_itnim_cleanup(itnim);
+		bfa_itnim_iotov_delete(itnim);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		bfa_itnim_iocdisable_cleanup(itnim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Going offline. Waiting for active IO cleanup.
+ */
+static void
+bfa_itnim_sm_cleanup_offline(struct bfa_itnim_s *itnim,
+		enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_CLEANUP:
+		if (bfa_itnim_send_fwdelete(itnim))
+			bfa_sm_set_state(itnim, bfa_itnim_sm_fwdelete);
+		else
+			bfa_sm_set_state(itnim, bfa_itnim_sm_fwdelete_qfull);
+		break;
+
+	case BFA_ITNIM_SM_DELETE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_cleanup_delete);
+		bfa_itnim_iotov_delete(itnim);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		bfa_itnim_iocdisable_cleanup(itnim);
+		bfa_itnim_offline_cb(itnim);
+		break;
+
+	case BFA_ITNIM_SM_SLER:
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Deleting itnim. Waiting for active IO cleanup.
+ */
+static void
+bfa_itnim_sm_cleanup_delete(struct bfa_itnim_s *itnim,
+		enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_CLEANUP:
+		if (bfa_itnim_send_fwdelete(itnim))
+			bfa_sm_set_state(itnim, bfa_itnim_sm_deleting);
+		else
+			bfa_sm_set_state(itnim, bfa_itnim_sm_deleting_qfull);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		bfa_itnim_iocdisable_cleanup(itnim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Rport offline. Firmware itnim is being deleted - awaiting f/w response.
+ */
+static void
+bfa_itnim_sm_fwdelete(struct bfa_itnim_s *itnim, enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_FWRSP:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_offline);
+		bfa_itnim_offline_cb(itnim);
+		break;
+
+	case BFA_ITNIM_SM_DELETE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_deleting);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		bfa_itnim_offline_cb(itnim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+static void
+bfa_itnim_sm_fwdelete_qfull(struct bfa_itnim_s *itnim,
+		enum bfa_itnim_event event)
+{
+	bfa_trc(itnim->bfa, itnim->rport->rport_tag);
+	bfa_trc(itnim->bfa, event);
+
+	switch (event) {
+	case BFA_ITNIM_SM_QRESUME:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_fwdelete);
+		bfa_itnim_send_fwdelete(itnim);
+		break;
+
+	case BFA_ITNIM_SM_DELETE:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_deleting_qfull);
+		break;
+
+	case BFA_ITNIM_SM_HWFAIL:
+		bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable);
+		bfa_reqq_wcancel(&itnim->reqq_wait);
+		bfa_itnim_offline_cb(itnim);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Offline state.
+ */ +static void +bfa_itnim_sm_offline(struct bfa_itnim_s *itnim, enum bfa_itnim_event event) +{ + bfa_trc(itnim->bfa, itnim->rport->rport_tag); + bfa_trc(itnim->bfa, event); + + switch (event) { + case BFA_ITNIM_SM_DELETE: + bfa_sm_set_state(itnim, bfa_itnim_sm_uninit); + bfa_itnim_iotov_delete(itnim); + bfa_fcpim_delitn(itnim); + break; + + case BFA_ITNIM_SM_ONLINE: + if (bfa_itnim_send_fwcreate(itnim)) + bfa_sm_set_state(itnim, bfa_itnim_sm_fwcreate); + else + bfa_sm_set_state(itnim, bfa_itnim_sm_fwcreate_qfull); + break; + + case BFA_ITNIM_SM_HWFAIL: + bfa_sm_set_state(itnim, bfa_itnim_sm_iocdisable); + break; + + default: + bfa_assert(0); + } +} + +/** + * IOC h/w failed state. + */ +static void +bfa_itnim_sm_iocdisable(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event) +{ + bfa_trc(itnim->bfa, itnim->rport->rport_tag); + bfa_trc(itnim->bfa, event); + + switch (event) { + case BFA_ITNIM_SM_DELETE: + bfa_sm_set_state(itnim, bfa_itnim_sm_uninit); + bfa_itnim_iotov_delete(itnim); + bfa_fcpim_delitn(itnim); + break; + + case BFA_ITNIM_SM_OFFLINE: + bfa_itnim_offline_cb(itnim); + break; + + case BFA_ITNIM_SM_ONLINE: + if (bfa_itnim_send_fwcreate(itnim)) + bfa_sm_set_state(itnim, bfa_itnim_sm_fwcreate); + else + bfa_sm_set_state(itnim, bfa_itnim_sm_fwcreate_qfull); + break; + + case BFA_ITNIM_SM_HWFAIL: + break; + + default: + bfa_assert(0); + } +} + +/** + * Itnim is deleted, waiting for firmware response to delete. + */ +static void +bfa_itnim_sm_deleting(struct bfa_itnim_s *itnim, enum bfa_itnim_event event) +{ + bfa_trc(itnim->bfa, itnim->rport->rport_tag); + bfa_trc(itnim->bfa, event); + + switch (event) { + case BFA_ITNIM_SM_FWRSP: + case BFA_ITNIM_SM_HWFAIL: + bfa_sm_set_state(itnim, bfa_itnim_sm_uninit); + bfa_fcpim_delitn(itnim); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_itnim_sm_deleting_qfull(struct bfa_itnim_s *itnim, + enum bfa_itnim_event event) +{ + bfa_trc(itnim->bfa, itnim->rport->rport_tag); + bfa_trc(itnim->bfa, event); + + switch (event) { + case BFA_ITNIM_SM_QRESUME: + bfa_sm_set_state(itnim, bfa_itnim_sm_deleting); + bfa_itnim_send_fwdelete(itnim); + break; + + case BFA_ITNIM_SM_HWFAIL: + bfa_sm_set_state(itnim, bfa_itnim_sm_uninit); + bfa_reqq_wcancel(&itnim->reqq_wait); + bfa_fcpim_delitn(itnim); + break; + + default: + bfa_assert(0); + } +} + + + +/** + * bfa_itnim_private + */ + +/** + * Initiate cleanup of all IOs on an IOC failure. + */ +static void +bfa_itnim_iocdisable_cleanup(struct bfa_itnim_s *itnim) +{ + struct bfa_tskim_s *tskim; + struct bfa_ioim_s *ioim; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &itnim->tsk_q) { + tskim = (struct bfa_tskim_s *) qe; + bfa_tskim_iocdisable(tskim); + } + + list_for_each_safe(qe, qen, &itnim->io_q) { + ioim = (struct bfa_ioim_s *) qe; + bfa_ioim_iocdisable(ioim); + } + + /** + * For IO request in pending queue, we pretend an early timeout. + */ + list_for_each_safe(qe, qen, &itnim->pending_q) { + ioim = (struct bfa_ioim_s *) qe; + bfa_ioim_tov(ioim); + } + + list_for_each_safe(qe, qen, &itnim->io_cleanup_q) { + ioim = (struct bfa_ioim_s *) qe; + bfa_ioim_iocdisable(ioim); + } +} + +/** + * IO cleanup completion + */ +static void +bfa_itnim_cleanp_comp(void *itnim_cbarg) +{ + struct bfa_itnim_s *itnim = itnim_cbarg; + + bfa_stats(itnim, cleanup_comps); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_CLEANUP); +} + +/** + * Initiate cleanup of all IOs. 
+ */
+static void
+bfa_itnim_cleanup(struct bfa_itnim_s *itnim)
+{
+	struct bfa_ioim_s *ioim;
+	struct bfa_tskim_s *tskim;
+	struct list_head *qe, *qen;
+
+	bfa_wc_init(&itnim->wc, bfa_itnim_cleanp_comp, itnim);
+
+	list_for_each_safe(qe, qen, &itnim->io_q) {
+		ioim = (struct bfa_ioim_s *) qe;
+
+		/**
+		 * Move IO to a cleanup queue from active queue so that a later
+		 * TM will not pick up this IO.
+		 */
+		list_del(&ioim->qe);
+		list_add_tail(&ioim->qe, &itnim->io_cleanup_q);
+
+		bfa_wc_up(&itnim->wc);
+		bfa_ioim_cleanup(ioim);
+	}
+
+	list_for_each_safe(qe, qen, &itnim->tsk_q) {
+		tskim = (struct bfa_tskim_s *) qe;
+		bfa_wc_up(&itnim->wc);
+		bfa_tskim_cleanup(tskim);
+	}
+
+	bfa_wc_wait(&itnim->wc);
+}
+
+static void
+__bfa_cb_itnim_online(void *cbarg, bfa_boolean_t complete)
+{
+	struct bfa_itnim_s *itnim = cbarg;
+
+	if (complete)
+		bfa_cb_itnim_online(itnim->ditn);
+}
+
+static void
+__bfa_cb_itnim_offline(void *cbarg, bfa_boolean_t complete)
+{
+	struct bfa_itnim_s *itnim = cbarg;
+
+	if (complete)
+		bfa_cb_itnim_offline(itnim->ditn);
+}
+
+static void
+__bfa_cb_itnim_sler(void *cbarg, bfa_boolean_t complete)
+{
+	struct bfa_itnim_s *itnim = cbarg;
+
+	if (complete)
+		bfa_cb_itnim_sler(itnim->ditn);
+}
+
+/**
+ * Call to resume any I/O requests waiting for room in request queue.
+ */
+static void
+bfa_itnim_qresume(void *cbarg)
+{
+	struct bfa_itnim_s *itnim = cbarg;
+
+	bfa_sm_send_event(itnim, BFA_ITNIM_SM_QRESUME);
+}
+
+
+
+
+/**
+ *  bfa_itnim_public
+ */
+
+void
+bfa_itnim_iodone(struct bfa_itnim_s *itnim)
+{
+	bfa_wc_down(&itnim->wc);
+}
+
+void
+bfa_itnim_tskdone(struct bfa_itnim_s *itnim)
+{
+	bfa_wc_down(&itnim->wc);
+}
+
+void
+bfa_itnim_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *km_len,
+		u32 *dm_len)
+{
+	/**
+	 * ITN memory
+	 */
+	*km_len += cfg->fwcfg.num_rports * sizeof(struct bfa_itnim_s);
+}
+
+void
+bfa_itnim_attach(struct bfa_fcpim_mod_s *fcpim, struct bfa_meminfo_s *minfo)
+{
+	struct bfa_s *bfa = fcpim->bfa;
+	struct bfa_itnim_s *itnim;
+	int i;
+
+	INIT_LIST_HEAD(&fcpim->itnim_q);
+
+	itnim = (struct bfa_itnim_s *) bfa_meminfo_kva(minfo);
+	fcpim->itnim_arr = itnim;
+
+	for (i = 0; i < fcpim->num_itnims; i++, itnim++) {
+		bfa_os_memset(itnim, 0, sizeof(struct bfa_itnim_s));
+		itnim->bfa = bfa;
+		itnim->fcpim = fcpim;
+		itnim->reqq = BFA_REQQ_QOS_LO;
+		itnim->rport = BFA_RPORT_FROM_TAG(bfa, i);
+		itnim->iotov_active = BFA_FALSE;
+		bfa_reqq_winit(&itnim->reqq_wait, bfa_itnim_qresume, itnim);
+
+		INIT_LIST_HEAD(&itnim->io_q);
+		INIT_LIST_HEAD(&itnim->io_cleanup_q);
+		INIT_LIST_HEAD(&itnim->pending_q);
+		INIT_LIST_HEAD(&itnim->tsk_q);
+		INIT_LIST_HEAD(&itnim->delay_comp_q);
+		bfa_sm_set_state(itnim, bfa_itnim_sm_uninit);
+	}
+
+	bfa_meminfo_kva(minfo) = (u8 *) itnim;
+}
+
+void
+bfa_itnim_iocdisable(struct bfa_itnim_s *itnim)
+{
+	bfa_stats(itnim, ioc_disabled);
+	bfa_sm_send_event(itnim, BFA_ITNIM_SM_HWFAIL);
+}
+
+static bfa_boolean_t
+bfa_itnim_send_fwcreate(struct bfa_itnim_s *itnim)
+{
+	struct bfi_itnim_create_req_s *m;
+
+	itnim->msg_no++;
+
+	/**
+	 * check for room in queue to send request now
+	 */
+	m = bfa_reqq_next(itnim->bfa, itnim->reqq);
+	if (!m) {
+		bfa_reqq_wait(itnim->bfa, itnim->reqq, &itnim->reqq_wait);
+		return BFA_FALSE;
+	}
+
+	bfi_h2i_set(m->mh, BFI_MC_ITNIM, BFI_ITNIM_H2I_CREATE_REQ,
+			bfa_lpuid(itnim->bfa));
+	m->fw_handle = itnim->rport->fw_handle;
+	m->class = FC_CLASS_3;
+	m->seq_rec = itnim->seq_rec;
+	m->msg_no = itnim->msg_no;
+
+	/**
+	 * queue I/O message to firmware
+	 */
+	bfa_reqq_produce(itnim->bfa, itnim->reqq);
+	return BFA_TRUE;
+}
+
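[Aside on the pattern above: bfa_itnim_send_fwcreate() shows the request-queue
backpressure idiom this driver uses everywhere: if bfa_reqq_next() has no free
queue element, the caller parks a pre-initialized wait element with
bfa_reqq_wait() and returns BFA_FALSE, and the state machine parks in a
"*_qfull" state until the resume callback injects a QRESUME event. The minimal
standalone C sketch below illustrates that idiom only; the reqq/reqq_wait types
and helper names here are simplified stand-ins, not the driver's real API.]

#include <stdbool.h>
#include <stddef.h>

struct reqq_wait {                       /* stand-in for the driver's wait element */
	void (*resume)(void *cbarg);     /* invoked once queue space frees up */
	void *cbarg;
};

struct reqq {                            /* simplified request queue */
	int free_slots;
	struct reqq_wait *waiter;        /* single parked waiter, for brevity */
};

static void *reqq_next(struct reqq *q)   /* stand-in for bfa_reqq_next() */
{
	static char slot[64];            /* pretend queue element */
	return (q->free_slots > 0) ? slot : NULL;
}

static void reqq_wait_park(struct reqq *q, struct reqq_wait *w)
{
	q->waiter = w;                   /* park until a slot is freed */
}

static void reqq_produce(struct reqq *q) /* stand-in for bfa_reqq_produce() */
{
	q->free_slots--;                 /* hand the built message to firmware */
}

/* Send path: build and queue a request, or park and report "try later". */
static bool send_fwcreate(struct reqq *q, struct reqq_wait *w)
{
	void *msg = reqq_next(q);

	if (!msg) {
		reqq_wait_park(q, w);    /* no space: wait for a resume kick */
		return false;            /* caller enters a *_qfull state */
	}
	/* ... fill in the create request in msg ... */
	reqq_produce(q);
	return true;                     /* caller enters the fwcreate state */
}

/* Completion path: a slot freed up, so kick the parked waiter. */
static void reqq_slot_freed(struct reqq *q)
{
	q->free_slots++;
	if (q->waiter) {
		struct reqq_wait *w = q->waiter;

		q->waiter = NULL;
		w->resume(w->cbarg);     /* maps to the BFA_ITNIM_SM_QRESUME event */
	}
}

[This is why every firmware-request state in the machines above has a matching
"*_qfull" twin: the boolean return selects the state, and the parked wait
element later retries the send from the QRESUME handler.]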
+static bfa_boolean_t
+bfa_itnim_send_fwdelete(struct bfa_itnim_s *itnim)
+{
+	struct bfi_itnim_delete_req_s *m;
+
+	/**
+	 * check for room in queue to send request now
+	 */
+	m = bfa_reqq_next(itnim->bfa, itnim->reqq);
+	if (!m) {
+		bfa_reqq_wait(itnim->bfa, itnim->reqq, &itnim->reqq_wait);
+		return BFA_FALSE;
+	}
+
+	bfi_h2i_set(m->mh, BFI_MC_ITNIM, BFI_ITNIM_H2I_DELETE_REQ,
+			bfa_lpuid(itnim->bfa));
+	m->fw_handle = itnim->rport->fw_handle;
+
+	/**
+	 * queue I/O message to firmware
+	 */
+	bfa_reqq_produce(itnim->bfa, itnim->reqq);
+	return BFA_TRUE;
+}
+
+/**
+ * Clean up all pending failed inflight requests.
+ */
+static void
+bfa_itnim_delayed_comp(struct bfa_itnim_s *itnim, bfa_boolean_t iotov)
+{
+	struct bfa_ioim_s *ioim;
+	struct list_head *qe, *qen;
+
+	list_for_each_safe(qe, qen, &itnim->delay_comp_q) {
+		ioim = (struct bfa_ioim_s *)qe;
+		bfa_ioim_delayed_comp(ioim, iotov);
+	}
+}
+
+/**
+ * Start all pending IO requests.
+ */
+static void
+bfa_itnim_iotov_online(struct bfa_itnim_s *itnim)
+{
+	struct bfa_ioim_s *ioim;
+
+	bfa_itnim_iotov_stop(itnim);
+
+	/**
+	 * Abort all inflight IO requests in the queue
+	 */
+	bfa_itnim_delayed_comp(itnim, BFA_FALSE);
+
+	/**
+	 * Start all pending IO requests.
+	 */
+	while (!list_empty(&itnim->pending_q)) {
+		bfa_q_deq(&itnim->pending_q, &ioim);
+		list_add_tail(&ioim->qe, &itnim->io_q);
+		bfa_ioim_start(ioim);
+	}
+}
+
+/**
+ * Fail all pending IO requests
+ */
+static void
+bfa_itnim_iotov_cleanup(struct bfa_itnim_s *itnim)
+{
+	struct bfa_ioim_s *ioim;
+
+	/**
+	 * Fail all inflight IO requests in the queue
+	 */
+	bfa_itnim_delayed_comp(itnim, BFA_TRUE);
+
+	/**
+	 * Fail any pending IO requests.
+	 */
+	while (!list_empty(&itnim->pending_q)) {
+		bfa_q_deq(&itnim->pending_q, &ioim);
+		list_add_tail(&ioim->qe, &ioim->fcpim->ioim_comp_q);
+		bfa_ioim_tov(ioim);
+	}
+}
+
+/**
+ * IO TOV timer callback. Fail any pending IO requests.
+ */
+static void
+bfa_itnim_iotov(void *itnim_arg)
+{
+	struct bfa_itnim_s *itnim = itnim_arg;
+
+	itnim->iotov_active = BFA_FALSE;
+
+	bfa_cb_itnim_tov_begin(itnim->ditn);
+	bfa_itnim_iotov_cleanup(itnim);
+	bfa_cb_itnim_tov(itnim->ditn);
+}
+
+/**
+ * Start IO TOV timer for failing back pending IO requests in offline state.
+ */
+static void
+bfa_itnim_iotov_start(struct bfa_itnim_s *itnim)
+{
+	if (itnim->fcpim->path_tov > 0) {
+
+		itnim->iotov_active = BFA_TRUE;
+		bfa_assert(bfa_itnim_hold_io(itnim));
+		bfa_timer_start(itnim->bfa, &itnim->timer,
+			bfa_itnim_iotov, itnim, itnim->fcpim->path_tov);
+	}
+}
+
+/**
+ * Stop IO TOV timer.
+ */
+static void
+bfa_itnim_iotov_stop(struct bfa_itnim_s *itnim)
+{
+	if (itnim->iotov_active) {
+		itnim->iotov_active = BFA_FALSE;
+		bfa_timer_stop(&itnim->timer);
+	}
+}
+
+/**
+ * Stop IO TOV timer and fail back any pending IO requests.
+ */
+static void
+bfa_itnim_iotov_delete(struct bfa_itnim_s *itnim)
+{
+	bfa_boolean_t pathtov_active = BFA_FALSE;
+
+	if (itnim->iotov_active)
+		pathtov_active = BFA_TRUE;
+
+	bfa_itnim_iotov_stop(itnim);
+	if (pathtov_active)
+		bfa_cb_itnim_tov_begin(itnim->ditn);
+	bfa_itnim_iotov_cleanup(itnim);
+	if (pathtov_active)
+		bfa_cb_itnim_tov(itnim->ditn);
+}
+
+
+
+/**
+ *  bfa_itnim_public
+ */
+
+/**
+ * 		Itnim interrupt processing.
+ */ +void +bfa_itnim_isr(struct bfa_s *bfa, struct bfi_msg_s *m) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + union bfi_itnim_i2h_msg_u msg; + struct bfa_itnim_s *itnim; + + bfa_trc(bfa, m->mhdr.msg_id); + + msg.msg = m; + + switch (m->mhdr.msg_id) { + case BFI_ITNIM_I2H_CREATE_RSP: + itnim = BFA_ITNIM_FROM_TAG(fcpim, + msg.create_rsp->bfa_handle); + bfa_assert(msg.create_rsp->status == BFA_STATUS_OK); + bfa_stats(itnim, create_comps); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_FWRSP); + break; + + case BFI_ITNIM_I2H_DELETE_RSP: + itnim = BFA_ITNIM_FROM_TAG(fcpim, + msg.delete_rsp->bfa_handle); + bfa_assert(msg.delete_rsp->status == BFA_STATUS_OK); + bfa_stats(itnim, delete_comps); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_FWRSP); + break; + + case BFI_ITNIM_I2H_SLER_EVENT: + itnim = BFA_ITNIM_FROM_TAG(fcpim, + msg.sler_event->bfa_handle); + bfa_stats(itnim, sler_events); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_SLER); + break; + + default: + bfa_trc(bfa, m->mhdr.msg_id); + bfa_assert(0); + } +} + + + +/** + * bfa_itnim_api + */ + +struct bfa_itnim_s * +bfa_itnim_create(struct bfa_s *bfa, struct bfa_rport_s *rport, void *ditn) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + struct bfa_itnim_s *itnim; + + itnim = BFA_ITNIM_FROM_TAG(fcpim, rport->rport_tag); + bfa_assert(itnim->rport == rport); + + itnim->ditn = ditn; + + bfa_stats(itnim, creates); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_CREATE); + + return (itnim); +} + +void +bfa_itnim_delete(struct bfa_itnim_s *itnim) +{ + bfa_stats(itnim, deletes); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_DELETE); +} + +void +bfa_itnim_online(struct bfa_itnim_s *itnim, bfa_boolean_t seq_rec) +{ + itnim->seq_rec = seq_rec; + bfa_stats(itnim, onlines); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_ONLINE); +} + +void +bfa_itnim_offline(struct bfa_itnim_s *itnim) +{ + bfa_stats(itnim, offlines); + bfa_sm_send_event(itnim, BFA_ITNIM_SM_OFFLINE); +} + +/** + * Return true if itnim is considered offline for holding off IO request. + * IO is not held if itnim is being deleted. + */ +bfa_boolean_t +bfa_itnim_hold_io(struct bfa_itnim_s *itnim) +{ + return ( + itnim->fcpim->path_tov && itnim->iotov_active && + (bfa_sm_cmp_state(itnim, bfa_itnim_sm_fwcreate) || + bfa_sm_cmp_state(itnim, bfa_itnim_sm_sler) || + bfa_sm_cmp_state(itnim, bfa_itnim_sm_cleanup_offline) || + bfa_sm_cmp_state(itnim, bfa_itnim_sm_fwdelete) || + bfa_sm_cmp_state(itnim, bfa_itnim_sm_offline) || + bfa_sm_cmp_state(itnim, bfa_itnim_sm_iocdisable)) +); +} + +void +bfa_itnim_get_stats(struct bfa_itnim_s *itnim, + struct bfa_itnim_hal_stats_s *stats) +{ + *stats = itnim->stats; +} + +void +bfa_itnim_clear_stats(struct bfa_itnim_s *itnim) +{ + bfa_os_memset(&itnim->stats, 0, sizeof(itnim->stats)); +} + + diff --git a/drivers/scsi/bfa/bfa_log.c b/drivers/scsi/bfa/bfa_log.c new file mode 100644 index 000000000000..c2735e55cf03 --- /dev/null +++ b/drivers/scsi/bfa/bfa_log.c @@ -0,0 +1,346 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ *  bfa_log.c BFA log library
+ */
+
+#include
+#include
+
+/*
+ * global log info structure
+ */
+struct bfa_log_info_s {
+	u32	start_idx;	/* start index for a module */
+	u32	total_count;	/* total count for a module */
+	enum bfa_log_severity level;	/* global log level */
+	bfa_log_cb_t	cbfn;	/* callback function */
+};
+static struct bfa_log_info_s bfa_log_info[BFA_LOG_MODULE_ID_MAX + 1];
+static u32 bfa_log_msg_total_count;
+static int bfa_log_initialized;
+
+static char *bfa_log_severity[] =
+	{ "[none]", "[critical]", "[error]", "[warn]", "[info]", "" };
+
+/**
+ * BFA log library initialization
+ *
+ * The log library initialization includes the following:
+ *    - set log instance name and callback function
+ *    - read the message array generated from xml files
+ *    - calculate start index for each module
+ *    - calculate message count for each module
+ *    - perform error checking
+ *
+ * @param[in] log_mod - log module info
+ * @param[in] instance_name - instance name
+ * @param[in] cbfn - callback function
+ *
+ * It returns 0 on success, or -1 on failure
+ */
+int
+bfa_log_init(struct bfa_log_mod_s *log_mod, char *instance_name,
+			bfa_log_cb_t cbfn)
+{
+	struct bfa_log_msgdef_s *msg;
+	u32 pre_mod_id = 0;
+	u32 cur_mod_id = 0;
+	u32 i, pre_idx, idx, msg_id;
+
+	/*
+	 * set instance name
+	 */
+	if (log_mod) {
+		strncpy(log_mod->instance_info, instance_name,
+			sizeof(log_mod->instance_info));
+		log_mod->cbfn = cbfn;
+		for (i = 0; i <= BFA_LOG_MODULE_ID_MAX; i++)
+			log_mod->log_level[i] = BFA_LOG_WARNING;
+	}
+
+	if (bfa_log_initialized)
+		return 0;
+
+	for (i = 0; i <= BFA_LOG_MODULE_ID_MAX; i++) {
+		bfa_log_info[i].start_idx = 0;
+		bfa_log_info[i].total_count = 0;
+		bfa_log_info[i].level = BFA_LOG_WARNING;
+		bfa_log_info[i].cbfn = cbfn;
+	}
+
+	pre_idx = 0;
+	idx = 0;
+	msg = bfa_log_msg_array;
+	msg_id = BFA_LOG_GET_MSG_ID(msg);
+	pre_mod_id = BFA_LOG_GET_MOD_ID(msg_id);
+	while (msg_id != 0) {
+		cur_mod_id = BFA_LOG_GET_MOD_ID(msg_id);
+
+		if (cur_mod_id > BFA_LOG_MODULE_ID_MAX) {
+			cbfn(log_mod, msg_id,
+				"%s%s log: module id %u out of range\n",
+				BFA_LOG_CAT_NAME,
+				bfa_log_severity[BFA_LOG_ERROR],
+				cur_mod_id);
+			return -1;
+		}
+
+		if (pre_mod_id > BFA_LOG_MODULE_ID_MAX) {
+			cbfn(log_mod, msg_id,
+				"%s%s log: module id %u out of range\n",
+				BFA_LOG_CAT_NAME,
+				bfa_log_severity[BFA_LOG_ERROR],
+				pre_mod_id);
+			return -1;
+		}
+
+		if (cur_mod_id != pre_mod_id) {
+			bfa_log_info[pre_mod_id].start_idx = pre_idx;
+			bfa_log_info[pre_mod_id].total_count = idx - pre_idx;
+			pre_mod_id = cur_mod_id;
+			pre_idx = idx;
+		}
+
+		idx++;
+		msg++;
+		msg_id = BFA_LOG_GET_MSG_ID(msg);
+	}
+
+	bfa_log_info[cur_mod_id].start_idx = pre_idx;
+	bfa_log_info[cur_mod_id].total_count = idx - pre_idx;
+	bfa_log_msg_total_count = idx;
+
+	cbfn(log_mod, msg_id, "%s%s log: init OK, msg total count %u\n",
+			BFA_LOG_CAT_NAME,
+			bfa_log_severity[BFA_LOG_INFO], bfa_log_msg_total_count);
+
+	bfa_log_initialized = 1;
+
+	return 0;
+}
+
+/**
+ * BFA log set log level for a module
+ *
+ * @param[in] log_mod - log module info
+ * @param[in] mod_id - module id
+ * @param[in] log_level - log severity level
+ *
+ * It returns BFA_STATUS_OK on success, or > 0 on failure
+ */
+bfa_status_t
+bfa_log_set_level(struct bfa_log_mod_s *log_mod, int mod_id,
+		enum bfa_log_severity log_level)
+{
+	if (mod_id <= BFA_LOG_UNUSED_ID || mod_id > BFA_LOG_MODULE_ID_MAX)
+		return BFA_STATUS_EINVAL;
+
+	if (log_level <= BFA_LOG_INVALID || log_level > BFA_LOG_LEVEL_MAX)
+		return BFA_STATUS_EINVAL;
+
+	if (log_mod)
+		log_mod->log_level[mod_id] = log_level;
+	else
+		bfa_log_info[mod_id].level = log_level;
+
+	return BFA_STATUS_OK;
+}
+
+/**
+ * BFA log set log level for all modules
+ *
+ * @param[in] log_mod - log module info
+ * @param[in] log_level - log severity level
+ *
+ * It returns BFA_STATUS_OK on success, or > 0 on failure
+ */
+bfa_status_t
+bfa_log_set_level_all(struct bfa_log_mod_s *log_mod,
+		enum bfa_log_severity log_level)
+{
+	int mod_id = BFA_LOG_UNUSED_ID + 1;
+
+	if (log_level <= BFA_LOG_INVALID || log_level > BFA_LOG_LEVEL_MAX)
+		return BFA_STATUS_EINVAL;
+
+	if (log_mod) {
+		for (; mod_id <= BFA_LOG_MODULE_ID_MAX; mod_id++)
+			log_mod->log_level[mod_id] = log_level;
+	} else {
+		for (; mod_id <= BFA_LOG_MODULE_ID_MAX; mod_id++)
+			bfa_log_info[mod_id].level = log_level;
+	}
+
+	return BFA_STATUS_OK;
+}
+
+/**
+ * BFA log set log level for all aen sub-modules
+ *
+ * @param[in] log_mod - log module info
+ * @param[in] log_level - log severity level
+ *
+ * It returns BFA_STATUS_OK on success, or > 0 on failure
+ */
+bfa_status_t
+bfa_log_set_level_aen(struct bfa_log_mod_s *log_mod,
+		enum bfa_log_severity log_level)
+{
+	int mod_id = BFA_LOG_AEN_MIN + 1;
+
+	if (log_mod) {
+		for (; mod_id <= BFA_LOG_AEN_MAX; mod_id++)
+			log_mod->log_level[mod_id] = log_level;
+	} else {
+		for (; mod_id <= BFA_LOG_AEN_MAX; mod_id++)
+			bfa_log_info[mod_id].level = log_level;
+	}
+
+	return BFA_STATUS_OK;
+}
+
+/**
+ * BFA log get log level for a module
+ *
+ * @param[in] log_mod - log module info
+ * @param[in] mod_id - module id
+ *
+ * It returns the log level, or BFA_LOG_INVALID on error
+ */
+enum bfa_log_severity
+bfa_log_get_level(struct bfa_log_mod_s *log_mod, int mod_id)
+{
+	if (mod_id <= BFA_LOG_UNUSED_ID || mod_id > BFA_LOG_MODULE_ID_MAX)
+		return BFA_LOG_INVALID;
+
+	if (log_mod)
+		return (log_mod->log_level[mod_id]);
+	else
+		return (bfa_log_info[mod_id].level);
+}
+
+enum bfa_log_severity
+bfa_log_get_msg_level(struct bfa_log_mod_s *log_mod, u32 msg_id)
+{
+	struct bfa_log_msgdef_s *msg;
+	u32 mod = BFA_LOG_GET_MOD_ID(msg_id);
+	u32 idx = BFA_LOG_GET_MSG_IDX(msg_id) - 1;
+
+	if (!bfa_log_initialized)
+		return BFA_LOG_INVALID;
+
+	if (mod > BFA_LOG_MODULE_ID_MAX)
+		return BFA_LOG_INVALID;
+
+	if (idx >= bfa_log_info[mod].total_count) {
+		bfa_log_info[mod].cbfn(log_mod, msg_id,
+			"%s%s log: inconsistent idx %u vs. total count %u\n",
+			BFA_LOG_CAT_NAME, bfa_log_severity[BFA_LOG_ERROR], idx,
+			bfa_log_info[mod].total_count);
+		return BFA_LOG_INVALID;
+	}
+
+	msg = bfa_log_msg_array + bfa_log_info[mod].start_idx + idx;
+	if (msg_id != BFA_LOG_GET_MSG_ID(msg)) {
+		bfa_log_info[mod].cbfn(log_mod, msg_id,
+			"%s%s log: inconsistent msg id %u array msg id %u\n",
+			BFA_LOG_CAT_NAME, bfa_log_severity[BFA_LOG_ERROR],
+			msg_id, BFA_LOG_GET_MSG_ID(msg));
+		return BFA_LOG_INVALID;
+	}
+
+	return BFA_LOG_GET_SEVERITY(msg);
+}
+
+/**
+ * BFA log message handling
+ *
+ * BFA log message handling finds the message based on message id and prints
+ * out the message based on its format and arguments. It also prefixes the
+ * message with the severity, etc.
+ *
+ * @param[in] log_mod - log module info
+ * @param[in] msg_id - message id
+ * @param[in] ... - message arguments
+ *
+ * It returns 0 on success, or -1 on errors
+ */
+int
+bfa_log(struct bfa_log_mod_s *log_mod, u32 msg_id, ...)
+{ + va_list ap; + char buf[256]; + struct bfa_log_msgdef_s *msg; + int log_level; + u32 mod = BFA_LOG_GET_MOD_ID(msg_id); + u32 idx = BFA_LOG_GET_MSG_IDX(msg_id) - 1; + + if (!bfa_log_initialized) + return -1; + + if (mod > BFA_LOG_MODULE_ID_MAX) + return -1; + + if (idx >= bfa_log_info[mod].total_count) { + bfa_log_info[mod]. + cbfn + (log_mod, msg_id, + "%s%s log: inconsistent idx %u vs. total count %u\n", + BFA_LOG_CAT_NAME, bfa_log_severity[BFA_LOG_ERROR], idx, + bfa_log_info[mod].total_count); + return -1; + } + + msg = bfa_log_msg_array + bfa_log_info[mod].start_idx + idx; + if (msg_id != BFA_LOG_GET_MSG_ID(msg)) { + bfa_log_info[mod]. + cbfn + (log_mod, msg_id, + "%s%s log: inconsistent msg id %u array msg id %u\n", + BFA_LOG_CAT_NAME, bfa_log_severity[BFA_LOG_ERROR], + msg_id, BFA_LOG_GET_MSG_ID(msg)); + return -1; + } + + log_level = log_mod ? log_mod->log_level[mod] : bfa_log_info[mod].level; + if ((BFA_LOG_GET_SEVERITY(msg) > log_level) && + (msg->attributes != BFA_LOG_ATTR_NONE)) + return 0; + + va_start(ap, msg_id); + bfa_os_vsprintf(buf, BFA_LOG_GET_MSG_FMT_STRING(msg), ap); + va_end(ap); + + if (log_mod) + log_mod->cbfn(log_mod, msg_id, "%s[%s]%s%s %s: %s\n", + BFA_LOG_CAT_NAME, log_mod->instance_info, + bfa_log_severity[BFA_LOG_GET_SEVERITY(msg)], + (msg->attributes & BFA_LOG_ATTR_AUDIT) + ? " (audit) " : "", msg->msg_value, buf); + else + bfa_log_info[mod].cbfn(log_mod, msg_id, "%s%s%s %s: %s\n", + BFA_LOG_CAT_NAME, + bfa_log_severity[BFA_LOG_GET_SEVERITY(msg)], + (msg->attributes & BFA_LOG_ATTR_AUDIT) ? + " (audit) " : "", msg->msg_value, buf); + + return 0; +} + diff --git a/drivers/scsi/bfa/bfa_log_module.c b/drivers/scsi/bfa/bfa_log_module.c new file mode 100644 index 000000000000..5c154d341d69 --- /dev/null +++ b/drivers/scsi/bfa/bfa_log_module.c @@ -0,0 +1,451 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct bfa_log_msgdef_s bfa_log_msg_array[] = { + + +/* messages define for BFA_AEN_CAT_ADAPTER Module */ +{BFA_AEN_ADAPTER_ADD, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_ADAPTER_ADD", + "New adapter found: SN = %s, base port WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_ADAPTER_REMOVE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_ADAPTER_REMOVE", + "Adapter removed: SN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + + + + +/* messages define for BFA_AEN_CAT_AUDIT Module */ +{BFA_AEN_AUDIT_AUTH_ENABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "BFA_AEN_AUDIT_AUTH_ENABLE", + "Authentication enabled for base port: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_AUDIT_AUTH_DISABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "BFA_AEN_AUDIT_AUTH_DISABLE", + "Authentication disabled for base port: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + + + + +/* messages define for BFA_AEN_CAT_ETHPORT Module */ +{BFA_AEN_ETHPORT_LINKUP, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_ETHPORT_LINKUP", + "Base port ethernet linkup: mac = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_ETHPORT_LINKDOWN, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_ETHPORT_LINKDOWN", + "Base port ethernet linkdown: mac = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_ETHPORT_ENABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_ETHPORT_ENABLE", + "Base port ethernet interface enabled: mac = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_ETHPORT_DISABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_ETHPORT_DISABLE", + "Base port ethernet interface disabled: mac = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + + + + +/* messages define for BFA_AEN_CAT_IOC Module */ +{BFA_AEN_IOC_HBGOOD, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_IOC_HBGOOD", + "Heart Beat of IOC %d is good.", + ((BFA_LOG_D << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_IOC_HBFAIL, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_CRITICAL, + "BFA_AEN_IOC_HBFAIL", + "Heart Beat of IOC %d has failed.", + ((BFA_LOG_D << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_IOC_ENABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_IOC_ENABLE", + "IOC %d is enabled.", + ((BFA_LOG_D << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_IOC_DISABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_IOC_DISABLE", + "IOC %d is disabled.", + ((BFA_LOG_D << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_IOC_FWMISMATCH, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_CRITICAL, "BFA_AEN_IOC_FWMISMATCH", + "Running firmware version is incompatible with the driver version.", + (0), 0}, + + + + +/* messages define for BFA_AEN_CAT_ITNIM Module */ +{BFA_AEN_ITNIM_ONLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_ITNIM_ONLINE", + "Target (WWN = %s) is online for initiator (WWN = %s).", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_ITNIM_OFFLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_ITNIM_OFFLINE", + "Target (WWN = %s) offlined by initiator (WWN = %s).", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_ITNIM_DISCONNECT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_ERROR, 
"BFA_AEN_ITNIM_DISCONNECT", + "Target (WWN = %s) connectivity lost for initiator (WWN = %s).", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + + + + +/* messages define for BFA_AEN_CAT_LPORT Module */ +{BFA_AEN_LPORT_NEW, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_LPORT_NEW", + "New logical port created: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_DELETE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_LPORT_DELETE", + "Logical port deleted: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_ONLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_LPORT_ONLINE", + "Logical port online: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_OFFLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_LPORT_OFFLINE", + "Logical port taken offline: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_DISCONNECT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_ERROR, "BFA_AEN_LPORT_DISCONNECT", + "Logical port lost fabric connectivity: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_NEW_PROP, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_LPORT_NEW_PROP", + "New virtual port created using proprietary interface: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_DELETE_PROP, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "BFA_AEN_LPORT_DELETE_PROP", + "Virtual port deleted using proprietary interface: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_NEW_STANDARD, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "BFA_AEN_LPORT_NEW_STANDARD", + "New virtual port created using standard interface: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_DELETE_STANDARD, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "BFA_AEN_LPORT_DELETE_STANDARD", + "Virtual port deleted using standard interface: WWN = %s, Role = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_LPORT_NPIV_DUP_WWN, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_LPORT_NPIV_DUP_WWN", + "Virtual port login failed. Duplicate WWN = %s reported by fabric.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_LPORT_NPIV_FABRIC_MAX, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_LPORT_NPIV_FABRIC_MAX", + "Virtual port (WWN = %s) login failed. 
Max NPIV ports already exist in" + " fabric/fport.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_LPORT_NPIV_UNKNOWN, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_LPORT_NPIV_UNKNOWN", + "Virtual port (WWN = %s) login failed.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + + + + +/* messages define for BFA_AEN_CAT_PORT Module */ +{BFA_AEN_PORT_ONLINE, BFA_LOG_ATTR_NONE, BFA_LOG_INFO, "BFA_AEN_PORT_ONLINE", + "Base port online: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_OFFLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_WARNING, + "BFA_AEN_PORT_OFFLINE", + "Base port offline: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_RLIR, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_PORT_RLIR", + "RLIR event not supported.", + (0), 0}, + +{BFA_AEN_PORT_SFP_INSERT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_PORT_SFP_INSERT", + "New SFP found: WWN/MAC = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_SFP_REMOVE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_PORT_SFP_REMOVE", + "SFP removed: WWN/MAC = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_SFP_POM, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_WARNING, + "BFA_AEN_PORT_SFP_POM", + "SFP POM level to %s: WWN/MAC = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_PORT_ENABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_PORT_ENABLE", + "Base port enabled: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_DISABLE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_PORT_DISABLE", + "Base port disabled: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_AUTH_ON, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_PORT_AUTH_ON", + "Authentication successful for base port: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_AUTH_OFF, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_ERROR, + "BFA_AEN_PORT_AUTH_OFF", + "Authentication unsuccessful for base port: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_DISCONNECT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_ERROR, + "BFA_AEN_PORT_DISCONNECT", + "Base port (WWN = %s) lost fabric connectivity.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_QOS_NEG, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_WARNING, + "BFA_AEN_PORT_QOS_NEG", + "QOS negotiation failed for base port: WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_FABRIC_NAME_CHANGE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_PORT_FABRIC_NAME_CHANGE", + "Base port WWN = %s, Fabric WWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_PORT_SFP_ACCESS_ERROR, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_PORT_SFP_ACCESS_ERROR", + "SFP access error: WWN/MAC = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_AEN_PORT_SFP_UNSUPPORT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_WARNING, "BFA_AEN_PORT_SFP_UNSUPPORT", + "Unsupported SFP found: WWN/MAC = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + + + + +/* messages define for BFA_AEN_CAT_RPORT Module */ +{BFA_AEN_RPORT_ONLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_RPORT_ONLINE", + "Remote port (WWN = %s) online for logical port (WWN = %s).", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + 
+{BFA_AEN_RPORT_OFFLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_RPORT_OFFLINE", + "Remote port (WWN = %s) offlined by logical port (WWN = %s).", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_RPORT_DISCONNECT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_ERROR, "BFA_AEN_RPORT_DISCONNECT", + "Remote port (WWN = %s) connectivity lost for logical port (WWN = %s).", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | 0), 2}, + +{BFA_AEN_RPORT_QOS_PRIO, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_RPORT_QOS_PRIO", + "QOS priority changed to %s: RPWWN = %s and LPWWN = %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | + (BFA_LOG_S << BFA_LOG_ARG2) | 0), 3}, + +{BFA_AEN_RPORT_QOS_FLOWID, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "BFA_AEN_RPORT_QOS_FLOWID", + "QOS flow ID changed to %d: RPWWN = %s and LPWWN = %s.", + ((BFA_LOG_D << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | + (BFA_LOG_S << BFA_LOG_ARG2) | 0), 3}, + + + + +/* messages define for FCS Module */ +{BFA_LOG_FCS_FABRIC_NOSWITCH, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "FCS_FABRIC_NOSWITCH", + "No switched fabric presence is detected.", + (0), 0}, + +{BFA_LOG_FCS_FABRIC_ISOLATED, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "FCS_FABRIC_ISOLATED", + "Port is isolated due to VF_ID mismatch. PWWN: %s, Port VF_ID: %04x and" + " switch port VF_ID: %04x.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_X << BFA_LOG_ARG1) | + (BFA_LOG_X << BFA_LOG_ARG2) | 0), 3}, + + + + +/* messages define for HAL Module */ +{BFA_LOG_HAL_ASSERT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_ERROR, + "HAL_ASSERT", + "Assertion failure: %s:%d: %s", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_D << BFA_LOG_ARG1) | + (BFA_LOG_S << BFA_LOG_ARG2) | 0), 3}, + +{BFA_LOG_HAL_HEARTBEAT_FAILURE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_CRITICAL, "HAL_HEARTBEAT_FAILURE", + "Firmware heartbeat failure at %d", + ((BFA_LOG_D << BFA_LOG_ARG0) | 0), 1}, + +{BFA_LOG_HAL_FCPIM_PARM_INVALID, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "HAL_FCPIM_PARM_INVALID", + "Driver configuration %s value %d is invalid. Value should be within" + " %d and %d.", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_D << BFA_LOG_ARG1) | + (BFA_LOG_D << BFA_LOG_ARG2) | (BFA_LOG_D << BFA_LOG_ARG3) | 0), 4}, + +{BFA_LOG_HAL_SM_ASSERT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_ERROR, + "HAL_SM_ASSERT", + "SM Assertion failure: %s:%d: event = %d", + ((BFA_LOG_S << BFA_LOG_ARG0) | (BFA_LOG_D << BFA_LOG_ARG1) | + (BFA_LOG_D << BFA_LOG_ARG2) | 0), 3}, + + + + +/* messages define for LINUX Module */ +{BFA_LOG_LINUX_DEVICE_CLAIMED, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_DEVICE_CLAIMED", + "bfa device at %s claimed.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_LOG_LINUX_HASH_INIT_FAILED, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_HASH_INIT_FAILED", + "Hash table initialization failure for the port %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_LOG_LINUX_SYSFS_FAILED, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_SYSFS_FAILED", + "sysfs file creation failure for the port %s.", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_LOG_LINUX_MEM_ALLOC_FAILED, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_MEM_ALLOC_FAILED", + "Memory allocation failed: %s. 
", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_LOG_LINUX_DRIVER_REGISTRATION_FAILED, + BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "LINUX_DRIVER_REGISTRATION_FAILED", + "%s. ", + ((BFA_LOG_S << BFA_LOG_ARG0) | 0), 1}, + +{BFA_LOG_LINUX_ITNIM_FREE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "LINUX_ITNIM_FREE", + "scsi%d: FCID: %s WWPN: %s", + ((BFA_LOG_D << BFA_LOG_ARG0) | (BFA_LOG_S << BFA_LOG_ARG1) | + (BFA_LOG_S << BFA_LOG_ARG2) | 0), 3}, + +{BFA_LOG_LINUX_ITNIM_ONLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_ITNIM_ONLINE", + "Target: %d:0:%d FCID: %s WWPN: %s", + ((BFA_LOG_D << BFA_LOG_ARG0) | (BFA_LOG_D << BFA_LOG_ARG1) | + (BFA_LOG_S << BFA_LOG_ARG2) | (BFA_LOG_S << BFA_LOG_ARG3) | 0), 4}, + +{BFA_LOG_LINUX_ITNIM_OFFLINE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_ITNIM_OFFLINE", + "Target: %d:0:%d FCID: %s WWPN: %s", + ((BFA_LOG_D << BFA_LOG_ARG0) | (BFA_LOG_D << BFA_LOG_ARG1) | + (BFA_LOG_S << BFA_LOG_ARG2) | (BFA_LOG_S << BFA_LOG_ARG3) | 0), 4}, + +{BFA_LOG_LINUX_SCSI_HOST_FREE, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_SCSI_HOST_FREE", + "Free scsi%d", + ((BFA_LOG_D << BFA_LOG_ARG0) | 0), 1}, + +{BFA_LOG_LINUX_SCSI_ABORT, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, BFA_LOG_INFO, + "LINUX_SCSI_ABORT", + "scsi%d: abort cmnd %p, iotag %x", + ((BFA_LOG_D << BFA_LOG_ARG0) | (BFA_LOG_P << BFA_LOG_ARG1) | + (BFA_LOG_X << BFA_LOG_ARG2) | 0), 3}, + +{BFA_LOG_LINUX_SCSI_ABORT_COMP, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "LINUX_SCSI_ABORT_COMP", + "scsi%d: complete abort 0x%p, iotag 0x%x", + ((BFA_LOG_D << BFA_LOG_ARG0) | (BFA_LOG_P << BFA_LOG_ARG1) | + (BFA_LOG_X << BFA_LOG_ARG2) | 0), 3}, + + + + +/* messages define for WDRV Module */ +{BFA_LOG_WDRV_IOC_INIT_ERROR, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "WDRV_IOC_INIT_ERROR", + "IOC initialization has failed.", + (0), 0}, + +{BFA_LOG_WDRV_IOC_INTERNAL_ERROR, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "WDRV_IOC_INTERNAL_ERROR", + "IOC internal error. ", + (0), 0}, + +{BFA_LOG_WDRV_IOC_START_ERROR, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "WDRV_IOC_START_ERROR", + "IOC could not be started. ", + (0), 0}, + +{BFA_LOG_WDRV_IOC_STOP_ERROR, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "WDRV_IOC_STOP_ERROR", + "IOC could not be stopped. ", + (0), 0}, + +{BFA_LOG_WDRV_INSUFFICIENT_RESOURCES, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "WDRV_INSUFFICIENT_RESOURCES", + "Insufficient memory. ", + (0), 0}, + +{BFA_LOG_WDRV_BASE_ADDRESS_MAP_ERROR, BFA_LOG_ATTR_NONE | BFA_LOG_ATTR_LOG, + BFA_LOG_INFO, "WDRV_BASE_ADDRESS_MAP_ERROR", + "Unable to map the IOC onto the system address space. ", + (0), 0}, + + +{0, 0, 0, "", "", 0, 0}, +}; diff --git a/drivers/scsi/bfa/bfa_lps.c b/drivers/scsi/bfa/bfa_lps.c new file mode 100644 index 000000000000..9844b45412b6 --- /dev/null +++ b/drivers/scsi/bfa/bfa_lps.c @@ -0,0 +1,782 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU + * General Public License for more details. + */ + +#include +#include +#include + +BFA_TRC_FILE(HAL, LPS); +BFA_MODULE(lps); + +#define BFA_LPS_MIN_LPORTS (1) +#define BFA_LPS_MAX_LPORTS (256) + +/** + * forward declarations + */ +static void bfa_lps_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *ndm_len, + u32 *dm_len); +static void bfa_lps_attach(struct bfa_s *bfa, void *bfad, + struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, + struct bfa_pcidev_s *pcidev); +static void bfa_lps_initdone(struct bfa_s *bfa); +static void bfa_lps_detach(struct bfa_s *bfa); +static void bfa_lps_start(struct bfa_s *bfa); +static void bfa_lps_stop(struct bfa_s *bfa); +static void bfa_lps_iocdisable(struct bfa_s *bfa); +static void bfa_lps_login_rsp(struct bfa_s *bfa, + struct bfi_lps_login_rsp_s *rsp); +static void bfa_lps_logout_rsp(struct bfa_s *bfa, + struct bfi_lps_logout_rsp_s *rsp); +static void bfa_lps_reqq_resume(void *lps_arg); +static void bfa_lps_free(struct bfa_lps_s *lps); +static void bfa_lps_send_login(struct bfa_lps_s *lps); +static void bfa_lps_send_logout(struct bfa_lps_s *lps); +static void bfa_lps_login_comp(struct bfa_lps_s *lps); +static void bfa_lps_logout_comp(struct bfa_lps_s *lps); + + +/** + * lps_pvt BFA LPS private functions + */ + +enum bfa_lps_event { + BFA_LPS_SM_LOGIN = 1, /* login request from user */ + BFA_LPS_SM_LOGOUT = 2, /* logout request from user */ + BFA_LPS_SM_FWRSP = 3, /* f/w response to login/logout */ + BFA_LPS_SM_RESUME = 4, /* space present in reqq queue */ + BFA_LPS_SM_DELETE = 5, /* lps delete from user */ + BFA_LPS_SM_OFFLINE = 6, /* Link is offline */ +}; + +static void bfa_lps_sm_init(struct bfa_lps_s *lps, enum bfa_lps_event event); +static void bfa_lps_sm_login(struct bfa_lps_s *lps, enum bfa_lps_event event); +static void bfa_lps_sm_loginwait(struct bfa_lps_s *lps, + enum bfa_lps_event event); +static void bfa_lps_sm_online(struct bfa_lps_s *lps, enum bfa_lps_event event); +static void bfa_lps_sm_logout(struct bfa_lps_s *lps, enum bfa_lps_event event); +static void bfa_lps_sm_logowait(struct bfa_lps_s *lps, + enum bfa_lps_event event); + +/** + * Init state -- no login + */ +static void +bfa_lps_sm_init(struct bfa_lps_s *lps, enum bfa_lps_event event) +{ + bfa_trc(lps->bfa, lps->lp_tag); + bfa_trc(lps->bfa, event); + + switch (event) { + case BFA_LPS_SM_LOGIN: + if (bfa_reqq_full(lps->bfa, lps->reqq)) { + bfa_sm_set_state(lps, bfa_lps_sm_loginwait); + bfa_reqq_wait(lps->bfa, lps->reqq, &lps->wqe); + } else { + bfa_sm_set_state(lps, bfa_lps_sm_login); + bfa_lps_send_login(lps); + } + break; + + case BFA_LPS_SM_LOGOUT: + bfa_lps_logout_comp(lps); + break; + + case BFA_LPS_SM_DELETE: + bfa_lps_free(lps); + break; + + case BFA_LPS_SM_OFFLINE: + break; + + case BFA_LPS_SM_FWRSP: + /* Could happen when fabric detects loopback and discards + * the lps request. 
FW will eventually send out the timeout;
+		 * just ignore it.
+		 */
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * login is in progress -- awaiting response from firmware
+ */
+static void
+bfa_lps_sm_login(struct bfa_lps_s *lps, enum bfa_lps_event event)
+{
+	bfa_trc(lps->bfa, lps->lp_tag);
+	bfa_trc(lps->bfa, event);
+
+	switch (event) {
+	case BFA_LPS_SM_FWRSP:
+		if (lps->status == BFA_STATUS_OK)
+			bfa_sm_set_state(lps, bfa_lps_sm_online);
+		else
+			bfa_sm_set_state(lps, bfa_lps_sm_init);
+		bfa_lps_login_comp(lps);
+		break;
+
+	case BFA_LPS_SM_OFFLINE:
+		bfa_sm_set_state(lps, bfa_lps_sm_init);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * login pending - awaiting space in request queue
+ */
+static void
+bfa_lps_sm_loginwait(struct bfa_lps_s *lps, enum bfa_lps_event event)
+{
+	bfa_trc(lps->bfa, lps->lp_tag);
+	bfa_trc(lps->bfa, event);
+
+	switch (event) {
+	case BFA_LPS_SM_RESUME:
+		bfa_sm_set_state(lps, bfa_lps_sm_login);
+		break;
+
+	case BFA_LPS_SM_OFFLINE:
+		bfa_sm_set_state(lps, bfa_lps_sm_init);
+		bfa_reqq_wcancel(&lps->wqe);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * login complete
+ */
+static void
+bfa_lps_sm_online(struct bfa_lps_s *lps, enum bfa_lps_event event)
+{
+	bfa_trc(lps->bfa, lps->lp_tag);
+	bfa_trc(lps->bfa, event);
+
+	switch (event) {
+	case BFA_LPS_SM_LOGOUT:
+		if (bfa_reqq_full(lps->bfa, lps->reqq)) {
+			bfa_sm_set_state(lps, bfa_lps_sm_logowait);
+			bfa_reqq_wait(lps->bfa, lps->reqq, &lps->wqe);
+		} else {
+			bfa_sm_set_state(lps, bfa_lps_sm_logout);
+			bfa_lps_send_logout(lps);
+		}
+		break;
+
+	case BFA_LPS_SM_OFFLINE:
+	case BFA_LPS_SM_DELETE:
+		bfa_sm_set_state(lps, bfa_lps_sm_init);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * logout in progress - awaiting firmware response
+ */
+static void
+bfa_lps_sm_logout(struct bfa_lps_s *lps, enum bfa_lps_event event)
+{
+	bfa_trc(lps->bfa, lps->lp_tag);
+	bfa_trc(lps->bfa, event);
+
+	switch (event) {
+	case BFA_LPS_SM_FWRSP:
+		bfa_sm_set_state(lps, bfa_lps_sm_init);
+		bfa_lps_logout_comp(lps);
+		break;
+
+	case BFA_LPS_SM_OFFLINE:
+		bfa_sm_set_state(lps, bfa_lps_sm_init);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * logout pending -- awaiting space in request queue
+ */
+static void
+bfa_lps_sm_logowait(struct bfa_lps_s *lps, enum bfa_lps_event event)
+{
+	bfa_trc(lps->bfa, lps->lp_tag);
+	bfa_trc(lps->bfa, event);
+
+	switch (event) {
+	case BFA_LPS_SM_RESUME:
+		bfa_sm_set_state(lps, bfa_lps_sm_logout);
+		bfa_lps_send_logout(lps);
+		break;
+
+	case BFA_LPS_SM_OFFLINE:
+		bfa_sm_set_state(lps, bfa_lps_sm_init);
+		bfa_reqq_wcancel(&lps->wqe);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+
+
+/**
+ * lps_pvt BFA LPS private functions
+ */
+
+/**
+ * return memory requirement
+ */
+static void
+bfa_lps_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *ndm_len, u32 *dm_len)
+{
+	if (cfg->drvcfg.min_cfg)
+		*ndm_len += sizeof(struct bfa_lps_s) * BFA_LPS_MIN_LPORTS;
+	else
+		*ndm_len += sizeof(struct bfa_lps_s) * BFA_LPS_MAX_LPORTS;
+}
+
+/**
+ * bfa module attach at initialization time
+ */
+static void
+bfa_lps_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg,
+		struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev)
+{
+	struct bfa_lps_mod_s *mod = BFA_LPS_MOD(bfa);
+	struct bfa_lps_s *lps;
+	int i;
+
+	bfa_os_memset(mod, 0, sizeof(struct bfa_lps_mod_s));
+	mod->num_lps = BFA_LPS_MAX_LPORTS;
+	if (cfg->drvcfg.min_cfg)
+		mod->num_lps = BFA_LPS_MIN_LPORTS;
+	else
+		mod->num_lps = BFA_LPS_MAX_LPORTS;
+	mod->lps_arr = lps = (struct bfa_lps_s *)
bfa_meminfo_kva(meminfo); + + bfa_meminfo_kva(meminfo) += mod->num_lps * sizeof(struct bfa_lps_s); + + INIT_LIST_HEAD(&mod->lps_free_q); + INIT_LIST_HEAD(&mod->lps_active_q); + + for (i = 0; i < mod->num_lps; i++, lps++) { + lps->bfa = bfa; + lps->lp_tag = (u8) i; + lps->reqq = BFA_REQQ_LPS; + bfa_reqq_winit(&lps->wqe, bfa_lps_reqq_resume, lps); + list_add_tail(&lps->qe, &mod->lps_free_q); + } +} + +static void +bfa_lps_initdone(struct bfa_s *bfa) +{ +} + +static void +bfa_lps_detach(struct bfa_s *bfa) +{ +} + +static void +bfa_lps_start(struct bfa_s *bfa) +{ +} + +static void +bfa_lps_stop(struct bfa_s *bfa) +{ +} + +/** + * IOC in disabled state -- consider all lps offline + */ +static void +bfa_lps_iocdisable(struct bfa_s *bfa) +{ + struct bfa_lps_mod_s *mod = BFA_LPS_MOD(bfa); + struct bfa_lps_s *lps; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &mod->lps_active_q) { + lps = (struct bfa_lps_s *) qe; + bfa_sm_send_event(lps, BFA_LPS_SM_OFFLINE); + } +} + +/** + * Firmware login response + */ +static void +bfa_lps_login_rsp(struct bfa_s *bfa, struct bfi_lps_login_rsp_s *rsp) +{ + struct bfa_lps_mod_s *mod = BFA_LPS_MOD(bfa); + struct bfa_lps_s *lps; + + bfa_assert(rsp->lp_tag < mod->num_lps); + lps = BFA_LPS_FROM_TAG(mod, rsp->lp_tag); + + lps->status = rsp->status; + switch (rsp->status) { + case BFA_STATUS_OK: + lps->fport = rsp->f_port; + lps->npiv_en = rsp->npiv_en; + lps->lp_pid = rsp->lp_pid; + lps->pr_bbcred = bfa_os_ntohs(rsp->bb_credit); + lps->pr_pwwn = rsp->port_name; + lps->pr_nwwn = rsp->node_name; + lps->auth_req = rsp->auth_req; + lps->lp_mac = rsp->lp_mac; + lps->brcd_switch = rsp->brcd_switch; + lps->fcf_mac = rsp->fcf_mac; + + break; + + case BFA_STATUS_FABRIC_RJT: + lps->lsrjt_rsn = rsp->lsrjt_rsn; + lps->lsrjt_expl = rsp->lsrjt_expl; + + break; + + case BFA_STATUS_EPROTOCOL: + lps->ext_status = rsp->ext_status; + + break; + + default: + /* Nothing to do with other status */ + break; + } + + bfa_sm_send_event(lps, BFA_LPS_SM_FWRSP); +} + +/** + * Firmware logout response + */ +static void +bfa_lps_logout_rsp(struct bfa_s *bfa, struct bfi_lps_logout_rsp_s *rsp) +{ + struct bfa_lps_mod_s *mod = BFA_LPS_MOD(bfa); + struct bfa_lps_s *lps; + + bfa_assert(rsp->lp_tag < mod->num_lps); + lps = BFA_LPS_FROM_TAG(mod, rsp->lp_tag); + + bfa_sm_send_event(lps, BFA_LPS_SM_FWRSP); +} + +/** + * Space is available in request queue, resume queueing request to firmware. 
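+ * + * This is the queue-wait callback registered through bfa_reqq_winit() in + * bfa_lps_attach(); it feeds BFA_LPS_SM_RESUME into the state machine, + * which then retries the deferred login or logout send.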
+ */ +static void +bfa_lps_reqq_resume(void *lps_arg) +{ + struct bfa_lps_s *lps = lps_arg; + + bfa_sm_send_event(lps, BFA_LPS_SM_RESUME); +} + +/** + * lps is freed -- triggered by vport delete + */ +static void +bfa_lps_free(struct bfa_lps_s *lps) +{ + struct bfa_lps_mod_s *mod = BFA_LPS_MOD(lps->bfa); + + list_del(&lps->qe); + list_add_tail(&lps->qe, &mod->lps_free_q); +} + +/** + * send login request to firmware + */ +static void +bfa_lps_send_login(struct bfa_lps_s *lps) +{ + struct bfi_lps_login_req_s *m; + + m = bfa_reqq_next(lps->bfa, lps->reqq); + bfa_assert(m); + + bfi_h2i_set(m->mh, BFI_MC_LPS, BFI_LPS_H2I_LOGIN_REQ, + bfa_lpuid(lps->bfa)); + + m->lp_tag = lps->lp_tag; + m->alpa = lps->alpa; + m->pdu_size = bfa_os_htons(lps->pdusz); + m->pwwn = lps->pwwn; + m->nwwn = lps->nwwn; + m->fdisc = lps->fdisc; + m->auth_en = lps->auth_en; + + bfa_reqq_produce(lps->bfa, lps->reqq); +} + +/** + * send logout request to firmware + */ +static void +bfa_lps_send_logout(struct bfa_lps_s *lps) +{ + struct bfi_lps_logout_req_s *m; + + m = bfa_reqq_next(lps->bfa, lps->reqq); + bfa_assert(m); + + bfi_h2i_set(m->mh, BFI_MC_LPS, BFI_LPS_H2I_LOGOUT_REQ, + bfa_lpuid(lps->bfa)); + + m->lp_tag = lps->lp_tag; + m->port_name = lps->pwwn; + bfa_reqq_produce(lps->bfa, lps->reqq); +} + +/** + * Indirect login completion handler for non-fcs + */ +static void +bfa_lps_login_comp_cb(void *arg, bfa_boolean_t complete) +{ + struct bfa_lps_s *lps = arg; + + if (!complete) + return; + + if (lps->fdisc) + bfa_cb_lps_fdisc_comp(lps->bfa->bfad, lps->uarg, lps->status); + else + bfa_cb_lps_flogi_comp(lps->bfa->bfad, lps->uarg, lps->status); +} + +/** + * Login completion handler -- direct call for fcs, queue for others + */ +static void +bfa_lps_login_comp(struct bfa_lps_s *lps) +{ + if (!lps->bfa->fcs) { + bfa_cb_queue(lps->bfa, &lps->hcb_qe, + bfa_lps_login_comp_cb, lps); + return; + } + + if (lps->fdisc) + bfa_cb_lps_fdisc_comp(lps->bfa->bfad, lps->uarg, lps->status); + else + bfa_cb_lps_flogi_comp(lps->bfa->bfad, lps->uarg, lps->status); +} + +/** + * Indirect logout completion handler for non-fcs + */ +static void +bfa_lps_logout_comp_cb(void *arg, bfa_boolean_t complete) +{ + struct bfa_lps_s *lps = arg; + + if (!complete) + return; + + if (lps->fdisc) + bfa_cb_lps_fdisclogo_comp(lps->bfa->bfad, lps->uarg); + else + bfa_cb_lps_flogo_comp(lps->bfa->bfad, lps->uarg); +} + +/** + * Logout completion handler -- direct call for fcs, queue for others + */ +static void +bfa_lps_logout_comp(struct bfa_lps_s *lps) +{ + if (!lps->bfa->fcs) { + bfa_cb_queue(lps->bfa, &lps->hcb_qe, + bfa_lps_logout_comp_cb, lps); + return; + } + if (lps->fdisc) + bfa_cb_lps_fdisclogo_comp(lps->bfa->bfad, lps->uarg); + else + bfa_cb_lps_flogo_comp(lps->bfa->bfad, lps->uarg); +} + + + +/** + * lps_public BFA LPS public functions + */ + +/** + * Allocate a lport service tag. + */ +struct bfa_lps_s * +bfa_lps_alloc(struct bfa_s *bfa) +{ + struct bfa_lps_mod_s *mod = BFA_LPS_MOD(bfa); + struct bfa_lps_s *lps = NULL; + + bfa_q_deq(&mod->lps_free_q, &lps); + + if (lps == NULL) + return NULL; + + list_add_tail(&lps->qe, &mod->lps_active_q); + + bfa_sm_set_state(lps, bfa_lps_sm_init); + return lps; +} + +/** + * Free lport service tag. This can be called anytime after an alloc. + * No need to wait for any pending login/logout completions. + */ +void +bfa_lps_delete(struct bfa_lps_s *lps) +{ + bfa_sm_send_event(lps, BFA_LPS_SM_DELETE); +} + +/** + * Initiate a lport login.
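+ * + * A minimal call sketch (argument values are illustrative only): + * bfa_lps_flogi(lps, uarg, 0, 2048, pwwn, nwwn, BFA_FALSE); + * requests a base port login with ALPA 0, a 2048-byte PDU size and + * authentication disabled; completion is reported through + * bfa_cb_lps_flogi_comp().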
+ */ +void +bfa_lps_flogi(struct bfa_lps_s *lps, void *uarg, u8 alpa, u16 pdusz, + wwn_t pwwn, wwn_t nwwn, bfa_boolean_t auth_en) +{ + lps->uarg = uarg; + lps->alpa = alpa; + lps->pdusz = pdusz; + lps->pwwn = pwwn; + lps->nwwn = nwwn; + lps->fdisc = BFA_FALSE; + lps->auth_en = auth_en; + bfa_sm_send_event(lps, BFA_LPS_SM_LOGIN); +} + +/** + * Initiate a lport fdisc login. + */ +void +bfa_lps_fdisc(struct bfa_lps_s *lps, void *uarg, u16 pdusz, wwn_t pwwn, + wwn_t nwwn) +{ + lps->uarg = uarg; + lps->alpa = 0; + lps->pdusz = pdusz; + lps->pwwn = pwwn; + lps->nwwn = nwwn; + lps->fdisc = BFA_TRUE; + lps->auth_en = BFA_FALSE; + bfa_sm_send_event(lps, BFA_LPS_SM_LOGIN); +} + +/** + * Initiate a lport logout (FLOGI). + */ +void +bfa_lps_flogo(struct bfa_lps_s *lps) +{ + bfa_sm_send_event(lps, BFA_LPS_SM_LOGOUT); +} + +/** + * Initiate a lport FDISC logout. + */ +void +bfa_lps_fdisclogo(struct bfa_lps_s *lps) +{ + bfa_sm_send_event(lps, BFA_LPS_SM_LOGOUT); +} + +/** + * Discard a pending login request -- should be called only for + * link down handling. + */ +void +bfa_lps_discard(struct bfa_lps_s *lps) +{ + bfa_sm_send_event(lps, BFA_LPS_SM_OFFLINE); +} + +/** + * Return lport services tag + */ +u8 +bfa_lps_get_tag(struct bfa_lps_s *lps) +{ + return lps->lp_tag; +} + +/** + * Return lport services tag given the pid + */ +u8 +bfa_lps_get_tag_from_pid(struct bfa_s *bfa, u32 pid) +{ + struct bfa_lps_mod_s *mod = BFA_LPS_MOD(bfa); + struct bfa_lps_s *lps; + int i; + + for (i = 0, lps = mod->lps_arr; i < mod->num_lps; i++, lps++) { + if (lps->lp_pid == pid) + return lps->lp_tag; + } + + /* Return base port tag anyway */ + return 0; +} + +/** + * return if fabric login indicates support for NPIV + */ +bfa_boolean_t +bfa_lps_is_npiv_en(struct bfa_lps_s *lps) +{ + return lps->npiv_en; +} + +/** + * Return TRUE if attached to F-Port, else return FALSE + */ +bfa_boolean_t +bfa_lps_is_fport(struct bfa_lps_s *lps) +{ + return lps->fport; +} + +/** + * Return TRUE if attached to a Brocade Fabric + */ +bfa_boolean_t +bfa_lps_is_brcd_fabric(struct bfa_lps_s *lps) +{ + return lps->brcd_switch; +} + +/** + * return TRUE if authentication is required + */ +bfa_boolean_t +bfa_lps_is_authreq(struct bfa_lps_s *lps) +{ + return lps->auth_req; +} + +bfa_eproto_status_t +bfa_lps_get_extstatus(struct bfa_lps_s *lps) +{ + return lps->ext_status; +} + +/** + * return port id assigned to the lport + */ +u32 +bfa_lps_get_pid(struct bfa_lps_s *lps) +{ + return lps->lp_pid; +} + +/** + * Return bb_credit assigned in FLOGI response + */ +u16 +bfa_lps_get_peer_bbcredit(struct bfa_lps_s *lps) +{ + return lps->pr_bbcred; +} + +/** + * Return peer port name + */ +wwn_t +bfa_lps_get_peer_pwwn(struct bfa_lps_s *lps) +{ + return lps->pr_pwwn; +} + +/** + * Return peer node name + */ +wwn_t +bfa_lps_get_peer_nwwn(struct bfa_lps_s *lps) +{ + return lps->pr_nwwn; +} + +/** + * return reason code if login request is rejected + */ +u8 +bfa_lps_get_lsrjt_rsn(struct bfa_lps_s *lps) +{ + return lps->lsrjt_rsn; +} + +/** + * return explanation code if login request is rejected + */ +u8 +bfa_lps_get_lsrjt_expl(struct bfa_lps_s *lps) +{ + return lps->lsrjt_expl; +} + + +/** + * LPS firmware message class handler.
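+ * + * Demultiplexes login and logout responses from the firmware to the + * owning lps instance; unknown message ids are traced and trapped by + * bfa_assert(0).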
+ */ +void +bfa_lps_isr(struct bfa_s *bfa, struct bfi_msg_s *m) +{ + union bfi_lps_i2h_msg_u msg; + + bfa_trc(bfa, m->mhdr.msg_id); + msg.msg = m; + + switch (m->mhdr.msg_id) { + case BFI_LPS_H2I_LOGIN_RSP: + bfa_lps_login_rsp(bfa, msg.login_rsp); + break; + + case BFI_LPS_H2I_LOGOUT_RSP: + bfa_lps_logout_rsp(bfa, msg.logout_rsp); + break; + + default: + bfa_trc(bfa, m->mhdr.msg_id); + bfa_assert(0); + } +} + + diff --git a/drivers/scsi/bfa/bfa_lps_priv.h b/drivers/scsi/bfa/bfa_lps_priv.h new file mode 100644 index 000000000000..d16c6ce995df --- /dev/null +++ b/drivers/scsi/bfa/bfa_lps_priv.h @@ -0,0 +1,38 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_LPS_PRIV_H__ +#define __BFA_LPS_PRIV_H__ + +#include + +struct bfa_lps_mod_s { + struct list_head lps_free_q; + struct list_head lps_active_q; + struct bfa_lps_s *lps_arr; + int num_lps; +}; + +#define BFA_LPS_MOD(__bfa) (&(__bfa)->modules.lps_mod) +#define BFA_LPS_FROM_TAG(__mod, __tag) (&(__mod)->lps_arr[__tag]) + +/* + * external functions + */ +void bfa_lps_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); + +#endif /* __BFA_LPS_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_module.c b/drivers/scsi/bfa/bfa_module.c new file mode 100644 index 000000000000..32eda8e1ec65 --- /dev/null +++ b/drivers/scsi/bfa/bfa_module.c @@ -0,0 +1,90 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#include +#include +#include +#include + +/** + * BFA module list terminated by NULL + */ +struct bfa_module_s *hal_mods[] = { + &hal_mod_sgpg, + &hal_mod_pport, + &hal_mod_fcxp, + &hal_mod_lps, + &hal_mod_uf, + &hal_mod_rport, + &hal_mod_fcpim, +#ifdef BFA_CFG_PBIND + &hal_mod_pbind, +#endif + NULL +}; + +/** + * Message handlers for various modules. 
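+ * + * The table is indexed by BFI message class; classes without a + * dedicated handler point at bfa_isr_unhandled so unexpected firmware + * messages are trapped rather than silently dropped.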
+ */ +bfa_isr_func_t bfa_isrs[BFI_MC_MAX] = { + bfa_isr_unhandled, /* NONE */ + bfa_isr_unhandled, /* BFI_MC_IOC */ + bfa_isr_unhandled, /* BFI_MC_DIAG */ + bfa_isr_unhandled, /* BFI_MC_FLASH */ + bfa_isr_unhandled, /* BFI_MC_CEE */ + bfa_pport_isr, /* BFI_MC_PORT */ + bfa_isr_unhandled, /* BFI_MC_IOCFC */ + bfa_isr_unhandled, /* BFI_MC_LL */ + bfa_uf_isr, /* BFI_MC_UF */ + bfa_fcxp_isr, /* BFI_MC_FCXP */ + bfa_lps_isr, /* BFI_MC_LPS */ + bfa_rport_isr, /* BFI_MC_RPORT */ + bfa_itnim_isr, /* BFI_MC_ITNIM */ + bfa_isr_unhandled, /* BFI_MC_IOIM_READ */ + bfa_isr_unhandled, /* BFI_MC_IOIM_WRITE */ + bfa_isr_unhandled, /* BFI_MC_IOIM_IO */ + bfa_ioim_isr, /* BFI_MC_IOIM */ + bfa_ioim_good_comp_isr, /* BFI_MC_IOIM_IOCOM */ + bfa_tskim_isr, /* BFI_MC_TSKIM */ + bfa_isr_unhandled, /* BFI_MC_SBOOT */ + bfa_isr_unhandled, /* BFI_MC_IPFC */ + bfa_isr_unhandled, /* BFI_MC_PORT */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ + bfa_isr_unhandled, /* --------- */ +}; + +/** + * Message handlers for mailbox command classes + */ +bfa_ioc_mbox_mcfunc_t bfa_mbox_isrs[BFI_MC_MAX] = { + NULL, + NULL, /* BFI_MC_IOC */ + NULL, /* BFI_MC_DIAG */ + NULL, /* BFI_MC_FLASH */ + NULL, /* BFI_MC_CEE */ + NULL, /* BFI_MC_PORT */ + bfa_iocfc_isr, /* BFI_MC_IOCFC */ + NULL, +}; + diff --git a/drivers/scsi/bfa/bfa_modules_priv.h b/drivers/scsi/bfa/bfa_modules_priv.h new file mode 100644 index 000000000000..96f70534593c --- /dev/null +++ b/drivers/scsi/bfa/bfa_modules_priv.h @@ -0,0 +1,43 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_MODULES_PRIV_H__ +#define __BFA_MODULES_PRIV_H__ + +#include "bfa_uf_priv.h" +#include "bfa_port_priv.h" +#include "bfa_rport_priv.h" +#include "bfa_fcxp_priv.h" +#include "bfa_lps_priv.h" +#include "bfa_fcpim_priv.h" +#include +#include + + +struct bfa_modules_s { + struct bfa_pport_s pport; /* physical port module */ + struct bfa_fcxp_mod_s fcxp_mod; /* fcxp module */ + struct bfa_lps_mod_s lps_mod; /* lport service module */ + struct bfa_uf_mod_s uf_mod; /* unsolicited frame module */ + struct bfa_rport_mod_s rport_mod; /* remote port module */ + struct bfa_fcpim_mod_s fcpim_mod; /* FCP initiator module */ + struct bfa_sgpg_mod_s sgpg_mod; /* SG page module */ + struct bfa_cee_s cee; /* CEE Module */ + struct bfa_port_s port; /* Physical port module */ +}; + +#endif /* __BFA_MODULES_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_os_inc.h b/drivers/scsi/bfa/bfa_os_inc.h new file mode 100644 index 000000000000..10a89f75fa94 --- /dev/null +++ b/drivers/scsi/bfa/bfa_os_inc.h @@ -0,0 +1,222 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * Contains declarations for all OS-specific files needed by the BFA layer + */ + +#ifndef __BFA_OS_INC_H__ +#define __BFA_OS_INC_H__ + +#ifndef __KERNEL__ +#include +#else +#include + +#include +#include + +#include +#define SET_MODULE_VERSION(VER) + +#include + +#include +#include +#include +#include +#include + +#include + +#include +#include + +#include +#include +#include + +#define BFA_ERR KERN_ERR +#define BFA_WARNING KERN_WARNING +#define BFA_NOTICE KERN_NOTICE +#define BFA_INFO KERN_INFO +#define BFA_DEBUG KERN_DEBUG + +#define LOG_BFAD_INIT 0x00000001 +#define LOG_FCP_IO 0x00000002 + +#ifdef DEBUG +#define BFA_LOG_TRACE(bfad, level, mask, fmt, arg...) \ + BFA_LOG(bfad, level, mask, fmt, ## arg) +#define BFA_DEV_TRACE(bfad, level, fmt, arg...) \ + BFA_DEV_PRINTF(bfad, level, fmt, ## arg) +#define BFA_TRACE(level, fmt, arg...) \ + BFA_PRINTF(level, fmt, ## arg) +#else +#define BFA_LOG_TRACE(bfad, level, mask, fmt, arg...) +#define BFA_DEV_TRACE(bfad, level, fmt, arg...) +#define BFA_TRACE(level, fmt, arg...) +#endif + +#define BFA_ASSERT(p) do { \ + if (!(p)) { \ + printk(KERN_ERR "assert(%s) failed at %s:%d\n", \ + #p, __FILE__, __LINE__); \ + BUG(); \ + } \ +} while (0) + + +#define BFA_LOG(bfad, level, mask, fmt, arg...) \ +do { \ + if (((mask) & (((struct bfad_s *)(bfad))-> \ + cfg_data[cfg_log_mask])) || (level[1] <= '3')) \ + dev_printk(level, &(((struct bfad_s *) \ + (bfad))->pcidev->dev), fmt, ##arg); \ +} while (0) + +#ifndef BFA_DEV_PRINTF +#define BFA_DEV_PRINTF(bfad, level, fmt, arg...) \ + dev_printk(level, &(((struct bfad_s *) \ + (bfad))->pcidev->dev), fmt, ##arg); +#endif + +#define BFA_PRINTF(level, fmt, arg...)
\ + printk(level fmt, ##arg); + +int bfa_os_MWB(void *); + +#define bfa_os_mmiowb() mmiowb() + +#define bfa_swap_3b(_x) \ + ((((_x) & 0xff) << 16) | \ + ((_x) & 0x00ff00) | \ + (((_x) & 0xff0000) >> 16)) + +#define bfa_swap_8b(_x) \ + ((((_x) & 0xff00000000000000ull) >> 56) \ + | (((_x) & 0x00ff000000000000ull) >> 40) \ + | (((_x) & 0x0000ff0000000000ull) >> 24) \ + | (((_x) & 0x000000ff00000000ull) >> 8) \ + | (((_x) & 0x00000000ff000000ull) << 8) \ + | (((_x) & 0x0000000000ff0000ull) << 24) \ + | (((_x) & 0x000000000000ff00ull) << 40) \ + | (((_x) & 0x00000000000000ffull) << 56)) + +#define bfa_os_swap32(_x) \ + ((((_x) & 0xff) << 24) | \ + (((_x) & 0x0000ff00) << 8) | \ + (((_x) & 0x00ff0000) >> 8) | \ + (((_x) & 0xff000000) >> 24)) + + +#ifndef __BIGENDIAN +#define bfa_os_htons(_x) ((u16)((((_x) & 0xff00) >> 8) | \ + (((_x) & 0x00ff) << 8))) + +#define bfa_os_htonl(_x) bfa_os_swap32(_x) +#define bfa_os_htonll(_x) bfa_swap_8b(_x) +#define bfa_os_hton3b(_x) bfa_swap_3b(_x) + +#define bfa_os_wtole(_x) (_x) + +#else + +#define bfa_os_htons(_x) (_x) +#define bfa_os_htonl(_x) (_x) +#define bfa_os_hton3b(_x) (_x) +#define bfa_os_htonll(_x) (_x) +#define bfa_os_wtole(_x) bfa_os_swap32(_x) + +#endif + +#define bfa_os_ntohs(_x) bfa_os_htons(_x) +#define bfa_os_ntohl(_x) bfa_os_htonl(_x) +#define bfa_os_ntohll(_x) bfa_os_htonll(_x) +#define bfa_os_ntoh3b(_x) bfa_os_hton3b(_x) + +#define bfa_os_u32(__pa64) ((__pa64) >> 32) + +#define bfa_os_memset memset +#define bfa_os_memcpy memcpy +#define bfa_os_udelay udelay +#define bfa_os_vsprintf vsprintf + +#define bfa_os_assign(__t, __s) __t = __s + +#define bfa_os_addr_t char __iomem * +#define bfa_os_panic() + +#define bfa_os_reg_read(_raddr) bfa_os_wtole(readl(_raddr)) +#define bfa_os_reg_write(_raddr, _val) writel(bfa_os_wtole((_val)), (_raddr)) +#define bfa_os_mem_read(_raddr, _off) \ + bfa_os_ntohl(readl(((_raddr) + (_off)))) +#define bfa_os_mem_write(_raddr, _off, _val) \ + writel(bfa_os_htonl((_val)), ((_raddr) + (_off))) + +#define BFA_TRC_TS(_trcm) \ + ({ \ + struct timeval tv; \ + \ + do_gettimeofday(&tv); \ + (tv.tv_sec*1000000+tv.tv_usec); \ + }) + +struct bfa_log_mod_s; +void bfa_os_printf(struct bfa_log_mod_s *log_mod, u32 msg_id, + const char *fmt, ...); +#endif + +#define boolean_t int + +/** + * Current time stamp; filled in by the OS API (bfa_os_gettimeofday()) + */ +struct bfa_timeval_s { + u32 tv_sec; /* seconds */ + u32 tv_usec; /* microseconds */ +}; + +void bfa_os_gettimeofday(struct bfa_timeval_s *tv); + +static inline void
wwn2str(char *wwn_str, u64 wwn) +{ + union { + u64 wwn; + u8 byte[8]; + } w; + + w.wwn = wwn; + sprintf(wwn_str, "%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x", w.byte[0], + w.byte[1], w.byte[2], w.byte[3], w.byte[4], w.byte[5], + w.byte[6], w.byte[7]); +} + +static inline void +fcid2str(char *fcid_str, u32 fcid) +{ + union { + u32 fcid; + u8 byte[4]; + } f; + + f.fcid = fcid; + sprintf(fcid_str, "%02x:%02x:%02x", f.byte[1], f.byte[2], f.byte[3]); +} + +#endif /* __BFA_OS_INC_H__ */ diff --git a/drivers/scsi/bfa/bfa_port.c b/drivers/scsi/bfa/bfa_port.c new file mode 100644 index 000000000000..cab19028361a --- /dev/null +++ b/drivers/scsi/bfa/bfa_port.c @@ -0,0 +1,460 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +BFA_TRC_FILE(CNA, PORT); + +#define bfa_ioc_portid(__ioc) ((__ioc)->port_id) +#define bfa_lpuid(__arg) bfa_ioc_portid(&(__arg)->ioc) + +static void +bfa_port_stats_swap(struct bfa_port_s *port, union bfa_pport_stats_u *stats) +{ + u32 *dip = (u32 *) stats; + u32 t0, t1; + int i; + + for (i = 0; i < sizeof(union bfa_pport_stats_u) / sizeof(u32); + i += 2) { + t0 = dip[i]; + t1 = dip[i + 1]; +#ifdef __BIGENDIAN + dip[i] = bfa_os_ntohl(t0); + dip[i + 1] = bfa_os_ntohl(t1); +#else + dip[i] = bfa_os_ntohl(t1); + dip[i + 1] = bfa_os_ntohl(t0); +#endif + } + + /** todo + * QoS stats are also swapped as 64-bit; that structure also + * has to use 64-bit counters + */ +} + +/** + * bfa_port_enable_isr() + * + * + * @param[in] port - Pointer to the port module + * status - Return status from the f/w + * + * @return void + */ +static void +bfa_port_enable_isr(struct bfa_port_s *port, bfa_status_t status) +{ + bfa_assert(0); +} + +/** + * bfa_port_disable_isr() + * + * + * @param[in] port - Pointer to the port module + * status - Return status from the f/w + * + * @return void + */ +static void +bfa_port_disable_isr(struct bfa_port_s *port, bfa_status_t status) +{ + bfa_assert(0); +} + +/** + * bfa_port_get_stats_isr() + * + * + * @param[in] port - Pointer to the Port module + * status - Return status from the f/w + * + * @return void + */ +static void +bfa_port_get_stats_isr(struct bfa_port_s *port, bfa_status_t status) +{ + port->stats_status = status; + port->stats_busy = BFA_FALSE; + + if (status == BFA_STATUS_OK) { + memcpy(port->stats, port->stats_dma.kva, + sizeof(union bfa_pport_stats_u)); + bfa_port_stats_swap(port, port->stats); + } + + if (port->stats_cbfn) { + port->stats_cbfn(port->stats_cbarg, status); + port->stats_cbfn = NULL; + } +} + +/** + * bfa_port_clear_stats_isr() + * + * + * @param[in] port - Pointer to the Port module + * status - Return status from the f/w + * + * @return void + */ +static void +bfa_port_clear_stats_isr(struct bfa_port_s *port, bfa_status_t status) +{ + port->stats_status = status; + port->stats_busy = BFA_FALSE; + + if (port->stats_cbfn) { + port->stats_cbfn(port->stats_cbarg, status); + port->stats_cbfn = NULL; + } +} + +/** + * bfa_port_isr() + * + * + * @param[in] Pointer to the Port module data structure. + * + * @return void + */ +static void +bfa_port_isr(void *cbarg, struct bfi_mbmsg_s *m) +{ + struct bfa_port_s *port = (struct bfa_port_s *)cbarg; + union bfi_port_i2h_msg_u *i2hmsg; + + i2hmsg = (union bfi_port_i2h_msg_u *)m; + bfa_trc(port, m->mh.msg_id); + + switch (m->mh.msg_id) { + case BFI_PORT_I2H_ENABLE_RSP: + if (port->endis_pending == BFA_FALSE) + break; + bfa_port_enable_isr(port, i2hmsg->enable_rsp.status); + break; + + case BFI_PORT_I2H_DISABLE_RSP: + if (port->endis_pending == BFA_FALSE) + break; + bfa_port_disable_isr(port, i2hmsg->disable_rsp.status); + break; + + case BFI_PORT_I2H_GET_STATS_RSP: + /* + * Stats busy flag is still set?
(may be cmd timed out) + */ + if (port->stats_busy == BFA_FALSE) + break; + bfa_port_get_stats_isr(port, i2hmsg->getstats_rsp.status); + break; + + case BFI_PORT_I2H_CLEAR_STATS_RSP: + if (port->stats_busy == BFA_FALSE) + break; + bfa_port_clear_stats_isr(port, i2hmsg->clearstats_rsp.status); + break; + + default: + bfa_assert(0); + } +} + +/** + * bfa_port_meminfo() + * + * + * @param[in] void + * + * @return Size of DMA region + */ +u32 +bfa_port_meminfo(void) +{ + return BFA_ROUNDUP(sizeof(union bfa_pport_stats_u), BFA_DMA_ALIGN_SZ); +} + +/** + * bfa_port_mem_claim() + * + * + * @param[in] port Port module pointer + * dma_kva Kernel Virtual Address of Port DMA Memory + * dma_pa Physical Address of Port DMA Memory + * + * @return void + */ +void +bfa_port_mem_claim(struct bfa_port_s *port, u8 *dma_kva, u64 dma_pa) +{ + port->stats_dma.kva = dma_kva; + port->stats_dma.pa = dma_pa; +} + +/** + * bfa_port_enable() + * + * Send the Port enable request to the f/w + * + * @param[in] Pointer to the Port module data structure. + * + * @return Status + */ +bfa_status_t +bfa_port_enable(struct bfa_port_s *port, bfa_port_endis_cbfn_t cbfn, + void *cbarg) +{ + struct bfi_port_generic_req_s *m; + + /** todo Not implemented */ + bfa_assert(0); + + if (!bfa_ioc_is_operational(port->ioc)) { + bfa_trc(port, BFA_STATUS_IOC_FAILURE); + return BFA_STATUS_IOC_FAILURE; + } + + if (port->endis_pending) { + bfa_trc(port, BFA_STATUS_DEVBUSY); + return BFA_STATUS_DEVBUSY; + } + + m = (struct bfi_port_generic_req_s *)port->endis_mb.msg; + + port->msgtag++; + port->endis_cbfn = cbfn; + port->endis_cbarg = cbarg; + port->endis_pending = BFA_TRUE; + + bfi_h2i_set(m->mh, BFI_MC_PORT, BFI_PORT_H2I_ENABLE_REQ, + bfa_ioc_portid(port->ioc)); + bfa_ioc_mbox_queue(port->ioc, &port->endis_mb); + + return BFA_STATUS_OK; +} + +/** + * bfa_port_disable() + * + * Send the Port disable request to the f/w + * + * @param[in] Pointer to the Port module data structure. + * + * @return Status + */ +bfa_status_t +bfa_port_disable(struct bfa_port_s *port, bfa_port_endis_cbfn_t cbfn, + void *cbarg) +{ + struct bfi_port_generic_req_s *m; + + /** todo Not implemented */ + bfa_assert(0); + + if (!bfa_ioc_is_operational(port->ioc)) { + bfa_trc(port, BFA_STATUS_IOC_FAILURE); + return BFA_STATUS_IOC_FAILURE; + } + + if (port->endis_pending) { + bfa_trc(port, BFA_STATUS_DEVBUSY); + return BFA_STATUS_DEVBUSY; + } + + m = (struct bfi_port_generic_req_s *)port->endis_mb.msg; + + port->msgtag++; + port->endis_cbfn = cbfn; + port->endis_cbarg = cbarg; + port->endis_pending = BFA_TRUE; + + bfi_h2i_set(m->mh, BFI_MC_PORT, BFI_PORT_H2I_DISABLE_REQ, + bfa_ioc_portid(port->ioc)); + bfa_ioc_mbox_queue(port->ioc, &port->endis_mb); + + return BFA_STATUS_OK; +} + +/** + * bfa_port_get_stats() + * + * Send the request to the f/w to fetch Port statistics. + * + * @param[in] Pointer to the Port module data structure. 
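+ * stats - Caller-supplied buffer for the returned statistics + * cbfn - Completion callback invoked when the firmware responds + * cbarg - Argument passed back to cbfn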
+ * + * @return Status + */ +bfa_status_t +bfa_port_get_stats(struct bfa_port_s *port, union bfa_pport_stats_u *stats, + bfa_port_stats_cbfn_t cbfn, void *cbarg) +{ + struct bfi_port_get_stats_req_s *m; + + if (!bfa_ioc_is_operational(port->ioc)) { + bfa_trc(port, BFA_STATUS_IOC_FAILURE); + return BFA_STATUS_IOC_FAILURE; + } + + if (port->stats_busy) { + bfa_trc(port, BFA_STATUS_DEVBUSY); + return BFA_STATUS_DEVBUSY; + } + + m = (struct bfi_port_get_stats_req_s *)port->stats_mb.msg; + + port->stats = stats; + port->stats_cbfn = cbfn; + port->stats_cbarg = cbarg; + port->stats_busy = BFA_TRUE; + bfa_dma_be_addr_set(m->dma_addr, port->stats_dma.pa); + + bfi_h2i_set(m->mh, BFI_MC_PORT, BFI_PORT_H2I_GET_STATS_REQ, + bfa_ioc_portid(port->ioc)); + bfa_ioc_mbox_queue(port->ioc, &port->stats_mb); + + return BFA_STATUS_OK; +} + +/** + * bfa_port_clear_stats() + * + * + * @param[in] Pointer to the Port module data structure. + * + * @return Status + */ +bfa_status_t +bfa_port_clear_stats(struct bfa_port_s *port, bfa_port_stats_cbfn_t cbfn, + void *cbarg) +{ + struct bfi_port_generic_req_s *m; + + if (!bfa_ioc_is_operational(port->ioc)) { + bfa_trc(port, BFA_STATUS_IOC_FAILURE); + return BFA_STATUS_IOC_FAILURE; + } + + if (port->stats_busy) { + bfa_trc(port, BFA_STATUS_DEVBUSY); + return BFA_STATUS_DEVBUSY; + } + + m = (struct bfi_port_generic_req_s *)port->stats_mb.msg; + + port->stats_cbfn = cbfn; + port->stats_cbarg = cbarg; + port->stats_busy = BFA_TRUE; + + bfi_h2i_set(m->mh, BFI_MC_PORT, BFI_PORT_H2I_CLEAR_STATS_REQ, + bfa_ioc_portid(port->ioc)); + bfa_ioc_mbox_queue(port->ioc, &port->stats_mb); + + return BFA_STATUS_OK; +} + +/** + * bfa_port_hbfail() + * + * + * @param[in] Pointer to the Port module data structure. + * + * @return void + */ +void +bfa_port_hbfail(void *arg) +{ + struct bfa_port_s *port = (struct bfa_port_s *)arg; + + /* + * Fail any pending get_stats/clear_stats requests + */ + if (port->stats_busy) { + if (port->stats_cbfn) + port->stats_cbfn(port->dev, BFA_STATUS_FAILED); + port->stats_cbfn = NULL; + port->stats_busy = BFA_FALSE; + } + + /* + * Fail any enable/disable request that is pending + */ + if (port->endis_pending) { + if (port->endis_cbfn) + port->endis_cbfn(port->dev, BFA_STATUS_FAILED); + port->endis_cbfn = NULL; + port->endis_pending = BFA_FALSE; + } +} + +/** + * bfa_port_attach() + * + * + * @param[in] port - Pointer to the Port module data structure + * ioc - Pointer to the ioc module data structure + * dev - Pointer to the device driver module data structure + * The device driver specific mbox ISR functions have + * this pointer as one of the parameters.
+ * trcmod - Pointer to the trace module + * logmod - Pointer to the log module + * + * @return void + */ +void +bfa_port_attach(struct bfa_port_s *port, struct bfa_ioc_s *ioc, void *dev, + struct bfa_trc_mod_s *trcmod, struct bfa_log_mod_s *logmod) +{ + bfa_assert(port); + + port->dev = dev; + port->ioc = ioc; + port->trcmod = trcmod; + port->logmod = logmod; + + port->stats_busy = port->endis_pending = BFA_FALSE; + port->stats_cbfn = port->endis_cbfn = NULL; + + bfa_ioc_mbox_regisr(port->ioc, BFI_MC_PORT, bfa_port_isr, port); + bfa_ioc_hbfail_init(&port->hbfail, bfa_port_hbfail, port); + bfa_ioc_hbfail_register(port->ioc, &port->hbfail); + + bfa_trc(port, 0); +} + +/** + * bfa_port_detach() + * + * + * @param[in] port - Pointer to the Port module data structure + * + * @return void + */ +void +bfa_port_detach(struct bfa_port_s *port) +{ + bfa_trc(port, 0); +} diff --git a/drivers/scsi/bfa/bfa_port_priv.h b/drivers/scsi/bfa/bfa_port_priv.h new file mode 100644 index 000000000000..4b97e2759908 --- /dev/null +++ b/drivers/scsi/bfa/bfa_port_priv.h @@ -0,0 +1,90 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_PORT_PRIV_H__ +#define __BFA_PORT_PRIV_H__ + +#include +#include +#include "bfa_intr_priv.h" + +/** + * BFA physical port data structure + */ +struct bfa_pport_s { + struct bfa_s *bfa; /* parent BFA instance */ + bfa_sm_t sm; /* port state machine */ + wwn_t nwwn; /* node wwn of physical port */ + wwn_t pwwn; /* port wwn of physical port */ + enum bfa_pport_speed speed_sup; + /* supported speeds */ + enum bfa_pport_speed speed; /* current speed */ + enum bfa_pport_topology topology; /* current topology */ + u8 myalpa; /* my ALPA in LOOP topology */ + u8 rsvd[3]; + struct bfa_pport_cfg_s cfg; /* current port configuration */ + struct bfa_qos_attr_s qos_attr; /* QoS Attributes */ + struct bfa_qos_vc_attr_s qos_vc_attr; /* VC info from ELP */ + struct bfa_reqq_wait_s reqq_wait; + /* to wait for room in reqq */ + struct bfa_reqq_wait_s svcreq_wait; + /* to wait for room in reqq */ + struct bfa_reqq_wait_s stats_reqq_wait; + /* to wait for room in reqq (stats) */ + void *event_cbarg; + void (*event_cbfn) (void *cbarg, + bfa_pport_event_t event); + union { + union bfi_pport_i2h_msg_u i2hmsg; + } event_arg; + void *bfad; /* BFA driver handle */ + struct bfa_cb_qe_s hcb_qe; /* BFA callback queue elem */ + enum bfa_pport_linkstate hcb_event; + /* link event for callback */ + u32 msgtag; /* firmware msg tag for reply */ + u8 *stats_kva; + u64 stats_pa; + union bfa_pport_stats_u *stats; /* pport stats */ + u32 mypid : 24; + u32 rsvd_b : 8; + struct bfa_timer_s timer; /* timer */ + union bfa_pport_stats_u *stats_ret; + /* driver stats location */ + bfa_status_t stats_status; + /* stats/statsclr status */ + bfa_boolean_t stats_busy; + /* outstanding stats/statsclr */ + bfa_boolean_t stats_qfull; + bfa_boolean_t diag_busy; + /* diag busy status */ + bfa_boolean_t beacon; + /* port beacon status */ + bfa_boolean_t link_e2e_beacon; + /* link beacon status */ +
bfa_cb_pport_t stats_cbfn; + /* driver callback function */ + void *stats_cbarg; + /* user callback arg */ +}; + +#define BFA_PORT_MOD(__bfa) (&(__bfa)->modules.pport) + +/* + * public functions + */ +void bfa_pport_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); +#endif /* __BFA_PORT_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_priv.h b/drivers/scsi/bfa/bfa_priv.h new file mode 100644 index 000000000000..0747a6b26f7b --- /dev/null +++ b/drivers/scsi/bfa/bfa_priv.h @@ -0,0 +1,113 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_PRIV_H__ +#define __BFA_PRIV_H__ + +#include "bfa_iocfc.h" +#include "bfa_intr_priv.h" +#include "bfa_trcmod_priv.h" +#include "bfa_modules_priv.h" +#include "bfa_fwimg_priv.h" +#include +#include + +/** + * Macro to define a new BFA module + */ +#define BFA_MODULE(__mod) \ + static void bfa_ ## __mod ## _meminfo( \ + struct bfa_iocfc_cfg_s *cfg, u32 *ndm_len, \ + u32 *dm_len); \ + static void bfa_ ## __mod ## _attach(struct bfa_s *bfa, \ + void *bfad, struct bfa_iocfc_cfg_s *cfg, \ + struct bfa_meminfo_s *meminfo, \ + struct bfa_pcidev_s *pcidev); \ + static void bfa_ ## __mod ## _initdone(struct bfa_s *bfa); \ + static void bfa_ ## __mod ## _detach(struct bfa_s *bfa); \ + static void bfa_ ## __mod ## _start(struct bfa_s *bfa); \ + static void bfa_ ## __mod ## _stop(struct bfa_s *bfa); \ + static void bfa_ ## __mod ## _iocdisable(struct bfa_s *bfa); \ + \ + extern struct bfa_module_s hal_mod_ ## __mod; \ + struct bfa_module_s hal_mod_ ## __mod = { \ + bfa_ ## __mod ## _meminfo, \ + bfa_ ## __mod ## _attach, \ + bfa_ ## __mod ## _initdone, \ + bfa_ ## __mod ## _detach, \ + bfa_ ## __mod ## _start, \ + bfa_ ## __mod ## _stop, \ + bfa_ ## __mod ## _iocdisable, \ + } + +#define BFA_CACHELINE_SZ (256) + +/** + * Structure used to interact between different BFA sub modules + * + * Each sub module needs to implement only the entry points relevant to it (and + * can leave entry points as NULL) + */ +struct bfa_module_s { + void (*meminfo) (struct bfa_iocfc_cfg_s *cfg, u32 *km_len, + u32 *dm_len); + void (*attach) (struct bfa_s *bfa, void *bfad, + struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, + struct bfa_pcidev_s *pcidev); + void (*initdone) (struct bfa_s *bfa); + void (*detach) (struct bfa_s *bfa); + void (*start) (struct bfa_s *bfa); + void (*stop) (struct bfa_s *bfa); + void (*iocdisable) (struct bfa_s *bfa); +}; + +extern struct bfa_module_s *hal_mods[]; + +struct bfa_s { + void *bfad; /* BFA driver instance */ + struct bfa_aen_s *aen; /* AEN module */ + struct bfa_plog_s *plog; /* portlog buffer */ + struct bfa_log_mod_s *logm; /* driver logging module */ + struct bfa_trc_mod_s *trcmod; /* driver tracing */ + struct bfa_ioc_s ioc; /* IOC module */ + struct bfa_iocfc_s iocfc; /* IOCFC module */ + struct bfa_timer_mod_s timer_mod; /* timer module */ + struct bfa_modules_s modules; /* BFA modules */ + struct list_head comp_q; /* pending completions */ +
bfa_boolean_t rme_process; /* RME processing enabled */ + struct list_head reqq_waitq[BFI_IOC_MAX_CQS]; + bfa_boolean_t fcs; /* FCS is attached to BFA */ + struct bfa_msix_s msix; +}; + +extern bfa_isr_func_t bfa_isrs[BFI_MC_MAX]; +extern bfa_ioc_mbox_mcfunc_t bfa_mbox_isrs[]; +extern bfa_boolean_t bfa_auto_recover; +extern struct bfa_module_s hal_mod_flash; +extern struct bfa_module_s hal_mod_fcdiag; +extern struct bfa_module_s hal_mod_sgpg; +extern struct bfa_module_s hal_mod_pport; +extern struct bfa_module_s hal_mod_fcxp; +extern struct bfa_module_s hal_mod_lps; +extern struct bfa_module_s hal_mod_uf; +extern struct bfa_module_s hal_mod_rport; +extern struct bfa_module_s hal_mod_fcpim; +extern struct bfa_module_s hal_mod_pbind; + +#endif /* __BFA_PRIV_H__ */ + diff --git a/drivers/scsi/bfa/bfa_rport.c b/drivers/scsi/bfa/bfa_rport.c new file mode 100644 index 000000000000..16da77a8db28 --- /dev/null +++ b/drivers/scsi/bfa/bfa_rport.c @@ -0,0 +1,911 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include +#include +#include +#include "bfa_intr_priv.h" + +BFA_TRC_FILE(HAL, RPORT); +BFA_MODULE(rport); + +#define bfa_rport_offline_cb(__rp) do { \ + if ((__rp)->bfa->fcs) \ + bfa_cb_rport_offline((__rp)->rport_drv); \ + else { \ + bfa_cb_queue((__rp)->bfa, &(__rp)->hcb_qe, \ + __bfa_cb_rport_offline, (__rp)); \ + } \ +} while (0) + +#define bfa_rport_online_cb(__rp) do { \ + if ((__rp)->bfa->fcs) \ + bfa_cb_rport_online((__rp)->rport_drv); \ + else { \ + bfa_cb_queue((__rp)->bfa, &(__rp)->hcb_qe, \ + __bfa_cb_rport_online, (__rp)); \ + } \ +} while (0) + +/* + * forward declarations + */ +static struct bfa_rport_s *bfa_rport_alloc(struct bfa_rport_mod_s *rp_mod); +static void bfa_rport_free(struct bfa_rport_s *rport); +static bfa_boolean_t bfa_rport_send_fwcreate(struct bfa_rport_s *rp); +static bfa_boolean_t bfa_rport_send_fwdelete(struct bfa_rport_s *rp); +static bfa_boolean_t bfa_rport_send_fwspeed(struct bfa_rport_s *rp); +static void __bfa_cb_rport_online(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_rport_offline(void *cbarg, bfa_boolean_t complete); + +/** + * bfa_rport_sm BFA rport state machine + */ + + +enum bfa_rport_event { + BFA_RPORT_SM_CREATE = 1, /* rport create event */ + BFA_RPORT_SM_DELETE = 2, /* deleting an existing rport */ + BFA_RPORT_SM_ONLINE = 3, /* rport is online */ + BFA_RPORT_SM_OFFLINE = 4, /* rport is offline */ + BFA_RPORT_SM_FWRSP = 5, /* firmware response */ + BFA_RPORT_SM_HWFAIL = 6, /* IOC h/w failure */ + BFA_RPORT_SM_QOS_SCN = 7, /* QoS SCN from firmware */ + BFA_RPORT_SM_SET_SPEED = 8, /* Set Rport Speed */ + BFA_RPORT_SM_QRESUME = 9, /* space in request queue */ +}; + +static void bfa_rport_sm_uninit(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_created(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_fwcreate(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void
bfa_rport_sm_online(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_fwdelete(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_offline(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_deleting(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_offline_pending(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_delete_pending(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_iocdisable(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_fwcreate_qfull(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_fwdelete_qfull(struct bfa_rport_s *rp, + enum bfa_rport_event event); +static void bfa_rport_sm_deleting_qfull(struct bfa_rport_s *rp, + enum bfa_rport_event event); + +/** + * Beginning state, only the create event is expected. + */ +static void +bfa_rport_sm_uninit(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_CREATE: + bfa_stats(rp, sm_un_cr); + bfa_sm_set_state(rp, bfa_rport_sm_created); + break; + + default: + bfa_stats(rp, sm_un_unexp); + bfa_assert(0); + } +} + +static void +bfa_rport_sm_created(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_ONLINE: + bfa_stats(rp, sm_cr_on); + if (bfa_rport_send_fwcreate(rp)) + bfa_sm_set_state(rp, bfa_rport_sm_fwcreate); + else + bfa_sm_set_state(rp, bfa_rport_sm_fwcreate_qfull); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_cr_del); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_rport_free(rp); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_cr_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + break; + + default: + bfa_stats(rp, sm_cr_unexp); + bfa_assert(0); + } +} + +/** + * Waiting for rport create response from firmware. + */ +static void +bfa_rport_sm_fwcreate(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_FWRSP: + bfa_stats(rp, sm_fwc_rsp); + bfa_sm_set_state(rp, bfa_rport_sm_online); + bfa_rport_online_cb(rp); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_fwc_del); + bfa_sm_set_state(rp, bfa_rport_sm_delete_pending); + break; + + case BFA_RPORT_SM_OFFLINE: + bfa_stats(rp, sm_fwc_off); + bfa_sm_set_state(rp, bfa_rport_sm_offline_pending); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_fwc_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + break; + + default: + bfa_stats(rp, sm_fwc_unexp); + bfa_assert(0); + } +} + +/** + * Request queue is full, awaiting queue resume to send create request.
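+ * + * Same as bfa_rport_sm_fwcreate except that the request is parked on + * the request queue wait list; a delete, offline or IOC failure here + * must cancel the wait entry via bfa_reqq_wcancel().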
+ */ +static void +bfa_rport_sm_fwcreate_qfull(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_QRESUME: + bfa_sm_set_state(rp, bfa_rport_sm_fwcreate); + bfa_rport_send_fwcreate(rp); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_fwc_del); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_reqq_wcancel(&rp->reqq_wait); + bfa_rport_free(rp); + break; + + case BFA_RPORT_SM_OFFLINE: + bfa_stats(rp, sm_fwc_off); + bfa_sm_set_state(rp, bfa_rport_sm_offline); + bfa_reqq_wcancel(&rp->reqq_wait); + bfa_rport_offline_cb(rp); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_fwc_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + bfa_reqq_wcancel(&rp->reqq_wait); + break; + + default: + bfa_stats(rp, sm_fwc_unexp); + bfa_assert(0); + } +} + +/** + * Online state - normal parking state. + */ +static void +bfa_rport_sm_online(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + struct bfi_rport_qos_scn_s *qos_scn; + + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_OFFLINE: + bfa_stats(rp, sm_on_off); + if (bfa_rport_send_fwdelete(rp)) + bfa_sm_set_state(rp, bfa_rport_sm_fwdelete); + else + bfa_sm_set_state(rp, bfa_rport_sm_fwdelete_qfull); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_on_del); + if (bfa_rport_send_fwdelete(rp)) + bfa_sm_set_state(rp, bfa_rport_sm_deleting); + else + bfa_sm_set_state(rp, bfa_rport_sm_deleting_qfull); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_on_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + break; + + case BFA_RPORT_SM_SET_SPEED: + bfa_rport_send_fwspeed(rp); + break; + + case BFA_RPORT_SM_QOS_SCN: + qos_scn = (struct bfi_rport_qos_scn_s *) rp->event_arg.fw_msg; + rp->qos_attr = qos_scn->new_qos_attr; + bfa_trc(rp->bfa, qos_scn->old_qos_attr.qos_flow_id); + bfa_trc(rp->bfa, qos_scn->new_qos_attr.qos_flow_id); + bfa_trc(rp->bfa, qos_scn->old_qos_attr.qos_priority); + bfa_trc(rp->bfa, qos_scn->new_qos_attr.qos_priority); + + qos_scn->old_qos_attr.qos_flow_id = + bfa_os_ntohl(qos_scn->old_qos_attr.qos_flow_id); + qos_scn->new_qos_attr.qos_flow_id = + bfa_os_ntohl(qos_scn->new_qos_attr.qos_flow_id); + qos_scn->old_qos_attr.qos_priority = + bfa_os_ntohl(qos_scn->old_qos_attr.qos_priority); + qos_scn->new_qos_attr.qos_priority = + bfa_os_ntohl(qos_scn->new_qos_attr.qos_priority); + + if (qos_scn->old_qos_attr.qos_flow_id != + qos_scn->new_qos_attr.qos_flow_id) + bfa_cb_rport_qos_scn_flowid(rp->rport_drv, + qos_scn->old_qos_attr, + qos_scn->new_qos_attr); + if (qos_scn->old_qos_attr.qos_priority != + qos_scn->new_qos_attr.qos_priority) + bfa_cb_rport_qos_scn_prio(rp->rport_drv, + qos_scn->old_qos_attr, + qos_scn->new_qos_attr); + break; + + default: + bfa_stats(rp, sm_on_unexp); + bfa_assert(0); + } +} + +/** + * Firmware rport is being deleted - awaiting f/w response. 
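+ * + * A delete arriving in this state only moves the state machine to + * bfa_rport_sm_deleting; the fwdelete request already in flight is + * reused, so no second request is queued.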
+ */ +static void +bfa_rport_sm_fwdelete(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_FWRSP: + bfa_stats(rp, sm_fwd_rsp); + bfa_sm_set_state(rp, bfa_rport_sm_offline); + bfa_rport_offline_cb(rp); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_fwd_del); + bfa_sm_set_state(rp, bfa_rport_sm_deleting); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_fwd_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + bfa_rport_offline_cb(rp); + break; + + default: + bfa_stats(rp, sm_fwd_unexp); + bfa_assert(0); + } +} + +static void +bfa_rport_sm_fwdelete_qfull(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_QRESUME: + bfa_sm_set_state(rp, bfa_rport_sm_fwdelete); + bfa_rport_send_fwdelete(rp); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_fwd_del); + bfa_sm_set_state(rp, bfa_rport_sm_deleting_qfull); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_fwd_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + bfa_reqq_wcancel(&rp->reqq_wait); + bfa_rport_offline_cb(rp); + break; + + default: + bfa_stats(rp, sm_fwd_unexp); + bfa_assert(0); + } +} + +/** + * Offline state. + */ +static void +bfa_rport_sm_offline(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_off_del); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_rport_free(rp); + break; + + case BFA_RPORT_SM_ONLINE: + bfa_stats(rp, sm_off_on); + if (bfa_rport_send_fwcreate(rp)) + bfa_sm_set_state(rp, bfa_rport_sm_fwcreate); + else + bfa_sm_set_state(rp, bfa_rport_sm_fwcreate_qfull); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_off_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + break; + + default: + bfa_stats(rp, sm_off_unexp); + bfa_assert(0); + } +} + +/** + * Rport is deleted, waiting for firmware response to delete. + */ +static void +bfa_rport_sm_deleting(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_FWRSP: + bfa_stats(rp, sm_del_fwrsp); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_rport_free(rp); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_del_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_rport_free(rp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_rport_sm_deleting_qfull(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_QRESUME: + bfa_stats(rp, sm_del_fwrsp); + bfa_sm_set_state(rp, bfa_rport_sm_deleting); + bfa_rport_send_fwdelete(rp); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_del_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_reqq_wcancel(&rp->reqq_wait); + bfa_rport_free(rp); + break; + + default: + bfa_assert(0); + } +} + +/** + * Waiting for rport create response from firmware. A delete is pending. 
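+ * + * When the create response arrives, bfa_rport_send_fwdelete() is + * issued right away; if the request queue is full, the state machine + * falls back to the deleting_qfull variant.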
+ */ +static void +bfa_rport_sm_delete_pending(struct bfa_rport_s *rp, + enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_FWRSP: + bfa_stats(rp, sm_delp_fwrsp); + if (bfa_rport_send_fwdelete(rp)) + bfa_sm_set_state(rp, bfa_rport_sm_deleting); + else + bfa_sm_set_state(rp, bfa_rport_sm_deleting_qfull); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_delp_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_rport_free(rp); + break; + + default: + bfa_stats(rp, sm_delp_unexp); + bfa_assert(0); + } +} + +/** + * Waiting for rport create response from firmware. Rport offline is pending. + */ +static void +bfa_rport_sm_offline_pending(struct bfa_rport_s *rp, + enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_FWRSP: + bfa_stats(rp, sm_offp_fwrsp); + if (bfa_rport_send_fwdelete(rp)) + bfa_sm_set_state(rp, bfa_rport_sm_fwdelete); + else + bfa_sm_set_state(rp, bfa_rport_sm_fwdelete_qfull); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_offp_del); + bfa_sm_set_state(rp, bfa_rport_sm_delete_pending); + break; + + case BFA_RPORT_SM_HWFAIL: + bfa_stats(rp, sm_offp_hwf); + bfa_sm_set_state(rp, bfa_rport_sm_iocdisable); + break; + + default: + bfa_stats(rp, sm_offp_unexp); + bfa_assert(0); + } +} + +/** + * IOC h/w failed. + */ +static void +bfa_rport_sm_iocdisable(struct bfa_rport_s *rp, enum bfa_rport_event event) +{ + bfa_trc(rp->bfa, rp->rport_tag); + bfa_trc(rp->bfa, event); + + switch (event) { + case BFA_RPORT_SM_OFFLINE: + bfa_stats(rp, sm_iocd_off); + bfa_rport_offline_cb(rp); + break; + + case BFA_RPORT_SM_DELETE: + bfa_stats(rp, sm_iocd_del); + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + bfa_rport_free(rp); + break; + + case BFA_RPORT_SM_ONLINE: + bfa_stats(rp, sm_iocd_on); + if (bfa_rport_send_fwcreate(rp)) + bfa_sm_set_state(rp, bfa_rport_sm_fwcreate); + else + bfa_sm_set_state(rp, bfa_rport_sm_fwcreate_qfull); + break; + + case BFA_RPORT_SM_HWFAIL: + break; + + default: + bfa_stats(rp, sm_iocd_unexp); + bfa_assert(0); + } +} + + + +/** + * bfa_rport_private BFA rport private functions + */ + +static void +__bfa_cb_rport_online(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_rport_s *rp = cbarg; + + if (complete) + bfa_cb_rport_online(rp->rport_drv); +} + +static void +__bfa_cb_rport_offline(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_rport_s *rp = cbarg; + + if (complete) + bfa_cb_rport_offline(rp->rport_drv); +} + +static void +bfa_rport_qresume(void *cbarg) +{ + struct bfa_rport_s *rp = cbarg; + + bfa_sm_send_event(rp, BFA_RPORT_SM_QRESUME); +} + +static void +bfa_rport_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *km_len, + u32 *dm_len) +{ + if (cfg->fwcfg.num_rports < BFA_RPORT_MIN) + cfg->fwcfg.num_rports = BFA_RPORT_MIN; + + *km_len += cfg->fwcfg.num_rports * sizeof(struct bfa_rport_s); +} + +static void +bfa_rport_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev) +{ + struct bfa_rport_mod_s *mod = BFA_RPORT_MOD(bfa); + struct bfa_rport_s *rp; + u16 i; + + INIT_LIST_HEAD(&mod->rp_free_q); + INIT_LIST_HEAD(&mod->rp_active_q); + + rp = (struct bfa_rport_s *) bfa_meminfo_kva(meminfo); + mod->rps_list = rp; + mod->num_rports = cfg->fwcfg.num_rports; + + bfa_assert(mod->num_rports + && !(mod->num_rports & (mod->num_rports - 1))); + + for (i = 0; i < mod->num_rports; i++, rp++) { + bfa_os_memset(rp, 0, 
sizeof(struct bfa_rport_s)); + rp->bfa = bfa; + rp->rport_tag = i; + bfa_sm_set_state(rp, bfa_rport_sm_uninit); + + /** + * rport tag 0 is reserved (unused); keep it off the free list + */ + if (i) + list_add_tail(&rp->qe, &mod->rp_free_q); + + bfa_reqq_winit(&rp->reqq_wait, bfa_rport_qresume, rp); + } + + /** + * consume memory + */ + bfa_meminfo_kva(meminfo) = (u8 *) rp; +} + +static void +bfa_rport_initdone(struct bfa_s *bfa) +{ +} + +static void +bfa_rport_detach(struct bfa_s *bfa) +{ +} + +static void +bfa_rport_start(struct bfa_s *bfa) +{ +} + +static void +bfa_rport_stop(struct bfa_s *bfa) +{ +} + +static void +bfa_rport_iocdisable(struct bfa_s *bfa) +{ + struct bfa_rport_mod_s *mod = BFA_RPORT_MOD(bfa); + struct bfa_rport_s *rport; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &mod->rp_active_q) { + rport = (struct bfa_rport_s *) qe; + bfa_sm_send_event(rport, BFA_RPORT_SM_HWFAIL); + } +} + +static struct bfa_rport_s * +bfa_rport_alloc(struct bfa_rport_mod_s *mod) +{ + struct bfa_rport_s *rport; + + bfa_q_deq(&mod->rp_free_q, &rport); + if (rport) + list_add_tail(&rport->qe, &mod->rp_active_q); + + return (rport); +} + +static void +bfa_rport_free(struct bfa_rport_s *rport) +{ + struct bfa_rport_mod_s *mod = BFA_RPORT_MOD(rport->bfa); + + bfa_assert(bfa_q_is_on_q(&mod->rp_active_q, rport)); + list_del(&rport->qe); + list_add_tail(&rport->qe, &mod->rp_free_q); +} + +static bfa_boolean_t +bfa_rport_send_fwcreate(struct bfa_rport_s *rp) +{ + struct bfi_rport_create_req_s *m; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(rp->bfa, BFA_REQQ_RPORT); + if (!m) { + bfa_reqq_wait(rp->bfa, BFA_REQQ_RPORT, &rp->reqq_wait); + return BFA_FALSE; + } + + bfi_h2i_set(m->mh, BFI_MC_RPORT, BFI_RPORT_H2I_CREATE_REQ, + bfa_lpuid(rp->bfa)); + m->bfa_handle = rp->rport_tag; + m->max_frmsz = bfa_os_htons(rp->rport_info.max_frmsz); + m->pid = rp->rport_info.pid; + m->lp_tag = rp->rport_info.lp_tag; + m->local_pid = rp->rport_info.local_pid; + m->fc_class = rp->rport_info.fc_class; + m->vf_en = rp->rport_info.vf_en; + m->vf_id = rp->rport_info.vf_id; + m->cisc = rp->rport_info.cisc; + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(rp->bfa, BFA_REQQ_RPORT); + return BFA_TRUE; +} + +static bfa_boolean_t +bfa_rport_send_fwdelete(struct bfa_rport_s *rp) +{ + struct bfi_rport_delete_req_s *m; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(rp->bfa, BFA_REQQ_RPORT); + if (!m) { + bfa_reqq_wait(rp->bfa, BFA_REQQ_RPORT, &rp->reqq_wait); + return BFA_FALSE; + } + + bfi_h2i_set(m->mh, BFI_MC_RPORT, BFI_RPORT_H2I_DELETE_REQ, + bfa_lpuid(rp->bfa)); + m->fw_handle = rp->fw_handle; + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(rp->bfa, BFA_REQQ_RPORT); + return BFA_TRUE; +} + +static bfa_boolean_t +bfa_rport_send_fwspeed(struct bfa_rport_s *rp) +{ + struct bfa_rport_speed_req_s *m; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(rp->bfa, BFA_REQQ_RPORT); + if (!m) { + bfa_trc(rp->bfa, rp->rport_info.speed); + return BFA_FALSE; + } + + bfi_h2i_set(m->mh, BFI_MC_RPORT, BFI_RPORT_H2I_SET_SPEED_REQ, + bfa_lpuid(rp->bfa)); + m->fw_handle = rp->fw_handle; + m->speed = (u8)rp->rport_info.speed; + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(rp->bfa, BFA_REQQ_RPORT); + return BFA_TRUE; +} + + + +/** + * bfa_rport_public + */ + +/** + * Rport interrupt processing.
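+ * + * Resolves the owning rport from the bfa_handle carried in each + * message and feeds the matching event (FWRSP or QOS_SCN) into its + * state machine.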
+ */ +void +bfa_rport_isr(struct bfa_s *bfa, struct bfi_msg_s *m) +{ + union bfi_rport_i2h_msg_u msg; + struct bfa_rport_s *rp; + + bfa_trc(bfa, m->mhdr.msg_id); + + msg.msg = m; + + switch (m->mhdr.msg_id) { + case BFI_RPORT_I2H_CREATE_RSP: + rp = BFA_RPORT_FROM_TAG(bfa, msg.create_rsp->bfa_handle); + rp->fw_handle = msg.create_rsp->fw_handle; + rp->qos_attr = msg.create_rsp->qos_attr; + bfa_assert(msg.create_rsp->status == BFA_STATUS_OK); + bfa_sm_send_event(rp, BFA_RPORT_SM_FWRSP); + break; + + case BFI_RPORT_I2H_DELETE_RSP: + rp = BFA_RPORT_FROM_TAG(bfa, msg.delete_rsp->bfa_handle); + bfa_assert(msg.delete_rsp->status == BFA_STATUS_OK); + bfa_sm_send_event(rp, BFA_RPORT_SM_FWRSP); + break; + + case BFI_RPORT_I2H_QOS_SCN: + rp = BFA_RPORT_FROM_TAG(bfa, msg.qos_scn_evt->bfa_handle); + rp->event_arg.fw_msg = msg.qos_scn_evt; + bfa_sm_send_event(rp, BFA_RPORT_SM_QOS_SCN); + break; + + default: + bfa_trc(bfa, m->mhdr.msg_id); + bfa_assert(0); + } +} + + + +/** + * bfa_rport_api + */ + +struct bfa_rport_s * +bfa_rport_create(struct bfa_s *bfa, void *rport_drv) +{ + struct bfa_rport_s *rp; + + rp = bfa_rport_alloc(BFA_RPORT_MOD(bfa)); + + if (rp == NULL) + return (NULL); + + rp->bfa = bfa; + rp->rport_drv = rport_drv; + bfa_rport_clear_stats(rp); + + bfa_assert(bfa_sm_cmp_state(rp, bfa_rport_sm_uninit)); + bfa_sm_send_event(rp, BFA_RPORT_SM_CREATE); + + return (rp); +} + +void +bfa_rport_delete(struct bfa_rport_s *rport) +{ + bfa_sm_send_event(rport, BFA_RPORT_SM_DELETE); +} + +void +bfa_rport_online(struct bfa_rport_s *rport, struct bfa_rport_info_s *rport_info) +{ + bfa_assert(rport_info->max_frmsz != 0); + + /** + * Some JBODs are seen to be not setting PDU size correctly in PLOGI + * responses. Default to minimum size. + */ + if (rport_info->max_frmsz == 0) { + bfa_trc(rport->bfa, rport->rport_tag); + rport_info->max_frmsz = FC_MIN_PDUSZ; + } + + bfa_os_assign(rport->rport_info, *rport_info); + bfa_sm_send_event(rport, BFA_RPORT_SM_ONLINE); +} + +void +bfa_rport_offline(struct bfa_rport_s *rport) +{ + bfa_sm_send_event(rport, BFA_RPORT_SM_OFFLINE); +} + +void +bfa_rport_speed(struct bfa_rport_s *rport, enum bfa_pport_speed speed) +{ + bfa_assert(speed != 0); + bfa_assert(speed != BFA_PPORT_SPEED_AUTO); + + rport->rport_info.speed = speed; + bfa_sm_send_event(rport, BFA_RPORT_SM_SET_SPEED); +} + +void +bfa_rport_get_stats(struct bfa_rport_s *rport, + struct bfa_rport_hal_stats_s *stats) +{ + *stats = rport->stats; +} + +void +bfa_rport_get_qos_attr(struct bfa_rport_s *rport, + struct bfa_rport_qos_attr_s *qos_attr) +{ + qos_attr->qos_priority = bfa_os_ntohl(rport->qos_attr.qos_priority); + qos_attr->qos_flow_id = bfa_os_ntohl(rport->qos_attr.qos_flow_id); + +} + +void +bfa_rport_clear_stats(struct bfa_rport_s *rport) +{ + bfa_os_memset(&rport->stats, 0, sizeof(rport->stats)); +} + + diff --git a/drivers/scsi/bfa/bfa_rport_priv.h b/drivers/scsi/bfa/bfa_rport_priv.h new file mode 100644 index 000000000000..6490ce2e990d --- /dev/null +++ b/drivers/scsi/bfa/bfa_rport_priv.h @@ -0,0 +1,45 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_RPORT_PRIV_H__ +#define __BFA_RPORT_PRIV_H__ + +#include + +#define BFA_RPORT_MIN 4 + +struct bfa_rport_mod_s { + struct bfa_rport_s *rps_list; /* list of rports */ + struct list_head rp_free_q; /* free bfa_rports */ + struct list_head rp_active_q; /* active bfa_rports */ + u16 num_rports; /* number of rports */ +}; + +#define BFA_RPORT_MOD(__bfa) (&(__bfa)->modules.rport_mod) + +/** + * Convert rport tag to RPORT + */ +#define BFA_RPORT_FROM_TAG(__bfa, _tag) \ + (BFA_RPORT_MOD(__bfa)->rps_list + \ + ((_tag) & (BFA_RPORT_MOD(__bfa)->num_rports - 1))) + +/* + * external functions + */ +void bfa_rport_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); +#endif /* __BFA_RPORT_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_sgpg.c b/drivers/scsi/bfa/bfa_sgpg.c new file mode 100644 index 000000000000..279d8f9b8907 --- /dev/null +++ b/drivers/scsi/bfa/bfa_sgpg.c @@ -0,0 +1,231 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include + +BFA_TRC_FILE(HAL, SGPG); +BFA_MODULE(sgpg); + +/** + * bfa_sgpg_mod BFA SGPG Module + */ + +/** + * Compute and return memory needed by SGPG module.
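+ * Both kernel-virtual (km_len) and DMA-able (dm_len) requirements are + * returned; one entry beyond num_sgpgs is added to absorb alignment.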
+ */ +static void +bfa_sgpg_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *km_len, + u32 *dm_len) +{ + if (cfg->drvcfg.num_sgpgs < BFA_SGPG_MIN) + cfg->drvcfg.num_sgpgs = BFA_SGPG_MIN; + + *km_len += (cfg->drvcfg.num_sgpgs + 1) * sizeof(struct bfa_sgpg_s); + *dm_len += (cfg->drvcfg.num_sgpgs + 1) * sizeof(struct bfi_sgpg_s); +} + + +static void +bfa_sgpg_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *minfo, struct bfa_pcidev_s *pcidev) +{ + struct bfa_sgpg_mod_s *mod = BFA_SGPG_MOD(bfa); + int i; + struct bfa_sgpg_s *hsgpg; + struct bfi_sgpg_s *sgpg; + u64 align_len; + + union { + u64 pa; + union bfi_addr_u addr; + } sgpg_pa; + + INIT_LIST_HEAD(&mod->sgpg_q); + INIT_LIST_HEAD(&mod->sgpg_wait_q); + + bfa_trc(bfa, cfg->drvcfg.num_sgpgs); + + mod->num_sgpgs = cfg->drvcfg.num_sgpgs; + mod->sgpg_arr_pa = bfa_meminfo_dma_phys(minfo); + align_len = (BFA_SGPG_ROUNDUP(mod->sgpg_arr_pa) - mod->sgpg_arr_pa); + mod->sgpg_arr_pa += align_len; + mod->hsgpg_arr = (struct bfa_sgpg_s *) (bfa_meminfo_kva(minfo) + + align_len); + mod->sgpg_arr = (struct bfi_sgpg_s *) (bfa_meminfo_dma_virt(minfo) + + align_len); + + hsgpg = mod->hsgpg_arr; + sgpg = mod->sgpg_arr; + sgpg_pa.pa = mod->sgpg_arr_pa; + mod->free_sgpgs = mod->num_sgpgs; + + bfa_assert(!(sgpg_pa.pa & (sizeof(struct bfi_sgpg_s) - 1))); + + for (i = 0; i < mod->num_sgpgs; i++) { + bfa_os_memset(hsgpg, 0, sizeof(*hsgpg)); + bfa_os_memset(sgpg, 0, sizeof(*sgpg)); + + hsgpg->sgpg = sgpg; + hsgpg->sgpg_pa = sgpg_pa.addr; + list_add_tail(&hsgpg->qe, &mod->sgpg_q); + + hsgpg++; + sgpg++; + sgpg_pa.pa += sizeof(struct bfi_sgpg_s); + } + + bfa_meminfo_kva(minfo) = (u8 *) hsgpg; + bfa_meminfo_dma_virt(minfo) = (u8 *) sgpg; + bfa_meminfo_dma_phys(minfo) = sgpg_pa.pa; +} + +static void +bfa_sgpg_initdone(struct bfa_s *bfa) +{ +} + +static void +bfa_sgpg_detach(struct bfa_s *bfa) +{ +} + +static void +bfa_sgpg_start(struct bfa_s *bfa) +{ +} + +static void +bfa_sgpg_stop(struct bfa_s *bfa) +{ +} + +static void +bfa_sgpg_iocdisable(struct bfa_s *bfa) +{ +} + + + +/** + * bfa_sgpg_public BFA SGPG public functions + */ + +bfa_status_t +bfa_sgpg_malloc(struct bfa_s *bfa, struct list_head *sgpg_q, int nsgpgs) +{ + struct bfa_sgpg_mod_s *mod = BFA_SGPG_MOD(bfa); + struct bfa_sgpg_s *hsgpg; + int i; + + bfa_trc_fp(bfa, nsgpgs); + + if (mod->free_sgpgs < nsgpgs) + return BFA_STATUS_ENOMEM; + + for (i = 0; i < nsgpgs; i++) { + bfa_q_deq(&mod->sgpg_q, &hsgpg); + bfa_assert(hsgpg); + list_add_tail(&hsgpg->qe, sgpg_q); + } + + mod->free_sgpgs -= nsgpgs; + return BFA_STATUS_OK; +} + +void +bfa_sgpg_mfree(struct bfa_s *bfa, struct list_head *sgpg_q, int nsgpg) +{ + struct bfa_sgpg_mod_s *mod = BFA_SGPG_MOD(bfa); + struct bfa_sgpg_wqe_s *wqe; + + bfa_trc_fp(bfa, nsgpg); + + mod->free_sgpgs += nsgpg; + bfa_assert(mod->free_sgpgs <= mod->num_sgpgs); + + list_splice_tail_init(sgpg_q, &mod->sgpg_q); + + if (list_empty(&mod->sgpg_wait_q)) + return; + + /** + * satisfy as many waiting requests as possible + */ + do { + wqe = bfa_q_first(&mod->sgpg_wait_q); + if (mod->free_sgpgs < wqe->nsgpg) + nsgpg = mod->free_sgpgs; + else + nsgpg = wqe->nsgpg; + bfa_sgpg_malloc(bfa, &wqe->sgpg_q, nsgpg); + wqe->nsgpg -= nsgpg; + if (wqe->nsgpg == 0) { + list_del(&wqe->qe); + wqe->cbfn(wqe->cbarg); + } + } while (mod->free_sgpgs && !list_empty(&mod->sgpg_wait_q)); +} + +void +bfa_sgpg_wait(struct bfa_s *bfa, struct bfa_sgpg_wqe_s *wqe, int nsgpg) +{ + struct bfa_sgpg_mod_s *mod = BFA_SGPG_MOD(bfa); + + bfa_assert(nsgpg > 0); + bfa_assert(nsgpg > 
mod->free_sgpgs); + + wqe->nsgpg_total = wqe->nsgpg = nsgpg; + + /** + * allocate any free pages left to this request first + */ + if (mod->free_sgpgs) { + /** + * no one else is waiting for SGPG + */ + bfa_assert(list_empty(&mod->sgpg_wait_q)); + list_splice_tail_init(&mod->sgpg_q, &wqe->sgpg_q); + wqe->nsgpg -= mod->free_sgpgs; + mod->free_sgpgs = 0; + } + + list_add_tail(&wqe->qe, &mod->sgpg_wait_q); +} + +void +bfa_sgpg_wcancel(struct bfa_s *bfa, struct bfa_sgpg_wqe_s *wqe) +{ + struct bfa_sgpg_mod_s *mod = BFA_SGPG_MOD(bfa); + + bfa_assert(bfa_q_is_on_q(&mod->sgpg_wait_q, wqe)); + list_del(&wqe->qe); + + if (wqe->nsgpg_total != wqe->nsgpg) + bfa_sgpg_mfree(bfa, &wqe->sgpg_q, + wqe->nsgpg_total - wqe->nsgpg); +} + +void +bfa_sgpg_winit(struct bfa_sgpg_wqe_s *wqe, void (*cbfn) (void *cbarg), + void *cbarg) +{ + INIT_LIST_HEAD(&wqe->sgpg_q); + wqe->cbfn = cbfn; + wqe->cbarg = cbarg; +} + + diff --git a/drivers/scsi/bfa/bfa_sgpg_priv.h b/drivers/scsi/bfa/bfa_sgpg_priv.h new file mode 100644 index 000000000000..9c2a8cbe7522 --- /dev/null +++ b/drivers/scsi/bfa/bfa_sgpg_priv.h @@ -0,0 +1,79 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * hal_sgpg.h BFA SG page module + */ + +#ifndef __BFA_SGPG_PRIV_H__ +#define __BFA_SGPG_PRIV_H__ + +#include + +#define BFA_SGPG_MIN (16) + +/** + * Alignment macro for SG page allocation + */ +#define BFA_SGPG_ROUNDUP(_l) (((_l) + (sizeof(struct bfi_sgpg_s) - 1)) \ + & ~(sizeof(struct bfi_sgpg_s) - 1)) + +struct bfa_sgpg_wqe_s { + struct list_head qe; /* queue sg page element */ + int nsgpg; /* pages to be allocated */ + int nsgpg_total; /* total pages required */ + void (*cbfn) (void *cbarg); + /* callback function */ + void *cbarg; /* callback arg */ + struct list_head sgpg_q; /* queue of alloced sgpgs */ +}; + +struct bfa_sgpg_s { + struct list_head qe; /* queue sg page element */ + struct bfi_sgpg_s *sgpg; /* va of SG page */ + union bfi_addr_u sgpg_pa;/* pa of SG page */ +}; + +/** + * Given number of SG elements, BFA_SGPG_NPAGE() returns the number of + * SG pages required.
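+ * Each SG page holds BFI_SGPG_DATA_SGES entries; the macro always adds + * one extra page to cover a partial last page.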
+ */ +#define BFA_SGPG_NPAGE(_nsges) (((_nsges) / BFI_SGPG_DATA_SGES) + 1) + +struct bfa_sgpg_mod_s { + struct bfa_s *bfa; + int num_sgpgs; /* number of SG pages */ + int free_sgpgs; /* number of free SG pages */ + struct bfa_sgpg_s *hsgpg_arr; /* BFA SG page array */ + struct bfi_sgpg_s *sgpg_arr; /* actual SG page array */ + u64 sgpg_arr_pa; /* SG page array DMA addr */ + struct list_head sgpg_q; /* queue of free SG pages */ + struct list_head sgpg_wait_q; /* wait queue for SG pages */ +}; +#define BFA_SGPG_MOD(__bfa) (&(__bfa)->modules.sgpg_mod) + +bfa_status_t bfa_sgpg_malloc(struct bfa_s *bfa, struct list_head *sgpg_q, + int nsgpgs); +void bfa_sgpg_mfree(struct bfa_s *bfa, struct list_head *sgpg_q, + int nsgpgs); +void bfa_sgpg_winit(struct bfa_sgpg_wqe_s *wqe, + void (*cbfn) (void *cbarg), void *cbarg); +void bfa_sgpg_wait(struct bfa_s *bfa, struct bfa_sgpg_wqe_s *wqe, + int nsgpgs); +void bfa_sgpg_wcancel(struct bfa_s *bfa, struct bfa_sgpg_wqe_s *wqe); + +#endif /* __BFA_SGPG_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_sm.c b/drivers/scsi/bfa/bfa_sm.c new file mode 100644 index 000000000000..5420f4f45e58 --- /dev/null +++ b/drivers/scsi/bfa/bfa_sm.c @@ -0,0 +1,38 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfasm.c BFA State machine utility functions + */ + +#include + +/** + * cs_sm_api + */ + +int +bfa_sm_to_state(struct bfa_sm_table_s *smt, bfa_sm_t sm) +{ + int i = 0; + + while (smt[i].sm && smt[i].sm != sm) + i++; + return smt[i].state; +} + + diff --git a/drivers/scsi/bfa/bfa_timer.c b/drivers/scsi/bfa/bfa_timer.c new file mode 100644 index 000000000000..cb76481f5cb1 --- /dev/null +++ b/drivers/scsi/bfa/bfa_timer.c @@ -0,0 +1,90 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include + +void +bfa_timer_init(struct bfa_timer_mod_s *mod) +{ + INIT_LIST_HEAD(&mod->timer_q); +} + +void +bfa_timer_beat(struct bfa_timer_mod_s *mod) +{ + struct list_head *qh = &mod->timer_q; + struct list_head *qe, *qe_next; + struct bfa_timer_s *elem; + struct list_head timedout_q; + + INIT_LIST_HEAD(&timedout_q); + + qe = bfa_q_next(qh); + + while (qe != qh) { + qe_next = bfa_q_next(qe); + + elem = (struct bfa_timer_s *) qe; + if (elem->timeout <= BFA_TIMER_FREQ) { + elem->timeout = 0; + list_del(&elem->qe); + list_add_tail(&elem->qe, &timedout_q); + } else { + elem->timeout -= BFA_TIMER_FREQ; + } + + qe = qe_next; /* go to next elem */ + } + + /* + * Pop all the timeout entries + */ + while (!list_empty(&timedout_q)) { + bfa_q_deq(&timedout_q, &elem); + elem->timercb(elem->arg); + } +} + +/** + * Should be called with lock protection + */ +void +bfa_timer_begin(struct bfa_timer_mod_s *mod, struct bfa_timer_s *timer, + void (*timercb) (void *), void *arg, unsigned int timeout) +{ + + bfa_assert(timercb != NULL); + bfa_assert(!bfa_q_is_on_q(&mod->timer_q, timer)); + + timer->timeout = timeout; + timer->timercb = timercb; + timer->arg = arg; + + list_add_tail(&timer->qe, &mod->timer_q); +} + +/** + * Should be called with lock protection + */ +void +bfa_timer_stop(struct bfa_timer_s *timer) +{ + bfa_assert(!list_empty(&timer->qe)); + + list_del(&timer->qe); +} diff --git a/drivers/scsi/bfa/bfa_trcmod_priv.h b/drivers/scsi/bfa/bfa_trcmod_priv.h new file mode 100644 index 000000000000..b3562dce7e9f --- /dev/null +++ b/drivers/scsi/bfa/bfa_trcmod_priv.h @@ -0,0 +1,66 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * hal_trcmod.h BFA trace modules + */ + +#ifndef __BFA_TRCMOD_PRIV_H__ +#define __BFA_TRCMOD_PRIV_H__ + +#include + +/* + * !!! Only append to the enums defined here to avoid any versioning + * !!! 
needed between trace utility and driver version + */ +enum { + BFA_TRC_HAL_IOC = 1, + BFA_TRC_HAL_INTR = 2, + BFA_TRC_HAL_FCXP = 3, + BFA_TRC_HAL_UF = 4, + BFA_TRC_HAL_DIAG = 5, + BFA_TRC_HAL_RPORT = 6, + BFA_TRC_HAL_FCPIM = 7, + BFA_TRC_HAL_IOIM = 8, + BFA_TRC_HAL_TSKIM = 9, + BFA_TRC_HAL_ITNIM = 10, + BFA_TRC_HAL_PPORT = 11, + BFA_TRC_HAL_SGPG = 12, + BFA_TRC_HAL_FLASH = 13, + BFA_TRC_HAL_DEBUG = 14, + BFA_TRC_HAL_WWN = 15, + BFA_TRC_HAL_FLASH_RAW = 16, + BFA_TRC_HAL_SBOOT = 17, + BFA_TRC_HAL_SBOOT_IO = 18, + BFA_TRC_HAL_SBOOT_INTR = 19, + BFA_TRC_HAL_SBTEST = 20, + BFA_TRC_HAL_IPFC = 21, + BFA_TRC_HAL_IOCFC = 22, + BFA_TRC_HAL_FCPTM = 23, + BFA_TRC_HAL_IOTM = 24, + BFA_TRC_HAL_TSKTM = 25, + BFA_TRC_HAL_TIN = 26, + BFA_TRC_HAL_LPS = 27, + BFA_TRC_HAL_FCDIAG = 28, + BFA_TRC_HAL_PBIND = 29, + BFA_TRC_HAL_IOCFC_CT = 30, + BFA_TRC_HAL_IOCFC_CB = 31, + BFA_TRC_HAL_IOCFC_Q = 32, +}; + +#endif /* __BFA_TRCMOD_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfa_tskim.c b/drivers/scsi/bfa/bfa_tskim.c new file mode 100644 index 000000000000..010d40d1e5d3 --- /dev/null +++ b/drivers/scsi/bfa/bfa_tskim.c @@ -0,0 +1,689 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include + +BFA_TRC_FILE(HAL, TSKIM); + +/** + * task management completion handling + */ +#define bfa_tskim_qcomp(__tskim, __cbfn) do { \ + bfa_cb_queue((__tskim)->bfa, &(__tskim)->hcb_qe, __cbfn, (__tskim)); \ + bfa_tskim_notify_comp(__tskim); \ +} while (0) + +#define bfa_tskim_notify_comp(__tskim) do { \ + if ((__tskim)->notify) \ + bfa_itnim_tskdone((__tskim)->itnim); \ +} while (0) + +/* + * forward declarations + */ +static void __bfa_cb_tskim_done(void *cbarg, bfa_boolean_t complete); +static void __bfa_cb_tskim_failed(void *cbarg, bfa_boolean_t complete); +static bfa_boolean_t bfa_tskim_match_scope(struct bfa_tskim_s *tskim, + lun_t lun); +static void bfa_tskim_gather_ios(struct bfa_tskim_s *tskim); +static void bfa_tskim_cleanp_comp(void *tskim_cbarg); +static void bfa_tskim_cleanup_ios(struct bfa_tskim_s *tskim); +static bfa_boolean_t bfa_tskim_send(struct bfa_tskim_s *tskim); +static bfa_boolean_t bfa_tskim_send_abort(struct bfa_tskim_s *tskim); +static void bfa_tskim_iocdisable_ios(struct bfa_tskim_s *tskim); + +/** + * bfa_tskim_sm + */ + +enum bfa_tskim_event { + BFA_TSKIM_SM_START = 1, /* TM command start */ + BFA_TSKIM_SM_DONE = 2, /* TM completion */ + BFA_TSKIM_SM_QRESUME = 3, /* resume after qfull */ + BFA_TSKIM_SM_HWFAIL = 5, /* IOC h/w failure event */ + BFA_TSKIM_SM_HCB = 6, /* BFA callback completion */ + BFA_TSKIM_SM_IOS_DONE = 7, /* IO and sub TM completions */ + BFA_TSKIM_SM_CLEANUP = 8, /* TM cleanup on ITN offline */ + BFA_TSKIM_SM_CLEANUP_DONE = 9, /* TM abort completion */ +}; + +static void bfa_tskim_sm_uninit(struct bfa_tskim_s *tskim, + enum bfa_tskim_event event); +static void bfa_tskim_sm_active(struct bfa_tskim_s *tskim, + enum bfa_tskim_event event); +static void bfa_tskim_sm_cleanup(struct bfa_tskim_s *tskim, + 
enum bfa_tskim_event event); +static void bfa_tskim_sm_iocleanup(struct bfa_tskim_s *tskim, + enum bfa_tskim_event event); +static void bfa_tskim_sm_qfull(struct bfa_tskim_s *tskim, + enum bfa_tskim_event event); +static void bfa_tskim_sm_cleanup_qfull(struct bfa_tskim_s *tskim, + enum bfa_tskim_event event); +static void bfa_tskim_sm_hcb(struct bfa_tskim_s *tskim, + enum bfa_tskim_event event); + +/** + * Task management command beginning state. + */ +static void +bfa_tskim_sm_uninit(struct bfa_tskim_s *tskim, enum bfa_tskim_event event) +{ + bfa_trc(tskim->bfa, event); + + switch (event) { + case BFA_TSKIM_SM_START: + bfa_sm_set_state(tskim, bfa_tskim_sm_active); + bfa_tskim_gather_ios(tskim); + + /** + * If device is offline, do not send TM on wire. Just cleanup + * any pending IO requests and complete TM request. + */ + if (!bfa_itnim_is_online(tskim->itnim)) { + bfa_sm_set_state(tskim, bfa_tskim_sm_iocleanup); + tskim->tsk_status = BFI_TSKIM_STS_OK; + bfa_tskim_cleanup_ios(tskim); + return; + } + + if (!bfa_tskim_send(tskim)) { + bfa_sm_set_state(tskim, bfa_tskim_sm_qfull); + bfa_reqq_wait(tskim->bfa, tskim->itnim->reqq, + &tskim->reqq_wait); + } + break; + + default: + bfa_assert(0); + } +} + +/** + * TM command is active, awaiting completion from firmware to + * clean up IO requests in TM scope. + */ +static void +bfa_tskim_sm_active(struct bfa_tskim_s *tskim, enum bfa_tskim_event event) +{ + bfa_trc(tskim->bfa, event); + + switch (event) { + case BFA_TSKIM_SM_DONE: + bfa_sm_set_state(tskim, bfa_tskim_sm_iocleanup); + bfa_tskim_cleanup_ios(tskim); + break; + + case BFA_TSKIM_SM_CLEANUP: + bfa_sm_set_state(tskim, bfa_tskim_sm_cleanup); + if (!bfa_tskim_send_abort(tskim)) { + bfa_sm_set_state(tskim, bfa_tskim_sm_cleanup_qfull); + bfa_reqq_wait(tskim->bfa, tskim->itnim->reqq, + &tskim->reqq_wait); + } + break; + + case BFA_TSKIM_SM_HWFAIL: + bfa_sm_set_state(tskim, bfa_tskim_sm_hcb); + bfa_tskim_iocdisable_ios(tskim); + bfa_tskim_qcomp(tskim, __bfa_cb_tskim_failed); + break; + + default: + bfa_assert(0); + } +} + +/** + * An active TM is being cleaned up since ITN is offline. Awaiting cleanup + * completion event from firmware. + */ +static void +bfa_tskim_sm_cleanup(struct bfa_tskim_s *tskim, enum bfa_tskim_event event) +{ + bfa_trc(tskim->bfa, event); + + switch (event) { + case BFA_TSKIM_SM_DONE: + /** + * Ignore and wait for ABORT completion from firmware. + */ + break; + + case BFA_TSKIM_SM_CLEANUP_DONE: + bfa_sm_set_state(tskim, bfa_tskim_sm_iocleanup); + bfa_tskim_cleanup_ios(tskim); + break; + + case BFA_TSKIM_SM_HWFAIL: + bfa_sm_set_state(tskim, bfa_tskim_sm_hcb); + bfa_tskim_iocdisable_ios(tskim); + bfa_tskim_qcomp(tskim, __bfa_cb_tskim_failed); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_tskim_sm_iocleanup(struct bfa_tskim_s *tskim, enum bfa_tskim_event event) +{ + bfa_trc(tskim->bfa, event); + + switch (event) { + case BFA_TSKIM_SM_IOS_DONE: + bfa_sm_set_state(tskim, bfa_tskim_sm_hcb); + bfa_tskim_qcomp(tskim, __bfa_cb_tskim_done); + break; + + case BFA_TSKIM_SM_CLEANUP: + /** + * Ignore, TM command completed on wire. + * Notify TM completion on IO cleanup completion.
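+ * (The cleanup path set tskim->notify, so bfa_itnim_tskdone() is + * called once all gathered IOs complete.)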
+ */ + break; + + case BFA_TSKIM_SM_HWFAIL: + bfa_sm_set_state(tskim, bfa_tskim_sm_hcb); + bfa_tskim_iocdisable_ios(tskim); + bfa_tskim_qcomp(tskim, __bfa_cb_tskim_failed); + break; + + default: + bfa_assert(0); + } +} + +/** + * Task management command is waiting for room in request CQ + */ +static void +bfa_tskim_sm_qfull(struct bfa_tskim_s *tskim, enum bfa_tskim_event event) +{ + bfa_trc(tskim->bfa, event); + + switch (event) { + case BFA_TSKIM_SM_QRESUME: + bfa_sm_set_state(tskim, bfa_tskim_sm_active); + bfa_tskim_send(tskim); + break; + + case BFA_TSKIM_SM_CLEANUP: + /** + * No need to send TM on wire since ITN is offline. + */ + bfa_sm_set_state(tskim, bfa_tskim_sm_iocleanup); + bfa_reqq_wcancel(&tskim->reqq_wait); + bfa_tskim_cleanup_ios(tskim); + break; + + case BFA_TSKIM_SM_HWFAIL: + bfa_sm_set_state(tskim, bfa_tskim_sm_hcb); + bfa_reqq_wcancel(&tskim->reqq_wait); + bfa_tskim_iocdisable_ios(tskim); + bfa_tskim_qcomp(tskim, __bfa_cb_tskim_failed); + break; + + default: + bfa_assert(0); + } +} + +/** + * Task management command is active, awaiting room in the request CQ + * to send the cleanup request. + */ +static void +bfa_tskim_sm_cleanup_qfull(struct bfa_tskim_s *tskim, + enum bfa_tskim_event event) +{ + bfa_trc(tskim->bfa, event); + + switch (event) { + case BFA_TSKIM_SM_DONE: + bfa_reqq_wcancel(&tskim->reqq_wait); + /** + * + * Fall through !!! + */ + + case BFA_TSKIM_SM_QRESUME: + bfa_sm_set_state(tskim, bfa_tskim_sm_cleanup); + bfa_tskim_send_abort(tskim); + break; + + case BFA_TSKIM_SM_HWFAIL: + bfa_sm_set_state(tskim, bfa_tskim_sm_hcb); + bfa_reqq_wcancel(&tskim->reqq_wait); + bfa_tskim_iocdisable_ios(tskim); + bfa_tskim_qcomp(tskim, __bfa_cb_tskim_failed); + break; + + default: + bfa_assert(0); + } +} + +/** + * BFA callback is pending + */ +static void +bfa_tskim_sm_hcb(struct bfa_tskim_s *tskim, enum bfa_tskim_event event) +{ + bfa_trc(tskim->bfa, event); + + switch (event) { + case BFA_TSKIM_SM_HCB: + bfa_sm_set_state(tskim, bfa_tskim_sm_uninit); + bfa_tskim_free(tskim); + break; + + case BFA_TSKIM_SM_CLEANUP: + bfa_tskim_notify_comp(tskim); + break; + + case BFA_TSKIM_SM_HWFAIL: + break; + + default: + bfa_assert(0); + } +} + + + +/** + * bfa_tskim_private + */ + +static void +__bfa_cb_tskim_done(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_tskim_s *tskim = cbarg; + + if (!complete) { + bfa_sm_send_event(tskim, BFA_TSKIM_SM_HCB); + return; + } + + bfa_stats(tskim->itnim, tm_success); + bfa_cb_tskim_done(tskim->bfa->bfad, tskim->dtsk, tskim->tsk_status); +} + +static void +__bfa_cb_tskim_failed(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_tskim_s *tskim = cbarg; + + if (!complete) { + bfa_sm_send_event(tskim, BFA_TSKIM_SM_HCB); + return; + } + + bfa_stats(tskim->itnim, tm_failures); + bfa_cb_tskim_done(tskim->bfa->bfad, tskim->dtsk, + BFI_TSKIM_STS_FAILED); +} + +static bfa_boolean_t +bfa_tskim_match_scope(struct bfa_tskim_s *tskim, lun_t lun) +{ + switch (tskim->tm_cmnd) { + case FCP_TM_TARGET_RESET: + return BFA_TRUE; + + case FCP_TM_ABORT_TASK_SET: + case FCP_TM_CLEAR_TASK_SET: + case FCP_TM_LUN_RESET: + case FCP_TM_CLEAR_ACA: + return (tskim->lun == lun); + + default: + bfa_assert(0); + } + + return BFA_FALSE; +} + +/** + * Gather affected IO requests and task management commands. + */ +static void +bfa_tskim_gather_ios(struct bfa_tskim_s *tskim) +{ + struct bfa_itnim_s *itnim = tskim->itnim; + struct bfa_ioim_s *ioim; + struct list_head *qe, *qen; + + INIT_LIST_HEAD(&tskim->io_q); + + /** + * Gather any active IO requests first.
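+ * An IO is in scope if it matches the TM command: all IOs for a + * target reset, only LUN-matched IOs for the other TM commands.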
+ */ + list_for_each_safe(qe, qen, &itnim->io_q) { + ioim = (struct bfa_ioim_s *) qe; + if (bfa_tskim_match_scope + (tskim, bfa_cb_ioim_get_lun(ioim->dio))) { + list_del(&ioim->qe); + list_add_tail(&ioim->qe, &tskim->io_q); + } + } + + /** + * Failback any pending IO requests immediately. + */ + list_for_each_safe(qe, qen, &itnim->pending_q) { + ioim = (struct bfa_ioim_s *) qe; + if (bfa_tskim_match_scope + (tskim, bfa_cb_ioim_get_lun(ioim->dio))) { + list_del(&ioim->qe); + list_add_tail(&ioim->qe, &ioim->fcpim->ioim_comp_q); + bfa_ioim_tov(ioim); + } + } +} + +/** + * IO cleanup completion + */ +static void +bfa_tskim_cleanp_comp(void *tskim_cbarg) +{ + struct bfa_tskim_s *tskim = tskim_cbarg; + + bfa_stats(tskim->itnim, tm_io_comps); + bfa_sm_send_event(tskim, BFA_TSKIM_SM_IOS_DONE); +} + +/** + * Cleanup IO requests gathered under this TM command. + */ +static void +bfa_tskim_cleanup_ios(struct bfa_tskim_s *tskim) +{ + struct bfa_ioim_s *ioim; + struct list_head *qe, *qen; + + bfa_wc_init(&tskim->wc, bfa_tskim_cleanp_comp, tskim); + + list_for_each_safe(qe, qen, &tskim->io_q) { + ioim = (struct bfa_ioim_s *) qe; + bfa_wc_up(&tskim->wc); + bfa_ioim_cleanup_tm(ioim, tskim); + } + + bfa_wc_wait(&tskim->wc); +} + +/** + * Send task management request to firmware. + */ +static bfa_boolean_t +bfa_tskim_send(struct bfa_tskim_s *tskim) +{ + struct bfa_itnim_s *itnim = tskim->itnim; + struct bfi_tskim_req_s *m; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(tskim->bfa, itnim->reqq); + if (!m) + return BFA_FALSE; + + /** + * build i/o request message next + */ + bfi_h2i_set(m->mh, BFI_MC_TSKIM, BFI_TSKIM_H2I_TM_REQ, + bfa_lpuid(tskim->bfa)); + + m->tsk_tag = bfa_os_htons(tskim->tsk_tag); + m->itn_fhdl = tskim->itnim->rport->fw_handle; + m->t_secs = tskim->tsecs; + m->lun = tskim->lun; + m->tm_flags = tskim->tm_cmnd; + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(tskim->bfa, itnim->reqq); + return BFA_TRUE; +} + +/** + * Send abort request to firmware to clean up an active TM. + */ +static bfa_boolean_t +bfa_tskim_send_abort(struct bfa_tskim_s *tskim) +{ + struct bfa_itnim_s *itnim = tskim->itnim; + struct bfi_tskim_abortreq_s *m; + + /** + * check for room in queue to send request now + */ + m = bfa_reqq_next(tskim->bfa, itnim->reqq); + if (!m) + return BFA_FALSE; + + /** + * build i/o request message next + */ + bfi_h2i_set(m->mh, BFI_MC_TSKIM, BFI_TSKIM_H2I_ABORT_REQ, + bfa_lpuid(tskim->bfa)); + + m->tsk_tag = bfa_os_htons(tskim->tsk_tag); + + /** + * queue I/O message to firmware + */ + bfa_reqq_produce(tskim->bfa, itnim->reqq); + return BFA_TRUE; +} + +/** + * Call to resume a task management command waiting for room in the + * request queue. + */ +static void +bfa_tskim_qresume(void *cbarg) +{ + struct bfa_tskim_s *tskim = cbarg; + + bfa_fcpim_stats(tskim->fcpim, qresumes); + bfa_stats(tskim->itnim, tm_qresumes); + bfa_sm_send_event(tskim, BFA_TSKIM_SM_QRESUME); +} + +/** + * Cleanup IOs associated with a task management command on IOC failures. + */ +static void +bfa_tskim_iocdisable_ios(struct bfa_tskim_s *tskim) +{ + struct bfa_ioim_s *ioim; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &tskim->io_q) { + ioim = (struct bfa_ioim_s *) qe; + bfa_ioim_iocdisable(ioim); + } +} + + + +/** + * bfa_tskim_friend + */ + +/** + * Notification on completions from related ioim. + */ +void +bfa_tskim_iodone(struct bfa_tskim_s *tskim) +{ + bfa_wc_down(&tskim->wc); +} + +/** + * Handle IOC h/w failure notification from itnim.
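+ * Gathered IOs are failed immediately and the TM command completes + * with BFI_TSKIM_STS_FAILED without waiting for firmware responses.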
+ */ +void +bfa_tskim_iocdisable(struct bfa_tskim_s *tskim) +{ + tskim->notify = BFA_FALSE; + bfa_stats(tskim->itnim, tm_iocdowns); + bfa_sm_send_event(tskim, BFA_TSKIM_SM_HWFAIL); +} + +/** + * Cleanup TM command and associated IOs as part of ITNIM offline. + */ +void +bfa_tskim_cleanup(struct bfa_tskim_s *tskim) +{ + tskim->notify = BFA_TRUE; + bfa_stats(tskim->itnim, tm_cleanups); + bfa_sm_send_event(tskim, BFA_TSKIM_SM_CLEANUP); +} + +/** + * Memory allocation and initialization. + */ +void +bfa_tskim_attach(struct bfa_fcpim_mod_s *fcpim, struct bfa_meminfo_s *minfo) +{ + struct bfa_tskim_s *tskim; + u16 i; + + INIT_LIST_HEAD(&fcpim->tskim_free_q); + + tskim = (struct bfa_tskim_s *) bfa_meminfo_kva(minfo); + fcpim->tskim_arr = tskim; + + for (i = 0; i < fcpim->num_tskim_reqs; i++, tskim++) { + /* + * initialize TSKIM + */ + bfa_os_memset(tskim, 0, sizeof(struct bfa_tskim_s)); + tskim->tsk_tag = i; + tskim->bfa = fcpim->bfa; + tskim->fcpim = fcpim; + tskim->notify = BFA_FALSE; + bfa_reqq_winit(&tskim->reqq_wait, bfa_tskim_qresume, + tskim); + bfa_sm_set_state(tskim, bfa_tskim_sm_uninit); + + list_add_tail(&tskim->qe, &fcpim->tskim_free_q); + } + + bfa_meminfo_kva(minfo) = (u8 *) tskim; +} + +void +bfa_tskim_detach(struct bfa_fcpim_mod_s *fcpim) +{ + /** + * @todo + */ +} + +void +bfa_tskim_isr(struct bfa_s *bfa, struct bfi_msg_s *m) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + struct bfi_tskim_rsp_s *rsp = (struct bfi_tskim_rsp_s *) m; + struct bfa_tskim_s *tskim; + u16 tsk_tag = bfa_os_ntohs(rsp->tsk_tag); + + tskim = BFA_TSKIM_FROM_TAG(fcpim, tsk_tag); + bfa_assert(tskim->tsk_tag == tsk_tag); + + tskim->tsk_status = rsp->tsk_status; + + /** + * Firmware sends BFI_TSKIM_STS_ABORTED status for abort + * requests. All other statuses are for normal completions. + */ + if (rsp->tsk_status == BFI_TSKIM_STS_ABORTED) { + bfa_stats(tskim->itnim, tm_cleanup_comps); + bfa_sm_send_event(tskim, BFA_TSKIM_SM_CLEANUP_DONE); + } else { + bfa_stats(tskim->itnim, tm_fw_rsps); + bfa_sm_send_event(tskim, BFA_TSKIM_SM_DONE); + } +} + + + +/** + * bfa_tskim_api + */ + + +struct bfa_tskim_s * +bfa_tskim_alloc(struct bfa_s *bfa, struct bfad_tskim_s *dtsk) +{ + struct bfa_fcpim_mod_s *fcpim = BFA_FCPIM_MOD(bfa); + struct bfa_tskim_s *tskim; + + bfa_q_deq(&fcpim->tskim_free_q, &tskim); + + if (!tskim) + bfa_fcpim_stats(fcpim, no_tskims); + else + tskim->dtsk = dtsk; + + return tskim; +} + +void +bfa_tskim_free(struct bfa_tskim_s *tskim) +{ + bfa_assert(bfa_q_is_on_q_func(&tskim->itnim->tsk_q, &tskim->qe)); + list_del(&tskim->qe); + list_add_tail(&tskim->qe, &tskim->fcpim->tskim_free_q); +} + +/** + * Start a task management command. + * + * @param[in] tskim BFA task management command instance + * @param[in] itnim i-t nexus for the task management command + * @param[in] lun lun, if applicable + * @param[in] tm_cmnd Task management command code. + * @param[in] t_secs Timeout in seconds + * + * @return None. 
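+ * + * The command is queued on the i-t nexus (itnim->tsk_q) and driven + * through the TSKIM state machine.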
+ */ +void +bfa_tskim_start(struct bfa_tskim_s *tskim, struct bfa_itnim_s *itnim, lun_t lun, + enum fcp_tm_cmnd tm_cmnd, u8 tsecs) +{ + tskim->itnim = itnim; + tskim->lun = lun; + tskim->tm_cmnd = tm_cmnd; + tskim->tsecs = tsecs; + tskim->notify = BFA_FALSE; + bfa_stats(itnim, tm_cmnds); + + list_add_tail(&tskim->qe, &itnim->tsk_q); + bfa_sm_send_event(tskim, BFA_TSKIM_SM_START); +} + + diff --git a/drivers/scsi/bfa/bfa_uf.c b/drivers/scsi/bfa/bfa_uf.c new file mode 100644 index 000000000000..ff5f9deb1b22 --- /dev/null +++ b/drivers/scsi/bfa/bfa_uf.c @@ -0,0 +1,345 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_uf.c BFA unsolicited frame receive implementation + */ + +#include +#include +#include +#include + +BFA_TRC_FILE(HAL, UF); +BFA_MODULE(uf); + +/* + ***************************************************************************** + * Internal functions + ***************************************************************************** + */ +static void +__bfa_cb_uf_recv(void *cbarg, bfa_boolean_t complete) +{ + struct bfa_uf_s *uf = cbarg; + struct bfa_uf_mod_s *ufm = BFA_UF_MOD(uf->bfa); + + if (complete) + ufm->ufrecv(ufm->cbarg, uf); +} + +static void +claim_uf_pbs(struct bfa_uf_mod_s *ufm, struct bfa_meminfo_s *mi) +{ + u32 uf_pb_tot_sz; + + ufm->uf_pbs_kva = (struct bfa_uf_buf_s *) bfa_meminfo_dma_virt(mi); + ufm->uf_pbs_pa = bfa_meminfo_dma_phys(mi); + uf_pb_tot_sz = BFA_ROUNDUP((sizeof(struct bfa_uf_buf_s) * ufm->num_ufs), + BFA_DMA_ALIGN_SZ); + + bfa_meminfo_dma_virt(mi) += uf_pb_tot_sz; + bfa_meminfo_dma_phys(mi) += uf_pb_tot_sz; + + bfa_os_memset((void *)ufm->uf_pbs_kva, 0, uf_pb_tot_sz); +} + +static void +claim_uf_post_msgs(struct bfa_uf_mod_s *ufm, struct bfa_meminfo_s *mi) +{ + struct bfi_uf_buf_post_s *uf_bp_msg; + struct bfi_sge_s *sge; + union bfi_addr_u sga_zero = { {0} }; + u16 i; + u16 buf_len; + + ufm->uf_buf_posts = (struct bfi_uf_buf_post_s *) bfa_meminfo_kva(mi); + uf_bp_msg = ufm->uf_buf_posts; + + for (i = 0, uf_bp_msg = ufm->uf_buf_posts; i < ufm->num_ufs; + i++, uf_bp_msg++) { + bfa_os_memset(uf_bp_msg, 0, sizeof(struct bfi_uf_buf_post_s)); + + uf_bp_msg->buf_tag = i; + buf_len = sizeof(struct bfa_uf_buf_s); + uf_bp_msg->buf_len = bfa_os_htons(buf_len); + bfi_h2i_set(uf_bp_msg->mh, BFI_MC_UF, BFI_UF_H2I_BUF_POST, + bfa_lpuid(ufm->bfa)); + + sge = uf_bp_msg->sge; + sge[0].sg_len = buf_len; + sge[0].flags = BFI_SGE_DATA_LAST; + bfa_dma_addr_set(sge[0].sga, ufm_pbs_pa(ufm, i)); + bfa_sge_to_be(sge); + + sge[1].sg_len = buf_len; + sge[1].flags = BFI_SGE_PGDLEN; + sge[1].sga = sga_zero; + bfa_sge_to_be(&sge[1]); + } + + /** + * advance pointer beyond consumed memory + */ + bfa_meminfo_kva(mi) = (u8 *) uf_bp_msg; +} + +static void +claim_ufs(struct bfa_uf_mod_s *ufm, struct bfa_meminfo_s *mi) +{ + u16 i; + struct bfa_uf_s *uf; + + /* + * Claim block of memory for UF list + */ + ufm->uf_list = (struct bfa_uf_s *) bfa_meminfo_kva(mi); + + /* + * Initialize UFs and queue it in 
UF free queue + */ + for (i = 0, uf = ufm->uf_list; i < ufm->num_ufs; i++, uf++) { + bfa_os_memset(uf, 0, sizeof(struct bfa_uf_s)); + uf->bfa = ufm->bfa; + uf->uf_tag = i; + uf->pb_len = sizeof(struct bfa_uf_buf_s); + uf->buf_kva = (void *)&ufm->uf_pbs_kva[i]; + uf->buf_pa = ufm_pbs_pa(ufm, i); + list_add_tail(&uf->qe, &ufm->uf_free_q); + } + + /** + * advance memory pointer + */ + bfa_meminfo_kva(mi) = (u8 *) uf; +} + +static void +uf_mem_claim(struct bfa_uf_mod_s *ufm, struct bfa_meminfo_s *mi) +{ + claim_uf_pbs(ufm, mi); + claim_ufs(ufm, mi); + claim_uf_post_msgs(ufm, mi); +} + +static void +bfa_uf_meminfo(struct bfa_iocfc_cfg_s *cfg, u32 *ndm_len, u32 *dm_len) +{ + u32 num_ufs = cfg->fwcfg.num_uf_bufs; + + /* + * dma-able memory for UF posted bufs + */ + *dm_len += BFA_ROUNDUP((sizeof(struct bfa_uf_buf_s) * num_ufs), + BFA_DMA_ALIGN_SZ); + + /* + * kernel Virtual memory for UFs and UF buf post msg copies + */ + *ndm_len += sizeof(struct bfa_uf_s) * num_ufs; + *ndm_len += sizeof(struct bfi_uf_buf_post_s) * num_ufs; +} + +static void +bfa_uf_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, struct bfa_pcidev_s *pcidev) +{ + struct bfa_uf_mod_s *ufm = BFA_UF_MOD(bfa); + + bfa_os_memset(ufm, 0, sizeof(struct bfa_uf_mod_s)); + ufm->bfa = bfa; + ufm->num_ufs = cfg->fwcfg.num_uf_bufs; + INIT_LIST_HEAD(&ufm->uf_free_q); + INIT_LIST_HEAD(&ufm->uf_posted_q); + + uf_mem_claim(ufm, meminfo); +} + +static void +bfa_uf_initdone(struct bfa_s *bfa) +{ +} + +static void +bfa_uf_detach(struct bfa_s *bfa) +{ +} + +static struct bfa_uf_s * +bfa_uf_get(struct bfa_uf_mod_s *uf_mod) +{ + struct bfa_uf_s *uf; + + bfa_q_deq(&uf_mod->uf_free_q, &uf); + return (uf); +} + +static void +bfa_uf_put(struct bfa_uf_mod_s *uf_mod, struct bfa_uf_s *uf) +{ + list_add_tail(&uf->qe, &uf_mod->uf_free_q); +} + +static bfa_status_t +bfa_uf_post(struct bfa_uf_mod_s *ufm, struct bfa_uf_s *uf) +{ + struct bfi_uf_buf_post_s *uf_post_msg; + + uf_post_msg = bfa_reqq_next(ufm->bfa, BFA_REQQ_FCXP); + if (!uf_post_msg) + return BFA_STATUS_FAILED; + + bfa_os_memcpy(uf_post_msg, &ufm->uf_buf_posts[uf->uf_tag], + sizeof(struct bfi_uf_buf_post_s)); + bfa_reqq_produce(ufm->bfa, BFA_REQQ_FCXP); + + bfa_trc(ufm->bfa, uf->uf_tag); + + list_add_tail(&uf->qe, &ufm->uf_posted_q); + return BFA_STATUS_OK; +} + +static void +bfa_uf_post_all(struct bfa_uf_mod_s *uf_mod) +{ + struct bfa_uf_s *uf; + + while ((uf = bfa_uf_get(uf_mod)) != NULL) { + if (bfa_uf_post(uf_mod, uf) != BFA_STATUS_OK) + break; + } +} + +static void +uf_recv(struct bfa_s *bfa, struct bfi_uf_frm_rcvd_s *m) +{ + struct bfa_uf_mod_s *ufm = BFA_UF_MOD(bfa); + u16 uf_tag = m->buf_tag; + struct bfa_uf_buf_s *uf_buf = &ufm->uf_pbs_kva[uf_tag]; + struct bfa_uf_s *uf = &ufm->uf_list[uf_tag]; + u8 *buf = &uf_buf->d[0]; + struct fchs_s *fchs; + + m->frm_len = bfa_os_ntohs(m->frm_len); + m->xfr_len = bfa_os_ntohs(m->xfr_len); + + fchs = (struct fchs_s *) uf_buf; + + list_del(&uf->qe); /* dequeue from posted queue */ + + uf->data_ptr = buf; + uf->data_len = m->xfr_len; + + bfa_assert(uf->data_len >= sizeof(struct fchs_s)); + + if (uf->data_len == sizeof(struct fchs_s)) { + bfa_plog_fchdr(bfa->plog, BFA_PL_MID_HAL_UF, BFA_PL_EID_RX, + uf->data_len, (struct fchs_s *) buf); + } else { + u32 pld_w0 = *((u32 *) (buf + sizeof(struct fchs_s))); + bfa_plog_fchdr_and_pl(bfa->plog, BFA_PL_MID_HAL_UF, + BFA_PL_EID_RX, uf->data_len, + (struct fchs_s *) buf, pld_w0); + } + + bfa_cb_queue(bfa, &uf->hcb_qe, __bfa_cb_uf_recv, uf); +} + +static void 
+bfa_uf_stop(struct bfa_s *bfa) +{ +} + +static void +bfa_uf_iocdisable(struct bfa_s *bfa) +{ + struct bfa_uf_mod_s *ufm = BFA_UF_MOD(bfa); + struct bfa_uf_s *uf; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &ufm->uf_posted_q) { + uf = (struct bfa_uf_s *) qe; + list_del(&uf->qe); + bfa_uf_put(ufm, uf); + } +} + +static void +bfa_uf_start(struct bfa_s *bfa) +{ + bfa_uf_post_all(BFA_UF_MOD(bfa)); +} + + + +/** + * bfa_uf_api + */ + +/** + * Register handler for all unsolicited receive frames. + * + * @param[in] bfa BFA instance + * @param[in] ufrecv receive handler function + * @param[in] cbarg receive handler arg + */ +void +bfa_uf_recv_register(struct bfa_s *bfa, bfa_cb_uf_recv_t ufrecv, void *cbarg) +{ + struct bfa_uf_mod_s *ufm = BFA_UF_MOD(bfa); + + ufm->ufrecv = ufrecv; + ufm->cbarg = cbarg; +} + +/** + * Free an unsolicited frame back to BFA. + * + * @param[in] uf unsolicited frame to be freed + * + * @return None + */ +void +bfa_uf_free(struct bfa_uf_s *uf) +{ + bfa_uf_put(BFA_UF_MOD(uf->bfa), uf); + bfa_uf_post_all(BFA_UF_MOD(uf->bfa)); +} + + + +/** + * uf_pub BFA uf module public functions + */ + +void +bfa_uf_isr(struct bfa_s *bfa, struct bfi_msg_s *msg) +{ + bfa_trc(bfa, msg->mhdr.msg_id); + + switch (msg->mhdr.msg_id) { + case BFI_UF_I2H_FRM_RCVD: + uf_recv(bfa, (struct bfi_uf_frm_rcvd_s *) msg); + break; + + default: + bfa_trc(bfa, msg->mhdr.msg_id); + bfa_assert(0); + } +} + + diff --git a/drivers/scsi/bfa/bfa_uf_priv.h b/drivers/scsi/bfa/bfa_uf_priv.h new file mode 100644 index 000000000000..bcb490f834f3 --- /dev/null +++ b/drivers/scsi/bfa/bfa_uf_priv.h @@ -0,0 +1,47 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_UF_PRIV_H__ +#define __BFA_UF_PRIV_H__ + +#include +#include +#include + +#define BFA_UF_MIN (4) + +struct bfa_uf_mod_s { + struct bfa_s *bfa; /* back pointer to BFA */ + struct bfa_uf_s *uf_list; /* array of UFs */ + u16 num_ufs; /* num unsolicited rx frames */ + struct list_head uf_free_q; /* free UFs */ + struct list_head uf_posted_q; /* UFs posted to IOC */ + struct bfa_uf_buf_s *uf_pbs_kva; /* list UF bufs request pld */ + u64 uf_pbs_pa; /* phy addr for UF bufs */ + struct bfi_uf_buf_post_s *uf_buf_posts; + /* pre-built UF post msgs */ + bfa_cb_uf_recv_t ufrecv; /* uf recv handler function */ + void *cbarg; /* uf receive handler arg */ +}; + +#define BFA_UF_MOD(__bfa) (&(__bfa)->modules.uf_mod) + +#define ufm_pbs_pa(_ufmod, _uftag) \ + ((_ufmod)->uf_pbs_pa + sizeof(struct bfa_uf_buf_s) * (_uftag)) + +void bfa_uf_isr(struct bfa_s *bfa, struct bfi_msg_s *msg); + +#endif /* __BFA_UF_PRIV_H__ */ diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c new file mode 100644 index 000000000000..6f2be5abf561 --- /dev/null +++ b/drivers/scsi/bfa/bfad.c @@ -0,0 +1,1182 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfad.c Linux driver PCI interface module. + */ + +#include +#include "bfad_drv.h" +#include "bfad_im.h" +#include "bfad_tm.h" +#include "bfad_ipfc.h" +#include "bfad_trcmod.h" +#include +#include +#include +#include + +BFA_TRC_FILE(LDRV, BFAD); +static DEFINE_MUTEX(bfad_mutex); +LIST_HEAD(bfad_list); +static int bfad_inst; +int bfad_supported_fc4s; + +static char *host_name; +static char *os_name; +static char *os_patch; +static int num_rports; +static int num_ios; +static int num_tms; +static int num_fcxps; +static int num_ufbufs; +static int reqq_size; +static int rspq_size; +static int num_sgpgs; +static int rport_del_timeout = BFA_FCS_RPORT_DEF_DEL_TIMEOUT; +static int bfa_io_max_sge = BFAD_IO_MAX_SGE; +static int log_level = BFA_LOG_WARNING; +static int ioc_auto_recover = BFA_TRUE; +static int ipfc_enable = BFA_FALSE; +static int ipfc_mtu = -1; +int bfa_lun_queue_depth = BFAD_LUN_QUEUE_DEPTH; +int bfa_linkup_delay = -1; + +module_param(os_name, charp, S_IRUGO | S_IWUSR); +module_param(os_patch, charp, S_IRUGO | S_IWUSR); +module_param(host_name, charp, S_IRUGO | S_IWUSR); +module_param(num_rports, int, S_IRUGO | S_IWUSR); +module_param(num_ios, int, S_IRUGO | S_IWUSR); +module_param(num_tms, int, S_IRUGO | S_IWUSR); +module_param(num_fcxps, int, S_IRUGO | S_IWUSR); +module_param(num_ufbufs, int, S_IRUGO | S_IWUSR); +module_param(reqq_size, int, S_IRUGO | S_IWUSR); +module_param(rspq_size, int, S_IRUGO | S_IWUSR); +module_param(num_sgpgs, int, S_IRUGO | S_IWUSR); +module_param(rport_del_timeout, int, S_IRUGO | S_IWUSR); +module_param(bfa_lun_queue_depth, int, S_IRUGO | S_IWUSR); +module_param(bfa_io_max_sge, int, S_IRUGO | S_IWUSR); +module_param(log_level, int, S_IRUGO | S_IWUSR); +module_param(ioc_auto_recover, int, S_IRUGO | S_IWUSR); +module_param(ipfc_enable, int, S_IRUGO | S_IWUSR); +module_param(ipfc_mtu, int, S_IRUGO | S_IWUSR); +module_param(bfa_linkup_delay, int, S_IRUGO | S_IWUSR); + +/* + * Stores the module parm num_sgpgs value; + * used to reset for bfad next instance. 
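+ * (bfad_hal_mem_alloc() lowers num_sgpgs and retries when a large DMA + * allocation fails.)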
+ */ +static int num_sgpgs_parm; + +static bfa_status_t +bfad_fc4_probe(struct bfad_s *bfad) +{ + int rc; + + rc = bfad_im_probe(bfad); + if (rc != BFA_STATUS_OK) + goto ext; + + bfad_tm_probe(bfad); + + if (ipfc_enable) + bfad_ipfc_probe(bfad); +ext: + return rc; +} + +static void +bfad_fc4_probe_undo(struct bfad_s *bfad) +{ + bfad_im_probe_undo(bfad); + bfad_tm_probe_undo(bfad); + if (ipfc_enable) + bfad_ipfc_probe_undo(bfad); +} + +static void +bfad_fc4_probe_post(struct bfad_s *bfad) +{ + if (bfad->im) + bfad_im_probe_post(bfad->im); + + bfad_tm_probe_post(bfad); + if (ipfc_enable) + bfad_ipfc_probe_post(bfad); +} + +static bfa_status_t +bfad_fc4_port_new(struct bfad_s *bfad, struct bfad_port_s *port, int roles) +{ + int rc = BFA_STATUS_FAILED; + + if (roles & BFA_PORT_ROLE_FCP_IM) + rc = bfad_im_port_new(bfad, port); + if (rc != BFA_STATUS_OK) + goto ext; + + if (roles & BFA_PORT_ROLE_FCP_TM) + rc = bfad_tm_port_new(bfad, port); + if (rc != BFA_STATUS_OK) + goto ext; + + if ((roles & BFA_PORT_ROLE_FCP_IPFC) && ipfc_enable) + rc = bfad_ipfc_port_new(bfad, port, port->pvb_type); +ext: + return rc; +} + +static void +bfad_fc4_port_delete(struct bfad_s *bfad, struct bfad_port_s *port, int roles) +{ + if (roles & BFA_PORT_ROLE_FCP_IM) + bfad_im_port_delete(bfad, port); + + if (roles & BFA_PORT_ROLE_FCP_TM) + bfad_tm_port_delete(bfad, port); + + if ((roles & BFA_PORT_ROLE_FCP_IPFC) && ipfc_enable) + bfad_ipfc_port_delete(bfad, port); +} + +/** + * BFA callbacks + */ +void +bfad_hcb_comp(void *arg, bfa_status_t status) +{ + struct bfad_hal_comp *fcomp = (struct bfad_hal_comp *)arg; + + fcomp->status = status; + complete(&fcomp->comp); +} + +/** + * bfa_init callback + */ +void +bfa_cb_init(void *drv, bfa_status_t init_status) +{ + struct bfad_s *bfad = drv; + + if (init_status == BFA_STATUS_OK) + bfad->bfad_flags |= BFAD_HAL_INIT_DONE; + + complete(&bfad->comp); +} + + + +/** + * BFA_FCS callbacks + */ +static struct bfad_port_s * +bfad_get_drv_port(struct bfad_s *bfad, struct bfad_vf_s *vf_drv, + struct bfad_vport_s *vp_drv) +{ + return ((vp_drv) ? (&(vp_drv)->drv_port) + : ((vf_drv) ? 
(&(vf_drv)->base_port) : (&(bfad)->pport))); +} + +struct bfad_port_s * +bfa_fcb_port_new(struct bfad_s *bfad, struct bfa_fcs_port_s *port, + enum bfa_port_role roles, struct bfad_vf_s *vf_drv, + struct bfad_vport_s *vp_drv) +{ + bfa_status_t rc; + struct bfad_port_s *port_drv; + + if (!vp_drv && !vf_drv) { + port_drv = &bfad->pport; + port_drv->pvb_type = BFAD_PORT_PHYS_BASE; + } else if (!vp_drv && vf_drv) { + port_drv = &vf_drv->base_port; + port_drv->pvb_type = BFAD_PORT_VF_BASE; + } else if (vp_drv && !vf_drv) { + port_drv = &vp_drv->drv_port; + port_drv->pvb_type = BFAD_PORT_PHYS_VPORT; + } else { + port_drv = &vp_drv->drv_port; + port_drv->pvb_type = BFAD_PORT_VF_VPORT; + } + + port_drv->fcs_port = port; + port_drv->roles = roles; + rc = bfad_fc4_port_new(bfad, port_drv, roles); + if (rc != BFA_STATUS_OK) { + bfad_fc4_port_delete(bfad, port_drv, roles); + port_drv = NULL; + } + + return port_drv; +} + +void +bfa_fcb_port_delete(struct bfad_s *bfad, enum bfa_port_role roles, + struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv) +{ + struct bfad_port_s *port_drv; + + /* + * this will be only called from rmmod context + */ + if (vp_drv && !vp_drv->comp_del) { + port_drv = bfad_get_drv_port(bfad, vf_drv, vp_drv); + bfa_trc(bfad, roles); + bfad_fc4_port_delete(bfad, port_drv, roles); + } +} + +void +bfa_fcb_port_online(struct bfad_s *bfad, enum bfa_port_role roles, + struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv) +{ + struct bfad_port_s *port_drv = bfad_get_drv_port(bfad, vf_drv, vp_drv); + + if (roles & BFA_PORT_ROLE_FCP_IM) + bfad_im_port_online(bfad, port_drv); + + if (roles & BFA_PORT_ROLE_FCP_TM) + bfad_tm_port_online(bfad, port_drv); + + if ((roles & BFA_PORT_ROLE_FCP_IPFC) && ipfc_enable) + bfad_ipfc_port_online(bfad, port_drv); + + bfad->bfad_flags |= BFAD_PORT_ONLINE; +} + +void +bfa_fcb_port_offline(struct bfad_s *bfad, enum bfa_port_role roles, + struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv) +{ + struct bfad_port_s *port_drv = bfad_get_drv_port(bfad, vf_drv, vp_drv); + + if (roles & BFA_PORT_ROLE_FCP_IM) + bfad_im_port_offline(bfad, port_drv); + + if (roles & BFA_PORT_ROLE_FCP_TM) + bfad_tm_port_offline(bfad, port_drv); + + if ((roles & BFA_PORT_ROLE_FCP_IPFC) && ipfc_enable) + bfad_ipfc_port_offline(bfad, port_drv); +} + +void +bfa_fcb_vport_delete(struct bfad_vport_s *vport_drv) +{ + if (vport_drv->comp_del) { + complete(vport_drv->comp_del); + return; + } + + kfree(vport_drv); +} + +/** + * FCS RPORT alloc callback, after successful PLOGI by FCS + */ +bfa_status_t +bfa_fcb_rport_alloc(struct bfad_s *bfad, struct bfa_fcs_rport_s **rport, + struct bfad_rport_s **rport_drv) +{ + bfa_status_t rc = BFA_STATUS_OK; + + *rport_drv = kzalloc(sizeof(struct bfad_rport_s), GFP_ATOMIC); + if (*rport_drv == NULL) { + rc = BFA_STATUS_ENOMEM; + goto ext; + } + + *rport = &(*rport_drv)->fcs_rport; + +ext: + return rc; +} + + + +void +bfad_hal_mem_release(struct bfad_s *bfad) +{ + int i; + struct bfa_meminfo_s *hal_meminfo = &bfad->meminfo; + struct bfa_mem_elem_s *meminfo_elem; + + for (i = 0; i < BFA_MEM_TYPE_MAX; i++) { + meminfo_elem = &hal_meminfo->meminfo[i]; + if (meminfo_elem->kva != NULL) { + switch (meminfo_elem->mem_type) { + case BFA_MEM_TYPE_KVA: + vfree(meminfo_elem->kva); + break; + case BFA_MEM_TYPE_DMA: + dma_free_coherent(&bfad->pcidev->dev, + meminfo_elem->mem_len, + meminfo_elem->kva, + (dma_addr_t) meminfo_elem->dma); + break; + default: + bfa_assert(0); + break; + } + } + } + + memset(hal_meminfo, 0, sizeof(struct bfa_meminfo_s)); +} + +void 
+bfad_update_hal_cfg(struct bfa_iocfc_cfg_s *bfa_cfg) +{ + if (num_rports > 0) + bfa_cfg->fwcfg.num_rports = num_rports; + if (num_ios > 0) + bfa_cfg->fwcfg.num_ioim_reqs = num_ios; + if (num_tms > 0) + bfa_cfg->fwcfg.num_tskim_reqs = num_tms; + if (num_fcxps > 0) + bfa_cfg->fwcfg.num_fcxp_reqs = num_fcxps; + if (num_ufbufs > 0) + bfa_cfg->fwcfg.num_uf_bufs = num_ufbufs; + if (reqq_size > 0) + bfa_cfg->drvcfg.num_reqq_elems = reqq_size; + if (rspq_size > 0) + bfa_cfg->drvcfg.num_rspq_elems = rspq_size; + if (num_sgpgs > 0) + bfa_cfg->drvcfg.num_sgpgs = num_sgpgs; + + /* + * populate the hal values back to the driver for sysfs use. + * otherwise, the default values will be shown as 0 in sysfs + */ + num_rports = bfa_cfg->fwcfg.num_rports; + num_ios = bfa_cfg->fwcfg.num_ioim_reqs; + num_tms = bfa_cfg->fwcfg.num_tskim_reqs; + num_fcxps = bfa_cfg->fwcfg.num_fcxp_reqs; + num_ufbufs = bfa_cfg->fwcfg.num_uf_bufs; + reqq_size = bfa_cfg->drvcfg.num_reqq_elems; + rspq_size = bfa_cfg->drvcfg.num_rspq_elems; + num_sgpgs = bfa_cfg->drvcfg.num_sgpgs; +} + +bfa_status_t +bfad_hal_mem_alloc(struct bfad_s *bfad) +{ + struct bfa_meminfo_s *hal_meminfo = &bfad->meminfo; + struct bfa_mem_elem_s *meminfo_elem; + bfa_status_t rc = BFA_STATUS_OK; + dma_addr_t phys_addr; + int retry_count = 0; + int reset_value = 1; + int min_num_sgpgs = 512; + void *kva; + int i; + + bfa_cfg_get_default(&bfad->ioc_cfg); + +retry: + bfad_update_hal_cfg(&bfad->ioc_cfg); + bfad->cfg_data.ioc_queue_depth = bfad->ioc_cfg.fwcfg.num_ioim_reqs; + bfa_cfg_get_meminfo(&bfad->ioc_cfg, hal_meminfo); + + for (i = 0; i < BFA_MEM_TYPE_MAX; i++) { + meminfo_elem = &hal_meminfo->meminfo[i]; + switch (meminfo_elem->mem_type) { + case BFA_MEM_TYPE_KVA: + kva = vmalloc(meminfo_elem->mem_len); + if (kva == NULL) { + bfad_hal_mem_release(bfad); + rc = BFA_STATUS_ENOMEM; + goto ext; + } + memset(kva, 0, meminfo_elem->mem_len); + meminfo_elem->kva = kva; + break; + case BFA_MEM_TYPE_DMA: + kva = dma_alloc_coherent(&bfad->pcidev->dev, + meminfo_elem->mem_len, + &phys_addr, GFP_KERNEL); + if (kva == NULL) { + bfad_hal_mem_release(bfad); + /* + * If we cannot allocate with default + * num_sgpages try with half the value. + */ + if (num_sgpgs > min_num_sgpgs) { + printk(KERN_INFO "bfad[%d]: memory" + " allocation failed with" + " num_sgpgs: %d\n", + bfad->inst_no, num_sgpgs); + nextLowerInt(&num_sgpgs); + printk(KERN_INFO "bfad[%d]: trying to" + " allocate memory with" + " num_sgpgs: %d\n", + bfad->inst_no, num_sgpgs); + retry_count++; + goto retry; + } else { + if (num_sgpgs_parm > 0) + num_sgpgs = num_sgpgs_parm; + else { + reset_value = + (1 << retry_count); + num_sgpgs *= reset_value; + } + rc = BFA_STATUS_ENOMEM; + goto ext; + } + } + + if (num_sgpgs_parm > 0) + num_sgpgs = num_sgpgs_parm; + else { + reset_value = (1 << retry_count); + num_sgpgs *= reset_value; + } + + memset(kva, 0, meminfo_elem->mem_len); + meminfo_elem->kva = kva; + meminfo_elem->dma = phys_addr; + break; + default: + break; + + } + } +ext: + return rc; +} + +/** + * Create a vport under a vf. 
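+ * Allocates the driver vport, creates the FCS vport, and allocates a + * SCSI host when the port carries the FCP initiator-mode role.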
+ */ +bfa_status_t +bfad_vport_create(struct bfad_s *bfad, u16 vf_id, + struct bfa_port_cfg_s *port_cfg) +{ + struct bfad_vport_s *vport; + int rc = BFA_STATUS_OK; + unsigned long flags; + struct completion fcomp; + + vport = kzalloc(sizeof(struct bfad_vport_s), GFP_KERNEL); + if (!vport) { + rc = BFA_STATUS_ENOMEM; + goto ext; + } + + vport->drv_port.bfad = bfad; + spin_lock_irqsave(&bfad->bfad_lock, flags); + rc = bfa_fcs_vport_create(&vport->fcs_vport, &bfad->bfa_fcs, vf_id, + port_cfg, vport); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + + if (rc != BFA_STATUS_OK) + goto ext_free_vport; + + if (port_cfg->roles & BFA_PORT_ROLE_FCP_IM) { + rc = bfad_im_scsi_host_alloc(bfad, vport->drv_port.im_port); + if (rc != BFA_STATUS_OK) + goto ext_free_fcs_vport; + } + + spin_lock_irqsave(&bfad->bfad_lock, flags); + bfa_fcs_vport_start(&vport->fcs_vport); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + + return BFA_STATUS_OK; + +ext_free_fcs_vport: + spin_lock_irqsave(&bfad->bfad_lock, flags); + vport->comp_del = &fcomp; + init_completion(vport->comp_del); + bfa_fcs_vport_delete(&vport->fcs_vport); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + wait_for_completion(vport->comp_del); +ext_free_vport: + kfree(vport); +ext: + return rc; +} + +/** + * Create a vf and its base vport implicitly. + */ +bfa_status_t +bfad_vf_create(struct bfad_s *bfad, u16 vf_id, + struct bfa_port_cfg_s *port_cfg) +{ + struct bfad_vf_s *vf; + int rc = BFA_STATUS_OK; + + vf = kzalloc(sizeof(struct bfad_vf_s), GFP_KERNEL); + if (!vf) { + rc = BFA_STATUS_FAILED; + goto ext; + } + + rc = bfa_fcs_vf_create(&vf->fcs_vf, &bfad->bfa_fcs, vf_id, port_cfg, + vf); + if (rc != BFA_STATUS_OK) + kfree(vf); +ext: + return rc; +} + +void +bfad_bfa_tmo(unsigned long data) +{ + struct bfad_s *bfad = (struct bfad_s *)data; + unsigned long flags; + struct list_head doneq; + + spin_lock_irqsave(&bfad->bfad_lock, flags); + + bfa_timer_tick(&bfad->bfa); + + bfa_comp_deq(&bfad->bfa, &doneq); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + + if (!list_empty(&doneq)) { + bfa_comp_process(&bfad->bfa, &doneq); + spin_lock_irqsave(&bfad->bfad_lock, flags); + bfa_comp_free(&bfad->bfa, &doneq); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + } + + mod_timer(&bfad->hal_tmo, jiffies + msecs_to_jiffies(BFA_TIMER_FREQ)); +} + +void +bfad_init_timer(struct bfad_s *bfad) +{ + init_timer(&bfad->hal_tmo); + bfad->hal_tmo.function = bfad_bfa_tmo; + bfad->hal_tmo.data = (unsigned long)bfad; + + mod_timer(&bfad->hal_tmo, jiffies + msecs_to_jiffies(BFA_TIMER_FREQ)); +} + +int +bfad_pci_init(struct pci_dev *pdev, struct bfad_s *bfad) +{ + unsigned long bar0_len; + int rc = -ENODEV; + + if (pci_enable_device(pdev)) { + BFA_PRINTF(BFA_ERR, "pci_enable_device fail %p\n", pdev); + goto out; + } + + if (pci_request_regions(pdev, BFAD_DRIVER_NAME)) + goto out_disable_device; + + pci_set_master(pdev); + + + if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) + if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) { + BFA_PRINTF(BFA_ERR, "pci_set_dma_mask fail %p\n", pdev); + goto out_release_region; + } + + bfad->pci_bar0_map = pci_resource_start(pdev, 0); + bar0_len = pci_resource_len(pdev, 0); + bfad->pci_bar0_kva = ioremap(bfad->pci_bar0_map, bar0_len); + + if (bfad->pci_bar0_kva == NULL) { + BFA_PRINTF(BFA_ERR, "Fail to map bar0\n"); + goto out_release_region; + } + + bfad->hal_pcidev.pci_slot = PCI_SLOT(pdev->devfn); + bfad->hal_pcidev.pci_func = PCI_FUNC(pdev->devfn); + bfad->hal_pcidev.pci_bar_kva = bfad->pci_bar0_kva; +
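/* hal_pcidev is handed to bfa_attach() during bfad_drv_init() */ +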
+	bfad->hal_pcidev.device_id = pdev->device;
+	bfad->pci_name = pci_name(pdev);
+
+	bfad->pci_attr.vendor_id = pdev->vendor;
+	bfad->pci_attr.device_id = pdev->device;
+	bfad->pci_attr.ssid = pdev->subsystem_device;
+	bfad->pci_attr.ssvid = pdev->subsystem_vendor;
+	bfad->pci_attr.pcifn = PCI_FUNC(pdev->devfn);
+
+	bfad->pcidev = pdev;
+	return 0;
+
+out_release_region:
+	pci_release_regions(pdev);
+out_disable_device:
+	pci_disable_device(pdev);
+out:
+	return rc;
+}
+
+void
+bfad_pci_uninit(struct pci_dev *pdev, struct bfad_s *bfad)
+{
+#if defined(__ia64__)
+	pci_iounmap(pdev, bfad->pci_bar0_kva);
+#else
+	iounmap(bfad->pci_bar0_kva);
+#endif
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+}
+
+void
+bfad_fcs_port_cfg(struct bfad_s *bfad)
+{
+	struct bfa_port_cfg_s port_cfg;
+	struct bfa_pport_attr_s attr;
+	char symname[BFA_SYMNAME_MAXLEN];
+
+	sprintf(symname, "%s-%d", BFAD_DRIVER_NAME, bfad->inst_no);
+	memcpy(port_cfg.sym_name.symname, symname, strlen(symname));
+	bfa_pport_get_attr(&bfad->bfa, &attr);
+	port_cfg.nwwn = attr.nwwn;
+	port_cfg.pwwn = attr.pwwn;
+
+	bfa_fcs_cfg_base_port(&bfad->bfa_fcs, &port_cfg);
+}
+
+bfa_status_t
+bfad_drv_init(struct bfad_s *bfad)
+{
+	bfa_status_t rc;
+	unsigned long flags;
+	struct bfa_fcs_driver_info_s driver_info;
+	int i;
+
+	bfad->cfg_data.rport_del_timeout = rport_del_timeout;
+	bfad->cfg_data.lun_queue_depth = bfa_lun_queue_depth;
+	bfad->cfg_data.io_max_sge = bfa_io_max_sge;
+	bfad->cfg_data.binding_method = FCP_PWWN_BINDING;
+
+	rc = bfad_hal_mem_alloc(bfad);
+	if (rc != BFA_STATUS_OK) {
+		printk(KERN_WARNING "bfad%d bfad_hal_mem_alloc failure\n",
+		       bfad->inst_no);
+		printk(KERN_WARNING
+		       "Not enough memory to attach all Brocade HBA ports,"
+		       " System may need more memory.\n");
+		goto out_hal_mem_alloc_failure;
+	}
+
+	bfa_init_log(&bfad->bfa, bfad->logmod);
+	bfa_init_trc(&bfad->bfa, bfad->trcmod);
+	bfa_init_aen(&bfad->bfa, bfad->aen);
+	INIT_LIST_HEAD(&bfad->file_q);
+	INIT_LIST_HEAD(&bfad->file_free_q);
+	for (i = 0; i < BFAD_AEN_MAX_APPS; i++) {
+		bfa_q_qe_init(&bfad->file_buf[i].qe);
+		list_add_tail(&bfad->file_buf[i].qe, &bfad->file_free_q);
+	}
+	bfa_init_plog(&bfad->bfa, &bfad->plog_buf);
+	bfa_plog_init(&bfad->plog_buf);
+	bfa_plog_str(&bfad->plog_buf, BFA_PL_MID_DRVR, BFA_PL_EID_DRIVER_START,
+		     0, "Driver Attach");
+
+	bfa_attach(&bfad->bfa, bfad, &bfad->ioc_cfg, &bfad->meminfo,
+		   &bfad->hal_pcidev);
+
+	init_completion(&bfad->comp);
+
+	/*
+	 * Enable interrupts and wait for bfa_init completion
+	 */
+	if (bfad_setup_intr(bfad)) {
+		printk(KERN_WARNING "bfad%d: bfad_setup_intr failed\n",
+		       bfad->inst_no);
+		goto out_setup_intr_failure;
+	}
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	bfa_init(&bfad->bfa);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	/*
+	 * Set up the interrupt handler for each vector
+	 */
+	if ((bfad->bfad_flags & BFAD_MSIX_ON)
+	    && bfad_install_msix_handler(bfad)) {
+		printk(KERN_WARNING "%s: install_msix failed, bfad%d\n",
+		       __FUNCTION__, bfad->inst_no);
+	}
+
+	bfad_init_timer(bfad);
+
+	wait_for_completion(&bfad->comp);
+
+	memset(&driver_info, 0, sizeof(driver_info));
+	strncpy(driver_info.version, BFAD_DRIVER_VERSION,
+		sizeof(driver_info.version) - 1);
+	if (host_name)
+		strncpy(driver_info.host_machine_name, host_name,
+			sizeof(driver_info.host_machine_name) - 1);
+	if (os_name)
+		strncpy(driver_info.host_os_name, os_name,
+			sizeof(driver_info.host_os_name) - 1);
+	if (os_patch)
+		strncpy(driver_info.host_os_patch, os_patch,
+			sizeof(driver_info.host_os_patch) - 1);
+
+	strncpy(driver_info.os_device_name, bfad->pci_name,
+		sizeof(driver_info.os_device_name) - 1);
+
+	/*
+	 * FCS INIT
+	 */
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	bfa_fcs_log_init(&bfad->bfa_fcs, bfad->logmod);
+	bfa_fcs_trc_init(&bfad->bfa_fcs, bfad->trcmod);
+	bfa_fcs_aen_init(&bfad->bfa_fcs, bfad->aen);
+	bfa_fcs_init(&bfad->bfa_fcs, &bfad->bfa, bfad, BFA_FALSE);
+	bfa_fcs_driver_info_init(&bfad->bfa_fcs, &driver_info);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	bfad->bfad_flags |= BFAD_DRV_INIT_DONE;
+	return BFA_STATUS_OK;
+
+out_setup_intr_failure:
+	bfa_detach(&bfad->bfa);
+	bfad_hal_mem_release(bfad);
+out_hal_mem_alloc_failure:
+	return BFA_STATUS_FAILED;
+}
+
+void
+bfad_drv_uninit(struct bfad_s *bfad)
+{
+	del_timer_sync(&bfad->hal_tmo);
+	bfa_isr_disable(&bfad->bfa);
+	bfa_detach(&bfad->bfa);
+	bfad_remove_intr(bfad);
+	bfa_assert(list_empty(&bfad->file_q));
+	bfad_hal_mem_release(bfad);
+}
+
+void
+bfad_drv_start(struct bfad_s *bfad)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	bfa_start(&bfad->bfa);
+	bfa_fcs_start(&bfad->bfa_fcs);
+	bfad->bfad_flags |= BFAD_HAL_START_DONE;
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	bfad_fc4_probe_post(bfad);
+}
+
+void
+bfad_drv_stop(struct bfad_s *bfad)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	init_completion(&bfad->comp);
+	bfad->pport.flags |= BFAD_PORT_DELETE;
+	bfa_fcs_exit(&bfad->bfa_fcs);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+	wait_for_completion(&bfad->comp);
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	init_completion(&bfad->comp);
+	bfa_stop(&bfad->bfa);
+	bfad->bfad_flags &= ~BFAD_HAL_START_DONE;
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+	wait_for_completion(&bfad->comp);
+}
+
+bfa_status_t
+bfad_cfg_pport(struct bfad_s *bfad, enum bfa_port_role role)
+{
+	int rc = BFA_STATUS_OK;
+
+	/*
+	 * Allocate scsi_host for the physical port
+	 */
+	if ((bfad_supported_fc4s & BFA_PORT_ROLE_FCP_IM)
+	    && (role & BFA_PORT_ROLE_FCP_IM)) {
+		if (bfad->pport.im_port == NULL) {
+			rc = BFA_STATUS_FAILED;
+			goto out;
+		}
+
+		rc = bfad_im_scsi_host_alloc(bfad, bfad->pport.im_port);
+		if (rc != BFA_STATUS_OK)
+			goto out;
+
+		bfad->pport.roles |= BFA_PORT_ROLE_FCP_IM;
+	}
+
+	bfad->bfad_flags |= BFAD_CFG_PPORT_DONE;
+
+out:
+	return rc;
+}
+
+void
+bfad_uncfg_pport(struct bfad_s *bfad)
+{
+	if ((bfad->pport.roles & BFA_PORT_ROLE_FCP_IPFC) && ipfc_enable) {
+		bfad_ipfc_port_delete(bfad, &bfad->pport);
+		bfad->pport.roles &= ~BFA_PORT_ROLE_FCP_IPFC;
+	}
+
+	if ((bfad_supported_fc4s & BFA_PORT_ROLE_FCP_IM)
+	    && (bfad->pport.roles & BFA_PORT_ROLE_FCP_IM)) {
+		bfad_im_scsi_host_free(bfad, bfad->pport.im_port);
+		bfad_im_port_clean(bfad->pport.im_port);
+		kfree(bfad->pport.im_port);
+		bfad->pport.roles &= ~BFA_PORT_ROLE_FCP_IM;
+	}
+
+	bfad->bfad_flags &= ~BFAD_CFG_PPORT_DONE;
+}
+
+void
+bfad_drv_log_level_set(struct bfad_s *bfad)
+{
+	if (log_level > BFA_LOG_INVALID && log_level <= BFA_LOG_LEVEL_MAX)
+		bfa_log_set_level_all(&bfad->log_data, log_level);
+}
+
+/*
+ *  PCI_entry PCI driver entries * {
+ */
+
+/**
+ * PCI probe entry.
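+ *
+ * Allocates the per-function bfad_s, brings up tracing/logging, loads the
+ * firmware image, maps the PCI resources and runs bfad_drv_init(); once the
+ * HAL is up it configures the base port and the FC4 personalities and
+ * starts the instance.  Failures unwind through the out_* labels below in
+ * reverse order of setup.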
+ */
+int
+bfad_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
+{
+	struct bfad_s *bfad;
+	int error = -ENODEV, retval;
+	char buf[16];
+
+	/*
+	 * For single port cards - only claim function 0
+	 */
+	if ((pdev->device == BFA_PCI_DEVICE_ID_FC_8G1P)
+	    && (PCI_FUNC(pdev->devfn) != 0))
+		return -ENODEV;
+
+	BFA_TRACE(BFA_INFO, "bfad_pci_probe entry");
+
+	bfad = kzalloc(sizeof(struct bfad_s), GFP_KERNEL);
+	if (!bfad) {
+		error = -ENOMEM;
+		goto out;
+	}
+
+	bfad->trcmod = kzalloc(sizeof(struct bfa_trc_mod_s), GFP_KERNEL);
+	if (!bfad->trcmod) {
+		printk(KERN_WARNING "Error alloc trace buffer!\n");
+		error = -ENOMEM;
+		goto out_alloc_trace_failure;
+	}
+
+	/*
+	 * LOG/TRACE INIT
+	 */
+	bfa_trc_init(bfad->trcmod);
+	bfa_trc(bfad, bfad_inst);
+
+	bfad->logmod = &bfad->log_data;
+	sprintf(buf, "%d", bfad_inst);
+	bfa_log_init(bfad->logmod, buf, bfa_os_printf);
+
+	bfad_drv_log_level_set(bfad);
+
+	bfad->aen = &bfad->aen_buf;
+
+	if (!(bfad_load_fwimg(pdev))) {
+		printk(KERN_WARNING "bfad_load_fwimg failure!\n");
+		kfree(bfad->trcmod);
+		goto out_alloc_trace_failure;
+	}
+
+	retval = bfad_pci_init(pdev, bfad);
+	if (retval) {
+		printk(KERN_WARNING "bfad_pci_init failure!\n");
+		error = retval;
+		goto out_pci_init_failure;
+	}
+
+	mutex_lock(&bfad_mutex);
+	bfad->inst_no = bfad_inst++;
+	list_add_tail(&bfad->list_entry, &bfad_list);
+	mutex_unlock(&bfad_mutex);
+
+	spin_lock_init(&bfad->bfad_lock);
+	pci_set_drvdata(pdev, bfad);
+
+	bfad->ref_count = 0;
+	bfad->pport.bfad = bfad;
+
+	retval = bfad_drv_init(bfad);
+	if (retval != BFA_STATUS_OK)
+		goto out_drv_init_failure;
+	if (!(bfad->bfad_flags & BFAD_HAL_INIT_DONE)) {
+		printk(KERN_WARNING "bfad%d: hal init failed\n", bfad->inst_no);
+		goto ok;
+	}
+
+	/*
+	 * PPORT FCS config
+	 */
+	bfad_fcs_port_cfg(bfad);
+
+	retval = bfad_cfg_pport(bfad, BFA_PORT_ROLE_FCP_IM);
+	if (retval != BFA_STATUS_OK)
+		goto out_cfg_pport_failure;
+
+	/*
+	 * BFAD level FC4 (IM/TM/IPFC) specific resource allocation
+	 */
+	retval = bfad_fc4_probe(bfad);
+	if (retval != BFA_STATUS_OK) {
+		printk(KERN_WARNING "bfad_fc4_probe failed\n");
+		goto out_fc4_probe_failure;
+	}
+
+	bfad_drv_start(bfad);
+
+	/*
+	 * If bfa_linkup_delay is left at its default of -1, try to retrieve
+	 * the value using bfad_os_get_linkup_delay(); else use the passed-in
+	 * module param value as the bfa_linkup_delay.
+	 */
+	if (bfa_linkup_delay < 0) {
+		bfa_linkup_delay = bfad_os_get_linkup_delay(bfad);
+		bfad_os_rport_online_wait(bfad);
+		bfa_linkup_delay = -1;
+	} else {
+		bfad_os_rport_online_wait(bfad);
+	}
+
+	bfa_log(bfad->logmod, BFA_LOG_LINUX_DEVICE_CLAIMED, bfad->pci_name);
+ok:
+	return 0;
+
+out_fc4_probe_failure:
+	bfad_fc4_probe_undo(bfad);
+	bfad_uncfg_pport(bfad);
+out_cfg_pport_failure:
+	bfad_drv_uninit(bfad);
+out_drv_init_failure:
+	mutex_lock(&bfad_mutex);
+	bfad_inst--;
+	list_del(&bfad->list_entry);
+	mutex_unlock(&bfad_mutex);
+	bfad_pci_uninit(pdev, bfad);
+out_pci_init_failure:
+	kfree(bfad->trcmod);
+out_alloc_trace_failure:
+	kfree(bfad);
+out:
+	return error;
+}
+
+/**
+ * PCI remove entry.
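+ *
+ * Tears down in roughly the reverse order of probe: stop the FCS and HAL
+ * (or just bfa_stop() when FCS was never initialized), remove interrupts
+ * and the HAL timer, undo the FC4 probe and physical port config, then
+ * detach the HAL and release the PCI resources and the bfad_s itself.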
+ */ +void +bfad_pci_remove(struct pci_dev *pdev) +{ + struct bfad_s *bfad = pci_get_drvdata(pdev); + unsigned long flags; + + bfa_trc(bfad, bfad->inst_no); + + if ((bfad->bfad_flags & BFAD_DRV_INIT_DONE) + && !(bfad->bfad_flags & BFAD_HAL_INIT_DONE)) { + + spin_lock_irqsave(&bfad->bfad_lock, flags); + init_completion(&bfad->comp); + bfa_stop(&bfad->bfa); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + wait_for_completion(&bfad->comp); + + bfad_remove_intr(bfad); + del_timer_sync(&bfad->hal_tmo); + goto hal_detach; + } else if (!(bfad->bfad_flags & BFAD_DRV_INIT_DONE)) { + goto remove_sysfs; + } + + if (bfad->bfad_flags & BFAD_HAL_START_DONE) + bfad_drv_stop(bfad); + + bfad_remove_intr(bfad); + + del_timer_sync(&bfad->hal_tmo); + bfad_fc4_probe_undo(bfad); + + if (bfad->bfad_flags & BFAD_CFG_PPORT_DONE) + bfad_uncfg_pport(bfad); + +hal_detach: + spin_lock_irqsave(&bfad->bfad_lock, flags); + bfa_detach(&bfad->bfa); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + bfad_hal_mem_release(bfad); +remove_sysfs: + + mutex_lock(&bfad_mutex); + bfad_inst--; + list_del(&bfad->list_entry); + mutex_unlock(&bfad_mutex); + bfad_pci_uninit(pdev, bfad); + + kfree(bfad->trcmod); + kfree(bfad); +} + + +static struct pci_device_id bfad_id_table[] = { + { + .vendor = BFA_PCI_VENDOR_ID_BROCADE, + .device = BFA_PCI_DEVICE_ID_FC_8G2P, + .subvendor = PCI_ANY_ID, + .subdevice = PCI_ANY_ID, + }, + { + .vendor = BFA_PCI_VENDOR_ID_BROCADE, + .device = BFA_PCI_DEVICE_ID_FC_8G1P, + .subvendor = PCI_ANY_ID, + .subdevice = PCI_ANY_ID, + }, + { + .vendor = BFA_PCI_VENDOR_ID_BROCADE, + .device = BFA_PCI_DEVICE_ID_CT, + .subvendor = PCI_ANY_ID, + .subdevice = PCI_ANY_ID, + .class = (PCI_CLASS_SERIAL_FIBER << 8), + .class_mask = ~0, + }, + + {0, 0}, +}; + +MODULE_DEVICE_TABLE(pci, bfad_id_table); + +static struct pci_driver bfad_pci_driver = { + .name = BFAD_DRIVER_NAME, + .id_table = bfad_id_table, + .probe = bfad_pci_probe, + .remove = __devexit_p(bfad_pci_remove), +}; + +/** + * Linux driver module functions + */ +bfa_status_t +bfad_fc4_module_init(void) +{ + int rc; + + rc = bfad_im_module_init(); + if (rc != BFA_STATUS_OK) + goto ext; + + bfad_tm_module_init(); + if (ipfc_enable) + bfad_ipfc_module_init(); +ext: + return rc; +} + +void +bfad_fc4_module_exit(void) +{ + if (ipfc_enable) + bfad_ipfc_module_exit(); + bfad_tm_module_exit(); + bfad_im_module_exit(); +} + +/** + * Driver module init. + */ +static int __init +bfad_init(void) +{ + int error = 0; + + printk(KERN_INFO "Brocade BFA FC/FCOE SCSI driver - version: %s\n", + BFAD_DRIVER_VERSION); + + if (num_sgpgs > 0) + num_sgpgs_parm = num_sgpgs; + + error = bfad_fc4_module_init(); + if (error) { + error = -ENOMEM; + printk(KERN_WARNING "bfad_fc4_module_init failure\n"); + goto ext; + } + + if (!strcmp(FCPI_NAME, " fcpim")) + bfad_supported_fc4s |= BFA_PORT_ROLE_FCP_IM; + if (!strcmp(FCPT_NAME, " fcptm")) + bfad_supported_fc4s |= BFA_PORT_ROLE_FCP_TM; + if (!strcmp(IPFC_NAME, " ipfc")) + bfad_supported_fc4s |= BFA_PORT_ROLE_FCP_IPFC; + + bfa_ioc_auto_recover(ioc_auto_recover); + bfa_fcs_rport_set_del_timeout(rport_del_timeout); + error = pci_register_driver(&bfad_pci_driver); + + if (error) { + printk(KERN_WARNING "bfad pci_register_driver failure\n"); + goto ext; + } + + return 0; + +ext: + bfad_fc4_module_exit(); + return error; +} + +/** + * Driver module exit. 
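+ *
+ * Unregisters the PCI driver before the FC4 module teardown, then frees
+ * the cached firmware image.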
+ */ +static void __exit +bfad_exit(void) +{ + pci_unregister_driver(&bfad_pci_driver); + bfad_fc4_module_exit(); + bfad_free_fwimg(); +} + +#define BFAD_PROTO_NAME FCPI_NAME FCPT_NAME IPFC_NAME + +module_init(bfad_init); +module_exit(bfad_exit); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Brocade Fibre Channel HBA Driver" BFAD_PROTO_NAME); +MODULE_AUTHOR("Brocade Communications Systems, Inc."); +MODULE_VERSION(BFAD_DRIVER_VERSION); + + diff --git a/drivers/scsi/bfa/bfad_attr.c b/drivers/scsi/bfa/bfad_attr.c new file mode 100644 index 000000000000..9129ae3040ff --- /dev/null +++ b/drivers/scsi/bfa/bfad_attr.c @@ -0,0 +1,649 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_attr.c Linux driver configuration interface module. + */ + +#include "bfad_drv.h" +#include "bfad_im.h" +#include "bfad_trcmod.h" +#include "bfad_attr.h" + +/** + * FC_transport_template FC transport template + */ + +/** + * FC transport template entry, get SCSI target port ID. + */ +void +bfad_im_get_starget_port_id(struct scsi_target *starget) +{ + struct Scsi_Host *shost; + struct bfad_im_port_s *im_port; + struct bfad_s *bfad; + struct bfad_itnim_s *itnim = NULL; + u32 fc_id = -1; + unsigned long flags; + + shost = bfad_os_starget_to_shost(starget); + im_port = (struct bfad_im_port_s *) shost->hostdata[0]; + bfad = im_port->bfad; + spin_lock_irqsave(&bfad->bfad_lock, flags); + + itnim = bfad_os_get_itnim(im_port, starget->id); + if (itnim) + fc_id = bfa_fcs_itnim_get_fcid(&itnim->fcs_itnim); + + fc_starget_port_id(starget) = fc_id; + spin_unlock_irqrestore(&bfad->bfad_lock, flags); +} + +/** + * FC transport template entry, get SCSI target nwwn. + */ +void +bfad_im_get_starget_node_name(struct scsi_target *starget) +{ + struct Scsi_Host *shost; + struct bfad_im_port_s *im_port; + struct bfad_s *bfad; + struct bfad_itnim_s *itnim = NULL; + u64 node_name = 0; + unsigned long flags; + + shost = bfad_os_starget_to_shost(starget); + im_port = (struct bfad_im_port_s *) shost->hostdata[0]; + bfad = im_port->bfad; + spin_lock_irqsave(&bfad->bfad_lock, flags); + + itnim = bfad_os_get_itnim(im_port, starget->id); + if (itnim) + node_name = bfa_fcs_itnim_get_nwwn(&itnim->fcs_itnim); + + fc_starget_node_name(starget) = bfa_os_htonll(node_name); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); +} + +/** + * FC transport template entry, get SCSI target pwwn. 
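+ *
+ * Like the node name lookup above: resolves the itnim for starget->id
+ * under bfad_lock and reports its pwwn in wire (big-endian) order, or 0
+ * when the target is not mapped.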
+ */ +void +bfad_im_get_starget_port_name(struct scsi_target *starget) +{ + struct Scsi_Host *shost; + struct bfad_im_port_s *im_port; + struct bfad_s *bfad; + struct bfad_itnim_s *itnim = NULL; + u64 port_name = 0; + unsigned long flags; + + shost = bfad_os_starget_to_shost(starget); + im_port = (struct bfad_im_port_s *) shost->hostdata[0]; + bfad = im_port->bfad; + spin_lock_irqsave(&bfad->bfad_lock, flags); + + itnim = bfad_os_get_itnim(im_port, starget->id); + if (itnim) + port_name = bfa_fcs_itnim_get_pwwn(&itnim->fcs_itnim); + + fc_starget_port_name(starget) = bfa_os_htonll(port_name); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); +} + +/** + * FC transport template entry, get SCSI host port ID. + */ +void +bfad_im_get_host_port_id(struct Scsi_Host *shost) +{ + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_port_s *port = im_port->port; + + fc_host_port_id(shost) = + bfa_os_hton3b(bfa_fcs_port_get_fcid(port->fcs_port)); +} + + + + + +struct Scsi_Host * +bfad_os_starget_to_shost(struct scsi_target *starget) +{ + return dev_to_shost(starget->dev.parent); +} + +/** + * FC transport template entry, get SCSI host port type. + */ +static void +bfad_im_get_host_port_type(struct Scsi_Host *shost) +{ + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_pport_attr_s attr; + + bfa_pport_get_attr(&bfad->bfa, &attr); + + switch (attr.port_type) { + case BFA_PPORT_TYPE_NPORT: + fc_host_port_type(shost) = FC_PORTTYPE_NPORT; + break; + case BFA_PPORT_TYPE_NLPORT: + fc_host_port_type(shost) = FC_PORTTYPE_NLPORT; + break; + case BFA_PPORT_TYPE_P2P: + fc_host_port_type(shost) = FC_PORTTYPE_PTP; + break; + case BFA_PPORT_TYPE_LPORT: + fc_host_port_type(shost) = FC_PORTTYPE_LPORT; + break; + default: + fc_host_port_type(shost) = FC_PORTTYPE_UNKNOWN; + break; + } +} + +/** + * FC transport template entry, get SCSI host port state. + */ +static void +bfad_im_get_host_port_state(struct Scsi_Host *shost) +{ + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_pport_attr_s attr; + + bfa_pport_get_attr(&bfad->bfa, &attr); + + switch (attr.port_state) { + case BFA_PPORT_ST_LINKDOWN: + fc_host_port_state(shost) = FC_PORTSTATE_LINKDOWN; + break; + case BFA_PPORT_ST_LINKUP: + fc_host_port_state(shost) = FC_PORTSTATE_ONLINE; + break; + case BFA_PPORT_ST_UNINIT: + case BFA_PPORT_ST_ENABLING_QWAIT: + case BFA_PPORT_ST_ENABLING: + case BFA_PPORT_ST_DISABLING_QWAIT: + case BFA_PPORT_ST_DISABLING: + case BFA_PPORT_ST_DISABLED: + case BFA_PPORT_ST_STOPPED: + case BFA_PPORT_ST_IOCDOWN: + default: + fc_host_port_state(shost) = FC_PORTSTATE_UNKNOWN; + break; + } +} + +/** + * FC transport template entry, get SCSI host active fc4s. + */ +static void +bfad_im_get_host_active_fc4s(struct Scsi_Host *shost) +{ + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_port_s *port = im_port->port; + + memset(fc_host_active_fc4s(shost), 0, + sizeof(fc_host_active_fc4s(shost))); + + if (port->supported_fc4s & + (BFA_PORT_ROLE_FCP_IM | BFA_PORT_ROLE_FCP_TM)) + fc_host_active_fc4s(shost)[2] = 1; + + if (port->supported_fc4s & BFA_PORT_ROLE_FCP_IPFC) + fc_host_active_fc4s(shost)[3] = 0x20; + + fc_host_active_fc4s(shost)[7] = 1; +} + +/** + * FC transport template entry, get SCSI host link speed. 
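+ *
+ * Maps the BFA_PPORT_SPEED_* value reported by the HAL onto the FC
+ * transport FC_PORTSPEED_* constants; anything unrecognized is reported
+ * as FC_PORTSPEED_UNKNOWN.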
+ */
+static void
+bfad_im_get_host_speed(struct Scsi_Host *shost)
+{
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfa_pport_attr_s attr;
+
+	bfa_pport_get_attr(&bfad->bfa, &attr);
+	switch (attr.speed) {
+	case BFA_PPORT_SPEED_8GBPS:
+		fc_host_speed(shost) = FC_PORTSPEED_8GBIT;
+		break;
+	case BFA_PPORT_SPEED_4GBPS:
+		fc_host_speed(shost) = FC_PORTSPEED_4GBIT;
+		break;
+	case BFA_PPORT_SPEED_2GBPS:
+		fc_host_speed(shost) = FC_PORTSPEED_2GBIT;
+		break;
+	case BFA_PPORT_SPEED_1GBPS:
+		fc_host_speed(shost) = FC_PORTSPEED_1GBIT;
+		break;
+	default:
+		fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN;
+		break;
+	}
+}
+
+/**
+ * FC transport template entry, get SCSI host fabric name.
+ */
+static void
+bfad_im_get_host_fabric_name(struct Scsi_Host *shost)
+{
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfad_port_s *port = im_port->port;
+	wwn_t fabric_nwwn = 0;
+
+	fabric_nwwn = bfa_fcs_port_get_fabric_name(port->fcs_port);
+
+	fc_host_fabric_name(shost) = bfa_os_htonll(fabric_nwwn);
+}
+
+/**
+ * FC transport template entry, get BFAD statistics.
+ */
+static struct fc_host_statistics *
+bfad_im_get_stats(struct Scsi_Host *shost)
+{
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfad_hal_comp fcomp;
+	struct fc_host_statistics *hstats;
+	bfa_status_t rc;
+	unsigned long flags;
+
+	hstats = &bfad->link_stats;
+	init_completion(&fcomp.comp);
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	memset(hstats, 0, sizeof(struct fc_host_statistics));
+	rc = bfa_pport_get_stats(&bfad->bfa,
+				 (union bfa_pport_stats_u *) hstats,
+				 bfad_hcb_comp, &fcomp);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+	if (rc != BFA_STATUS_OK)
+		return NULL;
+
+	wait_for_completion(&fcomp.comp);
+
+	return hstats;
+}
+
+/**
+ * FC transport template entry, reset BFAD statistics.
+ */
+static void
+bfad_im_reset_stats(struct Scsi_Host *shost)
+{
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfad_hal_comp fcomp;
+	unsigned long flags;
+	bfa_status_t rc;
+
+	init_completion(&fcomp.comp);
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	rc = bfa_pport_clear_stats(&bfad->bfa, bfad_hcb_comp, &fcomp);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	if (rc != BFA_STATUS_OK)
+		return;
+
+	wait_for_completion(&fcomp.comp);
+
+	return;
+}
+
+/**
+ * FC transport template entry, get rport loss timeout.
+ */
+static void
+bfad_im_get_rport_loss_tmo(struct fc_rport *rport)
+{
+	struct bfad_itnim_data_s *itnim_data = rport->dd_data;
+	struct bfad_itnim_s *itnim = itnim_data->itnim;
+	struct bfad_s *bfad = itnim->im->bfad;
+	unsigned long flags;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	rport->dev_loss_tmo = bfa_fcpim_path_tov_get(&bfad->bfa);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+}
+
+/**
+ * FC transport template entry, set rport loss timeout.
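+ *
+ * Writes the new timeout through bfa_fcpim_path_tov_set() and then reads
+ * bfa_fcpim_path_tov_get() back into dev_loss_tmo, so the sysfs value
+ * reflects what the HAL actually applied.  A timeout of 0 is ignored.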
+ */ +static void +bfad_im_set_rport_loss_tmo(struct fc_rport *rport, u32 timeout) +{ + struct bfad_itnim_data_s *itnim_data = rport->dd_data; + struct bfad_itnim_s *itnim = itnim_data->itnim; + struct bfad_s *bfad = itnim->im->bfad; + unsigned long flags; + + if (timeout > 0) { + spin_lock_irqsave(&bfad->bfad_lock, flags); + bfa_fcpim_path_tov_set(&bfad->bfa, timeout); + rport->dev_loss_tmo = bfa_fcpim_path_tov_get(&bfad->bfa); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + } + +} + +struct fc_function_template bfad_im_fc_function_template = { + + /* Target dynamic attributes */ + .get_starget_port_id = bfad_im_get_starget_port_id, + .show_starget_port_id = 1, + .get_starget_node_name = bfad_im_get_starget_node_name, + .show_starget_node_name = 1, + .get_starget_port_name = bfad_im_get_starget_port_name, + .show_starget_port_name = 1, + + /* Host dynamic attribute */ + .get_host_port_id = bfad_im_get_host_port_id, + .show_host_port_id = 1, + + /* Host fixed attributes */ + .show_host_node_name = 1, + .show_host_port_name = 1, + .show_host_supported_classes = 1, + .show_host_supported_fc4s = 1, + .show_host_supported_speeds = 1, + .show_host_maxframe_size = 1, + + /* More host dynamic attributes */ + .show_host_port_type = 1, + .get_host_port_type = bfad_im_get_host_port_type, + .show_host_port_state = 1, + .get_host_port_state = bfad_im_get_host_port_state, + .show_host_active_fc4s = 1, + .get_host_active_fc4s = bfad_im_get_host_active_fc4s, + .show_host_speed = 1, + .get_host_speed = bfad_im_get_host_speed, + .show_host_fabric_name = 1, + .get_host_fabric_name = bfad_im_get_host_fabric_name, + + .show_host_symbolic_name = 1, + + /* Statistics */ + .get_fc_host_stats = bfad_im_get_stats, + .reset_fc_host_stats = bfad_im_reset_stats, + + /* Allocation length for host specific data */ + .dd_fcrport_size = sizeof(struct bfad_itnim_data_s *), + + /* Remote port fixed attributes */ + .show_rport_maxframe_size = 1, + .show_rport_supported_classes = 1, + .show_rport_dev_loss_tmo = 1, + .get_rport_dev_loss_tmo = bfad_im_get_rport_loss_tmo, + .set_rport_dev_loss_tmo = bfad_im_set_rport_loss_tmo, +}; + +/** + * Scsi_Host_attrs SCSI host attributes + */ +static ssize_t +bfad_im_serial_num_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + return snprintf(buf, PAGE_SIZE, "%s\n", + ioc_attr.adapter_attr.serial_num); +} + +static ssize_t +bfad_im_model_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + return snprintf(buf, PAGE_SIZE, "%s\n", ioc_attr.adapter_attr.model); +} + +static ssize_t +bfad_im_model_desc_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + return snprintf(buf, 
PAGE_SIZE, "%s\n", + ioc_attr.adapter_attr.model_descr); +} + +static ssize_t +bfad_im_node_name_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_port_s *port = im_port->port; + u64 nwwn; + + nwwn = bfa_fcs_port_get_nwwn(port->fcs_port); + return snprintf(buf, PAGE_SIZE, "0x%llx\n", bfa_os_htonll(nwwn)); +} + +static ssize_t +bfad_im_symbolic_name_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + + return snprintf(buf, PAGE_SIZE, "Brocade %s FV%s DV%s\n", + ioc_attr.adapter_attr.model, + ioc_attr.adapter_attr.fw_ver, BFAD_DRIVER_VERSION); +} + +static ssize_t +bfad_im_hw_version_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + return snprintf(buf, PAGE_SIZE, "%s\n", ioc_attr.adapter_attr.hw_ver); +} + +static ssize_t +bfad_im_drv_version_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_VERSION); +} + +static ssize_t +bfad_im_optionrom_version_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + return snprintf(buf, PAGE_SIZE, "%s\n", + ioc_attr.adapter_attr.optrom_ver); +} + +static ssize_t +bfad_im_fw_version_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + return snprintf(buf, PAGE_SIZE, "%s\n", ioc_attr.adapter_attr.fw_ver); +} + +static ssize_t +bfad_im_num_of_ports_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct bfad_s *bfad = im_port->bfad; + struct bfa_ioc_attr_s ioc_attr; + + memset(&ioc_attr, 0, sizeof(ioc_attr)); + bfa_get_attr(&bfad->bfa, &ioc_attr); + return snprintf(buf, PAGE_SIZE, "%d\n", ioc_attr.adapter_attr.nports); +} + +static ssize_t +bfad_im_drv_name_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_NAME); +} + +static ssize_t +bfad_im_num_of_discovered_ports_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(dev); + struct bfad_im_port_s *im_port = + (struct bfad_im_port_s *) shost->hostdata[0]; + struct 
bfad_port_s *port = im_port->port; + struct bfad_s *bfad = im_port->bfad; + int nrports = 2048; + wwn_t *rports = NULL; + unsigned long flags; + + rports = kzalloc(sizeof(wwn_t) * nrports , GFP_ATOMIC); + if (rports == NULL) + return -ENOMEM; + + spin_lock_irqsave(&bfad->bfad_lock, flags); + bfa_fcs_port_get_rports(port->fcs_port, rports, &nrports); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); + kfree(rports); + + return snprintf(buf, PAGE_SIZE, "%d\n", nrports); +} + +static DEVICE_ATTR(serial_number, S_IRUGO, + bfad_im_serial_num_show, NULL); +static DEVICE_ATTR(model, S_IRUGO, bfad_im_model_show, NULL); +static DEVICE_ATTR(model_description, S_IRUGO, + bfad_im_model_desc_show, NULL); +static DEVICE_ATTR(node_name, S_IRUGO, bfad_im_node_name_show, NULL); +static DEVICE_ATTR(symbolic_name, S_IRUGO, + bfad_im_symbolic_name_show, NULL); +static DEVICE_ATTR(hardware_version, S_IRUGO, + bfad_im_hw_version_show, NULL); +static DEVICE_ATTR(driver_version, S_IRUGO, + bfad_im_drv_version_show, NULL); +static DEVICE_ATTR(option_rom_version, S_IRUGO, + bfad_im_optionrom_version_show, NULL); +static DEVICE_ATTR(firmware_version, S_IRUGO, + bfad_im_fw_version_show, NULL); +static DEVICE_ATTR(number_of_ports, S_IRUGO, + bfad_im_num_of_ports_show, NULL); +static DEVICE_ATTR(driver_name, S_IRUGO, bfad_im_drv_name_show, NULL); +static DEVICE_ATTR(number_of_discovered_ports, S_IRUGO, + bfad_im_num_of_discovered_ports_show, NULL); + +struct device_attribute *bfad_im_host_attrs[] = { + &dev_attr_serial_number, + &dev_attr_model, + &dev_attr_model_description, + &dev_attr_node_name, + &dev_attr_symbolic_name, + &dev_attr_hardware_version, + &dev_attr_driver_version, + &dev_attr_option_rom_version, + &dev_attr_firmware_version, + &dev_attr_number_of_ports, + &dev_attr_driver_name, + &dev_attr_number_of_discovered_ports, + NULL, +}; + +struct device_attribute *bfad_im_vport_attrs[] = { + &dev_attr_serial_number, + &dev_attr_model, + &dev_attr_model_description, + &dev_attr_node_name, + &dev_attr_symbolic_name, + &dev_attr_hardware_version, + &dev_attr_driver_version, + &dev_attr_option_rom_version, + &dev_attr_firmware_version, + &dev_attr_number_of_ports, + &dev_attr_driver_name, + &dev_attr_number_of_discovered_ports, + NULL, +}; + + diff --git a/drivers/scsi/bfa/bfad_attr.h b/drivers/scsi/bfa/bfad_attr.h new file mode 100644 index 000000000000..4d3312da6a81 --- /dev/null +++ b/drivers/scsi/bfa/bfad_attr.h @@ -0,0 +1,65 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFAD_ATTR_H__ +#define __BFAD_ATTR_H__ +/** + * bfad_attr.h VMware driver configuration interface module. + */ + +/** + * FC_transport_template FC transport template + */ + +struct Scsi_Host* +bfad_os_dev_to_shost(struct scsi_target *starget); + +/** + * FC transport template entry, get SCSI target port ID. + */ +void +bfad_im_get_starget_port_id(struct scsi_target *starget); + +/** + * FC transport template entry, get SCSI target nwwn. 
+ */ +void +bfad_im_get_starget_node_name(struct scsi_target *starget); + +/** + * FC transport template entry, get SCSI target pwwn. + */ +void +bfad_im_get_starget_port_name(struct scsi_target *starget); + +/** + * FC transport template entry, get SCSI host port ID. + */ +void +bfad_im_get_host_port_id(struct Scsi_Host *shost); + +/** + * FC transport template entry, issue a LIP. + */ +int +bfad_im_issue_fc_host_lip(struct Scsi_Host *shost); + +struct Scsi_Host* +bfad_os_starget_to_shost(struct scsi_target *starget); + + +#endif /* __BFAD_ATTR_H__ */ diff --git a/drivers/scsi/bfa/bfad_drv.h b/drivers/scsi/bfa/bfad_drv.h new file mode 100644 index 000000000000..172c81e25c1c --- /dev/null +++ b/drivers/scsi/bfa/bfad_drv.h @@ -0,0 +1,295 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * Contains base driver definitions. + */ + +/** + * bfa_drv.h Linux driver data structures. + */ + +#ifndef __BFAD_DRV_H__ +#define __BFAD_DRV_H__ + +#include "bfa_os_inc.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include "aen/bfa_aen.h" +#include + +#define BFAD_DRIVER_NAME "bfa" +#ifdef BFA_DRIVER_VERSION +#define BFAD_DRIVER_VERSION BFA_DRIVER_VERSION +#else +#define BFAD_DRIVER_VERSION "2.0.0.0" +#endif + + +#define BFAD_IRQ_FLAGS IRQF_SHARED + +/* + * BFAD flags + */ +#define BFAD_MSIX_ON 0x00000001 +#define BFAD_HAL_INIT_DONE 0x00000002 +#define BFAD_DRV_INIT_DONE 0x00000004 +#define BFAD_CFG_PPORT_DONE 0x00000008 +#define BFAD_HAL_START_DONE 0x00000010 +#define BFAD_PORT_ONLINE 0x00000020 +#define BFAD_RPORT_ONLINE 0x00000040 + +#define BFAD_PORT_DELETE 0x00000001 + +/* + * BFAD related definition + */ +#define SCSI_SCAN_DELAY HZ +#define BFAD_STOP_TIMEOUT 30 +#define BFAD_SUSPEND_TIMEOUT BFAD_STOP_TIMEOUT + +/* + * BFAD configuration parameter default values + */ +#define BFAD_LUN_QUEUE_DEPTH 32 +#define BFAD_IO_MAX_SGE SG_ALL + +#define bfad_isr_t irq_handler_t + +#define MAX_MSIX_ENTRY 22 + +struct bfad_msix_s { + struct bfad_s *bfad; + struct msix_entry msix; +}; + +enum bfad_port_pvb_type { + BFAD_PORT_PHYS_BASE = 0, + BFAD_PORT_PHYS_VPORT = 1, + BFAD_PORT_VF_BASE = 2, + BFAD_PORT_VF_VPORT = 3, +}; + +/* + * PORT data structure + */ +struct bfad_port_s { + struct list_head list_entry; + struct bfad_s *bfad; + struct bfa_fcs_port_s *fcs_port; + u32 roles; + s32 flags; + u32 supported_fc4s; + u8 ipfc_flags; + enum bfad_port_pvb_type pvb_type; + struct bfad_im_port_s *im_port; /* IM specific data */ + struct bfad_tm_port_s *tm_port; /* TM specific data */ + struct bfad_ipfc_port_s *ipfc_port; /* IPFC specific data */ +}; + +/* + * VPORT data structure + */ +struct bfad_vport_s { + struct bfad_port_s drv_port; + struct bfa_fcs_vport_s fcs_vport; + struct completion *comp_del; +}; + +/* + * VF data structure + */ +struct bfad_vf_s { + bfa_fcs_vf_t fcs_vf; + struct bfad_port_s base_port; /* base port for vf */ + struct bfad_s *bfad; +}; + +struct 
bfad_cfg_param_s { + u32 rport_del_timeout; + u32 ioc_queue_depth; + u32 lun_queue_depth; + u32 io_max_sge; + u32 binding_method; +}; + +#define BFAD_AEN_MAX_APPS 8 +struct bfad_aen_file_s { + struct list_head qe; + struct bfad_s *bfad; + s32 ri; + s32 app_id; +}; + +/* + * BFAD (PCI function) data structure + */ +struct bfad_s { + struct list_head list_entry; + struct bfa_s bfa; + struct bfa_fcs_s bfa_fcs; + struct pci_dev *pcidev; + const char *pci_name; + struct bfa_pcidev_s hal_pcidev; + struct bfa_ioc_pci_attr_s pci_attr; + unsigned long pci_bar0_map; + void __iomem *pci_bar0_kva; + struct completion comp; + struct completion suspend; + struct completion disable_comp; + bfa_boolean_t disable_active; + struct bfad_port_s pport; /* physical port of the BFAD */ + struct bfa_meminfo_s meminfo; + struct bfa_iocfc_cfg_s ioc_cfg; + u32 inst_no; /* BFAD instance number */ + u32 bfad_flags; + spinlock_t bfad_lock; + struct bfad_cfg_param_s cfg_data; + struct bfad_msix_s msix_tab[MAX_MSIX_ENTRY]; + int nvec; + char adapter_name[BFA_ADAPTER_SYM_NAME_LEN]; + char port_name[BFA_ADAPTER_SYM_NAME_LEN]; + struct timer_list hal_tmo; + unsigned long hs_start; + struct bfad_im_s *im; /* IM specific data */ + struct bfad_tm_s *tm; /* TM specific data */ + struct bfad_ipfc_s *ipfc; /* IPFC specific data */ + struct bfa_log_mod_s log_data; + struct bfa_trc_mod_s *trcmod; + struct bfa_log_mod_s *logmod; + struct bfa_aen_s *aen; + struct bfa_aen_s aen_buf; + struct bfad_aen_file_s file_buf[BFAD_AEN_MAX_APPS]; + struct list_head file_q; + struct list_head file_free_q; + struct bfa_plog_s plog_buf; + int ref_count; + bfa_boolean_t ipfc_enabled; + struct fc_host_statistics link_stats; + + struct kobject *bfa_kobj; + struct kobject *ioc_kobj; + struct kobject *pport_kobj; + struct kobject *lport_kobj; +}; + +/* + * RPORT data structure + */ +struct bfad_rport_s { + struct bfa_fcs_rport_s fcs_rport; +}; + +struct bfad_buf_info { + void *virt; + dma_addr_t phys; + u32 size; +}; + +struct bfad_fcxp { + struct bfad_port_s *port; + struct bfa_rport_s *bfa_rport; + bfa_status_t req_status; + u16 tag; + u16 rsp_len; + u16 rsp_maxlen; + u8 use_ireqbuf; + u8 use_irspbuf; + u32 num_req_sgles; + u32 num_rsp_sgles; + struct fchs_s fchs; + void *reqbuf_info; + void *rspbuf_info; + struct bfa_sge_s *req_sge; + struct bfa_sge_s *rsp_sge; + fcxp_send_cb_t send_cbfn; + void *send_cbarg; + void *bfa_fcxp; + struct completion comp; +}; + +struct bfad_hal_comp { + bfa_status_t status; + struct completion comp; +}; + +/* + * Macro to obtain the immediate lower power + * of two for the integer. 
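+ * Works in place on *x: decrement, propagate the highest set bit into
+ * all lower bits (giving 2^n - 1), increment back to a power of two,
+ * then halve.  For example *x == 44 yields 32, and *x == 32 yields 16.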
+ */ +#define nextLowerInt(x) \ +do { \ + int j; \ + (*x)--; \ + for (j = 1; j < (sizeof(int) * 8); j <<= 1) \ + (*x) = (*x) | (*x) >> j; \ + (*x)++; \ + (*x) = (*x) >> 1; \ +} while (0) + + +bfa_status_t bfad_vport_create(struct bfad_s *bfad, u16 vf_id, + struct bfa_port_cfg_s *port_cfg); +bfa_status_t bfad_vf_create(struct bfad_s *bfad, u16 vf_id, + struct bfa_port_cfg_s *port_cfg); +bfa_status_t bfad_cfg_pport(struct bfad_s *bfad, enum bfa_port_role role); +bfa_status_t bfad_drv_init(struct bfad_s *bfad); +void bfad_drv_start(struct bfad_s *bfad); +void bfad_uncfg_pport(struct bfad_s *bfad); +void bfad_drv_stop(struct bfad_s *bfad); +void bfad_remove_intr(struct bfad_s *bfad); +void bfad_hal_mem_release(struct bfad_s *bfad); +void bfad_hcb_comp(void *arg, bfa_status_t status); + +int bfad_setup_intr(struct bfad_s *bfad); +void bfad_remove_intr(struct bfad_s *bfad); + +void bfad_update_hal_cfg(struct bfa_iocfc_cfg_s *bfa_cfg); +bfa_status_t bfad_hal_mem_alloc(struct bfad_s *bfad); +void bfad_bfa_tmo(unsigned long data); +void bfad_init_timer(struct bfad_s *bfad); +int bfad_pci_init(struct pci_dev *pdev, struct bfad_s *bfad); +void bfad_pci_uninit(struct pci_dev *pdev, struct bfad_s *bfad); +void bfad_fcs_port_cfg(struct bfad_s *bfad); +void bfad_drv_uninit(struct bfad_s *bfad); +void bfad_drv_log_level_set(struct bfad_s *bfad); +bfa_status_t bfad_fc4_module_init(void); +void bfad_fc4_module_exit(void); + +void bfad_pci_remove(struct pci_dev *pdev); +int bfad_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid); +void bfad_os_rport_online_wait(struct bfad_s *bfad); +int bfad_os_get_linkup_delay(struct bfad_s *bfad); +int bfad_install_msix_handler(struct bfad_s *bfad); + +extern struct idr bfad_im_port_index; +extern struct list_head bfad_list; +extern int bfa_lun_queue_depth; +extern int bfad_supported_fc4s; +extern int bfa_linkup_delay; + +#endif /* __BFAD_DRV_H__ */ diff --git a/drivers/scsi/bfa/bfad_fwimg.c b/drivers/scsi/bfa/bfad_fwimg.c new file mode 100644 index 000000000000..b2f6949bc8d3 --- /dev/null +++ b/drivers/scsi/bfa/bfad_fwimg.c @@ -0,0 +1,95 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfad_fwimg.c Linux driver PCI interface module. 
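+ *
+ * Pulls the adapter firmware (ctfw.bin for CT ASICs, cbfw.bin for the
+ * others) in through request_firmware() into a vmalloc'ed buffer, which
+ * the HAL then reads back piecewise via the *_get_chunk() helpers.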
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +u32 bfi_image_ct_size; +u32 bfi_image_cb_size; +u32 *bfi_image_ct; +u32 *bfi_image_cb; + + +#define BFAD_FW_FILE_CT "ctfw.bin" +#define BFAD_FW_FILE_CB "cbfw.bin" + +u32 * +bfad_read_firmware(struct pci_dev *pdev, u32 **bfi_image, + u32 *bfi_image_size, char *fw_name) +{ + const struct firmware *fw; + + if (request_firmware(&fw, fw_name, &pdev->dev)) { + printk(KERN_ALERT "Can't locate firmware %s\n", fw_name); + goto error; + } + + *bfi_image = vmalloc(fw->size); + if (NULL == *bfi_image) { + printk(KERN_ALERT "Fail to allocate buffer for fw image " + "size=%x!\n", (u32) fw->size); + goto error; + } + + memcpy(*bfi_image, fw->data, fw->size); + *bfi_image_size = fw->size/sizeof(u32); + + return(*bfi_image); + +error: + return(NULL); +} + +u32 * +bfad_get_firmware_buf(struct pci_dev *pdev) +{ + if (pdev->device == BFA_PCI_DEVICE_ID_CT) { + if (bfi_image_ct_size == 0) + bfad_read_firmware(pdev, &bfi_image_ct, + &bfi_image_ct_size, BFAD_FW_FILE_CT); + return(bfi_image_ct); + } else { + if (bfi_image_cb_size == 0) + bfad_read_firmware(pdev, &bfi_image_cb, + &bfi_image_cb_size, BFAD_FW_FILE_CB); + return(bfi_image_cb); + } +} + +u32 * +bfi_image_ct_get_chunk(u32 off) +{ return (u32 *)(bfi_image_ct + off); } + +u32 * +bfi_image_cb_get_chunk(u32 off) +{ return (u32 *)(bfi_image_cb + off); } + diff --git a/drivers/scsi/bfa/bfad_im.c b/drivers/scsi/bfa/bfad_im.c new file mode 100644 index 000000000000..158c99243c08 --- /dev/null +++ b/drivers/scsi/bfa/bfad_im.c @@ -0,0 +1,1230 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfad_im.c Linux driver IM module. 
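+ *
+ * The FCP initiator-mode personality: HAL IO completion callbacks, the
+ * SCSI host templates and their error handlers, and the itnim
+ * (initiator-target nexus) workqueue glue that maps remote ports onto
+ * SCSI targets.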
+ */ + +#include "bfad_drv.h" +#include "bfad_im.h" +#include "bfad_trcmod.h" +#include "bfa_cb_ioim_macros.h" +#include + +BFA_TRC_FILE(LDRV, IM); + +DEFINE_IDR(bfad_im_port_index); +struct scsi_transport_template *bfad_im_scsi_transport_template; +static void bfad_im_itnim_work_handler(struct work_struct *work); +static int bfad_im_queuecommand(struct scsi_cmnd *cmnd, + void (*done)(struct scsi_cmnd *)); +static int bfad_im_slave_alloc(struct scsi_device *sdev); + +void +bfa_cb_ioim_done(void *drv, struct bfad_ioim_s *dio, + enum bfi_ioim_status io_status, u8 scsi_status, + int sns_len, u8 *sns_info, s32 residue) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + struct bfad_s *bfad = drv; + struct bfad_itnim_data_s *itnim_data; + struct bfad_itnim_s *itnim; + + switch (io_status) { + case BFI_IOIM_STS_OK: + bfa_trc(bfad, scsi_status); + cmnd->result = ScsiResult(DID_OK, scsi_status); + scsi_set_resid(cmnd, 0); + + if (sns_len > 0) { + bfa_trc(bfad, sns_len); + if (sns_len > SCSI_SENSE_BUFFERSIZE) + sns_len = SCSI_SENSE_BUFFERSIZE; + memcpy(cmnd->sense_buffer, sns_info, sns_len); + } + if (residue > 0) + scsi_set_resid(cmnd, residue); + break; + + case BFI_IOIM_STS_ABORTED: + case BFI_IOIM_STS_TIMEDOUT: + case BFI_IOIM_STS_PATHTOV: + default: + cmnd->result = ScsiResult(DID_ERROR, 0); + } + + /* Unmap DMA, if host is NULL, it means a scsi passthru cmd */ + if (cmnd->device->host != NULL) + scsi_dma_unmap(cmnd); + + cmnd->host_scribble = NULL; + bfa_trc(bfad, cmnd->result); + + itnim_data = cmnd->device->hostdata; + if (itnim_data) { + itnim = itnim_data->itnim; + if (!cmnd->result && itnim && + (bfa_lun_queue_depth > cmnd->device->queue_depth)) { + /* Queue depth adjustment for good status completion */ + bfad_os_ramp_up_qdepth(itnim, cmnd->device); + } else if (cmnd->result == SAM_STAT_TASK_SET_FULL && itnim) { + /* qfull handling */ + bfad_os_handle_qfull(itnim, cmnd->device); + } + } + + cmnd->scsi_done(cmnd); +} + +void +bfa_cb_ioim_good_comp(void *drv, struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + struct bfad_itnim_data_s *itnim_data; + struct bfad_itnim_s *itnim; + + cmnd->result = ScsiResult(DID_OK, SCSI_STATUS_GOOD); + + /* Unmap DMA, if host is NULL, it means a scsi passthru cmd */ + if (cmnd->device->host != NULL) + scsi_dma_unmap(cmnd); + + cmnd->host_scribble = NULL; + + /* Queue depth adjustment */ + if (bfa_lun_queue_depth > cmnd->device->queue_depth) { + itnim_data = cmnd->device->hostdata; + if (itnim_data) { + itnim = itnim_data->itnim; + if (itnim) + bfad_os_ramp_up_qdepth(itnim, cmnd->device); + } + } + + cmnd->scsi_done(cmnd); +} + +void +bfa_cb_ioim_abort(void *drv, struct bfad_ioim_s *dio) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dio; + struct bfad_s *bfad = drv; + + cmnd->result = ScsiResult(DID_ERROR, 0); + + /* Unmap DMA, if host is NULL, it means a scsi passthru cmd */ + if (cmnd->device->host != NULL) + scsi_dma_unmap(cmnd); + + bfa_trc(bfad, cmnd->result); + cmnd->host_scribble = NULL; +} + +void +bfa_cb_tskim_done(void *bfad, struct bfad_tskim_s *dtsk, + enum bfi_tskim_status tsk_status) +{ + struct scsi_cmnd *cmnd = (struct scsi_cmnd *)dtsk; + wait_queue_head_t *wq; + + cmnd->SCp.Status |= tsk_status << 1; + set_bit(IO_DONE_BIT, (unsigned long *)&cmnd->SCp.Status); + wq = (wait_queue_head_t *) cmnd->SCp.ptr; + cmnd->SCp.ptr = NULL; + + if (wq) + wake_up(wq); +} + +void +bfa_cb_ioim_resfree(void *drv) +{ +} + +/** + * Scsi_Host_template SCSI host template + */ +/** + * Scsi_Host template entry, returns BFAD 
PCI info.
+ */
+static const char *
+bfad_im_info(struct Scsi_Host *shost)
+{
+	static char bfa_buf[256];
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfa_ioc_attr_s ioc_attr;
+	struct bfad_s *bfad = im_port->bfad;
+
+	memset(&ioc_attr, 0, sizeof(ioc_attr));
+	bfa_get_attr(&bfad->bfa, &ioc_attr);
+
+	memset(bfa_buf, 0, sizeof(bfa_buf));
+	snprintf(bfa_buf, sizeof(bfa_buf),
+		"Brocade FC/FCOE Adapter, " "model: %s hwpath: %s driver: %s",
+		ioc_attr.adapter_attr.model, bfad->pci_name,
+		BFAD_DRIVER_VERSION);
+	return bfa_buf;
+}
+
+/**
+ * Scsi_Host template entry, aborts the specified SCSI command.
+ *
+ * Returns: SUCCESS or FAILED.
+ */
+static int
+bfad_im_abort_handler(struct scsi_cmnd *cmnd)
+{
+	struct Scsi_Host *shost = cmnd->device->host;
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfa_ioim_s *hal_io;
+	unsigned long flags;
+	u32 timeout;
+	int rc = FAILED;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	hal_io = (struct bfa_ioim_s *) cmnd->host_scribble;
+	if (!hal_io) {
+		/* IO has been completed, return success */
+		rc = SUCCESS;
+		goto out;
+	}
+	if (hal_io->dio != (struct bfad_ioim_s *) cmnd) {
+		rc = FAILED;
+		goto out;
+	}
+
+	bfa_trc(bfad, hal_io->iotag);
+	bfa_log(bfad->logmod, BFA_LOG_LINUX_SCSI_ABORT,
+		im_port->shost->host_no, cmnd, hal_io->iotag);
+	bfa_ioim_abort(hal_io);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	/* Need to wait until the command gets aborted */
+	timeout = 10;
+	while ((struct bfa_ioim_s *) cmnd->host_scribble == hal_io) {
+		set_current_state(TASK_UNINTERRUPTIBLE);
+		schedule_timeout(timeout);
+		if (timeout < 4 * HZ)
+			timeout *= 2;
+	}
+
+	cmnd->scsi_done(cmnd);
+	bfa_trc(bfad, hal_io->iotag);
+	bfa_log(bfad->logmod, BFA_LOG_LINUX_SCSI_ABORT_COMP,
+		im_port->shost->host_no, cmnd, hal_io->iotag);
+	return SUCCESS;
+out:
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+	return rc;
+}
+
+static bfa_status_t
+bfad_im_target_reset_send(struct bfad_s *bfad, struct scsi_cmnd *cmnd,
+			  struct bfad_itnim_s *itnim)
+{
+	struct bfa_tskim_s *tskim;
+	struct bfa_itnim_s *bfa_itnim;
+	bfa_status_t rc = BFA_STATUS_OK;
+
+	bfa_itnim = bfa_fcs_itnim_get_halitn(&itnim->fcs_itnim);
+	tskim = bfa_tskim_alloc(&bfad->bfa, (struct bfad_tskim_s *) cmnd);
+	if (!tskim) {
+		BFA_DEV_PRINTF(bfad, BFA_ERR,
+			       "target reset, fail to allocate tskim\n");
+		rc = BFA_STATUS_FAILED;
+		goto out;
+	}
+
+	/*
+	 * Set host_scribble to NULL to avoid aborting a task command
+	 * if one happens.
+	 */
+	cmnd->host_scribble = NULL;
+	cmnd->SCp.Status = 0;
+	bfa_itnim = bfa_fcs_itnim_get_halitn(&itnim->fcs_itnim);
+	bfa_tskim_start(tskim, bfa_itnim, (lun_t)0,
+			FCP_TM_TARGET_RESET, BFAD_TARGET_RESET_TMO);
+out:
+	return rc;
+}
+
+/**
+ * Scsi_Host template entry, resets a LUN and aborts all of its commands.
+ *
+ * Returns: SUCCESS or FAILED.
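+ *
+ * Allocates a task management IO (tskim), issues FCP_TM_LUN_RESET to the
+ * itnim and sleeps on an on-stack wait queue until bfa_cb_tskim_done()
+ * posts the task status; any status other than BFI_TSKIM_STS_OK fails
+ * the reset.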
+ *
+ */
+static int
+bfad_im_reset_lun_handler(struct scsi_cmnd *cmnd)
+{
+	struct Scsi_Host *shost = cmnd->device->host;
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfad_itnim_data_s *itnim_data = cmnd->device->hostdata;
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfa_tskim_s *tskim;
+	struct bfad_itnim_s *itnim;
+	struct bfa_itnim_s *bfa_itnim;
+	DECLARE_WAIT_QUEUE_HEAD(wq);
+	int rc = SUCCESS;
+	unsigned long flags;
+	enum bfi_tskim_status task_status;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	itnim = itnim_data->itnim;
+	if (!itnim) {
+		spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+		rc = FAILED;
+		goto out;
+	}
+
+	tskim = bfa_tskim_alloc(&bfad->bfa, (struct bfad_tskim_s *) cmnd);
+	if (!tskim) {
+		BFA_DEV_PRINTF(bfad, BFA_ERR,
+			       "LUN reset, fail to allocate tskim");
+		spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+		rc = FAILED;
+		goto out;
+	}
+
+	/**
+	 * Set host_scribble to NULL to avoid aborting a task command
+	 * if one happens.
+	 */
+	cmnd->host_scribble = NULL;
+	cmnd->SCp.ptr = (char *)&wq;
+	cmnd->SCp.Status = 0;
+	bfa_itnim = bfa_fcs_itnim_get_halitn(&itnim->fcs_itnim);
+	bfa_tskim_start(tskim, bfa_itnim,
+			bfad_int_to_lun(cmnd->device->lun),
+			FCP_TM_LUN_RESET, BFAD_LUN_RESET_TMO);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	wait_event(wq, test_bit(IO_DONE_BIT,
+				(unsigned long *)&cmnd->SCp.Status));
+
+	task_status = cmnd->SCp.Status >> 1;
+	if (task_status != BFI_TSKIM_STS_OK) {
+		BFA_DEV_PRINTF(bfad, BFA_ERR, "LUN reset failure, status: %d\n",
+			       task_status);
+		rc = FAILED;
+	}
+
+out:
+	return rc;
+}
+
+/**
+ * Scsi_Host template entry, resets the bus and aborts all commands.
+ */
+static int
+bfad_im_reset_bus_handler(struct scsi_cmnd *cmnd)
+{
+	struct Scsi_Host *shost = cmnd->device->host;
+	struct bfad_im_port_s *im_port =
+			(struct bfad_im_port_s *) shost->hostdata[0];
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfad_itnim_s *itnim;
+	unsigned long flags;
+	u32 i, rc, err_cnt = 0;
+	DECLARE_WAIT_QUEUE_HEAD(wq);
+	enum bfi_tskim_status task_status;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	for (i = 0; i < MAX_FCP_TARGET; i++) {
+		itnim = bfad_os_get_itnim(im_port, i);
+		if (itnim) {
+			cmnd->SCp.ptr = (char *)&wq;
+			rc = bfad_im_target_reset_send(bfad, cmnd, itnim);
+			if (rc != BFA_STATUS_OK) {
+				err_cnt++;
+				continue;
+			}
+
+			/* wait for target reset to complete */
+			spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+			wait_event(wq, test_bit(IO_DONE_BIT,
+					(unsigned long *)&cmnd->SCp.Status));
+			spin_lock_irqsave(&bfad->bfad_lock, flags);
+
+			task_status = cmnd->SCp.Status >> 1;
+			if (task_status != BFI_TSKIM_STS_OK) {
+				BFA_DEV_PRINTF(bfad, BFA_ERR,
+					       "target reset failure,"
+					       " status: %d\n", task_status);
+				err_cnt++;
+			}
+		}
+	}
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	if (err_cnt)
+		return FAILED;
+
+	return SUCCESS;
+}
+
+/**
+ * Scsi_Host template entry slave_destroy.
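+ *
+ * Clears the per-device hostdata, which the IO completion path uses to
+ * look up the itnim for queue depth adjustments.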
+ */
+static void
+bfad_im_slave_destroy(struct scsi_device *sdev)
+{
+	sdev->hostdata = NULL;
+	return;
+}
+
+/**
+ * BFA FCS itnim callbacks
+ */
+
+/**
+ * BFA FCS itnim alloc callback, after successful PRLI
+ * Context: Interrupt
+ */
+void
+bfa_fcb_itnim_alloc(struct bfad_s *bfad, struct bfa_fcs_itnim_s **itnim,
+		    struct bfad_itnim_s **itnim_drv)
+{
+	*itnim_drv = kzalloc(sizeof(struct bfad_itnim_s), GFP_ATOMIC);
+	if (*itnim_drv == NULL)
+		return;
+
+	(*itnim_drv)->im = bfad->im;
+	*itnim = &(*itnim_drv)->fcs_itnim;
+	(*itnim_drv)->state = ITNIM_STATE_NONE;
+
+	/*
+	 * Initialize the itnim_work
+	 */
+	INIT_WORK(&(*itnim_drv)->itnim_work, bfad_im_itnim_work_handler);
+	bfad->bfad_flags |= BFAD_RPORT_ONLINE;
+}
+
+/**
+ * BFA FCS itnim free callback.
+ * Context: Interrupt. bfad_lock is held
+ */
+void
+bfa_fcb_itnim_free(struct bfad_s *bfad, struct bfad_itnim_s *itnim_drv)
+{
+	struct bfad_port_s *port;
+	wwn_t wwpn;
+	u32 fcid;
+	char wwpn_str[32], fcid_str[16];
+
+	/* online to free state transition should not happen */
+	bfa_assert(itnim_drv->state != ITNIM_STATE_ONLINE);
+
+	itnim_drv->queue_work = 1;
+	/* offline request is not yet done, use the same request to free */
+	if (itnim_drv->state == ITNIM_STATE_OFFLINE_PENDING)
+		itnim_drv->queue_work = 0;
+
+	itnim_drv->state = ITNIM_STATE_FREE;
+	port = bfa_fcs_itnim_get_drvport(&itnim_drv->fcs_itnim);
+	itnim_drv->im_port = port->im_port;
+	wwpn = bfa_fcs_itnim_get_pwwn(&itnim_drv->fcs_itnim);
+	fcid = bfa_fcs_itnim_get_fcid(&itnim_drv->fcs_itnim);
+	wwn2str(wwpn_str, wwpn);
+	fcid2str(fcid_str, fcid);
+	bfa_log(bfad->logmod, BFA_LOG_LINUX_ITNIM_FREE,
+		port->im_port->shost->host_no,
+		fcid_str, wwpn_str);
+	bfad_os_itnim_process(itnim_drv);
+}
+
+/**
+ * BFA FCS itnim online callback.
+ * Context: Interrupt. bfad_lock is held
+ */
+void
+bfa_fcb_itnim_online(struct bfad_itnim_s *itnim_drv)
+{
+	struct bfad_port_s *port;
+
+	itnim_drv->bfa_itnim = bfa_fcs_itnim_get_halitn(&itnim_drv->fcs_itnim);
+	port = bfa_fcs_itnim_get_drvport(&itnim_drv->fcs_itnim);
+	itnim_drv->state = ITNIM_STATE_ONLINE;
+	itnim_drv->queue_work = 1;
+	itnim_drv->im_port = port->im_port;
+	bfad_os_itnim_process(itnim_drv);
+}
+
+/**
+ * BFA FCS itnim offline callback.
+ * Context: Interrupt. bfad_lock is held
+ */
+void
+bfa_fcb_itnim_offline(struct bfad_itnim_s *itnim_drv)
+{
+	struct bfad_port_s *port;
+	struct bfad_s *bfad;
+
+	port = bfa_fcs_itnim_get_drvport(&itnim_drv->fcs_itnim);
+	bfad = port->bfad;
+	if ((bfad->pport.flags & BFAD_PORT_DELETE) ||
+	    (port->flags & BFAD_PORT_DELETE)) {
+		itnim_drv->state = ITNIM_STATE_OFFLINE;
+		return;
+	}
+	itnim_drv->im_port = port->im_port;
+	itnim_drv->state = ITNIM_STATE_OFFLINE_PENDING;
+	itnim_drv->queue_work = 1;
+	bfad_os_itnim_process(itnim_drv);
+}
+
+/**
+ * BFA FCS itnim timeout callback.
+ * Context: Interrupt. bfad_lock is held
+ */
+void bfa_fcb_itnim_tov(struct bfad_itnim_s *itnim)
+{
+	itnim->state = ITNIM_STATE_TIMEOUT;
+}
+
+/**
+ * Path TOV processing begin notification -- dummy for linux
+ */
+void
+bfa_fcb_itnim_tov_begin(struct bfad_itnim_s *itnim)
+{
+}
+
+
+
+/**
+ * Allocate a Scsi_Host for a port.
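+ *
+ * Reserves an idr index for the port, allocates the Scsi_Host from the
+ * physical port or vport template, seeds its limits from the driver
+ * config data, registers it with the SCSI midlayer and finally sets up
+ * the FC transport host attributes.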
+ */ +int +bfad_im_scsi_host_alloc(struct bfad_s *bfad, struct bfad_im_port_s *im_port) +{ + int error = 1; + + if (!idr_pre_get(&bfad_im_port_index, GFP_KERNEL)) { + printk(KERN_WARNING "idr_pre_get failure\n"); + goto out; + } + + error = idr_get_new(&bfad_im_port_index, im_port, + &im_port->idr_id); + if (error) { + printk(KERN_WARNING "idr_get_new failure\n"); + goto out; + } + + im_port->shost = bfad_os_scsi_host_alloc(im_port, bfad); + if (!im_port->shost) { + error = 1; + goto out_free_idr; + } + + im_port->shost->hostdata[0] = (unsigned long)im_port; + im_port->shost->unique_id = im_port->idr_id; + im_port->shost->this_id = -1; + im_port->shost->max_id = MAX_FCP_TARGET; + im_port->shost->max_lun = MAX_FCP_LUN; + im_port->shost->max_cmd_len = 16; + im_port->shost->can_queue = bfad->cfg_data.ioc_queue_depth; + im_port->shost->transportt = bfad_im_scsi_transport_template; + + error = bfad_os_scsi_add_host(im_port->shost, im_port, bfad); + if (error) { + printk(KERN_WARNING "bfad_os_scsi_add_host failure %d\n", + error); + goto out_fc_rel; + } + + /* setup host fixed attribute if the lk supports */ + bfad_os_fc_host_init(im_port); + + return 0; + +out_fc_rel: + scsi_host_put(im_port->shost); +out_free_idr: + idr_remove(&bfad_im_port_index, im_port->idr_id); +out: + return error; +} + +void +bfad_im_scsi_host_free(struct bfad_s *bfad, struct bfad_im_port_s *im_port) +{ + unsigned long flags; + + bfa_trc(bfad, bfad->inst_no); + bfa_log(bfad->logmod, BFA_LOG_LINUX_SCSI_HOST_FREE, + im_port->shost->host_no); + + fc_remove_host(im_port->shost); + + scsi_remove_host(im_port->shost); + scsi_host_put(im_port->shost); + + spin_lock_irqsave(&bfad->bfad_lock, flags); + idr_remove(&bfad_im_port_index, im_port->idr_id); + spin_unlock_irqrestore(&bfad->bfad_lock, flags); +} + +static void +bfad_im_port_delete_handler(struct work_struct *work) +{ + struct bfad_im_port_s *im_port = + container_of(work, struct bfad_im_port_s, port_delete_work); + + bfad_im_scsi_host_free(im_port->bfad, im_port); + bfad_im_port_clean(im_port); + kfree(im_port); +} + +bfa_status_t +bfad_im_port_new(struct bfad_s *bfad, struct bfad_port_s *port) +{ + int rc = BFA_STATUS_OK; + struct bfad_im_port_s *im_port; + + im_port = kzalloc(sizeof(struct bfad_im_port_s), GFP_ATOMIC); + if (im_port == NULL) { + rc = BFA_STATUS_ENOMEM; + goto ext; + } + port->im_port = im_port; + im_port->port = port; + im_port->bfad = bfad; + + INIT_WORK(&im_port->port_delete_work, bfad_im_port_delete_handler); + INIT_LIST_HEAD(&im_port->itnim_mapped_list); + INIT_LIST_HEAD(&im_port->binding_list); + +ext: + return rc; +} + +void +bfad_im_port_delete(struct bfad_s *bfad, struct bfad_port_s *port) +{ + struct bfad_im_port_s *im_port = port->im_port; + + queue_work(bfad->im->drv_workq, + &im_port->port_delete_work); +} + +void +bfad_im_port_clean(struct bfad_im_port_s *im_port) +{ + struct bfad_fcp_binding *bp, *bp_new; + unsigned long flags; + struct bfad_s *bfad = im_port->bfad; + + spin_lock_irqsave(&bfad->bfad_lock, flags); + list_for_each_entry_safe(bp, bp_new, &im_port->binding_list, + list_entry) { + list_del(&bp->list_entry); + kfree(bp); + } + + /* the itnim_mapped_list must be empty at this time */ + bfa_assert(list_empty(&im_port->itnim_mapped_list)); + + spin_unlock_irqrestore(&bfad->bfad_lock, flags); +} + +void +bfad_im_port_online(struct bfad_s *bfad, struct bfad_port_s *port) +{ +} + +void +bfad_im_port_offline(struct bfad_s *bfad, struct bfad_port_s *port) +{ +} + +bfa_status_t +bfad_im_probe(struct bfad_s *bfad) +{ + struct bfad_im_s 
*im; + bfa_status_t rc = BFA_STATUS_OK; + + im = kzalloc(sizeof(struct bfad_im_s), GFP_KERNEL); + if (im == NULL) { + rc = BFA_STATUS_ENOMEM; + goto ext; + } + + bfad->im = im; + im->bfad = bfad; + + if (bfad_os_thread_workq(bfad) != BFA_STATUS_OK) { + kfree(im); + rc = BFA_STATUS_FAILED; + } + +ext: + return rc; +} + +void +bfad_im_probe_undo(struct bfad_s *bfad) +{ + if (bfad->im) { + bfad_os_destroy_workq(bfad->im); + kfree(bfad->im); + bfad->im = NULL; + } +} + + + + +int +bfad_os_scsi_add_host(struct Scsi_Host *shost, struct bfad_im_port_s *im_port, + struct bfad_s *bfad) +{ + struct device *dev; + + if (im_port->port->pvb_type == BFAD_PORT_PHYS_BASE) + dev = &bfad->pcidev->dev; + else + dev = &bfad->pport.im_port->shost->shost_gendev; + + return scsi_add_host(shost, dev); +} + +struct Scsi_Host * +bfad_os_scsi_host_alloc(struct bfad_im_port_s *im_port, struct bfad_s *bfad) +{ + struct scsi_host_template *sht; + + if (im_port->port->pvb_type == BFAD_PORT_PHYS_BASE) + sht = &bfad_im_scsi_host_template; + else + sht = &bfad_im_vport_template; + + sht->sg_tablesize = bfad->cfg_data.io_max_sge; + + return scsi_host_alloc(sht, sizeof(unsigned long)); +} + +void +bfad_os_scsi_host_free(struct bfad_s *bfad, struct bfad_im_port_s *im_port) +{ + flush_workqueue(bfad->im->drv_workq); + bfad_im_scsi_host_free(im_port->bfad, im_port); + bfad_im_port_clean(im_port); + kfree(im_port); +} + +void +bfad_os_destroy_workq(struct bfad_im_s *im) +{ + if (im && im->drv_workq) { + destroy_workqueue(im->drv_workq); + im->drv_workq = NULL; + } +} + +bfa_status_t +bfad_os_thread_workq(struct bfad_s *bfad) +{ + struct bfad_im_s *im = bfad->im; + + bfa_trc(bfad, 0); + snprintf(im->drv_workq_name, BFAD_KOBJ_NAME_LEN, "bfad_wq_%d", + bfad->inst_no); + im->drv_workq = create_singlethread_workqueue(im->drv_workq_name); + if (!im->drv_workq) + return BFA_STATUS_FAILED; + + return BFA_STATUS_OK; +} + +/** + * Scsi_Host template entry. + * + * Description: + * OS entry point to adjust the queue_depths on a per-device basis. + * Called once per device during the bus scan. + * Return non-zero if fails. 
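+ *
+ * The depth set here is only a starting point: on the I/O path
+ * bfad_os_ramp_up_qdepth() slowly raises it again after a quiet
+ * period, and bfad_os_handle_qfull() backs it off when a device
+ * reports QUEUE_FULL (both below).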
+ */ +static int +bfad_im_slave_configure(struct scsi_device *sdev) +{ + if (sdev->tagged_supported) + scsi_activate_tcq(sdev, bfa_lun_queue_depth); + else + scsi_deactivate_tcq(sdev, bfa_lun_queue_depth); + + return 0; +} + +struct scsi_host_template bfad_im_scsi_host_template = { + .module = THIS_MODULE, + .name = BFAD_DRIVER_NAME, + .info = bfad_im_info, + .queuecommand = bfad_im_queuecommand, + .eh_abort_handler = bfad_im_abort_handler, + .eh_device_reset_handler = bfad_im_reset_lun_handler, + .eh_bus_reset_handler = bfad_im_reset_bus_handler, + + .slave_alloc = bfad_im_slave_alloc, + .slave_configure = bfad_im_slave_configure, + .slave_destroy = bfad_im_slave_destroy, + + .this_id = -1, + .sg_tablesize = BFAD_IO_MAX_SGE, + .cmd_per_lun = 3, + .use_clustering = ENABLE_CLUSTERING, + .shost_attrs = bfad_im_host_attrs, + .max_sectors = 0xFFFF, +}; + +struct scsi_host_template bfad_im_vport_template = { + .module = THIS_MODULE, + .name = BFAD_DRIVER_NAME, + .info = bfad_im_info, + .queuecommand = bfad_im_queuecommand, + .eh_abort_handler = bfad_im_abort_handler, + .eh_device_reset_handler = bfad_im_reset_lun_handler, + .eh_bus_reset_handler = bfad_im_reset_bus_handler, + + .slave_alloc = bfad_im_slave_alloc, + .slave_configure = bfad_im_slave_configure, + .slave_destroy = bfad_im_slave_destroy, + + .this_id = -1, + .sg_tablesize = BFAD_IO_MAX_SGE, + .cmd_per_lun = 3, + .use_clustering = ENABLE_CLUSTERING, + .shost_attrs = bfad_im_vport_attrs, + .max_sectors = 0xFFFF, +}; + +void +bfad_im_probe_post(struct bfad_im_s *im) +{ + flush_workqueue(im->drv_workq); +} + +bfa_status_t +bfad_im_module_init(void) +{ + bfad_im_scsi_transport_template = + fc_attach_transport(&bfad_im_fc_function_template); + if (!bfad_im_scsi_transport_template) + return BFA_STATUS_ENOMEM; + + return BFA_STATUS_OK; +} + +void +bfad_im_module_exit(void) +{ + if (bfad_im_scsi_transport_template) + fc_release_transport(bfad_im_scsi_transport_template); +} + +void +bfad_os_itnim_process(struct bfad_itnim_s *itnim_drv) +{ + struct bfad_im_s *im = itnim_drv->im; + + if (itnim_drv->queue_work) + queue_work(im->drv_workq, &itnim_drv->itnim_work); +} + +void +bfad_os_ramp_up_qdepth(struct bfad_itnim_s *itnim, struct scsi_device *sdev) +{ + struct scsi_device *tmp_sdev; + + if (((jiffies - itnim->last_ramp_up_time) > + BFA_QUEUE_FULL_RAMP_UP_TIME * HZ) && + ((jiffies - itnim->last_queue_full_time) > + BFA_QUEUE_FULL_RAMP_UP_TIME * HZ)) { + shost_for_each_device(tmp_sdev, sdev->host) { + if (bfa_lun_queue_depth > tmp_sdev->queue_depth) { + if (tmp_sdev->id != sdev->id) + continue; + if (tmp_sdev->ordered_tags) + scsi_adjust_queue_depth(tmp_sdev, + MSG_ORDERED_TAG, + tmp_sdev->queue_depth + 1); + else + scsi_adjust_queue_depth(tmp_sdev, + MSG_SIMPLE_TAG, + tmp_sdev->queue_depth + 1); + + itnim->last_ramp_up_time = jiffies; + } + } + } +} + +void +bfad_os_handle_qfull(struct bfad_itnim_s *itnim, struct scsi_device *sdev) +{ + struct scsi_device *tmp_sdev; + + itnim->last_queue_full_time = jiffies; + + shost_for_each_device(tmp_sdev, sdev->host) { + if (tmp_sdev->id != sdev->id) + continue; + scsi_track_queue_full(tmp_sdev, tmp_sdev->queue_depth - 1); + } +} + + + + +struct bfad_itnim_s * +bfad_os_get_itnim(struct bfad_im_port_s *im_port, int id) +{ + struct bfad_itnim_s *itnim = NULL; + + /* Search the mapped list for this target ID */ + list_for_each_entry(itnim, &im_port->itnim_mapped_list, list_entry) { + if (id == itnim->scsi_tgt_id) + return itnim; + } + + return NULL; +} + +/** + * Scsi_Host template entry slave_alloc + */ 
+static int
+bfad_im_slave_alloc(struct scsi_device *sdev)
+{
+	struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
+
+	if (!rport || fc_remote_port_chkready(rport))
+		return -ENXIO;
+
+	sdev->hostdata = rport->dd_data;
+
+	return 0;
+}
+
+void
+bfad_os_fc_host_init(struct bfad_im_port_s *im_port)
+{
+	struct Scsi_Host *host = im_port->shost;
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfad_port_s *port = im_port->port;
+	union attr {
+		struct bfa_pport_attr_s pattr;
+		struct bfa_ioc_attr_s ioc_attr;
+	} attr;
+
+	fc_host_node_name(host) =
+		bfa_os_htonll((bfa_fcs_port_get_nwwn(port->fcs_port)));
+	fc_host_port_name(host) =
+		bfa_os_htonll((bfa_fcs_port_get_pwwn(port->fcs_port)));
+
+	fc_host_supported_classes(host) = FC_COS_CLASS3;
+
+	memset(fc_host_supported_fc4s(host), 0,
+	       sizeof(fc_host_supported_fc4s(host)));
+	if (bfad_supported_fc4s & (BFA_PORT_ROLE_FCP_IM | BFA_PORT_ROLE_FCP_TM))
+		/* For FCP type 0x08 */
+		fc_host_supported_fc4s(host)[2] = 1;
+	if (bfad_supported_fc4s & BFA_PORT_ROLE_FCP_IPFC)
+		/* For LLC/SNAP type 0x05 */
+		fc_host_supported_fc4s(host)[3] = 0x20;
+	/* For fibre channel services type 0x20 */
+	fc_host_supported_fc4s(host)[7] = 1;
+
+	memset(&attr.ioc_attr, 0, sizeof(attr.ioc_attr));
+	bfa_get_attr(&bfad->bfa, &attr.ioc_attr);
+	sprintf(fc_host_symbolic_name(host), "Brocade %s FV%s DV%s",
+		attr.ioc_attr.adapter_attr.model,
+		attr.ioc_attr.adapter_attr.fw_ver, BFAD_DRIVER_VERSION);
+
+	fc_host_supported_speeds(host) = 0;
+	fc_host_supported_speeds(host) |=
+		FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT | FC_PORTSPEED_2GBIT |
+		FC_PORTSPEED_1GBIT;
+
+	memset(&attr.pattr, 0, sizeof(attr.pattr));
+	bfa_pport_get_attr(&bfad->bfa, &attr.pattr);
+	fc_host_maxframe_size(host) = attr.pattr.pport_cfg.maxfrsize;
+}
+
+static void
+bfad_im_fc_rport_add(struct bfad_im_port_s *im_port, struct bfad_itnim_s *itnim)
+{
+	struct fc_rport_identifiers rport_ids;
+	struct fc_rport *fc_rport;
+	struct bfad_itnim_data_s *itnim_data;
+
+	rport_ids.node_name =
+		bfa_os_htonll(bfa_fcs_itnim_get_nwwn(&itnim->fcs_itnim));
+	rport_ids.port_name =
+		bfa_os_htonll(bfa_fcs_itnim_get_pwwn(&itnim->fcs_itnim));
+	rport_ids.port_id =
+		bfa_os_hton3b(bfa_fcs_itnim_get_fcid(&itnim->fcs_itnim));
+	rport_ids.roles = FC_RPORT_ROLE_UNKNOWN;
+
+	itnim->fc_rport = fc_rport =
+		fc_remote_port_add(im_port->shost, 0, &rport_ids);
+
+	if (!fc_rport)
+		return;
+
+	fc_rport->maxframe_size =
+		bfa_fcs_itnim_get_maxfrsize(&itnim->fcs_itnim);
+	fc_rport->supported_classes = bfa_fcs_itnim_get_cos(&itnim->fcs_itnim);
+
+	itnim_data = fc_rport->dd_data;
+	itnim_data->itnim = itnim;
+
+	rport_ids.roles |= FC_RPORT_ROLE_FCP_TARGET;
+
+	if (rport_ids.roles != FC_RPORT_ROLE_UNKNOWN)
+		fc_remote_port_rolechg(fc_rport, rport_ids.roles);
+
+	if ((fc_rport->scsi_target_id != -1)
+	    && (fc_rport->scsi_target_id < MAX_FCP_TARGET))
+		itnim->scsi_tgt_id = fc_rport->scsi_target_id;
+
+	return;
+}
+
+/**
+ * Work queue handler using FC transport service
+ * Context: kernel
+ */
+static void
+bfad_im_itnim_work_handler(struct work_struct *work)
+{
+	struct bfad_itnim_s *itnim = container_of(work, struct bfad_itnim_s,
+						  itnim_work);
+	struct bfad_im_s *im = itnim->im;
+	struct bfad_s *bfad = im->bfad;
+	struct bfad_im_port_s *im_port;
+	unsigned long flags;
+	struct fc_rport *fc_rport;
+	wwn_t wwpn;
+	u32 fcid;
+	char wwpn_str[32], fcid_str[16];
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	im_port = itnim->im_port;
+	bfa_trc(bfad, itnim->state);
+	switch (itnim->state) {
+	case ITNIM_STATE_ONLINE:
+		if
(!itnim->fc_rport) {
+			spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+			bfad_im_fc_rport_add(im_port, itnim);
+			spin_lock_irqsave(&bfad->bfad_lock, flags);
+			wwpn = bfa_fcs_itnim_get_pwwn(&itnim->fcs_itnim);
+			fcid = bfa_fcs_itnim_get_fcid(&itnim->fcs_itnim);
+			wwn2str(wwpn_str, wwpn);
+			fcid2str(fcid_str, fcid);
+			list_add_tail(&itnim->list_entry,
+				      &im_port->itnim_mapped_list);
+			bfa_log(bfad->logmod, BFA_LOG_LINUX_ITNIM_ONLINE,
+				im_port->shost->host_no,
+				itnim->scsi_tgt_id,
+				fcid_str, wwpn_str);
+		} else {
+			printk(KERN_WARNING
+			       "%s: itnim %llx is already in online state\n",
+			       __func__,
+			       bfa_fcs_itnim_get_pwwn(&itnim->fcs_itnim));
+		}
+
+		break;
+	case ITNIM_STATE_OFFLINE_PENDING:
+		itnim->state = ITNIM_STATE_OFFLINE;
+		if (itnim->fc_rport) {
+			fc_rport = itnim->fc_rport;
+			((struct bfad_itnim_data_s *)
+			 fc_rport->dd_data)->itnim = NULL;
+			itnim->fc_rport = NULL;
+			if (!(im_port->port->flags & BFAD_PORT_DELETE)) {
+				spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+				fc_rport->dev_loss_tmo =
+					bfa_fcpim_path_tov_get(&bfad->bfa) + 1;
+				fc_remote_port_delete(fc_rport);
+				spin_lock_irqsave(&bfad->bfad_lock, flags);
+			}
+			wwpn = bfa_fcs_itnim_get_pwwn(&itnim->fcs_itnim);
+			fcid = bfa_fcs_itnim_get_fcid(&itnim->fcs_itnim);
+			wwn2str(wwpn_str, wwpn);
+			fcid2str(fcid_str, fcid);
+			list_del(&itnim->list_entry);
+			bfa_log(bfad->logmod, BFA_LOG_LINUX_ITNIM_OFFLINE,
+				im_port->shost->host_no,
+				itnim->scsi_tgt_id,
+				fcid_str, wwpn_str);
+		}
+		break;
+	case ITNIM_STATE_FREE:
+		if (itnim->fc_rport) {
+			fc_rport = itnim->fc_rport;
+			((struct bfad_itnim_data_s *)
+			 fc_rport->dd_data)->itnim = NULL;
+			itnim->fc_rport = NULL;
+			if (!(im_port->port->flags & BFAD_PORT_DELETE)) {
+				spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+				fc_rport->dev_loss_tmo =
+					bfa_fcpim_path_tov_get(&bfad->bfa) + 1;
+				fc_remote_port_delete(fc_rport);
+				spin_lock_irqsave(&bfad->bfad_lock, flags);
+			}
+			list_del(&itnim->list_entry);
+		}
+
+		kfree(itnim);
+		break;
+	default:
+		bfa_assert(0);
+		break;
+	}
+
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+}
+
+/**
+ * Scsi_Host template entry, queue a SCSI command to the BFAD.
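+ *
+ * The fast path is: check the rport state, DMA-map the scatter/gather
+ * list, then allocate a BFA I/O request with bfa_ioim_alloc() under
+ * bfad_lock and start it. SCSI_MLQUEUE_HOST_BUSY is returned when the
+ * DMA mapping or the ioim allocation fails, so the midlayer retries.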
+ */
+static int
+bfad_im_queuecommand(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *))
+{
+	struct bfad_im_port_s *im_port =
+		(struct bfad_im_port_s *) cmnd->device->host->hostdata[0];
+	struct bfad_s *bfad = im_port->bfad;
+	struct bfad_itnim_data_s *itnim_data = cmnd->device->hostdata;
+	struct bfad_itnim_s *itnim;
+	struct bfa_ioim_s *hal_io;
+	unsigned long flags;
+	int rc;
+	s16 sg_cnt = 0;
+	struct fc_rport *rport = starget_to_rport(scsi_target(cmnd->device));
+
+	rc = fc_remote_port_chkready(rport);
+	if (rc) {
+		cmnd->result = rc;
+		done(cmnd);
+		return 0;
+	}
+
+	sg_cnt = scsi_dma_map(cmnd);
+
+	if (sg_cnt < 0)
+		return SCSI_MLQUEUE_HOST_BUSY;
+
+	cmnd->scsi_done = done;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	if (!(bfad->bfad_flags & BFAD_HAL_START_DONE)) {
+		printk(KERN_WARNING
+		       "bfad%d, queuecommand %p %x failed, BFA stopped\n",
+		       bfad->inst_no, cmnd, cmnd->cmnd[0]);
+		cmnd->result = ScsiResult(DID_NO_CONNECT, 0);
+		goto out_fail_cmd;
+	}
+
+	itnim = itnim_data->itnim;
+	if (!itnim) {
+		cmnd->result = ScsiResult(DID_IMM_RETRY, 0);
+		goto out_fail_cmd;
+	}
+
+	hal_io = bfa_ioim_alloc(&bfad->bfa, (struct bfad_ioim_s *) cmnd,
+				itnim->bfa_itnim, sg_cnt);
+	if (!hal_io) {
+		printk(KERN_WARNING "hal_io failure\n");
+		spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+		scsi_dma_unmap(cmnd);
+		return SCSI_MLQUEUE_HOST_BUSY;
+	}
+
+	cmnd->host_scribble = (char *)hal_io;
+	bfa_trc_fp(bfad, hal_io->iotag);
+	bfa_ioim_start(hal_io);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	return 0;
+
+out_fail_cmd:
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+	scsi_dma_unmap(cmnd);
+	if (done)
+		done(cmnd);
+
+	return 0;
+}
+
+void
+bfad_os_rport_online_wait(struct bfad_s *bfad)
+{
+	int i;
+	int rport_delay = 10;
+
+	for (i = 0; !(bfad->bfad_flags & BFAD_PORT_ONLINE)
+	     && i < bfa_linkup_delay; i++)
+		schedule_timeout_uninterruptible(HZ);
+
+	if (bfad->bfad_flags & BFAD_PORT_ONLINE) {
+		rport_delay = rport_delay < bfa_linkup_delay ?
+			rport_delay : bfa_linkup_delay;
+		for (i = 0; !(bfad->bfad_flags & BFAD_RPORT_ONLINE)
+		     && i < rport_delay; i++)
+			schedule_timeout_uninterruptible(HZ);
+
+		if (rport_delay > 0 && (bfad->bfad_flags & BFAD_RPORT_ONLINE))
+			schedule_timeout_uninterruptible(rport_delay * HZ);
+	}
+}
+
+int
+bfad_os_get_linkup_delay(struct bfad_s *bfad)
+{
+
+	u8 nwwns = 0;
+	wwn_t *wwns;
+	int ldelay;
+
+	/*
+	 * Querying for the boot target port wwns
+	 * -- read from boot information in flash.
+	 * If nwwns > 0 => boot over SAN and set bfa_linkup_delay = 30
+	 * else => local boot machine, no extra linkup delay
+	 */
+
+	bfa_iocfc_get_bootwwns(&bfad->bfa, &nwwns, &wwns);
+
+	if (nwwns > 0) {
+		/* If boot over SAN; linkup_delay = 30sec */
+		ldelay = 30;
+	} else {
+		/* If local boot; no linkup_delay */
+		ldelay = 0;
+	}
+
+	return ldelay;
+}
+
+
diff --git a/drivers/scsi/bfa/bfad_im.h b/drivers/scsi/bfa/bfad_im.h
new file mode 100644
index 000000000000..189a5b29e21a
--- /dev/null
+++ b/drivers/scsi/bfa/bfad_im.h
@@ -0,0 +1,150 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFAD_IM_H__ +#define __BFAD_IM_H__ + +#include "fcs/bfa_fcs_fcpim.h" +#include "bfad_im_compat.h" + +#define FCPI_NAME " fcpim" + +void bfad_flags_set(struct bfad_s *bfad, u32 flags); +bfa_status_t bfad_im_module_init(void); +void bfad_im_module_exit(void); +bfa_status_t bfad_im_probe(struct bfad_s *bfad); +void bfad_im_probe_undo(struct bfad_s *bfad); +void bfad_im_probe_post(struct bfad_im_s *im); +bfa_status_t bfad_im_port_new(struct bfad_s *bfad, struct bfad_port_s *port); +void bfad_im_port_delete(struct bfad_s *bfad, struct bfad_port_s *port); +void bfad_im_port_online(struct bfad_s *bfad, struct bfad_port_s *port); +void bfad_im_port_offline(struct bfad_s *bfad, struct bfad_port_s *port); +void bfad_im_port_clean(struct bfad_im_port_s *im_port); +int bfad_im_scsi_host_alloc(struct bfad_s *bfad, + struct bfad_im_port_s *im_port); +void bfad_im_scsi_host_free(struct bfad_s *bfad, + struct bfad_im_port_s *im_port); + +#define MAX_FCP_TARGET 1024 +#define MAX_FCP_LUN 16384 +#define BFAD_TARGET_RESET_TMO 60 +#define BFAD_LUN_RESET_TMO 60 +#define ScsiResult(host_code, scsi_code) (((host_code) << 16) | scsi_code) +#define BFA_QUEUE_FULL_RAMP_UP_TIME 120 +#define BFAD_KOBJ_NAME_LEN 20 + +/* + * itnim flags + */ +#define ITNIM_MAPPED 0x00000001 + +#define SCSI_TASK_MGMT 0x00000001 +#define IO_DONE_BIT 0 + +struct bfad_itnim_data_s { + struct bfad_itnim_s *itnim; +}; + +struct bfad_im_port_s { + struct bfad_s *bfad; + struct bfad_port_s *port; + struct work_struct port_delete_work; + int idr_id; + u16 cur_scsi_id; + struct list_head binding_list; + struct Scsi_Host *shost; + struct list_head itnim_mapped_list; +}; + +enum bfad_itnim_state { + ITNIM_STATE_NONE, + ITNIM_STATE_ONLINE, + ITNIM_STATE_OFFLINE_PENDING, + ITNIM_STATE_OFFLINE, + ITNIM_STATE_TIMEOUT, + ITNIM_STATE_FREE, +}; + +/* + * Per itnim data structure + */ +struct bfad_itnim_s { + struct list_head list_entry; + struct bfa_fcs_itnim_s fcs_itnim; + struct work_struct itnim_work; + u32 flags; + enum bfad_itnim_state state; + struct bfad_im_s *im; + struct bfad_im_port_s *im_port; + struct bfad_rport_s *drv_rport; + struct fc_rport *fc_rport; + struct bfa_itnim_s *bfa_itnim; + u16 scsi_tgt_id; + u16 queue_work; + unsigned long last_ramp_up_time; + unsigned long last_queue_full_time; +}; + +enum bfad_binding_type { + FCP_PWWN_BINDING = 0x1, + FCP_NWWN_BINDING = 0x2, + FCP_FCID_BINDING = 0x3, +}; + +struct bfad_fcp_binding { + struct list_head list_entry; + enum bfad_binding_type binding_type; + u16 scsi_target_id; + u32 fc_id; + wwn_t nwwn; + wwn_t pwwn; +}; + +struct bfad_im_s { + struct bfad_s *bfad; + struct workqueue_struct *drv_workq; + char drv_workq_name[BFAD_KOBJ_NAME_LEN]; +}; + +struct Scsi_Host *bfad_os_scsi_host_alloc(struct bfad_im_port_s *im_port, + struct bfad_s *); +bfa_status_t bfad_os_thread_workq(struct bfad_s *bfad); +void bfad_os_destroy_workq(struct bfad_im_s *im); +void bfad_os_itnim_process(struct bfad_itnim_s *itnim_drv); +void bfad_os_fc_host_init(struct bfad_im_port_s *im_port); +void bfad_os_init_work(struct bfad_im_port_s *im_port); +void 
bfad_os_scsi_host_free(struct bfad_s *bfad, + struct bfad_im_port_s *im_port); +void bfad_os_ramp_up_qdepth(struct bfad_itnim_s *itnim, + struct scsi_device *sdev); +void bfad_os_handle_qfull(struct bfad_itnim_s *itnim, struct scsi_device *sdev); +struct bfad_itnim_s *bfad_os_get_itnim(struct bfad_im_port_s *im_port, int id); +int bfad_os_scsi_add_host(struct Scsi_Host *shost, + struct bfad_im_port_s *im_port, struct bfad_s *bfad); + +/* + * scsi_host_template entries + */ +void bfad_im_itnim_unmap(struct bfad_im_port_s *im_port, + struct bfad_itnim_s *itnim); + +extern struct scsi_host_template bfad_im_scsi_host_template; +extern struct scsi_host_template bfad_im_vport_template; +extern struct fc_function_template bfad_im_fc_function_template; +extern struct scsi_transport_template *bfad_im_scsi_transport_template; + +#endif diff --git a/drivers/scsi/bfa/bfad_im_compat.h b/drivers/scsi/bfa/bfad_im_compat.h new file mode 100644 index 000000000000..1d3e74ec338c --- /dev/null +++ b/drivers/scsi/bfa/bfad_im_compat.h @@ -0,0 +1,46 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFAD_IM_COMPAT_H__ +#define __BFAD_IM_COMPAT_H__ + +extern u32 *bfi_image_buf; +extern u32 bfi_image_size; + +extern struct device_attribute *bfad_im_host_attrs[]; +extern struct device_attribute *bfad_im_vport_attrs[]; + +u32 *bfad_get_firmware_buf(struct pci_dev *pdev); +u32 *bfad_read_firmware(struct pci_dev *pdev, u32 **bfi_image, + u32 *bfi_image_size, char *fw_name); + +static inline u32 * +bfad_load_fwimg(struct pci_dev *pdev) +{ + return(bfad_get_firmware_buf(pdev)); +} + +static inline void +bfad_free_fwimg(void) +{ + if (bfi_image_ct_size && bfi_image_ct) + vfree(bfi_image_ct); + if (bfi_image_cb_size && bfi_image_cb) + vfree(bfi_image_cb); +} + +#endif diff --git a/drivers/scsi/bfa/bfad_intr.c b/drivers/scsi/bfa/bfad_intr.c new file mode 100644 index 000000000000..f104e029cac9 --- /dev/null +++ b/drivers/scsi/bfa/bfad_intr.c @@ -0,0 +1,214 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include "bfad_drv.h" +#include "bfad_trcmod.h" + +BFA_TRC_FILE(LDRV, INTR); + +/** + * bfa_isr BFA driver interrupt functions + */ +irqreturn_t bfad_intx(int irq, void *dev_id); +static int msix_disable; +module_param(msix_disable, int, S_IRUGO | S_IWUSR); +/** + * Line based interrupt handler. 
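+ *
+ * Completions are dequeued under bfad_lock but processed with the
+ * lock dropped, then handed back to BFA for freeing; the MSI-X
+ * handler below follows the same pattern.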
+ */
+irqreturn_t
+bfad_intx(int irq, void *dev_id)
+{
+	struct bfad_s *bfad = dev_id;
+	struct list_head doneq;
+	unsigned long flags;
+	bfa_boolean_t rc;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+	rc = bfa_intx(&bfad->bfa);
+	if (!rc) {
+		spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+		return IRQ_NONE;
+	}
+
+	bfa_comp_deq(&bfad->bfa, &doneq);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	if (!list_empty(&doneq)) {
+		bfa_comp_process(&bfad->bfa, &doneq);
+
+		spin_lock_irqsave(&bfad->bfad_lock, flags);
+		bfa_comp_free(&bfad->bfa, &doneq);
+		spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+		bfa_trc_fp(bfad, irq);
+	}
+
+	return IRQ_HANDLED;
+
+}
+
+static irqreturn_t
+bfad_msix(int irq, void *dev_id)
+{
+	struct bfad_msix_s *vec = dev_id;
+	struct bfad_s *bfad = vec->bfad;
+	struct list_head doneq;
+	unsigned long flags;
+
+	spin_lock_irqsave(&bfad->bfad_lock, flags);
+
+	bfa_msix(&bfad->bfa, vec->msix.entry);
+	bfa_comp_deq(&bfad->bfa, &doneq);
+	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+
+	if (!list_empty(&doneq)) {
+		bfa_comp_process(&bfad->bfa, &doneq);
+
+		spin_lock_irqsave(&bfad->bfad_lock, flags);
+		bfa_comp_free(&bfad->bfa, &doneq);
+		spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+	}
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * Initialize the MSIX entry table.
+ */
+static void
+bfad_init_msix_entry(struct bfad_s *bfad, struct msix_entry *msix_entries,
+		     int mask, int max_bit)
+{
+	int i;
+	int match = 0x00000001;
+
+	for (i = 0, bfad->nvec = 0; i < MAX_MSIX_ENTRY; i++) {
+		if (mask & match) {
+			bfad->msix_tab[bfad->nvec].msix.entry = i;
+			bfad->msix_tab[bfad->nvec].bfad = bfad;
+			msix_entries[bfad->nvec].entry = i;
+			bfad->nvec++;
+		}
+
+		match <<= 1;
+	}
+
+}
+
+int
+bfad_install_msix_handler(struct bfad_s *bfad)
+{
+	int i, error = 0;
+
+	for (i = 0; i < bfad->nvec; i++) {
+		error = request_irq(bfad->msix_tab[i].msix.vector,
+				    (irq_handler_t) bfad_msix, 0,
+				    BFAD_DRIVER_NAME, &bfad->msix_tab[i]);
+		bfa_trc(bfad, i);
+		bfa_trc(bfad, bfad->msix_tab[i].msix.vector);
+		if (error) {
+			int j;
+
+			for (j = 0; j < i; j++)
+				free_irq(bfad->msix_tab[j].msix.vector,
+					 &bfad->msix_tab[j]);
+
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Setup MSIX based interrupt.
+ */
+int
+bfad_setup_intr(struct bfad_s *bfad)
+{
+	int error = 0;
+	u32 mask = 0, i, num_bit = 0, max_bit = 0;
+	struct msix_entry msix_entries[MAX_MSIX_ENTRY];
+
+	/* Call BFA to get the msix map for this PCI function. */
+	bfa_msix_getvecs(&bfad->bfa, &mask, &num_bit, &max_bit);
+
+	/* Set up the msix entry table */
+	bfad_init_msix_entry(bfad, msix_entries, mask, max_bit);
+
+	if (!msix_disable) {
+		error = pci_enable_msix(bfad->pcidev, msix_entries, bfad->nvec);
+		if (error) {
+			/*
+			 * Only 'error' number of vectors is available.
+			 * We don't have a mechanism to map multiple
+			 * interrupts into one vector, so even if we
+			 * could request fewer vectors, we don't
+			 * know how to associate interrupt events to
+			 * vectors. Linux doesn't duplicate vectors
+			 * in the MSIX table for this case.
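+			 *
+			 * Retrying pci_enable_msix() with the returned
+			 * vector count would be a hypothetical alternative;
+			 * this driver simply falls back to the line-based
+			 * (INTx) handler below instead.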
+ */
+
+			printk(KERN_WARNING "bfad%d: "
+			       "pci_enable_msix failed (%d),"
+			       " use line based.\n", bfad->inst_no, error);
+
+			goto line_based;
+		}
+
+		/* Save the vectors */
+		for (i = 0; i < bfad->nvec; i++) {
+			bfa_trc(bfad, msix_entries[i].vector);
+			bfad->msix_tab[i].msix.vector = msix_entries[i].vector;
+		}
+
+		bfa_msix_init(&bfad->bfa, bfad->nvec);
+
+		bfad->bfad_flags |= BFAD_MSIX_ON;
+
+		return error;
+	}
+
+line_based:
+	error = 0;
+	if (request_irq
+	    (bfad->pcidev->irq, (irq_handler_t) bfad_intx, BFAD_IRQ_FLAGS,
+	     BFAD_DRIVER_NAME, bfad) != 0) {
+		/* Enabling the interrupt handler failed */
+		return 1;
+	}
+
+	return error;
+}
+
+void
+bfad_remove_intr(struct bfad_s *bfad)
+{
+	int i;
+
+	if (bfad->bfad_flags & BFAD_MSIX_ON) {
+		for (i = 0; i < bfad->nvec; i++)
+			free_irq(bfad->msix_tab[i].msix.vector,
+				 &bfad->msix_tab[i]);
+
+		pci_disable_msix(bfad->pcidev);
+		bfad->bfad_flags &= ~BFAD_MSIX_ON;
+	} else {
+		free_irq(bfad->pcidev->irq, bfad);
+	}
+}
+
+
diff --git a/drivers/scsi/bfa/bfad_ipfc.h b/drivers/scsi/bfa/bfad_ipfc.h
new file mode 100644
index 000000000000..718bc5227671
--- /dev/null
+++ b/drivers/scsi/bfa/bfad_ipfc.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __BFA_DRV_IPFC_H__
+#define __BFA_DRV_IPFC_H__
+
+
+#define IPFC_NAME ""
+
+#define bfad_ipfc_module_init(x) do {} while (0)
+#define bfad_ipfc_module_exit(x) do {} while (0)
+#define bfad_ipfc_probe(x) do {} while (0)
+#define bfad_ipfc_probe_undo(x) do {} while (0)
+#define bfad_ipfc_port_config(x, y) BFA_STATUS_OK
+#define bfad_ipfc_port_unconfig(x, y) do {} while (0)
+
+#define bfad_ipfc_probe_post(x) do {} while (0)
+#define bfad_ipfc_port_new(x, y, z) BFA_STATUS_OK
+#define bfad_ipfc_port_delete(x, y) do {} while (0)
+#define bfad_ipfc_port_online(x, y) do {} while (0)
+#define bfad_ipfc_port_offline(x, y) do {} while (0)
+
+#define bfad_ip_get_attr(x) BFA_STATUS_FAILED
+#define bfad_ip_reset_drv_stats(x) BFA_STATUS_FAILED
+#define bfad_ip_get_drv_stats(x, y) BFA_STATUS_FAILED
+#define bfad_ip_enable_ipfc(x, y, z) BFA_STATUS_FAILED
+
+
+#endif
diff --git a/drivers/scsi/bfa/bfad_os.c b/drivers/scsi/bfa/bfad_os.c
new file mode 100644
index 000000000000..faf47b4f1a38
--- /dev/null
+++ b/drivers/scsi/bfa/bfad_os.c
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * bfad_os.c Linux driver OS specific calls.
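+ *
+ * These are thin wrappers (time of day, formatted printing) that keep
+ * the OS-independent BFA core from calling Linux APIs directly.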
+ */
+
+#include "bfa_os_inc.h"
+#include "bfad_drv.h"
+
+void
+bfa_os_gettimeofday(struct bfa_timeval_s *tv)
+{
+	struct timeval tmp_tv;
+
+	do_gettimeofday(&tmp_tv);
+	tv->tv_sec = (u32) tmp_tv.tv_sec;
+	tv->tv_usec = (u32) tmp_tv.tv_usec;
+}
+
+void
+bfa_os_printf(struct bfa_log_mod_s *log_mod, u32 msg_id,
+	      const char *fmt, ...)
+{
+	va_list ap;
+	#define BFA_STRING_256	256
+	char tmp[BFA_STRING_256];
+
+	va_start(ap, fmt);
+	vsnprintf(tmp, BFA_STRING_256, fmt, ap);
+	va_end(ap);
+
+	printk("%s", tmp);
+}
+
+
diff --git a/drivers/scsi/bfa/bfad_tm.h b/drivers/scsi/bfa/bfad_tm.h
new file mode 100644
index 000000000000..4901b1b7df02
--- /dev/null
+++ b/drivers/scsi/bfa/bfad_tm.h
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/*
+ * Brocade Fibre Channel HBA Linux Target Mode Driver
+ */
+
+/**
+ * tm/dummy/bfad_tm.h BFA callback dummy header file for BFA Linux target mode PCI interface module.
+ */
+
+#ifndef __BFAD_TM_H__
+#define __BFAD_TM_H__
+
+#include
+
+#define FCPT_NAME ""
+
+/*
+ * Called from base Linux driver on (De)Init events
+ */
+
+/* attach tgt template with scst */
+#define bfad_tm_module_init() do {} while (0)
+
+/* detach/release tgt template */
+#define bfad_tm_module_exit() do {} while (0)
+
+#define bfad_tm_probe(x) do {} while (0)
+#define bfad_tm_probe_undo(x) do {} while (0)
+#define bfad_tm_probe_post(x) do {} while (0)
+
+/*
+ * Called by base Linux driver but triggered by BFA FCS on config events
+ */
+#define bfad_tm_port_new(x, y) BFA_STATUS_OK
+#define bfad_tm_port_delete(x, y) do {} while (0)
+
+/*
+ * Called by base Linux driver but triggered by BFA FCS on PLOGI/O events
+ */
+#define bfad_tm_port_online(x, y) do {} while (0)
+#define bfad_tm_port_offline(x, y) do {} while (0)
+
+#endif
diff --git a/drivers/scsi/bfa/bfad_trcmod.h b/drivers/scsi/bfa/bfad_trcmod.h
new file mode 100644
index 000000000000..2827b2acd041
--- /dev/null
+++ b/drivers/scsi/bfa/bfad_trcmod.h
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * bfad_trcmod.h Linux driver trace modules
+ */
+
+
+#ifndef __BFAD_TRCMOD_H__
+#define __BFAD_TRCMOD_H__
+
+#include
+
+/*
+ * !!! Only append to the enums defined here to avoid any versioning
+ * !!!
needed between trace utility and driver version + */ +enum { + /* 2.6 Driver */ + BFA_TRC_LDRV_BFAD = 1, + BFA_TRC_LDRV_BFAD_2_6 = 2, + BFA_TRC_LDRV_BFAD_2_6_9 = 3, + BFA_TRC_LDRV_BFAD_2_6_10 = 4, + BFA_TRC_LDRV_INTR = 5, + BFA_TRC_LDRV_IOCTL = 6, + BFA_TRC_LDRV_OS = 7, + BFA_TRC_LDRV_IM = 8, + BFA_TRC_LDRV_IM_2_6 = 9, + BFA_TRC_LDRV_IM_2_6_9 = 10, + BFA_TRC_LDRV_IM_2_6_10 = 11, + BFA_TRC_LDRV_TM = 12, + BFA_TRC_LDRV_IPFC = 13, + BFA_TRC_LDRV_IM_2_4 = 14, + BFA_TRC_LDRV_IM_VMW = 15, + BFA_TRC_LDRV_IM_LT_2_6_10 = 16, +}; + +#endif /* __BFAD_TRCMOD_H__ */ diff --git a/drivers/scsi/bfa/fab.c b/drivers/scsi/bfa/fab.c new file mode 100644 index 000000000000..7e3a4d5d7bb4 --- /dev/null +++ b/drivers/scsi/bfa/fab.c @@ -0,0 +1,62 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#include +#include +#include "fcs_lport.h" +#include "fcs_rport.h" +#include "lport_priv.h" + +/** + * fab.c port fab implementation. + */ + +/** + * bfa_fcs_port_fab_public port fab public functions + */ + +/** + * Called by port to initialize fabric services of the base port. + */ +void +bfa_fcs_port_fab_init(struct bfa_fcs_port_s *port) +{ + bfa_fcs_port_ns_init(port); + bfa_fcs_port_scn_init(port); + bfa_fcs_port_ms_init(port); +} + +/** + * Called by port to notify transition to online state. + */ +void +bfa_fcs_port_fab_online(struct bfa_fcs_port_s *port) +{ + bfa_fcs_port_ns_online(port); + bfa_fcs_port_scn_online(port); +} + +/** + * Called by port to notify transition to offline state. + */ +void +bfa_fcs_port_fab_offline(struct bfa_fcs_port_s *port) +{ + bfa_fcs_port_ns_offline(port); + bfa_fcs_port_scn_offline(port); + bfa_fcs_port_ms_offline(port); +} diff --git a/drivers/scsi/bfa/fabric.c b/drivers/scsi/bfa/fabric.c new file mode 100644 index 000000000000..a8b14c47b009 --- /dev/null +++ b/drivers/scsi/bfa/fabric.c @@ -0,0 +1,1278 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * fabric.c Fabric module implementation. 
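+ *
+ * The fabric is driven by the state machine below (sm_uninit through
+ * sm_deleting): link-up triggers FLOGI, an F-port reply leads to
+ * auth/online, and an N-port reply leads to the nofabric (N2N) state.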
+ */
+
+#include "fcs_fabric.h"
+#include "fcs_lport.h"
+#include "fcs_vport.h"
+#include "fcs_trcmod.h"
+#include "fcs_fcxp.h"
+#include "fcs_auth.h"
+#include "fcs.h"
+#include "fcbuild.h"
+#include
+#include
+#include
+
+BFA_TRC_FILE(FCS, FABRIC);
+
+#define BFA_FCS_FABRIC_RETRY_DELAY	(2000)	/* Milliseconds */
+#define BFA_FCS_FABRIC_CLEANUP_DELAY	(10000)	/* Milliseconds */
+
+#define bfa_fcs_fabric_set_opertype(__fabric) do {		\
+	if (bfa_pport_get_topology((__fabric)->fcs->bfa)	\
+	    == BFA_PPORT_TOPOLOGY_P2P)				\
+		(__fabric)->oper_type = BFA_PPORT_TYPE_NPORT;	\
+	else							\
+		(__fabric)->oper_type = BFA_PPORT_TYPE_NLPORT;	\
+} while (0)
+
+/*
+ * forward declarations
+ */
+static void bfa_fcs_fabric_init(struct bfa_fcs_fabric_s *fabric);
+static void bfa_fcs_fabric_login(struct bfa_fcs_fabric_s *fabric);
+static void bfa_fcs_fabric_notify_online(struct bfa_fcs_fabric_s *fabric);
+static void bfa_fcs_fabric_notify_offline(struct bfa_fcs_fabric_s *fabric);
+static void bfa_fcs_fabric_delay(void *cbarg);
+static void bfa_fcs_fabric_delete(struct bfa_fcs_fabric_s *fabric);
+static void bfa_fcs_fabric_delete_comp(void *cbarg);
+static void bfa_fcs_fabric_process_uf(struct bfa_fcs_fabric_s *fabric,
+				      struct fchs_s *fchs, u16 len);
+static void bfa_fcs_fabric_process_flogi(struct bfa_fcs_fabric_s *fabric,
+					 struct fchs_s *fchs, u16 len);
+static void bfa_fcs_fabric_send_flogi_acc(struct bfa_fcs_fabric_s *fabric);
+static void bfa_fcs_fabric_flogiacc_comp(void *fcsarg,
+					 struct bfa_fcxp_s *fcxp,
+					 void *cbarg, bfa_status_t status,
+					 u32 rsp_len,
+					 u32 resid_len,
+					 struct fchs_s *rspfchs);
+/**
+ * fcs_fabric_sm fabric state machine functions
+ */
+
+/**
+ * Fabric state machine events
+ */
+enum bfa_fcs_fabric_event {
+	BFA_FCS_FABRIC_SM_CREATE = 1,	/* fabric create from driver */
+	BFA_FCS_FABRIC_SM_DELETE = 2,	/* fabric delete from driver */
+	BFA_FCS_FABRIC_SM_LINK_DOWN = 3,	/* link down from port */
+	BFA_FCS_FABRIC_SM_LINK_UP = 4,	/* link up from port */
+	BFA_FCS_FABRIC_SM_CONT_OP = 5,	/* continue op from flogi/auth */
+	BFA_FCS_FABRIC_SM_RETRY_OP = 6,	/* retry op from flogi/auth */
+	BFA_FCS_FABRIC_SM_NO_FABRIC = 7,	/* no fabric from flogi/auth
+						 */
+	BFA_FCS_FABRIC_SM_PERF_EVFP = 8,	/* perform EVFP from
+						 * flogi/auth */
+	BFA_FCS_FABRIC_SM_ISOLATE = 9,	/* isolate from EVFP processing */
+	BFA_FCS_FABRIC_SM_NO_TAGGING = 10,/* no VFT tagging from EVFP */
+	BFA_FCS_FABRIC_SM_DELAYED = 11,	/* timeout delay event */
+	BFA_FCS_FABRIC_SM_AUTH_FAILED = 12,	/* authentication failed */
+	BFA_FCS_FABRIC_SM_AUTH_SUCCESS = 13,	/* authentication successful
+						 */
+	BFA_FCS_FABRIC_SM_DELCOMP = 14,	/* all vports deleted event */
+	BFA_FCS_FABRIC_SM_LOOPBACK = 15,	/* Received our own FLOGI */
+	BFA_FCS_FABRIC_SM_START = 16,	/* fabric start from driver */
+};
+
+static void bfa_fcs_fabric_sm_uninit(struct bfa_fcs_fabric_s *fabric,
+				     enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_created(struct bfa_fcs_fabric_s *fabric,
+				      enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_linkdown(struct bfa_fcs_fabric_s *fabric,
+				       enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_flogi(struct bfa_fcs_fabric_s *fabric,
+				    enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_flogi_retry(struct bfa_fcs_fabric_s *fabric,
+					  enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_auth(struct bfa_fcs_fabric_s *fabric,
+				   enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_auth_failed(struct bfa_fcs_fabric_s *fabric,
+					  enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_loopback(struct bfa_fcs_fabric_s *fabric,
+				       enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_nofabric(struct bfa_fcs_fabric_s *fabric,
+				       enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_online(struct bfa_fcs_fabric_s *fabric,
+				     enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_evfp(struct bfa_fcs_fabric_s *fabric,
+				   enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_evfp_done(struct bfa_fcs_fabric_s *fabric,
+					enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_isolated(struct bfa_fcs_fabric_s *fabric,
+					enum bfa_fcs_fabric_event event);
+static void bfa_fcs_fabric_sm_deleting(struct bfa_fcs_fabric_s *fabric,
+					enum bfa_fcs_fabric_event event);
+/**
+ * Beginning state before fabric creation.
+ */
+static void
+bfa_fcs_fabric_sm_uninit(struct bfa_fcs_fabric_s *fabric,
+			 enum bfa_fcs_fabric_event event)
+{
+	bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn);
+	bfa_trc(fabric->fcs, event);
+
+	switch (event) {
+	case BFA_FCS_FABRIC_SM_CREATE:
+		bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_created);
+		bfa_fcs_fabric_init(fabric);
+		bfa_fcs_lport_init(&fabric->bport, fabric->fcs, FC_VF_ID_NULL,
+				   &fabric->bport.port_cfg, NULL);
+		break;
+
+	case BFA_FCS_FABRIC_SM_LINK_UP:
+	case BFA_FCS_FABRIC_SM_LINK_DOWN:
+		break;
+
+	default:
+		bfa_sm_fault(fabric->fcs, event);
+	}
+}
+
+/**
+ * Fabric is created, awaiting the start event from the driver.
+ */
+static void
+bfa_fcs_fabric_sm_created(struct bfa_fcs_fabric_s *fabric,
+			  enum bfa_fcs_fabric_event event)
+{
+	bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn);
+	bfa_trc(fabric->fcs, event);
+
+	switch (event) {
+	case BFA_FCS_FABRIC_SM_START:
+		if (bfa_pport_is_linkup(fabric->fcs->bfa)) {
+			bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_flogi);
+			bfa_fcs_fabric_login(fabric);
+		} else
+			bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown);
+		break;
+
+	case BFA_FCS_FABRIC_SM_LINK_UP:
+	case BFA_FCS_FABRIC_SM_LINK_DOWN:
+		break;
+
+	case BFA_FCS_FABRIC_SM_DELETE:
+		bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_uninit);
+		bfa_fcs_modexit_comp(fabric->fcs);
+		break;
+
+	default:
+		bfa_sm_fault(fabric->fcs, event);
+	}
+}
+
+/**
+ * Link is down, awaiting LINK UP event from port. This is also the
+ * first state at fabric creation.
+ */
+static void
+bfa_fcs_fabric_sm_linkdown(struct bfa_fcs_fabric_s *fabric,
+			   enum bfa_fcs_fabric_event event)
+{
+	bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn);
+	bfa_trc(fabric->fcs, event);
+
+	switch (event) {
+	case BFA_FCS_FABRIC_SM_LINK_UP:
+		bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_flogi);
+		bfa_fcs_fabric_login(fabric);
+		break;
+
+	case BFA_FCS_FABRIC_SM_RETRY_OP:
+		break;
+
+	case BFA_FCS_FABRIC_SM_DELETE:
+		bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting);
+		bfa_fcs_fabric_delete(fabric);
+		break;
+
+	default:
+		bfa_sm_fault(fabric->fcs, event);
+	}
+}
+
+/**
+ * FLOGI is in progress, awaiting FLOGI reply.
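+ *
+ * CONT_OP means an F-port answered (switched fabric), NO_FABRIC means
+ * the peer is another N-port, and RETRY_OP re-arms FLOGI after
+ * BFA_FCS_FABRIC_RETRY_DELAY milliseconds via the delay timer.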
+ */ +static void +bfa_fcs_fabric_sm_flogi(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_CONT_OP: + + bfa_pport_set_tx_bbcredit(fabric->fcs->bfa, fabric->bb_credit); + fabric->fab_type = BFA_FCS_FABRIC_SWITCHED; + + if (fabric->auth_reqd && fabric->is_auth) { + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_auth); + bfa_trc(fabric->fcs, event); + } else { + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_online); + bfa_fcs_fabric_notify_online(fabric); + } + break; + + case BFA_FCS_FABRIC_SM_RETRY_OP: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_flogi_retry); + bfa_timer_start(fabric->fcs->bfa, &fabric->delay_timer, + bfa_fcs_fabric_delay, fabric, + BFA_FCS_FABRIC_RETRY_DELAY); + break; + + case BFA_FCS_FABRIC_SM_LOOPBACK: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_loopback); + bfa_lps_discard(fabric->lps); + bfa_fcs_fabric_set_opertype(fabric); + break; + + case BFA_FCS_FABRIC_SM_NO_FABRIC: + fabric->fab_type = BFA_FCS_FABRIC_N2N; + bfa_pport_set_tx_bbcredit(fabric->fcs->bfa, fabric->bb_credit); + bfa_fcs_fabric_notify_online(fabric); + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_nofabric); + break; + + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown); + bfa_lps_discard(fabric->lps); + break; + + case BFA_FCS_FABRIC_SM_DELETE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting); + bfa_lps_discard(fabric->lps); + bfa_fcs_fabric_delete(fabric); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + + +static void +bfa_fcs_fabric_sm_flogi_retry(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_DELAYED: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_flogi); + bfa_fcs_fabric_login(fabric); + break; + + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown); + bfa_timer_stop(&fabric->delay_timer); + break; + + case BFA_FCS_FABRIC_SM_DELETE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting); + bfa_timer_stop(&fabric->delay_timer); + bfa_fcs_fabric_delete(fabric); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + +/** + * Authentication is in progress, awaiting authentication results. 
+ */ +static void +bfa_fcs_fabric_sm_auth(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_AUTH_FAILED: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_auth_failed); + bfa_lps_discard(fabric->lps); + break; + + case BFA_FCS_FABRIC_SM_AUTH_SUCCESS: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_online); + bfa_fcs_fabric_notify_online(fabric); + break; + + case BFA_FCS_FABRIC_SM_PERF_EVFP: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_evfp); + break; + + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown); + bfa_lps_discard(fabric->lps); + break; + + case BFA_FCS_FABRIC_SM_DELETE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting); + bfa_fcs_fabric_delete(fabric); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + +/** + * Authentication failed + */ +static void +bfa_fcs_fabric_sm_auth_failed(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown); + bfa_fcs_fabric_notify_offline(fabric); + break; + + case BFA_FCS_FABRIC_SM_DELETE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting); + bfa_fcs_fabric_delete(fabric); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + +/** + * Port is in loopback mode. + */ +static void +bfa_fcs_fabric_sm_loopback(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown); + bfa_fcs_fabric_notify_offline(fabric); + break; + + case BFA_FCS_FABRIC_SM_DELETE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting); + bfa_fcs_fabric_delete(fabric); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + +/** + * There is no attached fabric - private loop or NPort-to-NPort topology. + */ +static void +bfa_fcs_fabric_sm_nofabric(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown); + bfa_lps_discard(fabric->lps); + bfa_fcs_fabric_notify_offline(fabric); + break; + + case BFA_FCS_FABRIC_SM_DELETE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting); + bfa_fcs_fabric_delete(fabric); + break; + + case BFA_FCS_FABRIC_SM_NO_FABRIC: + bfa_trc(fabric->fcs, fabric->bb_credit); + bfa_pport_set_tx_bbcredit(fabric->fcs->bfa, fabric->bb_credit); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + +/** + * Fabric is online - normal operating state. 
+ */ +static void +bfa_fcs_fabric_sm_online(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_linkdown); + bfa_lps_discard(fabric->lps); + bfa_fcs_fabric_notify_offline(fabric); + break; + + case BFA_FCS_FABRIC_SM_DELETE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_deleting); + bfa_fcs_fabric_delete(fabric); + break; + + case BFA_FCS_FABRIC_SM_AUTH_FAILED: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_auth_failed); + bfa_lps_discard(fabric->lps); + break; + + case BFA_FCS_FABRIC_SM_AUTH_SUCCESS: + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + +/** + * Exchanging virtual fabric parameters. + */ +static void +bfa_fcs_fabric_sm_evfp(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_CONT_OP: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_evfp_done); + break; + + case BFA_FCS_FABRIC_SM_ISOLATE: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_isolated); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + +/** + * EVFP exchange complete and VFT tagging is enabled. + */ +static void +bfa_fcs_fabric_sm_evfp_done(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); +} + +/** + * Port is isolated after EVFP exchange due to VF_ID mismatch (N and F). + */ +static void +bfa_fcs_fabric_sm_isolated(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + bfa_log(fabric->fcs->logm, BFA_LOG_FCS_FABRIC_ISOLATED, + fabric->bport.port_cfg.pwwn, fabric->fcs->port_vfid, + fabric->event_arg.swp_vfid); +} + +/** + * Fabric is being deleted, awaiting vport delete completions. + */ +static void +bfa_fcs_fabric_sm_deleting(struct bfa_fcs_fabric_s *fabric, + enum bfa_fcs_fabric_event event) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, event); + + switch (event) { + case BFA_FCS_FABRIC_SM_DELCOMP: + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_uninit); + bfa_fcs_modexit_comp(fabric->fcs); + break; + + case BFA_FCS_FABRIC_SM_LINK_UP: + break; + + case BFA_FCS_FABRIC_SM_LINK_DOWN: + bfa_fcs_fabric_notify_offline(fabric); + break; + + default: + bfa_sm_fault(fabric->fcs, event); + } +} + + + +/** + * fcs_fabric_private fabric private functions + */ + +static void +bfa_fcs_fabric_init(struct bfa_fcs_fabric_s *fabric) +{ + struct bfa_port_cfg_s *port_cfg = &fabric->bport.port_cfg; + + port_cfg->roles = BFA_PORT_ROLE_FCP_IM; + port_cfg->nwwn = bfa_ioc_get_nwwn(&fabric->fcs->bfa->ioc); + port_cfg->pwwn = bfa_ioc_get_pwwn(&fabric->fcs->bfa->ioc); +} + +/** + * Port Symbolic Name Creation for base port. 
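+ *
+ * The symbolic name is assembled as model | driver version | host
+ * machine name | OS info, each capped at its BFA_FCS_PORT_SYMBNAME_*
+ * size; e.g. (hypothetical values, assuming the separator is " | "):
+ *
+ *	Brocade-825 | 2.0.0.0 | myhost | Linux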
+ */ +void +bfa_fcs_fabric_psymb_init(struct bfa_fcs_fabric_s *fabric) +{ + struct bfa_port_cfg_s *port_cfg = &fabric->bport.port_cfg; + struct bfa_adapter_attr_s adapter_attr; + struct bfa_fcs_driver_info_s *driver_info = &fabric->fcs->driver_info; + + bfa_os_memset((void *)&adapter_attr, 0, + sizeof(struct bfa_adapter_attr_s)); + bfa_ioc_get_adapter_attr(&fabric->fcs->bfa->ioc, &adapter_attr); + + /* + * Model name/number + */ + strncpy((char *)&port_cfg->sym_name, adapter_attr.model, + BFA_FCS_PORT_SYMBNAME_MODEL_SZ); + strncat((char *)&port_cfg->sym_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR, + sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR)); + + /* + * Driver Version + */ + strncat((char *)&port_cfg->sym_name, (char *)driver_info->version, + BFA_FCS_PORT_SYMBNAME_VERSION_SZ); + strncat((char *)&port_cfg->sym_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR, + sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR)); + + /* + * Host machine name + */ + strncat((char *)&port_cfg->sym_name, + (char *)driver_info->host_machine_name, + BFA_FCS_PORT_SYMBNAME_MACHINENAME_SZ); + strncat((char *)&port_cfg->sym_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR, + sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR)); + + /* + * Host OS Info : + * If OS Patch Info is not there, do not truncate any bytes from the + * OS name string and instead copy the entire OS info string (64 bytes). + */ + if (driver_info->host_os_patch[0] == '\0') { + strncat((char *)&port_cfg->sym_name, + (char *)driver_info->host_os_name, BFA_FCS_OS_STR_LEN); + strncat((char *)&port_cfg->sym_name, + BFA_FCS_PORT_SYMBNAME_SEPARATOR, + sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR)); + } else { + strncat((char *)&port_cfg->sym_name, + (char *)driver_info->host_os_name, + BFA_FCS_PORT_SYMBNAME_OSINFO_SZ); + strncat((char *)&port_cfg->sym_name, + BFA_FCS_PORT_SYMBNAME_SEPARATOR, + sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR)); + + /* + * Append host OS Patch Info + */ + strncat((char *)&port_cfg->sym_name, + (char *)driver_info->host_os_patch, + BFA_FCS_PORT_SYMBNAME_OSPATCH_SZ); + } + + /* + * null terminate + */ + port_cfg->sym_name.symname[BFA_SYMNAME_MAXLEN - 1] = 0; +} + +/** + * bfa lps login completion callback + */ +void +bfa_cb_lps_flogi_comp(void *bfad, void *uarg, bfa_status_t status) +{ + struct bfa_fcs_fabric_s *fabric = uarg; + + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_trc(fabric->fcs, status); + + switch (status) { + case BFA_STATUS_OK: + fabric->stats.flogi_accepts++; + break; + + case BFA_STATUS_INVALID_MAC: + /* + * Only for CNA + */ + fabric->stats.flogi_acc_err++; + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_RETRY_OP); + + return; + + case BFA_STATUS_EPROTOCOL: + switch (bfa_lps_get_extstatus(fabric->lps)) { + case BFA_EPROTO_BAD_ACCEPT: + fabric->stats.flogi_acc_err++; + break; + + case BFA_EPROTO_UNKNOWN_RSP: + fabric->stats.flogi_unknown_rsp++; + break; + + default: + break; + } + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_RETRY_OP); + + return; + + case BFA_STATUS_FABRIC_RJT: + fabric->stats.flogi_rejects++; + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_RETRY_OP); + return; + + default: + fabric->stats.flogi_rsp_err++; + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_RETRY_OP); + return; + } + + fabric->bb_credit = bfa_lps_get_peer_bbcredit(fabric->lps); + bfa_trc(fabric->fcs, fabric->bb_credit); + + if (!bfa_lps_is_brcd_fabric(fabric->lps)) + fabric->fabric_name = bfa_lps_get_peer_nwwn(fabric->lps); + + /* + * Check port type. It should be 1 = F-port. 
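+ * Otherwise the peer is another N-port (direct attach) and the
+ * NO_FABRIC event below moves the fabric into N2N mode.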
+ */ + if (bfa_lps_is_fport(fabric->lps)) { + fabric->bport.pid = bfa_lps_get_pid(fabric->lps); + fabric->is_npiv = bfa_lps_is_npiv_en(fabric->lps); + fabric->is_auth = bfa_lps_is_authreq(fabric->lps); + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_CONT_OP); + } else { + /* + * Nport-2-Nport direct attached + */ + fabric->bport.port_topo.pn2n.rem_port_wwn = + bfa_lps_get_peer_pwwn(fabric->lps); + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_NO_FABRIC); + } + + bfa_trc(fabric->fcs, fabric->bport.pid); + bfa_trc(fabric->fcs, fabric->is_npiv); + bfa_trc(fabric->fcs, fabric->is_auth); +} + +/** + * Allocate and send FLOGI. + */ +static void +bfa_fcs_fabric_login(struct bfa_fcs_fabric_s *fabric) +{ + struct bfa_s *bfa = fabric->fcs->bfa; + struct bfa_port_cfg_s *pcfg = &fabric->bport.port_cfg; + u8 alpa = 0; + + if (bfa_pport_get_topology(bfa) == BFA_PPORT_TOPOLOGY_LOOP) + alpa = bfa_pport_get_myalpa(bfa); + + bfa_lps_flogi(fabric->lps, fabric, alpa, bfa_pport_get_maxfrsize(bfa), + pcfg->pwwn, pcfg->nwwn, fabric->auth_reqd); + + fabric->stats.flogi_sent++; +} + +static void +bfa_fcs_fabric_notify_online(struct bfa_fcs_fabric_s *fabric) +{ + struct bfa_fcs_vport_s *vport; + struct list_head *qe, *qen; + + bfa_trc(fabric->fcs, fabric->fabric_name); + + bfa_fcs_fabric_set_opertype(fabric); + fabric->stats.fabric_onlines++; + + /** + * notify online event to base and then virtual ports + */ + bfa_fcs_port_online(&fabric->bport); + + list_for_each_safe(qe, qen, &fabric->vport_q) { + vport = (struct bfa_fcs_vport_s *)qe; + bfa_fcs_vport_online(vport); + } +} + +static void +bfa_fcs_fabric_notify_offline(struct bfa_fcs_fabric_s *fabric) +{ + struct bfa_fcs_vport_s *vport; + struct list_head *qe, *qen; + + bfa_trc(fabric->fcs, fabric->fabric_name); + fabric->stats.fabric_offlines++; + + /** + * notify offline event first to vports and then base port. + */ + list_for_each_safe(qe, qen, &fabric->vport_q) { + vport = (struct bfa_fcs_vport_s *)qe; + bfa_fcs_vport_offline(vport); + } + + bfa_fcs_port_offline(&fabric->bport); + + fabric->fabric_name = 0; + fabric->fabric_ip_addr[0] = 0; +} + +static void +bfa_fcs_fabric_delay(void *cbarg) +{ + struct bfa_fcs_fabric_s *fabric = cbarg; + + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_DELAYED); +} + +/** + * Delete all vports and wait for vport delete completions. + */ +static void +bfa_fcs_fabric_delete(struct bfa_fcs_fabric_s *fabric) +{ + struct bfa_fcs_vport_s *vport; + struct list_head *qe, *qen; + + list_for_each_safe(qe, qen, &fabric->vport_q) { + vport = (struct bfa_fcs_vport_s *)qe; + bfa_fcs_vport_delete(vport); + } + + bfa_fcs_port_delete(&fabric->bport); + bfa_wc_wait(&fabric->wc); +} + +static void +bfa_fcs_fabric_delete_comp(void *cbarg) +{ + struct bfa_fcs_fabric_s *fabric = cbarg; + + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_DELCOMP); +} + + + +/** + * fcs_fabric_public fabric public functions + */ + +/** + * Module initialization + */ +void +bfa_fcs_fabric_modinit(struct bfa_fcs_s *fcs) +{ + struct bfa_fcs_fabric_s *fabric; + + fabric = &fcs->fabric; + bfa_os_memset(fabric, 0, sizeof(struct bfa_fcs_fabric_s)); + + /** + * Initialize base fabric. + */ + fabric->fcs = fcs; + INIT_LIST_HEAD(&fabric->vport_q); + INIT_LIST_HEAD(&fabric->vf_q); + fabric->lps = bfa_lps_alloc(fcs->bfa); + bfa_assert(fabric->lps); + + /** + * Initialize fabric delete completion handler. Fabric deletion is complete + * when the last vport delete is complete. 
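+ * The waiting counter (bfa_wc) is raised once here for the base port
+ * and once per vport in bfa_fcs_fabric_addvport(); each delete
+ * completion lowers it until the DELCOMP event fires.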
+ */ + bfa_wc_init(&fabric->wc, bfa_fcs_fabric_delete_comp, fabric); + bfa_wc_up(&fabric->wc); /* For the base port */ + + bfa_sm_set_state(fabric, bfa_fcs_fabric_sm_uninit); + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_CREATE); + bfa_trc(fcs, 0); +} + +/** + * Module cleanup + */ +void +bfa_fcs_fabric_modexit(struct bfa_fcs_s *fcs) +{ + struct bfa_fcs_fabric_s *fabric; + + bfa_trc(fcs, 0); + + /** + * Cleanup base fabric. + */ + fabric = &fcs->fabric; + bfa_lps_delete(fabric->lps); + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_DELETE); +} + +/** + * Fabric module start -- kick starts FCS actions + */ +void +bfa_fcs_fabric_modstart(struct bfa_fcs_s *fcs) +{ + struct bfa_fcs_fabric_s *fabric; + + bfa_trc(fcs, 0); + fabric = &fcs->fabric; + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_START); +} + +/** + * Suspend fabric activity as part of driver suspend. + */ +void +bfa_fcs_fabric_modsusp(struct bfa_fcs_s *fcs) +{ +} + +bfa_boolean_t +bfa_fcs_fabric_is_loopback(struct bfa_fcs_fabric_s *fabric) +{ + return (bfa_sm_cmp_state(fabric, bfa_fcs_fabric_sm_loopback)); +} + +enum bfa_pport_type +bfa_fcs_fabric_port_type(struct bfa_fcs_fabric_s *fabric) +{ + return fabric->oper_type; +} + +/** + * Link up notification from BFA physical port module. + */ +void +bfa_fcs_fabric_link_up(struct bfa_fcs_fabric_s *fabric) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_LINK_UP); +} + +/** + * Link down notification from BFA physical port module. + */ +void +bfa_fcs_fabric_link_down(struct bfa_fcs_fabric_s *fabric) +{ + bfa_trc(fabric->fcs, fabric->bport.port_cfg.pwwn); + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_LINK_DOWN); +} + +/** + * A child vport is being created in the fabric. + * + * Call from vport module at vport creation. A list of base port and vports + * belonging to a fabric is maintained to propagate link events. + * + * param[in] fabric - Fabric instance. This can be a base fabric or vf. + * param[in] vport - Vport being created. + * + * @return None (always succeeds) + */ +void +bfa_fcs_fabric_addvport(struct bfa_fcs_fabric_s *fabric, + struct bfa_fcs_vport_s *vport) +{ + /** + * - add vport to fabric's vport_q + */ + bfa_trc(fabric->fcs, fabric->vf_id); + + list_add_tail(&vport->qe, &fabric->vport_q); + fabric->num_vports++; + bfa_wc_up(&fabric->wc); +} + +/** + * A child vport is being deleted from fabric. + * + * Vport is being deleted. + */ +void +bfa_fcs_fabric_delvport(struct bfa_fcs_fabric_s *fabric, + struct bfa_fcs_vport_s *vport) +{ + list_del(&vport->qe); + fabric->num_vports--; + bfa_wc_down(&fabric->wc); +} + +/** + * Base port is deleted. + */ +void +bfa_fcs_fabric_port_delete_comp(struct bfa_fcs_fabric_s *fabric) +{ + bfa_wc_down(&fabric->wc); +} + +/** + * Check if fabric is online. + * + * param[in] fabric - Fabric instance. This can be a base fabric or vf. 
+ *
+ * @return TRUE/FALSE
+ */
+int
+bfa_fcs_fabric_is_online(struct bfa_fcs_fabric_s *fabric)
+{
+	return (bfa_sm_cmp_state(fabric, bfa_fcs_fabric_sm_online));
+}
+
+
+bfa_status_t
+bfa_fcs_fabric_addvf(struct bfa_fcs_fabric_s *vf, struct bfa_fcs_s *fcs,
+		struct bfa_port_cfg_s *port_cfg,
+		struct bfad_vf_s *vf_drv)
+{
+	bfa_sm_set_state(vf, bfa_fcs_fabric_sm_uninit);
+	return BFA_STATUS_OK;
+}
+
+/**
+ * Look up a vport within a fabric, given its pwwn.
+ */
+struct bfa_fcs_vport_s *
+bfa_fcs_fabric_vport_lookup(struct bfa_fcs_fabric_s *fabric, wwn_t pwwn)
+{
+	struct bfa_fcs_vport_s *vport;
+	struct list_head *qe;
+
+	list_for_each(qe, &fabric->vport_q) {
+		vport = (struct bfa_fcs_vport_s *)qe;
+		if (bfa_fcs_port_get_pwwn(&vport->lport) == pwwn)
+			return vport;
+	}
+
+	return NULL;
+}
+
+/**
+ * In a given fabric, return the number of lports.
+ *
+ * param[in] fabric - Fabric instance. This can be a base fabric or vf.
+ *
+ * @return : 1 or more.
+ */
+u16
+bfa_fcs_fabric_vport_count(struct bfa_fcs_fabric_s *fabric)
+{
+	return (fabric->num_vports);
+}
+
+/**
+ * Unsolicited frame receive handling.
+ */
+void
+bfa_fcs_fabric_uf_recv(struct bfa_fcs_fabric_s *fabric, struct fchs_s *fchs,
+		       u16 len)
+{
+	u32 pid = fchs->d_id;
+	struct bfa_fcs_vport_s *vport;
+	struct list_head *qe;
+	struct fc_els_cmd_s *els_cmd = (struct fc_els_cmd_s *) (fchs + 1);
+	struct fc_logi_s *flogi = (struct fc_logi_s *) els_cmd;
+
+	bfa_trc(fabric->fcs, len);
+	bfa_trc(fabric->fcs, pid);
+
+	/**
+	 * Look for our own FLOGI frames being looped back. This means an
+	 * external loopback cable is in place. Our own FLOGI frames are
+	 * sometimes looped back when the switch port gets temporarily
+	 * bypassed.
+	 */
+	if ((pid == bfa_os_ntoh3b(FC_FABRIC_PORT))
+	    && (els_cmd->els_code == FC_ELS_FLOGI)
+	    && (flogi->port_name == bfa_fcs_port_get_pwwn(&fabric->bport))) {
+		bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_LOOPBACK);
+		return;
+	}
+
+	/**
+	 * FLOGI/EVFP exchanges should be consumed by the base fabric.
+	 */
+	if (fchs->d_id == bfa_os_hton3b(FC_FABRIC_PORT)) {
+		bfa_trc(fabric->fcs, pid);
+		bfa_fcs_fabric_process_uf(fabric, fchs, len);
+		return;
+	}
+
+	if (fabric->bport.pid == pid) {
+		/**
+		 * All authentication frames should be routed to auth.
+		 */
+		bfa_trc(fabric->fcs, els_cmd->els_code);
+		if (els_cmd->els_code == FC_ELS_AUTH) {
+			bfa_trc(fabric->fcs, els_cmd->els_code);
+			fabric->auth.response = (u8 *) els_cmd;
+			return;
+		}
+
+		bfa_trc(fabric->fcs, *(u8 *) ((u8 *) fchs));
+		bfa_fcs_port_uf_recv(&fabric->bport, fchs, len);
+		return;
+	}
+
+	/**
+	 * Look for a matching local port ID.
+	 */
+	list_for_each(qe, &fabric->vport_q) {
+		vport = (struct bfa_fcs_vport_s *)qe;
+		if (vport->lport.pid == pid) {
+			bfa_fcs_port_uf_recv(&vport->lport, fchs, len);
+			return;
+		}
+	}
+	bfa_trc(fabric->fcs, els_cmd->els_code);
+	bfa_fcs_port_uf_recv(&fabric->bport, fchs, len);
+}
+
+/**
+ * Unsolicited frames to be processed by fabric.
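+ * Only FLOGI is handled here; any other ELS falls through to the
+ * switch default below, where an LS_RJT still needs to be generated.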
+ */ +static void +bfa_fcs_fabric_process_uf(struct bfa_fcs_fabric_s *fabric, struct fchs_s *fchs, + u16 len) +{ + struct fc_els_cmd_s *els_cmd = (struct fc_els_cmd_s *) (fchs + 1); + + bfa_trc(fabric->fcs, els_cmd->els_code); + + switch (els_cmd->els_code) { + case FC_ELS_FLOGI: + bfa_fcs_fabric_process_flogi(fabric, fchs, len); + break; + + default: + /* + * need to generate a LS_RJT + */ + break; + } +} + +/** + * Process incoming FLOGI + */ +static void +bfa_fcs_fabric_process_flogi(struct bfa_fcs_fabric_s *fabric, + struct fchs_s *fchs, u16 len) +{ + struct fc_logi_s *flogi = (struct fc_logi_s *) (fchs + 1); + struct bfa_fcs_port_s *bport = &fabric->bport; + + bfa_trc(fabric->fcs, fchs->s_id); + + fabric->stats.flogi_rcvd++; + /* + * Check port type. It should be 0 = n-port. + */ + if (flogi->csp.port_type) { + /* + * @todo: may need to send a LS_RJT + */ + bfa_trc(fabric->fcs, flogi->port_name); + fabric->stats.flogi_rejected++; + return; + } + + fabric->bb_credit = bfa_os_ntohs(flogi->csp.bbcred); + bport->port_topo.pn2n.rem_port_wwn = flogi->port_name; + bport->port_topo.pn2n.reply_oxid = fchs->ox_id; + + /* + * Send a Flogi Acc + */ + bfa_fcs_fabric_send_flogi_acc(fabric); + bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_NO_FABRIC); +} + +static void +bfa_fcs_fabric_send_flogi_acc(struct bfa_fcs_fabric_s *fabric) +{ + struct bfa_port_cfg_s *pcfg = &fabric->bport.port_cfg; + struct bfa_fcs_port_n2n_s *n2n_port = &fabric->bport.port_topo.pn2n; + struct bfa_s *bfa = fabric->fcs->bfa; + struct bfa_fcxp_s *fcxp; + u16 reqlen; + struct fchs_s fchs; + + fcxp = bfa_fcs_fcxp_alloc(fabric->fcs); + /** + * Do not expect this failure -- expect remote node to retry + */ + if (!fcxp) + return; + + reqlen = fc_flogi_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_os_hton3b(FC_FABRIC_PORT), + n2n_port->reply_oxid, pcfg->pwwn, + pcfg->nwwn, bfa_pport_get_maxfrsize(bfa), + bfa_pport_get_rx_bbcredit(bfa)); + + bfa_fcxp_send(fcxp, NULL, fabric->vf_id, bfa_lps_get_tag(fabric->lps), + BFA_FALSE, FC_CLASS_3, reqlen, &fchs, + bfa_fcs_fabric_flogiacc_comp, fabric, + FC_MAX_PDUSZ, 0); /* Timeout 0 indicates no + * response expected + */ +} + +/** + * Flogi Acc completion callback. 
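+ * Only the completion status is traced; no retry is driven from here,
+ * as the remote port is expected to retry FLOGI on a failure.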
+ */
+static void
+bfa_fcs_fabric_flogiacc_comp(void *fcsarg, struct bfa_fcxp_s *fcxp, void *cbarg,
+			     bfa_status_t status, u32 rsp_len,
+			     u32 resid_len, struct fchs_s *rspfchs)
+{
+	struct bfa_fcs_fabric_s *fabric = cbarg;
+
+	bfa_trc(fabric->fcs, status);
+}
+
+/*
+ *
+ * @param[in] fabric - fabric
+ * @param[in] status - authentication status
+ *
+ * @return - none
+ */
+void
+bfa_fcs_auth_finished(struct bfa_fcs_fabric_s *fabric, enum auth_status status)
+{
+	bfa_trc(fabric->fcs, status);
+
+	if (status == FC_AUTH_STATE_SUCCESS)
+		bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_AUTH_SUCCESS);
+	else
+		bfa_sm_send_event(fabric, BFA_FCS_FABRIC_SM_AUTH_FAILED);
+}
+
+/**
+ * Send AEN notification
+ */
+static void
+bfa_fcs_fabric_aen_post(struct bfa_fcs_port_s *port,
+			enum bfa_port_aen_event event)
+{
+	union bfa_aen_data_u aen_data;
+	struct bfa_log_mod_s *logmod = port->fcs->logm;
+	wwn_t pwwn = bfa_fcs_port_get_pwwn(port);
+	wwn_t fwwn = bfa_fcs_port_get_fabric_name(port);
+	char pwwn_ptr[BFA_STRING_32];
+	char fwwn_ptr[BFA_STRING_32];
+
+	wwn2str(pwwn_ptr, pwwn);
+	wwn2str(fwwn_ptr, fwwn);
+
+	switch (event) {
+	case BFA_PORT_AEN_FABRIC_NAME_CHANGE:
+		bfa_log(logmod, BFA_AEN_PORT_FABRIC_NAME_CHANGE, pwwn_ptr,
+			fwwn_ptr);
+		break;
+	default:
+		break;
+	}
+
+	aen_data.port.pwwn = pwwn;
+	aen_data.port.fwwn = fwwn;
+}
+
+/*
+ *
+ * @param[in] fabric - fabric
+ * @param[in] fabric_name - new fabric name
+ *
+ * @return - none
+ */
+void
+bfa_fcs_fabric_set_fabric_name(struct bfa_fcs_fabric_s *fabric,
+			       wwn_t fabric_name)
+{
+	bfa_trc(fabric->fcs, fabric_name);
+
+	if (fabric->fabric_name == 0) {
+		/*
+		 * With BRCD switches, we don't get Fabric Name in FLOGI.
+		 * Don't generate a fabric name change event in this case.
+		 */
+		fabric->fabric_name = fabric_name;
+	} else {
+		fabric->fabric_name = fabric_name;
+		/*
+		 * Generate a fabric name change event.
+		 */
+		bfa_fcs_fabric_aen_post(&fabric->bport,
+					BFA_PORT_AEN_FABRIC_NAME_CHANGE);
+	}
+
+}
+
+/**
+ * Not used by FCS.
+ */
+void
+bfa_cb_lps_flogo_comp(void *bfad, void *uarg)
+{
+}
+
+
diff --git a/drivers/scsi/bfa/fcbuild.c b/drivers/scsi/bfa/fcbuild.c
new file mode 100644
index 000000000000..d174706b9caa
--- /dev/null
+++ b/drivers/scsi/bfa/fcbuild.c
@@ -0,0 +1,1449 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */ +/* + * fcbuild.c - FC link service frame building and parsing routines + */ + +#include +#include "fcbuild.h" + +/* + * static build functions + */ +static void fc_els_rsp_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id); +static void fc_bls_rsp_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id); +static struct fchs_s fc_els_req_tmpl; +static struct fchs_s fc_els_rsp_tmpl; +static struct fchs_s fc_bls_req_tmpl; +static struct fchs_s fc_bls_rsp_tmpl; +static struct fc_ba_acc_s ba_acc_tmpl; +static struct fc_logi_s plogi_tmpl; +static struct fc_prli_s prli_tmpl; +static struct fc_rrq_s rrq_tmpl; +static struct fchs_s fcp_fchs_tmpl; + +void +fcbuild_init(void) +{ + /* + * fc_els_req_tmpl + */ + fc_els_req_tmpl.routing = FC_RTG_EXT_LINK; + fc_els_req_tmpl.cat_info = FC_CAT_LD_REQUEST; + fc_els_req_tmpl.type = FC_TYPE_ELS; + fc_els_req_tmpl.f_ctl = + bfa_os_hton3b(FCTL_SEQ_INI | FCTL_FS_EXCH | FCTL_END_SEQ | + FCTL_SI_XFER); + fc_els_req_tmpl.rx_id = FC_RXID_ANY; + + /* + * fc_els_rsp_tmpl + */ + fc_els_rsp_tmpl.routing = FC_RTG_EXT_LINK; + fc_els_rsp_tmpl.cat_info = FC_CAT_LD_REPLY; + fc_els_rsp_tmpl.type = FC_TYPE_ELS; + fc_els_rsp_tmpl.f_ctl = + bfa_os_hton3b(FCTL_EC_RESP | FCTL_SEQ_INI | FCTL_LS_EXCH | + FCTL_END_SEQ | FCTL_SI_XFER); + fc_els_rsp_tmpl.rx_id = FC_RXID_ANY; + + /* + * fc_bls_req_tmpl + */ + fc_bls_req_tmpl.routing = FC_RTG_BASIC_LINK; + fc_bls_req_tmpl.type = FC_TYPE_BLS; + fc_bls_req_tmpl.f_ctl = bfa_os_hton3b(FCTL_END_SEQ | FCTL_SI_XFER); + fc_bls_req_tmpl.rx_id = FC_RXID_ANY; + + /* + * fc_bls_rsp_tmpl + */ + fc_bls_rsp_tmpl.routing = FC_RTG_BASIC_LINK; + fc_bls_rsp_tmpl.cat_info = FC_CAT_BA_ACC; + fc_bls_rsp_tmpl.type = FC_TYPE_BLS; + fc_bls_rsp_tmpl.f_ctl = + bfa_os_hton3b(FCTL_EC_RESP | FCTL_SEQ_INI | FCTL_LS_EXCH | + FCTL_END_SEQ | FCTL_SI_XFER); + fc_bls_rsp_tmpl.rx_id = FC_RXID_ANY; + + /* + * ba_acc_tmpl + */ + ba_acc_tmpl.seq_id_valid = 0; + ba_acc_tmpl.low_seq_cnt = 0; + ba_acc_tmpl.high_seq_cnt = 0xFFFF; + + /* + * plogi_tmpl + */ + plogi_tmpl.csp.verhi = FC_PH_VER_PH_3; + plogi_tmpl.csp.verlo = FC_PH_VER_4_3; + plogi_tmpl.csp.bbcred = bfa_os_htons(0x0004); + plogi_tmpl.csp.ciro = 0x1; + plogi_tmpl.csp.cisc = 0x0; + plogi_tmpl.csp.altbbcred = 0x0; + plogi_tmpl.csp.conseq = bfa_os_htons(0x00FF); + plogi_tmpl.csp.ro_bitmap = bfa_os_htons(0x0002); + plogi_tmpl.csp.e_d_tov = bfa_os_htonl(2000); + + plogi_tmpl.class3.class_valid = 1; + plogi_tmpl.class3.sequential = 1; + plogi_tmpl.class3.conseq = 0xFF; + plogi_tmpl.class3.ospx = 1; + + /* + * prli_tmpl + */ + prli_tmpl.command = FC_ELS_PRLI; + prli_tmpl.pglen = 0x10; + prli_tmpl.pagebytes = bfa_os_htons(0x0014); + prli_tmpl.parampage.type = FC_TYPE_FCP; + prli_tmpl.parampage.imagepair = 1; + prli_tmpl.parampage.servparams.rxrdisab = 1; + + /* + * rrq_tmpl + */ + rrq_tmpl.els_cmd.els_code = FC_ELS_RRQ; + + /* + * fcp_fchs_tmpl + */ + fcp_fchs_tmpl.routing = FC_RTG_FC4_DEV_DATA; + fcp_fchs_tmpl.cat_info = FC_CAT_UNSOLICIT_CMD; + fcp_fchs_tmpl.type = FC_TYPE_FCP; + fcp_fchs_tmpl.f_ctl = + bfa_os_hton3b(FCTL_FS_EXCH | FCTL_END_SEQ | FCTL_SI_XFER); + fcp_fchs_tmpl.seq_id = 1; + fcp_fchs_tmpl.rx_id = FC_RXID_ANY; +} + +static void +fc_gs_fchdr_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u32 ox_id) +{ + bfa_os_memset(fchs, 0, sizeof(struct fchs_s)); + + fchs->routing = FC_RTG_FC4_DEV_DATA; + fchs->cat_info = FC_CAT_UNSOLICIT_CTRL; + fchs->type = FC_TYPE_SERVICES; + fchs->f_ctl = + bfa_os_hton3b(FCTL_SEQ_INI | FCTL_FS_EXCH | FCTL_END_SEQ | + FCTL_SI_XFER); + fchs->rx_id = FC_RXID_ANY; + 
fchs->d_id = (d_id); + fchs->s_id = (s_id); + fchs->ox_id = bfa_os_htons(ox_id); + + /** + * @todo no need to set ox_id for request + * no need to set rx_id for response + */ +} + +void +fc_els_req_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id) +{ + bfa_os_memcpy(fchs, &fc_els_req_tmpl, sizeof(struct fchs_s)); + fchs->d_id = (d_id); + fchs->s_id = (s_id); + fchs->ox_id = bfa_os_htons(ox_id); +} + +static void +fc_els_rsp_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id) +{ + bfa_os_memcpy(fchs, &fc_els_rsp_tmpl, sizeof(struct fchs_s)); + fchs->d_id = d_id; + fchs->s_id = s_id; + fchs->ox_id = ox_id; +} + +enum fc_parse_status +fc_els_rsp_parse(struct fchs_s *fchs, int len) +{ + struct fc_els_cmd_s *els_cmd = (struct fc_els_cmd_s *) (fchs + 1); + struct fc_ls_rjt_s *ls_rjt = (struct fc_ls_rjt_s *) els_cmd; + + len = len; + + switch (els_cmd->els_code) { + case FC_ELS_LS_RJT: + if (ls_rjt->reason_code == FC_LS_RJT_RSN_LOGICAL_BUSY) + return (FC_PARSE_BUSY); + else + return (FC_PARSE_FAILURE); + + case FC_ELS_ACC: + return (FC_PARSE_OK); + } + return (FC_PARSE_OK); +} + +static void +fc_bls_rsp_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id) +{ + bfa_os_memcpy(fchs, &fc_bls_rsp_tmpl, sizeof(struct fchs_s)); + fchs->d_id = d_id; + fchs->s_id = s_id; + fchs->ox_id = ox_id; +} + +static u16 +fc_plogi_x_build(struct fchs_s *fchs, void *pld, u32 d_id, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size, u8 els_code) +{ + struct fc_logi_s *plogi = (struct fc_logi_s *) (pld); + + bfa_os_memcpy(plogi, &plogi_tmpl, sizeof(struct fc_logi_s)); + + plogi->els_cmd.els_code = els_code; + if (els_code == FC_ELS_PLOGI) + fc_els_req_build(fchs, d_id, s_id, ox_id); + else + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + plogi->csp.rxsz = plogi->class3.rxsz = bfa_os_htons(pdu_size); + + bfa_os_memcpy(&plogi->port_name, &port_name, sizeof(wwn_t)); + bfa_os_memcpy(&plogi->node_name, &node_name, sizeof(wwn_t)); + + return (sizeof(struct fc_logi_s)); +} + +u16 +fc_flogi_build(struct fchs_s *fchs, struct fc_logi_s *flogi, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size, u8 set_npiv, u8 set_auth, + u16 local_bb_credits) +{ + u32 d_id = bfa_os_hton3b(FC_FABRIC_PORT); + u32 *vvl_info; + + bfa_os_memcpy(flogi, &plogi_tmpl, sizeof(struct fc_logi_s)); + + flogi->els_cmd.els_code = FC_ELS_FLOGI; + fc_els_req_build(fchs, d_id, s_id, ox_id); + + flogi->csp.rxsz = flogi->class3.rxsz = bfa_os_htons(pdu_size); + flogi->port_name = port_name; + flogi->node_name = node_name; + + /* + * Set the NPIV Capability Bit ( word 1, bit 31) of Common + * Service Parameters. + */ + flogi->csp.ciro = set_npiv; + + /* set AUTH capability */ + flogi->csp.security = set_auth; + + flogi->csp.bbcred = bfa_os_htons(local_bb_credits); + + /* Set brcd token in VVL */ + vvl_info = (u32 *)&flogi->vvl[0]; + + /* set the flag to indicate the presence of VVL */ + flogi->csp.npiv_supp = 1; /* @todo. 
field name is not correct */ + vvl_info[0] = bfa_os_htonl(FLOGI_VVL_BRCD); + + return (sizeof(struct fc_logi_s)); +} + +u16 +fc_flogi_acc_build(struct fchs_s *fchs, struct fc_logi_s *flogi, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size, u16 local_bb_credits) +{ + u32 d_id = 0; + + bfa_os_memcpy(flogi, &plogi_tmpl, sizeof(struct fc_logi_s)); + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + flogi->els_cmd.els_code = FC_ELS_ACC; + flogi->csp.rxsz = flogi->class3.rxsz = bfa_os_htons(pdu_size); + flogi->port_name = port_name; + flogi->node_name = node_name; + + flogi->csp.bbcred = bfa_os_htons(local_bb_credits); + + return (sizeof(struct fc_logi_s)); +} + +u16 +fc_fdisc_build(struct fchs_s *fchs, struct fc_logi_s *flogi, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size) +{ + u32 d_id = bfa_os_hton3b(FC_FABRIC_PORT); + + bfa_os_memcpy(flogi, &plogi_tmpl, sizeof(struct fc_logi_s)); + + flogi->els_cmd.els_code = FC_ELS_FDISC; + fc_els_req_build(fchs, d_id, s_id, ox_id); + + flogi->csp.rxsz = flogi->class3.rxsz = bfa_os_htons(pdu_size); + flogi->port_name = port_name; + flogi->node_name = node_name; + + return (sizeof(struct fc_logi_s)); +} + +u16 +fc_plogi_build(struct fchs_s *fchs, void *pld, u32 d_id, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size) +{ + return fc_plogi_x_build(fchs, pld, d_id, s_id, ox_id, port_name, + node_name, pdu_size, FC_ELS_PLOGI); +} + +u16 +fc_plogi_acc_build(struct fchs_s *fchs, void *pld, u32 d_id, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size) +{ + return fc_plogi_x_build(fchs, pld, d_id, s_id, ox_id, port_name, + node_name, pdu_size, FC_ELS_ACC); +} + +enum fc_parse_status +fc_plogi_rsp_parse(struct fchs_s *fchs, int len, wwn_t port_name) +{ + struct fc_els_cmd_s *els_cmd = (struct fc_els_cmd_s *) (fchs + 1); + struct fc_logi_s *plogi; + struct fc_ls_rjt_s *ls_rjt; + + switch (els_cmd->els_code) { + case FC_ELS_LS_RJT: + ls_rjt = (struct fc_ls_rjt_s *) (fchs + 1); + if (ls_rjt->reason_code == FC_LS_RJT_RSN_LOGICAL_BUSY) + return (FC_PARSE_BUSY); + else + return (FC_PARSE_FAILURE); + case FC_ELS_ACC: + plogi = (struct fc_logi_s *) (fchs + 1); + if (len < sizeof(struct fc_logi_s)) + return (FC_PARSE_FAILURE); + + if (!wwn_is_equal(plogi->port_name, port_name)) + return (FC_PARSE_FAILURE); + + if (!plogi->class3.class_valid) + return (FC_PARSE_FAILURE); + + if (bfa_os_ntohs(plogi->class3.rxsz) < (FC_MIN_PDUSZ)) + return (FC_PARSE_FAILURE); + + return (FC_PARSE_OK); + default: + return (FC_PARSE_FAILURE); + } +} + +enum fc_parse_status +fc_plogi_parse(struct fchs_s *fchs) +{ + struct fc_logi_s *plogi = (struct fc_logi_s *) (fchs + 1); + + if (plogi->class3.class_valid != 1) + return FC_PARSE_FAILURE; + + if ((bfa_os_ntohs(plogi->class3.rxsz) < FC_MIN_PDUSZ) + || (bfa_os_ntohs(plogi->class3.rxsz) > FC_MAX_PDUSZ) + || (plogi->class3.rxsz == 0)) + return (FC_PARSE_FAILURE); + + return FC_PARSE_OK; +} + +u16 +fc_prli_build(struct fchs_s *fchs, void *pld, u32 d_id, u32 s_id, + u16 ox_id) +{ + struct fc_prli_s *prli = (struct fc_prli_s *) (pld); + + fc_els_req_build(fchs, d_id, s_id, ox_id); + bfa_os_memcpy(prli, &prli_tmpl, sizeof(struct fc_prli_s)); + + prli->command = FC_ELS_PRLI; + prli->parampage.servparams.initiator = 1; + prli->parampage.servparams.retry = 1; + prli->parampage.servparams.rec_support = 1; + prli->parampage.servparams.task_retry_id = 0; + prli->parampage.servparams.confirm = 1; + + return (sizeof(struct fc_prli_s)); +} + +u16 +fc_prli_acc_build(struct 
fchs_s *fchs, void *pld, u32 d_id, u32 s_id, + u16 ox_id, enum bfa_port_role role) +{ + struct fc_prli_s *prli = (struct fc_prli_s *) (pld); + + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + bfa_os_memcpy(prli, &prli_tmpl, sizeof(struct fc_prli_s)); + + prli->command = FC_ELS_ACC; + + if ((role & BFA_PORT_ROLE_FCP_TM) == BFA_PORT_ROLE_FCP_TM) + prli->parampage.servparams.target = 1; + else + prli->parampage.servparams.initiator = 1; + + prli->parampage.rspcode = FC_PRLI_ACC_XQTD; + + return (sizeof(struct fc_prli_s)); +} + +enum fc_parse_status +fc_prli_rsp_parse(struct fc_prli_s *prli, int len) +{ + if (len < sizeof(struct fc_prli_s)) + return (FC_PARSE_FAILURE); + + if (prli->command != FC_ELS_ACC) + return (FC_PARSE_FAILURE); + + if ((prli->parampage.rspcode != FC_PRLI_ACC_XQTD) + && (prli->parampage.rspcode != FC_PRLI_ACC_PREDEF_IMG)) + return (FC_PARSE_FAILURE); + + if (prli->parampage.servparams.target != 1) + return (FC_PARSE_FAILURE); + + return (FC_PARSE_OK); +} + +enum fc_parse_status +fc_prli_parse(struct fc_prli_s *prli) +{ + if (prli->parampage.type != FC_TYPE_FCP) + return (FC_PARSE_FAILURE); + + if (!prli->parampage.imagepair) + return (FC_PARSE_FAILURE); + + if (!prli->parampage.servparams.initiator) + return (FC_PARSE_FAILURE); + + return (FC_PARSE_OK); +} + +u16 +fc_logo_build(struct fchs_s *fchs, struct fc_logo_s *logo, u32 d_id, + u32 s_id, u16 ox_id, wwn_t port_name) +{ + fc_els_req_build(fchs, d_id, s_id, ox_id); + + memset(logo, '\0', sizeof(struct fc_logo_s)); + logo->els_cmd.els_code = FC_ELS_LOGO; + logo->nport_id = (s_id); + logo->orig_port_name = port_name; + + return (sizeof(struct fc_logo_s)); +} + +static u16 +fc_adisc_x_build(struct fchs_s *fchs, struct fc_adisc_s *adisc, u32 d_id, + u32 s_id, u16 ox_id, wwn_t port_name, + wwn_t node_name, u8 els_code) +{ + memset(adisc, '\0', sizeof(struct fc_adisc_s)); + + adisc->els_cmd.els_code = els_code; + + if (els_code == FC_ELS_ADISC) + fc_els_req_build(fchs, d_id, s_id, ox_id); + else + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + adisc->orig_HA = 0; + adisc->orig_port_name = port_name; + adisc->orig_node_name = node_name; + adisc->nport_id = (s_id); + + return (sizeof(struct fc_adisc_s)); +} + +u16 +fc_adisc_build(struct fchs_s *fchs, struct fc_adisc_s *adisc, u32 d_id, + u32 s_id, u16 ox_id, wwn_t port_name, + wwn_t node_name) +{ + return fc_adisc_x_build(fchs, adisc, d_id, s_id, ox_id, port_name, + node_name, FC_ELS_ADISC); +} + +u16 +fc_adisc_acc_build(struct fchs_s *fchs, struct fc_adisc_s *adisc, u32 d_id, + u32 s_id, u16 ox_id, wwn_t port_name, + wwn_t node_name) +{ + return fc_adisc_x_build(fchs, adisc, d_id, s_id, ox_id, port_name, + node_name, FC_ELS_ACC); +} + +enum fc_parse_status +fc_adisc_rsp_parse(struct fc_adisc_s *adisc, int len, wwn_t port_name, + wwn_t node_name) +{ + + if (len < sizeof(struct fc_adisc_s)) + return (FC_PARSE_FAILURE); + + if (adisc->els_cmd.els_code != FC_ELS_ACC) + return (FC_PARSE_FAILURE); + + if (!wwn_is_equal(adisc->orig_port_name, port_name)) + return (FC_PARSE_FAILURE); + + return (FC_PARSE_OK); +} + +enum fc_parse_status +fc_adisc_parse(struct fchs_s *fchs, void *pld, u32 host_dap, + wwn_t node_name, wwn_t port_name) +{ + struct fc_adisc_s *adisc = (struct fc_adisc_s *) pld; + + if (adisc->els_cmd.els_code != FC_ELS_ACC) + return (FC_PARSE_FAILURE); + + if ((adisc->nport_id == (host_dap)) + && wwn_is_equal(adisc->orig_port_name, port_name) + && wwn_is_equal(adisc->orig_node_name, node_name)) + return (FC_PARSE_OK); + + return (FC_PARSE_FAILURE); +} + +enum 
fc_parse_status +fc_pdisc_parse(struct fchs_s *fchs, wwn_t node_name, wwn_t port_name) +{ + struct fc_logi_s *pdisc = (struct fc_logi_s *) (fchs + 1); + + if (pdisc->class3.class_valid != 1) + return FC_PARSE_FAILURE; + + if ((bfa_os_ntohs(pdisc->class3.rxsz) < + (FC_MIN_PDUSZ - sizeof(struct fchs_s))) + || (pdisc->class3.rxsz == 0)) + return (FC_PARSE_FAILURE); + + if (!wwn_is_equal(pdisc->port_name, port_name)) + return (FC_PARSE_FAILURE); + + if (!wwn_is_equal(pdisc->node_name, node_name)) + return (FC_PARSE_FAILURE); + + return FC_PARSE_OK; +} + +u16 +fc_abts_build(struct fchs_s *fchs, u32 d_id, u32 s_id, u16 ox_id) +{ + bfa_os_memcpy(fchs, &fc_bls_req_tmpl, sizeof(struct fchs_s)); + fchs->cat_info = FC_CAT_ABTS; + fchs->d_id = (d_id); + fchs->s_id = (s_id); + fchs->ox_id = bfa_os_htons(ox_id); + + return (sizeof(struct fchs_s)); +} + +enum fc_parse_status +fc_abts_rsp_parse(struct fchs_s *fchs, int len) +{ + if ((fchs->cat_info == FC_CAT_BA_ACC) + || (fchs->cat_info == FC_CAT_BA_RJT)) + return (FC_PARSE_OK); + + return (FC_PARSE_FAILURE); +} + +u16 +fc_rrq_build(struct fchs_s *fchs, struct fc_rrq_s *rrq, u32 d_id, + u32 s_id, u16 ox_id, u16 rrq_oxid) +{ + fc_els_req_build(fchs, d_id, s_id, ox_id); + + /* + * build rrq payload + */ + bfa_os_memcpy(rrq, &rrq_tmpl, sizeof(struct fc_rrq_s)); + rrq->s_id = (s_id); + rrq->ox_id = bfa_os_htons(rrq_oxid); + rrq->rx_id = FC_RXID_ANY; + + return (sizeof(struct fc_rrq_s)); +} + +u16 +fc_logo_acc_build(struct fchs_s *fchs, void *pld, u32 d_id, u32 s_id, + u16 ox_id) +{ + struct fc_els_cmd_s *acc = pld; + + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + memset(acc, 0, sizeof(struct fc_els_cmd_s)); + acc->els_code = FC_ELS_ACC; + + return (sizeof(struct fc_els_cmd_s)); +} + +u16 +fc_ls_rjt_build(struct fchs_s *fchs, struct fc_ls_rjt_s *ls_rjt, u32 d_id, + u32 s_id, u16 ox_id, u8 reason_code, + u8 reason_code_expl) +{ + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + memset(ls_rjt, 0, sizeof(struct fc_ls_rjt_s)); + + ls_rjt->els_cmd.els_code = FC_ELS_LS_RJT; + ls_rjt->reason_code = reason_code; + ls_rjt->reason_code_expl = reason_code_expl; + ls_rjt->vendor_unique = 0x00; + + return (sizeof(struct fc_ls_rjt_s)); +} + +u16 +fc_ba_acc_build(struct fchs_s *fchs, struct fc_ba_acc_s *ba_acc, u32 d_id, + u32 s_id, u16 ox_id, u16 rx_id) +{ + fc_bls_rsp_build(fchs, d_id, s_id, ox_id); + + bfa_os_memcpy(ba_acc, &ba_acc_tmpl, sizeof(struct fc_ba_acc_s)); + + fchs->rx_id = rx_id; + + ba_acc->ox_id = fchs->ox_id; + ba_acc->rx_id = fchs->rx_id; + + return (sizeof(struct fc_ba_acc_s)); +} + +u16 +fc_ls_acc_build(struct fchs_s *fchs, struct fc_els_cmd_s *els_cmd, + u32 d_id, u32 s_id, u16 ox_id) +{ + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + memset(els_cmd, 0, sizeof(struct fc_els_cmd_s)); + els_cmd->els_code = FC_ELS_ACC; + + return (sizeof(struct fc_els_cmd_s)); +} + +int +fc_logout_params_pages(struct fchs_s *fc_frame, u8 els_code) +{ + int num_pages = 0; + struct fc_prlo_s *prlo; + struct fc_tprlo_s *tprlo; + + if (els_code == FC_ELS_PRLO) { + prlo = (struct fc_prlo_s *) (fc_frame + 1); + num_pages = (bfa_os_ntohs(prlo->payload_len) - 4) / 16; + } else { + tprlo = (struct fc_tprlo_s *) (fc_frame + 1); + num_pages = (bfa_os_ntohs(tprlo->payload_len) - 4) / 16; + } + return num_pages; +} + +u16 +fc_tprlo_acc_build(struct fchs_s *fchs, struct fc_tprlo_acc_s *tprlo_acc, + u32 d_id, u32 s_id, u16 ox_id, + int num_pages) +{ + int page; + + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + memset(tprlo_acc, 0, (num_pages * 16) + 4); + tprlo_acc->command = FC_ELS_ACC; + 
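+	/*
+	 * Each logout parameter page is 16 bytes and the payload carries a
+	 * 4 byte header, hence the (num_pages * 16) + 4 length used here.
+	 */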
+ tprlo_acc->page_len = 0x10; + tprlo_acc->payload_len = bfa_os_htons((num_pages * 16) + 4); + + for (page = 0; page < num_pages; page++) { + tprlo_acc->tprlo_acc_params[page].opa_valid = 0; + tprlo_acc->tprlo_acc_params[page].rpa_valid = 0; + tprlo_acc->tprlo_acc_params[page].fc4type_csp = FC_TYPE_FCP; + tprlo_acc->tprlo_acc_params[page].orig_process_assc = 0; + tprlo_acc->tprlo_acc_params[page].resp_process_assc = 0; + } + return (bfa_os_ntohs(tprlo_acc->payload_len)); +} + +u16 +fc_prlo_acc_build(struct fchs_s *fchs, struct fc_prlo_acc_s *prlo_acc, + u32 d_id, u32 s_id, u16 ox_id, + int num_pages) +{ + int page; + + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + memset(prlo_acc, 0, (num_pages * 16) + 4); + prlo_acc->command = FC_ELS_ACC; + prlo_acc->page_len = 0x10; + prlo_acc->payload_len = bfa_os_htons((num_pages * 16) + 4); + + for (page = 0; page < num_pages; page++) { + prlo_acc->prlo_acc_params[page].opa_valid = 0; + prlo_acc->prlo_acc_params[page].rpa_valid = 0; + prlo_acc->prlo_acc_params[page].fc4type_csp = FC_TYPE_FCP; + prlo_acc->prlo_acc_params[page].orig_process_assc = 0; + prlo_acc->prlo_acc_params[page].resp_process_assc = 0; + } + + return (bfa_os_ntohs(prlo_acc->payload_len)); +} + +u16 +fc_rnid_build(struct fchs_s *fchs, struct fc_rnid_cmd_s *rnid, u32 d_id, + u32 s_id, u16 ox_id, u32 data_format) +{ + fc_els_req_build(fchs, d_id, s_id, ox_id); + + memset(rnid, 0, sizeof(struct fc_rnid_cmd_s)); + + rnid->els_cmd.els_code = FC_ELS_RNID; + rnid->node_id_data_format = data_format; + + return (sizeof(struct fc_rnid_cmd_s)); +} + +u16 +fc_rnid_acc_build(struct fchs_s *fchs, struct fc_rnid_acc_s *rnid_acc, + u32 d_id, u32 s_id, u16 ox_id, + u32 data_format, + struct fc_rnid_common_id_data_s *common_id_data, + struct fc_rnid_general_topology_data_s *gen_topo_data) +{ + memset(rnid_acc, 0, sizeof(struct fc_rnid_acc_s)); + + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + rnid_acc->els_cmd.els_code = FC_ELS_ACC; + rnid_acc->node_id_data_format = data_format; + rnid_acc->common_id_data_length = + sizeof(struct fc_rnid_common_id_data_s); + rnid_acc->common_id_data = *common_id_data; + + if (data_format == RNID_NODEID_DATA_FORMAT_DISCOVERY) { + rnid_acc->specific_id_data_length = + sizeof(struct fc_rnid_general_topology_data_s); + bfa_os_assign(rnid_acc->gen_topology_data, *gen_topo_data); + return (sizeof(struct fc_rnid_acc_s)); + } else { + return (sizeof(struct fc_rnid_acc_s) - + sizeof(struct fc_rnid_general_topology_data_s)); + } + +} + +u16 +fc_rpsc_build(struct fchs_s *fchs, struct fc_rpsc_cmd_s *rpsc, u32 d_id, + u32 s_id, u16 ox_id) +{ + fc_els_req_build(fchs, d_id, s_id, ox_id); + + memset(rpsc, 0, sizeof(struct fc_rpsc_cmd_s)); + + rpsc->els_cmd.els_code = FC_ELS_RPSC; + return (sizeof(struct fc_rpsc_cmd_s)); +} + +u16 +fc_rpsc2_build(struct fchs_s *fchs, struct fc_rpsc2_cmd_s *rpsc2, + u32 d_id, u32 s_id, u32 *pid_list, + u16 npids) +{ + u32 dctlr_id = FC_DOMAIN_CTRLR(bfa_os_hton3b(d_id)); + int i = 0; + + fc_els_req_build(fchs, bfa_os_hton3b(dctlr_id), s_id, 0); + + memset(rpsc2, 0, sizeof(struct fc_rpsc2_cmd_s)); + + rpsc2->els_cmd.els_code = FC_ELS_RPSC; + rpsc2->token = bfa_os_htonl(FC_BRCD_TOKEN); + rpsc2->num_pids = bfa_os_htons(npids); + for (i = 0; i < npids; i++) + rpsc2->pid_list[i].pid = pid_list[i]; + + return (sizeof(struct fc_rpsc2_cmd_s) + ((npids - 1) * + (sizeof(u32)))); +} + +u16 +fc_rpsc_acc_build(struct fchs_s *fchs, struct fc_rpsc_acc_s *rpsc_acc, + u32 d_id, u32 s_id, u16 ox_id, + struct fc_rpsc_speed_info_s *oper_speed) +{ + memset(rpsc_acc, 0, 
sizeof(struct fc_rpsc_acc_s)); + + fc_els_rsp_build(fchs, d_id, s_id, ox_id); + + rpsc_acc->command = FC_ELS_ACC; + rpsc_acc->num_entries = bfa_os_htons(1); + + rpsc_acc->speed_info[0].port_speed_cap = + bfa_os_htons(oper_speed->port_speed_cap); + + rpsc_acc->speed_info[0].port_op_speed = + bfa_os_htons(oper_speed->port_op_speed); + + return (sizeof(struct fc_rpsc_acc_s)); + +} + +/* + * TBD - + * . get rid of unnecessary memsets + */ + +u16 +fc_logo_rsp_parse(struct fchs_s *fchs, int len) +{ + struct fc_els_cmd_s *els_cmd = (struct fc_els_cmd_s *) (fchs + 1); + + len = len; + if (els_cmd->els_code != FC_ELS_ACC) + return FC_PARSE_FAILURE; + + return FC_PARSE_OK; +} + +u16 +fc_pdisc_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size) +{ + struct fc_logi_s *pdisc = (struct fc_logi_s *) (fchs + 1); + + bfa_os_memcpy(pdisc, &plogi_tmpl, sizeof(struct fc_logi_s)); + + pdisc->els_cmd.els_code = FC_ELS_PDISC; + fc_els_req_build(fchs, d_id, s_id, ox_id); + + pdisc->csp.rxsz = pdisc->class3.rxsz = bfa_os_htons(pdu_size); + pdisc->port_name = port_name; + pdisc->node_name = node_name; + + return (sizeof(struct fc_logi_s)); +} + +u16 +fc_pdisc_rsp_parse(struct fchs_s *fchs, int len, wwn_t port_name) +{ + struct fc_logi_s *pdisc = (struct fc_logi_s *) (fchs + 1); + + if (len < sizeof(struct fc_logi_s)) + return (FC_PARSE_LEN_INVAL); + + if (pdisc->els_cmd.els_code != FC_ELS_ACC) + return (FC_PARSE_ACC_INVAL); + + if (!wwn_is_equal(pdisc->port_name, port_name)) + return (FC_PARSE_PWWN_NOT_EQUAL); + + if (!pdisc->class3.class_valid) + return (FC_PARSE_NWWN_NOT_EQUAL); + + if (bfa_os_ntohs(pdisc->class3.rxsz) < (FC_MIN_PDUSZ)) + return (FC_PARSE_RXSZ_INVAL); + + return (FC_PARSE_OK); +} + +u16 +fc_prlo_build(struct fchs_s *fchs, u32 d_id, u32 s_id, u16 ox_id, + int num_pages) +{ + struct fc_prlo_s *prlo = (struct fc_prlo_s *) (fchs + 1); + int page; + + fc_els_req_build(fchs, d_id, s_id, ox_id); + memset(prlo, 0, (num_pages * 16) + 4); + prlo->command = FC_ELS_PRLO; + prlo->page_len = 0x10; + prlo->payload_len = bfa_os_htons((num_pages * 16) + 4); + + for (page = 0; page < num_pages; page++) { + prlo->prlo_params[page].type = FC_TYPE_FCP; + prlo->prlo_params[page].opa_valid = 0; + prlo->prlo_params[page].rpa_valid = 0; + prlo->prlo_params[page].orig_process_assc = 0; + prlo->prlo_params[page].resp_process_assc = 0; + } + + return (bfa_os_ntohs(prlo->payload_len)); +} + +u16 +fc_prlo_rsp_parse(struct fchs_s *fchs, int len) +{ + struct fc_prlo_acc_s *prlo = (struct fc_prlo_acc_s *) (fchs + 1); + int num_pages = 0; + int page = 0; + + len = len; + + if (prlo->command != FC_ELS_ACC) + return (FC_PARSE_FAILURE); + + num_pages = ((bfa_os_ntohs(prlo->payload_len)) - 4) / 16; + + for (page = 0; page < num_pages; page++) { + if (prlo->prlo_acc_params[page].type != FC_TYPE_FCP) + return FC_PARSE_FAILURE; + + if (prlo->prlo_acc_params[page].opa_valid != 0) + return FC_PARSE_FAILURE; + + if (prlo->prlo_acc_params[page].rpa_valid != 0) + return FC_PARSE_FAILURE; + + if (prlo->prlo_acc_params[page].orig_process_assc != 0) + return FC_PARSE_FAILURE; + + if (prlo->prlo_acc_params[page].resp_process_assc != 0) + return FC_PARSE_FAILURE; + } + return (FC_PARSE_OK); + +} + +u16 +fc_tprlo_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id, int num_pages, + enum fc_tprlo_type tprlo_type, u32 tpr_id) +{ + struct fc_tprlo_s *tprlo = (struct fc_tprlo_s *) (fchs + 1); + int page; + + fc_els_req_build(fchs, d_id, s_id, ox_id); + memset(tprlo, 0, (num_pages * 16) + 
4); + tprlo->command = FC_ELS_TPRLO; + tprlo->page_len = 0x10; + tprlo->payload_len = bfa_os_htons((num_pages * 16) + 4); + + for (page = 0; page < num_pages; page++) { + tprlo->tprlo_params[page].type = FC_TYPE_FCP; + tprlo->tprlo_params[page].opa_valid = 0; + tprlo->tprlo_params[page].rpa_valid = 0; + tprlo->tprlo_params[page].orig_process_assc = 0; + tprlo->tprlo_params[page].resp_process_assc = 0; + if (tprlo_type == FC_GLOBAL_LOGO) { + tprlo->tprlo_params[page].global_process_logout = 1; + } else if (tprlo_type == FC_TPR_LOGO) { + tprlo->tprlo_params[page].tpo_nport_valid = 1; + tprlo->tprlo_params[page].tpo_nport_id = (tpr_id); + } + } + + return (bfa_os_ntohs(tprlo->payload_len)); +} + +u16 +fc_tprlo_rsp_parse(struct fchs_s *fchs, int len) +{ + struct fc_tprlo_acc_s *tprlo = (struct fc_tprlo_acc_s *) (fchs + 1); + int num_pages = 0; + int page = 0; + + len = len; + + if (tprlo->command != FC_ELS_ACC) + return (FC_PARSE_ACC_INVAL); + + num_pages = (bfa_os_ntohs(tprlo->payload_len) - 4) / 16; + + for (page = 0; page < num_pages; page++) { + if (tprlo->tprlo_acc_params[page].type != FC_TYPE_FCP) + return (FC_PARSE_NOT_FCP); + if (tprlo->tprlo_acc_params[page].opa_valid != 0) + return (FC_PARSE_OPAFLAG_INVAL); + if (tprlo->tprlo_acc_params[page].rpa_valid != 0) + return (FC_PARSE_RPAFLAG_INVAL); + if (tprlo->tprlo_acc_params[page].orig_process_assc != 0) + return (FC_PARSE_OPA_INVAL); + if (tprlo->tprlo_acc_params[page].resp_process_assc != 0) + return (FC_PARSE_RPA_INVAL); + } + return (FC_PARSE_OK); +} + +enum fc_parse_status +fc_rrq_rsp_parse(struct fchs_s *fchs, int len) +{ + struct fc_els_cmd_s *els_cmd = (struct fc_els_cmd_s *) (fchs + 1); + + len = len; + if (els_cmd->els_code != FC_ELS_ACC) + return FC_PARSE_FAILURE; + + return FC_PARSE_OK; +} + +u16 +fc_ba_rjt_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id, u32 reason_code, + u32 reason_expl) +{ + struct fc_ba_rjt_s *ba_rjt = (struct fc_ba_rjt_s *) (fchs + 1); + + fc_bls_rsp_build(fchs, d_id, s_id, ox_id); + + fchs->cat_info = FC_CAT_BA_RJT; + ba_rjt->reason_code = reason_code; + ba_rjt->reason_expl = reason_expl; + return (sizeof(struct fc_ba_rjt_s)); +} + +static void +fc_gs_cthdr_build(struct ct_hdr_s *cthdr, u32 s_id, u16 cmd_code) +{ + bfa_os_memset(cthdr, 0, sizeof(struct ct_hdr_s)); + cthdr->rev_id = CT_GS3_REVISION; + cthdr->gs_type = CT_GSTYPE_DIRSERVICE; + cthdr->gs_sub_type = CT_GSSUBTYPE_NAMESERVER; + cthdr->cmd_rsp_code = bfa_os_htons(cmd_code); +} + +static void +fc_gs_fdmi_cthdr_build(struct ct_hdr_s *cthdr, u32 s_id, u16 cmd_code) +{ + bfa_os_memset(cthdr, 0, sizeof(struct ct_hdr_s)); + cthdr->rev_id = CT_GS3_REVISION; + cthdr->gs_type = CT_GSTYPE_MGMTSERVICE; + cthdr->gs_sub_type = CT_GSSUBTYPE_HBA_MGMTSERVER; + cthdr->cmd_rsp_code = bfa_os_htons(cmd_code); +} + +static void +fc_gs_ms_cthdr_build(struct ct_hdr_s *cthdr, u32 s_id, u16 cmd_code, + u8 sub_type) +{ + bfa_os_memset(cthdr, 0, sizeof(struct ct_hdr_s)); + cthdr->rev_id = CT_GS3_REVISION; + cthdr->gs_type = CT_GSTYPE_MGMTSERVICE; + cthdr->gs_sub_type = sub_type; + cthdr->cmd_rsp_code = bfa_os_htons(cmd_code); +} + +u16 +fc_gidpn_build(struct fchs_s *fchs, void *pyld, u32 s_id, u16 ox_id, + wwn_t port_name) +{ + + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_gidpn_req_s *gidpn = + (struct fcgs_gidpn_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, ox_id); + fc_gs_cthdr_build(cthdr, s_id, GS_GID_PN); + + bfa_os_memset(gidpn, 0, sizeof(struct fcgs_gidpn_req_s)); + 
gidpn->port_name = port_name; + return (sizeof(struct fcgs_gidpn_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_gpnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u16 ox_id, + u32 port_id) +{ + + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + fcgs_gpnid_req_t *gpnid = (fcgs_gpnid_req_t *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, ox_id); + fc_gs_cthdr_build(cthdr, s_id, GS_GPN_ID); + + bfa_os_memset(gpnid, 0, sizeof(fcgs_gpnid_req_t)); + gpnid->dap = port_id; + return (sizeof(fcgs_gpnid_req_t) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_gnnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u16 ox_id, + u32 port_id) +{ + + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + fcgs_gnnid_req_t *gnnid = (fcgs_gnnid_req_t *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, ox_id); + fc_gs_cthdr_build(cthdr, s_id, GS_GNN_ID); + + bfa_os_memset(gnnid, 0, sizeof(fcgs_gnnid_req_t)); + gnnid->dap = port_id; + return (sizeof(fcgs_gnnid_req_t) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_ct_rsp_parse(struct ct_hdr_s *cthdr) +{ + if (bfa_os_ntohs(cthdr->cmd_rsp_code) != CT_RSP_ACCEPT) { + if (cthdr->reason_code == CT_RSN_LOGICAL_BUSY) + return FC_PARSE_BUSY; + else + return FC_PARSE_FAILURE; + } + + return FC_PARSE_OK; +} + +u16 +fc_scr_build(struct fchs_s *fchs, struct fc_scr_s *scr, u8 set_br_reg, + u32 s_id, u16 ox_id) +{ + u32 d_id = bfa_os_hton3b(FC_FABRIC_CONTROLLER); + + fc_els_req_build(fchs, d_id, s_id, ox_id); + + bfa_os_memset(scr, 0, sizeof(struct fc_scr_s)); + scr->command = FC_ELS_SCR; + scr->reg_func = FC_SCR_REG_FUNC_FULL; + if (set_br_reg) + scr->vu_reg_func = FC_VU_SCR_REG_FUNC_FABRIC_NAME_CHANGE; + + return (sizeof(struct fc_scr_s)); +} + +u16 +fc_rscn_build(struct fchs_s *fchs, struct fc_rscn_pl_s *rscn, u32 s_id, + u16 ox_id) +{ + u32 d_id = bfa_os_hton3b(FC_FABRIC_CONTROLLER); + u16 payldlen; + + fc_els_req_build(fchs, d_id, s_id, ox_id); + rscn->command = FC_ELS_RSCN; + rscn->pagelen = sizeof(rscn->event[0]); + + payldlen = sizeof(u32) + rscn->pagelen; + rscn->payldlen = bfa_os_htons(payldlen); + + rscn->event[0].format = FC_RSCN_FORMAT_PORTID; + rscn->event[0].portid = s_id; + + return (sizeof(struct fc_rscn_pl_s)); +} + +u16 +fc_rftid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u16 ox_id, + enum bfa_port_role roles) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rftid_req_s *rftid = + (struct fcgs_rftid_req_s *) (cthdr + 1); + u32 type_value, d_id = bfa_os_hton3b(FC_NAME_SERVER); + u8 index; + + fc_gs_fchdr_build(fchs, d_id, s_id, ox_id); + fc_gs_cthdr_build(cthdr, s_id, GS_RFT_ID); + + bfa_os_memset(rftid, 0, sizeof(struct fcgs_rftid_req_s)); + + rftid->dap = s_id; + + /* By default, FCP FC4 Type is registered */ + index = FC_TYPE_FCP >> 5; + type_value = 1 << (FC_TYPE_FCP % 32); + rftid->fc4_type[index] = bfa_os_htonl(type_value); + + if (roles & BFA_PORT_ROLE_FCP_IPFC) { + index = FC_TYPE_IP >> 5; + type_value = 1 << (FC_TYPE_IP % 32); + rftid->fc4_type[index] |= bfa_os_htonl(type_value); + } + + return (sizeof(struct fcgs_rftid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_rftid_build_sol(struct fchs_s *fchs, void *pyld, u32 s_id, + u16 ox_id, u8 *fc4_bitmap, + u32 bitmap_size) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rftid_req_s *rftid = + (struct fcgs_rftid_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, ox_id); + fc_gs_cthdr_build(cthdr, s_id, 
GS_RFT_ID); + + bfa_os_memset(rftid, 0, sizeof(struct fcgs_rftid_req_s)); + + rftid->dap = s_id; + bfa_os_memcpy((void *)rftid->fc4_type, (void *)fc4_bitmap, + (bitmap_size < 32 ? bitmap_size : 32)); + + return (sizeof(struct fcgs_rftid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_rffid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u16 ox_id, + u8 fc4_type, u8 fc4_ftrs) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rffid_req_s *rffid = + (struct fcgs_rffid_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, ox_id); + fc_gs_cthdr_build(cthdr, s_id, GS_RFF_ID); + + bfa_os_memset(rffid, 0, sizeof(struct fcgs_rffid_req_s)); + + rffid->dap = s_id; + rffid->fc4ftr_bits = fc4_ftrs; + rffid->fc4_type = fc4_type; + + return (sizeof(struct fcgs_rffid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_rspnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u16 ox_id, + u8 *name) +{ + + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rspnid_req_s *rspnid = + (struct fcgs_rspnid_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, ox_id); + fc_gs_cthdr_build(cthdr, s_id, GS_RSPN_ID); + + bfa_os_memset(rspnid, 0, sizeof(struct fcgs_rspnid_req_s)); + + rspnid->dap = s_id; + rspnid->spn_len = (u8) strlen((char *)name); + strncpy((char *)rspnid->spn, (char *)name, rspnid->spn_len); + + return (sizeof(struct fcgs_rspnid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_gid_ft_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u8 fc4_type) +{ + + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_gidft_req_s *gidft = + (struct fcgs_gidft_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + + fc_gs_cthdr_build(cthdr, s_id, GS_GID_FT); + + bfa_os_memset(gidft, 0, sizeof(struct fcgs_gidft_req_s)); + gidft->fc4_type = fc4_type; + gidft->domain_id = 0; + gidft->area_id = 0; + + return (sizeof(struct fcgs_gidft_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_rpnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u32 port_id, + wwn_t port_name) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rpnid_req_s *rpnid = + (struct fcgs_rpnid_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_cthdr_build(cthdr, s_id, GS_RPN_ID); + + bfa_os_memset(rpnid, 0, sizeof(struct fcgs_rpnid_req_s)); + rpnid->port_id = port_id; + rpnid->port_name = port_name; + + return (sizeof(struct fcgs_rpnid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_rnnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u32 port_id, + wwn_t node_name) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rnnid_req_s *rnnid = + (struct fcgs_rnnid_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_cthdr_build(cthdr, s_id, GS_RNN_ID); + + bfa_os_memset(rnnid, 0, sizeof(struct fcgs_rnnid_req_s)); + rnnid->port_id = port_id; + rnnid->node_name = node_name; + + return (sizeof(struct fcgs_rnnid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_rcsid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u32 port_id, + u32 cos) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rcsid_req_s *rcsid = + (struct fcgs_rcsid_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_cthdr_build(cthdr, s_id, 
GS_RCS_ID); + + bfa_os_memset(rcsid, 0, sizeof(struct fcgs_rcsid_req_s)); + rcsid->port_id = port_id; + rcsid->cos = cos; + + return (sizeof(struct fcgs_rcsid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_rptid_build(struct fchs_s *fchs, void *pyld, u32 s_id, u32 port_id, + u8 port_type) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_rptid_req_s *rptid = + (struct fcgs_rptid_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_cthdr_build(cthdr, s_id, GS_RPT_ID); + + bfa_os_memset(rptid, 0, sizeof(struct fcgs_rptid_req_s)); + rptid->port_id = port_id; + rptid->port_type = port_type; + + return (sizeof(struct fcgs_rptid_req_s) + sizeof(struct ct_hdr_s)); +} + +u16 +fc_ganxt_build(struct fchs_s *fchs, void *pyld, u32 s_id, u32 port_id) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + struct fcgs_ganxt_req_s *ganxt = + (struct fcgs_ganxt_req_s *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_NAME_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_cthdr_build(cthdr, s_id, GS_GA_NXT); + + bfa_os_memset(ganxt, 0, sizeof(struct fcgs_ganxt_req_s)); + ganxt->port_id = port_id; + + return (sizeof(struct ct_hdr_s) + sizeof(struct fcgs_ganxt_req_s)); +} + +/* + * Builds fc hdr and ct hdr for FDMI requests. + */ +u16 +fc_fdmi_reqhdr_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u16 cmd_code) +{ + + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + u32 d_id = bfa_os_hton3b(FC_MGMT_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_fdmi_cthdr_build(cthdr, s_id, cmd_code); + + return (sizeof(struct ct_hdr_s)); +} + +/* + * Given a FC4 Type, this function returns a fc4 type bitmask + */ +void +fc_get_fc4type_bitmask(u8 fc4_type, u8 *bit_mask) +{ + u8 index; + u32 *ptr = (u32 *) bit_mask; + u32 type_value; + + /* + * @todo : Check for bitmask size + */ + + index = fc4_type >> 5; + type_value = 1 << (fc4_type % 32); + ptr[index] = bfa_os_htonl(type_value); + +} + +/* + * GMAL Request + */ +u16 +fc_gmal_req_build(struct fchs_s *fchs, void *pyld, u32 s_id, wwn_t wwn) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + fcgs_gmal_req_t *gmal = (fcgs_gmal_req_t *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_MGMT_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_ms_cthdr_build(cthdr, s_id, GS_FC_GMAL_CMD, + CT_GSSUBTYPE_CFGSERVER); + + bfa_os_memset(gmal, 0, sizeof(fcgs_gmal_req_t)); + gmal->wwn = wwn; + + return (sizeof(struct ct_hdr_s) + sizeof(fcgs_gmal_req_t)); +} + +/* + * GFN (Get Fabric Name) Request + */ +u16 +fc_gfn_req_build(struct fchs_s *fchs, void *pyld, u32 s_id, wwn_t wwn) +{ + struct ct_hdr_s *cthdr = (struct ct_hdr_s *) pyld; + fcgs_gfn_req_t *gfn = (fcgs_gfn_req_t *) (cthdr + 1); + u32 d_id = bfa_os_hton3b(FC_MGMT_SERVER); + + fc_gs_fchdr_build(fchs, d_id, s_id, 0); + fc_gs_ms_cthdr_build(cthdr, s_id, GS_FC_GFN_CMD, + CT_GSSUBTYPE_CFGSERVER); + + bfa_os_memset(gfn, 0, sizeof(fcgs_gfn_req_t)); + gfn->wwn = wwn; + + return (sizeof(struct ct_hdr_s) + sizeof(fcgs_gfn_req_t)); +} diff --git a/drivers/scsi/bfa/fcbuild.h b/drivers/scsi/bfa/fcbuild.h new file mode 100644 index 000000000000..4d248424f7b3 --- /dev/null +++ b/drivers/scsi/bfa/fcbuild.h @@ -0,0 +1,273 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +/* + * fcbuild.h - FC link service frame building and parsing routines + */ + +#ifndef __FCBUILD_H__ +#define __FCBUILD_H__ + +#include +#include +#include +#include +#include +#include + +/* + * Utility Macros/functions + */ + +#define fcif_sof_set(_ifhdr, _sof) (_ifhdr)->sof = FC_ ## _sof +#define fcif_eof_set(_ifhdr, _eof) (_ifhdr)->eof = FC_ ## _eof + +#define wwn_is_equal(_wwn1, _wwn2) \ + (memcmp(&(_wwn1), &(_wwn2), sizeof(wwn_t)) == 0) + +#define fc_roundup(_l, _s) (((_l) + ((_s) - 1)) & ~((_s) - 1)) + +/* + * Given the fc response length, this routine will return + * the length of the actual payload bytes following the CT header. + * + * Assumes the input response length does not include the crc, eof, etc. + */ +static inline u32 +fc_get_ctresp_pyld_len(u32 resp_len) +{ + return (resp_len - sizeof(struct ct_hdr_s)); +} + +/* + * Convert bfa speed to rpsc speed value. + */ +static inline enum bfa_pport_speed +fc_rpsc_operspeed_to_bfa_speed(enum fc_rpsc_op_speed_s speed) +{ + switch (speed) { + + case RPSC_OP_SPEED_1G: + return BFA_PPORT_SPEED_1GBPS; + + case RPSC_OP_SPEED_2G: + return BFA_PPORT_SPEED_2GBPS; + + case RPSC_OP_SPEED_4G: + return BFA_PPORT_SPEED_4GBPS; + + case RPSC_OP_SPEED_8G: + return BFA_PPORT_SPEED_8GBPS; + + default: + return BFA_PPORT_SPEED_UNKNOWN; + } +} + +/* + * Convert RPSC speed to bfa speed value. 
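+ * (the inverse of fc_rpsc_operspeed_to_bfa_speed() above)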
+ */ +static inline enum fc_rpsc_op_speed_s +fc_bfa_speed_to_rpsc_operspeed(enum bfa_pport_speed op_speed) +{ + switch (op_speed) { + + case BFA_PPORT_SPEED_1GBPS: + return RPSC_OP_SPEED_1G; + + case BFA_PPORT_SPEED_2GBPS: + return RPSC_OP_SPEED_2G; + + case BFA_PPORT_SPEED_4GBPS: + return RPSC_OP_SPEED_4G; + + case BFA_PPORT_SPEED_8GBPS: + return RPSC_OP_SPEED_8G; + + default: + return RPSC_OP_SPEED_NOT_EST; + } +} +enum fc_parse_status { + FC_PARSE_OK = 0, + FC_PARSE_FAILURE = 1, + FC_PARSE_BUSY = 2, + FC_PARSE_LEN_INVAL, + FC_PARSE_ACC_INVAL, + FC_PARSE_PWWN_NOT_EQUAL, + FC_PARSE_NWWN_NOT_EQUAL, + FC_PARSE_RXSZ_INVAL, + FC_PARSE_NOT_FCP, + FC_PARSE_OPAFLAG_INVAL, + FC_PARSE_RPAFLAG_INVAL, + FC_PARSE_OPA_INVAL, + FC_PARSE_RPA_INVAL, + +}; + +struct fc_templates_s { + struct fchs_s fc_els_req; + struct fchs_s fc_bls_req; + struct fc_logi_s plogi; + struct fc_rrq_s rrq; +}; + +void fcbuild_init(void); + +u16 fc_flogi_build(struct fchs_s *fchs, struct fc_logi_s *flogi, + u32 s_id, u16 ox_id, wwn_t port_name, + wwn_t node_name, u16 pdu_size, u8 set_npiv, + u8 set_auth, u16 local_bb_credits); +u16 fc_fdisc_build(struct fchs_s *buf, struct fc_logi_s *flogi, + u32 s_id, u16 ox_id, wwn_t port_name, + wwn_t node_name, u16 pdu_size); +u16 fc_flogi_acc_build(struct fchs_s *fchs, struct fc_logi_s *flogi, + u32 s_id, u16 ox_id, wwn_t port_name, + wwn_t node_name, u16 pdu_size, + u16 local_bb_credits); +u16 fc_plogi_build(struct fchs_s *fchs, void *pld, u32 d_id, + u32 s_id, u16 ox_id, wwn_t port_name, + wwn_t node_name, u16 pdu_size); +enum fc_parse_status fc_plogi_parse(struct fchs_s *fchs); +u16 fc_abts_build(struct fchs_s *buf, u32 d_id, u32 s_id, + u16 ox_id); +enum fc_parse_status fc_abts_rsp_parse(struct fchs_s *buf, int len); +u16 fc_rrq_build(struct fchs_s *buf, struct fc_rrq_s *rrq, u32 d_id, + u32 s_id, u16 ox_id, u16 rrq_oxid); +enum fc_parse_status fc_rrq_rsp_parse(struct fchs_s *buf, int len); +u16 fc_rspnid_build(struct fchs_s *fchs, void *pld, u32 s_id, + u16 ox_id, u8 *name); +u16 fc_rftid_build(struct fchs_s *fchs, void *pld, u32 s_id, + u16 ox_id, enum bfa_port_role role); +u16 fc_rftid_build_sol(struct fchs_s *fchs, void *pyld, u32 s_id, + u16 ox_id, u8 *fc4_bitmap, + u32 bitmap_size); +u16 fc_rffid_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u16 ox_id, u8 fc4_type, u8 fc4_ftrs); +u16 fc_gidpn_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u16 ox_id, wwn_t port_name); +u16 fc_gpnid_build(struct fchs_s *fchs, void *pld, u32 s_id, + u16 ox_id, u32 port_id); +u16 fc_scr_build(struct fchs_s *fchs, struct fc_scr_s *scr, + u8 set_br_reg, u32 s_id, u16 ox_id); +u16 fc_plogi_acc_build(struct fchs_s *fchs, void *pld, u32 d_id, + u32 s_id, u16 ox_id, + wwn_t port_name, wwn_t node_name, u16 pdu_size); + +u16 fc_adisc_build(struct fchs_s *fchs, struct fc_adisc_s *adisc, + u32 d_id, u32 s_id, u16 ox_id, + wwn_t port_name, wwn_t node_name); +enum fc_parse_status fc_adisc_parse(struct fchs_s *fchs, void *pld, + u32 host_dap, + wwn_t node_name, wwn_t port_name); +enum fc_parse_status fc_adisc_rsp_parse(struct fc_adisc_s *adisc, int len, + wwn_t port_name, wwn_t node_name); +u16 fc_adisc_acc_build(struct fchs_s *fchs, struct fc_adisc_s *adisc, + u32 d_id, u32 s_id, u16 ox_id, + wwn_t port_name, wwn_t node_name); +u16 fc_ls_rjt_build(struct fchs_s *fchs, struct fc_ls_rjt_s *ls_rjt, + u32 d_id, u32 s_id, u16 ox_id, + u8 reason_code, u8 reason_code_expl); +u16 fc_ls_acc_build(struct fchs_s *fchs, struct fc_els_cmd_s *els_cmd, + u32 d_id, u32 s_id, u16 ox_id); +u16 fc_prli_build(struct 
fchs_s *fchs, void *pld, u32 d_id, + u32 s_id, u16 ox_id); +enum fc_parse_status fc_prli_rsp_parse(struct fc_prli_s *prli, int len); + +u16 fc_prli_acc_build(struct fchs_s *fchs, void *pld, u32 d_id, + u32 s_id, u16 ox_id, + enum bfa_port_role role); +u16 fc_rnid_build(struct fchs_s *fchs, struct fc_rnid_cmd_s *rnid, + u32 d_id, u32 s_id, u16 ox_id, + u32 data_format); +u16 fc_rnid_acc_build(struct fchs_s *fchs, struct fc_rnid_acc_s *rnid_acc, + u32 d_id, u32 s_id, u16 ox_id, + u32 data_format, + struct fc_rnid_common_id_data_s *common_id_data, + struct fc_rnid_general_topology_data_s * + gen_topo_data); +u16 fc_rpsc2_build(struct fchs_s *fchs, struct fc_rpsc2_cmd_s *rps2c, + u32 d_id, u32 s_id, + u32 *pid_list, u16 npids); +u16 fc_rpsc_build(struct fchs_s *fchs, struct fc_rpsc_cmd_s *rpsc, + u32 d_id, u32 s_id, u16 ox_id); +u16 fc_rpsc_acc_build(struct fchs_s *fchs, struct fc_rpsc_acc_s *rpsc_acc, + u32 d_id, u32 s_id, u16 ox_id, + struct fc_rpsc_speed_info_s *oper_speed); +u16 fc_gid_ft_build(struct fchs_s *fchs, void *pld, u32 s_id, + u8 fc4_type); +u16 fc_rpnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u32 port_id, wwn_t port_name); +u16 fc_rnnid_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u32 port_id, wwn_t node_name); +u16 fc_rcsid_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u32 port_id, u32 cos); +u16 fc_rptid_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u32 port_id, u8 port_type); +u16 fc_ganxt_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u32 port_id); +u16 fc_logo_build(struct fchs_s *fchs, struct fc_logo_s *logo, + u32 d_id, u32 s_id, u16 ox_id, + wwn_t port_name); +u16 fc_logo_acc_build(struct fchs_s *fchs, void *pld, u32 d_id, + u32 s_id, u16 ox_id); +u16 fc_fdmi_reqhdr_build(struct fchs_s *fchs, void *pyld, u32 s_id, + u16 cmd_code); +u16 fc_gmal_req_build(struct fchs_s *fchs, void *pyld, u32 s_id, + wwn_t wwn); +u16 fc_gfn_req_build(struct fchs_s *fchs, void *pyld, u32 s_id, + wwn_t wwn); +void fc_get_fc4type_bitmask(u8 fc4_type, u8 *bit_mask); +void fc_els_req_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id); +enum fc_parse_status fc_els_rsp_parse(struct fchs_s *fchs, int len); +enum fc_parse_status fc_plogi_rsp_parse(struct fchs_s *fchs, int len, + wwn_t port_name); +enum fc_parse_status fc_prli_parse(struct fc_prli_s *prli); +enum fc_parse_status fc_pdisc_parse(struct fchs_s *fchs, wwn_t node_name, + wwn_t port_name); +u16 fc_ba_acc_build(struct fchs_s *fchs, struct fc_ba_acc_s *ba_acc, + u32 d_id, u32 s_id, u16 ox_id, + u16 rx_id); +int fc_logout_params_pages(struct fchs_s *fc_frame, u8 els_code); +u16 fc_tprlo_acc_build(struct fchs_s *fchs, + struct fc_tprlo_acc_s *tprlo_acc, + u32 d_id, u32 s_id, u16 ox_id, + int num_pages); +u16 fc_prlo_acc_build(struct fchs_s *fchs, struct fc_prlo_acc_s *prlo_acc, + u32 d_id, u32 s_id, u16 ox_id, + int num_pages); +u16 fc_logo_rsp_parse(struct fchs_s *fchs, int len); +u16 fc_pdisc_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id, wwn_t port_name, wwn_t node_name, + u16 pdu_size); +u16 fc_pdisc_rsp_parse(struct fchs_s *fchs, int len, wwn_t port_name); +u16 fc_prlo_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id, int num_pages); +u16 fc_prlo_rsp_parse(struct fchs_s *fchs, int len); +u16 fc_tprlo_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id, int num_pages, + enum fc_tprlo_type tprlo_type, u32 tpr_id); +u16 fc_tprlo_rsp_parse(struct fchs_s *fchs, int len); +u16 fc_ba_rjt_build(struct fchs_s *fchs, u32 d_id, u32 s_id, + u16 ox_id, u32 reason_code, + u32 
reason_expl);
+u16        fc_gnnid_build(struct fchs_s *fchs, void *pyld, u32 s_id,
+			u16 ox_id, u32 port_id);
+u16        fc_ct_rsp_parse(struct ct_hdr_s *cthdr);
+u16        fc_rscn_build(struct fchs_s *fchs, struct fc_rscn_pl_s *rscn,
+			u32 s_id, u16 ox_id);
+#endif
diff --git a/drivers/scsi/bfa/fcpim.c b/drivers/scsi/bfa/fcpim.c
new file mode 100644
index 000000000000..8ce5d8934677
--- /dev/null
+++ b/drivers/scsi/bfa/fcpim.c
@@ -0,0 +1,844 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fcpim.c - FCP initiator mode i-t nexus state machine
+ */
+
+#include
+#include
+#include "fcs_fcpim.h"
+#include "fcs_rport.h"
+#include "fcs_lport.h"
+#include "fcs_trcmod.h"
+#include "fcs_fcxp.h"
+#include "fcs.h"
+#include
+#include
+#include
+
+BFA_TRC_FILE(FCS, FCPIM);
+
+/*
+ * forward declarations
+ */
+static void	bfa_fcs_itnim_timeout(void *arg);
+static void	bfa_fcs_itnim_free(struct bfa_fcs_itnim_s *itnim);
+static void	bfa_fcs_itnim_send_prli(void *itnim_cbarg,
+					struct bfa_fcxp_s *fcxp_alloced);
+static void	bfa_fcs_itnim_prli_response(void *fcsarg,
+					    struct bfa_fcxp_s *fcxp,
+					    void *cbarg,
+					    bfa_status_t req_status,
+					    u32 rsp_len,
+					    u32 resid_len,
+					    struct fchs_s *rsp_fchs);
+static void	bfa_fcs_itnim_aen_post(struct bfa_fcs_itnim_s *itnim,
+				       enum bfa_itnim_aen_event event);
+
+/**
+ * fcs_itnim_sm FCS itnim state machine events
+ */
+
+enum bfa_fcs_itnim_event {
+	BFA_FCS_ITNIM_SM_ONLINE = 1,	/* rport online event */
+	BFA_FCS_ITNIM_SM_OFFLINE = 2,	/* rport offline */
+	BFA_FCS_ITNIM_SM_FRMSENT = 3,	/* prli frame is sent */
+	BFA_FCS_ITNIM_SM_RSP_OK = 4,	/* good response */
+	BFA_FCS_ITNIM_SM_RSP_ERROR = 5,	/* error response */
+	BFA_FCS_ITNIM_SM_TIMEOUT = 6,	/* delay timeout */
+	BFA_FCS_ITNIM_SM_HCB_OFFLINE = 7, /* BFA offline callback */
+	BFA_FCS_ITNIM_SM_HCB_ONLINE = 8, /* BFA online callback */
+	BFA_FCS_ITNIM_SM_INITIATOR = 9,	/* rport is initiator */
+	BFA_FCS_ITNIM_SM_DELETE = 10,	/* delete event from rport */
+	BFA_FCS_ITNIM_SM_PRLO = 11,	/* PRLO received from rport */
+};
+
+static void	bfa_fcs_itnim_sm_offline(struct bfa_fcs_itnim_s *itnim,
+					 enum bfa_fcs_itnim_event event);
+static void	bfa_fcs_itnim_sm_prli_send(struct bfa_fcs_itnim_s *itnim,
+					   enum bfa_fcs_itnim_event event);
+static void	bfa_fcs_itnim_sm_prli(struct bfa_fcs_itnim_s *itnim,
+				      enum bfa_fcs_itnim_event event);
+static void	bfa_fcs_itnim_sm_prli_retry(struct bfa_fcs_itnim_s *itnim,
+					    enum bfa_fcs_itnim_event event);
+static void	bfa_fcs_itnim_sm_hcb_online(struct bfa_fcs_itnim_s *itnim,
+					    enum bfa_fcs_itnim_event event);
+static void	bfa_fcs_itnim_sm_online(struct bfa_fcs_itnim_s *itnim,
+					enum bfa_fcs_itnim_event event);
+static void	bfa_fcs_itnim_sm_hcb_offline(struct bfa_fcs_itnim_s *itnim,
+					     enum bfa_fcs_itnim_event event);
+static void	bfa_fcs_itnim_sm_initiator(struct bfa_fcs_itnim_s *itnim,
+					   enum bfa_fcs_itnim_event event);
+
+static struct bfa_sm_table_s itnim_sm_table[] = {
+	{BFA_SM(bfa_fcs_itnim_sm_offline), BFA_ITNIM_OFFLINE},
+ {BFA_SM(bfa_fcs_itnim_sm_prli_send), BFA_ITNIM_PRLI_SEND},
+ {BFA_SM(bfa_fcs_itnim_sm_prli), BFA_ITNIM_PRLI_SENT},
+ {BFA_SM(bfa_fcs_itnim_sm_prli_retry), BFA_ITNIM_PRLI_RETRY},
+ {BFA_SM(bfa_fcs_itnim_sm_hcb_online), BFA_ITNIM_HCB_ONLINE},
+ {BFA_SM(bfa_fcs_itnim_sm_online), BFA_ITNIM_ONLINE},
+ {BFA_SM(bfa_fcs_itnim_sm_hcb_offline), BFA_ITNIM_HCB_OFFLINE},
+ {BFA_SM(bfa_fcs_itnim_sm_initiator), BFA_ITNIM_INITIATIOR},
+};
+
+/**
+ * fcs_itnim_sm FCS itnim state machine
+ */
+
+static void
+bfa_fcs_itnim_sm_offline(struct bfa_fcs_itnim_s *itnim,
+ enum bfa_fcs_itnim_event event)
+{
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_trc(itnim->fcs, event);
+
+ switch (event) {
+ case BFA_FCS_ITNIM_SM_ONLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_prli_send);
+ bfa_fcs_itnim_send_prli(itnim, NULL);
+ break;
+
+ case BFA_FCS_ITNIM_SM_OFFLINE:
+ bfa_fcs_rport_itnim_ack(itnim->rport);
+ break;
+
+ case BFA_FCS_ITNIM_SM_INITIATOR:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_initiator);
+ break;
+
+ case BFA_FCS_ITNIM_SM_DELETE:
+ bfa_fcs_itnim_free(itnim);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+
+}
+
+static void
+bfa_fcs_itnim_sm_prli_send(struct bfa_fcs_itnim_s *itnim,
+ enum bfa_fcs_itnim_event event)
+{
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_trc(itnim->fcs, event);
+
+ switch (event) {
+ case BFA_FCS_ITNIM_SM_FRMSENT:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_prli);
+ break;
+
+ case BFA_FCS_ITNIM_SM_INITIATOR:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_initiator);
+ bfa_fcxp_walloc_cancel(itnim->fcs->bfa, &itnim->fcxp_wqe);
+ break;
+
+ case BFA_FCS_ITNIM_SM_OFFLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcxp_walloc_cancel(itnim->fcs->bfa, &itnim->fcxp_wqe);
+ bfa_fcs_rport_itnim_ack(itnim->rport);
+ break;
+
+ case BFA_FCS_ITNIM_SM_DELETE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcxp_walloc_cancel(itnim->fcs->bfa, &itnim->fcxp_wqe);
+ bfa_fcs_itnim_free(itnim);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_itnim_sm_prli(struct bfa_fcs_itnim_s *itnim,
+ enum bfa_fcs_itnim_event event)
+{
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_trc(itnim->fcs, event);
+
+ switch (event) {
+ case BFA_FCS_ITNIM_SM_RSP_OK:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_hcb_online);
+ bfa_itnim_online(itnim->bfa_itnim, itnim->seq_rec);
+ break;
+
+ case BFA_FCS_ITNIM_SM_RSP_ERROR:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_prli_retry);
+ bfa_timer_start(itnim->fcs->bfa, &itnim->timer,
+ bfa_fcs_itnim_timeout, itnim,
+ BFA_FCS_RETRY_TIMEOUT);
+ break;
+
+ case BFA_FCS_ITNIM_SM_OFFLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcxp_discard(itnim->fcxp);
+ bfa_fcs_rport_itnim_ack(itnim->rport);
+ break;
+
+ case BFA_FCS_ITNIM_SM_INITIATOR:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_initiator);
+ /*
+ * don't discard fcxp; the accept will reach the same state
+ */
+ break;
+
+ case BFA_FCS_ITNIM_SM_DELETE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcxp_discard(itnim->fcxp);
+ bfa_fcs_itnim_free(itnim);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_itnim_sm_prli_retry(struct bfa_fcs_itnim_s *itnim,
+ enum bfa_fcs_itnim_event event)
+{
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_trc(itnim->fcs, event);
+
+ switch (event) {
+ case BFA_FCS_ITNIM_SM_TIMEOUT:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_prli_send);
+ bfa_fcs_itnim_send_prli(itnim, NULL);
+ break;
+
+ case BFA_FCS_ITNIM_SM_OFFLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_timer_stop(&itnim->timer);
+ bfa_fcs_rport_itnim_ack(itnim->rport);
+ break;
+
+ case BFA_FCS_ITNIM_SM_INITIATOR:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_initiator);
+ bfa_timer_stop(&itnim->timer);
+ break;
+
+ case BFA_FCS_ITNIM_SM_DELETE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_timer_stop(&itnim->timer);
+ bfa_fcs_itnim_free(itnim);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_itnim_sm_hcb_online(struct bfa_fcs_itnim_s *itnim,
+ enum bfa_fcs_itnim_event event)
+{
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_trc(itnim->fcs, event);
+
+ switch (event) {
+ case BFA_FCS_ITNIM_SM_HCB_ONLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_online);
+ bfa_fcb_itnim_online(itnim->itnim_drv);
+ bfa_fcs_itnim_aen_post(itnim, BFA_ITNIM_AEN_ONLINE);
+ break;
+
+ case BFA_FCS_ITNIM_SM_OFFLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_itnim_offline(itnim->bfa_itnim);
+ bfa_fcs_rport_itnim_ack(itnim->rport);
+ break;
+
+ case BFA_FCS_ITNIM_SM_DELETE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcs_itnim_free(itnim);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_itnim_sm_online(struct bfa_fcs_itnim_s *itnim,
+ enum bfa_fcs_itnim_event event)
+{
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_trc(itnim->fcs, event);
+
+ switch (event) {
+ case BFA_FCS_ITNIM_SM_OFFLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_hcb_offline);
+ bfa_fcb_itnim_offline(itnim->itnim_drv);
+ bfa_itnim_offline(itnim->bfa_itnim);
+ if (bfa_fcs_port_is_online(itnim->rport->port) == BFA_TRUE) {
+ bfa_fcs_itnim_aen_post(itnim, BFA_ITNIM_AEN_DISCONNECT);
+ } else {
+ bfa_fcs_itnim_aen_post(itnim, BFA_ITNIM_AEN_OFFLINE);
+ }
+ break;
+
+ case BFA_FCS_ITNIM_SM_DELETE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcs_itnim_free(itnim);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_itnim_sm_hcb_offline(struct bfa_fcs_itnim_s *itnim,
+ enum bfa_fcs_itnim_event event)
+{
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_trc(itnim->fcs, event);
+
+ switch (event) {
+ case BFA_FCS_ITNIM_SM_HCB_OFFLINE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcs_rport_itnim_ack(itnim->rport);
+ break;
+
+ case BFA_FCS_ITNIM_SM_DELETE:
+ bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline);
+ bfa_fcs_itnim_free(itnim);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+/*
+ * This state is set when a discovered rport is also in initiator mode.
+ * This ITN is marked as no_op: it is not active and will not be turned
+ * into the online state.
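(Aside for readers of this patch: the handlers above all follow one pattern. The current state of an i-t nexus is simply a pointer to its handler function; bfa_sm_set_state() changes that pointer and bfa_sm_send_event() calls through it. The real macros live in the driver's bfa_sm.h, which is not part of this hunk; the toy_* sketch below only assumes they reduce to a pointer assignment and an indirect call.)

#include <stdio.h>

struct toy_itnim;
typedef void (*toy_sm_t)(struct toy_itnim *itnim, int event);

struct toy_itnim {
	toy_sm_t sm;		/* current state == current handler */
};

/* assumed shape of the driver's bfa_sm_set_state()/bfa_sm_send_event() */
#define toy_sm_set_state(obj, state)	((obj)->sm = (state))
#define toy_sm_send_event(obj, ev)	((obj)->sm((obj), (ev)))

enum { TOY_EV_ONLINE = 1, TOY_EV_OFFLINE = 2 };

static void toy_sm_offline(struct toy_itnim *itnim, int event);
static void toy_sm_online(struct toy_itnim *itnim, int event);

static void
toy_sm_offline(struct toy_itnim *itnim, int event)
{
	if (event == TOY_EV_ONLINE) {
		printf("offline -> online\n");
		toy_sm_set_state(itnim, toy_sm_online);
	}
}

static void
toy_sm_online(struct toy_itnim *itnim, int event)
{
	if (event == TOY_EV_OFFLINE) {
		printf("online -> offline\n");
		toy_sm_set_state(itnim, toy_sm_offline);
	}
}

int
main(void)
{
	struct toy_itnim itnim = { .sm = toy_sm_offline };

	toy_sm_send_event(&itnim, TOY_EV_ONLINE);
	toy_sm_send_event(&itnim, TOY_EV_OFFLINE);
	return 0;
}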
+ */ +static void +bfa_fcs_itnim_sm_initiator(struct bfa_fcs_itnim_s *itnim, + enum bfa_fcs_itnim_event event) +{ + bfa_trc(itnim->fcs, itnim->rport->pwwn); + bfa_trc(itnim->fcs, event); + + switch (event) { + case BFA_FCS_ITNIM_SM_OFFLINE: + bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline); + bfa_fcs_rport_itnim_ack(itnim->rport); + break; + + case BFA_FCS_ITNIM_SM_RSP_ERROR: + case BFA_FCS_ITNIM_SM_ONLINE: + case BFA_FCS_ITNIM_SM_INITIATOR: + break; + + case BFA_FCS_ITNIM_SM_DELETE: + bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline); + bfa_fcs_itnim_free(itnim); + break; + + default: + bfa_assert(0); + } +} + + + +/** + * itnim_private FCS ITNIM private interfaces + */ + +static void +bfa_fcs_itnim_aen_post(struct bfa_fcs_itnim_s *itnim, + enum bfa_itnim_aen_event event) +{ + struct bfa_fcs_rport_s *rport = itnim->rport; + union bfa_aen_data_u aen_data; + struct bfa_log_mod_s *logmod = rport->fcs->logm; + wwn_t lpwwn = bfa_fcs_port_get_pwwn(rport->port); + wwn_t rpwwn = rport->pwwn; + char lpwwn_ptr[BFA_STRING_32]; + char rpwwn_ptr[BFA_STRING_32]; + + /* + * Don't post events for well known addresses + */ + if (BFA_FCS_PID_IS_WKA(rport->pid)) + return; + + wwn2str(lpwwn_ptr, lpwwn); + wwn2str(rpwwn_ptr, rpwwn); + + switch (event) { + case BFA_ITNIM_AEN_ONLINE: + bfa_log(logmod, BFA_AEN_ITNIM_ONLINE, rpwwn_ptr, lpwwn_ptr); + break; + case BFA_ITNIM_AEN_OFFLINE: + bfa_log(logmod, BFA_AEN_ITNIM_OFFLINE, rpwwn_ptr, lpwwn_ptr); + break; + case BFA_ITNIM_AEN_DISCONNECT: + bfa_log(logmod, BFA_AEN_ITNIM_DISCONNECT, rpwwn_ptr, lpwwn_ptr); + break; + default: + break; + } + + aen_data.itnim.vf_id = rport->port->fabric->vf_id; + aen_data.itnim.ppwwn = + bfa_fcs_port_get_pwwn(bfa_fcs_get_base_port(itnim->fcs)); + aen_data.itnim.lpwwn = lpwwn; + aen_data.itnim.rpwwn = rpwwn; +} + +static void +bfa_fcs_itnim_send_prli(void *itnim_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_itnim_s *itnim = itnim_cbarg; + struct bfa_fcs_rport_s *rport = itnim->rport; + struct bfa_fcs_port_s *port = rport->port; + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + int len; + + bfa_trc(itnim->fcs, itnim->rport->pwwn); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + itnim->stats.fcxp_alloc_wait++; + bfa_fcxp_alloc_wait(port->fcs->bfa, &itnim->fcxp_wqe, + bfa_fcs_itnim_send_prli, itnim); + return; + } + itnim->fcxp = fcxp; + + len = fc_prli_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), itnim->rport->pid, + bfa_fcs_port_get_fcid(port), 0); + + bfa_fcxp_send(fcxp, rport->bfa_rport, port->fabric->vf_id, port->lp_tag, + BFA_FALSE, FC_CLASS_3, len, &fchs, + bfa_fcs_itnim_prli_response, (void *)itnim, FC_MAX_PDUSZ, + FC_RA_TOV); + + itnim->stats.prli_sent++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_FRMSENT); +} + +static void +bfa_fcs_itnim_prli_response(void *fcsarg, struct bfa_fcxp_s *fcxp, void *cbarg, + bfa_status_t req_status, u32 rsp_len, + u32 resid_len, struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_itnim_s *itnim = (struct bfa_fcs_itnim_s *)cbarg; + struct fc_els_cmd_s *els_cmd; + struct fc_prli_s *prli_resp; + struct fc_ls_rjt_s *ls_rjt; + struct fc_prli_params_s *sparams; + + bfa_trc(itnim->fcs, req_status); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + itnim->stats.prli_rsp_err++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_RSP_ERROR); + return; + } + + els_cmd = (struct fc_els_cmd_s *) BFA_FCXP_RSP_PLD(fcxp); + + if (els_cmd->els_code == FC_ELS_ACC) { + prli_resp = (struct fc_prli_s *) els_cmd; + + if (fc_prli_rsp_parse(prli_resp, rsp_len) != FC_PARSE_OK) { + bfa_trc(itnim->fcs, rsp_len); + /* + * Check if this r-port is also in Initiator mode. + * If so, we need to set this ITN as a no-op. + */ + if (prli_resp->parampage.servparams.initiator) { + bfa_trc(itnim->fcs, prli_resp->parampage.type); + itnim->rport->scsi_function = + BFA_RPORT_INITIATOR; + itnim->stats.prli_rsp_acc++; + bfa_sm_send_event(itnim, + BFA_FCS_ITNIM_SM_INITIATOR); + return; + } + + itnim->stats.prli_rsp_parse_err++; + return; + } + itnim->rport->scsi_function = BFA_RPORT_TARGET; + + sparams = &prli_resp->parampage.servparams; + itnim->seq_rec = sparams->retry; + itnim->rec_support = sparams->rec_support; + itnim->task_retry_id = sparams->task_retry_id; + itnim->conf_comp = sparams->confirm; + + itnim->stats.prli_rsp_acc++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_RSP_OK); + } else { + ls_rjt = (struct fc_ls_rjt_s *) BFA_FCXP_RSP_PLD(fcxp); + + bfa_trc(itnim->fcs, ls_rjt->reason_code); + bfa_trc(itnim->fcs, ls_rjt->reason_code_expl); + + itnim->stats.prli_rsp_rjt++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_RSP_ERROR); + } +} + +static void +bfa_fcs_itnim_timeout(void *arg) +{ + struct bfa_fcs_itnim_s *itnim = (struct bfa_fcs_itnim_s *)arg; + + itnim->stats.timeout++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_TIMEOUT); +} + +static void +bfa_fcs_itnim_free(struct bfa_fcs_itnim_s *itnim) +{ + bfa_itnim_delete(itnim->bfa_itnim); + bfa_fcb_itnim_free(itnim->fcs->bfad, itnim->itnim_drv); +} + + + +/** + * itnim_public FCS ITNIM public interfaces + */ + +/** + * Called by rport when a new rport is created. + * + * @param[in] rport - remote port. 
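(Aside: the PRLI response handler above encodes a four-way decision that is easy to miss in the nesting. The condensed sketch below mirrors that logic; the toy_* names are illustrative stand-ins, not driver APIs.)

#include <stdio.h>

enum toy_els { TOY_ELS_ACC, TOY_ELS_RJT };
enum toy_action {
	TOY_GO_ONLINE,		/* -> BFA_FCS_ITNIM_SM_RSP_OK */
	TOY_MARK_INITIATOR,	/* -> BFA_FCS_ITNIM_SM_INITIATOR */
	TOY_IGNORE,		/* parse error: count it, stay put */
	TOY_RETRY,		/* -> BFA_FCS_ITNIM_SM_RSP_ERROR */
};

struct toy_prli_rsp {
	enum toy_els els_code;
	int parse_ok;	/* payload carries valid FCP target params */
	int initiator;	/* peer advertises only the initiator function */
};

static enum toy_action
toy_prli_action(const struct toy_prli_rsp *rsp)
{
	if (rsp->els_code != TOY_ELS_ACC)
		return TOY_RETRY;	/* LS_RJT: delay and resend */
	if (!rsp->parse_ok)
		return rsp->initiator ? TOY_MARK_INITIATOR : TOY_IGNORE;
	return TOY_GO_ONLINE;		/* FCP target: bring the nexus up */
}

int
main(void)
{
	struct toy_prli_rsp rjt = { TOY_ELS_RJT, 0, 0 };

	printf("%d\n", toy_prli_action(&rjt));	/* prints 3 (TOY_RETRY) */
	return 0;
}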
+ */ +struct bfa_fcs_itnim_s * +bfa_fcs_itnim_create(struct bfa_fcs_rport_s *rport) +{ + struct bfa_fcs_port_s *port = rport->port; + struct bfa_fcs_itnim_s *itnim; + struct bfad_itnim_s *itnim_drv; + struct bfa_itnim_s *bfa_itnim; + + /* + * call bfad to allocate the itnim + */ + bfa_fcb_itnim_alloc(port->fcs->bfad, &itnim, &itnim_drv); + if (itnim == NULL) { + bfa_trc(port->fcs, rport->pwwn); + return NULL; + } + + /* + * Initialize itnim + */ + itnim->rport = rport; + itnim->fcs = rport->fcs; + itnim->itnim_drv = itnim_drv; + + /* + * call BFA to create the itnim + */ + bfa_itnim = bfa_itnim_create(port->fcs->bfa, rport->bfa_rport, itnim); + + if (bfa_itnim == NULL) { + bfa_trc(port->fcs, rport->pwwn); + bfa_fcb_itnim_free(port->fcs->bfad, itnim_drv); + bfa_assert(0); + return NULL; + } + + itnim->bfa_itnim = bfa_itnim; + itnim->seq_rec = BFA_FALSE; + itnim->rec_support = BFA_FALSE; + itnim->conf_comp = BFA_FALSE; + itnim->task_retry_id = BFA_FALSE; + + /* + * Set State machine + */ + bfa_sm_set_state(itnim, bfa_fcs_itnim_sm_offline); + + return itnim; +} + +/** + * Called by rport to delete the instance of FCPIM. + * + * @param[in] rport - remote port. + */ +void +bfa_fcs_itnim_delete(struct bfa_fcs_itnim_s *itnim) +{ + bfa_trc(itnim->fcs, itnim->rport->pid); + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_DELETE); +} + +/** + * Notification from rport that PLOGI is complete to initiate FC-4 session. + */ +void +bfa_fcs_itnim_rport_online(struct bfa_fcs_itnim_s *itnim) +{ + itnim->stats.onlines++; + + if (!BFA_FCS_PID_IS_WKA(itnim->rport->pid)) { + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_ONLINE); + } else { + /* + * For well known addresses, we set the itnim to initiator + * state + */ + itnim->stats.initiator++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_INITIATOR); + } +} + +/** + * Called by rport to handle a remote device offline. + */ +void +bfa_fcs_itnim_rport_offline(struct bfa_fcs_itnim_s *itnim) +{ + itnim->stats.offlines++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_OFFLINE); +} + +/** + * Called by rport when remote port is known to be an initiator from + * PRLI received. + */ +void +bfa_fcs_itnim_is_initiator(struct bfa_fcs_itnim_s *itnim) +{ + bfa_trc(itnim->fcs, itnim->rport->pid); + itnim->stats.initiator++; + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_INITIATOR); +} + +/** + * Called by rport to check if the itnim is online. + */ +bfa_status_t +bfa_fcs_itnim_get_online_state(struct bfa_fcs_itnim_s *itnim) +{ + bfa_trc(itnim->fcs, itnim->rport->pid); + switch (bfa_sm_to_state(itnim_sm_table, itnim->sm)) { + case BFA_ITNIM_ONLINE: + case BFA_ITNIM_INITIATIOR: + return BFA_STATUS_OK; + + default: + return BFA_STATUS_NO_FCPIM_NEXUS; + + } +} + +/** + * BFA completion callback for bfa_itnim_online(). + */ +void +bfa_cb_itnim_online(void *cbarg) +{ + struct bfa_fcs_itnim_s *itnim = (struct bfa_fcs_itnim_s *)cbarg; + + bfa_trc(itnim->fcs, itnim->rport->pwwn); + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_HCB_ONLINE); +} + +/** + * BFA completion callback for bfa_itnim_offline(). + */ +void +bfa_cb_itnim_offline(void *cb_arg) +{ + struct bfa_fcs_itnim_s *itnim = (struct bfa_fcs_itnim_s *)cb_arg; + + bfa_trc(itnim->fcs, itnim->rport->pwwn); + bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_HCB_OFFLINE); +} + +/** + * Mark the beginning of PATH TOV handling. IO completion callbacks + * are still pending. 
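(Aside: bfa_fcs_itnim_get_online_state() above converts the current handler pointer back into a reportable enum by consulting itnim_sm_table. A sketch of that reverse lookup, assuming bfa_sm_to_state() is a linear scan over {handler, state} pairs; toy_* names are illustrative.)

#include <stdio.h>

typedef void (*toy_sm_t)(void);

struct toy_sm_table {
	toy_sm_t sm;	/* state handler function */
	int state;	/* enum value reported to management code */
};

static void toy_sm_offline(void) { }
static void toy_sm_online(void) { }

static const struct toy_sm_table toy_table[] = {
	{ toy_sm_offline, 1 },
	{ toy_sm_online, 5 },
	{ NULL, 0 },			/* sentinel */
};

/* assumed behaviour of bfa_sm_to_state(): scan until the handler matches */
static int
toy_sm_to_state(const struct toy_sm_table *tbl, toy_sm_t cur)
{
	int i = 0;

	while (tbl[i].sm && tbl[i].sm != cur)
		i++;
	return tbl[i].state;
}

int
main(void)
{
	printf("%d\n", toy_sm_to_state(toy_table, toy_sm_online)); /* 5 */
	return 0;
}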
+ */
+void
+bfa_cb_itnim_tov_begin(void *cb_arg)
+{
+ struct bfa_fcs_itnim_s *itnim = (struct bfa_fcs_itnim_s *)cb_arg;
+
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_fcb_itnim_tov_begin(itnim->itnim_drv);
+}
+
+/**
+ * Mark the end of PATH TOV handling. All pending IOs are already cleaned up.
+ */
+void
+bfa_cb_itnim_tov(void *cb_arg)
+{
+ struct bfa_fcs_itnim_s *itnim = (struct bfa_fcs_itnim_s *)cb_arg;
+
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_fcb_itnim_tov(itnim->itnim_drv);
+}
+
+/**
+ * BFA notification to FCS/driver for second level error recovery.
+ *
+ * At least one I/O request has timed out and the target is unresponsive to
+ * repeated abort requests. Second level error recovery should be initiated
+ * by starting implicit logout and recovery procedures.
+ */
+void
+bfa_cb_itnim_sler(void *cb_arg)
+{
+ struct bfa_fcs_itnim_s *itnim = (struct bfa_fcs_itnim_s *)cb_arg;
+
+ itnim->stats.sler++;
+ bfa_trc(itnim->fcs, itnim->rport->pwwn);
+ bfa_fcs_rport_logo_imp(itnim->rport);
+}
+
+struct bfa_fcs_itnim_s *
+bfa_fcs_itnim_lookup(struct bfa_fcs_port_s *port, wwn_t rpwwn)
+{
+ struct bfa_fcs_rport_s *rport;
+ rport = bfa_fcs_rport_lookup(port, rpwwn);
+
+ if (!rport)
+ return NULL;
+
+ bfa_assert(rport->itnim != NULL);
+ return (rport->itnim);
+}
+
+bfa_status_t
+bfa_fcs_itnim_attr_get(struct bfa_fcs_port_s *port, wwn_t rpwwn,
+ struct bfa_itnim_attr_s *attr)
+{
+ struct bfa_fcs_itnim_s *itnim = NULL;
+
+ itnim = bfa_fcs_itnim_lookup(port, rpwwn);
+
+ if (itnim == NULL)
+ return BFA_STATUS_NO_FCPIM_NEXUS;
+
+ attr->state = bfa_sm_to_state(itnim_sm_table, itnim->sm);
+ attr->retry = itnim->seq_rec;
+ attr->rec_support = itnim->rec_support;
+ attr->conf_comp = itnim->conf_comp;
+ attr->task_retry_id = itnim->task_retry_id;
+
+ return BFA_STATUS_OK;
+}
+
+bfa_status_t
+bfa_fcs_itnim_stats_get(struct bfa_fcs_port_s *port, wwn_t rpwwn,
+ struct bfa_itnim_stats_s *stats)
+{
+ struct bfa_fcs_itnim_s *itnim = NULL;
+
+ bfa_assert(port != NULL);
+
+ itnim = bfa_fcs_itnim_lookup(port, rpwwn);
+
+ if (itnim == NULL)
+ return BFA_STATUS_NO_FCPIM_NEXUS;
+
+ bfa_os_memcpy(stats, &itnim->stats, sizeof(struct bfa_itnim_stats_s));
+
+ return BFA_STATUS_OK;
+}
+
+bfa_status_t
+bfa_fcs_itnim_stats_clear(struct bfa_fcs_port_s *port, wwn_t rpwwn)
+{
+ struct bfa_fcs_itnim_s *itnim = NULL;
+
+ bfa_assert(port != NULL);
+
+ itnim = bfa_fcs_itnim_lookup(port, rpwwn);
+
+ if (itnim == NULL)
+ return BFA_STATUS_NO_FCPIM_NEXUS;
+
+ bfa_os_memset(&itnim->stats, 0, sizeof(struct bfa_itnim_stats_s));
+ return BFA_STATUS_OK;
+}
+
+void
+bfa_fcs_fcpim_uf_recv(struct bfa_fcs_itnim_s *itnim, struct fchs_s *fchs,
+ u16 len)
+{
+ struct fc_els_cmd_s *els_cmd;
+
+ bfa_trc(itnim->fcs, fchs->type);
+
+ if (fchs->type != FC_TYPE_ELS)
+ return;
+
+ els_cmd = (struct fc_els_cmd_s *) (fchs + 1);
+
+ bfa_trc(itnim->fcs, els_cmd->els_code);
+
+ switch (els_cmd->els_code) {
+ case FC_ELS_PRLO:
+ /* bfa_sm_send_event(itnim, BFA_FCS_ITNIM_SM_PRLO); */
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+void
+bfa_fcs_itnim_pause(struct bfa_fcs_itnim_s *itnim)
+{
+}
+
+void
+bfa_fcs_itnim_resume(struct bfa_fcs_itnim_s *itnim)
+{
+}
+
+/**
+ * Module initialization
+ */
+void
+bfa_fcs_fcpim_modinit(struct bfa_fcs_s *fcs)
+{
+}
+
+/**
+ * Module cleanup
+ */
+void
+bfa_fcs_fcpim_modexit(struct bfa_fcs_s *fcs)
+{
+ bfa_fcs_modexit_comp(fcs);
+}
+
+
diff --git a/drivers/scsi/bfa/fcptm.c b/drivers/scsi/bfa/fcptm.c
new file mode 100644
index 000000000000..8c8b08c72e7a
--- /dev/null
+++ b/drivers/scsi/bfa/fcptm.c
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * This file contains dummy FCPTM routines to allow initiator-mode-only
+ * compilation of the OS driver.
+ *
+ */
+
+#include "bfa_os_inc.h"
+#include "fcs_rport.h"
+#include "fcs_fcptm.h"
+#include "fcs/bfa_fcs_rport.h"
+
+struct bfa_fcs_tin_s *
+bfa_fcs_tin_create(struct bfa_fcs_rport_s *rport)
+{
+ return NULL;
+}
+
+void
+bfa_fcs_tin_delete(struct bfa_fcs_tin_s *tin)
+{
+}
+
+void
+bfa_fcs_tin_rport_offline(struct bfa_fcs_tin_s *tin)
+{
+}
+
+void
+bfa_fcs_tin_rport_online(struct bfa_fcs_tin_s *tin)
+{
+}
+
+void
+bfa_fcs_tin_rx_prli(struct bfa_fcs_tin_s *tin, struct fchs_s *fchs, u16 len)
+{
+}
+
+void
+bfa_fcs_fcptm_uf_recv(struct bfa_fcs_tin_s *tin, struct fchs_s *fchs, u16 len)
+{
+}
+
+void
+bfa_fcs_tin_pause(struct bfa_fcs_tin_s *tin)
+{
+}
+
+void
+bfa_fcs_tin_resume(struct bfa_fcs_tin_s *tin)
+{
+}
diff --git a/drivers/scsi/bfa/fcs.h b/drivers/scsi/bfa/fcs.h
new file mode 100644
index 000000000000..deee685e8478
--- /dev/null
+++ b/drivers/scsi/bfa/fcs.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fcs.h FCS module functions
+ */
+
+
+#ifndef __FCS_H__
+#define __FCS_H__
+
+#define __fcs_min_cfg(__fcs) (__fcs)->min_cfg
+
+void bfa_fcs_modexit_comp(struct bfa_fcs_s *fcs);
+
+#endif /* __FCS_H__ */
diff --git a/drivers/scsi/bfa/fcs_auth.h b/drivers/scsi/bfa/fcs_auth.h
new file mode 100644
index 000000000000..65d155fea3d7
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_auth.h
@@ -0,0 +1,37 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
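(Aside: fcptm.c above is a linker-satisfaction pattern: every target-mode (FCPTM) entry point the rport code can call is provided as an empty stub, so an initiator-only build needs no #ifdefs at the call sites. A minimal illustration of the same idea, with toy names rather than the driver's:)

/* toy_itf.h - one interface shared by real and stub implementations */
struct toy_tin;					/* opaque to callers */
struct toy_tin *toy_tin_create(void);
void toy_tin_rx_prli(struct toy_tin *tin);

/* toy_itf_stub.c - linked into initiator-only builds */
struct toy_tin *
toy_tin_create(void)
{
	return 0;		/* callers treat NULL as "no target mode" */
}

void
toy_tin_rx_prli(struct toy_tin *tin)
{
	(void)tin;		/* nothing to do without target support */
}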
+ */
+
+/**
+ * fcs_auth.h FCS authentication interfaces
+ */
+
+
+#ifndef __FCS_AUTH_H__
+#define __FCS_AUTH_H__
+
+#include
+#include
+#include
+
+/*
+ * fcs friend functions: only between fcs modules
+ */
+void bfa_fcs_auth_uf_recv(struct bfa_fcs_fabric_s *fabric, int len);
+void bfa_fcs_auth_start(struct bfa_fcs_fabric_s *fabric);
+void bfa_fcs_auth_stop(struct bfa_fcs_fabric_s *fabric);
+
+#endif /* __FCS_AUTH_H__ */
diff --git a/drivers/scsi/bfa/fcs_fabric.h b/drivers/scsi/bfa/fcs_fabric.h
new file mode 100644
index 000000000000..eee960820f86
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_fabric.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fcs_fabric.h FCS fabric interfaces
+ */
+
+#ifndef __FCS_FABRIC_H__
+#define __FCS_FABRIC_H__
+
+#include
+#include
+#include
+
+/*
+ * fcs friend functions: only between fcs modules
+ */
+void bfa_fcs_fabric_modinit(struct bfa_fcs_s *fcs);
+void bfa_fcs_fabric_modexit(struct bfa_fcs_s *fcs);
+void bfa_fcs_fabric_modsusp(struct bfa_fcs_s *fcs);
+void bfa_fcs_fabric_link_up(struct bfa_fcs_fabric_s *fabric);
+void bfa_fcs_fabric_link_down(struct bfa_fcs_fabric_s *fabric);
+void bfa_fcs_fabric_addvport(struct bfa_fcs_fabric_s *fabric,
+ struct bfa_fcs_vport_s *vport);
+void bfa_fcs_fabric_delvport(struct bfa_fcs_fabric_s *fabric,
+ struct bfa_fcs_vport_s *vport);
+int bfa_fcs_fabric_is_online(struct bfa_fcs_fabric_s *fabric);
+struct bfa_fcs_vport_s *bfa_fcs_fabric_vport_lookup(
+ struct bfa_fcs_fabric_s *fabric, wwn_t pwwn);
+void bfa_fcs_fabric_modstart(struct bfa_fcs_s *fcs);
+void bfa_fcs_fabric_uf_recv(struct bfa_fcs_fabric_s *fabric,
+ struct fchs_s *fchs, u16 len);
+u16 bfa_fcs_fabric_vport_count(struct bfa_fcs_fabric_s *fabric);
+bfa_boolean_t bfa_fcs_fabric_is_loopback(struct bfa_fcs_fabric_s *fabric);
+enum bfa_pport_type bfa_fcs_fabric_port_type(struct bfa_fcs_fabric_s *fabric);
+void bfa_fcs_fabric_psymb_init(struct bfa_fcs_fabric_s *fabric);
+void bfa_fcs_fabric_port_delete_comp(struct bfa_fcs_fabric_s *fabric);
+
+bfa_status_t bfa_fcs_fabric_addvf(struct bfa_fcs_fabric_s *vf,
+ struct bfa_fcs_s *fcs, struct bfa_port_cfg_s *port_cfg,
+ struct bfad_vf_s *vf_drv);
+void bfa_fcs_auth_finished(struct bfa_fcs_fabric_s *fabric,
+ enum auth_status status);
+
+void bfa_fcs_fabric_set_fabric_name(struct bfa_fcs_fabric_s *fabric,
+ wwn_t fabric_name);
+#endif /* __FCS_FABRIC_H__ */
diff --git a/drivers/scsi/bfa/fcs_fcpim.h b/drivers/scsi/bfa/fcs_fcpim.h
new file mode 100644
index 000000000000..61e9e2687de3
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_fcpim.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __FCS_FCPIM_H__
+#define __FCS_FCPIM_H__
+
+#include
+#include
+#include
+
+/*
+ * Following routines are from FCPIM and will be called by rport.
+ */
+struct bfa_fcs_itnim_s *bfa_fcs_itnim_create(struct bfa_fcs_rport_s *rport);
+void bfa_fcs_itnim_delete(struct bfa_fcs_itnim_s *itnim);
+void bfa_fcs_itnim_rport_offline(struct bfa_fcs_itnim_s *itnim);
+void bfa_fcs_itnim_rport_online(struct bfa_fcs_itnim_s *itnim);
+bfa_status_t bfa_fcs_itnim_get_online_state(struct bfa_fcs_itnim_s *itnim);
+
+void bfa_fcs_itnim_is_initiator(struct bfa_fcs_itnim_s *itnim);
+void bfa_fcs_itnim_pause(struct bfa_fcs_itnim_s *itnim);
+void bfa_fcs_itnim_resume(struct bfa_fcs_itnim_s *itnim);
+
+/*
+ * Module init/cleanup routines.
+ */
+void bfa_fcs_fcpim_modinit(struct bfa_fcs_s *fcs);
+void bfa_fcs_fcpim_modexit(struct bfa_fcs_s *fcs);
+void bfa_fcs_fcpim_uf_recv(struct bfa_fcs_itnim_s *itnim, struct fchs_s *fchs,
+ u16 len);
+#endif /* __FCS_FCPIM_H__ */
diff --git a/drivers/scsi/bfa/fcs_fcptm.h b/drivers/scsi/bfa/fcs_fcptm.h
new file mode 100644
index 000000000000..ffff0829fd31
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_fcptm.h
@@ -0,0 +1,45 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __FCS_FCPTM_H__
+#define __FCS_FCPTM_H__
+
+#include
+#include
+#include
+
+/*
+ * Following routines are from FCPTM and will be called by rport.
+ */
+struct bfa_fcs_tin_s *bfa_fcs_tin_create(struct bfa_fcs_rport_s *rport);
+void bfa_fcs_tin_rport_offline(struct bfa_fcs_tin_s *tin);
+void bfa_fcs_tin_rport_online(struct bfa_fcs_tin_s *tin);
+void bfa_fcs_tin_delete(struct bfa_fcs_tin_s *tin);
+void bfa_fcs_tin_rx_prli(struct bfa_fcs_tin_s *tin, struct fchs_s *fchs,
+ u16 len);
+void bfa_fcs_tin_pause(struct bfa_fcs_tin_s *tin);
+void bfa_fcs_tin_resume(struct bfa_fcs_tin_s *tin);
+
+/*
+ * Module init/cleanup routines.
+ */
+void bfa_fcs_fcptm_modinit(struct bfa_fcs_s *fcs);
+void bfa_fcs_fcptm_modexit(struct bfa_fcs_s *fcs);
+void bfa_fcs_fcptm_uf_recv(struct bfa_fcs_tin_s *tin, struct fchs_s *fchs,
+ u16 len);
+
+#endif /* __FCS_FCPTM_H__ */
diff --git a/drivers/scsi/bfa/fcs_fcxp.h b/drivers/scsi/bfa/fcs_fcxp.h
new file mode 100644
index 000000000000..8277fe9c2b70
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_fcxp.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fcs_fcxp.h FCXP helper macros for FCS
+ */
+
+
+#ifndef __FCS_FCXP_H__
+#define __FCS_FCXP_H__
+
+#define bfa_fcs_fcxp_alloc(__fcs) \
+ bfa_fcxp_alloc(NULL, (__fcs)->bfa, 0, 0, NULL, NULL, NULL, NULL)
+
+#endif /* __FCS_FCXP_H__ */
diff --git a/drivers/scsi/bfa/fcs_lport.h b/drivers/scsi/bfa/fcs_lport.h
new file mode 100644
index 000000000000..ae744ba35671
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_lport.h
@@ -0,0 +1,117 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fcs_lport.h FCS logical port interfaces
+ */
+
+#ifndef __FCS_LPORT_H__
+#define __FCS_LPORT_H__
+
+#define __VPORT_H__
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * PID used in P2P/N2N (in big endian)
+ */
+#define N2N_LOCAL_PID 0x010000
+#define N2N_REMOTE_PID 0x020000
+
+/*
+ * Misc Timeouts
+ */
+/*
+ * Timeout, in milliseconds, used when arming a timer before retrying a
+ * failed command.
+ */
+#define BFA_FCS_RETRY_TIMEOUT 2000
+
+/*
+ * Check for Port/Vport Mode/Role
+ */
+#define BFA_FCS_VPORT_IS_INITIATOR_MODE(port) \
+ (port->port_cfg.roles & BFA_PORT_ROLE_FCP_IM)
+
+#define BFA_FCS_VPORT_IS_TARGET_MODE(port) \
+ (port->port_cfg.roles & BFA_PORT_ROLE_FCP_TM)
+
+#define BFA_FCS_VPORT_IS_IPFC_MODE(port) \
+ (port->port_cfg.roles & BFA_PORT_ROLE_FCP_IPFC)
+
+/*
+ * Is this a Well Known Address
+ */
+#define BFA_FCS_PID_IS_WKA(pid) ((bfa_os_ntoh3b(pid) > 0xFFF000) ? 1 : 0)
+
+/*
+ * Pointer to elements within Port
+ */
+#define BFA_FCS_GET_HAL_FROM_PORT(port) (port->fcs->bfa)
+#define BFA_FCS_GET_NS_FROM_PORT(port) (&port->port_topo.pfab.ns)
+#define BFA_FCS_GET_SCN_FROM_PORT(port) (&port->port_topo.pfab.scn)
+#define BFA_FCS_GET_MS_FROM_PORT(port) (&port->port_topo.pfab.ms)
+#define BFA_FCS_GET_FDMI_FROM_PORT(port) (&port->port_topo.pfab.ms.fdmi)
+
+/*
+ * handler for unsolicited frames
+ */
+void bfa_fcs_port_uf_recv(struct bfa_fcs_port_s *lport, struct fchs_s *fchs,
+ u16 len);
+
+/*
+ * Following routines will be called by Fabric to indicate port
+ * online/offline to vport.
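(Aside: BFA_FCS_PID_IS_WKA() above classifies a 24-bit fibre channel address by comparing it, after byte-order conversion, against 0xFFF000; domain-controller and well-known service addresses (0xFFFCxx through 0xFFFFFE) land above that threshold. A host-order sketch, assuming bfa_os_ntoh3b() has already done the conversion:)

#include <stdio.h>
#include <stdint.h>

/* fc_id assumed already in host order, as bfa_os_ntoh3b() arranges above */
static int
toy_pid_is_wka(uint32_t fc_id)
{
	return (fc_id > 0xFFF000) ? 1 : 0;
}

int
main(void)
{
	printf("%d\n", toy_pid_is_wka(0xFFFFFC));	/* directory server: 1 */
	printf("%d\n", toy_pid_is_wka(0x010200));	/* ordinary N_Port: 0 */
	return 0;
}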
+ */ +void bfa_fcs_lport_init(struct bfa_fcs_port_s *lport, struct bfa_fcs_s *fcs, + u16 vf_id, struct bfa_port_cfg_s *port_cfg, + struct bfa_fcs_vport_s *vport); +void bfa_fcs_port_online(struct bfa_fcs_port_s *port); +void bfa_fcs_port_offline(struct bfa_fcs_port_s *port); +void bfa_fcs_port_delete(struct bfa_fcs_port_s *port); +bfa_boolean_t bfa_fcs_port_is_online(struct bfa_fcs_port_s *port); + +/* + * Lookup rport based on PID + */ +struct bfa_fcs_rport_s *bfa_fcs_port_get_rport_by_pid( + struct bfa_fcs_port_s *port, u32 pid); + +/* + * Lookup rport based on PWWN + */ +struct bfa_fcs_rport_s *bfa_fcs_port_get_rport_by_pwwn( + struct bfa_fcs_port_s *port, wwn_t pwwn); +struct bfa_fcs_rport_s *bfa_fcs_port_get_rport_by_nwwn( + struct bfa_fcs_port_s *port, wwn_t nwwn); +void bfa_fcs_port_add_rport(struct bfa_fcs_port_s *port, + struct bfa_fcs_rport_s *rport); +void bfa_fcs_port_del_rport(struct bfa_fcs_port_s *port, + struct bfa_fcs_rport_s *rport); + +void bfa_fcs_port_modinit(struct bfa_fcs_s *fcs); +void bfa_fcs_port_modexit(struct bfa_fcs_s *fcs); +void bfa_fcs_port_lip(struct bfa_fcs_port_s *port); + +#endif /* __FCS_LPORT_H__ */ diff --git a/drivers/scsi/bfa/fcs_ms.h b/drivers/scsi/bfa/fcs_ms.h new file mode 100644 index 000000000000..b6a8c12876f4 --- /dev/null +++ b/drivers/scsi/bfa/fcs_ms.h @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * fcs_ms.h FCS ms interfaces + */ +#ifndef __FCS_MS_H__ +#define __FCS_MS_H__ + +/* MS FCS routines */ +void bfa_fcs_port_ms_init(struct bfa_fcs_port_s *port); +void bfa_fcs_port_ms_offline(struct bfa_fcs_port_s *port); +void bfa_fcs_port_ms_online(struct bfa_fcs_port_s *port); +void bfa_fcs_port_ms_fabric_rscn(struct bfa_fcs_port_s *port); + +/* FDMI FCS routines */ +void bfa_fcs_port_fdmi_init(struct bfa_fcs_port_ms_s *ms); +void bfa_fcs_port_fdmi_offline(struct bfa_fcs_port_ms_s *ms); +void bfa_fcs_port_fdmi_online(struct bfa_fcs_port_ms_s *ms); + +#endif diff --git a/drivers/scsi/bfa/fcs_port.h b/drivers/scsi/bfa/fcs_port.h new file mode 100644 index 000000000000..abb65191dd27 --- /dev/null +++ b/drivers/scsi/bfa/fcs_port.h @@ -0,0 +1,32 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +/** + * fcs_pport.h FCS physical port interfaces + */ + + +#ifndef __FCS_PPORT_H__ +#define __FCS_PPORT_H__ + +/* + * fcs friend functions: only between fcs modules + */ +void bfa_fcs_pport_modinit(struct bfa_fcs_s *fcs); +void bfa_fcs_pport_modexit(struct bfa_fcs_s *fcs); + +#endif /* __FCS_PPORT_H__ */ diff --git a/drivers/scsi/bfa/fcs_rport.h b/drivers/scsi/bfa/fcs_rport.h new file mode 100644 index 000000000000..f601e9d74236 --- /dev/null +++ b/drivers/scsi/bfa/fcs_rport.h @@ -0,0 +1,61 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * fcs_rport.h FCS rport interfaces and defines + */ + +#ifndef __FCS_RPORT_H__ +#define __FCS_RPORT_H__ + +#include + +void bfa_fcs_rport_modinit(struct bfa_fcs_s *fcs); +void bfa_fcs_rport_modexit(struct bfa_fcs_s *fcs); + +void bfa_fcs_rport_uf_recv(struct bfa_fcs_rport_s *rport, struct fchs_s *fchs, + u16 len); +void bfa_fcs_rport_scn(struct bfa_fcs_rport_s *rport); + +struct bfa_fcs_rport_s *bfa_fcs_rport_create(struct bfa_fcs_port_s *port, + u32 pid); +void bfa_fcs_rport_delete(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rport_online(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rport_offline(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rport_start(struct bfa_fcs_port_s *port, struct fchs_s *rx_fchs, + struct fc_logi_s *plogi_rsp); +void bfa_fcs_rport_plogi_create(struct bfa_fcs_port_s *port, + struct fchs_s *rx_fchs, + struct fc_logi_s *plogi); +void bfa_fcs_rport_plogi(struct bfa_fcs_rport_s *rport, struct fchs_s *fchs, + struct fc_logi_s *plogi); +void bfa_fcs_rport_logo_imp(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rport_itnim_ack(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rport_itntm_ack(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rport_tin_ack(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rport_fcptm_offline_done(struct bfa_fcs_rport_s *rport); +int bfa_fcs_rport_get_state(struct bfa_fcs_rport_s *rport); +struct bfa_fcs_rport_s *bfa_fcs_rport_create_by_wwn(struct bfa_fcs_port_s *port, + wwn_t wwn); + + +/* Rport Features */ +void bfa_fcs_rpf_init(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rpf_rport_online(struct bfa_fcs_rport_s *rport); +void bfa_fcs_rpf_rport_offline(struct bfa_fcs_rport_s *rport); + +#endif /* __FCS_RPORT_H__ */ diff --git a/drivers/scsi/bfa/fcs_trcmod.h b/drivers/scsi/bfa/fcs_trcmod.h new file mode 100644 index 000000000000..41b5ae8d7644 --- /dev/null +++ b/drivers/scsi/bfa/fcs_trcmod.h @@ -0,0 +1,56 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fcs_trcmod.h BFA FCS trace modules
+ */
+
+#ifndef __FCS_TRCMOD_H__
+#define __FCS_TRCMOD_H__
+
+#include
+
+/*
+ * !!! Only append to the enums defined here to avoid any versioning
+ * !!! needed between trace utility and driver version
+ */
+enum {
+ BFA_TRC_FCS_FABRIC = 1,
+ BFA_TRC_FCS_VFAPI = 2,
+ BFA_TRC_FCS_PORT = 3,
+ BFA_TRC_FCS_VPORT = 4,
+ BFA_TRC_FCS_VP_API = 5,
+ BFA_TRC_FCS_VPS = 6,
+ BFA_TRC_FCS_RPORT = 7,
+ BFA_TRC_FCS_FCPIM = 8,
+ BFA_TRC_FCS_FCPTM = 9,
+ BFA_TRC_FCS_NS = 10,
+ BFA_TRC_FCS_SCN = 11,
+ BFA_TRC_FCS_LOOP = 12,
+ BFA_TRC_FCS_UF = 13,
+ BFA_TRC_FCS_PPORT = 14,
+ BFA_TRC_FCS_FCPIP = 15,
+ BFA_TRC_FCS_PORT_API = 16,
+ BFA_TRC_FCS_RPORT_API = 17,
+ BFA_TRC_FCS_AUTH = 18,
+ BFA_TRC_FCS_N2N = 19,
+ BFA_TRC_FCS_MS = 20,
+ BFA_TRC_FCS_FDMI = 21,
+ BFA_TRC_FCS_RPORT_FTRS = 22,
+};
+
+#endif /* __FCS_TRCMOD_H__ */
diff --git a/drivers/scsi/bfa/fcs_uf.h b/drivers/scsi/bfa/fcs_uf.h
new file mode 100644
index 000000000000..96f1bdcb31ed
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_uf.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fcs_uf.h FCS unsolicited frame receive
+ */
+
+
+#ifndef __FCS_UF_H__
+#define __FCS_UF_H__
+
+/*
+ * fcs friend functions: only between fcs modules
+ */
+void bfa_fcs_uf_modinit(struct bfa_fcs_s *fcs);
+void bfa_fcs_uf_modexit(struct bfa_fcs_s *fcs);
+
+#endif /* __FCS_UF_H__ */
diff --git a/drivers/scsi/bfa/fcs_vport.h b/drivers/scsi/bfa/fcs_vport.h
new file mode 100644
index 000000000000..9e80b6a97b7f
--- /dev/null
+++ b/drivers/scsi/bfa/fcs_vport.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __FCS_VPORT_H__
+#define __FCS_VPORT_H__
+
+#include
+#include
+#include
+
+/*
+ * Module init/cleanup routines.
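(Aside: the "only append" warning on the trace-module enum in fcs_trcmod.h above matters because the numeric ids end up inside binary trace buffers that a separately built utility decodes later; renumbering an entry would silently mislabel existing traces. An illustration, with editorial comments:)

enum {
	TOY_TRC_FABRIC = 1,	/* frozen: captured traces already use 1 */
	/* ... existing entries keep their values forever ... */
	TOY_TRC_RPORT_FTRS = 22,
	/* new modules are appended here as 23, 24, ...; inserting or
	 * reordering above would make old trace files decode wrongly */
};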
+ */
+
+void bfa_fcs_vport_modinit(struct bfa_fcs_s *fcs);
+void bfa_fcs_vport_modexit(struct bfa_fcs_s *fcs);
+
+void bfa_fcs_vport_cleanup(struct bfa_fcs_vport_s *vport);
+void bfa_fcs_vport_online(struct bfa_fcs_vport_s *vport);
+void bfa_fcs_vport_offline(struct bfa_fcs_vport_s *vport);
+void bfa_fcs_vport_delete_comp(struct bfa_fcs_vport_s *vport);
+u32 bfa_fcs_vport_get_max(struct bfa_fcs_s *fcs);
+
+#endif /* __FCS_VPORT_H__ */
+
diff --git a/drivers/scsi/bfa/fdmi.c b/drivers/scsi/bfa/fdmi.c
new file mode 100644
index 000000000000..b845eb272c78
--- /dev/null
+++ b/drivers/scsi/bfa/fdmi.c
@@ -0,0 +1,1223 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * fdmi.c Fabric Device Management Interface (FDMI) for the BFA FCS port
+ */
+
+
+#include
+#include
+#include "fcs_lport.h"
+#include "fcs_rport.h"
+#include "lport_priv.h"
+#include "fcs_trcmod.h"
+#include "fcs_fcxp.h"
+#include
+
+BFA_TRC_FILE(FCS, FDMI);
+
+#define BFA_FCS_FDMI_CMD_MAX_RETRIES 2
+
+/*
+ * forward declarations
+ */
+static void bfa_fcs_port_fdmi_send_rhba(void *fdmi_cbarg,
+ struct bfa_fcxp_s *fcxp_alloced);
+static void bfa_fcs_port_fdmi_send_rprt(void *fdmi_cbarg,
+ struct bfa_fcxp_s *fcxp_alloced);
+static void bfa_fcs_port_fdmi_send_rpa(void *fdmi_cbarg,
+ struct bfa_fcxp_s *fcxp_alloced);
+static void bfa_fcs_port_fdmi_rhba_response(void *fcsarg,
+ struct bfa_fcxp_s *fcxp,
+ void *cbarg,
+ bfa_status_t req_status,
+ u32 rsp_len,
+ u32 resid_len,
+ struct fchs_s *rsp_fchs);
+static void bfa_fcs_port_fdmi_rprt_response(void *fcsarg,
+ struct bfa_fcxp_s *fcxp,
+ void *cbarg,
+ bfa_status_t req_status,
+ u32 rsp_len,
+ u32 resid_len,
+ struct fchs_s *rsp_fchs);
+static void bfa_fcs_port_fdmi_rpa_response(void *fcsarg,
+ struct bfa_fcxp_s *fcxp,
+ void *cbarg,
+ bfa_status_t req_status,
+ u32 rsp_len,
+ u32 resid_len,
+ struct fchs_s *rsp_fchs);
+static void bfa_fcs_port_fdmi_timeout(void *arg);
+static u16 bfa_fcs_port_fdmi_build_rhba_pyld(
+ struct bfa_fcs_port_fdmi_s *fdmi, u8 *pyld);
+static u16 bfa_fcs_port_fdmi_build_rprt_pyld(
+ struct bfa_fcs_port_fdmi_s *fdmi, u8 *pyld);
+static u16 bfa_fcs_port_fdmi_build_rpa_pyld(
+ struct bfa_fcs_port_fdmi_s *fdmi, u8 *pyld);
+static u16 bfa_fcs_port_fdmi_build_portattr_block(
+ struct bfa_fcs_port_fdmi_s *fdmi, u8 *pyld);
+void bfa_fcs_fdmi_get_hbaattr(struct bfa_fcs_port_fdmi_s *fdmi,
+ struct bfa_fcs_fdmi_hba_attr_s *hba_attr);
+void bfa_fcs_fdmi_get_portattr(struct bfa_fcs_port_fdmi_s *fdmi,
+ struct bfa_fcs_fdmi_port_attr_s *port_attr);
+/**
+ * fcs_fdmi_sm FCS FDMI state machine
+ */
+
+/**
+ * FDMI State Machine events
+ */
+enum port_fdmi_event {
+ FDMISM_EVENT_PORT_ONLINE = 1,
+ FDMISM_EVENT_PORT_OFFLINE = 2,
+ FDMISM_EVENT_RSP_OK = 4,
+ FDMISM_EVENT_RSP_ERROR = 5,
+ FDMISM_EVENT_TIMEOUT = 6,
+ FDMISM_EVENT_RHBA_SENT = 7,
+ FDMISM_EVENT_RPRT_SENT = 8,
+ FDMISM_EVENT_RPA_SENT = 9,
+};
+
+static void bfa_fcs_port_fdmi_sm_offline(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_sending_rhba(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_rhba(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_rhba_retry(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_sending_rprt(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_rprt(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_rprt_retry(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_sending_rpa(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_rpa(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_rpa_retry(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+static void bfa_fcs_port_fdmi_sm_online(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event);
+/**
+ * Start in offline state - awaiting MS to send start.
+ */
+static void
+bfa_fcs_port_fdmi_sm_offline(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event)
+{
+ struct bfa_fcs_port_s *port = fdmi->ms->port;
+
+ bfa_trc(port->fcs, port->port_cfg.pwwn);
+ bfa_trc(port->fcs, event);
+
+ fdmi->retry_cnt = 0;
+
+ switch (event) {
+ case FDMISM_EVENT_PORT_ONLINE:
+ if (port->vport) {
+ /*
+ * For Vports, register a new port.
+ */
+ bfa_sm_set_state(fdmi,
+ bfa_fcs_port_fdmi_sm_sending_rprt);
+ bfa_fcs_port_fdmi_send_rprt(fdmi, NULL);
+ } else {
+ /*
+ * For a base port, we should first register the HBA
+ * attribute. The HBA attribute also contains the base
+ * port registration.
+ */
+ bfa_sm_set_state(fdmi,
+ bfa_fcs_port_fdmi_sm_sending_rhba);
+ bfa_fcs_port_fdmi_send_rhba(fdmi, NULL);
+ }
+ break;
+
+ case FDMISM_EVENT_PORT_OFFLINE:
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_port_fdmi_sm_sending_rhba(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event)
+{
+ struct bfa_fcs_port_s *port = fdmi->ms->port;
+
+ bfa_trc(port->fcs, port->port_cfg.pwwn);
+ bfa_trc(port->fcs, event);
+
+ switch (event) {
+ case FDMISM_EVENT_RHBA_SENT:
+ bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_rhba);
+ break;
+
+ case FDMISM_EVENT_PORT_OFFLINE:
+ bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline);
+ bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(port),
+ &fdmi->fcxp_wqe);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_port_fdmi_sm_rhba(struct bfa_fcs_port_fdmi_s *fdmi,
+ enum port_fdmi_event event)
+{
+ struct bfa_fcs_port_s *port = fdmi->ms->port;
+
+ bfa_trc(port->fcs, port->port_cfg.pwwn);
+ bfa_trc(port->fcs, event);
+
+ switch (event) {
+ case FDMISM_EVENT_RSP_ERROR:
+ /*
+ * if max retries have not been reached, start timer for a
+ * delayed retry
+ */
+ if (fdmi->retry_cnt++ < BFA_FCS_FDMI_CMD_MAX_RETRIES) {
+ bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_rhba_retry);
+ bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(port),
+ &fdmi->timer, bfa_fcs_port_fdmi_timeout,
+ fdmi, BFA_FCS_RETRY_TIMEOUT);
+ } else {
+ /*
+ * set state to offline
+ */
+ bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline);
+ }
+ break;
+
+ case FDMISM_EVENT_RSP_OK:
+ /*
+ * Initiate Register Port Attributes
+ */
+ bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_sending_rpa);
+ fdmi->retry_cnt = 0;
+ bfa_fcs_port_fdmi_send_rpa(fdmi, NULL);
+ break;
+
+ case FDMISM_EVENT_PORT_OFFLINE:
+ bfa_fcxp_discard(fdmi->fcxp);
+
bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_fdmi_sm_rhba_retry(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. Re-send + */ + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_sending_rhba); + bfa_fcs_port_fdmi_send_rhba(fdmi, NULL); + break; + + case FDMISM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + bfa_timer_stop(&fdmi->timer); + break; + + default: + bfa_assert(0); + } +} + +/* +* RPRT : Register Port + */ +static void +bfa_fcs_port_fdmi_sm_sending_rprt(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_RPRT_SENT: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_rprt); + break; + + case FDMISM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(port), + &fdmi->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_fdmi_sm_rprt(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_RSP_ERROR: + /* + * if max retries have not been reached, start timer for a + * delayed retry + */ + if (fdmi->retry_cnt++ < BFA_FCS_FDMI_CMD_MAX_RETRIES) { + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_rprt_retry); + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(port), + &fdmi->timer, bfa_fcs_port_fdmi_timeout, + fdmi, BFA_FCS_RETRY_TIMEOUT); + + } else { + /* + * set state to offline + */ + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + fdmi->retry_cnt = 0; + } + break; + + case FDMISM_EVENT_RSP_OK: + fdmi->retry_cnt = 0; + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_online); + break; + + case FDMISM_EVENT_PORT_OFFLINE: + bfa_fcxp_discard(fdmi->fcxp); + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_fdmi_sm_rprt_retry(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. 
Re-send + */ + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_sending_rprt); + bfa_fcs_port_fdmi_send_rprt(fdmi, NULL); + break; + + case FDMISM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + bfa_timer_stop(&fdmi->timer); + break; + + default: + bfa_assert(0); + } +} + +/* + * Register Port Attributes + */ +static void +bfa_fcs_port_fdmi_sm_sending_rpa(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_RPA_SENT: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_rpa); + break; + + case FDMISM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(port), + &fdmi->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_fdmi_sm_rpa(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_RSP_ERROR: + /* + * if max retries have not been reached, start timer for a + * delayed retry + */ + if (fdmi->retry_cnt++ < BFA_FCS_FDMI_CMD_MAX_RETRIES) { + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_rpa_retry); + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(port), + &fdmi->timer, bfa_fcs_port_fdmi_timeout, + fdmi, BFA_FCS_RETRY_TIMEOUT); + } else { + /* + * set state to offline + */ + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + fdmi->retry_cnt = 0; + } + break; + + case FDMISM_EVENT_RSP_OK: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_online); + fdmi->retry_cnt = 0; + break; + + case FDMISM_EVENT_PORT_OFFLINE: + bfa_fcxp_discard(fdmi->fcxp); + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_fdmi_sm_rpa_retry(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. Re-send + */ + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_sending_rpa); + bfa_fcs_port_fdmi_send_rpa(fdmi, NULL); + break; + + case FDMISM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + bfa_timer_stop(&fdmi->timer); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_fdmi_sm_online(struct bfa_fcs_port_fdmi_s *fdmi, + enum port_fdmi_event event) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + bfa_trc(port->fcs, event); + + switch (event) { + case FDMISM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); + break; + + default: + bfa_assert(0); + } +} + + +/** +* RHBA : Register HBA Attributes. + */ +static void +bfa_fcs_port_fdmi_send_rhba(void *fdmi_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_fdmi_s *fdmi = fdmi_cbarg; + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct fchs_s fchs; + int len, attr_len; + struct bfa_fcxp_s *fcxp; + u8 *pyld; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs);
+ if (!fcxp) {
+ bfa_fcxp_alloc_wait(port->fcs->bfa, &fdmi->fcxp_wqe,
+ bfa_fcs_port_fdmi_send_rhba, fdmi);
+ return;
+ }
+ fdmi->fcxp = fcxp;
+
+ pyld = bfa_fcxp_get_reqbuf(fcxp);
+ bfa_os_memset(pyld, 0, FC_MAX_PDUSZ);
+
+ len = fc_fdmi_reqhdr_build(&fchs, pyld, bfa_fcs_port_get_fcid(port),
+ FDMI_RHBA);
+
+ attr_len = bfa_fcs_port_fdmi_build_rhba_pyld(fdmi,
+ (u8 *) ((struct ct_hdr_s *) pyld + 1));
+
+ bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE,
+ FC_CLASS_3, (len + attr_len), &fchs,
+ bfa_fcs_port_fdmi_rhba_response, (void *)fdmi,
+ FC_MAX_PDUSZ, FC_RA_TOV);
+
+ bfa_sm_send_event(fdmi, FDMISM_EVENT_RHBA_SENT);
+}
+
+static u16
+bfa_fcs_port_fdmi_build_rhba_pyld(struct bfa_fcs_port_fdmi_s *fdmi,
+ u8 *pyld)
+{
+ struct bfa_fcs_port_s *port = fdmi->ms->port;
+ struct bfa_fcs_fdmi_hba_attr_s hba_attr; /* @todo */
+ struct bfa_fcs_fdmi_hba_attr_s *fcs_hba_attr = &hba_attr; /* @todo */
+ struct fdmi_rhba_s *rhba = (struct fdmi_rhba_s *) pyld;
+ struct fdmi_attr_s *attr;
+ u8 *curr_ptr;
+ u16 len, count;
+
+ /*
+ * get hba attributes
+ */
+ bfa_fcs_fdmi_get_hbaattr(fdmi, fcs_hba_attr);
+
+ rhba->hba_id = bfa_fcs_port_get_pwwn(port);
+ rhba->port_list.num_ports = bfa_os_htonl(1);
+ rhba->port_list.port_entry = bfa_fcs_port_get_pwwn(port);
+
+ len = sizeof(rhba->hba_id) + sizeof(rhba->port_list);
+
+ count = 0;
+ len += sizeof(rhba->hba_attr_blk.attr_count);
+
+ /*
+ * fill out the individual entries of the HBA attrib Block
+ */
+ curr_ptr = (u8 *) &rhba->hba_attr_blk.hba_attr;
+
+ /*
+ * Node Name
+ */
+ attr = (struct fdmi_attr_s *) curr_ptr;
+ attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_NODENAME);
+ attr->len = sizeof(wwn_t);
+ memcpy(attr->value, &bfa_fcs_port_get_nwwn(port), attr->len);
+ curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len;
+ len += attr->len;
+ count++;
+ attr->len =
+ bfa_os_htons(attr->len + sizeof(attr->type) +
+ sizeof(attr->len));
+
+ /*
+ * Manufacturer
+ */
+ attr = (struct fdmi_attr_s *) curr_ptr;
+ attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_MANUFACTURER);
+ attr->len = (u16) strlen(fcs_hba_attr->manufacturer);
+ memcpy(attr->value, fcs_hba_attr->manufacturer, attr->len);
+ /* variable fields need to be 4 byte aligned */
+ attr->len = fc_roundup(attr->len, sizeof(u32));
+ curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len;
+ len += attr->len;
+ count++;
+ attr->len =
+ bfa_os_htons(attr->len + sizeof(attr->type) +
+ sizeof(attr->len));
+
+ /*
+ * Serial Number
+ */
+ attr = (struct fdmi_attr_s *) curr_ptr;
+ attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_SERIALNUM);
+ attr->len = (u16) strlen(fcs_hba_attr->serial_num);
+ memcpy(attr->value, fcs_hba_attr->serial_num, attr->len);
+ /* variable fields need to be 4 byte aligned */
+ attr->len = fc_roundup(attr->len, sizeof(u32));
+ curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len;
+ len += attr->len;
+ count++;
+ attr->len =
+ bfa_os_htons(attr->len + sizeof(attr->type) +
+ sizeof(attr->len));
+
+ /*
+ * Model
+ */
+ attr = (struct fdmi_attr_s *) curr_ptr;
+ attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_MODEL);
+ attr->len = (u16) strlen(fcs_hba_attr->model);
+ memcpy(attr->value, fcs_hba_attr->model, attr->len);
+ /* variable fields need to be 4 byte aligned */
+ attr->len = fc_roundup(attr->len, sizeof(u32));
+ curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len;
+ len += attr->len;
+ count++;
+ attr->len =
+ bfa_os_htons(attr->len + sizeof(attr->type) +
+ sizeof(attr->len));
+
+ /*
+ * Model Desc
+ */
+
attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_MODEL_DESC); + attr->len = (u16) strlen(fcs_hba_attr->model_desc); + memcpy(attr->value, fcs_hba_attr->model_desc, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + count++; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * H/W Version + */ + if (fcs_hba_attr->hw_version[0] != '\0') { + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_HW_VERSION); + attr->len = (u16) strlen(fcs_hba_attr->hw_version); + memcpy(attr->value, fcs_hba_attr->hw_version, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + count++; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + } + + /* + * Driver Version + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_DRIVER_VERSION); + attr->len = (u16) strlen(fcs_hba_attr->driver_version); + memcpy(attr->value, fcs_hba_attr->driver_version, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len;; + count++; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * Option Rom Version + */ + if (fcs_hba_attr->option_rom_ver[0] != '\0') { + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_ROM_VERSION); + attr->len = (u16) strlen(fcs_hba_attr->option_rom_ver); + memcpy(attr->value, fcs_hba_attr->option_rom_ver, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + count++; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + } + + /* + * f/w Version = driver version + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_FW_VERSION); + attr->len = (u16) strlen(fcs_hba_attr->driver_version); + memcpy(attr->value, fcs_hba_attr->driver_version, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + count++; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * OS Name + */ + if (fcs_hba_attr->os_name[0] != '\0') { + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_OS_NAME); + attr->len = (u16) strlen(fcs_hba_attr->os_name); + memcpy(attr->value, fcs_hba_attr->os_name, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + count++; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + } + + /* + * MAX_CT_PAYLOAD + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_MAX_CT); + attr->len = sizeof(fcs_hba_attr->max_ct_pyld); + memcpy(attr->value, &fcs_hba_attr->max_ct_pyld, attr->len); + len += attr->len; + 
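/*
 * Each attribute above is a type/length/value triple. Note the two roles
 * attr->len plays: it first holds just the value length (which drives the
 * memcpy(), the cursor advance and the running payload length), and only
 * afterwards is rewritten, in network byte order, as the on-wire length
 * that also covers the 4-byte type+length header. For example, a 10-byte
 * model string is padded to 12 bytes by fc_roundup(), and its wire length
 * field becomes 12 + 4 = 16.
 */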
+ /* + * MAX_CT_PAYLOAD + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_HBA_ATTRIB_MAX_CT); + attr->len = sizeof(fcs_hba_attr->max_ct_pyld); + memcpy(attr->value, &fcs_hba_attr->max_ct_pyld, attr->len); + len += attr->len; + count++; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * Update size of payload + */ + len += ((sizeof(attr->type) + sizeof(attr->len)) * count); + + rhba->hba_attr_blk.attr_count = bfa_os_htonl(count); + return len; +} + +static void +bfa_fcs_port_fdmi_rhba_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_fdmi_s *fdmi = (struct bfa_fcs_port_fdmi_s *)cbarg; + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct ct_hdr_s *cthdr = NULL; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_OK); + return; + } + + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_ERROR); +} + +/** +* RPRT : Register Port + */ +static void +bfa_fcs_port_fdmi_send_rprt(void *fdmi_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_fdmi_s *fdmi = fdmi_cbarg; + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct fchs_s fchs; + u16 len, attr_len; + struct bfa_fcxp_s *fcxp; + u8 *pyld; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + fcxp = fcxp_alloced ? fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &fdmi->fcxp_wqe, + bfa_fcs_port_fdmi_send_rprt, fdmi); + return; + } + fdmi->fcxp = fcxp; + + pyld = bfa_fcxp_get_reqbuf(fcxp); + bfa_os_memset(pyld, 0, FC_MAX_PDUSZ); + + len = fc_fdmi_reqhdr_build(&fchs, pyld, bfa_fcs_port_get_fcid(port), + FDMI_RPRT); + + attr_len = bfa_fcs_port_fdmi_build_rprt_pyld(fdmi, + (u8 *) ((struct ct_hdr_s *) pyld + 1)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len + attr_len, &fchs, + bfa_fcs_port_fdmi_rprt_response, (void *)fdmi, + FC_MAX_PDUSZ, FC_RA_TOV); + + bfa_sm_send_event(fdmi, FDMISM_EVENT_RPRT_SENT); +} + +/** + * This routine builds the Port Attribute Block that is used in the RPA and RPRT commands. + */ +static u16 +bfa_fcs_port_fdmi_build_portattr_block(struct bfa_fcs_port_fdmi_s *fdmi, + u8 *pyld) +{ + struct bfa_fcs_fdmi_port_attr_s fcs_port_attr; + struct fdmi_port_attr_s *port_attrib = (struct fdmi_port_attr_s *) pyld; + struct fdmi_attr_s *attr; + u8 *curr_ptr; + u16 len; + u8 count = 0; + + /* + * get port attributes + */ + bfa_fcs_fdmi_get_portattr(fdmi, &fcs_port_attr); + + len = sizeof(port_attrib->attr_count); + + /* + * fill out the individual entries + */ + curr_ptr = (u8 *) &port_attrib->port_attr; + + /* + * FC4 Types + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_PORT_ATTRIB_FC4_TYPES); + attr->len = sizeof(fcs_port_attr.supp_fc4_types); + memcpy(attr->value, fcs_port_attr.supp_fc4_types, attr->len); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + ++count; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * Supported Speed + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_PORT_ATTRIB_SUPP_SPEED); + attr->len = sizeof(fcs_port_attr.supp_speed); + memcpy(attr->value, &fcs_port_attr.supp_speed, attr->len); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + ++count; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * current Port Speed + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_PORT_ATTRIB_PORT_SPEED); + attr->len = sizeof(fcs_port_attr.curr_speed); + memcpy(attr->value, &fcs_port_attr.curr_speed, attr->len); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + ++count; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * max frame size + */ + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_PORT_ATTRIB_FRAME_SIZE); + attr->len = sizeof(fcs_port_attr.max_frm_size); + memcpy(attr->value, &fcs_port_attr.max_frm_size, attr->len); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + ++count; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + /* + * OS Device Name + */ + if (fcs_port_attr.os_device_name[0] != '\0') { + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_PORT_ATTRIB_DEV_NAME); + attr->len = (u16) strlen(fcs_port_attr.os_device_name); + memcpy(attr->value, fcs_port_attr.os_device_name, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + ++count; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + } + /* + * Host Name + */ + if (fcs_port_attr.host_name[0] != '\0') { + attr = (struct fdmi_attr_s *) curr_ptr; + attr->type = bfa_os_htons(FDMI_PORT_ATTRIB_HOST_NAME); + attr->len = (u16) strlen(fcs_port_attr.host_name); + memcpy(attr->value, fcs_port_attr.host_name, attr->len); + /* variable fields need to be 4 byte aligned */ + attr->len = fc_roundup(attr->len, sizeof(u32)); + curr_ptr += sizeof(attr->type) + sizeof(attr->len) + attr->len; + len += attr->len; + ++count; + attr->len = + bfa_os_htons(attr->len + sizeof(attr->type) + + sizeof(attr->len)); + + } + + /* + * Update size of payload + */ + port_attrib->attr_count = bfa_os_htonl(count); + len += ((sizeof(attr->type) + sizeof(attr->len)) * count); + return len; +} + +static u16
+bfa_fcs_port_fdmi_build_rprt_pyld(struct bfa_fcs_port_fdmi_s *fdmi, + u8 *pyld) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct fdmi_rprt_s *rprt = (struct fdmi_rprt_s *) pyld; + u16 len; + + rprt->hba_id = bfa_fcs_port_get_pwwn(bfa_fcs_get_base_port(port->fcs)); + rprt->port_name = bfa_fcs_port_get_pwwn(port); + + len = bfa_fcs_port_fdmi_build_portattr_block(fdmi, + (u8 *) &rprt->port_attr_blk); + + len += sizeof(rprt->hba_id) + sizeof(rprt->port_name); + + return len; +} + +static void +bfa_fcs_port_fdmi_rprt_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_fdmi_s *fdmi = (struct bfa_fcs_port_fdmi_s *)cbarg; + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct ct_hdr_s *cthdr = NULL; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_OK); + return; + } + + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_ERROR); +} + +/** +* RPA : Register Port Attributes. + */ +static void +bfa_fcs_port_fdmi_send_rpa(void *fdmi_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_fdmi_s *fdmi = fdmi_cbarg; + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct fchs_s fchs; + u16 len, attr_len; + struct bfa_fcxp_s *fcxp; + u8 *pyld; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &fdmi->fcxp_wqe, + bfa_fcs_port_fdmi_send_rpa, fdmi); + return; + } + fdmi->fcxp = fcxp; + + pyld = bfa_fcxp_get_reqbuf(fcxp); + bfa_os_memset(pyld, 0, FC_MAX_PDUSZ); + + len = fc_fdmi_reqhdr_build(&fchs, pyld, bfa_fcs_port_get_fcid(port), + FDMI_RPA); + + attr_len = bfa_fcs_port_fdmi_build_rpa_pyld(fdmi, + (u8 *) ((struct ct_hdr_s *) pyld + 1)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len + attr_len, &fchs, + bfa_fcs_port_fdmi_rpa_response, (void *)fdmi, + FC_MAX_PDUSZ, FC_RA_TOV); + + bfa_sm_send_event(fdmi, FDMISM_EVENT_RPA_SENT); +} + +static u16 +bfa_fcs_port_fdmi_build_rpa_pyld(struct bfa_fcs_port_fdmi_s *fdmi, + u8 *pyld) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct fdmi_rpa_s *rpa = (struct fdmi_rpa_s *) pyld; + u16 len; + + rpa->port_name = bfa_fcs_port_get_pwwn(port); + + len = bfa_fcs_port_fdmi_build_portattr_block(fdmi, + (u8 *) &rpa->port_attr_blk); + + len += sizeof(rpa->port_name); + + return len; +} + +static void +bfa_fcs_port_fdmi_rpa_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_fdmi_s *fdmi = (struct bfa_fcs_port_fdmi_s *)cbarg; + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct ct_hdr_s *cthdr = NULL; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_OK); + return; + } + + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + bfa_sm_send_event(fdmi, FDMISM_EVENT_RSP_ERROR); +} + +static void +bfa_fcs_port_fdmi_timeout(void *arg) +{ + struct bfa_fcs_port_fdmi_s *fdmi = (struct bfa_fcs_port_fdmi_s *)arg; + + bfa_sm_send_event(fdmi, FDMISM_EVENT_TIMEOUT); +} + +void +bfa_fcs_fdmi_get_hbaattr(struct bfa_fcs_port_fdmi_s *fdmi, + struct bfa_fcs_fdmi_hba_attr_s *hba_attr) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct bfa_fcs_driver_info_s *driver_info = &port->fcs->driver_info; + struct bfa_adapter_attr_s adapter_attr; + + bfa_os_memset(hba_attr, 0, sizeof(struct bfa_fcs_fdmi_hba_attr_s)); + bfa_os_memset(&adapter_attr, 0, sizeof(struct bfa_adapter_attr_s)); + + bfa_ioc_get_adapter_attr(&port->fcs->bfa->ioc, &adapter_attr); + + strncpy(hba_attr->manufacturer, adapter_attr.manufacturer, + sizeof(adapter_attr.manufacturer)); + + strncpy(hba_attr->serial_num, adapter_attr.serial_num, + sizeof(adapter_attr.serial_num)); + + strncpy(hba_attr->model, adapter_attr.model, sizeof(hba_attr->model)); + + strncpy(hba_attr->model_desc, adapter_attr.model_descr, + sizeof(hba_attr->model_desc)); + + strncpy(hba_attr->hw_version, adapter_attr.hw_ver, + sizeof(hba_attr->hw_version)); + + strncpy(hba_attr->driver_version, (char *)driver_info->version, + sizeof(hba_attr->driver_version)); + + strncpy(hba_attr->option_rom_ver, adapter_attr.optrom_ver, + sizeof(hba_attr->option_rom_ver)); + + strncpy(hba_attr->fw_version, adapter_attr.fw_ver, + sizeof(hba_attr->fw_version)); + + strncpy(hba_attr->os_name, driver_info->host_os_name, + sizeof(hba_attr->os_name)); + + /* + * If there is a patch level, append it 
to the os name along with a + * separator + */ + if (driver_info->host_os_patch[0] != '\0') { + strncat(hba_attr->os_name, BFA_FCS_PORT_SYMBNAME_SEPARATOR, + sizeof(BFA_FCS_PORT_SYMBNAME_SEPARATOR)); + strncat(hba_attr->os_name, driver_info->host_os_patch, + sizeof(driver_info->host_os_patch)); + } + + hba_attr->max_ct_pyld = bfa_os_htonl(FC_MAX_PDUSZ); + +} + +void +bfa_fcs_fdmi_get_portattr(struct bfa_fcs_port_fdmi_s *fdmi, + struct bfa_fcs_fdmi_port_attr_s *port_attr) +{ + struct bfa_fcs_port_s *port = fdmi->ms->port; + struct bfa_fcs_driver_info_s *driver_info = &port->fcs->driver_info; + struct bfa_pport_attr_s pport_attr; + + bfa_os_memset(port_attr, 0, sizeof(struct bfa_fcs_fdmi_port_attr_s)); + + /* + * get pport attributes from hal + */ + bfa_pport_get_attr(port->fcs->bfa, &pport_attr); + + /* + * get FC4 type Bitmask + */ + fc_get_fc4type_bitmask(FC_TYPE_FCP, port_attr->supp_fc4_types); + + /* + * Supported Speeds + */ + port_attr->supp_speed = bfa_os_htonl(BFA_FCS_FDMI_SUPORTED_SPEEDS); + + /* + * Current Speed + */ + port_attr->curr_speed = bfa_os_htonl(pport_attr.speed); + + /* + * Max PDU Size. + */ + port_attr->max_frm_size = bfa_os_htonl(FC_MAX_PDUSZ); + + /* + * OS device Name + */ + strncpy(port_attr->os_device_name, (char *)driver_info->os_device_name, + sizeof(port_attr->os_device_name)); + + /* + * Host name + */ + strncpy(port_attr->host_name, (char *)driver_info->host_machine_name, + sizeof(port_attr->host_name)); + +} + + +void +bfa_fcs_port_fdmi_init(struct bfa_fcs_port_ms_s *ms) +{ + struct bfa_fcs_port_fdmi_s *fdmi = &ms->fdmi; + + fdmi->ms = ms; + bfa_sm_set_state(fdmi, bfa_fcs_port_fdmi_sm_offline); +} + +void +bfa_fcs_port_fdmi_offline(struct bfa_fcs_port_ms_s *ms) +{ + struct bfa_fcs_port_fdmi_s *fdmi = &ms->fdmi; + + fdmi->ms = ms; + bfa_sm_send_event(fdmi, FDMISM_EVENT_PORT_OFFLINE); +} + +void +bfa_fcs_port_fdmi_online(struct bfa_fcs_port_ms_s *ms) +{ + struct bfa_fcs_port_fdmi_s *fdmi = &ms->fdmi; + + fdmi->ms = ms; + bfa_sm_send_event(fdmi, FDMISM_EVENT_PORT_ONLINE); +} diff --git a/drivers/scsi/bfa/include/aen/bfa_aen.h b/drivers/scsi/bfa/include/aen/bfa_aen.h new file mode 100644 index 000000000000..da8cac093d3d --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen.h @@ -0,0 +1,92 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ +#ifndef __BFA_AEN_H__ +#define __BFA_AEN_H__ + +#include "defs/bfa_defs_aen.h" + +#define BFA_AEN_MAX_ENTRY 512 + +extern s32 bfa_aen_max_cfg_entry; +struct bfa_aen_s { + void *bfad; + s32 max_entry; + s32 write_index; + s32 read_index; + u32 bfad_num; + u32 seq_num; + void (*aen_cb_notify)(void *bfad); + void (*gettimeofday)(struct bfa_timeval_s *tv); + struct bfa_trc_mod_s *trcmod; + struct bfa_aen_entry_s list[BFA_AEN_MAX_ENTRY]; /* Must be the last */ +}; + + +/** + * Public APIs + */ +static inline void +bfa_aen_set_max_cfg_entry(int max_entry) +{ + bfa_aen_max_cfg_entry = max_entry; +} + +static inline s32 +bfa_aen_get_max_cfg_entry(void) +{ + return bfa_aen_max_cfg_entry; +} + +static inline s32 +bfa_aen_get_meminfo(void) +{ + return (sizeof(struct bfa_aen_entry_s) * bfa_aen_get_max_cfg_entry()); +} + +static inline s32 +bfa_aen_get_wi(struct bfa_aen_s *aen) +{ + return aen->write_index; +} + +static inline s32 +bfa_aen_get_ri(struct bfa_aen_s *aen) +{ + return aen->read_index; +} + +static inline s32 +bfa_aen_fetch_count(struct bfa_aen_s *aen, s32 read_index) +{ + return ((aen->write_index + aen->max_entry) - read_index) + % aen->max_entry; +} + +s32 bfa_aen_init(struct bfa_aen_s *aen, struct bfa_trc_mod_s *trcmod, + void *bfad, u32 inst_id, void (*aen_cb_notify)(void *), + void (*gettimeofday)(struct bfa_timeval_s *)); + +s32 bfa_aen_post(struct bfa_aen_s *aen, enum bfa_aen_category aen_category, + int aen_type, union bfa_aen_data_u *aen_data); + +s32 bfa_aen_fetch(struct bfa_aen_s *aen, struct bfa_aen_entry_s *aen_entry, + s32 entry_space, s32 rii, s32 *ri_arr, + s32 ri_arr_cnt); + +s32 bfa_aen_get_inst(struct bfa_aen_s *aen); + +#endif /* __BFA_AEN_H__ */ diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_adapter.h b/drivers/scsi/bfa/include/aen/bfa_aen_adapter.h new file mode 100644 index 000000000000..260d3ea1cab3 --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_adapter.h @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* messages define for BFA_AEN_CAT_ADAPTER Module */ +#ifndef __bfa_aen_adapter_h__ +#define __bfa_aen_adapter_h__ + +#include +#include + +#define BFA_AEN_ADAPTER_ADD \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ADAPTER, BFA_ADAPTER_AEN_ADD) +#define BFA_AEN_ADAPTER_REMOVE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ADAPTER, BFA_ADAPTER_AEN_REMOVE) + +#endif + diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_audit.h b/drivers/scsi/bfa/include/aen/bfa_aen_audit.h new file mode 100644 index 000000000000..12cd7aab5d53 --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_audit.h @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* messages define for BFA_AEN_CAT_AUDIT Module */ +#ifndef __bfa_aen_audit_h__ +#define __bfa_aen_audit_h__ + +#include +#include + +#define BFA_AEN_AUDIT_AUTH_ENABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_AUDIT, BFA_AUDIT_AEN_AUTH_ENABLE) +#define BFA_AEN_AUDIT_AUTH_DISABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_AUDIT, BFA_AUDIT_AEN_AUTH_DISABLE) + +#endif + diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_ethport.h b/drivers/scsi/bfa/include/aen/bfa_aen_ethport.h new file mode 100644 index 000000000000..507d0b58d149 --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_ethport.h @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* messages define for BFA_AEN_CAT_ETHPORT Module */ +#ifndef __bfa_aen_ethport_h__ +#define __bfa_aen_ethport_h__ + +#include +#include + +#define BFA_AEN_ETHPORT_LINKUP \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ETHPORT, BFA_ETHPORT_AEN_LINKUP) +#define BFA_AEN_ETHPORT_LINKDOWN \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ETHPORT, BFA_ETHPORT_AEN_LINKDOWN) +#define BFA_AEN_ETHPORT_ENABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ETHPORT, BFA_ETHPORT_AEN_ENABLE) +#define BFA_AEN_ETHPORT_DISABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ETHPORT, BFA_ETHPORT_AEN_DISABLE) + +#endif + diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_ioc.h b/drivers/scsi/bfa/include/aen/bfa_aen_ioc.h new file mode 100644 index 000000000000..71378b446b69 --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_ioc.h @@ -0,0 +1,37 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +/* messages define for BFA_AEN_CAT_IOC Module */ +#ifndef __bfa_aen_ioc_h__ +#define __bfa_aen_ioc_h__ + +#include +#include + +#define BFA_AEN_IOC_HBGOOD \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_IOC, BFA_IOC_AEN_HBGOOD) +#define BFA_AEN_IOC_HBFAIL \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_IOC, BFA_IOC_AEN_HBFAIL) +#define BFA_AEN_IOC_ENABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_IOC, BFA_IOC_AEN_ENABLE) +#define BFA_AEN_IOC_DISABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_IOC, BFA_IOC_AEN_DISABLE) +#define BFA_AEN_IOC_FWMISMATCH \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_IOC, BFA_IOC_AEN_FWMISMATCH) + +#endif + diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_itnim.h b/drivers/scsi/bfa/include/aen/bfa_aen_itnim.h new file mode 100644 index 000000000000..a7d8ddcfef99 --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_itnim.h @@ -0,0 +1,33 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* messages define for BFA_AEN_CAT_ITNIM Module */ +#ifndef __bfa_aen_itnim_h__ +#define __bfa_aen_itnim_h__ + +#include +#include + +#define BFA_AEN_ITNIM_ONLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ITNIM, BFA_ITNIM_AEN_ONLINE) +#define BFA_AEN_ITNIM_OFFLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ITNIM, BFA_ITNIM_AEN_OFFLINE) +#define BFA_AEN_ITNIM_DISCONNECT \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_ITNIM, BFA_ITNIM_AEN_DISCONNECT) + +#endif + diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_lport.h b/drivers/scsi/bfa/include/aen/bfa_aen_lport.h new file mode 100644 index 000000000000..5a8ebb65193f --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_lport.h @@ -0,0 +1,51 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +/* messages define for BFA_AEN_CAT_LPORT Module */ +#ifndef __bfa_aen_lport_h__ +#define __bfa_aen_lport_h__ + +#include +#include + +#define BFA_AEN_LPORT_NEW \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_NEW) +#define BFA_AEN_LPORT_DELETE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_DELETE) +#define BFA_AEN_LPORT_ONLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_ONLINE) +#define BFA_AEN_LPORT_OFFLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_OFFLINE) +#define BFA_AEN_LPORT_DISCONNECT \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_DISCONNECT) +#define BFA_AEN_LPORT_NEW_PROP \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_NEW_PROP) +#define BFA_AEN_LPORT_DELETE_PROP \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_DELETE_PROP) +#define BFA_AEN_LPORT_NEW_STANDARD \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_NEW_STANDARD) +#define BFA_AEN_LPORT_DELETE_STANDARD \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_DELETE_STANDARD) +#define BFA_AEN_LPORT_NPIV_DUP_WWN \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_NPIV_DUP_WWN) +#define BFA_AEN_LPORT_NPIV_FABRIC_MAX \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_NPIV_FABRIC_MAX) +#define BFA_AEN_LPORT_NPIV_UNKNOWN \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_LPORT, BFA_LPORT_AEN_NPIV_UNKNOWN) + +#endif + diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_port.h b/drivers/scsi/bfa/include/aen/bfa_aen_port.h new file mode 100644 index 000000000000..9add905a622d --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_port.h @@ -0,0 +1,57 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +/* messages define for BFA_AEN_CAT_PORT Module */ +#ifndef __bfa_aen_port_h__ +#define __bfa_aen_port_h__ + +#include +#include + +#define BFA_AEN_PORT_ONLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_ONLINE) +#define BFA_AEN_PORT_OFFLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_OFFLINE) +#define BFA_AEN_PORT_RLIR \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_RLIR) +#define BFA_AEN_PORT_SFP_INSERT \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_SFP_INSERT) +#define BFA_AEN_PORT_SFP_REMOVE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_SFP_REMOVE) +#define BFA_AEN_PORT_SFP_POM \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_SFP_POM) +#define BFA_AEN_PORT_ENABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_ENABLE) +#define BFA_AEN_PORT_DISABLE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_DISABLE) +#define BFA_AEN_PORT_AUTH_ON \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_AUTH_ON) +#define BFA_AEN_PORT_AUTH_OFF \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_AUTH_OFF) +#define BFA_AEN_PORT_DISCONNECT \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_DISCONNECT) +#define BFA_AEN_PORT_QOS_NEG \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_QOS_NEG) +#define BFA_AEN_PORT_FABRIC_NAME_CHANGE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_FABRIC_NAME_CHANGE) +#define BFA_AEN_PORT_SFP_ACCESS_ERROR \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_SFP_ACCESS_ERROR) +#define BFA_AEN_PORT_SFP_UNSUPPORT \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_PORT, BFA_PORT_AEN_SFP_UNSUPPORT) + +#endif + diff --git a/drivers/scsi/bfa/include/aen/bfa_aen_rport.h b/drivers/scsi/bfa/include/aen/bfa_aen_rport.h new file mode 100644 index 000000000000..7e4be1fd5e15 --- /dev/null +++ b/drivers/scsi/bfa/include/aen/bfa_aen_rport.h @@ -0,0 +1,37 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* messages define for BFA_AEN_CAT_RPORT Module */ +#ifndef __bfa_aen_rport_h__ +#define __bfa_aen_rport_h__ + +#include +#include + +#define BFA_AEN_RPORT_ONLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_RPORT, BFA_RPORT_AEN_ONLINE) +#define BFA_AEN_RPORT_OFFLINE \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_RPORT, BFA_RPORT_AEN_OFFLINE) +#define BFA_AEN_RPORT_DISCONNECT \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_RPORT, BFA_RPORT_AEN_DISCONNECT) +#define BFA_AEN_RPORT_QOS_PRIO \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_RPORT, BFA_RPORT_AEN_QOS_PRIO) +#define BFA_AEN_RPORT_QOS_FLOWID \ + BFA_LOG_CREATE_ID(BFA_AEN_CAT_RPORT, BFA_RPORT_AEN_QOS_FLOWID) + +#endif + diff --git a/drivers/scsi/bfa/include/bfa.h b/drivers/scsi/bfa/include/bfa.h new file mode 100644 index 000000000000..64c1412c5703 --- /dev/null +++ b/drivers/scsi/bfa/include/bfa.h @@ -0,0 +1,177 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_H__ +#define __BFA_H__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct bfa_s; +#include + +struct bfa_pcidev_s; + +/** + * PCI devices supported by the current BFA + */ +struct bfa_pciid_s { + u16 device_id; + u16 vendor_id; +}; + +extern char bfa_version[]; + +/** + * BFA Power Mgmt Commands + */ +enum bfa_pm_cmd { + BFA_PM_CTL_D0 = 0, + BFA_PM_CTL_D1 = 1, + BFA_PM_CTL_D2 = 2, + BFA_PM_CTL_D3 = 3, +}; + +/** + * BFA memory resources + */ +enum bfa_mem_type { + BFA_MEM_TYPE_KVA = 1, /*! Kernel Virtual Memory *(non-dma-able) */ + BFA_MEM_TYPE_DMA = 2, /*! DMA-able memory */ + BFA_MEM_TYPE_MAX = BFA_MEM_TYPE_DMA, +}; + +struct bfa_mem_elem_s { + enum bfa_mem_type mem_type; /* see enum bfa_mem_type */ + u32 mem_len; /* Total Length in Bytes */ + u8 *kva; /* kernel virtual address */ + u64 dma; /* dma address if DMA memory */ + u8 *kva_curp; /* kva allocation cursor */ + u64 dma_curp; /* dma allocation cursor */ +}; + +struct bfa_meminfo_s { + struct bfa_mem_elem_s meminfo[BFA_MEM_TYPE_MAX]; +}; +#define bfa_meminfo_kva(_m) \ + (_m)->meminfo[BFA_MEM_TYPE_KVA - 1].kva_curp +#define bfa_meminfo_dma_virt(_m) \ + (_m)->meminfo[BFA_MEM_TYPE_DMA - 1].kva_curp +#define bfa_meminfo_dma_phys(_m) \ + (_m)->meminfo[BFA_MEM_TYPE_DMA - 1].dma_curp + +/** + * Generic Scatter Gather Element used by driver + */ +struct bfa_sge_s { + u32 sg_len; + void *sg_addr; +}; + +#define bfa_sge_to_be(__sge) do { \ + ((u32 *)(__sge))[0] = bfa_os_htonl(((u32 *)(__sge))[0]); \ + ((u32 *)(__sge))[1] = bfa_os_htonl(((u32 *)(__sge))[1]); \ + ((u32 *)(__sge))[2] = bfa_os_htonl(((u32 *)(__sge))[2]); \ +} while (0) + + +/* + * bfa stats interfaces + */ +#define bfa_stats(_mod, _stats) (_mod)->stats._stats ++ + +#define bfa_ioc_get_stats(__bfa, __ioc_stats) \ + bfa_ioc_fetch_stats(&(__bfa)->ioc, __ioc_stats) +#define bfa_ioc_clear_stats(__bfa) \ + bfa_ioc_clr_stats(&(__bfa)->ioc) + +/* + * bfa API functions + */ +void bfa_get_pciids(struct bfa_pciid_s **pciids, int *npciids); +void bfa_cfg_get_default(struct bfa_iocfc_cfg_s *cfg); +void bfa_cfg_get_min(struct bfa_iocfc_cfg_s *cfg); +void bfa_cfg_get_meminfo(struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo); +void bfa_attach(struct bfa_s *bfa, void *bfad, struct bfa_iocfc_cfg_s *cfg, + struct bfa_meminfo_s *meminfo, + struct bfa_pcidev_s *pcidev); +void bfa_init_trc(struct bfa_s *bfa, struct bfa_trc_mod_s *trcmod); +void bfa_init_log(struct bfa_s *bfa, struct bfa_log_mod_s *logmod); +void bfa_init_aen(struct bfa_s *bfa, struct bfa_aen_s *aen); +void bfa_init_plog(struct bfa_s *bfa, struct bfa_plog_s *plog); +void bfa_detach(struct bfa_s *bfa); +void bfa_init(struct bfa_s *bfa); +void bfa_start(struct bfa_s *bfa); +void bfa_stop(struct bfa_s *bfa); +void bfa_attach_fcs(struct bfa_s *bfa); +void bfa_cb_init(void *bfad, bfa_status_t status); +void bfa_cb_stop(void *bfad, bfa_status_t status); +void bfa_cb_updateq(void *bfad, bfa_status_t status); + +bfa_boolean_t bfa_intx(struct bfa_s *bfa); +void bfa_isr_enable(struct bfa_s *bfa); 
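/*
 * Interrupt dispatch: each MSI-X vector is expected to funnel into the
 * handler table behind the bfa_msix() macro below, with bfa_intx()
 * serving the INTx path. A minimal sketch of the per-vector glue a
 * driver might supply (the irq plumbing and vec_for_irq() here are
 * illustrative, not part of this API):
 *
 *	static irqreturn_t bfad_msix_handler(int irq, void *dev_id)
 *	{
 *		struct bfa_s *bfa = dev_id;
 *
 *		bfa_msix(bfa, vec_for_irq(irq));
 *		return IRQ_HANDLED;
 *	}
 */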
+void bfa_isr_disable(struct bfa_s *bfa); +void bfa_msix_getvecs(struct bfa_s *bfa, u32 *msix_vecs_bmap, + u32 *num_vecs, u32 *max_vec_bit); +#define bfa_msix(__bfa, __vec) (__bfa)->msix.handler[__vec](__bfa, __vec) + +void bfa_comp_deq(struct bfa_s *bfa, struct list_head *comp_q); +void bfa_comp_process(struct bfa_s *bfa, struct list_head *comp_q); +void bfa_comp_free(struct bfa_s *bfa, struct list_head *comp_q); + +typedef void (*bfa_cb_ioc_t) (void *cbarg, enum bfa_status status); +void bfa_iocfc_get_attr(struct bfa_s *bfa, struct bfa_iocfc_attr_s *attr); +bfa_status_t bfa_iocfc_get_stats(struct bfa_s *bfa, + struct bfa_iocfc_stats_s *stats, + bfa_cb_ioc_t cbfn, void *cbarg); +bfa_status_t bfa_iocfc_clear_stats(struct bfa_s *bfa, + bfa_cb_ioc_t cbfn, void *cbarg); +void bfa_get_attr(struct bfa_s *bfa, struct bfa_ioc_attr_s *ioc_attr); + +void bfa_adapter_get_attr(struct bfa_s *bfa, + struct bfa_adapter_attr_s *ad_attr); +u64 bfa_adapter_get_id(struct bfa_s *bfa); + +bfa_status_t bfa_iocfc_israttr_set(struct bfa_s *bfa, + struct bfa_iocfc_intr_attr_s *attr); + +void bfa_iocfc_enable(struct bfa_s *bfa); +void bfa_iocfc_disable(struct bfa_s *bfa); +void bfa_ioc_auto_recover(bfa_boolean_t auto_recover); +void bfa_cb_ioc_disable(void *bfad); +void bfa_timer_tick(struct bfa_s *bfa); +#define bfa_timer_start(_bfa, _timer, _timercb, _arg, _timeout) \ + bfa_timer_begin(&(_bfa)->timer_mod, _timer, _timercb, _arg, _timeout) + +/* + * BFA debug API functions + */ +bfa_status_t bfa_debug_fwtrc(struct bfa_s *bfa, void *trcdata, int *trclen); +bfa_status_t bfa_debug_fwsave(struct bfa_s *bfa, void *trcdata, int *trclen); + +#include "bfa_priv.h" + +#endif /* __BFA_H__ */ diff --git a/drivers/scsi/bfa/include/bfa_fcpim.h b/drivers/scsi/bfa/include/bfa_fcpim.h new file mode 100644 index 000000000000..04789795fa53 --- /dev/null +++ b/drivers/scsi/bfa/include/bfa_fcpim.h @@ -0,0 +1,159 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __BFA_FCPIM_H__ +#define __BFA_FCPIM_H__ + +#include +#include +#include +#include + +/* + * forward declarations + */ +struct bfa_itnim_s; +struct bfa_ioim_s; +struct bfa_tskim_s; +struct bfad_ioim_s; +struct bfad_tskim_s; + +/* + * bfa fcpim module API functions + */ +void bfa_fcpim_path_tov_set(struct bfa_s *bfa, u16 path_tov); +u16 bfa_fcpim_path_tov_get(struct bfa_s *bfa); +void bfa_fcpim_qdepth_set(struct bfa_s *bfa, u16 q_depth); +u16 bfa_fcpim_qdepth_get(struct bfa_s *bfa); +bfa_status_t bfa_fcpim_get_modstats(struct bfa_s *bfa, + struct bfa_fcpim_stats_s *modstats); +bfa_status_t bfa_fcpim_clr_modstats(struct bfa_s *bfa); + +/* + * bfa itnim API functions + */ +struct bfa_itnim_s *bfa_itnim_create(struct bfa_s *bfa, + struct bfa_rport_s *rport, void *itnim); +void bfa_itnim_delete(struct bfa_itnim_s *itnim); +void bfa_itnim_online(struct bfa_itnim_s *itnim, + bfa_boolean_t seq_rec); +void bfa_itnim_offline(struct bfa_itnim_s *itnim); +void bfa_itnim_get_stats(struct bfa_itnim_s *itnim, + struct bfa_itnim_hal_stats_s *stats); +void bfa_itnim_clear_stats(struct bfa_itnim_s *itnim); + + +/** + * BFA completion callback for bfa_itnim_online(). + * + * @param[in] itnim FCS or driver itnim instance + * + * return None + */ +void bfa_cb_itnim_online(void *itnim); + +/** + * BFA completion callback for bfa_itnim_offline(). + * + * @param[in] itnim FCS or driver itnim instance + * + * return None + */ +void bfa_cb_itnim_offline(void *itnim); +void bfa_cb_itnim_tov_begin(void *itnim); +void bfa_cb_itnim_tov(void *itnim); + +/** + * BFA notification to FCS/driver for second level error recovery. + * + * At least one I/O request has timed out and the target is unresponsive to + * repeated abort requests. Second level error recovery should be initiated + * by starting implicit logout and recovery procedures. + * + * @param[in] itnim FCS or driver itnim instance + * + * return None + */ +void bfa_cb_itnim_sler(void *itnim); + +/* + * bfa ioim API functions + */ +struct bfa_ioim_s *bfa_ioim_alloc(struct bfa_s *bfa, + struct bfad_ioim_s *dio, + struct bfa_itnim_s *itnim, + u16 nsgles); + +void bfa_ioim_free(struct bfa_ioim_s *ioim); +void bfa_ioim_start(struct bfa_ioim_s *ioim); +void bfa_ioim_abort(struct bfa_ioim_s *ioim); +void bfa_ioim_delayed_comp(struct bfa_ioim_s *ioim, + bfa_boolean_t iotov); + + +/** + * I/O completion notification. + * + * @param[in] dio driver IO structure + * @param[in] io_status IO completion status + * @param[in] scsi_status SCSI status returned by target + * @param[in] sns_len SCSI sense length, 0 if none + * @param[in] sns_info SCSI sense data, if any + * @param[in] residue Residual length + * + * @return None + */ +void bfa_cb_ioim_done(void *bfad, struct bfad_ioim_s *dio, + enum bfi_ioim_status io_status, + u8 scsi_status, int sns_len, + u8 *sns_info, s32 residue); +
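/*
 * A sketch of how a driver might service this callback, mapping the
 * completion back onto a midlayer request (bfad_ioim_to_cmnd() and
 * bfad_map_status() are hypothetical helpers, not part of this API):
 *
 *	void bfa_cb_ioim_done(void *bfad, struct bfad_ioim_s *dio,
 *			enum bfi_ioim_status io_status, u8 scsi_status,
 *			int sns_len, u8 *sns_info, s32 residue)
 *	{
 *		struct scsi_cmnd *cmnd = bfad_ioim_to_cmnd(dio);
 *
 *		if (sns_len)
 *			memcpy(cmnd->sense_buffer, sns_info,
 *				min(sns_len, SCSI_SENSE_BUFFERSIZE));
 *		scsi_set_resid(cmnd, residue);
 *		cmnd->result = bfad_map_status(io_status, scsi_status);
 *		cmnd->scsi_done(cmnd);
 *	}
 */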
+/** +* I/O good completion notification. + * + * @param[in] dio driver IO structure + * + * @return None + */ +void bfa_cb_ioim_good_comp(void *bfad, struct bfad_ioim_s *dio); + +/** + * I/O abort completion notification + * + * @param[in] dio driver IO that was aborted + * + * @return None + */ +void bfa_cb_ioim_abort(void *bfad, struct bfad_ioim_s *dio); +void bfa_cb_ioim_resfree(void *hcb_bfad); + +/* + * bfa tskim API functions + */ +struct bfa_tskim_s *bfa_tskim_alloc(struct bfa_s *bfa, + struct bfad_tskim_s *dtsk); +void bfa_tskim_free(struct bfa_tskim_s *tskim); +void bfa_tskim_start(struct bfa_tskim_s *tskim, + struct bfa_itnim_s *itnim, lun_t lun, + enum fcp_tm_cmnd tm, u8 t_secs); +void bfa_cb_tskim_done(void *bfad, struct bfad_tskim_s *dtsk, + enum bfi_tskim_status tsk_status); + +#endif /* __BFA_FCPIM_H__ */ + diff --git a/drivers/scsi/bfa/include/bfa_fcptm.h b/drivers/scsi/bfa/include/bfa_fcptm.h new file mode 100644 index 000000000000..5f5ffe0bb1bb --- /dev/null +++ b/drivers/scsi/bfa/include/bfa_fcptm.h @@ -0,0 +1,47 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_FCPTM_H__ +#define __BFA_FCPTM_H__ + +#include +#include +#include + +/* + * forward declarations + */ +struct bfa_tin_s; +struct bfa_iotm_s; +struct bfa_tsktm_s; + +/* + * bfa fcptm module API functions + */ +void bfa_fcptm_path_tov_set(struct bfa_s *bfa, u16 path_tov); +u16 bfa_fcptm_path_tov_get(struct bfa_s *bfa); +void bfa_fcptm_qdepth_set(struct bfa_s *bfa, u16 q_depth); +u16 bfa_fcptm_qdepth_get(struct bfa_s *bfa); + +/* + * bfa tin API functions + */ +void bfa_tin_get_stats(struct bfa_tin_s *tin, struct bfa_tin_stats_s *stats); +void bfa_tin_clear_stats(struct bfa_tin_s *tin); + +#endif /* __BFA_FCPTM_H__ */ + diff --git a/drivers/scsi/bfa/include/bfa_svc.h b/drivers/scsi/bfa/include/bfa_svc.h new file mode 100644 index 000000000000..0c80b74f72ef --- /dev/null +++ b/drivers/scsi/bfa/include/bfa_svc.h @@ -0,0 +1,324 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_SVC_H__ +#define __BFA_SVC_H__ + +/* + * forward declarations + */ +struct bfa_fcxp_s; + +#include +#include +#include +#include +#include +#include + +/** + * BFA rport information.
+ */ +struct bfa_rport_info_s { + u16 max_frmsz; /* max rcv pdu size */ + u32 pid : 24, /* remote port ID */ + lp_tag : 8; + u32 local_pid : 24, /* local port ID */ + cisc : 8; /* CIRO supported */ + u8 fc_class; /* supported FC classes. enum fc_cos */ + u8 vf_en; /* virtual fabric enable */ + u16 vf_id; /* virtual fabric ID */ + enum bfa_pport_speed speed; /* Rport's current speed */ +}; + +/** + * BFA rport data structure + */ +struct bfa_rport_s { + struct list_head qe; /* queue element */ + bfa_sm_t sm; /* state machine */ + struct bfa_s *bfa; /* backpointer to BFA */ + void *rport_drv; /* fcs/driver rport object */ + u16 fw_handle; /* firmware rport handle */ + u16 rport_tag; /* BFA rport tag */ + struct bfa_rport_info_s rport_info; /* rport info from *fcs/driver */ + struct bfa_reqq_wait_s reqq_wait; /* to wait for room in reqq */ + struct bfa_cb_qe_s hcb_qe; /* BFA callback qelem */ + struct bfa_rport_hal_stats_s stats; /* BFA rport statistics */ + struct bfa_rport_qos_attr_s qos_attr; + union a { + bfa_status_t status; /* f/w status */ + void *fw_msg; /* QoS scn event */ + } event_arg; +}; +#define BFA_RPORT_FC_COS(_rport) ((_rport)->rport_info.fc_class) + +/** + * Send completion callback. + */ +typedef void (*bfa_cb_fcxp_send_t) (void *bfad_fcxp, struct bfa_fcxp_s *fcxp, + void *cbarg, enum bfa_status req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs); + +/** + * BFA fcxp allocation (asynchronous) + */ +typedef void (*bfa_fcxp_alloc_cbfn_t) (void *cbarg, struct bfa_fcxp_s *fcxp); + +struct bfa_fcxp_wqe_s { + struct list_head qe; + bfa_fcxp_alloc_cbfn_t alloc_cbfn; + void *alloc_cbarg; +}; + +typedef u64 (*bfa_fcxp_get_sgaddr_t) (void *bfad_fcxp, int sgeid); +typedef u32 (*bfa_fcxp_get_sglen_t) (void *bfad_fcxp, int sgeid); + +#define BFA_UF_BUFSZ (2 * 1024 + 256) + +/** + * @todo private + */ +struct bfa_uf_buf_s { + u8 d[BFA_UF_BUFSZ]; +}; + + +struct bfa_uf_s { + struct list_head qe; /* queue element */ + struct bfa_s *bfa; /* bfa instance */ + u16 uf_tag; /* identifying tag f/w messages */ + u16 vf_id; + u16 src_rport_handle; + u16 rsvd; + u8 *data_ptr; + u16 data_len; /* actual receive length */ + u16 pb_len; /* posted buffer length */ + void *buf_kva; /* buffer virtual address */ + u64 buf_pa; /* buffer physical address */ + struct bfa_cb_qe_s hcb_qe; /* comp: BFA comp qelem */ + struct bfa_sge_s sges[BFI_SGE_INLINE_MAX]; +}; + +typedef void (*bfa_cb_pport_t) (void *cbarg, enum bfa_status status); + +/** + * bfa lport login/logout service interface + */ +struct bfa_lps_s { + struct list_head qe; /* queue element */ + struct bfa_s *bfa; /* parent bfa instance */ + bfa_sm_t sm; /* finite state machine */ + u8 lp_tag; /* lport tag */ + u8 reqq; /* lport request queue */ + u8 alpa; /* ALPA for loop topologies */ + u32 lp_pid; /* lport port ID */ + bfa_boolean_t fdisc; /* send FDISC instead of FLOGI*/ + bfa_boolean_t auth_en; /* enable authentication */ + bfa_boolean_t auth_req; /* authentication required */ + bfa_boolean_t npiv_en; /* NPIV is allowed by peer */ + bfa_boolean_t fport; /* attached peer is F_PORT */ + bfa_boolean_t brcd_switch;/* attached peer is brcd switch */ + bfa_status_t status; /* login status */ + u16 pdusz; /* max receive PDU size */ + u16 pr_bbcred; /* BB_CREDIT from peer */ + u8 lsrjt_rsn; /* LSRJT reason */ + u8 lsrjt_expl; /* LSRJT explanation */ + wwn_t pwwn; /* port wwn of lport */ + wwn_t nwwn; /* node wwn of lport */ + wwn_t pr_pwwn; /* port wwn of lport peer */ + wwn_t pr_nwwn; /* node wwn of lport peer */ + mac_t lp_mac; /* 
fpma/spma MAC for lport */ + mac_t fcf_mac; /* FCF MAC of lport */ + struct bfa_reqq_wait_s wqe; /* request wait queue element */ + void *uarg; /* user callback arg */ + struct bfa_cb_qe_s hcb_qe; /* comp: callback qelem */ + struct bfi_lps_login_rsp_s *loginrsp; + bfa_eproto_status_t ext_status; +}; + +/* + * bfa pport API functions + */ +bfa_status_t bfa_pport_enable(struct bfa_s *bfa); +bfa_status_t bfa_pport_disable(struct bfa_s *bfa); +bfa_status_t bfa_pport_cfg_speed(struct bfa_s *bfa, + enum bfa_pport_speed speed); +enum bfa_pport_speed bfa_pport_get_speed(struct bfa_s *bfa); +bfa_status_t bfa_pport_cfg_topology(struct bfa_s *bfa, + enum bfa_pport_topology topo); +enum bfa_pport_topology bfa_pport_get_topology(struct bfa_s *bfa); +bfa_status_t bfa_pport_cfg_hardalpa(struct bfa_s *bfa, u8 alpa); +bfa_boolean_t bfa_pport_get_hardalpa(struct bfa_s *bfa, u8 *alpa); +u8 bfa_pport_get_myalpa(struct bfa_s *bfa); +bfa_status_t bfa_pport_clr_hardalpa(struct bfa_s *bfa); +bfa_status_t bfa_pport_cfg_maxfrsize(struct bfa_s *bfa, u16 maxsize); +u16 bfa_pport_get_maxfrsize(struct bfa_s *bfa); +u32 bfa_pport_mypid(struct bfa_s *bfa); +u8 bfa_pport_get_rx_bbcredit(struct bfa_s *bfa); +bfa_status_t bfa_pport_trunk_enable(struct bfa_s *bfa, u8 bitmap); +bfa_status_t bfa_pport_trunk_disable(struct bfa_s *bfa); +bfa_boolean_t bfa_pport_trunk_query(struct bfa_s *bfa, u32 *bitmap); +void bfa_pport_get_attr(struct bfa_s *bfa, struct bfa_pport_attr_s *attr); +wwn_t bfa_pport_get_wwn(struct bfa_s *bfa, bfa_boolean_t node); +bfa_status_t bfa_pport_get_stats(struct bfa_s *bfa, + union bfa_pport_stats_u *stats, + bfa_cb_pport_t cbfn, void *cbarg); +bfa_status_t bfa_pport_clear_stats(struct bfa_s *bfa, bfa_cb_pport_t cbfn, + void *cbarg); +void bfa_pport_event_register(struct bfa_s *bfa, + void (*event_cbfn) (void *cbarg, + bfa_pport_event_t event), void *event_cbarg); +bfa_boolean_t bfa_pport_is_disabled(struct bfa_s *bfa); +void bfa_pport_cfg_qos(struct bfa_s *bfa, bfa_boolean_t on_off); +void bfa_pport_cfg_ratelim(struct bfa_s *bfa, bfa_boolean_t on_off); +bfa_status_t bfa_pport_cfg_ratelim_speed(struct bfa_s *bfa, + enum bfa_pport_speed speed); +enum bfa_pport_speed bfa_pport_get_ratelim_speed(struct bfa_s *bfa); + +void bfa_pport_set_tx_bbcredit(struct bfa_s *bfa, u16 tx_bbcredit); +void bfa_pport_busy(struct bfa_s *bfa, bfa_boolean_t status); +void bfa_pport_beacon(struct bfa_s *bfa, bfa_boolean_t beacon, + bfa_boolean_t link_e2e_beacon); +void bfa_cb_pport_event(void *cbarg, bfa_pport_event_t event); +void bfa_pport_qos_get_attr(struct bfa_s *bfa, struct bfa_qos_attr_s *qos_attr); +void bfa_pport_qos_get_vc_attr(struct bfa_s *bfa, + struct bfa_qos_vc_attr_s *qos_vc_attr); +bfa_status_t bfa_pport_get_qos_stats(struct bfa_s *bfa, + union bfa_pport_stats_u *stats, + bfa_cb_pport_t cbfn, void *cbarg); +bfa_status_t bfa_pport_clear_qos_stats(struct bfa_s *bfa, bfa_cb_pport_t cbfn, + void *cbarg); +bfa_boolean_t bfa_pport_is_ratelim(struct bfa_s *bfa); +bfa_boolean_t bfa_pport_is_linkup(struct bfa_s *bfa); + +/* + * bfa rport API functions + */ +struct bfa_rport_s *bfa_rport_create(struct bfa_s *bfa, void *rport_drv); +void bfa_rport_delete(struct bfa_rport_s *rport); +void bfa_rport_online(struct bfa_rport_s *rport, + struct bfa_rport_info_s *rport_info); +void bfa_rport_offline(struct bfa_rport_s *rport); +void bfa_rport_speed(struct bfa_rport_s *rport, enum bfa_pport_speed speed); +void bfa_rport_get_stats(struct bfa_rport_s *rport, + struct bfa_rport_hal_stats_s *stats); +void bfa_rport_clear_stats(struct 
bfa_rport_s *rport); +void bfa_cb_rport_online(void *rport); +void bfa_cb_rport_offline(void *rport); +void bfa_cb_rport_qos_scn_flowid(void *rport, + struct bfa_rport_qos_attr_s old_qos_attr, + struct bfa_rport_qos_attr_s new_qos_attr); +void bfa_cb_rport_qos_scn_prio(void *rport, + struct bfa_rport_qos_attr_s old_qos_attr, + struct bfa_rport_qos_attr_s new_qos_attr); +void bfa_rport_get_qos_attr(struct bfa_rport_s *rport, + struct bfa_rport_qos_attr_s *qos_attr); + +/* + * bfa fcxp API functions + */ +struct bfa_fcxp_s *bfa_fcxp_alloc(void *bfad_fcxp, struct bfa_s *bfa, + int nreq_sgles, int nrsp_sgles, + bfa_fcxp_get_sgaddr_t get_req_sga, + bfa_fcxp_get_sglen_t get_req_sglen, + bfa_fcxp_get_sgaddr_t get_rsp_sga, + bfa_fcxp_get_sglen_t get_rsp_sglen); +void bfa_fcxp_alloc_wait(struct bfa_s *bfa, struct bfa_fcxp_wqe_s *wqe, + bfa_fcxp_alloc_cbfn_t alloc_cbfn, void *cbarg); +void bfa_fcxp_walloc_cancel(struct bfa_s *bfa, + struct bfa_fcxp_wqe_s *wqe); +void bfa_fcxp_discard(struct bfa_fcxp_s *fcxp); + +void *bfa_fcxp_get_reqbuf(struct bfa_fcxp_s *fcxp); +void *bfa_fcxp_get_rspbuf(struct bfa_fcxp_s *fcxp); + +void bfa_fcxp_free(struct bfa_fcxp_s *fcxp); + +void bfa_fcxp_send(struct bfa_fcxp_s *fcxp, + struct bfa_rport_s *rport, u16 vf_id, u8 lp_tag, + bfa_boolean_t cts, enum fc_cos cos, + u32 reqlen, struct fchs_s *fchs, + bfa_cb_fcxp_send_t cbfn, + void *cbarg, + u32 rsp_maxlen, u8 rsp_timeout); +bfa_status_t bfa_fcxp_abort(struct bfa_fcxp_s *fcxp); +u32 bfa_fcxp_get_reqbufsz(struct bfa_fcxp_s *fcxp); +u32 bfa_fcxp_get_maxrsp(struct bfa_s *bfa); + +static inline void * +bfa_uf_get_frmbuf(struct bfa_uf_s *uf) +{ + return uf->data_ptr; +} + +static inline u16 +bfa_uf_get_frmlen(struct bfa_uf_s *uf) +{ + return uf->data_len; +} + +/** + * Callback prototype for unsolicited frame receive handler. 
+ * + * @param[in] cbarg callback arg for receive handler + * @param[in] uf unsolicited frame descriptor + * + * @return None + */ +typedef void (*bfa_cb_uf_recv_t) (void *cbarg, struct bfa_uf_s *uf); + +/* + * bfa uf API functions + */ +void bfa_uf_recv_register(struct bfa_s *bfa, bfa_cb_uf_recv_t ufrecv, + void *cbarg); +void bfa_uf_free(struct bfa_uf_s *uf); + +/** + * bfa lport service api + */ + +struct bfa_lps_s *bfa_lps_alloc(struct bfa_s *bfa); +void bfa_lps_delete(struct bfa_lps_s *lps); +void bfa_lps_discard(struct bfa_lps_s *lps); +void bfa_lps_flogi(struct bfa_lps_s *lps, void *uarg, u8 alpa, u16 pdusz, + wwn_t pwwn, wwn_t nwwn, bfa_boolean_t auth_en); +void bfa_lps_fdisc(struct bfa_lps_s *lps, void *uarg, u16 pdusz, wwn_t pwwn, + wwn_t nwwn); +void bfa_lps_flogo(struct bfa_lps_s *lps); +void bfa_lps_fdisclogo(struct bfa_lps_s *lps); +u8 bfa_lps_get_tag(struct bfa_lps_s *lps); +bfa_boolean_t bfa_lps_is_npiv_en(struct bfa_lps_s *lps); +bfa_boolean_t bfa_lps_is_fport(struct bfa_lps_s *lps); +bfa_boolean_t bfa_lps_is_brcd_fabric(struct bfa_lps_s *lps); +bfa_boolean_t bfa_lps_is_authreq(struct bfa_lps_s *lps); +bfa_eproto_status_t bfa_lps_get_extstatus(struct bfa_lps_s *lps); +u32 bfa_lps_get_pid(struct bfa_lps_s *lps); +u8 bfa_lps_get_tag_from_pid(struct bfa_s *bfa, u32 pid); +u16 bfa_lps_get_peer_bbcredit(struct bfa_lps_s *lps); +wwn_t bfa_lps_get_peer_pwwn(struct bfa_lps_s *lps); +wwn_t bfa_lps_get_peer_nwwn(struct bfa_lps_s *lps); +u8 bfa_lps_get_lsrjt_rsn(struct bfa_lps_s *lps); +u8 bfa_lps_get_lsrjt_expl(struct bfa_lps_s *lps); +void bfa_cb_lps_flogi_comp(void *bfad, void *uarg, bfa_status_t status); +void bfa_cb_lps_flogo_comp(void *bfad, void *uarg); +void bfa_cb_lps_fdisc_comp(void *bfad, void *uarg, bfa_status_t status); +void bfa_cb_lps_fdisclogo_comp(void *bfad, void *uarg); + +#endif /* __BFA_SVC_H__ */ + diff --git a/drivers/scsi/bfa/include/bfa_timer.h b/drivers/scsi/bfa/include/bfa_timer.h new file mode 100644 index 000000000000..e407103fa565 --- /dev/null +++ b/drivers/scsi/bfa/include/bfa_timer.h @@ -0,0 +1,53 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_TIMER_H__ +#define __BFA_TIMER_H__ + +#include +#include + +struct bfa_s; + +typedef void (*bfa_timer_cbfn_t)(void *); + +/** + * BFA timer data structure + */ +struct bfa_timer_s { + struct list_head qe; + bfa_timer_cbfn_t timercb; + void *arg; + int timeout; /**< in millisecs. 
*/
+};
+
+/**
+ * Timer module structure
+ */
+struct bfa_timer_mod_s {
+	struct list_head timer_q;
+};
+
+#define BFA_TIMER_FREQ 500 /**< specified in millisecs */
+
+void bfa_timer_beat(struct bfa_timer_mod_s *mod);
+void bfa_timer_init(struct bfa_timer_mod_s *mod);
+void bfa_timer_begin(struct bfa_timer_mod_s *mod, struct bfa_timer_s *timer,
+			bfa_timer_cbfn_t timercb, void *arg,
+			unsigned int timeout);
+void bfa_timer_stop(struct bfa_timer_s *timer);
+
+#endif /* __BFA_TIMER_H__ */
diff --git a/drivers/scsi/bfa/include/bfi/bfi.h b/drivers/scsi/bfa/include/bfi/bfi.h
new file mode 100644
index 000000000000..6cadfe0d4ba1
--- /dev/null
+++ b/drivers/scsi/bfa/include/bfi/bfi.h
@@ -0,0 +1,174 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __BFI_H__
+#define __BFI_H__
+
+#include
+#include
+
+#pragma pack(1)
+
+/**
+ * Msg header common to all msgs
+ */
+struct bfi_mhdr_s {
+	u8 msg_class; /* @ref bfi_mclass_t */
+	u8 msg_id; /* msg opcode within the class */
+	union {
+		struct {
+			u8 rsvd;
+			u8 lpu_id; /* msg destination */
+		} h2i;
+		u16 i2htok; /* token in msgs to host */
+	} mtag;
+};
+
+#define bfi_h2i_set(_mh, _mc, _op, _lpuid) do { \
+	(_mh).msg_class = (_mc); \
+	(_mh).msg_id = (_op); \
+	(_mh).mtag.h2i.lpu_id = (_lpuid); \
+} while (0)
+
+#define bfi_i2h_set(_mh, _mc, _op, _i2htok) do { \
+	(_mh).msg_class = (_mc); \
+	(_mh).msg_id = (_op); \
+	(_mh).mtag.i2htok = (_i2htok); \
+} while (0)
+
+/*
+ * Message opcodes: 0-127 to firmware, 128-255 to host
+ */
+#define BFI_I2H_OPCODE_BASE 128
+#define BFA_I2HM(_x) ((_x) + BFI_I2H_OPCODE_BASE)
+
+/**
+ ****************************************************************************
+ *
+ * Scatter Gather Element and Page definition
+ *
+ ****************************************************************************
+ */
+
+#define BFI_SGE_INLINE 1
+#define BFI_SGE_INLINE_MAX (BFI_SGE_INLINE + 1)
+
+/**
+ * SG Flags
+ */
+enum {
+	BFI_SGE_DATA = 0, /* data address, not last */
+	BFI_SGE_DATA_CPL = 1, /* data addr, last in current page */
+	BFI_SGE_DATA_LAST = 3, /* data address, last */
+	BFI_SGE_LINK = 2, /* link address */
+	BFI_SGE_PGDLEN = 2, /* cumulative data length for page */
+};
+
+/**
+ * DMA addresses
+ */
+union bfi_addr_u {
+	struct {
+		u32 addr_lo;
+		u32 addr_hi;
+	} a32;
+};
+
+/**
+ * Scatter Gather Element
+ */
+struct bfi_sge_s {
+#ifdef __BIGENDIAN
+	u32 flags : 2,
+	    rsvd : 2,
+	    sg_len : 28;
+#else
+	u32 sg_len : 28,
+	    rsvd : 2,
+	    flags : 2;
+#endif
+	union bfi_addr_u sga;
+};
+
+/**
+ * Scatter Gather Page
+ */
+#define BFI_SGPG_DATA_SGES 7
+#define BFI_SGPG_SGES_MAX (BFI_SGPG_DATA_SGES + 1)
+#define BFI_SGPG_RSVD_WD_LEN 8
+struct bfi_sgpg_s {
+	struct bfi_sge_s sges[BFI_SGPG_SGES_MAX];
+	u32 rsvd[BFI_SGPG_RSVD_WD_LEN];
+};
+
+/*
+ * Large Message structure - 128 Bytes size Msgs
+ */
+#define BFI_LMSG_SZ 128
+#define BFI_LMSG_PL_WSZ \
+	((BFI_LMSG_SZ - sizeof(struct bfi_mhdr_s)) / 4)
+
+struct bfi_msg_s {
+	struct bfi_mhdr_s mhdr;
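+	/* payload fills the remainder of the 128-byte message (BFI_LMSG_PL_WSZ words) */
+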
u32 pl[BFI_LMSG_PL_WSZ]; +}; + +/** + * Mailbox message structure + */ +#define BFI_MBMSG_SZ 7 +struct bfi_mbmsg_s { + struct bfi_mhdr_s mh; + u32 pl[BFI_MBMSG_SZ]; +}; + +/** + * Message Classes + */ +enum bfi_mclass { + BFI_MC_IOC = 1, /* IO Controller (IOC) */ + BFI_MC_DIAG = 2, /* Diagnostic Msgs */ + BFI_MC_FLASH = 3, /* Flash message class */ + BFI_MC_CEE = 4, + BFI_MC_FC_PORT = 5, /* FC port */ + BFI_MC_IOCFC = 6, /* FC - IO Controller (IOC) */ + BFI_MC_LL = 7, /* Link Layer */ + BFI_MC_UF = 8, /* Unsolicited frame receive */ + BFI_MC_FCXP = 9, /* FC Transport */ + BFI_MC_LPS = 10, /* lport fc login services */ + BFI_MC_RPORT = 11, /* Remote port */ + BFI_MC_ITNIM = 12, /* I-T nexus (Initiator mode) */ + BFI_MC_IOIM_READ = 13, /* read IO (Initiator mode) */ + BFI_MC_IOIM_WRITE = 14, /* write IO (Initiator mode) */ + BFI_MC_IOIM_IO = 15, /* IO (Initiator mode) */ + BFI_MC_IOIM = 16, /* IO (Initiator mode) */ + BFI_MC_IOIM_IOCOM = 17, /* good IO completion */ + BFI_MC_TSKIM = 18, /* Initiator Task management */ + BFI_MC_SBOOT = 19, /* SAN boot services */ + BFI_MC_IPFC = 20, /* IP over FC Msgs */ + BFI_MC_PORT = 21, /* Physical port */ + BFI_MC_MAX = 32 +}; + +#define BFI_IOC_MAX_CQS 4 +#define BFI_IOC_MAX_CQS_ASIC 8 +#define BFI_IOC_MSGLEN_MAX 32 /* 32 bytes */ + +#pragma pack() + +#endif /* __BFI_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_boot.h b/drivers/scsi/bfa/include/bfi/bfi_boot.h new file mode 100644 index 000000000000..5955afe7d108 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_boot.h @@ -0,0 +1,34 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +/* + * bfi_boot.h + */ + +#ifndef __BFI_BOOT_H__ +#define __BFI_BOOT_H__ + +#define BFI_BOOT_TYPE_OFF 8 +#define BFI_BOOT_PARAM_OFF 12 + +#define BFI_BOOT_TYPE_NORMAL 0 /* param is device id */ +#define BFI_BOOT_TYPE_FLASH 1 +#define BFI_BOOT_TYPE_MEMTEST 2 + +#define BFI_BOOT_MEMTEST_RES_ADDR 0x900 +#define BFI_BOOT_MEMTEST_RES_SIG 0xA0A1A2A3 + +#endif diff --git a/drivers/scsi/bfa/include/bfi/bfi_cbreg.h b/drivers/scsi/bfa/include/bfi/bfi_cbreg.h new file mode 100644 index 000000000000..b3bb52b565b1 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_cbreg.h @@ -0,0 +1,305 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* + * bfi_cbreg.h crossbow host block register definitions + * + * !!! Do not edit. Auto generated. !!! 
+ */ + +#ifndef __BFI_CBREG_H__ +#define __BFI_CBREG_H__ + + +#define HOSTFN0_INT_STATUS 0x00014000 +#define __HOSTFN0_INT_STATUS_LVL_MK 0x00f00000 +#define __HOSTFN0_INT_STATUS_LVL_SH 20 +#define __HOSTFN0_INT_STATUS_LVL(_v) ((_v) << __HOSTFN0_INT_STATUS_LVL_SH) +#define __HOSTFN0_INT_STATUS_P 0x000fffff +#define HOSTFN0_INT_MSK 0x00014004 +#define HOST_PAGE_NUM_FN0 0x00014008 +#define __HOST_PAGE_NUM_FN 0x000001ff +#define HOSTFN1_INT_STATUS 0x00014100 +#define __HOSTFN1_INT_STAT_LVL_MK 0x00f00000 +#define __HOSTFN1_INT_STAT_LVL_SH 20 +#define __HOSTFN1_INT_STAT_LVL(_v) ((_v) << __HOSTFN1_INT_STAT_LVL_SH) +#define __HOSTFN1_INT_STAT_P 0x000fffff +#define HOSTFN1_INT_MSK 0x00014104 +#define HOST_PAGE_NUM_FN1 0x00014108 +#define APP_PLL_400_CTL_REG 0x00014204 +#define __P_400_PLL_LOCK 0x80000000 +#define __APP_PLL_400_SRAM_USE_100MHZ 0x00100000 +#define __APP_PLL_400_RESET_TIMER_MK 0x000e0000 +#define __APP_PLL_400_RESET_TIMER_SH 17 +#define __APP_PLL_400_RESET_TIMER(_v) ((_v) << __APP_PLL_400_RESET_TIMER_SH) +#define __APP_PLL_400_LOGIC_SOFT_RESET 0x00010000 +#define __APP_PLL_400_CNTLMT0_1_MK 0x0000c000 +#define __APP_PLL_400_CNTLMT0_1_SH 14 +#define __APP_PLL_400_CNTLMT0_1(_v) ((_v) << __APP_PLL_400_CNTLMT0_1_SH) +#define __APP_PLL_400_JITLMT0_1_MK 0x00003000 +#define __APP_PLL_400_JITLMT0_1_SH 12 +#define __APP_PLL_400_JITLMT0_1(_v) ((_v) << __APP_PLL_400_JITLMT0_1_SH) +#define __APP_PLL_400_HREF 0x00000800 +#define __APP_PLL_400_HDIV 0x00000400 +#define __APP_PLL_400_P0_1_MK 0x00000300 +#define __APP_PLL_400_P0_1_SH 8 +#define __APP_PLL_400_P0_1(_v) ((_v) << __APP_PLL_400_P0_1_SH) +#define __APP_PLL_400_Z0_2_MK 0x000000e0 +#define __APP_PLL_400_Z0_2_SH 5 +#define __APP_PLL_400_Z0_2(_v) ((_v) << __APP_PLL_400_Z0_2_SH) +#define __APP_PLL_400_RSEL200500 0x00000010 +#define __APP_PLL_400_ENARST 0x00000008 +#define __APP_PLL_400_BYPASS 0x00000004 +#define __APP_PLL_400_LRESETN 0x00000002 +#define __APP_PLL_400_ENABLE 0x00000001 +#define APP_PLL_212_CTL_REG 0x00014208 +#define __P_212_PLL_LOCK 0x80000000 +#define __APP_PLL_212_RESET_TIMER_MK 0x000e0000 +#define __APP_PLL_212_RESET_TIMER_SH 17 +#define __APP_PLL_212_RESET_TIMER(_v) ((_v) << __APP_PLL_212_RESET_TIMER_SH) +#define __APP_PLL_212_LOGIC_SOFT_RESET 0x00010000 +#define __APP_PLL_212_CNTLMT0_1_MK 0x0000c000 +#define __APP_PLL_212_CNTLMT0_1_SH 14 +#define __APP_PLL_212_CNTLMT0_1(_v) ((_v) << __APP_PLL_212_CNTLMT0_1_SH) +#define __APP_PLL_212_JITLMT0_1_MK 0x00003000 +#define __APP_PLL_212_JITLMT0_1_SH 12 +#define __APP_PLL_212_JITLMT0_1(_v) ((_v) << __APP_PLL_212_JITLMT0_1_SH) +#define __APP_PLL_212_HREF 0x00000800 +#define __APP_PLL_212_HDIV 0x00000400 +#define __APP_PLL_212_P0_1_MK 0x00000300 +#define __APP_PLL_212_P0_1_SH 8 +#define __APP_PLL_212_P0_1(_v) ((_v) << __APP_PLL_212_P0_1_SH) +#define __APP_PLL_212_Z0_2_MK 0x000000e0 +#define __APP_PLL_212_Z0_2_SH 5 +#define __APP_PLL_212_Z0_2(_v) ((_v) << __APP_PLL_212_Z0_2_SH) +#define __APP_PLL_212_RSEL200500 0x00000010 +#define __APP_PLL_212_ENARST 0x00000008 +#define __APP_PLL_212_BYPASS 0x00000004 +#define __APP_PLL_212_LRESETN 0x00000002 +#define __APP_PLL_212_ENABLE 0x00000001 +#define HOST_SEM0_REG 0x00014230 +#define __HOST_SEMAPHORE 0x00000001 +#define HOST_SEM1_REG 0x00014234 +#define HOST_SEM2_REG 0x00014238 +#define HOST_SEM3_REG 0x0001423c +#define HOST_SEM0_INFO_REG 0x00014240 +#define HOST_SEM1_INFO_REG 0x00014244 +#define HOST_SEM2_INFO_REG 0x00014248 +#define HOST_SEM3_INFO_REG 0x0001424c +#define HOSTFN0_LPU0_CMD_STAT 0x00019000 +#define __HOSTFN0_LPU0_MBOX_INFO_MK 
0xfffffffe +#define __HOSTFN0_LPU0_MBOX_INFO_SH 1 +#define __HOSTFN0_LPU0_MBOX_INFO(_v) ((_v) << __HOSTFN0_LPU0_MBOX_INFO_SH) +#define __HOSTFN0_LPU0_MBOX_CMD_STATUS 0x00000001 +#define LPU0_HOSTFN0_CMD_STAT 0x00019008 +#define __LPU0_HOSTFN0_MBOX_INFO_MK 0xfffffffe +#define __LPU0_HOSTFN0_MBOX_INFO_SH 1 +#define __LPU0_HOSTFN0_MBOX_INFO(_v) ((_v) << __LPU0_HOSTFN0_MBOX_INFO_SH) +#define __LPU0_HOSTFN0_MBOX_CMD_STATUS 0x00000001 +#define HOSTFN1_LPU1_CMD_STAT 0x00019014 +#define __HOSTFN1_LPU1_MBOX_INFO_MK 0xfffffffe +#define __HOSTFN1_LPU1_MBOX_INFO_SH 1 +#define __HOSTFN1_LPU1_MBOX_INFO(_v) ((_v) << __HOSTFN1_LPU1_MBOX_INFO_SH) +#define __HOSTFN1_LPU1_MBOX_CMD_STATUS 0x00000001 +#define LPU1_HOSTFN1_CMD_STAT 0x0001901c +#define __LPU1_HOSTFN1_MBOX_INFO_MK 0xfffffffe +#define __LPU1_HOSTFN1_MBOX_INFO_SH 1 +#define __LPU1_HOSTFN1_MBOX_INFO(_v) ((_v) << __LPU1_HOSTFN1_MBOX_INFO_SH) +#define __LPU1_HOSTFN1_MBOX_CMD_STATUS 0x00000001 +#define CPE_Q0_DEPTH 0x00010014 +#define CPE_Q0_PI 0x0001001c +#define CPE_Q0_CI 0x00010020 +#define CPE_Q1_DEPTH 0x00010034 +#define CPE_Q1_PI 0x0001003c +#define CPE_Q1_CI 0x00010040 +#define CPE_Q2_DEPTH 0x00010054 +#define CPE_Q2_PI 0x0001005c +#define CPE_Q2_CI 0x00010060 +#define CPE_Q3_DEPTH 0x00010074 +#define CPE_Q3_PI 0x0001007c +#define CPE_Q3_CI 0x00010080 +#define CPE_Q4_DEPTH 0x00010094 +#define CPE_Q4_PI 0x0001009c +#define CPE_Q4_CI 0x000100a0 +#define CPE_Q5_DEPTH 0x000100b4 +#define CPE_Q5_PI 0x000100bc +#define CPE_Q5_CI 0x000100c0 +#define CPE_Q6_DEPTH 0x000100d4 +#define CPE_Q6_PI 0x000100dc +#define CPE_Q6_CI 0x000100e0 +#define CPE_Q7_DEPTH 0x000100f4 +#define CPE_Q7_PI 0x000100fc +#define CPE_Q7_CI 0x00010100 +#define RME_Q0_DEPTH 0x00011014 +#define RME_Q0_PI 0x0001101c +#define RME_Q0_CI 0x00011020 +#define RME_Q1_DEPTH 0x00011034 +#define RME_Q1_PI 0x0001103c +#define RME_Q1_CI 0x00011040 +#define RME_Q2_DEPTH 0x00011054 +#define RME_Q2_PI 0x0001105c +#define RME_Q2_CI 0x00011060 +#define RME_Q3_DEPTH 0x00011074 +#define RME_Q3_PI 0x0001107c +#define RME_Q3_CI 0x00011080 +#define RME_Q4_DEPTH 0x00011094 +#define RME_Q4_PI 0x0001109c +#define RME_Q4_CI 0x000110a0 +#define RME_Q5_DEPTH 0x000110b4 +#define RME_Q5_PI 0x000110bc +#define RME_Q5_CI 0x000110c0 +#define RME_Q6_DEPTH 0x000110d4 +#define RME_Q6_PI 0x000110dc +#define RME_Q6_CI 0x000110e0 +#define RME_Q7_DEPTH 0x000110f4 +#define RME_Q7_PI 0x000110fc +#define RME_Q7_CI 0x00011100 +#define PSS_CTL_REG 0x00018800 +#define __PSS_I2C_CLK_DIV_MK 0x00030000 +#define __PSS_I2C_CLK_DIV_SH 16 +#define __PSS_I2C_CLK_DIV(_v) ((_v) << __PSS_I2C_CLK_DIV_SH) +#define __PSS_LMEM_INIT_DONE 0x00001000 +#define __PSS_LMEM_RESET 0x00000200 +#define __PSS_LMEM_INIT_EN 0x00000100 +#define __PSS_LPU1_RESET 0x00000002 +#define __PSS_LPU0_RESET 0x00000001 + + +/* + * These definitions are either in error/missing in spec. Its auto-generated + * from hard coded values in regparse.pl. + */ +#define __EMPHPOST_AT_4G_MK_FIX 0x0000001c +#define __EMPHPOST_AT_4G_SH_FIX 0x00000002 +#define __EMPHPRE_AT_4G_FIX 0x00000003 +#define __SFP_TXRATE_EN_FIX 0x00000100 +#define __SFP_RXRATE_EN_FIX 0x00000080 + + +/* + * These register definitions are auto-generated from hard coded values + * in regparse.pl. + */ +#define HOSTFN0_LPU_MBOX0_0 0x00019200 +#define HOSTFN1_LPU_MBOX0_8 0x00019260 +#define LPU_HOSTFN0_MBOX0_0 0x00019280 +#define LPU_HOSTFN1_MBOX0_8 0x000192e0 + + +/* + * These register mapping definitions are auto-generated from mapping tables + * in regparse.pl. 
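+ *
+ * The CPE_Q_*(__n) and RME_Q_*(__n) helpers below index a per-queue
+ * register by a fixed stride (the queue 1 address minus the queue 0
+ * address). As a worked example from the values above,
+ * CPE_Q_PI(3) = CPE_Q0_PI + 3 * (CPE_Q1_PI - CPE_Q0_PI)
+ *             = 0x0001001c + 3 * 0x20 = 0x0001007c,
+ * which matches the CPE_Q3_PI definition.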
+ */ +#define BFA_IOC0_HBEAT_REG HOST_SEM0_INFO_REG +#define BFA_IOC0_STATE_REG HOST_SEM1_INFO_REG +#define BFA_IOC1_HBEAT_REG HOST_SEM2_INFO_REG +#define BFA_IOC1_STATE_REG HOST_SEM3_INFO_REG +#define BFA_FW_USE_COUNT HOST_SEM4_INFO_REG + +#define CPE_Q_DEPTH(__n) \ + (CPE_Q0_DEPTH + (__n) * (CPE_Q1_DEPTH - CPE_Q0_DEPTH)) +#define CPE_Q_PI(__n) \ + (CPE_Q0_PI + (__n) * (CPE_Q1_PI - CPE_Q0_PI)) +#define CPE_Q_CI(__n) \ + (CPE_Q0_CI + (__n) * (CPE_Q1_CI - CPE_Q0_CI)) +#define RME_Q_DEPTH(__n) \ + (RME_Q0_DEPTH + (__n) * (RME_Q1_DEPTH - RME_Q0_DEPTH)) +#define RME_Q_PI(__n) \ + (RME_Q0_PI + (__n) * (RME_Q1_PI - RME_Q0_PI)) +#define RME_Q_CI(__n) \ + (RME_Q0_CI + (__n) * (RME_Q1_CI - RME_Q0_CI)) + +#define CPE_Q_NUM(__fn, __q) (((__fn) << 2) + (__q)) +#define RME_Q_NUM(__fn, __q) (((__fn) << 2) + (__q)) +#define CPE_Q_MASK(__q) ((__q) & 0x3) +#define RME_Q_MASK(__q) ((__q) & 0x3) + + +/* + * PCI MSI-X vector defines + */ +enum { + BFA_MSIX_CPE_Q0 = 0, + BFA_MSIX_CPE_Q1 = 1, + BFA_MSIX_CPE_Q2 = 2, + BFA_MSIX_CPE_Q3 = 3, + BFA_MSIX_CPE_Q4 = 4, + BFA_MSIX_CPE_Q5 = 5, + BFA_MSIX_CPE_Q6 = 6, + BFA_MSIX_CPE_Q7 = 7, + BFA_MSIX_RME_Q0 = 8, + BFA_MSIX_RME_Q1 = 9, + BFA_MSIX_RME_Q2 = 10, + BFA_MSIX_RME_Q3 = 11, + BFA_MSIX_RME_Q4 = 12, + BFA_MSIX_RME_Q5 = 13, + BFA_MSIX_RME_Q6 = 14, + BFA_MSIX_RME_Q7 = 15, + BFA_MSIX_ERR_EMC = 16, + BFA_MSIX_ERR_LPU0 = 17, + BFA_MSIX_ERR_LPU1 = 18, + BFA_MSIX_ERR_PSS = 19, + BFA_MSIX_MBOX_LPU0 = 20, + BFA_MSIX_MBOX_LPU1 = 21, + BFA_MSIX_CB_MAX = 22, +}; + +/* + * And corresponding host interrupt status bit field defines + */ +#define __HFN_INT_CPE_Q0 0x00000001U +#define __HFN_INT_CPE_Q1 0x00000002U +#define __HFN_INT_CPE_Q2 0x00000004U +#define __HFN_INT_CPE_Q3 0x00000008U +#define __HFN_INT_CPE_Q4 0x00000010U +#define __HFN_INT_CPE_Q5 0x00000020U +#define __HFN_INT_CPE_Q6 0x00000040U +#define __HFN_INT_CPE_Q7 0x00000080U +#define __HFN_INT_RME_Q0 0x00000100U +#define __HFN_INT_RME_Q1 0x00000200U +#define __HFN_INT_RME_Q2 0x00000400U +#define __HFN_INT_RME_Q3 0x00000800U +#define __HFN_INT_RME_Q4 0x00001000U +#define __HFN_INT_RME_Q5 0x00002000U +#define __HFN_INT_RME_Q6 0x00004000U +#define __HFN_INT_RME_Q7 0x00008000U +#define __HFN_INT_ERR_EMC 0x00010000U +#define __HFN_INT_ERR_LPU0 0x00020000U +#define __HFN_INT_ERR_LPU1 0x00040000U +#define __HFN_INT_ERR_PSS 0x00080000U +#define __HFN_INT_MBOX_LPU0 0x00100000U +#define __HFN_INT_MBOX_LPU1 0x00200000U +#define __HFN_INT_MBOX1_LPU0 0x00400000U +#define __HFN_INT_MBOX1_LPU1 0x00800000U +#define __HFN_INT_CPE_MASK 0x000000ffU +#define __HFN_INT_RME_MASK 0x0000ff00U + + +/* + * crossbow memory map. + */ +#define PSS_SMEM_PAGE_START 0x8000 +#define PSS_SMEM_PGNUM(_pg0, _ma) ((_pg0) + ((_ma) >> 15)) +#define PSS_SMEM_PGOFF(_ma) ((_ma) & 0x7fff) + +/* + * End of crossbow memory map + */ + + +#endif /* __BFI_CBREG_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_cee.h b/drivers/scsi/bfa/include/bfi/bfi_cee.h new file mode 100644 index 000000000000..0970596583ea --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_cee.h @@ -0,0 +1,119 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +/** + * Copyright (c) 2006-2009 Brocade Communications Systems, Inc. + * All rights reserved. + * + * bfi_dcbx.h BFI Interface (Mailbox commands and related structures) + * between host driver and DCBX/LLDP firmware module. + * +**/ + +#ifndef __BFI_CEE_H__ +#define __BFI_CEE_H__ + +#include + +#pragma pack(1) + + +enum bfi_cee_h2i_msgs_e { + BFI_CEE_H2I_GET_CFG_REQ = 1, + BFI_CEE_H2I_RESET_STATS = 2, + BFI_CEE_H2I_GET_STATS_REQ = 3, +}; + + +enum bfi_cee_i2h_msgs_e { + BFI_CEE_I2H_GET_CFG_RSP = BFA_I2HM(1), + BFI_CEE_I2H_RESET_STATS_RSP = BFA_I2HM(2), + BFI_CEE_I2H_GET_STATS_RSP = BFA_I2HM(3), +}; + + +/* Data structures */ + +/* + * BFI_CEE_H2I_RESET_STATS + */ +struct bfi_lldp_reset_stats_s { + struct bfi_mhdr_s mh; +}; + +/* + * BFI_CEE_H2I_RESET_STATS + */ +struct bfi_cee_reset_stats_s { + struct bfi_mhdr_s mh; +}; + +/* + * BFI_CEE_H2I_GET_CFG_REQ + */ +struct bfi_cee_get_req_s { + struct bfi_mhdr_s mh; + union bfi_addr_u dma_addr; +}; + + +/* + * BFI_CEE_I2H_GET_CFG_RSP + */ +struct bfi_cee_get_rsp_s { + struct bfi_mhdr_s mh; + u8 cmd_status; + u8 rsvd[3]; +}; + +/* + * BFI_CEE_H2I_GET_STATS_REQ + */ +struct bfi_cee_stats_req_s { + struct bfi_mhdr_s mh; + union bfi_addr_u dma_addr; +}; + + +/* + * BFI_CEE_I2H_GET_STATS_RSP + */ +struct bfi_cee_stats_rsp_s { + struct bfi_mhdr_s mh; + u8 cmd_status; + u8 rsvd[3]; +}; + + + +union bfi_cee_h2i_msg_u { + struct bfi_mhdr_s mh; + struct bfi_cee_get_req_s get_req; + struct bfi_cee_stats_req_s stats_req; +}; + + +union bfi_cee_i2h_msg_u { + struct bfi_mhdr_s mh; + struct bfi_cee_get_rsp_s get_rsp; + struct bfi_cee_stats_rsp_s stats_rsp; +}; + +#pragma pack() + + +#endif /* __BFI_CEE_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_ctreg.h b/drivers/scsi/bfa/include/bfi/bfi_ctreg.h new file mode 100644 index 000000000000..d3caa58c0a0a --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_ctreg.h @@ -0,0 +1,611 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* + * bfi_ctreg.h catapult host block register definitions + * + * !!! Do not edit. Auto generated. !!! 
+ */ + +#ifndef __BFI_CTREG_H__ +#define __BFI_CTREG_H__ + + +#define HOSTFN0_LPU_MBOX0_0 0x00019200 +#define HOSTFN1_LPU_MBOX0_8 0x00019260 +#define LPU_HOSTFN0_MBOX0_0 0x00019280 +#define LPU_HOSTFN1_MBOX0_8 0x000192e0 +#define HOSTFN2_LPU_MBOX0_0 0x00019400 +#define HOSTFN3_LPU_MBOX0_8 0x00019460 +#define LPU_HOSTFN2_MBOX0_0 0x00019480 +#define LPU_HOSTFN3_MBOX0_8 0x000194e0 +#define HOSTFN0_INT_STATUS 0x00014000 +#define __HOSTFN0_HALT_OCCURRED 0x01000000 +#define __HOSTFN0_INT_STATUS_LVL_MK 0x00f00000 +#define __HOSTFN0_INT_STATUS_LVL_SH 20 +#define __HOSTFN0_INT_STATUS_LVL(_v) ((_v) << __HOSTFN0_INT_STATUS_LVL_SH) +#define __HOSTFN0_INT_STATUS_P_MK 0x000f0000 +#define __HOSTFN0_INT_STATUS_P_SH 16 +#define __HOSTFN0_INT_STATUS_P(_v) ((_v) << __HOSTFN0_INT_STATUS_P_SH) +#define __HOSTFN0_INT_STATUS_F 0x0000ffff +#define HOSTFN0_INT_MSK 0x00014004 +#define HOST_PAGE_NUM_FN0 0x00014008 +#define __HOST_PAGE_NUM_FN 0x000001ff +#define HOST_MSIX_ERR_INDEX_FN0 0x0001400c +#define __MSIX_ERR_INDEX_FN 0x000001ff +#define HOSTFN1_INT_STATUS 0x00014100 +#define __HOSTFN1_HALT_OCCURRED 0x01000000 +#define __HOSTFN1_INT_STATUS_LVL_MK 0x00f00000 +#define __HOSTFN1_INT_STATUS_LVL_SH 20 +#define __HOSTFN1_INT_STATUS_LVL(_v) ((_v) << __HOSTFN1_INT_STATUS_LVL_SH) +#define __HOSTFN1_INT_STATUS_P_MK 0x000f0000 +#define __HOSTFN1_INT_STATUS_P_SH 16 +#define __HOSTFN1_INT_STATUS_P(_v) ((_v) << __HOSTFN1_INT_STATUS_P_SH) +#define __HOSTFN1_INT_STATUS_F 0x0000ffff +#define HOSTFN1_INT_MSK 0x00014104 +#define HOST_PAGE_NUM_FN1 0x00014108 +#define HOST_MSIX_ERR_INDEX_FN1 0x0001410c +#define APP_PLL_425_CTL_REG 0x00014204 +#define __P_425_PLL_LOCK 0x80000000 +#define __APP_PLL_425_SRAM_USE_100MHZ 0x00100000 +#define __APP_PLL_425_RESET_TIMER_MK 0x000e0000 +#define __APP_PLL_425_RESET_TIMER_SH 17 +#define __APP_PLL_425_RESET_TIMER(_v) ((_v) << __APP_PLL_425_RESET_TIMER_SH) +#define __APP_PLL_425_LOGIC_SOFT_RESET 0x00010000 +#define __APP_PLL_425_CNTLMT0_1_MK 0x0000c000 +#define __APP_PLL_425_CNTLMT0_1_SH 14 +#define __APP_PLL_425_CNTLMT0_1(_v) ((_v) << __APP_PLL_425_CNTLMT0_1_SH) +#define __APP_PLL_425_JITLMT0_1_MK 0x00003000 +#define __APP_PLL_425_JITLMT0_1_SH 12 +#define __APP_PLL_425_JITLMT0_1(_v) ((_v) << __APP_PLL_425_JITLMT0_1_SH) +#define __APP_PLL_425_HREF 0x00000800 +#define __APP_PLL_425_HDIV 0x00000400 +#define __APP_PLL_425_P0_1_MK 0x00000300 +#define __APP_PLL_425_P0_1_SH 8 +#define __APP_PLL_425_P0_1(_v) ((_v) << __APP_PLL_425_P0_1_SH) +#define __APP_PLL_425_Z0_2_MK 0x000000e0 +#define __APP_PLL_425_Z0_2_SH 5 +#define __APP_PLL_425_Z0_2(_v) ((_v) << __APP_PLL_425_Z0_2_SH) +#define __APP_PLL_425_RSEL200500 0x00000010 +#define __APP_PLL_425_ENARST 0x00000008 +#define __APP_PLL_425_BYPASS 0x00000004 +#define __APP_PLL_425_LRESETN 0x00000002 +#define __APP_PLL_425_ENABLE 0x00000001 +#define APP_PLL_312_CTL_REG 0x00014208 +#define __P_312_PLL_LOCK 0x80000000 +#define __ENABLE_MAC_AHB_1 0x00800000 +#define __ENABLE_MAC_AHB_0 0x00400000 +#define __ENABLE_MAC_1 0x00200000 +#define __ENABLE_MAC_0 0x00100000 +#define __APP_PLL_312_RESET_TIMER_MK 0x000e0000 +#define __APP_PLL_312_RESET_TIMER_SH 17 +#define __APP_PLL_312_RESET_TIMER(_v) ((_v) << __APP_PLL_312_RESET_TIMER_SH) +#define __APP_PLL_312_LOGIC_SOFT_RESET 0x00010000 +#define __APP_PLL_312_CNTLMT0_1_MK 0x0000c000 +#define __APP_PLL_312_CNTLMT0_1_SH 14 +#define __APP_PLL_312_CNTLMT0_1(_v) ((_v) << __APP_PLL_312_CNTLMT0_1_SH) +#define __APP_PLL_312_JITLMT0_1_MK 0x00003000 +#define __APP_PLL_312_JITLMT0_1_SH 12 +#define __APP_PLL_312_JITLMT0_1(_v) ((_v) 
<< __APP_PLL_312_JITLMT0_1_SH) +#define __APP_PLL_312_HREF 0x00000800 +#define __APP_PLL_312_HDIV 0x00000400 +#define __APP_PLL_312_P0_1_MK 0x00000300 +#define __APP_PLL_312_P0_1_SH 8 +#define __APP_PLL_312_P0_1(_v) ((_v) << __APP_PLL_312_P0_1_SH) +#define __APP_PLL_312_Z0_2_MK 0x000000e0 +#define __APP_PLL_312_Z0_2_SH 5 +#define __APP_PLL_312_Z0_2(_v) ((_v) << __APP_PLL_312_Z0_2_SH) +#define __APP_PLL_312_RSEL200500 0x00000010 +#define __APP_PLL_312_ENARST 0x00000008 +#define __APP_PLL_312_BYPASS 0x00000004 +#define __APP_PLL_312_LRESETN 0x00000002 +#define __APP_PLL_312_ENABLE 0x00000001 +#define MBIST_CTL_REG 0x00014220 +#define __EDRAM_BISTR_START 0x00000004 +#define __MBIST_RESET 0x00000002 +#define __MBIST_START 0x00000001 +#define MBIST_STAT_REG 0x00014224 +#define __EDRAM_BISTR_STATUS 0x00000008 +#define __EDRAM_BISTR_DONE 0x00000004 +#define __MEM_BIT_STATUS 0x00000002 +#define __MBIST_DONE 0x00000001 +#define HOST_SEM0_REG 0x00014230 +#define __HOST_SEMAPHORE 0x00000001 +#define HOST_SEM1_REG 0x00014234 +#define HOST_SEM2_REG 0x00014238 +#define HOST_SEM3_REG 0x0001423c +#define HOST_SEM0_INFO_REG 0x00014240 +#define HOST_SEM1_INFO_REG 0x00014244 +#define HOST_SEM2_INFO_REG 0x00014248 +#define HOST_SEM3_INFO_REG 0x0001424c +#define ETH_MAC_SER_REG 0x00014288 +#define __APP_EMS_CKBUFAMPIN 0x00000020 +#define __APP_EMS_REFCLKSEL 0x00000010 +#define __APP_EMS_CMLCKSEL 0x00000008 +#define __APP_EMS_REFCKBUFEN2 0x00000004 +#define __APP_EMS_REFCKBUFEN1 0x00000002 +#define __APP_EMS_CHANNEL_SEL 0x00000001 +#define HOSTFN2_INT_STATUS 0x00014300 +#define __HOSTFN2_HALT_OCCURRED 0x01000000 +#define __HOSTFN2_INT_STATUS_LVL_MK 0x00f00000 +#define __HOSTFN2_INT_STATUS_LVL_SH 20 +#define __HOSTFN2_INT_STATUS_LVL(_v) ((_v) << __HOSTFN2_INT_STATUS_LVL_SH) +#define __HOSTFN2_INT_STATUS_P_MK 0x000f0000 +#define __HOSTFN2_INT_STATUS_P_SH 16 +#define __HOSTFN2_INT_STATUS_P(_v) ((_v) << __HOSTFN2_INT_STATUS_P_SH) +#define __HOSTFN2_INT_STATUS_F 0x0000ffff +#define HOSTFN2_INT_MSK 0x00014304 +#define HOST_PAGE_NUM_FN2 0x00014308 +#define HOST_MSIX_ERR_INDEX_FN2 0x0001430c +#define HOSTFN3_INT_STATUS 0x00014400 +#define __HALT_OCCURRED 0x01000000 +#define __HOSTFN3_INT_STATUS_LVL_MK 0x00f00000 +#define __HOSTFN3_INT_STATUS_LVL_SH 20 +#define __HOSTFN3_INT_STATUS_LVL(_v) ((_v) << __HOSTFN3_INT_STATUS_LVL_SH) +#define __HOSTFN3_INT_STATUS_P_MK 0x000f0000 +#define __HOSTFN3_INT_STATUS_P_SH 16 +#define __HOSTFN3_INT_STATUS_P(_v) ((_v) << __HOSTFN3_INT_STATUS_P_SH) +#define __HOSTFN3_INT_STATUS_F 0x0000ffff +#define HOSTFN3_INT_MSK 0x00014404 +#define HOST_PAGE_NUM_FN3 0x00014408 +#define HOST_MSIX_ERR_INDEX_FN3 0x0001440c +#define FNC_ID_REG 0x00014600 +#define __FUNCTION_NUMBER 0x00000007 +#define FNC_PERS_REG 0x00014604 +#define __F3_FUNCTION_ACTIVE 0x80000000 +#define __F3_FUNCTION_MODE 0x40000000 +#define __F3_PORT_MAP_MK 0x30000000 +#define __F3_PORT_MAP_SH 28 +#define __F3_PORT_MAP(_v) ((_v) << __F3_PORT_MAP_SH) +#define __F3_VM_MODE 0x08000000 +#define __F3_INTX_STATUS_MK 0x07000000 +#define __F3_INTX_STATUS_SH 24 +#define __F3_INTX_STATUS(_v) ((_v) << __F3_INTX_STATUS_SH) +#define __F2_FUNCTION_ACTIVE 0x00800000 +#define __F2_FUNCTION_MODE 0x00400000 +#define __F2_PORT_MAP_MK 0x00300000 +#define __F2_PORT_MAP_SH 20 +#define __F2_PORT_MAP(_v) ((_v) << __F2_PORT_MAP_SH) +#define __F2_VM_MODE 0x00080000 +#define __F2_INTX_STATUS_MK 0x00070000 +#define __F2_INTX_STATUS_SH 16 +#define __F2_INTX_STATUS(_v) ((_v) << __F2_INTX_STATUS_SH) +#define __F1_FUNCTION_ACTIVE 0x00008000 +#define 
__F1_FUNCTION_MODE 0x00004000 +#define __F1_PORT_MAP_MK 0x00003000 +#define __F1_PORT_MAP_SH 12 +#define __F1_PORT_MAP(_v) ((_v) << __F1_PORT_MAP_SH) +#define __F1_VM_MODE 0x00000800 +#define __F1_INTX_STATUS_MK 0x00000700 +#define __F1_INTX_STATUS_SH 8 +#define __F1_INTX_STATUS(_v) ((_v) << __F1_INTX_STATUS_SH) +#define __F0_FUNCTION_ACTIVE 0x00000080 +#define __F0_FUNCTION_MODE 0x00000040 +#define __F0_PORT_MAP_MK 0x00000030 +#define __F0_PORT_MAP_SH 4 +#define __F0_PORT_MAP(_v) ((_v) << __F0_PORT_MAP_SH) +#define __F0_VM_MODE 0x00000008 +#define __F0_INTX_STATUS 0x00000007 +enum { + __F0_INTX_STATUS_MSIX = 0x0, + __F0_INTX_STATUS_INTA = 0x1, + __F0_INTX_STATUS_INTB = 0x2, + __F0_INTX_STATUS_INTC = 0x3, + __F0_INTX_STATUS_INTD = 0x4, +}; +#define OP_MODE 0x0001460c +#define __APP_ETH_CLK_LOWSPEED 0x00000004 +#define __GLOBAL_CORECLK_HALFSPEED 0x00000002 +#define __GLOBAL_FCOE_MODE 0x00000001 +#define HOST_SEM4_REG 0x00014610 +#define HOST_SEM5_REG 0x00014614 +#define HOST_SEM6_REG 0x00014618 +#define HOST_SEM7_REG 0x0001461c +#define HOST_SEM4_INFO_REG 0x00014620 +#define HOST_SEM5_INFO_REG 0x00014624 +#define HOST_SEM6_INFO_REG 0x00014628 +#define HOST_SEM7_INFO_REG 0x0001462c +#define HOSTFN0_LPU0_MBOX0_CMD_STAT 0x00019000 +#define __HOSTFN0_LPU0_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN0_LPU0_MBOX0_INFO_SH 1 +#define __HOSTFN0_LPU0_MBOX0_INFO(_v) ((_v) << __HOSTFN0_LPU0_MBOX0_INFO_SH) +#define __HOSTFN0_LPU0_MBOX0_CMD_STATUS 0x00000001 +#define HOSTFN0_LPU1_MBOX0_CMD_STAT 0x00019004 +#define __HOSTFN0_LPU1_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN0_LPU1_MBOX0_INFO_SH 1 +#define __HOSTFN0_LPU1_MBOX0_INFO(_v) ((_v) << __HOSTFN0_LPU1_MBOX0_INFO_SH) +#define __HOSTFN0_LPU1_MBOX0_CMD_STATUS 0x00000001 +#define LPU0_HOSTFN0_MBOX0_CMD_STAT 0x00019008 +#define __LPU0_HOSTFN0_MBOX0_INFO_MK 0xfffffffe +#define __LPU0_HOSTFN0_MBOX0_INFO_SH 1 +#define __LPU0_HOSTFN0_MBOX0_INFO(_v) ((_v) << __LPU0_HOSTFN0_MBOX0_INFO_SH) +#define __LPU0_HOSTFN0_MBOX0_CMD_STATUS 0x00000001 +#define LPU1_HOSTFN0_MBOX0_CMD_STAT 0x0001900c +#define __LPU1_HOSTFN0_MBOX0_INFO_MK 0xfffffffe +#define __LPU1_HOSTFN0_MBOX0_INFO_SH 1 +#define __LPU1_HOSTFN0_MBOX0_INFO(_v) ((_v) << __LPU1_HOSTFN0_MBOX0_INFO_SH) +#define __LPU1_HOSTFN0_MBOX0_CMD_STATUS 0x00000001 +#define HOSTFN1_LPU0_MBOX0_CMD_STAT 0x00019010 +#define __HOSTFN1_LPU0_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN1_LPU0_MBOX0_INFO_SH 1 +#define __HOSTFN1_LPU0_MBOX0_INFO(_v) ((_v) << __HOSTFN1_LPU0_MBOX0_INFO_SH) +#define __HOSTFN1_LPU0_MBOX0_CMD_STATUS 0x00000001 +#define HOSTFN1_LPU1_MBOX0_CMD_STAT 0x00019014 +#define __HOSTFN1_LPU1_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN1_LPU1_MBOX0_INFO_SH 1 +#define __HOSTFN1_LPU1_MBOX0_INFO(_v) ((_v) << __HOSTFN1_LPU1_MBOX0_INFO_SH) +#define __HOSTFN1_LPU1_MBOX0_CMD_STATUS 0x00000001 +#define LPU0_HOSTFN1_MBOX0_CMD_STAT 0x00019018 +#define __LPU0_HOSTFN1_MBOX0_INFO_MK 0xfffffffe +#define __LPU0_HOSTFN1_MBOX0_INFO_SH 1 +#define __LPU0_HOSTFN1_MBOX0_INFO(_v) ((_v) << __LPU0_HOSTFN1_MBOX0_INFO_SH) +#define __LPU0_HOSTFN1_MBOX0_CMD_STATUS 0x00000001 +#define LPU1_HOSTFN1_MBOX0_CMD_STAT 0x0001901c +#define __LPU1_HOSTFN1_MBOX0_INFO_MK 0xfffffffe +#define __LPU1_HOSTFN1_MBOX0_INFO_SH 1 +#define __LPU1_HOSTFN1_MBOX0_INFO(_v) ((_v) << __LPU1_HOSTFN1_MBOX0_INFO_SH) +#define __LPU1_HOSTFN1_MBOX0_CMD_STATUS 0x00000001 +#define HOSTFN2_LPU0_MBOX0_CMD_STAT 0x00019150 +#define __HOSTFN2_LPU0_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN2_LPU0_MBOX0_INFO_SH 1 +#define __HOSTFN2_LPU0_MBOX0_INFO(_v) ((_v) << __HOSTFN2_LPU0_MBOX0_INFO_SH) 
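+/*
+ * The _MK/_SH/(_v) triplets in this file follow one auto-generated
+ * pattern: _MK is the field mask, _SH its bit shift, and the
+ * function-like macro positions a value within the field. A
+ * hypothetical read-modify-write of the INFO field (an illustrative
+ * sketch, not code taken from this driver) would be:
+ *
+ *	u32 stat = readl(rb + HOSTFN2_LPU0_MBOX0_CMD_STAT);
+ *	stat = (stat & ~__HOSTFN2_LPU0_MBOX0_INFO_MK) |
+ *		__HOSTFN2_LPU0_MBOX0_INFO(info);
+ *	writel(stat, rb + HOSTFN2_LPU0_MBOX0_CMD_STAT);
+ *
+ * where rb is the ioremapped register base.
+ */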
+#define __HOSTFN2_LPU0_MBOX0_CMD_STATUS 0x00000001 +#define HOSTFN2_LPU1_MBOX0_CMD_STAT 0x00019154 +#define __HOSTFN2_LPU1_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN2_LPU1_MBOX0_INFO_SH 1 +#define __HOSTFN2_LPU1_MBOX0_INFO(_v) ((_v) << __HOSTFN2_LPU1_MBOX0_INFO_SH) +#define __HOSTFN2_LPU1_MBOX0BOX0_CMD_STATUS 0x00000001 +#define LPU0_HOSTFN2_MBOX0_CMD_STAT 0x00019158 +#define __LPU0_HOSTFN2_MBOX0_INFO_MK 0xfffffffe +#define __LPU0_HOSTFN2_MBOX0_INFO_SH 1 +#define __LPU0_HOSTFN2_MBOX0_INFO(_v) ((_v) << __LPU0_HOSTFN2_MBOX0_INFO_SH) +#define __LPU0_HOSTFN2_MBOX0_CMD_STATUS 0x00000001 +#define LPU1_HOSTFN2_MBOX0_CMD_STAT 0x0001915c +#define __LPU1_HOSTFN2_MBOX0_INFO_MK 0xfffffffe +#define __LPU1_HOSTFN2_MBOX0_INFO_SH 1 +#define __LPU1_HOSTFN2_MBOX0_INFO(_v) ((_v) << __LPU1_HOSTFN2_MBOX0_INFO_SH) +#define __LPU1_HOSTFN2_MBOX0_CMD_STATUS 0x00000001 +#define HOSTFN3_LPU0_MBOX0_CMD_STAT 0x00019160 +#define __HOSTFN3_LPU0_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN3_LPU0_MBOX0_INFO_SH 1 +#define __HOSTFN3_LPU0_MBOX0_INFO(_v) ((_v) << __HOSTFN3_LPU0_MBOX0_INFO_SH) +#define __HOSTFN3_LPU0_MBOX0_CMD_STATUS 0x00000001 +#define HOSTFN3_LPU1_MBOX0_CMD_STAT 0x00019164 +#define __HOSTFN3_LPU1_MBOX0_INFO_MK 0xfffffffe +#define __HOSTFN3_LPU1_MBOX0_INFO_SH 1 +#define __HOSTFN3_LPU1_MBOX0_INFO(_v) ((_v) << __HOSTFN3_LPU1_MBOX0_INFO_SH) +#define __HOSTFN3_LPU1_MBOX0_CMD_STATUS 0x00000001 +#define LPU0_HOSTFN3_MBOX0_CMD_STAT 0x00019168 +#define __LPU0_HOSTFN3_MBOX0_INFO_MK 0xfffffffe +#define __LPU0_HOSTFN3_MBOX0_INFO_SH 1 +#define __LPU0_HOSTFN3_MBOX0_INFO(_v) ((_v) << __LPU0_HOSTFN3_MBOX0_INFO_SH) +#define __LPU0_HOSTFN3_MBOX0_CMD_STATUS 0x00000001 +#define LPU1_HOSTFN3_MBOX0_CMD_STAT 0x0001916c +#define __LPU1_HOSTFN3_MBOX0_INFO_MK 0xfffffffe +#define __LPU1_HOSTFN3_MBOX0_INFO_SH 1 +#define __LPU1_HOSTFN3_MBOX0_INFO(_v) ((_v) << __LPU1_HOSTFN3_MBOX0_INFO_SH) +#define __LPU1_HOSTFN3_MBOX0_CMD_STATUS 0x00000001 +#define FW_INIT_HALT_P0 0x000191ac +#define __FW_INIT_HALT_P 0x00000001 +#define FW_INIT_HALT_P1 0x000191bc +#define CPE_PI_PTR_Q0 0x00038000 +#define __CPE_PI_UNUSED_MK 0xffff0000 +#define __CPE_PI_UNUSED_SH 16 +#define __CPE_PI_UNUSED(_v) ((_v) << __CPE_PI_UNUSED_SH) +#define __CPE_PI_PTR 0x0000ffff +#define CPE_PI_PTR_Q1 0x00038040 +#define CPE_CI_PTR_Q0 0x00038004 +#define __CPE_CI_UNUSED_MK 0xffff0000 +#define __CPE_CI_UNUSED_SH 16 +#define __CPE_CI_UNUSED(_v) ((_v) << __CPE_CI_UNUSED_SH) +#define __CPE_CI_PTR 0x0000ffff +#define CPE_CI_PTR_Q1 0x00038044 +#define CPE_DEPTH_Q0 0x00038008 +#define __CPE_DEPTH_UNUSED_MK 0xf8000000 +#define __CPE_DEPTH_UNUSED_SH 27 +#define __CPE_DEPTH_UNUSED(_v) ((_v) << __CPE_DEPTH_UNUSED_SH) +#define __CPE_MSIX_VEC_INDEX_MK 0x07ff0000 +#define __CPE_MSIX_VEC_INDEX_SH 16 +#define __CPE_MSIX_VEC_INDEX(_v) ((_v) << __CPE_MSIX_VEC_INDEX_SH) +#define __CPE_DEPTH 0x0000ffff +#define CPE_DEPTH_Q1 0x00038048 +#define CPE_QCTRL_Q0 0x0003800c +#define __CPE_CTRL_UNUSED30_MK 0xfc000000 +#define __CPE_CTRL_UNUSED30_SH 26 +#define __CPE_CTRL_UNUSED30(_v) ((_v) << __CPE_CTRL_UNUSED30_SH) +#define __CPE_FUNC_INT_CTRL_MK 0x03000000 +#define __CPE_FUNC_INT_CTRL_SH 24 +#define __CPE_FUNC_INT_CTRL(_v) ((_v) << __CPE_FUNC_INT_CTRL_SH) +enum { + __CPE_FUNC_INT_CTRL_DISABLE = 0x0, + __CPE_FUNC_INT_CTRL_F2NF = 0x1, + __CPE_FUNC_INT_CTRL_3QUART = 0x2, + __CPE_FUNC_INT_CTRL_HALF = 0x3, +}; +#define __CPE_CTRL_UNUSED20_MK 0x00f00000 +#define __CPE_CTRL_UNUSED20_SH 20 +#define __CPE_CTRL_UNUSED20(_v) ((_v) << __CPE_CTRL_UNUSED20_SH) +#define __CPE_SCI_TH_MK 0x000f0000 +#define 
__CPE_SCI_TH_SH 16 +#define __CPE_SCI_TH(_v) ((_v) << __CPE_SCI_TH_SH) +#define __CPE_CTRL_UNUSED10_MK 0x0000c000 +#define __CPE_CTRL_UNUSED10_SH 14 +#define __CPE_CTRL_UNUSED10(_v) ((_v) << __CPE_CTRL_UNUSED10_SH) +#define __CPE_ACK_PENDING 0x00002000 +#define __CPE_CTRL_UNUSED40_MK 0x00001c00 +#define __CPE_CTRL_UNUSED40_SH 10 +#define __CPE_CTRL_UNUSED40(_v) ((_v) << __CPE_CTRL_UNUSED40_SH) +#define __CPE_PCIEID_MK 0x00000300 +#define __CPE_PCIEID_SH 8 +#define __CPE_PCIEID(_v) ((_v) << __CPE_PCIEID_SH) +#define __CPE_CTRL_UNUSED00_MK 0x000000fe +#define __CPE_CTRL_UNUSED00_SH 1 +#define __CPE_CTRL_UNUSED00(_v) ((_v) << __CPE_CTRL_UNUSED00_SH) +#define __CPE_ESIZE 0x00000001 +#define CPE_QCTRL_Q1 0x0003804c +#define __CPE_CTRL_UNUSED31_MK 0xfc000000 +#define __CPE_CTRL_UNUSED31_SH 26 +#define __CPE_CTRL_UNUSED31(_v) ((_v) << __CPE_CTRL_UNUSED31_SH) +#define __CPE_CTRL_UNUSED21_MK 0x00f00000 +#define __CPE_CTRL_UNUSED21_SH 20 +#define __CPE_CTRL_UNUSED21(_v) ((_v) << __CPE_CTRL_UNUSED21_SH) +#define __CPE_CTRL_UNUSED11_MK 0x0000c000 +#define __CPE_CTRL_UNUSED11_SH 14 +#define __CPE_CTRL_UNUSED11(_v) ((_v) << __CPE_CTRL_UNUSED11_SH) +#define __CPE_CTRL_UNUSED41_MK 0x00001c00 +#define __CPE_CTRL_UNUSED41_SH 10 +#define __CPE_CTRL_UNUSED41(_v) ((_v) << __CPE_CTRL_UNUSED41_SH) +#define __CPE_CTRL_UNUSED01_MK 0x000000fe +#define __CPE_CTRL_UNUSED01_SH 1 +#define __CPE_CTRL_UNUSED01(_v) ((_v) << __CPE_CTRL_UNUSED01_SH) +#define RME_PI_PTR_Q0 0x00038020 +#define __LATENCY_TIME_STAMP_MK 0xffff0000 +#define __LATENCY_TIME_STAMP_SH 16 +#define __LATENCY_TIME_STAMP(_v) ((_v) << __LATENCY_TIME_STAMP_SH) +#define __RME_PI_PTR 0x0000ffff +#define RME_PI_PTR_Q1 0x00038060 +#define RME_CI_PTR_Q0 0x00038024 +#define __DELAY_TIME_STAMP_MK 0xffff0000 +#define __DELAY_TIME_STAMP_SH 16 +#define __DELAY_TIME_STAMP(_v) ((_v) << __DELAY_TIME_STAMP_SH) +#define __RME_CI_PTR 0x0000ffff +#define RME_CI_PTR_Q1 0x00038064 +#define RME_DEPTH_Q0 0x00038028 +#define __RME_DEPTH_UNUSED_MK 0xf8000000 +#define __RME_DEPTH_UNUSED_SH 27 +#define __RME_DEPTH_UNUSED(_v) ((_v) << __RME_DEPTH_UNUSED_SH) +#define __RME_MSIX_VEC_INDEX_MK 0x07ff0000 +#define __RME_MSIX_VEC_INDEX_SH 16 +#define __RME_MSIX_VEC_INDEX(_v) ((_v) << __RME_MSIX_VEC_INDEX_SH) +#define __RME_DEPTH 0x0000ffff +#define RME_DEPTH_Q1 0x00038068 +#define RME_QCTRL_Q0 0x0003802c +#define __RME_INT_LATENCY_TIMER_MK 0xff000000 +#define __RME_INT_LATENCY_TIMER_SH 24 +#define __RME_INT_LATENCY_TIMER(_v) ((_v) << __RME_INT_LATENCY_TIMER_SH) +#define __RME_INT_DELAY_TIMER_MK 0x00ff0000 +#define __RME_INT_DELAY_TIMER_SH 16 +#define __RME_INT_DELAY_TIMER(_v) ((_v) << __RME_INT_DELAY_TIMER_SH) +#define __RME_INT_DELAY_DISABLE 0x00008000 +#define __RME_DLY_DELAY_DISABLE 0x00004000 +#define __RME_ACK_PENDING 0x00002000 +#define __RME_FULL_INTERRUPT_DISABLE 0x00001000 +#define __RME_CTRL_UNUSED10_MK 0x00000c00 +#define __RME_CTRL_UNUSED10_SH 10 +#define __RME_CTRL_UNUSED10(_v) ((_v) << __RME_CTRL_UNUSED10_SH) +#define __RME_PCIEID_MK 0x00000300 +#define __RME_PCIEID_SH 8 +#define __RME_PCIEID(_v) ((_v) << __RME_PCIEID_SH) +#define __RME_CTRL_UNUSED00_MK 0x000000fe +#define __RME_CTRL_UNUSED00_SH 1 +#define __RME_CTRL_UNUSED00(_v) ((_v) << __RME_CTRL_UNUSED00_SH) +#define __RME_ESIZE 0x00000001 +#define RME_QCTRL_Q1 0x0003806c +#define __RME_CTRL_UNUSED11_MK 0x00000c00 +#define __RME_CTRL_UNUSED11_SH 10 +#define __RME_CTRL_UNUSED11(_v) ((_v) << __RME_CTRL_UNUSED11_SH) +#define __RME_CTRL_UNUSED01_MK 0x000000fe +#define __RME_CTRL_UNUSED01_SH 1 +#define 
__RME_CTRL_UNUSED01(_v) ((_v) << __RME_CTRL_UNUSED01_SH) +#define PSS_CTL_REG 0x00018800 +#define __PSS_I2C_CLK_DIV_MK 0x007f0000 +#define __PSS_I2C_CLK_DIV_SH 16 +#define __PSS_I2C_CLK_DIV(_v) ((_v) << __PSS_I2C_CLK_DIV_SH) +#define __PSS_LMEM_INIT_DONE 0x00001000 +#define __PSS_LMEM_RESET 0x00000200 +#define __PSS_LMEM_INIT_EN 0x00000100 +#define __PSS_LPU1_RESET 0x00000002 +#define __PSS_LPU0_RESET 0x00000001 +#define HQM_QSET0_RXQ_DRBL_P0 0x00038000 +#define __RXQ0_ADD_VECTORS_P 0x80000000 +#define __RXQ0_STOP_P 0x40000000 +#define __RXQ0_PRD_PTR_P 0x0000ffff +#define HQM_QSET1_RXQ_DRBL_P0 0x00038080 +#define __RXQ1_ADD_VECTORS_P 0x80000000 +#define __RXQ1_STOP_P 0x40000000 +#define __RXQ1_PRD_PTR_P 0x0000ffff +#define HQM_QSET0_RXQ_DRBL_P1 0x0003c000 +#define HQM_QSET1_RXQ_DRBL_P1 0x0003c080 +#define HQM_QSET0_TXQ_DRBL_P0 0x00038020 +#define __TXQ0_ADD_VECTORS_P 0x80000000 +#define __TXQ0_STOP_P 0x40000000 +#define __TXQ0_PRD_PTR_P 0x0000ffff +#define HQM_QSET1_TXQ_DRBL_P0 0x000380a0 +#define __TXQ1_ADD_VECTORS_P 0x80000000 +#define __TXQ1_STOP_P 0x40000000 +#define __TXQ1_PRD_PTR_P 0x0000ffff +#define HQM_QSET0_TXQ_DRBL_P1 0x0003c020 +#define HQM_QSET1_TXQ_DRBL_P1 0x0003c0a0 +#define HQM_QSET0_IB_DRBL_1_P0 0x00038040 +#define __IB1_0_ACK_P 0x80000000 +#define __IB1_0_DISABLE_P 0x40000000 +#define __IB1_0_NUM_OF_ACKED_EVENTS_P 0x0000ffff +#define HQM_QSET1_IB_DRBL_1_P0 0x000380c0 +#define __IB1_1_ACK_P 0x80000000 +#define __IB1_1_DISABLE_P 0x40000000 +#define __IB1_1_NUM_OF_ACKED_EVENTS_P 0x0000ffff +#define HQM_QSET0_IB_DRBL_1_P1 0x0003c040 +#define HQM_QSET1_IB_DRBL_1_P1 0x0003c0c0 +#define HQM_QSET0_IB_DRBL_2_P0 0x00038060 +#define __IB2_0_ACK_P 0x80000000 +#define __IB2_0_DISABLE_P 0x40000000 +#define __IB2_0_NUM_OF_ACKED_EVENTS_P 0x0000ffff +#define HQM_QSET1_IB_DRBL_2_P0 0x000380e0 +#define __IB2_1_ACK_P 0x80000000 +#define __IB2_1_DISABLE_P 0x40000000 +#define __IB2_1_NUM_OF_ACKED_EVENTS_P 0x0000ffff +#define HQM_QSET0_IB_DRBL_2_P1 0x0003c060 +#define HQM_QSET1_IB_DRBL_2_P1 0x0003c0e0 + + +/* + * These definitions are either in error/missing in spec. Its auto-generated + * from hard coded values in regparse.pl. + */ +#define __EMPHPOST_AT_4G_MK_FIX 0x0000001c +#define __EMPHPOST_AT_4G_SH_FIX 0x00000002 +#define __EMPHPRE_AT_4G_FIX 0x00000003 +#define __SFP_TXRATE_EN_FIX 0x00000100 +#define __SFP_RXRATE_EN_FIX 0x00000080 + + +/* + * These register definitions are auto-generated from hard coded values + * in regparse.pl. + */ + + +/* + * These register mapping definitions are auto-generated from mapping tables + * in regparse.pl. 
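+ *
+ * As in the crossbow header, each *_Q(__n) and HQM_QSET_*(__n) helper
+ * below computes an address as the queue/set 0 register plus __n times
+ * the 0-to-1 stride, e.g.
+ * CPE_DEPTH_Q(1) = CPE_DEPTH_Q0 + 1 * (CPE_DEPTH_Q1 - CPE_DEPTH_Q0)
+ *                = 0x00038008 + 0x40 = 0x00038048.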
+ */ +#define BFA_IOC0_HBEAT_REG HOST_SEM0_INFO_REG +#define BFA_IOC0_STATE_REG HOST_SEM1_INFO_REG +#define BFA_IOC1_HBEAT_REG HOST_SEM2_INFO_REG +#define BFA_IOC1_STATE_REG HOST_SEM3_INFO_REG +#define BFA_FW_USE_COUNT HOST_SEM4_INFO_REG + +#define CPE_DEPTH_Q(__n) \ + (CPE_DEPTH_Q0 + (__n) * (CPE_DEPTH_Q1 - CPE_DEPTH_Q0)) +#define CPE_QCTRL_Q(__n) \ + (CPE_QCTRL_Q0 + (__n) * (CPE_QCTRL_Q1 - CPE_QCTRL_Q0)) +#define CPE_PI_PTR_Q(__n) \ + (CPE_PI_PTR_Q0 + (__n) * (CPE_PI_PTR_Q1 - CPE_PI_PTR_Q0)) +#define CPE_CI_PTR_Q(__n) \ + (CPE_CI_PTR_Q0 + (__n) * (CPE_CI_PTR_Q1 - CPE_CI_PTR_Q0)) +#define RME_DEPTH_Q(__n) \ + (RME_DEPTH_Q0 + (__n) * (RME_DEPTH_Q1 - RME_DEPTH_Q0)) +#define RME_QCTRL_Q(__n) \ + (RME_QCTRL_Q0 + (__n) * (RME_QCTRL_Q1 - RME_QCTRL_Q0)) +#define RME_PI_PTR_Q(__n) \ + (RME_PI_PTR_Q0 + (__n) * (RME_PI_PTR_Q1 - RME_PI_PTR_Q0)) +#define RME_CI_PTR_Q(__n) \ + (RME_CI_PTR_Q0 + (__n) * (RME_CI_PTR_Q1 - RME_CI_PTR_Q0)) +#define HQM_QSET_RXQ_DRBL_P0(__n) \ + (HQM_QSET0_RXQ_DRBL_P0 + (__n) * (HQM_QSET1_RXQ_DRBL_P0 - \ + HQM_QSET0_RXQ_DRBL_P0)) +#define HQM_QSET_TXQ_DRBL_P0(__n) \ + (HQM_QSET0_TXQ_DRBL_P0 + (__n) * (HQM_QSET1_TXQ_DRBL_P0 - \ + HQM_QSET0_TXQ_DRBL_P0)) +#define HQM_QSET_IB_DRBL_1_P0(__n) \ + (HQM_QSET0_IB_DRBL_1_P0 + (__n) * (HQM_QSET1_IB_DRBL_1_P0 - \ + HQM_QSET0_IB_DRBL_1_P0)) +#define HQM_QSET_IB_DRBL_2_P0(__n) \ + (HQM_QSET0_IB_DRBL_2_P0 + (__n) * (HQM_QSET1_IB_DRBL_2_P0 - \ + HQM_QSET0_IB_DRBL_2_P0)) +#define HQM_QSET_RXQ_DRBL_P1(__n) \ + (HQM_QSET0_RXQ_DRBL_P1 + (__n) * (HQM_QSET1_RXQ_DRBL_P1 - \ + HQM_QSET0_RXQ_DRBL_P1)) +#define HQM_QSET_TXQ_DRBL_P1(__n) \ + (HQM_QSET0_TXQ_DRBL_P1 + (__n) * (HQM_QSET1_TXQ_DRBL_P1 - \ + HQM_QSET0_TXQ_DRBL_P1)) +#define HQM_QSET_IB_DRBL_1_P1(__n) \ + (HQM_QSET0_IB_DRBL_1_P1 + (__n) * (HQM_QSET1_IB_DRBL_1_P1 - \ + HQM_QSET0_IB_DRBL_1_P1)) +#define HQM_QSET_IB_DRBL_2_P1(__n) \ + (HQM_QSET0_IB_DRBL_2_P1 + (__n) * (HQM_QSET1_IB_DRBL_2_P1 - \ + HQM_QSET0_IB_DRBL_2_P1)) + +#define CPE_Q_NUM(__fn, __q) (((__fn) << 2) + (__q)) +#define RME_Q_NUM(__fn, __q) (((__fn) << 2) + (__q)) +#define CPE_Q_MASK(__q) ((__q) & 0x3) +#define RME_Q_MASK(__q) ((__q) & 0x3) + + +/* + * PCI MSI-X vector defines + */ +enum { + BFA_MSIX_CPE_Q0 = 0, + BFA_MSIX_CPE_Q1 = 1, + BFA_MSIX_CPE_Q2 = 2, + BFA_MSIX_CPE_Q3 = 3, + BFA_MSIX_RME_Q0 = 4, + BFA_MSIX_RME_Q1 = 5, + BFA_MSIX_RME_Q2 = 6, + BFA_MSIX_RME_Q3 = 7, + BFA_MSIX_LPU_ERR = 8, + BFA_MSIX_CT_MAX = 9, +}; + +/* + * And corresponding host interrupt status bit field defines + */ +#define __HFN_INT_CPE_Q0 0x00000001U +#define __HFN_INT_CPE_Q1 0x00000002U +#define __HFN_INT_CPE_Q2 0x00000004U +#define __HFN_INT_CPE_Q3 0x00000008U +#define __HFN_INT_CPE_Q4 0x00000010U +#define __HFN_INT_CPE_Q5 0x00000020U +#define __HFN_INT_CPE_Q6 0x00000040U +#define __HFN_INT_CPE_Q7 0x00000080U +#define __HFN_INT_RME_Q0 0x00000100U +#define __HFN_INT_RME_Q1 0x00000200U +#define __HFN_INT_RME_Q2 0x00000400U +#define __HFN_INT_RME_Q3 0x00000800U +#define __HFN_INT_RME_Q4 0x00001000U +#define __HFN_INT_RME_Q5 0x00002000U +#define __HFN_INT_RME_Q6 0x00004000U +#define __HFN_INT_RME_Q7 0x00008000U +#define __HFN_INT_ERR_EMC 0x00010000U +#define __HFN_INT_ERR_LPU0 0x00020000U +#define __HFN_INT_ERR_LPU1 0x00040000U +#define __HFN_INT_ERR_PSS 0x00080000U +#define __HFN_INT_MBOX_LPU0 0x00100000U +#define __HFN_INT_MBOX_LPU1 0x00200000U +#define __HFN_INT_MBOX1_LPU0 0x00400000U +#define __HFN_INT_MBOX1_LPU1 0x00800000U +#define __HFN_INT_CPE_MASK 0x000000ffU +#define __HFN_INT_RME_MASK 0x0000ff00U + + +/* + * catapult memory map. 
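+ *
+ * PSS_SMEM_PGNUM()/PSS_SMEM_PGOFF() below split a shared-memory offset
+ * into a 32KB page number relative to a base page (offset >> 15, added
+ * to _pg0) and an offset within that page (offset & 0x7fff).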
+ */ +#define LL_PGN_HQM0 0x0096 +#define LL_PGN_HQM1 0x0097 +#define PSS_SMEM_PAGE_START 0x8000 +#define PSS_SMEM_PGNUM(_pg0, _ma) ((_pg0) + ((_ma) >> 15)) +#define PSS_SMEM_PGOFF(_ma) ((_ma) & 0x7fff) + +/* + * End of catapult memory map + */ + + +#endif /* __BFI_CTREG_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_fabric.h b/drivers/scsi/bfa/include/bfi/bfi_fabric.h new file mode 100644 index 000000000000..c0669ed41078 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_fabric.h @@ -0,0 +1,92 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFI_FABRIC_H__ +#define __BFI_FABRIC_H__ + +#include + +#pragma pack(1) + +enum bfi_fabric_h2i_msgs { + BFI_FABRIC_H2I_CREATE_REQ = 1, + BFI_FABRIC_H2I_DELETE_REQ = 2, + BFI_FABRIC_H2I_SETAUTH = 3, +}; + +enum bfi_fabric_i2h_msgs { + BFI_FABRIC_I2H_CREATE_RSP = BFA_I2HM(1), + BFI_FABRIC_I2H_DELETE_RSP = BFA_I2HM(2), + BFI_FABRIC_I2H_SETAUTH_RSP = BFA_I2HM(3), + BFI_FABRIC_I2H_ONLINE = BFA_I2HM(4), + BFI_FABRIC_I2H_OFFLINE = BFA_I2HM(5), +}; + +struct bfi_fabric_create_req_s { + bfi_mhdr_t mh; /* common msg header */ + u8 vf_en; /* virtual fabric enable */ + u8 rsvd; + u16 vf_id; /* virtual fabric ID */ + wwn_t pwwn; /* port name */ + wwn_t nwwn; /* node name */ +}; + +struct bfi_fabric_create_rsp_s { + bfi_mhdr_t mh; /* common msg header */ + u16 bfa_handle; /* host fabric handle */ + u8 status; /* fabric create status */ + u8 rsvd; +}; + +struct bfi_fabric_delete_req_s { + bfi_mhdr_t mh; /* common msg header */ + u16 fw_handle; /* firmware fabric handle */ + u16 rsvd; +}; + +struct bfi_fabric_delete_rsp_s { + bfi_mhdr_t mh; /* common msg header */ + u16 bfa_handle; /* host fabric handle */ + u8 status; /* fabric deletion status */ + u8 rsvd; +}; + +#define BFI_FABRIC_AUTHSECRET_LEN 64 +struct bfi_fabric_setauth_req_s { + bfi_mhdr_t mh; /* common msg header */ + u16 fw_handle; /* f/w handle of fabric */ + u8 algorithm; + u8 group; + u8 secret[BFI_FABRIC_AUTHSECRET_LEN]; +}; + +union bfi_fabric_h2i_msg_u { + bfi_msg_t *msg; + struct bfi_fabric_create_req_s *create_req; + struct bfi_fabric_delete_req_s *delete_req; +}; + +union bfi_fabric_i2h_msg_u { + bfi_msg_t *msg; + struct bfi_fabric_create_rsp_s *create_rsp; + struct bfi_fabric_delete_rsp_s *delete_rsp; +}; + +#pragma pack() + +#endif /* __BFI_FABRIC_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_fcpim.h b/drivers/scsi/bfa/include/bfi/bfi_fcpim.h new file mode 100644 index 000000000000..52c059fb4c3a --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_fcpim.h @@ -0,0 +1,301 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __BFI_FCPIM_H__
+#define __BFI_FCPIM_H__
+
+#include "bfi.h"
+#include
+
+#pragma pack(1)
+
+/*
+ * Initiator mode I-T nexus interface defines.
+ */
+
+enum bfi_itnim_h2i {
+	BFI_ITNIM_H2I_CREATE_REQ = 1, /* i-t nexus creation */
+	BFI_ITNIM_H2I_DELETE_REQ = 2, /* i-t nexus deletion */
+};
+
+enum bfi_itnim_i2h {
+	BFI_ITNIM_I2H_CREATE_RSP = BFA_I2HM(1),
+	BFI_ITNIM_I2H_DELETE_RSP = BFA_I2HM(2),
+	BFI_ITNIM_I2H_SLER_EVENT = BFA_I2HM(3),
+};
+
+struct bfi_itnim_create_req_s {
+	struct bfi_mhdr_s mh; /* common msg header */
+	u16 fw_handle; /* f/w handle for itnim */
+	u8 class; /* FC class for IO */
+	u8 seq_rec; /* sequence recovery support */
+	u8 msg_no; /* seq id of the msg */
+};
+
+struct bfi_itnim_create_rsp_s {
+	struct bfi_mhdr_s mh; /* common msg header */
+	u16 bfa_handle; /* bfa handle for itnim */
+	u8 status; /* fcp request status */
+	u8 seq_id; /* seq id of the msg */
+};
+
+struct bfi_itnim_delete_req_s {
+	struct bfi_mhdr_s mh; /* common msg header */
+	u16 fw_handle; /* f/w itnim handle */
+	u8 seq_id; /* seq id of the msg */
+	u8 rsvd;
+};
+
+struct bfi_itnim_delete_rsp_s {
+	struct bfi_mhdr_s mh; /* common msg header */
+	u16 bfa_handle; /* bfa handle for itnim */
+	u8 status; /* fcp request status */
+	u8 seq_id; /* seq id of the msg */
+};
+
+struct bfi_itnim_sler_event_s {
+	struct bfi_mhdr_s mh; /* common msg header */
+	u16 bfa_handle; /* bfa handle for itnim */
+	u16 rsvd;
+};
+
+union bfi_itnim_h2i_msg_u {
+	struct bfi_itnim_create_req_s *create_req;
+	struct bfi_itnim_delete_req_s *delete_req;
+	struct bfi_msg_s *msg;
+};
+
+union bfi_itnim_i2h_msg_u {
+	struct bfi_itnim_create_rsp_s *create_rsp;
+	struct bfi_itnim_delete_rsp_s *delete_rsp;
+	struct bfi_itnim_sler_event_s *sler_event;
+	struct bfi_msg_s *msg;
+};
+
+/*
+ * Initiator mode IO interface defines.
+ */
+
+enum bfi_ioim_h2i {
+	BFI_IOIM_H2I_IOABORT_REQ = 1, /* IO abort request */
+	BFI_IOIM_H2I_IOCLEANUP_REQ = 2, /* IO cleanup request */
+};
+
+enum bfi_ioim_i2h {
+	BFI_IOIM_I2H_IO_RSP = BFA_I2HM(1), /* non-fp IO response */
+	BFI_IOIM_I2H_IOABORT_RSP = BFA_I2HM(2), /* ABORT rsp */
+};
+
+/**
+ * IO command DIF info
+ */
+struct bfi_ioim_dif_s {
+	u32 dif_info[4];
+};
+
+/**
+ * FCP IO messages overview
+ *
+ * @note
+ * - Max CDB length supported is 64 bytes.
+ * - SCSI Linked commands and SCSI bi-directional Commands are not
+ *   supported.
+ *
+ */
+struct bfi_ioim_req_s {
+	struct bfi_mhdr_s mh; /* Common msg header */
+	u16 io_tag; /* I/O tag */
+	u16 rport_hdl; /* itnim/rport firmware handle */
+	struct fcp_cmnd_s cmnd; /* IO request info */
+
+	/**
+	 * SG elements array within the IO request must be double word
+	 * aligned. This alignment is required to optimize SGM setup for the IO.
+	 */
+	struct bfi_sge_s sges[BFI_SGE_INLINE_MAX];
+	u8 io_timeout;
+	u8 dif_en;
+	u8 rsvd_a[2];
+	struct bfi_ioim_dif_s dif;
+};
+
+/**
+ * This table shows various IO status codes from firmware and their
+ * meaning. Host driver can use these status codes to further process
+ * IO completions.
+ *
+ * BFI_IOIM_STS_OK : IO completed with error-free SCSI &
+ *   transport status.
+ *   - io-tag can be reused.
+ *
+ * BFA_IOIM_STS_SCSI_ERR : IO completed with scsi error.
+ *   - io-tag can be reused.
+ *
+ * BFI_IOIM_STS_HOST_ABORTED : IO was aborted successfully due to
+ *   host request.
+ *   - io-tag cannot be reused yet.
+ *
+ * BFI_IOIM_STS_ABORTED : IO was aborted successfully
+ *   internally by f/w.
+ *   - io-tag cannot be reused yet.
+ *
+ * BFI_IOIM_STS_TIMEDOUT : IO timed out and ABTS/RRQ is happening
+ *   in the firmware, and
+ *   - io-tag cannot be reused yet.
+ *
+ * BFI_IOIM_STS_SQER_NEEDED : Firmware could not recover the IO
+ *   with sequence level error
+ *   logic and hence host needs to retry
+ *   this IO with a different IO tag
+ *   - io-tag cannot be used yet.
+ *
+ * BFI_IOIM_STS_NEXUS_ABORT : Second Level Error Recovery from host
+ *   is required because 2 consecutive ABTS
+ *   timed out and host needs logout and
+ *   re-login with the target
+ *   - io-tag cannot be used yet.
+ *
+ * BFI_IOIM_STS_UNDERRUN : IO completed with SCSI status good,
+ *   but the data transferred is less than
+ *   the fcp data length in the command.
+ *   ex. SCSI INQUIRY where transferred
+ *   data length and residue count in FCP
+ *   response account for the total fcp-dl
+ *   - io-tag can be reused.
+ *
+ * BFI_IOIM_STS_OVERRUN : IO completed with SCSI status good,
+ *   but the data transferred is more than
+ *   the fcp data length in the command.
+ *   ex. TAPE IOs where blocks can be of
+ *   unequal lengths.
+ *   - io-tag can be reused.
+ *
+ * BFI_IOIM_STS_RES_FREE : Firmware has completed using io-tag
+ *   during abort process
+ *   - io-tag can be reused.
+ *
+ * BFI_IOIM_STS_PROTO_ERR : Firmware detected a protocol error.
+ *   ex. target sent more data than
+ *   requested, or there was data frame
+ *   loss and other reasons
+ *   - io-tag cannot be used yet.
+ *
+ * BFI_IOIM_STS_DIF_ERR : Firmware detected DIF error. ex: DIF
+ *   CRC err or Ref Tag err or App tag err.
+ *   - io-tag can be reused.
+ *
+ * BFA_IOIM_STS_TSK_MGT_ABORT : IO was aborted because of Task
+ *   Management command from the host
+ *   - io-tag can be reused.
+ *
+ * BFI_IOIM_STS_UTAG : Firmware does not know about this
+ *   io_tag.
+ *   - io-tag can be reused.
+ */
+enum bfi_ioim_status {
+	BFI_IOIM_STS_OK = 0,
+	BFI_IOIM_STS_HOST_ABORTED = 1,
+	BFI_IOIM_STS_ABORTED = 2,
+	BFI_IOIM_STS_TIMEDOUT = 3,
+	BFI_IOIM_STS_RES_FREE = 4,
+	BFI_IOIM_STS_SQER_NEEDED = 5,
+	BFI_IOIM_STS_PROTO_ERR = 6,
+	BFI_IOIM_STS_UTAG = 7,
+	BFI_IOIM_STS_PATHTOV = 8,
+};
+
+#define BFI_IOIM_SNSLEN (256)
+/**
+ * I/O response message
+ */
+struct bfi_ioim_rsp_s {
+	struct bfi_mhdr_s mh; /* common msg header */
+	u16 io_tag; /* completed IO tag */
+	u16 bfa_rport_hndl; /* related rport handle */
+	u8 io_status; /* IO completion status */
+	u8 reuse_io_tag; /* IO tag can be reused */
+	u16 abort_tag; /* host abort request tag */
+	u8 scsi_status; /* scsi status from target */
+	u8 sns_len; /* scsi sense length */
+	u8 resid_flags; /* IO residue flags */
+	u8 rsvd_a;
+	u32 residue; /* IO residual length in bytes */
+	u32 rsvd_b[3];
+};
+
+struct bfi_ioim_abort_req_s {
+	struct bfi_mhdr_s mh; /* Common msg header */
+	u16 io_tag; /* I/O tag */
+	u16 abort_tag; /* unique request tag */
+};
+
+/*
+ * Initiator mode task management command interface defines.
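+ *
+ * A sketch of the expected exchange, based only on the structures
+ * below: the host sends BFI_TSKIM_H2I_TM_REQ with a tsk_tag, the itnim
+ * firmware handle, the LUN and tm_flags; the firmware completes it
+ * with BFI_TSKIM_I2H_TM_RSP carrying the same tsk_tag and a tsk_status
+ * from enum bfi_tskim_status.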
+ */ + +enum bfi_tskim_h2i { + BFI_TSKIM_H2I_TM_REQ = 1, /* task-mgmt command */ + BFI_TSKIM_H2I_ABORT_REQ = 2, /* task-mgmt command */ +}; + +enum bfi_tskim_i2h { + BFI_TSKIM_I2H_TM_RSP = BFA_I2HM(1), +}; + +struct bfi_tskim_req_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u16 tsk_tag; /* task management tag */ + u16 itn_fhdl; /* itn firmware handle */ + lun_t lun; /* LU number */ + u8 tm_flags; /* see fcp_tm_cmnd_t */ + u8 t_secs; /* Timeout value in seconds */ + u8 rsvd[2]; +}; + +struct bfi_tskim_abortreq_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u16 tsk_tag; /* task management tag */ + u16 rsvd; +}; + +enum bfi_tskim_status { + /* + * Following are FCP-4 spec defined status codes, + * **DO NOT CHANGE THEM ** + */ + BFI_TSKIM_STS_OK = 0, + BFI_TSKIM_STS_NOT_SUPP = 4, + BFI_TSKIM_STS_FAILED = 5, + + /** + * Defined by BFA + */ + BFI_TSKIM_STS_TIMEOUT = 10, /* TM request timedout */ + BFI_TSKIM_STS_ABORTED = 11, /* Aborted on host request */ +}; + +struct bfi_tskim_rsp_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u16 tsk_tag; /* task mgmt cmnd tag */ + u8 tsk_status; /* @ref bfi_tskim_status */ + u8 rsvd; +}; + +#pragma pack() + +#endif /* __BFI_FCPIM_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_fcxp.h b/drivers/scsi/bfa/include/bfi/bfi_fcxp.h new file mode 100644 index 000000000000..e0e995a32828 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_fcxp.h @@ -0,0 +1,71 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __BFI_FCXP_H__ +#define __BFI_FCXP_H__ + +#include "bfi.h" + +#pragma pack(1) + +enum bfi_fcxp_h2i { + BFI_FCXP_H2I_SEND_REQ = 1, +}; + +enum bfi_fcxp_i2h { + BFI_FCXP_I2H_SEND_RSP = BFA_I2HM(1), +}; + +#define BFA_FCXP_MAX_SGES 2 + +/** + * FCXP send request structure + */ +struct bfi_fcxp_send_req_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u16 fcxp_tag; /* driver request tag */ + u16 max_frmsz; /* max send frame size */ + u16 vf_id; /* vsan tag if applicable */ + u16 rport_fw_hndl; /* FW Handle for the remote port */ + u8 class; /* FC class used for req/rsp */ + u8 rsp_timeout; /* timeout in secs, 0-no response */ + u8 cts; /* continue sequence */ + u8 lp_tag; /* lport tag */ + struct fchs_s fchs; /* request FC header structure */ + u32 req_len; /* request payload length */ + u32 rsp_maxlen; /* max response length expected */ + struct bfi_sge_s req_sge[BFA_FCXP_MAX_SGES]; /* request buf */ + struct bfi_sge_s rsp_sge[BFA_FCXP_MAX_SGES]; /* response buf */ +}; + +/** + * FCXP send response structure + */ +struct bfi_fcxp_send_rsp_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u16 fcxp_tag; /* send request tag */ + u8 req_status; /* request status */ + u8 rsvd; + u32 rsp_len; /* actual response length */ + u32 residue_len; /* residual response length */ + struct fchs_s fchs; /* response FC header structure */ +}; + +#pragma pack() + +#endif /* __BFI_FCXP_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_ioc.h b/drivers/scsi/bfa/include/bfi/bfi_ioc.h new file mode 100644 index 000000000000..026e9c06ae97 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_ioc.h @@ -0,0 +1,202 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __BFI_IOC_H__ +#define __BFI_IOC_H__ + +#include "bfi.h" +#include + +#pragma pack(1) + +enum bfi_ioc_h2i_msgs { + BFI_IOC_H2I_ENABLE_REQ = 1, + BFI_IOC_H2I_DISABLE_REQ = 2, + BFI_IOC_H2I_GETATTR_REQ = 3, + BFI_IOC_H2I_DBG_SYNC = 4, + BFI_IOC_H2I_DBG_DUMP = 5, +}; + +enum bfi_ioc_i2h_msgs { + BFI_IOC_I2H_ENABLE_REPLY = BFA_I2HM(1), + BFI_IOC_I2H_DISABLE_REPLY = BFA_I2HM(2), + BFI_IOC_I2H_GETATTR_REPLY = BFA_I2HM(3), + BFI_IOC_I2H_READY_EVENT = BFA_I2HM(4), + BFI_IOC_I2H_HBEAT = BFA_I2HM(5), +}; + +/** + * BFI_IOC_H2I_GETATTR_REQ message + */ +struct bfi_ioc_getattr_req_s { + struct bfi_mhdr_s mh; + union bfi_addr_u attr_addr; +}; + +struct bfi_ioc_attr_s { + wwn_t mfg_wwn; + mac_t mfg_mac; + u16 rsvd_a; + char brcd_serialnum[STRSZ(BFA_MFG_SERIALNUM_SIZE)]; + u8 pcie_gen; + u8 pcie_lanes_orig; + u8 pcie_lanes; + u8 rx_bbcredit; /* receive buffer credits */ + u32 adapter_prop; /* adapter properties */ + u16 maxfrsize; /* max receive frame size */ + char asic_rev; + u8 rsvd_b; + char fw_version[BFA_VERSION_LEN]; + char optrom_version[BFA_VERSION_LEN]; + struct bfa_mfg_vpd_s vpd; +}; + +/** + * BFI_IOC_I2H_GETATTR_REPLY message + */ +struct bfi_ioc_getattr_reply_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u8 status; /* cfg reply status */ + u8 rsvd[3]; +}; + +/** + * Firmware memory page offsets + */ +#define BFI_IOC_SMEM_PG0_CB (0x40) +#define BFI_IOC_SMEM_PG0_CT (0x180) + +/** + * Firmware trace offset + */ +#define BFI_IOC_TRC_OFF (0x4b00) +#define BFI_IOC_TRC_ENTS 256 + +#define BFI_IOC_FW_SIGNATURE (0xbfadbfad) +#define BFI_IOC_MD5SUM_SZ 4 +struct bfi_ioc_image_hdr_s { + u32 signature; /* constant signature */ + u32 rsvd_a; + u32 exec; /* exec vector */ + u32 param; /* parameters */ + u32 rsvd_b[4]; + u32 md5sum[BFI_IOC_MD5SUM_SZ]; +}; + +/** + * BFI_IOC_I2H_READY_EVENT message + */ +struct bfi_ioc_rdy_event_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 init_status; /* init event status */ + u8 rsvd[3]; +}; + +struct bfi_ioc_hbeat_s { + struct bfi_mhdr_s mh; /* common msg header */ + u32 hb_count; /* current heart beat count */ +}; + +/** + * IOC hardware/firmware state + */ +enum bfi_ioc_state { + BFI_IOC_UNINIT = 0, /* not initialized */ + BFI_IOC_INITING = 1, /* h/w is being initialized */ + BFI_IOC_HWINIT = 2, /* h/w is initialized */ + BFI_IOC_CFG = 3, /* IOC configuration in progress */ + BFI_IOC_OP = 4, /* IOC is operational */ + BFI_IOC_DISABLING = 5, /* IOC is being disabled */ + BFI_IOC_DISABLED = 6, /* IOC is disabled */ + BFI_IOC_CFG_DISABLED = 7, /* IOC is being disabled; transient */ + BFI_IOC_HBFAIL = 8, /* IOC heart-beat failure */ + BFI_IOC_MEMTEST = 9, /* IOC is doing memtest */ +}; + +#define BFI_IOC_ENDIAN_SIG 0x12345678 + +enum { + BFI_ADAPTER_TYPE_FC = 0x01, /* FC adapters */ + BFI_ADAPTER_TYPE_MK = 0x0f0000, /* adapter type mask */ + BFI_ADAPTER_TYPE_SH = 16, /* adapter type shift */ + BFI_ADAPTER_NPORTS_MK = 0xff00, /* number of ports mask */ + BFI_ADAPTER_NPORTS_SH = 8, /* number of ports shift */ + BFI_ADAPTER_SPEED_MK = 0xff, /* adapter speed mask */ + BFI_ADAPTER_SPEED_SH = 0, /* adapter speed shift */ + BFI_ADAPTER_PROTO = 0x100000, /* prototype adapters */ + BFI_ADAPTER_TTV = 0x200000, /* TTV debug capable */ + BFI_ADAPTER_UNSUPP = 0x400000, /* unknown adapter type */ +}; + +#define BFI_ADAPTER_GETP(__prop,__adap_prop) \ + (((__adap_prop) & BFI_ADAPTER_ ## __prop ## _MK) >> \ + BFI_ADAPTER_ ## __prop ## _SH) +#define BFI_ADAPTER_SETP(__prop, __val) \ + ((__val) << BFI_ADAPTER_ ## __prop ## _SH) +#define 
BFI_ADAPTER_IS_PROTO(__adap_type) \ + ((__adap_type) & BFI_ADAPTER_PROTO) +#define BFI_ADAPTER_IS_TTV(__adap_type) \ + ((__adap_type) & BFI_ADAPTER_TTV) +#define BFI_ADAPTER_IS_UNSUPP(__adap_type) \ + ((__adap_type) & BFI_ADAPTER_UNSUPP) +#define BFI_ADAPTER_IS_SPECIAL(__adap_type) \ + ((__adap_type) & (BFI_ADAPTER_TTV | BFI_ADAPTER_PROTO | \ + BFI_ADAPTER_UNSUPP)) + +/** + * BFI_IOC_H2I_ENABLE_REQ & BFI_IOC_H2I_DISABLE_REQ messages + */ +struct bfi_ioc_ctrl_req_s { + struct bfi_mhdr_s mh; + u8 ioc_class; + u8 rsvd[3]; +}; + +/** + * BFI_IOC_I2H_ENABLE_REPLY & BFI_IOC_I2H_DISABLE_REPLY messages + */ +struct bfi_ioc_ctrl_reply_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u8 status; /* enable/disable status */ + u8 rsvd[3]; +}; + +#define BFI_IOC_MSGSZ 8 +/** + * H2I Messages + */ +union bfi_ioc_h2i_msg_u { + struct bfi_mhdr_s mh; + struct bfi_ioc_ctrl_req_s enable_req; + struct bfi_ioc_ctrl_req_s disable_req; + struct bfi_ioc_getattr_req_s getattr_req; + u32 mboxmsg[BFI_IOC_MSGSZ]; +}; + +/** + * I2H Messages + */ +union bfi_ioc_i2h_msg_u { + struct bfi_mhdr_s mh; + struct bfi_ioc_rdy_event_s rdy_event; + u32 mboxmsg[BFI_IOC_MSGSZ]; +}; + +#pragma pack() + +#endif /* __BFI_IOC_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_iocfc.h b/drivers/scsi/bfa/include/bfi/bfi_iocfc.h new file mode 100644 index 000000000000..c3760df72575 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_iocfc.h @@ -0,0 +1,177 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFI_IOCFC_H__ +#define __BFI_IOCFC_H__ + +#include "bfi.h" +#include +#include +#include + +#pragma pack(1) + +enum bfi_iocfc_h2i_msgs { + BFI_IOCFC_H2I_CFG_REQ = 1, + BFI_IOCFC_H2I_GET_STATS_REQ = 2, + BFI_IOCFC_H2I_CLEAR_STATS_REQ = 3, + BFI_IOCFC_H2I_SET_INTR_REQ = 4, + BFI_IOCFC_H2I_UPDATEQ_REQ = 5, +}; + +enum bfi_iocfc_i2h_msgs { + BFI_IOCFC_I2H_CFG_REPLY = BFA_I2HM(1), + BFI_IOCFC_I2H_GET_STATS_RSP = BFA_I2HM(2), + BFI_IOCFC_I2H_CLEAR_STATS_RSP = BFA_I2HM(3), + BFI_IOCFC_I2H_UPDATEQ_RSP = BFA_I2HM(5), +}; + +struct bfi_iocfc_cfg_s { + u8 num_cqs; /* Number of CQs to be used */ + u8 sense_buf_len; /* SCSI sense length */ + u8 trunk_enabled; /* port trunking enabled */ + u8 trunk_ports; /* trunk ports bit map */ + u32 endian_sig; /* endian signature of host */ + + /** + * Request and response circular queue base addresses, size and + * shadow index pointers. 
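+ * As an illustrative sketch (not mandated by this patch): for each CQ i + * in [0, num_cqs), the host fills req_cq_ba[i] with the DMA base of its + * request queue, req_shadow_ci[i] with the DMA address of the host-resident + * shadow consumer index, and req_cq_elems[i] with the queue depth; the + * rsp_* fields mirror this for the response direction, using a shadow + * producer index instead.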
+ */ + union bfi_addr_u req_cq_ba[BFI_IOC_MAX_CQS]; + union bfi_addr_u req_shadow_ci[BFI_IOC_MAX_CQS]; + u16 req_cq_elems[BFI_IOC_MAX_CQS]; + union bfi_addr_u rsp_cq_ba[BFI_IOC_MAX_CQS]; + union bfi_addr_u rsp_shadow_pi[BFI_IOC_MAX_CQS]; + u16 rsp_cq_elems[BFI_IOC_MAX_CQS]; + + union bfi_addr_u stats_addr; /* DMA-able address for stats */ + union bfi_addr_u cfgrsp_addr; /* config response dma address */ + union bfi_addr_u ioim_snsbase; /* IO sense buffer base address */ + struct bfa_iocfc_intr_attr_s intr_attr; /* IOC interrupt attributes */ +}; + +/** + * Boot target wwn information for this port. This contains either the stored + * or discovered boot target port wwns for the port. + */ +struct bfi_iocfc_bootwwns { + wwn_t wwn[BFA_BOOT_BOOTLUN_MAX]; + u8 nwwns; + u8 rsvd[7]; +}; + +struct bfi_iocfc_cfgrsp_s { + struct bfa_iocfc_fwcfg_s fwcfg; + struct bfa_iocfc_intr_attr_s intr_attr; + struct bfi_iocfc_bootwwns bootwwns; +}; + +/** + * BFI_IOCFC_H2I_CFG_REQ message + */ +struct bfi_iocfc_cfg_req_s { + struct bfi_mhdr_s mh; + union bfi_addr_u ioc_cfg_dma_addr; +}; + +/** + * BFI_IOCFC_I2H_CFG_REPLY message + */ +struct bfi_iocfc_cfg_reply_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u8 cfg_success; /* cfg reply status */ + u8 lpu_bm; /* LPUs assigned for this IOC */ + u8 rsvd[2]; +}; + +/** + * BFI_IOCFC_H2I_GET_STATS_REQ & BFI_IOCFC_H2I_CLEAR_STATS_REQ messages + */ +struct bfi_iocfc_stats_req_s { + struct bfi_mhdr_s mh; /* msg header */ + u32 msgtag; /* msgtag for reply */ +}; + +/** + * BFI_IOCFC_I2H_GET_STATS_RSP & BFI_IOCFC_I2H_CLEAR_STATS_RSP messages + */ +struct bfi_iocfc_stats_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 status; /* reply status */ + u8 rsvd[3]; + u32 msgtag; /* msgtag for reply */ +}; + +/** + * BFI_IOCFC_H2I_SET_INTR_REQ message + */ +struct bfi_iocfc_set_intr_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 coalesce; /* enable intr coalescing*/ + u8 rsvd[3]; + u16 delay; /* delay timer 0..1125us */ + u16 latency; /* latency timer 0..225us */ +}; + +/** + * BFI_IOCFC_H2I_UPDATEQ_REQ message + */ +struct bfi_iocfc_updateq_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + u32 reqq_ba; /* reqq base addr */ + u32 rspq_ba; /* rspq base addr */ + u32 reqq_sci; /* reqq shadow ci */ + u32 rspq_spi; /* rspq shadow pi */ +}; + +/** + * BFI_IOCFC_I2H_UPDATEQ_RSP message + */ +struct bfi_iocfc_updateq_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 status; /* updateq status */ + u8 rsvd[3]; +}; + +/** + * H2I Messages + */ +union bfi_iocfc_h2i_msg_u { + struct bfi_mhdr_s mh; + struct bfi_iocfc_cfg_req_s cfg_req; + struct bfi_iocfc_stats_req_s stats_get; + struct bfi_iocfc_stats_req_s stats_clr; + struct bfi_iocfc_updateq_req_s updateq_req; + u32 mboxmsg[BFI_IOC_MSGSZ]; +}; + +/** + * I2H Messages + */ +union bfi_iocfc_i2h_msg_u { + struct bfi_mhdr_s mh; + struct bfi_iocfc_cfg_reply_s cfg_reply; + struct bfi_iocfc_stats_rsp_s stats_get_rsp; + struct bfi_iocfc_stats_rsp_s stats_clr_rsp; + struct bfi_iocfc_updateq_rsp_s updateq_rsp; + u32 mboxmsg[BFI_IOC_MSGSZ]; +}; + +#pragma pack() + +#endif /* __BFI_IOCFC_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_lport.h b/drivers/scsi/bfa/include/bfi/bfi_lport.h new file mode 100644 index 000000000000..29010614bac9 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_lport.h @@ -0,0 +1,89 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFI_LPORT_H__ +#define __BFI_LPORT_H__ + +#include + +#pragma pack(1) + +enum bfi_lport_h2i_msgs { + BFI_LPORT_H2I_CREATE_REQ = 1, + BFI_LPORT_H2I_DELETE_REQ = 2, +}; + +enum bfi_lport_i2h_msgs { + BFI_LPORT_I2H_CREATE_RSP = BFA_I2HM(1), + BFI_LPORT_I2H_DELETE_RSP = BFA_I2HM(2), + BFI_LPORT_I2H_ONLINE = BFA_I2HM(3), + BFI_LPORT_I2H_OFFLINE = BFA_I2HM(4), +}; + +#define BFI_LPORT_MAX_SYNNAME 64 + +enum bfi_lport_role_e { + BFI_LPORT_ROLE_FCPIM = 1, + BFI_LPORT_ROLE_FCPTM = 2, + BFI_LPORT_ROLE_IPFC = 4, +}; + +struct bfi_lport_create_req_s { + bfi_mhdr_t mh; /* common msg header */ + u16 fabric_fwhdl; /* parent fabric instance */ + u8 roles; /* lport FC-4 roles */ + u8 rsvd; + wwn_t pwwn; /* port name */ + wwn_t nwwn; /* node name */ + u8 symname[BFI_LPORT_MAX_SYNNAME]; +}; + +struct bfi_lport_create_rsp_s { + bfi_mhdr_t mh; /* common msg header */ + u8 status; /* lport creation status */ + u8 rsvd[3]; +}; + +struct bfi_lport_delete_req_s { + bfi_mhdr_t mh; /* common msg header */ + u16 fw_handle; /* firmware lport handle */ + u16 rsvd; +}; + +struct bfi_lport_delete_rsp_s { + bfi_mhdr_t mh; /* common msg header */ + u16 bfa_handle; /* host lport handle */ + u8 status; /* lport deletion status */ + u8 rsvd; +}; + +union bfi_lport_h2i_msg_u { + bfi_msg_t *msg; + struct bfi_lport_create_req_s *create_req; + struct bfi_lport_delete_req_s *delete_req; +}; + +union bfi_lport_i2h_msg_u { + bfi_msg_t *msg; + struct bfi_lport_create_rsp_s *create_rsp; + struct bfi_lport_delete_rsp_s *delete_rsp; +}; + +#pragma pack() + +#endif /* __BFI_LPORT_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_lps.h b/drivers/scsi/bfa/include/bfi/bfi_lps.h new file mode 100644 index 000000000000..414b0e30f6ef --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_lps.h @@ -0,0 +1,96 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __BFI_LPS_H__ +#define __BFI_LPS_H__ + +#include + +#pragma pack(1) + +enum bfi_lps_h2i_msgs { + BFI_LPS_H2I_LOGIN_REQ = 1, + BFI_LPS_H2I_LOGOUT_REQ = 2, +}; + +enum bfi_lps_i2h_msgs { + BFI_LPS_H2I_LOGIN_RSP = BFA_I2HM(1), + BFI_LPS_H2I_LOGOUT_RSP = BFA_I2HM(2), +}; + +struct bfi_lps_login_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 lp_tag; + u8 alpa; + u16 pdu_size; + wwn_t pwwn; + wwn_t nwwn; + u8 fdisc; + u8 auth_en; + u8 rsvd[2]; +}; + +struct bfi_lps_login_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 lp_tag; + u8 status; + u8 lsrjt_rsn; + u8 lsrjt_expl; + wwn_t port_name; + wwn_t node_name; + u16 bb_credit; + u8 f_port; + u8 npiv_en; + u32 lp_pid : 24; + u32 auth_req : 8; + mac_t lp_mac; + mac_t fcf_mac; + u8 ext_status; + u8 brcd_switch;/* attached peer is brcd switch */ +}; + +struct bfi_lps_logout_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 lp_tag; + u8 rsvd[3]; + wwn_t port_name; +}; + +struct bfi_lps_logout_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 lp_tag; + u8 status; + u8 rsvd[2]; +}; + +union bfi_lps_h2i_msg_u { + struct bfi_mhdr_s *msg; + struct bfi_lps_login_req_s *login_req; + struct bfi_lps_logout_req_s *logout_req; +}; + +union bfi_lps_i2h_msg_u { + struct bfi_msg_s *msg; + struct bfi_lps_login_rsp_s *login_rsp; + struct bfi_lps_logout_rsp_s *logout_rsp; +}; + +#pragma pack() + +#endif /* __BFI_LPS_H__ */ + + diff --git a/drivers/scsi/bfa/include/bfi/bfi_port.h b/drivers/scsi/bfa/include/bfi/bfi_port.h new file mode 100644 index 000000000000..3ec3bea110ba --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_port.h @@ -0,0 +1,115 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ +#ifndef __BFI_PORT_H__ +#define __BFI_PORT_H__ + +#include +#include + +#pragma pack(1) + +enum bfi_port_h2i { + BFI_PORT_H2I_ENABLE_REQ = (1), + BFI_PORT_H2I_DISABLE_REQ = (2), + BFI_PORT_H2I_GET_STATS_REQ = (3), + BFI_PORT_H2I_CLEAR_STATS_REQ = (4), +}; + +enum bfi_port_i2h { + BFI_PORT_I2H_ENABLE_RSP = BFA_I2HM(1), + BFI_PORT_I2H_DISABLE_RSP = BFA_I2HM(2), + BFI_PORT_I2H_GET_STATS_RSP = BFA_I2HM(3), + BFI_PORT_I2H_CLEAR_STATS_RSP = BFA_I2HM(4), +}; + +/** + * Generic REQ type + */ +struct bfi_port_generic_req_s { + struct bfi_mhdr_s mh; /* msg header */ + u32 msgtag; /* msgtag for reply */ + u32 rsvd; +}; + +/** + * Generic RSP type + */ +struct bfi_port_generic_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 status; /* port enable status */ + u8 rsvd[3]; + u32 msgtag; /* msgtag for reply */ +}; + +/** + * @todo + * BFI_PORT_H2I_ENABLE_REQ + */ + +/** + * @todo + * BFI_PORT_I2H_ENABLE_RSP + */ + +/** + * BFI_PORT_H2I_DISABLE_REQ + */ + +/** + * BFI_PORT_I2H_DISABLE_RSP + */ + +/** + * BFI_PORT_H2I_GET_STATS_REQ + */ +struct bfi_port_get_stats_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + union bfi_addr_u dma_addr; +}; + +/** + * BFI_PORT_I2H_GET_STATS_RSP + */ + +/** + * BFI_PORT_H2I_CLEAR_STATS_REQ + */ + +/** + * BFI_PORT_I2H_CLEAR_STATS_RSP + */ + +union bfi_port_h2i_msg_u { + struct bfi_mhdr_s mh; + struct bfi_port_generic_req_s enable_req; + struct bfi_port_generic_req_s disable_req; + struct bfi_port_get_stats_req_s getstats_req; + struct bfi_port_generic_req_s clearstats_req; +}; + +union bfi_port_i2h_msg_u { + struct bfi_mhdr_s mh; + struct bfi_port_generic_rsp_s enable_rsp; + struct bfi_port_generic_rsp_s disable_rsp; + struct bfi_port_generic_rsp_s getstats_rsp; + struct bfi_port_generic_rsp_s clearstats_rsp; +}; + +#pragma pack() + +#endif /* __BFI_PORT_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_pport.h b/drivers/scsi/bfa/include/bfi/bfi_pport.h new file mode 100644 index 000000000000..c96d246851af --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_pport.h @@ -0,0 +1,184 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ +#ifndef __BFI_PPORT_H__ +#define __BFI_PPORT_H__ + +#include +#include + +#pragma pack(1) + +enum bfi_pport_h2i { + BFI_PPORT_H2I_ENABLE_REQ = (1), + BFI_PPORT_H2I_DISABLE_REQ = (2), + BFI_PPORT_H2I_GET_STATS_REQ = (3), + BFI_PPORT_H2I_CLEAR_STATS_REQ = (4), + BFI_PPORT_H2I_SET_SVC_PARAMS_REQ = (5), + BFI_PPORT_H2I_ENABLE_RX_VF_TAG_REQ = (6), + BFI_PPORT_H2I_ENABLE_TX_VF_TAG_REQ = (7), + BFI_PPORT_H2I_GET_QOS_STATS_REQ = (8), + BFI_PPORT_H2I_CLEAR_QOS_STATS_REQ = (9), +}; + +enum bfi_pport_i2h { + BFI_PPORT_I2H_ENABLE_RSP = BFA_I2HM(1), + BFI_PPORT_I2H_DISABLE_RSP = BFA_I2HM(2), + BFI_PPORT_I2H_GET_STATS_RSP = BFA_I2HM(3), + BFI_PPORT_I2H_CLEAR_STATS_RSP = BFA_I2HM(4), + BFI_PPORT_I2H_SET_SVC_PARAMS_RSP = BFA_I2HM(5), + BFI_PPORT_I2H_ENABLE_RX_VF_TAG_RSP = BFA_I2HM(6), + BFI_PPORT_I2H_ENABLE_TX_VF_TAG_RSP = BFA_I2HM(7), + BFI_PPORT_I2H_EVENT = BFA_I2HM(8), + BFI_PPORT_I2H_GET_QOS_STATS_RSP = BFA_I2HM(9), + BFI_PPORT_I2H_CLEAR_QOS_STATS_RSP = BFA_I2HM(10), +}; + +/** + * Generic REQ type + */ +struct bfi_pport_generic_req_s { + struct bfi_mhdr_s mh; /* msg header */ + u32 msgtag; /* msgtag for reply */ +}; + +/** + * Generic RSP type + */ +struct bfi_pport_generic_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 status; /* port enable status */ + u8 rsvd[3]; + u32 msgtag; /* msgtag for reply */ +}; + +/** + * BFI_PPORT_H2I_ENABLE_REQ + */ +struct bfi_pport_enable_req_s { + struct bfi_mhdr_s mh; /* msg header */ + u32 rsvd1; + wwn_t nwwn; /* node wwn of physical port */ + wwn_t pwwn; /* port wwn of physical port */ + struct bfa_pport_cfg_s port_cfg; /* port configuration */ + union bfi_addr_u stats_dma_addr; /* DMA address for stats */ + u32 msgtag; /* msgtag for reply */ + u32 rsvd2; +}; + +/** + * BFI_PPORT_I2H_ENABLE_RSP + */ +#define bfi_pport_enable_rsp_t struct bfi_pport_generic_rsp_s + +/** + * BFI_PPORT_H2I_DISABLE_REQ + */ +#define bfi_pport_disable_req_t struct bfi_pport_generic_req_s + +/** + * BFI_PPORT_I2H_DISABLE_RSP + */ +#define bfi_pport_disable_rsp_t struct bfi_pport_generic_rsp_s + +/** + * BFI_PPORT_H2I_GET_STATS_REQ + */ +#define bfi_pport_get_stats_req_t struct bfi_pport_generic_req_s + +/** + * BFI_PPORT_I2H_GET_STATS_RSP + */ +#define bfi_pport_get_stats_rsp_t struct bfi_pport_generic_rsp_s + +/** + * BFI_PPORT_H2I_CLEAR_STATS_REQ + */ +#define bfi_pport_clear_stats_req_t struct bfi_pport_generic_req_s + +/** + * BFI_PPORT_I2H_CLEAR_STATS_RSP + */ +#define bfi_pport_clear_stats_rsp_t struct bfi_pport_generic_rsp_s + +/** + * BFI_PPORT_H2I_GET_QOS_STATS_REQ + */ +#define bfi_pport_get_qos_stats_req_t struct bfi_pport_generic_req_s + +/** + * BFI_PPORT_H2I_GET_QOS_STATS_RSP + */ +#define bfi_pport_get_qos_stats_rsp_t struct bfi_pport_generic_rsp_s + +/** + * BFI_PPORT_H2I_CLEAR_QOS_STATS_REQ + */ +#define bfi_pport_clear_qos_stats_req_t struct bfi_pport_generic_req_s + +/** + * BFI_PPORT_H2I_CLEAR_QOS_STATS_RSP + */ +#define bfi_pport_clear_qos_stats_rsp_t struct bfi_pport_generic_rsp_s + +/** + * BFI_PPORT_H2I_SET_SVC_PARAMS_REQ + */ +struct bfi_pport_set_svc_params_req_s { + struct bfi_mhdr_s mh; /* msg header */ + u16 tx_bbcredit; /* Tx credits */ + u16 rsvd; +}; + +/** + * BFI_PPORT_I2H_SET_SVC_PARAMS_RSP + */ + +/** + * BFI_PPORT_I2H_EVENT + */ +struct bfi_pport_event_s { + struct bfi_mhdr_s mh; /* common msg header */ + struct bfa_pport_link_s link_state; +}; + +union bfi_pport_h2i_msg_u { + struct bfi_mhdr_s *mhdr; + struct bfi_pport_enable_req_s *penable; + struct bfi_pport_generic_req_s *pdisable; + struct bfi_pport_generic_req_s *pgetstats; + 
struct bfi_pport_generic_req_s *pclearstats; + struct bfi_pport_set_svc_params_req_s *psetsvcparams; + struct bfi_pport_get_qos_stats_req_s *pgetqosstats; + struct bfi_pport_generic_req_s *pclearqosstats; +}; + +union bfi_pport_i2h_msg_u { + struct bfi_msg_s *msg; + struct bfi_pport_generic_rsp_s *enable_rsp; + struct bfi_pport_disable_rsp_s *disable_rsp; + struct bfi_pport_generic_rsp_s *getstats_rsp; + struct bfi_pport_clear_stats_rsp_s *clearstats_rsp; + struct bfi_pport_set_svc_params_rsp_s *setsvcparasm_rsp; + struct bfi_pport_get_qos_stats_rsp_s *getqosstats_rsp; + struct bfi_pport_clear_qos_stats_rsp_s *clearqosstats_rsp; + struct bfi_pport_event_s *event; +}; + +#pragma pack() + +#endif /* __BFI_PPORT_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_rport.h b/drivers/scsi/bfa/include/bfi/bfi_rport.h new file mode 100644 index 000000000000..3520f55f09d7 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_rport.h @@ -0,0 +1,104 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFI_RPORT_H__ +#define __BFI_RPORT_H__ + +#include + +#pragma pack(1) + +enum bfi_rport_h2i_msgs { + BFI_RPORT_H2I_CREATE_REQ = 1, + BFI_RPORT_H2I_DELETE_REQ = 2, + BFI_RPORT_H2I_SET_SPEED_REQ = 3, +}; + +enum bfi_rport_i2h_msgs { + BFI_RPORT_I2H_CREATE_RSP = BFA_I2HM(1), + BFI_RPORT_I2H_DELETE_RSP = BFA_I2HM(2), + BFI_RPORT_I2H_QOS_SCN = BFA_I2HM(3), +}; + +struct bfi_rport_create_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + u16 bfa_handle; /* host rport handle */ + u16 max_frmsz; /* max rcv pdu size */ + u32 pid : 24, /* remote port ID */ + lp_tag : 8; /* local port tag */ + u32 local_pid : 24, /* local port ID */ + cisc : 8; + u8 fc_class; /* supported FC classes */ + u8 vf_en; /* virtual fabric enable */ + u16 vf_id; /* virtual fabric ID */ +}; + +struct bfi_rport_create_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u8 status; /* rport creation status */ + u8 rsvd[3]; + u16 bfa_handle; /* host rport handle */ + u16 fw_handle; /* firmware rport handle */ + struct bfa_rport_qos_attr_s qos_attr; /* QoS Attributes */ +}; + +struct bfa_rport_speed_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + u16 fw_handle; /* firmware rport handle */ + u8 speed; /*! 
rport's speed via RPSC */ + u8 rsvd; +}; + +struct bfi_rport_delete_req_s { + struct bfi_mhdr_s mh; /* common msg header */ + u16 fw_handle; /* firmware rport handle */ + u16 rsvd; +}; + +struct bfi_rport_delete_rsp_s { + struct bfi_mhdr_s mh; /* common msg header */ + u16 bfa_handle; /* host rport handle */ + u8 status; /* rport deletion status */ + u8 rsvd; +}; + +struct bfi_rport_qos_scn_s { + struct bfi_mhdr_s mh; /* common msg header */ + u16 bfa_handle; /* host rport handle */ + u16 rsvd; + struct bfa_rport_qos_attr_s old_qos_attr; /* Old QoS Attributes */ + struct bfa_rport_qos_attr_s new_qos_attr; /* New QoS Attributes */ +}; + +union bfi_rport_h2i_msg_u { + struct bfi_msg_s *msg; + struct bfi_rport_create_req_s *create_req; + struct bfi_rport_delete_req_s *delete_req; + struct bfi_rport_speed_req_s *speed_req; +}; + +union bfi_rport_i2h_msg_u { + struct bfi_msg_s *msg; + struct bfi_rport_create_rsp_s *create_rsp; + struct bfi_rport_delete_rsp_s *delete_rsp; + struct bfi_rport_qos_scn_s *qos_scn_evt; +}; + +#pragma pack() + +#endif /* __BFI_RPORT_H__ */ + diff --git a/drivers/scsi/bfa/include/bfi/bfi_uf.h b/drivers/scsi/bfa/include/bfi/bfi_uf.h new file mode 100644 index 000000000000..f328a9e7e622 --- /dev/null +++ b/drivers/scsi/bfa/include/bfi/bfi_uf.h @@ -0,0 +1,52 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFI_UF_H__ +#define __BFI_UF_H__ + +#include "bfi.h" + +#pragma pack(1) + +enum bfi_uf_h2i { + BFI_UF_H2I_BUF_POST = 1, +}; + +enum bfi_uf_i2h { + BFI_UF_I2H_FRM_RCVD = BFA_I2HM(1), +}; + +#define BFA_UF_MAX_SGES 2 + +struct bfi_uf_buf_post_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u16 buf_tag; /* buffer tag */ + u16 buf_len; /* total buffer length */ + struct bfi_sge_s sge[BFA_UF_MAX_SGES]; /* buffer DMA SGEs */ +}; + +struct bfi_uf_frm_rcvd_s { + struct bfi_mhdr_s mh; /* Common msg header */ + u16 buf_tag; /* buffer tag */ + u16 rsvd; + u16 frm_len; /* received frame length */ + u16 xfr_len; /* transferred length */ +}; + +#pragma pack() + +#endif /* __BFI_UF_H__ */ diff --git a/drivers/scsi/bfa/include/cna/bfa_cna_trcmod.h b/drivers/scsi/bfa/include/cna/bfa_cna_trcmod.h new file mode 100644 index 000000000000..43ba7064e81a --- /dev/null +++ b/drivers/scsi/bfa/include/cna/bfa_cna_trcmod.h @@ -0,0 +1,36 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +/** + * bfa_cna_trcmod.h CNA trace modules + */ + +#ifndef __BFA_CNA_TRCMOD_H__ +#define __BFA_CNA_TRCMOD_H__ + +#include + +/* + * !!! Only append to the enums defined here to avoid any versioning + * !!! needed between trace utility and driver version + */ +enum { + BFA_TRC_CNA_CEE = 1, + BFA_TRC_CNA_PORT = 2, +}; + +#endif /* __BFA_CNA_TRCMOD_H__ */ diff --git a/drivers/scsi/bfa/include/cna/cee/bfa_cee.h b/drivers/scsi/bfa/include/cna/cee/bfa_cee.h new file mode 100644 index 000000000000..77f297f68046 --- /dev/null +++ b/drivers/scsi/bfa/include/cna/cee/bfa_cee.h @@ -0,0 +1,77 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_CEE_H__ +#define __BFA_CEE_H__ + +#include +#include +#include +#include + +typedef void (*bfa_cee_get_attr_cbfn_t) (void *dev, bfa_status_t status); +typedef void (*bfa_cee_get_stats_cbfn_t) (void *dev, bfa_status_t status); +typedef void (*bfa_cee_reset_stats_cbfn_t) (void *dev, bfa_status_t status); +typedef void (*bfa_cee_hbfail_cbfn_t) (void *dev, bfa_status_t status); + +struct bfa_cee_cbfn_s { + bfa_cee_get_attr_cbfn_t get_attr_cbfn; + void *get_attr_cbarg; + bfa_cee_get_stats_cbfn_t get_stats_cbfn; + void *get_stats_cbarg; + bfa_cee_reset_stats_cbfn_t reset_stats_cbfn; + void *reset_stats_cbarg; +}; + +struct bfa_cee_s { + void *dev; + bfa_boolean_t get_attr_pending; + bfa_boolean_t get_stats_pending; + bfa_boolean_t reset_stats_pending; + bfa_status_t get_attr_status; + bfa_status_t get_stats_status; + bfa_status_t reset_stats_status; + struct bfa_cee_cbfn_s cbfn; + struct bfa_ioc_hbfail_notify_s hbfail; + struct bfa_trc_mod_s *trcmod; + struct bfa_log_mod_s *logmod; + struct bfa_cee_attr_s *attr; + struct bfa_cee_stats_s *stats; + struct bfa_dma_s attr_dma; + struct bfa_dma_s stats_dma; + struct bfa_ioc_s *ioc; + struct bfa_mbox_cmd_s get_cfg_mb; + struct bfa_mbox_cmd_s get_stats_mb; + struct bfa_mbox_cmd_s reset_stats_mb; +}; + +u32 bfa_cee_meminfo(void); +void bfa_cee_mem_claim(struct bfa_cee_s *cee, u8 *dma_kva, + u64 dma_pa); +void bfa_cee_attach(struct bfa_cee_s *cee, struct bfa_ioc_s *ioc, void *dev, + struct bfa_trc_mod_s *trcmod, + struct bfa_log_mod_s *logmod); +void bfa_cee_detach(struct bfa_cee_s *cee); +bfa_status_t bfa_cee_get_attr(struct bfa_cee_s *cee, + struct bfa_cee_attr_s *attr, + bfa_cee_get_attr_cbfn_t cbfn, void *cbarg); +bfa_status_t bfa_cee_get_stats(struct bfa_cee_s *cee, + struct bfa_cee_stats_s *stats, + bfa_cee_get_stats_cbfn_t cbfn, void *cbarg); +bfa_status_t bfa_cee_reset_stats(struct bfa_cee_s *cee, + bfa_cee_reset_stats_cbfn_t cbfn, void *cbarg); +#endif /* __BFA_CEE_H__ */ diff --git a/drivers/scsi/bfa/include/cna/port/bfa_port.h b/drivers/scsi/bfa/include/cna/port/bfa_port.h new file mode 100644 index 000000000000..7cbf17d3141b --- /dev/null +++ b/drivers/scsi/bfa/include/cna/port/bfa_port.h @@ -0,0 +1,69 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. 
+ * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_PORT_H__ +#define __BFA_PORT_H__ + +#include +#include +#include +#include + +typedef void (*bfa_port_stats_cbfn_t) (void *dev, bfa_status_t status); +typedef void (*bfa_port_endis_cbfn_t) (void *dev, bfa_status_t status); + +struct bfa_port_s { + void *dev; + struct bfa_ioc_s *ioc; + struct bfa_trc_mod_s *trcmod; + struct bfa_log_mod_s *logmod; + u32 msgtag; + bfa_boolean_t stats_busy; + struct bfa_mbox_cmd_s stats_mb; + bfa_port_stats_cbfn_t stats_cbfn; + void *stats_cbarg; + bfa_status_t stats_status; + union bfa_pport_stats_u *stats; + struct bfa_dma_s stats_dma; + bfa_boolean_t endis_pending; + struct bfa_mbox_cmd_s endis_mb; + bfa_port_endis_cbfn_t endis_cbfn; + void *endis_cbarg; + bfa_status_t endis_status; + struct bfa_ioc_hbfail_notify_s hbfail; +}; + +void bfa_port_attach(struct bfa_port_s *port, struct bfa_ioc_s *ioc, + void *dev, struct bfa_trc_mod_s *trcmod, + struct bfa_log_mod_s *logmod); +void bfa_port_detach(struct bfa_port_s *port); +void bfa_port_hbfail(void *arg); + +bfa_status_t bfa_port_get_stats(struct bfa_port_s *port, + union bfa_pport_stats_u *stats, + bfa_port_stats_cbfn_t cbfn, void *cbarg); +bfa_status_t bfa_port_clear_stats(struct bfa_port_s *port, + bfa_port_stats_cbfn_t cbfn, void *cbarg); +bfa_status_t bfa_port_enable(struct bfa_port_s *port, + bfa_port_endis_cbfn_t cbfn, void *cbarg); +bfa_status_t bfa_port_disable(struct bfa_port_s *port, + bfa_port_endis_cbfn_t cbfn, void *cbarg); +u32 bfa_port_meminfo(void); +void bfa_port_mem_claim(struct bfa_port_s *port, u8 *dma_kva, + u64 dma_pa); + +#endif /* __BFA_PORT_H__ */ diff --git a/drivers/scsi/bfa/include/cna/pstats/ethport_defs.h b/drivers/scsi/bfa/include/cna/pstats/ethport_defs.h new file mode 100644 index 000000000000..1563ee512218 --- /dev/null +++ b/drivers/scsi/bfa/include/cna/pstats/ethport_defs.h @@ -0,0 +1,36 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved. + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __ETHPORT_DEFS_H__ +#define __ETHPORT_DEFS_H__ + +struct bnad_drv_stats { + u64 netif_queue_stop; + u64 netif_queue_wakeup; + u64 tso4; + u64 tso6; + u64 tso_err; + u64 tcpcsum_offload; + u64 udpcsum_offload; + u64 csum_help; + u64 csum_help_err; + + u64 hw_stats_updates; + u64 netif_rx_schedule; + u64 netif_rx_complete; + u64 netif_rx_dropped; +}; +#endif diff --git a/drivers/scsi/bfa/include/cna/pstats/phyport_defs.h b/drivers/scsi/bfa/include/cna/pstats/phyport_defs.h new file mode 100644 index 000000000000..eb7548030d0f --- /dev/null +++ b/drivers/scsi/bfa/include/cna/pstats/phyport_defs.h @@ -0,0 +1,218 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved. + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __PHYPORT_DEFS_H__ +#define __PHYPORT_DEFS_H__ + +#define BNA_TXF_ID_MAX 64 +#define BNA_RXF_ID_MAX 64 + +/* + * Statistics + */ + +/* + * TxF Frame Statistics + */ +struct bna_stats_txf { + u64 ucast_octets; + u64 ucast; + u64 ucast_vlan; + + u64 mcast_octets; + u64 mcast; + u64 mcast_vlan; + + u64 bcast_octets; + u64 bcast; + u64 bcast_vlan; + + u64 errors; + u64 filter_vlan; /* frames filtered due to VLAN */ + u64 filter_mac_sa; /* frames filtered due to SA check */ +}; + +/* + * RxF Frame Statistics + */ +struct bna_stats_rxf { + u64 ucast_octets; + u64 ucast; + u64 ucast_vlan; + + u64 mcast_octets; + u64 mcast; + u64 mcast_vlan; + + u64 bcast_octets; + u64 bcast; + u64 bcast_vlan; + u64 frame_drops; +}; + +/* + * FC Tx Frame Statistics + */ +struct bna_stats_fc_tx { + u64 txf_ucast_octets; + u64 txf_ucast; + u64 txf_ucast_vlan; + + u64 txf_mcast_octets; + u64 txf_mcast; + u64 txf_mcast_vlan; + + u64 txf_bcast_octets; + u64 txf_bcast; + u64 txf_bcast_vlan; + + u64 txf_parity_errors; + u64 txf_timeout; + u64 txf_fid_parity_errors; +}; + +/* + * FC Rx Frame Statistics + */ +struct bna_stats_fc_rx { + u64 rxf_ucast_octets; + u64 rxf_ucast; + u64 rxf_ucast_vlan; + + u64 rxf_mcast_octets; + u64 rxf_mcast; + u64 rxf_mcast_vlan; + + u64 rxf_bcast_octets; + u64 rxf_bcast; + u64 rxf_bcast_vlan; +}; + +/* + * RAD Frame Statistics + */ +struct cna_stats_rad { + u64 rx_frames; + u64 rx_octets; + u64 rx_vlan_frames; + + u64 rx_ucast; + u64 rx_ucast_octets; + u64 rx_ucast_vlan; + + u64 rx_mcast; + u64 rx_mcast_octets; + u64 rx_mcast_vlan; + + u64 rx_bcast; + u64 rx_bcast_octets; + u64 rx_bcast_vlan; + + u64 rx_drops; +}; + +/* + * BPC Tx Registers + */ +struct cna_stats_bpc_tx { + u64 tx_pause[8]; + u64 tx_zero_pause[8]; /* Pause cancellation */ + u64 tx_first_pause[8]; /* Pause initiation rather + *than retention */ +}; + +/* + * BPC Rx Registers + */ +struct cna_stats_bpc_rx { + u64 rx_pause[8]; + u64 rx_zero_pause[8]; /* Pause cancellation */ + u64 rx_first_pause[8]; /* Pause initiation rather + *than retention */ +}; + +/* + * MAC Rx Statistics + */ +struct cna_stats_mac_rx { + u64 frame_64; /* both rx and tx counter */ + u64 frame_65_127; /* both rx and tx counter */ + u64 frame_128_255; /* both rx and tx counter */ + u64 frame_256_511; /* both rx and tx 
counter */ + u64 frame_512_1023; /* both rx and tx counter */ + u64 frame_1024_1518; /* both rx and tx counter */ + u64 frame_1518_1522; /* both rx and tx counter */ + u64 rx_bytes; + u64 rx_packets; + u64 rx_fcs_error; + u64 rx_multicast; + u64 rx_broadcast; + u64 rx_control_frames; + u64 rx_pause; + u64 rx_unknown_opcode; + u64 rx_alignment_error; + u64 rx_frame_length_error; + u64 rx_code_error; + u64 rx_carrier_sense_error; + u64 rx_undersize; + u64 rx_oversize; + u64 rx_fragments; + u64 rx_jabber; + u64 rx_drop; +}; + +/* + * MAC Tx Statistics + */ +struct cna_stats_mac_tx { + u64 tx_bytes; + u64 tx_packets; + u64 tx_multicast; + u64 tx_broadcast; + u64 tx_pause; + u64 tx_deferral; + u64 tx_excessive_deferral; + u64 tx_single_collision; + u64 tx_muliple_collision; + u64 tx_late_collision; + u64 tx_excessive_collision; + u64 tx_total_collision; + u64 tx_pause_honored; + u64 tx_drop; + u64 tx_jabber; + u64 tx_fcs_error; + u64 tx_control_frame; + u64 tx_oversize; + u64 tx_undersize; + u64 tx_fragments; +}; + +/* + * Complete statistics + */ +struct bna_stats { + struct cna_stats_mac_rx mac_rx_stats; + struct cna_stats_bpc_rx bpc_rx_stats; + struct cna_stats_rad rad_stats; + struct bna_stats_fc_rx fc_rx_stats; + struct cna_stats_mac_tx mac_tx_stats; + struct cna_stats_bpc_tx bpc_tx_stats; + struct bna_stats_fc_tx fc_tx_stats; + struct bna_stats_rxf rxf_stats[BNA_TXF_ID_MAX]; + struct bna_stats_txf txf_stats[BNA_RXF_ID_MAX]; +}; + +#endif diff --git a/drivers/scsi/bfa/include/cs/bfa_checksum.h b/drivers/scsi/bfa/include/cs/bfa_checksum.h new file mode 100644 index 000000000000..af8c1d533ba8 --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_checksum.h @@ -0,0 +1,60 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_checksum.h BFA checksum utilities + */ + +#ifndef __BFA_CHECKSUM_H__ +#define __BFA_CHECKSUM_H__ + +static inline u32 +bfa_checksum_u32(u32 *buf, int sz) +{ + int i, m = sz >> 2; + u32 sum = 0; + + for (i = 0; i < m; i++) + sum ^= buf[i]; + + return (sum); +} + +static inline u16 +bfa_checksum_u16(u16 *buf, int sz) +{ + int i, m = sz >> 1; + u16 sum = 0; + + for (i = 0; i < m; i++) + sum ^= buf[i]; + + return (sum); +} + +static inline u8 +bfa_checksum_u8(u8 *buf, int sz) +{ + int i; + u8 sum = 0; + + for (i = 0; i < sz; i++) + sum ^= buf[i]; + + return (sum); +} +#endif diff --git a/drivers/scsi/bfa/include/cs/bfa_debug.h b/drivers/scsi/bfa/include/cs/bfa_debug.h new file mode 100644 index 000000000000..441be86b1b0f --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_debug.h @@ -0,0 +1,44 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_debug.h BFA debug interfaces + */ + +#ifndef __BFA_DEBUG_H__ +#define __BFA_DEBUG_H__ + +#define bfa_assert(__cond) do { \ + if (!(__cond)) \ + bfa_panic(__LINE__, __FILE__, #__cond); \ +} while (0) + +#define bfa_sm_fault(__mod, __event) do { \ + bfa_sm_panic((__mod)->logm, __LINE__, __FILE__, __event); \ +} while (0) + +#ifndef BFA_PERF_BUILD +#define bfa_assert_fp(__cond) bfa_assert(__cond) +#else +#define bfa_assert_fp(__cond) +#endif + +struct bfa_log_mod_s; +void bfa_panic(int line, char *file, char *panicstr); +void bfa_sm_panic(struct bfa_log_mod_s *logm, int line, char *file, int event); + +#endif /* __BFA_DEBUG_H__ */ diff --git a/drivers/scsi/bfa/include/cs/bfa_log.h b/drivers/scsi/bfa/include/cs/bfa_log.h new file mode 100644 index 000000000000..761cbe22130a --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_log.h @@ -0,0 +1,184 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_log.h BFA log library data structure and function definition + */ + +#ifndef __BFA_LOG_H__ +#define __BFA_LOG_H__ + +#include +#include +#include + +/* + * BFA log module definition + * + * To create a new module id: + * Add a #define at the end of the list below. Select a value for your + * definition so that it is one (1) greater than the previous + * definition. Modify the definition of BFA_LOG_MODULE_ID_MAX to become + * your new definition. + * Should have no gaps in between the values because this is used in arrays. 
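+ * For example (BFA_LOG_FOO_ID is a hypothetical name, not part of this + * patch), the next module would be added to the enum below as + * BFA_LOG_FOO_ID = BFA_LOG_MODULE_ID_MIN + 7, + * and BFA_LOG_MODULE_ID_MAX would be redefined to BFA_LOG_FOO_ID.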
+ * IMPORTANT: AEN_IDs must be at the beginning, otherwise update bfa_defs_aen.h + */ + +enum bfa_log_module_id { + BFA_LOG_UNUSED_ID = 0, + + /* AEN defs begin */ + BFA_LOG_AEN_MIN = BFA_LOG_UNUSED_ID, + + BFA_LOG_AEN_ID_ADAPTER = BFA_LOG_AEN_MIN + BFA_AEN_CAT_ADAPTER,/* 1 */ + BFA_LOG_AEN_ID_PORT = BFA_LOG_AEN_MIN + BFA_AEN_CAT_PORT, /* 2 */ + BFA_LOG_AEN_ID_LPORT = BFA_LOG_AEN_MIN + BFA_AEN_CAT_LPORT, /* 3 */ + BFA_LOG_AEN_ID_RPORT = BFA_LOG_AEN_MIN + BFA_AEN_CAT_RPORT, /* 4 */ + BFA_LOG_AEN_ID_ITNIM = BFA_LOG_AEN_MIN + BFA_AEN_CAT_ITNIM, /* 5 */ + BFA_LOG_AEN_ID_TIN = BFA_LOG_AEN_MIN + BFA_AEN_CAT_TIN, /* 6 */ + BFA_LOG_AEN_ID_IPFC = BFA_LOG_AEN_MIN + BFA_AEN_CAT_IPFC, /* 7 */ + BFA_LOG_AEN_ID_AUDIT = BFA_LOG_AEN_MIN + BFA_AEN_CAT_AUDIT, /* 8 */ + BFA_LOG_AEN_ID_IOC = BFA_LOG_AEN_MIN + BFA_AEN_CAT_IOC, /* 9 */ + BFA_LOG_AEN_ID_ETHPORT = BFA_LOG_AEN_MIN + BFA_AEN_CAT_ETHPORT,/* 10 */ + + BFA_LOG_AEN_MAX = BFA_LOG_AEN_ID_ETHPORT, + /* AEN defs end */ + + BFA_LOG_MODULE_ID_MIN = BFA_LOG_AEN_MAX, + + BFA_LOG_FW_ID = BFA_LOG_MODULE_ID_MIN + 1, + BFA_LOG_HAL_ID = BFA_LOG_MODULE_ID_MIN + 2, + BFA_LOG_FCS_ID = BFA_LOG_MODULE_ID_MIN + 3, + BFA_LOG_WDRV_ID = BFA_LOG_MODULE_ID_MIN + 4, + BFA_LOG_LINUX_ID = BFA_LOG_MODULE_ID_MIN + 5, + BFA_LOG_SOLARIS_ID = BFA_LOG_MODULE_ID_MIN + 6, + + BFA_LOG_MODULE_ID_MAX = BFA_LOG_SOLARIS_ID, + + /* Not part of any arrays */ + BFA_LOG_MODULE_ID_ALL = BFA_LOG_MODULE_ID_MAX + 1, + BFA_LOG_AEN_ALL = BFA_LOG_MODULE_ID_MAX + 2, + BFA_LOG_DRV_ALL = BFA_LOG_MODULE_ID_MAX + 3, +}; + +/* + * BFA log catalog name + */ +#define BFA_LOG_CAT_NAME "BFA" + +/* + * bfa log severity values + */ +enum bfa_log_severity { + BFA_LOG_INVALID = 0, + BFA_LOG_CRITICAL = 1, + BFA_LOG_ERROR = 2, + BFA_LOG_WARNING = 3, + BFA_LOG_INFO = 4, + BFA_LOG_NONE = 5, + BFA_LOG_LEVEL_MAX = BFA_LOG_NONE +}; + +#define BFA_LOG_MODID_OFFSET 16 + + +struct bfa_log_msgdef_s { + u32 msg_id; /* message id */ + int attributes; /* attributes */ + int severity; /* severity level */ + char *msg_value; + /* msg string */ + char *message; + /* msg format string */ + int arg_type; /* argument type */ + int arg_num; /* number of arguments */ +}; + +/* + * supported argument type + */ +enum bfa_log_arg_type { + BFA_LOG_S = 0, /* string */ + BFA_LOG_D, /* decimal */ + BFA_LOG_I, /* integer */ + BFA_LOG_O, /* octal number */ + BFA_LOG_U, /* unsigned integer */ + BFA_LOG_X, /* hex number */ + BFA_LOG_F, /* floating */ + BFA_LOG_C, /* character */ + BFA_LOG_L, /* double */ + BFA_LOG_P /* pointer */ +}; + +#define BFA_LOG_ARG_TYPE 2 +#define BFA_LOG_ARG0 (0 * BFA_LOG_ARG_TYPE) +#define BFA_LOG_ARG1 (1 * BFA_LOG_ARG_TYPE) +#define BFA_LOG_ARG2 (2 * BFA_LOG_ARG_TYPE) +#define BFA_LOG_ARG3 (3 * BFA_LOG_ARG_TYPE) + +#define BFA_LOG_GET_MOD_ID(msgid) ((msgid >> BFA_LOG_MODID_OFFSET) & 0xff) +#define BFA_LOG_GET_MSG_IDX(msgid) (msgid & 0xffff) +#define BFA_LOG_GET_MSG_ID(msgdef) ((msgdef)->msg_id) +#define BFA_LOG_GET_MSG_FMT_STRING(msgdef) ((msgdef)->message) +#define BFA_LOG_GET_SEVERITY(msgdef) ((msgdef)->severity) + +/* + * Event attributes + */ +#define BFA_LOG_ATTR_NONE 0 +#define BFA_LOG_ATTR_AUDIT 1 +#define BFA_LOG_ATTR_LOG 2 +#define BFA_LOG_ATTR_FFDC 4 + +#define BFA_LOG_CREATE_ID(msw, lsw) \ + (((u32)msw << BFA_LOG_MODID_OFFSET) | lsw) + +struct bfa_log_mod_s; + +/** + * callback function + */ +typedef void (*bfa_log_cb_t)(struct bfa_log_mod_s *log_mod, u32 msg_id, + const char *format, ...); + + +struct bfa_log_mod_s { + char instance_info[16]; /* instance info */ + int log_level[BFA_LOG_MODULE_ID_MAX + 1]; + /* 
log level for modules */ + bfa_log_cb_t cbfn; /* callback function */ +}; + +extern int bfa_log_init(struct bfa_log_mod_s *log_mod, + char *instance_name, bfa_log_cb_t cbfn); +extern int bfa_log(struct bfa_log_mod_s *log_mod, u32 msg_id, ...); +extern bfa_status_t bfa_log_set_level(struct bfa_log_mod_s *log_mod, + int mod_id, enum bfa_log_severity log_level); +extern bfa_status_t bfa_log_set_level_all(struct bfa_log_mod_s *log_mod, + enum bfa_log_severity log_level); +extern bfa_status_t bfa_log_set_level_aen(struct bfa_log_mod_s *log_mod, + enum bfa_log_severity log_level); +extern enum bfa_log_severity bfa_log_get_level(struct bfa_log_mod_s *log_mod, + int mod_id); +extern enum bfa_log_severity bfa_log_get_msg_level( + struct bfa_log_mod_s *log_mod, u32 msg_id); +/* + * array of messages generated from xml files + */ +extern struct bfa_log_msgdef_s bfa_log_msg_array[]; + +#endif diff --git a/drivers/scsi/bfa/include/cs/bfa_perf.h b/drivers/scsi/bfa/include/cs/bfa_perf.h new file mode 100644 index 000000000000..45aa5f978ff5 --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_perf.h @@ -0,0 +1,34 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFAD_PERF_H__ +#define __BFAD_PERF_H__ + +#ifdef BFAD_PERF_BUILD + +#undef bfa_trc +#undef bfa_trc32 +#undef bfa_assert +#undef BFA_TRC_FILE + +#define bfa_trc(_trcp, _data) +#define bfa_trc32(_trcp, _data) +#define bfa_assert(__cond) +#define BFA_TRC_FILE(__mod, __submod) + +#endif + +#endif /* __BFAD_PERF_H__ */ diff --git a/drivers/scsi/bfa/include/cs/bfa_plog.h b/drivers/scsi/bfa/include/cs/bfa_plog.h new file mode 100644 index 000000000000..670f86e5fc6e --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_plog.h @@ -0,0 +1,162 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ +#ifndef __BFA_PORTLOG_H__ +#define __BFA_PORTLOG_H__ + +#include "protocol/fc.h" +#include + +#define BFA_PL_NLOG_ENTS 256 +#define BFA_PL_LOG_REC_INCR(_x) ((_x)++, (_x) %= BFA_PL_NLOG_ENTS) + +#define BFA_PL_STRING_LOG_SZ 32 /* number of chars in string log */ +#define BFA_PL_INT_LOG_SZ 8 /* number of integers in the integer log */ + +enum bfa_plog_log_type { + BFA_PL_LOG_TYPE_INVALID = 0, + BFA_PL_LOG_TYPE_INT = 1, + BFA_PL_LOG_TYPE_STRING = 2, +}; + +/* + * the (fixed size) record format for each entry in the portlog + */ +struct bfa_plog_rec_s { + u32 tv; /* Filled by the portlog driver when the * + * entry is added to the circular log. */ + u8 port; /* Source port that logged this entry. CM + * entities will use 0xFF */ + u8 mid; /* Integer value to be used by all entities * + * while logging. The module id to string * + * conversion will be done by BFAL. See + * enum bfa_plog_mid */ + u8 eid; /* indicates Rx, Tx, IOCTL, etc. See + * enum bfa_plog_eid */ + u8 log_type; /* indicates string log or integer log. + * see bfa_plog_log_type_t */ + u8 log_num_ints; + /* + * interpreted only if log_type is INT_LOG. indicates number of + * integers in the int_log[] (0-PL_INT_LOG_SZ). + */ + u8 rsvd; + u16 misc; /* can be used to indicate fc frame length, + *etc.. */ + union { + char string_log[BFA_PL_STRING_LOG_SZ]; + u32 int_log[BFA_PL_INT_LOG_SZ]; + } log_entry; + +}; + +/* + * the following #defines will be used by the logging entities to indicate + * their module id. BFAL will convert the integer value to string format + * +* process to be used while changing the following #defines: + * - Always add new entries at the end + * - define corresponding string in BFAL + * - Do not remove any entry or rearrange the order. + */ +enum bfa_plog_mid { + BFA_PL_MID_INVALID = 0, + BFA_PL_MID_DEBUG = 1, + BFA_PL_MID_DRVR = 2, + BFA_PL_MID_HAL = 3, + BFA_PL_MID_HAL_FCXP = 4, + BFA_PL_MID_HAL_UF = 5, + BFA_PL_MID_FCS = 6, + BFA_PL_MID_MAX = 7 +}; + +#define BFA_PL_MID_STRLEN 8 +struct bfa_plog_mid_strings_s { + char m_str[BFA_PL_MID_STRLEN]; +}; + +/* + * the following #defines will be used by the logging entities to indicate + * their event type. BFAL will convert the integer value to string format + * +* process to be used while changing the following #defines: + * - Always add new entries at the end + * - define corresponding string in BFAL + * - Do not remove any entry or rearrange the order. 
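+ * For example (BFA_PL_EID_FOO is a hypothetical name), a new event would + * be appended as + * BFA_PL_EID_FOO = 20, + * with BFA_PL_EID_MAX moved up to 21 and a matching string added to the + * BFAL eid string table.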
+ */ +enum bfa_plog_eid { + BFA_PL_EID_INVALID = 0, + BFA_PL_EID_IOC_DISABLE = 1, + BFA_PL_EID_IOC_ENABLE = 2, + BFA_PL_EID_PORT_DISABLE = 3, + BFA_PL_EID_PORT_ENABLE = 4, + BFA_PL_EID_PORT_ST_CHANGE = 5, + BFA_PL_EID_TX = 6, + BFA_PL_EID_TX_ACK1 = 7, + BFA_PL_EID_TX_RJT = 8, + BFA_PL_EID_TX_BSY = 9, + BFA_PL_EID_RX = 10, + BFA_PL_EID_RX_ACK1 = 11, + BFA_PL_EID_RX_RJT = 12, + BFA_PL_EID_RX_BSY = 13, + BFA_PL_EID_CT_IN = 14, + BFA_PL_EID_CT_OUT = 15, + BFA_PL_EID_DRIVER_START = 16, + BFA_PL_EID_RSCN = 17, + BFA_PL_EID_DEBUG = 18, + BFA_PL_EID_MISC = 19, + BFA_PL_EID_MAX = 20 +}; + +#define BFA_PL_ENAME_STRLEN 8 +struct bfa_plog_eid_strings_s { + char e_str[BFA_PL_ENAME_STRLEN]; +}; + +#define BFA_PL_SIG_LEN 8 +#define BFA_PL_SIG_STR "12pl123" + +/* + * per port circular log buffer + */ +struct bfa_plog_s { + char plog_sig[BFA_PL_SIG_LEN]; /* Start signature */ + u8 plog_enabled; + u8 rsvd[7]; + u32 ticks; + u16 head; + u16 tail; + struct bfa_plog_rec_s plog_recs[BFA_PL_NLOG_ENTS]; +}; + +void bfa_plog_init(struct bfa_plog_s *plog); +void bfa_plog_str(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, u16 misc, char *log_str); +void bfa_plog_intarr(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, u16 misc, + u32 *intarr, u32 num_ints); +void bfa_plog_fchdr(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, u16 misc, + struct fchs_s *fchdr); +void bfa_plog_fchdr_and_pl(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, u16 misc, + struct fchs_s *fchdr, u32 pld_w0); +void bfa_plog_clear(struct bfa_plog_s *plog); +void bfa_plog_enable(struct bfa_plog_s *plog); +void bfa_plog_disable(struct bfa_plog_s *plog); +bfa_boolean_t bfa_plog_get_setting(struct bfa_plog_s *plog); + +#endif /* __BFA_PORTLOG_H__ */ diff --git a/drivers/scsi/bfa/include/cs/bfa_q.h b/drivers/scsi/bfa/include/cs/bfa_q.h new file mode 100644 index 000000000000..ea895facedbc --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_q.h @@ -0,0 +1,81 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_q.h Circular queue definitions. 
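+ * + * A minimal usage sketch (assuming, as these macros do, that each queue + * element embeds a struct list_head as its first member; foo_s is a + * hypothetical type): + * + * struct foo_s { struct list_head qe; int payload; } elem, *foo; + * struct list_head q; + * + * INIT_LIST_HEAD(&q); + * list_add_tail(&elem.qe, &q); + * bfa_q_deq(&q, &foo); + * + * after which foo points at elem; bfa_q_deq() stores NULL through its + * second argument when the queue is empty.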
+ */ + +#ifndef __BFA_Q_H__ +#define __BFA_Q_H__ + +#define bfa_q_first(_q) ((void *)(((struct list_head *) (_q))->next)) +#define bfa_q_next(_qe) (((struct list_head *) (_qe))->next) +#define bfa_q_prev(_qe) (((struct list_head *) (_qe))->prev) + +/* + * bfa_q_qe_init - to initialize a queue element + */ +#define bfa_q_qe_init(_qe) { \ + bfa_q_next(_qe) = (struct list_head *) NULL; \ + bfa_q_prev(_qe) = (struct list_head *) NULL; \ +} + +/* + * bfa_q_deq - dequeue an element from head of the queue + */ +#define bfa_q_deq(_q, _qe) { \ + if (!list_empty(_q)) { \ + (*((struct list_head **) (_qe))) = bfa_q_next(_q); \ + bfa_q_prev(bfa_q_next(*((struct list_head **) _qe))) = \ + (struct list_head *) (_q); \ + bfa_q_next(_q) = bfa_q_next(*((struct list_head **) _qe)); \ + BFA_Q_DBG_INIT(*((struct list_head **) _qe)); \ + } else { \ + *((struct list_head **) (_qe)) = (struct list_head *) NULL; \ + } \ +} + +/* + * bfa_q_deq_tail - dequeue an element from tail of the queue + */ +#define bfa_q_deq_tail(_q, _qe) { \ + if (!list_empty(_q)) { \ + *((struct list_head **) (_qe)) = bfa_q_prev(_q); \ + bfa_q_next(bfa_q_prev(*((struct list_head **) _qe))) = \ + (struct list_head *) (_q); \ + bfa_q_prev(_q) = bfa_q_prev(*(struct list_head **) _qe); \ + BFA_Q_DBG_INIT(*((struct list_head **) _qe)); \ + } else { \ + *((struct list_head **) (_qe)) = (struct list_head *) NULL; \ + } \ +} + +/* + * #ifdef BFA_DEBUG (Using bfa_assert to check for debug_build is not + * consistent across modules) + */ +#ifndef BFA_PERF_BUILD +#define BFA_Q_DBG_INIT(_qe) bfa_q_qe_init(_qe) +#else +#define BFA_Q_DBG_INIT(_qe) +#endif + +#define bfa_q_is_on_q(_q, _qe) \ + bfa_q_is_on_q_func(_q, (struct list_head *)(_qe)) +extern int bfa_q_is_on_q_func(struct list_head *q, struct list_head *qe); + +#endif diff --git a/drivers/scsi/bfa/include/cs/bfa_sm.h b/drivers/scsi/bfa/include/cs/bfa_sm.h new file mode 100644 index 000000000000..9877066680a6 --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_sm.h @@ -0,0 +1,69 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfasm.h State machine defines + */ + +#ifndef __BFA_SM_H__ +#define __BFA_SM_H__ + +typedef void (*bfa_sm_t)(void *sm, int event); + +#define bfa_sm_set_state(_sm, _state) (_sm)->sm = (bfa_sm_t)(_state) +#define bfa_sm_send_event(_sm, _event) (_sm)->sm((_sm), (_event)) +#define bfa_sm_get_state(_sm) ((_sm)->sm) +#define bfa_sm_cmp_state(_sm, _state) ((_sm)->sm == (bfa_sm_t)(_state)) + +/** + * For converting from state machine function to state encoding. + */ +struct bfa_sm_table_s { + bfa_sm_t sm; /* state machine function */ + int state; /* state machine encoding */ + char *name; /* state name for display */ +}; +#define BFA_SM(_sm) ((bfa_sm_t)(_sm)) + +int bfa_sm_to_state(struct bfa_sm_table_s *smt, bfa_sm_t sm); + +/** + * State machine with entry actions. + */ +typedef void (*bfa_fsm_t)(void *fsm, int event); + +/** + * oc - object class eg. bfa_ioc + * st - state, eg. 
reset + * otype - object type, eg. struct bfa_ioc_s + * etype - object type, eg. enum ioc_event + */ +#define bfa_fsm_state_decl(oc, st, otype, etype) \ + static void oc ## _sm_ ## st(otype * fsm, etype event); \ + static void oc ## _sm_ ## st ## _entry(otype * fsm) + +#define bfa_fsm_set_state(_fsm, _state) do { \ + (_fsm)->fsm = (bfa_fsm_t)(_state); \ + _state ## _entry(_fsm); \ +} while (0) + +#define bfa_fsm_send_event(_fsm, _event) \ + (_fsm)->fsm((_fsm), (_event)) +#define bfa_fsm_cmp_state(_fsm, _state) \ + ((_fsm)->fsm == (bfa_fsm_t)(_state)) + +#endif diff --git a/drivers/scsi/bfa/include/cs/bfa_trc.h b/drivers/scsi/bfa/include/cs/bfa_trc.h new file mode 100644 index 000000000000..3e743928c74c --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_trc.h @@ -0,0 +1,176 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_TRC_H__ +#define __BFA_TRC_H__ + +#include + +#ifndef BFA_TRC_MAX +#define BFA_TRC_MAX (4 * 1024) +#endif + +#ifndef BFA_TRC_TS +#define BFA_TRC_TS(_trcm) ((_trcm)->ticks ++) +#endif + +struct bfa_trc_s { +#ifdef __BIGENDIAN + u16 fileno; + u16 line; +#else + u16 line; + u16 fileno; +#endif + u32 timestamp; + union { + struct { + u32 rsvd; + u32 u32; + } u32; + u64 u64; + } data; +}; + + +struct bfa_trc_mod_s { + u32 head; + u32 tail; + u32 ntrc; + u32 stopped; + u32 ticks; + u32 rsvd[3]; + struct bfa_trc_s trc[BFA_TRC_MAX]; +}; + + +enum { + BFA_TRC_FW = 1, /* firmware modules */ + BFA_TRC_HAL = 2, /* BFA modules */ + BFA_TRC_FCS = 3, /* BFA FCS modules */ + BFA_TRC_LDRV = 4, /* Linux driver modules */ + BFA_TRC_SDRV = 5, /* Solaris driver modules */ + BFA_TRC_VDRV = 6, /* vmware driver modules */ + BFA_TRC_WDRV = 7, /* windows driver modules */ + BFA_TRC_AEN = 8, /* AEN module */ + BFA_TRC_BIOS = 9, /* bios driver modules */ + BFA_TRC_EFI = 10, /* EFI driver modules */ + BNA_TRC_WDRV = 11, /* BNA windows driver modules */ + BNA_TRC_VDRV = 12, /* BNA vmware driver modules */ + BNA_TRC_SDRV = 13, /* BNA Solaris driver modules */ + BNA_TRC_LDRV = 14, /* BNA Linux driver modules */ + BNA_TRC_HAL = 15, /* BNA modules */ + BFA_TRC_CNA = 16, /* Common modules */ + BNA_TRC_IMDRV = 17 /* BNA windows intermediate driver modules */ +}; +#define BFA_TRC_MOD_SH 10 +#define BFA_TRC_MOD(__mod) ((BFA_TRC_ ## __mod) << BFA_TRC_MOD_SH) + +/** + * Define a new tracing file (module). Module should match one defined above. 
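+ *
+ * Usage sketch (illustrative; the FCS/RPORT pair is an example, so a
+ * BFA_TRC_FCS_RPORT file number must be defined, and the traced object
+ * must carry a trcmod pointer, which is what the bfa_trc() macro below
+ * dereferences):
+ *
+ *	BFA_TRC_FILE(FCS, RPORT);	(once, at file scope in the .c file)
+ *
+ *	bfa_trc(rport, rport->pid);	(records fileno, __LINE__ and data)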
+ */ +#define BFA_TRC_FILE(__mod, __submod) \ + static int __trc_fileno = ((BFA_TRC_ ## __mod ## _ ## __submod) | \ + BFA_TRC_MOD(__mod)) + + +#define bfa_trc32(_trcp, _data) \ + __bfa_trc((_trcp)->trcmod, __trc_fileno, __LINE__, (u32)_data) + + +#ifndef BFA_BOOT_BUILD +#define bfa_trc(_trcp, _data) \ + __bfa_trc((_trcp)->trcmod, __trc_fileno, __LINE__, (u64)_data) +#else +void bfa_boot_trc(struct bfa_trc_mod_s *trcmod, u16 fileno, + u16 line, u32 data); +#define bfa_trc(_trcp, _data) \ + bfa_boot_trc((_trcp)->trcmod, __trc_fileno, __LINE__, (u32)_data) +#endif + + +static inline void +bfa_trc_init(struct bfa_trc_mod_s *trcm) +{ + trcm->head = trcm->tail = trcm->stopped = 0; + trcm->ntrc = BFA_TRC_MAX; +} + + +static inline void +bfa_trc_stop(struct bfa_trc_mod_s *trcm) +{ + trcm->stopped = 1; +} + +#ifdef FWTRC +extern void dc_flush(void *data); +#else +#define dc_flush(data) +#endif + + +static inline void +__bfa_trc(struct bfa_trc_mod_s *trcm, int fileno, int line, u64 data) +{ + int tail = trcm->tail; + struct bfa_trc_s *trc = &trcm->trc[tail]; + + if (trcm->stopped) + return; + + trc->fileno = (u16) fileno; + trc->line = (u16) line; + trc->data.u64 = data; + trc->timestamp = BFA_TRC_TS(trcm); + dc_flush(trc); + + trcm->tail = (trcm->tail + 1) & (BFA_TRC_MAX - 1); + if (trcm->tail == trcm->head) + trcm->head = (trcm->head + 1) & (BFA_TRC_MAX - 1); + dc_flush(trcm); +} + + +static inline void +__bfa_trc32(struct bfa_trc_mod_s *trcm, int fileno, int line, u32 data) +{ + int tail = trcm->tail; + struct bfa_trc_s *trc = &trcm->trc[tail]; + + if (trcm->stopped) + return; + + trc->fileno = (u16) fileno; + trc->line = (u16) line; + trc->data.u32.u32 = data; + trc->timestamp = BFA_TRC_TS(trcm); + dc_flush(trc); + + trcm->tail = (trcm->tail + 1) & (BFA_TRC_MAX - 1); + if (trcm->tail == trcm->head) + trcm->head = (trcm->head + 1) & (BFA_TRC_MAX - 1); + dc_flush(trcm); +} + +#ifndef BFA_PERF_BUILD +#define bfa_trc_fp(_trcp, _data) bfa_trc(_trcp, _data) +#else +#define bfa_trc_fp(_trcp, _data) +#endif + +#endif /* __BFA_TRC_H__ */ + diff --git a/drivers/scsi/bfa/include/cs/bfa_wc.h b/drivers/scsi/bfa/include/cs/bfa_wc.h new file mode 100644 index 000000000000..0460bd4fc7c4 --- /dev/null +++ b/drivers/scsi/bfa/include/cs/bfa_wc.h @@ -0,0 +1,68 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_wc.h Generic wait counter. + */ + +#ifndef __BFA_WC_H__ +#define __BFA_WC_H__ + +typedef void (*bfa_wc_resume_t) (void *cbarg); + +struct bfa_wc_s { + bfa_wc_resume_t wc_resume; + void *wc_cbarg; + int wc_count; +}; + +static inline void +bfa_wc_up(struct bfa_wc_s *wc) +{ + wc->wc_count++; +} + +static inline void +bfa_wc_down(struct bfa_wc_s *wc) +{ + wc->wc_count--; + if (wc->wc_count == 0) + wc->wc_resume(wc->wc_cbarg); +} + +/** + * Initialize a waiting counter. 
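+ *
+ * Typical pattern (a sketch; all_done, ctx, nreqs and issue_request
+ * are illustrative): bfa_wc_init() takes an initial reference, each
+ * outstanding sub-operation is bracketed by bfa_wc_up()/bfa_wc_down(),
+ * and bfa_wc_wait() drops the initial reference so that wc_resume
+ * fires exactly once, after the last completion:
+ *
+ *	bfa_wc_init(&wc, all_done, ctx);	(wc_count == 1)
+ *	for (i = 0; i < nreqs; i++) {
+ *		bfa_wc_up(&wc);		(one per request)
+ *		issue_request(i);	(its completion calls bfa_wc_down)
+ *	}
+ *	bfa_wc_wait(&wc);	(all_done runs once wc_count reaches 0)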
+ */ +static inline void +bfa_wc_init(struct bfa_wc_s *wc, bfa_wc_resume_t wc_resume, void *wc_cbarg) +{ + wc->wc_resume = wc_resume; + wc->wc_cbarg = wc_cbarg; + wc->wc_count = 0; + bfa_wc_up(wc); +} + +/** + * Wait for counter to reach zero + */ +static inline void +bfa_wc_wait(struct bfa_wc_s *wc) +{ + bfa_wc_down(wc); +} + +#endif diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_adapter.h b/drivers/scsi/bfa/include/defs/bfa_defs_adapter.h new file mode 100644 index 000000000000..8c208fc8e329 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_adapter.h @@ -0,0 +1,82 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_ADAPTER_H__ +#define __BFA_DEFS_ADAPTER_H__ + +#include +#include +#include + +/** + * BFA adapter level attributes. + */ +enum { + BFA_ADAPTER_SERIAL_NUM_LEN = STRSZ(BFA_MFG_SERIALNUM_SIZE), + /* + *!< adapter serial num length + */ + BFA_ADAPTER_MODEL_NAME_LEN = 16, /* model name length */ + BFA_ADAPTER_MODEL_DESCR_LEN = 128, /* model description length */ + BFA_ADAPTER_MFG_NAME_LEN = 8, /* manufacturer name length */ + BFA_ADAPTER_SYM_NAME_LEN = 64, /* adapter symbolic name length */ + BFA_ADAPTER_OS_TYPE_LEN = 64, /* adapter os type length */ +}; + +struct bfa_adapter_attr_s { + char manufacturer[BFA_ADAPTER_MFG_NAME_LEN]; + char serial_num[BFA_ADAPTER_SERIAL_NUM_LEN]; + u32 rsvd1; + char model[BFA_ADAPTER_MODEL_NAME_LEN]; + char model_descr[BFA_ADAPTER_MODEL_DESCR_LEN]; + wwn_t pwwn; + char node_symname[FC_SYMNAME_MAX]; + char hw_ver[BFA_VERSION_LEN]; + char fw_ver[BFA_VERSION_LEN]; + char optrom_ver[BFA_VERSION_LEN]; + char os_type[BFA_ADAPTER_OS_TYPE_LEN]; + struct bfa_mfg_vpd_s vpd; + struct mac_s mac; + + u8 nports; + u8 max_speed; + u8 prototype; + char asic_rev; + + u8 pcie_gen; + u8 pcie_lanes_orig; + u8 pcie_lanes; + u8 cna_capable; +}; + +/** + * BFA adapter level events + * Arguments below are in BFAL context from Mgmt + * BFA_PORT_AEN_ADD: [in]: None [out]: serial_num, pwwn, nports + * BFA_PORT_AEN_REMOVE: [in]: pwwn [out]: serial_num, pwwn, nports + */ +enum bfa_adapter_aen_event { + BFA_ADAPTER_AEN_ADD = 1, /* New Adapter found event */ + BFA_ADAPTER_AEN_REMOVE = 2, /* Adapter removed event */ +}; + +struct bfa_adapter_aen_data_s { + char serial_num[BFA_ADAPTER_SERIAL_NUM_LEN]; + u32 nports; /* Number of NPorts */ + wwn_t pwwn; /* WWN of one of its physical port */ +}; + +#endif /* __BFA_DEFS_ADAPTER_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_aen.h b/drivers/scsi/bfa/include/defs/bfa_defs_aen.h new file mode 100644 index 000000000000..4c81a613db3d --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_aen.h @@ -0,0 +1,73 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_AEN_H__ +#define __BFA_DEFS_AEN_H__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +enum bfa_aen_category { + BFA_AEN_CAT_ADAPTER = 1, + BFA_AEN_CAT_PORT = 2, + BFA_AEN_CAT_LPORT = 3, + BFA_AEN_CAT_RPORT = 4, + BFA_AEN_CAT_ITNIM = 5, + BFA_AEN_CAT_TIN = 6, + BFA_AEN_CAT_IPFC = 7, + BFA_AEN_CAT_AUDIT = 8, + BFA_AEN_CAT_IOC = 9, + BFA_AEN_CAT_ETHPORT = 10, + BFA_AEN_MAX_CAT = 10 +}; + +#pragma pack(1) +union bfa_aen_data_u { + struct bfa_adapter_aen_data_s adapter; + struct bfa_port_aen_data_s port; + struct bfa_lport_aen_data_s lport; + struct bfa_rport_aen_data_s rport; + struct bfa_itnim_aen_data_s itnim; + struct bfa_audit_aen_data_s audit; + struct bfa_ioc_aen_data_s ioc; + struct bfa_ethport_aen_data_s ethport; +}; + +struct bfa_aen_entry_s { + enum bfa_aen_category aen_category; + int aen_type; + union bfa_aen_data_u aen_data; + struct bfa_timeval_s aen_tv; + s32 seq_num; + s32 bfad_num; + s32 rsvd[1]; +}; + +#pragma pack() + +#define bfa_aen_event_t int + +#endif /* __BFA_DEFS_AEN_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_audit.h b/drivers/scsi/bfa/include/defs/bfa_defs_audit.h new file mode 100644 index 000000000000..8e3a962bf20c --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_audit.h @@ -0,0 +1,38 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_AUDIT_H__ +#define __BFA_DEFS_AUDIT_H__ + +#include + +/** + * BFA audit events + */ +enum bfa_audit_aen_event { + BFA_AUDIT_AEN_AUTH_ENABLE = 1, + BFA_AUDIT_AEN_AUTH_DISABLE = 2, +}; + +/** + * audit event data + */ +struct bfa_audit_aen_data_s { + wwn_t pwwn; +}; + +#endif /* __BFA_DEFS_AUDIT_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_auth.h b/drivers/scsi/bfa/include/defs/bfa_defs_auth.h new file mode 100644 index 000000000000..dd19c83aba58 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_auth.h @@ -0,0 +1,112 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_AUTH_H__ +#define __BFA_DEFS_AUTH_H__ + +#include + +#define PUBLIC_KEY 15409 +#define PRIVATE_KEY 19009 +#define KEY_LEN 32399 +#define BFA_AUTH_SECRET_STRING_LEN 256 +#define BFA_AUTH_FAIL_TIMEOUT 0xFF + +/** + * Authentication status + */ +enum bfa_auth_status { + BFA_AUTH_STATUS_NONE = 0, /* no authentication */ + BFA_AUTH_UNINIT = 1, /* state - uninit */ + BFA_AUTH_NEG_SEND = 2, /* state - negotiate send */ + BFA_AUTH_CHAL_WAIT = 3, /* state - challenge wait */ + BFA_AUTH_NEG_RETRY = 4, /* state - negotiate retry */ + BFA_AUTH_REPLY_SEND = 5, /* state - reply send */ + BFA_AUTH_STATUS_WAIT = 6, /* state - status wait */ + BFA_AUTH_SUCCESS = 7, /* state - success */ + BFA_AUTH_FAILED = 8, /* state - failed */ + BFA_AUTH_STATUS_UNKNOWN = 9, /* authentication status unknown */ +}; + +struct auth_proto_stats_s { + u32 auth_rjts; + u32 auth_negs; + u32 auth_dones; + + u32 dhchap_challenges; + u32 dhchap_replies; + u32 dhchap_successes; +}; + +/** + * Authentication related statistics + */ +struct bfa_auth_stats_s { + u32 auth_failures; /* authentication failures */ + u32 auth_successes; /* authentication successes*/ + struct auth_proto_stats_s auth_rx_stats; /* Rx protocol stats */ + struct auth_proto_stats_s auth_tx_stats; /* Tx protocol stats */ +}; + +/** + * Authentication hash function algorithms + */ +enum bfa_auth_algo { + BFA_AUTH_ALGO_MD5 = 1, /* Message-Digest algorithm 5 */ + BFA_AUTH_ALGO_SHA1 = 2, /* Secure Hash Algorithm 1 */ + BFA_AUTH_ALGO_MS = 3, /* MD5, then SHA-1 */ + BFA_AUTH_ALGO_SM = 4, /* SHA-1, then MD5 */ +}; + +/** + * DH Groups + * + * Current value could be combination of one or more of the following values + */ +enum bfa_auth_group { + BFA_AUTH_GROUP_DHNULL = 0, /* DH NULL (value == 0) */ + BFA_AUTH_GROUP_DH768 = 1, /* DH group 768 (value == 1) */ + BFA_AUTH_GROUP_DH1024 = 2, /* DH group 1024 (value == 2) */ + BFA_AUTH_GROUP_DH1280 = 4, /* DH group 1280 (value == 3) */ + BFA_AUTH_GROUP_DH1536 = 8, /* DH group 1536 (value == 4) */ + + BFA_AUTH_GROUP_ALL = 256 /* Use default DH group order + * 0, 1, 2, 3, 4 */ +}; + +/** + * Authentication secret sources + */ +enum bfa_auth_secretsource { + BFA_AUTH_SECSRC_LOCAL = 1, /* locally configured */ + BFA_AUTH_SECSRC_RADIUS = 2, /* use radius server */ + BFA_AUTH_SECSRC_TACACS = 3, /* TACACS server */ +}; + +/** + * Authentication attributes + */ +struct bfa_auth_attr_s { + enum bfa_auth_status status; + enum bfa_auth_algo algo; + enum bfa_auth_group dh_grp; + u16 rjt_code; + u16 rjt_code_exp; + u8 secret_set; + u8 resv[7]; +}; + +#endif /* __BFA_DEFS_AUTH_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_boot.h b/drivers/scsi/bfa/include/defs/bfa_defs_boot.h new file mode 100644 index 000000000000..6f4aa5283545 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_boot.h @@ -0,0 +1,71 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_BOOT_H__ +#define __BFA_DEFS_BOOT_H__ + +#include +#include +#include + +enum { + BFA_BOOT_BOOTLUN_MAX = 4, /* maximum boot lun per IOC */ +}; + +#define BOOT_CFG_REV1 1 + +/** + * Boot options setting. Boot options setting determines from where + * to get the boot lun information + */ +enum bfa_boot_bootopt { + BFA_BOOT_AUTO_DISCOVER = 0, /* Boot from blun provided by fabric */ + BFA_BOOT_STORED_BLUN = 1, /* Boot from bluns stored in flash */ + BFA_BOOT_FIRST_LUN = 2, /* Boot from first discovered blun */ +}; + +/** + * Boot lun information. + */ +struct bfa_boot_bootlun_s { + wwn_t pwwn; /* port wwn of target */ + lun_t lun; /* 64-bit lun */ +}; + +/** + * BOOT boot configuraton + */ +struct bfa_boot_cfg_s { + u8 version; + u8 rsvd1; + u16 chksum; + + u8 enable; /* enable/disable SAN boot */ + u8 speed; /* boot speed settings */ + u8 topology; /* boot topology setting */ + u8 bootopt; /* bfa_boot_bootopt_t */ + + u32 nbluns; /* number of boot luns */ + + u32 rsvd2; + + struct bfa_boot_bootlun_s blun[BFA_BOOT_BOOTLUN_MAX]; + struct bfa_boot_bootlun_s blun_disc[BFA_BOOT_BOOTLUN_MAX]; +}; + + +#endif /* __BFA_DEFS_BOOT_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_cee.h b/drivers/scsi/bfa/include/defs/bfa_defs_cee.h new file mode 100644 index 000000000000..520a22f52dd1 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_cee.h @@ -0,0 +1,159 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * bfa_defs_cee.h Interface declarations between host based + * BFAL and DCBX/LLDP module in Firmware + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_CEE_H__ +#define __BFA_DEFS_CEE_H__ + +#include +#include +#include + +#pragma pack(1) + +#define BFA_CEE_LLDP_MAX_STRING_LEN (128) + + +/* FIXME: this is coming from the protocol spec. Can the host & apps share the + protocol .h files ? 
+ */ +#define BFA_CEE_LLDP_SYS_CAP_OTHER 0x0001 +#define BFA_CEE_LLDP_SYS_CAP_REPEATER 0x0002 +#define BFA_CEE_LLDP_SYS_CAP_MAC_BRIDGE 0x0004 +#define BFA_CEE_LLDP_SYS_CAP_WLAN_AP 0x0008 +#define BFA_CEE_LLDP_SYS_CAP_ROUTER 0x0010 +#define BFA_CEE_LLDP_SYS_CAP_TELEPHONE 0x0020 +#define BFA_CEE_LLDP_SYS_CAP_DOCSIS_CD 0x0040 +#define BFA_CEE_LLDP_SYS_CAP_STATION 0x0080 +#define BFA_CEE_LLDP_SYS_CAP_CVLAN 0x0100 +#define BFA_CEE_LLDP_SYS_CAP_SVLAN 0x0200 +#define BFA_CEE_LLDP_SYS_CAP_TPMR 0x0400 + + +/* LLDP string type */ +struct bfa_cee_lldp_str_s { + u8 sub_type; + u8 len; + u8 rsvd[2]; + u8 value[BFA_CEE_LLDP_MAX_STRING_LEN]; +}; + + +/* LLDP paramters */ +struct bfa_cee_lldp_cfg_s { + struct bfa_cee_lldp_str_s chassis_id; + struct bfa_cee_lldp_str_s port_id; + struct bfa_cee_lldp_str_s port_desc; + struct bfa_cee_lldp_str_s sys_name; + struct bfa_cee_lldp_str_s sys_desc; + struct bfa_cee_lldp_str_s mgmt_addr; + u16 time_to_interval; + u16 enabled_system_cap; +}; + +enum bfa_cee_dcbx_version_e { + DCBX_PROTOCOL_PRECEE = 1, + DCBX_PROTOCOL_CEE = 2, +}; + +enum bfa_cee_lls_e { + CEE_LLS_DOWN_NO_TLV = 0, /* LLS is down because the TLV not sent by + * the peer */ + CEE_LLS_DOWN = 1, /* LLS is down as advertised by the peer */ + CEE_LLS_UP = 2, +}; + +/* CEE/DCBX parameters */ +struct bfa_cee_dcbx_cfg_s { + u8 pgid[8]; + u8 pg_percentage[8]; + u8 pfc_enabled; /* bitmap of priorties with PFC enabled */ + u8 fcoe_user_priority; /* bitmap of priorities used for FcoE + * traffic */ + u8 dcbx_version; /* operating version:CEE or preCEE */ + u8 lls_fcoe; /* FCoE Logical Link Status */ + u8 lls_lan; /* LAN Logical Link Status */ + u8 rsvd[3]; +}; + +/* CEE status */ +/* Making this to tri-state for the benefit of port list command */ +enum bfa_cee_status_e { + CEE_PHY_DOWN = 0, + CEE_PHY_UP = 1, + CEE_UP = 2, +}; + +/* CEE Query */ +struct bfa_cee_attr_s { + u8 cee_status; + u8 error_reason; + struct bfa_cee_lldp_cfg_s lldp_remote; + struct bfa_cee_dcbx_cfg_s dcbx_remote; + mac_t src_mac; + u8 link_speed; + u8 filler[3]; +}; + + + + +/* LLDP/DCBX/CEE Statistics */ + +struct bfa_cee_lldp_stats_s { + u32 frames_transmitted; + u32 frames_aged_out; + u32 frames_discarded; + u32 frames_in_error; + u32 frames_rcvd; + u32 tlvs_discarded; + u32 tlvs_unrecognized; +}; + +struct bfa_cee_dcbx_stats_s { + u32 subtlvs_unrecognized; + u32 negotiation_failed; + u32 remote_cfg_changed; + u32 tlvs_received; + u32 tlvs_invalid; + u32 seqno; + u32 ackno; + u32 recvd_seqno; + u32 recvd_ackno; +}; + +struct bfa_cee_cfg_stats_s { + u32 cee_status_down; + u32 cee_status_up; + u32 cee_hw_cfg_changed; + u32 recvd_invalid_cfg; +}; + + +struct bfa_cee_stats_s { + struct bfa_cee_lldp_stats_s lldp_stats; + struct bfa_cee_dcbx_stats_s dcbx_stats; + struct bfa_cee_cfg_stats_s cfg_stats; +}; + +#pragma pack() + + +#endif /* __BFA_DEFS_CEE_H__ */ + + diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_driver.h b/drivers/scsi/bfa/include/defs/bfa_defs_driver.h new file mode 100644 index 000000000000..57049805762b --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_driver.h @@ -0,0 +1,40 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __BFA_DEFS_DRIVER_H__
+#define __BFA_DEFS_DRIVER_H__
+
+/**
+ * Driver statistics
+ */
+typedef struct {
+	u16 tm_io_abort;
+	u16 tm_io_abort_comp;
+	u16 tm_lun_reset;
+	u16 tm_lun_reset_comp;
+	u16 tm_target_reset;
+	u16 tm_bus_reset;
+	u16 ioc_restart; /* IOC restart count */
+	u16 io_pending; /* outstanding io count per-IOC */
+	u64 control_req;
+	u64 input_req;
+	u64 output_req;
+	u64 input_words;
+	u64 output_words;
+} bfa_driver_stats_t;
+
+
+#endif /* __BFA_DEFS_DRIVER_H__ */
diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_ethport.h b/drivers/scsi/bfa/include/defs/bfa_defs_ethport.h
new file mode 100644
index 000000000000..79f9b3e146f7
--- /dev/null
+++ b/drivers/scsi/bfa/include/defs/bfa_defs_ethport.h
@@ -0,0 +1,98 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ */ + +#ifndef __BFA_DEFS_ETHPORT_H__ +#define __BFA_DEFS_ETHPORT_H__ + +#include +#include +#include +#include + +struct bna_tx_info_s { + u32 miniport_state; + u32 adapter_state; + u64 tx_count; + u64 tx_wi; + u64 tx_sg; + u64 tx_tcp_chksum; + u64 tx_udp_chksum; + u64 tx_ip_chksum; + u64 tx_lsov1; + u64 tx_lsov2; + u64 tx_max_sg_len ; +}; + +struct bna_rx_queue_info_s { + u16 q_id ; + u16 buf_size ; + u16 buf_count ; + u16 rsvd ; + u64 rx_count ; + u64 rx_dropped ; + u64 rx_unsupported ; + u64 rx_internal_err ; + u64 rss_count ; + u64 vlan_count ; + u64 rx_tcp_chksum ; + u64 rx_udp_chksum ; + u64 rx_ip_chksum ; + u64 rx_hds ; +}; + +struct bna_rx_q_set_s { + u16 q_set_type; + u32 miniport_state; + u32 adapter_state; + struct bna_rx_queue_info_s rx_queue[2]; +}; + +struct bna_port_stats_s { + struct bna_tx_info_s tx_stats; + u16 qset_count ; + struct bna_rx_q_set_s rx_qset[8]; +}; + +struct bfa_ethport_stats_s { + struct bna_stats_txf txf_stats[1]; + struct bna_stats_rxf rxf_stats[1]; + struct bnad_drv_stats drv_stats; +}; + +/** + * Ethernet port events + * Arguments below are in BFAL context from Mgmt + * BFA_PORT_AEN_ETH_LINKUP: [in]: mac [out]: mac + * BFA_PORT_AEN_ETH_LINKDOWN: [in]: mac [out]: mac + * BFA_PORT_AEN_ETH_ENABLE: [in]: mac [out]: mac + * BFA_PORT_AEN_ETH_DISABLE: [in]: mac [out]: mac + * + */ +enum bfa_ethport_aen_event { + BFA_ETHPORT_AEN_LINKUP = 1, /* Base Port Ethernet link up event */ + BFA_ETHPORT_AEN_LINKDOWN = 2, /* Base Port Ethernet link down event */ + BFA_ETHPORT_AEN_ENABLE = 3, /* Base Port Ethernet link enable event */ + BFA_ETHPORT_AEN_DISABLE = 4, /* Base Port Ethernet link disable + * event */ +}; + +struct bfa_ethport_aen_data_s { + mac_t mac; /* MAC address of the physical port */ +}; + + +#endif /* __BFA_DEFS_ETHPORT_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_fcpim.h b/drivers/scsi/bfa/include/defs/bfa_defs_fcpim.h new file mode 100644 index 000000000000..c08f4f5026ac --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_fcpim.h @@ -0,0 +1,45 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */
+#ifndef __BFA_DEFS_FCPIM_H__
+#define __BFA_DEFS_FCPIM_H__
+
+struct bfa_fcpim_stats_s {
+	u32 total_ios; /* Total IO count */
+	u32 qresumes; /* IO waiting for CQ space */
+	u32 no_iotags; /* NO IO contexts */
+	u32 io_aborts; /* IO abort requests */
+	u32 no_tskims; /* NO task management contexts */
+	u32 iocomp_ok; /* IO completions with OK status */
+	u32 iocomp_underrun; /* IO underrun (good) */
+	u32 iocomp_overrun; /* IO overrun (good) */
+	u32 iocomp_aborted; /* Aborted IO requests */
+	u32 iocomp_timedout; /* IO timeouts */
+	u32 iocom_nexus_abort; /* IO selection timeouts */
+	u32 iocom_proto_err; /* IO protocol errors */
+	u32 iocom_dif_err; /* IO SBC-3 protection errors */
+	u32 iocom_tm_abort; /* IO aborted by TM requests */
+	u32 iocom_sqer_needed; /* IO retry for SQ error recovery */
+	u32 iocom_res_free; /* Delayed freeing of IO resources */
+	u32 iocomp_scsierr; /* IO with non-good SCSI status */
+	u32 iocom_hostabrts; /* Host IO abort requests */
+	u32 iocom_utags; /* IO comp with unknown tags */
+	u32 io_cleanups; /* IO implicitly aborted */
+	u32 io_tmaborts; /* IO aborted due to TM commands */
+	u32 rsvd;
+};
+#endif /* __BFA_DEFS_FCPIM_H__ */
diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_im_common.h b/drivers/scsi/bfa/include/defs/bfa_defs_im_common.h
new file mode 100644
index 000000000000..9ccf53bef65a
--- /dev/null
+++ b/drivers/scsi/bfa/include/defs/bfa_defs_im_common.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __BFA_DEFS_IM_COMMON_H__
+#define __BFA_DEFS_IM_COMMON_H__
+
+#define BFA_ADAPTER_NAME_LEN 256
+#define BFA_ADAPTER_GUID_LEN 256
+#define RESERVED_VLAN_NAME L"PORT VLAN"
+#define PASSTHRU_VLAN_NAME L"PASSTHRU VLAN"
+
+typedef struct {
+	u64 tx_pkt_cnt;
+	u64 rx_pkt_cnt;
+	u32 duration;
+	u8 status;
+} bfa_im_stats_t, *pbfa_im_stats_t;
+
+#endif /* __BFA_DEFS_IM_COMMON_H__ */
diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_im_team.h b/drivers/scsi/bfa/include/defs/bfa_defs_im_team.h
new file mode 100644
index 000000000000..a486a7eb81d6
--- /dev/null
+++ b/drivers/scsi/bfa/include/defs/bfa_defs_im_team.h
@@ -0,0 +1,72 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __BFA_DEFS_IM_TEAM_H__
+#define __BFA_DEFS_IM_TEAM_H__
+
+#include
+
+#define BFA_TEAM_MAX_PORTS 8
+#define BFA_TEAM_NAME_LEN 256
+#define BFA_MAX_NUM_TEAMS 16
+#define BFA_TEAM_INVALID_DELAY -1
+
+typedef enum {
+	BFA_LACP_RATE_SLOW = 1,
+	BFA_LACP_RATE_FAST
+} bfa_im_lacp_rate_t;
+
+typedef enum {
+	BFA_TEAM_MODE_FAIL_OVER = 1,
+	BFA_TEAM_MODE_FAIL_BACK,
+	BFA_TEAM_MODE_LACP,
+	BFA_TEAM_MODE_NONE
+} bfa_im_team_mode_t;
+
+typedef enum {
+	BFA_XMIT_POLICY_L2 = 1,
+	BFA_XMIT_POLICY_L3_L4
+} bfa_im_xmit_policy_t;
+
+typedef struct {
+	bfa_im_team_mode_t team_mode;
+	bfa_im_lacp_rate_t lacp_rate;
+	bfa_im_xmit_policy_t xmit_policy;
+	int delay;
+	wchar_t primary[BFA_ADAPTER_NAME_LEN];
+	wchar_t preferred_primary[BFA_ADAPTER_NAME_LEN];
+	mac_t mac;
+	u16 num_ports;
+	u16 num_vlans;
+	u16 vlan_list[BFA_MAX_VLANS_PER_PORT];
+	wchar_t team_guid_list[BFA_TEAM_MAX_PORTS][BFA_ADAPTER_GUID_LEN];
+	wchar_t ioc_name_list[BFA_TEAM_MAX_PORTS][BFA_ADAPTER_NAME_LEN];
+} bfa_im_team_attr_t;
+
+typedef struct {
+	wchar_t team_name[BFA_TEAM_NAME_LEN];
+	bfa_im_xmit_policy_t xmit_policy;
+	int delay;
+	wchar_t primary[BFA_ADAPTER_NAME_LEN];
+	wchar_t preferred_primary[BFA_ADAPTER_NAME_LEN];
+} bfa_im_team_edit_t, *pbfa_im_team_edit_t;
+
+typedef struct {
+	wchar_t team_name[BFA_TEAM_NAME_LEN];
+	bfa_im_team_mode_t team_mode;
+	mac_t mac;
+} bfa_im_team_info_t;
+
+typedef struct {
+	bfa_im_team_info_t team_info[BFA_MAX_NUM_TEAMS];
+	u16 num_teams;
+} bfa_im_team_list_t, *pbfa_im_team_list_t;
+
+#endif /* __BFA_DEFS_IM_TEAM_H__ */
diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_ioc.h b/drivers/scsi/bfa/include/defs/bfa_defs_ioc.h
new file mode 100644
index 000000000000..b1d532da3a9d
--- /dev/null
+++ b/drivers/scsi/bfa/include/defs/bfa_defs_ioc.h
@@ -0,0 +1,152 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __BFA_DEFS_IOC_H__
+#define __BFA_DEFS_IOC_H__
+
+#include
+#include
+#include
+#include
+#include
+
+enum {
+	BFA_IOC_DRIVER_LEN = 16,
+	BFA_IOC_CHIP_REV_LEN = 8,
+};
+
+/**
+ * Driver and firmware versions.
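+ *
+ * Consumer sketch (illustrative only; the ioc_attr variable is
+ * hypothetical, and the fields are assumed to be NUL-terminated,
+ * fixed-size strings as sized by the constants above):
+ *
+ *	struct bfa_ioc_attr_s ioc_attr;
+ *
+ *	pr_info("driver %s %s, firmware %s\n",
+ *		ioc_attr.driver_attr.driver,
+ *		ioc_attr.driver_attr.driver_ver,
+ *		ioc_attr.driver_attr.fw_ver);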
+ */ +struct bfa_ioc_driver_attr_s { + char driver[BFA_IOC_DRIVER_LEN]; /* driver name */ + char driver_ver[BFA_VERSION_LEN]; /* driver version */ + char fw_ver[BFA_VERSION_LEN]; /* firmware version*/ + char bios_ver[BFA_VERSION_LEN]; /* bios version */ + char efi_ver[BFA_VERSION_LEN]; /* EFI version */ + char ob_ver[BFA_VERSION_LEN]; /* openboot version*/ +}; + +/** + * IOC PCI device attributes + */ +struct bfa_ioc_pci_attr_s { + u16 vendor_id; /* PCI vendor ID */ + u16 device_id; /* PCI device ID */ + u16 ssid; /* subsystem ID */ + u16 ssvid; /* subsystem vendor ID */ + u32 pcifn; /* PCI device function */ + u32 rsvd; /* padding */ + u8 chip_rev[BFA_IOC_CHIP_REV_LEN]; /* chip revision */ +}; + +/** + * IOC states + */ +enum bfa_ioc_state { + BFA_IOC_RESET = 1, /* IOC is in reset state */ + BFA_IOC_SEMWAIT = 2, /* Waiting for IOC hardware semaphore */ + BFA_IOC_HWINIT = 3, /* IOC hardware is being initialized */ + BFA_IOC_GETATTR = 4, /* IOC is being configured */ + BFA_IOC_OPERATIONAL = 5, /* IOC is operational */ + BFA_IOC_INITFAIL = 6, /* IOC hardware failure */ + BFA_IOC_HBFAIL = 7, /* IOC heart-beat failure */ + BFA_IOC_DISABLING = 8, /* IOC is being disabled */ + BFA_IOC_DISABLED = 9, /* IOC is disabled */ + BFA_IOC_FWMISMATCH = 10, /* IOC firmware different from drivers */ +}; + +/** + * IOC firmware stats + */ +struct bfa_fw_ioc_stats_s { + u32 hb_count; + u32 cfg_reqs; + u32 enable_reqs; + u32 disable_reqs; + u32 stats_reqs; + u32 clrstats_reqs; + u32 unknown_reqs; + u32 ic_reqs; /* interrupt coalesce reqs */ +}; + +/** + * IOC driver stats + */ +struct bfa_ioc_drv_stats_s { + u32 ioc_isrs; + u32 ioc_enables; + u32 ioc_disables; + u32 ioc_hbfails; + u32 ioc_boots; + u32 stats_tmos; + u32 hb_count; + u32 disable_reqs; + u32 enable_reqs; + u32 disable_replies; + u32 enable_replies; +}; + +/** + * IOC statistics + */ +struct bfa_ioc_stats_s { + struct bfa_ioc_drv_stats_s drv_stats; /* driver IOC stats */ + struct bfa_fw_ioc_stats_s fw_stats; /* firmware IOC stats */ +}; + + +enum bfa_ioc_type_e { + BFA_IOC_TYPE_FC = 1, + BFA_IOC_TYPE_FCoE = 2, + BFA_IOC_TYPE_LL = 3, +}; + +/** + * IOC attributes returned in queries + */ +struct bfa_ioc_attr_s { + enum bfa_ioc_type_e ioc_type; + enum bfa_ioc_state state; /* IOC state */ + struct bfa_adapter_attr_s adapter_attr; /* HBA attributes */ + struct bfa_ioc_driver_attr_s driver_attr; /* driver attr */ + struct bfa_ioc_pci_attr_s pci_attr; + u8 port_id; /* port number */ +}; + +/** + * BFA IOC level events + */ +enum bfa_ioc_aen_event { + BFA_IOC_AEN_HBGOOD = 1, /* Heart Beat restore event */ + BFA_IOC_AEN_HBFAIL = 2, /* Heart Beat failure event */ + BFA_IOC_AEN_ENABLE = 3, /* IOC enabled event */ + BFA_IOC_AEN_DISABLE = 4, /* IOC disabled event */ + BFA_IOC_AEN_FWMISMATCH = 5, /* IOC firmware mismatch */ +}; + +/** + * BFA IOC level event data, now just a place holder + */ +struct bfa_ioc_aen_data_s { + enum bfa_ioc_type_e ioc_type; + wwn_t pwwn; + mac_t mac; +}; + +#endif /* __BFA_DEFS_IOC_H__ */ + diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_iocfc.h b/drivers/scsi/bfa/include/defs/bfa_defs_iocfc.h new file mode 100644 index 000000000000..d76bcbd9820f --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_iocfc.h @@ -0,0 +1,310 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_IOCFC_H__ +#define __BFA_DEFS_IOCFC_H__ + +#include +#include +#include +#include +#include + +#define BFA_IOCFC_INTR_DELAY 1125 +#define BFA_IOCFC_INTR_LATENCY 225 + +/** + * Interrupt coalescing configuration. + */ +struct bfa_iocfc_intr_attr_s { + bfa_boolean_t coalesce; /* enable/disable coalescing */ + u16 latency; /* latency in microseconds */ + u16 delay; /* delay in microseconds */ +}; + +/** + * IOC firmware configuraton + */ +struct bfa_iocfc_fwcfg_s { + u16 num_fabrics; /* number of fabrics */ + u16 num_lports; /* number of local lports */ + u16 num_rports; /* number of remote ports */ + u16 num_ioim_reqs; /* number of IO reqs */ + u16 num_tskim_reqs; /* task management requests */ + u16 num_iotm_reqs; /* number of TM IO reqs */ + u16 num_tsktm_reqs; /* TM task management requests*/ + u16 num_fcxp_reqs; /* unassisted FC exchanges */ + u16 num_uf_bufs; /* unsolicited recv buffers */ + u8 num_cqs; + u8 rsvd; +}; + +struct bfa_iocfc_drvcfg_s { + u16 num_reqq_elems; /* number of req queue elements */ + u16 num_rspq_elems; /* number of rsp queue elements */ + u16 num_sgpgs; /* number of total SG pages */ + u16 num_sboot_tgts; /* number of SAN boot targets */ + u16 num_sboot_luns; /* number of SAN boot luns */ + u16 ioc_recover; /* IOC recovery mode */ + u16 min_cfg; /* minimum configuration */ + u16 path_tov; /* device path timeout */ + bfa_boolean_t delay_comp; /* delay completion of + failed inflight IOs */ + u32 rsvd; +}; +/** + * IOC configuration + */ +struct bfa_iocfc_cfg_s { + struct bfa_iocfc_fwcfg_s fwcfg; /* firmware side config */ + struct bfa_iocfc_drvcfg_s drvcfg; /* driver side config */ +}; + +/** + * IOC firmware IO stats + */ +struct bfa_fw_io_stats_s { + u32 host_abort; /* IO aborted by host driver*/ + u32 host_cleanup; /* IO clean up by host driver */ + + u32 fw_io_timeout; /* IOs timedout */ + u32 fw_frm_parse; /* frame parsed by f/w */ + u32 fw_frm_data; /* fcp_data frame parsed by f/w */ + u32 fw_frm_rsp; /* fcp_rsp frame parsed by f/w */ + u32 fw_frm_xfer_rdy; /* xfer_rdy frame parsed by f/w */ + u32 fw_frm_bls_acc; /* BLS ACC frame parsed by f/w */ + u32 fw_frm_tgt_abort; /* target ABTS parsed by f/w */ + u32 fw_frm_unknown; /* unknown parsed by f/w */ + u32 fw_data_dma; /* f/w DMA'ed the data frame */ + u32 fw_frm_drop; /* f/w drop the frame */ + + u32 rec_timeout; /* FW rec timed out */ + u32 error_rec; /* FW sending rec on + * an error condition*/ + u32 wait_for_si; /* FW wait for SI */ + u32 rec_rsp_inval; /* REC rsp invalid */ + u32 seqr_io_abort; /* target does not know cmd so abort */ + u32 seqr_io_retry; /* SEQR failed so retry IO */ + + u32 itn_cisc_upd_rsp; /* ITN cisc updated on fcp_rsp */ + u32 itn_cisc_upd_data; /* ITN cisc updated on fcp_data */ + u32 itn_cisc_upd_xfer_rdy; /* ITN cisc updated on fcp_data */ + + u32 fcp_data_lost; /* fcp data lost */ + + u32 ro_set_in_xfer_rdy; /* Target set RO in Xfer_rdy frame */ + u32 xfer_rdy_ooo_err; /* Out of order Xfer_rdy received */ + u32 xfer_rdy_unknown_err; /* unknown error in xfer_rdy frame */ + + u32 io_abort_timeout; /* ABTS 
timedout */ + u32 sler_initiated; /* SLER initiated */ + + u32 unexp_fcp_rsp; /* fcp response in wrong state */ + + u32 fcp_rsp_under_run; /* fcp rsp IO underrun */ + u32 fcp_rsp_under_run_wr; /* fcp rsp IO underrun for write */ + u32 fcp_rsp_under_run_err; /* fcp rsp IO underrun error */ + u32 fcp_rsp_resid_inval; /* invalid residue */ + u32 fcp_rsp_over_run; /* fcp rsp IO overrun */ + u32 fcp_rsp_over_run_err; /* fcp rsp IO overrun error */ + u32 fcp_rsp_proto_err; /* protocol error in fcp rsp */ + u32 fcp_rsp_sense_err; /* error in sense info in fcp rsp */ + u32 fcp_conf_req; /* FCP conf requested */ + + u32 tgt_aborted_io; /* target initiated abort */ + + u32 ioh_edtov_timeout_event;/* IOH edtov timer popped */ + u32 ioh_fcp_rsp_excp_event; /* IOH FCP_RSP exception */ + u32 ioh_fcp_conf_event; /* IOH FCP_CONF */ + u32 ioh_mult_frm_rsp_event; /* IOH multi_frame FCP_RSP */ + u32 ioh_hit_class2_event; /* IOH hit class2 */ + u32 ioh_miss_other_event; /* IOH miss other */ + u32 ioh_seq_cnt_err_event; /* IOH seq cnt error */ + u32 ioh_len_err_event; /* IOH len error - fcp_dl != + * bytes xfered */ + u32 ioh_seq_len_err_event; /* IOH seq len error */ + u32 ioh_data_oor_event; /* Data out of range */ + u32 ioh_ro_ooo_event; /* Relative offset out of range */ + u32 ioh_cpu_owned_event; /* IOH hit -iost owned by f/w */ + u32 ioh_unexp_frame_event; /* unexpected frame recieved + * count */ + u32 ioh_err_int; /* IOH error int during data-phase + * for scsi write + */ +}; + +/** + * IOC port firmware stats + */ + +struct bfa_fw_port_fpg_stats_s { + u32 intr_evt; + u32 intr; + u32 intr_excess; + u32 intr_cause0; + u32 intr_other; + u32 intr_other_ign; + u32 sig_lost; + u32 sig_regained; + u32 sync_lost; + u32 sync_to; + u32 sync_regained; + u32 div2_overflow; + u32 div2_underflow; + u32 efifo_overflow; + u32 efifo_underflow; + u32 idle_rx; + u32 lrr_rx; + u32 lr_rx; + u32 ols_rx; + u32 nos_rx; + u32 lip_rx; + u32 arbf0_rx; + u32 mrk_rx; + u32 const_mrk_rx; + u32 prim_unknown; + u32 rsvd; +}; + + +struct bfa_fw_port_lksm_stats_s { + u32 hwsm_success; /* hwsm state machine success */ + u32 hwsm_fails; /* hwsm fails */ + u32 hwsm_wdtov; /* hwsm timed out */ + u32 swsm_success; /* swsm success */ + u32 swsm_fails; /* swsm fails */ + u32 swsm_wdtov; /* swsm timed out */ + u32 busybufs; /* link init failed due to busybuf */ + u32 buf_waits; /* bufwait state entries */ + u32 link_fails; /* link failures */ + u32 psp_errors; /* primitive sequence protocol errors */ + u32 lr_unexp; /* No. of times LR rx-ed unexpectedly */ + u32 lrr_unexp; /* No. of times LRR rx-ed unexpectedly */ + u32 lr_tx; /* No. of times LR tx started */ + u32 lrr_tx; /* No. of times LRR tx started */ + u32 ols_tx; /* No. of times OLS tx started */ + u32 nos_tx; /* No. 
of times NOS tx started */ +}; + + +struct bfa_fw_port_snsm_stats_s { + u32 hwsm_success; /* Successful hwsm terminations */ + u32 hwsm_fails; /* hwsm fail count */ + u32 hwsm_wdtov; /* hwsm timed out */ + u32 swsm_success; /* swsm success */ + u32 swsm_wdtov; /* swsm timed out */ + u32 error_resets; /* error resets initiated by upsm */ + u32 sync_lost; /* Sync loss count */ + u32 sig_lost; /* Signal loss count */ +}; + + +struct bfa_fw_port_physm_stats_s { + u32 module_inserts; /* Module insert count */ + u32 module_xtracts; /* Module extracts count */ + u32 module_invalids; /* Invalid module inserted count */ + u32 module_read_ign; /* Module validation status ignored */ + u32 laser_faults; /* Laser fault count */ + u32 rsvd; +}; + + +struct bfa_fw_fip_stats_s { + u32 disc_req; /* Discovery solicit requests */ + u32 disc_rsp; /* Discovery solicit response */ + u32 disc_err; /* Discovery advt. parse errors */ + u32 disc_unsol; /* Discovery unsolicited */ + u32 disc_timeouts; /* Discovery timeouts */ + u32 linksvc_unsupp; /* Unsupported link service req */ + u32 linksvc_err; /* Parse error in link service req */ + u32 logo_req; /* Number of FIP logos received */ + u32 clrvlink_req; /* Clear virtual link req */ + u32 op_unsupp; /* Unsupported FIP operation */ + u32 untagged; /* Untagged frames (ignored) */ + u32 rsvd; +}; + + +struct bfa_fw_lps_stats_s { + u32 mac_invalids; /* Invalid mac assigned */ + u32 rsvd; +}; + + +struct bfa_fw_fcoe_stats_s { + u32 cee_linkups; /* CEE link up count */ + u32 cee_linkdns; /* CEE link down count */ + u32 fip_linkups; /* FIP link up count */ + u32 fip_linkdns; /* FIP link up count */ + u32 fip_fails; /* FIP fail count */ + u32 mac_invalids; /* Invalid mac assigned */ +}; + +/** + * IOC firmware FCoE port stats + */ +struct bfa_fw_fcoe_port_stats_s { + struct bfa_fw_fcoe_stats_s fcoe_stats; + struct bfa_fw_fip_stats_s fip_stats; +}; + +/** + * IOC firmware FC port stats + */ +struct bfa_fw_fc_port_stats_s { + struct bfa_fw_port_fpg_stats_s fpg_stats; + struct bfa_fw_port_physm_stats_s physm_stats; + struct bfa_fw_port_snsm_stats_s snsm_stats; + struct bfa_fw_port_lksm_stats_s lksm_stats; +}; + +/** + * IOC firmware FC port stats + */ +union bfa_fw_port_stats_s { + struct bfa_fw_fc_port_stats_s fc_stats; + struct bfa_fw_fcoe_port_stats_s fcoe_stats; +}; + +/** + * IOC firmware stats + */ +struct bfa_fw_stats_s { + struct bfa_fw_ioc_stats_s ioc_stats; + struct bfa_fw_io_stats_s io_stats; + union bfa_fw_port_stats_s port_stats; +}; + +/** + * IOC statistics + */ +struct bfa_iocfc_stats_s { + struct bfa_fw_stats_s fw_stats; /* firmware IOC stats */ +}; + +/** + * IOC attributes returned in queries + */ +struct bfa_iocfc_attr_s { + struct bfa_iocfc_cfg_s config; /* IOCFC config */ + struct bfa_iocfc_intr_attr_s intr_attr; /* interrupt attr */ +}; + +#define BFA_IOCFC_PATHTOV_MAX 60 +#define BFA_IOCFC_QDEPTH_MAX 2000 + +#endif /* __BFA_DEFS_IOC_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_ipfc.h b/drivers/scsi/bfa/include/defs/bfa_defs_ipfc.h new file mode 100644 index 000000000000..7cb63ea98f38 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_ipfc.h @@ -0,0 +1,70 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_IPFC_H__ +#define __BFA_DEFS_IPFC_H__ + +#include +#include +#include + +/** + * FCS ip remote port states + */ +enum bfa_iprp_state { + BFA_IPRP_UNINIT = 0, /* PORT is not yet initialized */ + BFA_IPRP_ONLINE = 1, /* process login is complete */ + BFA_IPRP_OFFLINE = 2, /* iprp is offline */ +}; + +/** + * FCS remote port statistics + */ +struct bfa_iprp_stats_s { + u32 offlines; + u32 onlines; + u32 rscns; + u32 plogis; + u32 logos; + u32 plogi_timeouts; + u32 plogi_rejects; +}; + +/** + * FCS iprp attribute returned in queries + */ +struct bfa_iprp_attr_s { + enum bfa_iprp_state state; +}; + +struct bfa_ipfc_stats_s { + u32 arp_sent; + u32 arp_recv; + u32 arp_reply_sent; + u32 arp_reply_recv; + u32 farp_sent; + u32 farp_recv; + u32 farp_reply_sent; + u32 farp_reply_recv; + u32 farp_reject_sent; + u32 farp_reject_recv; +}; + +struct bfa_ipfc_attr_s { + bfa_boolean_t enabled; +}; + +#endif /* __BFA_DEFS_IPFC_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_itnim.h b/drivers/scsi/bfa/include/defs/bfa_defs_itnim.h new file mode 100644 index 000000000000..2ec769903d24 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_itnim.h @@ -0,0 +1,126 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ +#ifndef __BFA_DEFS_ITNIM_H__ +#define __BFA_DEFS_ITNIM_H__ + +#include +#include + +/** + * FCS itnim states + */ +enum bfa_itnim_state { + BFA_ITNIM_OFFLINE = 0, /* offline */ + BFA_ITNIM_PRLI_SEND = 1, /* prli send */ + BFA_ITNIM_PRLI_SENT = 2, /* prli sent */ + BFA_ITNIM_PRLI_RETRY = 3, /* prli retry */ + BFA_ITNIM_HCB_ONLINE = 4, /* online callback */ + BFA_ITNIM_ONLINE = 5, /* online */ + BFA_ITNIM_HCB_OFFLINE = 6, /* offline callback */ + BFA_ITNIM_INITIATIOR = 7, /* initiator */ +}; + +struct bfa_itnim_hal_stats_s { + u32 onlines; /* ITN nexus onlines (PRLI done) */ + u32 offlines; /* ITN Nexus offlines */ + u32 creates; /* ITN create requests */ + u32 deletes; /* ITN delete requests */ + u32 create_comps; /* ITN create completions */ + u32 delete_comps; /* ITN delete completions */ + u32 sler_events; /* SLER (sequence level error + * recovery) events */ + u32 ioc_disabled; /* Num IOC disables */ + u32 cleanup_comps; /* ITN cleanup completions */ + u32 tm_cmnds; /* task management(TM) cmnds sent */ + u32 tm_fw_rsps; /* TM cmds firmware responses */ + u32 tm_success; /* TM successes */ + u32 tm_failures; /* TM failures */ + u32 tm_io_comps; /* TM IO completions */ + u32 tm_qresumes; /* TM queue resumes (after waiting + * for resources) + */ + u32 tm_iocdowns; /* TM cmnds affected by IOC down */ + u32 tm_cleanups; /* TM cleanups */ + u32 tm_cleanup_comps; + /* TM cleanup completions */ + u32 ios; /* IO requests */ + u32 io_comps; /* IO completions */ + u64 input_reqs; /* INPUT requests */ + u64 output_reqs; /* OUTPUT requests */ +}; + +/** + * FCS remote port statistics + */ +struct bfa_itnim_stats_s { + u32 onlines; /* num rport online */ + u32 offlines; /* num rport offline */ + u32 prli_sent; /* num prli sent out */ + u32 fcxp_alloc_wait;/* num fcxp alloc waits */ + u32 prli_rsp_err; /* num prli rsp errors */ + u32 prli_rsp_acc; /* num prli rsp accepts */ + u32 initiator; /* rport is an initiator */ + u32 prli_rsp_parse_err; /* prli rsp parsing errors */ + u32 prli_rsp_rjt; /* num prli rsp rejects */ + u32 timeout; /* num timeouts detected */ + u32 sler; /* num sler notification from BFA */ + u32 rsvd; + struct bfa_itnim_hal_stats_s hal_stats; +}; + +/** + * FCS itnim attributes returned in queries + */ +struct bfa_itnim_attr_s { + enum bfa_itnim_state state; /* FCS itnim state */ + u8 retry; /* data retransmision support */ + u8 task_retry_id; /* task retry ident support */ + u8 rec_support; /* REC supported */ + u8 conf_comp; /* confirmed completion supp */ +}; + +/** + * BFA ITNIM events. + * Arguments below are in BFAL context from Mgmt + * BFA_ITNIM_AEN_NEW: [in]: None [out]: vf_id, lpwwn + * BFA_ITNIM_AEN_DELETE: [in]: vf_id, lpwwn, rpwwn (0 = all fcp4 targets), + * [out]: vf_id, ppwwn, lpwwn, rpwwn + * BFA_ITNIM_AEN_ONLINE: [in]: vf_id, lpwwn, rpwwn (0 = all fcp4 targets), + * [out]: vf_id, ppwwn, lpwwn, rpwwn + * BFA_ITNIM_AEN_OFFLINE: [in]: vf_id, lpwwn, rpwwn (0 = all fcp4 targets), + * [out]: vf_id, ppwwn, lpwwn, rpwwn + * BFA_ITNIM_AEN_DISCONNECT:[in]: vf_id, lpwwn, rpwwn (0 = all fcp4 targets), + * [out]: vf_id, ppwwn, lpwwn, rpwwn + */ +enum bfa_itnim_aen_event { + BFA_ITNIM_AEN_ONLINE = 1, /* Target online */ + BFA_ITNIM_AEN_OFFLINE = 2, /* Target offline */ + BFA_ITNIM_AEN_DISCONNECT = 3, /* Target disconnected */ +}; + +/** + * BFA ITNIM event data structure. 
+ */ +struct bfa_itnim_aen_data_s { + u16 vf_id; /* vf_id of the IT nexus */ + u16 rsvd[3]; + wwn_t ppwwn; /* WWN of its physical port */ + wwn_t lpwwn; /* WWN of logical port */ + wwn_t rpwwn; /* WWN of remote(target) port */ +}; + +#endif /* __BFA_DEFS_ITNIM_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_led.h b/drivers/scsi/bfa/include/defs/bfa_defs_led.h new file mode 100644 index 000000000000..62039273264e --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_led.h @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_LED_H__ +#define __BFA_DEFS_LED_H__ + +#define BFA_LED_MAX_NUM 3 + +enum bfa_led_op { + BFA_LED_OFF = 0, + BFA_LED_ON = 1, + BFA_LED_FLICK = 2, + BFA_LED_BLINK = 3, +}; + +enum bfa_led_color { + BFA_LED_GREEN = 0, + BFA_LED_AMBER = 1, +}; + +#endif /* __BFA_DEFS_LED_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_lport.h b/drivers/scsi/bfa/include/defs/bfa_defs_lport.h new file mode 100644 index 000000000000..7359f82aacfc --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_lport.h @@ -0,0 +1,68 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_LPORT_H__ +#define __BFA_DEFS_LPORT_H__ + +#include +#include + +/** + * BFA AEN logical port events. + * Arguments below are in BFAL context from Mgmt + * BFA_LPORT_AEN_NEW: [in]: None [out]: vf_id, ppwwn, lpwwn, roles + * BFA_LPORT_AEN_DELETE: [in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_ONLINE: [in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_OFFLINE: [in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_DISCONNECT:[in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_NEW_PROP: [in]: None [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_DELETE_PROP: [in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_NEW_STANDARD: [in]: None [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_DELETE_STANDARD: [in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_NPIV_DUP_WWN: [in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_NPIV_FABRIC_MAX: [in]: lpwwn [out]: vf_id, ppwwn. lpwwn, roles + * BFA_LPORT_AEN_NPIV_UNKNOWN: [in]: lpwwn [out]: vf_id, ppwwn. 
lpwwn, roles + */ +enum bfa_lport_aen_event { + BFA_LPORT_AEN_NEW = 1, /* LPort created event */ + BFA_LPORT_AEN_DELETE = 2, /* LPort deleted event */ + BFA_LPORT_AEN_ONLINE = 3, /* LPort online event */ + BFA_LPORT_AEN_OFFLINE = 4, /* LPort offline event */ + BFA_LPORT_AEN_DISCONNECT = 5, /* LPort disconnect event */ + BFA_LPORT_AEN_NEW_PROP = 6, /* VPort created event */ + BFA_LPORT_AEN_DELETE_PROP = 7, /* VPort deleted event */ + BFA_LPORT_AEN_NEW_STANDARD = 8, /* VPort created event */ + BFA_LPORT_AEN_DELETE_STANDARD = 9, /* VPort deleted event */ + BFA_LPORT_AEN_NPIV_DUP_WWN = 10, /* VPort configured with + * duplicate WWN event + */ + BFA_LPORT_AEN_NPIV_FABRIC_MAX = 11, /* Max NPIV in fabric/fport */ + BFA_LPORT_AEN_NPIV_UNKNOWN = 12, /* Unknown NPIV Error code event */ +}; + +/** + * BFA AEN event data structure + */ +struct bfa_lport_aen_data_s { + u16 vf_id; /* vf_id of this logical port */ + u16 rsvd; + enum bfa_port_role roles; /* Logical port mode,IM/TM/IP etc */ + wwn_t ppwwn; /* WWN of its physical port */ + wwn_t lpwwn; /* WWN of this logical port */ +}; + +#endif /* __BFA_DEFS_LPORT_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_mfg.h b/drivers/scsi/bfa/include/defs/bfa_defs_mfg.h new file mode 100644 index 000000000000..13fd4ab6aae2 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_mfg.h @@ -0,0 +1,58 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_MFG_H__ +#define __BFA_DEFS_MFG_H__ + +#include + +/** + * Manufacturing block version + */ +#define BFA_MFG_VERSION 1 + +/** + * Manufacturing block format + */ +#define BFA_MFG_SERIALNUM_SIZE 11 +#define BFA_MFG_PARTNUM_SIZE 14 +#define BFA_MFG_SUPPLIER_ID_SIZE 10 +#define BFA_MFG_SUPPLIER_PARTNUM_SIZE 20 +#define BFA_MFG_SUPPLIER_SERIALNUM_SIZE 20 +#define BFA_MFG_SUPPLIER_REVISION_SIZE 4 +#define STRSZ(_n) (((_n) + 4) & ~3) + +/** + * VPD data length + */ +#define BFA_MFG_VPD_LEN 256 + +/** + * All numerical fields are in big-endian format. + */ +struct bfa_mfg_vpd_s { + u8 version; /* vpd data version */ + u8 vpd_sig[3]; /* characters 'V', 'P', 'D' */ + u8 chksum; /* u8 checksum */ + u8 vendor; /* vendor */ + u8 len; /* vpd data length excluding header */ + u8 rsv; + u8 data[BFA_MFG_VPD_LEN]; /* vpd data */ +}; + +#pragma pack(1) + +#endif /* __BFA_DEFS_MFG_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_pci.h b/drivers/scsi/bfa/include/defs/bfa_defs_pci.h new file mode 100644 index 000000000000..c9b83321694b --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_pci.h @@ -0,0 +1,41 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_PCI_H__ +#define __BFA_DEFS_PCI_H__ + +/** + * PCI device and vendor ID information + */ +enum { + BFA_PCI_VENDOR_ID_BROCADE = 0x1657, + BFA_PCI_DEVICE_ID_FC_8G2P = 0x13, + BFA_PCI_DEVICE_ID_FC_8G1P = 0x17, + BFA_PCI_DEVICE_ID_CT = 0x14, +}; + +/** + * PCI sub-system device and vendor ID information + */ +enum { + BFA_PCI_FCOE_SSDEVICE_ID = 0x14, +}; + +#define BFA_PCI_ACCESS_RANGES 1 /* Maximum number of device address ranges + * mapped through different BAR(s). */ + +#endif /* __BFA_DEFS_PCI_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_pm.h b/drivers/scsi/bfa/include/defs/bfa_defs_pm.h new file mode 100644 index 000000000000..e8d6d959006e --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_pm.h @@ -0,0 +1,33 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_PM_H__ +#define __BFA_DEFS_PM_H__ + +#include + +/** + * BFA power management device states + */ +enum bfa_pm_ds { + BFA_PM_DS_D0 = 0, /* full power mode */ + BFA_PM_DS_D1 = 1, /* power save state 1 */ + BFA_PM_DS_D2 = 2, /* power save state 2 */ + BFA_PM_DS_D3 = 3, /* power off state */ +}; + +#endif /* __BFA_DEFS_PM_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_pom.h b/drivers/scsi/bfa/include/defs/bfa_defs_pom.h new file mode 100644 index 000000000000..d9fa278472b7 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_pom.h @@ -0,0 +1,56 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_POM_H__ +#define __BFA_DEFS_POM_H__ + +#include +#include + +/** + * POM health status levels for each attributes. 
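+ *
+ * For illustration only (hypothetical helper, not part of this
+ * interface): a management CLI might map the levels declared below
+ * to display strings, e.g.:
+ *
+ *     static const char *pom_health_str(enum bfa_pom_entry_health h)
+ *     {
+ *             switch (h) {
+ *             case BFA_POM_HEALTH_NOINFO:  return "no information";
+ *             case BFA_POM_HEALTH_NORMAL:  return "normal";
+ *             case BFA_POM_HEALTH_WARNING: return "warning";
+ *             case BFA_POM_HEALTH_ALARM:   return "alarm";
+ *             default:                     return "invalid";
+ *             }
+ *     }
+ *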
+ */ +enum bfa_pom_entry_health { + BFA_POM_HEALTH_NOINFO = 1, /* no information */ + BFA_POM_HEALTH_NORMAL = 2, /* health is normal */ + BFA_POM_HEALTH_WARNING = 3, /* warning level */ + BFA_POM_HEALTH_ALARM = 4, /* alarming level */ +}; + +/** + * Reading of temperature/voltage/current/power + */ +struct bfa_pom_entry_s { + enum bfa_pom_entry_health health; /* POM entry health */ + u32 curr_value; /* current value */ + u32 thr_warn_high; /* threshold warning high */ + u32 thr_warn_low; /* threshold warning low */ + u32 thr_alarm_low; /* threshold alarm low */ + u32 thr_alarm_high; /* threshold alarm high */ +}; + +/** + * POM attributes + */ +struct bfa_pom_attr_s { + struct bfa_pom_entry_s temperature; /* centigrade */ + struct bfa_pom_entry_s voltage; /* volts */ + struct bfa_pom_entry_s curr; /* milli amps */ + struct bfa_pom_entry_s txpower; /* micro watts */ + struct bfa_pom_entry_s rxpower; /* micro watts */ +}; + +#endif /* __BFA_DEFS_POM_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_port.h b/drivers/scsi/bfa/include/defs/bfa_defs_port.h new file mode 100644 index 000000000000..de0696c81bc4 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_port.h @@ -0,0 +1,245 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_PORT_H__ +#define __BFA_DEFS_PORT_H__ + +#include +#include +#include +#include + +#define BFA_FCS_FABRIC_IPADDR_SZ 16 + +/** + * symbolic names for base port/virtual port + */ +#define BFA_SYMNAME_MAXLEN 128 /* vmware/windows uses 128 bytes */ +struct bfa_port_symname_s { + char symname[BFA_SYMNAME_MAXLEN]; +}; + +/** + * Roles of FCS port: + * - FCP IM and FCP TM roles cannot be enabled together for an FCS port + * - Create multiple ports if both IM and TM functions required. + * - At least one role must be specified. + */ +enum bfa_port_role { + BFA_PORT_ROLE_FCP_IM = 0x01, /* FCP initiator role */ + BFA_PORT_ROLE_FCP_TM = 0x02, /* FCP target role */ + BFA_PORT_ROLE_FCP_IPFC = 0x04, /* IP over FC role */ + BFA_PORT_ROLE_FCP_MAX = BFA_PORT_ROLE_FCP_IPFC | BFA_PORT_ROLE_FCP_IM +}; + +/** + * FCS port configuration. + */ +struct bfa_port_cfg_s { + wwn_t pwwn; /* port wwn */ + wwn_t nwwn; /* node wwn */ + struct bfa_port_symname_s sym_name; /* vm port symbolic name */ + enum bfa_port_role roles; /* FCS port roles */ + u32 rsvd; + u8 tag[16]; /* opaque tag from application */ +}; + +/** + * FCS port states + */ +enum bfa_port_state { + BFA_PORT_UNINIT = 0, /* PORT is not yet initialized */ + BFA_PORT_FDISC = 1, /* FDISC is in progress */ + BFA_PORT_ONLINE = 2, /* login to fabric is complete */ + BFA_PORT_OFFLINE = 3, /* No login to fabric */ +}; + +/** + * FCS port type. Required for VmWare. + */ +enum bfa_port_type { + BFA_PORT_TYPE_PHYSICAL = 0, + BFA_PORT_TYPE_VIRTUAL, +}; + +/** + * FCS port offline reason. Required for VmWare.
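+ *
+ * For illustration only (hypothetical, not part of this interface):
+ * the port_state byte in struct bfa_port_info_s, declared further
+ * below, carries one of the bfa_port_state values above, so a
+ * consumer might test fabric-login state as:
+ *
+ *     static int port_is_online(const struct bfa_port_info_s *info)
+ *     {
+ *             return info->port_state == BFA_PORT_ONLINE;
+ *     }
+ *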
+ */ +enum bfa_port_offline_reason { + BFA_PORT_OFFLINE_UNKNOWN = 0, + BFA_PORT_OFFLINE_LINKDOWN, + BFA_PORT_OFFLINE_FAB_UNSUPPORTED, /* NPIV not supported by the + * fabric */ + BFA_PORT_OFFLINE_FAB_NORESOURCES, + BFA_PORT_OFFLINE_FAB_LOGOUT, +}; + +/** + * FCS lport info. Required for VmWare. + */ +struct bfa_port_info_s { + u8 port_type; /* bfa_port_type_t : physical or + * virtual */ + u8 port_state; /* one of bfa_port_state values */ + u8 offline_reason; /* one of bfa_port_offline_reason_t + * values */ + wwn_t port_wwn; + wwn_t node_wwn; + + /* + * following 4 feilds are valid for Physical Ports only + */ + u32 max_vports_supp; /* Max supported vports */ + u32 num_vports_inuse; /* Num of in use vports */ + u32 max_rports_supp; /* Max supported rports */ + u32 num_rports_inuse; /* Num of doscovered rports */ + +}; + +/** + * FCS port statistics + */ +struct bfa_port_stats_s { + u32 ns_plogi_sent; + u32 ns_plogi_rsp_err; + u32 ns_plogi_acc_err; + u32 ns_plogi_accepts; + u32 ns_rejects; /* NS command rejects */ + u32 ns_plogi_unknown_rsp; + u32 ns_plogi_alloc_wait; + + u32 ns_retries; /* NS command retries */ + u32 ns_timeouts; /* NS command timeouts */ + + u32 ns_rspnid_sent; + u32 ns_rspnid_accepts; + u32 ns_rspnid_rsp_err; + u32 ns_rspnid_rejects; + u32 ns_rspnid_alloc_wait; + + u32 ns_rftid_sent; + u32 ns_rftid_accepts; + u32 ns_rftid_rsp_err; + u32 ns_rftid_rejects; + u32 ns_rftid_alloc_wait; + + u32 ns_rffid_sent; + u32 ns_rffid_accepts; + u32 ns_rffid_rsp_err; + u32 ns_rffid_rejects; + u32 ns_rffid_alloc_wait; + + u32 ns_gidft_sent; + u32 ns_gidft_accepts; + u32 ns_gidft_rsp_err; + u32 ns_gidft_rejects; + u32 ns_gidft_unknown_rsp; + u32 ns_gidft_alloc_wait; + + /* + * Mgmt Server stats + */ + u32 ms_retries; /* MS command retries */ + u32 ms_timeouts; /* MS command timeouts */ + u32 ms_plogi_sent; + u32 ms_plogi_rsp_err; + u32 ms_plogi_acc_err; + u32 ms_plogi_accepts; + u32 ms_rejects; /* NS command rejects */ + u32 ms_plogi_unknown_rsp; + u32 ms_plogi_alloc_wait; + + u32 num_rscn; /* Num of RSCN received */ + u32 num_portid_rscn;/* Num portid format RSCN + * received */ + + u32 uf_recvs; /* unsolicited recv frames */ + u32 uf_recv_drops; /* dropped received frames */ + + u32 rsvd; /* padding for 64 bit alignment */ +}; + +/** + * BFA port attribute returned in queries + */ +struct bfa_port_attr_s { + enum bfa_port_state state; /* port state */ + u32 pid; /* port ID */ + struct bfa_port_cfg_s port_cfg; /* port configuration */ + enum bfa_pport_type port_type; /* current topology */ + u32 loopback; /* cable is externally looped back */ + wwn_t fabric_name; /* attached switch's nwwn */ + u8 fabric_ip_addr[BFA_FCS_FABRIC_IPADDR_SZ]; /* attached + * fabric's ip addr */ +}; + +/** + * BFA physical port Level events + * Arguments below are in BFAL context from Mgmt + * BFA_PORT_AEN_ONLINE: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_OFFLINE: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_RLIR: [in]: None [out]: pwwn, rlir_data, rlir_len + * BFA_PORT_AEN_SFP_INSERT: [in]: pwwn [out]: port_id, pwwn + * BFA_PORT_AEN_SFP_REMOVE: [in]: pwwn [out]: port_id, pwwn + * BFA_PORT_AEN_SFP_POM: [in]: pwwn [out]: level, port_id, pwwn + * BFA_PORT_AEN_ENABLE: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_DISABLE: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_AUTH_ON: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_AUTH_OFF: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_DISCONNECT: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_QOS_NEG: [in]: pwwn [out]: pwwn + * BFA_PORT_AEN_FABRIC_NAME_CHANGE: [in]: pwwn, [out]: pwwn, fwwn + * + */ +enum 
bfa_port_aen_event { + BFA_PORT_AEN_ONLINE = 1, /* Physical Port online event */ + BFA_PORT_AEN_OFFLINE = 2, /* Physical Port offline event */ + BFA_PORT_AEN_RLIR = 3, /* RLIR event, not supported */ + BFA_PORT_AEN_SFP_INSERT = 4, /* SFP inserted event */ + BFA_PORT_AEN_SFP_REMOVE = 5, /* SFP removed event */ + BFA_PORT_AEN_SFP_POM = 6, /* SFP POM event */ + BFA_PORT_AEN_ENABLE = 7, /* Physical Port enable event */ + BFA_PORT_AEN_DISABLE = 8, /* Physical Port disable event */ + BFA_PORT_AEN_AUTH_ON = 9, /* Physical Port auth success event */ + BFA_PORT_AEN_AUTH_OFF = 10, /* Physical Port auth fail event */ + BFA_PORT_AEN_DISCONNECT = 11, /* Physical Port disconnect event */ + BFA_PORT_AEN_QOS_NEG = 12, /* Base Port QOS negotiation event */ + BFA_PORT_AEN_FABRIC_NAME_CHANGE = 13, /* Fabric Name/WWN change + * event */ + BFA_PORT_AEN_SFP_ACCESS_ERROR = 14, /* SFP read error event */ + BFA_PORT_AEN_SFP_UNSUPPORT = 15, /* Unsupported SFP event */ +}; + +enum bfa_port_aen_sfp_pom { + BFA_PORT_AEN_SFP_POM_GREEN = 1, /* Normal */ + BFA_PORT_AEN_SFP_POM_AMBER = 2, /* Warning */ + BFA_PORT_AEN_SFP_POM_RED = 3, /* Critical */ + BFA_PORT_AEN_SFP_POM_MAX = BFA_PORT_AEN_SFP_POM_RED +}; + +struct bfa_port_aen_data_s { + enum bfa_ioc_type_e ioc_type; + wwn_t pwwn; /* WWN of the physical port */ + wwn_t fwwn; /* WWN of the fabric port */ + mac_t mac; /* MAC addres of the ethernet port, + * applicable to CNA port only */ + int phy_port_num; /*! For SFP related events */ + enum bfa_port_aen_sfp_pom level; /* Only transitions will + * be informed */ +}; + +#endif /* __BFA_DEFS_PORT_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_pport.h b/drivers/scsi/bfa/include/defs/bfa_defs_pport.h new file mode 100644 index 000000000000..a000bc4e2d4a --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_pport.h @@ -0,0 +1,383 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_PPORT_H__ +#define __BFA_DEFS_PPORT_H__ + +#include +#include +#include +#include +#include + +/* Modify char* port_stt[] in bfal_port.c if a new state was added */ +enum bfa_pport_states { + BFA_PPORT_ST_UNINIT = 1, + BFA_PPORT_ST_ENABLING_QWAIT = 2, + BFA_PPORT_ST_ENABLING = 3, + BFA_PPORT_ST_LINKDOWN = 4, + BFA_PPORT_ST_LINKUP = 5, + BFA_PPORT_ST_DISABLING_QWAIT = 6, + BFA_PPORT_ST_DISABLING = 7, + BFA_PPORT_ST_DISABLED = 8, + BFA_PPORT_ST_STOPPED = 9, + BFA_PPORT_ST_IOCDOWN = 10, + BFA_PPORT_ST_IOCDIS = 11, + BFA_PPORT_ST_FWMISMATCH = 12, + BFA_PPORT_ST_MAX_STATE, +}; + +/** + * Port speed settings. Each specific speed is a bit field. Use multiple + * bits to specify speeds to be selected for auto-negotiation. 
+ */ +enum bfa_pport_speed { + BFA_PPORT_SPEED_UNKNOWN = 0, + BFA_PPORT_SPEED_1GBPS = 1, + BFA_PPORT_SPEED_2GBPS = 2, + BFA_PPORT_SPEED_4GBPS = 4, + BFA_PPORT_SPEED_8GBPS = 8, + BFA_PPORT_SPEED_10GBPS = 10, + BFA_PPORT_SPEED_AUTO = + (BFA_PPORT_SPEED_1GBPS | BFA_PPORT_SPEED_2GBPS | + BFA_PPORT_SPEED_4GBPS | BFA_PPORT_SPEED_8GBPS), +}; + +/** + * Port operational type (in sync with SNIA port type). + */ +enum bfa_pport_type { + BFA_PPORT_TYPE_UNKNOWN = 1, /* port type is unknown */ + BFA_PPORT_TYPE_TRUNKED = 2, /* Trunked mode */ + BFA_PPORT_TYPE_NPORT = 5, /* P2P with switched fabric */ + BFA_PPORT_TYPE_NLPORT = 6, /* public loop */ + BFA_PPORT_TYPE_LPORT = 20, /* private loop */ + BFA_PPORT_TYPE_P2P = 21, /* P2P with no switched fabric */ + BFA_PPORT_TYPE_VPORT = 22, /* NPIV - virtual port */ +}; + +/** + * Port topology setting. A port's topology and fabric login status + * determine its operational type. + */ +enum bfa_pport_topology { + BFA_PPORT_TOPOLOGY_NONE = 0, /* No valid topology */ + BFA_PPORT_TOPOLOGY_P2P = 1, /* P2P only */ + BFA_PPORT_TOPOLOGY_LOOP = 2, /* LOOP topology */ + BFA_PPORT_TOPOLOGY_AUTO = 3, /* auto topology selection */ +}; + +/** + * Physical port loopback types. + */ +enum bfa_pport_opmode { + BFA_PPORT_OPMODE_NORMAL = 0x00, /* normal non-loopback mode */ + BFA_PPORT_OPMODE_LB_INT = 0x01, /* internal loop back */ + BFA_PPORT_OPMODE_LB_SLW = 0x02, /* serial link wrapback (serdes) */ + BFA_PPORT_OPMODE_LB_EXT = 0x04, /* external loop back (serdes) */ + BFA_PPORT_OPMODE_LB_CBL = 0x08, /* cabled loop back */ + BFA_PPORT_OPMODE_LB_NLINT = 0x20, /* NL_Port internal loopback */ +}; + +#define BFA_PPORT_OPMODE_LB_HARD(_mode) \ + ((_mode == BFA_PPORT_OPMODE_LB_INT) || \ + (_mode == BFA_PPORT_OPMODE_LB_SLW) || \ + (_mode == BFA_PPORT_OPMODE_LB_EXT)) + +/** + * Port State (in sync with SNIA port state).
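+ *
+ * For illustration only (hypothetical helper, not part of this
+ * interface): because the 1/2/4/8 Gbps values above are disjoint
+ * bits, a caller can test whether a fixed speed is covered by an
+ * auto-negotiation mask with a plain bitwise AND:
+ *
+ *     static int speed_in_mask(enum bfa_pport_speed speed,
+ *                              enum bfa_pport_speed mask)
+ *     {
+ *             return (speed & mask) != 0;
+ *     }
+ *
+ * Note that BFA_PPORT_SPEED_10GBPS is the value 10, which overlaps
+ * the 2G and 8G bits rather than adding a new one, so the test above
+ * is meaningful only for the four speeds BFA_PPORT_SPEED_AUTO
+ * combines.
+ *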
+ */ +enum bfa_pport_snia_state { + BFA_PPORT_STATE_UNKNOWN = 1, /* port is not initialized */ + BFA_PPORT_STATE_ONLINE = 2, /* port is ONLINE */ + BFA_PPORT_STATE_DISABLED = 3, /* port is disabled by user */ + BFA_PPORT_STATE_BYPASSED = 4, /* port is bypassed (in LOOP) */ + BFA_PPORT_STATE_DIAG = 5, /* port diagnostics is active */ + BFA_PPORT_STATE_LINKDOWN = 6, /* link is down */ + BFA_PPORT_STATE_LOOPBACK = 8, /* port is looped back */ +}; + +/** + * Port link state + */ +enum bfa_pport_linkstate { + BFA_PPORT_LINKUP = 1, /* Physical port/Trunk link up */ + BFA_PPORT_LINKDOWN = 2, /* Physical port/Trunk link down */ + BFA_PPORT_TRUNK_LINKDOWN = 3, /* Trunk link down (new tmaster) */ +}; + +/** + * Port link state event + */ +#define bfa_pport_event_t enum bfa_pport_linkstate + +/** + * Port link state reason code + */ +enum bfa_pport_linkstate_rsn { + BFA_PPORT_LINKSTATE_RSN_NONE = 0, + BFA_PPORT_LINKSTATE_RSN_DISABLED = 1, + BFA_PPORT_LINKSTATE_RSN_RX_NOS = 2, + BFA_PPORT_LINKSTATE_RSN_RX_OLS = 3, + BFA_PPORT_LINKSTATE_RSN_RX_LIP = 4, + BFA_PPORT_LINKSTATE_RSN_RX_LIPF7 = 5, + BFA_PPORT_LINKSTATE_RSN_SFP_REMOVED = 6, + BFA_PPORT_LINKSTATE_RSN_PORT_FAULT = 7, + BFA_PPORT_LINKSTATE_RSN_RX_LOS = 8, + BFA_PPORT_LINKSTATE_RSN_LOCAL_FAULT = 9, + BFA_PPORT_LINKSTATE_RSN_REMOTE_FAULT = 10, + BFA_PPORT_LINKSTATE_RSN_TIMEOUT = 11, + + + + /* CEE related reason codes/errors */ + CEE_LLDP_INFO_AGED_OUT = 20, + CEE_LLDP_SHUTDOWN_TLV_RCVD = 21, + CEE_PEER_NOT_ADVERTISE_DCBX = 22, + CEE_PEER_NOT_ADVERTISE_PG = 23, + CEE_PEER_NOT_ADVERTISE_PFC = 24, + CEE_PEER_NOT_ADVERTISE_FCOE = 25, + CEE_PG_NOT_COMPATIBLE = 26, + CEE_PFC_NOT_COMPATIBLE = 27, + CEE_FCOE_NOT_COMPATIBLE = 28, + CEE_BAD_PG_RCVD = 29, + CEE_BAD_BW_RCVD = 30, + CEE_BAD_PFC_RCVD = 31, + CEE_BAD_FCOE_PRI_RCVD = 32, + CEE_FCOE_PRI_PFC_OFF = 33, + CEE_DUP_CONTROL_TLV_RCVD = 34, + CEE_DUP_FEAT_TLV_RCVD = 35, + CEE_APPLY_NEW_CFG = 36, /* reason, not an error */ + CEE_PROTOCOL_INIT = 37, /* reason, not an error */ + CEE_PHY_LINK_DOWN = 38, + CEE_LLS_FCOE_ABSENT = 39, + CEE_LLS_FCOE_DOWN = 40 +}; + +/** + * Default Target Rate Limiting Speed. + */ +#define BFA_PPORT_DEF_TRL_SPEED BFA_PPORT_SPEED_1GBPS + +/** + * Physical port configuration + */ +struct bfa_pport_cfg_s { + u8 topology; /* bfa_pport_topology */ + u8 speed; /* enum bfa_pport_speed */ + u8 trunked; /* trunked or not */ + u8 qos_enabled; /* qos enabled or not */ + u8 trunk_ports; /* bitmap of trunked ports */ + u8 cfg_hardalpa; /* is hard alpa configured */ + u16 maxfrsize; /* maximum frame size */ + u8 hardalpa; /* configured hard alpa */ + u8 rx_bbcredit; /* receive buffer credits */ + u8 tx_bbcredit; /* transmit buffer credits */ + u8 ratelimit; /* ratelimit enabled or not */ + u8 trl_def_speed; /* ratelimit default speed */ + u8 rsvd[3]; + u16 path_tov; /* device path timeout */ + u16 q_depth; /* SCSI Queue depth */ +}; + +/** + * Port attribute values. + */ +struct bfa_pport_attr_s { + /* + * Static fields + */ + wwn_t nwwn; /* node wwn */ + wwn_t pwwn; /* port wwn */ + enum fc_cos cos_supported; /* supported class of services */ + u32 rsvd; + struct fc_symname_s port_symname; /* port symbolic name */ + enum bfa_pport_speed speed_supported; /* supported speeds */ + bfa_boolean_t pbind_enabled; /* Will be set if Persistent binding + * enabled. 
Relevant only in Windows + */ + + /* + * Configured values + */ + struct bfa_pport_cfg_s pport_cfg; /* pport cfg */ + + /* + * Dynamic field - info from BFA + */ + enum bfa_pport_states port_state; /* current port state */ + enum bfa_pport_speed speed; /* current speed */ + enum bfa_pport_topology topology; /* current topology */ + bfa_boolean_t beacon; /* current beacon status */ + bfa_boolean_t link_e2e_beacon;/* set if link beacon on */ + bfa_boolean_t plog_enabled; /* set if portlog is enabled*/ + + /* + * Dynamic field - info from FCS + */ + u32 pid; /* port ID */ + enum bfa_pport_type port_type; /* current topology */ + u32 loopback; /* external loopback */ + u32 rsvd1; + u32 rsvd2; /* padding for 64 bit */ +}; + +/** + * FC Port statistics. + */ +struct bfa_pport_fc_stats_s { + u64 secs_reset; /* seconds since stats is reset */ + u64 tx_frames; /* transmitted frames */ + u64 tx_words; /* transmitted words */ + u64 rx_frames; /* received frames */ + u64 rx_words; /* received words */ + u64 lip_count; /* LIPs seen */ + u64 nos_count; /* NOS count */ + u64 error_frames; /* errored frames (sent?) */ + u64 dropped_frames; /* dropped frames */ + u64 link_failures; /* link failure count */ + u64 loss_of_syncs; /* loss of sync count */ + u64 loss_of_signals;/* loss of signal count */ + u64 primseq_errs; /* primitive sequence protocol */ + u64 bad_os_count; /* invalid ordered set */ + u64 err_enc_out; /* Encoding error outside frame */ + u64 invalid_crcs; /* frames received with invalid CRC*/ + u64 undersized_frm; /* undersized frames */ + u64 oversized_frm; /* oversized frames */ + u64 bad_eof_frm; /* frames with bad EOF */ + struct bfa_qos_stats_s qos_stats; /* QoS statistics */ +}; + +/** + * Eth Port statistics. + */ +struct bfa_pport_eth_stats_s { + u64 secs_reset; /* seconds since stats is reset */ + u64 frame_64; /* both rx and tx counter */ + u64 frame_65_127; /* both rx and tx counter */ + u64 frame_128_255; /* both rx and tx counter */ + u64 frame_256_511; /* both rx and tx counter */ + u64 frame_512_1023; /* both rx and tx counter */ + u64 frame_1024_1518; /* both rx and tx counter */ + u64 frame_1519_1522; /* both rx and tx counter */ + + u64 tx_bytes; + u64 tx_packets; + u64 tx_mcast_packets; + u64 tx_bcast_packets; + u64 tx_control_frame; + u64 tx_drop; + u64 tx_jabber; + u64 tx_fcs_error; + u64 tx_fragments; + + u64 rx_bytes; + u64 rx_packets; + u64 rx_mcast_packets; + u64 rx_bcast_packets; + u64 rx_control_frames; + u64 rx_unknown_opcode; + u64 rx_drop; + u64 rx_jabber; + u64 rx_fcs_error; + u64 rx_alignment_error; + u64 rx_frame_length_error; + u64 rx_code_error; + u64 rx_fragments; + + u64 rx_pause; /* BPC */ + u64 rx_zero_pause; /* BPC Pause cancellation */ + u64 tx_pause; /* BPC */ + u64 tx_zero_pause; /* BPC Pause cancellation */ + u64 rx_fcoe_pause; /* BPC */ + u64 rx_fcoe_zero_pause; /* BPC Pause cancellation */ + u64 tx_fcoe_pause; /* BPC */ + u64 tx_fcoe_zero_pause; /* BPC Pause cancellation */ +}; + +/** + * Port statistics. + */ +union bfa_pport_stats_u { + struct bfa_pport_fc_stats_s fc; + struct bfa_pport_eth_stats_s eth; +}; + +/** + * Port FCP mappings. + */ +struct bfa_pport_fcpmap_s { + char osdevname[256]; + u32 bus; + u32 target; + u32 oslun; + u32 fcid; + wwn_t nwwn; + wwn_t pwwn; + u64 fcplun; + char luid[256]; +}; + +/** + * Port RNID info. 
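+ *
+ * For illustration only (hypothetical, not part of this interface):
+ * a monitoring tool could derive a coarse link-quality figure from
+ * the FC statistics above, with 'st' pointing to a snapshot of
+ * struct bfa_pport_fc_stats_s:
+ *
+ *     u64 bad = st->invalid_crcs + st->error_frames + st->bad_eof_frm;
+ *     u64 rx = st->rx_frames ? st->rx_frames : 1;
+ *     u32 bad_per_million = (u32)((bad * 1000000ULL) / rx);
+ *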
+ */ +struct bfa_pport_rnid_s { + wwn_t wwn; + u32 unittype; + u32 portid; + u32 attached_nodes_num; + u16 ip_version; + u16 udp_port; + u8 ipaddr[16]; + u16 rsvd; + u16 topologydiscoveryflags; +}; + +/** + * Link state information + */ +struct bfa_pport_link_s { + u8 linkstate; /* Link state bfa_pport_linkstate */ + u8 linkstate_rsn; /* bfa_pport_linkstate_rsn_t */ + u8 topology; /* P2P/LOOP bfa_pport_topology */ + u8 speed; /* Link speed (1/2/4/8 G) */ + u32 linkstate_opt; /* Linkstate optional data (debug) */ + u8 trunked; /* Trunked or not (1 or 0) */ + u8 resvd[3]; + struct bfa_qos_attr_s qos_attr; /* QoS Attributes */ + struct bfa_qos_vc_attr_s qos_vc_attr; /* VC info from ELP */ + union { + struct { + u8 tmaster;/* Trunk Master or + * not (1 or 0) */ + u8 tlinks; /* Trunk links bitmap + * (linkup) */ + u8 resv1; /* Reserved */ + } trunk_info; + + struct { + u8 myalpa; /* alpa claimed */ + u8 login_req; /* Login required or + * not (1 or 0) */ + u8 alpabm_val;/* alpa bitmap valid + * or not (1 or 0) */ + struct fc_alpabm_s alpabm; /* alpa bitmap */ + } loop_info; + } tl; +}; + +#endif /* __BFA_DEFS_PPORT_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_qos.h b/drivers/scsi/bfa/include/defs/bfa_defs_qos.h new file mode 100644 index 000000000000..aadbacd1d2d7 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_qos.h @@ -0,0 +1,99 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_QOS_H__ +#define __BFA_DEFS_QOS_H__ + +/** + * QoS states + */ +enum bfa_qos_state { + BFA_QOS_ONLINE = 1, /* QoS is online */ + BFA_QOS_OFFLINE = 2, /* QoS is offline */ +}; + + +/** + * QoS Priority levels. + */ +enum bfa_qos_priority { + BFA_QOS_UNKNOWN = 0, + BFA_QOS_HIGH = 1, /* QoS Priority Level High */ + BFA_QOS_MED = 2, /* QoS Priority Level Medium */ + BFA_QOS_LOW = 3, /* QoS Priority Level Low */ +}; + + +/** + * QoS bandwidth allocation for each priority level + */ +enum bfa_qos_bw_alloc { + BFA_QOS_BW_HIGH = 60, /* bandwidth allocation for High */ + BFA_QOS_BW_MED = 30, /* bandwidth allocation for Medium */ + BFA_QOS_BW_LOW = 10, /* bandwidth allocation for Low */ +}; + +/** + * QoS attribute returned in QoS Query + */ +struct bfa_qos_attr_s { + enum bfa_qos_state state; /* QoS current state */ + u32 total_bb_cr; /* Total BB Credits */ +}; + +/** + * These fields should be displayed only from the CLI. + * There will be a separate BFAL API (get_qos_vc_attr ?) + * to retrieve this. 
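+ *
+ * For illustration only (hypothetical, not part of this interface):
+ * a CLI handler could render struct bfa_qos_attr_s above as:
+ *
+ *     printf("QoS %s, total BB credits %u\n",
+ *            attr->state == BFA_QOS_ONLINE ? "online" : "offline",
+ *            attr->total_bb_cr);
+ *
+ * where 'attr' points to an attribute block filled in by a query.
+ *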
+ * + */ +#define BFA_QOS_MAX_VC 16 + +struct bfa_qos_vc_info_s { + u8 vc_credit; + u8 borrow_credit; + u8 priority; + u8 resvd; +}; + +struct bfa_qos_vc_attr_s { + u16 total_vc_count; /* Total VC Count */ + u16 shared_credit; + u32 elp_opmode_flags; + struct bfa_qos_vc_info_s vc_info[BFA_QOS_MAX_VC]; /* as many as + * total_vc_count */ +}; + +/** + * QoS statistics + */ +struct bfa_qos_stats_s { + u32 flogi_sent; /* QoS Flogi sent */ + u32 flogi_acc_recvd; /* QoS Flogi Acc received */ + u32 flogi_rjt_recvd; /* QoS Flogi rejects received */ + u32 flogi_retries; /* QoS Flogi retries */ + + u32 elp_recvd; /* QoS ELP received */ + u32 elp_accepted; /* QoS ELP Accepted */ + u32 elp_rejected; /* QoS ELP rejected */ + u32 elp_dropped; /* QoS ELP dropped */ + + u32 qos_rscn_recvd; /* QoS RSCN received */ + u32 rsvd; /* padding for 64 bit alignment */ +}; + +#endif /* __BFA_DEFS_QOS_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_rport.h b/drivers/scsi/bfa/include/defs/bfa_defs_rport.h new file mode 100644 index 000000000000..e0af59d6d2f6 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_rport.h @@ -0,0 +1,199 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_RPORT_H__ +#define __BFA_DEFS_RPORT_H__ + +#include +#include +#include +#include +#include + +/** + * FCS remote port states + */ +enum bfa_rport_state { + BFA_RPORT_UNINIT = 0, /* PORT is not yet initialized */ + BFA_RPORT_OFFLINE = 1, /* rport is offline */ + BFA_RPORT_PLOGI = 2, /* PLOGI to rport is in progress */ + BFA_RPORT_ONLINE = 3, /* login to rport is complete */ + BFA_RPORT_PLOGI_RETRY = 4, /* retrying login to rport */ + BFA_RPORT_NSQUERY = 5, /* nameserver query */ + BFA_RPORT_ADISC = 6, /* ADISC authentication */ + BFA_RPORT_LOGO = 7, /* logging out with rport */ + BFA_RPORT_LOGORCV = 8, /* handling LOGO from rport */ + BFA_RPORT_NSDISC = 9, /* re-discover rport */ +}; + +/** + * Rport Scsi Function : Initiator/Target. 
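+ *
+ * For illustration only (hypothetical helper, not part of this
+ * interface): the function codes declared below are bit flags, so a
+ * dual-role device may carry both and callers test them individually:
+ *
+ *     static int rport_is_target(u8 func)
+ *     {
+ *             return (func & BFA_RPORT_TARGET) != 0;
+ *     }
+ *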
+ */ +enum bfa_rport_function { + BFA_RPORT_INITIATOR = 0x01, /* SCSI Initiator */ + BFA_RPORT_TARGET = 0x02, /* SCSI Target */ +}; + +/** + * port/node symbolic names for rport + */ +#define BFA_RPORT_SYMNAME_MAXLEN 255 +struct bfa_rport_symname_s { + char symname[BFA_RPORT_SYMNAME_MAXLEN]; +}; + +struct bfa_rport_hal_stats_s { + u32 sm_un_cr; /* uninit: create events */ + u32 sm_un_unexp; /* uninit: exception events */ + u32 sm_cr_on; /* created: online events */ + u32 sm_cr_del; /* created: delete events */ + u32 sm_cr_hwf; /* created: IOC down */ + u32 sm_cr_unexp; /* created: exception events */ + u32 sm_fwc_rsp; /* fw create: f/w responses */ + u32 sm_fwc_del; /* fw create: delete events */ + u32 sm_fwc_off; /* fw create: offline events */ + u32 sm_fwc_hwf; /* fw create: IOC down */ + u32 sm_fwc_unexp; /* fw create: exception events*/ + u32 sm_on_off; /* online: offline events */ + u32 sm_on_del; /* online: delete events */ + u32 sm_on_hwf; /* online: IOC down events */ + u32 sm_on_unexp; /* online: exception events */ + u32 sm_fwd_rsp; /* fw delete: fw responses */ + u32 sm_fwd_del; /* fw delete: delete events */ + u32 sm_fwd_hwf; /* fw delete: IOC down events */ + u32 sm_fwd_unexp; /* fw delete: exception events*/ + u32 sm_off_del; /* offline: delete events */ + u32 sm_off_on; /* offline: online events */ + u32 sm_off_hwf; /* offline: IOC down events */ + u32 sm_off_unexp; /* offline: exception events */ + u32 sm_del_fwrsp; /* delete: fw responses */ + u32 sm_del_hwf; /* delete: IOC down events */ + u32 sm_del_unexp; /* delete: exception events */ + u32 sm_delp_fwrsp; /* delete pend: fw responses */ + u32 sm_delp_hwf; /* delete pend: IOC downs */ + u32 sm_delp_unexp; /* delete pend: exceptions */ + u32 sm_offp_fwrsp; /* off-pending: fw responses */ + u32 sm_offp_del; /* off-pending: deletes */ + u32 sm_offp_hwf; /* off-pending: IOC downs */ + u32 sm_offp_unexp; /* off-pending: exceptions */ + u32 sm_iocd_off; /* IOC down: offline events */ + u32 sm_iocd_del; /* IOC down: delete events */ + u32 sm_iocd_on; /* IOC down: online events */ + u32 sm_iocd_unexp; /* IOC down: exceptions */ + u32 rsvd; +}; + +/** + * FCS remote port statistics + */ +struct bfa_rport_stats_s { + u32 offlines; /* remote port offline count */ + u32 onlines; /* remote port online count */ + u32 rscns; /* RSCN affecting rport */ + u32 plogis; /* plogis sent */ + u32 plogi_accs; /* plogi accepts */ + u32 plogi_timeouts; /* plogi timeouts */ + u32 plogi_rejects; /* rcvd plogi rejects */ + u32 plogi_failed; /* local failure */ + u32 plogi_rcvd; /* plogis rcvd */ + u32 prli_rcvd; /* inbound PRLIs */ + u32 adisc_rcvd; /* ADISCs received */ + u32 adisc_rejects; /* recvd ADISC rejects */ + u32 adisc_sent; /* ADISC requests sent */ + u32 adisc_accs; /* ADISC accepted by rport */ + u32 adisc_failed; /* ADISC failed (no response) */ + u32 adisc_rejected; /* ADISC rejected by us */ + u32 logos; /* logos sent */ + u32 logo_accs; /* LOGO accepts from rport */ + u32 logo_failed; /* LOGO failures */ + u32 logo_rejected; /* LOGO rejects from rport */ + u32 logo_rcvd; /* LOGO from remote port */ + + u32 rpsc_rcvd; /* RPSC received */ + u32 rpsc_rejects; /* recvd RPSC rejects */ + u32 rpsc_sent; /* RPSC requests sent */ + u32 rpsc_accs; /* RPSC accepted by rport */ + u32 rpsc_failed; /* RPSC failed (no response) */ + u32 rpsc_rejected; /* RPSC rejected by us */ + + u32 rsvd; + struct bfa_rport_hal_stats_s hal_stats; /* BFA rport stats */ +}; + +/** + * Rport's QoS attributes + */ +struct bfa_rport_qos_attr_s { + enum 
bfa_qos_priority qos_priority; /* rport's QoS priority */ + u32 qos_flow_id; /* QoS flow Id */ +}; + +/** + * FCS remote port attributes returned in queries + */ +struct bfa_rport_attr_s { + wwn_t nwwn; /* node wwn */ + wwn_t pwwn; /* port wwn */ + enum fc_cos cos_supported; /* supported class of services */ + u32 pid; /* port ID */ + u32 df_sz; /* Max payload size */ + enum bfa_rport_state state; /* Rport State machine state */ + enum fc_cos fc_cos; /* FC classes of services */ + bfa_boolean_t cisc; /* CISC capable device */ + struct bfa_rport_symname_s symname; /* Symbolic Name */ + enum bfa_rport_function scsi_function; /* Initiator/Target */ + struct bfa_rport_qos_attr_s qos_attr; /* qos attributes */ + enum bfa_pport_speed curr_speed; /* operating speed got from + * RPSC ELS. UNKNOWN, if RPSC + * is not supported */ + bfa_boolean_t trl_enforced; /* TRL enforced ? TRUE/FALSE */ + enum bfa_pport_speed assigned_speed; /* Speed assigned by the user. + * will be used if RPSC is not + * supported by the rport */ +}; + +#define bfa_rport_aen_qos_data_t struct bfa_rport_qos_attr_s + +/** + * BFA remote port events + * Arguments below are in BFAL context from Mgmt + * BFA_RPORT_AEN_ONLINE: [in]: lpwwn [out]: vf_id, lpwwn, rpwwn + * BFA_RPORT_AEN_OFFLINE: [in]: lpwwn [out]: vf_id, lpwwn, rpwwn + * BFA_RPORT_AEN_DISCONNECT:[in]: lpwwn [out]: vf_id, lpwwn, rpwwn + * BFA_RPORT_AEN_QOS_PRIO: [in]: lpwwn [out]: vf_id, lpwwn, rpwwn, prio + * BFA_RPORT_AEN_QOS_FLOWID:[in]: lpwwn [out]: vf_id, lpwwn, rpwwn, flow_id + */ +enum bfa_rport_aen_event { + BFA_RPORT_AEN_ONLINE = 1, /* RPort online event */ + BFA_RPORT_AEN_OFFLINE = 2, /* RPort offline event */ + BFA_RPORT_AEN_DISCONNECT = 3, /* RPort disconnect event */ + BFA_RPORT_AEN_QOS_PRIO = 4, /* QOS priority change event */ + BFA_RPORT_AEN_QOS_FLOWID = 5, /* QOS flow Id change event */ +}; + +struct bfa_rport_aen_data_s { + u16 vf_id; /* vf_id of this logical port */ + u16 rsvd[3]; + wwn_t ppwwn; /* WWN of its physical port */ + wwn_t lpwwn; /* WWN of this logical port */ + wwn_t rpwwn; /* WWN of this remote port */ + union { + bfa_rport_aen_qos_data_t qos; + } priv; +}; + +#endif /* __BFA_DEFS_RPORT_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_status.h b/drivers/scsi/bfa/include/defs/bfa_defs_status.h new file mode 100644 index 000000000000..cdceaeb9f4b8 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_status.h @@ -0,0 +1,255 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_STATUS_H__ +#define __BFA_DEFS_STATUS_H__ + +/** + * API status return values + * + * NOTE: The error msgs are auto generated from the comments. 
Only singe line + * comments are supported + */ +enum bfa_status { + BFA_STATUS_OK = 0, /* Success */ + BFA_STATUS_FAILED = 1, /* Operation failed */ + BFA_STATUS_EINVAL = 2, /* Invalid params Check input + * parameters */ + BFA_STATUS_ENOMEM = 3, /* Out of resources */ + BFA_STATUS_ENOSYS = 4, /* Function not implemented */ + BFA_STATUS_ETIMER = 5, /* Timer expired - Retry, if + * persists, contact support */ + BFA_STATUS_EPROTOCOL = 6, /* Protocol error */ + BFA_STATUS_ENOFCPORTS = 7, /* No FC ports resources */ + BFA_STATUS_NOFLASH = 8, /* Flash not present */ + BFA_STATUS_BADFLASH = 9, /* Flash is corrupted or bad */ + BFA_STATUS_SFP_UNSUPP = 10, /* Unsupported SFP - Replace SFP */ + BFA_STATUS_UNKNOWN_VFID = 11, /* VF_ID not found */ + BFA_STATUS_DATACORRUPTED = 12, /* Diag returned data corrupted + * contact support */ + BFA_STATUS_DEVBUSY = 13, /* Device busy - Retry operation */ + BFA_STATUS_ABORTED = 14, /* Operation aborted */ + BFA_STATUS_NODEV = 15, /* Dev is not present */ + BFA_STATUS_HDMA_FAILED = 16, /* Host dma failed contact support */ + BFA_STATUS_FLASH_BAD_LEN = 17, /* Flash bad length */ + BFA_STATUS_UNKNOWN_LWWN = 18, /* LPORT PWWN not found */ + BFA_STATUS_UNKNOWN_RWWN = 19, /* RPORT PWWN not found */ + BFA_STATUS_FCPT_LS_RJT = 20, /* Got LS_RJT for FC Pass + * through Req */ + BFA_STATUS_VPORT_EXISTS = 21, /* VPORT already exists */ + BFA_STATUS_VPORT_MAX = 22, /* Reached max VPORT supported + * limit */ + BFA_STATUS_UNSUPP_SPEED = 23, /* Invalid Speed Check speed + * setting */ + BFA_STATUS_INVLD_DFSZ = 24, /* Invalid Max data field size */ + BFA_STATUS_CNFG_FAILED = 25, /* Setting can not be persisted */ + BFA_STATUS_CMD_NOTSUPP = 26, /* Command/API not supported */ + BFA_STATUS_NO_ADAPTER = 27, /* No Brocade Adapter Found */ + BFA_STATUS_LINKDOWN = 28, /* Link is down - Check or replace + * SFP/cable */ + BFA_STATUS_FABRIC_RJT = 29, /* Reject from attached fabric */ + BFA_STATUS_UNKNOWN_VWWN = 30, /* VPORT PWWN not found */ + BFA_STATUS_NSLOGIN_FAILED = 31, /* Nameserver login failed */ + BFA_STATUS_NO_RPORTS = 32, /* No remote ports found */ + BFA_STATUS_NSQUERY_FAILED = 33, /* Nameserver query failed */ + BFA_STATUS_PORT_OFFLINE = 34, /* Port is not online */ + BFA_STATUS_RPORT_OFFLINE = 35, /* RPORT is not online */ + BFA_STATUS_TGTOPEN_FAILED = 36, /* Remote SCSI target open failed */ + BFA_STATUS_BAD_LUNS = 37, /* No valid LUNs found */ + BFA_STATUS_IO_FAILURE = 38, /* SCSI target IO failure */ + BFA_STATUS_NO_FABRIC = 39, /* No switched fabric present */ + BFA_STATUS_EBADF = 40, /* Bad file descriptor */ + BFA_STATUS_EINTR = 41, /* A signal was caught during ioctl */ + BFA_STATUS_EIO = 42, /* I/O error */ + BFA_STATUS_ENOTTY = 43, /* Inappropriate I/O control + * operation */ + BFA_STATUS_ENXIO = 44, /* No such device or address */ + BFA_STATUS_EFOPEN = 45, /* Failed to open file */ + BFA_STATUS_VPORT_WWN_BP = 46, /* WWN is same as base port's WWN */ + BFA_STATUS_PORT_NOT_DISABLED = 47, /* Port not disabled disable port + * first */ + BFA_STATUS_BADFRMHDR = 48, /* Bad frame header */ + BFA_STATUS_BADFRMSZ = 49, /* Bad frame size check and replace + * SFP/cable */ + BFA_STATUS_MISSINGFRM = 50, /* Missing frame check and replace + * SFP/cable */ + BFA_STATUS_LINKTIMEOUT = 51, /* Link timeout check and replace + * SFP/cable */ + BFA_STATUS_NO_FCPIM_NEXUS = 52, /* No FCP Nexus exists with the + * rport */ + BFA_STATUS_CHECKSUM_FAIL = 53, /* checksum failure */ + BFA_STATUS_GZME_FAILED = 54, /* Get zone member query failed */ + BFA_STATUS_SCSISTART_REQD = 55, /* 
SCSI disk require START command */ + BFA_STATUS_IOC_FAILURE = 56, /* IOC failure - Retry, if persists + * contact support */ + BFA_STATUS_INVALID_WWN = 57, /* Invalid WWN */ + BFA_STATUS_MISMATCH = 58, /* Version mismatch */ + BFA_STATUS_IOC_ENABLED = 59, /* IOC is already enabled */ + BFA_STATUS_ADAPTER_ENABLED = 60, /* Adapter is not disabled disable + * adapter first */ + BFA_STATUS_IOC_NON_OP = 61, /* IOC is not operational. Enable IOC + * and if it still fails, + * contact support */ + BFA_STATUS_ADDR_MAP_FAILURE = 62, /* PCI base address not mapped + * in OS */ + BFA_STATUS_SAME_NAME = 63, /* Name exists! use a different + * name */ + BFA_STATUS_PENDING = 64, /* API completes asynchronously */ + BFA_STATUS_8G_SPD = 65, /* Speed setting not valid for + * 8G HBA */ + BFA_STATUS_4G_SPD = 66, /* Speed setting not valid for + * 4G HBA */ + BFA_STATUS_AD_IS_ENABLE = 67, /* Adapter is already enabled */ + BFA_STATUS_EINVAL_TOV = 68, /* Invalid path failover TOV */ + BFA_STATUS_EINVAL_QDEPTH = 69, /* Invalid queue depth value */ + BFA_STATUS_VERSION_FAIL = 70, /* Application/Driver version + * mismatch */ + BFA_STATUS_DIAG_BUSY = 71, /* diag busy */ + BFA_STATUS_BEACON_ON = 72, /* Port Beacon already on */ + BFA_STATUS_BEACON_OFF = 73, /* Port Beacon already off */ + BFA_STATUS_LBEACON_ON = 74, /* Link End-to-End Beacon already + * on */ + BFA_STATUS_LBEACON_OFF = 75, /* Link End-to-End Beacon already + * off */ + BFA_STATUS_PORT_NOT_INITED = 76, /* Port not initialized */ + BFA_STATUS_RPSC_ENABLED = 77, /* Target has a valid speed */ + BFA_STATUS_ENOFSAVE = 78, /* No saved firmware trace */ + BFA_STATUS_BAD_FILE = 79, /* Not a valid Brocade Boot Code + * file */ + BFA_STATUS_RLIM_EN = 80, /* Target rate limiting is already + * enabled */ + BFA_STATUS_RLIM_DIS = 81, /* Target rate limiting is already + * disabled */ + BFA_STATUS_IOC_DISABLED = 82, /* IOC is already disabled */ + BFA_STATUS_ADAPTER_DISABLED = 83, /* Adapter is already disabled */ + BFA_STATUS_BIOS_DISABLED = 84, /* Bios is already disabled */ + BFA_STATUS_AUTH_ENABLED = 85, /* Authentication is already + * enabled */ + BFA_STATUS_AUTH_DISABLED = 86, /* Authentication is already + * disabled */ + BFA_STATUS_ERROR_TRL_ENABLED = 87, /* Target rate limiting is + * enabled */ + BFA_STATUS_ERROR_QOS_ENABLED = 88, /* QoS is enabled */ + BFA_STATUS_NO_SFP_DEV = 89, /* No SFP device check or replace SFP */ + BFA_STATUS_MEMTEST_FAILED = 90, /* Memory test failed contact + * support */ + BFA_STATUS_INVALID_DEVID = 91, /* Invalid device id provided */ + BFA_STATUS_QOS_ENABLED = 92, /* QOS is already enabled */ + BFA_STATUS_QOS_DISABLED = 93, /* QOS is already disabled */ + BFA_STATUS_INCORRECT_DRV_CONFIG = 94, /* Check configuration + * key/value pair */ + BFA_STATUS_REG_FAIL = 95, /* Can't read windows registry */ + BFA_STATUS_IM_INV_CODE = 96, /* Invalid IOCTL code */ + BFA_STATUS_IM_INV_VLAN = 97, /* Invalid VLAN ID */ + BFA_STATUS_IM_INV_ADAPT_NAME = 98, /* Invalid adapter name */ + BFA_STATUS_IM_LOW_RESOURCES = 99, /* Memory allocation failure in + * driver */ + BFA_STATUS_IM_VLANID_IS_PVID = 100, /* Given VLAN id same as PVID */ + BFA_STATUS_IM_VLANID_EXISTS = 101, /* Given VLAN id already exists */ + BFA_STATUS_IM_FW_UPDATE_FAIL = 102, /* Updating firmware with new + * VLAN ID failed */ + BFA_STATUS_PORTLOG_ENABLED = 103, /* Port Log is already enabled */ + BFA_STATUS_PORTLOG_DISABLED = 104, /* Port Log is already disabled */ + BFA_STATUS_FILE_NOT_FOUND = 105, /* Specified file could not be + * found */ + BFA_STATUS_QOS_FC_ONLY = 
106, /* QOS can be enabled for FC mode + * only */ + BFA_STATUS_RLIM_FC_ONLY = 107, /* RATELIM can be enabled for FC mode + * only */ + BFA_STATUS_CT_SPD = 108, /* Invalid speed selection for Catapult. */ + BFA_STATUS_LEDTEST_OP = 109, /* LED test is operating */ + BFA_STATUS_CEE_NOT_DN = 110, /* eth port is not at down state, please + * bring down first */ + BFA_STATUS_10G_SPD = 111, /* Speed setting not valid for 10G HBA */ + BFA_STATUS_IM_INV_TEAM_NAME = 112, /* Invalid team name */ + BFA_STATUS_IM_DUP_TEAM_NAME = 113, /* Given team name already + * exists */ + BFA_STATUS_IM_ADAPT_ALREADY_IN_TEAM = 114, /* Given adapter is part + * of another team */ + BFA_STATUS_IM_ADAPT_HAS_VLANS = 115, /* Adapter has VLANs configured. + * Delete all VLANs before + * creating team */ + BFA_STATUS_IM_PVID_MISMATCH = 116, /* Mismatching PVIDs configured + * for adapters */ + BFA_STATUS_IM_LINK_SPEED_MISMATCH = 117, /* Mismatching link speeds + * configured for adapters */ + BFA_STATUS_IM_MTU_MISMATCH = 118, /* Mismatching MTUs configured for + * adapters */ + BFA_STATUS_IM_RSS_MISMATCH = 119, /* Mismatching RSS parameters + * configured for adapters */ + BFA_STATUS_IM_HDS_MISMATCH = 120, /* Mismatching HDS parameters + * configured for adapters */ + BFA_STATUS_IM_OFFLOAD_MISMATCH = 121, /* Mismatching offload + * parameters configured for + * adapters */ + BFA_STATUS_IM_PORT_PARAMS = 122, /* Error setting port parameters */ + BFA_STATUS_IM_PORT_NOT_IN_TEAM = 123, /* Port is not part of team */ + BFA_STATUS_IM_CANNOT_REM_PRI = 124, /* Primary adapter cannot be + * removed. Change primary before + * removing */ + BFA_STATUS_IM_MAX_PORTS_REACHED = 125, /* Exceeding maximum ports + * per team */ + BFA_STATUS_IM_LAST_PORT_DELETE = 126, /* Last port in team being + * deleted */ + BFA_STATUS_IM_NO_DRIVER = 127, /* IM driver is not installed */ + BFA_STATUS_IM_MAX_VLANS_REACHED = 128, /* Exceeding maximum VLANs + * per port */ + BFA_STATUS_TOMCAT_SPD_NOT_ALLOWED = 129, /* Bios speed config not + * allowed for CNA */ + BFA_STATUS_NO_MINPORT_DRIVER = 130, /* Miniport driver is not + * loaded */ + BFA_STATUS_CARD_TYPE_MISMATCH = 131, /* Card type mismatch */ + BFA_STATUS_BAD_ASICBLK = 132, /* Bad ASIC block */ + BFA_STATUS_NO_DRIVER = 133, /* Storage/Ethernet driver not loaded */ + BFA_STATUS_INVALID_MAC = 134, /* Invalid mac address */ + BFA_STATUS_IM_NO_VLAN = 135, /* No VLANs configured on the adapter */ + BFA_STATUS_IM_ETH_LB_FAILED = 136, /* Ethernet loopback test failed */ + BFA_STATUS_IM_PVID_REMOVE = 137, /* Cannot remove port vlan (PVID) */ + BFA_STATUS_IM_PVID_EDIT = 138, /* Cannot edit port vlan (PVID) */ + BFA_STATUS_CNA_NO_BOOT = 139, /* Boot upload not allowed for CNA */ + BFA_STATUS_IM_PVID_NON_ZERO = 140, /* Port VLAN ID (PVID) is Set to + * Non-Zero Value */ + BFA_STATUS_IM_INETCFG_LOCK_FAILED = 141, /* Acquiring Network + * Subsystem Lock Failed. Please + * try after some time */ + BFA_STATUS_IM_GET_INETCFG_FAILED = 142, /* Acquiring Network Subsystem + * handle Failed. Please try + * after some time */ + BFA_STATUS_IM_NOT_BOUND = 143, /* Brocade 10G Ethernet Service is not + * Enabled on this port */ + BFA_STATUS_INSUFFICIENT_PERMS = 144, /* User doesn't have sufficient + * permissions to execute the BCU + * application */ + BFA_STATUS_IM_INV_VLAN_NAME = 145, /* Invalid/Reserved Vlan name + * string.
The name is not allowed + * for the normal Vlans */ + BFA_STATUS_CMD_NOTSUPP_CNA = 146, /* Command not supported for CNA */ + BFA_STATUS_IM_PASSTHRU_EDIT = 147, /* Can not edit passthru vlan id */ + BFA_STATUS_IM_BIND_FAILED = 148, /*! < IM Driver bind operation + * failed */ + BFA_STATUS_IM_UNBIND_FAILED = 149, /* ! < IM Driver unbind operation + * failed */ + BFA_STATUS_MAX_VAL /* Unknown error code */ +}; +#define bfa_status_t enum bfa_status + +enum bfa_eproto_status { + BFA_EPROTO_BAD_ACCEPT = 0, + BFA_EPROTO_UNKNOWN_RSP = 1 +}; +#define bfa_eproto_status_t enum bfa_eproto_status + +#endif /* __BFA_DEFS_STATUS_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_tin.h b/drivers/scsi/bfa/include/defs/bfa_defs_tin.h new file mode 100644 index 000000000000..e05a2db7abed --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_tin.h @@ -0,0 +1,118 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_TIN_H__ +#define __BFA_DEFS_TIN_H__ + +#include +#include + +/** + * FCS tin states + */ +enum bfa_tin_state_e { + BFA_TIN_SM_OFFLINE = 0, /* tin is offline */ + BFA_TIN_SM_WOS_LOGIN = 1, /* Waiting PRLI ACC/RJT from ULP */ + BFA_TIN_SM_WFW_ONLINE = 2, /* Waiting ACK to PRLI ACC from FW */ + BFA_TIN_SM_ONLINE = 3, /* tin login is complete */ + BFA_TIN_SM_WIO_RELOGIN = 4, /* tin relogin is in progress */ + BFA_TIN_SM_WIO_LOGOUT = 5, /* Processing of PRLO req from + * Initiator is in progress + */ + BFA_TIN_SM_WOS_LOGOUT = 6, /* Processing of PRLO req from + * Initiator is in progress + */ + BFA_TIN_SM_WIO_CLEAN = 7, /* Waiting for IO cleanup before tin + * is offline. This can be triggered + * by RPORT LOGO (rcvd/sent) or by + * PRLO (rcvd/sent) + */ +}; + +struct bfa_prli_req_s { + struct fchs_s fchs; + struct fc_prli_s prli_payload; +}; + +struct bfa_prlo_req_s { + struct fchs_s fchs; + struct fc_prlo_s prlo_payload; +}; + +void bfa_tin_send_login_rsp(void *bfa_tin, u32 login_rsp, + struct fc_ls_rjt_s rjt_payload); +void bfa_tin_send_logout_rsp(void *bfa_tin, u32 logout_rsp, + struct fc_ls_rjt_s rjt_payload); +/** + * FCS target port statistics + */ +struct bfa_tin_stats_s { + u32 onlines; /* ITN nexus onlines (PRLI done) */ + u32 offlines; /* ITN Nexus offlines */ + u32 prli_req_parse_err; /* prli req parsing errors */ + u32 prli_rsp_rjt; /* num prli rsp rejects sent */ + u32 prli_rsp_acc; /* num prli rsp accepts sent */ + u32 cleanup_comps; /* ITN cleanup completions */ +}; + +/** + * FCS tin attributes returned in queries + */ +struct bfa_tin_attr_s { + enum bfa_tin_state_e state; + u8 seq_retry; /* Sequence retry supported */ + u8 rsvd[3]; +}; + +/** + * BFA TIN async event data structure for BFAL + */ +enum bfa_tin_aen_event { + BFA_TIN_AEN_ONLINE = 1, /* Target online */ + BFA_TIN_AEN_OFFLINE = 2, /* Target offline */ + BFA_TIN_AEN_DISCONNECT = 3, /* Target disconnected */ +}; + +/** + * BFA TIN event data structure. 
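+ *
+ * For illustration only (hypothetical, not part of this interface):
+ * a consumer of these events might dispatch on enum bfa_tin_aen_event
+ * declared above:
+ *
+ *     switch (event) {
+ *     case BFA_TIN_AEN_ONLINE:
+ *             handle_target_online(data);
+ *             break;
+ *     case BFA_TIN_AEN_OFFLINE:
+ *     case BFA_TIN_AEN_DISCONNECT:
+ *             quiesce_target_io(data);
+ *             break;
+ *     }
+ *
+ * where 'event' is the event code, 'data' points to the struct
+ * bfa_tin_aen_data_s declared below, and both handlers are
+ * hypothetical.
+ *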
+ */ +struct bfa_tin_aen_data_s { + u16 vf_id; /* vf_id of the IT nexus */ + u16 rsvd[3]; + wwn_t lpwwn; /* WWN of logical port */ + wwn_t rpwwn; /* WWN of remote(target) port */ +}; + +/** + * Below APIs are needed from BFA driver + * Move these to BFA driver public header file? + */ +/* TIN rcvd new PRLI & gets bfad_tin_t ptr from driver via this callback */ +void *bfad_tin_rcvd_login_req(void *bfad_tm_port, void *bfa_tin, + wwn_t rp_wwn, u32 rp_fcid, + struct bfa_prli_req_s prli_req); +/* TIN rcvd new PRLO */ +void bfad_tin_rcvd_logout_req(void *bfad_tin, wwn_t rp_wwn, u32 rp_fcid, + struct bfa_prlo_req_s prlo_req); +/* TIN is online and ready for IO */ +void bfad_tin_online(void *bfad_tin); +/* TIN is offline and BFA driver can shutdown its upper stack */ +void bfad_tin_offline(void *bfad_tin); +/* TIN does not need this BFA driver tin tag anymore, so can be freed */ +void bfad_tin_res_free(void *bfad_tin); + +#endif /* __BFA_DEFS_TIN_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_tsensor.h b/drivers/scsi/bfa/include/defs/bfa_defs_tsensor.h new file mode 100644 index 000000000000..31881d218515 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_tsensor.h @@ -0,0 +1,43 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_DEFS_TSENSOR_H__ +#define __BFA_DEFS_TSENSOR_H__ + +#include +#include + +/** + * Temperature sensor status values + */ +enum bfa_tsensor_status { + BFA_TSENSOR_STATUS_UNKNOWN = 1, /* unknown status */ + BFA_TSENSOR_STATUS_FAULTY = 2, /* sensor is faulty */ + BFA_TSENSOR_STATUS_BELOW_MIN = 3, /* temperature below minimum */ + BFA_TSENSOR_STATUS_NOMINAL = 4, /* normal temperature */ + BFA_TSENSOR_STATUS_ABOVE_MAX = 5, /* temperature above maximum */ +}; + +/** + * Temperature sensor attribute + */ +struct bfa_tsensor_attr_s { + enum bfa_tsensor_status status; /* temperature sensor status */ + u32 value; /* current temperature in celsius */ +}; + +#endif /* __BFA_DEFS_TSENSOR_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_types.h b/drivers/scsi/bfa/include/defs/bfa_defs_types.h new file mode 100644 index 000000000000..4348332b107a --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_types.h @@ -0,0 +1,30 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details.
+ */ +#ifndef __BFA_DEFS_TYPES_H__ +#define __BFA_DEFS_TYPES_H__ + +#include + +enum bfa_boolean { + BFA_FALSE = 0, + BFA_TRUE = 1 +}; +#define bfa_boolean_t enum bfa_boolean + +#define BFA_STRING_32 32 + +#endif /* __BFA_DEFS_TYPES_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_version.h b/drivers/scsi/bfa/include/defs/bfa_defs_version.h new file mode 100644 index 000000000000..f8902a2c9aad --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_version.h @@ -0,0 +1,22 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#ifndef __BFA_DEFS_VERSION_H__ +#define __BFA_DEFS_VERSION_H__ + +#define BFA_VERSION_LEN 64 + +#endif diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_vf.h b/drivers/scsi/bfa/include/defs/bfa_defs_vf.h new file mode 100644 index 000000000000..3235be5e9423 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_vf.h @@ -0,0 +1,74 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __BFA_DEFS_VF_H__ +#define __BFA_DEFS_VF_H__ + +#include +#include +#include + +/** + * VF states + */ +enum bfa_vf_state { + BFA_VF_UNINIT = 0, /* fabric is not yet initialized */ + BFA_VF_LINK_DOWN = 1, /* link is down */ + BFA_VF_FLOGI = 2, /* flogi is in progress */ + BFA_VF_AUTH = 3, /* authentication in progress */ + BFA_VF_NOFABRIC = 4, /* fabric is not present */ + BFA_VF_ONLINE = 5, /* login to fabric is complete */ + BFA_VF_EVFP = 6, /* EVFP is in progress */ + BFA_VF_ISOLATED = 7, /* port isolated due to vf_id mismatch */ +}; + +/** + * VF statistics + */ +struct bfa_vf_stats_s { + u32 flogi_sent; /* Num FLOGIs sent */ + u32 flogi_rsp_err; /* FLOGI response errors */ + u32 flogi_acc_err; /* FLOGI accept errors */ + u32 flogi_accepts; /* FLOGI accepts received */ + u32 flogi_rejects; /* FLOGI rejects received */ + u32 flogi_unknown_rsp; /* Unknown responses for FLOGI */ + u32 flogi_alloc_wait; /* Allocation waits prior to + * sending FLOGI + */ + u32 flogi_rcvd; /* FLOGIs received */ + u32 flogi_rejected; /* Incoming FLOGIs rejected */ + u32 fabric_onlines; /* Internal fabric online + * notification sent to other + * modules + */ + u32 fabric_offlines; /* Internal fabric offline + * notification sent to other + * modules + */ + u32 resvd; +}; + +/** + * VF attributes returned in queries + */ +struct bfa_vf_attr_s { + enum bfa_vf_state state; /* VF state */ + u32 rsvd; + wwn_t fabric_name; /* fabric name */ +}; + +#endif /* __BFA_DEFS_VF_H__ */ diff --git a/drivers/scsi/bfa/include/defs/bfa_defs_vport.h b/drivers/scsi/bfa/include/defs/bfa_defs_vport.h new file mode 100644 index 000000000000..9f021f43b3b4 --- /dev/null +++ b/drivers/scsi/bfa/include/defs/bfa_defs_vport.h @@ -0,0 +1,91 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __BFA_DEFS_VPORT_H__ +#define __BFA_DEFS_VPORT_H__ + +#include +#include +#include + +/** + * VPORT states + */ +enum bfa_vport_state { + BFA_FCS_VPORT_UNINIT = 0, + BFA_FCS_VPORT_CREATED = 1, + BFA_FCS_VPORT_OFFLINE = 1, + BFA_FCS_VPORT_FDISC_SEND = 2, + BFA_FCS_VPORT_FDISC = 3, + BFA_FCS_VPORT_FDISC_RETRY = 4, + BFA_FCS_VPORT_ONLINE = 5, + BFA_FCS_VPORT_DELETING = 6, + BFA_FCS_VPORT_CLEANUP = 6, + BFA_FCS_VPORT_LOGO_SEND = 7, + BFA_FCS_VPORT_LOGO = 8, + BFA_FCS_VPORT_ERROR = 9, + BFA_FCS_VPORT_MAX_STATE, +}; + +/** + * vport statistics + */ +struct bfa_vport_stats_s { + struct bfa_port_stats_s port_stats; /* base class (port) stats */ + /* + * TODO - remove + */ + + u32 fdisc_sent; /* num fdisc sent */ + u32 fdisc_accepts; /* fdisc accepts */ + u32 fdisc_retries; /* fdisc retries */ + u32 fdisc_timeouts; /* fdisc timeouts */ + u32 fdisc_rsp_err; /* fdisc response error */ + u32 fdisc_acc_bad; /* bad fdisc accepts */ + u32 fdisc_rejects; /* fdisc rejects */ + u32 fdisc_unknown_rsp; + /* + *!< fdisc rsp unknown error + */ + u32 fdisc_alloc_wait;/* fdisc req (fcxp)alloc wait */ + + u32 logo_alloc_wait;/* logo req (fcxp) alloc wait */ + u32 logo_sent; /* logo sent */ + u32 logo_accepts; /* logo accepts */ + u32 logo_rejects; /* logo rejects */ + u32 logo_rsp_err; /* logo rsp errors */ + u32 logo_unknown_rsp; + /* logo rsp unknown errors */ + + u32 fab_no_npiv; /* fabric does not support npiv */ + + u32 fab_offline; /* offline events from fab SM */ + u32 fab_online; /* online events from fab SM */ + u32 fab_cleanup; /* cleanup request from fab SM */ + u32 rsvd; +}; + +/** + * BFA vport attribute returned in queries + */ +struct bfa_vport_attr_s { + struct bfa_port_attr_s port_attr; /* base class (port) attributes */ + enum bfa_vport_state vport_state; /* vport state */ + u32 rsvd; +}; + +#endif /* __BFA_DEFS_VPORT_H__ */ diff --git a/drivers/scsi/bfa/include/fcb/bfa_fcb.h b/drivers/scsi/bfa/include/fcb/bfa_fcb.h new file mode 100644 index 000000000000..2963b0bc30e7 --- /dev/null +++ b/drivers/scsi/bfa/include/fcb/bfa_fcb.h @@ -0,0 +1,33 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcb.h BFA FCS callback interfaces + */ + +#ifndef __BFA_FCB_H__ +#define __BFA_FCB_H__ + +/** + * fcb Main fcs callbacks + */ + +void bfa_fcb_exit(struct bfad_s *bfad); + + + +#endif /* __BFA_FCB_H__ */ diff --git a/drivers/scsi/bfa/include/fcb/bfa_fcb_fcpim.h b/drivers/scsi/bfa/include/fcb/bfa_fcb_fcpim.h new file mode 100644 index 000000000000..a6c70aee0aa3 --- /dev/null +++ b/drivers/scsi/bfa/include/fcb/bfa_fcb_fcpim.h @@ -0,0 +1,76 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * bfa_fcb_fcpim.h - BFA FCS initiator mode remote port callbacks
+ */
+
+#ifndef __BFAD_FCB_FCPIM_H__
+#define __BFAD_FCB_FCPIM_H__
+
+struct bfad_itnim_s;
+
+/*
+ * RPIM callbacks
+ */
+
+/**
+ * Memory allocation for remote port instance. Called before PRLI is
+ * initiated to the remote target port.
+ *
+ * @param[in] bfad		- driver instance
+ * @param[out] itnim	- FCS remote port (IM) instance
+ * @param[out] itnim_drv	- driver remote port (IM) instance
+ *
+ * @return None
+ */
+void bfa_fcb_itnim_alloc(struct bfad_s *bfad, struct bfa_fcs_itnim_s **itnim,
+			struct bfad_itnim_s **itnim_drv);
+
+/**
+ * Free remote port (IM) instance.
+ *
+ * @param[in] bfad	- driver instance
+ * @param[in] itnim_drv	- driver remote port instance
+ *
+ * @return None
+ */
+void bfa_fcb_itnim_free(struct bfad_s *bfad,
+			struct bfad_itnim_s *itnim_drv);
+
+/**
+ * Notification when login with a remote target device is complete.
+ *
+ * @param[in] itnim_drv	- driver remote port instance
+ *
+ * @return None
+ */
+void bfa_fcb_itnim_online(struct bfad_itnim_s *itnim_drv);
+
+/**
+ * Notification when login with the remote device is severed.
+ *
+ * @param[in] itnim_drv	- driver remote port instance
+ *
+ * @return None
+ */
+void bfa_fcb_itnim_offline(struct bfad_itnim_s *itnim_drv);
+
+void bfa_fcb_itnim_tov_begin(struct bfad_itnim_s *itnim_drv);
+void bfa_fcb_itnim_tov(struct bfad_itnim_s *itnim_drv);
+
+#endif /* __BFAD_FCB_FCPIM_H__ */
diff --git a/drivers/scsi/bfa/include/fcb/bfa_fcb_port.h b/drivers/scsi/bfa/include/fcb/bfa_fcb_port.h
new file mode 100644
index 000000000000..5fd7f986fa32
--- /dev/null
+++ b/drivers/scsi/bfa/include/fcb/bfa_fcb_port.h
@@ -0,0 +1,113 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * bfa_fcb_port.h BFA FCS virtual port driver interfaces
+ */
+
+#ifndef __BFA_FCB_PORT_H__
+#define __BFA_FCB_PORT_H__
+
+#include
+/**
+ * fcs_port_fcb FCS port driver interfaces
+ */
+
+/*
+ * Forward declarations
+ */
+struct bfad_port_s;
+
+/*
+ * Callback functions from BFA FCS to driver
+ */
+
+/**
+ * Call from FCS to driver module when a port is instantiated. The port
+ * can be a base port or a virtual port within the base fabric or
+ * a virtual fabric.
+ *
+ * On this callback, the driver is expected to create scsi_host, scsi_tgt or
+ * network interfaces based on the port's personality/roles.
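/*
 * Illustrative sketch: one way a driver could back the RPIM alloc/free
 * callbacks declared above. The layout of struct bfad_itnim_s is
 * hypothetical (the real one lives in the driver, and the complete
 * bfa_fcs_itnim_s comes from fcs/bfa_fcs_fcpim.h); GFP_ATOMIC is assumed
 * because the callback may run in a non-sleeping context, and allocation
 * failure is signalled back through a NULL *itnim.
 */
#include <linux/slab.h>

struct bfad_itnim_s {
	struct bfa_fcs_itnim_s fcs_itnim;	/* hypothetical embedding */
};

void
bfa_fcb_itnim_alloc(struct bfad_s *bfad, struct bfa_fcs_itnim_s **itnim,
			struct bfad_itnim_s **itnim_drv)
{
	*itnim_drv = kzalloc(sizeof(struct bfad_itnim_s), GFP_ATOMIC);
	*itnim = *itnim_drv ? &(*itnim_drv)->fcs_itnim : NULL;
}

void
bfa_fcb_itnim_free(struct bfad_s *bfad, struct bfad_itnim_s *itnim_drv)
{
	kfree(itnim_drv);
}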
+ *
+ * base port of base fabric:	vf_drv == NULL && vp_drv == NULL
+ * vport of base fabric:	vf_drv == NULL && vp_drv != NULL
+ * base port of VF:		vf_drv != NULL && vp_drv == NULL
+ * vport of VF:			vf_drv != NULL && vp_drv != NULL
+ *
+ * @param[in] bfad   - driver instance
+ * @param[in] port   - FCS port instance
+ * @param[in] roles  - port roles: IM, TM, IP
+ * @param[in] vf_drv - VF driver instance, NULL if base fabric (no VF)
+ * @param[in] vp_drv - vport driver instance, NULL if base port
+ *
+ * @return None
+ */
+struct bfad_port_s *bfa_fcb_port_new(struct bfad_s *bfad,
+			struct bfa_fcs_port_s *port,
+			enum bfa_port_role roles, struct bfad_vf_s *vf_drv,
+			struct bfad_vport_s *vp_drv);
+
+/**
+ * Call from FCS to driver module when a port is deleted. The port
+ * can be a base port or a virtual port within the base fabric or
+ * a virtual fabric.
+ *
+ * @param[in] bfad   - driver instance
+ * @param[in] roles  - port roles: IM, TM, IP
+ * @param[in] vf_drv - VF driver instance, NULL if base fabric (no VF)
+ * @param[in] vp_drv - vport driver instance, NULL if base port
+ *
+ * @return None
+ */
void bfa_fcb_port_delete(struct bfad_s *bfad, enum bfa_port_role roles,
			struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv);

+/**
+ * Notification when port transitions to ONLINE state.
+ *
+ * Online notification is a logical link up for the local port. This
+ * notification is sent after a successful FLOGI, or a successful
+ * link initialization in private-loop or N2N topologies.
+ *
+ * @param[in] bfad   - driver instance
+ * @param[in] roles  - port roles: IM, TM, IP
+ * @param[in] vf_drv - VF driver instance, NULL if base fabric (no VF)
+ * @param[in] vp_drv - vport driver instance, NULL if base port
+ *
+ * @return None
+ */
+void bfa_fcb_port_online(struct bfad_s *bfad, enum bfa_port_role roles,
+			struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv);
+
+/**
+ * Notification when port transitions to OFFLINE state.
+ *
+ * Offline notification is a logical link down for the local port.
+ *
+ * @param[in] bfad   - driver instance
+ * @param[in] roles  - port roles: IM, TM, IP
+ * @param[in] vf_drv - VF driver instance, NULL if base fabric (no VF)
+ * @param[in] vp_drv - vport driver instance, NULL if base port
+ *
+ * @return None
+ */
+void bfa_fcb_port_offline(struct bfad_s *bfad, enum bfa_port_role roles,
+			struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv);
+
+
+#endif /* __BFA_FCB_PORT_H__ */
diff --git a/drivers/scsi/bfa/include/fcb/bfa_fcb_rport.h b/drivers/scsi/bfa/include/fcb/bfa_fcb_rport.h
new file mode 100644
index 000000000000..e0261bb6d1c1
--- /dev/null
+++ b/drivers/scsi/bfa/include/fcb/bfa_fcb_rport.h
@@ -0,0 +1,80 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
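/*
 * Illustrative sketch: the (vf_drv, vp_drv) convention documented for
 * bfa_fcb_port_new() above can be decoded mechanically. Only the two
 * driver struct names are assumed; the helper is hypothetical.
 */
static const char *
bfad_port_kind(struct bfad_vf_s *vf_drv, struct bfad_vport_s *vp_drv)
{
	if (vf_drv == NULL)
		return vp_drv == NULL ? "base port of base fabric" :
					"vport of base fabric";
	return vp_drv == NULL ? "base port of VF" : "vport of VF";
}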
+ */
+
+/**
+ * bfa_fcb_rport.h BFA FCS rport driver interfaces
+ */
+
+#ifndef __BFA_FCB_RPORT_H__
+#define __BFA_FCB_RPORT_H__
+
+/**
+ * fcs_rport_fcb Remote port driver interfaces
+ */
+
+
+struct bfad_rport_s;
+
+/*
+ * Callback functions from BFA FCS to driver
+ */
+
+/**
+ * Completion callback for bfa_fcs_rport_add().
+ *
+ * @param[in] rport_drv - driver instance of rport
+ *
+ * @return None
+ */
+void bfa_fcb_rport_add(struct bfad_rport_s *rport_drv);
+
+/**
+ * Completion callback for bfa_fcs_rport_remove().
+ *
+ * @param[in] rport_drv - driver instance of rport
+ *
+ * @return None
+ */
+void bfa_fcb_rport_remove(struct bfad_rport_s *rport_drv);
+
+/**
+ * Call to allocate a rport instance.
+ *
+ * @param[in] bfad - driver instance
+ * @param[out] rport - BFA FCS instance of rport
+ * @param[out] rport_drv - driver instance of rport
+ *
+ * @retval BFA_STATUS_OK - successfully allocated
+ * @retval BFA_STATUS_ENOMEM - cannot allocate
+ */
+bfa_status_t bfa_fcb_rport_alloc(struct bfad_s *bfad,
+			struct bfa_fcs_rport_s **rport,
+			struct bfad_rport_s **rport_drv);
+
+/**
+ * Call to free rport memory resources.
+ *
+ * @param[in] bfad - driver instance
+ * @param[in] rport_drv - driver instance of rport
+ *
+ * @return None
+ */
+void bfa_fcb_rport_free(struct bfad_s *bfad, struct bfad_rport_s **rport_drv);
+
+
+
+#endif /* __BFA_FCB_RPORT_H__ */
diff --git a/drivers/scsi/bfa/include/fcb/bfa_fcb_vf.h b/drivers/scsi/bfa/include/fcb/bfa_fcb_vf.h
new file mode 100644
index 000000000000..cfd3fac0a4e2
--- /dev/null
+++ b/drivers/scsi/bfa/include/fcb/bfa_fcb_vf.h
@@ -0,0 +1,47 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * bfa_fcb_vf.h BFA FCS virtual fabric driver interfaces
+ */
+
+#ifndef __BFA_FCB_VF_H__
+#define __BFA_FCB_VF_H__
+
+/**
+ * fcs_vf_fcb Virtual fabric driver interfaces
+ */
+
+
+struct bfad_vf_s;
+
+/*
+ * Callback functions from BFA FCS to driver
+ */
+
+/**
+ * Completion callback for bfa_fcs_vf_stop().
+ *
+ * @param[in] vf_drv - driver instance of vf
+ *
+ * @return None
+ */
+void bfa_fcb_vf_stop(struct bfad_vf_s *vf_drv);
+
+
+
+#endif /* __BFA_FCB_VF_H__ */
diff --git a/drivers/scsi/bfa/include/fcb/bfa_fcb_vport.h b/drivers/scsi/bfa/include/fcb/bfa_fcb_vport.h
new file mode 100644
index 000000000000..a39f474c2fcf
--- /dev/null
+++ b/drivers/scsi/bfa/include/fcb/bfa_fcb_vport.h
@@ -0,0 +1,47 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcb_vport.h BFA FCS virtual port driver interfaces + */ + +#ifndef __BFA_FCB_VPORT_H__ +#define __BFA_FCB_VPORT_H__ + +/** + * fcs_vport_fcb Virtual port driver interfaces + */ + + +struct bfad_vport_s; + +/* + * Callback functions from BFA FCS to driver + */ + +/** + * Completion callback for bfa_fcs_vport_delete(). + * + * @param[in] vport_drv - driver instance of vport + * + * @return None + */ +void bfa_fcb_vport_delete(struct bfad_vport_s *vport_drv); + + + +#endif /* __BFA_FCB_VPORT_H__ */ diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs.h b/drivers/scsi/bfa/include/fcs/bfa_fcs.h new file mode 100644 index 000000000000..627669c65546 --- /dev/null +++ b/drivers/scsi/bfa/include/fcs/bfa_fcs.h @@ -0,0 +1,73 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_FCS_H__ +#define __BFA_FCS_H__ + +#include +#include +#include +#include +#include + +#define BFA_FCS_OS_STR_LEN 64 + +struct bfa_fcs_stats_s { + struct { + u32 untagged; /* untagged receive frames */ + u32 tagged; /* tagged receive frames */ + u32 vfid_unknown; /* VF id is unknown */ + } uf; +}; + +struct bfa_fcs_driver_info_s { + u8 version[BFA_VERSION_LEN]; /* Driver Version */ + u8 host_machine_name[BFA_FCS_OS_STR_LEN]; + u8 host_os_name[BFA_FCS_OS_STR_LEN]; /* OS name and version */ + u8 host_os_patch[BFA_FCS_OS_STR_LEN];/* patch or service pack */ + u8 os_device_name[BFA_FCS_OS_STR_LEN]; /* Driver Device Name */ +}; + +struct bfa_fcs_s { + struct bfa_s *bfa; /* corresponding BFA bfa instance */ + struct bfad_s *bfad; /* corresponding BDA driver instance */ + struct bfa_log_mod_s *logm; /* driver logging module instance */ + struct bfa_trc_mod_s *trcmod; /* tracing module */ + struct bfa_aen_s *aen; /* aen component */ + bfa_boolean_t vf_enabled; /* VF mode is enabled */ + bfa_boolean_t min_cfg; /* min cfg enabled/disabled */ + u16 port_vfid; /* port default VF ID */ + struct bfa_fcs_driver_info_s driver_info; + struct bfa_fcs_fabric_s fabric; /* base fabric state machine */ + struct bfa_fcs_stats_s stats; /* FCS statistics */ + struct bfa_wc_s wc; /* waiting counter */ +}; + +/* + * bfa fcs API functions + */ +void bfa_fcs_init(struct bfa_fcs_s *fcs, struct bfa_s *bfa, struct bfad_s *bfad, + bfa_boolean_t min_cfg); +void bfa_fcs_driver_info_init(struct bfa_fcs_s *fcs, + struct bfa_fcs_driver_info_s *driver_info); +void bfa_fcs_exit(struct bfa_fcs_s *fcs); +void bfa_fcs_trc_init(struct bfa_fcs_s *fcs, struct bfa_trc_mod_s *trcmod); +void bfa_fcs_log_init(struct bfa_fcs_s *fcs, struct bfa_log_mod_s *logmod); +void bfa_fcs_aen_init(struct bfa_fcs_s *fcs, struct bfa_aen_s *aen); +void bfa_fcs_start(struct bfa_fcs_s *fcs); + +#endif /* __BFA_FCS_H__ */ diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs_auth.h b/drivers/scsi/bfa/include/fcs/bfa_fcs_auth.h new file mode 100644 index 000000000000..28c4c9ff08b3 --- /dev/null +++ 
b/drivers/scsi/bfa/include/fcs/bfa_fcs_auth.h
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __BFA_FCS_AUTH_H__
+#define __BFA_FCS_AUTH_H__
+
+struct bfa_fcs_s;
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct bfa_fcs_fabric_s;
+
+
+
+struct bfa_fcs_auth_s {
+	bfa_sm_t	sm;	/* state machine */
+	bfa_boolean_t	policy;	/* authentication enabled/disabled */
+	enum bfa_auth_status status;	/* authentication status */
+	enum auth_rjt_codes rjt_code;	/* auth reject status */
+	enum auth_rjt_code_exps rjt_code_exp;	/* auth reject reason */
+	enum bfa_auth_algo algo;	/* Authentication algorithm */
+	struct bfa_auth_stats_s stats;	/* Statistics */
+	enum auth_dh_gid group;	/* DH (Diffie-Hellman) Group */
+	enum bfa_auth_secretsource source;	/* Secret source */
+	char	secret[BFA_AUTH_SECRET_STRING_LEN];
+				/* secret string */
+	u8	secret_len;
+				/* secret string length */
+	u8	nretries;
+				/* number of retries */
+	struct bfa_fcs_fabric_s *fabric;	/* pointer to fabric */
+	u8	sentcode;	/* auth code sent */
+	u8	*response;	/* pointer to response data */
+	struct bfa_timer_s delay_timer;	/* delay timer */
+	struct bfa_fcxp_s *fcxp;	/* pointer to fcxp */
+	struct bfa_fcxp_wqe_s fcxp_wqe;
+};
+
+/**
+ * bfa fcs authentication public functions
+ */
+bfa_status_t	bfa_fcs_auth_get_attr(struct bfa_fcs_s *port,
+			struct bfa_auth_attr_s *attr);
+bfa_status_t	bfa_fcs_auth_set_policy(struct bfa_fcs_s *port,
+			bfa_boolean_t policy);
+enum bfa_auth_status bfa_fcs_auth_get_status(struct bfa_fcs_s *port);
+bfa_status_t	bfa_fcs_auth_set_algo(struct bfa_fcs_s *port,
+			enum bfa_auth_algo algo);
+bfa_status_t	bfa_fcs_auth_get_stats(struct bfa_fcs_s *port,
+			struct bfa_auth_stats_s *stats);
+bfa_status_t	bfa_fcs_auth_set_dh_group(struct bfa_fcs_s *port, int group);
+bfa_status_t	bfa_fcs_auth_set_secretstring(struct bfa_fcs_s *port,
+			char *secret);
+bfa_status_t	bfa_fcs_auth_set_secretstring_encrypt(struct bfa_fcs_s *port,
+			u32 secret[], u32 len);
+bfa_status_t	bfa_fcs_auth_set_secretsource(struct bfa_fcs_s *port,
+			enum bfa_auth_secretsource src);
+bfa_status_t	bfa_fcs_auth_reset_stats(struct bfa_fcs_s *port);
+bfa_status_t	bfa_fcs_auth_reinit(struct bfa_fcs_s *port);
+
+#endif /* __BFA_FCS_AUTH_H__ */
diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs_fabric.h b/drivers/scsi/bfa/include/fcs/bfa_fcs_fabric.h
new file mode 100644
index 000000000000..4ffd2242d3de
--- /dev/null
+++ b/drivers/scsi/bfa/include/fcs/bfa_fcs_fabric.h
@@ -0,0 +1,112 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
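/*
 * Illustrative sketch: enabling authentication through the API declared
 * above. BFA_TRUE and the BFA_AUTH_SECSRC_LOCAL enumerator are assumed
 * names (the actual bfa_boolean_t values and enum bfa_auth_secretsource
 * members are defined elsewhere); BFA_STATUS_OK is the success code
 * referenced in the rport callback documentation earlier in this patch.
 */
static bfa_status_t
example_auth_setup(struct bfa_fcs_s *fcs, char *secret)
{
	bfa_status_t rc;

	rc = bfa_fcs_auth_set_policy(fcs, BFA_TRUE);
	if (rc != BFA_STATUS_OK)
		return rc;
	rc = bfa_fcs_auth_set_secretsource(fcs, BFA_AUTH_SECSRC_LOCAL);
	if (rc != BFA_STATUS_OK)
		return rc;
	return bfa_fcs_auth_set_secretstring(fcs, secret);
}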
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __BFA_FCS_FABRIC_H__ +#define __BFA_FCS_FABRIC_H__ + +struct bfa_fcs_s; + +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * forward declaration + */ +struct bfad_vf_s; + +enum bfa_fcs_fabric_type { + BFA_FCS_FABRIC_UNKNOWN = 0, + BFA_FCS_FABRIC_SWITCHED = 1, + BFA_FCS_FABRIC_PLOOP = 2, + BFA_FCS_FABRIC_N2N = 3, +}; + + +struct bfa_fcs_fabric_s { + struct list_head qe; /* queue element */ + bfa_sm_t sm; /* state machine */ + struct bfa_fcs_s *fcs; /* FCS instance */ + struct bfa_fcs_port_s bport; /* base logical port */ + enum bfa_fcs_fabric_type fab_type; /* fabric type */ + enum bfa_pport_type oper_type; /* current link topology */ + u8 is_vf; /* is virtual fabric? */ + u8 is_npiv; /* is NPIV supported ? */ + u8 is_auth; /* is Security/Auth supported ? */ + u16 bb_credit; /* BB credit from fabric */ + u16 vf_id; /* virtual fabric ID */ + u16 num_vports; /* num vports */ + u16 rsvd; + struct list_head vport_q; /* queue of virtual ports */ + struct list_head vf_q; /* queue of virtual fabrics */ + struct bfad_vf_s *vf_drv; /* driver vf structure */ + struct bfa_timer_s link_timer; /* Link Failure timer. Vport */ + wwn_t fabric_name; /* attached fabric name */ + bfa_boolean_t auth_reqd; /* authentication required */ + struct bfa_timer_s delay_timer; /* delay timer */ + union { + u16 swp_vfid;/* switch port VF id */ + } event_arg; + struct bfa_fcs_auth_s auth; /* authentication config */ + struct bfa_wc_s wc; /* wait counter for delete */ + struct bfa_vf_stats_s stats; /* fabric/vf stats */ + struct bfa_lps_s *lps; /* lport login services */ + u8 fabric_ip_addr[BFA_FCS_FABRIC_IPADDR_SZ]; /* attached + * fabric's ip addr + */ +}; + +#define bfa_fcs_fabric_npiv_capable(__f) (__f)->is_npiv +#define bfa_fcs_fabric_is_switched(__f) \ + ((__f)->fab_type == BFA_FCS_FABRIC_SWITCHED) + +/** + * The design calls for a single implementation of base fabric and vf. 
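/*
 * Illustrative sketch: the two convenience macros defined above compose
 * naturally; for example, NPIV vport creation only makes sense on a
 * switched fabric that granted NPIV support at FLOGI time. The helper is
 * hypothetical.
 */
static int
example_can_create_vport(struct bfa_fcs_fabric_s *fabric)
{
	return bfa_fcs_fabric_is_switched(fabric) &&
		bfa_fcs_fabric_npiv_capable(fabric);
}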
+ */ +#define bfa_fcs_vf_t struct bfa_fcs_fabric_s + +struct bfa_vf_event_s { + u32 undefined; +}; + +/** + * bfa fcs vf public functions + */ +bfa_status_t bfa_fcs_vf_mode_enable(struct bfa_fcs_s *fcs, u16 vf_id); +bfa_status_t bfa_fcs_vf_mode_disable(struct bfa_fcs_s *fcs); +bfa_status_t bfa_fcs_vf_create(bfa_fcs_vf_t *vf, struct bfa_fcs_s *fcs, + u16 vf_id, struct bfa_port_cfg_s *port_cfg, + struct bfad_vf_s *vf_drv); +bfa_status_t bfa_fcs_vf_delete(bfa_fcs_vf_t *vf); +void bfa_fcs_vf_start(bfa_fcs_vf_t *vf); +bfa_status_t bfa_fcs_vf_stop(bfa_fcs_vf_t *vf); +void bfa_fcs_vf_list(struct bfa_fcs_s *fcs, u16 *vf_ids, int *nvfs); +void bfa_fcs_vf_list_all(struct bfa_fcs_s *fcs, u16 *vf_ids, int *nvfs); +void bfa_fcs_vf_get_attr(bfa_fcs_vf_t *vf, struct bfa_vf_attr_s *vf_attr); +void bfa_fcs_vf_get_stats(bfa_fcs_vf_t *vf, + struct bfa_vf_stats_s *vf_stats); +void bfa_fcs_vf_clear_stats(bfa_fcs_vf_t *vf); +void bfa_fcs_vf_get_ports(bfa_fcs_vf_t *vf, wwn_t vpwwn[], int *nports); +bfa_fcs_vf_t *bfa_fcs_vf_lookup(struct bfa_fcs_s *fcs, u16 vf_id); +struct bfad_vf_s *bfa_fcs_vf_get_drv_vf(bfa_fcs_vf_t *vf); + +#endif /* __BFA_FCS_FABRIC_H__ */ diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs_fcpim.h b/drivers/scsi/bfa/include/fcs/bfa_fcs_fcpim.h new file mode 100644 index 000000000000..e719f2c3eb35 --- /dev/null +++ b/drivers/scsi/bfa/include/fcs/bfa_fcs_fcpim.h @@ -0,0 +1,131 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcs_fcpim.h BFA FCS FCP Initiator Mode interfaces/defines. 
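/*
 * Illustrative sketch: walking the virtual fabrics through the vf API
 * declared above. The 256-entry bound is arbitrary, and bfa_fcs_vf_list()
 * is assumed to treat nvfs as an in/out count, which this header does not
 * spell out.
 */
static void
example_walk_vfs(struct bfa_fcs_s *fcs)
{
	u16	vf_ids[256];
	int	nvfs = 256, i;
	bfa_fcs_vf_t *vf;
	struct bfa_vf_attr_s attr;

	bfa_fcs_vf_list(fcs, vf_ids, &nvfs);
	for (i = 0; i < nvfs; i++) {
		vf = bfa_fcs_vf_lookup(fcs, vf_ids[i]);
		if (vf != NULL)
			bfa_fcs_vf_get_attr(vf, &attr);
	}
}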
+ */ + +#ifndef __BFA_FCS_FCPIM_H__ +#define __BFA_FCS_FCPIM_H__ + +#include +#include +#include +#include +#include +#include + +/* + * forward declarations + */ +struct bfad_itnim_s; + +struct bfa_fcs_itnim_s { + bfa_sm_t sm; /* state machine */ + struct bfa_fcs_rport_s *rport; /* parent remote rport */ + struct bfad_itnim_s *itnim_drv; /* driver peer instance */ + struct bfa_fcs_s *fcs; /* fcs instance */ + struct bfa_timer_s timer; /* timer functions */ + struct bfa_itnim_s *bfa_itnim; /* BFA itnim struct */ + bfa_boolean_t seq_rec; /* seq recovery support */ + bfa_boolean_t rec_support; /* REC supported */ + bfa_boolean_t conf_comp; /* FCP_CONF support */ + bfa_boolean_t task_retry_id; /* task retry id supp */ + struct bfa_fcxp_wqe_s fcxp_wqe; /* wait qelem for fcxp */ + struct bfa_fcxp_s *fcxp; /* FCXP in use */ + struct bfa_itnim_stats_s stats; /* itn statistics */ +}; + + +static inline struct bfad_port_s * +bfa_fcs_itnim_get_drvport(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->rport->port->bfad_port; +} + + +static inline struct bfa_fcs_port_s * +bfa_fcs_itnim_get_port(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->rport->port; +} + + +static inline wwn_t +bfa_fcs_itnim_get_nwwn(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->rport->nwwn; +} + + +static inline wwn_t +bfa_fcs_itnim_get_pwwn(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->rport->pwwn; +} + + +static inline u32 +bfa_fcs_itnim_get_fcid(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->rport->pid; +} + + +static inline u32 +bfa_fcs_itnim_get_maxfrsize(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->rport->maxfrsize; +} + + +static inline enum fc_cos +bfa_fcs_itnim_get_cos(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->rport->fc_cos; +} + + +static inline struct bfad_itnim_s * +bfa_fcs_itnim_get_drvitn(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->itnim_drv; +} + + +static inline struct bfa_itnim_s * +bfa_fcs_itnim_get_halitn(struct bfa_fcs_itnim_s *itnim) +{ + return itnim->bfa_itnim; +} + +/** + * bfa fcs FCP Initiator mode API functions + */ +void bfa_fcs_itnim_get_attr(struct bfa_fcs_itnim_s *itnim, + struct bfa_itnim_attr_s *attr); +void bfa_fcs_itnim_get_stats(struct bfa_fcs_itnim_s *itnim, + struct bfa_itnim_stats_s *stats); +struct bfa_fcs_itnim_s *bfa_fcs_itnim_lookup(struct bfa_fcs_port_s *port, + wwn_t rpwwn); +bfa_status_t bfa_fcs_itnim_attr_get(struct bfa_fcs_port_s *port, wwn_t rpwwn, + struct bfa_itnim_attr_s *attr); +bfa_status_t bfa_fcs_itnim_stats_get(struct bfa_fcs_port_s *port, wwn_t rpwwn, + struct bfa_itnim_stats_s *stats); +bfa_status_t bfa_fcs_itnim_stats_clear(struct bfa_fcs_port_s *port, + wwn_t rpwwn); +#endif /* __BFA_FCS_FCPIM_H__ */ diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs_fdmi.h b/drivers/scsi/bfa/include/fcs/bfa_fcs_fdmi.h new file mode 100644 index 000000000000..4441fffc9c82 --- /dev/null +++ b/drivers/scsi/bfa/include/fcs/bfa_fcs_fdmi.h @@ -0,0 +1,63 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * bfa_fcs_fdmi.h BFA fcs fdmi module public interface
+ */
+
+#ifndef __BFA_FCS_FDMI_H__
+#define __BFA_FCS_FDMI_H__
+#include
+#include
+
+#define	BFA_FCS_FDMI_SUPORTED_SPEEDS  (FDMI_TRANS_SPEED_1G  |	\
+				FDMI_TRANS_SPEED_2G |		\
+				FDMI_TRANS_SPEED_4G |		\
+				FDMI_TRANS_SPEED_8G)
+
+/*
+ * HBA Attribute Block: BFA internal representation. Note: some field
+ * sizes have been trimmed to suit BFA. For example, the model will be
+ * "Brocade", so its size has been reduced to 16 bytes from the
+ * standard's 64 bytes.
+ */
+struct bfa_fcs_fdmi_hba_attr_s {
+	wwn_t	node_name;
+	u8	manufacturer[64];
+	u8	serial_num[64];
+	u8	model[16];
+	u8	model_desc[256];
+	u8	hw_version[8];
+	u8	driver_version[8];
+	u8	option_rom_ver[BFA_VERSION_LEN];
+	u8	fw_version[8];
+	u8	os_name[256];
+	u32	max_ct_pyld;
+};
+
+/*
+ * Port Attribute Block
+ */
+struct bfa_fcs_fdmi_port_attr_s {
+	u8	supp_fc4_types[32];	/* supported FC4 types */
+	u32	supp_speed;	/* supported speed */
+	u32	curr_speed;	/* current speed */
+	u32	max_frm_size;	/* max frame size */
+	u8	os_device_name[256];	/* OS device Name */
+	u8	host_name[256];	/* host name */
+};
+
+#endif /* __BFA_FCS_FDMI_H__ */
diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs_lport.h b/drivers/scsi/bfa/include/fcs/bfa_fcs_lport.h
new file mode 100644
index 000000000000..b85cba884b96
--- /dev/null
+++ b/drivers/scsi/bfa/include/fcs/bfa_fcs_lport.h
@@ -0,0 +1,226 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * bfa_fcs_port.h BFA fcs port module public interface
+ */
+
+#ifndef __BFA_FCS_PORT_H__
+#define __BFA_FCS_PORT_H__
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct bfa_fcs_s;
+struct bfa_fcs_fabric_s;
+
+/*
+ * @todo: need to move to a global config file.
+ * Maximum Vports supported per physical port or vf.
+ */
+#define BFA_FCS_MAX_VPORTS_SUPP_CB  255
+#define BFA_FCS_MAX_VPORTS_SUPP_CT  191
+
+/*
+ * @todo: need to move to a global config file.
+ * Maximum Rports supported per port (physical/logical).
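/*
 * Illustrative sketch: checking one speed bit against both the FDMI
 * advertisement mask above and a port attribute block. The
 * FDMI_TRANS_SPEED_* bits come from the protocol headers and are assumed
 * to be single-bit flags.
 */
static int
example_fdmi_speed_ok(struct bfa_fcs_fdmi_port_attr_s *pattr, u32 speed_bit)
{
	return (BFA_FCS_FDMI_SUPORTED_SPEEDS & speed_bit) &&
		(pattr->supp_speed & speed_bit);
}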
+ */ +#define BFA_FCS_MAX_RPORTS_SUPP 256 /* @todo : tentative value */ + + +struct bfa_fcs_port_ns_s { + bfa_sm_t sm; /* state machine */ + struct bfa_timer_s timer; + struct bfa_fcs_port_s *port; /* parent port */ + struct bfa_fcxp_s *fcxp; + struct bfa_fcxp_wqe_s fcxp_wqe; +}; + + +struct bfa_fcs_port_scn_s { + bfa_sm_t sm; /* state machine */ + struct bfa_timer_s timer; + struct bfa_fcs_port_s *port; /* parent port */ + struct bfa_fcxp_s *fcxp; + struct bfa_fcxp_wqe_s fcxp_wqe; +}; + + +struct bfa_fcs_port_fdmi_s { + bfa_sm_t sm; /* state machine */ + struct bfa_timer_s timer; + struct bfa_fcs_port_ms_s *ms; /* parent ms */ + struct bfa_fcxp_s *fcxp; + struct bfa_fcxp_wqe_s fcxp_wqe; + u8 retry_cnt; /* retry count */ + u8 rsvd[3]; +}; + + +struct bfa_fcs_port_ms_s { + bfa_sm_t sm; /* state machine */ + struct bfa_timer_s timer; + struct bfa_fcs_port_s *port; /* parent port */ + struct bfa_fcxp_s *fcxp; + struct bfa_fcxp_wqe_s fcxp_wqe; + struct bfa_fcs_port_fdmi_s fdmi; /* FDMI component of MS */ + u8 retry_cnt; /* retry count */ + u8 rsvd[3]; +}; + + +struct bfa_fcs_port_fab_s { + struct bfa_fcs_port_ns_s ns; /* NS component of port */ + struct bfa_fcs_port_scn_s scn; /* scn component of port */ + struct bfa_fcs_port_ms_s ms; /* MS component of port */ +}; + + + +#define MAX_ALPA_COUNT 127 + +struct bfa_fcs_port_loop_s { + u8 num_alpa; /* Num of ALPA entries in the map */ + u8 alpa_pos_map[MAX_ALPA_COUNT]; /* ALPA Positional + *Map */ + struct bfa_fcs_port_s *port; /* parent port */ +}; + + + +struct bfa_fcs_port_n2n_s { + u32 rsvd; + u16 reply_oxid; /* ox_id from the req flogi to be + *used in flogi acc */ + wwn_t rem_port_wwn; /* Attached port's wwn */ +}; + + +union bfa_fcs_port_topo_u { + struct bfa_fcs_port_fab_s pfab; + struct bfa_fcs_port_loop_s ploop; + struct bfa_fcs_port_n2n_s pn2n; +}; + + +struct bfa_fcs_port_s { + struct list_head qe; /* used by port/vport */ + bfa_sm_t sm; /* state machine */ + struct bfa_fcs_fabric_s *fabric; /* parent fabric */ + struct bfa_port_cfg_s port_cfg; /* port configuration */ + struct bfa_timer_s link_timer; /* timer for link offline */ + u32 pid : 24; /* FC address */ + u8 lp_tag; /* lport tag */ + u16 num_rports; /* Num of r-ports */ + struct list_head rport_q; /* queue of discovered r-ports */ + struct bfa_fcs_s *fcs; /* FCS instance */ + union bfa_fcs_port_topo_u port_topo; /* fabric/loop/n2n details */ + struct bfad_port_s *bfad_port; /* driver peer instance */ + struct bfa_fcs_vport_s *vport; /* NULL for base ports */ + struct bfa_fcxp_s *fcxp; + struct bfa_fcxp_wqe_s fcxp_wqe; + struct bfa_port_stats_s stats; + struct bfa_wc_s wc; /* waiting counter for events */ +}; + +#define bfa_fcs_lport_t struct bfa_fcs_port_s + +/** + * Symbolic Name related defines + * Total bytes 255. + * Physical Port's symbolic name 128 bytes. + * For Vports, Vport's symbolic name is appended to the Physical port's + * Symbolic Name. + * + * Physical Port's symbolic name Format : (Total 128 bytes) + * Adapter Model number/name : 12 bytes + * Driver Version : 10 bytes + * Host Machine Name : 30 bytes + * Host OS Info : 48 bytes + * Host OS PATCH Info : 16 bytes + * ( remaining 12 bytes reserved to be used for separator) + */ +#define BFA_FCS_PORT_SYMBNAME_SEPARATOR " | " + +#define BFA_FCS_PORT_SYMBNAME_MODEL_SZ 12 +#define BFA_FCS_PORT_SYMBNAME_VERSION_SZ 10 +#define BFA_FCS_PORT_SYMBNAME_MACHINENAME_SZ 30 +#define BFA_FCS_PORT_SYMBNAME_OSINFO_SZ 48 +#define BFA_FCS_PORT_SYMBNAME_OSPATCH_SZ 16 + +/** + * Get FC port ID for a logical port. 
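/*
 * Illustrative sketch of the symbolic-name layout documented above: each
 * component is clamped to its BFA_FCS_PORT_SYMBNAME_* budget and joined
 * with the separator. The kernel snprintf() is assumed, the model string
 * is caller-supplied, and struct bfa_fcs_driver_info_s comes from
 * fcs/bfa_fcs.h earlier in this patch.
 */
static void
example_build_symname(char *buf, int len, const char *model,
			const struct bfa_fcs_driver_info_s *di)
{
	snprintf(buf, len, "%.*s%s%.*s%s%.*s%s%.*s",
		BFA_FCS_PORT_SYMBNAME_MODEL_SZ, model,
		BFA_FCS_PORT_SYMBNAME_SEPARATOR,
		BFA_FCS_PORT_SYMBNAME_VERSION_SZ, (const char *)di->version,
		BFA_FCS_PORT_SYMBNAME_SEPARATOR,
		BFA_FCS_PORT_SYMBNAME_MACHINENAME_SZ,
		(const char *)di->host_machine_name,
		BFA_FCS_PORT_SYMBNAME_SEPARATOR,
		BFA_FCS_PORT_SYMBNAME_OSINFO_SZ,
		(const char *)di->host_os_name);
}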
+ */ +#define bfa_fcs_port_get_fcid(_lport) ((_lport)->pid) +#define bfa_fcs_port_get_pwwn(_lport) ((_lport)->port_cfg.pwwn) +#define bfa_fcs_port_get_nwwn(_lport) ((_lport)->port_cfg.nwwn) +#define bfa_fcs_port_get_psym_name(_lport) ((_lport)->port_cfg.sym_name) +#define bfa_fcs_port_is_initiator(_lport) \ + ((_lport)->port_cfg.roles & BFA_PORT_ROLE_FCP_IM) +#define bfa_fcs_port_is_target(_lport) \ + ((_lport)->port_cfg.roles & BFA_PORT_ROLE_FCP_TM) +#define bfa_fcs_port_get_nrports(_lport) \ + ((_lport) ? (_lport)->num_rports : 0) + +static inline struct bfad_port_s * +bfa_fcs_port_get_drvport(struct bfa_fcs_port_s *port) +{ + return port->bfad_port; +} + + +#define bfa_fcs_port_get_opertype(_lport) (_lport)->fabric->oper_type + + +#define bfa_fcs_port_get_fabric_name(_lport) (_lport)->fabric->fabric_name + + +#define bfa_fcs_port_get_fabric_ipaddr(_lport) (_lport)->fabric->fabric_ip_addr + +/** + * bfa fcs port public functions + */ +void bfa_fcs_cfg_base_port(struct bfa_fcs_s *fcs, + struct bfa_port_cfg_s *port_cfg); +struct bfa_fcs_port_s *bfa_fcs_get_base_port(struct bfa_fcs_s *fcs); +void bfa_fcs_port_get_rports(struct bfa_fcs_port_s *port, + wwn_t rport_wwns[], int *nrports); + +wwn_t bfa_fcs_port_get_rport(struct bfa_fcs_port_s *port, wwn_t wwn, + int index, int nrports, bfa_boolean_t bwwn); + +struct bfa_fcs_port_s *bfa_fcs_lookup_port(struct bfa_fcs_s *fcs, + u16 vf_id, wwn_t lpwwn); + +void bfa_fcs_port_get_info(struct bfa_fcs_port_s *port, + struct bfa_port_info_s *port_info); +void bfa_fcs_port_get_attr(struct bfa_fcs_port_s *port, + struct bfa_port_attr_s *port_attr); +void bfa_fcs_port_get_stats(struct bfa_fcs_port_s *fcs_port, + struct bfa_port_stats_s *port_stats); +void bfa_fcs_port_clear_stats(struct bfa_fcs_port_s *fcs_port); +enum bfa_pport_speed bfa_fcs_port_get_rport_max_speed( + struct bfa_fcs_port_s *port); +void bfa_fcs_port_enable_ipfc_roles(struct bfa_fcs_port_s *fcs_port); +void bfa_fcs_port_disable_ipfc_roles(struct bfa_fcs_port_s *fcs_port); + +#endif /* __BFA_FCS_PORT_H__ */ diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs_rport.h b/drivers/scsi/bfa/include/fcs/bfa_fcs_rport.h new file mode 100644 index 000000000000..702b95b76c2d --- /dev/null +++ b/drivers/scsi/bfa/include/fcs/bfa_fcs_rport.h @@ -0,0 +1,104 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
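/*
 * Illustrative sketch: resolving a local port by WWN and reading it
 * through the accessor macros above. vf_id 0 is assumed to address the
 * base fabric; the helper is hypothetical.
 */
static u32
example_lookup_fcid(struct bfa_fcs_s *fcs, wwn_t lpwwn, int *nrports)
{
	struct bfa_fcs_port_s *lport;

	lport = bfa_fcs_lookup_port(fcs, 0, lpwwn);
	if (lport == NULL)
		return 0;
	*nrports = bfa_fcs_port_get_nrports(lport);
	return bfa_fcs_port_get_fcid(lport);
}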
+ */
+
+#ifndef __BFA_FCS_RPORT_H__
+#define __BFA_FCS_RPORT_H__
+
+#include
+#include
+#include
+#include
+
+#define BFA_FCS_RPORT_DEF_DEL_TIMEOUT	90	/* in secs */
+/*
+ * forward declarations
+ */
+struct bfad_rport_s;
+
+struct bfa_fcs_itnim_s;
+struct bfa_fcs_tin_s;
+struct bfa_fcs_iprp_s;
+
+/* Rport Features (RPF) */
+struct bfa_fcs_rpf_s {
+	bfa_sm_t	sm;	/* state machine */
+	struct bfa_fcs_rport_s *rport;	/* parent rport */
+	struct bfa_timer_s	timer;	/* general purpose timer */
+	struct bfa_fcxp_s	*fcxp;	/* FCXP needed for discarding */
+	struct bfa_fcxp_wqe_s	fcxp_wqe;	/* fcxp wait queue element */
+	int	rpsc_retries;	/* max RPSC retry attempts */
+	enum bfa_pport_speed	rpsc_speed;	/* Current Speed from RPSC.
+						 * 0 if RPSC fails */
+	enum bfa_pport_speed	assigned_speed;	/* Speed assigned by the user.
+						 * will be used if RPSC is not
+						 * supported by the rport */
+};
+
+struct bfa_fcs_rport_s {
+	struct list_head qe;	/* used by port/vport */
+	struct bfa_fcs_port_s *port;	/* parent FCS port */
+	struct bfa_fcs_s *fcs;	/* fcs instance */
+	struct bfad_rport_s *rp_drv;	/* driver peer instance */
+	u32	pid;	/* port ID of rport */
+	u16	maxfrsize;	/* maximum frame size */
+	u16	reply_oxid;	/* OX_ID of inbound requests */
+	enum fc_cos	fc_cos;	/* FC classes of service supp */
+	bfa_boolean_t	cisc;	/* CISC capable device */
+	wwn_t	pwwn;	/* port wwn of rport */
+	wwn_t	nwwn;	/* node wwn of rport */
+	struct bfa_rport_symname_s psym_name;	/* port symbolic name */
+	bfa_sm_t	sm;	/* state machine */
+	struct bfa_timer_s timer;	/* general purpose timer */
+	struct bfa_fcs_itnim_s *itnim;	/* ITN initiator mode role */
+	struct bfa_fcs_tin_s *tin;	/* ITN target mode role */
+	struct bfa_fcs_iprp_s *iprp;	/* IP/FC role */
+	struct bfa_rport_s *bfa_rport;	/* BFA Rport */
+	struct bfa_fcxp_s *fcxp;	/* FCXP needed for discarding */
+	int	plogi_retries;	/* max plogi retry attempts */
+	int	ns_retries;	/* max NS query retry attempts */
+	struct bfa_fcxp_wqe_s	fcxp_wqe;	/* fcxp wait queue element */
+	struct bfa_rport_stats_s stats;	/* rport stats */
+	enum bfa_rport_function	scsi_function;	/* Initiator/Target */
+	struct bfa_fcs_rpf_s rpf;	/* Rport features module */
+};
+
+static inline struct bfa_rport_s *
+bfa_fcs_rport_get_halrport(struct bfa_fcs_rport_s *rport)
+{
+	return rport->bfa_rport;
+}
+
+/**
+ * bfa fcs rport API functions
+ */
+bfa_status_t bfa_fcs_rport_add(struct bfa_fcs_port_s *port, wwn_t *pwwn,
+			struct bfa_fcs_rport_s *rport,
+			struct bfad_rport_s *rport_drv);
+bfa_status_t bfa_fcs_rport_remove(struct bfa_fcs_rport_s *rport);
+void bfa_fcs_rport_get_attr(struct bfa_fcs_rport_s *rport,
+			struct bfa_rport_attr_s *attr);
+void bfa_fcs_rport_get_stats(struct bfa_fcs_rport_s *rport,
+			struct bfa_rport_stats_s *stats);
+void bfa_fcs_rport_clear_stats(struct bfa_fcs_rport_s *rport);
+struct bfa_fcs_rport_s *bfa_fcs_rport_lookup(struct bfa_fcs_port_s *port,
+			wwn_t rpwwn);
+struct bfa_fcs_rport_s *bfa_fcs_rport_lookup_by_nwwn(
+			struct bfa_fcs_port_s *port, wwn_t rnwwn);
+void bfa_fcs_rport_set_del_timeout(u8 rport_tmo);
+void bfa_fcs_rport_set_speed(struct bfa_fcs_rport_s *rport,
+			enum bfa_pport_speed speed);
+#endif /* __BFA_FCS_RPORT_H__ */
diff --git a/drivers/scsi/bfa/include/fcs/bfa_fcs_vport.h b/drivers/scsi/bfa/include/fcs/bfa_fcs_vport.h
new file mode 100644
index 000000000000..cd33f2cd5c34
--- /dev/null
+++ b/drivers/scsi/bfa/include/fcs/bfa_fcs_vport.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcs_vport.h BFA fcs vport module public interface + */ + +#ifndef __BFA_FCS_VPORT_H__ +#define __BFA_FCS_VPORT_H__ + +#include +#include +#include +#include +#include + +struct bfa_fcs_vport_s { + struct list_head qe; /* queue elem */ + bfa_sm_t sm; /* state machine */ + bfa_fcs_lport_t lport; /* logical port */ + struct bfa_timer_s timer; /* general purpose timer */ + struct bfad_vport_s *vport_drv; /* Driver private */ + struct bfa_vport_stats_s vport_stats; /* vport statistics */ + struct bfa_lps_s *lps; /* Lport login service */ + int fdisc_retries; +}; + +#define bfa_fcs_vport_get_port(vport) \ + ((struct bfa_fcs_port_s *)(&vport->port)) + +/** + * bfa fcs vport public functions + */ +bfa_status_t bfa_fcs_vport_create(struct bfa_fcs_vport_s *vport, + struct bfa_fcs_s *fcs, u16 vf_id, + struct bfa_port_cfg_s *port_cfg, + struct bfad_vport_s *vport_drv); +bfa_status_t bfa_fcs_vport_delete(struct bfa_fcs_vport_s *vport); +bfa_status_t bfa_fcs_vport_start(struct bfa_fcs_vport_s *vport); +bfa_status_t bfa_fcs_vport_stop(struct bfa_fcs_vport_s *vport); +void bfa_fcs_vport_get_attr(struct bfa_fcs_vport_s *vport, + struct bfa_vport_attr_s *vport_attr); +void bfa_fcs_vport_get_stats(struct bfa_fcs_vport_s *vport, + struct bfa_vport_stats_s *vport_stats); +void bfa_fcs_vport_clr_stats(struct bfa_fcs_vport_s *vport); +struct bfa_fcs_vport_s *bfa_fcs_vport_lookup(struct bfa_fcs_s *fcs, + u16 vf_id, wwn_t vpwwn); + +#endif /* __BFA_FCS_VPORT_H__ */ diff --git a/drivers/scsi/bfa/include/log/bfa_log_fcs.h b/drivers/scsi/bfa/include/log/bfa_log_fcs.h new file mode 100644 index 000000000000..b6f5df8827f8 --- /dev/null +++ b/drivers/scsi/bfa/include/log/bfa_log_fcs.h @@ -0,0 +1,28 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* + * messages define for FCS Module + */ +#ifndef __BFA_LOG_FCS_H__ +#define __BFA_LOG_FCS_H__ +#include +#define BFA_LOG_FCS_FABRIC_NOSWITCH \ + (((u32) BFA_LOG_FCS_ID << BFA_LOG_MODID_OFFSET) | 1) +#define BFA_LOG_FCS_FABRIC_ISOLATED \ + (((u32) BFA_LOG_FCS_ID << BFA_LOG_MODID_OFFSET) | 2) +#endif diff --git a/drivers/scsi/bfa/include/log/bfa_log_hal.h b/drivers/scsi/bfa/include/log/bfa_log_hal.h new file mode 100644 index 000000000000..0412aea2ec30 --- /dev/null +++ b/drivers/scsi/bfa/include/log/bfa_log_hal.h @@ -0,0 +1,30 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. 
+ * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* messages define for HAL Module */ +#ifndef __BFA_LOG_HAL_H__ +#define __BFA_LOG_HAL_H__ +#include +#define BFA_LOG_HAL_ASSERT \ + (((u32) BFA_LOG_HAL_ID << BFA_LOG_MODID_OFFSET) | 1) +#define BFA_LOG_HAL_HEARTBEAT_FAILURE \ + (((u32) BFA_LOG_HAL_ID << BFA_LOG_MODID_OFFSET) | 2) +#define BFA_LOG_HAL_FCPIM_PARM_INVALID \ + (((u32) BFA_LOG_HAL_ID << BFA_LOG_MODID_OFFSET) | 3) +#define BFA_LOG_HAL_SM_ASSERT \ + (((u32) BFA_LOG_HAL_ID << BFA_LOG_MODID_OFFSET) | 4) +#endif diff --git a/drivers/scsi/bfa/include/log/bfa_log_linux.h b/drivers/scsi/bfa/include/log/bfa_log_linux.h new file mode 100644 index 000000000000..317c0547ee16 --- /dev/null +++ b/drivers/scsi/bfa/include/log/bfa_log_linux.h @@ -0,0 +1,44 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* messages define for LINUX Module */ +#ifndef __BFA_LOG_LINUX_H__ +#define __BFA_LOG_LINUX_H__ +#include +#define BFA_LOG_LINUX_DEVICE_CLAIMED \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 1) +#define BFA_LOG_LINUX_HASH_INIT_FAILED \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 2) +#define BFA_LOG_LINUX_SYSFS_FAILED \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 3) +#define BFA_LOG_LINUX_MEM_ALLOC_FAILED \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 4) +#define BFA_LOG_LINUX_DRIVER_REGISTRATION_FAILED \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 5) +#define BFA_LOG_LINUX_ITNIM_FREE \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 6) +#define BFA_LOG_LINUX_ITNIM_ONLINE \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 7) +#define BFA_LOG_LINUX_ITNIM_OFFLINE \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 8) +#define BFA_LOG_LINUX_SCSI_HOST_FREE \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 9) +#define BFA_LOG_LINUX_SCSI_ABORT \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 10) +#define BFA_LOG_LINUX_SCSI_ABORT_COMP \ + (((u32) BFA_LOG_LINUX_ID << BFA_LOG_MODID_OFFSET) | 11) +#endif diff --git a/drivers/scsi/bfa/include/log/bfa_log_wdrv.h b/drivers/scsi/bfa/include/log/bfa_log_wdrv.h new file mode 100644 index 000000000000..809a95f7afe2 --- /dev/null +++ b/drivers/scsi/bfa/include/log/bfa_log_wdrv.h @@ -0,0 +1,36 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
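/*
 * Illustrative sketch: the BFA_LOG_* message ids above all follow the
 * (module_id << BFA_LOG_MODID_OFFSET) | msg_index pattern, so they can be
 * decomposed the same way. BFA_LOG_MODID_OFFSET is assumed to be a plain
 * bit position, which the macro definitions imply.
 */
static u32
bfa_log_msg_module(u32 msg_id)
{
	return msg_id >> BFA_LOG_MODID_OFFSET;
}

static u32
bfa_log_msg_index(u32 msg_id)
{
	return msg_id & ((1u << BFA_LOG_MODID_OFFSET) - 1);
}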
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/* + * messages define for WDRV Module + */ +#ifndef __BFA_LOG_WDRV_H__ +#define __BFA_LOG_WDRV_H__ +#include +#define BFA_LOG_WDRV_IOC_INIT_ERROR \ + (((u32) BFA_LOG_WDRV_ID << BFA_LOG_MODID_OFFSET) | 1) +#define BFA_LOG_WDRV_IOC_INTERNAL_ERROR \ + (((u32) BFA_LOG_WDRV_ID << BFA_LOG_MODID_OFFSET) | 2) +#define BFA_LOG_WDRV_IOC_START_ERROR \ + (((u32) BFA_LOG_WDRV_ID << BFA_LOG_MODID_OFFSET) | 3) +#define BFA_LOG_WDRV_IOC_STOP_ERROR \ + (((u32) BFA_LOG_WDRV_ID << BFA_LOG_MODID_OFFSET) | 4) +#define BFA_LOG_WDRV_INSUFFICIENT_RESOURCES \ + (((u32) BFA_LOG_WDRV_ID << BFA_LOG_MODID_OFFSET) | 5) +#define BFA_LOG_WDRV_BASE_ADDRESS_MAP_ERROR \ + (((u32) BFA_LOG_WDRV_ID << BFA_LOG_MODID_OFFSET) | 6) +#endif diff --git a/drivers/scsi/bfa/include/protocol/ct.h b/drivers/scsi/bfa/include/protocol/ct.h new file mode 100644 index 000000000000..c59d6630b070 --- /dev/null +++ b/drivers/scsi/bfa/include/protocol/ct.h @@ -0,0 +1,492 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */
+
+#ifndef __CT_H__
+#define __CT_H__
+
+#include
+
+#pragma pack(1)
+
+struct ct_hdr_s{
+	u32	rev_id:8;	/* Revision of the CT */
+	u32	in_id:24;	/* Initiator Id */
+	u32	gs_type:8;	/* Generic service Type */
+	u32	gs_sub_type:8;	/* Generic service sub type */
+	u32	options:8;	/* options */
+	u32	rsvrd:8;	/* reserved */
+	u32	cmd_rsp_code:16;/* ct command/response code */
+	u32	max_res_size:16;/* maximum/residual size */
+	u32	frag_id:8;	/* fragment ID */
+	u32	reason_code:8;	/* reason code */
+	u32	exp_code:8;	/* explanation code */
+	u32	vendor_unq:8;	/* vendor unique */
+};
+
+/*
+ * defines for the Revision
+ */
+enum {
+	CT_GS3_REVISION = 0x01,
+};
+
+/*
+ * defines for gs_type
+ */
+enum {
+	CT_GSTYPE_KEYSERVICE	= 0xF7,
+	CT_GSTYPE_ALIASSERVICE	= 0xF8,
+	CT_GSTYPE_MGMTSERVICE	= 0xFA,
+	CT_GSTYPE_TIMESERVICE	= 0xFB,
+	CT_GSTYPE_DIRSERVICE	= 0xFC,
+};
+
+/*
+ * defines for gs_sub_type for gs type directory service
+ */
+enum {
+	CT_GSSUBTYPE_NAMESERVER = 0x02,
+};
+
+/*
+ * defines for gs_sub_type for gs type management service
+ */
+enum {
+	CT_GSSUBTYPE_CFGSERVER	= 0x01,
+	CT_GSSUBTYPE_UNZONED_NS	= 0x02,
+	CT_GSSUBTYPE_ZONESERVER	= 0x03,
+	CT_GSSUBTYPE_LOCKSERVER	= 0x04,
+	CT_GSSUBTYPE_HBA_MGMTSERVER = 0x10,	/* for FDMI */
+};
+
+/*
+ * defines for CT response code field
+ */
+enum {
+	CT_RSP_REJECT = 0x8001,
+	CT_RSP_ACCEPT = 0x8002,
+};
+
+/*
+ * definitions for CT reason code
+ */
+enum {
+	CT_RSN_INV_CMD		= 0x01,
+	CT_RSN_INV_VER		= 0x02,
+	CT_RSN_LOGIC_ERR	= 0x03,
+	CT_RSN_INV_SIZE		= 0x04,
+	CT_RSN_LOGICAL_BUSY	= 0x05,
+	CT_RSN_PROTO_ERR	= 0x07,
+	CT_RSN_UNABLE_TO_PERF	= 0x09,
+	CT_RSN_NOT_SUPP		= 0x0B,
+	CT_RSN_SERVER_NOT_AVBL	= 0x0D,
+	CT_RSN_SESSION_COULD_NOT_BE_ESTBD = 0x0E,
+	CT_RSN_VENDOR_SPECIFIC	= 0xFF,
+
+};
+
+/*
+ * definitions for explanation codes for the Name server
+ */
+enum {
+	CT_NS_EXP_NOADDITIONAL	= 0x00,
+	CT_NS_EXP_ID_NOT_REG	= 0x01,
+	CT_NS_EXP_PN_NOT_REG	= 0x02,
+	CT_NS_EXP_NN_NOT_REG	= 0x03,
+	CT_NS_EXP_CS_NOT_REG	= 0x04,
+	CT_NS_EXP_IPN_NOT_REG	= 0x05,
+	CT_NS_EXP_IPA_NOT_REG	= 0x06,
+	CT_NS_EXP_FT_NOT_REG	= 0x07,
+	CT_NS_EXP_SPN_NOT_REG	= 0x08,
+	CT_NS_EXP_SNN_NOT_REG	= 0x09,
+	CT_NS_EXP_PT_NOT_REG	= 0x0A,
+	CT_NS_EXP_IPP_NOT_REG	= 0x0B,
+	CT_NS_EXP_FPN_NOT_REG	= 0x0C,
+	CT_NS_EXP_HA_NOT_REG	= 0x0D,
+	CT_NS_EXP_FD_NOT_REG	= 0x0E,
+	CT_NS_EXP_FF_NOT_REG	= 0x0F,
+	CT_NS_EXP_ACCESSDENIED	= 0x10,
+	CT_NS_EXP_UNACCEPTABLE_ID = 0x11,
+	CT_NS_EXP_DATABASEEMPTY	= 0x12,
+	CT_NS_EXP_NOT_REG_IN_SCOPE = 0x13,
+	CT_NS_EXP_DOM_ID_NOT_PRESENT = 0x14,
+	CT_NS_EXP_PORT_NUM_NOT_PRESENT = 0x15,
+	CT_NS_EXP_NO_DEVICE_ATTACHED = 0x16
+};
+
+/*
+ * definitions for the explanation code for all servers
+ */
+enum {
+	CT_EXP_AUTH_EXCEPTION		= 0xF1,
+	CT_EXP_DB_FULL			= 0xF2,
+	CT_EXP_DB_EMPTY			= 0xF3,
+	CT_EXP_PROCESSING_REQ		= 0xF4,
+	CT_EXP_UNABLE_TO_VERIFY_CONN	= 0xF5,
+	CT_EXP_DEVICES_NOT_IN_CMN_ZONE	= 0xF6
+};
+
+/*
+ * Command codes for Name server
+ */
+enum {
+	GS_GID_PN	= 0x0121,	/* Get Id on port name */
+	GS_GPN_ID	= 0x0112,	/* Get port name on ID */
+	GS_GNN_ID	= 0x0113,	/* Get node name on ID */
+	GS_GID_FT	= 0x0171,	/* Get Id on FC4 type */
+	GS_GSPN_ID	= 0x0118,	/* Get symbolic PN on ID */
+	GS_RFT_ID	= 0x0217,	/* Register fc4type on ID */
+	GS_RSPN_ID	= 0x0218,	/* Register symbolic PN on ID */
+	GS_RPN_ID	= 0x0212,	/* Register port name */
+	GS_RNN_ID	= 0x0213,	/* Register node name */
+	GS_RCS_ID	= 0x0214,	/* Register class of service */
+	GS_RPT_ID	= 0x021A,	/* Register port type */
+	GS_GA_NXT	= 0x0100,	/* Get all next */
+	GS_RFF_ID	= 0x021F,	/* Register FC4 Feature
*/ +}; + +struct fcgs_id_req_s{ + u32 rsvd:8; + u32 dap:24; /* port identifier */ +}; +#define fcgs_gpnid_req_t struct fcgs_id_req_s +#define fcgs_gnnid_req_t struct fcgs_id_req_s +#define fcgs_gspnid_req_t struct fcgs_id_req_s + +struct fcgs_gidpn_req_s{ + wwn_t port_name; /* port wwn */ +}; + +struct fcgs_gidpn_resp_s{ + u32 rsvd:8; + u32 dap:24; /* port identifier */ +}; + +/** + * RFT_ID + */ +struct fcgs_rftid_req_s { + u32 rsvd:8; + u32 dap:24; /* port identifier */ + u32 fc4_type[8]; /* fc4 types */ +}; + +/** + * RFF_ID : Register FC4 features. + */ + +#define FC_GS_FCP_FC4_FEATURE_INITIATOR 0x02 +#define FC_GS_FCP_FC4_FEATURE_TARGET 0x01 + +struct fcgs_rffid_req_s{ + u32 rsvd :8; + u32 dap :24; /* port identifier */ + u32 rsvd1 :16; + u32 fc4ftr_bits :8; /* fc4 feature bits */ + u32 fc4_type :8; /* corresponding FC4 Type */ +}; + +/** + * GID_FT Request + */ +struct fcgs_gidft_req_s{ + u8 reserved; + u8 domain_id; /* domain, 0 - all fabric */ + u8 area_id; /* area, 0 - whole domain */ + u8 fc4_type; /* FC_TYPE_FCP for SCSI devices */ +}; /* GID_FT Request */ + +/** + * GID_FT Response + */ +struct fcgs_gidft_resp_s { + u8 last:1; /* last port identifier flag */ + u8 reserved:7; + u32 pid:24; /* port identifier */ +}; /* GID_FT Response */ + +/** + * RSPN_ID + */ +struct fcgs_rspnid_req_s{ + u32 rsvd:8; + u32 dap:24; /* port identifier */ + u8 spn_len; /* symbolic port name length */ + u8 spn[256]; /* symbolic port name */ +}; + +/** + * RPN_ID + */ +struct fcgs_rpnid_req_s{ + u32 rsvd:8; + u32 port_id:24; + wwn_t port_name; +}; + +/** + * RNN_ID + */ +struct fcgs_rnnid_req_s{ + u32 rsvd:8; + u32 port_id:24; + wwn_t node_name; +}; + +/** + * RCS_ID + */ +struct fcgs_rcsid_req_s{ + u32 rsvd:8; + u32 port_id:24; + u32 cos; +}; + +/** + * RPT_ID + */ +struct fcgs_rptid_req_s{ + u32 rsvd:8; + u32 port_id:24; + u32 port_type:8; + u32 rsvd1:24; +}; + +/** + * GA_NXT Request + */ +struct fcgs_ganxt_req_s{ + u32 rsvd:8; + u32 port_id:24; +}; + +/** + * GA_NXT Response + */ +struct fcgs_ganxt_rsp_s{ + u32 port_type:8; /* Port Type */ + u32 port_id:24; /* Port Identifier */ + wwn_t port_name; /* Port Name */ + u8 spn_len; /* Length of Symbolic Port Name */ + char spn[255]; /* Symbolic Port Name */ + wwn_t node_name; /* Node Name */ + u8 snn_len; /* Length of Symbolic Node Name */ + char snn[255]; /* Symbolic Node Name */ + u8 ipa[8]; /* Initial Process Associator */ + u8 ip[16]; /* IP Address */ + u32 cos; /* Class of Service */ + u32 fc4types[8]; /* FC-4 TYPEs */ + wwn_t fabric_port_name; + /* Fabric Port Name */ + u32 rsvd:8; /* Reserved */ + u32 hard_addr:24; /* Hard Address */ +}; + +/* + * Fabric Config Server + */ + +/* + * Command codes for Fabric Configuration Server + */ +enum { + GS_FC_GFN_CMD = 0x0114, /* GS FC Get Fabric Name */ + GS_FC_GMAL_CMD = 0x0116, /* GS FC GMAL */ + GS_FC_TRACE_CMD = 0x0400, /* GS FC Trace Route */ + GS_FC_PING_CMD = 0x0401, /* GS FC Ping */ +}; + +/* + * Source or Destination Port Tags. + */ +enum { + GS_FTRACE_TAG_NPORT_ID = 1, + GS_FTRACE_TAG_NPORT_NAME = 2, +}; + +/* +* Port Value : Could be a Port id or wwn + */ +union fcgs_port_val_u{ + u32 nport_id; + wwn_t nport_wwn; +}; + +#define GS_FTRACE_MAX_HOP_COUNT 20 +#define GS_FTRACE_REVISION 1 + +/* + * Ftrace Related Structures. + */ + +/* + * STR (Switch Trace) Reject Reason Codes. From FC-SW. 
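/*
 * Illustrative sketch: filling a GID_FT name-server query from the CT
 * definitions above. FC_TYPE_FCP comes from protocol/fc.h later in this
 * patch, and on-wire byte ordering is deliberately ignored here.
 */
static void
example_build_gidft(struct ct_hdr_s *cthdr, struct fcgs_gidft_req_s *req)
{
	cthdr->rev_id = CT_GS3_REVISION;
	cthdr->gs_type = CT_GSTYPE_DIRSERVICE;
	cthdr->gs_sub_type = CT_GSSUBTYPE_NAMESERVER;
	cthdr->cmd_rsp_code = GS_GID_FT;

	req->domain_id = 0;		/* 0 - all fabric */
	req->area_id = 0;		/* 0 - whole domain */
	req->fc4_type = FC_TYPE_FCP;	/* SCSI devices */
}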
+ */
+enum {
+	GS_FTRACE_STR_CMD_COMPLETED_SUCC = 0,
+	GS_FTRACE_STR_CMD_NOT_SUPP_IN_NEXT_SWITCH,
+	GS_FTRACE_STR_NO_RESP_FROM_NEXT_SWITCH,
+	GS_FTRACE_STR_MAX_HOP_CNT_REACHED,
+	GS_FTRACE_STR_SRC_PORT_NOT_FOUND,
+	GS_FTRACE_STR_DST_PORT_NOT_FOUND,
+	GS_FTRACE_STR_DEVICES_NOT_IN_COMMON_ZONE,
+	GS_FTRACE_STR_NO_ROUTE_BW_PORTS,
+	GS_FTRACE_STR_NO_ADDL_EXPLN,
+	GS_FTRACE_STR_FABRIC_BUSY,
+	GS_FTRACE_STR_FABRIC_BUILD_IN_PROGRESS,
+	GS_FTRACE_STR_VENDOR_SPECIFIC_ERR_START = 0xf0,
+	GS_FTRACE_STR_VENDOR_SPECIFIC_ERR_END = 0xff,
+};
+
+/*
+ * Ftrace Request
+ */
+struct fcgs_ftrace_req_s{
+	u32	revision;
+	u16	src_port_tag;	/* Source Port tag */
+	u16	src_port_len;	/* Source Port len */
+	union fcgs_port_val_u src_port_val;	/* Source Port value */
+	u16	dst_port_tag;	/* Destination Port tag */
+	u16	dst_port_len;	/* Destination Port len */
+	union fcgs_port_val_u dst_port_val;	/* Destination Port value */
+	u32	token;
+	u8	vendor_id[8];	/* T10 Vendor Identifier */
+	u8	vendor_info[8];	/* Vendor specific Info */
+	u32	max_hop_cnt;	/* Max Hop Count */
+};
+
+/*
+ * Path info structure
+ */
+struct fcgs_ftrace_path_info_s{
+	wwn_t	switch_name;	/* Switch WWN */
+	u32	domain_id;
+	wwn_t	ingress_port_name;	/* Ingress port's wwn */
+	u32	ingress_phys_port_num;	/* Ingress port's physical port
+					 * number
+					 */
+	wwn_t	egress_port_name;	/* Egress port's wwn */
+	u32	egress_phys_port_num;	/* Egress port's physical port
+					 * number
+					 */
+};
+
+/*
+ * Ftrace Acc Response
+ */
+struct fcgs_ftrace_resp_s{
+	u32	revision;
+	u32	token;
+	u8	vendor_id[8];	/* T10 Vendor Identifier */
+	u8	vendor_info[8];	/* Vendor specific Info */
+	u32	str_rej_reason_code;	/* STR Reject Reason Code */
+	u32	num_path_info_entries;	/* No. of path info entries */
+	/*
+	 * path info entry/entries.
+	 */
+	struct fcgs_ftrace_path_info_s path_info[1];
+
+};
+
+/*
+ * Fabric Config Server : FCPing
+ */
+
+/*
+ * FC Ping Request
+ */
+struct fcgs_fcping_req_s{
+	u32	revision;
+	u16	port_tag;
+	u16	port_len;	/* Port len */
+	union fcgs_port_val_u port_val;	/* Port value */
+	u32	token;
+};
+
+/*
+ * FC Ping Response
+ */
+struct fcgs_fcping_resp_s{
+	u32	token;
+};
+
+/*
+ * Command codes for zone server query.
+ */
+enum {
+	ZS_GZME = 0x0124,	/* Get zone member extended */
+};
+
+/*
+ * ZS GZME request
+ */
+#define ZS_GZME_ZNAMELEN	32
+struct zs_gzme_req_s{
+	u8	znamelen;
+	u8	rsvd[3];
+	u8	zname[ZS_GZME_ZNAMELEN];
+};
+
+enum zs_mbr_type{
+	ZS_MBR_TYPE_PWWN	= 1,
+	ZS_MBR_TYPE_DOMPORT	= 2,
+	ZS_MBR_TYPE_PORTID	= 3,
+	ZS_MBR_TYPE_NWWN	= 4,
+};
+
+struct zs_mbr_wwn_s{
+	u8	mbr_type;
+	u8	rsvd[3];
+	wwn_t	wwn;
+};
+
+struct zs_query_resp_s{
+	u32	nmbrs;	/* number of zone members */
+	struct zs_mbr_wwn_s mbr[1];
+};
+
+/*
+ * GMAL Command (Get (Interconnect Element) Management Address List)
+ * to retrieve the IP Address of a Switch.
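/*
 * Illustrative sketch: zs_query_resp_s above uses the classic trailing
 * one-element-array idiom, so members are walked by index up to nmbrs.
 * The helper is hypothetical.
 */
static int
example_count_pwwn_members(struct zs_query_resp_s *rsp)
{
	u32	i;
	int	n = 0;

	for (i = 0; i < rsp->nmbrs; i++)
		if (rsp->mbr[i].mbr_type == ZS_MBR_TYPE_PWWN)
			n++;
	return n;
}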
+ */
+
+#define CT_GMAL_RESP_PREFIX_TELNET	"telnet://"
+#define CT_GMAL_RESP_PREFIX_HTTP	"http://"
+
+/* GMAL/GFN request */
+struct fcgs_req_s {
+	wwn_t	wwn;	/* PWWN/NWWN */
+};
+
+#define fcgs_gmal_req_t struct fcgs_req_s
+#define fcgs_gfn_req_t struct fcgs_req_s
+
+/* Accept Response to GMAL */
+struct fcgs_gmal_resp_s {
+	u32	ms_len;	/* Num of entries */
+	u8	ms_ma[256];
+};
+
+struct fc_gmal_entry_s {
+	u8	len;
+	u8	prefix[7];	/* like "http://" */
+	u8	ip_addr[248];
+};
+
+#pragma pack()
+
+#endif
diff --git a/drivers/scsi/bfa/include/protocol/fc.h b/drivers/scsi/bfa/include/protocol/fc.h
new file mode 100644
index 000000000000..3e39ba58cfb5
--- /dev/null
+++ b/drivers/scsi/bfa/include/protocol/fc.h
@@ -0,0 +1,1105 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __FC_H__
+#define __FC_H__
+
+#include
+
+#pragma pack(1)
+
+/*
+ * Fibre Channel Header Structure (FCHS) definition
+ */
+struct fchs_s {
+#ifdef __BIGENDIAN
+	u32	routing:4;	/* routing bits */
+	u32	cat_info:4;	/* category info */
+#else
+	u32	cat_info:4;	/* category info */
+	u32	routing:4;	/* routing bits */
+#endif
+	u32	d_id:24;	/* destination identifier */
+
+	u32	cs_ctl:8;	/* class specific control */
+	u32	s_id:24;	/* source identifier */
+
+	u32	type:8;		/* data structure type */
+	u32	f_ctl:24;	/* initial frame control */
+
+	u8	seq_id;		/* sequence identifier */
+	u8	df_ctl;		/* data field control */
+	u16	seq_cnt;	/* sequence count */
+
+	u16	ox_id;		/* originator exchange ID */
+	u16	rx_id;		/* responder exchange ID */
+
+	u32	ro;		/* relative offset */
+};
+/*
+ * Fibre Channel BB_E Header Structure
+ */
+struct fcbbehs_s {
+	u16	ver_rsvd;
+	u32	rsvd[2];
+	u32	rsvd__sof;
+};
+
+#define FC_SEQ_ID_MAX		256
+
+/*
+ * routing bit definitions
+ */
+enum {
+	FC_RTG_FC4_DEV_DATA	= 0x0,	/* FC-4 Device Data */
+	FC_RTG_EXT_LINK		= 0x2,	/* Extended Link Data */
+	FC_RTG_FC4_LINK_DATA	= 0x3,	/* FC-4 Link Data */
+	FC_RTG_VIDEO_DATA	= 0x4,	/* Video Data */
+	FC_RTG_EXT_HDR		= 0x5,	/* VFT, IFR or Encapsulated */
+	FC_RTG_BASIC_LINK	= 0x8,	/* Basic Link data */
+	FC_RTG_LINK_CTRL	= 0xC,	/* Link Control */
+};
+
+/*
+ * information category for extended link data and FC-4 Link Data
+ */
+enum {
+	FC_CAT_LD_REQUEST	= 0x2,	/* Request */
+	FC_CAT_LD_REPLY		= 0x3,	/* Reply */
+	FC_CAT_LD_DIAG		= 0xF,	/* for DIAG use only */
+};
+
+/*
+ * information category for extended headers (VFT, IFR or encapsulation)
+ */
+enum {
+	FC_CAT_VFT_HDR = 0x0,	/* Virtual fabric tagging header */
+	FC_CAT_IFR_HDR = 0x1,	/* Inter-Fabric routing header */
+	FC_CAT_ENC_HDR = 0x2,	/* Encapsulation header */
+};
+
+/*
+ * information category for FC-4 device data
+ */
+enum {
+	FC_CAT_UNCATEG_INFO	= 0x0,	/* Uncategorized information */
+	FC_CAT_SOLICIT_DATA	= 0x1,	/* Solicited Data */
+	FC_CAT_UNSOLICIT_CTRL	= 0x2,	/* Unsolicited Control */
+	FC_CAT_SOLICIT_CTRL	= 0x3,	/* Solicited Control */
+	FC_CAT_UNSOLICIT_DATA	= 0x4,	/* Unsolicited Data */
+	FC_CAT_DATA_DESC	= 0x5,	/*
Data Descriptor */ + FC_CAT_UNSOLICIT_CMD = 0x6, /* Unsolicited Command */ + FC_CAT_CMD_STATUS = 0x7, /* Command Status */ +}; + +/* + * information category for Link Control + */ +enum { + FC_CAT_ACK_1 = 0x00, + FC_CAT_ACK_0_N = 0x01, + FC_CAT_P_RJT = 0x02, + FC_CAT_F_RJT = 0x03, + FC_CAT_P_BSY = 0x04, + FC_CAT_F_BSY_DATA = 0x05, + FC_CAT_F_BSY_LINK_CTL = 0x06, + FC_CAT_F_LCR = 0x07, + FC_CAT_NTY = 0x08, + FC_CAT_END = 0x09, +}; + +/* + * Type Field Definitions. FC-PH Section 18.5 pg. 165 + */ +enum { + FC_TYPE_BLS = 0x0, /* Basic Link Service */ + FC_TYPE_ELS = 0x1, /* Extended Link Service */ + FC_TYPE_IP = 0x5, /* IP */ + FC_TYPE_FCP = 0x8, /* SCSI-FCP */ + FC_TYPE_GPP = 0x9, /* SCSI_GPP */ + FC_TYPE_SERVICES = 0x20, /* Fibre Channel Services */ + FC_TYPE_FC_FSS = 0x22, /* Fabric Switch Services */ + FC_TYPE_FC_AL = 0x23, /* FC-AL */ + FC_TYPE_FC_SNMP = 0x24, /* FC-SNMP */ + FC_TYPE_MAX = 256, /* 256 FC-4 types */ +}; + +struct fc_fc4types_s{ + u8 bits[FC_TYPE_MAX / 8]; +}; + +/* + * Frame Control Definitions. FC-PH Table-45. pg. 168 + */ +enum { + FCTL_EC_ORIG = 0x000000, /* exchange originator */ + FCTL_EC_RESP = 0x800000, /* exchange responder */ + FCTL_SEQ_INI = 0x000000, /* sequence initiator */ + FCTL_SEQ_REC = 0x400000, /* sequence recipient */ + FCTL_FS_EXCH = 0x200000, /* first sequence of xchg */ + FCTL_LS_EXCH = 0x100000, /* last sequence of xchg */ + FCTL_END_SEQ = 0x080000, /* last frame of sequence */ + FCTL_SI_XFER = 0x010000, /* seq initiative transfer */ + FCTL_RO_PRESENT = 0x000008, /* relative offset present */ + FCTL_FILLBYTE_MASK = 0x000003 /* fill byte mask */ +}; + +/* + * Fabric Well Known Addresses + */ +enum { + FC_MIN_WELL_KNOWN_ADDR = 0xFFFFF0, + FC_DOMAIN_CONTROLLER_MASK = 0xFFFC00, + FC_ALIAS_SERVER = 0xFFFFF8, + FC_MGMT_SERVER = 0xFFFFFA, + FC_TIME_SERVER = 0xFFFFFB, + FC_NAME_SERVER = 0xFFFFFC, + FC_FABRIC_CONTROLLER = 0xFFFFFD, + FC_FABRIC_PORT = 0xFFFFFE, + FC_BROADCAST_SERVER = 0xFFFFFF +}; + +/* + * domain/area/port defines + */ +#define FC_DOMAIN_MASK 0xFF0000 +#define FC_DOMAIN_SHIFT 16 +#define FC_AREA_MASK 0x00FF00 +#define FC_AREA_SHIFT 8 +#define FC_PORT_MASK 0x0000FF +#define FC_PORT_SHIFT 0 + +#define FC_GET_DOMAIN(p) (((p) & FC_DOMAIN_MASK) >> FC_DOMAIN_SHIFT) +#define FC_GET_AREA(p) (((p) & FC_AREA_MASK) >> FC_AREA_SHIFT) +#define FC_GET_PORT(p) (((p) & FC_PORT_MASK) >> FC_PORT_SHIFT) + +#define FC_DOMAIN_CTRLR(p) (FC_DOMAIN_CONTROLLER_MASK | (FC_GET_DOMAIN(p))) + +enum { + FC_RXID_ANY = 0xFFFFU, +};
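The domain/area/port macros above carve the 24-bit FC address (domain.area.port, one byte each) out of a u32. A minimal standalone sketch of their use, with the macros duplicated and uint32_t standing in for the kernel's u32 so it compiles outside the driver:

	#include <stdint.h>
	#include <stdio.h>

	typedef uint32_t u32;	/* stands in for the kernel type */

	#define FC_DOMAIN_MASK   0xFF0000
	#define FC_DOMAIN_SHIFT  16
	#define FC_AREA_MASK     0x00FF00
	#define FC_AREA_SHIFT    8
	#define FC_PORT_MASK     0x0000FF
	#define FC_PORT_SHIFT    0

	#define FC_GET_DOMAIN(p) (((p) & FC_DOMAIN_MASK) >> FC_DOMAIN_SHIFT)
	#define FC_GET_AREA(p)   (((p) & FC_AREA_MASK) >> FC_AREA_SHIFT)
	#define FC_GET_PORT(p)   (((p) & FC_PORT_MASK) >> FC_PORT_SHIFT)

	int main(void)
	{
		u32 pid = 0x010A00;	/* hypothetical N_Port ID */

		/* the 24-bit address splits as domain.area.port */
		printf("domain %u, area %u, port %u\n",
		       (unsigned)FC_GET_DOMAIN(pid),
		       (unsigned)FC_GET_AREA(pid),
		       (unsigned)FC_GET_PORT(pid));
		return 0;
	}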
+/* + * generic ELS command + */ +struct fc_els_cmd_s{ + u32 els_code:8; /* ELS Command Code */ + u32 reserved:24; +}; + +/* + * ELS Command Codes. FC-PH Table-75. pg. 223 + */ +enum { + FC_ELS_LS_RJT = 0x1, /* Link Service Reject. */ + FC_ELS_ACC = 0x02, /* Accept */ + FC_ELS_PLOGI = 0x03, /* N_Port Login. */ + FC_ELS_FLOGI = 0x04, /* F_Port Login. */ + FC_ELS_LOGO = 0x05, /* Logout. */ + FC_ELS_ABTX = 0x06, /* Abort Exchange */ + FC_ELS_RES = 0x08, /* Read Exchange status */ + FC_ELS_RSS = 0x09, /* Read sequence status block */ + FC_ELS_RSI = 0x0A, /* Request Sequence Initiative */ + FC_ELS_ESTC = 0x0C, /* Estimate Credit. */ + FC_ELS_RTV = 0x0E, /* Read Timeout Value. */ + FC_ELS_RLS = 0x0F, /* Read Link Status. */ + FC_ELS_ECHO = 0x10, /* Echo */ + FC_ELS_TEST = 0x11, /* Test */ + FC_ELS_RRQ = 0x12, /* Reinstate Recovery Qualifier. */ + FC_ELS_REC = 0x13, /* Add this for TAPE support in FCR */ + FC_ELS_PRLI = 0x20, /* Process Login */ + FC_ELS_PRLO = 0x21, /* Process Logout. */ + FC_ELS_SCN = 0x22, /* State Change Notification. */ + FC_ELS_TPRLO = 0x24, /* Third Party Process Logout. */ + FC_ELS_PDISC = 0x50, /* Discover N_Port Parameters. */ + FC_ELS_FDISC = 0x51, /* Discover F_Port Parameters. */ + FC_ELS_ADISC = 0x52, /* Discover Address. */ + FC_ELS_FAN = 0x60, /* Fabric Address Notification */ + FC_ELS_RSCN = 0x61, /* Reg State Change Notification */ + FC_ELS_SCR = 0x62, /* State Change Registration. */ + FC_ELS_RTIN = 0x77, /* Management server request */ + FC_ELS_RNID = 0x78, /* Management server request */ + FC_ELS_RLIR = 0x79, /* Registered Link Incident Record */ + + FC_ELS_RPSC = 0x7D, /* Report Port Speed Capabilities */ + FC_ELS_QSA = 0x7E, /* Query Security Attributes. Ref FC-SP */ + FC_ELS_E2E_LBEACON = 0x81, + /* End-to-End Link Beacon */ + FC_ELS_AUTH = 0x90, /* Authentication. Ref FC-SP */ + FC_ELS_RFCN = 0x97, /* Request Fabric Change Notification. Ref + * FC-SP */ + +}; + +/* + * Version numbers for FC-PH standards, + * used in login to indicate what port + * supports. See FC-PH-X table 158. + */ +enum { + FC_PH_VER_4_3 = 0x09, + FC_PH_VER_PH_3 = 0x20, +}; + +/* + * PDU size defines + */ +enum { + FC_MIN_PDUSZ = 512, + FC_MAX_PDUSZ = 2112, +}; + +/* + * N_Port PLOGI Common Service Parameters. + * FC-PH-x. Figure-76. pg. 308. + */ +struct fc_plogi_csp_s{ + u8 verhi; /* FC-PH high version */ + u8 verlo; /* FC-PH low version */ + u16 bbcred; /* BB_Credit */ + +#ifdef __BIGENDIAN + u8 ciro:1, /* continuously increasing RO */ + rro:1, /* random relative offset */ + npiv_supp:1, /* NPIV supported */ + port_type:1, /* N_Port/F_port */ + altbbcred:1, /* alternate BB_Credit */ + resolution:1, /* ms/ns ED_TOV resolution */ + vvl_info:1, /* VVL Info included */ + reserved1:1; + + u8 hg_supp:1, + query_dbc:1, + security:1, + sync_cap:1, + r_t_tov:1, + dh_dup_supp:1, + cisc:1, /* continuously increasing seq count */ + payload:1; +#else + u8 reserved2:2, + resolution:1, /* ms/ns ED_TOV resolution */ + altbbcred:1, /* alternate BB_Credit */ + port_type:1, /* N_Port/F_port */ + npiv_supp:1, /* NPIV supported */ + rro:1, /* random relative offset */ + ciro:1; /* continuously increasing RO */ + + u8 payload:1, + cisc:1, /* continuously increasing seq count */ + dh_dup_supp:1, + r_t_tov:1, + sync_cap:1, + security:1, + query_dbc:1, + hg_supp:1; +#endif + + u16 rxsz; /* receive data_field size */ + + u16 conseq; + u16 ro_bitmap; + + u32 e_d_tov; +}; + +/* + * N_Port PLOGI Class Specific Parameters. + * FC-PH-x. Figure 78. pg. 318. + */ +struct fc_plogi_clp_s{ +#ifdef __BIGENDIAN + u32 class_valid:1; + u32 intermix:1; /* class intermix supported if set =1. + * valid only for class1. Reserved for + * class2 & class3 + */ + u32 reserved1:2; + u32 sequential:1; + u32 reserved2:3; +#else + u32 reserved2:3; + u32 sequential:1; + u32 reserved1:2; + u32 intermix:1; /* class intermix supported if set =1. + * valid only for class1.
Reserved for + * class2 & class3 + */ + u32 class_valid:1; +#endif + + u32 reserved3:24; + + u32 reserved4:16; + u32 rxsz:16; /* Receive data_field size */ + + u32 reserved5:8; + u32 conseq:8; + u32 e2e_credit:16; /* end to end credit */ + + u32 reserved7:8; + u32 ospx:8; + u32 reserved8:16; +}; + +#define FLOGI_VVL_BRCD 0x42524344 /* ASCII value for each character in + * string "BRCD" */ + +/* + * PLOGI els command and reply payload + */ +struct fc_logi_s{ + struct fc_els_cmd_s els_cmd; /* ELS command code */ + struct fc_plogi_csp_s csp; /* common service params */ + wwn_t port_name; + wwn_t node_name; + struct fc_plogi_clp_s class1; /* class 1 service parameters */ + struct fc_plogi_clp_s class2; /* class 2 service parameters */ + struct fc_plogi_clp_s class3; /* class 3 service parameters */ + struct fc_plogi_clp_s class4; /* class 4 service parameters */ + u8 vvl[16]; /* vendor version level */ +}; + +/* + * LOGO els command payload + */ +struct fc_logo_s{ + struct fc_els_cmd_s els_cmd; /* ELS command code */ + u32 res1:8; + u32 nport_id:24; /* N_Port identifier of source */ + wwn_t orig_port_name; /* Port name of the LOGO originator */ +}; + +/* + * ADISC els command payload + */ +struct fc_adisc_s { + struct fc_els_cmd_s els_cmd; /* ELS command code */ + u32 res1:8; + u32 orig_HA:24; /* originator hard address */ + wwn_t orig_port_name; /* originator port name */ + wwn_t orig_node_name; /* originator node name */ + u32 res2:8; + u32 nport_id:24; /* originator NPortID */ +}; + +/* + * Exchange status block + */ +struct fc_exch_status_blk_s{ + u32 oxid:16; + u32 rxid:16; + u32 res1:8; + u32 orig_np:24; /* originator NPortID */ + u32 res2:8; + u32 resp_np:24; /* responder NPortID */ + u32 es_bits; + u32 res3; + /* + * unmodified section of the fields + */ +}; + +/* + * RES els command payload + */ +struct fc_res_s { + struct fc_els_cmd_s els_cmd; /* ELS command code */ + u32 res1:8; + u32 nport_id:24; /* N_Port identifier of source */ + u32 oxid:16; + u32 rxid:16; + u8 assoc_hdr[32]; +}; + +/* + * RES els accept payload + */ +struct fc_res_acc_s{ + struct fc_els_cmd_s els_cmd; /* ELS command code */ + struct fc_exch_status_blk_s fc_exch_blk; /* Exchange status block */ +}; + +/* + * REC els command payload + */ +struct fc_rec_s { + struct fc_els_cmd_s els_cmd; /* ELS command code */ + u32 res1:8; + u32 nport_id:24; /* N_Port identifier of source */ + u32 oxid:16; + u32 rxid:16; +}; + +#define FC_REC_ESB_OWN_RSP 0x80000000 /* responder owns */ +#define FC_REC_ESB_SI 0x40000000 /* SI is owned */ +#define FC_REC_ESB_COMP 0x20000000 /* exchange is complete */ +#define FC_REC_ESB_ENDCOND_ABN 0x10000000 /* abnormal ending */ +#define FC_REC_ESB_RQACT 0x04000000 /* recovery qual active */ +#define FC_REC_ESB_ERRP_MSK 0x03000000 +#define FC_REC_ESB_OXID_INV 0x00800000 /* invalid OXID */ +#define FC_REC_ESB_RXID_INV 0x00400000 /* invalid RXID */ +#define FC_REC_ESB_PRIO_INUSE 0x00200000 + +/* + * REC els accept payload + */ +struct fc_rec_acc_s { + struct fc_els_cmd_s els_cmd; /* ELS command code */ + u32 oxid:16; + u32 rxid:16; + u32 res1:8; + u32 orig_id:24; /* N_Port id of exchange originator */ + u32 res2:8; + u32 resp_id:24; /* N_Port id of exchange responder */ + u32 count; /* data transfer count */ + u32 e_stat; /* exchange status */ +}; + +/* + * RSI els payload + */ +struct fc_rsi_s { + struct fc_els_cmd_s els_cmd; + u32 res1:8; + u32 orig_sid:24; + u32 oxid:16; + u32 rxid:16; +}; + +/* + * structure for PRLI parameter pages, both request & response + * see FC-PH-X tables 113 & 115 for
explanation; also FCP table 8 + */ +struct fc_prli_params_s{ + u32 reserved: 16; +#ifdef __BIGENDIAN + u32 reserved1: 5; + u32 rec_support : 1; + u32 task_retry_id : 1; + u32 retry : 1; + + u32 confirm : 1; + u32 doverlay:1; + u32 initiator:1; + u32 target:1; + u32 cdmix:1; + u32 drmix:1; + u32 rxrdisab:1; + u32 wxrdisab:1; +#else + u32 retry : 1; + u32 task_retry_id : 1; + u32 rec_support : 1; + u32 reserved1: 5; + + u32 wxrdisab:1; + u32 rxrdisab:1; + u32 drmix:1; + u32 cdmix:1; + u32 target:1; + u32 initiator:1; + u32 doverlay:1; + u32 confirm : 1; +#endif +}; + +/* + * valid values for rspcode in PRLI ACC payload + */ +enum { + FC_PRLI_ACC_XQTD = 0x1, /* request executed */ + FC_PRLI_ACC_PREDEF_IMG = 0x5, /* predefined image - no prli needed */ +}; + +struct fc_prli_params_page_s{ + u32 type:8; + u32 codext:8; +#ifdef __BIGENDIAN + u32 origprocasv:1; + u32 rsppav:1; + u32 imagepair:1; + u32 reserved1:1; + u32 rspcode:4; +#else + u32 rspcode:4; + u32 reserved1:1; + u32 imagepair:1; + u32 rsppav:1; + u32 origprocasv:1; +#endif + u32 reserved2:8; + + u32 origprocas; + u32 rspprocas; + struct fc_prli_params_s servparams; +}; + +/* + * PRLI request and accept payload, FC-PH-X tables 112 & 114 + */ +struct fc_prli_s{ + u32 command:8; + u32 pglen:8; + u32 pagebytes:16; + struct fc_prli_params_page_s parampage; +}; + +/* + * PRLO logout params page + */ +struct fc_prlo_params_page_s{ + u32 type:8; + u32 type_ext:8; +#ifdef __BIGENDIAN + u32 opa_valid:1; /* originator process associator + * valid + */ + u32 rpa_valid:1; /* responder process associator valid */ + u32 res1:14; +#else + u32 res1:14; + u32 rpa_valid:1; /* responder process associator valid */ + u32 opa_valid:1; /* originator process associator + * valid + */ +#endif + u32 orig_process_assc; + u32 resp_process_assc; + + u32 res2; +}; + +/* + * PRLO els command payload + */ +struct fc_prlo_s{ + u32 command:8; + u32 page_len:8; + u32 payload_len:16; + struct fc_prlo_params_page_s prlo_params[1]; +}; + +/* + * PRLO Logout response parameter page + */ +struct fc_prlo_acc_params_page_s{ + u32 type:8; + u32 type_ext:8; + +#ifdef __BIGENDIAN + u32 opa_valid:1; /* originator process associator + * valid + */ + u32 rpa_valid:1; /* responder process associator valid */ + u32 res1:14; +#else + u32 res1:14; + u32 rpa_valid:1; /* responder process associator valid */ + u32 opa_valid:1; /* originator process associator + * valid + */ +#endif + u32 orig_process_assc; + u32 resp_process_assc; + + u32 fc4type_csp; +}; + +/* + * PRLO els command ACC payload + */ +struct fc_prlo_acc_s{ + u32 command:8; + u32 page_len:8; + u32 payload_len:16; + struct fc_prlo_acc_params_page_s prlo_acc_params[1]; +}; + +/* + * SCR els command payload + */ +enum { + FC_SCR_REG_FUNC_FABRIC_DETECTED = 0x01, + FC_SCR_REG_FUNC_N_PORT_DETECTED = 0x02, + FC_SCR_REG_FUNC_FULL = 0x03, + FC_SCR_REG_FUNC_CLEAR_REG = 0xFF, +}; + +/* SCR VU registrations */ +enum { + FC_VU_SCR_REG_FUNC_FABRIC_NAME_CHANGE = 0x01 +}; + +struct fc_scr_s{ + u32 command:8; + u32 res:24; + u32 vu_reg_func:8; /* Vendor Unique Registrations */ + u32 res1:16; + u32 reg_func:8; +}; + +/* + * Information category for Basic link data + */ +enum { + FC_CAT_NOP = 0x0, + FC_CAT_ABTS = 0x1, + FC_CAT_RMC = 0x2, + FC_CAT_BA_ACC = 0x4, + FC_CAT_BA_RJT = 0x5, + FC_CAT_PRMT = 0x6, +}; + +/* + * LS_RJT els reply payload + */ +struct fc_ls_rjt_s { + struct fc_els_cmd_s els_cmd; /* ELS command code */ + u32 res1:8; + u32 reason_code:8; /* Reason code for reject */ + u32 reason_code_expl:8; /* Reason code explanation */ +
u32 vendor_unique:8; /* Vendor specific */ +}; + +/* + * LS_RJT reason codes + */ +enum { + FC_LS_RJT_RSN_INV_CMD_CODE = 0x01, + FC_LS_RJT_RSN_LOGICAL_ERROR = 0x03, + FC_LS_RJT_RSN_LOGICAL_BUSY = 0x05, + FC_LS_RJT_RSN_PROTOCOL_ERROR = 0x07, + FC_LS_RJT_RSN_UNABLE_TO_PERF_CMD = 0x09, + FC_LS_RJT_RSN_CMD_NOT_SUPP = 0x0B, +}; + +/* + * LS_RJT reason code explanation + */ +enum { + FC_LS_RJT_EXP_NO_ADDL_INFO = 0x00, + FC_LS_RJT_EXP_SPARMS_ERR_OPTIONS = 0x01, + FC_LS_RJT_EXP_SPARMS_ERR_INI_CTL = 0x03, + FC_LS_RJT_EXP_SPARMS_ERR_REC_CTL = 0x05, + FC_LS_RJT_EXP_SPARMS_ERR_RXSZ = 0x07, + FC_LS_RJT_EXP_SPARMS_ERR_CONSEQ = 0x09, + FC_LS_RJT_EXP_SPARMS_ERR_CREDIT = 0x0B, + FC_LS_RJT_EXP_INV_PORT_NAME = 0x0D, + FC_LS_RJT_EXP_INV_NODE_FABRIC_NAME = 0x0E, + FC_LS_RJT_EXP_INV_CSP = 0x0F, + FC_LS_RJT_EXP_INV_ASSOC_HDR = 0x11, + FC_LS_RJT_EXP_ASSOC_HDR_REQD = 0x13, + FC_LS_RJT_EXP_INV_ORIG_S_ID = 0x15, + FC_LS_RJT_EXP_INV_OXID_RXID_COMB = 0x17, + FC_LS_RJT_EXP_CMD_ALREADY_IN_PROG = 0x19, + FC_LS_RJT_EXP_LOGIN_REQUIRED = 0x1E, + FC_LS_RJT_EXP_INVALID_NPORT_ID = 0x1F, + FC_LS_RJT_EXP_INSUFF_RES = 0x29, + FC_LS_RJT_EXP_CMD_NOT_SUPP = 0x2C, + FC_LS_RJT_EXP_INV_PAYLOAD_LEN = 0x2D, +}; + +/* + * RRQ els command payload + */ +struct fc_rrq_s{ + struct fc_els_cmd_s els_cmd; /* ELS command code */ + u32 res1:8; + u32 s_id:24; /* exchange originator S_ID */ + + u32 ox_id:16; /* originator exchange ID */ + u32 rx_id:16; /* responder exchange ID */ + + u32 res2[8]; /* optional association header */ +}; + +/* + * ABTS BA_ACC reply payload + */ +struct fc_ba_acc_s{ + u32 seq_id_valid:8; /* set to 0x00 for Abort Exchange */ + u32 seq_id:8; /* invalid for Abort Exchange */ + u32 res2:16; + u32 ox_id:16; /* OX_ID from ABTS frame */ + u32 rx_id:16; /* RX_ID from ABTS frame */ + u32 low_seq_cnt:16; /* set to 0x0000 for Abort Exchange */ + u32 high_seq_cnt:16;/* set to 0xFFFF for Abort Exchange */ +}; + +/* + * ABTS BA_RJT reject payload + */ +struct fc_ba_rjt_s{ + u32 res1:8; /* Reserved */ + u32 reason_code:8; /* reason code for reject */ + u32 reason_expl:8; /* reason code explanation */ + u32 vendor_unique:8;/* vendor unique reason code, set to 0 */ +}; + +/* + * TPRLO logout parameter page + */ +struct fc_tprlo_params_page_s{ + u32 type:8; + u32 type_ext:8; + +#ifdef __BIGENDIAN + u32 opa_valid:1; + u32 rpa_valid:1; + u32 tpo_nport_valid:1; + u32 global_process_logout:1; + u32 res1:12; +#else + u32 res1:12; + u32 global_process_logout:1; + u32 tpo_nport_valid:1; + u32 rpa_valid:1; + u32 opa_valid:1; +#endif + + u32 orig_process_assc; + u32 resp_process_assc; + + u32 res2:8; + u32 tpo_nport_id; +}; + +/* + * TPRLO ELS command payload + */ +struct fc_tprlo_s{ + u32 command:8; + u32 page_len:8; + u32 payload_len:16; + + struct fc_tprlo_params_page_s tprlo_params[1]; +}; + +enum fc_tprlo_type{ + FC_GLOBAL_LOGO = 1, + FC_TPR_LOGO +}; + +/* + * TPRLO els command ACC payload + */ +struct fc_tprlo_acc_s{ + u32 command:8; + u32 page_len:8; + u32 payload_len:16; + struct fc_prlo_acc_params_page_s tprlo_acc_params[1]; +}; + +/* + * RSCN els command req payload + */ +#define FC_RSCN_PGLEN 0x4 + +enum fc_rscn_format{ + FC_RSCN_FORMAT_PORTID = 0x0, + FC_RSCN_FORMAT_AREA = 0x1, + FC_RSCN_FORMAT_DOMAIN = 0x2, + FC_RSCN_FORMAT_FABRIC = 0x3, +}; + +struct fc_rscn_event_s{ + u32 format:2; + u32 qualifier:4; + u32 resvd:2; + u32 portid:24; +}; + +struct fc_rscn_pl_s{ + u8 command; + u8 pagelen; + u16 payldlen; + struct fc_rscn_event_s event[1]; +}; + +/* + * ECHO els command req payload + */ +struct fc_echo_s { + struct fc_els_cmd_s els_cmd;
+}; + +/* + * RNID els command + */ + +#define RNID_NODEID_DATA_FORMAT_COMMON 0x00 +#define RNID_NODEID_DATA_FORMAT_FCP3 0x08 +#define RNID_NODEID_DATA_FORMAT_DISCOVERY 0xDF + +#define RNID_ASSOCIATED_TYPE_UNKNOWN 0x00000001 +#define RNID_ASSOCIATED_TYPE_OTHER 0x00000002 +#define RNID_ASSOCIATED_TYPE_HUB 0x00000003 +#define RNID_ASSOCIATED_TYPE_SWITCH 0x00000004 +#define RNID_ASSOCIATED_TYPE_GATEWAY 0x00000005 +#define RNID_ASSOCIATED_TYPE_STORAGE_DEVICE 0x00000009 +#define RNID_ASSOCIATED_TYPE_HOST 0x0000000A +#define RNID_ASSOCIATED_TYPE_STORAGE_SUBSYSTEM 0x0000000B +#define RNID_ASSOCIATED_TYPE_STORAGE_ACCESS_DEVICE 0x0000000E +#define RNID_ASSOCIATED_TYPE_NAS_SERVER 0x00000011 +#define RNID_ASSOCIATED_TYPE_BRIDGE 0x00000002 +#define RNID_ASSOCIATED_TYPE_VIRTUALIZATION_DEVICE 0x00000003 +#define RNID_ASSOCIATED_TYPE_MULTI_FUNCTION_DEVICE 0x000000FF + +/* + * RNID els command payload + */ +struct fc_rnid_cmd_s{ + struct fc_els_cmd_s els_cmd; + u32 node_id_data_format:8; + u32 reserved:24; +}; + +/* + * RNID els response payload + */ + +struct fc_rnid_common_id_data_s{ + wwn_t port_name; + wwn_t node_name; +}; + +struct fc_rnid_general_topology_data_s{ + u32 vendor_unique[4]; + u32 asso_type; + u32 phy_port_num; + u32 num_attached_nodes; + u32 node_mgmt:8; + u32 ip_version:8; + u32 udp_tcp_port_num:16; + u32 ip_address[4]; + u32 reserved:16; + u32 vendor_specific:16; +}; + +struct fc_rnid_acc_s{ + struct fc_els_cmd_s els_cmd; + u32 node_id_data_format:8; + u32 common_id_data_length:8; + u32 reserved:8; + u32 specific_id_data_length:8; + struct fc_rnid_common_id_data_s common_id_data; + struct fc_rnid_general_topology_data_s gen_topology_data; +}; + +enum fc_rpsc_speed_cap{ + RPSC_SPEED_CAP_1G = 0x8000, + RPSC_SPEED_CAP_2G = 0x4000, + RPSC_SPEED_CAP_4G = 0x2000, + RPSC_SPEED_CAP_10G = 0x1000, + RPSC_SPEED_CAP_8G = 0x0800, + RPSC_SPEED_CAP_16G = 0x0400, + + RPSC_SPEED_CAP_UNKNOWN = 0x0001, +}; + +enum fc_rpsc_op_speed_s{ + RPSC_OP_SPEED_1G = 0x8000, + RPSC_OP_SPEED_2G = 0x4000, + RPSC_OP_SPEED_4G = 0x2000, + RPSC_OP_SPEED_10G = 0x1000, + RPSC_OP_SPEED_8G = 0x0800, + RPSC_OP_SPEED_16G = 0x0400, + + RPSC_OP_SPEED_NOT_EST = 0x0001, /*! speed not established */ +}; + +struct fc_rpsc_speed_info_s{ + u16 port_speed_cap; /*! see fc_rpsc_speed_cap_t */ + u16 port_op_speed; /*! see fc_rpsc_op_speed_t */ +};
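The RPSC speed fields above are one-hot bitmasks rather than numeric rates. A standalone sketch of turning port_op_speed into Gb/s, with the enum values duplicated from fc_rpsc_op_speed_s above:

	#include <stdint.h>
	#include <stdio.h>

	/* values duplicated from enum fc_rpsc_op_speed_s for a standalone build */
	enum {
		RPSC_OP_SPEED_1G  = 0x8000, RPSC_OP_SPEED_2G  = 0x4000,
		RPSC_OP_SPEED_4G  = 0x2000, RPSC_OP_SPEED_10G = 0x1000,
		RPSC_OP_SPEED_8G  = 0x0800, RPSC_OP_SPEED_16G = 0x0400,
		RPSC_OP_SPEED_NOT_EST = 0x0001,
	};

	/* map the one-hot operating-speed bit to Gb/s; 0 means not established */
	static int rpsc_op_speed_gbps(uint16_t op_speed)
	{
		switch (op_speed) {
		case RPSC_OP_SPEED_1G:  return 1;
		case RPSC_OP_SPEED_2G:  return 2;
		case RPSC_OP_SPEED_4G:  return 4;
		case RPSC_OP_SPEED_8G:  return 8;
		case RPSC_OP_SPEED_10G: return 10;
		case RPSC_OP_SPEED_16G: return 16;
		default:                return 0;
		}
	}

	int main(void)
	{
		printf("%d Gb/s\n", rpsc_op_speed_gbps(RPSC_OP_SPEED_8G));
		return 0;
	}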
+ +enum link_e2e_beacon_subcmd{ + LINK_E2E_BEACON_ON = 1, + LINK_E2E_BEACON_OFF = 2 +}; + +enum beacon_type{ + BEACON_TYPE_NORMAL = 1, /*! Normal Beaconing. Green */ + BEACON_TYPE_WARN = 2, /*! Warning Beaconing. Yellow/Amber */ + BEACON_TYPE_CRITICAL = 3 /*! Critical Beaconing. Red */ +}; + +struct link_e2e_beacon_param_s { + u8 beacon_type; /* Beacon Type. See beacon_type_t */ + u8 beacon_frequency; + /* Beacon frequency. Number of blinks + * per 10 seconds + */ + u16 beacon_duration;/* Beacon duration (in Seconds). The + * command operation should be + * terminated at the end of this + * timeout value. + * + * Ignored if diag_sub_cmd is + * LINK_E2E_BEACON_OFF. + * + * If 0, beaconing will continue till a + * BEACON OFF request is received + */ +}; + +/* + * Link E2E beacon request/good response format. For LS_RJTs use fc_ls_rjt_t + */ +struct link_e2e_beacon_req_s{ + u32 ls_code; /*! FC_ELS_E2E_LBEACON in requests + * or FC_ELS_ACC in good replies */ + u32 ls_sub_cmd; /*! See link_e2e_beacon_subcmd_t */ + struct link_e2e_beacon_param_s beacon_parm; +}; + +/** + * If RPSC request is sent to the Domain Controller, the request is for + * all the ports within that domain (TODO - I don't think FOS implements + * this...). + */ +struct fc_rpsc_cmd_s{ + struct fc_els_cmd_s els_cmd; +}; + +/* + * RPSC Acc + */ +struct fc_rpsc_acc_s{ + u32 command:8; + u32 rsvd:8; + u32 num_entries:16; + + struct fc_rpsc_speed_info_s speed_info[1]; +}; + +/** + * If an RPSC2 request is sent to the Domain Controller, the request is for + * all the ports within that domain. + */ +#define FC_BRCD_TOKEN 0x42524344 + +struct fc_rpsc2_cmd_s{ + struct fc_els_cmd_s els_cmd; + u32 token; + u16 resvd; + u16 num_pids; /* Number of pids in the request */ + struct { + u32 rsvd1:8; + u32 pid:24; /* port identifier */ + } pid_list[1]; +}; + +enum fc_rpsc2_port_type{ + RPSC2_PORT_TYPE_UNKNOWN = 0, + RPSC2_PORT_TYPE_NPORT = 1, + RPSC2_PORT_TYPE_NLPORT = 2, + RPSC2_PORT_TYPE_NPIV_PORT = 0x5f, + RPSC2_PORT_TYPE_NPORT_TRUNK = 0x6f, +}; + +/* + * RPSC2 portInfo entry structure + */ +struct fc_rpsc2_port_info_s{ + u32 pid; /* PID */ + u16 resvd1; + u16 index; /* port number / index */ + u8 resvd2; + u8 type; /* port type N/NL/... */ + u16 speed; /* port Operating Speed */ +}; + +/* + * RPSC2 Accept payload + */ +struct fc_rpsc2_acc_s{ + u8 els_cmd; + u8 resvd; + u16 num_pids; /* Number of pids in the request */ + struct fc_rpsc2_port_info_s port_info[1]; /* port information */ +}; + +/** + * bit fields so that multiple classes can be specified + */ +enum fc_cos{ + FC_CLASS_2 = 0x04, + FC_CLASS_3 = 0x08, + FC_CLASS_2_3 = 0x0C, +}; + +/* + * symbolic name + */ +struct fc_symname_s{ + u8 symname[FC_SYMNAME_MAX]; +}; + +struct fc_alpabm_s{ + u8 alpa_bm[FC_ALPA_MAX / 8]; +}; + +/* + * protocol default timeout values + */ +#define FC_ED_TOV 2 +#define FC_REC_TOV (FC_ED_TOV + 1) +#define FC_RA_TOV 10 +#define FC_ELS_TOV (2 * FC_RA_TOV) + +/* + * virtual fabric related defines + */ +#define FC_VF_ID_NULL 0 /* must not be used as VF_ID */ +#define FC_VF_ID_MIN 1 +#define FC_VF_ID_MAX 0xEFF +#define FC_VF_ID_CTL 0xFEF /* control VF_ID */ + +/** + * Virtual Fabric Tagging header format + * @caution This is defined only in BIG ENDIAN format. + */ +struct fc_vft_s{ + u32 r_ctl:8; + u32 ver:2; + u32 type:4; + u32 res_a:2; + u32 priority:3; + u32 vf_id:12; + u32 res_b:1; + u32 hopct:8; + u32 res_c:24; +}; + +#pragma pack() + +#endif diff --git a/drivers/scsi/bfa/include/protocol/fc_sp.h b/drivers/scsi/bfa/include/protocol/fc_sp.h new file mode 100644 index 000000000000..55bb0b31d04b --- /dev/null +++ b/drivers/scsi/bfa/include/protocol/fc_sp.h @@ -0,0 +1,224 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __FC_SP_H__ +#define __FC_SP_H__ + +#include + +#pragma pack(1) + +enum auth_els_flags{ + FC_AUTH_ELS_MORE_FRAGS_FLAG = 0x80, /*! bit-7. More Fragments + * Follow + */ + FC_AUTH_ELS_CONCAT_FLAG = 0x40, /*! bit-6. Concatenation Flag */ + FC_AUTH_ELS_SEQ_NUM_FLAG = 0x01 /*! bit-0. Sequence Number */ +}; + +enum auth_msg_codes{ + FC_AUTH_MC_AUTH_RJT = 0x0A, /*! Auth Reject */ + FC_AUTH_MC_AUTH_NEG = 0x0B, /*! Auth Negotiate */ + FC_AUTH_MC_AUTH_DONE = 0x0C, /*! Auth Done */ + + FC_AUTH_MC_DHCHAP_CHAL = 0x10, /*! DHCHAP Challenge */ + FC_AUTH_MC_DHCHAP_REPLY = 0x11, /*! DHCHAP Reply */ + FC_AUTH_MC_DHCHAP_SUCC = 0x12, /*! DHCHAP Success */ + + FC_AUTH_MC_FCAP_REQ = 0x13, /*! FCAP Request */ + FC_AUTH_MC_FCAP_ACK = 0x14, /*! FCAP Acknowledge */ + FC_AUTH_MC_FCAP_CONF = 0x15, /*! FCAP Confirm */ + + FC_AUTH_MC_FCPAP_INIT = 0x16, /*! FCPAP Init */ + FC_AUTH_MC_FCPAP_ACC = 0x17, /*! FCPAP Accept */ + FC_AUTH_MC_FCPAP_COMP = 0x18, /*! FCPAP Complete */ + + FC_AUTH_MC_IKE_SA_INIT = 0x22, /*! IKE SA INIT */ + FC_AUTH_MC_IKE_SA_AUTH = 0x23, /*! IKE SA Auth */ + FC_AUTH_MC_IKE_CREATE_CHILD_SA = 0x24, /*! IKE Create Child SA */ + FC_AUTH_MC_IKE_INFO = 0x25, /*! IKE informational */ +}; + +enum auth_proto_version{ + FC_AUTH_PROTO_VER_1 = 1, /*! Protocol Version 1 */ +}; + +enum { + FC_AUTH_ELS_COMMAND_CODE = 0x90,/*! Authentication ELS Command code */ + FC_AUTH_PROTO_PARAM_LEN_SZ = 4, /*! Size of Proto Parameter Len Field */ + FC_AUTH_PROTO_PARAM_VAL_SZ = 4, /*! Size of Proto Parameter Val Field */ + FC_MAX_AUTH_SECRET_LEN = 256, + /*! Maximum secret string length */ + FC_AUTH_NUM_USABLE_PROTO_LEN_SZ = 4, + /*! Size of usable protocols field */ + FC_AUTH_RESP_VALUE_LEN_SZ = 4, + /*! Size of response value length */ + FC_MAX_CHAP_KEY_LEN = 256, /*! Maximum md5 digest length */ + FC_MAX_AUTH_RETRIES = 3, /*! Maximum number of retries */ + FC_MD5_DIGEST_LEN = 16, /*! MD5 digest length */ + FC_SHA1_DIGEST_LEN = 20, /*! SHA1 digest length */ + FC_MAX_DHG_SUPPORTED = 1, /*! Maximum DH Groups supported */ + FC_MAX_ALG_SUPPORTED = 1, /*! Maximum algorithms supported */ + FC_MAX_PROTO_SUPPORTED = 1, /*! Maximum protocols supported */ + FC_START_TXN_ID = 2, /*! Starting transaction ID */ +}; + +enum auth_proto_id{ + FC_AUTH_PROTO_DHCHAP = 0x00000001, + FC_AUTH_PROTO_FCAP = 0x00000002, + FC_AUTH_PROTO_FCPAP = 0x00000003, + FC_AUTH_PROTO_IKEv2 = 0x00000004, + FC_AUTH_PROTO_IKEv2_AUTH = 0x00000005, +}; + +struct auth_name_s{ + u16 name_tag; /*! Name Tag = 1 for Authentication */ + u16 name_len; /*! Name Length = 8 for Authentication + */ + wwn_t name; /*! Name. 
TODO - is this PWWN */ +}; + + +enum auth_hash_func{ + FC_AUTH_HASH_FUNC_MD5 = 0x00000005, + FC_AUTH_HASH_FUNC_SHA_1 = 0x00000006, +}; + +enum auth_dh_gid{ + FC_AUTH_DH_GID_0_DHG_NULL = 0x00000000, + FC_AUTH_DH_GID_1_DHG_1024 = 0x00000001, + FC_AUTH_DH_GID_2_DHG_1280 = 0x00000002, + FC_AUTH_DH_GID_3_DHG_1536 = 0x00000003, + FC_AUTH_DH_GID_4_DHG_2048 = 0x00000004, + FC_AUTH_DH_GID_6_DHG_3072 = 0x00000006, + FC_AUTH_DH_GID_7_DHG_4096 = 0x00000007, + FC_AUTH_DH_GID_8_DHG_6144 = 0x00000008, + FC_AUTH_DH_GID_9_DHG_8192 = 0x00000009, +}; + +struct auth_els_msg_s { + u8 auth_els_code; /* Authentication ELS Code (0x90) */ + u8 auth_els_flag; /* Authentication ELS Flags */ + u8 auth_msg_code; /* Authentication Message Code */ + u8 proto_version; /* Protocol Version */ + u32 msg_len; /* Message Length */ + u32 trans_id; /* Transaction Identifier (T_ID) */ + + /* Msg payload follows... */ +}; + + +enum auth_neg_param_tags { + FC_AUTH_NEG_DHCHAP_HASHLIST = 0x0001, + FC_AUTH_NEG_DHCHAP_DHG_ID_LIST = 0x0002, +}; + + +struct dhchap_param_format_s { + u16 tag; /*! Parameter Tag. See + * auth_neg_param_tags_t + */ + u16 word_cnt; + + /* followed by variable length parameter value... */ +}; + +struct auth_proto_params_s { + u32 proto_param_len; + u32 proto_id; + + /* + * Followed by variable length Protocol specific parameters. DH-CHAP + * uses dhchap_param_format_t + */ +}; + +struct auth_neg_msg_s { + struct auth_name_s auth_ini_name; + u32 usable_auth_protos; + struct auth_proto_params_s proto_params[1]; /*! (1..usable_auth_proto) + * protocol params + */ +}; + +struct auth_dh_val_s { + u32 dh_val_len; + u32 dh_val[1]; +}; + +struct auth_dhchap_chal_msg_s { + struct auth_els_msg_s hdr; + struct auth_name_s auth_responder_name; /* TODO VRK - is auth_name_t + * type OK? + */ + u32 hash_id; + u32 dh_grp_id; + u32 chal_val_len; + char chal_val[1]; + + /* ...followed by variable Challenge length/value and DH length/value */ +}; + + +enum auth_rjt_codes { + FC_AUTH_RJT_CODE_AUTH_FAILURE = 0x01, + FC_AUTH_RJT_CODE_LOGICAL_ERR = 0x02, +}; + +enum auth_rjt_code_exps { + FC_AUTH_CEXP_AUTH_MECH_NOT_USABLE = 0x01, + FC_AUTH_CEXP_DH_GROUP_NOT_USABLE = 0x02, + FC_AUTH_CEXP_HASH_FUNC_NOT_USABLE = 0x03, + FC_AUTH_CEXP_AUTH_XACT_STARTED = 0x04, + FC_AUTH_CEXP_AUTH_FAILED = 0x05, + FC_AUTH_CEXP_INCORRECT_PLD = 0x06, + FC_AUTH_CEXP_INCORRECT_PROTO_MSG = 0x07, + FC_AUTH_CEXP_RESTART_AUTH_PROTO = 0x08, + FC_AUTH_CEXP_AUTH_CONCAT_NOT_SUPP = 0x09, + FC_AUTH_CEXP_PROTO_VER_NOT_SUPP = 0x0A, +}; + +enum auth_status { + FC_AUTH_STATE_INPROGRESS = 0, /*! authentication in progress */ + FC_AUTH_STATE_FAILED = 1, /*! authentication failed */ + FC_AUTH_STATE_SUCCESS = 2 /*! authentication successful */ +}; + +struct auth_rjt_msg_s { + struct auth_els_msg_s hdr; + u8 reason_code; + u8 reason_code_exp; + u8 rsvd[2]; +}; + + +struct auth_dhchap_neg_msg_s { + struct auth_els_msg_s hdr; + struct auth_neg_msg_s nego; +}; + +struct auth_dhchap_reply_msg_s { + struct auth_els_msg_s hdr; + + /* + * followed by response value length & Value + DH Value Length & Value + */ +}; + +#pragma pack() + +#endif /* __FC_SP_H__ */ diff --git a/drivers/scsi/bfa/include/protocol/fcp.h b/drivers/scsi/bfa/include/protocol/fcp.h new file mode 100644 index 000000000000..9ade68ad2853 --- /dev/null +++ b/drivers/scsi/bfa/include/protocol/fcp.h @@ -0,0 +1,186 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __FCPPROTO_H__ +#define __FCPPROTO_H__ + +#include + +#pragma pack(1) + +enum { + FCP_RJT = 0x01000000, /* SRR reject */ + FCP_SRR_ACCEPT = 0x02000000, /* SRR accept */ + FCP_SRR = 0x14000000, /* Sequence Retransmission Request */ +}; + +/* + * SRR FC-4 LS payload + */ +struct fc_srr_s{ + u32 ls_cmd; + u32 ox_id:16; /* ox-id */ + u32 rx_id:16; /* rx-id */ + u32 ro; /* relative offset */ + u32 r_ctl:8; /* R_CTL for I.U. */ + u32 res:24; +}; + + +/* + * FCP_CMND definitions + */ +#define FCP_CMND_CDB_LEN 16 +#define FCP_CMND_LUN_LEN 8 + +struct fcp_cmnd_s{ + lun_t lun; /* 64-bit LU number */ + u8 crn; /* command reference number */ +#ifdef __BIGENDIAN + u8 resvd:1, + priority:4, /* FCP-3: SAM-3 priority */ + taskattr:3; /* scsi task attribute */ +#else + u8 taskattr:3, /* scsi task attribute */ + priority:4, /* FCP-3: SAM-3 priority */ + resvd:1; +#endif + u8 tm_flags; /* task management flags */ +#ifdef __BIGENDIAN + u8 addl_cdb_len:6, /* additional CDB length words */ + iodir:2; /* read/write FCP_DATA IUs */ +#else + u8 iodir:2, /* read/write FCP_DATA IUs */ + addl_cdb_len:6; /* additional CDB length */ +#endif + struct scsi_cdb_s cdb; + + /* + * !!! additional cdb bytes follows here!!! + */ + u32 fcp_dl; /* bytes to be transferred */ +}; + +#define fcp_cmnd_cdb_len(_cmnd) ((_cmnd)->addl_cdb_len * 4 + FCP_CMND_CDB_LEN) +#define fcp_cmnd_fcpdl(_cmnd) ((&(_cmnd)->fcp_dl)[(_cmnd)->addl_cdb_len]) + +/* + * fcp_cmnd_t.iodir field values + */ +enum fcp_iodir{ + FCP_IODIR_NONE = 0, + FCP_IODIR_WRITE = 1, + FCP_IODIR_READ = 2, + FCP_IODIR_RW = 3, +}; + +/* + * Task attribute field + */ +enum { + FCP_TASK_ATTR_SIMPLE = 0, + FCP_TASK_ATTR_HOQ = 1, + FCP_TASK_ATTR_ORDERED = 2, + FCP_TASK_ATTR_ACA = 4, + FCP_TASK_ATTR_UNTAGGED = 5, /* obsolete in FCP-3 */ +}; + +/* + * Task management flags field - only one bit shall be set + */ +#ifndef BIT +#define BIT(_x) (1 << (_x)) +#endif +enum fcp_tm_cmnd{ + FCP_TM_ABORT_TASK_SET = BIT(1), + FCP_TM_CLEAR_TASK_SET = BIT(2), + FCP_TM_LUN_RESET = BIT(4), + FCP_TM_TARGET_RESET = BIT(5), /* obsolete in FCP-3 */ + FCP_TM_CLEAR_ACA = BIT(6), +}; + +/* + * FCP_XFER_RDY IU defines + */ +struct fcp_xfer_rdy_s{ + u32 data_ro; + u32 burst_len; + u32 reserved; +}; + +/* + * FCP_RSP residue flags + */ +enum fcp_residue{ + FCP_NO_RESIDUE = 0, /* no residue */ + FCP_RESID_OVER = 1, /* more data left that was not sent */ + FCP_RESID_UNDER = 2, /* less data than requested */ +}; + +enum { + FCP_RSPINFO_GOOD = 0, + FCP_RSPINFO_DATALEN_MISMATCH = 1, + FCP_RSPINFO_CMND_INVALID = 2, + FCP_RSPINFO_ROLEN_MISMATCH = 3, + FCP_RSPINFO_TM_NOT_SUPP = 4, + FCP_RSPINFO_TM_FAILED = 5, +}; + +struct fcp_rspinfo_s{ + u32 res0:24; + u32 rsp_code:8; /* response code (as above) */ + u32 res1; +}; + +struct fcp_resp_s{ + u32 reserved[2]; /* 2 words reserved */ + u16 reserved2; +#ifdef __BIGENDIAN + u8 reserved3:3; + u8 fcp_conf_req:1; /* FCP_CONF is requested */ + u8 resid_flags:2; /* underflow/overflow */ + u8 sns_len_valid:1;/* sense len is valid */ + u8 rsp_len_valid:1;/* response len is valid */ +#else + u8 rsp_len_valid:1;/* response 
len is valid */ + u8 sns_len_valid:1;/* sense len is valid */ + u8 resid_flags:2; /* underflow/overflow */ + u8 fcp_conf_req:1; /* FCP_CONF is requested */ + u8 reserved3:3; +#endif + u8 scsi_status; /* one byte SCSI status */ + u32 residue; /* residual data bytes */ + u32 sns_len; /* length of sense info */ + u32 rsp_len; /* length of response info */ +}; + +#define fcp_snslen(__fcprsp) ((__fcprsp)->sns_len_valid ? \ + (__fcprsp)->sns_len : 0) +#define fcp_rsplen(__fcprsp) ((__fcprsp)->rsp_len_valid ? \ + (__fcprsp)->rsp_len : 0) +#define fcp_rspinfo(__fcprsp) ((struct fcp_rspinfo_s *)((__fcprsp) + 1)) +#define fcp_snsinfo(__fcprsp) (((u8 *)fcp_rspinfo(__fcprsp)) + \ + fcp_rsplen(__fcprsp)) + +struct fcp_cmnd_fr_s{ + struct fchs_s fchs; + struct fcp_cmnd_s fcp; +}; + +#pragma pack() + +#endif diff --git a/drivers/scsi/bfa/include/protocol/fdmi.h b/drivers/scsi/bfa/include/protocol/fdmi.h new file mode 100644 index 000000000000..6c05c268c71b --- /dev/null +++ b/drivers/scsi/bfa/include/protocol/fdmi.h @@ -0,0 +1,163 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __FDMI_H__ +#define __FDMI_H__ + +#include +#include +#include + +#pragma pack(1) + +/* + * FDMI Command Codes + */ +#define FDMI_GRHL 0x0100 +#define FDMI_GHAT 0x0101 +#define FDMI_GRPL 0x0102 +#define FDMI_GPAT 0x0110 +#define FDMI_RHBA 0x0200 +#define FDMI_RHAT 0x0201 +#define FDMI_RPRT 0x0210 +#define FDMI_RPA 0x0211 +#define FDMI_DHBA 0x0300 +#define FDMI_DPRT 0x0310 + +/* + * FDMI reason codes + */ +#define FDMI_NO_ADDITIONAL_EXP 0x00 +#define FDMI_HBA_ALREADY_REG 0x10 +#define FDMI_HBA_ATTRIB_NOT_REG 0x11 +#define FDMI_HBA_ATTRIB_MULTIPLE 0x12 +#define FDMI_HBA_ATTRIB_LENGTH_INVALID 0x13 +#define FDMI_HBA_ATTRIB_NOT_PRESENT 0x14 +#define FDMI_PORT_ORIG_NOT_IN_LIST 0x15 +#define FDMI_PORT_HBA_NOT_IN_LIST 0x16 +#define FDMI_PORT_ATTRIB_NOT_REG 0x20 +#define FDMI_PORT_NOT_REG 0x21 +#define FDMI_PORT_ATTRIB_MULTIPLE 0x22 +#define FDMI_PORT_ATTRIB_LENGTH_INVALID 0x23 +#define FDMI_PORT_ALREADY_REGISTEREED 0x24 + +/* + * FDMI Transmission Speed Mask values + */ +#define FDMI_TRANS_SPEED_1G 0x00000001 +#define FDMI_TRANS_SPEED_2G 0x00000002 +#define FDMI_TRANS_SPEED_10G 0x00000004 +#define FDMI_TRANS_SPEED_4G 0x00000008 +#define FDMI_TRANS_SPEED_8G 0x00000010 +#define FDMI_TRANS_SPEED_16G 0x00000020 +#define FDMI_TRANS_SPEED_UNKNOWN 0x00008000 + +/* + * FDMI HBA attribute types + */ +enum fdmi_hba_attribute_type { + FDMI_HBA_ATTRIB_NODENAME = 1, /* 0x0001 */ + FDMI_HBA_ATTRIB_MANUFACTURER, /* 0x0002 */ + FDMI_HBA_ATTRIB_SERIALNUM, /* 0x0003 */ + FDMI_HBA_ATTRIB_MODEL, /* 0x0004 */ + FDMI_HBA_ATTRIB_MODEL_DESC, /* 0x0005 */ + FDMI_HBA_ATTRIB_HW_VERSION, /* 0x0006 */ + FDMI_HBA_ATTRIB_DRIVER_VERSION, /* 0x0007 */ + FDMI_HBA_ATTRIB_ROM_VERSION, /* 0x0008 */ + FDMI_HBA_ATTRIB_FW_VERSION, /* 0x0009 */ + FDMI_HBA_ATTRIB_OS_NAME, /* 0x000A */ + FDMI_HBA_ATTRIB_MAX_CT, /* 0x000B */ + + FDMI_HBA_ATTRIB_MAX_TYPE +}; + +/* + * FDMI Port attribute types
+ */ +enum fdmi_port_attribute_type { + FDMI_PORT_ATTRIB_FC4_TYPES = 1, /* 0x0001 */ + FDMI_PORT_ATTRIB_SUPP_SPEED, /* 0x0002 */ + FDMI_PORT_ATTRIB_PORT_SPEED, /* 0x0003 */ + FDMI_PORT_ATTRIB_FRAME_SIZE, /* 0x0004 */ + FDMI_PORT_ATTRIB_DEV_NAME, /* 0x0005 */ + FDMI_PORT_ATTRIB_HOST_NAME, /* 0x0006 */ + + FDMI_PORT_ATTR_MAX_TYPE +}; + +/* + * FDMI attribute + */ +struct fdmi_attr_s { + u16 type; + u16 len; + u8 value[1]; +}; + +/* + * HBA Attribute Block + */ +struct fdmi_hba_attr_s { + u32 attr_count; /* # of attributes */ + struct fdmi_attr_s hba_attr; /* n attributes */ +}; + +/* + * Registered Port List + */ +struct fdmi_port_list_s { + u32 num_ports; /* number Of Port Entries */ + wwn_t port_entry; /* one or more */ +}; + +/* + * Port Attribute Block + */ +struct fdmi_port_attr_s { + u32 attr_count; /* # of attributes */ + struct fdmi_attr_s port_attr; /* n attributes */ +}; + +/* + * FDMI Register HBA Attributes + */ +struct fdmi_rhba_s { + wwn_t hba_id; /* HBA Identifier */ + struct fdmi_port_list_s port_list; /* Registered Port List */ + struct fdmi_hba_attr_s hba_attr_blk; /* HBA attribute block */ +}; + +/* + * FDMI Register Port + */ +struct fdmi_rprt_s { + wwn_t hba_id; /* HBA Identifier */ + wwn_t port_name; /* Port wwn */ + struct fdmi_port_attr_s port_attr_blk; /* Port Attr Block */ +}; + +/* + * FDMI Register Port Attributes + */ +struct fdmi_rpa_s { + wwn_t port_name; /* port wwn */ + struct fdmi_port_attr_s port_attr_blk; /* Port Attr Block */ +}; + +#pragma pack() + +#endif diff --git a/drivers/scsi/bfa/include/protocol/pcifw.h b/drivers/scsi/bfa/include/protocol/pcifw.h new file mode 100644 index 000000000000..6830dc3ee58a --- /dev/null +++ b/drivers/scsi/bfa/include/protocol/pcifw.h @@ -0,0 +1,75 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +/** + * pcifw.h PCI FW related headers + */ + +#ifndef __PCIFW_H__ +#define __PCIFW_H__ + +#pragma pack(1) + +struct pnp_hdr_s{ + u32 signature; /* "$PnP" */ + u8 rev; /* Struct revision */ + u8 len; /* Header structure len in multiples + * of 16 bytes */ + u16 off; /* Offset to next header 00 if none */ + u8 rsvd; /* Reserved byte */ + u8 cksum; /* 8-bit checksum for this header */ + u32 pnp_dev_id; /* PnP Device Id */ + u16 mfstr; /* Pointer to manufacturer string */ + u16 prstr; /* Pointer to product string */ + u8 devtype[3]; /* Device Type Code */ + u8 devind; /* Device Indicator */ + u16 bcventr; /* Bootstrap entry vector */ + u16 rsvd2; /* Reserved */ + u16 sriv; /* Static resource information vector */ +}; + +struct pci_3_0_ds_s{ + u32 sig; /* Signature "PCIR" */ + u16 vendid; /* Vendor ID */ + u16 devid; /* Device ID */ + u16 devlistoff; /* Device List Offset */ + u16 len; /* PCI Data Structure Length */ + u8 rev; /* PCI Data Structure Revision */ + u8 clcode[3]; /* Class Code */ + u16 imglen; /* Code image length in multiples of + * 512 bytes */ + u16 coderev; /* Revision level of code/data */ + u8 codetype; /* Code type 0x00 - BIOS */ + u8 indr; /* Last image indicator */ + u16 mrtimglen; /* Max Run Time Image Length */ + u16 cuoff; /* Config Utility Code Header Offset */ + u16 dmtfclp; /* DMTF CLP entry point offset */ +}; + +struct pci_optrom_hdr_s{ + u16 sig; /* Signature 0x55AA */ + u8 len; /* Option ROM length in units of 512 bytes */ + u8 inivec[3]; /* Initialization vector */ + u8 rsvd[16]; /* Reserved field */ + u16 verptr; /* Pointer to version string - private */ + u16 pcids; /* Pointer to PCI data structure */ + u16 pnphdr; /* Pointer to PnP expansion header */ +}; + +#pragma pack() + +#endif diff --git a/drivers/scsi/bfa/include/protocol/scsi.h b/drivers/scsi/bfa/include/protocol/scsi.h new file mode 100644 index 000000000000..b220e6b4f6e1 --- /dev/null +++ b/drivers/scsi/bfa/include/protocol/scsi.h @@ -0,0 +1,1648 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#ifndef __SCSI_H__ +#define __SCSI_H__ + +#include + +#pragma pack(1) + +/* + * generic SCSI cdb definition + */ +#define SCSI_MAX_CDBLEN 16 +struct scsi_cdb_s{ + u8 scsi_cdb[SCSI_MAX_CDBLEN]; +}; + +/* + * scsi lun serial number definition + */ +#define SCSI_LUN_SN_LEN 32 +struct scsi_lun_sn_s{ + u8 lun_sn[SCSI_LUN_SN_LEN]; +}; + +/* + * SCSI Direct Access Commands + */ +enum { + SCSI_OP_TEST_UNIT_READY = 0x00, + SCSI_OP_REQUEST_SENSE = 0x03, + SCSI_OP_FORMAT_UNIT = 0x04, + SCSI_OP_READ6 = 0x08, + SCSI_OP_WRITE6 = 0x0A, + SCSI_OP_WRITE_FILEMARKS = 0x10, + SCSI_OP_INQUIRY = 0x12, + SCSI_OP_MODE_SELECT6 = 0x15, + SCSI_OP_RESERVE6 = 0x16, + SCSI_OP_RELEASE6 = 0x17, + SCSI_OP_MODE_SENSE6 = 0x1A, + SCSI_OP_START_STOP_UNIT = 0x1B, + SCSI_OP_SEND_DIAGNOSTIC = 0x1D, + SCSI_OP_READ_CAPACITY = 0x25, + SCSI_OP_READ10 = 0x28, + SCSI_OP_WRITE10 = 0x2A, + SCSI_OP_VERIFY10 = 0x2F, + SCSI_OP_READ_DEFECT_DATA = 0x37, + SCSI_OP_LOG_SELECT = 0x4C, + SCSI_OP_LOG_SENSE = 0x4D, + SCSI_OP_MODE_SELECT10 = 0x55, + SCSI_OP_RESERVE10 = 0x56, + SCSI_OP_RELEASE10 = 0x57, + SCSI_OP_MODE_SENSE10 = 0x5A, + SCSI_OP_PER_RESERVE_IN = 0x5E, + SCSI_OP_PER_RESERVE_OUT = 0x5F, + SCSI_OP_READ16 = 0x88, + SCSI_OP_WRITE16 = 0x8A, + SCSI_OP_VERIFY16 = 0x8F, + SCSI_OP_READ_CAPACITY16 = 0x9E, + SCSI_OP_REPORT_LUNS = 0xA0, + SCSI_OP_READ12 = 0xA8, + SCSI_OP_WRITE12 = 0xAA, + SCSI_OP_UNDEF = 0xFF, +}; + +/* + * SCSI START_STOP_UNIT command + */ +struct scsi_start_stop_unit_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 lun:3; + u8 reserved1:4; + u8 immed:1; +#else + u8 immed:1; + u8 reserved1:4; + u8 lun:3; +#endif + u8 reserved2; + u8 reserved3; +#ifdef __BIGENDIAN + u8 power_conditions:4; + u8 reserved4:2; + u8 loEj:1; + u8 start:1; +#else + u8 start:1; + u8 loEj:1; + u8 reserved4:2; + u8 power_conditions:4; +#endif + u8 control; +}; + +/* + * SCSI SEND_DIAGNOSTIC command + */ +struct scsi_send_diagnostic_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 self_test_code:3; + u8 pf:1; + u8 reserved1:1; + u8 self_test:1; + u8 dev_offl:1; + u8 unit_offl:1; +#else + u8 unit_offl:1; + u8 dev_offl:1; + u8 self_test:1; + u8 reserved1:1; + u8 pf:1; + u8 self_test_code:3; +#endif + u8 reserved2; + + u8 param_list_length[2]; /* MSB first */ + u8 control; + +}; + +/* + * SCSI READ10/WRITE10 commands + */ +struct scsi_rw10_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 lun:3; + u8 dpo:1; /* Disable Page Out */ + u8 fua:1; /* Force Unit Access */ + u8 reserved1:2; + u8 rel_adr:1; /* relative address */ +#else + u8 rel_adr:1; + u8 reserved1:2; + u8 fua:1; + u8 dpo:1; + u8 lun:3; +#endif + u8 lba0; /* logical block address - MSB */ + u8 lba1; + u8 lba2; + u8 lba3; /* LSB */ + u8 reserved3; + u8 xfer_length0; /* transfer length in blocks - MSB */ + u8 xfer_length1; /* LSB */ + u8 control; +}; + +#define SCSI_CDB10_GET_LBA(cdb) \ + (((cdb)->lba0 << 24) | ((cdb)->lba1 << 16) | \ + ((cdb)->lba2 << 8) | (cdb)->lba3) + +#define SCSI_CDB10_SET_LBA(cdb, lba) { \ + (cdb)->lba0 = lba >> 24; \ + (cdb)->lba1 = (lba >> 16) & 0xFF; \ + (cdb)->lba2 = (lba >> 8) & 0xFF; \ + (cdb)->lba3 = lba & 0xFF; \ +} + +#define SCSI_CDB10_GET_TL(cdb) \ + ((cdb)->xfer_length0 << 8 | (cdb)->xfer_length1) +#define SCSI_CDB10_SET_TL(cdb, tl) { \ + (cdb)->xfer_length0 = tl >> 8; \ + (cdb)->xfer_length1 = tl & 0xFF; \ +} + +/* + * SCSI READ6/WRITE6 commands + */ +struct scsi_rw6_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 lun:3; + u8 lba0:5; /* MSb */ +#else + u8 lba0:5; /* MSb */ + u8 lun:3; +#endif + u8 lba1; + u8 lba2; /* LSB */ + u8 xfer_length; + u8 control; +};
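The SCSI_CDB10_SET_LBA/SET_TL helpers above fill the CDB byte by byte, MSB first, so the on-wire big-endian layout comes out right regardless of host endianness. A standalone sketch building a READ(10) CDB; this trims scsi_rw10_s by collapsing its bitfield byte into a plain flags byte, and duplicates the macros so it compiles on its own:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	typedef uint8_t u8;

	/* trimmed copy of struct scsi_rw10_s: bitfields collapsed to one byte */
	struct scsi_rw10 {
		u8 opcode;
		u8 flags;
		u8 lba0, lba1, lba2, lba3;	/* LBA, MSB first */
		u8 reserved3;
		u8 xfer_length0, xfer_length1;	/* blocks, MSB first */
		u8 control;
	};

	#define SCSI_CDB10_SET_LBA(cdb, lba) {		\
		(cdb)->lba0 = (lba) >> 24;		\
		(cdb)->lba1 = ((lba) >> 16) & 0xFF;	\
		(cdb)->lba2 = ((lba) >> 8) & 0xFF;	\
		(cdb)->lba3 = (lba) & 0xFF;		\
	}
	#define SCSI_CDB10_SET_TL(cdb, tl) {		\
		(cdb)->xfer_length0 = (tl) >> 8;	\
		(cdb)->xfer_length1 = (tl) & 0xFF;	\
	}

	int main(void)
	{
		struct scsi_rw10 cdb;

		memset(&cdb, 0, sizeof(cdb));
		cdb.opcode = 0x28;			/* SCSI_OP_READ10 */
		SCSI_CDB10_SET_LBA(&cdb, 0x12345678);	/* starting LBA */
		SCSI_CDB10_SET_TL(&cdb, 8);		/* 8 blocks */
		printf("lba bytes %02x %02x %02x %02x\n",
		       cdb.lba0, cdb.lba1, cdb.lba2, cdb.lba3);
		return 0;
	}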
+ +#define SCSI_TAPE_CDB6_GET_TL(cdb) \ + (((cdb)->tl0 << 16) | ((cdb)->tl1 << 8) | (cdb)->tl2) + +#define SCSI_TAPE_CDB6_SET_TL(cdb, tl) { \ + (cdb)->tl0 = tl >> 16; \ + (cdb)->tl1 = (tl >> 8) & 0xFF; \ + (cdb)->tl2 = tl & 0xFF; \ +} + +/* + * SCSI sequential (TAPE) write command + */ +struct scsi_tape_wr_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 rsvd:7; + u8 fixed:1; /* MSb */ +#else + u8 fixed:1; /* MSb */ + u8 rsvd:7; +#endif + u8 tl0; /* Msb */ + u8 tl1; + u8 tl2; /* Lsb */ + + u8 control; +}; + +#define SCSI_CDB6_GET_LBA(cdb) \ + (((cdb)->lba0 << 16) | ((cdb)->lba1 << 8) | (cdb)->lba2) + +#define SCSI_CDB6_SET_LBA(cdb, lba) { \ + (cdb)->lba0 = lba >> 16; \ + (cdb)->lba1 = (lba >> 8) & 0xFF; \ + (cdb)->lba2 = lba & 0xFF; \ +} + +#define SCSI_CDB6_GET_TL(cdb) ((cdb)->xfer_length) +#define SCSI_CDB6_SET_TL(cdb, tl) { \ + (cdb)->xfer_length = tl; \ +} + +/* + * SCSI sense data format + */ +struct scsi_sense_s{ +#ifdef __BIGENDIAN + u8 valid:1; + u8 rsp_code:7; +#else + u8 rsp_code:7; + u8 valid:1; +#endif + u8 seg_num; +#ifdef __BIGENDIAN + u8 file_mark:1; + u8 eom:1; /* end of media */ + u8 ili:1; /* incorrect length indicator */ + u8 reserved:1; + u8 sense_key:4; +#else + u8 sense_key:4; + u8 reserved:1; + u8 ili:1; /* incorrect length indicator */ + u8 eom:1; /* end of media */ + u8 file_mark:1; +#endif + u8 information[4]; /* device-type or command specific info + */ + u8 add_sense_length; + /* additional sense length */ + u8 command_info[4];/* command specific information + */ + u8 asc; /* additional sense code */ + u8 ascq; /* additional sense code qualifier */ + u8 fru_code; /* field replaceable unit code */ +#ifdef __BIGENDIAN + u8 sksv:1; /* sense key specific valid */ + u8 c_d:1; /* command/data bit */ + u8 res1:2; + u8 bpv:1; /* bit pointer valid */ + u8 bpointer:3; /* bit pointer */ +#else + u8 bpointer:3; /* bit pointer */ + u8 bpv:1; /* bit pointer valid */ + u8 res1:2; + u8 c_d:1; /* command/data bit */ + u8 sksv:1; /* sense key specific valid */ +#endif + u8 fpointer[2]; /* field pointer */ +}; + +#define SCSI_SENSE_CUR_ERR 0x70 +#define SCSI_SENSE_DEF_ERR 0x71 + +/* + * SCSI sense key values + */ +#define SCSI_SK_NO_SENSE 0x0 +#define SCSI_SK_REC_ERR 0x1 /* recovered error */ +#define SCSI_SK_NOT_READY 0x2 +#define SCSI_SK_MED_ERR 0x3 /* medium error */ +#define SCSI_SK_HW_ERR 0x4 /* hardware error */ +#define SCSI_SK_ILLEGAL_REQ 0x5 +#define SCSI_SK_UNIT_ATT 0x6 /* unit attention */ +#define SCSI_SK_DATA_PROTECT 0x7 +#define SCSI_SK_BLANK_CHECK 0x8 +#define SCSI_SK_VENDOR_SPEC 0x9 +#define SCSI_SK_COPY_ABORTED 0xA +#define SCSI_SK_ABORTED_CMND 0xB +#define SCSI_SK_VOL_OVERFLOW 0xD +#define SCSI_SK_MISCOMPARE 0xE + +/* + * SCSI additional sense codes + */ +#define SCSI_ASC_NO_ADD_SENSE 0x00 +#define SCSI_ASC_LUN_NOT_READY 0x04 +#define SCSI_ASC_LUN_COMMUNICATION 0x08 +#define SCSI_ASC_WRITE_ERROR 0x0C +#define SCSI_ASC_INVALID_CMND_CODE 0x20 +#define SCSI_ASC_BAD_LBA 0x21 +#define SCSI_ASC_INVALID_FIELD_IN_CDB 0x24 +#define SCSI_ASC_LUN_NOT_SUPPORTED 0x25 +#define SCSI_ASC_LUN_WRITE_PROTECT 0x27 +#define SCSI_ASC_POWERON_BDR 0x29 /* power on reset, bus reset, + * bus device reset + */ +#define SCSI_ASC_PARAMS_CHANGED 0x2A +#define SCSI_ASC_CMND_CLEARED_BY_A_I 0x2F +#define SCSI_ASC_SAVING_PARAM_NOTSUPP 0x39 +#define SCSI_ASC_TOCC 0x3F /* target operating condtions + * changed + */ +#define SCSI_ASC_PARITY_ERROR 0x47 +#define SCSI_ASC_CMND_PHASE_ERROR 0x4A +#define SCSI_ASC_DATA_PHASE_ERROR 0x4B +#define SCSI_ASC_VENDOR_SPEC 0x7F
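Fixed-format sense data, as laid out by scsi_sense_s above, carries the sense key in the low nibble of byte 2 and the ASC/ASCQ pair at bytes 12 and 13. A standalone sketch of interpreting a raw sense buffer (key/ASC values duplicated from the defines above; the sample buffer is hypothetical):

	#include <stdint.h>
	#include <stdio.h>

	typedef uint8_t u8;

	/* values duplicated from the defines above */
	#define SCSI_SK_NOT_READY	0x2
	#define SCSI_ASC_LUN_NOT_READY	0x04

	/* minimal fixed-format sense parser, matching struct scsi_sense_s:
	 * byte 0 = valid + response code, byte 2 low nibble = sense key,
	 * bytes 12/13 = ASC/ASCQ */
	static void print_sense(const u8 *sense, int len)
	{
		if (len < 14)
			return;
		u8 rsp_code = sense[0] & 0x7F;	/* 0x70/0x71: fixed format */
		u8 key  = sense[2] & 0x0F;
		u8 asc  = sense[12];
		u8 ascq = sense[13];

		printf("rsp %02x key %x asc/ascq %02x/%02x\n",
		       rsp_code, key, asc, ascq);
		if (key == SCSI_SK_NOT_READY && asc == SCSI_ASC_LUN_NOT_READY)
			printf("LUN not ready (becoming ready if ascq is 0x01)\n");
	}

	int main(void)
	{
		/* hypothetical sense: NOT READY, LUN not ready, becoming ready */
		u8 sense[18] = { 0x70, 0, 0x02, 0, 0, 0, 0, 10,
				 0, 0, 0, 0, 0x04, 0x01 };

		print_sense(sense, (int)sizeof(sense));
		return 0;
	}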
+ +/* + * SCSI additional sense code qualifiers + */ +#define SCSI_ASCQ_CAUSE_NOT_REPORT 0x00 +#define SCSI_ASCQ_BECOMING_READY 0x01 +#define SCSI_ASCQ_INIT_CMD_REQ 0x02 +#define SCSI_ASCQ_FORMAT_IN_PROGRESS 0x04 +#define SCSI_ASCQ_OPERATION_IN_PROGRESS 0x07 +#define SCSI_ASCQ_SELF_TEST_IN_PROGRESS 0x09 +#define SCSI_ASCQ_WR_UNEXP_UNSOL_DATA 0x0C +#define SCSI_ASCQ_WR_NOTENG_UNSOL_DATA 0x0D + +#define SCSI_ASCQ_LBA_OUT_OF_RANGE 0x00 +#define SCSI_ASCQ_INVALID_ELEMENT_ADDR 0x01 + +#define SCSI_ASCQ_LUN_WRITE_PROTECTED 0x00 +#define SCSI_ASCQ_LUN_HW_WRITE_PROTECTED 0x01 +#define SCSI_ASCQ_LUN_SW_WRITE_PROTECTED 0x02 + +#define SCSI_ASCQ_POR 0x01 /* power on reset */ +#define SCSI_ASCQ_SBR 0x02 /* scsi bus reset */ +#define SCSI_ASCQ_BDR 0x03 /* bus device reset */ +#define SCSI_ASCQ_DIR 0x04 /* device internal reset */ + +#define SCSI_ASCQ_MODE_PARAMS_CHANGED 0x01 +#define SCSI_ASCQ_LOG_PARAMS_CHANGED 0x02 +#define SCSI_ASCQ_RESERVATIONS_PREEMPTED 0x03 +#define SCSI_ASCQ_RESERVATIONS_RELEASED 0x04 +#define SCSI_ASCQ_REGISTRATIONS_PREEMPTED 0x05 + +#define SCSI_ASCQ_MICROCODE_CHANGED 0x01 +#define SCSI_ASCQ_CHANGED_OPER_COND 0x02 +#define SCSI_ASCQ_INQ_CHANGED 0x03 /* inquiry data changed */ +#define SCSI_ASCQ_DI_CHANGED 0x05 /* device id changed */ +#define SCSI_ASCQ_RL_DATA_CHANGED 0x0E /* report luns data changed */ + +#define SCSI_ASCQ_DP_CRC_ERR 0x01 /* data phase crc error */ +#define SCSI_ASCQ_DP_SCSI_PARITY_ERR 0x02 /* data phase scsi parity error + */ +#define SCSI_ASCQ_IU_CRC_ERR 0x03 /* information unit crc error */ +#define SCSI_ASCQ_PROTO_SERV_CRC_ERR 0x05 + +#define SCSI_ASCQ_LUN_TIME_OUT 0x01 + +/* ------------------------------------------------------------ + * SCSI INQUIRY + * ------------------------------------------------------------ */ + +struct scsi_inquiry_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 lun:3; + u8 reserved1:3; + u8 cmd_dt:1; + u8 evpd:1; +#else + u8 evpd:1; + u8 cmd_dt:1; + u8 reserved1:3; + u8 lun:3; +#endif + u8 page_code; + u8 reserved2; + u8 alloc_length; + u8 control; +}; + +struct scsi_inquiry_vendor_s{ + u8 vendor_id[8]; +}; + +struct scsi_inquiry_prodid_s{ + u8 product_id[16]; +}; + +struct scsi_inquiry_prodrev_s{ + u8 product_rev[4]; +};
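A standalone sketch of filling an INQUIRY CDB as scsi_inquiry_s lays it out, requesting the unit serial number VPD page (page 0x80, SCSI_INQ_PAGE_USN_PAGE below); the flat struct here collapses the header's bitfields into whole bytes and is only illustrative:

	#include <stdint.h>
	#include <string.h>

	typedef uint8_t u8;

	/* byte-wise INQUIRY CDB; mirrors struct scsi_inquiry_s without bitfields */
	struct inquiry_cdb {
		u8 opcode;	/* 0x12, SCSI_OP_INQUIRY */
		u8 evpd;	/* bit 0: vital product data page requested */
		u8 page_code;
		u8 reserved;
		u8 alloc_length;
		u8 control;
	};

	/* request the unit serial number VPD page (0x80) */
	static void build_usn_inquiry(struct inquiry_cdb *cdb, u8 alloc_len)
	{
		memset(cdb, 0, sizeof(*cdb));
		cdb->opcode = 0x12;
		cdb->evpd = 1;
		cdb->page_code = 0x80;
		cdb->alloc_length = alloc_len;
	}

	int main(void)
	{
		struct inquiry_cdb cdb;

		build_usn_inquiry(&cdb, 0xFF);
		return 0;
	}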
+ +struct scsi_inquiry_data_s{ +#ifdef __BIGENDIAN + u8 peripheral_qual:3; /* peripheral qualifier */ + u8 device_type:5; /* peripheral device type */ + + u8 rmb:1; /* removable medium bit */ + u8 device_type_mod:7; /* device type modifier */ + + u8 version; + + u8 aenc:1; /* async event notification capability + */ + u8 trm_iop:1; /* terminate I/O process */ + u8 norm_aca:1; /* normal ACA supported */ + u8 hi_support:1; /* SCSI-3: supports REPORT LUNS */ + u8 rsp_data_format:4; + + u8 additional_len; + u8 sccs:1; + u8 reserved1:7; + + u8 reserved2:1; + u8 enc_serv:1; /* enclosure service component */ + u8 reserved3:1; + u8 multi_port:1; /* multi-port device */ + u8 m_chngr:1; /* device in medium transport element */ + u8 ack_req_q:1; /* SIP specific bit */ + u8 addr32:1; /* SIP specific bit */ + u8 addr16:1; /* SIP specific bit */ + + u8 rel_adr:1; /* relative address */ + u8 w_bus32:1; + u8 w_bus16:1; + u8 synchronous:1; + u8 linked_commands:1; + u8 trans_dis:1; + u8 cmd_queue:1; /* command queueing supported */ + u8 soft_reset:1; /* soft reset alternative (VS) */ +#else + u8 device_type:5; /* peripheral device type */ + u8 peripheral_qual:3; + /* peripheral qualifier */ + + u8 device_type_mod:7; + /* device type modifier */ + u8 rmb:1; /* removable medium bit */ + + u8 version; + + u8 rsp_data_format:4; + u8 hi_support:1; /* SCSI-3: supports REPORT LUNS */ + u8 norm_aca:1; /* normal ACA supported */ + u8 trm_iop:1; /* terminate I/O process */ + u8 aenc:1; /* async event notification capability + */ + + u8 additional_len; + u8 reserved1:7; + u8 sccs:1; + + u8 addr16:1; /* SIP specific bit */ + u8 addr32:1; /* SIP specific bit */ + u8 ack_req_q:1; /* SIP specific bit */ + u8 m_chngr:1; /* device in medium transport element */ + u8 multi_port:1; /* multi-port device */ + u8 reserved3:1; /* TBD - Vendor Specific */ + u8 enc_serv:1; /* enclosure service component */ + u8 reserved2:1; + + u8 soft_reset:1; /* soft reset alternative (VS) */ + u8 cmd_queue:1; /* command queueing supported */ + u8 trans_dis:1; + u8 linked_commands:1; + u8 synchronous:1; + u8 w_bus16:1; + u8 w_bus32:1; + u8 rel_adr:1; /* relative address */ +#endif + struct scsi_inquiry_vendor_s vendor_id; + struct scsi_inquiry_prodid_s product_id; + struct scsi_inquiry_prodrev_s product_rev; + u8 vendor_specific[20]; + u8 reserved4[40]; +}; + +/* + * inquiry.peripheral_qual field values + */ +#define SCSI_DEVQUAL_DEFAULT 0 +#define SCSI_DEVQUAL_NOT_CONNECTED 1 +#define SCSI_DEVQUAL_NOT_SUPPORTED 3 + +/* + * inquiry.device_type field values + */ +#define SCSI_DEVICE_DIRECT_ACCESS 0x00 +#define SCSI_DEVICE_SEQ_ACCESS 0x01 +#define SCSI_DEVICE_ARRAY_CONTROLLER 0x0C +#define SCSI_DEVICE_UNKNOWN 0x1F + +/* + * inquiry.version + */ +#define SCSI_VERSION_ANSI_X3131 2 /* ANSI X3.131 SCSI-2 */ +#define SCSI_VERSION_SPC 3 /* SPC (SCSI-3), ANSI X3.301:1997 */ +#define SCSI_VERSION_SPC_2 4 /* SPC-2 */ + +/* + * response data format + */ +#define SCSI_RSP_DATA_FORMAT 2 /* SCSI-2 & SPC */ + +/* + * SCSI inquiry page codes + */ +#define SCSI_INQ_PAGE_VPD_PAGES 0x00 /* supported vpd pages */ +#define SCSI_INQ_PAGE_USN_PAGE 0x80 /* unit serial number page */ +#define SCSI_INQ_PAGE_DEV_IDENT 0x83 /* device identification page + */ +#define SCSI_INQ_PAGES_MAX 3 + +/* + * supported vital product data pages + */ +struct scsi_inq_page_vpd_pages_s{ +#ifdef __BIGENDIAN + u8 peripheral_qual:3; + u8 device_type:5; +#else + u8 device_type:5; + u8 peripheral_qual:3; +#endif + u8 page_code; + u8 reserved; + u8 page_length; + u8 pages[SCSI_INQ_PAGES_MAX]; +}; + +/* + * Unit serial number page + */ +#define SCSI_INQ_USN_LEN 32 + +struct scsi_inq_usn_s{ + char usn[SCSI_INQ_USN_LEN]; +}; + +struct scsi_inq_page_usn_s{ +#ifdef __BIGENDIAN + u8 peripheral_qual:3; + u8 device_type:5; +#else + u8 device_type:5; + u8 peripheral_qual:3; +#endif + u8 page_code; + u8 reserved1; + u8 page_length; + struct scsi_inq_usn_s usn; +}; + +enum { + SCSI_INQ_DIP_CODE_BINARY = 1, /* identifier has binary value */ + SCSI_INQ_DIP_CODE_ASCII = 2, /* identifier has ascii value */ +}; + +enum { + SCSI_INQ_DIP_ASSOC_LUN = 0, /* id is associated with device */ + SCSI_INQ_DIP_ASSOC_PORT = 1, /* id is associated with port that + * received the request + */ +}; + +enum { + SCSI_INQ_ID_TYPE_VENDOR = 1, + SCSI_INQ_ID_TYPE_IEEE = 2, + SCSI_INQ_ID_TYPE_FC_FS = 3, + SCSI_INQ_ID_TYPE_OTHER = 4, +}; + +struct scsi_inq_dip_desc_s{ +#ifdef __BIGENDIAN + u8 res0:4; + u8 code_set:4; + u8 res1:2; + u8 association:2; + u8 id_type:4; +#else + u8 code_set:4; + u8 res0:4; + u8 id_type:4; + u8 association:2; + u8 res1:2; +#endif + u8 res2; + u8 id_len; + struct scsi_lun_sn_s id; +}; + +/* + * Device identification page + */ +struct scsi_inq_page_dev_ident_s{ +#ifdef __BIGENDIAN + u8 peripheral_qual:3; + u8 device_type:5; +#else + u8 device_type:5; + u8 peripheral_qual:3; +#endif + u8 page_code; + u8 reserved1; + u8 page_length; + struct scsi_inq_dip_desc_s desc; +};
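Byte 0 of the INQUIRY response packs the peripheral qualifier (top three bits) and device type (low five bits), matching the first two fields of scsi_inquiry_data_s above. A standalone sketch of decoding it from a raw byte, with values duplicated from the defines above:

	#include <stdint.h>
	#include <stdio.h>

	typedef uint8_t u8;

	/* values duplicated from the defines above */
	#define SCSI_DEVQUAL_DEFAULT		0
	#define SCSI_DEVICE_DIRECT_ACCESS	0x00

	int main(void)
	{
		u8 byte0 = 0x00;	/* hypothetical INQUIRY response byte 0 */

		/* qualifier in bits 7..5, device type in bits 4..0 */
		u8 qual = (byte0 >> 5) & 0x07;
		u8 type = byte0 & 0x1F;

		if (qual == SCSI_DEVQUAL_DEFAULT &&
		    type == SCSI_DEVICE_DIRECT_ACCESS)
			printf("connected direct-access device (disk)\n");
		return 0;
	}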
+ +/* ------------------------------------------------------------ + * READ CAPACITY + * ------------------------------------------------------------ + */ + +struct scsi_read_capacity_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 lun:3; + u8 reserved1:4; + u8 rel_adr:1; +#else + u8 rel_adr:1; + u8 reserved1:4; + u8 lun:3; +#endif + u8 lba0; /* MSB */ + u8 lba1; + u8 lba2; + u8 lba3; /* LSB */ + u8 reserved2; + u8 reserved3; +#ifdef __BIGENDIAN + u8 reserved4:7; + u8 pmi:1; /* partial medium indicator */ +#else + u8 pmi:1; /* partial medium indicator */ + u8 reserved4:7; +#endif + u8 control; +}; + +struct scsi_read_capacity_data_s{ + u32 max_lba; /* maximum LBA available */ + u32 block_length; /* in bytes */ +}; + +struct scsi_read_capacity16_data_s{ + u64 lba; /* maximum LBA available */ + u32 block_length; /* in bytes */ +#ifdef __BIGENDIAN + u8 reserved1:4, + p_type:3, + prot_en:1; + u8 reserved2:4, + lb_pbe:4; /* logical blocks per physical block + * exponent */ + u16 reserved3:2, + lba_align:14; /* lowest aligned logical block + * address */ +#else + u16 lba_align:14, /* lowest aligned logical block + * address */ + reserved3:2; + u8 lb_pbe:4, /* logical blocks per physical block + * exponent */ + reserved2:4; + u8 prot_en:1, + p_type:3, + reserved1:4; +#endif + u64 reserved4; + u64 reserved5; +}; + +/* ------------------------------------------------------------ + * REPORT LUNS command + * ------------------------------------------------------------ + */ + +struct scsi_report_luns_s{ + u8 opcode; /* A0h - REPORT LUNS opcode */ + u8 reserved1[5]; + u8 alloc_length[4];/* allocation length MSB first */ + u8 reserved2; + u8 control; +}; + +#define SCSI_REPORT_LUN_ALLOC_LENGTH(rl) \ + ((rl->alloc_length[0] << 24) | (rl->alloc_length[1] << 16) | \ + (rl->alloc_length[2] << 8) | (rl->alloc_length[3])) + +#define SCSI_REPORT_LUNS_SET_ALLOCLEN(rl, alloc_len) { \ + (rl)->alloc_length[0] = (alloc_len) >> 24; \ + (rl)->alloc_length[1] = ((alloc_len) >> 16) & 0xFF; \ + (rl)->alloc_length[2] = ((alloc_len) >> 8) & 0xFF; \ + (rl)->alloc_length[3] = (alloc_len) & 0xFF; \ +} + +struct scsi_report_luns_data_s{ + u32 lun_list_length; /* length in bytes of the LUN list */ + u32 reserved; + lun_t lun[1]; /* first LUN in lun list */ +};
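In the REPORT LUNS response above, lun_list_length is the byte length of the LUN list that follows, and each lun_t entry is 8 bytes. A standalone sketch of turning the big-endian response header into a LUN count (the sample buffer is hypothetical):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <arpa/inet.h>	/* ntohl: the response is big-endian on the wire */

	typedef uint32_t u32;

	/* first word of the response = LUN list length in bytes;
	 * each LUN entry that follows is 8 bytes */
	static u32 report_luns_count(const unsigned char *resp)
	{
		u32 list_len_be;

		memcpy(&list_len_be, resp, sizeof(list_len_be));
		return ntohl(list_len_be) / 8;
	}

	int main(void)
	{
		/* hypothetical response header: list length 16 -> two LUNs */
		unsigned char resp[8] = { 0, 0, 0, 16, 0, 0, 0, 0 };

		printf("%u LUNs\n", (unsigned)report_luns_count(resp));
		return 0;
	}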
discriptors if set to 1 */ + u8 reserved2:3; + + u8 pc:2; /* page control */ + u8 page_code:6; +#else + u8 reserved2:3; + u8 dbd:1; /* disable block descriptors if set to 1 */ + u8 reserved1:4; + + u8 page_code:6; + u8 pc:2; /* page control */ +#endif + u8 reserved3; + u8 alloc_len; + u8 control; +}; + +/* + * SCSI Mode Sense(10) cdb + */ +struct scsi_mode_sense10_s{ + u8 opcode; +#ifdef __BIGENDIAN + u8 reserved1:3; + u8 LLBAA:1; /* long LBA accepted if set to 1 */ + u8 dbd:1; /* disable block descriptors if set + * to 1 + */ + u8 reserved2:3; + + u8 pc:2; /* page control */ + u8 page_code:6; +#else + u8 reserved2:3; + u8 dbd:1; /* disable block descriptors if set to + * 1 + */ + u8 LLBAA:1; /* long LBA accepted if set to 1 */ + u8 reserved1:3; + + u8 page_code:6; + u8 pc:2; /* page control */ +#endif + u8 reserved3[4]; + u8 alloc_len_msb; + u8 alloc_len_lsb; + u8 control; +}; + +#define SCSI_CDB10_GET_AL(cdb) \ + ((cdb)->alloc_len_msb << 8 | (cdb)->alloc_len_lsb) + +#define SCSI_CDB10_SET_AL(cdb, al) { \ + (cdb)->alloc_len_msb = al >> 8; \ + (cdb)->alloc_len_lsb = al & 0xFF; \ +} + +#define SCSI_CDB6_GET_AL(cdb) ((cdb)->alloc_len) + +#define SCSI_CDB6_SET_AL(cdb, al) { \ + (cdb)->alloc_len = al; \ +} + +/* + * page control field values + */ +#define SCSI_PC_CURRENT_VALUES 0x0 +#define SCSI_PC_CHANGEABLE_VALUES 0x1 +#define SCSI_PC_DEFAULT_VALUES 0x2 +#define SCSI_PC_SAVED_VALUES 0x3 + +/* + * SCSI mode page codes + */ +#define SCSI_MP_VENDOR_SPEC 0x00 +#define SCSI_MP_DISC_RECN 0x02 /* disconnect-reconnect page */ +#define SCSI_MP_FORMAT_DEVICE 0x03 +#define SCSI_MP_RDG 0x04 /* rigid disk geometry page */ +#define SCSI_MP_FDP 0x05 /* flexible disk page */ +#define SCSI_MP_CACHING 0x08 /* caching page */ +#define SCSI_MP_CONTROL 0x0A /* control mode page */ +#define SCSI_MP_MED_TYPES_SUP 0x0B /* medium types supported page */ +#define SCSI_MP_INFO_EXCP_CNTL 0x1C /* informational exception control */ +#define SCSI_MP_ALL 0x3F /* return all pages - mode sense only */ + +/* + * mode parameter header + */ +struct scsi_mode_param_header6_s{ + u8 mode_datalen; + u8 medium_type; + + /* + * device specific parameters expanded for direct access devices + */ +#ifdef __BIGENDIAN + u32 wp:1; /* write protected */ + u32 reserved1:2; + u32 dpofua:1; /* disable page out + force unit access + */ + u32 reserved2:4; +#else + u32 reserved2:4; + u32 dpofua:1; /* disable page out + force unit access + */ + u32 reserved1:2; + u32 wp:1; /* write protected */ +#endif + + u8 block_desclen; +}; + +struct scsi_mode_param_header10_s{ + u32 mode_datalen:16; + u32 medium_type:8; + + /* + * device specific parameters expanded for direct access devices + */ +#ifdef __BIGENDIAN + u32 wp:1; /* write protected */ + u32 reserved1:2; + u32 dpofua:1; /* disable page out + force unit access + */ + u32 reserved2:4; +#else + u32 reserved2:4; + u32 dpofua:1; /* disable page out + force unit access + */ + u32 reserved1:2; + u32 wp:1; /* write protected */ +#endif + +#ifdef __BIGENDIAN + u32 reserved3:7; + u32 longlba:1; +#else + u32 longlba:1; + u32 reserved3:7; +#endif + u32 reserved4:8; + u32 block_desclen:16; +}; + +/* + * mode parameter block descriptor + */ +struct scsi_mode_param_desc_s{ + u32 nblks; + u32 density_code:8; + u32 block_length:24; +}; + +/* + * Disconnect-reconnect mode page format + */ +struct scsi_mp_disc_recn_s{ +#ifdef __BIGENDIAN + u8 ps:1; + u8 reserved1:1; + u8 page_code:6; +#else + u8 page_code:6; + u8 reserved1:1; + u8 ps:1; +#endif + u8 page_len; + u8 buf_full_ratio; + u8 buf_empty_ratio; + + u8 
bil_msb; /* bus inactivity limit -MSB */ + u8 bil_lsb; /* bus inactivity limit -LSB */ + + u8 dtl_msb; /* disconnect time limit - MSB */ + u8 dtl_lsb; /* disconnect time limit - LSB */ + + u8 ctl_msb; /* connect time limit - MSB */ + u8 ctl_lsb; /* connect time limit - LSB */ + + u8 max_burst_len_msb; + u8 max_burst_len_lsb; +#ifdef __BIGENDIAN + u8 emdp:1; /* enable modify data pointers */ + u8 fa:3; /* fair arbitration */ + u8 dimm:1; /* disconnect immediate */ + u8 dtdc:3; /* data transfer disconnect control */ +#else + u8 dtdc:3; /* data transfer disconnect control */ + u8 dimm:1; /* disconnect immediate */ + u8 fa:3; /* fair arbitration */ + u8 emdp:1; /* enable modify data pointers */ +#endif + + u8 reserved3; + + u8 first_burst_len_msb; + u8 first_burst_len_lsb; +}; + +/* + * SCSI format device mode page + */ +struct scsi_mp_format_device_s{ +#ifdef __BIGENDIAN + u32 ps:1; + u32 reserved1:1; + u32 page_code:6; +#else + u32 page_code:6; + u32 reserved1:1; + u32 ps:1; +#endif + u32 page_len:8; + u32 tracks_per_zone:16; + + u32 a_sec_per_zone:16; + u32 a_tracks_per_zone:16; + + u32 a_tracks_per_lun:16; /* alternate tracks/lun-MSB */ + u32 sec_per_track:16; /* sectors/track-MSB */ + + u32 bytes_per_sector:16; + u32 interleave:16; + + u32 tsf:16; /* track skew factor-MSB */ + u32 csf:16; /* cylinder skew factor-MSB */ + +#ifdef __BIGENDIAN + u32 ssec:1; /* soft sector formatting */ + u32 hsec:1; /* hard sector formatting */ + u32 rmb:1; /* removable media */ + u32 surf:1; /* surface */ + u32 reserved2:4; +#else + u32 reserved2:4; + u32 surf:1; /* surface */ + u32 rmb:1; /* removable media */ + u32 hsec:1; /* hard sector formatting */ + u32 ssec:1; /* soft sector formatting */ +#endif + u32 reserved3:24; +}; + +/* + * SCSI rigid disk device geometry page + */ +struct scsi_mp_rigid_device_geometry_s{ +#ifdef __BIGENDIAN + u32 ps:1; + u32 reserved1:1; + u32 page_code:6; +#else + u32 page_code:6; + u32 reserved1:1; + u32 ps:1; +#endif + u32 page_len:8; + u32 num_cylinders0:8; + u32 num_cylinders1:8; + + u32 num_cylinders2:8; + u32 num_heads:8; + u32 scwp0:8; + u32 scwp1:8; + + u32 scwp2:8; + u32 scrwc0:8; + u32 scrwc1:8; + u32 scrwc2:8; + + u32 dsr:16; + u32 lscyl0:8; + u32 lscyl1:8; + + u32 lscyl2:8; +#ifdef __BIGENDIAN + u32 reserved2:6; + u32 rpl:2; /* rotational position locking */ +#else + u32 rpl:2; /* rotational position locking */ + u32 reserved2:6; +#endif + u32 rot_off:8; + u32 reserved3:8; + + u32 med_rot_rate:16; + u32 reserved4:16; +}; + +/* + * SCSI caching mode page + */ +struct scsi_mp_caching_s{ +#ifdef __BIGENDIAN + u8 ps:1; + u8 res1:1; + u8 page_code:6; +#else + u8 page_code:6; + u8 res1:1; + u8 ps:1; +#endif + u8 page_len; +#ifdef __BIGENDIAN + u8 ic:1; /* initiator control */ + u8 abpf:1; /* abort pre-fetch */ + u8 cap:1; /* caching analysis permitted */ + u8 disc:1; /* discontinuity */ + u8 size:1; /* size enable */ + u8 wce:1; /* write cache enable */ + u8 mf:1; /* multiplication factor */ + u8 rcd:1; /* read cache disable */ + + u8 drrp:4; /* demand read retention priority */ + u8 wrp:4; /* write retention priority */ +#else + u8 rcd:1; /* read cache disable */ + u8 mf:1; /* multiplication factor */ + u8 wce:1; /* write cache enable */ + u8 size:1; /* size enable */ + u8 disc:1; /* discontinuity */ + u8 cap:1; /* caching analysis permitted */ + u8 abpf:1; /* abort pre-fetch */ + u8 ic:1; /* initiator control */ + + u8 wrp:4; /* write retention priority */ + u8 drrp:4; /* demand read retention priority */ +#endif + u8 dptl[2];/* disable pre-fetch transfer length */ + 
u8 min_prefetch[2];
+ u8 max_prefetch[2];
+ u8 max_prefetch_limit[2];
+#ifdef __BIGENDIAN
+ u8 fsw:1; /* force sequential write */
+ u8 lbcss:1;/* logical block cache segment size */
+ u8 dra:1; /* disable read ahead */
+ u8 vs:2; /* vendor specific */
+ u8 res2:3;
+#else
+ u8 res2:3;
+ u8 vs:2; /* vendor specific */
+ u8 dra:1; /* disable read ahead */
+ u8 lbcss:1;/* logical block cache segment size */
+ u8 fsw:1; /* force sequential write */
+#endif
+ u8 num_cache_segs;
+
+ u8 cache_seg_size[2];
+ u8 res3;
+ u8 non_cache_seg_size[3];
+};
+
+/*
+ * SCSI control mode page
+ */
+struct scsi_mp_control_page_s{
+#ifdef __BIGENDIAN
+ u8 ps:1;
+ u8 reserved1:1;
+ u8 page_code:6;
+#else
+ u8 page_code:6;
+ u8 reserved1:1;
+ u8 ps:1;
+#endif
+ u8 page_len;
+#ifdef __BIGENDIAN
+ u8 tst:3; /* task set type */
+ u8 reserved3:3;
+ u8 gltsd:1; /* global logging target save disable */
+ u8 rlec:1; /* report log exception condition */
+
+ u8 qalgo_mod:4; /* queue algorithm modifier */
+ u8 reserved4:1;
+ u8 qerr:2; /* queue error management */
+ u8 dque:1; /* disable queuing */
+
+ u8 reserved5:1;
+ u8 rac:1; /* report a check */
+ u8 reserved6:2;
+ u8 swp:1; /* software write protect */
+ u8 raerp:1; /* ready AER permission */
+ u8 uaaerp:1; /* unit attention AER permission */
+ u8 eaerp:1; /* error AER permission */
+
+ u8 reserved7:5;
+ u8 autoload_mod:3;
+#else
+ u8 rlec:1; /* report log exception condition */
+ u8 gltsd:1; /* global logging target save disable */
+ u8 reserved3:3;
+ u8 tst:3; /* task set type */
+
+ u8 dque:1; /* disable queuing */
+ u8 qerr:2; /* queue error management */
+ u8 reserved4:1;
+ u8 qalgo_mod:4; /* queue algorithm modifier */
+
+ u8 eaerp:1; /* error AER permission */
+ u8 uaaerp:1; /* unit attention AER permission */
+ u8 raerp:1; /* ready AER permission */
+ u8 swp:1; /* software write protect */
+ u8 reserved6:2;
+ u8 rac:1; /* report a check */
+ u8 reserved5:1;
+
+ u8 autoload_mod:3;
+ u8 reserved7:5;
+#endif
+ u8 rahp_msb; /* ready AER holdoff period - MSB */
+ u8 rahp_lsb; /* ready AER holdoff period - LSB */
+
+ u8 busy_timeout_period_msb;
+ u8 busy_timeout_period_lsb;
+
+ u8 ext_selftest_compl_time_msb;
+ u8 ext_selftest_compl_time_lsb;
+};
+
+/*
+ * SCSI medium types supported mode page
+ */
+struct scsi_mp_medium_types_sup_s{
+#ifdef __BIGENDIAN
+ u8 ps:1;
+ u8 reserved1:1;
+ u8 page_code:6;
+#else
+ u8 page_code:6;
+ u8 reserved1:1;
+ u8 ps:1;
+#endif
+ u8 page_len;
+
+ u8 reserved3[2];
+ u8 med_type1_sup; /* medium type one supported */
+ u8 med_type2_sup; /* medium type two supported */
+ u8 med_type3_sup; /* medium type three supported */
+ u8 med_type4_sup; /* medium type four supported */
+};
+
+/*
+ * SCSI informational exception control mode page
+ */
+struct scsi_mp_info_excpt_cntl_s{
+#ifdef __BIGENDIAN
+ u8 ps:1;
+ u8 reserved1:1;
+ u8 page_code:6;
+#else
+ u8 page_code:6;
+ u8 reserved1:1;
+ u8 ps:1;
+#endif
+ u8 page_len;
+#ifdef __BIGENDIAN
+ u8 perf:1; /* performance */
+ u8 reserved3:1;
+ u8 ebf:1; /* enable background function */
+ u8 ewasc:1; /* enable warning */
+ u8 dexcpt:1; /* disable exception control */
+ u8 test:1; /* enable test device failure
+ * notification
+ */
+ u8 reserved4:1;
+ u8 log_error:1;
+
+ u8 reserved5:4;
+ u8 mrie:4; /* method of reporting info
+ * exceptions
+ */
+#else
+ u8 log_error:1;
+ u8 reserved4:1;
+ u8 test:1; /* enable test device failure
+ * notification
+ */
+ u8 dexcpt:1; /* disable exception control */
+ u8 ewasc:1; /* enable warning */
+ u8 ebf:1; /* enable background function */
+ u8 reserved3:1;
+ u8
perf:1; /* performance */ + + u8 mrie:4; /* method of reporting info + * exceptions + */ + u8 reserved5:4; +#endif + u8 interval_timer_msb; + u8 interval_timer_lsb; + + u8 report_count_msb; + u8 report_count_lsb; +}; + +/* + * Methods of reporting informational exceptions + */ +#define SCSI_MP_IEC_NO_REPORT 0x0 /* no reporting of exceptions */ +#define SCSI_MP_IEC_AER 0x1 /* async event reporting */ +#define SCSI_MP_IEC_UNIT_ATTN 0x2 /* generate unit attenstion */ +#define SCSI_MO_IEC_COND_REC_ERR 0x3 /* conditionally generate recovered + * error + */ +#define SCSI_MP_IEC_UNCOND_REC_ERR 0x4 /* unconditionally generate recovered + * error + */ +#define SCSI_MP_IEC_NO_SENSE 0x5 /* generate no sense */ +#define SCSI_MP_IEC_ON_REQUEST 0x6 /* only report exceptions on request */ + +/* + * SCSI flexible disk page + */ +struct scsi_mp_flexible_disk_s{ +#ifdef __BIGENDIAN + u8 ps:1; + u8 reserved1:1; + u8 page_code:6; +#else + u8 page_code:6; + u8 reserved1:1; + u8 ps:1; +#endif + u8 page_len; + + u8 transfer_rate_msb; + u8 transfer_rate_lsb; + + u8 num_heads; + u8 num_sectors; + + u8 bytes_per_sector_msb; + u8 bytes_per_sector_lsb; + + u8 num_cylinders_msb; + u8 num_cylinders_lsb; + + u8 sc_wpc_msb; /* starting cylinder-write + * precompensation msb + */ + u8 sc_wpc_lsb; /* starting cylinder-write + * precompensation lsb + */ + u8 sc_rwc_msb; /* starting cylinder-reduced write + * current msb + */ + u8 sc_rwc_lsb; /* starting cylinder-reduced write + * current lsb + */ + + u8 dev_step_rate_msb; + u8 dev_step_rate_lsb; + + u8 dev_step_pulse_width; + + u8 head_sd_msb; /* head settle delay msb */ + u8 head_sd_lsb; /* head settle delay lsb */ + + u8 motor_on_delay; + u8 motor_off_delay; +#ifdef __BIGENDIAN + u8 trdy:1; /* true ready bit */ + u8 ssn:1; /* start sector number bit */ + u8 mo:1; /* motor on bit */ + u8 reserved3:5; + + u8 reserved4:4; + u8 spc:4; /* step pulse per cylinder */ +#else + u8 reserved3:5; + u8 mo:1; /* motor on bit */ + u8 ssn:1; /* start sector number bit */ + u8 trdy:1; /* true ready bit */ + + u8 spc:4; /* step pulse per cylinder */ + u8 reserved4:4; +#endif + u8 write_comp; + u8 head_load_delay; + u8 head_unload_delay; +#ifdef __BIGENDIAN + u8 pin34:4; /* pin34 usage */ + u8 pin2:4; /* pin2 usage */ + + u8 pin4:4; /* pin4 usage */ + u8 pin1:4; /* pin1 usage */ +#else + u8 pin2:4; /* pin2 usage */ + u8 pin34:4; /* pin34 usage */ + + u8 pin1:4; /* pin1 usage */ + u8 pin4:4; /* pin4 usage */ +#endif + u8 med_rot_rate_msb; + u8 med_rot_rate_lsb; + + u8 reserved5[2]; +}; + +struct scsi_mode_page_format_data6_s{ + struct scsi_mode_param_header6_s mph; /* mode page header */ + struct scsi_mode_param_desc_s desc; /* block descriptor */ + struct scsi_mp_format_device_s format; /* format device data */ +}; + +struct scsi_mode_page_format_data10_s{ + struct scsi_mode_param_header10_s mph; /* mode page header */ + struct scsi_mode_param_desc_s desc; /* block descriptor */ + struct scsi_mp_format_device_s format; /* format device data */ +}; + +struct scsi_mode_page_rdg_data6_s{ + struct scsi_mode_param_header6_s mph; /* mode page header */ + struct scsi_mode_param_desc_s desc; /* block descriptor */ + struct scsi_mp_rigid_device_geometry_s rdg; + /* rigid geometry data */ +}; + +struct scsi_mode_page_rdg_data10_s{ + struct scsi_mode_param_header10_s mph; /* mode page header */ + struct scsi_mode_param_desc_s desc; /* block descriptor */ + struct scsi_mp_rigid_device_geometry_s rdg; + /* rigid geometry data */ +}; + +struct scsi_mode_page_cache6_s{ + struct scsi_mode_param_header6_s 
mph; /* mode page header */
+ struct scsi_mode_param_desc_s desc; /* block descriptor */
+ struct scsi_mp_caching_s cache; /* cache page data */
+};
+
+struct scsi_mode_page_cache10_s{
+ struct scsi_mode_param_header10_s mph; /* mode page header */
+ struct scsi_mode_param_desc_s desc; /* block descriptor */
+ struct scsi_mp_caching_s cache; /* cache page data */
+};
+
+/* --------------------------------------------------------------
+ * Format Unit command
+ * ------------------------------------------------------------
+ */
+
+/*
+ * Format Unit CDB
+ */
+struct scsi_format_unit_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 res1:3;
+ u8 fmtdata:1; /* if set, data out phase has format
+ * data
+ */
+ u8 cmplst:1; /* if set, defect list is complete */
+ u8 def_list:3; /* format of defect descriptor if
+ * fmtdata = 1
+ */
+#else
+ u8 def_list:3; /* format of defect descriptor if
+ * fmtdata = 1
+ */
+ u8 cmplst:1; /* if set, defect list is complete */
+ u8 fmtdata:1; /* if set, data out phase has format
+ * data
+ */
+ u8 res1:3;
+#endif
+ u8 interleave_msb;
+ u8 interleave_lsb;
+ u8 vendor_spec;
+ u8 control;
+};
+
+/*
+ * SCSI RESERVE(6) cdb
+ */
+struct scsi_reserve6_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 reserved:3;
+ u8 obsolete:4;
+ u8 extent:1;
+#else
+ u8 extent:1;
+ u8 obsolete:4;
+ u8 reserved:3;
+#endif
+ u8 reservation_id;
+ u16 param_list_len;
+ u8 control;
+};
+
+/*
+ * SCSI RELEASE(6) cdb
+ */
+struct scsi_release6_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 reserved1:3;
+ u8 obsolete:4;
+ u8 extent:1;
+#else
+ u8 extent:1;
+ u8 obsolete:4;
+ u8 reserved1:3;
+#endif
+ u8 reservation_id;
+ u16 reserved2;
+ u8 control;
+};
+
+/*
+ * SCSI RESERVE(10) cdb
+ */
+struct scsi_reserve10_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 reserved1:3;
+ u8 third_party:1;
+ u8 reserved2:2;
+ u8 long_id:1;
+ u8 extent:1;
+#else
+ u8 extent:1;
+ u8 long_id:1;
+ u8 reserved2:2;
+ u8 third_party:1;
+ u8 reserved1:3;
+#endif
+ u8 reservation_id;
+ u8 third_pty_dev_id;
+ u8 reserved3;
+ u8 reserved4;
+ u8 reserved5;
+ u16 param_list_len;
+ u8 control;
+};
+
+struct scsi_release10_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 reserved1:3;
+ u8 third_party:1;
+ u8 reserved2:2;
+ u8 long_id:1;
+ u8 extent:1;
+#else
+ u8 extent:1;
+ u8 long_id:1;
+ u8 reserved2:2;
+ u8 third_party:1;
+ u8 reserved1:3;
+#endif
+ u8 reservation_id;
+ u8 third_pty_dev_id;
+ u8 reserved3;
+ u8 reserved4;
+ u8 reserved5;
+ u16 param_list_len;
+ u8 control;
+};
+
+struct scsi_verify10_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 lun:3;
+ u8 dpo:1;
+ u8 reserved:2;
+ u8 bytchk:1;
+ u8 reladdr:1;
+#else
+ u8 reladdr:1;
+ u8 bytchk:1;
+ u8 reserved:2;
+ u8 dpo:1;
+ u8 lun:3;
+#endif
+ u8 lba0;
+ u8 lba1;
+ u8 lba2;
+ u8 lba3;
+ u8 reserved1;
+ u8 verification_len0;
+ u8 verification_len1;
+ u8 control_byte;
+};
+
+struct scsi_request_sense_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 lun:3;
+ u8 reserved:5;
+#else
+ u8 reserved:5;
+ u8 lun:3;
+#endif
+ u8 reserved0;
+ u8 reserved1;
+ u8 alloc_len;
+ u8 control_byte;
+};
+
+/* ------------------------------------------------------------
+ * SCSI status byte values
+ * ------------------------------------------------------------
+ */
+#define SCSI_STATUS_GOOD 0x00
+#define SCSI_STATUS_CHECK_CONDITION 0x02
+#define SCSI_STATUS_CONDITION_MET 0x04
+#define SCSI_STATUS_BUSY 0x08
+#define SCSI_STATUS_INTERMEDIATE 0x10
+#define SCSI_STATUS_ICM 0x14 /* intermediate condition met */
+#define SCSI_STATUS_RESERVATION_CONFLICT 0x18
+#define SCSI_STATUS_COMMAND_TERMINATED 0x22
+#define SCSI_STATUS_QUEUE_FULL 0x28
+#define SCSI_STATUS_ACA_ACTIVE 0x30
+
+#define
SCSI_MAX_ALLOC_LEN 0xFF /* maximum allocation length
+ * in CDBs
+ */
+
+#define SCSI_OP_WRITE_VERIFY10 0x2E
+#define SCSI_OP_WRITE_VERIFY12 0xAE
+#define SCSI_OP_UNDEF 0xFF
+
+/*
+ * SCSI WRITE-VERIFY(10) command
+ */
+struct scsi_write_verify10_s{
+ u8 opcode;
+#ifdef __BIGENDIAN
+ u8 reserved1:3;
+ u8 dpo:1; /* Disable Page Out */
+ u8 reserved2:1;
+ u8 ebp:1; /* erase by-pass */
+ u8 bytchk:1; /* byte check */
+ u8 rel_adr:1; /* relative address */
+#else
+ u8 rel_adr:1; /* relative address */
+ u8 bytchk:1; /* byte check */
+ u8 ebp:1; /* erase by-pass */
+ u8 reserved2:1;
+ u8 dpo:1; /* Disable Page Out */
+ u8 reserved1:3;
+#endif
+ u8 lba0; /* logical block address - MSB */
+ u8 lba1;
+ u8 lba2;
+ u8 lba3; /* LSB */
+ u8 reserved3;
+ u8 xfer_length0; /* transfer length in blocks - MSB */
+ u8 xfer_length1; /* LSB */
+ u8 control;
+};
+
+#pragma pack()
+
+#endif /* __SCSI_H__ */
diff --git a/drivers/scsi/bfa/include/protocol/types.h b/drivers/scsi/bfa/include/protocol/types.h
new file mode 100644
index 000000000000..2875a6cced3b
--- /dev/null
+++ b/drivers/scsi/bfa/include/protocol/types.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * types.h Protocol defined base types
+ */
+
+#ifndef __TYPES_H__
+#define __TYPES_H__
+
+#include
+
+#define wwn_t u64
+#define lun_t u64
+
+#define WWN_NULL (0)
+#define FC_SYMNAME_MAX 256 /* max name server symbolic name size */
+#define FC_ALPA_MAX 128
+
+#pragma pack(1)
+
+#define MAC_ADDRLEN (6)
+struct mac_s { u8 mac[MAC_ADDRLEN]; };
+#define mac_t struct mac_s
+
+#pragma pack()
+
+#endif
diff --git a/drivers/scsi/bfa/loop.c b/drivers/scsi/bfa/loop.c
new file mode 100644
index 000000000000..a418dedebe9e
--- /dev/null
+++ b/drivers/scsi/bfa/loop.c
@@ -0,0 +1,422 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * port_loop.c vport private loop implementation.
+ */
+#include
+#include
+#include "fcs_lport.h"
+#include "fcs_rport.h"
+#include "fcs_trcmod.h"
+#include "lport_priv.h"
+
+BFA_TRC_FILE(FCS, LOOP);
+
+/**
+ * ALPA to LIXA bitmap mapping
+ *
+ * ALPA 0x00 (Word 0, Bit 30) is invalid for N_Ports. Also Word 0 Bit 31
+ * is for L_bit (login required) and is filled as ALPA 0x00 here.
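+ *
+ * For orientation, a set bit at (word, bit) in a received loop positional
+ * map would translate to an ALPA through this table roughly as below; the
+ * index arithmetic is a sketch to illustrate the table layout (word 3,
+ * bit 0 first), not a helper this driver defines:
+ *
+ *	alpa = port_loop_alpa_map[(3 - word) * 32 + bit];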
+ */
+static const u8 port_loop_alpa_map[] = {
+ 0xEF, 0xE8, 0xE4, 0xE2, 0xE1, 0xE0, 0xDC, 0xDA, /* Word 3 Bits 0..7 */
+ 0xD9, 0xD6, 0xD5, 0xD4, 0xD3, 0xD2, 0xD1, 0xCE, /* Word 3 Bits 8..15 */
+ 0xCD, 0xCC, 0xCB, 0xCA, 0xC9, 0xC7, 0xC6, 0xC5, /* Word 3 Bits 16..23 */
+ 0xC3, 0xBC, 0xBA, 0xB9, 0xB6, 0xB5, 0xB4, 0xB3, /* Word 3 Bits 24..31 */
+
+ 0xB2, 0xB1, 0xAE, 0xAD, 0xAC, 0xAB, 0xAA, 0xA9, /* Word 2 Bits 0..7 */
+ 0xA7, 0xA6, 0xA5, 0xA3, 0x9F, 0x9E, 0x9D, 0x9B, /* Word 2 Bits 8..15 */
+ 0x98, 0x97, 0x90, 0x8F, 0x88, 0x84, 0x82, 0x81, /* Word 2 Bits 16..23 */
+ 0x80, 0x7C, 0x7A, 0x79, 0x76, 0x75, 0x74, 0x73, /* Word 2 Bits 24..31 */
+
+ 0x72, 0x71, 0x6E, 0x6D, 0x6C, 0x6B, 0x6A, 0x69, /* Word 1 Bits 0..7 */
+ 0x67, 0x66, 0x65, 0x63, 0x5C, 0x5A, 0x59, 0x56, /* Word 1 Bits 8..15 */
+ 0x55, 0x54, 0x53, 0x52, 0x51, 0x4E, 0x4D, 0x4C, /* Word 1 Bits 16..23 */
+ 0x4B, 0x4A, 0x49, 0x47, 0x46, 0x45, 0x43, 0x3C, /* Word 1 Bits 24..31 */
+
+ 0x3A, 0x39, 0x36, 0x35, 0x34, 0x33, 0x32, 0x31, /* Word 0 Bits 0..7 */
+ 0x2E, 0x2D, 0x2C, 0x2B, 0x2A, 0x29, 0x27, 0x26, /* Word 0 Bits 8..15 */
+ 0x25, 0x23, 0x1F, 0x1E, 0x1D, 0x1B, 0x18, 0x17, /* Word 0 Bits 16..23 */
+ 0x10, 0x0F, 0x08, 0x04, 0x02, 0x01, 0x00, 0x00, /* Word 0 Bits 24..31 */
+};
+
+/*
+ * Local Functions
+ */
+bfa_status_t bfa_fcs_port_loop_send_plogi(struct bfa_fcs_port_s *port,
+ u8 alpa);
+
+void bfa_fcs_port_loop_plogi_response(void *fcsarg,
+ struct bfa_fcxp_s *fcxp,
+ void *cbarg,
+ bfa_status_t req_status,
+ u32 rsp_len,
+ u32 resid_len,
+ struct fchs_s *rsp_fchs);
+
+bfa_status_t bfa_fcs_port_loop_send_adisc(struct bfa_fcs_port_s *port,
+ u8 alpa);
+
+void bfa_fcs_port_loop_adisc_response(void *fcsarg,
+ struct bfa_fcxp_s *fcxp,
+ void *cbarg,
+ bfa_status_t req_status,
+ u32 rsp_len,
+ u32 resid_len,
+ struct fchs_s *rsp_fchs);
+
+bfa_status_t bfa_fcs_port_loop_send_plogi_acc(struct bfa_fcs_port_s *port,
+ u8 alpa);
+
+void bfa_fcs_port_loop_plogi_acc_response(void *fcsarg,
+ struct bfa_fcxp_s *fcxp,
+ void *cbarg,
+ bfa_status_t req_status,
+ u32 rsp_len,
+ u32 resid_len,
+ struct fchs_s *rsp_fchs);
+
+bfa_status_t bfa_fcs_port_loop_send_adisc_acc(struct bfa_fcs_port_s *port,
+ u8 alpa);
+
+void bfa_fcs_port_loop_adisc_acc_response(void *fcsarg,
+ struct bfa_fcxp_s *fcxp,
+ void *cbarg,
+ bfa_status_t req_status,
+ u32 rsp_len,
+ u32 resid_len,
+ struct fchs_s *rsp_fchs);
+/**
+ * Called by port to initialize the private LOOP topology.
+ */
+void
+bfa_fcs_port_loop_init(struct bfa_fcs_port_s *port)
+{
+}
+
+/**
+ * Called by port to notify transition to online state.
+ */
+void
+bfa_fcs_port_loop_online(struct bfa_fcs_port_s *port)
+{
+
+ u8 num_alpa = port->port_topo.ploop.num_alpa;
+ u8 *alpa_pos_map = port->port_topo.ploop.alpa_pos_map;
+ struct bfa_fcs_rport_s *r_port;
+ int ii = 0;
+
+ /*
+ * If the port role is Initiator Mode, create Rports.
+ */
+ if (port->port_cfg.roles == BFA_PORT_ROLE_FCP_IM) {
+ /*
+ * Check if the ALPA positional bitmap is available.
+ * If not, we send PLOGI to all possible ALPAs.
+ */
+ if (num_alpa > 0) {
+ for (ii = 0; ii < num_alpa; ii++) {
+ /*
+ * ignore ALPA of bfa port
+ */
+ if (alpa_pos_map[ii] != port->pid) {
+ r_port = bfa_fcs_rport_create(port,
+ alpa_pos_map[ii]);
+ }
+ }
+ } else {
+ for (ii = 0; ii < MAX_ALPA_COUNT; ii++) {
+ /*
+ * ignore ALPA of bfa port
+ */
+ if ((port_loop_alpa_map[ii] > 0)
+ && (port_loop_alpa_map[ii] != port->pid))
+ bfa_fcs_port_loop_send_plogi(port,
+ port_loop_alpa_map[ii]);
+ /**TBD */
+ }
+ }
+ } else {
+ /*
+ * TBD Target Mode ??
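+ * A target-mode port would presumably do the inverse: wait for the
+ * initiator's PLOGI and answer it. A hypothetical sketch only, reusing
+ * the accept helper defined later in this file (the role flag and
+ * rx_alpa are assumptions for illustration, not driver API):
+ *
+ *	if (port->port_cfg.roles & BFA_PORT_ROLE_FCP_TM)
+ *		bfa_fcs_port_loop_send_plogi_acc(port, rx_alpa);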
+ */
+ }
+
+}
+
+/**
+ * Called by port to notify transition to offline state.
+ */
+void
+bfa_fcs_port_loop_offline(struct bfa_fcs_port_s *port)
+{
+
+}
+
+/**
+ * Called by port to notify a LIP on the loop.
+ */
+void
+bfa_fcs_port_loop_lip(struct bfa_fcs_port_s *port)
+{
+}
+
+/**
+ * Local Functions.
+ */
+bfa_status_t
+bfa_fcs_port_loop_send_plogi(struct bfa_fcs_port_s *port, u8 alpa)
+{
+ struct fchs_s fchs;
+ struct bfa_fcxp_s *fcxp = NULL;
+ int len;
+
+ bfa_trc(port->fcs, alpa);
+
+ fcxp = bfa_fcxp_alloc(NULL, port->fcs->bfa, 0, 0, NULL, NULL, NULL,
+ NULL);
+ bfa_assert(fcxp);
+
+ len = fc_plogi_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), alpa,
+ bfa_fcs_port_get_fcid(port), 0,
+ port->port_cfg.pwwn, port->port_cfg.nwwn,
+ bfa_pport_get_maxfrsize(port->fcs->bfa));
+
+ bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE,
+ FC_CLASS_3, len, &fchs,
+ bfa_fcs_port_loop_plogi_response, (void *)port,
+ FC_MAX_PDUSZ, FC_RA_TOV);
+
+ return BFA_STATUS_OK;
+}
+
+/**
+ * Called by fcxp to notify the Plogi response
+ */
+void
+bfa_fcs_port_loop_plogi_response(void *fcsarg, struct bfa_fcxp_s *fcxp,
+ void *cbarg, bfa_status_t req_status,
+ u32 rsp_len, u32 resid_len,
+ struct fchs_s *rsp_fchs)
+{
+ struct bfa_fcs_port_s *port = (struct bfa_fcs_port_s *) cbarg;
+ struct fc_logi_s *plogi_resp;
+ struct fc_els_cmd_s *els_cmd;
+
+ bfa_trc(port->fcs, req_status);
+
+ /*
+ * Sanity Checks
+ */
+ if (req_status != BFA_STATUS_OK) {
+ bfa_trc(port->fcs, req_status);
+ /*
+ * @todo
+ * This could mean that the device with this ALPA does not
+ * exist on the loop.
+ */
+
+ return;
+ }
+
+ els_cmd = (struct fc_els_cmd_s *) BFA_FCXP_RSP_PLD(fcxp);
+ plogi_resp = (struct fc_logi_s *) els_cmd;
+
+ if (els_cmd->els_code == FC_ELS_ACC) {
+ bfa_fcs_rport_start(port, rsp_fchs, plogi_resp);
+ } else {
+ bfa_trc(port->fcs, plogi_resp->els_cmd.els_code);
+ bfa_assert(0);
+ }
+}
+
+bfa_status_t
+bfa_fcs_port_loop_send_plogi_acc(struct bfa_fcs_port_s *port, u8 alpa)
+{
+ struct fchs_s fchs;
+ struct bfa_fcxp_s *fcxp;
+ int len;
+
+ bfa_trc(port->fcs, alpa);
+
+ fcxp = bfa_fcxp_alloc(NULL, port->fcs->bfa, 0, 0, NULL, NULL, NULL,
+ NULL);
+ bfa_assert(fcxp);
+
+ len = fc_plogi_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), alpa,
+ bfa_fcs_port_get_fcid(port), 0,
+ port->port_cfg.pwwn, port->port_cfg.nwwn,
+ bfa_pport_get_maxfrsize(port->fcs->bfa));
+
+ bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE,
+ FC_CLASS_3, len, &fchs,
+ bfa_fcs_port_loop_plogi_acc_response,
+ (void *)port, FC_MAX_PDUSZ, 0); /* No response
+ * expected
+ */
+
+ return BFA_STATUS_OK;
+}
+
+/*
+ * Plogi Acc Response
+ * We do not do any processing here.
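+ * If a retry policy were ever wanted, this callback would be the place
+ * to hook it; a hypothetical sketch only (not current behaviour, and the
+ * alpa would have to be carried in the callback argument):
+ *
+ *	if (req_status != BFA_STATUS_OK)
+ *		bfa_fcs_port_loop_send_plogi_acc(port, alpa);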
+ */ +void +bfa_fcs_port_loop_plogi_acc_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + + struct bfa_fcs_port_s *port = (struct bfa_fcs_port_s *) cbarg; + + bfa_trc(port->fcs, port->pid); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + return; + } +} + +bfa_status_t +bfa_fcs_port_loop_send_adisc(struct bfa_fcs_port_s *port, u8 alpa) +{ + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + int len; + + bfa_trc(port->fcs, alpa); + + fcxp = bfa_fcxp_alloc(NULL, port->fcs->bfa, 0, 0, NULL, NULL, NULL, + NULL); + bfa_assert(fcxp); + + len = fc_adisc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), alpa, + bfa_fcs_port_get_fcid(port), 0, + port->port_cfg.pwwn, port->port_cfg.nwwn); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, + bfa_fcs_port_loop_adisc_response, (void *)port, + FC_MAX_PDUSZ, FC_RA_TOV); + + return BFA_STATUS_OK; +} + +/** + * Called by fcxp to notify the ADISC response + */ +void +bfa_fcs_port_loop_adisc_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_s *port = (struct bfa_fcs_port_s *) cbarg; + struct bfa_fcs_rport_s *rport; + struct fc_adisc_s *adisc_resp; + struct fc_els_cmd_s *els_cmd; + u32 pid = rsp_fchs->s_id; + + bfa_trc(port->fcs, req_status); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + /* + * TBD : we may need to retry certain requests + */ + bfa_fcxp_free(fcxp); + return; + } + + els_cmd = (struct fc_els_cmd_s *) BFA_FCXP_RSP_PLD(fcxp); + adisc_resp = (struct fc_adisc_s *) els_cmd; + + if (els_cmd->els_code == FC_ELS_ACC) { + } else { + bfa_trc(port->fcs, adisc_resp->els_cmd.els_code); + + /* + * TBD: we may need to check for reject codes and retry + */ + rport = bfa_fcs_port_get_rport_by_pid(port, pid); + if (rport) { + list_del(&rport->qe); + bfa_fcs_rport_delete(rport); + } + + } + return; +} + +bfa_status_t +bfa_fcs_port_loop_send_adisc_acc(struct bfa_fcs_port_s *port, u8 alpa) +{ + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + int len; + + bfa_trc(port->fcs, alpa); + + fcxp = bfa_fcxp_alloc(NULL, port->fcs->bfa, 0, 0, NULL, NULL, NULL, + NULL); + bfa_assert(fcxp); + + len = fc_adisc_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), alpa, + bfa_fcs_port_get_fcid(port), 0, + port->port_cfg.pwwn, port->port_cfg.nwwn); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, + bfa_fcs_port_loop_adisc_acc_response, + (void *)port, FC_MAX_PDUSZ, 0); /* no reponse + * expected + */ + + return BFA_STATUS_OK; +} + +/* + * Adisc Acc Response + * We donot do any processing here. + */ +void +bfa_fcs_port_loop_adisc_acc_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + + struct bfa_fcs_port_s *port = (struct bfa_fcs_port_s *) cbarg; + + bfa_trc(port->fcs, port->pid); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + return; + } +} diff --git a/drivers/scsi/bfa/lport_api.c b/drivers/scsi/bfa/lport_api.c new file mode 100644 index 000000000000..8f51a83f1834 --- /dev/null +++ b/drivers/scsi/bfa/lport_api.c @@ -0,0 +1,291 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. 
+ * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * port_api.c BFA FCS port + */ + +#include +#include +#include +#include "fcs_rport.h" +#include "fcs_fabric.h" +#include "fcs_trcmod.h" +#include "fcs_vport.h" + +BFA_TRC_FILE(FCS, PORT_API); + + + +/** + * fcs_port_api BFA FCS port API + */ + +void +bfa_fcs_cfg_base_port(struct bfa_fcs_s *fcs, struct bfa_port_cfg_s *port_cfg) +{ +} + +struct bfa_fcs_port_s * +bfa_fcs_get_base_port(struct bfa_fcs_s *fcs) +{ + return (&fcs->fabric.bport); +} + +wwn_t +bfa_fcs_port_get_rport(struct bfa_fcs_port_s *port, wwn_t wwn, int index, + int nrports, bfa_boolean_t bwwn) +{ + struct list_head *qh, *qe; + struct bfa_fcs_rport_s *rport = NULL; + int i; + struct bfa_fcs_s *fcs; + + if (port == NULL || nrports == 0) + return (wwn_t) 0; + + fcs = port->fcs; + bfa_trc(fcs, (u32) nrports); + + i = 0; + qh = &port->rport_q; + qe = bfa_q_first(qh); + + while ((qe != qh) && (i < nrports)) { + rport = (struct bfa_fcs_rport_s *)qe; + if (bfa_os_ntoh3b(rport->pid) > 0xFFF000) { + qe = bfa_q_next(qe); + bfa_trc(fcs, (u32) rport->pwwn); + bfa_trc(fcs, rport->pid); + bfa_trc(fcs, i); + continue; + } + + if (bwwn) { + if (!memcmp(&wwn, &rport->pwwn, 8)) + break; + } else { + if (i == index) + break; + } + + i++; + qe = bfa_q_next(qe); + } + + bfa_trc(fcs, i); + if (rport) { + return rport->pwwn; + } else { + return (wwn_t) 0; + } +} + +void +bfa_fcs_port_get_rports(struct bfa_fcs_port_s *port, wwn_t rport_wwns[], + int *nrports) +{ + struct list_head *qh, *qe; + struct bfa_fcs_rport_s *rport = NULL; + int i; + struct bfa_fcs_s *fcs; + + if (port == NULL || rport_wwns == NULL || *nrports == 0) + return; + + fcs = port->fcs; + bfa_trc(fcs, (u32) *nrports); + + i = 0; + qh = &port->rport_q; + qe = bfa_q_first(qh); + + while ((qe != qh) && (i < *nrports)) { + rport = (struct bfa_fcs_rport_s *)qe; + if (bfa_os_ntoh3b(rport->pid) > 0xFFF000) { + qe = bfa_q_next(qe); + bfa_trc(fcs, (u32) rport->pwwn); + bfa_trc(fcs, rport->pid); + bfa_trc(fcs, i); + continue; + } + + rport_wwns[i] = rport->pwwn; + + i++; + qe = bfa_q_next(qe); + } + + bfa_trc(fcs, i); + *nrports = i; + return; +} + +/* + * Iterate's through all the rport's in the given port to + * determine the maximum operating speed. 
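+ *
+ * A management-layer caller would use it along these lines (sketch):
+ *
+ *	enum bfa_pport_speed speed;
+ *
+ *	speed = bfa_fcs_port_get_rport_max_speed(port);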
+ */ +enum bfa_pport_speed +bfa_fcs_port_get_rport_max_speed(struct bfa_fcs_port_s *port) +{ + struct list_head *qh, *qe; + struct bfa_fcs_rport_s *rport = NULL; + struct bfa_fcs_s *fcs; + enum bfa_pport_speed max_speed = 0; + struct bfa_pport_attr_s pport_attr; + enum bfa_pport_speed pport_speed; + + if (port == NULL) + return 0; + + fcs = port->fcs; + + /* + * Get Physical port's current speed + */ + bfa_pport_get_attr(port->fcs->bfa, &pport_attr); + pport_speed = pport_attr.speed; + bfa_trc(fcs, pport_speed); + + qh = &port->rport_q; + qe = bfa_q_first(qh); + + while (qe != qh) { + rport = (struct bfa_fcs_rport_s *)qe; + if ((bfa_os_ntoh3b(rport->pid) > 0xFFF000) + || (bfa_fcs_rport_get_state(rport) == BFA_RPORT_OFFLINE)) { + qe = bfa_q_next(qe); + continue; + } + + if ((rport->rpf.rpsc_speed == BFA_PPORT_SPEED_8GBPS) + || (rport->rpf.rpsc_speed > pport_speed)) { + max_speed = rport->rpf.rpsc_speed; + break; + } else if (rport->rpf.rpsc_speed > max_speed) { + max_speed = rport->rpf.rpsc_speed; + } + + qe = bfa_q_next(qe); + } + + bfa_trc(fcs, max_speed); + return max_speed; +} + +struct bfa_fcs_port_s * +bfa_fcs_lookup_port(struct bfa_fcs_s *fcs, u16 vf_id, wwn_t lpwwn) +{ + struct bfa_fcs_vport_s *vport; + bfa_fcs_vf_t *vf; + + bfa_assert(fcs != NULL); + + vf = bfa_fcs_vf_lookup(fcs, vf_id); + if (vf == NULL) { + bfa_trc(fcs, vf_id); + return (NULL); + } + + if (!lpwwn || (vf->bport.port_cfg.pwwn == lpwwn)) + return (&vf->bport); + + vport = bfa_fcs_fabric_vport_lookup(vf, lpwwn); + if (vport) + return (&vport->lport); + + return (NULL); +} + +/* + * API corresponding to VmWare's NPIV_VPORT_GETINFO. + */ +void +bfa_fcs_port_get_info(struct bfa_fcs_port_s *port, + struct bfa_port_info_s *port_info) +{ + + bfa_trc(port->fcs, port->fabric->fabric_name); + + if (port->vport == NULL) { + /* + * This is a Physical port + */ + port_info->port_type = BFA_PORT_TYPE_PHYSICAL; + + /* + * @todo : need to fix the state & reason + */ + port_info->port_state = 0; + port_info->offline_reason = 0; + + port_info->port_wwn = bfa_fcs_port_get_pwwn(port); + port_info->node_wwn = bfa_fcs_port_get_nwwn(port); + + port_info->max_vports_supp = bfa_fcs_vport_get_max(port->fcs); + port_info->num_vports_inuse = + bfa_fcs_fabric_vport_count(port->fabric); + port_info->max_rports_supp = BFA_FCS_MAX_RPORTS_SUPP; + port_info->num_rports_inuse = port->num_rports; + } else { + /* + * This is a virtual port + */ + port_info->port_type = BFA_PORT_TYPE_VIRTUAL; + + /* + * @todo : need to fix the state & reason + */ + port_info->port_state = 0; + port_info->offline_reason = 0; + + port_info->port_wwn = bfa_fcs_port_get_pwwn(port); + port_info->node_wwn = bfa_fcs_port_get_nwwn(port); + } +} + +void +bfa_fcs_port_get_stats(struct bfa_fcs_port_s *fcs_port, + struct bfa_port_stats_s *port_stats) +{ + bfa_os_memcpy(port_stats, &fcs_port->stats, + sizeof(struct bfa_port_stats_s)); + return; +} + +void +bfa_fcs_port_clear_stats(struct bfa_fcs_port_s *fcs_port) +{ + bfa_os_memset(&fcs_port->stats, 0, sizeof(struct bfa_port_stats_s)); + return; +} + +void +bfa_fcs_port_enable_ipfc_roles(struct bfa_fcs_port_s *fcs_port) +{ + fcs_port->port_cfg.roles |= BFA_PORT_ROLE_FCP_IPFC; + return; +} + +void +bfa_fcs_port_disable_ipfc_roles(struct bfa_fcs_port_s *fcs_port) +{ + fcs_port->port_cfg.roles &= ~BFA_PORT_ROLE_FCP_IPFC; + return; +} + + diff --git a/drivers/scsi/bfa/lport_priv.h b/drivers/scsi/bfa/lport_priv.h new file mode 100644 index 000000000000..dbae370a599a --- /dev/null +++ b/drivers/scsi/bfa/lport_priv.h @@ -0,0 +1,82 @@ +/* + 
* Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +#ifndef __VP_PRIV_H__ +#define __VP_PRIV_H__ + +#include +#include + +/* + * Functions exported by vps + */ +void bfa_fcs_vport_init(struct bfa_fcs_vport_s *vport); + +/* + * Functions exported by vps + */ +void bfa_fcs_vps_online(struct bfa_fcs_port_s *port); +void bfa_fcs_vps_offline(struct bfa_fcs_port_s *port); +void bfa_fcs_vps_lip(struct bfa_fcs_port_s *port); + +/* + * Functions exported by port_fab + */ +void bfa_fcs_port_fab_init(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_fab_online(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_fab_offline(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_fab_rx_frame(struct bfa_fcs_port_s *port, + u8 *rx_frame, u32 len); + +/* + * Functions exported by VP-NS. + */ +void bfa_fcs_port_ns_init(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_ns_offline(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_ns_online(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_ns_query(struct bfa_fcs_port_s *port); + +/* + * Functions exported by VP-SCN + */ +void bfa_fcs_port_scn_init(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_scn_offline(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_scn_online(struct bfa_fcs_port_s *vport); +void bfa_fcs_port_scn_process_rscn(struct bfa_fcs_port_s *port, + struct fchs_s *rx_frame, u32 len); + +/* + * Functions exported by VP-N2N + */ + +void bfa_fcs_port_n2n_init(struct bfa_fcs_port_s *port); +void bfa_fcs_port_n2n_online(struct bfa_fcs_port_s *port); +void bfa_fcs_port_n2n_offline(struct bfa_fcs_port_s *port); +void bfa_fcs_port_n2n_rx_frame(struct bfa_fcs_port_s *port, + u8 *rx_frame, u32 len); + +/* + * Functions exported by VP-LOOP + */ +void bfa_fcs_port_loop_init(struct bfa_fcs_port_s *port); +void bfa_fcs_port_loop_online(struct bfa_fcs_port_s *port); +void bfa_fcs_port_loop_offline(struct bfa_fcs_port_s *port); +void bfa_fcs_port_loop_lip(struct bfa_fcs_port_s *port); +void bfa_fcs_port_loop_rx_frame(struct bfa_fcs_port_s *port, + u8 *rx_frame, u32 len); + +#endif /* __VP_PRIV_H__ */ diff --git a/drivers/scsi/bfa/ms.c b/drivers/scsi/bfa/ms.c new file mode 100644 index 000000000000..c96b3ca007ae --- /dev/null +++ b/drivers/scsi/bfa/ms.c @@ -0,0 +1,759 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + + +#include +#include +#include "fcs_lport.h" +#include "fcs_rport.h" +#include "fcs_trcmod.h" +#include "fcs_fcxp.h" +#include "lport_priv.h" + +BFA_TRC_FILE(FCS, MS); + +#define BFA_FCS_MS_CMD_MAX_RETRIES 2 +/* + * forward declarations + */ +static void bfa_fcs_port_ms_send_plogi(void *ms_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ms_timeout(void *arg); +static void bfa_fcs_port_ms_plogi_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); + +static void bfa_fcs_port_ms_send_gmal(void *ms_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ms_gmal_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_port_ms_send_gfn(void *ms_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ms_gfn_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +/** + * fcs_ms_sm FCS MS state machine + */ + +/** + * MS State Machine events + */ +enum port_ms_event { + MSSM_EVENT_PORT_ONLINE = 1, + MSSM_EVENT_PORT_OFFLINE = 2, + MSSM_EVENT_RSP_OK = 3, + MSSM_EVENT_RSP_ERROR = 4, + MSSM_EVENT_TIMEOUT = 5, + MSSM_EVENT_FCXP_SENT = 6, + MSSM_EVENT_PORT_FABRIC_RSCN = 7 +}; + +static void bfa_fcs_port_ms_sm_offline(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_plogi_sending(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_plogi(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_plogi_retry(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_gmal_sending(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_gmal(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_gmal_retry(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_gfn_sending(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_gfn(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_gfn_retry(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +static void bfa_fcs_port_ms_sm_online(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event); +/** + * Start in offline state - awaiting NS to send start. 
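+ *
+ * Each handler below consumes one enum port_ms_event; transitions are
+ * made with bfa_sm_set_state() and any wire activity is kicked off in
+ * the handler itself. Callers only ever post events, e.g. the online
+ * notification at the end of this file reduces to (sketch):
+ *
+ *	bfa_sm_send_event(ms, MSSM_EVENT_PORT_ONLINE);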
+ */ +static void +bfa_fcs_port_ms_sm_offline(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_PORT_ONLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_plogi_sending); + bfa_fcs_port_ms_send_plogi(ms, NULL); + break; + + case MSSM_EVENT_PORT_OFFLINE: + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_plogi_sending(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_FCXP_SENT: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_plogi); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ms->port), + &ms->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_plogi(struct bfa_fcs_port_ms_s *ms, enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_RSP_ERROR: + /* + * Start timer for a delayed retry + */ + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_plogi_retry); + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ms->port), &ms->timer, + bfa_fcs_port_ms_timeout, ms, + BFA_FCS_RETRY_TIMEOUT); + break; + + case MSSM_EVENT_RSP_OK: + /* + * since plogi is done, now invoke MS related sub-modules + */ + bfa_fcs_port_fdmi_online(ms); + + /** + * if this is a Vport, go to online state. + */ + if (ms->port->vport) { + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_online); + break; + } + + /* + * For a base port we need to get the + * switch's IP address. + */ + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gmal_sending); + bfa_fcs_port_ms_send_gmal(ms, NULL); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_fcxp_discard(ms->fcxp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_plogi_retry(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. 
Re-send + */ + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_plogi_sending); + bfa_fcs_port_ms_send_plogi(ms, NULL); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_timer_stop(&ms->timer); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_online(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + /* + * now invoke MS related sub-modules + */ + bfa_fcs_port_fdmi_offline(ms); + break; + + case MSSM_EVENT_PORT_FABRIC_RSCN: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gfn_sending); + ms->retry_cnt = 0; + bfa_fcs_port_ms_send_gfn(ms, NULL); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_gmal_sending(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_FCXP_SENT: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gmal); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ms->port), + &ms->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_gmal(struct bfa_fcs_port_ms_s *ms, enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_RSP_ERROR: + /* + * Start timer for a delayed retry + */ + if (ms->retry_cnt++ < BFA_FCS_MS_CMD_MAX_RETRIES) { + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gmal_retry); + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ms->port), + &ms->timer, bfa_fcs_port_ms_timeout, ms, + BFA_FCS_RETRY_TIMEOUT); + } else { + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gfn_sending); + bfa_fcs_port_ms_send_gfn(ms, NULL); + ms->retry_cnt = 0; + } + break; + + case MSSM_EVENT_RSP_OK: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gfn_sending); + bfa_fcs_port_ms_send_gfn(ms, NULL); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_fcxp_discard(ms->fcxp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_gmal_retry(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. Re-send + */ + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gmal_sending); + bfa_fcs_port_ms_send_gmal(ms, NULL); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_timer_stop(&ms->timer); + break; + + default: + bfa_assert(0); + } +} + +/** + * ms_pvt MS local functions + */ + +static void +bfa_fcs_port_ms_send_gmal(void *ms_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_ms_s *ms = ms_cbarg; + struct bfa_fcs_port_s *port = ms->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(port->fcs, port->pid); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &ms->fcxp_wqe, + bfa_fcs_port_ms_send_gmal, ms); + return; + } + ms->fcxp = fcxp; + + len = fc_gmal_req_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_fcs_port_get_fcid(port), + bfa_lps_get_peer_nwwn(port->fabric->lps)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_ms_gmal_response, + (void *)ms, FC_MAX_PDUSZ, FC_RA_TOV); + + bfa_sm_send_event(ms, MSSM_EVENT_FCXP_SENT); +} + +static void +bfa_fcs_port_ms_gmal_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_ms_s *ms = (struct bfa_fcs_port_ms_s *)cbarg; + struct bfa_fcs_port_s *port = ms->port; + struct ct_hdr_s *cthdr = NULL; + struct fcgs_gmal_resp_s *gmal_resp; + struct fc_gmal_entry_s *gmal_entry; + u32 num_entries; + u8 *rsp_str; + + bfa_trc(port->fcs, req_status); + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + gmal_resp = (struct fcgs_gmal_resp_s *)(cthdr + 1); + num_entries = bfa_os_ntohl(gmal_resp->ms_len); + if (num_entries == 0) { + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); + return; + } + /* + * The response could contain multiple Entries. + * Entries for SNMP interface, etc. + * We look for the entry with a telnet prefix. + * First "http://" entry refers to IP addr + */ + + gmal_entry = (struct fc_gmal_entry_s *)gmal_resp->ms_ma; + while (num_entries > 0) { + if (strncmp + (gmal_entry->prefix, CT_GMAL_RESP_PREFIX_HTTP, + sizeof(gmal_entry->prefix)) == 0) { + + /* + * if the IP address is terminating with a '/', + * remove it. *Byte 0 consists of the length + * of the string. 
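+ *
+ * Worked example (entry contents assumed for illustration): for an
+ * entry holding "http://192.168.10.1/" with gmal_entry->len == 20,
+ * the check below sees rsp_str[19] == '/' and clears it, so only
+ * "http://192.168.10.1" is copied into the fabric IP address field.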
+ */ + rsp_str = &(gmal_entry->prefix[0]); + if (rsp_str[gmal_entry->len - 1] == '/') + rsp_str[gmal_entry->len - 1] = 0; + /* + * copy IP Address to fabric + */ + strncpy(bfa_fcs_port_get_fabric_ipaddr(port), + gmal_entry->ip_addr, + BFA_FCS_FABRIC_IPADDR_SZ); + break; + } else { + --num_entries; + ++gmal_entry; + } + } + + bfa_sm_send_event(ms, MSSM_EVENT_RSP_OK); + return; + } + + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); +} + +static void +bfa_fcs_port_ms_sm_gfn_sending(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_FCXP_SENT: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gfn); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ms->port), + &ms->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_gfn(struct bfa_fcs_port_ms_s *ms, enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_RSP_ERROR: + /* + * Start timer for a delayed retry + */ + if (ms->retry_cnt++ < BFA_FCS_MS_CMD_MAX_RETRIES) { + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gfn_retry); + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ms->port), + &ms->timer, bfa_fcs_port_ms_timeout, ms, + BFA_FCS_RETRY_TIMEOUT); + } else { + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_online); + ms->retry_cnt = 0; + } + break; + + case MSSM_EVENT_RSP_OK: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_online); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_fcxp_discard(ms->fcxp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ms_sm_gfn_retry(struct bfa_fcs_port_ms_s *ms, + enum port_ms_event event) +{ + bfa_trc(ms->port->fcs, ms->port->port_cfg.pwwn); + bfa_trc(ms->port->fcs, event); + + switch (event) { + case MSSM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. Re-send + */ + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_gfn_sending); + bfa_fcs_port_ms_send_gfn(ms, NULL); + break; + + case MSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + bfa_timer_stop(&ms->timer); + break; + + default: + bfa_assert(0); + } +} + +/** + * ms_pvt MS local functions + */ + +static void +bfa_fcs_port_ms_send_gfn(void *ms_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_ms_s *ms = ms_cbarg; + struct bfa_fcs_port_s *port = ms->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(port->fcs, port->pid); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &ms->fcxp_wqe, + bfa_fcs_port_ms_send_gfn, ms); + return; + } + ms->fcxp = fcxp; + + len = fc_gfn_req_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_fcs_port_get_fcid(port), + bfa_lps_get_peer_nwwn(port->fabric->lps)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_ms_gfn_response, + (void *)ms, FC_MAX_PDUSZ, FC_RA_TOV); + + bfa_sm_send_event(ms, MSSM_EVENT_FCXP_SENT); +} + +static void +bfa_fcs_port_ms_gfn_response(void *fcsarg, struct bfa_fcxp_s *fcxp, void *cbarg, + bfa_status_t req_status, u32 rsp_len, + u32 resid_len, struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_ms_s *ms = (struct bfa_fcs_port_ms_s *)cbarg; + struct bfa_fcs_port_s *port = ms->port; + struct ct_hdr_s *cthdr = NULL; + wwn_t *gfn_resp; + + bfa_trc(port->fcs, req_status); + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + gfn_resp = (wwn_t *) (cthdr + 1); + /* + * check if it has actually changed + */ + if ((memcmp + ((void *)&bfa_fcs_port_get_fabric_name(port), gfn_resp, + sizeof(wwn_t)) != 0)) + bfa_fcs_fabric_set_fabric_name(port->fabric, *gfn_resp); + bfa_sm_send_event(ms, MSSM_EVENT_RSP_OK); + return; + } + + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); +} + +/** + * ms_pvt MS local functions + */ + +static void +bfa_fcs_port_ms_send_plogi(void *ms_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_ms_s *ms = ms_cbarg; + struct bfa_fcs_port_s *port = ms->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(port->fcs, port->pid); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + port->stats.ms_plogi_alloc_wait++; + bfa_fcxp_alloc_wait(port->fcs->bfa, &ms->fcxp_wqe, + bfa_fcs_port_ms_send_plogi, ms); + return; + } + ms->fcxp = fcxp; + + len = fc_plogi_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_os_hton3b(FC_MGMT_SERVER), + bfa_fcs_port_get_fcid(port), 0, + port->port_cfg.pwwn, port->port_cfg.nwwn, + bfa_pport_get_maxfrsize(port->fcs->bfa)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_ms_plogi_response, + (void *)ms, FC_MAX_PDUSZ, FC_RA_TOV); + + port->stats.ms_plogi_sent++; + bfa_sm_send_event(ms, MSSM_EVENT_FCXP_SENT); +} + +static void +bfa_fcs_port_ms_plogi_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_ms_s *ms = (struct bfa_fcs_port_ms_s *)cbarg; + + struct bfa_fcs_port_s *port = ms->port; + struct fc_els_cmd_s *els_cmd; + struct fc_ls_rjt_s *ls_rjt; + + bfa_trc(port->fcs, req_status); + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + port->stats.ms_plogi_rsp_err++; + bfa_trc(port->fcs, req_status); + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); + return; + } + + els_cmd = (struct fc_els_cmd_s *) BFA_FCXP_RSP_PLD(fcxp); + + switch (els_cmd->els_code) { + + case FC_ELS_ACC: + if (rsp_len < sizeof(struct fc_logi_s)) { + bfa_trc(port->fcs, rsp_len); + port->stats.ms_plogi_acc_err++; + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); + break; + } + port->stats.ms_plogi_accepts++; + bfa_sm_send_event(ms, MSSM_EVENT_RSP_OK); + break; + + case FC_ELS_LS_RJT: + ls_rjt = (struct fc_ls_rjt_s *) BFA_FCXP_RSP_PLD(fcxp); + + bfa_trc(port->fcs, ls_rjt->reason_code); + bfa_trc(port->fcs, ls_rjt->reason_code_expl); + + port->stats.ms_rejects++; + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); + break; + + default: + port->stats.ms_plogi_unknown_rsp++; + bfa_trc(port->fcs, els_cmd->els_code); + bfa_sm_send_event(ms, MSSM_EVENT_RSP_ERROR); + } +} + +static void +bfa_fcs_port_ms_timeout(void *arg) +{ + struct bfa_fcs_port_ms_s *ms = (struct bfa_fcs_port_ms_s *)arg; + + ms->port->stats.ms_timeouts++; + bfa_sm_send_event(ms, MSSM_EVENT_TIMEOUT); +} + + +void +bfa_fcs_port_ms_init(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ms_s *ms = BFA_FCS_GET_MS_FROM_PORT(port); + + ms->port = port; + bfa_sm_set_state(ms, bfa_fcs_port_ms_sm_offline); + + /* + * Invoke init routines of sub modules. + */ + bfa_fcs_port_fdmi_init(ms); +} + +void +bfa_fcs_port_ms_offline(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ms_s *ms = BFA_FCS_GET_MS_FROM_PORT(port); + + ms->port = port; + bfa_sm_send_event(ms, MSSM_EVENT_PORT_OFFLINE); +} + +void +bfa_fcs_port_ms_online(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ms_s *ms = BFA_FCS_GET_MS_FROM_PORT(port); + + ms->port = port; + bfa_sm_send_event(ms, MSSM_EVENT_PORT_ONLINE); +} + +void +bfa_fcs_port_ms_fabric_rscn(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ms_s *ms = BFA_FCS_GET_MS_FROM_PORT(port); + + /* + * @todo. Handle this only when in Online state + */ + if (bfa_sm_cmp_state(ms, bfa_fcs_port_ms_sm_online)) + bfa_sm_send_event(ms, MSSM_EVENT_PORT_FABRIC_RSCN); +} diff --git a/drivers/scsi/bfa/n2n.c b/drivers/scsi/bfa/n2n.c new file mode 100644 index 000000000000..735456824346 --- /dev/null +++ b/drivers/scsi/bfa/n2n.c @@ -0,0 +1,105 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. 
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/**
+ * n2n.c n2n implementation.
+ */
+#include
+#include
+#include "fcs_lport.h"
+#include "fcs_rport.h"
+#include "fcs_trcmod.h"
+#include "lport_priv.h"
+
+BFA_TRC_FILE(FCS, N2N);
+
+/**
+ * Called by fcs/port to initialize N2N topology.
+ */
+void
+bfa_fcs_port_n2n_init(struct bfa_fcs_port_s *port)
+{
+}
+
+/**
+ * Called by fcs/port to notify transition to online state.
+ */
+void
+bfa_fcs_port_n2n_online(struct bfa_fcs_port_s *port)
+{
+ struct bfa_fcs_port_n2n_s *n2n_port = &port->port_topo.pn2n;
+ struct bfa_port_cfg_s *pcfg = &port->port_cfg;
+ struct bfa_fcs_rport_s *rport;
+
+ bfa_trc(port->fcs, pcfg->pwwn);
+
+ /*
+ * If our PWWN is greater than that of the r-port, we have to
+ * initiate PLOGI and assign an Address. If not, we need to wait
+ * for its PLOGI.
+ *
+ * If our PWWN is less than that of the remote port, it will send
+ * a PLOGI with the PIDs assigned. The rport state machine takes
+ * care of this incoming PLOGI.
+ */
+ if (memcmp
+ ((void *)&pcfg->pwwn, (void *)&n2n_port->rem_port_wwn,
+ sizeof(wwn_t)) > 0) {
+ port->pid = N2N_LOCAL_PID;
+ /**
+ * First, check if we know the device by pwwn.
+ */
+ rport = bfa_fcs_port_get_rport_by_pwwn(port,
+ n2n_port->rem_port_wwn);
+ if (rport) {
+ bfa_trc(port->fcs, rport->pid);
+ bfa_trc(port->fcs, rport->pwwn);
+ rport->pid = N2N_REMOTE_PID;
+ bfa_fcs_rport_online(rport);
+ return;
+ }
+
+ /*
+ * In n2n there can be only one rport. Delete the old one whose
+ * pid should be zero, because it is offline.
+ */
+ if (port->num_rports > 0) {
+ rport = bfa_fcs_port_get_rport_by_pid(port, 0);
+ bfa_assert(rport != NULL);
+ if (rport) {
+ bfa_trc(port->fcs, rport->pwwn);
+ bfa_fcs_rport_delete(rport);
+ }
+ }
+ bfa_fcs_rport_create(port, N2N_REMOTE_PID);
+ }
+}
+
+/**
+ * Called by fcs/port to notify transition to offline state.
+ */
+void
+bfa_fcs_port_n2n_offline(struct bfa_fcs_port_s *port)
+{
+ struct bfa_fcs_port_n2n_s *n2n_port = &port->port_topo.pn2n;
+
+ bfa_trc(port->fcs, port->pid);
+ port->pid = 0;
+ n2n_port->rem_port_wwn = 0;
+ n2n_port->reply_oxid = 0;
+}
+
+
diff --git a/drivers/scsi/bfa/ns.c b/drivers/scsi/bfa/ns.c
new file mode 100644
index 000000000000..59fea99d67a4
--- /dev/null
+++ b/drivers/scsi/bfa/ns.c
@@ -0,0 +1,1243 @@
+/*
+ * Copyright (c) 2005-2009 Brocade Communications Systems, Inc.
+ * All rights reserved
+ * www.brocade.com
+ *
+ * Linux driver for Brocade Fibre Channel Host Bus Adapter.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License (GPL) Version 2 as
+ * published by the Free Software Foundation
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */ + +/** + * @page ns_sm_info VPORT NS State Machine + * + * @section ns_sm_interactions VPORT NS State Machine Interactions + * + * @section ns_sm VPORT NS State Machine + * img ns_sm.jpg + */ +#include +#include +#include +#include "fcs_lport.h" +#include "fcs_rport.h" +#include "fcs_trcmod.h" +#include "fcs_fcxp.h" +#include "fcs.h" +#include "lport_priv.h" + +BFA_TRC_FILE(FCS, NS); + +/* + * forward declarations + */ +static void bfa_fcs_port_ns_send_plogi(void *ns_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ns_send_rspn_id(void *ns_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ns_send_rft_id(void *ns_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ns_send_rff_id(void *ns_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ns_send_gid_ft(void *ns_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_ns_timeout(void *arg); +static void bfa_fcs_port_ns_plogi_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_port_ns_rspn_id_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_port_ns_rft_id_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_port_ns_rff_id_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_port_ns_gid_ft_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_port_ns_process_gidft_pids(struct bfa_fcs_port_s *port, + u32 *pid_buf, + u32 n_pids); + +static void bfa_fcs_port_ns_boot_target_disc(struct bfa_fcs_port_s *port); +/** + * fcs_ns_sm FCS nameserver interface state machine + */ + +/** + * VPort NS State Machine events + */ +enum vport_ns_event { + NSSM_EVENT_PORT_ONLINE = 1, + NSSM_EVENT_PORT_OFFLINE = 2, + NSSM_EVENT_PLOGI_SENT = 3, + NSSM_EVENT_RSP_OK = 4, + NSSM_EVENT_RSP_ERROR = 5, + NSSM_EVENT_TIMEOUT = 6, + NSSM_EVENT_NS_QUERY = 7, + NSSM_EVENT_RSPNID_SENT = 8, + NSSM_EVENT_RFTID_SENT = 9, + NSSM_EVENT_RFFID_SENT = 10, + NSSM_EVENT_GIDFT_SENT = 11, +}; + +static void bfa_fcs_port_ns_sm_offline(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_plogi_sending(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_plogi(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_plogi_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_sending_rspn_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_rspn_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_rspn_id_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_sending_rft_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_rft_id_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_rft_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); 
+static void bfa_fcs_port_ns_sm_sending_rff_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_rff_id_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_rff_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_sending_gid_ft(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_gid_ft(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_gid_ft_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +static void bfa_fcs_port_ns_sm_online(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event); +/** + * Start in offline state - awaiting linkup + */ +static void +bfa_fcs_port_ns_sm_offline(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_PORT_ONLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_plogi_sending); + bfa_fcs_port_ns_send_plogi(ns, NULL); + break; + + case NSSM_EVENT_PORT_OFFLINE: + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_plogi_sending(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_PLOGI_SENT: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_plogi); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ns->port), + &ns->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_plogi(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_RSP_ERROR: + /* + * Start timer for a delayed retry + */ + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_plogi_retry); + ns->port->stats.ns_retries++; + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ns->port), &ns->timer, + bfa_fcs_port_ns_timeout, ns, + BFA_FCS_RETRY_TIMEOUT); + break; + + case NSSM_EVENT_RSP_OK: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_rspn_id); + bfa_fcs_port_ns_send_rspn_id(ns, NULL); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_fcxp_discard(ns->fcxp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_plogi_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. 
Re-send + */ + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_plogi_sending); + bfa_fcs_port_ns_send_plogi(ns, NULL); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_timer_stop(&ns->timer); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_sending_rspn_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_RSPNID_SENT: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_rspn_id); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ns->port), + &ns->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_rspn_id(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_RSP_ERROR: + /* + * Start timer for a delayed retry + */ + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_rspn_id_retry); + ns->port->stats.ns_retries++; + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ns->port), &ns->timer, + bfa_fcs_port_ns_timeout, ns, + BFA_FCS_RETRY_TIMEOUT); + break; + + case NSSM_EVENT_RSP_OK: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_rft_id); + bfa_fcs_port_ns_send_rft_id(ns, NULL); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_fcxp_discard(ns->fcxp); + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_rspn_id_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_TIMEOUT: + /* + * Retry Timer Expired. 
Re-send
+ */
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_rspn_id);
+ bfa_fcs_port_ns_send_rspn_id(ns, NULL);
+ break;
+
+ case NSSM_EVENT_PORT_OFFLINE:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline);
+ bfa_timer_stop(&ns->timer);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_port_ns_sm_sending_rft_id(struct bfa_fcs_port_ns_s *ns,
+ enum vport_ns_event event)
+{
+ bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn);
+ bfa_trc(ns->port->fcs, event);
+
+ switch (event) {
+ case NSSM_EVENT_RFTID_SENT:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_rft_id);
+ break;
+
+ case NSSM_EVENT_PORT_OFFLINE:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline);
+ bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ns->port),
+ &ns->fcxp_wqe);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_port_ns_sm_rft_id(struct bfa_fcs_port_ns_s *ns,
+ enum vport_ns_event event)
+{
+ bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn);
+ bfa_trc(ns->port->fcs, event);
+
+ switch (event) {
+ case NSSM_EVENT_RSP_OK:
+ /*
+ * Now move to register FC4 Features
+ */
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_rff_id);
+ bfa_fcs_port_ns_send_rff_id(ns, NULL);
+ break;
+
+ case NSSM_EVENT_RSP_ERROR:
+ /*
+ * Start timer for a delayed retry
+ */
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_rft_id_retry);
+ ns->port->stats.ns_retries++;
+ bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ns->port), &ns->timer,
+ bfa_fcs_port_ns_timeout, ns,
+ BFA_FCS_RETRY_TIMEOUT);
+ break;
+
+ case NSSM_EVENT_PORT_OFFLINE:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline);
+ bfa_fcxp_discard(ns->fcxp);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_port_ns_sm_rft_id_retry(struct bfa_fcs_port_ns_s *ns,
+ enum vport_ns_event event)
+{
+ bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn);
+ bfa_trc(ns->port->fcs, event);
+
+ switch (event) {
+ case NSSM_EVENT_TIMEOUT:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_rft_id);
+ bfa_fcs_port_ns_send_rft_id(ns, NULL);
+ break;
+
+ case NSSM_EVENT_PORT_OFFLINE:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline);
+ bfa_timer_stop(&ns->timer);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_port_ns_sm_sending_rff_id(struct bfa_fcs_port_ns_s *ns,
+ enum vport_ns_event event)
+{
+ bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn);
+ bfa_trc(ns->port->fcs, event);
+
+ switch (event) {
+ case NSSM_EVENT_RFFID_SENT:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_rff_id);
+ break;
+
+ case NSSM_EVENT_PORT_OFFLINE:
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline);
+ bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ns->port),
+ &ns->fcxp_wqe);
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+static void
+bfa_fcs_port_ns_sm_rff_id(struct bfa_fcs_port_ns_s *ns,
+ enum vport_ns_event event)
+{
+ bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn);
+ bfa_trc(ns->port->fcs, event);
+
+ switch (event) {
+ case NSSM_EVENT_RSP_OK:
+
+ /*
+ * If min cfg mode is enabled, we do not initiate rport
+ * discovery with the fabric. Instead, we will retrieve the
+ * boot targets from HAL/FW.
+ */
+ if (__fcs_min_cfg(ns->port->fcs)) {
+ bfa_fcs_port_ns_boot_target_disc(ns->port);
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_online);
+ return;
+ }
+
+ /*
+ * If the port role is Initiator Mode, issue NS query.
+ * If it is Target Mode, skip this and go to online.
+ */ + if (BFA_FCS_VPORT_IS_INITIATOR_MODE(ns->port)) { + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_gid_ft); + bfa_fcs_port_ns_send_gid_ft(ns, NULL); + } else if (BFA_FCS_VPORT_IS_TARGET_MODE(ns->port)) { + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_online); + } + /* + * kick off mgmt srvr state machine + */ + bfa_fcs_port_ms_online(ns->port); + break; + + case NSSM_EVENT_RSP_ERROR: + /* + * Start timer for a delayed retry + */ + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_rff_id_retry); + ns->port->stats.ns_retries++; + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ns->port), &ns->timer, + bfa_fcs_port_ns_timeout, ns, + BFA_FCS_RETRY_TIMEOUT); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_fcxp_discard(ns->fcxp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_rff_id_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_TIMEOUT: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_rff_id); + bfa_fcs_port_ns_send_rff_id(ns, NULL); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_timer_stop(&ns->timer); + break; + + default: + bfa_assert(0); + } +} +static void +bfa_fcs_port_ns_sm_sending_gid_ft(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_GIDFT_SENT: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_gid_ft); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_fcxp_walloc_cancel(BFA_FCS_GET_HAL_FROM_PORT(ns->port), + &ns->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_gid_ft(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_RSP_OK: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_online); + break; + + case NSSM_EVENT_RSP_ERROR: + /* + * TBD: for certain reject codes, we don't need to retry + */ + /* + * Start timer for a delayed retry + */ + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_gid_ft_retry); + ns->port->stats.ns_retries++; + bfa_timer_start(BFA_FCS_GET_HAL_FROM_PORT(ns->port), &ns->timer, + bfa_fcs_port_ns_timeout, ns, + BFA_FCS_RETRY_TIMEOUT); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_fcxp_discard(ns->fcxp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_gid_ft_retry(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_TIMEOUT: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_gid_ft); + bfa_fcs_port_ns_send_gid_ft(ns, NULL); + break; + + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + bfa_timer_stop(&ns->timer); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_ns_sm_online(struct bfa_fcs_port_ns_s *ns, + enum vport_ns_event event) +{ + bfa_trc(ns->port->fcs, ns->port->port_cfg.pwwn); + bfa_trc(ns->port->fcs, event); + + switch (event) { + case NSSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); + break; + + case NSSM_EVENT_NS_QUERY: + /* + * If the port role is 
Initiator Mode, issue NS query.
+ * If it is Target Mode, skip this and go to online.
+ */
+ if (BFA_FCS_VPORT_IS_INITIATOR_MODE(ns->port)) {
+ bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_sending_gid_ft);
+ bfa_fcs_port_ns_send_gid_ft(ns, NULL);
+ }
+ break;
+
+ default:
+ bfa_assert(0);
+ }
+}
+
+
+
+/**
+ * ns_pvt Nameserver local functions
+ */
+
+static void
+bfa_fcs_port_ns_send_plogi(void *ns_cbarg, struct bfa_fcxp_s *fcxp_alloced)
+{
+ struct bfa_fcs_port_ns_s *ns = ns_cbarg;
+ struct bfa_fcs_port_s *port = ns->port;
+ struct fchs_s fchs;
+ int len;
+ struct bfa_fcxp_s *fcxp;
+
+ bfa_trc(port->fcs, port->pid);
+
+ fcxp = fcxp_alloced ? fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs);
+ if (!fcxp) {
+ port->stats.ns_plogi_alloc_wait++;
+ bfa_fcxp_alloc_wait(port->fcs->bfa, &ns->fcxp_wqe,
+ bfa_fcs_port_ns_send_plogi, ns);
+ return;
+ }
+ ns->fcxp = fcxp;
+
+ len = fc_plogi_build(&fchs, bfa_fcxp_get_reqbuf(fcxp),
+ bfa_os_hton3b(FC_NAME_SERVER),
+ bfa_fcs_port_get_fcid(port), 0,
+ port->port_cfg.pwwn, port->port_cfg.nwwn,
+ bfa_pport_get_maxfrsize(port->fcs->bfa));
+
+ bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE,
+ FC_CLASS_3, len, &fchs, bfa_fcs_port_ns_plogi_response,
+ (void *)ns, FC_MAX_PDUSZ, FC_RA_TOV);
+ port->stats.ns_plogi_sent++;
+
+ bfa_sm_send_event(ns, NSSM_EVENT_PLOGI_SENT);
+}
+
+static void
+bfa_fcs_port_ns_plogi_response(void *fcsarg, struct bfa_fcxp_s *fcxp,
+ void *cbarg, bfa_status_t req_status,
+ u32 rsp_len, u32 resid_len,
+ struct fchs_s *rsp_fchs)
+{
+ struct bfa_fcs_port_ns_s *ns = (struct bfa_fcs_port_ns_s *)cbarg;
+ struct bfa_fcs_port_s *port = ns->port;
+ /* struct fc_logi_s *plogi_resp; */
+ struct fc_els_cmd_s *els_cmd;
+ struct fc_ls_rjt_s *ls_rjt;
+
+ bfa_trc(port->fcs, req_status);
+ bfa_trc(port->fcs, port->port_cfg.pwwn);
+
+ /*
+ * Sanity Checks
+ */
+ if (req_status != BFA_STATUS_OK) {
+ bfa_trc(port->fcs, req_status);
+ port->stats.ns_plogi_rsp_err++;
+ bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR);
+ return;
+ }
+
+ els_cmd = (struct fc_els_cmd_s *) BFA_FCXP_RSP_PLD(fcxp);
+
+ switch (els_cmd->els_code) {
+
+ case FC_ELS_ACC:
+ if (rsp_len < sizeof(struct fc_logi_s)) {
+ bfa_trc(port->fcs, rsp_len);
+ port->stats.ns_plogi_acc_err++;
+ bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR);
+ break;
+ }
+ port->stats.ns_plogi_accepts++;
+ bfa_sm_send_event(ns, NSSM_EVENT_RSP_OK);
+ break;
+
+ case FC_ELS_LS_RJT:
+ ls_rjt = (struct fc_ls_rjt_s *) BFA_FCXP_RSP_PLD(fcxp);
+
+ bfa_trc(port->fcs, ls_rjt->reason_code);
+ bfa_trc(port->fcs, ls_rjt->reason_code_expl);
+
+ port->stats.ns_rejects++;
+
+ bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR);
+ break;
+
+ default:
+ port->stats.ns_plogi_unknown_rsp++;
+ bfa_trc(port->fcs, els_cmd->els_code);
+ bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR);
+ }
+}
+
+/**
+ * Register the symbolic port name.
+ */
+static void
+bfa_fcs_port_ns_send_rspn_id(void *ns_cbarg, struct bfa_fcxp_s *fcxp_alloced)
+{
+ struct bfa_fcs_port_ns_s *ns = ns_cbarg;
+ struct bfa_fcs_port_s *port = ns->port;
+ struct fchs_s fchs;
+ int len;
+ struct bfa_fcxp_s *fcxp;
+ u8 symbl[256];
+ u8 *psymbl = &symbl[0];
+
+ bfa_os_memset(symbl, 0, sizeof(symbl));
+
+ bfa_trc(port->fcs, port->port_cfg.pwwn);
+
+ fcxp = fcxp_alloced ?
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + port->stats.ns_rspnid_alloc_wait++; + bfa_fcxp_alloc_wait(port->fcs->bfa, &ns->fcxp_wqe, + bfa_fcs_port_ns_send_rspn_id, ns); + return; + } + ns->fcxp = fcxp; + + /* + * for V-Port, form a Port Symbolic Name + */ + if (port->vport) { + /**For Vports, + * we append the vport's port symbolic name to that of the base port. + */ + + strncpy((char *)psymbl, + (char *) + &(bfa_fcs_port_get_psym_name + (bfa_fcs_get_base_port(port->fcs))), + strlen((char *) + &bfa_fcs_port_get_psym_name(bfa_fcs_get_base_port + (port->fcs)))); + + /* + * Ensure we have a null terminating string. + */ + ((char *) + psymbl)[strlen((char *) + &bfa_fcs_port_get_psym_name + (bfa_fcs_get_base_port(port->fcs)))] = 0; + + strncat((char *)psymbl, + (char *)&(bfa_fcs_port_get_psym_name(port)), + strlen((char *)&bfa_fcs_port_get_psym_name(port))); + } else { + psymbl = (u8 *) &(bfa_fcs_port_get_psym_name(port)); + } + + len = fc_rspnid_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_fcs_port_get_fcid(port), 0, psymbl); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_ns_rspn_id_response, + (void *)ns, FC_MAX_PDUSZ, FC_RA_TOV); + + port->stats.ns_rspnid_sent++; + + bfa_sm_send_event(ns, NSSM_EVENT_RSPNID_SENT); +} + +static void +bfa_fcs_port_ns_rspn_id_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_ns_s *ns = (struct bfa_fcs_port_ns_s *)cbarg; + struct bfa_fcs_port_s *port = ns->port; + struct ct_hdr_s *cthdr = NULL; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + port->stats.ns_rspnid_rsp_err++; + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + port->stats.ns_rspnid_accepts++; + bfa_sm_send_event(ns, NSSM_EVENT_RSP_OK); + return; + } + + port->stats.ns_rspnid_rejects++; + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); +} + +/** + * Register FC4-Types + * TBD, Need to retrieve this from the OS driver, in case IPFC is enabled ? + */ +static void +bfa_fcs_port_ns_send_rft_id(void *ns_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_ns_s *ns = ns_cbarg; + struct bfa_fcs_port_s *port = ns->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + port->stats.ns_rftid_alloc_wait++; + bfa_fcxp_alloc_wait(port->fcs->bfa, &ns->fcxp_wqe, + bfa_fcs_port_ns_send_rft_id, ns); + return; + } + ns->fcxp = fcxp; + + len = fc_rftid_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_fcs_port_get_fcid(port), 0, + port->port_cfg.roles); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_ns_rft_id_response, + (void *)ns, FC_MAX_PDUSZ, FC_RA_TOV); + + port->stats.ns_rftid_sent++; + bfa_sm_send_event(ns, NSSM_EVENT_RFTID_SENT); +} + +static void +bfa_fcs_port_ns_rft_id_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_ns_s *ns = (struct bfa_fcs_port_ns_s *)cbarg; + struct bfa_fcs_port_s *port = ns->port; + struct ct_hdr_s *cthdr = NULL; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + port->stats.ns_rftid_rsp_err++; + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + port->stats.ns_rftid_accepts++; + bfa_sm_send_event(ns, NSSM_EVENT_RSP_OK); + return; + } + + port->stats.ns_rftid_rejects++; + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); +} + +/** +* Register FC4-Features : Should be done after RFT_ID + */ +static void +bfa_fcs_port_ns_send_rff_id(void *ns_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_ns_s *ns = ns_cbarg; + struct bfa_fcs_port_s *port = ns->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + u8 fc4_ftrs = 0; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + port->stats.ns_rffid_alloc_wait++; + bfa_fcxp_alloc_wait(port->fcs->bfa, &ns->fcxp_wqe, + bfa_fcs_port_ns_send_rff_id, ns); + return; + } + ns->fcxp = fcxp; + + if (BFA_FCS_VPORT_IS_INITIATOR_MODE(ns->port)) { + fc4_ftrs = FC_GS_FCP_FC4_FEATURE_INITIATOR; + } else if (BFA_FCS_VPORT_IS_TARGET_MODE(ns->port)) { + fc4_ftrs = FC_GS_FCP_FC4_FEATURE_TARGET; + } + + len = fc_rffid_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_fcs_port_get_fcid(port), 0, FC_TYPE_FCP, + fc4_ftrs); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_ns_rff_id_response, + (void *)ns, FC_MAX_PDUSZ, FC_RA_TOV); + + port->stats.ns_rffid_sent++; + bfa_sm_send_event(ns, NSSM_EVENT_RFFID_SENT); +} + +static void +bfa_fcs_port_ns_rff_id_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_ns_s *ns = (struct bfa_fcs_port_ns_s *)cbarg; + struct bfa_fcs_port_s *port = ns->port; + struct ct_hdr_s *cthdr = NULL; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + port->stats.ns_rffid_rsp_err++; + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + port->stats.ns_rffid_accepts++; + bfa_sm_send_event(ns, NSSM_EVENT_RSP_OK); + return; + } + + port->stats.ns_rffid_rejects++; + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + + if (cthdr->reason_code == CT_RSN_NOT_SUPP) { + /* + * if this command is not supported, we don't retry + */ + bfa_sm_send_event(ns, NSSM_EVENT_RSP_OK); + } else { + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); + } +} + +/** + * Query Fabric for FC4-Types Devices. + * +* TBD : Need to use a local (FCS private) response buffer, since the response + * can be larger than 2K. + */ +static void +bfa_fcs_port_ns_send_gid_ft(void *ns_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_ns_s *ns = ns_cbarg; + struct bfa_fcs_port_s *port = ns->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(port->fcs, port->pid); + + fcxp = fcxp_alloced ? fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + port->stats.ns_gidft_alloc_wait++; + bfa_fcxp_alloc_wait(port->fcs->bfa, &ns->fcxp_wqe, + bfa_fcs_port_ns_send_gid_ft, ns); + return; + } + ns->fcxp = fcxp; + + /* + * This query is only initiated for FCP initiator mode. 
+ */ + len = fc_gid_ft_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), ns->port->pid, + FC_TYPE_FCP); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_ns_gid_ft_response, + (void *)ns, bfa_fcxp_get_maxrsp(port->fcs->bfa), + FC_RA_TOV); + + port->stats.ns_gidft_sent++; + + bfa_sm_send_event(ns, NSSM_EVENT_GIDFT_SENT); +} + +static void +bfa_fcs_port_ns_gid_ft_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_ns_s *ns = (struct bfa_fcs_port_ns_s *)cbarg; + struct bfa_fcs_port_s *port = ns->port; + struct ct_hdr_s *cthdr = NULL; + u32 n_pids; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + port->stats.ns_gidft_rsp_err++; + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); + return; + } + + if (resid_len != 0) { + /* + * TBD : we will need to allocate a larger buffer & retry the + * command + */ + bfa_trc(port->fcs, rsp_len); + bfa_trc(port->fcs, resid_len); + return; + } + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + switch (cthdr->cmd_rsp_code) { + + case CT_RSP_ACCEPT: + + port->stats.ns_gidft_accepts++; + n_pids = (fc_get_ctresp_pyld_len(rsp_len) / sizeof(u32)); + bfa_trc(port->fcs, n_pids); + bfa_fcs_port_ns_process_gidft_pids(port, + (u32 *) (cthdr + 1), + n_pids); + bfa_sm_send_event(ns, NSSM_EVENT_RSP_OK); + break; + + case CT_RSP_REJECT: + + /* + * Check the reason code & explanation. + * There may not have been any FC4 devices in the fabric + */ + port->stats.ns_gidft_rejects++; + bfa_trc(port->fcs, cthdr->reason_code); + bfa_trc(port->fcs, cthdr->exp_code); + + if ((cthdr->reason_code == CT_RSN_UNABLE_TO_PERF) + && (cthdr->exp_code == CT_NS_EXP_FT_NOT_REG)) { + + bfa_sm_send_event(ns, NSSM_EVENT_RSP_OK); + } else { + /* + * for all other errors, retry + */ + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); + } + break; + + default: + port->stats.ns_gidft_unknown_rsp++; + bfa_trc(port->fcs, cthdr->cmd_rsp_code); + bfa_sm_send_event(ns, NSSM_EVENT_RSP_ERROR); + } +} + +/** + * This routine will be called by bfa_timer on timer timeouts. + * + * param[in] port - pointer to bfa_fcs_port_t. + * + * return + * void + * +* Special Considerations: + * + * note + */ +static void +bfa_fcs_port_ns_timeout(void *arg) +{ + struct bfa_fcs_port_ns_s *ns = (struct bfa_fcs_port_ns_s *)arg; + + ns->port->stats.ns_timeouts++; + bfa_sm_send_event(ns, NSSM_EVENT_TIMEOUT); +} + +/* + * Process the PID list in GID_FT response + */ +static void +bfa_fcs_port_ns_process_gidft_pids(struct bfa_fcs_port_s *port, + u32 *pid_buf, u32 n_pids) +{ + struct fcgs_gidft_resp_s *gidft_entry; + struct bfa_fcs_rport_s *rport; + u32 ii; + + for (ii = 0; ii < n_pids; ii++) { + gidft_entry = (struct fcgs_gidft_resp_s *) &pid_buf[ii]; + + if (gidft_entry->pid == port->pid) + continue; + + /* + * Check if this rport already exists + */ + rport = bfa_fcs_port_get_rport_by_pid(port, gidft_entry->pid); + if (rport == NULL) { + /* + * this is a new device. create rport + */ + rport = bfa_fcs_rport_create(port, gidft_entry->pid); + } else { + /* + * this rport already exists + */ + bfa_fcs_rport_scn(rport); + } + + bfa_trc(port->fcs, gidft_entry->pid); + + /* + * if the last entry bit is set, bail out. 
+ */ + if (gidft_entry->last) + return; + } +} + +/** + * fcs_ns_public FCS nameserver public interfaces + */ + +/* + * Functions called by port/fab. + * These will send relevant Events to the ns state machine. + */ +void +bfa_fcs_port_ns_init(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ns_s *ns = BFA_FCS_GET_NS_FROM_PORT(port); + + ns->port = port; + bfa_sm_set_state(ns, bfa_fcs_port_ns_sm_offline); +} + +void +bfa_fcs_port_ns_offline(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ns_s *ns = BFA_FCS_GET_NS_FROM_PORT(port); + + ns->port = port; + bfa_sm_send_event(ns, NSSM_EVENT_PORT_OFFLINE); +} + +void +bfa_fcs_port_ns_online(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ns_s *ns = BFA_FCS_GET_NS_FROM_PORT(port); + + ns->port = port; + bfa_sm_send_event(ns, NSSM_EVENT_PORT_ONLINE); +} + +void +bfa_fcs_port_ns_query(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_ns_s *ns = BFA_FCS_GET_NS_FROM_PORT(port); + + bfa_trc(port->fcs, port->pid); + bfa_sm_send_event(ns, NSSM_EVENT_NS_QUERY); +} + +static void +bfa_fcs_port_ns_boot_target_disc(struct bfa_fcs_port_s *port) +{ + + struct bfa_fcs_rport_s *rport; + u8 nwwns; + wwn_t *wwns; + int ii; + + bfa_iocfc_get_bootwwns(port->fcs->bfa, &nwwns, &wwns); + + for (ii = 0; ii < nwwns; ++ii) { + rport = bfa_fcs_rport_create_by_wwn(port, wwns[ii]); + bfa_assert(rport); + } +} + + diff --git a/drivers/scsi/bfa/plog.c b/drivers/scsi/bfa/plog.c new file mode 100644 index 000000000000..86af818d17bb --- /dev/null +++ b/drivers/scsi/bfa/plog.c @@ -0,0 +1,184 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include +#include + +static int +plkd_validate_logrec(struct bfa_plog_rec_s *pl_rec) +{ + if ((pl_rec->log_type != BFA_PL_LOG_TYPE_INT) + && (pl_rec->log_type != BFA_PL_LOG_TYPE_STRING)) + return 1; + + if ((pl_rec->log_type != BFA_PL_LOG_TYPE_INT) + && (pl_rec->log_num_ints > BFA_PL_INT_LOG_SZ)) + return 1; + + return 0; +} + +static void +bfa_plog_add(struct bfa_plog_s *plog, struct bfa_plog_rec_s *pl_rec) +{ + u16 tail; + struct bfa_plog_rec_s *pl_recp; + + if (plog->plog_enabled == 0) + return; + + if (plkd_validate_logrec(pl_rec)) { + bfa_assert(0); + return; + } + + tail = plog->tail; + + pl_recp = &(plog->plog_recs[tail]); + + bfa_os_memcpy(pl_recp, pl_rec, sizeof(struct bfa_plog_rec_s)); + + pl_recp->tv = BFA_TRC_TS(plog); + BFA_PL_LOG_REC_INCR(plog->tail); + + if (plog->head == plog->tail) + BFA_PL_LOG_REC_INCR(plog->head); +} + +void +bfa_plog_init(struct bfa_plog_s *plog) +{ + bfa_os_memset((char *)plog, 0, sizeof(struct bfa_plog_s)); + + bfa_os_memcpy(plog->plog_sig, BFA_PL_SIG_STR, BFA_PL_SIG_LEN); + plog->head = plog->tail = 0; + plog->plog_enabled = 1; +} + +void +bfa_plog_str(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, + u16 misc, char *log_str) +{ + struct bfa_plog_rec_s lp; + + if (plog->plog_enabled) { + bfa_os_memset(&lp, 0, sizeof(struct bfa_plog_rec_s)); + lp.mid = mid; + lp.eid = event; + lp.log_type = BFA_PL_LOG_TYPE_STRING; + lp.misc = misc; + strncpy(lp.log_entry.string_log, log_str, + BFA_PL_STRING_LOG_SZ - 1); + lp.log_entry.string_log[BFA_PL_STRING_LOG_SZ - 1] = '\0'; + bfa_plog_add(plog, &lp); + } +} + +void +bfa_plog_intarr(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, + u16 misc, u32 *intarr, u32 num_ints) +{ + struct bfa_plog_rec_s lp; + u32 i; + + if (num_ints > BFA_PL_INT_LOG_SZ) + num_ints = BFA_PL_INT_LOG_SZ; + + if (plog->plog_enabled) { + bfa_os_memset(&lp, 0, sizeof(struct bfa_plog_rec_s)); + lp.mid = mid; + lp.eid = event; + lp.log_type = BFA_PL_LOG_TYPE_INT; + lp.misc = misc; + + for (i = 0; i < num_ints; i++) + bfa_os_assign(lp.log_entry.int_log[i], + intarr[i]); + + lp.log_num_ints = (u8) num_ints; + + bfa_plog_add(plog, &lp); + } +} + +void +bfa_plog_fchdr(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, + u16 misc, struct fchs_s *fchdr) +{ + struct bfa_plog_rec_s lp; + u32 *tmp_int = (u32 *) fchdr; + u32 ints[BFA_PL_INT_LOG_SZ]; + + if (plog->plog_enabled) { + bfa_os_memset(&lp, 0, sizeof(struct bfa_plog_rec_s)); + + ints[0] = tmp_int[0]; + ints[1] = tmp_int[1]; + ints[2] = tmp_int[4]; + + bfa_plog_intarr(plog, mid, event, misc, ints, 3); + } +} + +void +bfa_plog_fchdr_and_pl(struct bfa_plog_s *plog, enum bfa_plog_mid mid, + enum bfa_plog_eid event, u16 misc, struct fchs_s *fchdr, + u32 pld_w0) +{ + struct bfa_plog_rec_s lp; + u32 *tmp_int = (u32 *) fchdr; + u32 ints[BFA_PL_INT_LOG_SZ]; + + if (plog->plog_enabled) { + bfa_os_memset(&lp, 0, sizeof(struct bfa_plog_rec_s)); + + ints[0] = tmp_int[0]; + ints[1] = tmp_int[1]; + ints[2] = tmp_int[4]; + ints[3] = pld_w0; + + bfa_plog_intarr(plog, mid, event, misc, ints, 4); + } +} + +void +bfa_plog_clear(struct bfa_plog_s *plog) +{ + plog->head = plog->tail = 0; +} + +void +bfa_plog_enable(struct bfa_plog_s *plog) +{ + plog->plog_enabled = 1; +} + +void +bfa_plog_disable(struct bfa_plog_s *plog) +{ + plog->plog_enabled = 0; +} + +bfa_boolean_t +bfa_plog_get_setting(struct bfa_plog_s *plog) +{ + return((bfa_boolean_t)plog->plog_enabled); +} diff --git a/drivers/scsi/bfa/rport.c 
b/drivers/scsi/bfa/rport.c new file mode 100644 index 000000000000..9cf58bb138dc --- /dev/null +++ b/drivers/scsi/bfa/rport.c @@ -0,0 +1,2618 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * rport.c Remote port implementation. + */ + +#include +#include +#include "fcbuild.h" +#include "fcs_vport.h" +#include "fcs_lport.h" +#include "fcs_rport.h" +#include "fcs_fcpim.h" +#include "fcs_fcptm.h" +#include "fcs_trcmod.h" +#include "fcs_fcxp.h" +#include "fcs.h" +#include +#include + +BFA_TRC_FILE(FCS, RPORT); + +#define BFA_FCS_RPORT_MAX_RETRIES (5) + +/* In millisecs */ +static u32 bfa_fcs_rport_del_timeout = + BFA_FCS_RPORT_DEF_DEL_TIMEOUT * 1000; + +/* + * forward declarations + */ +static struct bfa_fcs_rport_s *bfa_fcs_rport_alloc(struct bfa_fcs_port_s *port, + wwn_t pwwn, u32 rpid); +static void bfa_fcs_rport_free(struct bfa_fcs_rport_s *rport); +static void bfa_fcs_rport_hal_online(struct bfa_fcs_rport_s *rport); +static void bfa_fcs_rport_online_action(struct bfa_fcs_rport_s *rport); +static void bfa_fcs_rport_offline_action(struct bfa_fcs_rport_s *rport); +static void bfa_fcs_rport_update(struct bfa_fcs_rport_s *rport, + struct fc_logi_s *plogi); +static void bfa_fcs_rport_fc4_pause(struct bfa_fcs_rport_s *rport); +static void bfa_fcs_rport_fc4_resume(struct bfa_fcs_rport_s *rport); +static void bfa_fcs_rport_timeout(void *arg); +static void bfa_fcs_rport_send_plogi(void *rport_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_rport_send_plogiacc(void *rport_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_rport_plogi_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_rport_send_adisc(void *rport_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_rport_adisc_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_rport_send_gidpn(void *rport_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_rport_gidpn_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_rport_send_logo(void *rport_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_rport_send_logo_acc(void *rport_cbarg); +static void bfa_fcs_rport_process_prli(struct bfa_fcs_rport_s *rport, + struct fchs_s *rx_fchs, u16 len); +static void bfa_fcs_rport_send_ls_rjt(struct bfa_fcs_rport_s *rport, + struct fchs_s *rx_fchs, u8 reason_code, + u8 reason_code_expl); +static void bfa_fcs_rport_process_adisc(struct bfa_fcs_rport_s *rport, + struct fchs_s *rx_fchs, u16 len); +/** + * fcs_rport_sm FCS rport state machine events + */ + +enum rport_event { + RPSM_EVENT_PLOGI_SEND = 1, /* new rport; start with PLOGI */ + 
RPSM_EVENT_PLOGI_RCVD = 2, /* Inbound PLOGI from remote port */
+ RPSM_EVENT_PLOGI_COMP = 3, /* PLOGI completed to rport */
+ RPSM_EVENT_LOGO_RCVD = 4, /* LOGO from remote device */
+ RPSM_EVENT_LOGO_IMP = 5, /* implicit logo for SLER */
+ RPSM_EVENT_FCXP_SENT = 6, /* Frame has been sent */
+ RPSM_EVENT_DELETE = 7, /* RPORT delete request */
+ RPSM_EVENT_SCN = 8, /* state change notification */
+ RPSM_EVENT_ACCEPTED = 9, /* Good response from remote device */
+ RPSM_EVENT_FAILED = 10, /* Request to rport failed. */
+ RPSM_EVENT_TIMEOUT = 11, /* Rport SM timeout event */
+ RPSM_EVENT_HCB_ONLINE = 12, /* BFA rport online callback */
+ RPSM_EVENT_HCB_OFFLINE = 13, /* BFA rport offline callback */
+ RPSM_EVENT_FC4_OFFLINE = 14, /* FC-4 offline complete */
+ RPSM_EVENT_ADDRESS_CHANGE = 15, /* Rport's PID has changed */
+ RPSM_EVENT_ADDRESS_DISC = 16 /* Need to Discover rport's PID */
+};
+
+static void bfa_fcs_rport_sm_uninit(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_plogi_sending(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_plogiacc_sending(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_plogi_retry(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_plogi(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_hal_online(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_online(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_nsquery_sending(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_nsquery(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_adisc_sending(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_adisc(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_fc4_logorcv(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_fc4_logosend(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_fc4_offline(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_hcb_offline(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_hcb_logorcv(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_hcb_logosend(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_logo_sending(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_offline(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_nsdisc_sending(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_nsdisc_retry(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+static void bfa_fcs_rport_sm_nsdisc_sent(struct bfa_fcs_rport_s *rport,
+ enum rport_event event);
+
+static struct bfa_sm_table_s rport_sm_table[] = {
+ {BFA_SM(bfa_fcs_rport_sm_uninit), BFA_RPORT_UNINIT},
+ {BFA_SM(bfa_fcs_rport_sm_plogi_sending), BFA_RPORT_PLOGI},
+ {BFA_SM(bfa_fcs_rport_sm_plogiacc_sending), BFA_RPORT_ONLINE},
+ {BFA_SM(bfa_fcs_rport_sm_plogi_retry), BFA_RPORT_PLOGI_RETRY},
+ {BFA_SM(bfa_fcs_rport_sm_plogi), BFA_RPORT_PLOGI},
+
{BFA_SM(bfa_fcs_rport_sm_hal_online), BFA_RPORT_ONLINE}, + {BFA_SM(bfa_fcs_rport_sm_online), BFA_RPORT_ONLINE}, + {BFA_SM(bfa_fcs_rport_sm_nsquery_sending), BFA_RPORT_NSQUERY}, + {BFA_SM(bfa_fcs_rport_sm_nsquery), BFA_RPORT_NSQUERY}, + {BFA_SM(bfa_fcs_rport_sm_adisc_sending), BFA_RPORT_ADISC}, + {BFA_SM(bfa_fcs_rport_sm_adisc), BFA_RPORT_ADISC}, + {BFA_SM(bfa_fcs_rport_sm_fc4_logorcv), BFA_RPORT_LOGORCV}, + {BFA_SM(bfa_fcs_rport_sm_fc4_logosend), BFA_RPORT_LOGO}, + {BFA_SM(bfa_fcs_rport_sm_fc4_offline), BFA_RPORT_OFFLINE}, + {BFA_SM(bfa_fcs_rport_sm_hcb_offline), BFA_RPORT_OFFLINE}, + {BFA_SM(bfa_fcs_rport_sm_hcb_logorcv), BFA_RPORT_LOGORCV}, + {BFA_SM(bfa_fcs_rport_sm_hcb_logosend), BFA_RPORT_LOGO}, + {BFA_SM(bfa_fcs_rport_sm_logo_sending), BFA_RPORT_LOGO}, + {BFA_SM(bfa_fcs_rport_sm_offline), BFA_RPORT_OFFLINE}, + {BFA_SM(bfa_fcs_rport_sm_nsdisc_sending), BFA_RPORT_NSDISC}, + {BFA_SM(bfa_fcs_rport_sm_nsdisc_retry), BFA_RPORT_NSDISC}, + {BFA_SM(bfa_fcs_rport_sm_nsdisc_sent), BFA_RPORT_NSDISC}, +}; + +/** + * Beginning state. + */ +static void +bfa_fcs_rport_sm_uninit(struct bfa_fcs_rport_s *rport, enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_PLOGI_SEND: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogi_sending); + rport->plogi_retries = 0; + bfa_fcs_rport_send_plogi(rport, NULL); + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending); + bfa_fcs_rport_send_plogiacc(rport, NULL); + break; + + case RPSM_EVENT_PLOGI_COMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online); + bfa_fcs_rport_hal_online(rport); + break; + + case RPSM_EVENT_ADDRESS_CHANGE: + case RPSM_EVENT_ADDRESS_DISC: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + break; + + default: + bfa_assert(0); + } +} + +/** + * PLOGI is being sent. + */ +static void +bfa_fcs_rport_sm_plogi_sending(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FCXP_SENT: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogi); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_free(rport); + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_send_plogiacc(rport, NULL); + break; + + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + break; + + case RPSM_EVENT_LOGO_IMP: + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + break; + + case RPSM_EVENT_SCN: + break; + + default: + bfa_assert(0); + } +} + +/** + * PLOGI is being sent. 
+ */ +static void +bfa_fcs_rport_sm_plogiacc_sending(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FCXP_SENT: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online); + bfa_fcs_rport_hal_online(rport); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_free(rport); + break; + + case RPSM_EVENT_SCN: + /** + * Ignore, SCN is possibly online notification. + */ + break; + + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + break; + + case RPSM_EVENT_LOGO_IMP: + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + break; + + case RPSM_EVENT_HCB_OFFLINE: + /** + * Ignore BFA callback, on a PLOGI receive we call bfa offline. + */ + break; + + default: + bfa_assert(0); + } +} + +/** + * PLOGI is sent. + */ +static void +bfa_fcs_rport_sm_plogi_retry(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_SCN: + bfa_timer_stop(&rport->timer); + /* + * !! fall through !! + */ + + case RPSM_EVENT_TIMEOUT: + rport->plogi_retries++; + if (rport->plogi_retries < BFA_FCS_RPORT_MAX_RETRIES) { + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogi_sending); + bfa_fcs_rport_send_plogi(rport, NULL); + } else { + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + } + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + bfa_timer_stop(&rport->timer); + bfa_fcs_rport_free(rport); + break; + + case RPSM_EVENT_LOGO_RCVD: + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending); + bfa_timer_stop(&rport->timer); + bfa_fcs_rport_send_plogiacc(rport, NULL); + break; + + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_timer_stop(&rport->timer); + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + break; + + case RPSM_EVENT_LOGO_IMP: + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_timer_stop(&rport->timer); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + break; + + case RPSM_EVENT_PLOGI_COMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online); + bfa_timer_stop(&rport->timer); + bfa_fcs_rport_hal_online(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * PLOGI is sent. 
+ */ +static void +bfa_fcs_rport_sm_plogi(struct bfa_fcs_rport_s *rport, enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_ACCEPTED: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online); + rport->plogi_retries = 0; + bfa_fcs_rport_hal_online(rport); + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_fcs_rport_send_logo_acc(rport); + bfa_fcxp_discard(rport->fcxp); + /* + * !! fall through !! + */ + case RPSM_EVENT_FAILED: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogi_retry); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + BFA_FCS_RETRY_TIMEOUT); + break; + + case RPSM_EVENT_LOGO_IMP: + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_fcxp_discard(rport->fcxp); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + break; + + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_fcxp_discard(rport->fcxp); + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_send_plogiacc(rport, NULL); + break; + + case RPSM_EVENT_SCN: + /** + * Ignore SCN - wait for PLOGI response. + */ + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_free(rport); + break; + + case RPSM_EVENT_PLOGI_COMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_hal_online(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * PLOGI is complete. Awaiting BFA rport online callback. FC-4s + * are offline. + */ +static void +bfa_fcs_rport_sm_hal_online(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_HCB_ONLINE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_online); + bfa_fcs_rport_online_action(rport); + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_logorcv); + bfa_rport_offline(rport->bfa_rport); + break; + + case RPSM_EVENT_LOGO_IMP: + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_offline); + bfa_rport_offline(rport->bfa_rport); + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending); + bfa_rport_offline(rport->bfa_rport); + bfa_fcs_rport_send_plogiacc(rport, NULL); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_logosend); + bfa_rport_offline(rport->bfa_rport); + break; + + case RPSM_EVENT_SCN: + /** + * @todo + * Ignore SCN - PLOGI just completed, FC-4 login should detect + * device failures. + */ + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport is ONLINE. FC-4s active. + */ +static void +bfa_fcs_rport_sm_online(struct bfa_fcs_rport_s *rport, enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_SCN: + /** + * Pause FC-4 activity till rport is authenticated. + * In switched fabrics, check presence of device in nameserver + * first. 
+ */ + bfa_fcs_rport_fc4_pause(rport); + + if (bfa_fcs_fabric_is_switched(rport->port->fabric)) { + bfa_sm_set_state(rport, + bfa_fcs_rport_sm_nsquery_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + } else { + bfa_sm_set_state(rport, bfa_fcs_rport_sm_adisc_sending); + bfa_fcs_rport_send_adisc(rport, NULL); + } + break; + + case RPSM_EVENT_PLOGI_RCVD: + case RPSM_EVENT_LOGO_IMP: + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logosend); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logorcv); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_PLOGI_COMP: + break; + + default: + bfa_assert(0); + } +} + +/** + * An SCN event is received in ONLINE state. NS query is being sent + * prior to ADISC authentication with rport. FC-4s are paused. + */ +static void +bfa_fcs_rport_sm_nsquery_sending(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FCXP_SENT: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsquery); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logosend); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_SCN: + /** + * ignore SCN, wait for response to query itself + */ + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logorcv); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_LOGO_IMP: + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + break; + + case RPSM_EVENT_PLOGI_RCVD: + case RPSM_EVENT_ADDRESS_CHANGE: + case RPSM_EVENT_PLOGI_COMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_offline_action(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * An SCN event is received in ONLINE state. NS query is sent to rport. + * FC-4s are paused. 
+ */ +static void +bfa_fcs_rport_sm_nsquery(struct bfa_fcs_rport_s *rport, enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_ACCEPTED: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_adisc_sending); + bfa_fcs_rport_send_adisc(rport, NULL); + break; + + case RPSM_EVENT_FAILED: + rport->ns_retries++; + if (rport->ns_retries < BFA_FCS_RPORT_MAX_RETRIES) { + bfa_sm_set_state(rport, + bfa_fcs_rport_sm_nsquery_sending); + bfa_fcs_rport_send_gidpn(rport, NULL); + } else { + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcs_rport_offline_action(rport); + } + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logosend); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_SCN: + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logorcv); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_PLOGI_COMP: + case RPSM_EVENT_ADDRESS_CHANGE: + case RPSM_EVENT_PLOGI_RCVD: + case RPSM_EVENT_LOGO_IMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_offline_action(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * An SCN event is received in ONLINE state. ADISC is being sent for + * authenticating with rport. FC-4s are paused. + */ +static void +bfa_fcs_rport_sm_adisc_sending(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FCXP_SENT: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_adisc); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logosend); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_LOGO_IMP: + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logorcv); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_SCN: + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_offline_action(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * An SCN event is received in ONLINE state. ADISC is to rport. + * FC-4s are paused. + */ +static void +bfa_fcs_rport_sm_adisc(struct bfa_fcs_rport_s *rport, enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_ACCEPTED: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_online); + bfa_fcs_rport_fc4_resume(rport); + break; + + case RPSM_EVENT_PLOGI_RCVD: + /** + * Too complex to cleanup FC-4 & rport and then acc to PLOGI. + * At least go offline when a PLOGI is received. + */ + bfa_fcxp_discard(rport->fcxp); + /* + * !!! fall through !!! 
+ */ + + case RPSM_EVENT_FAILED: + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logosend); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_SCN: + /** + * already processing RSCN + */ + break; + + case RPSM_EVENT_LOGO_IMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_offline); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_offline_action(rport); + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logorcv); + bfa_fcxp_discard(rport->fcxp); + bfa_fcs_rport_offline_action(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport has sent LOGO. Awaiting FC-4 offline completion callback. + */ +static void +bfa_fcs_rport_sm_fc4_logorcv(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FC4_OFFLINE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_logorcv); + bfa_rport_offline(rport->bfa_rport); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logosend); + break; + + case RPSM_EVENT_LOGO_RCVD: + case RPSM_EVENT_ADDRESS_CHANGE: + break; + + default: + bfa_assert(0); + } +} + +/** + * LOGO needs to be sent to rport. Awaiting FC-4 offline completion + * callback. + */ +static void +bfa_fcs_rport_sm_fc4_logosend(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FC4_OFFLINE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_logosend); + bfa_rport_offline(rport->bfa_rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport is going offline. Awaiting FC-4 offline completion callback. + */ +static void +bfa_fcs_rport_sm_fc4_offline(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FC4_OFFLINE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_offline); + bfa_rport_offline(rport->bfa_rport); + break; + + case RPSM_EVENT_SCN: + case RPSM_EVENT_LOGO_IMP: + case RPSM_EVENT_LOGO_RCVD: + case RPSM_EVENT_ADDRESS_CHANGE: + /** + * rport is already going offline. + * SCN - ignore and wait till transitioning to offline state + */ + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_fc4_logosend); + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport is offline. FC-4s are offline. Awaiting BFA rport offline + * callback. 
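
Two of the handlers above lean on deliberate switch fall-through, flagged by the loud "!! fall through !!" comments: LOGO_RCVD does its extra acknowledgment and then shares the FAILED path. Reduced to its core, the idiom is (illustrative names only):

enum ev { EV_LOGO_RCVD, EV_FAILED };

static void send_logo_acc(void) { /* acknowledge the LOGO */ }
static void enter_retry(void)   { /* continuation shared with EV_FAILED */ }

static void
handle(enum ev event)
{
	switch (event) {
	case EV_LOGO_RCVD:
		send_logo_acc();	/* work specific to LOGO */
		/* !! fall through !! */
	case EV_FAILED:
		enter_retry();		/* common continuation */
		break;
	}
}
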
+ */ +static void +bfa_fcs_rport_sm_hcb_offline(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_HCB_OFFLINE: + case RPSM_EVENT_ADDRESS_CHANGE: + if (bfa_fcs_port_is_online(rport->port)) { + bfa_sm_set_state(rport, + bfa_fcs_rport_sm_nsdisc_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + } else { + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + } + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + bfa_fcs_rport_free(rport); + break; + + case RPSM_EVENT_SCN: + case RPSM_EVENT_LOGO_RCVD: + /** + * Ignore, already offline. + */ + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport is offline. FC-4s are offline. Awaiting BFA rport offline + * callback to send LOGO accept. + */ +static void +bfa_fcs_rport_sm_hcb_logorcv(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_HCB_OFFLINE: + case RPSM_EVENT_ADDRESS_CHANGE: + if (rport->pid) + bfa_fcs_rport_send_logo_acc(rport); + /* + * If the lport is online and if the rport is not a well known + * address port, we try to re-discover the r-port. + */ + if (bfa_fcs_port_is_online(rport->port) + && (!BFA_FCS_PID_IS_WKA(rport->pid))) { + bfa_sm_set_state(rport, + bfa_fcs_rport_sm_nsdisc_sending); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + } else { + /* + * if it is not a well known address, reset the pid to + * + */ + if (!BFA_FCS_PID_IS_WKA(rport->pid)) + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + } + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_logosend); + break; + + case RPSM_EVENT_LOGO_IMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hcb_offline); + break; + + case RPSM_EVENT_LOGO_RCVD: + /** + * Ignore - already processing a LOGO. + */ + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport is being deleted. FC-4s are offline. Awaiting BFA rport offline + * callback to send LOGO. + */ +static void +bfa_fcs_rport_sm_hcb_logosend(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_HCB_OFFLINE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_logo_sending); + bfa_fcs_rport_send_logo(rport, NULL); + break; + + case RPSM_EVENT_LOGO_RCVD: + case RPSM_EVENT_ADDRESS_CHANGE: + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport is being deleted. FC-4s are offline. LOGO is being sent. 
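
The hcb_logorcv handler above singles out well-known addresses: WKA ports (the fabric's reserved service PIDs, such as the name server) never have their PID cleared and are not re-discovered through GID_PN. As an assumed illustration of what such a predicate could look like; the driver's actual BFA_FCS_PID_IS_WKA may use different bounds:

#include <stdbool.h>
#include <stdint.h>

/* FC reserves the top of the 24-bit address space for well-known
 * addresses, e.g. 0xFFFFFC is the directory/name server.  The exact
 * range tested below is an assumption for illustration. */
static bool
pid_is_wka(uint32_t pid)
{
	return (pid & 0xFFFFF0) == 0xFFFFF0;
}
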
+ */
+static void
+bfa_fcs_rport_sm_logo_sending(struct bfa_fcs_rport_s *rport,
+			enum rport_event event)
+{
+	bfa_trc(rport->fcs, rport->pwwn);
+	bfa_trc(rport->fcs, rport->pid);
+	bfa_trc(rport->fcs, event);
+
+	switch (event) {
+	case RPSM_EVENT_FCXP_SENT:
+		/*
+		 * Once LOGO is sent, we do not wait for the response
+		 */
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit);
+		bfa_fcs_rport_free(rport);
+		break;
+
+	case RPSM_EVENT_SCN:
+	case RPSM_EVENT_ADDRESS_CHANGE:
+		break;
+
+	case RPSM_EVENT_LOGO_RCVD:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit);
+		bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe);
+		bfa_fcs_rport_free(rport);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Rport is offline. FC-4s are offline. BFA rport is offline.
+ * Timer active to delete stale rport.
+ */
+static void
+bfa_fcs_rport_sm_offline(struct bfa_fcs_rport_s *rport, enum rport_event event)
+{
+	bfa_trc(rport->fcs, rport->pwwn);
+	bfa_trc(rport->fcs, rport->pid);
+	bfa_trc(rport->fcs, event);
+
+	switch (event) {
+	case RPSM_EVENT_TIMEOUT:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit);
+		bfa_fcs_rport_free(rport);
+		break;
+
+	case RPSM_EVENT_SCN:
+	case RPSM_EVENT_ADDRESS_CHANGE:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending);
+		bfa_timer_stop(&rport->timer);
+		rport->ns_retries = 0;
+		bfa_fcs_rport_send_gidpn(rport, NULL);
+		break;
+
+	case RPSM_EVENT_DELETE:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit);
+		bfa_timer_stop(&rport->timer);
+		bfa_fcs_rport_free(rport);
+		break;
+
+	case RPSM_EVENT_PLOGI_RCVD:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending);
+		bfa_timer_stop(&rport->timer);
+		bfa_fcs_rport_send_plogiacc(rport, NULL);
+		break;
+
+	case RPSM_EVENT_LOGO_RCVD:
+	case RPSM_EVENT_LOGO_IMP:
+		break;
+
+	case RPSM_EVENT_PLOGI_COMP:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online);
+		bfa_timer_stop(&rport->timer);
+		bfa_fcs_rport_hal_online(rport);
+		break;
+
+	case RPSM_EVENT_PLOGI_SEND:
+		bfa_timer_stop(&rport->timer);
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogi_sending);
+		rport->plogi_retries = 0;
+		bfa_fcs_rport_send_plogi(rport, NULL);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+/**
+ * Rport address has changed. Nameserver discovery request is being sent.
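
The offline state just above is guarded by a single one-shot ageout timer; every event that leaves the state stops it first, so a stale RPSM_EVENT_TIMEOUT can never land in the wrong state. The arm/cancel discipline, stripped of BFA specifics (names invented for the sketch):

#include <stdbool.h>

struct ageout {
	bool armed;
	void (*cb)(void *arg);
	void *arg;
};

static void
timer_start(struct ageout *t, void (*cb)(void *), void *arg)
{
	t->cb = cb;
	t->arg = arg;
	t->armed = true;	/* fires cb(arg) once, after the delay */
}

static void
timer_stop(struct ageout *t)
{
	/* Every transition that leaves the timed state cancels first,
	 * so a late expiry cannot reach the new state. */
	t->armed = false;
}
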
+ */ +static void +bfa_fcs_rport_sm_nsdisc_sending(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_FCXP_SENT: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sent); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_free(rport); + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_send_plogiacc(rport, NULL); + break; + + case RPSM_EVENT_SCN: + case RPSM_EVENT_LOGO_RCVD: + case RPSM_EVENT_PLOGI_SEND: + break; + + case RPSM_EVENT_ADDRESS_CHANGE: + rport->ns_retries = 0; /* reset the retry count */ + break; + + case RPSM_EVENT_LOGO_IMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + break; + + case RPSM_EVENT_PLOGI_COMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online); + bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rport->fcxp_wqe); + bfa_fcs_rport_hal_online(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * Nameserver discovery failed. Waiting for timeout to retry. + */ +static void +bfa_fcs_rport_sm_nsdisc_retry(struct bfa_fcs_rport_s *rport, + enum rport_event event) +{ + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPSM_EVENT_TIMEOUT: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending); + bfa_fcs_rport_send_gidpn(rport, NULL); + break; + + case RPSM_EVENT_SCN: + case RPSM_EVENT_ADDRESS_CHANGE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_nsdisc_sending); + bfa_timer_stop(&rport->timer); + rport->ns_retries = 0; + bfa_fcs_rport_send_gidpn(rport, NULL); + break; + + case RPSM_EVENT_DELETE: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + bfa_timer_stop(&rport->timer); + bfa_fcs_rport_free(rport); + break; + + case RPSM_EVENT_PLOGI_RCVD: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending); + bfa_timer_stop(&rport->timer); + bfa_fcs_rport_send_plogiacc(rport, NULL); + break; + + case RPSM_EVENT_LOGO_IMP: + rport->pid = 0; + bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline); + bfa_timer_stop(&rport->timer); + bfa_timer_start(rport->fcs->bfa, &rport->timer, + bfa_fcs_rport_timeout, rport, + bfa_fcs_rport_del_timeout); + break; + + case RPSM_EVENT_LOGO_RCVD: + bfa_fcs_rport_send_logo_acc(rport); + break; + + case RPSM_EVENT_PLOGI_COMP: + bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online); + bfa_timer_stop(&rport->timer); + bfa_fcs_rport_hal_online(rport); + break; + + default: + bfa_assert(0); + } +} + +/** + * Rport address has changed. Nameserver discovery request is sent. 
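
Name-server discovery above follows a bounded-retry policy: on failure, resend GID_PN until the retry budget is exhausted, then drop the stale PID and let the ageout timer reap the rport. In isolation the policy looks like this (the constant and helpers are assumed, not the driver's):

enum { MAX_NS_RETRIES = 3 };

struct disc {
	int retries;
};

static void resend_gidpn(struct disc *d) { (void)d; /* query again */ }
static void start_ageout(struct disc *d) { (void)d; /* give up */ }

static void
gidpn_failed(struct disc *d)
{
	if (d->retries++ < MAX_NS_RETRIES)
		resend_gidpn(d);	/* try the name server again */
	else
		start_ageout(d);	/* PID cleared, rport ages out */
}
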
+ */
+static void
+bfa_fcs_rport_sm_nsdisc_sent(struct bfa_fcs_rport_s *rport,
+			enum rport_event event)
+{
+	bfa_trc(rport->fcs, rport->pwwn);
+	bfa_trc(rport->fcs, rport->pid);
+	bfa_trc(rport->fcs, event);
+
+	switch (event) {
+	case RPSM_EVENT_ACCEPTED:
+	case RPSM_EVENT_ADDRESS_CHANGE:
+		if (rport->pid) {
+			bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogi_sending);
+			bfa_fcs_rport_send_plogi(rport, NULL);
+		} else {
+			bfa_sm_set_state(rport,
+				bfa_fcs_rport_sm_nsdisc_sending);
+			rport->ns_retries = 0;
+			bfa_fcs_rport_send_gidpn(rport, NULL);
+		}
+		break;
+
+	case RPSM_EVENT_FAILED:
+		rport->ns_retries++;
+		if (rport->ns_retries < BFA_FCS_RPORT_MAX_RETRIES) {
+			bfa_sm_set_state(rport,
+				bfa_fcs_rport_sm_nsdisc_sending);
+			bfa_fcs_rport_send_gidpn(rport, NULL);
+		} else {
+			rport->pid = 0;
+			bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline);
+			bfa_timer_start(rport->fcs->bfa, &rport->timer,
+					bfa_fcs_rport_timeout, rport,
+					bfa_fcs_rport_del_timeout);
+		}
+		break;
+
+	case RPSM_EVENT_DELETE:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit);
+		bfa_fcxp_discard(rport->fcxp);
+		bfa_fcs_rport_free(rport);
+		break;
+
+	case RPSM_EVENT_PLOGI_RCVD:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_plogiacc_sending);
+		bfa_fcxp_discard(rport->fcxp);
+		bfa_fcs_rport_send_plogiacc(rport, NULL);
+		break;
+
+	case RPSM_EVENT_LOGO_IMP:
+		rport->pid = 0;
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_offline);
+		bfa_fcxp_discard(rport->fcxp);
+		bfa_timer_start(rport->fcs->bfa, &rport->timer,
+				bfa_fcs_rport_timeout, rport,
+				bfa_fcs_rport_del_timeout);
+		break;
+
+	case RPSM_EVENT_SCN:
+		/**
+		 * Ignore SCN; wait for the NS query response.
+		 */
+		break;
+
+	case RPSM_EVENT_LOGO_RCVD:
+		/**
+		 * Not logged-in yet. Accept LOGO.
+		 */
+		bfa_fcs_rport_send_logo_acc(rport);
+		break;
+
+	case RPSM_EVENT_PLOGI_COMP:
+		bfa_sm_set_state(rport, bfa_fcs_rport_sm_hal_online);
+		bfa_fcxp_discard(rport->fcxp);
+		bfa_fcs_rport_hal_online(rport);
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+
+
+/**
+ *  fcs_rport_private FCS RPORT private functions
+ */
+
+static void
+bfa_fcs_rport_send_plogi(void *rport_cbarg, struct bfa_fcxp_s *fcxp_alloced)
+{
+	struct bfa_fcs_rport_s *rport = rport_cbarg;
+	struct bfa_fcs_port_s *port = rport->port;
+	struct fchs_s fchs;
+	int len;
+	struct bfa_fcxp_s *fcxp;
+
+	bfa_trc(rport->fcs, rport->pwwn);
+
+	fcxp = fcxp_alloced ?
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &rport->fcxp_wqe, + bfa_fcs_rport_send_plogi, rport); + return; + } + rport->fcxp = fcxp; + + len = fc_plogi_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rport->pid, + bfa_fcs_port_get_fcid(port), 0, + port->port_cfg.pwwn, port->port_cfg.nwwn, + bfa_pport_get_maxfrsize(port->fcs->bfa)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_rport_plogi_response, + (void *)rport, FC_MAX_PDUSZ, FC_RA_TOV); + + rport->stats.plogis++; + bfa_sm_send_event(rport, RPSM_EVENT_FCXP_SENT); +} + +static void +bfa_fcs_rport_plogi_response(void *fcsarg, struct bfa_fcxp_s *fcxp, void *cbarg, + bfa_status_t req_status, u32 rsp_len, + u32 resid_len, struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)cbarg; + struct fc_logi_s *plogi_rsp; + struct fc_ls_rjt_s *ls_rjt; + struct bfa_fcs_rport_s *twin; + struct list_head *qe; + + bfa_trc(rport->fcs, rport->pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(rport->fcs, req_status); + rport->stats.plogi_failed++; + bfa_sm_send_event(rport, RPSM_EVENT_FAILED); + return; + } + + plogi_rsp = (struct fc_logi_s *) BFA_FCXP_RSP_PLD(fcxp); + + /** + * Check for failure first. + */ + if (plogi_rsp->els_cmd.els_code != FC_ELS_ACC) { + ls_rjt = (struct fc_ls_rjt_s *) BFA_FCXP_RSP_PLD(fcxp); + + bfa_trc(rport->fcs, ls_rjt->reason_code); + bfa_trc(rport->fcs, ls_rjt->reason_code_expl); + + rport->stats.plogi_rejects++; + bfa_sm_send_event(rport, RPSM_EVENT_FAILED); + return; + } + + /** + * PLOGI is complete. Make sure this device is not one of the known + * device with a new FC port address. + */ + list_for_each(qe, &rport->port->rport_q) { + twin = (struct bfa_fcs_rport_s *)qe; + if (twin == rport) + continue; + if (!rport->pwwn && (plogi_rsp->port_name == twin->pwwn)) { + bfa_trc(rport->fcs, twin->pid); + bfa_trc(rport->fcs, rport->pid); + + /* + * Update plogi stats in twin + */ + twin->stats.plogis += rport->stats.plogis; + twin->stats.plogi_rejects += rport->stats.plogi_rejects; + twin->stats.plogi_timeouts += + rport->stats.plogi_timeouts; + twin->stats.plogi_failed += rport->stats.plogi_failed; + twin->stats.plogi_rcvd += rport->stats.plogi_rcvd; + twin->stats.plogi_accs++; + + bfa_fcs_rport_delete(rport); + + bfa_fcs_rport_update(twin, plogi_rsp); + twin->pid = rsp_fchs->s_id; + bfa_sm_send_event(twin, RPSM_EVENT_PLOGI_COMP); + return; + } + } + + /** + * Normal login path -- no evil twins. + */ + rport->stats.plogi_accs++; + bfa_fcs_rport_update(rport, plogi_rsp); + bfa_sm_send_event(rport, RPSM_EVENT_ACCEPTED); +} + +static void +bfa_fcs_rport_send_plogiacc(void *rport_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_rport_s *rport = rport_cbarg; + struct bfa_fcs_port_s *port = rport->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->reply_oxid); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &rport->fcxp_wqe, + bfa_fcs_rport_send_plogiacc, rport); + return; + } + rport->fcxp = fcxp; + + len = fc_plogi_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rport->pid, + bfa_fcs_port_get_fcid(port), rport->reply_oxid, + port->port_cfg.pwwn, port->port_cfg.nwwn, + bfa_pport_get_maxfrsize(port->fcs->bfa)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, NULL, NULL, FC_MAX_PDUSZ, 0); + + bfa_sm_send_event(rport, RPSM_EVENT_FCXP_SENT); +} + +static void +bfa_fcs_rport_send_adisc(void *rport_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_rport_s *rport = rport_cbarg; + struct bfa_fcs_port_s *port = rport->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(rport->fcs, rport->pwwn); + + fcxp = fcxp_alloced ? fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &rport->fcxp_wqe, + bfa_fcs_rport_send_adisc, rport); + return; + } + rport->fcxp = fcxp; + + len = fc_adisc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rport->pid, + bfa_fcs_port_get_fcid(port), 0, + port->port_cfg.pwwn, port->port_cfg.nwwn); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_rport_adisc_response, + rport, FC_MAX_PDUSZ, FC_RA_TOV); + + rport->stats.adisc_sent++; + bfa_sm_send_event(rport, RPSM_EVENT_FCXP_SENT); +} + +static void +bfa_fcs_rport_adisc_response(void *fcsarg, struct bfa_fcxp_s *fcxp, void *cbarg, + bfa_status_t req_status, u32 rsp_len, + u32 resid_len, struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)cbarg; + void *pld = bfa_fcxp_get_rspbuf(fcxp); + struct fc_ls_rjt_s *ls_rjt; + + if (req_status != BFA_STATUS_OK) { + bfa_trc(rport->fcs, req_status); + rport->stats.adisc_failed++; + bfa_sm_send_event(rport, RPSM_EVENT_FAILED); + return; + } + + if (fc_adisc_rsp_parse((struct fc_adisc_s *)pld, rsp_len, rport->pwwn, + rport->nwwn) == FC_PARSE_OK) { + rport->stats.adisc_accs++; + bfa_sm_send_event(rport, RPSM_EVENT_ACCEPTED); + return; + } + + rport->stats.adisc_rejects++; + ls_rjt = pld; + bfa_trc(rport->fcs, ls_rjt->els_cmd.els_code); + bfa_trc(rport->fcs, ls_rjt->reason_code); + bfa_trc(rport->fcs, ls_rjt->reason_code_expl); + bfa_sm_send_event(rport, RPSM_EVENT_FAILED); +} + +static void +bfa_fcs_rport_send_gidpn(void *rport_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_rport_s *rport = rport_cbarg; + struct bfa_fcs_port_s *port = rport->port; + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + int len; + + bfa_trc(rport->fcs, rport->pid); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &rport->fcxp_wqe, + bfa_fcs_rport_send_gidpn, rport); + return; + } + rport->fcxp = fcxp; + + len = fc_gidpn_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_fcs_port_get_fcid(port), 0, rport->pwwn); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_rport_gidpn_response, + (void *)rport, FC_MAX_PDUSZ, FC_RA_TOV); + + bfa_sm_send_event(rport, RPSM_EVENT_FCXP_SENT); +} + +static void +bfa_fcs_rport_gidpn_response(void *fcsarg, struct bfa_fcxp_s *fcxp, void *cbarg, + bfa_status_t req_status, u32 rsp_len, + u32 resid_len, struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)cbarg; + struct bfa_fcs_rport_s *twin; + struct list_head *qe; + struct ct_hdr_s *cthdr; + struct fcgs_gidpn_resp_s *gidpn_rsp; + + bfa_trc(rport->fcs, rport->pwwn); + + cthdr = (struct ct_hdr_s *) BFA_FCXP_RSP_PLD(fcxp); + cthdr->cmd_rsp_code = bfa_os_ntohs(cthdr->cmd_rsp_code); + + if (cthdr->cmd_rsp_code == CT_RSP_ACCEPT) { + /* + * Check if the pid is the same as before. + */ + gidpn_rsp = (struct fcgs_gidpn_resp_s *) (cthdr + 1); + + if (gidpn_rsp->dap == rport->pid) { + /* + * Device is online + */ + bfa_sm_send_event(rport, RPSM_EVENT_ACCEPTED); + } else { + /* + * Device's PID has changed. We need to cleanup and + * re-login. If there is another device with the the + * newly discovered pid, send an scn notice so that its + * new pid can be discovered. + */ + list_for_each(qe, &rport->port->rport_q) { + twin = (struct bfa_fcs_rport_s *)qe; + if (twin == rport) + continue; + if (gidpn_rsp->dap == twin->pid) { + bfa_trc(rport->fcs, twin->pid); + bfa_trc(rport->fcs, rport->pid); + + twin->pid = 0; + bfa_sm_send_event(twin, + RPSM_EVENT_ADDRESS_CHANGE); + } + } + rport->pid = gidpn_rsp->dap; + bfa_sm_send_event(rport, RPSM_EVENT_ADDRESS_CHANGE); + } + return; + } + + /* + * Reject Response + */ + switch (cthdr->reason_code) { + case CT_RSN_LOGICAL_BUSY: + /* + * Need to retry + */ + bfa_sm_send_event(rport, RPSM_EVENT_TIMEOUT); + break; + + case CT_RSN_UNABLE_TO_PERF: + /* + * device doesn't exist : Start timer to cleanup this later. + */ + bfa_sm_send_event(rport, RPSM_EVENT_FAILED); + break; + + default: + bfa_sm_send_event(rport, RPSM_EVENT_FAILED); + break; + } +} + +/** + * Called to send a logout to the rport. + */ +static void +bfa_fcs_rport_send_logo(void *rport_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_rport_s *rport = rport_cbarg; + struct bfa_fcs_port_s *port; + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + u16 len; + + bfa_trc(rport->fcs, rport->pid); + + port = rport->port; + + fcxp = fcxp_alloced ? fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &rport->fcxp_wqe, + bfa_fcs_rport_send_logo, rport); + return; + } + rport->fcxp = fcxp; + + len = fc_logo_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rport->pid, + bfa_fcs_port_get_fcid(port), 0, + bfa_fcs_port_get_pwwn(port)); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, NULL, rport, FC_MAX_PDUSZ, + FC_ED_TOV); + + rport->stats.logos++; + bfa_fcxp_discard(rport->fcxp); + bfa_sm_send_event(rport, RPSM_EVENT_FCXP_SENT); +} + +/** + * Send ACC for a LOGO received. 
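
A note on reply_oxid, which the LOGO accept built just below depends on: an ELS accept must be returned on the requester's exchange, so the OX_ID of the received frame is saved when the request arrives and echoed into the response header. Roughly, with the field set trimmed to the essentials and names assumed:

#include <stdint.h>

struct fc_hdr {
	uint16_t ox_id;		/* originator exchange id of the request */
};

struct remote_port {
	uint16_t reply_oxid;	/* OX_ID to echo in our LS_ACC */
};

static void
save_exchange(struct remote_port *rp, const struct fc_hdr *rx)
{
	rp->reply_oxid = rx->ox_id;	/* the accept travels on this exchange */
}
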
+ */ +static void +bfa_fcs_rport_send_logo_acc(void *rport_cbarg) +{ + struct bfa_fcs_rport_s *rport = rport_cbarg; + struct bfa_fcs_port_s *port; + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + u16 len; + + bfa_trc(rport->fcs, rport->pid); + + port = rport->port; + + fcxp = bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) + return; + + rport->stats.logo_rcvd++; + len = fc_logo_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rport->pid, + bfa_fcs_port_get_fcid(port), rport->reply_oxid); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, NULL, NULL, FC_MAX_PDUSZ, 0); +} + +/** + * This routine will be called by bfa_timer on timer timeouts. + * + * param[in] rport - pointer to bfa_fcs_port_ns_t. + * param[out] rport_status - pointer to return vport status in + * + * return + * void + * +* Special Considerations: + * + * note + */ +static void +bfa_fcs_rport_timeout(void *arg) +{ + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)arg; + + rport->stats.plogi_timeouts++; + bfa_sm_send_event(rport, RPSM_EVENT_TIMEOUT); +} + +static void +bfa_fcs_rport_process_prli(struct bfa_fcs_rport_s *rport, + struct fchs_s *rx_fchs, u16 len) +{ + struct bfa_fcxp_s *fcxp; + struct fchs_s fchs; + struct bfa_fcs_port_s *port = rport->port; + struct fc_prli_s *prli; + + bfa_trc(port->fcs, rx_fchs->s_id); + bfa_trc(port->fcs, rx_fchs->d_id); + + rport->stats.prli_rcvd++; + + if (BFA_FCS_VPORT_IS_TARGET_MODE(port)) { + /* + * Target Mode : Let the fcptm handle it + */ + bfa_fcs_tin_rx_prli(rport->tin, rx_fchs, len); + return; + } + + /* + * We are either in Initiator or ipfc Mode + */ + prli = (struct fc_prli_s *) (rx_fchs + 1); + + if (prli->parampage.servparams.initiator) { + bfa_trc(rport->fcs, prli->parampage.type); + rport->scsi_function = BFA_RPORT_INITIATOR; + bfa_fcs_itnim_is_initiator(rport->itnim); + } else { + /* + * @todo: PRLI from a target ? 
+ */
+		bfa_trc(port->fcs, rx_fchs->s_id);
+		rport->scsi_function = BFA_RPORT_TARGET;
+	}
+
+	fcxp = bfa_fcs_fcxp_alloc(port->fcs);
+	if (!fcxp)
+		return;
+
+	len = fc_prli_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rx_fchs->s_id,
+				bfa_fcs_port_get_fcid(port), rx_fchs->ox_id,
+				port->port_cfg.roles);
+
+	bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE,
+		      FC_CLASS_3, len, &fchs, NULL, NULL, FC_MAX_PDUSZ, 0);
+}
+
+static void
+bfa_fcs_rport_process_rpsc(struct bfa_fcs_rport_s *rport,
+			struct fchs_s *rx_fchs, u16 len)
+{
+	struct bfa_fcxp_s *fcxp;
+	struct fchs_s fchs;
+	struct bfa_fcs_port_s *port = rport->port;
+	struct fc_rpsc_speed_info_s speeds;
+	struct bfa_pport_attr_s pport_attr;
+
+	bfa_trc(port->fcs, rx_fchs->s_id);
+	bfa_trc(port->fcs, rx_fchs->d_id);
+
+	rport->stats.rpsc_rcvd++;
+	speeds.port_speed_cap =
+		RPSC_SPEED_CAP_1G | RPSC_SPEED_CAP_2G | RPSC_SPEED_CAP_4G |
+		RPSC_SPEED_CAP_8G;
+
+	/*
+	 * get current speed from pport attributes from BFA
+	 */
+	bfa_pport_get_attr(port->fcs->bfa, &pport_attr);
+
+	speeds.port_op_speed = fc_bfa_speed_to_rpsc_operspeed(pport_attr.speed);
+
+	fcxp = bfa_fcs_fcxp_alloc(port->fcs);
+	if (!fcxp)
+		return;
+
+	len = fc_rpsc_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rx_fchs->s_id,
+				bfa_fcs_port_get_fcid(port), rx_fchs->ox_id,
+				&speeds);
+
+	bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE,
+		      FC_CLASS_3, len, &fchs, NULL, NULL, FC_MAX_PDUSZ, 0);
+}
+
+static void
+bfa_fcs_rport_process_adisc(struct bfa_fcs_rport_s *rport,
+			struct fchs_s *rx_fchs, u16 len)
+{
+	struct bfa_fcxp_s *fcxp;
+	struct fchs_s fchs;
+	struct bfa_fcs_port_s *port = rport->port;
+	struct fc_adisc_s *adisc;
+
+	bfa_trc(port->fcs, rx_fchs->s_id);
+	bfa_trc(port->fcs, rx_fchs->d_id);
+
+	rport->stats.adisc_rcvd++;
+
+	if (BFA_FCS_VPORT_IS_TARGET_MODE(port)) {
+		/*
+		 * @todo : Target Mode handling
+		 */
+		bfa_trc(port->fcs, rx_fchs->d_id);
+		bfa_assert(0);
+		return;
+	}
+
+	adisc = (struct fc_adisc_s *) (rx_fchs + 1);
+
+	/*
+	 * Accept if the itnim for this rport is online.
Else reject the ADISC + */ + if (bfa_fcs_itnim_get_online_state(rport->itnim) == BFA_STATUS_OK) { + + fcxp = bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) + return; + + len = fc_adisc_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + rx_fchs->s_id, + bfa_fcs_port_get_fcid(port), + rx_fchs->ox_id, port->port_cfg.pwwn, + port->port_cfg.nwwn); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, + BFA_FALSE, FC_CLASS_3, len, &fchs, NULL, NULL, + FC_MAX_PDUSZ, 0); + } else { + rport->stats.adisc_rejected++; + bfa_fcs_rport_send_ls_rjt(rport, rx_fchs, + FC_LS_RJT_RSN_UNABLE_TO_PERF_CMD, + FC_LS_RJT_EXP_LOGIN_REQUIRED); + } + +} + +static void +bfa_fcs_rport_hal_online(struct bfa_fcs_rport_s *rport) +{ + struct bfa_fcs_port_s *port = rport->port; + struct bfa_rport_info_s rport_info; + + rport_info.pid = rport->pid; + rport_info.local_pid = port->pid; + rport_info.lp_tag = port->lp_tag; + rport_info.vf_id = port->fabric->vf_id; + rport_info.vf_en = port->fabric->is_vf; + rport_info.fc_class = rport->fc_cos; + rport_info.cisc = rport->cisc; + rport_info.max_frmsz = rport->maxfrsize; + bfa_rport_online(rport->bfa_rport, &rport_info); +} + +static void +bfa_fcs_rport_fc4_pause(struct bfa_fcs_rport_s *rport) +{ + if (bfa_fcs_port_is_initiator(rport->port)) + bfa_fcs_itnim_pause(rport->itnim); + + if (bfa_fcs_port_is_target(rport->port)) + bfa_fcs_tin_pause(rport->tin); +} + +static void +bfa_fcs_rport_fc4_resume(struct bfa_fcs_rport_s *rport) +{ + if (bfa_fcs_port_is_initiator(rport->port)) + bfa_fcs_itnim_resume(rport->itnim); + + if (bfa_fcs_port_is_target(rport->port)) + bfa_fcs_tin_resume(rport->tin); +} + +static struct bfa_fcs_rport_s * +bfa_fcs_rport_alloc(struct bfa_fcs_port_s *port, wwn_t pwwn, u32 rpid) +{ + struct bfa_fcs_s *fcs = port->fcs; + struct bfa_fcs_rport_s *rport; + struct bfad_rport_s *rport_drv; + + /** + * allocate rport + */ + if (bfa_fcb_rport_alloc(fcs->bfad, &rport, &rport_drv) + != BFA_STATUS_OK) { + bfa_trc(fcs, rpid); + return NULL; + } + + /* + * Initialize r-port + */ + rport->port = port; + rport->fcs = fcs; + rport->rp_drv = rport_drv; + rport->pid = rpid; + rport->pwwn = pwwn; + + /** + * allocate BFA rport + */ + rport->bfa_rport = bfa_rport_create(port->fcs->bfa, rport); + if (!rport->bfa_rport) { + bfa_trc(fcs, rpid); + kfree(rport_drv); + return NULL; + } + + /** + * allocate FC-4s + */ + bfa_assert(bfa_fcs_port_is_initiator(port) ^ + bfa_fcs_port_is_target(port)); + + if (bfa_fcs_port_is_initiator(port)) { + rport->itnim = bfa_fcs_itnim_create(rport); + if (!rport->itnim) { + bfa_trc(fcs, rpid); + bfa_rport_delete(rport->bfa_rport); + kfree(rport_drv); + return NULL; + } + } + + if (bfa_fcs_port_is_target(port)) { + rport->tin = bfa_fcs_tin_create(rport); + if (!rport->tin) { + bfa_trc(fcs, rpid); + bfa_rport_delete(rport->bfa_rport); + kfree(rport_drv); + return NULL; + } + } + + bfa_fcs_port_add_rport(port, rport); + + bfa_sm_set_state(rport, bfa_fcs_rport_sm_uninit); + + /* + * Initialize the Rport Features(RPF) Sub Module + */ + if (!BFA_FCS_PID_IS_WKA(rport->pid)) + bfa_fcs_rpf_init(rport); + + return rport; +} + + +static void +bfa_fcs_rport_free(struct bfa_fcs_rport_s *rport) +{ + struct bfa_fcs_port_s *port = rport->port; + + /** + * - delete FC-4s + * - delete BFA rport + * - remove from queue of rports + */ + if (bfa_fcs_port_is_initiator(port)) + bfa_fcs_itnim_delete(rport->itnim); + + if (bfa_fcs_port_is_target(port)) + bfa_fcs_tin_delete(rport->tin); + + bfa_rport_delete(rport->bfa_rport); + bfa_fcs_port_del_rport(port, rport); + 
kfree(rport->rp_drv); +} + +static void +bfa_fcs_rport_aen_post(struct bfa_fcs_rport_s *rport, + enum bfa_rport_aen_event event, + struct bfa_rport_aen_data_s *data) +{ + union bfa_aen_data_u aen_data; + struct bfa_log_mod_s *logmod = rport->fcs->logm; + wwn_t lpwwn = bfa_fcs_port_get_pwwn(rport->port); + wwn_t rpwwn = rport->pwwn; + char lpwwn_ptr[BFA_STRING_32]; + char rpwwn_ptr[BFA_STRING_32]; + char *prio_str[] = { "unknown", "high", "medium", "low" }; + + wwn2str(lpwwn_ptr, lpwwn); + wwn2str(rpwwn_ptr, rpwwn); + + switch (event) { + case BFA_RPORT_AEN_ONLINE: + bfa_log(logmod, BFA_AEN_RPORT_ONLINE, rpwwn_ptr, lpwwn_ptr); + break; + case BFA_RPORT_AEN_OFFLINE: + bfa_log(logmod, BFA_AEN_RPORT_OFFLINE, rpwwn_ptr, lpwwn_ptr); + break; + case BFA_RPORT_AEN_DISCONNECT: + bfa_log(logmod, BFA_AEN_RPORT_DISCONNECT, rpwwn_ptr, lpwwn_ptr); + break; + case BFA_RPORT_AEN_QOS_PRIO: + aen_data.rport.priv.qos = data->priv.qos; + bfa_log(logmod, BFA_AEN_RPORT_QOS_PRIO, + prio_str[aen_data.rport.priv.qos.qos_priority], + rpwwn_ptr, lpwwn_ptr); + break; + case BFA_RPORT_AEN_QOS_FLOWID: + aen_data.rport.priv.qos = data->priv.qos; + bfa_log(logmod, BFA_AEN_RPORT_QOS_FLOWID, + aen_data.rport.priv.qos.qos_flow_id, rpwwn_ptr, + lpwwn_ptr); + break; + default: + break; + } + + aen_data.rport.vf_id = rport->port->fabric->vf_id; + aen_data.rport.ppwwn = + bfa_fcs_port_get_pwwn(bfa_fcs_get_base_port(rport->fcs)); + aen_data.rport.lpwwn = lpwwn; + aen_data.rport.rpwwn = rpwwn; +} + +static void +bfa_fcs_rport_online_action(struct bfa_fcs_rport_s *rport) +{ + struct bfa_fcs_port_s *port = rport->port; + + rport->stats.onlines++; + + if (bfa_fcs_port_is_initiator(port)) { + bfa_fcs_itnim_rport_online(rport->itnim); + if (!BFA_FCS_PID_IS_WKA(rport->pid)) + bfa_fcs_rpf_rport_online(rport); + }; + + if (bfa_fcs_port_is_target(port)) + bfa_fcs_tin_rport_online(rport->tin); + + /* + * Don't post events for well known addresses + */ + if (!BFA_FCS_PID_IS_WKA(rport->pid)) + bfa_fcs_rport_aen_post(rport, BFA_RPORT_AEN_ONLINE, NULL); +} + +static void +bfa_fcs_rport_offline_action(struct bfa_fcs_rport_s *rport) +{ + struct bfa_fcs_port_s *port = rport->port; + + rport->stats.offlines++; + + /* + * Don't post events for well known addresses + */ + if (!BFA_FCS_PID_IS_WKA(rport->pid)) { + if (bfa_fcs_port_is_online(rport->port) == BFA_TRUE) { + bfa_fcs_rport_aen_post(rport, BFA_RPORT_AEN_DISCONNECT, + NULL); + } else { + bfa_fcs_rport_aen_post(rport, BFA_RPORT_AEN_OFFLINE, + NULL); + } + } + + if (bfa_fcs_port_is_initiator(port)) { + bfa_fcs_itnim_rport_offline(rport->itnim); + if (!BFA_FCS_PID_IS_WKA(rport->pid)) + bfa_fcs_rpf_rport_offline(rport); + } + + if (bfa_fcs_port_is_target(port)) + bfa_fcs_tin_rport_offline(rport->tin); +} + +/** + * Update rport parameters from PLOGI or PLOGI accept. 
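
The AEN messages above log both WWNs through wwn2str(), which renders the 8-byte world-wide name in the conventional colon-separated form. The helper itself is not part of this hunk; one plausible implementation, offered as an assumption rather than the driver's code:

#include <stdio.h>
#include <stdint.h>

typedef uint64_t wwn_t;

/* buf must hold at least 24 bytes (23 characters plus the NUL). */
static void
wwn2str(char *buf, wwn_t wwn)
{
	const unsigned char *b = (const unsigned char *)&wwn;

	snprintf(buf, 24, "%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
		 b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7]);
}
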
+ */ +static void +bfa_fcs_rport_update(struct bfa_fcs_rport_s *rport, struct fc_logi_s *plogi) +{ + struct bfa_fcs_port_s *port = rport->port; + + /** + * - port name + * - node name + */ + rport->pwwn = plogi->port_name; + rport->nwwn = plogi->node_name; + + /** + * - class of service + */ + rport->fc_cos = 0; + if (plogi->class3.class_valid) + rport->fc_cos = FC_CLASS_3; + + if (plogi->class2.class_valid) + rport->fc_cos |= FC_CLASS_2; + + /** + * - CISC + * - MAX receive frame size + */ + rport->cisc = plogi->csp.cisc; + rport->maxfrsize = bfa_os_ntohs(plogi->class3.rxsz); + + bfa_trc(port->fcs, bfa_os_ntohs(plogi->csp.bbcred)); + bfa_trc(port->fcs, port->fabric->bb_credit); + /** + * Direct Attach P2P mode : + * This is to handle a bug (233476) in IBM targets in Direct Attach + * Mode. Basically, in FLOGI Accept the target would have erroneously + * set the BB Credit to the value used in the FLOGI sent by the HBA. + * It uses the correct value (its own BB credit) in PLOGI. + */ + if ((!bfa_fcs_fabric_is_switched(port->fabric)) + && (bfa_os_ntohs(plogi->csp.bbcred) < port->fabric->bb_credit)) { + + bfa_trc(port->fcs, bfa_os_ntohs(plogi->csp.bbcred)); + bfa_trc(port->fcs, port->fabric->bb_credit); + + port->fabric->bb_credit = bfa_os_ntohs(plogi->csp.bbcred); + bfa_pport_set_tx_bbcredit(port->fcs->bfa, + port->fabric->bb_credit); + } + +} + +/** + * Called to handle LOGO received from an existing remote port. + */ +static void +bfa_fcs_rport_process_logo(struct bfa_fcs_rport_s *rport, struct fchs_s *fchs) +{ + rport->reply_oxid = fchs->ox_id; + bfa_trc(rport->fcs, rport->reply_oxid); + + rport->stats.logo_rcvd++; + bfa_sm_send_event(rport, RPSM_EVENT_LOGO_RCVD); +} + + + +/** + * fcs_rport_public FCS rport public interfaces + */ + +/** + * Called by bport/vport to create a remote port instance for a discovered + * remote device. + * + * @param[in] port - base port or vport + * @param[in] rpid - remote port ID + * + * @return None + */ +struct bfa_fcs_rport_s * +bfa_fcs_rport_create(struct bfa_fcs_port_s *port, u32 rpid) +{ + struct bfa_fcs_rport_s *rport; + + bfa_trc(port->fcs, rpid); + rport = bfa_fcs_rport_alloc(port, WWN_NULL, rpid); + if (!rport) + return NULL; + + bfa_sm_send_event(rport, RPSM_EVENT_PLOGI_SEND); + return rport; +} + +/** + * Called to create a rport for which only the wwn is known. + * + * @param[in] port - base port + * @param[in] rpwwn - remote port wwn + * + * @return None + */ +struct bfa_fcs_rport_s * +bfa_fcs_rport_create_by_wwn(struct bfa_fcs_port_s *port, wwn_t rpwwn) +{ + struct bfa_fcs_rport_s *rport; + + bfa_trc(port->fcs, rpwwn); + rport = bfa_fcs_rport_alloc(port, rpwwn, 0); + if (!rport) + return NULL; + + bfa_sm_send_event(rport, RPSM_EVENT_ADDRESS_DISC); + return rport; +} + +/** + * Called by bport in private loop topology to indicate that a + * rport has been discovered and plogi has been completed. + * + * @param[in] port - base port or vport + * @param[in] rpid - remote port ID + */ +void +bfa_fcs_rport_start(struct bfa_fcs_port_s *port, struct fchs_s *fchs, + struct fc_logi_s *plogi) +{ + struct bfa_fcs_rport_s *rport; + + rport = bfa_fcs_rport_alloc(port, WWN_NULL, fchs->s_id); + if (!rport) + return; + + bfa_fcs_rport_update(rport, plogi); + + bfa_sm_send_event(rport, RPSM_EVENT_PLOGI_COMP); +} + +/** + * Called by bport/vport to handle PLOGI received from a new remote port. + * If an existing rport does a plogi, it will be handled separately. 
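
The three creation entry points above cover every way an rport comes into being, so a caller in the lport code picks one per discovery style. A hedged usage sketch using the driver's own types (the wrapper function itself is invented for illustration):

static void
discovery_examples(struct bfa_fcs_port_s *port, u32 pid, wwn_t pwwn,
		   struct fchs_s *rx_fchs, struct fc_logi_s *plogi)
{
	struct bfa_fcs_rport_s *rp;

	/* Fabric discovery: PID learned from the name server, PLOGI needed. */
	rp = bfa_fcs_rport_create(port, pid);

	/* Boot-lun style config: only the target WWN is known up front. */
	rp = bfa_fcs_rport_create_by_wwn(port, pwwn);

	/* Private loop: PLOGI already completed on the wire. */
	bfa_fcs_rport_start(port, rx_fchs, plogi);

	(void)rp;	/* silence unused warnings in this sketch */
}
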
+ */ +void +bfa_fcs_rport_plogi_create(struct bfa_fcs_port_s *port, struct fchs_s *fchs, + struct fc_logi_s *plogi) +{ + struct bfa_fcs_rport_s *rport; + + rport = bfa_fcs_rport_alloc(port, plogi->port_name, fchs->s_id); + if (!rport) + return; + + bfa_fcs_rport_update(rport, plogi); + + rport->reply_oxid = fchs->ox_id; + bfa_trc(rport->fcs, rport->reply_oxid); + + rport->stats.plogi_rcvd++; + bfa_sm_send_event(rport, RPSM_EVENT_PLOGI_RCVD); +} + +static int +wwn_compare(wwn_t wwn1, wwn_t wwn2) +{ + u8 *b1 = (u8 *) &wwn1; + u8 *b2 = (u8 *) &wwn2; + int i; + + for (i = 0; i < sizeof(wwn_t); i++) { + if (b1[i] < b2[i]) + return -1; + if (b1[i] > b2[i]) + return 1; + } + return 0; +} + +/** + * Called by bport/vport to handle PLOGI received from an existing + * remote port. + */ +void +bfa_fcs_rport_plogi(struct bfa_fcs_rport_s *rport, struct fchs_s *rx_fchs, + struct fc_logi_s *plogi) +{ + /** + * @todo Handle P2P and initiator-initiator. + */ + + bfa_fcs_rport_update(rport, plogi); + + rport->reply_oxid = rx_fchs->ox_id; + bfa_trc(rport->fcs, rport->reply_oxid); + + /** + * In Switched fabric topology, + * PLOGI to each other. If our pwwn is smaller, ignore it, + * if it is not a well known address. + * If the link topology is N2N, + * this Plogi should be accepted. + */ + if ((wwn_compare(rport->port->port_cfg.pwwn, rport->pwwn) == -1) + && (bfa_fcs_fabric_is_switched(rport->port->fabric)) + && (!BFA_FCS_PID_IS_WKA(rport->pid))) { + bfa_trc(rport->fcs, rport->pid); + return; + } + + rport->stats.plogi_rcvd++; + bfa_sm_send_event(rport, RPSM_EVENT_PLOGI_RCVD); +} + +/** + * Called by bport/vport to delete a remote port instance. + * +* Rport delete is called under the following conditions: + * - vport is deleted + * - vf is deleted + * - explicit request from OS to delete rport (vmware) + */ +void +bfa_fcs_rport_delete(struct bfa_fcs_rport_s *rport) +{ + bfa_sm_send_event(rport, RPSM_EVENT_DELETE); +} + +/** + * Called by bport/vport to when a target goes offline. + * + */ +void +bfa_fcs_rport_offline(struct bfa_fcs_rport_s *rport) +{ + bfa_sm_send_event(rport, RPSM_EVENT_LOGO_IMP); +} + +/** + * Called by bport in n2n when a target (attached port) becomes online. + * + */ +void +bfa_fcs_rport_online(struct bfa_fcs_rport_s *rport) +{ + bfa_sm_send_event(rport, RPSM_EVENT_PLOGI_SEND); +} + +/** + * Called by bport/vport to notify SCN for the remote port + */ +void +bfa_fcs_rport_scn(struct bfa_fcs_rport_s *rport) +{ + + rport->stats.rscns++; + bfa_sm_send_event(rport, RPSM_EVENT_SCN); +} + +/** + * Called by fcpim to notify that the ITN cleanup is done. + */ +void +bfa_fcs_rport_itnim_ack(struct bfa_fcs_rport_s *rport) +{ + bfa_sm_send_event(rport, RPSM_EVENT_FC4_OFFLINE); +} + +/** + * Called by fcptm to notify that the ITN cleanup is done. + */ +void +bfa_fcs_rport_tin_ack(struct bfa_fcs_rport_s *rport) +{ + bfa_sm_send_event(rport, RPSM_EVENT_FC4_OFFLINE); +} + +/** + * This routine BFA callback for bfa_rport_online() call. + * + * param[in] cb_arg - rport struct. + * + * return + * void + * +* Special Considerations: + * + * note + */ +void +bfa_cb_rport_online(void *cbarg) +{ + + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)cbarg; + + bfa_trc(rport->fcs, rport->pwwn); + bfa_sm_send_event(rport, RPSM_EVENT_HCB_ONLINE); +} + +/** + * This routine BFA callback for bfa_rport_offline() call. 
+ * + * param[in] rport - + * + * return + * void + * + * Special Considerations: + * + * note + */ +void +bfa_cb_rport_offline(void *cbarg) +{ + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)cbarg; + + bfa_trc(rport->fcs, rport->pwwn); + bfa_sm_send_event(rport, RPSM_EVENT_HCB_OFFLINE); +} + +/** + * This routine is a static BFA callback when there is a QoS flow_id + * change notification + * + * @param[in] rport - + * + * @return void + * + * Special Considerations: + * + * @note + */ +void +bfa_cb_rport_qos_scn_flowid(void *cbarg, + struct bfa_rport_qos_attr_s old_qos_attr, + struct bfa_rport_qos_attr_s new_qos_attr) +{ + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)cbarg; + struct bfa_rport_aen_data_s aen_data; + + bfa_trc(rport->fcs, rport->pwwn); + aen_data.priv.qos = new_qos_attr; + bfa_fcs_rport_aen_post(rport, BFA_RPORT_AEN_QOS_FLOWID, &aen_data); +} + +/** + * This routine is a static BFA callback when there is a QoS priority + * change notification + * + * @param[in] rport - + * + * @return void + * + * Special Considerations: + * + * @note + */ +void +bfa_cb_rport_qos_scn_prio(void *cbarg, struct bfa_rport_qos_attr_s old_qos_attr, + struct bfa_rport_qos_attr_s new_qos_attr) +{ + struct bfa_fcs_rport_s *rport = (struct bfa_fcs_rport_s *)cbarg; + struct bfa_rport_aen_data_s aen_data; + + bfa_trc(rport->fcs, rport->pwwn); + aen_data.priv.qos = new_qos_attr; + bfa_fcs_rport_aen_post(rport, BFA_RPORT_AEN_QOS_PRIO, &aen_data); +} + +/** + * Called to process any unsolicted frames from this remote port + */ +void +bfa_fcs_rport_logo_imp(struct bfa_fcs_rport_s *rport) +{ + bfa_sm_send_event(rport, RPSM_EVENT_LOGO_IMP); +} + +/** + * Called to process any unsolicted frames from this remote port + */ +void +bfa_fcs_rport_uf_recv(struct bfa_fcs_rport_s *rport, struct fchs_s *fchs, + u16 len) +{ + struct bfa_fcs_port_s *port = rport->port; + struct fc_els_cmd_s *els_cmd; + + bfa_trc(rport->fcs, fchs->s_id); + bfa_trc(rport->fcs, fchs->d_id); + bfa_trc(rport->fcs, fchs->type); + + if (fchs->type != FC_TYPE_ELS) + return; + + els_cmd = (struct fc_els_cmd_s *) (fchs + 1); + + bfa_trc(rport->fcs, els_cmd->els_code); + + switch (els_cmd->els_code) { + case FC_ELS_LOGO: + bfa_fcs_rport_process_logo(rport, fchs); + break; + + case FC_ELS_ADISC: + bfa_fcs_rport_process_adisc(rport, fchs, len); + break; + + case FC_ELS_PRLO: + if (bfa_fcs_port_is_initiator(port)) + bfa_fcs_fcpim_uf_recv(rport->itnim, fchs, len); + + if (bfa_fcs_port_is_target(port)) + bfa_fcs_fcptm_uf_recv(rport->tin, fchs, len); + break; + + case FC_ELS_PRLI: + bfa_fcs_rport_process_prli(rport, fchs, len); + break; + + case FC_ELS_RPSC: + bfa_fcs_rport_process_rpsc(rport, fchs, len); + break; + + default: + bfa_fcs_rport_send_ls_rjt(rport, fchs, + FC_LS_RJT_RSN_CMD_NOT_SUPP, + FC_LS_RJT_EXP_NO_ADDL_INFO); + break; + } +} + +/* + * Send a LS reject + */ +static void +bfa_fcs_rport_send_ls_rjt(struct bfa_fcs_rport_s *rport, struct fchs_s *rx_fchs, + u8 reason_code, u8 reason_code_expl) +{ + struct bfa_fcs_port_s *port = rport->port; + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + int len; + + bfa_trc(rport->fcs, rx_fchs->s_id); + + fcxp = bfa_fcs_fcxp_alloc(rport->fcs); + if (!fcxp) + return; + + len = fc_ls_rjt_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rx_fchs->s_id, + bfa_fcs_port_get_fcid(port), rx_fchs->ox_id, + reason_code, reason_code_expl); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, NULL, NULL, FC_MAX_PDUSZ, 0); +} + +/** + * Module 
initialization + */ +void +bfa_fcs_rport_modinit(struct bfa_fcs_s *fcs) +{ +} + +/** + * Module cleanup + */ +void +bfa_fcs_rport_modexit(struct bfa_fcs_s *fcs) +{ + bfa_fcs_modexit_comp(fcs); +} + +/** + * Return state of rport. + */ +int +bfa_fcs_rport_get_state(struct bfa_fcs_rport_s *rport) +{ + return bfa_sm_to_state(rport_sm_table, rport->sm); +} + +/** + * Called by the Driver to set rport delete/ageout timeout + * + * param[in] rport timeout value in seconds. + * + * return None + */ +void +bfa_fcs_rport_set_del_timeout(u8 rport_tmo) +{ + /* + * convert to Millisecs + */ + if (rport_tmo > 0) + bfa_fcs_rport_del_timeout = rport_tmo * 1000; +} diff --git a/drivers/scsi/bfa/rport_api.c b/drivers/scsi/bfa/rport_api.c new file mode 100644 index 000000000000..3dae1774181e --- /dev/null +++ b/drivers/scsi/bfa/rport_api.c @@ -0,0 +1,180 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ +#include +#include +#include "fcs_vport.h" +#include "fcs_lport.h" +#include "fcs_rport.h" +#include "fcs_trcmod.h" + +BFA_TRC_FILE(FCS, RPORT_API); + +/** + * rport_api.c Remote port implementation. + */ + +/** + * fcs_rport_api FCS rport API. + */ + +/** + * Direct API to add a target by port wwn. This interface is used, for + * example, by bios when target pwwn is known from boot lun configuration. + */ +bfa_status_t +bfa_fcs_rport_add(struct bfa_fcs_port_s *port, wwn_t *pwwn, + struct bfa_fcs_rport_s *rport, + struct bfad_rport_s *rport_drv) +{ + bfa_trc(port->fcs, *pwwn); + + return BFA_STATUS_OK; +} + +/** + * Direct API to remove a target and its associated resources. This + * interface is used, for example, by vmware driver to remove target + * ports from the target list for a VM. + */ +bfa_status_t +bfa_fcs_rport_remove(struct bfa_fcs_rport_s *rport_in) +{ + + struct bfa_fcs_rport_s *rport; + + bfa_trc(rport_in->fcs, rport_in->pwwn); + + rport = bfa_fcs_port_get_rport_by_pwwn(rport_in->port, rport_in->pwwn); + if (rport == NULL) { + /* + * TBD Error handling + */ + bfa_trc(rport_in->fcs, rport_in->pid); + return BFA_STATUS_UNKNOWN_RWWN; + } + + /* + * TBD if this remote port is online, send a logo + */ + return BFA_STATUS_OK; + +} + +/** + * Remote device status for display/debug. 
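
External callers, for example an IOCTL or sysfs path in the driver shim, would normally pair the lookup helpers below with the attribute getter. A small hedged sketch of that pairing (the wrapper is illustrative, not part of the patch):

static bfa_status_t
show_rport_attr(struct bfa_fcs_port_s *port, wwn_t rpwwn,
		struct bfa_rport_attr_s *attr)
{
	struct bfa_fcs_rport_s *rp;

	rp = bfa_fcs_rport_lookup(port, rpwwn);
	if (rp == NULL)
		return BFA_STATUS_UNKNOWN_RWWN;	/* not discovered (yet) */

	bfa_fcs_rport_get_attr(rp, attr);
	return BFA_STATUS_OK;
}
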
+ */ +void +bfa_fcs_rport_get_attr(struct bfa_fcs_rport_s *rport, + struct bfa_rport_attr_s *rport_attr) +{ + struct bfa_rport_qos_attr_s qos_attr; + struct bfa_fcs_port_s *port = rport->port; + + bfa_os_memset(rport_attr, 0, sizeof(struct bfa_rport_attr_s)); + + rport_attr->pid = rport->pid; + rport_attr->pwwn = rport->pwwn; + rport_attr->nwwn = rport->nwwn; + rport_attr->cos_supported = rport->fc_cos; + rport_attr->df_sz = rport->maxfrsize; + rport_attr->state = bfa_fcs_rport_get_state(rport); + rport_attr->fc_cos = rport->fc_cos; + rport_attr->cisc = rport->cisc; + rport_attr->scsi_function = rport->scsi_function; + rport_attr->curr_speed = rport->rpf.rpsc_speed; + rport_attr->assigned_speed = rport->rpf.assigned_speed; + + bfa_rport_get_qos_attr(rport->bfa_rport, &qos_attr); + rport_attr->qos_attr = qos_attr; + + rport_attr->trl_enforced = BFA_FALSE; + if (bfa_pport_is_ratelim(port->fcs->bfa)) { + if ((rport->rpf.rpsc_speed == BFA_PPORT_SPEED_UNKNOWN) || + (rport->rpf.rpsc_speed < + bfa_fcs_port_get_rport_max_speed(port))) + rport_attr->trl_enforced = BFA_TRUE; + } + + /* + * TODO + * rport->symname + */ +} + +/** + * Per remote device statistics. + */ +void +bfa_fcs_rport_get_stats(struct bfa_fcs_rport_s *rport, + struct bfa_rport_stats_s *stats) +{ + *stats = rport->stats; +} + +void +bfa_fcs_rport_clear_stats(struct bfa_fcs_rport_s *rport) +{ + bfa_os_memset((char *)&rport->stats, 0, + sizeof(struct bfa_rport_stats_s)); +} + +struct bfa_fcs_rport_s * +bfa_fcs_rport_lookup(struct bfa_fcs_port_s *port, wwn_t rpwwn) +{ + struct bfa_fcs_rport_s *rport; + + rport = bfa_fcs_port_get_rport_by_pwwn(port, rpwwn); + if (rport == NULL) { + /* + * TBD Error handling + */ + } + + return rport; +} + +struct bfa_fcs_rport_s * +bfa_fcs_rport_lookup_by_nwwn(struct bfa_fcs_port_s *port, wwn_t rnwwn) +{ + struct bfa_fcs_rport_s *rport; + + rport = bfa_fcs_port_get_rport_by_nwwn(port, rnwwn); + if (rport == NULL) { + /* + * TBD Error handling + */ + } + + return rport; +} + +/* + * This API is to set the Rport's speed. Should be used when RPSC is not + * supported by the rport. + */ +void +bfa_fcs_rport_set_speed(struct bfa_fcs_rport_s *rport, + enum bfa_pport_speed speed) +{ + rport->rpf.assigned_speed = speed; + + /* Set this speed in f/w only if the RPSC speed is not available */ + if (rport->rpf.rpsc_speed == BFA_PPORT_SPEED_UNKNOWN) + bfa_rport_speed(rport->bfa_rport, speed); +} + + diff --git a/drivers/scsi/bfa/rport_ftrs.c b/drivers/scsi/bfa/rport_ftrs.c new file mode 100644 index 000000000000..8a1f59d596c1 --- /dev/null +++ b/drivers/scsi/bfa/rport_ftrs.c @@ -0,0 +1,375 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * rport_ftrs.c Remote port features (RPF) implementation. 
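
The RPF module below asks each rport for its operating speed via RPSC and converts the reply with fc_rpsc_operspeed_to_bfa_speed(). RPSC encodes speed as one bit per rate; the bit values in this sketch are assumptions for illustration and should be checked against FC-GS before reuse:

enum speed { SPEED_UNKNOWN, SPEED_1G, SPEED_2G, SPEED_4G, SPEED_8G };

/* Assumed RPSC operating-speed bit encoding (illustrative only). */
static enum speed
rpsc_operspeed_to_speed(unsigned int op)
{
	if (op & 0x8000)
		return SPEED_1G;
	if (op & 0x4000)
		return SPEED_2G;
	if (op & 0x2000)
		return SPEED_4G;
	if (op & 0x1000)
		return SPEED_8G;
	return SPEED_UNKNOWN;	/* peer did not report a usable speed */
}
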
+ */
+
+#include 
+#include 
+#include "fcbuild.h"
+#include "fcs_rport.h"
+#include "fcs_lport.h"
+#include "fcs_trcmod.h"
+#include "fcs_fcxp.h"
+#include "fcs.h"
+
+BFA_TRC_FILE(FCS, RPORT_FTRS);
+
+#define BFA_FCS_RPF_RETRIES	(3)
+#define BFA_FCS_RPF_RETRY_TIMEOUT  (1000) /* 1 sec (in millisecs) */
+
+static void	bfa_fcs_rpf_send_rpsc2(void *rport_cbarg,
+			struct bfa_fcxp_s *fcxp_alloced);
+static void	bfa_fcs_rpf_rpsc2_response(void *fcsarg,
+			struct bfa_fcxp_s *fcxp, void *cbarg,
+			bfa_status_t req_status, u32 rsp_len,
+			u32 resid_len,
+			struct fchs_s *rsp_fchs);
+static void	bfa_fcs_rpf_timeout(void *arg);
+
+/**
+ *  fcs_rport_ftrs_sm FCS rport state machine events
+ */
+
+enum rpf_event {
+	RPFSM_EVENT_RPORT_OFFLINE = 1,	/* Rport offline */
+	RPFSM_EVENT_RPORT_ONLINE = 2,	/* Rport online */
+	RPFSM_EVENT_FCXP_SENT = 3,	/* Frame has been sent */
+	RPFSM_EVENT_TIMEOUT = 4,	/* Rport SM timeout event */
+	RPFSM_EVENT_RPSC_COMP = 5,
+	RPFSM_EVENT_RPSC_FAIL = 6,
+	RPFSM_EVENT_RPSC_ERROR = 7,
+};
+
+static void	bfa_fcs_rpf_sm_uninit(struct bfa_fcs_rpf_s *rpf,
+					enum rpf_event event);
+static void	bfa_fcs_rpf_sm_rpsc_sending(struct bfa_fcs_rpf_s *rpf,
+					enum rpf_event event);
+static void	bfa_fcs_rpf_sm_rpsc(struct bfa_fcs_rpf_s *rpf,
+					enum rpf_event event);
+static void	bfa_fcs_rpf_sm_rpsc_retry(struct bfa_fcs_rpf_s *rpf,
+					enum rpf_event event);
+static void	bfa_fcs_rpf_sm_offline(struct bfa_fcs_rpf_s *rpf,
+					enum rpf_event event);
+static void	bfa_fcs_rpf_sm_online(struct bfa_fcs_rpf_s *rpf,
+					enum rpf_event event);
+
+static void
+bfa_fcs_rpf_sm_uninit(struct bfa_fcs_rpf_s *rpf, enum rpf_event event)
+{
+	struct bfa_fcs_rport_s *rport = rpf->rport;
+
+	bfa_trc(rport->fcs, rport->pwwn);
+	bfa_trc(rport->fcs, rport->pid);
+	bfa_trc(rport->fcs, event);
+
+	switch (event) {
+	case RPFSM_EVENT_RPORT_ONLINE:
+		if (!BFA_FCS_PID_IS_WKA(rport->pid)) {
+			bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_rpsc_sending);
+			rpf->rpsc_retries = 0;
+			bfa_fcs_rpf_send_rpsc2(rpf, NULL);
+			break;
+		}
+		/* WKA rport: RPSC is not attempted -- fall through */
+
+	case RPFSM_EVENT_RPORT_OFFLINE:
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+static void
+bfa_fcs_rpf_sm_rpsc_sending(struct bfa_fcs_rpf_s *rpf, enum rpf_event event)
+{
+	struct bfa_fcs_rport_s *rport = rpf->rport;
+
+	bfa_trc(rport->fcs, event);
+
+	switch (event) {
+	case RPFSM_EVENT_FCXP_SENT:
+		bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_rpsc);
+		break;
+
+	case RPFSM_EVENT_RPORT_OFFLINE:
+		bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_offline);
+		bfa_fcxp_walloc_cancel(rport->fcs->bfa, &rpf->fcxp_wqe);
+		rpf->rpsc_retries = 0;
+		break;
+
+	default:
+		bfa_assert(0);
+	}
+}
+
+static void
+bfa_fcs_rpf_sm_rpsc(struct bfa_fcs_rpf_s *rpf, enum rpf_event event)
+{
+	struct bfa_fcs_rport_s *rport = rpf->rport;
+
+	bfa_trc(rport->fcs, rport->pid);
+	bfa_trc(rport->fcs, event);
+
+	switch (event) {
+	case RPFSM_EVENT_RPSC_COMP:
+		bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_online);
+		/* Update speed info in f/w via BFA */
+		if (rpf->rpsc_speed != BFA_PPORT_SPEED_UNKNOWN) {
+			bfa_rport_speed(rport->bfa_rport, rpf->rpsc_speed);
+		} else if (rpf->assigned_speed != BFA_PPORT_SPEED_UNKNOWN) {
+			bfa_rport_speed(rport->bfa_rport, rpf->assigned_speed);
+		}
+		break;
+
+	case RPFSM_EVENT_RPSC_FAIL:
+		/* RPSC not supported by rport */
+		bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_online);
+		break;
+
+	case RPFSM_EVENT_RPSC_ERROR:
+		/* need to retry...delayed a bit.
*/ + if (rpf->rpsc_retries++ < BFA_FCS_RPF_RETRIES) { + bfa_timer_start(rport->fcs->bfa, &rpf->timer, + bfa_fcs_rpf_timeout, rpf, + BFA_FCS_RPF_RETRY_TIMEOUT); + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_rpsc_retry); + } else { + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_online); + } + break; + + case RPFSM_EVENT_RPORT_OFFLINE : + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_offline); + bfa_fcxp_discard(rpf->fcxp); + rpf->rpsc_retries = 0; + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_rpf_sm_rpsc_retry(struct bfa_fcs_rpf_s *rpf, enum rpf_event event) +{ + struct bfa_fcs_rport_s *rport = rpf->rport; + + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPFSM_EVENT_TIMEOUT : + /* re-send the RPSC */ + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_rpsc_sending); + bfa_fcs_rpf_send_rpsc2(rpf, NULL); + break; + + case RPFSM_EVENT_RPORT_OFFLINE : + bfa_timer_stop(&rpf->timer); + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_offline); + rpf->rpsc_retries = 0; + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_rpf_sm_online(struct bfa_fcs_rpf_s *rpf, enum rpf_event event) +{ + struct bfa_fcs_rport_s *rport = rpf->rport; + + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPFSM_EVENT_RPORT_OFFLINE : + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_offline); + rpf->rpsc_retries = 0; + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_rpf_sm_offline(struct bfa_fcs_rpf_s *rpf, enum rpf_event event) +{ + struct bfa_fcs_rport_s *rport = rpf->rport; + + bfa_trc(rport->fcs, rport->pwwn); + bfa_trc(rport->fcs, rport->pid); + bfa_trc(rport->fcs, event); + + switch (event) { + case RPFSM_EVENT_RPORT_ONLINE : + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_rpsc_sending); + bfa_fcs_rpf_send_rpsc2(rpf, NULL); + break; + + case RPFSM_EVENT_RPORT_OFFLINE : + break; + + default: + bfa_assert(0); + } +} +/** + * Called when Rport is created. + */ +void bfa_fcs_rpf_init(struct bfa_fcs_rport_s *rport) +{ + struct bfa_fcs_rpf_s *rpf = &rport->rpf; + + bfa_trc(rport->fcs, rport->pid); + rpf->rport = rport; + + bfa_sm_set_state(rpf, bfa_fcs_rpf_sm_uninit); +} + +/** + * Called when Rport becomes online + */ +void bfa_fcs_rpf_rport_online(struct bfa_fcs_rport_s *rport) +{ + bfa_trc(rport->fcs, rport->pid); + + if (__fcs_min_cfg(rport->port->fcs)) + return; + + if (bfa_fcs_fabric_is_switched(rport->port->fabric)) + bfa_sm_send_event(&rport->rpf, RPFSM_EVENT_RPORT_ONLINE); +} + +/** + * Called when Rport becomes offline + */ +void bfa_fcs_rpf_rport_offline(struct bfa_fcs_rport_s *rport) +{ + bfa_trc(rport->fcs, rport->pid); + + if (__fcs_min_cfg(rport->port->fcs)) + return; + + bfa_sm_send_event(&rport->rpf, RPFSM_EVENT_RPORT_OFFLINE); +} + +static void +bfa_fcs_rpf_timeout(void *arg) +{ + struct bfa_fcs_rpf_s *rpf = (struct bfa_fcs_rpf_s *) arg; + struct bfa_fcs_rport_s *rport = rpf->rport; + + bfa_trc(rport->fcs, rport->pid); + bfa_sm_send_event(rpf, RPFSM_EVENT_TIMEOUT); +} + +static void +bfa_fcs_rpf_send_rpsc2(void *rpf_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_rpf_s *rpf = (struct bfa_fcs_rpf_s *)rpf_cbarg; + struct bfa_fcs_rport_s *rport = rpf->rport; + struct bfa_fcs_port_s *port = rport->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(rport->fcs, rport->pwwn); + + fcxp = fcxp_alloced ? 
fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &rpf->fcxp_wqe, + bfa_fcs_rpf_send_rpsc2, rpf); + return; + } + rpf->fcxp = fcxp; + + len = fc_rpsc2_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rport->pid, + bfa_fcs_port_get_fcid(port), &rport->pid, 1); + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_rpf_rpsc2_response, + rpf, FC_MAX_PDUSZ, FC_RA_TOV); + rport->stats.rpsc_sent++; + bfa_sm_send_event(rpf, RPFSM_EVENT_FCXP_SENT); + +} + +static void +bfa_fcs_rpf_rpsc2_response(void *fcsarg, struct bfa_fcxp_s *fcxp, void *cbarg, + bfa_status_t req_status, u32 rsp_len, + u32 resid_len, struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_rpf_s *rpf = (struct bfa_fcs_rpf_s *) cbarg; + struct bfa_fcs_rport_s *rport = rpf->rport; + struct fc_ls_rjt_s *ls_rjt; + struct fc_rpsc2_acc_s *rpsc2_acc; + u16 num_ents; + + bfa_trc(rport->fcs, req_status); + + if (req_status != BFA_STATUS_OK) { + bfa_trc(rport->fcs, req_status); + if (req_status == BFA_STATUS_ETIMER) + rport->stats.rpsc_failed++; + bfa_sm_send_event(rpf, RPFSM_EVENT_RPSC_ERROR); + return; + } + + rpsc2_acc = (struct fc_rpsc2_acc_s *) BFA_FCXP_RSP_PLD(fcxp); + if (rpsc2_acc->els_cmd == FC_ELS_ACC) { + rport->stats.rpsc_accs++; + num_ents = bfa_os_ntohs(rpsc2_acc->num_pids); + bfa_trc(rport->fcs, num_ents); + if (num_ents > 0) { + bfa_assert(rpsc2_acc->port_info[0].pid != rport->pid); + bfa_trc(rport->fcs, + bfa_os_ntohs(rpsc2_acc->port_info[0].pid)); + bfa_trc(rport->fcs, + bfa_os_ntohs(rpsc2_acc->port_info[0].speed)); + bfa_trc(rport->fcs, + bfa_os_ntohs(rpsc2_acc->port_info[0].index)); + bfa_trc(rport->fcs, + rpsc2_acc->port_info[0].type); + + if (rpsc2_acc->port_info[0].speed == 0) { + bfa_sm_send_event(rpf, RPFSM_EVENT_RPSC_ERROR); + return; + } + + rpf->rpsc_speed = fc_rpsc_operspeed_to_bfa_speed( + bfa_os_ntohs(rpsc2_acc->port_info[0].speed)); + + bfa_sm_send_event(rpf, RPFSM_EVENT_RPSC_COMP); + } + } else { + ls_rjt = (struct fc_ls_rjt_s *) BFA_FCXP_RSP_PLD(fcxp); + bfa_trc(rport->fcs, ls_rjt->reason_code); + bfa_trc(rport->fcs, ls_rjt->reason_code_expl); + rport->stats.rpsc_rejects++; + if (ls_rjt->reason_code == FC_LS_RJT_RSN_CMD_NOT_SUPP) { + bfa_sm_send_event(rpf, RPFSM_EVENT_RPSC_FAIL); + } else { + bfa_sm_send_event(rpf, RPFSM_EVENT_RPSC_ERROR); + } + } +} diff --git a/drivers/scsi/bfa/scn.c b/drivers/scsi/bfa/scn.c new file mode 100644 index 000000000000..bd4771ff62c8 --- /dev/null +++ b/drivers/scsi/bfa/scn.c @@ -0,0 +1,482 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include +#include +#include "fcs_lport.h" +#include "fcs_rport.h" +#include "fcs_ms.h" +#include "fcs_trcmod.h" +#include "fcs_fcxp.h" +#include "fcs.h" +#include "lport_priv.h" + +BFA_TRC_FILE(FCS, SCN); + +#define FC_QOS_RSCN_EVENT 0x0c +#define FC_FABRIC_NAME_RSCN_EVENT 0x0d + +/* + * forward declarations + */ +static void bfa_fcs_port_scn_send_scr(void *scn_cbarg, + struct bfa_fcxp_s *fcxp_alloced); +static void bfa_fcs_port_scn_scr_response(void *fcsarg, + struct bfa_fcxp_s *fcxp, + void *cbarg, + bfa_status_t req_status, + u32 rsp_len, + u32 resid_len, + struct fchs_s *rsp_fchs); +static void bfa_fcs_port_scn_send_ls_acc(struct bfa_fcs_port_s *port, + struct fchs_s *rx_fchs); +static void bfa_fcs_port_scn_timeout(void *arg); + +/** + * fcs_scm_sm FCS SCN state machine + */ + +/** + * VPort SCN State Machine events + */ +enum port_scn_event { + SCNSM_EVENT_PORT_ONLINE = 1, + SCNSM_EVENT_PORT_OFFLINE = 2, + SCNSM_EVENT_RSP_OK = 3, + SCNSM_EVENT_RSP_ERROR = 4, + SCNSM_EVENT_TIMEOUT = 5, + SCNSM_EVENT_SCR_SENT = 6, +}; + +static void bfa_fcs_port_scn_sm_offline(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event); +static void bfa_fcs_port_scn_sm_sending_scr(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event); +static void bfa_fcs_port_scn_sm_scr(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event); +static void bfa_fcs_port_scn_sm_scr_retry(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event); +static void bfa_fcs_port_scn_sm_online(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event); + +/** + * Starting state - awaiting link up. + */ +static void +bfa_fcs_port_scn_sm_offline(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event) +{ + switch (event) { + case SCNSM_EVENT_PORT_ONLINE: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_sending_scr); + bfa_fcs_port_scn_send_scr(scn, NULL); + break; + + case SCNSM_EVENT_PORT_OFFLINE: + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_scn_sm_sending_scr(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event) +{ + switch (event) { + case SCNSM_EVENT_SCR_SENT: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_scr); + break; + + case SCNSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_offline); + bfa_fcxp_walloc_cancel(scn->port->fcs->bfa, &scn->fcxp_wqe); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_scn_sm_scr(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event) +{ + struct bfa_fcs_port_s *port = scn->port; + + switch (event) { + case SCNSM_EVENT_RSP_OK: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_online); + break; + + case SCNSM_EVENT_RSP_ERROR: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_scr_retry); + bfa_timer_start(port->fcs->bfa, &scn->timer, + bfa_fcs_port_scn_timeout, scn, + BFA_FCS_RETRY_TIMEOUT); + break; + + case SCNSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_offline); + bfa_fcxp_discard(scn->fcxp); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_scn_sm_scr_retry(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event) +{ + switch (event) { + case SCNSM_EVENT_TIMEOUT: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_sending_scr); + bfa_fcs_port_scn_send_scr(scn, NULL); + break; + + case SCNSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_offline); + bfa_timer_stop(&scn->timer); + break; + + default: + bfa_assert(0); + } +} + +static void +bfa_fcs_port_scn_sm_online(struct bfa_fcs_port_scn_s *scn, + enum port_scn_event event) +{ + switch 
(event) { + case SCNSM_EVENT_PORT_OFFLINE: + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_offline); + break; + + default: + bfa_assert(0); + } +} + + + +/** + * fcs_scn_private FCS SCN private functions + */ + +/** + * This routine will be called to send a SCR command. + */ +static void +bfa_fcs_port_scn_send_scr(void *scn_cbarg, struct bfa_fcxp_s *fcxp_alloced) +{ + struct bfa_fcs_port_scn_s *scn = scn_cbarg; + struct bfa_fcs_port_s *port = scn->port; + struct fchs_s fchs; + int len; + struct bfa_fcxp_s *fcxp; + + bfa_trc(port->fcs, port->pid); + bfa_trc(port->fcs, port->port_cfg.pwwn); + + fcxp = fcxp_alloced ? fcxp_alloced : bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) { + bfa_fcxp_alloc_wait(port->fcs->bfa, &scn->fcxp_wqe, + bfa_fcs_port_scn_send_scr, scn); + return; + } + scn->fcxp = fcxp; + + /* + * Handle VU registrations for Base port only + */ + if ((!port->vport) && bfa_ioc_get_fcmode(&port->fcs->bfa->ioc)) { + len = fc_scr_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), + bfa_lps_is_brcd_fabric(port->fabric->lps), + port->pid, 0); + } else { + len = fc_scr_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), BFA_FALSE, + port->pid, 0); + } + + bfa_fcxp_send(fcxp, NULL, port->fabric->vf_id, port->lp_tag, BFA_FALSE, + FC_CLASS_3, len, &fchs, bfa_fcs_port_scn_scr_response, + (void *)scn, FC_MAX_PDUSZ, FC_RA_TOV); + + bfa_sm_send_event(scn, SCNSM_EVENT_SCR_SENT); +} + +static void +bfa_fcs_port_scn_scr_response(void *fcsarg, struct bfa_fcxp_s *fcxp, + void *cbarg, bfa_status_t req_status, + u32 rsp_len, u32 resid_len, + struct fchs_s *rsp_fchs) +{ + struct bfa_fcs_port_scn_s *scn = (struct bfa_fcs_port_scn_s *)cbarg; + struct bfa_fcs_port_s *port = scn->port; + struct fc_els_cmd_s *els_cmd; + struct fc_ls_rjt_s *ls_rjt; + + bfa_trc(port->fcs, port->port_cfg.pwwn); + + /* + * Sanity Checks + */ + if (req_status != BFA_STATUS_OK) { + bfa_trc(port->fcs, req_status); + bfa_sm_send_event(scn, SCNSM_EVENT_RSP_ERROR); + return; + } + + els_cmd = (struct fc_els_cmd_s *) BFA_FCXP_RSP_PLD(fcxp); + + switch (els_cmd->els_code) { + + case FC_ELS_ACC: + bfa_sm_send_event(scn, SCNSM_EVENT_RSP_OK); + break; + + case FC_ELS_LS_RJT: + + ls_rjt = (struct fc_ls_rjt_s *) BFA_FCXP_RSP_PLD(fcxp); + + bfa_trc(port->fcs, ls_rjt->reason_code); + bfa_trc(port->fcs, ls_rjt->reason_code_expl); + + bfa_sm_send_event(scn, SCNSM_EVENT_RSP_ERROR); + break; + + default: + bfa_sm_send_event(scn, SCNSM_EVENT_RSP_ERROR); + } +} + +/* + * Send a LS Accept + */ +static void +bfa_fcs_port_scn_send_ls_acc(struct bfa_fcs_port_s *port, + struct fchs_s *rx_fchs) +{ + struct fchs_s fchs; + struct bfa_fcxp_s *fcxp; + struct bfa_rport_s *bfa_rport = NULL; + int len; + + bfa_trc(port->fcs, rx_fchs->s_id); + + fcxp = bfa_fcs_fcxp_alloc(port->fcs); + if (!fcxp) + return; + + len = fc_ls_acc_build(&fchs, bfa_fcxp_get_reqbuf(fcxp), rx_fchs->s_id, + bfa_fcs_port_get_fcid(port), rx_fchs->ox_id); + + bfa_fcxp_send(fcxp, bfa_rport, port->fabric->vf_id, port->lp_tag, + BFA_FALSE, FC_CLASS_3, len, &fchs, NULL, NULL, + FC_MAX_PDUSZ, 0); +} + +/** + * This routine will be called by bfa_timer on timer timeouts. + * + * param[in] vport - pointer to bfa_fcs_port_t. 
+ * param[out] vport_status - pointer to return vport status in + * + * return + * void + * +* Special Considerations: + * + * note + */ +static void +bfa_fcs_port_scn_timeout(void *arg) +{ + struct bfa_fcs_port_scn_s *scn = (struct bfa_fcs_port_scn_s *)arg; + + bfa_sm_send_event(scn, SCNSM_EVENT_TIMEOUT); +} + + + +/** + * fcs_scn_public FCS state change notification public interfaces + */ + +/* + * Functions called by port/fab + */ +void +bfa_fcs_port_scn_init(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_scn_s *scn = BFA_FCS_GET_SCN_FROM_PORT(port); + + scn->port = port; + bfa_sm_set_state(scn, bfa_fcs_port_scn_sm_offline); +} + +void +bfa_fcs_port_scn_offline(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_scn_s *scn = BFA_FCS_GET_SCN_FROM_PORT(port); + + scn->port = port; + bfa_sm_send_event(scn, SCNSM_EVENT_PORT_OFFLINE); +} + +void +bfa_fcs_port_scn_online(struct bfa_fcs_port_s *port) +{ + struct bfa_fcs_port_scn_s *scn = BFA_FCS_GET_SCN_FROM_PORT(port); + + scn->port = port; + bfa_sm_send_event(scn, SCNSM_EVENT_PORT_ONLINE); +} + +static void +bfa_fcs_port_scn_portid_rscn(struct bfa_fcs_port_s *port, u32 rpid) +{ + struct bfa_fcs_rport_s *rport; + + bfa_trc(port->fcs, rpid); + + /** + * If this is an unknown device, then it just came online. + * Otherwise let rport handle the RSCN event. + */ + rport = bfa_fcs_port_get_rport_by_pid(port, rpid); + if (rport == NULL) { + /* + * If min cfg mode is enabled, we donot need to + * discover any new rports. + */ + if (!__fcs_min_cfg(port->fcs)) + rport = bfa_fcs_rport_create(port, rpid); + } else { + bfa_fcs_rport_scn(rport); + } +} + +/** + * rscn format based PID comparison + */ +#define __fc_pid_match(__c0, __c1, __fmt) \ + (((__fmt) == FC_RSCN_FORMAT_FABRIC) || \ + (((__fmt) == FC_RSCN_FORMAT_DOMAIN) && \ + ((__c0)[0] == (__c1)[0])) || \ + (((__fmt) == FC_RSCN_FORMAT_AREA) && \ + ((__c0)[0] == (__c1)[0]) && \ + ((__c0)[1] == (__c1)[1]))) + +static void +bfa_fcs_port_scn_multiport_rscn(struct bfa_fcs_port_s *port, + enum fc_rscn_format format, u32 rscn_pid) +{ + struct bfa_fcs_rport_s *rport; + struct list_head *qe, *qe_next; + u8 *c0, *c1; + + bfa_trc(port->fcs, format); + bfa_trc(port->fcs, rscn_pid); + + c0 = (u8 *) &rscn_pid; + + list_for_each_safe(qe, qe_next, &port->rport_q) { + rport = (struct bfa_fcs_rport_s *)qe; + c1 = (u8 *) &rport->pid; + if (__fc_pid_match(c0, c1, format)) + bfa_fcs_rport_scn(rport); + } +} + +void +bfa_fcs_port_scn_process_rscn(struct bfa_fcs_port_s *port, struct fchs_s *fchs, + u32 len) +{ + struct fc_rscn_pl_s *rscn = (struct fc_rscn_pl_s *) (fchs + 1); + int num_entries; + u32 rscn_pid; + bfa_boolean_t nsquery = BFA_FALSE; + int i = 0; + + num_entries = + (bfa_os_ntohs(rscn->payldlen) - + sizeof(u32)) / sizeof(rscn->event[0]); + + bfa_trc(port->fcs, num_entries); + + port->stats.num_rscn++; + + bfa_fcs_port_scn_send_ls_acc(port, fchs); + + for (i = 0; i < num_entries; i++) { + rscn_pid = rscn->event[i].portid; + + bfa_trc(port->fcs, rscn->event[i].format); + bfa_trc(port->fcs, rscn_pid); + + switch (rscn->event[i].format) { + case FC_RSCN_FORMAT_PORTID: + if (rscn->event[i].qualifier == FC_QOS_RSCN_EVENT) { + /* + * Ignore this event. f/w would have processed + * it + */ + bfa_trc(port->fcs, rscn_pid); + } else { + port->stats.num_portid_rscn++; + bfa_fcs_port_scn_portid_rscn(port, rscn_pid); + } + break; + + case FC_RSCN_FORMAT_FABRIC: + if (rscn->event[i].qualifier == + FC_FABRIC_NAME_RSCN_EVENT) { + bfa_fcs_port_ms_fabric_rscn(port); + break; + } + /* + * !!!!!!!!! 
Fall Through !!!!!!!!!!!!! + */ + + case FC_RSCN_FORMAT_AREA: + case FC_RSCN_FORMAT_DOMAIN: + nsquery = BFA_TRUE; + bfa_fcs_port_scn_multiport_rscn(port, + rscn->event[i].format, + rscn_pid); + break; + + default: + bfa_assert(0); + nsquery = BFA_TRUE; + } + } + + /** + * If any of area, domain or fabric RSCN is received, do a fresh discovery + * to find new devices. + */ + if (nsquery) + bfa_fcs_port_ns_query(port); +} + + diff --git a/drivers/scsi/bfa/vfapi.c b/drivers/scsi/bfa/vfapi.c new file mode 100644 index 000000000000..31d81fe2fc48 --- /dev/null +++ b/drivers/scsi/bfa/vfapi.c @@ -0,0 +1,292 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * vfapi.c Fabric module implementation. + */ + +#include "fcs_fabric.h" +#include "fcs_trcmod.h" + +BFA_TRC_FILE(FCS, VFAPI); + +/** + * fcs_vf_api virtual fabrics API + */ + +/** + * Enable VF mode. + * + * @param[in] fcs fcs module instance + * @param[in] vf_id default vf_id of port, FC_VF_ID_NULL + * to use standard default vf_id of 1. + * + * @retval BFA_STATUS_OK vf mode is enabled + * @retval BFA_STATUS_BUSY Port is active. Port must be disabled + * before VF mode can be enabled. + */ +bfa_status_t +bfa_fcs_vf_mode_enable(struct bfa_fcs_s *fcs, u16 vf_id) +{ + return BFA_STATUS_OK; +} + +/** + * Disable VF mode. + * + * @param[in] fcs fcs module instance + * + * @retval BFA_STATUS_OK vf mode is disabled + * @retval BFA_STATUS_BUSY VFs are present and being used. All + * VFs must be deleted before disabling + * VF mode. + */ +bfa_status_t +bfa_fcs_vf_mode_disable(struct bfa_fcs_s *fcs) +{ + return BFA_STATUS_OK; +} + +/** + * Create a new VF instance. + * + * A new VF is created using the given VF configuration. A VF is identified + * by VF id. No duplicate VF creation is allowed with the same VF id. Once + * a VF is created, VF is automatically started after link initialization + * and EVFP exchange is completed. + * + * param[in] vf - FCS vf data structure. Memory is + * allocated by caller (driver) + * param[in] fcs - FCS module + * param[in] vf_cfg - VF configuration + * param[in] vf_drv - Opaque handle back to the driver's + * virtual vf structure + * + * retval BFA_STATUS_OK VF creation is successful + * retval BFA_STATUS_FAILED VF creation failed + * retval BFA_STATUS_EEXIST A VF exists with the given vf_id + */ +bfa_status_t +bfa_fcs_vf_create(bfa_fcs_vf_t *vf, struct bfa_fcs_s *fcs, u16 vf_id, + struct bfa_port_cfg_s *port_cfg, struct bfad_vf_s *vf_drv) +{ + bfa_trc(fcs, vf_id); + return BFA_STATUS_OK; +} + +/** + * Use this function to delete a BFA VF object. VF object should + * be stopped before this function call. + * + * param[in] vf - pointer to bfa_vf_t. 
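+ * (VF support is still a stub - note the todo in
+ * bfa_fcs_vf_lookup() - so this currently only traces the
+ * call and reports success.)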
+ * + * retval BFA_STATUS_OK On vf deletion success + * retval BFA_STATUS_BUSY VF is not in a stopped state + * retval BFA_STATUS_INPROGRESS VF deletion in in progress + */ +bfa_status_t +bfa_fcs_vf_delete(bfa_fcs_vf_t *vf) +{ + bfa_trc(vf->fcs, vf->vf_id); + return BFA_STATUS_OK; +} + +/** + * Start participation in VF. This triggers login to the virtual fabric. + * + * param[in] vf - pointer to bfa_vf_t. + * + * return None + */ +void +bfa_fcs_vf_start(bfa_fcs_vf_t *vf) +{ + bfa_trc(vf->fcs, vf->vf_id); +} + +/** + * Logout with the virtual fabric. + * + * param[in] vf - pointer to bfa_vf_t. + * + * retval BFA_STATUS_OK On success. + * retval BFA_STATUS_INPROGRESS VF is being stopped. + */ +bfa_status_t +bfa_fcs_vf_stop(bfa_fcs_vf_t *vf) +{ + bfa_trc(vf->fcs, vf->vf_id); + return BFA_STATUS_OK; +} + +/** + * Returns attributes of the given VF. + * + * param[in] vf pointer to bfa_vf_t. + * param[out] vf_attr vf attributes returned + * + * return None + */ +void +bfa_fcs_vf_get_attr(bfa_fcs_vf_t *vf, struct bfa_vf_attr_s *vf_attr) +{ + bfa_trc(vf->fcs, vf->vf_id); +} + +/** + * Return statistics associated with the given vf. + * + * param[in] vf pointer to bfa_vf_t. + * param[out] vf_stats vf statistics returned + * + * @return None + */ +void +bfa_fcs_vf_get_stats(bfa_fcs_vf_t *vf, struct bfa_vf_stats_s *vf_stats) +{ + bfa_os_memcpy(vf_stats, &vf->stats, sizeof(struct bfa_vf_stats_s)); + return; +} + +void +/** + * clear statistics associated with the given vf. + * + * param[in] vf pointer to bfa_vf_t. + * + * @return None + */ +bfa_fcs_vf_clear_stats(bfa_fcs_vf_t *vf) +{ + bfa_os_memset(&vf->stats, 0, sizeof(struct bfa_vf_stats_s)); + return; +} + +/** + * Returns FCS vf structure for a given vf_id. + * + * param[in] vf_id - VF_ID + * + * return + * If lookup succeeds, retuns fcs vf object, otherwise returns NULL + */ +bfa_fcs_vf_t * +bfa_fcs_vf_lookup(struct bfa_fcs_s *fcs, u16 vf_id) +{ + bfa_trc(fcs, vf_id); + if (vf_id == FC_VF_ID_NULL) + return (&fcs->fabric); + + /** + * @todo vf support + */ + + return NULL; +} + +/** + * Returns driver VF structure for a given FCS vf. + * + * param[in] vf - pointer to bfa_vf_t + * + * return Driver VF structure + */ +struct bfad_vf_s * +bfa_fcs_vf_get_drv_vf(bfa_fcs_vf_t *vf) +{ + bfa_assert(vf); + bfa_trc(vf->fcs, vf->vf_id); + return vf->vf_drv; +} + +/** + * Return the list of VFs configured. + * + * param[in] fcs fcs module instance + * param[out] vf_ids returned list of vf_ids + * param[in,out] nvfs in:size of vf_ids array, + * out:total elements present, + * actual elements returned is limited by the size + * + * return Driver VF structure + */ +void +bfa_fcs_vf_list(struct bfa_fcs_s *fcs, u16 *vf_ids, int *nvfs) +{ + bfa_trc(fcs, *nvfs); +} + +/** + * Return the list of all VFs visible from fabric. + * + * param[in] fcs fcs module instance + * param[out] vf_ids returned list of vf_ids + * param[in,out] nvfs in:size of vf_ids array, + * out:total elements present, + * actual elements returned is limited by the size + * + * return Driver VF structure + */ +void +bfa_fcs_vf_list_all(struct bfa_fcs_s *fcs, u16 *vf_ids, int *nvfs) +{ + bfa_trc(fcs, *nvfs); +} + +/** + * Return the list of local logical ports present in the given VF. 
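+ * The base port's pwwn is always returned first; vports from
+ * the VF's vport queue follow until the caller-supplied array
+ * fills up, and *nlports is updated to the count actually
+ * returned.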
+ * + * param[in] vf vf for which logical ports are returned + * param[out] lpwwn returned logical port wwn list + * param[in,out] nlports in:size of lpwwn list; + * out:total elements present, + * actual elements returned is limited by the size + * + */ +void +bfa_fcs_vf_get_ports(bfa_fcs_vf_t *vf, wwn_t lpwwn[], int *nlports) +{ + struct list_head *qe; + struct bfa_fcs_vport_s *vport; + int i; + struct bfa_fcs_s *fcs; + + if (vf == NULL || lpwwn == NULL || *nlports == 0) + return; + + fcs = vf->fcs; + + bfa_trc(fcs, vf->vf_id); + bfa_trc(fcs, (u32) *nlports); + + i = 0; + lpwwn[i++] = vf->bport.port_cfg.pwwn; + + list_for_each(qe, &vf->vport_q) { + if (i >= *nlports) + break; + + vport = (struct bfa_fcs_vport_s *) qe; + lpwwn[i++] = vport->lport.port_cfg.pwwn; + } + + bfa_trc(fcs, i); + *nlports = i; + return; +} + + diff --git a/drivers/scsi/bfa/vport.c b/drivers/scsi/bfa/vport.c new file mode 100644 index 000000000000..c10af06c5714 --- /dev/null +++ b/drivers/scsi/bfa/vport.c @@ -0,0 +1,891 @@ +/* + * Copyright (c) 2005-2009 Brocade Communications Systems, Inc. + * All rights reserved + * www.brocade.com + * + * Linux driver for Brocade Fibre Channel Host Bus Adapter. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License (GPL) Version 2 as + * published by the Free Software Foundation + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + */ + +/** + * bfa_fcs_vport.c FCS virtual port state machine + */ + +#include +#include +#include +#include "fcs_fabric.h" +#include "fcs_lport.h" +#include "fcs_vport.h" +#include "fcs_trcmod.h" +#include "fcs.h" +#include + +BFA_TRC_FILE(FCS, VPORT); + +#define __vport_fcs(__vp) (__vp)->lport.fcs +#define __vport_pwwn(__vp) (__vp)->lport.port_cfg.pwwn +#define __vport_nwwn(__vp) (__vp)->lport.port_cfg.nwwn +#define __vport_bfa(__vp) (__vp)->lport.fcs->bfa +#define __vport_fcid(__vp) (__vp)->lport.pid +#define __vport_fabric(__vp) (__vp)->lport.fabric +#define __vport_vfid(__vp) (__vp)->lport.fabric->vf_id + +#define BFA_FCS_VPORT_MAX_RETRIES 5 +/* + * Forward declarations + */ +static void bfa_fcs_vport_do_fdisc(struct bfa_fcs_vport_s *vport); +static void bfa_fcs_vport_timeout(void *vport_arg); +static void bfa_fcs_vport_do_logo(struct bfa_fcs_vport_s *vport); +static void bfa_fcs_vport_free(struct bfa_fcs_vport_s *vport); + +/** + * fcs_vport_sm FCS virtual port state machine + */ + +/** + * VPort State Machine events + */ +enum bfa_fcs_vport_event { + BFA_FCS_VPORT_SM_CREATE = 1, /* vport create event */ + BFA_FCS_VPORT_SM_DELETE = 2, /* vport delete event */ + BFA_FCS_VPORT_SM_START = 3, /* vport start request */ + BFA_FCS_VPORT_SM_STOP = 4, /* stop: unsupported */ + BFA_FCS_VPORT_SM_ONLINE = 5, /* fabric online */ + BFA_FCS_VPORT_SM_OFFLINE = 6, /* fabric offline event */ + BFA_FCS_VPORT_SM_FRMSENT = 7, /* fdisc/logo sent events */ + BFA_FCS_VPORT_SM_RSP_OK = 8, /* good response */ + BFA_FCS_VPORT_SM_RSP_ERROR = 9, /* error/bad response */ + BFA_FCS_VPORT_SM_TIMEOUT = 10, /* delay timer event */ + BFA_FCS_VPORT_SM_DELCOMP = 11, /* lport delete completion */ + BFA_FCS_VPORT_SM_RSP_DUP_WWN = 12, /* Dup wnn error */ + BFA_FCS_VPORT_SM_RSP_FAILED = 13, /* non-retryable failure */ +}; + +static void bfa_fcs_vport_sm_uninit(struct bfa_fcs_vport_s *vport, + enum 
bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_created(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_offline(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_fdisc(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_fdisc_retry(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_online(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_deleting(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_cleanup(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_logo(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); +static void bfa_fcs_vport_sm_error(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event); + +static struct bfa_sm_table_s vport_sm_table[] = { + {BFA_SM(bfa_fcs_vport_sm_uninit), BFA_FCS_VPORT_UNINIT}, + {BFA_SM(bfa_fcs_vport_sm_created), BFA_FCS_VPORT_CREATED}, + {BFA_SM(bfa_fcs_vport_sm_offline), BFA_FCS_VPORT_OFFLINE}, + {BFA_SM(bfa_fcs_vport_sm_fdisc), BFA_FCS_VPORT_FDISC}, + {BFA_SM(bfa_fcs_vport_sm_fdisc_retry), BFA_FCS_VPORT_FDISC_RETRY}, + {BFA_SM(bfa_fcs_vport_sm_online), BFA_FCS_VPORT_ONLINE}, + {BFA_SM(bfa_fcs_vport_sm_deleting), BFA_FCS_VPORT_DELETING}, + {BFA_SM(bfa_fcs_vport_sm_cleanup), BFA_FCS_VPORT_CLEANUP}, + {BFA_SM(bfa_fcs_vport_sm_logo), BFA_FCS_VPORT_LOGO}, + {BFA_SM(bfa_fcs_vport_sm_error), BFA_FCS_VPORT_ERROR} +}; + +/** + * Beginning state. + */ +static void +bfa_fcs_vport_sm_uninit(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_CREATE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_created); + bfa_fcs_fabric_addvport(__vport_fabric(vport), vport); + break; + + default: + bfa_assert(0); + } +} + +/** + * Created state - a start event is required to start up the state machine. + */ +static void +bfa_fcs_vport_sm_created(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_START: + if (bfa_fcs_fabric_is_online(__vport_fabric(vport)) + && bfa_fcs_fabric_npiv_capable(__vport_fabric(vport))) { + bfa_sm_set_state(vport, bfa_fcs_vport_sm_fdisc); + bfa_fcs_vport_do_fdisc(vport); + } else { + /** + * Fabric is offline or not NPIV capable, stay in + * offline state. + */ + vport->vport_stats.fab_no_npiv++; + bfa_sm_set_state(vport, bfa_fcs_vport_sm_offline); + } + break; + + case BFA_FCS_VPORT_SM_DELETE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_cleanup); + bfa_fcs_port_delete(&vport->lport); + break; + + case BFA_FCS_VPORT_SM_ONLINE: + case BFA_FCS_VPORT_SM_OFFLINE: + /** + * Ignore ONLINE/OFFLINE events from fabric till vport is started. + */ + break; + + default: + bfa_assert(0); + } +} + +/** + * Offline state - awaiting ONLINE event from fabric SM. 
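+ * A DELETE here tears the lport down without sending a LOGO
+ * (the vport is not logged in to the fabric), and a repeated
+ * OFFLINE is ignored.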
+ */ +static void +bfa_fcs_vport_sm_offline(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_DELETE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_cleanup); + bfa_fcs_port_delete(&vport->lport); + break; + + case BFA_FCS_VPORT_SM_ONLINE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_fdisc); + vport->fdisc_retries = 0; + bfa_fcs_vport_do_fdisc(vport); + break; + + case BFA_FCS_VPORT_SM_OFFLINE: + /* + * This can happen if the vport couldn't be initialzied due + * the fact that the npiv was not enabled on the switch. In + * that case we will put the vport in offline state. However, + * the link can go down and cause the this event to be sent when + * we are already offline. Ignore it. + */ + break; + + default: + bfa_assert(0); + } +} + +/** + * FDISC is sent and awaiting reply from fabric. + */ +static void +bfa_fcs_vport_sm_fdisc(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_DELETE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_logo); + bfa_lps_discard(vport->lps); + bfa_fcs_vport_do_logo(vport); + break; + + case BFA_FCS_VPORT_SM_OFFLINE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_offline); + bfa_lps_discard(vport->lps); + break; + + case BFA_FCS_VPORT_SM_RSP_OK: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_online); + bfa_fcs_port_online(&vport->lport); + break; + + case BFA_FCS_VPORT_SM_RSP_ERROR: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_fdisc_retry); + bfa_timer_start(__vport_bfa(vport), &vport->timer, + bfa_fcs_vport_timeout, vport, + BFA_FCS_RETRY_TIMEOUT); + break; + + case BFA_FCS_VPORT_SM_RSP_FAILED: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_offline); + break; + + case BFA_FCS_VPORT_SM_RSP_DUP_WWN: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_error); + break; + + default: + bfa_assert(0); + } +} + +/** + * FDISC attempt failed - a timer is active to retry FDISC. + */ +static void +bfa_fcs_vport_sm_fdisc_retry(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_DELETE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_cleanup); + bfa_timer_stop(&vport->timer); + bfa_fcs_port_delete(&vport->lport); + break; + + case BFA_FCS_VPORT_SM_OFFLINE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_offline); + bfa_timer_stop(&vport->timer); + break; + + case BFA_FCS_VPORT_SM_TIMEOUT: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_fdisc); + vport->vport_stats.fdisc_retries++; + vport->fdisc_retries++; + bfa_fcs_vport_do_fdisc(vport); + break; + + default: + bfa_assert(0); + } +} + +/** + * Vport is online (FDISC is complete). 
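+ * A DELETE from here goes through the deleting state: the lport
+ * is torn down first and the fabric LOGO is deferred until the
+ * lport delete completion (DELCOMP) arrives.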
+ */ +static void +bfa_fcs_vport_sm_online(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_DELETE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_deleting); + bfa_fcs_port_delete(&vport->lport); + break; + + case BFA_FCS_VPORT_SM_OFFLINE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_offline); + bfa_lps_discard(vport->lps); + bfa_fcs_port_offline(&vport->lport); + break; + + default: + bfa_assert(0); + } +} + +/** + * Vport is being deleted - awaiting lport delete completion to send + * LOGO to fabric. + */ +static void +bfa_fcs_vport_sm_deleting(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_DELETE: + break; + + case BFA_FCS_VPORT_SM_DELCOMP: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_logo); + bfa_fcs_vport_do_logo(vport); + break; + + case BFA_FCS_VPORT_SM_OFFLINE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_cleanup); + break; + + default: + bfa_assert(0); + } +} + +/** + * Error State. + * This state will be set when the Vport Creation fails due to errors like + * Dup WWN. In this state only operation allowed is a Vport Delete. + */ +static void +bfa_fcs_vport_sm_error(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_DELETE: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_uninit); + bfa_fcs_vport_free(vport); + break; + + default: + bfa_trc(__vport_fcs(vport), event); + } +} + +/** + * Lport cleanup is in progress since vport is being deleted. Fabric is + * offline, so no LOGO is needed to complete vport deletion. + */ +static void +bfa_fcs_vport_sm_cleanup(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_DELCOMP: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_uninit); + bfa_fcs_vport_free(vport); + break; + + case BFA_FCS_VPORT_SM_DELETE: + break; + + default: + bfa_assert(0); + } +} + +/** + * LOGO is sent to fabric. Vport delete is in progress. Lport delete cleanup + * is done. + */ +static void +bfa_fcs_vport_sm_logo(struct bfa_fcs_vport_s *vport, + enum bfa_fcs_vport_event event) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), event); + + switch (event) { + case BFA_FCS_VPORT_SM_OFFLINE: + bfa_lps_discard(vport->lps); + /* + * !!! fall through !!! 
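+ * (discard the pending login exchange first, then finish the
+ * delete exactly as for a LOGO response)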
+ */ + + case BFA_FCS_VPORT_SM_RSP_OK: + case BFA_FCS_VPORT_SM_RSP_ERROR: + bfa_sm_set_state(vport, bfa_fcs_vport_sm_uninit); + bfa_fcs_vport_free(vport); + break; + + case BFA_FCS_VPORT_SM_DELETE: + break; + + default: + bfa_assert(0); + } +} + + + +/** + * fcs_vport_private FCS virtual port private functions + */ + +/** + * Send AEN notification + */ +static void +bfa_fcs_vport_aen_post(bfa_fcs_lport_t *port, enum bfa_lport_aen_event event) +{ + union bfa_aen_data_u aen_data; + struct bfa_log_mod_s *logmod = port->fcs->logm; + enum bfa_port_role role = port->port_cfg.roles; + wwn_t lpwwn = bfa_fcs_port_get_pwwn(port); + char lpwwn_ptr[BFA_STRING_32]; + char *role_str[BFA_PORT_ROLE_FCP_MAX / 2 + 1] = + { "Initiator", "Target", "IPFC" }; + + wwn2str(lpwwn_ptr, lpwwn); + + bfa_assert(role <= BFA_PORT_ROLE_FCP_MAX); + + switch (event) { + case BFA_LPORT_AEN_NPIV_DUP_WWN: + bfa_log(logmod, BFA_AEN_LPORT_NPIV_DUP_WWN, lpwwn_ptr, + role_str[role / 2]); + break; + case BFA_LPORT_AEN_NPIV_FABRIC_MAX: + bfa_log(logmod, BFA_AEN_LPORT_NPIV_FABRIC_MAX, lpwwn_ptr, + role_str[role / 2]); + break; + case BFA_LPORT_AEN_NPIV_UNKNOWN: + bfa_log(logmod, BFA_AEN_LPORT_NPIV_UNKNOWN, lpwwn_ptr, + role_str[role / 2]); + break; + default: + break; + } + + aen_data.lport.vf_id = port->fabric->vf_id; + aen_data.lport.roles = role; + aen_data.lport.ppwwn = + bfa_fcs_port_get_pwwn(bfa_fcs_get_base_port(port->fcs)); + aen_data.lport.lpwwn = lpwwn; +} + +/** + * This routine will be called to send a FDISC command. + */ +static void +bfa_fcs_vport_do_fdisc(struct bfa_fcs_vport_s *vport) +{ + bfa_lps_fdisc(vport->lps, vport, + bfa_pport_get_maxfrsize(__vport_bfa(vport)), + __vport_pwwn(vport), __vport_nwwn(vport)); + vport->vport_stats.fdisc_sent++; +} + +static void +bfa_fcs_vport_fdisc_rejected(struct bfa_fcs_vport_s *vport) +{ + u8 lsrjt_rsn = bfa_lps_get_lsrjt_rsn(vport->lps); + u8 lsrjt_expl = bfa_lps_get_lsrjt_expl(vport->lps); + + bfa_trc(__vport_fcs(vport), lsrjt_rsn); + bfa_trc(__vport_fcs(vport), lsrjt_expl); + + /* + * For certain reason codes, we don't want to retry. + */ + switch (bfa_lps_get_lsrjt_expl(vport->lps)) { + case FC_LS_RJT_EXP_INV_PORT_NAME: /* by brocade */ + case FC_LS_RJT_EXP_INVALID_NPORT_ID: /* by Cisco */ + if (vport->fdisc_retries < BFA_FCS_VPORT_MAX_RETRIES) + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_ERROR); + else { + bfa_fcs_vport_aen_post(&vport->lport, + BFA_LPORT_AEN_NPIV_DUP_WWN); + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_DUP_WWN); + } + break; + + case FC_LS_RJT_EXP_INSUFF_RES: + /* + * This means max logins per port/switch setting on the + * switch was exceeded. + */ + if (vport->fdisc_retries < BFA_FCS_VPORT_MAX_RETRIES) + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_ERROR); + else { + bfa_fcs_vport_aen_post(&vport->lport, + BFA_LPORT_AEN_NPIV_FABRIC_MAX); + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_FAILED); + } + break; + + default: + if (vport->fdisc_retries == 0) /* Print only once */ + bfa_fcs_vport_aen_post(&vport->lport, + BFA_LPORT_AEN_NPIV_UNKNOWN); + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_ERROR); + } +} + +/** + * Called to send a logout to the fabric. Used when a V-Port is + * deleted/stopped. + */ +static void +bfa_fcs_vport_do_logo(struct bfa_fcs_vport_s *vport) +{ + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + + vport->vport_stats.logo_sent++; + bfa_lps_fdisclogo(vport->lps); +} + +/** + * This routine will be called by bfa_timer on timer timeouts. + * + * param[in] vport - pointer to bfa_fcs_vport_t. 
+ * param[out] vport_status - pointer to return vport status in + * + * return + * void + * +* Special Considerations: + * + * note + */ +static void +bfa_fcs_vport_timeout(void *vport_arg) +{ + struct bfa_fcs_vport_s *vport = (struct bfa_fcs_vport_s *)vport_arg; + + vport->vport_stats.fdisc_timeouts++; + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_TIMEOUT); +} + +static void +bfa_fcs_vport_free(struct bfa_fcs_vport_s *vport) +{ + bfa_fcs_fabric_delvport(__vport_fabric(vport), vport); + bfa_fcb_vport_delete(vport->vport_drv); + bfa_lps_delete(vport->lps); +} + + + +/** + * fcs_vport_public FCS virtual port public interfaces + */ + +/** + * Online notification from fabric SM. + */ +void +bfa_fcs_vport_online(struct bfa_fcs_vport_s *vport) +{ + vport->vport_stats.fab_online++; + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_ONLINE); +} + +/** + * Offline notification from fabric SM. + */ +void +bfa_fcs_vport_offline(struct bfa_fcs_vport_s *vport) +{ + vport->vport_stats.fab_offline++; + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_OFFLINE); +} + +/** + * Cleanup notification from fabric SM on link timer expiry. + */ +void +bfa_fcs_vport_cleanup(struct bfa_fcs_vport_s *vport) +{ + vport->vport_stats.fab_cleanup++; +} + +/** + * Delete completion callback from associated lport + */ +void +bfa_fcs_vport_delete_comp(struct bfa_fcs_vport_s *vport) +{ + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_DELCOMP); +} + +/** + * Module initialization + */ +void +bfa_fcs_vport_modinit(struct bfa_fcs_s *fcs) +{ +} + +/** + * Module cleanup + */ +void +bfa_fcs_vport_modexit(struct bfa_fcs_s *fcs) +{ + bfa_fcs_modexit_comp(fcs); +} + +u32 +bfa_fcs_vport_get_max(struct bfa_fcs_s *fcs) +{ + struct bfa_ioc_attr_s ioc_attr; + + bfa_get_attr(fcs->bfa, &ioc_attr); + + if (ioc_attr.pci_attr.device_id == BFA_PCI_DEVICE_ID_CT) + return (BFA_FCS_MAX_VPORTS_SUPP_CT); + else + return (BFA_FCS_MAX_VPORTS_SUPP_CB); +} + + + +/** + * fcs_vport_api Virtual port API + */ + +/** + * Use this function to instantiate a new FCS vport object. This + * function will not trigger any HW initialization process (which will be + * done in vport_start() call) + * + * param[in] vport - pointer to bfa_fcs_vport_t. This space + * needs to be allocated by the driver. + * param[in] fcs - FCS instance + * param[in] vport_cfg - vport configuration + * param[in] vf_id - VF_ID if vport is created within a VF. + * FC_VF_ID_NULL to specify base fabric. + * param[in] vport_drv - Opaque handle back to the driver's vport + * structure + * + * retval BFA_STATUS_OK - on success. + * retval BFA_STATUS_FAILED - on failure. + */ +bfa_status_t +bfa_fcs_vport_create(struct bfa_fcs_vport_s *vport, struct bfa_fcs_s *fcs, + u16 vf_id, struct bfa_port_cfg_s *vport_cfg, + struct bfad_vport_s *vport_drv) +{ + if (vport_cfg->pwwn == 0) + return (BFA_STATUS_INVALID_WWN); + + if (bfa_fcs_port_get_pwwn(&fcs->fabric.bport) == vport_cfg->pwwn) + return BFA_STATUS_VPORT_WWN_BP; + + if (bfa_fcs_vport_lookup(fcs, vf_id, vport_cfg->pwwn) != NULL) + return BFA_STATUS_VPORT_EXISTS; + + if (bfa_fcs_fabric_vport_count(&fcs->fabric) == + bfa_fcs_vport_get_max(fcs)) + return BFA_STATUS_VPORT_MAX; + + vport->lps = bfa_lps_alloc(fcs->bfa); + if (!vport->lps) + return BFA_STATUS_VPORT_MAX; + + vport->vport_drv = vport_drv; + bfa_sm_set_state(vport, bfa_fcs_vport_sm_uninit); + + bfa_fcs_lport_init(&vport->lport, fcs, vf_id, vport_cfg, vport); + + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_CREATE); + + return BFA_STATUS_OK; +} + +/** + * Use this function initialize the vport. 
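+ * Starting sends BFA_FCS_VPORT_SM_START; the FDISC only goes
+ * out if the fabric is online and NPIV capable, otherwise the
+ * vport parks in the offline state until the fabric comes up.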
+ * + * @param[in] vport - pointer to bfa_fcs_vport_t. + * + * @returns None + */ +bfa_status_t +bfa_fcs_vport_start(struct bfa_fcs_vport_s *vport) +{ + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_START); + + return BFA_STATUS_OK; +} + +/** + * Use this function quiese the vport object. This function will return + * immediately, when the vport is actually stopped, the + * bfa_drv_vport_stop_cb() will be called. + * + * param[in] vport - pointer to bfa_fcs_vport_t. + * + * return None + */ +bfa_status_t +bfa_fcs_vport_stop(struct bfa_fcs_vport_s *vport) +{ + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_STOP); + + return BFA_STATUS_OK; +} + +/** + * Use this function to delete a vport object. Fabric object should + * be stopped before this function call. + * + * param[in] vport - pointer to bfa_fcs_vport_t. + * + * return None + */ +bfa_status_t +bfa_fcs_vport_delete(struct bfa_fcs_vport_s *vport) +{ + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_DELETE); + + return BFA_STATUS_OK; +} + +/** + * Use this function to get vport's current status info. + * + * param[in] vport pointer to bfa_fcs_vport_t. + * param[out] attr pointer to return vport attributes + * + * return None + */ +void +bfa_fcs_vport_get_attr(struct bfa_fcs_vport_s *vport, + struct bfa_vport_attr_s *attr) +{ + if (vport == NULL || attr == NULL) + return; + + bfa_os_memset(attr, 0, sizeof(struct bfa_vport_attr_s)); + + bfa_fcs_port_get_attr(&vport->lport, &attr->port_attr); + attr->vport_state = bfa_sm_to_state(vport_sm_table, vport->sm); +} + +/** + * Use this function to get vport's statistics. + * + * param[in] vport pointer to bfa_fcs_vport_t. + * param[out] stats pointer to return vport statistics in + * + * return None + */ +void +bfa_fcs_vport_get_stats(struct bfa_fcs_vport_s *vport, + struct bfa_vport_stats_s *stats) +{ + *stats = vport->vport_stats; +} + +/** + * Use this function to clear vport's statistics. + * + * param[in] vport pointer to bfa_fcs_vport_t. + * + * return None + */ +void +bfa_fcs_vport_clr_stats(struct bfa_fcs_vport_s *vport) +{ + bfa_os_memset(&vport->vport_stats, 0, sizeof(struct bfa_vport_stats_s)); +} + +/** + * Lookup a virtual port. Excludes base port from lookup. 
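+ * The vf_id is resolved to a fabric first; an unknown vf_id
+ * makes the lookup return NULL.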
+ */ +struct bfa_fcs_vport_s * +bfa_fcs_vport_lookup(struct bfa_fcs_s *fcs, u16 vf_id, wwn_t vpwwn) +{ + struct bfa_fcs_vport_s *vport; + struct bfa_fcs_fabric_s *fabric; + + bfa_trc(fcs, vf_id); + bfa_trc(fcs, vpwwn); + + fabric = bfa_fcs_vf_lookup(fcs, vf_id); + if (!fabric) { + bfa_trc(fcs, vf_id); + return NULL; + } + + vport = bfa_fcs_fabric_vport_lookup(fabric, vpwwn); + return vport; +} + +/** + * FDISC Response + */ +void +bfa_cb_lps_fdisc_comp(void *bfad, void *uarg, bfa_status_t status) +{ + struct bfa_fcs_vport_s *vport = uarg; + + bfa_trc(__vport_fcs(vport), __vport_pwwn(vport)); + bfa_trc(__vport_fcs(vport), status); + + switch (status) { + case BFA_STATUS_OK: + /* + * Initialiaze the V-Port fields + */ + __vport_fcid(vport) = bfa_lps_get_pid(vport->lps); + vport->vport_stats.fdisc_accepts++; + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_OK); + break; + + case BFA_STATUS_INVALID_MAC: + /* + * Only for CNA + */ + vport->vport_stats.fdisc_acc_bad++; + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_ERROR); + + break; + + case BFA_STATUS_EPROTOCOL: + switch (bfa_lps_get_extstatus(vport->lps)) { + case BFA_EPROTO_BAD_ACCEPT: + vport->vport_stats.fdisc_acc_bad++; + break; + + case BFA_EPROTO_UNKNOWN_RSP: + vport->vport_stats.fdisc_unknown_rsp++; + break; + + default: + break; + } + + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_ERROR); + break; + + case BFA_STATUS_FABRIC_RJT: + vport->vport_stats.fdisc_rejects++; + bfa_fcs_vport_fdisc_rejected(vport); + break; + + default: + vport->vport_stats.fdisc_rsp_err++; + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_ERROR); + } +} + +/** + * LOGO response + */ +void +bfa_cb_lps_fdisclogo_comp(void *bfad, void *uarg) +{ + struct bfa_fcs_vport_s *vport = uarg; + bfa_sm_send_event(vport, BFA_FCS_VPORT_SM_RSP_OK); +} + + -- cgit v1.2.3 From 6733b39a1301b0b020bbcbf3295852e93e624cb1 Mon Sep 17 00:00:00 2001 From: Jayamohan Kallickal Date: Sat, 5 Sep 2009 07:36:35 +0530 Subject: [SCSI] be2iscsi: add 10Gbps iSCSI - BladeEngine 2 driver [v2: fixed up virt_to_bus() issue spotted by sfr] Signed-off-by: Mike Christie Signed-off-by: Jayamohan Kallickal Signed-off-by: James Bottomley --- MAINTAINERS | 8 + drivers/scsi/Kconfig | 1 + drivers/scsi/Makefile | 1 + drivers/scsi/be2iscsi/Kconfig | 8 + drivers/scsi/be2iscsi/Makefile | 8 + drivers/scsi/be2iscsi/be.h | 183 ++ drivers/scsi/be2iscsi/be_cmds.c | 523 ++++++ drivers/scsi/be2iscsi/be_cmds.h | 877 ++++++++++ drivers/scsi/be2iscsi/be_iscsi.c | 646 ++++++++ drivers/scsi/be2iscsi/be_iscsi.h | 75 + drivers/scsi/be2iscsi/be_main.c | 3393 ++++++++++++++++++++++++++++++++++++++ drivers/scsi/be2iscsi/be_main.h | 833 ++++++++++ drivers/scsi/be2iscsi/be_mgmt.c | 321 ++++ drivers/scsi/be2iscsi/be_mgmt.h | 249 +++ 14 files changed, 7126 insertions(+) create mode 100644 drivers/scsi/be2iscsi/Kconfig create mode 100644 drivers/scsi/be2iscsi/Makefile create mode 100644 drivers/scsi/be2iscsi/be.h create mode 100644 drivers/scsi/be2iscsi/be_cmds.c create mode 100644 drivers/scsi/be2iscsi/be_cmds.h create mode 100644 drivers/scsi/be2iscsi/be_iscsi.c create mode 100644 drivers/scsi/be2iscsi/be_iscsi.h create mode 100644 drivers/scsi/be2iscsi/be_main.c create mode 100644 drivers/scsi/be2iscsi/be_main.h create mode 100644 drivers/scsi/be2iscsi/be_mgmt.c create mode 100644 drivers/scsi/be2iscsi/be_mgmt.h (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c233f6fe0fff..db6c47b3ec5a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4625,6 +4625,14 @@ F: drivers/ata/ F: include/linux/ata.h F: 
include/linux/libata.h +SERVER ENGINES 10Gbps iSCSI - BladeEngine 2 DRIVER +P: Jayamohan Kallickal +M: jayamohank@serverengines.com +L: linux-scsi@vger.kernel.org +W: http://www.serverengines.com +S: Supported +F: drivers/scsi/be2iscsi/ + SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER M: Sathya Perla M: Subbu Seetharaman diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index a8ea01b07868..e11cca4c784c 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -366,6 +366,7 @@ config ISCSI_TCP source "drivers/scsi/cxgb3i/Kconfig" source "drivers/scsi/bnx2i/Kconfig" +source "drivers/scsi/be2iscsi/Kconfig" config SGIWD93_SCSI tristate "SGI WD93C93 SCSI Driver" diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index 316f0a1a7986..3ad61db5e3fa 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -131,6 +131,7 @@ obj-$(CONFIG_SCSI_MVSAS) += mvsas/ obj-$(CONFIG_PS3_ROM) += ps3rom.o obj-$(CONFIG_SCSI_CXGB3_ISCSI) += libiscsi.o libiscsi_tcp.o cxgb3i/ obj-$(CONFIG_SCSI_BNX2_ISCSI) += libiscsi.o bnx2i/ +obj-$(CONFIG_BE2ISCSI) += libiscsi.o be2iscsi/ obj-$(CONFIG_SCSI_PMCRAID) += pmcraid.o obj-$(CONFIG_ARM) += arm/ diff --git a/drivers/scsi/be2iscsi/Kconfig b/drivers/scsi/be2iscsi/Kconfig new file mode 100644 index 000000000000..2952fcd008ea --- /dev/null +++ b/drivers/scsi/be2iscsi/Kconfig @@ -0,0 +1,8 @@ +config BE2ISCSI + tristate "ServerEngines' 10Gbps iSCSI - BladeEngine 2" + depends on PCI && SCSI + select SCSI_ISCSI_ATTRS + + help + This driver implements the iSCSI functionality for ServerEngines' + 10Gbps Storage adapter - BladeEngine 2. diff --git a/drivers/scsi/be2iscsi/Makefile b/drivers/scsi/be2iscsi/Makefile new file mode 100644 index 000000000000..c11f443e3f83 --- /dev/null +++ b/drivers/scsi/be2iscsi/Makefile @@ -0,0 +1,8 @@ +# +# Makefile to build the iSCSI driver for ServerEngine's BladeEngine. +# +# + +obj-$(CONFIG_BE2ISCSI) += be2iscsi.o + +be2iscsi-y := be_iscsi.o be_main.o be_mgmt.o be_cmds.o diff --git a/drivers/scsi/be2iscsi/be.h b/drivers/scsi/be2iscsi/be.h new file mode 100644 index 000000000000..b36020dcf012 --- /dev/null +++ b/drivers/scsi/be2iscsi/be.h @@ -0,0 +1,183 @@ +/** + * Copyright (C) 2005 - 2009 ServerEngines + * All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. The full GNU General + * Public License is included in this distribution in the file called COPYING. + * + * Contact Information: + * linux-drivers@serverengines.com + * + * ServerEngines + * 209 N. 
Fair Oaks Ave + * Sunnyvale, CA 94085 + */ + +#ifndef BEISCSI_H +#define BEISCSI_H + +#include +#include + +#define FW_VER_LEN 32 + +struct be_dma_mem { + void *va; + dma_addr_t dma; + u32 size; +}; + +struct be_queue_info { + struct be_dma_mem dma_mem; + u16 len; + u16 entry_size; /* Size of an element in the queue */ + u16 id; + u16 tail, head; + bool created; + atomic_t used; /* Number of valid elements in the queue */ +}; + +static inline u32 MODULO(u16 val, u16 limit) +{ + WARN_ON(limit & (limit - 1)); + return val & (limit - 1); +} + +static inline void index_inc(u16 *index, u16 limit) +{ + *index = MODULO((*index + 1), limit); +} + +static inline void *queue_head_node(struct be_queue_info *q) +{ + return q->dma_mem.va + q->head * q->entry_size; +} + +static inline void *queue_tail_node(struct be_queue_info *q) +{ + return q->dma_mem.va + q->tail * q->entry_size; +} + +static inline void queue_head_inc(struct be_queue_info *q) +{ + index_inc(&q->head, q->len); +} + +static inline void queue_tail_inc(struct be_queue_info *q) +{ + index_inc(&q->tail, q->len); +} + +/*ISCSI */ + +struct be_eq_obj { + struct be_queue_info q; + char desc[32]; + + /* Adaptive interrupt coalescing (AIC) info */ + bool enable_aic; + u16 min_eqd; /* in usecs */ + u16 max_eqd; /* in usecs */ + u16 cur_eqd; /* in usecs */ +}; + +struct be_mcc_obj { + struct be_queue_info *q; + struct be_queue_info *cq; +}; + +struct be_ctrl_info { + u8 __iomem *csr; + u8 __iomem *db; /* Door Bell */ + u8 __iomem *pcicfg; /* PCI config space */ + struct pci_dev *pdev; + + /* Mbox used for cmd request/response */ + spinlock_t mbox_lock; /* For serializing mbox cmds to BE card */ + struct be_dma_mem mbox_mem; + /* Mbox mem is adjusted to align to 16 bytes. The allocated addr + * is stored for freeing purpose */ + struct be_dma_mem mbox_mem_alloced; + + /* MCC Rings */ + struct be_mcc_obj mcc_obj; + spinlock_t mcc_lock; /* For serializing mcc cmds to BE card */ + spinlock_t mcc_cq_lock; + + /* MCC Async callback */ + void (*async_cb) (void *adapter, bool link_up); + void *adapter_ctxt; +}; + +#include "be_cmds.h" + +#define PAGE_SHIFT_4K 12 +#define PAGE_SIZE_4K (1 << PAGE_SHIFT_4K) + +/* Returns number of pages spanned by the data starting at the given addr */ +#define PAGES_4K_SPANNED(_address, size) \ + ((u32)((((size_t)(_address) & (PAGE_SIZE_4K - 1)) + \ + (size) + (PAGE_SIZE_4K - 1)) >> PAGE_SHIFT_4K)) + +/* Byte offset into the page corresponding to given address */ +#define OFFSET_IN_PAGE(addr) \ + ((size_t)(addr) & (PAGE_SIZE_4K-1)) + +/* Returns bit offset within a DWORD of a bitfield */ +#define AMAP_BIT_OFFSET(_struct, field) \ + (((size_t)&(((_struct *)0)->field))%32) + +/* Returns the bit mask of the field that is NOT shifted into location. */ +static inline u32 amap_mask(u32 bitsize) +{ + return (bitsize == 32 ? 
0xFFFFFFFF : (1 << bitsize) - 1); +} + +static inline void amap_set(void *ptr, u32 dw_offset, u32 mask, + u32 offset, u32 value) +{ + u32 *dw = (u32 *) ptr + dw_offset; + *dw &= ~(mask << offset); + *dw |= (mask & value) << offset; +} + +#define AMAP_SET_BITS(_struct, field, ptr, val) \ + amap_set(ptr, \ + offsetof(_struct, field)/32, \ + amap_mask(sizeof(((_struct *)0)->field)), \ + AMAP_BIT_OFFSET(_struct, field), \ + val) + +static inline u32 amap_get(void *ptr, u32 dw_offset, u32 mask, u32 offset) +{ + u32 *dw = ptr; + return mask & (*(dw + dw_offset) >> offset); +} + +#define AMAP_GET_BITS(_struct, field, ptr) \ + amap_get(ptr, \ + offsetof(_struct, field)/32, \ + amap_mask(sizeof(((_struct *)0)->field)), \ + AMAP_BIT_OFFSET(_struct, field)) + +#define be_dws_cpu_to_le(wrb, len) swap_dws(wrb, len) +#define be_dws_le_to_cpu(wrb, len) swap_dws(wrb, len) +static inline void swap_dws(void *wrb, int len) +{ +#ifdef __BIG_ENDIAN + u32 *dw = wrb; + WARN_ON(len % 4); + do { + *dw = cpu_to_le32(*dw); + dw++; + len -= 4; + } while (len); +#endif /* __BIG_ENDIAN */ +} + +extern void beiscsi_cq_notify(struct be_ctrl_info *ctrl, u16 qid, bool arm, + u16 num_popped); + +#endif /* BEISCSI_H */ diff --git a/drivers/scsi/be2iscsi/be_cmds.c b/drivers/scsi/be2iscsi/be_cmds.c new file mode 100644 index 000000000000..08007b6e42df --- /dev/null +++ b/drivers/scsi/be2iscsi/be_cmds.c @@ -0,0 +1,523 @@ +/** + * Copyright (C) 2005 - 2009 ServerEngines + * All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. The full GNU General + * Public License is included in this distribution in the file called COPYING. + * + * Contact Information: + * linux-drivers@serverengines.com + * + * ServerEngines + * 209 N. 
Fair Oaks Ave + * Sunnyvale, CA 94085 + */ + +#include "be.h" +#include "be_mgmt.h" +#include "be_main.h" + +static inline bool be_mcc_compl_is_new(struct be_mcc_compl *compl) +{ + if (compl->flags != 0) { + compl->flags = le32_to_cpu(compl->flags); + WARN_ON((compl->flags & CQE_FLAGS_VALID_MASK) == 0); + return true; + } else + return false; +} + +static inline void be_mcc_compl_use(struct be_mcc_compl *compl) +{ + compl->flags = 0; +} + +static int be_mcc_compl_process(struct be_ctrl_info *ctrl, + struct be_mcc_compl *compl) +{ + u16 compl_status, extd_status; + + be_dws_le_to_cpu(compl, 4); + + compl_status = (compl->status >> CQE_STATUS_COMPL_SHIFT) & + CQE_STATUS_COMPL_MASK; + if (compl_status != MCC_STATUS_SUCCESS) { + extd_status = (compl->status >> CQE_STATUS_EXTD_SHIFT) & + CQE_STATUS_EXTD_MASK; + dev_err(&ctrl->pdev->dev, + "error in cmd completion: status(compl/extd)=%d/%d\n", + compl_status, extd_status); + return -1; + } + return 0; +} + +static inline bool is_link_state_evt(u32 trailer) +{ + return (((trailer >> ASYNC_TRAILER_EVENT_CODE_SHIFT) & + ASYNC_TRAILER_EVENT_CODE_MASK) == ASYNC_EVENT_CODE_LINK_STATE); +} + +void beiscsi_cq_notify(struct be_ctrl_info *ctrl, u16 qid, bool arm, + u16 num_popped) +{ + u32 val = 0; + val |= qid & DB_CQ_RING_ID_MASK; + if (arm) + val |= 1 << DB_CQ_REARM_SHIFT; + val |= num_popped << DB_CQ_NUM_POPPED_SHIFT; + iowrite32(val, ctrl->db + DB_CQ_OFFSET); +} + +static int be_mbox_db_ready_wait(struct be_ctrl_info *ctrl) +{ +#define long_delay 2000 + void __iomem *db = ctrl->db + MPU_MAILBOX_DB_OFFSET; + int cnt = 0, wait = 5; /* in usecs */ + u32 ready; + + do { + ready = ioread32(db) & MPU_MAILBOX_DB_RDY_MASK; + if (ready) + break; + + if (cnt > 6000000) { + dev_err(&ctrl->pdev->dev, "mbox_db poll timed out\n"); + return -1; + } + + if (cnt > 50) { + wait = long_delay; + mdelay(long_delay / 1000); + } else + udelay(wait); + cnt += wait; + } while (true); + return 0; +} + +int be_mbox_notify(struct be_ctrl_info *ctrl) +{ + int status; + u32 val = 0; + void __iomem *db = ctrl->db + MPU_MAILBOX_DB_OFFSET; + struct be_dma_mem *mbox_mem = &ctrl->mbox_mem; + struct be_mcc_mailbox *mbox = mbox_mem->va; + struct be_mcc_compl *compl = &mbox->compl; + + val &= ~MPU_MAILBOX_DB_RDY_MASK; + val |= MPU_MAILBOX_DB_HI_MASK; + val |= (upper_32_bits(mbox_mem->dma) >> 2) << 2; + iowrite32(val, db); + + status = be_mbox_db_ready_wait(ctrl); + if (status != 0) { + SE_DEBUG(DBG_LVL_1, " be_mbox_db_ready_wait failed 1\n"); + return status; + } + val = 0; + val &= ~MPU_MAILBOX_DB_RDY_MASK; + val &= ~MPU_MAILBOX_DB_HI_MASK; + val |= (u32) (mbox_mem->dma >> 4) << 2; + iowrite32(val, db); + + status = be_mbox_db_ready_wait(ctrl); + if (status != 0) { + SE_DEBUG(DBG_LVL_1, " be_mbox_db_ready_wait failed 2\n"); + return status; + } + if (be_mcc_compl_is_new(compl)) { + status = be_mcc_compl_process(ctrl, &mbox->compl); + be_mcc_compl_use(compl); + if (status) { + SE_DEBUG(DBG_LVL_1, "After be_mcc_compl_process \n"); + return status; + } + } else { + dev_err(&ctrl->pdev->dev, "invalid mailbox completion\n"); + return -1; + } + return 0; +} + +void be_wrb_hdr_prepare(struct be_mcc_wrb *wrb, int payload_len, + bool embedded, u8 sge_cnt) +{ + if (embedded) + wrb->embedded |= MCC_WRB_EMBEDDED_MASK; + else + wrb->embedded |= (sge_cnt & MCC_WRB_SGE_CNT_MASK) << + MCC_WRB_SGE_CNT_SHIFT; + wrb->payload_length = payload_len; + be_dws_cpu_to_le(wrb, 8); +} + +void be_cmd_hdr_prepare(struct be_cmd_req_hdr *req_hdr, + u8 subsystem, u8 opcode, int cmd_len) +{ + req_hdr->opcode = opcode; 
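+	/*
+	 * cmd_len covers the whole embedded command including this
+	 * header; request_length below carries only the payload, e.g.
+	 * sizeof(struct be_cmd_req_eq_create) - sizeof(*req_hdr) for
+	 * an EQ create.
+	 */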
+ req_hdr->subsystem = subsystem; + req_hdr->request_length = cpu_to_le32(cmd_len - sizeof(*req_hdr)); +} + +static void be_cmd_page_addrs_prepare(struct phys_addr *pages, u32 max_pages, + struct be_dma_mem *mem) +{ + int i, buf_pages; + u64 dma = (u64) mem->dma; + + buf_pages = min(PAGES_4K_SPANNED(mem->va, mem->size), max_pages); + for (i = 0; i < buf_pages; i++) { + pages[i].lo = cpu_to_le32(dma & 0xFFFFFFFF); + pages[i].hi = cpu_to_le32(upper_32_bits(dma)); + dma += PAGE_SIZE_4K; + } +} + +static u32 eq_delay_to_mult(u32 usec_delay) +{ +#define MAX_INTR_RATE 651042 + const u32 round = 10; + u32 multiplier; + + if (usec_delay == 0) + multiplier = 0; + else { + u32 interrupt_rate = 1000000 / usec_delay; + if (interrupt_rate == 0) + multiplier = 1023; + else { + multiplier = (MAX_INTR_RATE - interrupt_rate) * round; + multiplier /= interrupt_rate; + multiplier = (multiplier + round / 2) / round; + multiplier = min(multiplier, (u32) 1023); + } + } + return multiplier; +} + +struct be_mcc_wrb *wrb_from_mbox(struct be_dma_mem *mbox_mem) +{ + return &((struct be_mcc_mailbox *)(mbox_mem->va))->wrb; +} + +int beiscsi_cmd_eq_create(struct be_ctrl_info *ctrl, + struct be_queue_info *eq, int eq_delay) +{ + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct be_cmd_req_eq_create *req = embedded_payload(wrb); + struct be_cmd_resp_eq_create *resp = embedded_payload(wrb); + struct be_dma_mem *q_mem = &eq->dma_mem; + int status; + + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + + be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); + + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON, + OPCODE_COMMON_EQ_CREATE, sizeof(*req)); + + req->num_pages = cpu_to_le16(PAGES_4K_SPANNED(q_mem->va, q_mem->size)); + + AMAP_SET_BITS(struct amap_eq_context, func, req->context, + PCI_FUNC(ctrl->pdev->devfn)); + AMAP_SET_BITS(struct amap_eq_context, valid, req->context, 1); + AMAP_SET_BITS(struct amap_eq_context, size, req->context, 0); + AMAP_SET_BITS(struct amap_eq_context, count, req->context, + __ilog2_u32(eq->len / 256)); + AMAP_SET_BITS(struct amap_eq_context, delaymult, req->context, + eq_delay_to_mult(eq_delay)); + be_dws_cpu_to_le(req->context, sizeof(req->context)); + + be_cmd_page_addrs_prepare(req->pages, ARRAY_SIZE(req->pages), q_mem); + + status = be_mbox_notify(ctrl); + if (!status) { + eq->id = le16_to_cpu(resp->eq_id); + eq->created = true; + } + spin_unlock(&ctrl->mbox_lock); + return status; +} + +int be_cmd_fw_initialize(struct be_ctrl_info *ctrl) +{ + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + int status; + u8 *endian_check; + + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + + endian_check = (u8 *) wrb; + *endian_check++ = 0xFF; + *endian_check++ = 0x12; + *endian_check++ = 0x34; + *endian_check++ = 0xFF; + *endian_check++ = 0xFF; + *endian_check++ = 0x56; + *endian_check++ = 0x78; + *endian_check++ = 0xFF; + be_dws_cpu_to_le(wrb, sizeof(*wrb)); + + status = be_mbox_notify(ctrl); + if (status) + SE_DEBUG(DBG_LVL_1, "be_cmd_fw_initialize Failed \n"); + + spin_unlock(&ctrl->mbox_lock); + return status; +} + +int beiscsi_cmd_cq_create(struct be_ctrl_info *ctrl, + struct be_queue_info *cq, struct be_queue_info *eq, + bool sol_evts, bool no_delay, int coalesce_wm) +{ + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct be_cmd_req_cq_create *req = embedded_payload(wrb); + struct be_cmd_resp_cq_create *resp = embedded_payload(wrb); + struct be_dma_mem *q_mem = &cq->dma_mem; + void *ctxt = &req->context; + int status; + + 
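+	/*
+	 * Every embedded command reuses the single mailbox WRB, so
+	 * mbox_lock is held from here until be_mbox_notify() has
+	 * consumed the completion.
+	 */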
spin_lock(&ctrl->mbox_lock);
+	memset(wrb, 0, sizeof(*wrb));
+
+	be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
+
+	be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
+			   OPCODE_COMMON_CQ_CREATE, sizeof(*req));
+
+	if (!q_mem->va)
+		SE_DEBUG(DBG_LVL_1, "uninitialized q_mem->va\n");
+
+	req->num_pages = cpu_to_le16(PAGES_4K_SPANNED(q_mem->va, q_mem->size));
+
+	AMAP_SET_BITS(struct amap_cq_context, coalescwm, ctxt, coalesce_wm);
+	AMAP_SET_BITS(struct amap_cq_context, nodelay, ctxt, no_delay);
+	AMAP_SET_BITS(struct amap_cq_context, count, ctxt,
+		      __ilog2_u32(cq->len / 256));
+	AMAP_SET_BITS(struct amap_cq_context, valid, ctxt, 1);
+	AMAP_SET_BITS(struct amap_cq_context, solevent, ctxt, sol_evts);
+	AMAP_SET_BITS(struct amap_cq_context, eventable, ctxt, 1);
+	AMAP_SET_BITS(struct amap_cq_context, eqid, ctxt, eq->id);
+	AMAP_SET_BITS(struct amap_cq_context, armed, ctxt, 1);
+	AMAP_SET_BITS(struct amap_cq_context, func, ctxt,
+		      PCI_FUNC(ctrl->pdev->devfn));
+	be_dws_cpu_to_le(ctxt, sizeof(req->context));
+
+	be_cmd_page_addrs_prepare(req->pages, ARRAY_SIZE(req->pages), q_mem);
+
+	status = be_mbox_notify(ctrl);
+	if (!status) {
+		cq->id = le16_to_cpu(resp->cq_id);
+		cq->created = true;
+	} else
+		SE_DEBUG(DBG_LVL_1, "In beiscsi_cmd_cq_create, status=0x%08x\n",
+			 status);
+	spin_unlock(&ctrl->mbox_lock);
+
+	return status;
+}
+
+static u32 be_encoded_q_len(int q_len)
+{
+	u32 len_encoded = fls(q_len);	/* log2(len) + 1 */
+
+	if (len_encoded == 16)
+		len_encoded = 0;
+	return len_encoded;
+}
+
+int beiscsi_cmd_q_destroy(struct be_ctrl_info *ctrl, struct be_queue_info *q,
+			  int queue_type)
+{
+	struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
+	struct be_cmd_req_q_destroy *req = embedded_payload(wrb);
+	u8 subsys = 0, opcode = 0;
+	int status;
+
+	spin_lock(&ctrl->mbox_lock);
+	memset(wrb, 0, sizeof(*wrb));
+	be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
+
+	switch (queue_type) {
+	case QTYPE_EQ:
+		subsys = CMD_SUBSYSTEM_COMMON;
+		opcode = OPCODE_COMMON_EQ_DESTROY;
+		break;
+	case QTYPE_CQ:
+		subsys = CMD_SUBSYSTEM_COMMON;
+		opcode = OPCODE_COMMON_CQ_DESTROY;
+		break;
+	case QTYPE_WRBQ:
+		subsys = CMD_SUBSYSTEM_ISCSI;
+		opcode = OPCODE_COMMON_ISCSI_WRBQ_DESTROY;
+		break;
+	case QTYPE_DPDUQ:
+		subsys = CMD_SUBSYSTEM_ISCSI;
+		opcode = OPCODE_COMMON_ISCSI_DEFQ_DESTROY;
+		break;
+	case QTYPE_SGL:
+		subsys = CMD_SUBSYSTEM_ISCSI;
+		opcode = OPCODE_COMMON_ISCSI_CFG_REMOVE_SGL_PAGES;
+		break;
+	default:
+		spin_unlock(&ctrl->mbox_lock);
+		BUG();
+		return -1;
+	}
+	be_cmd_hdr_prepare(&req->hdr, subsys, opcode, sizeof(*req));
+	if (queue_type != QTYPE_SGL)
+		req->id = cpu_to_le16(q->id);
+
+	status = be_mbox_notify(ctrl);
+
+	spin_unlock(&ctrl->mbox_lock);
+	return status;
+}
+
+int be_cmd_get_mac_addr(struct be_ctrl_info *ctrl, u8 *mac_addr)
+{
+	struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
+	struct be_cmd_req_get_mac_addr *req = embedded_payload(wrb);
+	int status;
+
+	spin_lock(&ctrl->mbox_lock);
+	memset(wrb, 0, sizeof(*wrb));
+	be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
+	be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI,
+			   OPCODE_COMMON_ISCSI_NTWK_GET_NIC_CONFIG,
+			   sizeof(*req));
+
+	status = be_mbox_notify(ctrl);
+	if (!status) {
+		struct be_cmd_resp_get_mac_addr *resp = embedded_payload(wrb);
+
+		memcpy(mac_addr, resp->mac_address, ETH_ALEN);
+	}
+
+	spin_unlock(&ctrl->mbox_lock);
+	return status;
+}
+
+int be_cmd_create_default_pdu_queue(struct be_ctrl_info *ctrl,
+				    struct be_queue_info *cq,
+				    struct be_queue_info *dq, int length,
+				    int entry_size)
+{
+	struct be_mcc_wrb *wrb =
wrb_from_mbox(&ctrl->mbox_mem); + struct be_defq_create_req *req = embedded_payload(wrb); + struct be_dma_mem *q_mem = &dq->dma_mem; + void *ctxt = &req->context; + int status; + + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + + be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); + + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI, + OPCODE_COMMON_ISCSI_DEFQ_CREATE, sizeof(*req)); + + req->num_pages = PAGES_4K_SPANNED(q_mem->va, q_mem->size); + AMAP_SET_BITS(struct amap_be_default_pdu_context, rx_pdid, ctxt, 0); + AMAP_SET_BITS(struct amap_be_default_pdu_context, rx_pdid_valid, ctxt, + 1); + AMAP_SET_BITS(struct amap_be_default_pdu_context, pci_func_id, ctxt, + PCI_FUNC(ctrl->pdev->devfn)); + AMAP_SET_BITS(struct amap_be_default_pdu_context, ring_size, ctxt, + be_encoded_q_len(length / sizeof(struct phys_addr))); + AMAP_SET_BITS(struct amap_be_default_pdu_context, default_buffer_size, + ctxt, entry_size); + AMAP_SET_BITS(struct amap_be_default_pdu_context, cq_id_recv, ctxt, + cq->id); + + be_dws_cpu_to_le(ctxt, sizeof(req->context)); + + be_cmd_page_addrs_prepare(req->pages, ARRAY_SIZE(req->pages), q_mem); + + status = be_mbox_notify(ctrl); + if (!status) { + struct be_defq_create_resp *resp = embedded_payload(wrb); + + dq->id = le16_to_cpu(resp->id); + dq->created = true; + } + spin_unlock(&ctrl->mbox_lock); + + return status; +} + +int be_cmd_wrbq_create(struct be_ctrl_info *ctrl, struct be_dma_mem *q_mem, + struct be_queue_info *wrbq) +{ + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct be_wrbq_create_req *req = embedded_payload(wrb); + struct be_wrbq_create_resp *resp = embedded_payload(wrb); + int status; + + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + + be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); + + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI, + OPCODE_COMMON_ISCSI_WRBQ_CREATE, sizeof(*req)); + req->num_pages = PAGES_4K_SPANNED(q_mem->va, q_mem->size); + be_cmd_page_addrs_prepare(req->pages, ARRAY_SIZE(req->pages), q_mem); + + status = be_mbox_notify(ctrl); + if (!status) + wrbq->id = le16_to_cpu(resp->cid); + spin_unlock(&ctrl->mbox_lock); + return status; +} + +int be_cmd_iscsi_post_sgl_pages(struct be_ctrl_info *ctrl, + struct be_dma_mem *q_mem, + u32 page_offset, u32 num_pages) +{ + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct be_post_sgl_pages_req *req = embedded_payload(wrb); + int status; + unsigned int curr_pages; + u32 internal_page_offset = 0; + u32 temp_num_pages = num_pages; + + if (num_pages == 0xff) + num_pages = 1; + + spin_lock(&ctrl->mbox_lock); + do { + memset(wrb, 0, sizeof(*wrb)); + be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI, + OPCODE_COMMON_ISCSI_CFG_POST_SGL_PAGES, + sizeof(*req)); + curr_pages = BE_NUMBER_OF_FIELD(struct be_post_sgl_pages_req, + pages); + req->num_pages = min(num_pages, curr_pages); + req->page_offset = page_offset; + be_cmd_page_addrs_prepare(req->pages, req->num_pages, q_mem); + q_mem->dma = q_mem->dma + (req->num_pages * PAGE_SIZE); + internal_page_offset += req->num_pages; + page_offset += req->num_pages; + num_pages -= req->num_pages; + + if (temp_num_pages == 0xff) + req->num_pages = temp_num_pages; + + status = be_mbox_notify(ctrl); + if (status) { + SE_DEBUG(DBG_LVL_1, + "FW CMD to map iscsi frags failed.\n"); + goto error; + } + } while (num_pages > 0); +error: + spin_unlock(&ctrl->mbox_lock); + if (status != 0) + beiscsi_cmd_q_destroy(ctrl, NULL, QTYPE_SGL); + return status; +} diff --git 
a/drivers/scsi/be2iscsi/be_cmds.h b/drivers/scsi/be2iscsi/be_cmds.h
new file mode 100644
index 000000000000..c20d686cbb43
--- /dev/null
+++ b/drivers/scsi/be2iscsi/be_cmds.h
@@ -0,0 +1,877 @@
+/**
+ * Copyright (C) 2005 - 2009 ServerEngines
+ * All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation. The full GNU General
+ * Public License is included in this distribution in the file called COPYING.
+ *
+ * Contact Information:
+ * linux-drivers@serverengines.com
+ *
+ * ServerEngines
+ * 209 N. Fair Oaks Ave
+ * Sunnyvale, CA 94085
+ */
+
+#ifndef BEISCSI_CMDS_H
+#define BEISCSI_CMDS_H
+
+/**
+ * The driver sends configuration and management command requests to the
+ * firmware in the BE. These requests are communicated to the processor
+ * using Work Request Blocks (WRBs) submitted to the MCC-WRB ring or via one
+ * WRB inside a MAILBOX.
+ * The commands are serviced by the ARM processor in the BladeEngine's MPU.
+ */
+struct be_sge {
+	u32 pa_lo;
+	u32 pa_hi;
+	u32 len;
+};
+
+#define MCC_WRB_SGE_CNT_SHIFT 3		/* bits 3 - 7 of dword 0 */
+#define MCC_WRB_SGE_CNT_MASK 0x1F	/* bits 3 - 7 of dword 0 */
+struct be_mcc_wrb {
+	u32 embedded;		/* dword 0 */
+	u32 payload_length;	/* dword 1 */
+	u32 tag0;		/* dword 2 */
+	u32 tag1;		/* dword 3 */
+	u32 rsvd;		/* dword 4 */
+	union {
+		u8 embedded_payload[236];	/* used by embedded cmds */
+		struct be_sge sgl[19];		/* used by non-embedded cmds */
+	} payload;
+};
+
+#define CQE_FLAGS_VALID_MASK (1 << 31)
+#define CQE_FLAGS_ASYNC_MASK (1 << 30)
+
+/* Completion Status */
+#define MCC_STATUS_SUCCESS 0x0
+
+#define CQE_STATUS_COMPL_MASK 0xFFFF
+#define CQE_STATUS_COMPL_SHIFT 0	/* bits 0 - 15 */
+#define CQE_STATUS_EXTD_MASK 0xFFFF
+#define CQE_STATUS_EXTD_SHIFT 16	/* bits 16 - 31 */
+
+struct be_mcc_compl {
+	u32 status;	/* dword 0 */
+	u32 tag0;	/* dword 1 */
+	u32 tag1;	/* dword 2 */
+	u32 flags;	/* dword 3 */
+};
+
+/********* Mailbox door bell *************/
+/**
+ * Used for driver communication with the FW.
+ * The software must write this register twice to post any command. First,
+ * it writes the register with hi=1 and the upper bits of the physical address
+ * for the MAILBOX structure. Software must poll the ready bit until this
+ * is acknowledged. Then, software writes the register with hi=0 and the lower
+ * bits of the address. It must poll the ready bit until the command is
+ * complete. Upon completion, the MAILBOX will contain a valid completion
+ * queue entry.
+ */
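For illustration, the two-phase handshake this comment describes can be distilled into a small standalone C sketch. mmio_write32() and wait_ready() below are hypothetical stand-ins for the driver's iowrite32() and be_mbox_db_ready_wait(); only the bit layout mirrors be_mbox_notify() in be_cmds.c (the mailbox is 16-byte aligned, so address bits 0-3 are implicitly zero):

#include <stdint.h>

#define DB_RDY_MASK 0x1	/* bit 0: firmware acknowledge, polled by wait_ready() */
#define DB_HI_MASK  0x2	/* bit 1: this write carries the high address bits */

extern void mmio_write32(uint32_t val);	/* stand-in for iowrite32() */
extern int wait_ready(void);		/* stand-in for be_mbox_db_ready_wait() */

int post_mailbox(uint64_t mbox_pa)	/* mbox_pa: 16-byte-aligned physical address */
{
	/* Phase 1: hi=1, register bits 2..31 carry address bits 34..63. */
	mmio_write32(DB_HI_MASK | ((uint32_t)(mbox_pa >> 34) << 2));
	if (wait_ready())
		return -1;

	/* Phase 2: hi=0, register bits 2..31 carry address bits 4..33. */
	mmio_write32((uint32_t)(mbox_pa >> 4) << 2);
	return wait_ready();
}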
+#define MPU_MAILBOX_DB_OFFSET 0x160
+#define MPU_MAILBOX_DB_RDY_MASK 0x1	/* bit 0 */
+#define MPU_MAILBOX_DB_HI_MASK 0x2	/* bit 1 */
+
+/********** MPU semaphore ******************/
+#define MPU_EP_SEMAPHORE_OFFSET 0xac
+#define EP_SEMAPHORE_POST_STAGE_MASK 0x0000FFFF
+#define EP_SEMAPHORE_POST_ERR_MASK 0x1
+#define EP_SEMAPHORE_POST_ERR_SHIFT 31
+
+/********** MCC door bell ************/
+#define DB_MCCQ_OFFSET 0x140
+#define DB_MCCQ_RING_ID_MASK 0x7FF	/* bits 0 - 10 */
+/* Number of entries posted */
+#define DB_MCCQ_NUM_POSTED_SHIFT 16	/* bits 16 - 29 */
+
+/* MPU semaphore POST stage values */
+#define POST_STAGE_ARMFW_RDY 0xc000	/* FW is done with POST */
+
+/**
+ * When the async bit of mcc_compl is set, the last 4 bytes of
+ * mcc_compl are interpreted as follows:
+ */
+#define ASYNC_TRAILER_EVENT_CODE_SHIFT 8	/* bits 8 - 15 */
+#define ASYNC_TRAILER_EVENT_CODE_MASK 0xFF
+#define ASYNC_EVENT_CODE_LINK_STATE 0x1
+struct be_async_event_trailer {
+	u32 code;
+};
+
+enum {
+	ASYNC_EVENT_LINK_DOWN = 0x0,
+	ASYNC_EVENT_LINK_UP = 0x1
+};
+
+/**
+ * When the event code of an async trailer is link-state, the mcc_compl
+ * must be interpreted as follows:
+ */
+struct be_async_event_link_state {
+	u8 physical_port;
+	u8 port_link_status;
+	u8 port_duplex;
+	u8 port_speed;
+	u8 port_fault;
+	u8 rsvd0[7];
+	struct be_async_event_trailer trailer;
+} __packed;
+
+struct be_mcc_mailbox {
+	struct be_mcc_wrb wrb;
+	struct be_mcc_compl compl;
+};
+
+/* Type of subsystems supported by FW */
+#define CMD_SUBSYSTEM_COMMON 0x1
+#define CMD_SUBSYSTEM_ISCSI 0x2
+#define CMD_SUBSYSTEM_ETH 0x3
+#define CMD_SUBSYSTEM_ISCSI_INI 0x6
+#define CMD_COMMON_TCP_UPLOAD 0x1
+
+/**
+ * List of common opcodes subsystem CMD_SUBSYSTEM_COMMON
+ * These opcodes are unique for each subsystem defined above
+ */
+#define OPCODE_COMMON_CQ_CREATE 12
+#define OPCODE_COMMON_EQ_CREATE 13
+#define OPCODE_COMMON_MCC_CREATE 21
+#define OPCODE_COMMON_GET_CNTL_ATTRIBUTES 32
+#define OPCODE_COMMON_GET_FW_VERSION 35
+#define OPCODE_COMMON_MODIFY_EQ_DELAY 41
+#define OPCODE_COMMON_FIRMWARE_CONFIG 42
+#define OPCODE_COMMON_MCC_DESTROY 53
+#define OPCODE_COMMON_CQ_DESTROY 54
+#define OPCODE_COMMON_EQ_DESTROY 55
+#define OPCODE_COMMON_QUERY_FIRMWARE_CONFIG 58
+#define OPCODE_COMMON_FUNCTION_RESET 61
+
+/**
+ * List of opcodes that are common between Initiator and Target
+ * used by CMD_SUBSYSTEM_ISCSI
+ * These opcodes are unique for each subsystem defined above
+ */
+#define OPCODE_COMMON_ISCSI_CFG_POST_SGL_PAGES 2
+#define OPCODE_COMMON_ISCSI_CFG_REMOVE_SGL_PAGES 3
+#define OPCODE_COMMON_ISCSI_NTWK_GET_NIC_CONFIG 7
+#define OPCODE_COMMON_ISCSI_SET_FRAGNUM_BITS_FOR_SGL_CRA 61
+#define OPCODE_COMMON_ISCSI_DEFQ_CREATE 64
+#define OPCODE_COMMON_ISCSI_DEFQ_DESTROY 65
+#define OPCODE_COMMON_ISCSI_WRBQ_CREATE 66
+#define OPCODE_COMMON_ISCSI_WRBQ_DESTROY 67
+
+struct be_cmd_req_hdr {
+	u8 opcode;		/* dword 0 */
+	u8 subsystem;		/* dword 0 */
+	u8 port_number;		/* dword 0 */
+	u8 domain;		/* dword 0 */
+	u32 timeout;		/* dword 1 */
+	u32 request_length;	/* dword 2 */
+	u32 rsvd;		/* dword 3 */
+};
+
+struct be_cmd_resp_hdr {
+	u32 info;		/* dword 0 */
+	u32 status;		/* dword 1 */
+	u32 response_length;	/* dword 2 */
+	u32 actual_resp_len;	/* dword 3 */
+};
+
+struct phys_addr {
+	u32 lo;
+	u32 hi;
+};
+
+/**************************
+ * BE Command definitions *
+ **************************/
+
+/**
+ * Pseudo amap definition in which each bit of the actual structure is defined
+ * as a byte - used to calculate offset/shift/mask of each field
+ */
+struct
amap_eq_context { + u8 cidx[13]; /* dword 0 */ + u8 rsvd0[3]; /* dword 0 */ + u8 epidx[13]; /* dword 0 */ + u8 valid; /* dword 0 */ + u8 rsvd1; /* dword 0 */ + u8 size; /* dword 0 */ + u8 pidx[13]; /* dword 1 */ + u8 rsvd2[3]; /* dword 1 */ + u8 pd[10]; /* dword 1 */ + u8 count[3]; /* dword 1 */ + u8 solevent; /* dword 1 */ + u8 stalled; /* dword 1 */ + u8 armed; /* dword 1 */ + u8 rsvd3[4]; /* dword 2 */ + u8 func[8]; /* dword 2 */ + u8 rsvd4; /* dword 2 */ + u8 delaymult[10]; /* dword 2 */ + u8 rsvd5[2]; /* dword 2 */ + u8 phase[2]; /* dword 2 */ + u8 nodelay; /* dword 2 */ + u8 rsvd6[4]; /* dword 2 */ + u8 rsvd7[32]; /* dword 3 */ +} __packed; + +struct be_cmd_req_eq_create { + struct be_cmd_req_hdr hdr; /* dw[4] */ + u16 num_pages; /* sword */ + u16 rsvd0; /* sword */ + u8 context[sizeof(struct amap_eq_context) / 8]; /* dw[4] */ + struct phys_addr pages[8]; +} __packed; + +struct be_cmd_resp_eq_create { + struct be_cmd_resp_hdr resp_hdr; + u16 eq_id; /* sword */ + u16 rsvd0; /* sword */ +} __packed; + +struct mac_addr { + u16 size_of_struct; + u8 addr[ETH_ALEN]; +} __packed; + +struct be_cmd_req_mac_query { + struct be_cmd_req_hdr hdr; + u8 type; + u8 permanent; + u16 if_id; +} __packed; + +struct be_cmd_resp_mac_query { + struct be_cmd_resp_hdr hdr; + struct mac_addr mac; +}; + +/******************** Create CQ ***************************/ +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte - used to calculate offset/shift/mask of each field + */ +struct amap_cq_context { + u8 cidx[11]; /* dword 0 */ + u8 rsvd0; /* dword 0 */ + u8 coalescwm[2]; /* dword 0 */ + u8 nodelay; /* dword 0 */ + u8 epidx[11]; /* dword 0 */ + u8 rsvd1; /* dword 0 */ + u8 count[2]; /* dword 0 */ + u8 valid; /* dword 0 */ + u8 solevent; /* dword 0 */ + u8 eventable; /* dword 0 */ + u8 pidx[11]; /* dword 1 */ + u8 rsvd2; /* dword 1 */ + u8 pd[10]; /* dword 1 */ + u8 eqid[8]; /* dword 1 */ + u8 stalled; /* dword 1 */ + u8 armed; /* dword 1 */ + u8 rsvd3[4]; /* dword 2 */ + u8 func[8]; /* dword 2 */ + u8 rsvd4[20]; /* dword 2 */ + u8 rsvd5[32]; /* dword 3 */ +} __packed; + +struct be_cmd_req_cq_create { + struct be_cmd_req_hdr hdr; + u16 num_pages; + u16 rsvd0; + u8 context[sizeof(struct amap_cq_context) / 8]; + struct phys_addr pages[4]; +} __packed; + +struct be_cmd_resp_cq_create { + struct be_cmd_resp_hdr hdr; + u16 cq_id; + u16 rsvd0; +} __packed; + +/******************** Create MCCQ ***************************/ +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte - used to calculate offset/shift/mask of each field + */ +struct amap_mcc_context { + u8 con_index[14]; + u8 rsvd0[2]; + u8 ring_size[4]; + u8 fetch_wrb; + u8 fetch_r2t; + u8 cq_id[10]; + u8 prod_index[14]; + u8 fid[8]; + u8 pdid[9]; + u8 valid; + u8 rsvd1[32]; + u8 rsvd2[32]; +} __packed; + +struct be_cmd_req_mcc_create { + struct be_cmd_req_hdr hdr; + u16 num_pages; + u16 rsvd0; + u8 context[sizeof(struct amap_mcc_context) / 8]; + struct phys_addr pages[8]; +} __packed; + +struct be_cmd_resp_mcc_create { + struct be_cmd_resp_hdr hdr; + u16 id; + u16 rsvd0; +} __packed; + +/******************** Q Destroy ***************************/ +/* Type of Queue to be destroyed */ +enum { + QTYPE_EQ = 1, + QTYPE_CQ, + QTYPE_MCCQ, + QTYPE_WRBQ, + QTYPE_DPDUQ, + QTYPE_SGL +}; + +struct be_cmd_req_q_destroy { + struct be_cmd_req_hdr hdr; + u16 id; + u16 bypass_flush; /* valid only for rx q destroy */ +} __packed; + +struct macaddr { + u8 byte[ETH_ALEN]; +}; + +struct 
be_cmd_req_mcast_mac_config {
+	struct be_cmd_req_hdr hdr;
+	u16 num_mac;
+	u8 promiscuous;
+	u8 interface_id;
+	struct macaddr mac[32];
+} __packed;
+
+static inline void *embedded_payload(struct be_mcc_wrb *wrb)
+{
+	return wrb->payload.embedded_payload;
+}
+
+static inline struct be_sge *nonembedded_sgl(struct be_mcc_wrb *wrb)
+{
+	return &wrb->payload.sgl[0];
+}
+
+/******************** Modify EQ Delay *******************/
+struct be_cmd_req_modify_eq_delay {
+	struct be_cmd_req_hdr hdr;
+	u32 num_eq;
+	struct {
+		u32 eq_id;
+		u32 phase;
+		u32 delay_multiplier;
+	} delay[8];
+} __packed;
+
+/******************** Get MAC ADDR *******************/
+
+#define ETH_ALEN 6
+
+struct be_cmd_req_get_mac_addr {
+	struct be_cmd_req_hdr hdr;
+	u32 nic_port_count;
+	u32 speed;
+	u32 max_speed;
+	u32 link_state;
+	u32 max_frame_size;
+	u16 size_of_structure;
+	u8 mac_address[ETH_ALEN];
+	u32 rsvd[23];
+};
+
+struct be_cmd_resp_get_mac_addr {
+	struct be_cmd_resp_hdr hdr;
+	u32 nic_port_count;
+	u32 speed;
+	u32 max_speed;
+	u32 link_state;
+	u32 max_frame_size;
+	u16 size_of_structure;
+	u8 mac_address[ETH_ALEN];
+	u32 rsvd[23];
+};
+
+int beiscsi_cmd_eq_create(struct be_ctrl_info *ctrl,
+			  struct be_queue_info *eq, int eq_delay);
+
+int beiscsi_cmd_cq_create(struct be_ctrl_info *ctrl,
+			  struct be_queue_info *cq, struct be_queue_info *eq,
+			  bool sol_evts, bool no_delay,
+			  int num_cqe_dma_coalesce);
+
+int beiscsi_cmd_q_destroy(struct be_ctrl_info *ctrl, struct be_queue_info *q,
+			  int type);
+int be_poll_mcc(struct be_ctrl_info *ctrl);
+unsigned char mgmt_check_supported_fw(struct be_ctrl_info *ctrl);
+int be_cmd_get_mac_addr(struct be_ctrl_info *ctrl, u8 *mac_addr);
+
+/* iSCSI functions */
+int be_cmd_fw_initialize(struct be_ctrl_info *ctrl);
+
+struct be_mcc_wrb *wrb_from_mbox(struct be_dma_mem *mbox_mem);
+
+int be_mbox_notify(struct be_ctrl_info *ctrl);
+
+int be_cmd_create_default_pdu_queue(struct be_ctrl_info *ctrl,
+				    struct be_queue_info *cq,
+				    struct be_queue_info *dq, int length,
+				    int entry_size);
+
+int be_cmd_iscsi_post_sgl_pages(struct be_ctrl_info *ctrl,
+				struct be_dma_mem *q_mem, u32 page_offset,
+				u32 num_pages);
+
+int be_cmd_wrbq_create(struct be_ctrl_info *ctrl, struct be_dma_mem *q_mem,
+		       struct be_queue_info *wrbq);
+
+struct be_default_pdu_context {
+	u32 dw[4];
+} __packed;
+
+struct amap_be_default_pdu_context {
+	u8 dbuf_cindex[13];	/* dword 0 */
+	u8 rsvd0[3];		/* dword 0 */
+	u8 ring_size[4];	/* dword 0 */
+	u8 ring_state[4];	/* dword 0 */
+	u8 rsvd1[8];		/* dword 0 */
+	u8 dbuf_pindex[13];	/* dword 1 */
+	u8 rsvd2;		/* dword 1 */
+	u8 pci_func_id[8];	/* dword 1 */
+	u8 rx_pdid[9];		/* dword 1 */
+	u8 rx_pdid_valid;	/* dword 1 */
+	u8 default_buffer_size[16];	/* dword 2 */
+	u8 cq_id_recv[10];	/* dword 2 */
+	u8 rx_pdid_not_valid;	/* dword 2 */
+	u8 rsvd3[5];		/* dword 2 */
+	u8 rsvd4[32];		/* dword 3 */
+} __packed;
+
+struct be_defq_create_req {
+	struct be_cmd_req_hdr hdr;
+	u16 num_pages;
+	u8 ulp_num;
+	u8 rsvd0;
+	struct be_default_pdu_context context;
+	struct phys_addr pages[8];
+} __packed;
+
+struct be_defq_create_resp {
+	struct be_cmd_req_hdr hdr;
+	u16 id;
+	u16 rsvd0;
+} __packed;
+
+struct be_post_sgl_pages_req {
+	struct be_cmd_req_hdr hdr;
+	u16 num_pages;
+	u16 page_offset;
+	u32 rsvd0;
+	struct phys_addr pages[26];
+	u32 rsvd1;
+} __packed;
+
+struct be_wrbq_create_req {
+	struct be_cmd_req_hdr hdr;
+	u16 num_pages;
+	u8 ulp_num;
+	u8 rsvd0;
+	struct phys_addr pages[8];
+} __packed;
+
+struct be_wrbq_create_resp {
+	struct be_cmd_resp_hdr resp_hdr;
+	u16 cid;
+	u16 rsvd0;
+} __packed;
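A note on embedded_payload() above: the mailbox helpers in be_cmds.c take both the request pointer and the response pointer from the same WRB bytes, because the firmware overwrites the command in place with its response. A standalone sketch of that aliasing, with simplified stand-in types for struct be_mcc_wrb:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct sge { uint32_t pa_lo, pa_hi, len; };

struct wrb {
	uint32_t embedded, payload_length, tag0, tag1, rsvd;
	union {
		uint8_t embedded_payload[236];	/* embedded commands */
		struct sge sgl[19];		/* non-embedded commands */
	} payload;
};

int main(void)
{
	struct wrb w;
	/* req and resp intentionally alias the same storage */
	uint32_t *req = (uint32_t *)(void *)w.payload.embedded_payload;
	uint32_t *resp = (uint32_t *)(void *)w.payload.embedded_payload;

	memset(&w, 0, sizeof(w));
	req[0] = 0x0c;		/* command written here ... */
	printf("response dword 0: 0x%x\n", resp[0]);	/* ... read back here */
	return 0;
}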
+
+#define SOL_CID_MASK		0x0000FFC0
+#define SOL_CODE_MASK		0x0000003F
+#define SOL_WRB_INDEX_MASK	0x00FF0000
+#define SOL_CMD_WND_MASK	0xFF000000
+#define SOL_RES_CNT_MASK	0x7FFFFFFF
+#define SOL_EXP_CMD_SN_MASK	0xFFFFFFFF
+#define SOL_HW_STS_MASK		0x000000FF
+#define SOL_STS_MASK		0x0000FF00
+#define SOL_RESP_MASK		0x00FF0000
+#define SOL_FLAGS_MASK		0x7F000000
+#define SOL_S_MASK		0x80000000
+
+struct sol_cqe {
+	u32 dw[4];
+};
+
+struct amap_sol_cqe {
+	u8 hw_sts[8];		/* dword 0 */
+	u8 i_sts[8];		/* dword 0 */
+	u8 i_resp[8];		/* dword 0 */
+	u8 i_flags[7];		/* dword 0 */
+	u8 s;			/* dword 0 */
+	u8 i_exp_cmd_sn[32];	/* dword 1 */
+	u8 code[6];		/* dword 2 */
+	u8 cid[10];		/* dword 2 */
+	u8 wrb_index[8];	/* dword 2 */
+	u8 i_cmd_wnd[8];	/* dword 2 */
+	u8 i_res_cnt[31];	/* dword 3 */
+	u8 valid;		/* dword 3 */
+} __packed;
+
+/**
+ * Post WRB Queue Doorbell Register used by the host Storage stack
+ * to notify the controller of a posted Work Request Block
+ */
+#define DB_WRB_POST_CID_MASK		0x3FF	/* bits 0 - 9 */
+#define DB_DEF_PDU_WRB_INDEX_MASK	0xFF	/* bits 0 - 7 */
+
+#define DB_DEF_PDU_WRB_INDEX_SHIFT	16
+#define DB_DEF_PDU_NUM_POSTED_SHIFT	24
+
+struct fragnum_bits_for_sgl_cra_in {
+	struct be_cmd_req_hdr hdr;
+	u32 num_bits;
+} __packed;
+
+struct iscsi_cleanup_req {
+	struct be_cmd_req_hdr hdr;
+	u16 chute;
+	u8 hdr_ring_id;
+	u8 data_ring_id;
+} __packed;
+
+struct eq_delay {
+	u32 eq_id;
+	u32 phase;
+	u32 delay_multiplier;
+} __packed;
+
+struct be_eq_delay_params_in {
+	struct be_cmd_req_hdr hdr;
+	u32 num_eq;
+	struct eq_delay delay[8];
+} __packed;
+
+struct ip_address_format {
+	u16 size_of_structure;
+	u8 reserved;
+	u8 ip_type;
+	u8 ip_address[16];
+	u32 rsvd0;
+} __packed;
+
+struct tcp_connect_and_offload_in {
+	struct be_cmd_req_hdr hdr;
+	struct ip_address_format ip_address;
+	u16 tcp_port;
+	u16 cid;
+	u16 cq_id;
+	u16 defq_id;
+	struct phys_addr dataout_template_pa;
+	u16 hdr_ring_id;
+	u16 data_ring_id;
+	u8 do_offload;
+	u8 rsvd0[3];
+} __packed;
+
+struct tcp_connect_and_offload_out {
+	struct be_cmd_resp_hdr hdr;
+	u32 connection_handle;
+	u16 cid;
+	u16 rsvd0;
+} __packed;
+
+struct be_mcc_wrb_context {
+	struct MCC_WRB *wrb;
+	int *users_final_status;
+} __packed;
+
+#define DB_DEF_PDU_RING_ID_MASK	0x3FF	/* bits 0 - 9 */
+#define DB_DEF_PDU_CQPROC_MASK	0x3FFF	/* bits 0 - 13 */
+#define DB_DEF_PDU_REARM_SHIFT	14
+#define DB_DEF_PDU_EVENT_SHIFT	15
+#define DB_DEF_PDU_CQPROC_SHIFT	16
+
+struct dmsg_cqe {
+	u32 dw[4];
+} __packed;
+
+struct tcp_upload_params_in {
+	struct be_cmd_req_hdr hdr;
+	u16 id;
+	u16 upload_type;
+	u32 reset_seq;
+} __packed;
+
+struct tcp_upload_params_out {
+	u32 dw[32];
+} __packed;
+
+union tcp_upload_params {
+	struct tcp_upload_params_in request;
+	struct tcp_upload_params_out response;
+} __packed;
+
+struct be_ulp_fw_cfg {
+	u32 ulp_mode;
+	u32 etx_base;
+	u32 etx_count;
+	u32 sq_base;
+	u32 sq_count;
+	u32 rq_base;
+	u32 rq_count;
+	u32 dq_base;
+	u32 dq_count;
+	u32 lro_base;
+	u32 lro_count;
+	u32 icd_base;
+	u32 icd_count;
+};
+
+struct be_fw_cfg {
+	struct be_cmd_req_hdr hdr;
+	u32 be_config_number;
+	u32 asic_revision;
+	u32 phys_port;
+	u32 function_mode;
+	struct be_ulp_fw_cfg ulp[2];
+	u32 function_caps;
+} __packed;
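For illustration, here is how a consumer might pull fields out of a solicited CQE using the SOL_* masks defined earlier in this header; the bit positions follow struct amap_sol_cqe (code, cid and wrb_index all live in dw[2]). A standalone sketch, not driver code:

#include <stdint.h>

#define SOL_CODE_MASK      0x0000003F
#define SOL_CID_MASK       0x0000FFC0
#define SOL_WRB_INDEX_MASK 0x00FF0000

struct sol_cqe { uint32_t dw[4]; };

static unsigned int sol_code(const struct sol_cqe *c)
{
	return c->dw[2] & SOL_CODE_MASK;		/* bits 0..5 */
}

static unsigned int sol_cid(const struct sol_cqe *c)
{
	return (c->dw[2] & SOL_CID_MASK) >> 6;		/* bits 6..15 */
}

static unsigned int sol_wrb_index(const struct sol_cqe *c)
{
	return (c->dw[2] & SOL_WRB_INDEX_MASK) >> 16;	/* bits 16..23 */
}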
+
+#define CMD_ISCSI_COMMAND_INVALIDATE 1
+#define ISCSI_OPCODE_SCSI_DATA_OUT 5
+#define OPCODE_COMMON_ISCSI_TCP_CONNECT_AND_OFFLOAD 70
+#define OPCODE_ISCSI_INI_DRIVER_OFFLOAD_SESSION 41
+#define OPCODE_COMMON_MODIFY_EQ_DELAY 41
+#define OPCODE_COMMON_ISCSI_CLEANUP 59
+#define OPCODE_COMMON_TCP_UPLOAD 56
+#define OPCODE_COMMON_ISCSI_ERROR_RECOVERY_INVALIDATE_COMMANDS 1
+/* --- CMD_ISCSI_INVALIDATE_CONNECTION_TYPE --- */
+#define CMD_ISCSI_CONNECTION_INVALIDATE 1
+#define CMD_ISCSI_CONNECTION_ISSUE_TCP_RST 2
+#define OPCODE_ISCSI_INI_DRIVER_INVALIDATE_CONNECTION 42
+
+#define INI_WR_CMD		1	/* Initiator write command */
+#define INI_TMF_CMD		2	/* Initiator TMF command */
+#define INI_NOPOUT_CMD		3	/* Initiator; Send a NOP-OUT */
+#define INI_RD_CMD		5	/* Initiator requesting to send
+					 * a read command
+					 */
+#define TGT_CTX_UPDT_CMD	7	/* Target context update */
+#define TGT_STS_CMD		8	/* Target R2T and other BHS
+					 * where only the status number
+					 * needs to be updated
+					 */
+#define TGT_DATAIN_CMD		9	/* Target Data-Ins in response
+					 * to read command
+					 */
+#define TGT_SOS_PDU		10	/* Target: standalone status
+					 * response
+					 */
+#define TGT_DM_CMD		11	/* Indicates that the bhs
+					 * prepared by the driver should
+					 * not be touched
+					 */
+/* --- CMD_CHUTE_TYPE --- */
+#define CMD_CONNECTION_CHUTE_0	1
+#define CMD_CONNECTION_CHUTE_1	2
+#define CMD_CONNECTION_CHUTE_2	3
+
+#define EQ_MAJOR_CODE_COMPLETION	0
+
+#define CMD_ISCSI_SESSION_DEL_CFG_FROM_FLASH	0
+#define CMD_ISCSI_SESSION_SAVE_CFG_ON_FLASH	1
+
+/* --- CONNECTION_UPLOAD_PARAMS --- */
+/* These parameters are used to define the type of upload desired. */
+#define CONNECTION_UPLOAD_GRACEFUL	1	/* Graceful upload */
+#define CONNECTION_UPLOAD_ABORT_RESET	2	/* Abortive upload with
+						 * reset
+						 */
+#define CONNECTION_UPLOAD_ABORT		3	/* Abortive upload without
+						 * reset
+						 */
+#define CONNECTION_UPLOAD_ABORT_WITH_SEQ 4	/* Abortive upload with reset,
+						 * sequence number by driver */
+
+/* Returns byte size of given field within a structure. */
+
+/* Returns the number of items in the field array. */
+#define BE_NUMBER_OF_FIELD(_type_, _field_)	\
+	(FIELD_SIZEOF(_type_, _field_)/sizeof((((_type_ *)0)->_field_[0])))
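BE_NUMBER_OF_FIELD() is the classic array-length idiom: total field size divided by element size. A standalone demonstration (FIELD_SIZEOF() is redefined here only so the sketch compiles on its own; the pages[26] field mirrors struct be_post_sgl_pages_req, which be_cmd_iscsi_post_sgl_pages() sizes with this macro):

#include <stdio.h>

#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
#define BE_NUMBER_OF_FIELD(_type_, _field_) \
	(FIELD_SIZEOF(_type_, _field_) / sizeof((((_type_ *)0)->_field_[0])))

struct demo {
	unsigned int pages[26];
};

int main(void)
{
	/* Prints 26: sizeof(pages) / sizeof(pages[0]). */
	printf("%zu\n", BE_NUMBER_OF_FIELD(struct demo, pages));
	return 0;
}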
+
+/**
+ * Different types of iSCSI completions to the host driver for both
+ * initiator and target mode of operation.
+ */
+#define SOL_CMD_COMPLETE		1	/* Solicited command completed
+						 * normally
+						 */
+#define SOL_CMD_KILLED_DATA_DIGEST_ERR	2	/* Solicited command got
+						 * invalidated internally due
+						 * to Data Digest error
+						 */
+#define CXN_KILLED_PDU_SIZE_EXCEEDS_DSL	3	/* Connection got invalidated
+						 * internally
+						 * due to a received PDU
+						 * size > DSL
+						 */
+#define CXN_KILLED_BURST_LEN_MISMATCH	4	/* Connection got invalidated
+						 * internally due to received
+						 * PDU sequence size >
+						 * FBL/MBL.
+						 */
+#define CXN_KILLED_AHS_RCVD		5	/* Connection got invalidated
+						 * internally due to a received
+						 * PDU Hdr that has
+						 * AHS */
+#define CXN_KILLED_HDR_DIGEST_ERR	6	/* Connection got invalidated
+						 * internally due to Hdr Digest
+						 * error
+						 */
+#define CXN_KILLED_UNKNOWN_HDR		7	/* Connection got invalidated
+						 * internally
+						 * due to a bad opcode in the
+						 * pdu hdr
+						 */
+#define CXN_KILLED_STALE_ITT_TTT_RCVD	8	/* Connection got invalidated
+						 * internally due to a received
+						 * ITT/TTT that does not belong
+						 * to this Connection
+						 */
+#define CXN_KILLED_INVALID_ITT_TTT_RCVD	9	/* Connection got invalidated
+						 * internally due to received
+						 * ITT/TTT value > Max
+						 * Supported ITTs/TTTs
+						 */
+#define CXN_KILLED_RST_RCVD		10	/* Connection got invalidated
+						 * internally due to an
+						 * incoming TCP RST
+						 */
+#define CXN_KILLED_TIMED_OUT		11	/* Connection got invalidated
+						 * internally due to timeout on
+						 * a tcp segment; 12 retransmit
+						 * attempts failed
+						 */
+#define CXN_KILLED_RST_SENT		12	/* Connection got invalidated
+						 * internally due to TCP RST
+						 * sent by the Tx side
+						 */
+#define CXN_KILLED_FIN_RCVD		13	/* Connection got invalidated
+						 * internally due to an
+						 * incoming TCP FIN.
+						 */
+#define CXN_KILLED_BAD_UNSOL_PDU_RCVD	14	/* Connection got invalidated
+						 * internally due to a bad
+						 * unsolicited PDU. Unsolicited
+						 * PDUs are PDUs with
+						 * ITT=0xffffffff
+						 */
+#define CXN_KILLED_BAD_WRB_INDEX_ERROR	15	/* Connection got invalidated
+						 * internally due to a bad WRB
+						 * index.
+						 */
+#define CXN_KILLED_OVER_RUN_RESIDUAL	16	/* Command got invalidated
+						 * internally due to a received
+						 * command having residual
+						 * overrun bytes.
+						 */
+#define CXN_KILLED_UNDER_RUN_RESIDUAL	17	/* Command got invalidated
+						 * internally due to a received
+						 * command having residual
+						 * underrun bytes.
+						 */
+#define CMD_KILLED_INVALID_STATSN_RCVD	18	/* Command got invalidated
+						 * internally due to a received
+						 * PDU having an invalid
+						 * StatusSN
+						 */
+#define CMD_KILLED_INVALID_R2T_RCVD	19	/* Command got invalidated
+						 * internally due to a received
+						 * R2T with some invalid
+						 * fields in it
+						 */
+#define CMD_CXN_KILLED_LUN_INVALID	20	/* Command got invalidated
+						 * internally due to a received
+						 * PDU having an invalid LUN.
+						 */
+#define CMD_CXN_KILLED_ICD_INVALID	21	/* Command got invalidated
+						 * internally due to the
+						 * corresponding ICD not in a
+						 * valid state
+						 */
+#define CMD_CXN_KILLED_ITT_INVALID	22	/* Command got invalidated due
+						 * to a received PDU having an
+						 * invalid ITT.
+						 */
+#define CMD_CXN_KILLED_SEQ_OUTOFORDER	23	/* Command got invalidated due
+						 * to a received sequence buffer
+						 * offset being out of order.
+						 */
+#define CMD_CXN_KILLED_INVALID_DATASN_RCVD 24	/* Command got invalidated
+						 * internally due to a
+						 * received PDU having an
+						 * invalid DataSN
+						 */
+#define CXN_INVALIDATE_NOTIFY		25	/* Connection invalidation
+						 * completion notify.
+						 */
+#define CXN_INVALIDATE_INDEX_NOTIFY	26	/* Connection invalidation
+						 * completion
+						 * with data PDU index.
+						 */
+#define CMD_INVALIDATED_NOTIFY		27	/* Command invalidation
+						 * completion notify.
+						 */
+#define UNSOL_HDR_NOTIFY		28	/* Unsolicited header notify.*/
+#define UNSOL_DATA_NOTIFY		29	/* Unsolicited data notify.*/
+#define UNSOL_DATA_DIGEST_ERROR_NOTIFY	30	/* Unsolicited data digest
+						 * error notify.
+						 */
+#define DRIVERMSG_NOTIFY		31	/* TCP acknowledge based
+						 * notification.
+						 */
+#define CXN_KILLED_CMND_DATA_NOT_ON_SAME_CONN 32 /* Connection got invalidated
+						 * internally due to command
+						 * and data not being on the
+						 * same connection.
+ */ +#define SOL_CMD_KILLED_DIF_ERR 33 /* Solicited command got + * invalidated internally due + * to DIF error + */ +#define CXN_KILLED_SYN_RCVD 34 /* Connection got invalidated + * internally due to incoming + * TCP SYN + */ +#define CXN_KILLED_IMM_DATA_RCVD 35 /* Connection got invalidated + * internally due to an + * incoming Unsolicited PDU + * that has immediate data on + * the cxn + */ + +void be_wrb_hdr_prepare(struct be_mcc_wrb *wrb, int payload_len, + bool embedded, u8 sge_cnt); + +void be_cmd_hdr_prepare(struct be_cmd_req_hdr *req_hdr, + u8 subsystem, u8 opcode, int cmd_len); + +#endif /* !BEISCSI_CMDS_H */ diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c new file mode 100644 index 000000000000..b23526cb39d7 --- /dev/null +++ b/drivers/scsi/be2iscsi/be_iscsi.c @@ -0,0 +1,646 @@ +/** + * Copyright (C) 2005 - 2009 ServerEngines + * All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. The full GNU General + * Public License is included in this distribution in the file called COPYING. + * + * Written by: Jayamohan Kallickal (jayamohank@serverengines.com) + * + * Contact Information: + * linux-drivers@serverengines.com + * + * ServerEngines + * 209 N. Fair Oaks Ave + * Sunnyvale, CA 94085 + * + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "be_iscsi.h" + +extern struct iscsi_transport beiscsi_iscsi_transport; + +/** + * beiscsi_session_create - creates a new iscsi session + * @cmds_max: max commands supported + * @qdepth: max queue depth supported + * @initial_cmdsn: initial iscsi CMDSN + */ +struct iscsi_cls_session *beiscsi_session_create(struct iscsi_endpoint *ep, + u16 cmds_max, + u16 qdepth, + u32 initial_cmdsn) +{ + struct Scsi_Host *shost; + struct beiscsi_endpoint *beiscsi_ep; + struct iscsi_cls_session *cls_session; + struct iscsi_session *sess; + struct beiscsi_hba *phba; + struct iscsi_task *task; + struct beiscsi_io_task *io_task; + unsigned int max_size, num_cmd; + dma_addr_t bus_add; + u64 pa_addr; + void *vaddr; + + SE_DEBUG(DBG_LVL_8, "In beiscsi_session_create\n"); + + if (!ep) { + SE_DEBUG(DBG_LVL_1, "beiscsi_session_create: invalid ep \n"); + return NULL; + } + beiscsi_ep = ep->dd_data; + phba = beiscsi_ep->phba; + shost = phba->shost; + if (cmds_max > beiscsi_ep->phba->params.wrbs_per_cxn) { + shost_printk(KERN_ERR, shost, "Cannot handle %d cmds." + "Max cmds per session supported is %d. Using %d. 
" + "\n", cmds_max, + beiscsi_ep->phba->params.wrbs_per_cxn, + beiscsi_ep->phba->params.wrbs_per_cxn); + cmds_max = beiscsi_ep->phba->params.wrbs_per_cxn; + } + + cls_session = iscsi_session_setup(&beiscsi_iscsi_transport, + shost, cmds_max, + sizeof(struct beiscsi_io_task), + initial_cmdsn, ISCSI_MAX_TARGET); + if (!cls_session) + return NULL; + sess = cls_session->dd_data; + max_size = ALIGN(sizeof(struct be_cmd_bhs), 64) * sess->cmds_max; + vaddr = pci_alloc_consistent(phba->pcidev, max_size, &bus_add); + pa_addr = (__u64) bus_add; + + for (num_cmd = 0; num_cmd < sess->cmds_max; num_cmd++) { + task = sess->cmds[num_cmd]; + io_task = task->dd_data; + io_task->cmd_bhs = vaddr; + io_task->bhs_pa.u.a64.address = pa_addr; + io_task->alloc_size = max_size; + vaddr += ALIGN(sizeof(struct be_cmd_bhs), 64); + pa_addr += ALIGN(sizeof(struct be_cmd_bhs), 64); + } + return cls_session; +} + +/** + * beiscsi_session_destroy - destroys iscsi session + * @cls_session: pointer to iscsi cls session + * + * Destroys iSCSI session instance and releases + * resources allocated for it. + */ +void beiscsi_session_destroy(struct iscsi_cls_session *cls_session) +{ + struct iscsi_task *task; + struct beiscsi_io_task *io_task; + struct iscsi_session *sess = cls_session->dd_data; + struct Scsi_Host *shost = iscsi_session_to_shost(cls_session); + struct beiscsi_hba *phba = iscsi_host_priv(shost); + + task = sess->cmds[0]; + io_task = task->dd_data; + pci_free_consistent(phba->pcidev, + io_task->alloc_size, + io_task->cmd_bhs, + io_task->bhs_pa.u.a64.address); + iscsi_session_teardown(cls_session); +} + +/** + * beiscsi_conn_create - create an instance of iscsi connection + * @cls_session: ptr to iscsi_cls_session + * @cid: iscsi cid + */ +struct iscsi_cls_conn * +beiscsi_conn_create(struct iscsi_cls_session *cls_session, u32 cid) +{ + struct beiscsi_hba *phba; + struct Scsi_Host *shost; + struct iscsi_cls_conn *cls_conn; + struct beiscsi_conn *beiscsi_conn; + struct iscsi_conn *conn; + + SE_DEBUG(DBG_LVL_8, "In beiscsi_conn_create ,cid" + "from iscsi layer=%d\n", cid); + shost = iscsi_session_to_shost(cls_session); + phba = iscsi_host_priv(shost); + + cls_conn = iscsi_conn_setup(cls_session, sizeof(*beiscsi_conn), cid); + if (!cls_conn) + return NULL; + + conn = cls_conn->dd_data; + beiscsi_conn = conn->dd_data; + beiscsi_conn->ep = NULL; + beiscsi_conn->phba = phba; + beiscsi_conn->conn = conn; + return cls_conn; +} + +/** + * beiscsi_bindconn_cid - Bind the beiscsi_conn with phba connection table + * @beiscsi_conn: The pointer to beiscsi_conn structure + * @phba: The phba instance + * @cid: The cid to free + */ +static int beiscsi_bindconn_cid(struct beiscsi_hba *phba, + struct beiscsi_conn *beiscsi_conn, + unsigned int cid) +{ + if (phba->conn_table[cid]) { + SE_DEBUG(DBG_LVL_1, + "Connection table already occupied. Detected clash\n"); + return -EINVAL; + } else { + SE_DEBUG(DBG_LVL_8, "phba->conn_table[%d]=%p(beiscsi_conn) \n", + cid, beiscsi_conn); + phba->conn_table[cid] = beiscsi_conn; + } + return 0; +} + +/** + * beiscsi_conn_bind - Binds iscsi session/connection with TCP connection + * @cls_session: pointer to iscsi cls session + * @cls_conn: pointer to iscsi cls conn + * @transport_fd: EP handle(64 bit) + * + * This function binds the TCP Conn with iSCSI Connection and Session. 
+ */ +int beiscsi_conn_bind(struct iscsi_cls_session *cls_session, + struct iscsi_cls_conn *cls_conn, + u64 transport_fd, int is_leading) +{ + struct iscsi_conn *conn = cls_conn->dd_data; + struct beiscsi_conn *beiscsi_conn = conn->dd_data; + struct Scsi_Host *shost = + (struct Scsi_Host *)iscsi_session_to_shost(cls_session); + struct beiscsi_hba *phba = (struct beiscsi_hba *)iscsi_host_priv(shost); + struct beiscsi_endpoint *beiscsi_ep; + struct iscsi_endpoint *ep; + + SE_DEBUG(DBG_LVL_8, "In beiscsi_conn_bind\n"); + ep = iscsi_lookup_endpoint(transport_fd); + if (!ep) + return -EINVAL; + + beiscsi_ep = ep->dd_data; + + if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) + return -EINVAL; + + if (beiscsi_ep->phba != phba) { + SE_DEBUG(DBG_LVL_8, + "beiscsi_ep->hba=%p not equal to phba=%p \n", + beiscsi_ep->phba, phba); + return -EEXIST; + } + + beiscsi_conn->beiscsi_conn_cid = beiscsi_ep->ep_cid; + beiscsi_conn->ep = beiscsi_ep; + beiscsi_ep->conn = beiscsi_conn; + SE_DEBUG(DBG_LVL_8, "beiscsi_conn=%p conn=%p ep_cid=%d \n", + beiscsi_conn, conn, beiscsi_ep->ep_cid); + return beiscsi_bindconn_cid(phba, beiscsi_conn, beiscsi_ep->ep_cid); +} + +/** + * beiscsi_conn_get_param - get the iscsi parameter + * @cls_conn: pointer to iscsi cls conn + * @param: parameter type identifier + * @buf: buffer pointer + * + * returns iscsi parameter + */ +int beiscsi_conn_get_param(struct iscsi_cls_conn *cls_conn, + enum iscsi_param param, char *buf) +{ + struct beiscsi_endpoint *beiscsi_ep; + struct iscsi_conn *conn = cls_conn->dd_data; + struct beiscsi_conn *beiscsi_conn = conn->dd_data; + int len = 0; + + beiscsi_ep = beiscsi_conn->ep; + if (!beiscsi_ep) { + SE_DEBUG(DBG_LVL_1, + "In beiscsi_conn_get_param , no beiscsi_ep\n"); + return -1; + } + + switch (param) { + case ISCSI_PARAM_CONN_PORT: + len = sprintf(buf, "%hu\n", beiscsi_ep->dst_tcpport); + break; + case ISCSI_PARAM_CONN_ADDRESS: + if (beiscsi_ep->ip_type == BE2_IPV4) + len = sprintf(buf, "%pI4\n", &beiscsi_ep->dst_addr); + else + len = sprintf(buf, "%pI6\n", &beiscsi_ep->dst6_addr); + break; + default: + return iscsi_conn_get_param(cls_conn, param, buf); + } + return len; +} + +int beiscsi_set_param(struct iscsi_cls_conn *cls_conn, + enum iscsi_param param, char *buf, int buflen) +{ + struct iscsi_conn *conn = cls_conn->dd_data; + struct iscsi_session *session = conn->session; + int ret; + + ret = iscsi_set_param(cls_conn, param, buf, buflen); + if (ret) + return ret; + /* + * If userspace tried to set the value to higher than we can + * support override here. 
+	 */
+	switch (param) {
+	case ISCSI_PARAM_FIRST_BURST:
+		if (session->first_burst > 8192)
+			session->first_burst = 8192;
+		break;
+	case ISCSI_PARAM_MAX_RECV_DLENGTH:
+		if (conn->max_recv_dlength > 65536)
+			conn->max_recv_dlength = 65536;
+		break;
+	case ISCSI_PARAM_MAX_BURST:
+		if (session->max_burst > 262144)
+			session->max_burst = 262144;
+		break;
+	default:
+		return 0;
+	}
+
+	return 0;
+}
+
+/**
+ * beiscsi_get_host_param - get the iscsi parameter
+ * @shost: pointer to scsi_host structure
+ * @param: parameter type identifier
+ * @buf: buffer pointer
+ *
+ * returns host parameter
+ */
+int beiscsi_get_host_param(struct Scsi_Host *shost,
+			   enum iscsi_host_param param, char *buf)
+{
+	struct beiscsi_hba *phba = (struct beiscsi_hba *)iscsi_host_priv(shost);
+	int len = 0;
+
+	switch (param) {
+	case ISCSI_HOST_PARAM_HWADDRESS:
+		be_cmd_get_mac_addr(&phba->ctrl, phba->mac_address);
+		len = sysfs_format_mac(buf, phba->mac_address, ETH_ALEN);
+		break;
+	default:
+		return iscsi_host_get_param(shost, param, buf);
+	}
+	return len;
+}
+
+/**
+ * beiscsi_conn_get_stats - get the iscsi stats
+ * @cls_conn: pointer to iscsi cls conn
+ * @stats: pointer to iscsi_stats structure
+ *
+ * returns iscsi stats
+ */
+void beiscsi_conn_get_stats(struct iscsi_cls_conn *cls_conn,
+			    struct iscsi_stats *stats)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+
+	SE_DEBUG(DBG_LVL_8, "In beiscsi_conn_get_stats\n");
+	stats->txdata_octets = conn->txdata_octets;
+	stats->rxdata_octets = conn->rxdata_octets;
+	stats->dataout_pdus = conn->dataout_pdus_cnt;
+	stats->scsirsp_pdus = conn->scsirsp_pdus_cnt;
+	stats->scsicmd_pdus = conn->scsicmd_pdus_cnt;
+	stats->datain_pdus = conn->datain_pdus_cnt;
+	stats->tmfrsp_pdus = conn->tmfrsp_pdus_cnt;
+	stats->tmfcmd_pdus = conn->tmfcmd_pdus_cnt;
+	stats->r2t_pdus = conn->r2t_pdus_cnt;
+	stats->digest_err = 0;
+	stats->timeout_err = 0;
+	stats->custom_length = 0;
+	strcpy(stats->custom[0].desc, "eh_abort_cnt");
+	stats->custom[0].value = conn->eh_abort_cnt;
+}
+
+/**
+ * beiscsi_set_params_for_offld - fill the parameters for offload
+ * @beiscsi_conn: pointer to beiscsi_conn
+ * @params: pointer to offload_params structure
+ */
+static void beiscsi_set_params_for_offld(struct beiscsi_conn *beiscsi_conn,
+					 struct beiscsi_offload_params *params)
+{
+	struct iscsi_conn *conn = beiscsi_conn->conn;
+	struct iscsi_session *session = conn->session;
+
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, max_burst_length,
+		      params, session->max_burst);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params,
+		      max_send_data_segment_length, params,
+		      conn->max_xmit_dlength);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, first_burst_length,
+		      params, session->first_burst);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, erl, params,
+		      session->erl);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, dde, params,
+		      conn->datadgst_en);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, hde, params,
+		      conn->hdrdgst_en);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, ir2t, params,
+		      session->initial_r2t_en);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, imd, params,
+		      session->imm_data_en);
+	AMAP_SET_BITS(struct amap_beiscsi_offload_params, exp_statsn, params,
+		      (conn->exp_statsn - 1));
+}
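The AMAP_SET_BITS() calls above rely on the "pseudo amap" trick from be_cmds.h: a shadow struct declares one u8 per bit of the real 32-bit word, so offsetof() yields the bit offset and sizeof() the field width. A standalone sketch of the idea; amap_set() is a simplified stand-in for the driver's AMAP_SET_BITS() (which lives in be2iscsi headers not shown in this patch hunk), and it assumes a field never straddles a 32-bit boundary:

#include <stdint.h>
#include <stddef.h>

struct amap_demo {		/* one u8 per BIT of the real 32-bit word */
	uint8_t cidx[13];	/* bits 0..12 */
	uint8_t rsvd[2];	/* bits 13..14 */
	uint8_t valid;		/* bit 15 */
};

#define BIT_OFFSET(s, f) offsetof(struct s, f)		/* byte index == bit index */
#define BIT_WIDTH(s, f)	 sizeof(((struct s *)0)->f)

/* Set one field, assuming it does not straddle a dword boundary. */
static void amap_set(uint32_t *dws, size_t off, size_t width, uint32_t val)
{
	uint32_t mask = (width >= 32) ? 0xffffffffu : ((1u << width) - 1u);

	dws[off / 32] &= ~(mask << (off % 32));
	dws[off / 32] |= (val & mask) << (off % 32);
}

/* Usage: amap_set(ctxt, BIT_OFFSET(amap_demo, valid),
 *                 BIT_WIDTH(amap_demo, valid), 1);   -- sets bit 15 */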
+
+/**
+ * beiscsi_conn_start - offload of session to chip
+ * @cls_conn: pointer to beiscsi_conn
+ */
+int beiscsi_conn_start(struct iscsi_cls_conn *cls_conn)
+{
+	struct iscsi_conn *conn = cls_conn->dd_data;
+	struct beiscsi_conn *beiscsi_conn = conn->dd_data;
+	struct beiscsi_endpoint *beiscsi_ep;
+	struct beiscsi_offload_params params;
+	struct iscsi_session *session = conn->session;
+	struct Scsi_Host *shost = iscsi_session_to_shost(session->cls_session);
+	struct beiscsi_hba *phba = iscsi_host_priv(shost);
+
+	memset(&params, 0, sizeof(struct beiscsi_offload_params));
+	beiscsi_ep = beiscsi_conn->ep;
+	if (!beiscsi_ep)
+		SE_DEBUG(DBG_LVL_1, "In beiscsi_conn_start, no beiscsi_ep\n");
+
+	free_mgmt_sgl_handle(phba, beiscsi_conn->plogin_sgl_handle);
+	beiscsi_conn->login_in_progress = 0;
+	beiscsi_set_params_for_offld(beiscsi_conn, &params);
+	beiscsi_offload_connection(beiscsi_conn, &params);
+	iscsi_conn_start(cls_conn);
+	return 0;
+}
+
+/**
+ * beiscsi_get_cid - Allocate a cid
+ * @phba: The phba instance
+ */
+static int beiscsi_get_cid(struct beiscsi_hba *phba)
+{
+	unsigned short cid = 0xFFFF;
+
+	if (!phba->avlbl_cids)
+		return cid;
+
+	cid = phba->cid_array[phba->cid_alloc++];
+	if (phba->cid_alloc == phba->params.cxns_per_ctrl)
+		phba->cid_alloc = 0;
+	phba->avlbl_cids--;
+	return cid;
+}
+
+/**
+ * beiscsi_open_conn - Ask FW to open a TCP connection
+ * @ep: endpoint to be used
+ * @src_addr: The source IP address
+ * @dst_addr: The Destination IP address
+ *
+ * Asks the FW to open a TCP connection
+ */
+static int beiscsi_open_conn(struct iscsi_endpoint *ep,
+			     struct sockaddr *src_addr,
+			     struct sockaddr *dst_addr, int non_blocking)
+{
+	struct beiscsi_endpoint *beiscsi_ep = ep->dd_data;
+	struct beiscsi_hba *phba = beiscsi_ep->phba;
+	int ret = -1;
+
+	beiscsi_ep->ep_cid = beiscsi_get_cid(phba);
+	if (beiscsi_ep->ep_cid == 0xFFFF) {
+		SE_DEBUG(DBG_LVL_1, "No free cid available\n");
+		return ret;
+	}
+	SE_DEBUG(DBG_LVL_8, "In beiscsi_open_conn, ep_cid=%d\n",
+		 beiscsi_ep->ep_cid);
+	phba->ep_array[beiscsi_ep->ep_cid] = ep;
+	if (beiscsi_ep->ep_cid >
+	    (phba->fw_config.iscsi_cid_start + phba->params.cxns_per_ctrl)) {
+		SE_DEBUG(DBG_LVL_1, "Failed to allocate iscsi cid\n");
+		return ret;
+	}
+
+	beiscsi_ep->cid_vld = 0;
+	return mgmt_open_connection(phba, dst_addr, beiscsi_ep);
+}
+
+/**
+ * beiscsi_put_cid - Free the cid
+ * @phba: The phba for which the cid is being freed
+ * @cid: The cid to free
+ */
+static void beiscsi_put_cid(struct beiscsi_hba *phba, unsigned short cid)
+{
+	phba->avlbl_cids++;
+	phba->cid_array[phba->cid_free++] = cid;
+	if (phba->cid_free == phba->params.cxns_per_ctrl)
+		phba->cid_free = 0;
+}
+
+/**
+ * beiscsi_free_ep - free endpoint
+ * @ep: pointer to iscsi endpoint structure
+ */
+static void beiscsi_free_ep(struct iscsi_endpoint *ep)
+{
+	struct beiscsi_endpoint *beiscsi_ep = ep->dd_data;
+	struct beiscsi_hba *phba = beiscsi_ep->phba;
+
+	beiscsi_put_cid(phba, beiscsi_ep->ep_cid);
+	beiscsi_ep->phba = NULL;
+	iscsi_destroy_endpoint(ep);
+}
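beiscsi_get_cid()/beiscsi_put_cid() above implement a circular free-list: cid_array is a ring of free cids, allocation consumes at cid_alloc, frees refill at cid_free, and avlbl_cids tracks the remainder. A standalone sketch of the same scheme with a fixed pool size (POOL_SIZE is an assumption for the demo; the driver uses params.cxns_per_ctrl):

#include <stdint.h>

#define POOL_SIZE 8

struct cid_pool {
	uint16_t cid_array[POOL_SIZE];
	unsigned int cid_alloc, cid_free, avlbl;
};

static uint16_t pool_get(struct cid_pool *p)
{
	uint16_t cid;

	if (!p->avlbl)
		return 0xFFFF;		/* same "no cid" sentinel as the driver */
	cid = p->cid_array[p->cid_alloc++];
	if (p->cid_alloc == POOL_SIZE)
		p->cid_alloc = 0;	/* wrap the consumer index */
	p->avlbl--;
	return cid;
}

static void pool_put(struct cid_pool *p, uint16_t cid)
{
	p->avlbl++;
	p->cid_array[p->cid_free++] = cid;
	if (p->cid_free == POOL_SIZE)
		p->cid_free = 0;	/* wrap the producer index */
}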
+
+/**
+ * beiscsi_ep_connect - Ask chip to create TCP Conn
+ * @shost: Pointer to scsi_host structure
+ * @dst_addr: The IP address of Target
+ * @non_blocking: blocking or non-blocking call
+ *
+ * This routine first asks the chip to create a connection and then
+ * allocates an EP
+ */
+struct iscsi_endpoint *
+beiscsi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+		   int non_blocking)
+{
+	struct beiscsi_hba *phba;
+	struct beiscsi_endpoint *beiscsi_ep;
+	struct iscsi_endpoint *ep;
+	int ret;
+
+	SE_DEBUG(DBG_LVL_8, "In beiscsi_ep_connect\n");
+	if (shost)
+		phba = iscsi_host_priv(shost);
+	else {
+		ret = -ENXIO;
+		SE_DEBUG(DBG_LVL_1, "shost is NULL\n");
+		return ERR_PTR(ret);
+	}
+	ep = iscsi_create_endpoint(sizeof(struct beiscsi_endpoint));
+	if (!ep) {
+		ret = -ENOMEM;
+		return ERR_PTR(ret);
+	}
+
+	beiscsi_ep = ep->dd_data;
+	beiscsi_ep->phba = phba;
+
+	if (beiscsi_open_conn(ep, NULL, dst_addr, non_blocking)) {
+		SE_DEBUG(DBG_LVL_1, "Failed to allocate iscsi cid\n");
+		ret = -ENOMEM;
+		goto free_ep;
+	}
+
+	return ep;
+
+free_ep:
+	beiscsi_free_ep(ep);
+	return ERR_PTR(ret);
+}
+
+/**
+ * beiscsi_ep_poll - Poll to see if connection is established
+ * @ep: endpoint to be used
+ * @timeout_ms: timeout specified in millisecs
+ *
+ * Poll to see if TCP connection established
+ */
+int beiscsi_ep_poll(struct iscsi_endpoint *ep, int timeout_ms)
+{
+	struct beiscsi_endpoint *beiscsi_ep = ep->dd_data;
+
+	SE_DEBUG(DBG_LVL_8, "In beiscsi_ep_poll\n");
+	if (beiscsi_ep->cid_vld == 1)
+		return 1;
+	else
+		return 0;
+}
+
+/**
+ * beiscsi_close_conn - Upload the connection
+ * @ep: The iscsi endpoint
+ * @flag: The type of connection closure
+ */
+static int beiscsi_close_conn(struct iscsi_endpoint *ep, int flag)
+{
+	int ret = 0;
+	struct beiscsi_endpoint *beiscsi_ep = ep->dd_data;
+	struct beiscsi_hba *phba = beiscsi_ep->phba;
+
+	if (MGMT_STATUS_SUCCESS !=
+	    mgmt_upload_connection(phba, beiscsi_ep->ep_cid,
+				   CONNECTION_UPLOAD_GRACEFUL)) {
+		SE_DEBUG(DBG_LVL_8, "upload failed for cid 0x%x\n",
+			 beiscsi_ep->ep_cid);
+		ret = -1;
+	}
+
+	return ret;
+}
+
+/**
+ * beiscsi_ep_disconnect - Tears down the TCP connection
+ * @ep: endpoint to be used
+ *
+ * Tears down the TCP connection
+ */
+void beiscsi_ep_disconnect(struct iscsi_endpoint *ep)
+{
+	struct beiscsi_conn *beiscsi_conn;
+	struct beiscsi_endpoint *beiscsi_ep;
+	struct beiscsi_hba *phba;
+	int flag = 0;
+
+	beiscsi_ep = ep->dd_data;
+	phba = beiscsi_ep->phba;
+	SE_DEBUG(DBG_LVL_8, "In beiscsi_ep_disconnect\n");
+
+	if (beiscsi_ep->conn) {
+		beiscsi_conn = beiscsi_ep->conn;
+		iscsi_suspend_queue(beiscsi_conn->conn);
+		beiscsi_close_conn(ep, flag);
+	}
+
+	beiscsi_free_ep(ep);
+}
+
+/**
+ * beiscsi_unbind_conn_to_cid - Unbind the beiscsi_conn from phba conn table
+ * @phba: The phba instance
+ * @cid: The cid to free
+ */
+static int beiscsi_unbind_conn_to_cid(struct beiscsi_hba *phba,
+				      unsigned int cid)
+{
+	if (phba->conn_table[cid])
+		phba->conn_table[cid] = NULL;
+	else {
+		SE_DEBUG(DBG_LVL_8, "Connection table not occupied.
\n"); + return -EINVAL; + } + return 0; +} + +/** + * beiscsi_conn_stop - Invalidate and stop the connection + * @cls_conn: pointer to get iscsi_conn + * @flag: The type of connection closure + */ +void beiscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag) +{ + struct iscsi_conn *conn = cls_conn->dd_data; + struct beiscsi_conn *beiscsi_conn = conn->dd_data; + struct beiscsi_endpoint *beiscsi_ep; + struct iscsi_session *session = conn->session; + struct Scsi_Host *shost = iscsi_session_to_shost(session->cls_session); + struct beiscsi_hba *phba = iscsi_host_priv(shost); + unsigned int status; + unsigned short savecfg_flag = CMD_ISCSI_SESSION_SAVE_CFG_ON_FLASH; + + SE_DEBUG(DBG_LVL_8, "In beiscsi_conn_stop\n"); + beiscsi_ep = beiscsi_conn->ep; + if (!beiscsi_ep) { + SE_DEBUG(DBG_LVL_8, "In beiscsi_conn_stop , no beiscsi_ep\n"); + return; + } + status = mgmt_invalidate_connection(phba, beiscsi_ep, + beiscsi_ep->ep_cid, 1, + savecfg_flag); + if (status != MGMT_STATUS_SUCCESS) { + SE_DEBUG(DBG_LVL_1, + "mgmt_invalidate_connection Failed for cid=%d \n", + beiscsi_ep->ep_cid); + } + beiscsi_unbind_conn_to_cid(phba, beiscsi_ep->ep_cid); + iscsi_conn_stop(cls_conn, flag); +} diff --git a/drivers/scsi/be2iscsi/be_iscsi.h b/drivers/scsi/be2iscsi/be_iscsi.h new file mode 100644 index 000000000000..f92ffc5349fb --- /dev/null +++ b/drivers/scsi/be2iscsi/be_iscsi.h @@ -0,0 +1,75 @@ +/** + * Copyright (C) 2005 - 2009 ServerEngines + * All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. The full GNU General + * Public License is included in this distribution in the file called COPYING. + * + * Written by: Jayamohan Kallickal (jayamohank@serverengines.com) + * + * Contact Information: + * linux-drivers@serverengines.com + * + * ServerEngines + * 209 N. 
Fair Oaks Ave + * Sunnyvale, CA 94085 + * + */ + +#ifndef _BE_ISCSI_ +#define _BE_ISCSI_ + +#include "be_main.h" +#include "be_mgmt.h" + +#define BE2_IPV4 0x1 +#define BE2_IPV6 0x10 + +void beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_offload_params *params); + +void beiscsi_offload_iscsi(struct beiscsi_hba *phba, struct iscsi_conn *conn, + struct beiscsi_conn *beiscsi_conn, + unsigned int fw_handle); + +struct iscsi_cls_session *beiscsi_session_create(struct iscsi_endpoint *ep, + uint16_t cmds_max, + uint16_t qdepth, + uint32_t initial_cmdsn); + +void beiscsi_session_destroy(struct iscsi_cls_session *cls_session); + +struct iscsi_cls_conn *beiscsi_conn_create(struct iscsi_cls_session + *cls_session, uint32_t cid); + +int beiscsi_conn_bind(struct iscsi_cls_session *cls_session, + struct iscsi_cls_conn *cls_conn, + uint64_t transport_fd, int is_leading); + +int beiscsi_conn_get_param(struct iscsi_cls_conn *cls_conn, + enum iscsi_param param, char *buf); + +int beiscsi_get_host_param(struct Scsi_Host *shost, + enum iscsi_host_param param, char *buf); + +int beiscsi_set_param(struct iscsi_cls_conn *cls_conn, + enum iscsi_param param, char *buf, int buflen); + +int beiscsi_conn_start(struct iscsi_cls_conn *cls_conn); + +void beiscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag); + +struct iscsi_endpoint *beiscsi_ep_connect(struct Scsi_Host *shost, + struct sockaddr *dst_addr, + int non_blocking); + +int beiscsi_ep_poll(struct iscsi_endpoint *ep, int timeout_ms); + +void beiscsi_ep_disconnect(struct iscsi_endpoint *ep); + +void beiscsi_conn_get_stats(struct iscsi_cls_conn *cls_conn, + struct iscsi_stats *stats); + +#endif diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c new file mode 100644 index 000000000000..d520fe875859 --- /dev/null +++ b/drivers/scsi/be2iscsi/be_main.c @@ -0,0 +1,3393 @@ +/** + * Copyright (C) 2005 - 2009 ServerEngines + * All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. The full GNU General + * Public License is included in this distribution in the file called COPYING. + * + * Written by: Jayamohan Kallickal (jayamohank@serverengines.com) + * + * Contact Information: + * linux-drivers@serverengines.com + * + * ServerEngines + * 209 N. Fair Oaks Ave + * Sunnyvale, CA 94085 + * + */ +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include "be_main.h" +#include "be_iscsi.h" +#include "be_mgmt.h" + +static unsigned int be_iopoll_budget = 10; +static unsigned int be_max_phys_size = 64; +static unsigned int enable_msix; + +MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table); +MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR); +MODULE_AUTHOR("ServerEngines Corporation"); +MODULE_LICENSE("GPL"); +module_param(be_iopoll_budget, int, 0); +module_param(enable_msix, int, 0); +module_param(be_max_phys_size, uint, S_IRUGO); +MODULE_PARM_DESC(be_max_phys_size, "Maximum Size (In Kilobytes) of physically" + "contiguous memory that can be allocated." 
+ "Range is 16 - 128"); + +static int beiscsi_slave_configure(struct scsi_device *sdev) +{ + blk_queue_max_segment_size(sdev->request_queue, 65536); + return 0; +} + +static struct scsi_host_template beiscsi_sht = { + .module = THIS_MODULE, + .name = "ServerEngines 10Gbe open-iscsi Initiator Driver", + .proc_name = DRV_NAME, + .queuecommand = iscsi_queuecommand, + .eh_abort_handler = iscsi_eh_abort, + .change_queue_depth = iscsi_change_queue_depth, + .slave_configure = beiscsi_slave_configure, + .target_alloc = iscsi_target_alloc, + .eh_device_reset_handler = iscsi_eh_device_reset, + .eh_target_reset_handler = iscsi_eh_target_reset, + .sg_tablesize = BEISCSI_SGLIST_ELEMENTS, + .can_queue = BE2_IO_DEPTH, + .this_id = -1, + .max_sectors = BEISCSI_MAX_SECTORS, + .cmd_per_lun = BEISCSI_CMD_PER_LUN, + .use_clustering = ENABLE_CLUSTERING, +}; +static struct scsi_transport_template *beiscsi_scsi_transport; + +/*------------------- PCI Driver operations and data ----------------- */ +static DEFINE_PCI_DEVICE_TABLE(beiscsi_pci_id_table) = { + { PCI_DEVICE(BE_VENDOR_ID, BE_DEVICE_ID1) }, + { PCI_DEVICE(BE_VENDOR_ID, OC_DEVICE_ID1) }, + { PCI_DEVICE(BE_VENDOR_ID, OC_DEVICE_ID2) }, + { 0 } +}; +MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table); + +static struct beiscsi_hba *beiscsi_hba_alloc(struct pci_dev *pcidev) +{ + struct beiscsi_hba *phba; + struct Scsi_Host *shost; + + shost = iscsi_host_alloc(&beiscsi_sht, sizeof(*phba), 0); + if (!shost) { + dev_err(&pcidev->dev, "beiscsi_hba_alloc -" + "iscsi_host_alloc failed \n"); + return NULL; + } + shost->dma_boundary = pcidev->dma_mask; + shost->max_id = BE2_MAX_SESSIONS; + shost->max_channel = 0; + shost->max_cmd_len = BEISCSI_MAX_CMD_LEN; + shost->max_lun = BEISCSI_NUM_MAX_LUN; + shost->transportt = beiscsi_scsi_transport; + + phba = iscsi_host_priv(shost); + memset(phba, 0, sizeof(*phba)); + phba->shost = shost; + phba->pcidev = pci_dev_get(pcidev); + + if (iscsi_host_add(shost, &phba->pcidev->dev)) + goto free_devices; + return phba; + +free_devices: + pci_dev_put(phba->pcidev); + iscsi_host_free(phba->shost); + return NULL; +} + +static void beiscsi_unmap_pci_function(struct beiscsi_hba *phba) +{ + if (phba->csr_va) { + iounmap(phba->csr_va); + phba->csr_va = NULL; + } + if (phba->db_va) { + iounmap(phba->db_va); + phba->db_va = NULL; + } + if (phba->pci_va) { + iounmap(phba->pci_va); + phba->pci_va = NULL; + } +} + +static int beiscsi_map_pci_bars(struct beiscsi_hba *phba, + struct pci_dev *pcidev) +{ + u8 __iomem *addr; + + addr = ioremap_nocache(pci_resource_start(pcidev, 2), + pci_resource_len(pcidev, 2)); + if (addr == NULL) + return -ENOMEM; + phba->ctrl.csr = addr; + phba->csr_va = addr; + phba->csr_pa.u.a64.address = pci_resource_start(pcidev, 2); + + addr = ioremap_nocache(pci_resource_start(pcidev, 4), 128 * 1024); + if (addr == NULL) + goto pci_map_err; + phba->ctrl.db = addr; + phba->db_va = addr; + phba->db_pa.u.a64.address = pci_resource_start(pcidev, 4); + + addr = ioremap_nocache(pci_resource_start(pcidev, 1), + pci_resource_len(pcidev, 1)); + if (addr == NULL) + goto pci_map_err; + phba->ctrl.pcicfg = addr; + phba->pci_va = addr; + phba->pci_pa.u.a64.address = pci_resource_start(pcidev, 1); + return 0; + +pci_map_err: + beiscsi_unmap_pci_function(phba); + return -ENOMEM; +} + +static int beiscsi_enable_pci(struct pci_dev *pcidev) +{ + int ret; + + ret = pci_enable_device(pcidev); + if (ret) { + dev_err(&pcidev->dev, "beiscsi_enable_pci - enable device " + "failed. 
Returning -ENODEV\n"); + return ret; + } + + if (pci_set_consistent_dma_mask(pcidev, DMA_BIT_MASK(64))) { + ret = pci_set_consistent_dma_mask(pcidev, DMA_BIT_MASK(32)); + if (ret) { + dev_err(&pcidev->dev, "Could not set PCI DMA Mask\n"); + pci_disable_device(pcidev); + return ret; + } + } + return 0; +} + +static int be_ctrl_init(struct beiscsi_hba *phba, struct pci_dev *pdev) +{ + struct be_ctrl_info *ctrl = &phba->ctrl; + struct be_dma_mem *mbox_mem_alloc = &ctrl->mbox_mem_alloced; + struct be_dma_mem *mbox_mem_align = &ctrl->mbox_mem; + int status = 0; + + ctrl->pdev = pdev; + status = beiscsi_map_pci_bars(phba, pdev); + if (status) + return status; + + mbox_mem_alloc->size = sizeof(struct be_mcc_mailbox) + 16; + mbox_mem_alloc->va = pci_alloc_consistent(pdev, + mbox_mem_alloc->size, + &mbox_mem_alloc->dma); + if (!mbox_mem_alloc->va) { + beiscsi_unmap_pci_function(phba); + status = -ENOMEM; + return status; + } + + mbox_mem_align->size = sizeof(struct be_mcc_mailbox); + mbox_mem_align->va = PTR_ALIGN(mbox_mem_alloc->va, 16); + mbox_mem_align->dma = PTR_ALIGN(mbox_mem_alloc->dma, 16); + memset(mbox_mem_align->va, 0, sizeof(struct be_mcc_mailbox)); + spin_lock_init(&ctrl->mbox_lock); + return status; +} + +static void beiscsi_get_params(struct beiscsi_hba *phba) +{ + phba->params.ios_per_ctrl = BE2_IO_DEPTH; + phba->params.cxns_per_ctrl = BE2_MAX_SESSIONS; + phba->params.asyncpdus_per_ctrl = BE2_ASYNCPDUS; + phba->params.icds_per_ctrl = BE2_MAX_ICDS / 2; + phba->params.num_sge_per_io = BE2_SGE; + phba->params.defpdu_hdr_sz = BE2_DEFPDU_HDR_SZ; + phba->params.defpdu_data_sz = BE2_DEFPDU_DATA_SZ; + phba->params.eq_timer = 64; + phba->params.num_eq_entries = + (((BE2_CMDS_PER_CXN * 2 + BE2_LOGOUTS + BE2_TMFS + BE2_ASYNCPDUS) / + 512) + 1) * 512; + phba->params.num_eq_entries = (phba->params.num_eq_entries < 1024) + ? 1024 : phba->params.num_eq_entries; + SE_DEBUG(DBG_LVL_8, "phba->params.num_eq_entries=%d \n", + phba->params.num_eq_entries); + phba->params.num_cq_entries = + (((BE2_CMDS_PER_CXN * 2 + BE2_LOGOUTS + BE2_TMFS + BE2_ASYNCPDUS) / + 512) + 1) * 512; + SE_DEBUG(DBG_LVL_8, + "phba->params.num_cq_entries=%d BE2_CMDS_PER_CXN=%d" + "BE2_LOGOUTS=%d BE2_TMFS=%d BE2_ASYNCPDUS=%d \n", + phba->params.num_cq_entries, BE2_CMDS_PER_CXN, + BE2_LOGOUTS, BE2_TMFS, BE2_ASYNCPDUS); + phba->params.wrbs_per_cxn = 256; +} + +static void hwi_ring_eq_db(struct beiscsi_hba *phba, + unsigned int id, unsigned int clr_interrupt, + unsigned int num_processed, + unsigned char rearm, unsigned char event) +{ + u32 val = 0; + val |= id & DB_EQ_RING_ID_MASK; + if (rearm) + val |= 1 << DB_EQ_REARM_SHIFT; + if (clr_interrupt) + val |= 1 << DB_EQ_CLR_SHIFT; + if (event) + val |= 1 << DB_EQ_EVNT_SHIFT; + val |= num_processed << DB_EQ_NUM_POPPED_SHIFT; + iowrite32(val, phba->db_va + DB_EQ_OFFSET); +} + +/** + * be_isr - The isr routine of the driver. 
+ * @irq: Not used
+ * @dev_id: Pointer to host adapter structure
+ */
+static irqreturn_t be_isr(int irq, void *dev_id)
+{
+	struct beiscsi_hba *phba;
+	struct hwi_controller *phwi_ctrlr;
+	struct hwi_context_memory *phwi_context;
+	struct be_eq_entry *eqe = NULL;
+	struct be_queue_info *eq;
+	struct be_queue_info *cq;
+	unsigned long flags, index;
+	unsigned int num_eq_processed;
+	struct be_ctrl_info *ctrl;
+	int isr;
+
+	phba = dev_id;
+	if (!enable_msix) {
+		ctrl = &phba->ctrl;
+		isr = ioread32(ctrl->csr + CEV_ISR0_OFFSET +
+			       (PCI_FUNC(ctrl->pdev->devfn) * CEV_ISR_SIZE));
+		if (!isr)
+			return IRQ_NONE;
+	}
+
+	phwi_ctrlr = phba->phwi_ctrlr;
+	phwi_context = phwi_ctrlr->phwi_ctxt;
+	eq = &phwi_context->be_eq.q;
+	cq = &phwi_context->be_cq;
+	index = 0;
+	eqe = queue_tail_node(eq);
+	if (!eqe)
+		SE_DEBUG(DBG_LVL_1, "eqe is NULL\n");
+
+	num_eq_processed = 0;
+	if (blk_iopoll_enabled) {
+		while (eqe->dw[offsetof(struct amap_eq_entry, valid) / 32]
+		       & EQE_VALID_MASK) {
+			if (!blk_iopoll_sched_prep(&phba->iopoll))
+				blk_iopoll_sched(&phba->iopoll);
+
+			AMAP_SET_BITS(struct amap_eq_entry, valid, eqe, 0);
+			queue_tail_inc(eq);
+			eqe = queue_tail_node(eq);
+			num_eq_processed++;
+			SE_DEBUG(DBG_LVL_8, "Valid EQE\n");
+		}
+		if (num_eq_processed) {
+			hwi_ring_eq_db(phba, eq->id, 0, num_eq_processed, 0, 1);
+			return IRQ_HANDLED;
+		} else
+			return IRQ_NONE;
+	} else {
+		while (eqe->dw[offsetof(struct amap_eq_entry, valid) / 32]
+		       & EQE_VALID_MASK) {
+
+			if (((eqe->dw[offsetof(struct amap_eq_entry,
+					       resource_id) / 32] &
+			      EQE_RESID_MASK) >> 16) != cq->id) {
+				spin_lock_irqsave(&phba->isr_lock, flags);
+				phba->todo_mcc_cq = 1;
+				spin_unlock_irqrestore(&phba->isr_lock, flags);
+			} else {
+				spin_lock_irqsave(&phba->isr_lock, flags);
+				phba->todo_cq = 1;
+				spin_unlock_irqrestore(&phba->isr_lock, flags);
+			}
+			AMAP_SET_BITS(struct amap_eq_entry, valid, eqe, 0);
+			queue_tail_inc(eq);
+			eqe = queue_tail_node(eq);
+			num_eq_processed++;
+		}
+		if (phba->todo_cq || phba->todo_mcc_cq)
+			queue_work(phba->wq, &phba->work_cqs);
+
+		if (num_eq_processed) {
+			hwi_ring_eq_db(phba, eq->id, 0, num_eq_processed, 1, 1);
+			return IRQ_HANDLED;
+		} else
+			return IRQ_NONE;
+	}
+}
+
+static int beiscsi_init_irqs(struct beiscsi_hba *phba)
+{
+	struct pci_dev *pcidev = phba->pcidev;
+	int ret;
+
+	ret = request_irq(pcidev->irq, be_isr, IRQF_SHARED, "beiscsi", phba);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost, "beiscsi_init_irqs - "
+			     "Failed to register irq\n");
+		return ret;
+	}
+	return 0;
+}
+
+static void hwi_ring_cq_db(struct beiscsi_hba *phba,
+			   unsigned int id, unsigned int num_processed,
+			   unsigned char rearm, unsigned char event)
+{
+	u32 val = 0;
+	val |= id & DB_CQ_RING_ID_MASK;
+	if (rearm)
+		val |= 1 << DB_CQ_REARM_SHIFT;
+	val |= num_processed << DB_CQ_NUM_POPPED_SHIFT;
+	iowrite32(val, phba->db_va + DB_CQ_OFFSET);
+}
+
+/*
+ * async pdus include
+ * a. unsolicited NOP-In (target initiated NOP-In)
+ * b. Async Messages
+ * c. Reject PDU
+ * d. Login response
+ * These headers arrive unprocessed by the EP firmware and the iSCSI
+ * layer processes them.
+ */
+static unsigned int
+beiscsi_process_async_pdu(struct beiscsi_conn *beiscsi_conn,
+			  struct beiscsi_hba *phba,
+			  unsigned short cid,
+			  struct pdu_base *ppdu,
+			  unsigned long pdu_len,
+			  void *pbuffer, unsigned long buf_len)
+{
+	struct iscsi_conn *conn = beiscsi_conn->conn;
+	struct iscsi_session *session = conn->session;
+
+	switch (ppdu->dw[offsetof(struct amap_pdu_base, opcode) / 32] &
+		PDUBASE_OPCODE_MASK) {
+	case ISCSI_OP_NOOP_IN:
+		pbuffer = NULL;
+		buf_len = 0;
+		break;
+	case ISCSI_OP_ASYNC_EVENT:
+		break;
+	case ISCSI_OP_REJECT:
+		WARN_ON(!pbuffer);
+		WARN_ON(buf_len != 48);
+		SE_DEBUG(DBG_LVL_1, "In ISCSI_OP_REJECT\n");
+		break;
+	case ISCSI_OP_LOGIN_RSP:
+		break;
+	default:
+		shost_printk(KERN_WARNING, phba->shost,
+			     "Unrecognized opcode 0x%x in async msg\n",
+			     (ppdu->
+			      dw[offsetof(struct amap_pdu_base, opcode) / 32]
+			      & PDUBASE_OPCODE_MASK));
+		return 1;
+	}
+
+	spin_lock_bh(&session->lock);
+	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)ppdu, pbuffer, buf_len);
+	spin_unlock_bh(&session->lock);
+	return 0;
+}
+
+static struct sgl_handle *alloc_io_sgl_handle(struct beiscsi_hba *phba)
+{
+	struct sgl_handle *psgl_handle;
+
+	if (phba->io_sgl_hndl_avbl) {
+		SE_DEBUG(DBG_LVL_8,
+			 "In alloc_io_sgl_handle, io_sgl_alloc_index=%d\n",
+			 phba->io_sgl_alloc_index);
+		psgl_handle = phba->io_sgl_hndl_base[phba->
+						     io_sgl_alloc_index];
+		phba->io_sgl_hndl_base[phba->io_sgl_alloc_index] = NULL;
+		phba->io_sgl_hndl_avbl--;
+		if (phba->io_sgl_alloc_index == (phba->params.ios_per_ctrl - 1))
+			phba->io_sgl_alloc_index = 0;
+		else
+			phba->io_sgl_alloc_index++;
+	} else
+		psgl_handle = NULL;
+	return psgl_handle;
+}
+
+static void
+free_io_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle)
+{
+	SE_DEBUG(DBG_LVL_8, "In free_io_sgl_handle, io_sgl_free_index=%d\n",
+		 phba->io_sgl_free_index);
+	if (phba->io_sgl_hndl_base[phba->io_sgl_free_index]) {
+		/*
+		 * This can happen if clean_task is called on a task that
+		 * failed in xmit_task or alloc_pdu.
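+		 * The occupied slot is left untouched and the stale handle
+		 * is simply dropped, so an entry already parked at
+		 * io_sgl_free_index is never overwritten.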
+ */ + SE_DEBUG(DBG_LVL_8, + "Double Free in IO SGL io_sgl_free_index=%d," + "value there=%p \n", phba->io_sgl_free_index, + phba->io_sgl_hndl_base[phba->io_sgl_free_index]); + return; + } + phba->io_sgl_hndl_base[phba->io_sgl_free_index] = psgl_handle; + phba->io_sgl_hndl_avbl++; + if (phba->io_sgl_free_index == (phba->params.ios_per_ctrl - 1)) + phba->io_sgl_free_index = 0; + else + phba->io_sgl_free_index++; +} + +/** + * alloc_wrb_handle - To allocate a wrb handle + * @phba: The hba pointer + * @cid: The cid to use for allocation + * @index: index allocation and wrb index + * + * This happens under session_lock until submission to chip + */ +struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid, + int index) +{ + struct hwi_wrb_context *pwrb_context; + struct hwi_controller *phwi_ctrlr; + struct wrb_handle *pwrb_handle; + + phwi_ctrlr = phba->phwi_ctrlr; + pwrb_context = &phwi_ctrlr->wrb_context[cid]; + pwrb_handle = pwrb_context->pwrb_handle_base[index]; + pwrb_handle->wrb_index = index; + pwrb_handle->nxt_wrb_index = index; + return pwrb_handle; +} + +/** + * free_wrb_handle - To free the wrb handle back to pool + * @phba: The hba pointer + * @pwrb_context: The context to free from + * @pwrb_handle: The wrb_handle to free + * + * This happens under session_lock until submission to chip + */ +static void +free_wrb_handle(struct beiscsi_hba *phba, struct hwi_wrb_context *pwrb_context, + struct wrb_handle *pwrb_handle) +{ + SE_DEBUG(DBG_LVL_8, + "FREE WRB: pwrb_handle=%p free_index=%d=0x%x" + "wrb_handles_available=%d \n", + pwrb_handle, pwrb_context->free_index, + pwrb_context->free_index, pwrb_context->wrb_handles_available); +} + +static struct sgl_handle *alloc_mgmt_sgl_handle(struct beiscsi_hba *phba) +{ + struct sgl_handle *psgl_handle; + + if (phba->eh_sgl_hndl_avbl) { + psgl_handle = phba->eh_sgl_hndl_base[phba->eh_sgl_alloc_index]; + phba->eh_sgl_hndl_base[phba->eh_sgl_alloc_index] = NULL; + SE_DEBUG(DBG_LVL_8, "mgmt_sgl_alloc_index=%d=0x%x \n", + phba->eh_sgl_alloc_index, phba->eh_sgl_alloc_index); + phba->eh_sgl_hndl_avbl--; + if (phba->eh_sgl_alloc_index == + (phba->params.icds_per_ctrl - phba->params.ios_per_ctrl - + 1)) + phba->eh_sgl_alloc_index = 0; + else + phba->eh_sgl_alloc_index++; + } else + psgl_handle = NULL; + return psgl_handle; +} + +void +free_mgmt_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle) +{ + + if (phba->eh_sgl_hndl_base[phba->eh_sgl_free_index]) { + /* + * this can happen if clean_task is called on a task that + * failed in xmit_task or alloc_pdu. 
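+		 * As in free_io_sgl_handle() above, the stale handle is
+		 * dropped rather than overwriting the entry already parked
+		 * at eh_sgl_free_index.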
+ */ + SE_DEBUG(DBG_LVL_8, + "Double Free in eh SGL ,eh_sgl_free_index=%d \n", + phba->eh_sgl_free_index); + return; + } + phba->eh_sgl_hndl_base[phba->eh_sgl_free_index] = psgl_handle; + phba->eh_sgl_hndl_avbl++; + if (phba->eh_sgl_free_index == + (phba->params.icds_per_ctrl - phba->params.ios_per_ctrl - 1)) + phba->eh_sgl_free_index = 0; + else + phba->eh_sgl_free_index++; +} + +static void +be_complete_io(struct beiscsi_conn *beiscsi_conn, + struct iscsi_task *task, struct sol_cqe *psol) +{ + struct beiscsi_io_task *io_task = task->dd_data; + struct be_status_bhs *sts_bhs = + (struct be_status_bhs *)io_task->cmd_bhs; + struct iscsi_conn *conn = beiscsi_conn->conn; + unsigned int sense_len; + unsigned char *sense; + u32 resid = 0, exp_cmdsn, max_cmdsn; + u8 rsp, status, flags; + + exp_cmdsn = be32_to_cpu(psol-> + dw[offsetof(struct amap_sol_cqe, i_exp_cmd_sn) / 32] + & SOL_EXP_CMD_SN_MASK); + max_cmdsn = be32_to_cpu((psol-> + dw[offsetof(struct amap_sol_cqe, i_exp_cmd_sn) / 32] + & SOL_EXP_CMD_SN_MASK) + + ((psol->dw[offsetof(struct amap_sol_cqe, i_cmd_wnd) + / 32] & SOL_CMD_WND_MASK) >> 24) - 1); + rsp = ((psol->dw[offsetof(struct amap_sol_cqe, i_resp) / 32] + & SOL_RESP_MASK) >> 16); + status = ((psol->dw[offsetof(struct amap_sol_cqe, i_sts) / 32] + & SOL_STS_MASK) >> 8); + flags = ((psol->dw[offsetof(struct amap_sol_cqe, i_flags) / 32] + & SOL_FLAGS_MASK) >> 24) | 0x80; + + task->sc->result = (DID_OK << 16) | status; + if (rsp != ISCSI_STATUS_CMD_COMPLETED) { + task->sc->result = DID_ERROR << 16; + goto unmap; + } + + /* bidi not initially supported */ + if (flags & (ISCSI_FLAG_CMD_UNDERFLOW | ISCSI_FLAG_CMD_OVERFLOW)) { + resid = (psol->dw[offsetof(struct amap_sol_cqe, i_res_cnt) / + 32] & SOL_RES_CNT_MASK); + + if (!status && (flags & ISCSI_FLAG_CMD_OVERFLOW)) + task->sc->result = DID_ERROR << 16; + + if (flags & ISCSI_FLAG_CMD_UNDERFLOW) { + scsi_set_resid(task->sc, resid); + if (!status && (scsi_bufflen(task->sc) - resid < + task->sc->underflow)) + task->sc->result = DID_ERROR << 16; + } + } + + if (status == SAM_STAT_CHECK_CONDITION) { + sense = sts_bhs->sense_info + sizeof(unsigned short); + sense_len = + cpu_to_be16((unsigned short)(sts_bhs->sense_info[0])); + memcpy(task->sc->sense_buffer, sense, + min_t(u16, sense_len, SCSI_SENSE_BUFFERSIZE)); + } + if (io_task->cmd_bhs->iscsi_hdr.flags & ISCSI_FLAG_CMD_READ) { + if (psol->dw[offsetof(struct amap_sol_cqe, i_res_cnt) / 32] + & SOL_RES_CNT_MASK) + conn->rxdata_octets += (psol-> + dw[offsetof(struct amap_sol_cqe, i_res_cnt) / 32] + & SOL_RES_CNT_MASK); + } +unmap: + scsi_dma_unmap(io_task->scsi_cmnd); + iscsi_complete_scsi_task(task, exp_cmdsn, max_cmdsn); +} + +static void +be_complete_logout(struct beiscsi_conn *beiscsi_conn, + struct iscsi_task *task, struct sol_cqe *psol) +{ + struct iscsi_logout_rsp *hdr; + struct iscsi_conn *conn = beiscsi_conn->conn; + + hdr = (struct iscsi_logout_rsp *)task->hdr; + hdr->t2wait = 5; + hdr->t2retain = 0; + hdr->flags = ((psol->dw[offsetof(struct amap_sol_cqe, i_flags) / 32] + & SOL_FLAGS_MASK) >> 24) | 0x80; + hdr->response = (psol->dw[offsetof(struct amap_sol_cqe, i_resp) / + 32] & SOL_RESP_MASK); + hdr->exp_cmdsn = cpu_to_be32(psol-> + dw[offsetof(struct amap_sol_cqe, i_exp_cmd_sn) / 32] + & SOL_EXP_CMD_SN_MASK); + hdr->max_cmdsn = be32_to_cpu((psol-> + dw[offsetof(struct amap_sol_cqe, i_exp_cmd_sn) / 32] + & SOL_EXP_CMD_SN_MASK) + + ((psol->dw[offsetof(struct amap_sol_cqe, i_cmd_wnd) + / 32] & SOL_CMD_WND_MASK) >> 24) - 1); + hdr->hlength = 0; + + __iscsi_complete_pdu(conn, (struct 
iscsi_hdr *)hdr, NULL, 0); +} + +static void +be_complete_tmf(struct beiscsi_conn *beiscsi_conn, + struct iscsi_task *task, struct sol_cqe *psol) +{ + struct iscsi_tm_rsp *hdr; + struct iscsi_conn *conn = beiscsi_conn->conn; + + hdr = (struct iscsi_tm_rsp *)task->hdr; + hdr->flags = ((psol->dw[offsetof(struct amap_sol_cqe, i_flags) / 32] + & SOL_FLAGS_MASK) >> 24) | 0x80; + hdr->response = (psol->dw[offsetof(struct amap_sol_cqe, i_resp) / + 32] & SOL_RESP_MASK); + hdr->exp_cmdsn = cpu_to_be32(psol->dw[offsetof(struct amap_sol_cqe, + i_exp_cmd_sn) / 32] & SOL_EXP_CMD_SN_MASK); + hdr->max_cmdsn = be32_to_cpu((psol->dw[offsetof(struct amap_sol_cqe, + i_exp_cmd_sn) / 32] & SOL_EXP_CMD_SN_MASK) + + ((psol->dw[offsetof(struct amap_sol_cqe, i_cmd_wnd) + / 32] & SOL_CMD_WND_MASK) >> 24) - 1); + __iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr, NULL, 0); +} + +static void +hwi_complete_drvr_msgs(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_hba *phba, struct sol_cqe *psol) +{ + struct hwi_wrb_context *pwrb_context; + struct wrb_handle *pwrb_handle; + struct hwi_controller *phwi_ctrlr; + struct iscsi_conn *conn = beiscsi_conn->conn; + struct iscsi_session *session = conn->session; + + phwi_ctrlr = phba->phwi_ctrlr; + pwrb_context = &phwi_ctrlr->wrb_context[((psol-> + dw[offsetof(struct amap_sol_cqe, cid) / 32] & + SOL_CID_MASK) >> 6)]; + pwrb_handle = pwrb_context->pwrb_handle_basestd[((psol-> + dw[offsetof(struct amap_sol_cqe, wrb_index) / + 32] & SOL_WRB_INDEX_MASK) >> 16)]; + spin_lock_bh(&session->lock); + free_wrb_handle(phba, pwrb_context, pwrb_handle); + spin_unlock_bh(&session->lock); +} + +static void +be_complete_nopin_resp(struct beiscsi_conn *beiscsi_conn, + struct iscsi_task *task, struct sol_cqe *psol) +{ + struct iscsi_nopin *hdr; + struct iscsi_conn *conn = beiscsi_conn->conn; + + hdr = (struct iscsi_nopin *)task->hdr; + hdr->flags = ((psol->dw[offsetof(struct amap_sol_cqe, i_flags) / 32] + & SOL_FLAGS_MASK) >> 24) | 0x80; + hdr->exp_cmdsn = cpu_to_be32(psol->dw[offsetof(struct amap_sol_cqe, + i_exp_cmd_sn) / 32] & SOL_EXP_CMD_SN_MASK); + hdr->max_cmdsn = be32_to_cpu((psol->dw[offsetof(struct amap_sol_cqe, + i_exp_cmd_sn) / 32] & SOL_EXP_CMD_SN_MASK) + + ((psol->dw[offsetof(struct amap_sol_cqe, i_cmd_wnd) + / 32] & SOL_CMD_WND_MASK) >> 24) - 1); + hdr->opcode = ISCSI_OP_NOOP_IN; + __iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr, NULL, 0); +} + +static void hwi_complete_cmd(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_hba *phba, struct sol_cqe *psol) +{ + struct hwi_wrb_context *pwrb_context; + struct wrb_handle *pwrb_handle; + struct iscsi_wrb *pwrb = NULL; + struct hwi_controller *phwi_ctrlr; + struct iscsi_task *task; + struct beiscsi_io_task *io_task; + struct iscsi_conn *conn = beiscsi_conn->conn; + struct iscsi_session *session = conn->session; + + phwi_ctrlr = phba->phwi_ctrlr; + + pwrb_context = &phwi_ctrlr-> + wrb_context[((psol->dw[offsetof(struct amap_sol_cqe, cid) / 32] + & SOL_CID_MASK) >> 6)]; + pwrb_handle = pwrb_context->pwrb_handle_basestd[((psol-> + dw[offsetof(struct amap_sol_cqe, wrb_index) / + 32] & SOL_WRB_INDEX_MASK) >> 16)]; + + task = pwrb_handle->pio_handle; + io_task = task->dd_data; + spin_lock_bh(&session->lock); + pwrb = pwrb_handle->pwrb; + switch ((pwrb->dw[offsetof(struct amap_iscsi_wrb, type) / 32] & + WRB_TYPE_MASK) >> 28) { + case HWH_TYPE_IO: + case HWH_TYPE_IO_RD: + if ((task->hdr->opcode & ISCSI_OPCODE_MASK) == + ISCSI_OP_NOOP_OUT) { + be_complete_nopin_resp(beiscsi_conn, task, psol); + } else + be_complete_io(beiscsi_conn, 
task, psol); + break; + + case HWH_TYPE_LOGOUT: + be_complete_logout(beiscsi_conn, task, psol); + break; + + case HWH_TYPE_LOGIN: + SE_DEBUG(DBG_LVL_1, + "\t\t No HWH_TYPE_LOGIN Expected in hwi_complete_cmd" + "- Solicited path \n"); + break; + + case HWH_TYPE_TMF: + be_complete_tmf(beiscsi_conn, task, psol); + break; + + case HWH_TYPE_NOP: + be_complete_nopin_resp(beiscsi_conn, task, psol); + break; + + default: + shost_printk(KERN_WARNING, phba->shost, + "wrb_index 0x%x CID 0x%x\n", + ((psol->dw[offsetof(struct amap_iscsi_wrb, type) / + 32] & SOL_WRB_INDEX_MASK) >> 16), + ((psol->dw[offsetof(struct amap_sol_cqe, cid) / 32] + & SOL_CID_MASK) >> 6)); + break; + } + + spin_unlock_bh(&session->lock); +} + +static struct list_head *hwi_get_async_busy_list(struct hwi_async_pdu_context + *pasync_ctx, unsigned int is_header, + unsigned int host_write_ptr) +{ + if (is_header) + return &pasync_ctx->async_entry[host_write_ptr]. + header_busy_list; + else + return &pasync_ctx->async_entry[host_write_ptr].data_busy_list; +} + +static struct async_pdu_handle * +hwi_get_async_handle(struct beiscsi_hba *phba, + struct beiscsi_conn *beiscsi_conn, + struct hwi_async_pdu_context *pasync_ctx, + struct i_t_dpdu_cqe *pdpdu_cqe, unsigned int *pcq_index) +{ + struct be_bus_address phys_addr; + struct list_head *pbusy_list; + struct async_pdu_handle *pasync_handle = NULL; + int buffer_len = 0; + unsigned char buffer_index = -1; + unsigned char is_header = 0; + + phys_addr.u.a32.address_lo = + pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe, db_addr_lo) / 32] - + ((pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe, dpl) / 32] + & PDUCQE_DPL_MASK) >> 16); + phys_addr.u.a32.address_hi = + pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe, db_addr_hi) / 32]; + + phys_addr.u.a64.address = + *((unsigned long long *)(&phys_addr.u.a64.address)); + + switch (pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe, code) / 32] + & PDUCQE_CODE_MASK) { + case UNSOL_HDR_NOTIFY: + is_header = 1; + + pbusy_list = hwi_get_async_busy_list(pasync_ctx, 1, + (pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe, + index) / 32] & PDUCQE_INDEX_MASK)); + + buffer_len = (unsigned int)(phys_addr.u.a64.address - + pasync_ctx->async_header.pa_base.u.a64.address); + + buffer_index = buffer_len / + pasync_ctx->async_header.buffer_size; + + break; + case UNSOL_DATA_NOTIFY: + pbusy_list = hwi_get_async_busy_list(pasync_ctx, 0, (pdpdu_cqe-> + dw[offsetof(struct amap_i_t_dpdu_cqe, + index) / 32] & PDUCQE_INDEX_MASK)); + buffer_len = (unsigned long)(phys_addr.u.a64.address - + pasync_ctx->async_data.pa_base.u. 
+ a64.address); + buffer_index = buffer_len / pasync_ctx->async_data.buffer_size; + break; + default: + pbusy_list = NULL; + shost_printk(KERN_WARNING, phba->shost, + "Unexpected code=%d \n", + pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe, + code) / 32] & PDUCQE_CODE_MASK); + return NULL; + } + + WARN_ON(!(buffer_index <= pasync_ctx->async_data.num_entries)); + WARN_ON(list_empty(pbusy_list)); + list_for_each_entry(pasync_handle, pbusy_list, link) { + WARN_ON(pasync_handle->consumed); + if (pasync_handle->index == buffer_index) + break; + } + + WARN_ON(!pasync_handle); + + pasync_handle->cri = (unsigned short)beiscsi_conn->beiscsi_conn_cid; + pasync_handle->is_header = is_header; + pasync_handle->buffer_len = ((pdpdu_cqe-> + dw[offsetof(struct amap_i_t_dpdu_cqe, dpl) / 32] + & PDUCQE_DPL_MASK) >> 16); + + *pcq_index = (pdpdu_cqe->dw[offsetof(struct amap_i_t_dpdu_cqe, + index) / 32] & PDUCQE_INDEX_MASK); + return pasync_handle; +} + +static unsigned int +hwi_update_async_writables(struct hwi_async_pdu_context *pasync_ctx, + unsigned int is_header, unsigned int cq_index) +{ + struct list_head *pbusy_list; + struct async_pdu_handle *pasync_handle; + unsigned int num_entries, writables = 0; + unsigned int *pep_read_ptr, *pwritables; + + + if (is_header) { + pep_read_ptr = &pasync_ctx->async_header.ep_read_ptr; + pwritables = &pasync_ctx->async_header.writables; + num_entries = pasync_ctx->async_header.num_entries; + } else { + pep_read_ptr = &pasync_ctx->async_data.ep_read_ptr; + pwritables = &pasync_ctx->async_data.writables; + num_entries = pasync_ctx->async_data.num_entries; + } + + while ((*pep_read_ptr) != cq_index) { + (*pep_read_ptr)++; + *pep_read_ptr = (*pep_read_ptr) % num_entries; + + pbusy_list = hwi_get_async_busy_list(pasync_ctx, is_header, + *pep_read_ptr); + if (writables == 0) + WARN_ON(list_empty(pbusy_list)); + + if (!list_empty(pbusy_list)) { + pasync_handle = list_entry(pbusy_list->next, + struct async_pdu_handle, + link); + WARN_ON(!pasync_handle); + pasync_handle->consumed = 1; + } + + writables++; + } + + if (!writables) { + SE_DEBUG(DBG_LVL_1, + "Duplicate notification received - index 0x%x!!\n", + cq_index); + WARN_ON(1); + } + + *pwritables = *pwritables + writables; + return 0; +} + +static unsigned int hwi_free_async_msg(struct beiscsi_hba *phba, + unsigned int cri) +{ + struct hwi_controller *phwi_ctrlr; + struct hwi_async_pdu_context *pasync_ctx; + struct async_pdu_handle *pasync_handle, *tmp_handle; + struct list_head *plist; + unsigned int i = 0; + + phwi_ctrlr = phba->phwi_ctrlr; + pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr); + + plist = &pasync_ctx->async_entry[cri].wait_queue.list; + + list_for_each_entry_safe(pasync_handle, tmp_handle, plist, link) { + list_del(&pasync_handle->link); + + if (i == 0) { + list_add_tail(&pasync_handle->link, + &pasync_ctx->async_header.free_list); + pasync_ctx->async_header.free_entries++; + i++; + } else { + list_add_tail(&pasync_handle->link, + &pasync_ctx->async_data.free_list); + pasync_ctx->async_data.free_entries++; + i++; + } + } + + INIT_LIST_HEAD(&pasync_ctx->async_entry[cri].wait_queue.list); + pasync_ctx->async_entry[cri].wait_queue.hdr_received = 0; + pasync_ctx->async_entry[cri].wait_queue.bytes_received = 0; + return 0; +} + +static struct phys_addr * +hwi_get_ring_address(struct hwi_async_pdu_context *pasync_ctx, + unsigned int is_header, unsigned int host_write_ptr) +{ + struct phys_addr *pasync_sge = NULL; + + if (is_header) + pasync_sge = pasync_ctx->async_header.ring_base; + else + pasync_sge = 
pasync_ctx->async_data.ring_base; + + return pasync_sge + host_write_ptr; +} + +static void hwi_post_async_buffers(struct beiscsi_hba *phba, + unsigned int is_header) +{ + struct hwi_controller *phwi_ctrlr; + struct hwi_async_pdu_context *pasync_ctx; + struct async_pdu_handle *pasync_handle; + struct list_head *pfree_link, *pbusy_list; + struct phys_addr *pasync_sge; + unsigned int ring_id, num_entries; + unsigned int host_write_num; + unsigned int writables; + unsigned int i = 0; + u32 doorbell = 0; + + phwi_ctrlr = phba->phwi_ctrlr; + pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr); + + if (is_header) { + num_entries = pasync_ctx->async_header.num_entries; + writables = min(pasync_ctx->async_header.writables, + pasync_ctx->async_header.free_entries); + pfree_link = pasync_ctx->async_header.free_list.next; + host_write_num = pasync_ctx->async_header.host_write_ptr; + ring_id = phwi_ctrlr->default_pdu_hdr.id; + } else { + num_entries = pasync_ctx->async_data.num_entries; + writables = min(pasync_ctx->async_data.writables, + pasync_ctx->async_data.free_entries); + pfree_link = pasync_ctx->async_data.free_list.next; + host_write_num = pasync_ctx->async_data.host_write_ptr; + ring_id = phwi_ctrlr->default_pdu_data.id; + } + + writables = (writables / 8) * 8; + if (writables) { + for (i = 0; i < writables; i++) { + pbusy_list = + hwi_get_async_busy_list(pasync_ctx, is_header, + host_write_num); + pasync_handle = + list_entry(pfree_link, struct async_pdu_handle, + link); + WARN_ON(!pasync_handle); + pasync_handle->consumed = 0; + + pfree_link = pfree_link->next; + + pasync_sge = hwi_get_ring_address(pasync_ctx, + is_header, host_write_num); + + pasync_sge->hi = pasync_handle->pa.u.a32.address_lo; + pasync_sge->lo = pasync_handle->pa.u.a32.address_hi; + + list_move(&pasync_handle->link, pbusy_list); + + host_write_num++; + host_write_num = host_write_num % num_entries; + } + + if (is_header) { + pasync_ctx->async_header.host_write_ptr = + host_write_num; + pasync_ctx->async_header.free_entries -= writables; + pasync_ctx->async_header.writables -= writables; + pasync_ctx->async_header.busy_entries += writables; + } else { + pasync_ctx->async_data.host_write_ptr = host_write_num; + pasync_ctx->async_data.free_entries -= writables; + pasync_ctx->async_data.writables -= writables; + pasync_ctx->async_data.busy_entries += writables; + } + + doorbell |= ring_id & DB_DEF_PDU_RING_ID_MASK; + doorbell |= 1 << DB_DEF_PDU_REARM_SHIFT; + doorbell |= 0 << DB_DEF_PDU_EVENT_SHIFT; + doorbell |= (writables & DB_DEF_PDU_CQPROC_MASK) + << DB_DEF_PDU_CQPROC_SHIFT; + + iowrite32(doorbell, phba->db_va + DB_RXULP0_OFFSET); + } +} + +static void hwi_flush_default_pdu_buffer(struct beiscsi_hba *phba, + struct beiscsi_conn *beiscsi_conn, + struct i_t_dpdu_cqe *pdpdu_cqe) +{ + struct hwi_controller *phwi_ctrlr; + struct hwi_async_pdu_context *pasync_ctx; + struct async_pdu_handle *pasync_handle = NULL; + unsigned int cq_index = -1; + + phwi_ctrlr = phba->phwi_ctrlr; + pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr); + + pasync_handle = hwi_get_async_handle(phba, beiscsi_conn, pasync_ctx, + pdpdu_cqe, &cq_index); + BUG_ON(pasync_handle->is_header != 0); + if (pasync_handle->consumed == 0) + hwi_update_async_writables(pasync_ctx, pasync_handle->is_header, + cq_index); + + hwi_free_async_msg(phba, pasync_handle->cri); + hwi_post_async_buffers(phba, pasync_handle->is_header); +} + +static unsigned int +hwi_fwd_async_msg(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_hba *phba, + struct hwi_async_pdu_context *pasync_ctx, 
unsigned short cri) +{ + struct list_head *plist; + struct async_pdu_handle *pasync_handle; + void *phdr = NULL; + unsigned int hdr_len = 0, buf_len = 0; + unsigned int status, index = 0, offset = 0; + void *pfirst_buffer = NULL; + unsigned int num_buf = 0; + + plist = &pasync_ctx->async_entry[cri].wait_queue.list; + + list_for_each_entry(pasync_handle, plist, link) { + if (index == 0) { + phdr = pasync_handle->pbuffer; + hdr_len = pasync_handle->buffer_len; + } else { + buf_len = pasync_handle->buffer_len; + if (!num_buf) { + pfirst_buffer = pasync_handle->pbuffer; + num_buf++; + } + memcpy(pfirst_buffer + offset, + pasync_handle->pbuffer, buf_len); + offset = buf_len; + } + index++; + } + + status = beiscsi_process_async_pdu(beiscsi_conn, phba, + beiscsi_conn->beiscsi_conn_cid, + phdr, hdr_len, pfirst_buffer, + buf_len); + + if (status == 0) + hwi_free_async_msg(phba, cri); + return 0; +} + +static unsigned int +hwi_gather_async_pdu(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_hba *phba, + struct async_pdu_handle *pasync_handle) +{ + struct hwi_async_pdu_context *pasync_ctx; + struct hwi_controller *phwi_ctrlr; + unsigned int bytes_needed = 0, status = 0; + unsigned short cri = pasync_handle->cri; + struct pdu_base *ppdu; + + phwi_ctrlr = phba->phwi_ctrlr; + pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr); + + list_del(&pasync_handle->link); + if (pasync_handle->is_header) { + pasync_ctx->async_header.busy_entries--; + if (pasync_ctx->async_entry[cri].wait_queue.hdr_received) { + hwi_free_async_msg(phba, cri); + BUG(); + } + + pasync_ctx->async_entry[cri].wait_queue.bytes_received = 0; + pasync_ctx->async_entry[cri].wait_queue.hdr_received = 1; + pasync_ctx->async_entry[cri].wait_queue.hdr_len = + (unsigned short)pasync_handle->buffer_len; + list_add_tail(&pasync_handle->link, + &pasync_ctx->async_entry[cri].wait_queue.list); + + ppdu = pasync_handle->pbuffer; + bytes_needed = ((((ppdu->dw[offsetof(struct amap_pdu_base, + data_len_hi) / 32] & PDUBASE_DATALENHI_MASK) << 8) & + 0xFFFF0000) | ((be16_to_cpu((ppdu-> + dw[offsetof(struct amap_pdu_base, data_len_lo) / 32] + & PDUBASE_DATALENLO_MASK) >> 16)) & 0x0000FFFF)); + + if (status == 0) { + pasync_ctx->async_entry[cri].wait_queue.bytes_needed = + bytes_needed; + + if (bytes_needed == 0) + status = hwi_fwd_async_msg(beiscsi_conn, phba, + pasync_ctx, cri); + } + } else { + pasync_ctx->async_data.busy_entries--; + if (pasync_ctx->async_entry[cri].wait_queue.hdr_received) { + list_add_tail(&pasync_handle->link, + &pasync_ctx->async_entry[cri].wait_queue. + list); + pasync_ctx->async_entry[cri].wait_queue. + bytes_received += + (unsigned short)pasync_handle->buffer_len; + + if (pasync_ctx->async_entry[cri].wait_queue. + bytes_received >= + pasync_ctx->async_entry[cri].wait_queue. 
+ bytes_needed) + status = hwi_fwd_async_msg(beiscsi_conn, phba, + pasync_ctx, cri); + } + } + return status; +} + +static void hwi_process_default_pdu_ring(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_hba *phba, + struct i_t_dpdu_cqe *pdpdu_cqe) +{ + struct hwi_controller *phwi_ctrlr; + struct hwi_async_pdu_context *pasync_ctx; + struct async_pdu_handle *pasync_handle = NULL; + unsigned int cq_index = -1; + + phwi_ctrlr = phba->phwi_ctrlr; + pasync_ctx = HWI_GET_ASYNC_PDU_CTX(phwi_ctrlr); + pasync_handle = hwi_get_async_handle(phba, beiscsi_conn, pasync_ctx, + pdpdu_cqe, &cq_index); + + if (pasync_handle->consumed == 0) + hwi_update_async_writables(pasync_ctx, pasync_handle->is_header, + cq_index); + hwi_gather_async_pdu(beiscsi_conn, phba, pasync_handle); + hwi_post_async_buffers(phba, pasync_handle->is_header); +} + +static unsigned int beiscsi_process_cq(struct beiscsi_hba *phba) +{ + struct hwi_controller *phwi_ctrlr; + struct hwi_context_memory *phwi_context; + struct be_queue_info *cq; + struct sol_cqe *sol; + struct dmsg_cqe *dmsg; + unsigned int num_processed = 0; + unsigned int tot_nump = 0; + struct beiscsi_conn *beiscsi_conn; + + phwi_ctrlr = phba->phwi_ctrlr; + phwi_context = phwi_ctrlr->phwi_ctxt; + cq = &phwi_context->be_cq; + sol = queue_tail_node(cq); + + while (sol->dw[offsetof(struct amap_sol_cqe, valid) / 32] & + CQE_VALID_MASK) { + be_dws_le_to_cpu(sol, sizeof(struct sol_cqe)); + + beiscsi_conn = phba->conn_table[(u32) (sol-> + dw[offsetof(struct amap_sol_cqe, cid) / 32] & + SOL_CID_MASK) >> 6]; + + if (!beiscsi_conn || !beiscsi_conn->ep) { + shost_printk(KERN_WARNING, phba->shost, + "Connection table empty for cid = %d\n", + (u32)(sol->dw[offsetof(struct amap_sol_cqe, + cid) / 32] & SOL_CID_MASK) >> 6); + return 0; + } + + if (num_processed >= 32) { + hwi_ring_cq_db(phba, phwi_context->be_cq.id, + num_processed, 0, 0); + tot_nump += num_processed; + num_processed = 0; + } + + switch ((u32) sol->dw[offsetof(struct amap_sol_cqe, code) / + 32] & CQE_CODE_MASK) { + case SOL_CMD_COMPLETE: + hwi_complete_cmd(beiscsi_conn, phba, sol); + break; + case DRIVERMSG_NOTIFY: + SE_DEBUG(DBG_LVL_8, "Received DRIVERMSG_NOTIFY \n"); + dmsg = (struct dmsg_cqe *)sol; + hwi_complete_drvr_msgs(beiscsi_conn, phba, sol); + break; + case UNSOL_HDR_NOTIFY: + case UNSOL_DATA_NOTIFY: + SE_DEBUG(DBG_LVL_8, "Received UNSOL_HDR/DATA_NOTIFY\n"); + hwi_process_default_pdu_ring(beiscsi_conn, phba, + (struct i_t_dpdu_cqe *)sol); + break; + case CXN_INVALIDATE_INDEX_NOTIFY: + case CMD_INVALIDATED_NOTIFY: + case CXN_INVALIDATE_NOTIFY: + SE_DEBUG(DBG_LVL_1, + "Ignoring CQ Error notification for cmd/cxn" + "invalidate\n"); + break; + case SOL_CMD_KILLED_DATA_DIGEST_ERR: + case CMD_KILLED_INVALID_STATSN_RCVD: + case CMD_KILLED_INVALID_R2T_RCVD: + case CMD_CXN_KILLED_LUN_INVALID: + case CMD_CXN_KILLED_ICD_INVALID: + case CMD_CXN_KILLED_ITT_INVALID: + case CMD_CXN_KILLED_SEQ_OUTOFORDER: + case CMD_CXN_KILLED_INVALID_DATASN_RCVD: + SE_DEBUG(DBG_LVL_1, + "CQ Error notification for cmd.. 
" + "code %d cid 0x%x\n", + sol->dw[offsetof(struct amap_sol_cqe, code) / + 32] & CQE_CODE_MASK, + (sol->dw[offsetof(struct amap_sol_cqe, cid) / + 32] & SOL_CID_MASK)); + break; + case UNSOL_DATA_DIGEST_ERROR_NOTIFY: + SE_DEBUG(DBG_LVL_1, + "Digest error on def pdu ring, dropping..\n"); + hwi_flush_default_pdu_buffer(phba, beiscsi_conn, + (struct i_t_dpdu_cqe *) sol); + break; + case CXN_KILLED_PDU_SIZE_EXCEEDS_DSL: + case CXN_KILLED_BURST_LEN_MISMATCH: + case CXN_KILLED_AHS_RCVD: + case CXN_KILLED_HDR_DIGEST_ERR: + case CXN_KILLED_UNKNOWN_HDR: + case CXN_KILLED_STALE_ITT_TTT_RCVD: + case CXN_KILLED_INVALID_ITT_TTT_RCVD: + case CXN_KILLED_TIMED_OUT: + case CXN_KILLED_FIN_RCVD: + case CXN_KILLED_BAD_UNSOL_PDU_RCVD: + case CXN_KILLED_BAD_WRB_INDEX_ERROR: + case CXN_KILLED_OVER_RUN_RESIDUAL: + case CXN_KILLED_UNDER_RUN_RESIDUAL: + case CXN_KILLED_CMND_DATA_NOT_ON_SAME_CONN: + SE_DEBUG(DBG_LVL_1, "CQ Error %d, resetting CID " + "0x%x...\n", + sol->dw[offsetof(struct amap_sol_cqe, code) / + 32] & CQE_CODE_MASK, + sol->dw[offsetof(struct amap_sol_cqe, cid) / + 32] & CQE_CID_MASK); + iscsi_conn_failure(beiscsi_conn->conn, + ISCSI_ERR_CONN_FAILED); + break; + case CXN_KILLED_RST_SENT: + case CXN_KILLED_RST_RCVD: + SE_DEBUG(DBG_LVL_1, "CQ Error %d, reset received/sent " + "on CID 0x%x...\n", + sol->dw[offsetof(struct amap_sol_cqe, code) / + 32] & CQE_CODE_MASK, + sol->dw[offsetof(struct amap_sol_cqe, cid) / + 32] & CQE_CID_MASK); + iscsi_conn_failure(beiscsi_conn->conn, + ISCSI_ERR_CONN_FAILED); + break; + default: + SE_DEBUG(DBG_LVL_1, "CQ Error Invalid code= %d " + "received on CID 0x%x...\n", + sol->dw[offsetof(struct amap_sol_cqe, code) / + 32] & CQE_CODE_MASK, + sol->dw[offsetof(struct amap_sol_cqe, cid) / + 32] & CQE_CID_MASK); + break; + } + + AMAP_SET_BITS(struct amap_sol_cqe, valid, sol, 0); + queue_tail_inc(cq); + sol = queue_tail_node(cq); + num_processed++; + } + + if (num_processed > 0) { + tot_nump += num_processed; + hwi_ring_cq_db(phba, phwi_context->be_cq.id, num_processed, + 1, 0); + } + return tot_nump; +} + +static void beiscsi_process_all_cqs(struct work_struct *work) +{ + unsigned long flags; + struct beiscsi_hba *phba = + container_of(work, struct beiscsi_hba, work_cqs); + + if (phba->todo_mcc_cq) { + spin_lock_irqsave(&phba->isr_lock, flags); + phba->todo_mcc_cq = 0; + spin_unlock_irqrestore(&phba->isr_lock, flags); + SE_DEBUG(DBG_LVL_1, "MCC Interrupt Not expected \n"); + } + + if (phba->todo_cq) { + spin_lock_irqsave(&phba->isr_lock, flags); + phba->todo_cq = 0; + spin_unlock_irqrestore(&phba->isr_lock, flags); + beiscsi_process_cq(phba); + } +} + +static int be_iopoll(struct blk_iopoll *iop, int budget) +{ + static unsigned int ret; + struct beiscsi_hba *phba; + + phba = container_of(iop, struct beiscsi_hba, iopoll); + + ret = beiscsi_process_cq(phba); + if (ret < budget) { + struct hwi_controller *phwi_ctrlr; + struct hwi_context_memory *phwi_context; + + phwi_ctrlr = phba->phwi_ctrlr; + phwi_context = phwi_ctrlr->phwi_ctxt; + blk_iopoll_complete(iop); + hwi_ring_eq_db(phba, phwi_context->be_eq.q.id, 0, + 0, 1, 1); + } + return ret; +} + +static void +hwi_write_sgl(struct iscsi_wrb *pwrb, struct scatterlist *sg, + unsigned int num_sg, struct beiscsi_io_task *io_task) +{ + struct iscsi_sge *psgl; + unsigned short sg_len, index; + unsigned int sge_len = 0; + unsigned long long addr; + struct scatterlist *l_sg; + unsigned int offset; + + AMAP_SET_BITS(struct amap_iscsi_wrb, iscsi_bhs_addr_lo, pwrb, + io_task->bhs_pa.u.a32.address_lo); + AMAP_SET_BITS(struct amap_iscsi_wrb, 
iscsi_bhs_addr_hi, pwrb,
+		      io_task->bhs_pa.u.a32.address_hi);
+
+	l_sg = sg;
+	for (index = 0; (index < num_sg) && (index < 2);
+	     index++, sg = sg_next(sg)) {
+		if (index == 0) {
+			sg_len = sg_dma_len(sg);
+			addr = (u64) sg_dma_address(sg);
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_addr_lo, pwrb,
+				      (addr & 0xFFFFFFFF));
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_addr_hi, pwrb,
+				      (addr >> 32));
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_len, pwrb,
+				      sg_len);
+			sge_len = sg_len;
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_last, pwrb,
+				      1);
+		} else {
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_last, pwrb,
+				      0);
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge1_r2t_offset,
+				      pwrb, sge_len);
+			sg_len = sg_dma_len(sg);
+			addr = (u64) sg_dma_address(sg);
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge1_addr_lo, pwrb,
+				      (addr & 0xFFFFFFFF));
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge1_addr_hi, pwrb,
+				      (addr >> 32));
+			AMAP_SET_BITS(struct amap_iscsi_wrb, sge1_len, pwrb,
+				      sg_len);
+		}
+	}
+	psgl = (struct iscsi_sge *)io_task->psgl_handle->pfrag;
+	memset(psgl, 0, sizeof(*psgl) * BE2_SGE);
+
+	AMAP_SET_BITS(struct amap_iscsi_sge, len, psgl, io_task->bhs_len - 2);
+
+	AMAP_SET_BITS(struct amap_iscsi_sge, addr_hi, psgl,
+		      io_task->bhs_pa.u.a32.address_hi);
+	AMAP_SET_BITS(struct amap_iscsi_sge, addr_lo, psgl,
+		      io_task->bhs_pa.u.a32.address_lo);
+
+	if (num_sg == 2)
+		AMAP_SET_BITS(struct amap_iscsi_wrb, sge1_last, pwrb, 1);
+	sg = l_sg;
+	psgl++;
+	psgl++;
+	offset = 0;
+	for (index = 0; index < num_sg; index++, sg = sg_next(sg), psgl++) {
+		sg_len = sg_dma_len(sg);
+		addr = (u64) sg_dma_address(sg);
+		AMAP_SET_BITS(struct amap_iscsi_sge, addr_lo, psgl,
+			      (addr & 0xFFFFFFFF));
+		AMAP_SET_BITS(struct amap_iscsi_sge, addr_hi, psgl,
+			      (addr >> 32));
+		AMAP_SET_BITS(struct amap_iscsi_sge, len, psgl, sg_len);
+		AMAP_SET_BITS(struct amap_iscsi_sge, sge_offset, psgl, offset);
+		AMAP_SET_BITS(struct amap_iscsi_sge, last_sge, psgl, 0);
+		offset += sg_len;
+	}
+	psgl--;
+	AMAP_SET_BITS(struct amap_iscsi_sge, last_sge, psgl, 1);
+}
+
+static void hwi_write_buffer(struct iscsi_wrb *pwrb, struct iscsi_task *task)
+{
+	struct iscsi_sge *psgl;
+	unsigned long long addr;
+	struct beiscsi_io_task *io_task = task->dd_data;
+	struct beiscsi_conn *beiscsi_conn = io_task->conn;
+	struct beiscsi_hba *phba = beiscsi_conn->phba;
+
+	io_task->bhs_len = sizeof(struct be_nonio_bhs) - 2;
+	AMAP_SET_BITS(struct amap_iscsi_wrb, iscsi_bhs_addr_lo, pwrb,
+		      io_task->bhs_pa.u.a32.address_lo);
+	AMAP_SET_BITS(struct amap_iscsi_wrb, iscsi_bhs_addr_hi, pwrb,
+		      io_task->bhs_pa.u.a32.address_hi);
+
+	if (task->data) {
+		if (task->data_count) {
+			AMAP_SET_BITS(struct amap_iscsi_wrb, dsp, pwrb, 1);
+			addr = (u64) pci_map_single(phba->pcidev,
+						    task->data,
+						    task->data_count,
+						    PCI_DMA_TODEVICE);
+		} else {
+			AMAP_SET_BITS(struct amap_iscsi_wrb, dsp, pwrb, 0);
+			addr = 0;
+		}
+		AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_addr_lo, pwrb,
+			      (addr & 0xFFFFFFFF));
+		AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_addr_hi, pwrb,
+			      (addr >> 32));
+		AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_len, pwrb,
+			      task->data_count);
+
+		AMAP_SET_BITS(struct amap_iscsi_wrb, sge0_last, pwrb, 1);
+	} else {
+		AMAP_SET_BITS(struct amap_iscsi_wrb, dsp, pwrb, 0);
+		addr = 0;
+	}
+
+	psgl = (struct iscsi_sge *)io_task->psgl_handle->pfrag;
+
+	AMAP_SET_BITS(struct amap_iscsi_sge, len, psgl, io_task->bhs_len);
+
+	AMAP_SET_BITS(struct amap_iscsi_sge, addr_hi, psgl,
+		      io_task->bhs_pa.u.a32.address_hi);
+	AMAP_SET_BITS(struct amap_iscsi_sge, addr_lo, psgl,
+		      io_task->bhs_pa.u.a32.address_lo);
+	if (task->data) {
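+		/*
+		 * Two more fragment entries follow the BHS SGE for tasks
+		 * carrying immediate data: the first is cleared, the second
+		 * points at the buffer mapped above. The 0x106 length set
+		 * below appears to be a fixed value the firmware expects
+		 * for this slot (an assumption; the patch does not
+		 * document it).
+		 */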
+ psgl++; + AMAP_SET_BITS(struct amap_iscsi_sge, addr_hi, psgl, 0); + AMAP_SET_BITS(struct amap_iscsi_sge, addr_lo, psgl, 0); + AMAP_SET_BITS(struct amap_iscsi_sge, len, psgl, 0); + AMAP_SET_BITS(struct amap_iscsi_sge, sge_offset, psgl, 0); + AMAP_SET_BITS(struct amap_iscsi_sge, rsvd0, psgl, 0); + AMAP_SET_BITS(struct amap_iscsi_sge, last_sge, psgl, 0); + + psgl++; + if (task->data) { + AMAP_SET_BITS(struct amap_iscsi_sge, addr_lo, psgl, + (addr & 0xFFFFFFFF)); + AMAP_SET_BITS(struct amap_iscsi_sge, addr_hi, psgl, + (addr >> 32)); + } + AMAP_SET_BITS(struct amap_iscsi_sge, len, psgl, 0x106); + } + AMAP_SET_BITS(struct amap_iscsi_sge, last_sge, psgl, 1); +} + +static void beiscsi_find_mem_req(struct beiscsi_hba *phba) +{ + unsigned int num_cq_pages, num_eq_pages, num_async_pdu_buf_pages; + unsigned int num_async_pdu_data_pages, wrb_sz_per_cxn; + unsigned int num_async_pdu_buf_sgl_pages, num_async_pdu_data_sgl_pages; + + num_cq_pages = PAGES_REQUIRED(phba->params.num_cq_entries * \ + sizeof(struct sol_cqe)); + num_eq_pages = PAGES_REQUIRED(phba->params.num_eq_entries * \ + sizeof(struct be_eq_entry)); + num_async_pdu_buf_pages = + PAGES_REQUIRED(phba->params.asyncpdus_per_ctrl * \ + phba->params.defpdu_hdr_sz); + num_async_pdu_buf_sgl_pages = + PAGES_REQUIRED(phba->params.asyncpdus_per_ctrl * \ + sizeof(struct phys_addr)); + num_async_pdu_data_pages = + PAGES_REQUIRED(phba->params.asyncpdus_per_ctrl * \ + phba->params.defpdu_data_sz); + num_async_pdu_data_sgl_pages = + PAGES_REQUIRED(phba->params.asyncpdus_per_ctrl * \ + sizeof(struct phys_addr)); + + phba->params.hwi_ws_sz = sizeof(struct hwi_controller); + + phba->mem_req[ISCSI_MEM_GLOBAL_HEADER] = 2 * + BE_ISCSI_PDU_HEADER_SIZE; + phba->mem_req[HWI_MEM_ADDN_CONTEXT] = + sizeof(struct hwi_context_memory); + + phba->mem_req[HWI_MEM_CQ] = num_cq_pages * PAGE_SIZE; + phba->mem_req[HWI_MEM_EQ] = num_eq_pages * PAGE_SIZE; + + phba->mem_req[HWI_MEM_WRB] = sizeof(struct iscsi_wrb) + * (phba->params.wrbs_per_cxn) + * phba->params.cxns_per_ctrl; + wrb_sz_per_cxn = sizeof(struct wrb_handle) * + (phba->params.wrbs_per_cxn); + phba->mem_req[HWI_MEM_WRBH] = roundup_pow_of_two((wrb_sz_per_cxn) * + phba->params.cxns_per_ctrl); + + phba->mem_req[HWI_MEM_SGLH] = sizeof(struct sgl_handle) * + phba->params.icds_per_ctrl; + phba->mem_req[HWI_MEM_SGE] = sizeof(struct iscsi_sge) * + phba->params.num_sge_per_io * phba->params.icds_per_ctrl; + + phba->mem_req[HWI_MEM_ASYNC_HEADER_BUF] = + num_async_pdu_buf_pages * PAGE_SIZE; + phba->mem_req[HWI_MEM_ASYNC_DATA_BUF] = + num_async_pdu_data_pages * PAGE_SIZE; + phba->mem_req[HWI_MEM_ASYNC_HEADER_RING] = + num_async_pdu_buf_sgl_pages * PAGE_SIZE; + phba->mem_req[HWI_MEM_ASYNC_DATA_RING] = + num_async_pdu_data_sgl_pages * PAGE_SIZE; + phba->mem_req[HWI_MEM_ASYNC_HEADER_HANDLE] = + phba->params.asyncpdus_per_ctrl * + sizeof(struct async_pdu_handle); + phba->mem_req[HWI_MEM_ASYNC_DATA_HANDLE] = + phba->params.asyncpdus_per_ctrl * + sizeof(struct async_pdu_handle); + phba->mem_req[HWI_MEM_ASYNC_PDU_CONTEXT] = + sizeof(struct hwi_async_pdu_context) + + (phba->params.cxns_per_ctrl * sizeof(struct hwi_async_entry)); +} + +static int beiscsi_alloc_mem(struct beiscsi_hba *phba) +{ + struct be_mem_descriptor *mem_descr; + dma_addr_t bus_add; + struct mem_array *mem_arr, *mem_arr_orig; + unsigned int i, j, alloc_size, curr_alloc_size; + + phba->phwi_ctrlr = kmalloc(phba->params.hwi_ws_sz, GFP_KERNEL); + if (!phba->phwi_ctrlr) + return -ENOMEM; + + phba->init_mem = kcalloc(SE_MEM_MAX, sizeof(*mem_descr), + GFP_KERNEL); + if 
(!phba->init_mem) { + kfree(phba->phwi_ctrlr); + return -ENOMEM; + } + + mem_arr_orig = kmalloc(sizeof(*mem_arr_orig) * BEISCSI_MAX_FRAGS_INIT, + GFP_KERNEL); + if (!mem_arr_orig) { + kfree(phba->init_mem); + kfree(phba->phwi_ctrlr); + return -ENOMEM; + } + + mem_descr = phba->init_mem; + for (i = 0; i < SE_MEM_MAX; i++) { + j = 0; + mem_arr = mem_arr_orig; + alloc_size = phba->mem_req[i]; + memset(mem_arr, 0, sizeof(struct mem_array) * + BEISCSI_MAX_FRAGS_INIT); + curr_alloc_size = min(be_max_phys_size * 1024, alloc_size); + do { + mem_arr->virtual_address = pci_alloc_consistent( + phba->pcidev, + curr_alloc_size, + &bus_add); + if (!mem_arr->virtual_address) { + if (curr_alloc_size <= BE_MIN_MEM_SIZE) + goto free_mem; + if (curr_alloc_size - + rounddown_pow_of_two(curr_alloc_size)) + curr_alloc_size = rounddown_pow_of_two + (curr_alloc_size); + else + curr_alloc_size = curr_alloc_size / 2; + } else { + mem_arr->bus_address.u. + a64.address = (__u64) bus_add; + mem_arr->size = curr_alloc_size; + alloc_size -= curr_alloc_size; + curr_alloc_size = min(be_max_phys_size * + 1024, alloc_size); + j++; + mem_arr++; + } + } while (alloc_size); + mem_descr->num_elements = j; + mem_descr->size_in_bytes = phba->mem_req[i]; + mem_descr->mem_array = kmalloc(sizeof(*mem_arr) * j, + GFP_KERNEL); + if (!mem_descr->mem_array) + goto free_mem; + + memcpy(mem_descr->mem_array, mem_arr_orig, + sizeof(struct mem_array) * j); + mem_descr++; + } + kfree(mem_arr_orig); + return 0; +free_mem: + mem_descr->num_elements = j; + while ((i) || (j)) { + for (j = mem_descr->num_elements; j > 0; j--) { + pci_free_consistent(phba->pcidev, + mem_descr->mem_array[j - 1].size, + mem_descr->mem_array[j - 1]. + virtual_address, + mem_descr->mem_array[j - 1]. + bus_address.u.a64.address); + } + if (i) { + i--; + kfree(mem_descr->mem_array); + mem_descr--; + } + } + kfree(mem_arr_orig); + kfree(phba->init_mem); + kfree(phba->phwi_ctrlr); + return -ENOMEM; +} + +static int beiscsi_get_memory(struct beiscsi_hba *phba) +{ + beiscsi_find_mem_req(phba); + return beiscsi_alloc_mem(phba); +} + +static void iscsi_init_global_templates(struct beiscsi_hba *phba) +{ + struct pdu_data_out *pdata_out; + struct pdu_nop_out *pnop_out; + struct be_mem_descriptor *mem_descr; + + mem_descr = phba->init_mem; + mem_descr += ISCSI_MEM_GLOBAL_HEADER; + pdata_out = + (struct pdu_data_out *)mem_descr->mem_array[0].virtual_address; + memset(pdata_out, 0, BE_ISCSI_PDU_HEADER_SIZE); + + AMAP_SET_BITS(struct amap_pdu_data_out, opcode, pdata_out, + IIOC_SCSI_DATA); + + pnop_out = + (struct pdu_nop_out *)((unsigned char *)mem_descr->mem_array[0]. 
+ virtual_address + BE_ISCSI_PDU_HEADER_SIZE); + + memset(pnop_out, 0, BE_ISCSI_PDU_HEADER_SIZE); + AMAP_SET_BITS(struct amap_pdu_nop_out, ttt, pnop_out, 0xFFFFFFFF); + AMAP_SET_BITS(struct amap_pdu_nop_out, f_bit, pnop_out, 1); + AMAP_SET_BITS(struct amap_pdu_nop_out, i_bit, pnop_out, 0); +} + +static void beiscsi_init_wrb_handle(struct beiscsi_hba *phba) +{ + struct be_mem_descriptor *mem_descr_wrbh, *mem_descr_wrb; + struct wrb_handle *pwrb_handle; + struct hwi_controller *phwi_ctrlr; + struct hwi_wrb_context *pwrb_context; + struct iscsi_wrb *pwrb; + unsigned int num_cxn_wrbh; + unsigned int num_cxn_wrb, j, idx, index; + + mem_descr_wrbh = phba->init_mem; + mem_descr_wrbh += HWI_MEM_WRBH; + + mem_descr_wrb = phba->init_mem; + mem_descr_wrb += HWI_MEM_WRB; + + idx = 0; + pwrb_handle = mem_descr_wrbh->mem_array[idx].virtual_address; + num_cxn_wrbh = ((mem_descr_wrbh->mem_array[idx].size) / + ((sizeof(struct wrb_handle)) * + phba->params.wrbs_per_cxn)); + phwi_ctrlr = phba->phwi_ctrlr; + + for (index = 0; index < phba->params.cxns_per_ctrl * 2; index += 2) { + pwrb_context = &phwi_ctrlr->wrb_context[index]; + SE_DEBUG(DBG_LVL_8, "cid=%d pwrb_context=%p \n", index, + pwrb_context); + pwrb_context->pwrb_handle_base = + kzalloc(sizeof(struct wrb_handle *) * + phba->params.wrbs_per_cxn, GFP_KERNEL); + pwrb_context->pwrb_handle_basestd = + kzalloc(sizeof(struct wrb_handle *) * + phba->params.wrbs_per_cxn, GFP_KERNEL); + if (num_cxn_wrbh) { + pwrb_context->alloc_index = 0; + pwrb_context->wrb_handles_available = 0; + for (j = 0; j < phba->params.wrbs_per_cxn; j++) { + pwrb_context->pwrb_handle_base[j] = pwrb_handle; + pwrb_context->pwrb_handle_basestd[j] = + pwrb_handle; + pwrb_context->wrb_handles_available++; + pwrb_handle++; + } + pwrb_context->free_index = 0; + num_cxn_wrbh--; + } else { + idx++; + pwrb_handle = + mem_descr_wrbh->mem_array[idx].virtual_address; + num_cxn_wrbh = + ((mem_descr_wrbh->mem_array[idx].size) / + ((sizeof(struct wrb_handle)) * + phba->params.wrbs_per_cxn)); + pwrb_context->alloc_index = 0; + for (j = 0; j < phba->params.wrbs_per_cxn; j++) { + pwrb_context->pwrb_handle_base[j] = pwrb_handle; + pwrb_context->pwrb_handle_basestd[j] = + pwrb_handle; + pwrb_context->wrb_handles_available++; + pwrb_handle++; + } + pwrb_context->free_index = 0; + num_cxn_wrbh--; + } + } + idx = 0; + pwrb = mem_descr_wrb->mem_array[idx].virtual_address; + num_cxn_wrb = + ((mem_descr_wrb->mem_array[idx].size) / (sizeof(struct iscsi_wrb)) * + phba->params.wrbs_per_cxn); + + for (index = 0; index < phba->params.cxns_per_ctrl; index += 2) { + pwrb_context = &phwi_ctrlr->wrb_context[index]; + if (num_cxn_wrb) { + for (j = 0; j < phba->params.wrbs_per_cxn; j++) { + pwrb_handle = pwrb_context->pwrb_handle_base[j]; + pwrb_handle->pwrb = pwrb; + pwrb++; + } + num_cxn_wrb--; + } else { + idx++; + pwrb = mem_descr_wrb->mem_array[idx].virtual_address; + num_cxn_wrb = ((mem_descr_wrb->mem_array[idx].size) / + (sizeof(struct iscsi_wrb)) * + phba->params.wrbs_per_cxn); + for (j = 0; j < phba->params.wrbs_per_cxn; j++) { + pwrb_handle = pwrb_context->pwrb_handle_base[j]; + pwrb_handle->pwrb = pwrb; + pwrb++; + } + num_cxn_wrb--; + } + } +} + +static void hwi_init_async_pdu_ctx(struct beiscsi_hba *phba) +{ + struct hwi_controller *phwi_ctrlr; + struct hba_parameters *p = &phba->params; + struct hwi_async_pdu_context *pasync_ctx; + struct async_pdu_handle *pasync_header_h, *pasync_data_h; + unsigned int index; + struct be_mem_descriptor *mem_descr; + + mem_descr = (struct be_mem_descriptor *)phba->init_mem; 
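+	/*
+	 * phba->init_mem holds one be_mem_descriptor per HWI_MEM_* region,
+	 * sized in beiscsi_find_mem_req() and allocated in
+	 * beiscsi_alloc_mem(); stepping by the enum value selects the
+	 * backing storage for the async PDU context.
+	 */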
+ mem_descr += HWI_MEM_ASYNC_PDU_CONTEXT; + + phwi_ctrlr = phba->phwi_ctrlr; + phwi_ctrlr->phwi_ctxt->pasync_ctx = (struct hwi_async_pdu_context *) + mem_descr->mem_array[0].virtual_address; + pasync_ctx = phwi_ctrlr->phwi_ctxt->pasync_ctx; + memset(pasync_ctx, 0, sizeof(*pasync_ctx)); + + pasync_ctx->async_header.num_entries = p->asyncpdus_per_ctrl; + pasync_ctx->async_header.buffer_size = p->defpdu_hdr_sz; + pasync_ctx->async_data.buffer_size = p->defpdu_data_sz; + pasync_ctx->async_data.num_entries = p->asyncpdus_per_ctrl; + + mem_descr = (struct be_mem_descriptor *)phba->init_mem; + mem_descr += HWI_MEM_ASYNC_HEADER_BUF; + if (mem_descr->mem_array[0].virtual_address) { + SE_DEBUG(DBG_LVL_8, + "hwi_init_async_pdu_ctx HWI_MEM_ASYNC_HEADER_BUF" + "va=%p \n", mem_descr->mem_array[0].virtual_address); + } else + shost_printk(KERN_WARNING, phba->shost, + "No Virtual address \n"); + + pasync_ctx->async_header.va_base = + mem_descr->mem_array[0].virtual_address; + + pasync_ctx->async_header.pa_base.u.a64.address = + mem_descr->mem_array[0].bus_address.u.a64.address; + + mem_descr = (struct be_mem_descriptor *)phba->init_mem; + mem_descr += HWI_MEM_ASYNC_HEADER_RING; + if (mem_descr->mem_array[0].virtual_address) { + SE_DEBUG(DBG_LVL_8, + "hwi_init_async_pdu_ctx HWI_MEM_ASYNC_HEADER_RING" + "va=%p \n", mem_descr->mem_array[0].virtual_address); + } else + shost_printk(KERN_WARNING, phba->shost, + "No Virtual address \n"); + pasync_ctx->async_header.ring_base = + mem_descr->mem_array[0].virtual_address; + + mem_descr = (struct be_mem_descriptor *)phba->init_mem; + mem_descr += HWI_MEM_ASYNC_HEADER_HANDLE; + if (mem_descr->mem_array[0].virtual_address) { + SE_DEBUG(DBG_LVL_8, + "hwi_init_async_pdu_ctx HWI_MEM_ASYNC_HEADER_HANDLE" + "va=%p \n", mem_descr->mem_array[0].virtual_address); + } else + shost_printk(KERN_WARNING, phba->shost, + "No Virtual address \n"); + + pasync_ctx->async_header.handle_base = + mem_descr->mem_array[0].virtual_address; + pasync_ctx->async_header.writables = 0; + INIT_LIST_HEAD(&pasync_ctx->async_header.free_list); + + mem_descr = (struct be_mem_descriptor *)phba->init_mem; + mem_descr += HWI_MEM_ASYNC_DATA_BUF; + if (mem_descr->mem_array[0].virtual_address) { + SE_DEBUG(DBG_LVL_8, + "hwi_init_async_pdu_ctx HWI_MEM_ASYNC_DATA_BUF" + "va=%p \n", mem_descr->mem_array[0].virtual_address); + } else + shost_printk(KERN_WARNING, phba->shost, + "No Virtual address \n"); + pasync_ctx->async_data.va_base = + mem_descr->mem_array[0].virtual_address; + pasync_ctx->async_data.pa_base.u.a64.address = + mem_descr->mem_array[0].bus_address.u.a64.address; + + mem_descr = (struct be_mem_descriptor *)phba->init_mem; + mem_descr += HWI_MEM_ASYNC_DATA_RING; + if (mem_descr->mem_array[0].virtual_address) { + SE_DEBUG(DBG_LVL_8, + "hwi_init_async_pdu_ctx HWI_MEM_ASYNC_DATA_RING" + "va=%p \n", mem_descr->mem_array[0].virtual_address); + } else + shost_printk(KERN_WARNING, phba->shost, + "No Virtual address \n"); + + pasync_ctx->async_data.ring_base = + mem_descr->mem_array[0].virtual_address; + + mem_descr = (struct be_mem_descriptor *)phba->init_mem; + mem_descr += HWI_MEM_ASYNC_DATA_HANDLE; + if (!mem_descr->mem_array[0].virtual_address) + shost_printk(KERN_WARNING, phba->shost, + "No Virtual address \n"); + + pasync_ctx->async_data.handle_base = + mem_descr->mem_array[0].virtual_address; + pasync_ctx->async_data.writables = 0; + INIT_LIST_HEAD(&pasync_ctx->async_data.free_list); + + pasync_header_h = + (struct async_pdu_handle *)pasync_ctx->async_header.handle_base; + pasync_data_h = + 
(struct async_pdu_handle *)pasync_ctx->async_data.handle_base;
+
+	for (index = 0; index < p->asyncpdus_per_ctrl; index++) {
+		pasync_header_h->cri = -1;
+		pasync_header_h->index = (char)index;
+		INIT_LIST_HEAD(&pasync_header_h->link);
+		pasync_header_h->pbuffer =
+			(void *)((unsigned long)
+				 (pasync_ctx->async_header.va_base) +
+				 (p->defpdu_hdr_sz * index));
+
+		pasync_header_h->pa.u.a64.address =
+			pasync_ctx->async_header.pa_base.u.a64.address +
+			(p->defpdu_hdr_sz * index);
+
+		list_add_tail(&pasync_header_h->link,
+			      &pasync_ctx->async_header.free_list);
+		pasync_header_h++;
+		pasync_ctx->async_header.free_entries++;
+		pasync_ctx->async_header.writables++;
+
+		INIT_LIST_HEAD(&pasync_ctx->async_entry[index].wait_queue.list);
+		INIT_LIST_HEAD(&pasync_ctx->async_entry[index].
+			       header_busy_list);
+		pasync_data_h->cri = -1;
+		pasync_data_h->index = (char)index;
+		INIT_LIST_HEAD(&pasync_data_h->link);
+		pasync_data_h->pbuffer =
+			(void *)((unsigned long)
+				 (pasync_ctx->async_data.va_base) +
+				 (p->defpdu_data_sz * index));
+
+		pasync_data_h->pa.u.a64.address =
+			pasync_ctx->async_data.pa_base.u.a64.address +
+			(p->defpdu_data_sz * index);
+
+		list_add_tail(&pasync_data_h->link,
+			      &pasync_ctx->async_data.free_list);
+		pasync_data_h++;
+		pasync_ctx->async_data.free_entries++;
+		pasync_ctx->async_data.writables++;
+
+		INIT_LIST_HEAD(&pasync_ctx->async_entry[index].data_busy_list);
+	}
+
+	pasync_ctx->async_header.host_write_ptr = 0;
+	pasync_ctx->async_header.ep_read_ptr = -1;
+	pasync_ctx->async_data.host_write_ptr = 0;
+	pasync_ctx->async_data.ep_read_ptr = -1;
+}
+
+static int
+be_sgl_create_contiguous(void *virtual_address,
+			 u64 physical_address, u32 length,
+			 struct be_dma_mem *sgl)
+{
+	WARN_ON(!virtual_address);
+	WARN_ON(!physical_address);
+	WARN_ON(length == 0);
+	WARN_ON(!sgl);
+
+	sgl->va = virtual_address;
+	sgl->dma = physical_address;
+	sgl->size = length;
+
+	return 0;
+}
+
+static void be_sgl_destroy_contiguous(struct be_dma_mem *sgl)
+{
+	memset(sgl, 0, sizeof(*sgl));
+}
+
+static void
+hwi_build_be_sgl_arr(struct beiscsi_hba *phba,
+		     struct mem_array *pmem, struct be_dma_mem *sgl)
+{
+	if (sgl->va)
+		be_sgl_destroy_contiguous(sgl);
+
+	be_sgl_create_contiguous(pmem->virtual_address,
+				 pmem->bus_address.u.a64.address,
+				 pmem->size, sgl);
+}
+
+static void
+hwi_build_be_sgl_by_offset(struct beiscsi_hba *phba,
+			   struct mem_array *pmem, struct be_dma_mem *sgl)
+{
+	if (sgl->va)
+		be_sgl_destroy_contiguous(sgl);
+
+	be_sgl_create_contiguous((unsigned char *)pmem->virtual_address,
+				 pmem->bus_address.u.a64.address,
+				 pmem->size, sgl);
+}
+
+static int be_fill_queue(struct be_queue_info *q,
+			 u16 len, u16 entry_size, void *vaddress)
+{
+	struct be_dma_mem *mem = &q->dma_mem;
+
+	memset(q, 0, sizeof(*q));
+	q->len = len;
+	q->entry_size = entry_size;
+	mem->size = len * entry_size;
+	mem->va = vaddress;
+	if (!mem->va)
+		return -ENOMEM;
+	memset(mem->va, 0, mem->size);
+	return 0;
+}
+
+static int beiscsi_create_eq(struct beiscsi_hba *phba,
+			     struct hwi_context_memory *phwi_context)
+{
+	unsigned int idx;
+	int ret;
+	struct be_queue_info *eq;
+	struct be_dma_mem *mem;
+	struct be_mem_descriptor *mem_descr;
+	void *eq_vaddress;
+
+	idx = 0;
+	eq = &phwi_context->be_eq.q;
+	mem = &eq->dma_mem;
+	mem_descr = phba->init_mem;
+	mem_descr += HWI_MEM_EQ;
+	eq_vaddress = mem_descr->mem_array[idx].virtual_address;
+
+	ret = be_fill_queue(eq, phba->params.num_eq_entries,
+			    sizeof(struct be_eq_entry), eq_vaddress);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost,
+			     "be_fill_queue Failed for EQ\n");
+		return ret;
+	}
+
+	mem->dma = mem_descr->mem_array[idx].bus_address.u.a64.address;
+
+	ret = beiscsi_cmd_eq_create(&phba->ctrl, eq,
+				    phwi_context->be_eq.cur_eqd);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost,
+			     "beiscsi_cmd_eq_create Failed for EQ\n");
+		return ret;
+	}
+	SE_DEBUG(DBG_LVL_8, "eq id is %d\n", phwi_context->be_eq.q.id);
+	return 0;
+}
+
+static int beiscsi_create_cq(struct beiscsi_hba *phba,
+			     struct hwi_context_memory *phwi_context)
+{
+	unsigned int idx;
+	int ret;
+	struct be_queue_info *cq, *eq;
+	struct be_dma_mem *mem;
+	struct be_mem_descriptor *mem_descr;
+	void *cq_vaddress;
+
+	idx = 0;
+	cq = &phwi_context->be_cq;
+	eq = &phwi_context->be_eq.q;
+	mem = &cq->dma_mem;
+	mem_descr = phba->init_mem;
+	mem_descr += HWI_MEM_CQ;
+	cq_vaddress = mem_descr->mem_array[idx].virtual_address;
+	ret = be_fill_queue(cq, phba->params.icds_per_ctrl / 2,
+			    sizeof(struct sol_cqe), cq_vaddress);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost,
+			     "be_fill_queue Failed for ISCSI CQ\n");
+		return ret;
+	}
+
+	mem->dma = mem_descr->mem_array[idx].bus_address.u.a64.address;
+	ret = beiscsi_cmd_cq_create(&phba->ctrl, cq, eq, false, false, 0);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost,
+			     "beiscsi_cmd_cq_create Failed for ISCSI CQ\n");
+		return ret;
+	}
+	SE_DEBUG(DBG_LVL_8, "iscsi cq id is %d\n", phwi_context->be_cq.id);
+	SE_DEBUG(DBG_LVL_8, "ISCSI CQ CREATED\n");
+	return 0;
+}
+
+static int
+beiscsi_create_def_hdr(struct beiscsi_hba *phba,
+		       struct hwi_context_memory *phwi_context,
+		       struct hwi_controller *phwi_ctrlr,
+		       unsigned int def_pdu_ring_sz)
+{
+	unsigned int idx;
+	int ret;
+	struct be_queue_info *dq, *cq;
+	struct be_dma_mem *mem;
+	struct be_mem_descriptor *mem_descr;
+	void *dq_vaddress;
+
+	idx = 0;
+	dq = &phwi_context->be_def_hdrq;
+	cq = &phwi_context->be_cq;
+	mem = &dq->dma_mem;
+	mem_descr = phba->init_mem;
+	mem_descr += HWI_MEM_ASYNC_HEADER_RING;
+	dq_vaddress = mem_descr->mem_array[idx].virtual_address;
+	ret = be_fill_queue(dq, mem_descr->mem_array[0].size /
+			    sizeof(struct phys_addr),
+			    sizeof(struct phys_addr), dq_vaddress);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost,
+			     "be_fill_queue Failed for DEF PDU HDR\n");
+		return ret;
+	}
+	mem->dma = mem_descr->mem_array[idx].bus_address.u.a64.address;
+	ret = be_cmd_create_default_pdu_queue(&phba->ctrl, cq, dq,
+					      def_pdu_ring_sz,
+					      phba->params.defpdu_hdr_sz);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost,
+			     "be_cmd_create_default_pdu_queue Failed DEFHDR\n");
+		return ret;
+	}
+	phwi_ctrlr->default_pdu_hdr.id = phwi_context->be_def_hdrq.id;
+	SE_DEBUG(DBG_LVL_8, "iscsi def pdu id is %d\n",
+		 phwi_context->be_def_hdrq.id);
+	hwi_post_async_buffers(phba, 1);
+	return 0;
+}
+
+static int
+beiscsi_create_def_data(struct beiscsi_hba *phba,
+			struct hwi_context_memory *phwi_context,
+			struct hwi_controller *phwi_ctrlr,
+			unsigned int def_pdu_ring_sz)
+{
+	unsigned int idx;
+	int ret;
+	struct be_queue_info *dataq, *cq;
+	struct be_dma_mem *mem;
+	struct be_mem_descriptor *mem_descr;
+	void *dq_vaddress;
+
+	idx = 0;
+	dataq = &phwi_context->be_def_dataq;
+	cq = &phwi_context->be_cq;
+	mem = &dataq->dma_mem;
+	mem_descr = phba->init_mem;
+	mem_descr += HWI_MEM_ASYNC_DATA_RING;
+	dq_vaddress = mem_descr->mem_array[idx].virtual_address;
+	ret = be_fill_queue(dataq, mem_descr->mem_array[0].size /
+			    sizeof(struct phys_addr),
+			    sizeof(struct phys_addr), dq_vaddress);
+	if (ret) {
+		shost_printk(KERN_ERR, phba->shost,
+			     "be_fill_queue Failed for DEF PDU DATA\n");
+		return ret;
+	}
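+	/*
+	 * be_fill_queue() records only the ring's virtual address and size;
+	 * the bus address is filled into the dma_mem descriptor here, just
+	 * before the queue-create mailbox command is issued to the adapter.
+	 */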
mem->dma = mem_descr->mem_array[idx].bus_address.u.a64.address; + ret = be_cmd_create_default_pdu_queue(&phba->ctrl, cq, dataq, + def_pdu_ring_sz, + phba->params.defpdu_data_sz); + if (ret) { + shost_printk(KERN_ERR, phba->shost, + "be_cmd_create_default_pdu_queue Failed" + " for DEF PDU DATA\n"); + return ret; + } + phwi_ctrlr->default_pdu_data.id = phwi_context->be_def_dataq.id; + SE_DEBUG(DBG_LVL_8, "iscsi def data id is %d\n", + phwi_context->be_def_dataq.id); + hwi_post_async_buffers(phba, 0); + SE_DEBUG(DBG_LVL_8, "DEFAULT PDU DATA RING CREATED \n"); + return 0; +} + +static int +beiscsi_post_pages(struct beiscsi_hba *phba) +{ + struct be_mem_descriptor *mem_descr; + struct mem_array *pm_arr; + unsigned int page_offset, i; + struct be_dma_mem sgl; + int status; + + mem_descr = phba->init_mem; + mem_descr += HWI_MEM_SGE; + pm_arr = mem_descr->mem_array; + + page_offset = (sizeof(struct iscsi_sge) * phba->params.num_sge_per_io * + phba->fw_config.iscsi_icd_start) / PAGE_SIZE; + for (i = 0; i < mem_descr->num_elements; i++) { + hwi_build_be_sgl_arr(phba, pm_arr, &sgl); + status = be_cmd_iscsi_post_sgl_pages(&phba->ctrl, &sgl, + page_offset, + (pm_arr->size / PAGE_SIZE)); + page_offset += pm_arr->size / PAGE_SIZE; + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, + "post sgl failed.\n"); + return status; + } + pm_arr++; + } + SE_DEBUG(DBG_LVL_8, "POSTED PAGES \n"); + return 0; +} + +static int +beiscsi_create_wrb_rings(struct beiscsi_hba *phba, + struct hwi_context_memory *phwi_context, + struct hwi_controller *phwi_ctrlr) +{ + unsigned int wrb_mem_index, offset, size, num_wrb_rings; + u64 pa_addr_lo; + unsigned int idx, num, i; + struct mem_array *pwrb_arr; + void *wrb_vaddr; + struct be_dma_mem sgl; + struct be_mem_descriptor *mem_descr; + int status; + + idx = 0; + mem_descr = phba->init_mem; + mem_descr += HWI_MEM_WRB; + pwrb_arr = kmalloc(sizeof(*pwrb_arr) * phba->params.cxns_per_ctrl, + GFP_KERNEL); + if (!pwrb_arr) { + shost_printk(KERN_ERR, phba->shost, + "Memory alloc failed in create wrb ring.\n"); + return -ENOMEM; + } + wrb_vaddr = mem_descr->mem_array[idx].virtual_address; + pa_addr_lo = mem_descr->mem_array[idx].bus_address.u.a64.address; + num_wrb_rings = mem_descr->mem_array[idx].size / + (phba->params.wrbs_per_cxn * sizeof(struct iscsi_wrb)); + + for (num = 0; num < phba->params.cxns_per_ctrl; num++) { + if (num_wrb_rings) { + pwrb_arr[num].virtual_address = wrb_vaddr; + pwrb_arr[num].bus_address.u.a64.address = pa_addr_lo; + pwrb_arr[num].size = phba->params.wrbs_per_cxn * + sizeof(struct iscsi_wrb); + wrb_vaddr += pwrb_arr[num].size; + pa_addr_lo += pwrb_arr[num].size; + num_wrb_rings--; + } else { + idx++; + wrb_vaddr = mem_descr->mem_array[idx].virtual_address; + pa_addr_lo = mem_descr->mem_array[idx].\ + bus_address.u.a64.address; + num_wrb_rings = mem_descr->mem_array[idx].size / + (phba->params.wrbs_per_cxn * + sizeof(struct iscsi_wrb)); + pwrb_arr[num].virtual_address = wrb_vaddr; + pwrb_arr[num].bus_address.u.a64.address\ + = pa_addr_lo; + pwrb_arr[num].size = phba->params.wrbs_per_cxn * + sizeof(struct iscsi_wrb); + wrb_vaddr += pwrb_arr[num].size; + pa_addr_lo += pwrb_arr[num].size; + num_wrb_rings--; + } + } + for (i = 0; i < phba->params.cxns_per_ctrl; i++) { + wrb_mem_index = 0; + offset = 0; + size = 0; + + hwi_build_be_sgl_by_offset(phba, &pwrb_arr[i], &sgl); + status = be_cmd_wrbq_create(&phba->ctrl, &sgl, + &phwi_context->be_wrbq[i]); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, + "wrbq create failed."); + return status; + } + 
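+		/*
+		 * Save the queue id the firmware assigned for this
+		 * connection's WRB queue; the WRB context is looked up
+		 * by this cid on the I/O path (see beiscsi_alloc_pdu()).
+		 */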
phwi_ctrlr->wrb_context[i].cid = phwi_context->be_wrbq[i].id; + } + kfree(pwrb_arr); + return 0; +} + +static void free_wrb_handles(struct beiscsi_hba *phba) +{ + unsigned int index; + struct hwi_controller *phwi_ctrlr; + struct hwi_wrb_context *pwrb_context; + + phwi_ctrlr = phba->phwi_ctrlr; + for (index = 0; index < phba->params.cxns_per_ctrl * 2; index += 2) { + pwrb_context = &phwi_ctrlr->wrb_context[index]; + kfree(pwrb_context->pwrb_handle_base); + kfree(pwrb_context->pwrb_handle_basestd); + } +} + +static void hwi_cleanup(struct beiscsi_hba *phba) +{ + struct be_queue_info *q; + struct be_ctrl_info *ctrl = &phba->ctrl; + struct hwi_controller *phwi_ctrlr; + struct hwi_context_memory *phwi_context; + int i; + + phwi_ctrlr = phba->phwi_ctrlr; + phwi_context = phwi_ctrlr->phwi_ctxt; + for (i = 0; i < phba->params.cxns_per_ctrl; i++) { + q = &phwi_context->be_wrbq[i]; + if (q->created) + beiscsi_cmd_q_destroy(ctrl, q, QTYPE_WRBQ); + } + + free_wrb_handles(phba); + + q = &phwi_context->be_def_hdrq; + if (q->created) + beiscsi_cmd_q_destroy(ctrl, q, QTYPE_DPDUQ); + + q = &phwi_context->be_def_dataq; + if (q->created) + beiscsi_cmd_q_destroy(ctrl, q, QTYPE_DPDUQ); + + beiscsi_cmd_q_destroy(ctrl, NULL, QTYPE_SGL); + + q = &phwi_context->be_cq; + if (q->created) + beiscsi_cmd_q_destroy(ctrl, q, QTYPE_CQ); + + q = &phwi_context->be_eq.q; + if (q->created) + beiscsi_cmd_q_destroy(ctrl, q, QTYPE_EQ); +} + +static int hwi_init_port(struct beiscsi_hba *phba) +{ + struct hwi_controller *phwi_ctrlr; + struct hwi_context_memory *phwi_context; + unsigned int def_pdu_ring_sz; + struct be_ctrl_info *ctrl = &phba->ctrl; + int status; + + def_pdu_ring_sz = + phba->params.asyncpdus_per_ctrl * sizeof(struct phys_addr); + phwi_ctrlr = phba->phwi_ctrlr; + + phwi_context = phwi_ctrlr->phwi_ctxt; + phwi_context->be_eq.max_eqd = 0; + phwi_context->be_eq.min_eqd = 0; + phwi_context->be_eq.cur_eqd = 64; + phwi_context->be_eq.enable_aic = false; + be_cmd_fw_initialize(&phba->ctrl); + status = beiscsi_create_eq(phba, phwi_context); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, "EQ not created \n"); + goto error; + } + + status = mgmt_check_supported_fw(ctrl); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, + "Unsupported fw version \n"); + goto error; + } + + status = mgmt_get_fw_config(ctrl, phba); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, + "Error getting fw config\n"); + goto error; + } + + status = beiscsi_create_cq(phba, phwi_context); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, "CQ not created\n"); + goto error; + } + + status = beiscsi_create_def_hdr(phba, phwi_context, phwi_ctrlr, + def_pdu_ring_sz); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, + "Default Header not created\n"); + goto error; + } + + status = beiscsi_create_def_data(phba, phwi_context, + phwi_ctrlr, def_pdu_ring_sz); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, + "Default Data not created\n"); + goto error; + } + + status = beiscsi_post_pages(phba); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, "Post SGL Pages Failed\n"); + goto error; + } + + status = beiscsi_create_wrb_rings(phba, phwi_context, phwi_ctrlr); + if (status != 0) { + shost_printk(KERN_ERR, phba->shost, + "WRB Rings not created\n"); + goto error; + } + + SE_DEBUG(DBG_LVL_8, "hwi_init_port success\n"); + return 0; + +error: + shost_printk(KERN_ERR, phba->shost, "hwi_init_port failed"); + hwi_cleanup(phba); + return -ENOMEM; +} + + +static int hwi_init_controller(struct beiscsi_hba 
*phba) +{ + struct hwi_controller *phwi_ctrlr; + + phwi_ctrlr = phba->phwi_ctrlr; + if (1 == phba->init_mem[HWI_MEM_ADDN_CONTEXT].num_elements) { + phwi_ctrlr->phwi_ctxt = (struct hwi_context_memory *)phba-> + init_mem[HWI_MEM_ADDN_CONTEXT].mem_array[0].virtual_address; + SE_DEBUG(DBG_LVL_8, " phwi_ctrlr->phwi_ctxt=%p \n", + phwi_ctrlr->phwi_ctxt); + } else { + shost_printk(KERN_ERR, phba->shost, + "HWI_MEM_ADDN_CONTEXT is more than one element." + "Failing to load\n"); + return -ENOMEM; + } + + iscsi_init_global_templates(phba); + beiscsi_init_wrb_handle(phba); + hwi_init_async_pdu_ctx(phba); + if (hwi_init_port(phba) != 0) { + shost_printk(KERN_ERR, phba->shost, + "hwi_init_controller failed\n"); + return -ENOMEM; + } + return 0; +} + +static void beiscsi_free_mem(struct beiscsi_hba *phba) +{ + struct be_mem_descriptor *mem_descr; + int i, j; + + mem_descr = phba->init_mem; + i = 0; + j = 0; + for (i = 0; i < SE_MEM_MAX; i++) { + for (j = mem_descr->num_elements; j > 0; j--) { + pci_free_consistent(phba->pcidev, + mem_descr->mem_array[j - 1].size, + mem_descr->mem_array[j - 1].virtual_address, + mem_descr->mem_array[j - 1].bus_address. + u.a64.address); + } + kfree(mem_descr->mem_array); + mem_descr++; + } + kfree(phba->init_mem); + kfree(phba->phwi_ctrlr); +} + +static int beiscsi_init_controller(struct beiscsi_hba *phba) +{ + int ret = -ENOMEM; + + ret = beiscsi_get_memory(phba); + if (ret < 0) { + shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe -" + "Failed in beiscsi_alloc_memory \n"); + return ret; + } + + ret = hwi_init_controller(phba); + if (ret) + goto free_init; + SE_DEBUG(DBG_LVL_8, "Return success from beiscsi_init_controller"); + return 0; + +free_init: + beiscsi_free_mem(phba); + return -ENOMEM; +} + +static int beiscsi_init_sgl_handle(struct beiscsi_hba *phba) +{ + struct be_mem_descriptor *mem_descr_sglh, *mem_descr_sg; + struct sgl_handle *psgl_handle; + struct iscsi_sge *pfrag; + unsigned int arr_index, i, idx; + + phba->io_sgl_hndl_avbl = 0; + phba->eh_sgl_hndl_avbl = 0; + mem_descr_sglh = phba->init_mem; + mem_descr_sglh += HWI_MEM_SGLH; + if (1 == mem_descr_sglh->num_elements) { + phba->io_sgl_hndl_base = kzalloc(sizeof(struct sgl_handle *) * + phba->params.ios_per_ctrl, + GFP_KERNEL); + if (!phba->io_sgl_hndl_base) { + shost_printk(KERN_ERR, phba->shost, + "Mem Alloc Failed. Failing to load\n"); + return -ENOMEM; + } + phba->eh_sgl_hndl_base = kzalloc(sizeof(struct sgl_handle *) * + (phba->params.icds_per_ctrl - + phba->params.ios_per_ctrl), + GFP_KERNEL); + if (!phba->eh_sgl_hndl_base) { + kfree(phba->io_sgl_hndl_base); + shost_printk(KERN_ERR, phba->shost, + "Mem Alloc Failed. Failing to load\n"); + return -ENOMEM; + } + } else { + shost_printk(KERN_ERR, phba->shost, + "HWI_MEM_SGLH is more than one element." 
+ "Failing to load\n"); + return -ENOMEM; + } + + arr_index = 0; + idx = 0; + while (idx < mem_descr_sglh->num_elements) { + psgl_handle = mem_descr_sglh->mem_array[idx].virtual_address; + + for (i = 0; i < (mem_descr_sglh->mem_array[idx].size / + sizeof(struct sgl_handle)); i++) { + if (arr_index < phba->params.ios_per_ctrl) { + phba->io_sgl_hndl_base[arr_index] = psgl_handle; + phba->io_sgl_hndl_avbl++; + arr_index++; + } else { + phba->eh_sgl_hndl_base[arr_index - + phba->params.ios_per_ctrl] = + psgl_handle; + arr_index++; + phba->eh_sgl_hndl_avbl++; + } + psgl_handle++; + } + idx++; + } + SE_DEBUG(DBG_LVL_8, + "phba->io_sgl_hndl_avbl=%d" + "phba->eh_sgl_hndl_avbl=%d \n", + phba->io_sgl_hndl_avbl, + phba->eh_sgl_hndl_avbl); + mem_descr_sg = phba->init_mem; + mem_descr_sg += HWI_MEM_SGE; + SE_DEBUG(DBG_LVL_8, "\n mem_descr_sg->num_elements=%d \n", + mem_descr_sg->num_elements); + arr_index = 0; + idx = 0; + while (idx < mem_descr_sg->num_elements) { + pfrag = mem_descr_sg->mem_array[idx].virtual_address; + + for (i = 0; + i < (mem_descr_sg->mem_array[idx].size) / + (sizeof(struct iscsi_sge) * phba->params.num_sge_per_io); + i++) { + if (arr_index < phba->params.ios_per_ctrl) + psgl_handle = phba->io_sgl_hndl_base[arr_index]; + else + psgl_handle = phba->eh_sgl_hndl_base[arr_index - + phba->params.ios_per_ctrl]; + psgl_handle->pfrag = pfrag; + AMAP_SET_BITS(struct amap_iscsi_sge, addr_hi, pfrag, 0); + AMAP_SET_BITS(struct amap_iscsi_sge, addr_lo, pfrag, 0); + pfrag += phba->params.num_sge_per_io; + psgl_handle->sgl_index = + phba->fw_config.iscsi_cid_start + arr_index++; + } + idx++; + } + phba->io_sgl_free_index = 0; + phba->io_sgl_alloc_index = 0; + phba->eh_sgl_free_index = 0; + phba->eh_sgl_alloc_index = 0; + return 0; +} + +static int hba_setup_cid_tbls(struct beiscsi_hba *phba) +{ + int i, new_cid; + + phba->cid_array = kmalloc(sizeof(void *) * phba->params.cxns_per_ctrl, + GFP_KERNEL); + if (!phba->cid_array) { + shost_printk(KERN_ERR, phba->shost, + "Failed to allocate memory in " + "hba_setup_cid_tbls\n"); + return -ENOMEM; + } + phba->ep_array = kmalloc(sizeof(struct iscsi_endpoint *) * + phba->params.cxns_per_ctrl * 2, GFP_KERNEL); + if (!phba->ep_array) { + shost_printk(KERN_ERR, phba->shost, + "Failed to allocate memory in " + "hba_setup_cid_tbls \n"); + kfree(phba->cid_array); + return -ENOMEM; + } + new_cid = phba->fw_config.iscsi_icd_start; + for (i = 0; i < phba->params.cxns_per_ctrl; i++) { + phba->cid_array[i] = new_cid; + new_cid += 2; + } + phba->avlbl_cids = phba->params.cxns_per_ctrl; + return 0; +} + +static unsigned char hwi_enable_intr(struct beiscsi_hba *phba) +{ + struct be_ctrl_info *ctrl = &phba->ctrl; + struct hwi_controller *phwi_ctrlr; + struct hwi_context_memory *phwi_context; + struct be_queue_info *eq; + u8 __iomem *addr; + u32 reg; + u32 enabled; + + phwi_ctrlr = phba->phwi_ctrlr; + phwi_context = phwi_ctrlr->phwi_ctxt; + + eq = &phwi_context->be_eq.q; + addr = (u8 __iomem *) ((u8 __iomem *) ctrl->pcicfg + + PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET); + reg = ioread32(addr); + SE_DEBUG(DBG_LVL_8, "reg =x%08x \n", reg); + + enabled = reg & MEMBAR_CTRL_INT_CTRL_HOSTINTR_MASK; + if (!enabled) { + reg |= MEMBAR_CTRL_INT_CTRL_HOSTINTR_MASK; + SE_DEBUG(DBG_LVL_8, "reg =x%08x addr=%p \n", reg, addr); + iowrite32(reg, addr); + SE_DEBUG(DBG_LVL_8, "eq->id=%d \n", eq->id); + + hwi_ring_eq_db(phba, eq->id, 0, 0, 1, 1); + } else + shost_printk(KERN_WARNING, phba->shost, + "In hwi_enable_intr, Not Enabled \n"); + return true; +} + +static void hwi_disable_intr(struct 
beiscsi_hba *phba) +{ + struct be_ctrl_info *ctrl = &phba->ctrl; + + u8 __iomem *addr = ctrl->pcicfg + PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET; + u32 reg = ioread32(addr); + + u32 enabled = reg & MEMBAR_CTRL_INT_CTRL_HOSTINTR_MASK; + if (enabled) { + reg &= ~MEMBAR_CTRL_INT_CTRL_HOSTINTR_MASK; + iowrite32(reg, addr); + } else + shost_printk(KERN_WARNING, phba->shost, + "In hwi_disable_intr, Already Disabled \n"); +} + +static int beiscsi_init_port(struct beiscsi_hba *phba) +{ + int ret; + + ret = beiscsi_init_controller(phba); + if (ret < 0) { + shost_printk(KERN_ERR, phba->shost, + "beiscsi_dev_probe - Failed in" + "beiscsi_init_controller \n"); + return ret; + } + ret = beiscsi_init_sgl_handle(phba); + if (ret < 0) { + shost_printk(KERN_ERR, phba->shost, + "beiscsi_dev_probe - Failed in" + "beiscsi_init_sgl_handle \n"); + goto do_cleanup_ctrlr; + } + + if (hba_setup_cid_tbls(phba)) { + shost_printk(KERN_ERR, phba->shost, + "Failed in hba_setup_cid_tbls\n"); + kfree(phba->io_sgl_hndl_base); + kfree(phba->eh_sgl_hndl_base); + goto do_cleanup_ctrlr; + } + + return ret; + +do_cleanup_ctrlr: + hwi_cleanup(phba); + return ret; +} + +static void hwi_purge_eq(struct beiscsi_hba *phba) +{ + struct hwi_controller *phwi_ctrlr; + struct hwi_context_memory *phwi_context; + struct be_queue_info *eq; + struct be_eq_entry *eqe = NULL; + + phwi_ctrlr = phba->phwi_ctrlr; + phwi_context = phwi_ctrlr->phwi_ctxt; + eq = &phwi_context->be_eq.q; + eqe = queue_tail_node(eq); + + while (eqe->dw[offsetof(struct amap_eq_entry, valid) / 32] + & EQE_VALID_MASK) { + AMAP_SET_BITS(struct amap_eq_entry, valid, eqe, 0); + queue_tail_inc(eq); + eqe = queue_tail_node(eq); + } +} + +static void beiscsi_clean_port(struct beiscsi_hba *phba) +{ + unsigned char mgmt_status; + + mgmt_status = mgmt_epfw_cleanup(phba, CMD_CONNECTION_CHUTE_0); + if (mgmt_status) + shost_printk(KERN_WARNING, phba->shost, + "mgmt_epfw_cleanup FAILED \n"); + hwi_cleanup(phba); + hwi_purge_eq(phba); + kfree(phba->io_sgl_hndl_base); + kfree(phba->eh_sgl_hndl_base); + kfree(phba->cid_array); + kfree(phba->ep_array); +} + +void +beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_offload_params *params) +{ + struct wrb_handle *pwrb_handle; + struct iscsi_target_context_update_wrb *pwrb = NULL; + struct be_mem_descriptor *mem_descr; + struct beiscsi_hba *phba = beiscsi_conn->phba; + u32 doorbell = 0; + + /* + * We can always use 0 here because it is reserved by libiscsi for + * login/startup related tasks. 
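+	 * The WRB built here carries no I/O; it pushes the negotiated
+	 * offload parameters (burst lengths, ERL, digest/R2T settings,
+	 * exp_statsn) into the connection context on the chip.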
+ */ + pwrb_handle = alloc_wrb_handle(phba, beiscsi_conn->beiscsi_conn_cid, 0); + pwrb = (struct iscsi_target_context_update_wrb *)pwrb_handle->pwrb; + memset(pwrb, 0, sizeof(*pwrb)); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, + max_burst_length, pwrb, params->dw[offsetof + (struct amap_beiscsi_offload_params, + max_burst_length) / 32]); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, + max_send_data_segment_length, pwrb, + params->dw[offsetof(struct amap_beiscsi_offload_params, + max_send_data_segment_length) / 32]); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, + first_burst_length, + pwrb, + params->dw[offsetof(struct amap_beiscsi_offload_params, + first_burst_length) / 32]); + + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, erl, pwrb, + (params->dw[offsetof(struct amap_beiscsi_offload_params, + erl) / 32] & OFFLD_PARAMS_ERL)); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, dde, pwrb, + (params->dw[offsetof(struct amap_beiscsi_offload_params, + dde) / 32] & OFFLD_PARAMS_DDE) >> 2); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, hde, pwrb, + (params->dw[offsetof(struct amap_beiscsi_offload_params, + hde) / 32] & OFFLD_PARAMS_HDE) >> 3); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, ir2t, pwrb, + (params->dw[offsetof(struct amap_beiscsi_offload_params, + ir2t) / 32] & OFFLD_PARAMS_IR2T) >> 4); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, imd, pwrb, + (params->dw[offsetof(struct amap_beiscsi_offload_params, + imd) / 32] & OFFLD_PARAMS_IMD) >> 5); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, stat_sn, + pwrb, + (params->dw[offsetof(struct amap_beiscsi_offload_params, + exp_statsn) / 32] + 1)); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, type, pwrb, + 0x7); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, wrb_idx, + pwrb, pwrb_handle->wrb_index); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, ptr2nextwrb, + pwrb, pwrb_handle->nxt_wrb_index); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, + session_state, pwrb, 0); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, compltonack, + pwrb, 1); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, notpredblq, + pwrb, 0); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, mode, pwrb, + 0); + + mem_descr = phba->init_mem; + mem_descr += ISCSI_MEM_GLOBAL_HEADER; + + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, + pad_buffer_addr_hi, pwrb, + mem_descr->mem_array[0].bus_address.u.a32.address_hi); + AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, + pad_buffer_addr_lo, pwrb, + mem_descr->mem_array[0].bus_address.u.a32.address_lo); + + be_dws_le_to_cpu(pwrb, sizeof(struct iscsi_target_context_update_wrb)); + + doorbell |= beiscsi_conn->beiscsi_conn_cid & DB_WRB_POST_CID_MASK; + doorbell |= (pwrb_handle->wrb_index & DB_DEF_PDU_WRB_INDEX_MASK) << + DB_DEF_PDU_WRB_INDEX_SHIFT; + doorbell |= 1 << DB_DEF_PDU_NUM_POSTED_SHIFT; + + iowrite32(doorbell, phba->db_va + DB_TXULP0_OFFSET); +} + +static void beiscsi_parse_pdu(struct iscsi_conn *conn, itt_t itt, + int *index, int *age) +{ + *index = be32_to_cpu(itt) >> 16; + if (age) + *age = conn->session->age; +} + +/** + * beiscsi_alloc_pdu - allocates pdu and related resources + * @task: libiscsi task + * @opcode: opcode of pdu for task + * + * This is called with the session lock held. It will allocate + * the wrb and sgl if needed for the command. 
And it will prep + * the pdu's itt. beiscsi_parse_pdu will later translate + * the pdu itt to the libiscsi task itt. + */ +static int beiscsi_alloc_pdu(struct iscsi_task *task, uint8_t opcode) +{ + struct beiscsi_io_task *io_task = task->dd_data; + struct iscsi_conn *conn = task->conn; + struct beiscsi_conn *beiscsi_conn = conn->dd_data; + struct beiscsi_hba *phba = beiscsi_conn->phba; + struct hwi_wrb_context *pwrb_context; + struct hwi_controller *phwi_ctrlr; + itt_t itt; + + io_task->pwrb_handle = alloc_wrb_handle(phba, + beiscsi_conn->beiscsi_conn_cid, + task->itt); + io_task->pwrb_handle->pio_handle = task; + io_task->conn = beiscsi_conn; + + task->hdr = (struct iscsi_hdr *)&io_task->cmd_bhs->iscsi_hdr; + task->hdr_max = sizeof(struct be_cmd_bhs); + + if (task->sc) { + spin_lock(&phba->io_sgl_lock); + io_task->psgl_handle = alloc_io_sgl_handle(phba); + spin_unlock(&phba->io_sgl_lock); + if (!io_task->psgl_handle) { + phwi_ctrlr = phba->phwi_ctrlr; + pwrb_context = &phwi_ctrlr->wrb_context + [beiscsi_conn->beiscsi_conn_cid]; + free_wrb_handle(phba, pwrb_context, + io_task->pwrb_handle); + io_task->pwrb_handle = NULL; + SE_DEBUG(DBG_LVL_1, + "Alloc of SGL_ICD Failed \n"); + return -ENOMEM; + } + } else { + io_task->scsi_cmnd = NULL; + if ((task->hdr->opcode & ISCSI_OPCODE_MASK) == ISCSI_OP_LOGIN) { + if (!beiscsi_conn->login_in_progress) { + spin_lock(&phba->mgmt_sgl_lock); + io_task->psgl_handle = (struct sgl_handle *) + alloc_mgmt_sgl_handle(phba); + spin_unlock(&phba->mgmt_sgl_lock); + if (!io_task->psgl_handle) { + phwi_ctrlr = phba->phwi_ctrlr; + pwrb_context = + &phwi_ctrlr->wrb_context + [beiscsi_conn->beiscsi_conn_cid]; + free_wrb_handle(phba, pwrb_context, + io_task->pwrb_handle); + io_task->pwrb_handle = NULL; + SE_DEBUG(DBG_LVL_1, "Alloc of " + "MGMT_SGL_ICD Failed \n"); + return -ENOMEM; + } + beiscsi_conn->login_in_progress = 1; + beiscsi_conn->plogin_sgl_handle = + io_task->psgl_handle; + } else { + io_task->psgl_handle = + beiscsi_conn->plogin_sgl_handle; + } + } else { + spin_lock(&phba->mgmt_sgl_lock); + io_task->psgl_handle = alloc_mgmt_sgl_handle(phba); + spin_unlock(&phba->mgmt_sgl_lock); + if (!io_task->psgl_handle) { + phwi_ctrlr = phba->phwi_ctrlr; + pwrb_context = &phwi_ctrlr->wrb_context + [beiscsi_conn->beiscsi_conn_cid]; + free_wrb_handle(phba, pwrb_context, + io_task->pwrb_handle); + io_task->pwrb_handle = NULL; + SE_DEBUG(DBG_LVL_1, "Alloc of " + "MGMT_SGL_ICD Failed \n"); + return -ENOMEM; + } + } + } + itt = (itt_t) cpu_to_be32(((unsigned int)task->itt << 16) | + (unsigned int)(io_task->psgl_handle->sgl_index)); + io_task->cmd_bhs->iscsi_hdr.itt = itt; + return 0; +} + +static void beiscsi_cleanup_task(struct iscsi_task *task) +{ + struct beiscsi_io_task *io_task = task->dd_data; + struct iscsi_conn *conn = task->conn; + struct beiscsi_conn *beiscsi_conn = conn->dd_data; + struct beiscsi_hba *phba = beiscsi_conn->phba; + struct hwi_wrb_context *pwrb_context; + struct hwi_controller *phwi_ctrlr; + + phwi_ctrlr = phba->phwi_ctrlr; + pwrb_context = &phwi_ctrlr->wrb_context[beiscsi_conn->beiscsi_conn_cid]; + if (io_task->pwrb_handle) { + free_wrb_handle(phba, pwrb_context, io_task->pwrb_handle); + io_task->pwrb_handle = NULL; + } + + if (task->sc) { + if (io_task->psgl_handle) { + spin_lock(&phba->io_sgl_lock); + free_io_sgl_handle(phba, io_task->psgl_handle); + spin_unlock(&phba->io_sgl_lock); + io_task->psgl_handle = NULL; + } + } else { + if ((task->hdr->opcode & ISCSI_OPCODE_MASK) == ISCSI_OP_LOGIN) + return; + if (io_task->psgl_handle) { + 
spin_lock(&phba->mgmt_sgl_lock); + free_mgmt_sgl_handle(phba, io_task->psgl_handle); + spin_unlock(&phba->mgmt_sgl_lock); + io_task->psgl_handle = NULL; + } + } +} + +static int beiscsi_iotask(struct iscsi_task *task, struct scatterlist *sg, + unsigned int num_sg, unsigned int xferlen, + unsigned int writedir) +{ + + struct beiscsi_io_task *io_task = task->dd_data; + struct iscsi_conn *conn = task->conn; + struct beiscsi_conn *beiscsi_conn = conn->dd_data; + struct beiscsi_hba *phba = beiscsi_conn->phba; + struct iscsi_wrb *pwrb = NULL; + unsigned int doorbell = 0; + + pwrb = io_task->pwrb_handle->pwrb; + + io_task->cmd_bhs->iscsi_hdr.exp_statsn = 0; + io_task->bhs_len = sizeof(struct be_cmd_bhs); + + if (writedir) { + SE_DEBUG(DBG_LVL_4, " WRITE Command \t"); + memset(&io_task->cmd_bhs->iscsi_data_pdu, 0, 48); + AMAP_SET_BITS(struct amap_pdu_data_out, itt, + &io_task->cmd_bhs->iscsi_data_pdu, + (unsigned int)io_task->cmd_bhs->iscsi_hdr.itt); + AMAP_SET_BITS(struct amap_pdu_data_out, opcode, + &io_task->cmd_bhs->iscsi_data_pdu, + ISCSI_OPCODE_SCSI_DATA_OUT); + AMAP_SET_BITS(struct amap_pdu_data_out, final_bit, + &io_task->cmd_bhs->iscsi_data_pdu, 1); + AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb, INI_WR_CMD); + AMAP_SET_BITS(struct amap_iscsi_wrb, dsp, pwrb, 1); + + } else { + SE_DEBUG(DBG_LVL_4, "READ Command \t"); + AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb, INI_RD_CMD); + AMAP_SET_BITS(struct amap_iscsi_wrb, dsp, pwrb, 0); + } + memcpy(&io_task->cmd_bhs->iscsi_data_pdu. + dw[offsetof(struct amap_pdu_data_out, lun) / 32], + io_task->cmd_bhs->iscsi_hdr.lun, sizeof(struct scsi_lun)); + + AMAP_SET_BITS(struct amap_iscsi_wrb, lun, pwrb, + cpu_to_be16((unsigned short)io_task->cmd_bhs->iscsi_hdr. + lun[0])); + AMAP_SET_BITS(struct amap_iscsi_wrb, r2t_exp_dtl, pwrb, xferlen); + AMAP_SET_BITS(struct amap_iscsi_wrb, wrb_idx, pwrb, + io_task->pwrb_handle->wrb_index); + AMAP_SET_BITS(struct amap_iscsi_wrb, cmdsn_itt, pwrb, + be32_to_cpu(task->cmdsn)); + AMAP_SET_BITS(struct amap_iscsi_wrb, sgl_icd_idx, pwrb, + io_task->psgl_handle->sgl_index); + + hwi_write_sgl(pwrb, sg, num_sg, io_task); + + AMAP_SET_BITS(struct amap_iscsi_wrb, ptr2nextwrb, pwrb, + io_task->pwrb_handle->nxt_wrb_index); + be_dws_le_to_cpu(pwrb, sizeof(struct iscsi_wrb)); + + doorbell |= beiscsi_conn->beiscsi_conn_cid & DB_WRB_POST_CID_MASK; + doorbell |= (io_task->pwrb_handle->wrb_index & + DB_DEF_PDU_WRB_INDEX_MASK) << DB_DEF_PDU_WRB_INDEX_SHIFT; + doorbell |= 1 << DB_DEF_PDU_NUM_POSTED_SHIFT; + + iowrite32(doorbell, phba->db_va + DB_TXULP0_OFFSET); + return 0; +} + +static int beiscsi_mtask(struct iscsi_task *task) +{ + struct beiscsi_io_task *aborted_io_task, *io_task = task->dd_data; + struct iscsi_conn *conn = task->conn; + struct beiscsi_conn *beiscsi_conn = conn->dd_data; + struct beiscsi_hba *phba = beiscsi_conn->phba; + struct iscsi_wrb *pwrb = NULL; + unsigned int doorbell = 0; + struct iscsi_task *aborted_task; + + pwrb = io_task->pwrb_handle->pwrb; + AMAP_SET_BITS(struct amap_iscsi_wrb, cmdsn_itt, pwrb, + be32_to_cpu(task->cmdsn)); + AMAP_SET_BITS(struct amap_iscsi_wrb, wrb_idx, pwrb, + io_task->pwrb_handle->wrb_index); + AMAP_SET_BITS(struct amap_iscsi_wrb, sgl_icd_idx, pwrb, + io_task->psgl_handle->sgl_index); + + switch (task->hdr->opcode & ISCSI_OPCODE_MASK) { + case ISCSI_OP_LOGIN: + AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb, TGT_DM_CMD); + AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 0); + AMAP_SET_BITS(struct amap_iscsi_wrb, cmdsn_itt, pwrb, 1); + hwi_write_buffer(pwrb, task); + break; + case 
ISCSI_OP_NOOP_OUT:
+		AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb, INI_RD_CMD);
+		hwi_write_buffer(pwrb, task);
+		break;
+	case ISCSI_OP_TEXT:
+		AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb, INI_WR_CMD);
+		AMAP_SET_BITS(struct amap_iscsi_wrb, dsp, pwrb, 1);
+		hwi_write_buffer(pwrb, task);
+		break;
+	case ISCSI_OP_SCSI_TMFUNC:
+		aborted_task = iscsi_itt_to_task(conn,
+					((struct iscsi_tm *)task->hdr)->rtt);
+		if (!aborted_task)
+			return 0;
+		aborted_io_task = aborted_task->dd_data;
+		if (!aborted_io_task->scsi_cmnd)
+			return 0;
+
+		mgmt_invalidate_icds(phba,
+				     aborted_io_task->psgl_handle->sgl_index,
+				     beiscsi_conn->beiscsi_conn_cid);
+		AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb, INI_TMF_CMD);
+		AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 0);
+		hwi_write_buffer(pwrb, task);
+		break;
+	case ISCSI_OP_LOGOUT:
+		AMAP_SET_BITS(struct amap_iscsi_wrb, dmsg, pwrb, 0);
+		AMAP_SET_BITS(struct amap_iscsi_wrb, type, pwrb,
+			      HWH_TYPE_LOGOUT);
+		hwi_write_buffer(pwrb, task);
+		break;
+
+	default:
+		SE_DEBUG(DBG_LVL_1, "opcode=%d Not supported\n",
+			 task->hdr->opcode & ISCSI_OPCODE_MASK);
+		return -EINVAL;
+	}
+
+	AMAP_SET_BITS(struct amap_iscsi_wrb, r2t_exp_dtl, pwrb,
+		      be32_to_cpu(task->data_count));
+	AMAP_SET_BITS(struct amap_iscsi_wrb, ptr2nextwrb, pwrb,
+		      io_task->pwrb_handle->nxt_wrb_index);
+	be_dws_le_to_cpu(pwrb, sizeof(struct iscsi_wrb));
+
+	doorbell |= beiscsi_conn->beiscsi_conn_cid & DB_WRB_POST_CID_MASK;
+	doorbell |= (io_task->pwrb_handle->wrb_index &
+		     DB_DEF_PDU_WRB_INDEX_MASK) << DB_DEF_PDU_WRB_INDEX_SHIFT;
+	doorbell |= 1 << DB_DEF_PDU_NUM_POSTED_SHIFT;
+	iowrite32(doorbell, phba->db_va + DB_TXULP0_OFFSET);
+	return 0;
+}
+
+static int beiscsi_task_xmit(struct iscsi_task *task)
+{
+	struct iscsi_conn *conn = task->conn;
+	struct beiscsi_io_task *io_task = task->dd_data;
+	struct scsi_cmnd *sc = task->sc;
+	struct beiscsi_conn *beiscsi_conn = conn->dd_data;
+	struct scatterlist *sg;
+	int num_sg;
+	unsigned int writedir = 0, xferlen = 0;
+
+	SE_DEBUG(DBG_LVL_4, "\n cid=%d In beiscsi_task_xmit task=%p conn=%p \t"
+		 "beiscsi_conn=%p\n", beiscsi_conn->beiscsi_conn_cid,
+		 task, conn, beiscsi_conn);
+	if (!sc)
+		return beiscsi_mtask(task);
+
+	io_task->scsi_cmnd = sc;
+	num_sg = scsi_dma_map(sc);
+	if (num_sg < 0) {
+		SE_DEBUG(DBG_LVL_1, "scsi_dma_map Failed\n");
+		return num_sg;
+	}
+	SE_DEBUG(DBG_LVL_4, "xferlen=0x%08x scmd=%p num_sg=%d sernum=%lu\n",
+		 (scsi_bufflen(sc)), sc, num_sg, sc->serial_number);
+	xferlen = scsi_bufflen(sc);
+	sg = scsi_sglist(sc);
+	if (sc->sc_data_direction == DMA_TO_DEVICE) {
+		writedir = 1;
+		SE_DEBUG(DBG_LVL_4, "task->imm_count=0x%08x\n",
+			 task->imm_count);
+	} else
+		writedir = 0;
+	return beiscsi_iotask(task, sg, num_sg, xferlen, writedir);
+}
+
+static void beiscsi_remove(struct pci_dev *pcidev)
+{
+	struct beiscsi_hba *phba = NULL;
+
+	phba = (struct beiscsi_hba *)pci_get_drvdata(pcidev);
+	if (!phba) {
+		dev_err(&pcidev->dev, "beiscsi_remove called with no phba\n");
+		return;
+	}
+
+	hwi_disable_intr(phba);
+	if (phba->pcidev->irq)
+		free_irq(phba->pcidev->irq, phba);
+	destroy_workqueue(phba->wq);
+	if (blk_iopoll_enabled)
+		blk_iopoll_disable(&phba->iopoll);
+
+	beiscsi_clean_port(phba);
+	beiscsi_free_mem(phba);
+	beiscsi_unmap_pci_function(phba);
+	pci_free_consistent(phba->pcidev,
+			    phba->ctrl.mbox_mem_alloced.size,
+			    phba->ctrl.mbox_mem_alloced.va,
+			    phba->ctrl.mbox_mem_alloced.dma);
+	iscsi_host_remove(phba->shost);
+	pci_dev_put(phba->pcidev);
+	iscsi_host_free(phba->shost);
+}
+
+static int __devinit beiscsi_dev_probe(struct
pci_dev *pcidev, + const struct pci_device_id *id) +{ + struct beiscsi_hba *phba = NULL; + int ret; + + ret = beiscsi_enable_pci(pcidev); + if (ret < 0) { + shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-" + "Failed to enable pci device \n"); + return ret; + } + + phba = beiscsi_hba_alloc(pcidev); + if (!phba) { + dev_err(&pcidev->dev, "beiscsi_dev_probe-" + " Failed in beiscsi_hba_alloc \n"); + goto disable_pci; + } + + pci_set_drvdata(pcidev, phba); + ret = be_ctrl_init(phba, pcidev); + if (ret) { + shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-" + "Failed in be_ctrl_init\n"); + goto hba_free; + } + + spin_lock_init(&phba->io_sgl_lock); + spin_lock_init(&phba->mgmt_sgl_lock); + spin_lock_init(&phba->isr_lock); + beiscsi_get_params(phba); + ret = beiscsi_init_port(phba); + if (ret < 0) { + shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-" + "Failed in beiscsi_init_port\n"); + goto free_port; + } + + snprintf(phba->wq_name, sizeof(phba->wq_name), "beiscsi_q_irq%u", + phba->shost->host_no); + phba->wq = create_singlethread_workqueue(phba->wq_name); + if (!phba->wq) { + shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-" + "Failed to allocate work queue\n"); + goto free_twq; + } + + INIT_WORK(&phba->work_cqs, beiscsi_process_all_cqs); + + if (blk_iopoll_enabled) { + blk_iopoll_init(&phba->iopoll, be_iopoll_budget, be_iopoll); + blk_iopoll_enable(&phba->iopoll); + } + + ret = beiscsi_init_irqs(phba); + if (ret < 0) { + shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-" + "Failed to beiscsi_init_irqs\n"); + goto free_blkenbld; + } + ret = hwi_enable_intr(phba); + if (ret < 0) { + shost_printk(KERN_ERR, phba->shost, "beiscsi_dev_probe-" + "Failed to hwi_enable_intr\n"); + goto free_ctrlr; + } + + SE_DEBUG(DBG_LVL_8, "\n\n\n SUCCESS - DRIVER LOADED \n\n\n"); + return 0; + +free_ctrlr: + if (phba->pcidev->irq) + free_irq(phba->pcidev->irq, phba); +free_blkenbld: + destroy_workqueue(phba->wq); + if (blk_iopoll_enabled) + blk_iopoll_disable(&phba->iopoll); +free_twq: + beiscsi_clean_port(phba); + beiscsi_free_mem(phba); +free_port: + pci_free_consistent(phba->pcidev, + phba->ctrl.mbox_mem_alloced.size, + phba->ctrl.mbox_mem_alloced.va, + phba->ctrl.mbox_mem_alloced.dma); + beiscsi_unmap_pci_function(phba); +hba_free: + iscsi_host_remove(phba->shost); + pci_dev_put(phba->pcidev); + iscsi_host_free(phba->shost); +disable_pci: + pci_disable_device(pcidev); + return ret; +} + +struct iscsi_transport beiscsi_iscsi_transport = { + .owner = THIS_MODULE, + .name = DRV_NAME, + .caps = CAP_RECOVERY_L0 | CAP_HDRDGST | + CAP_MULTI_R2T | CAP_DATADGST | CAP_DATA_PATH_OFFLOAD, + .param_mask = ISCSI_MAX_RECV_DLENGTH | + ISCSI_MAX_XMIT_DLENGTH | + ISCSI_HDRDGST_EN | + ISCSI_DATADGST_EN | + ISCSI_INITIAL_R2T_EN | + ISCSI_MAX_R2T | + ISCSI_IMM_DATA_EN | + ISCSI_FIRST_BURST | + ISCSI_MAX_BURST | + ISCSI_PDU_INORDER_EN | + ISCSI_DATASEQ_INORDER_EN | + ISCSI_ERL | + ISCSI_CONN_PORT | + ISCSI_CONN_ADDRESS | + ISCSI_EXP_STATSN | + ISCSI_PERSISTENT_PORT | + ISCSI_PERSISTENT_ADDRESS | + ISCSI_TARGET_NAME | ISCSI_TPGT | + ISCSI_USERNAME | ISCSI_PASSWORD | + ISCSI_USERNAME_IN | ISCSI_PASSWORD_IN | + ISCSI_FAST_ABORT | ISCSI_ABORT_TMO | + ISCSI_LU_RESET_TMO | + ISCSI_PING_TMO | ISCSI_RECV_TMO | + ISCSI_IFACE_NAME | ISCSI_INITIATOR_NAME, + .host_param_mask = ISCSI_HOST_HWADDRESS | ISCSI_HOST_IPADDRESS | + ISCSI_HOST_INITIATOR_NAME, + .create_session = beiscsi_session_create, + .destroy_session = beiscsi_session_destroy, + .create_conn = beiscsi_conn_create, + .bind_conn = beiscsi_conn_bind, + 
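+	/*
+	 * Teardown and generic session parameter handling are left to
+	 * libiscsi; only connection setup and the offloaded data path
+	 * need be2iscsi-specific hooks.
+	 */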
.destroy_conn = iscsi_conn_teardown,
+	.set_param = beiscsi_set_param,
+	.get_conn_param = beiscsi_conn_get_param,
+	.get_session_param = iscsi_session_get_param,
+	.get_host_param = beiscsi_get_host_param,
+	.start_conn = beiscsi_conn_start,
+	.stop_conn = beiscsi_conn_stop,
+	.send_pdu = iscsi_conn_send_pdu,
+	.xmit_task = beiscsi_task_xmit,
+	.cleanup_task = beiscsi_cleanup_task,
+	.alloc_pdu = beiscsi_alloc_pdu,
+	.parse_pdu_itt = beiscsi_parse_pdu,
+	.get_stats = beiscsi_conn_get_stats,
+	.ep_connect = beiscsi_ep_connect,
+	.ep_poll = beiscsi_ep_poll,
+	.ep_disconnect = beiscsi_ep_disconnect,
+	.session_recovery_timedout = iscsi_session_recovery_timedout,
+};
+
+static struct pci_driver beiscsi_pci_driver = {
+	.name = DRV_NAME,
+	.probe = beiscsi_dev_probe,
+	.remove = beiscsi_remove,
+	.id_table = beiscsi_pci_id_table
+};
+
+static int __init beiscsi_module_init(void)
+{
+	int ret;
+
+	beiscsi_scsi_transport =
+		iscsi_register_transport(&beiscsi_iscsi_transport);
+	if (!beiscsi_scsi_transport) {
+		SE_DEBUG(DBG_LVL_1,
+			 "beiscsi_module_init - Unable to register beiscsi "
+			 "transport.\n");
+		return -ENOMEM;
+	}
+	SE_DEBUG(DBG_LVL_8, "In beiscsi_module_init, tt=%p\n",
+		 &beiscsi_iscsi_transport);
+
+	ret = pci_register_driver(&beiscsi_pci_driver);
+	if (ret) {
+		SE_DEBUG(DBG_LVL_1,
+			 "beiscsi_module_init - Unable to register "
+			 "beiscsi pci driver.\n");
+		goto unregister_iscsi_transport;
+	}
+	return 0;
+
+unregister_iscsi_transport:
+	iscsi_unregister_transport(&beiscsi_iscsi_transport);
+	return ret;
+}
+
+static void __exit beiscsi_module_exit(void)
+{
+	pci_unregister_driver(&beiscsi_pci_driver);
+	iscsi_unregister_transport(&beiscsi_iscsi_transport);
+}
+
+module_init(beiscsi_module_init);
+module_exit(beiscsi_module_exit);
diff --git a/drivers/scsi/be2iscsi/be_main.h b/drivers/scsi/be2iscsi/be_main.h
new file mode 100644
index 000000000000..2520c39c594d
--- /dev/null
+++ b/drivers/scsi/be2iscsi/be_main.h
@@ -0,0 +1,833 @@
+/**
+ * Copyright (C) 2005 - 2009 ServerEngines
+ * All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation. The full GNU General
+ * Public License is included in this distribution in the file called COPYING.
+ *
+ * Written by: Jayamohan Kallickal (jayamohank@serverengines.com)
+ *
+ * Contact Information:
+ * linux-drivers@serverengines.com
+ *
+ * ServerEngines
+ * 209 N.
Fair Oaks Ave
+ * Sunnyvale, CA 94085
+ *
+ */
+
+#ifndef _BEISCSI_MAIN_
+#define _BEISCSI_MAIN_
+
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/in.h>
+#include <linux/blk-iopoll.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/iscsi_proto.h>
+#include <scsi/libiscsi.h>
+#include <scsi/scsi_transport_iscsi.h>
+
+#include "be.h"
+
+
+
+#define DRV_NAME		"be2iscsi"
+#define BUILD_STR		"2.0.527.0"
+
+#define BE_NAME			"ServerEngines BladeEngine2 " \
+				"Linux iSCSI Driver version " BUILD_STR
+#define DRV_DESC		BE_NAME " " "Driver"
+
+#define BE_VENDOR_ID		0x19A2
+#define BE_DEVICE_ID1		0x212
+#define OC_DEVICE_ID1		0x702
+#define OC_DEVICE_ID2		0x703
+
+#define BE2_MAX_SESSIONS	64
+#define BE2_CMDS_PER_CXN	128
+#define BE2_LOGOUTS		BE2_MAX_SESSIONS
+#define BE2_TMFS		16
+#define BE2_NOPOUT_REQ		16
+#define BE2_ASYNCPDUS		BE2_MAX_SESSIONS
+#define BE2_MAX_ICDS		2048
+#define BE2_SGE			32
+#define BE2_DEFPDU_HDR_SZ	64
+#define BE2_DEFPDU_DATA_SZ	8192
+#define BE2_IO_DEPTH \
+	(BE2_MAX_ICDS / 2 - (BE2_LOGOUTS + BE2_TMFS + BE2_NOPOUT_REQ))
+
+#define BEISCSI_SGLIST_ELEMENTS	BE2_SGE
+
+#define BEISCSI_MAX_CMNDS	1024	/* Max IO's per Ctrlr sht->can_queue */
+#define BEISCSI_CMD_PER_LUN	128	/* scsi_host->cmd_per_lun */
+#define BEISCSI_MAX_SECTORS	2048	/* scsi_host->max_sectors */
+
+#define BEISCSI_MAX_CMD_LEN	16	/* scsi_host->max_cmd_len */
+#define BEISCSI_NUM_MAX_LUN	256	/* scsi_host->max_lun */
+#define BEISCSI_NUM_DEVICES_SUPPORTED	0x01
+#define BEISCSI_MAX_FRAGS_INIT	192
+#define BE_NUM_MSIX_ENTRIES	1
+#define MPU_EP_SEMAPHORE	0xac
+
+#define BE_SENSE_INFO_SIZE	258
+#define BE_ISCSI_PDU_HEADER_SIZE	64
+#define BE_MIN_MEM_SIZE		16384
+
+#define IIOC_SCSI_DATA		0x05	/* Write Operation */
+
+#define DBG_LVL			0x00000001
+#define DBG_LVL_1		0x00000001
+#define DBG_LVL_2		0x00000002
+#define DBG_LVL_3		0x00000004
+#define DBG_LVL_4		0x00000008
+#define DBG_LVL_5		0x00000010
+#define DBG_LVL_6		0x00000020
+#define DBG_LVL_7		0x00000040
+#define DBG_LVL_8		0x00000080
+
+#define SE_DEBUG(debug_mask, fmt, args...)		\
+do {							\
+	if (debug_mask & DBG_LVL) {			\
+		printk(KERN_ERR "(%s():%d):", __func__, __LINE__);\
+		printk(fmt, ##args);			\
+	}						\
+} while (0);
+
+/**
+ * hardware needs the async PDU buffers to be posted in multiples of 8,
+ * so have at least 8 of them by default
+ */
+
+#define HWI_GET_ASYNC_PDU_CTX(phwi)	(phwi->phwi_ctxt->pasync_ctx)
+
+/********* Memory BAR register ************/
+#define PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET	0xfc
+/**
+ * Host Interrupt Enable, if set interrupts are enabled although "PCI Interrupt
+ * Disable" may still globally block interrupts in addition to individual
+ * interrupt masks; a mechanism for the device driver to block all interrupts
+ * atomically without having to arbitrate for the PCI Interrupt Disable bit
+ * with the OS.
+ */
+#define MEMBAR_CTRL_INT_CTRL_HOSTINTR_MASK	(1 << 29)	/* bit 29 */
+
+/********* ISR0 Register offset **********/
+#define CEV_ISR0_OFFSET		0xC18
+#define CEV_ISR_SIZE		4
+
+/**
+ * Macros for reading/writing a protection domain or CSR registers
+ * in BladeEngine.
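+ * Each ring is driven through a doorbell register in the doorbell BAR:
+ * one 32-bit write encodes the ring id, the number of entries the host
+ * has produced or consumed and, for event and completion queues, the
+ * rearm bit (see the masks and shifts below).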
+ */ + +#define DB_TXULP0_OFFSET 0x40 +#define DB_RXULP0_OFFSET 0xA0 +/********* Event Q door bell *************/ +#define DB_EQ_OFFSET DB_CQ_OFFSET +#define DB_EQ_RING_ID_MASK 0x1FF /* bits 0 - 8 */ +/* Clear the interrupt for this eq */ +#define DB_EQ_CLR_SHIFT (9) /* bit 9 */ +/* Must be 1 */ +#define DB_EQ_EVNT_SHIFT (10) /* bit 10 */ +/* Number of event entries processed */ +#define DB_EQ_NUM_POPPED_SHIFT (16) /* bits 16 - 28 */ +/* Rearm bit */ +#define DB_EQ_REARM_SHIFT (29) /* bit 29 */ + +/********* Compl Q door bell *************/ +#define DB_CQ_OFFSET 0x120 +#define DB_CQ_RING_ID_MASK 0x3FF /* bits 0 - 9 */ +/* Number of event entries processed */ +#define DB_CQ_NUM_POPPED_SHIFT (16) /* bits 16 - 28 */ +/* Rearm bit */ +#define DB_CQ_REARM_SHIFT (29) /* bit 29 */ + +#define GET_HWI_CONTROLLER_WS(pc) (pc->phwi_ctrlr) +#define HWI_GET_DEF_BUFQ_ID(pc) (((struct hwi_controller *)\ + (GET_HWI_CONTROLLER_WS(pc)))->default_pdu_data.id) +#define HWI_GET_DEF_HDRQ_ID(pc) (((struct hwi_controller *)\ + (GET_HWI_CONTROLLER_WS(pc)))->default_pdu_hdr.id) + +#define PAGES_REQUIRED(x) \ + ((x < PAGE_SIZE) ? 1 : ((x + PAGE_SIZE - 1) / PAGE_SIZE)) + +enum be_mem_enum { + HWI_MEM_ADDN_CONTEXT, + HWI_MEM_CQ, + HWI_MEM_EQ, + HWI_MEM_WRB, + HWI_MEM_WRBH, + HWI_MEM_SGLH, /* 5 */ + HWI_MEM_SGE, + HWI_MEM_ASYNC_HEADER_BUF, + HWI_MEM_ASYNC_DATA_BUF, + HWI_MEM_ASYNC_HEADER_RING, + HWI_MEM_ASYNC_DATA_RING, /* 10 */ + HWI_MEM_ASYNC_HEADER_HANDLE, + HWI_MEM_ASYNC_DATA_HANDLE, + HWI_MEM_ASYNC_PDU_CONTEXT, + ISCSI_MEM_GLOBAL_HEADER, + SE_MEM_MAX /* 15 */ +}; + +struct be_bus_address32 { + unsigned int address_lo; + unsigned int address_hi; +}; + +struct be_bus_address64 { + unsigned long long address; +}; + +struct be_bus_address { + union { + struct be_bus_address32 a32; + struct be_bus_address64 a64; + } u; +}; + +struct mem_array { + struct be_bus_address bus_address; /* Bus address of location */ + void *virtual_address; /* virtual address to the location */ + unsigned int size; /* Size required by memory block */ +}; + +struct be_mem_descriptor { + unsigned int index; /* Index of this memory parameter */ + unsigned int category; /* type indicates cached/non-cached */ + unsigned int num_elements; /* number of elements in this + * descriptor + */ + unsigned int alignment_mask; /* Alignment mask for this block */ + unsigned int size_in_bytes; /* Size required by memory block */ + struct mem_array *mem_array; +}; + +struct sgl_handle { + unsigned int sgl_index; + struct iscsi_sge *pfrag; +}; + +struct hba_parameters { + unsigned int ios_per_ctrl; + unsigned int cxns_per_ctrl; + unsigned int asyncpdus_per_ctrl; + unsigned int icds_per_ctrl; + unsigned int num_sge_per_io; + unsigned int defpdu_hdr_sz; + unsigned int defpdu_data_sz; + unsigned int num_cq_entries; + unsigned int num_eq_entries; + unsigned int wrbs_per_cxn; + unsigned int crashmode; + unsigned int hba_num; + + unsigned int mgmt_ws_sz; + unsigned int hwi_ws_sz; + + unsigned int eto; + unsigned int ldto; + + unsigned int dbg_flags; + unsigned int num_cxn; + + unsigned int eq_timer; + /** + * These are calculated from other params. 
They're here
+	 * for debug purposes
+	 */
+	unsigned int num_mcc_pages;
+	unsigned int num_mcc_cq_pages;
+	unsigned int num_cq_pages;
+	unsigned int num_eq_pages;
+
+	unsigned int num_async_pdu_buf_pages;
+	unsigned int num_async_pdu_buf_sgl_pages;
+	unsigned int num_async_pdu_buf_cq_pages;
+
+	unsigned int num_async_pdu_hdr_pages;
+	unsigned int num_async_pdu_hdr_sgl_pages;
+	unsigned int num_async_pdu_hdr_cq_pages;
+
+	unsigned int num_sge;
+};
+
+struct beiscsi_hba {
+	struct hba_parameters params;
+	struct hwi_controller *phwi_ctrlr;
+	unsigned int mem_req[SE_MEM_MAX];
+	/* PCI BAR mapped addresses */
+	u8 __iomem *csr_va;	/* CSR */
+	u8 __iomem *db_va;	/* Door Bell */
+	u8 __iomem *pci_va;	/* PCI Config */
+	struct be_bus_address csr_pa;	/* CSR */
+	struct be_bus_address db_pa;	/* Door Bell */
+	struct be_bus_address pci_pa;	/* PCI Config */
+	/* PCI representation of our HBA */
+	struct pci_dev *pcidev;
+	unsigned int state;
+	unsigned short asic_revision;
+	struct blk_iopoll iopoll;
+	struct be_mem_descriptor *init_mem;
+
+	unsigned short io_sgl_alloc_index;
+	unsigned short io_sgl_free_index;
+	unsigned short io_sgl_hndl_avbl;
+	struct sgl_handle **io_sgl_hndl_base;
+
+	unsigned short eh_sgl_alloc_index;
+	unsigned short eh_sgl_free_index;
+	unsigned short eh_sgl_hndl_avbl;
+	struct sgl_handle **eh_sgl_hndl_base;
+	spinlock_t io_sgl_lock;
+	spinlock_t mgmt_sgl_lock;
+	spinlock_t isr_lock;
+	unsigned int age;
+	unsigned short avlbl_cids;
+	unsigned short cid_alloc;
+	unsigned short cid_free;
+	struct beiscsi_conn *conn_table[BE2_MAX_SESSIONS * 2];
+	struct list_head hba_queue;
+	unsigned short *cid_array;
+	struct iscsi_endpoint **ep_array;
+	struct Scsi_Host *shost;
+	struct {
+		/**
+		 * group together since they are used most frequently
+		 * for cid to cri conversion
+		 */
+		unsigned int iscsi_cid_start;
+		unsigned int phys_port;
+
+		unsigned int isr_offset;
+		unsigned int iscsi_icd_start;
+		unsigned int iscsi_cid_count;
+		unsigned int iscsi_icd_count;
+		unsigned int pci_function;
+
+		unsigned short cid_alloc;
+		unsigned short cid_free;
+		unsigned short avlbl_cids;
+		spinlock_t cid_lock;
+	} fw_config;
+
+	u8 mac_address[ETH_ALEN];
+	unsigned short todo_cq;
+	unsigned short todo_mcc_cq;
+	char wq_name[20];
+	struct workqueue_struct *wq;	/* The actual work queue */
+	struct work_struct work_cqs;	/* The work being queued */
+	struct be_ctrl_info ctrl;
+};
+
+/**
+ * struct beiscsi_conn - iscsi connection structure
+ */
+struct beiscsi_conn {
+	struct iscsi_conn *conn;
+	struct beiscsi_hba *phba;
+	u32 exp_statsn;
+	u32 beiscsi_conn_cid;
+	struct beiscsi_endpoint *ep;
+	unsigned short login_in_progress;
+	struct sgl_handle *plogin_sgl_handle;
+};
+
+/* This structure is used by the chip */
+struct pdu_data_out {
+	u32 dw[12];
+};
+/**
+ * Pseudo amap definition in which each bit of the actual structure is defined
+ * as a byte: used to calculate offset/shift/mask of each field
+ */
+struct amap_pdu_data_out {
+	u8 opcode[6];	/* opcode */
+	u8 rsvd0[2];	/* should be 0 */
+	u8 rsvd1[7];
+	u8 final_bit;	/* F bit */
+	u8 rsvd2[16];
+	u8 ahs_length[8];	/* no AHS */
+	u8 data_len_hi[8];
+	u8 data_len_lo[16];	/* DataSegmentLength */
+	u8 lun[64];
+	u8 itt[32];	/* ITT; initiator task tag */
+	u8 ttt[32];	/* TTT; valid for R2T or 0xffffffff */
+	u8 rsvd3[32];
+	u8 exp_stat_sn[32];
+	u8 rsvd4[32];
+	u8 data_sn[32];
+	u8 buffer_offset[32];
+	u8 rsvd5[32];
+};
+
+struct be_cmd_bhs {
+	struct iscsi_cmd iscsi_hdr;
+	unsigned char pad1[16];
+	struct pdu_data_out iscsi_data_pdu;
+	unsigned char
pad2[BE_SENSE_INFO_SIZE - + sizeof(struct pdu_data_out)]; +}; + +struct beiscsi_io_task { + struct wrb_handle *pwrb_handle; + struct sgl_handle *psgl_handle; + struct beiscsi_conn *conn; + struct scsi_cmnd *scsi_cmnd; + unsigned int cmd_sn; + unsigned int flags; + unsigned short cid; + unsigned short header_len; + + unsigned int alloc_size; + struct be_cmd_bhs *cmd_bhs; + struct be_bus_address bhs_pa; + unsigned short bhs_len; +}; + +struct be_nonio_bhs { + struct iscsi_hdr iscsi_hdr; + unsigned char pad1[16]; + struct pdu_data_out iscsi_data_pdu; + unsigned char pad2[BE_SENSE_INFO_SIZE - + sizeof(struct pdu_data_out)]; +}; + +struct be_status_bhs { + struct iscsi_cmd iscsi_hdr; + unsigned char pad1[16]; + /** + * The plus 2 below is to hold the sense info length that gets + * DMA'ed by RxULP + */ + unsigned char sense_info[BE_SENSE_INFO_SIZE]; +}; + +struct iscsi_sge { + u32 dw[4]; +}; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_iscsi_sge { + u8 addr_hi[32]; + u8 addr_lo[32]; + u8 sge_offset[22]; /* DWORD 2 */ + u8 rsvd0[9]; /* DWORD 2 */ + u8 last_sge; /* DWORD 2 */ + u8 len[17]; /* DWORD 3 */ + u8 rsvd1[15]; /* DWORD 3 */ +}; + +struct beiscsi_offload_params { + u32 dw[5]; +}; + +#define OFFLD_PARAMS_ERL 0x00000003 +#define OFFLD_PARAMS_DDE 0x00000004 +#define OFFLD_PARAMS_HDE 0x00000008 +#define OFFLD_PARAMS_IR2T 0x00000010 +#define OFFLD_PARAMS_IMD 0x00000020 + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_beiscsi_offload_params { + u8 max_burst_length[32]; + u8 max_send_data_segment_length[32]; + u8 first_burst_length[32]; + u8 erl[2]; + u8 dde[1]; + u8 hde[1]; + u8 ir2t[1]; + u8 imd[1]; + u8 pad[26]; + u8 exp_statsn[32]; +}; + +/* void hwi_complete_drvr_msgs(struct beiscsi_conn *beiscsi_conn, + struct beiscsi_hba *phba, struct sol_cqe *psol);*/ + +struct async_pdu_handle { + struct list_head link; + struct be_bus_address pa; + void *pbuffer; + unsigned int consumed; + unsigned char index; + unsigned char is_header; + unsigned short cri; + unsigned long buffer_len; +}; + +struct hwi_async_entry { + struct { + unsigned char hdr_received; + unsigned char hdr_len; + unsigned short bytes_received; + unsigned int bytes_needed; + struct list_head list; + } wait_queue; + + struct list_head header_busy_list; + struct list_head data_busy_list; +}; + +#define BE_MIN_ASYNC_ENTRIES 128 + +struct hwi_async_pdu_context { + struct { + struct be_bus_address pa_base; + void *va_base; + void *ring_base; + struct async_pdu_handle *handle_base; + + unsigned int host_write_ptr; + unsigned int ep_read_ptr; + unsigned int writables; + + unsigned int free_entries; + unsigned int busy_entries; + unsigned int buffer_size; + unsigned int num_entries; + + struct list_head free_list; + } async_header; + + struct { + struct be_bus_address pa_base; + void *va_base; + void *ring_base; + struct async_pdu_handle *handle_base; + + unsigned int host_write_ptr; + unsigned int ep_read_ptr; + unsigned int writables; + + unsigned int free_entries; + unsigned int busy_entries; + unsigned int buffer_size; + struct list_head free_list; + unsigned int num_entries; + } async_data; + + /** + * This is a varying size list! Do not add anything + * after this entry!! 
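+	 * (async_entry[] is declared with BE_MIN_ASYNC_ENTRIES slots, but
+	 * hwi_init_async_pdu_ctx() indexes it up to asyncpdus_per_ctrl,
+	 * so the allocation backing this context must be sized to provide
+	 * the extra room.)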
+ */ + struct hwi_async_entry async_entry[BE_MIN_ASYNC_ENTRIES]; +}; + +#define PDUCQE_CODE_MASK 0x0000003F +#define PDUCQE_DPL_MASK 0xFFFF0000 +#define PDUCQE_INDEX_MASK 0x0000FFFF + +struct i_t_dpdu_cqe { + u32 dw[4]; +} __packed; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_i_t_dpdu_cqe { + u8 db_addr_hi[32]; + u8 db_addr_lo[32]; + u8 code[6]; + u8 cid[10]; + u8 dpl[16]; + u8 index[16]; + u8 num_cons[10]; + u8 rsvd0[4]; + u8 final; + u8 valid; +} __packed; + +#define CQE_VALID_MASK 0x80000000 +#define CQE_CODE_MASK 0x0000003F +#define CQE_CID_MASK 0x0000FFC0 + +#define EQE_VALID_MASK 0x00000001 +#define EQE_MAJORCODE_MASK 0x0000000E +#define EQE_RESID_MASK 0xFFFF0000 + +struct be_eq_entry { + u32 dw[1]; +} __packed; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_eq_entry { + u8 valid; /* DWORD 0 */ + u8 major_code[3]; /* DWORD 0 */ + u8 minor_code[12]; /* DWORD 0 */ + u8 resource_id[16]; /* DWORD 0 */ + +} __packed; + +struct cq_db { + u32 dw[1]; +} __packed; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_cq_db { + u8 qid[10]; + u8 event[1]; + u8 rsvd0[5]; + u8 num_popped[13]; + u8 rearm[1]; + u8 rsvd1[2]; +} __packed; + +void beiscsi_process_eq(struct beiscsi_hba *phba); + + +struct iscsi_wrb { + u32 dw[16]; +} __packed; + +#define WRB_TYPE_MASK 0xF0000000 + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_iscsi_wrb { + u8 lun[14]; /* DWORD 0 */ + u8 lt; /* DWORD 0 */ + u8 invld; /* DWORD 0 */ + u8 wrb_idx[8]; /* DWORD 0 */ + u8 dsp; /* DWORD 0 */ + u8 dmsg; /* DWORD 0 */ + u8 undr_run; /* DWORD 0 */ + u8 over_run; /* DWORD 0 */ + u8 type[4]; /* DWORD 0 */ + u8 ptr2nextwrb[8]; /* DWORD 1 */ + u8 r2t_exp_dtl[24]; /* DWORD 1 */ + u8 sgl_icd_idx[12]; /* DWORD 2 */ + u8 rsvd0[20]; /* DWORD 2 */ + u8 exp_data_sn[32]; /* DWORD 3 */ + u8 iscsi_bhs_addr_hi[32]; /* DWORD 4 */ + u8 iscsi_bhs_addr_lo[32]; /* DWORD 5 */ + u8 cmdsn_itt[32]; /* DWORD 6 */ + u8 dif_ref_tag[32]; /* DWORD 7 */ + u8 sge0_addr_hi[32]; /* DWORD 8 */ + u8 sge0_addr_lo[32]; /* DWORD 9 */ + u8 sge0_offset[22]; /* DWORD 10 */ + u8 pbs; /* DWORD 10 */ + u8 dif_mode[2]; /* DWORD 10 */ + u8 rsvd1[6]; /* DWORD 10 */ + u8 sge0_last; /* DWORD 10 */ + u8 sge0_len[17]; /* DWORD 11 */ + u8 dif_meta_tag[14]; /* DWORD 11 */ + u8 sge0_in_ddr; /* DWORD 11 */ + u8 sge1_addr_hi[32]; /* DWORD 12 */ + u8 sge1_addr_lo[32]; /* DWORD 13 */ + u8 sge1_r2t_offset[22]; /* DWORD 14 */ + u8 rsvd2[9]; /* DWORD 14 */ + u8 sge1_last; /* DWORD 14 */ + u8 sge1_len[17]; /* DWORD 15 */ + u8 ref_sgl_icd_idx[12]; /* DWORD 15 */ + u8 rsvd3[2]; /* DWORD 15 */ + u8 sge1_in_ddr; /* DWORD 15 */ + +} __packed; + +struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid, + int index); +void +free_mgmt_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle); + +struct pdu_nop_out { + u32 dw[12]; +}; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_pdu_nop_out { + u8 opcode[6]; /* opcode 0x00 */ + u8 i_bit; /* I Bit */ + u8 x_bit; /* 
reserved; should be 0 */ + u8 fp_bit_filler1[7]; + u8 f_bit; /* always 1 */ + u8 reserved1[16]; + u8 ahs_length[8]; /* no AHS */ + u8 data_len_hi[8]; + u8 data_len_lo[16]; /* DataSegmentLength */ + u8 lun[64]; + u8 itt[32]; /* initiator id for ping or 0xffffffff */ + u8 ttt[32]; /* target id for ping or 0xffffffff */ + u8 cmd_sn[32]; + u8 exp_stat_sn[32]; + u8 reserved5[128]; +}; + +#define PDUBASE_OPCODE_MASK 0x0000003F +#define PDUBASE_DATALENHI_MASK 0x0000FF00 +#define PDUBASE_DATALENLO_MASK 0xFFFF0000 + +struct pdu_base { + u32 dw[16]; +} __packed; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_pdu_base { + u8 opcode[6]; + u8 i_bit; /* immediate bit */ + u8 x_bit; /* reserved, always 0 */ + u8 reserved1[24]; /* opcode-specific fields */ + u8 ahs_length[8]; /* length units is 4 byte words */ + u8 data_len_hi[8]; + u8 data_len_lo[16]; /* DatasegmentLength */ + u8 lun[64]; /* lun or opcode-specific fields */ + u8 itt[32]; /* initiator task tag */ + u8 reserved4[224]; +}; + +struct iscsi_target_context_update_wrb { + u32 dw[16]; +} __packed; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_iscsi_target_context_update_wrb { + u8 lun[14]; /* DWORD 0 */ + u8 lt; /* DWORD 0 */ + u8 invld; /* DWORD 0 */ + u8 wrb_idx[8]; /* DWORD 0 */ + u8 dsp; /* DWORD 0 */ + u8 dmsg; /* DWORD 0 */ + u8 undr_run; /* DWORD 0 */ + u8 over_run; /* DWORD 0 */ + u8 type[4]; /* DWORD 0 */ + u8 ptr2nextwrb[8]; /* DWORD 1 */ + u8 max_burst_length[19]; /* DWORD 1 */ + u8 rsvd0[5]; /* DWORD 1 */ + u8 rsvd1[15]; /* DWORD 2 */ + u8 max_send_data_segment_length[17]; /* DWORD 2 */ + u8 first_burst_length[14]; /* DWORD 3 */ + u8 rsvd2[2]; /* DWORD 3 */ + u8 tx_wrbindex_drv_msg[8]; /* DWORD 3 */ + u8 rsvd3[5]; /* DWORD 3 */ + u8 session_state[3]; /* DWORD 3 */ + u8 rsvd4[16]; /* DWORD 4 */ + u8 tx_jumbo; /* DWORD 4 */ + u8 hde; /* DWORD 4 */ + u8 dde; /* DWORD 4 */ + u8 erl[2]; /* DWORD 4 */ + u8 domain_id[5]; /* DWORD 4 */ + u8 mode; /* DWORD 4 */ + u8 imd; /* DWORD 4 */ + u8 ir2t; /* DWORD 4 */ + u8 notpredblq[2]; /* DWORD 4 */ + u8 compltonack; /* DWORD 4 */ + u8 stat_sn[32]; /* DWORD 5 */ + u8 pad_buffer_addr_hi[32]; /* DWORD 6 */ + u8 pad_buffer_addr_lo[32]; /* DWORD 7 */ + u8 pad_addr_hi[32]; /* DWORD 8 */ + u8 pad_addr_lo[32]; /* DWORD 9 */ + u8 rsvd5[32]; /* DWORD 10 */ + u8 rsvd6[32]; /* DWORD 11 */ + u8 rsvd7[32]; /* DWORD 12 */ + u8 rsvd8[32]; /* DWORD 13 */ + u8 rsvd9[32]; /* DWORD 14 */ + u8 rsvd10[32]; /* DWORD 15 */ + +} __packed; + +struct be_ring { + u32 pages; /* queue size in pages */ + u32 id; /* queue id assigned by beklib */ + u32 num; /* number of elements in queue */ + u32 cidx; /* consumer index */ + u32 pidx; /* producer index -- not used by most rings */ + u32 item_size; /* size in bytes of one object */ + + void *va; /* The virtual address of the ring. This + * should be last to allow 32 & 64 bit debugger + * extensions to work. 
+ */ +}; + +struct hwi_wrb_context { + struct list_head wrb_handle_list; + struct list_head wrb_handle_drvr_list; + struct wrb_handle **pwrb_handle_base; + struct wrb_handle **pwrb_handle_basestd; + struct iscsi_wrb *plast_wrb; + unsigned short alloc_index; + unsigned short free_index; + unsigned short wrb_handles_available; + unsigned short cid; +}; + +struct hwi_controller { + struct list_head io_sgl_list; + struct list_head eh_sgl_list; + struct sgl_handle *psgl_handle_base; + unsigned int wrb_mem_index; + + struct hwi_wrb_context wrb_context[BE2_MAX_SESSIONS * 2]; + struct mcc_wrb *pmcc_wrb_base; + struct be_ring default_pdu_hdr; + struct be_ring default_pdu_data; + struct hwi_context_memory *phwi_ctxt; + unsigned short cq_errors[CXN_KILLED_CMND_DATA_NOT_ON_SAME_CONN]; +}; + +enum hwh_type_enum { + HWH_TYPE_IO = 1, + HWH_TYPE_LOGOUT = 2, + HWH_TYPE_TMF = 3, + HWH_TYPE_NOP = 4, + HWH_TYPE_IO_RD = 5, + HWH_TYPE_LOGIN = 11, + HWH_TYPE_INVALID = 0xFFFFFFFF +}; + +struct wrb_handle { + enum hwh_type_enum type; + unsigned short wrb_index; + unsigned short nxt_wrb_index; + + struct iscsi_task *pio_handle; + struct iscsi_wrb *pwrb; +}; + +struct hwi_context_memory { + struct be_eq_obj be_eq; + struct be_queue_info be_cq; + struct be_queue_info be_mcc_cq; + struct be_queue_info be_mcc; + + struct be_queue_info be_def_hdrq; + struct be_queue_info be_def_dataq; + + struct be_queue_info be_wrbq[BE2_MAX_SESSIONS]; + struct be_mcc_wrb_context *pbe_mcc_context; + + struct hwi_async_pdu_context *pasync_ctx; +}; + +#endif diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c new file mode 100644 index 000000000000..12e644fc746e --- /dev/null +++ b/drivers/scsi/be2iscsi/be_mgmt.c @@ -0,0 +1,321 @@ +/** + * Copyright (C) 2005 - 2009 ServerEngines + * All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. The full GNU General + * Public License is included in this distribution in the file called COPYING. + * + * Written by: Jayamohan Kallickal (jayamohank@serverengines.com) + * + * Contact Information: + * linux-drivers@serverengines.com + * + * ServerEngines + * 209 N. 
Fair Oaks Ave + * Sunnyvale, CA 94085 + * + */ + +#include "be_mgmt.h" +#include "be_iscsi.h" + +unsigned char mgmt_get_fw_config(struct be_ctrl_info *ctrl, + struct beiscsi_hba *phba) +{ + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct be_fw_cfg *req = embedded_payload(wrb); + int status = 0; + + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + + be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); + + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON, + OPCODE_COMMON_QUERY_FIRMWARE_CONFIG, sizeof(*req)); + + status = be_mbox_notify(ctrl); + if (!status) { + struct be_fw_cfg *pfw_cfg; + pfw_cfg = req; + phba->fw_config.phys_port = pfw_cfg->phys_port; + phba->fw_config.iscsi_icd_start = + pfw_cfg->ulp[0].icd_base; + phba->fw_config.iscsi_icd_count = + pfw_cfg->ulp[0].icd_count; + phba->fw_config.iscsi_cid_start = + pfw_cfg->ulp[0].sq_base; + phba->fw_config.iscsi_cid_count = + pfw_cfg->ulp[0].sq_count; + } else { + shost_printk(KERN_WARNING, phba->shost, + "Failed in mgmt_get_fw_config \n"); + } + + spin_unlock(&ctrl->mbox_lock); + return status; +} + +unsigned char mgmt_check_supported_fw(struct be_ctrl_info *ctrl) +{ + struct be_dma_mem nonemb_cmd; + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct be_mgmt_controller_attributes *req; + struct be_sge *sge = nonembedded_sgl(wrb); + int status = 0; + + nonemb_cmd.va = pci_alloc_consistent(ctrl->pdev, + sizeof(struct be_mgmt_controller_attributes), + &nonemb_cmd.dma); + if (nonemb_cmd.va == NULL) { + SE_DEBUG(DBG_LVL_1, + "Failed to allocate memory for mgmt_check_supported_fw" + "\n"); + return -1; + } + nonemb_cmd.size = sizeof(struct be_mgmt_controller_attributes); + req = nonemb_cmd.va; + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + be_wrb_hdr_prepare(wrb, sizeof(*req), false, 1); + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON, + OPCODE_COMMON_GET_CNTL_ATTRIBUTES, sizeof(*req)); + sge->pa_hi = cpu_to_le32(upper_32_bits(nonemb_cmd.dma)); + sge->pa_lo = cpu_to_le32(nonemb_cmd.dma & 0xFFFFFFFF); + sge->len = cpu_to_le32(nonemb_cmd.size); + + status = be_mbox_notify(ctrl); + if (!status) { + struct be_mgmt_controller_attributes_resp *resp = nonemb_cmd.va; + SE_DEBUG(DBG_LVL_8, "Firmware version of CMD: %s\n", + resp->params.hba_attribs.flashrom_version_string); + SE_DEBUG(DBG_LVL_8, "Firmware version is : %s\n", + resp->params.hba_attribs.firmware_version_string); + SE_DEBUG(DBG_LVL_8, + "Developer Build, not performing version check...\n"); + + } else + SE_DEBUG(DBG_LVL_1, " Failed in mgmt_check_supported_fw\n"); + if (nonemb_cmd.va) + pci_free_consistent(ctrl->pdev, nonemb_cmd.size, + nonemb_cmd.va, nonemb_cmd.dma); + + spin_unlock(&ctrl->mbox_lock); + return status; +} + +unsigned char mgmt_epfw_cleanup(struct beiscsi_hba *phba, unsigned short chute) +{ + struct be_ctrl_info *ctrl = &phba->ctrl; + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct iscsi_cleanup_req *req = embedded_payload(wrb); + int status = 0; + + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + + be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI, + OPCODE_COMMON_ISCSI_CLEANUP, sizeof(*req)); + + req->chute = chute; + req->hdr_ring_id = 0; + req->data_ring_id = 0; + + status = be_mbox_notify(ctrl); + if (status) + shost_printk(KERN_WARNING, phba->shost, + " mgmt_epfw_cleanup , FAILED\n"); + spin_unlock(&ctrl->mbox_lock); + return status; +} + +unsigned char mgmt_invalidate_icds(struct beiscsi_hba *phba, + unsigned int icd, 
unsigned int cid)
+{
+ struct be_dma_mem nonemb_cmd;
+ struct be_ctrl_info *ctrl = &phba->ctrl;
+ struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
+ struct be_sge *sge = nonembedded_sgl(wrb);
+ struct invalidate_commands_params_in *req;
+ int status = 0;
+
+ nonemb_cmd.va = pci_alloc_consistent(ctrl->pdev,
+ sizeof(struct invalidate_commands_params_in),
+ &nonemb_cmd.dma);
+ if (nonemb_cmd.va == NULL) {
+ SE_DEBUG(DBG_LVL_1,
+ "Failed to allocate memory for "
+ "mgmt_invalidate_icds\n");
+ return -1;
+ }
+ nonemb_cmd.size = sizeof(struct invalidate_commands_params_in);
+ req = nonemb_cmd.va;
+ spin_lock(&ctrl->mbox_lock);
+ memset(wrb, 0, sizeof(*wrb));
+
+ be_wrb_hdr_prepare(wrb, sizeof(*req), false, 1);
+ be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI,
+ OPCODE_COMMON_ISCSI_ERROR_RECOVERY_INVALIDATE_COMMANDS,
+ sizeof(*req));
+ req->ref_handle = 0;
+ req->cleanup_type = CMD_ISCSI_COMMAND_INVALIDATE;
+ req->icd_count = 0;
+ req->table[req->icd_count].icd = icd;
+ req->table[req->icd_count].cid = cid;
+ sge->pa_hi = cpu_to_le32(upper_32_bits(nonemb_cmd.dma));
+ sge->pa_lo = cpu_to_le32(nonemb_cmd.dma & 0xFFFFFFFF);
+ sge->len = cpu_to_le32(nonemb_cmd.size);
+
+ status = be_mbox_notify(ctrl);
+ if (status)
+ SE_DEBUG(DBG_LVL_1, "ICDS Invalidation Failed\n");
+ spin_unlock(&ctrl->mbox_lock);
+ if (nonemb_cmd.va)
+ pci_free_consistent(ctrl->pdev, nonemb_cmd.size,
+ nonemb_cmd.va, nonemb_cmd.dma);
+ return status;
+}
+
+unsigned char mgmt_invalidate_connection(struct beiscsi_hba *phba,
+ struct beiscsi_endpoint *beiscsi_ep,
+ unsigned short cid,
+ unsigned short issue_reset,
+ unsigned short savecfg_flag)
+{
+ struct be_ctrl_info *ctrl = &phba->ctrl;
+ struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
+ struct iscsi_invalidate_connection_params_in *req =
+ embedded_payload(wrb);
+ int status = 0;
+
+ spin_lock(&ctrl->mbox_lock);
+ memset(wrb, 0, sizeof(*wrb));
+
+ be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
+ be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI_INI,
+ OPCODE_ISCSI_INI_DRIVER_INVALIDATE_CONNECTION,
+ sizeof(*req));
+ req->session_handle = beiscsi_ep->fw_handle;
+ req->cid = cid;
+ if (issue_reset)
+ req->cleanup_type = CMD_ISCSI_CONNECTION_ISSUE_TCP_RST;
+ else
+ req->cleanup_type = CMD_ISCSI_CONNECTION_INVALIDATE;
+ req->save_cfg = savecfg_flag;
+ status = be_mbox_notify(ctrl);
+ if (status)
+ SE_DEBUG(DBG_LVL_1, "Invalidation Failed\n");
+
+ spin_unlock(&ctrl->mbox_lock);
+ return status;
+}
+
+unsigned char mgmt_upload_connection(struct beiscsi_hba *phba,
+ unsigned short cid, unsigned int upload_flag)
+{
+ struct be_ctrl_info *ctrl = &phba->ctrl;
+ struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem);
+ struct tcp_upload_params_in *req = embedded_payload(wrb);
+ int status = 0;
+
+ spin_lock(&ctrl->mbox_lock);
+ memset(wrb, 0, sizeof(*wrb));
+
+ be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0);
+ be_cmd_hdr_prepare(&req->hdr, CMD_COMMON_TCP_UPLOAD,
+ OPCODE_COMMON_TCP_UPLOAD, sizeof(*req));
+ req->id = (unsigned short)cid;
+ req->upload_type = (unsigned char)upload_flag;
+ status = be_mbox_notify(ctrl);
+ if (status)
+ SE_DEBUG(DBG_LVL_1, "mgmt_upload_connection Failed\n");
+ spin_unlock(&ctrl->mbox_lock);
+ return status;
+}
+
+int mgmt_open_connection(struct beiscsi_hba *phba,
+ struct sockaddr *dst_addr,
+ struct beiscsi_endpoint *beiscsi_ep)
+{
+ struct hwi_controller *phwi_ctrlr;
+ struct hwi_context_memory *phwi_context;
+ struct sockaddr_in *daddr_in = (struct sockaddr_in *)dst_addr;
+ struct sockaddr_in6 *daddr_in6 = (struct sockaddr_in6 
*)dst_addr; + struct be_ctrl_info *ctrl = &phba->ctrl; + struct be_mcc_wrb *wrb = wrb_from_mbox(&ctrl->mbox_mem); + struct tcp_connect_and_offload_in *req = embedded_payload(wrb); + unsigned short def_hdr_id; + unsigned short def_data_id; + struct phys_addr template_address = { 0, 0 }; + struct phys_addr *ptemplate_address; + int status = 0; + unsigned short cid = beiscsi_ep->ep_cid; + + phwi_ctrlr = phba->phwi_ctrlr; + phwi_context = phwi_ctrlr->phwi_ctxt; + def_hdr_id = (unsigned short)HWI_GET_DEF_HDRQ_ID(phba); + def_data_id = (unsigned short)HWI_GET_DEF_BUFQ_ID(phba); + + ptemplate_address = &template_address; + ISCSI_GET_PDU_TEMPLATE_ADDRESS(phba, ptemplate_address); + spin_lock(&ctrl->mbox_lock); + memset(wrb, 0, sizeof(*wrb)); + + be_wrb_hdr_prepare(wrb, sizeof(*req), true, 0); + be_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_ISCSI, + OPCODE_COMMON_ISCSI_TCP_CONNECT_AND_OFFLOAD, + sizeof(*req)); + if (dst_addr->sa_family == PF_INET) { + __be32 s_addr = daddr_in->sin_addr.s_addr; + req->ip_address.ip_type = BE2_IPV4; + req->ip_address.ip_address[0] = s_addr & 0x000000ff; + req->ip_address.ip_address[1] = (s_addr & 0x0000ff00) >> 8; + req->ip_address.ip_address[2] = (s_addr & 0x00ff0000) >> 16; + req->ip_address.ip_address[3] = (s_addr & 0xff000000) >> 24; + req->tcp_port = ntohs(daddr_in->sin_port); + beiscsi_ep->dst_addr = daddr_in->sin_addr.s_addr; + beiscsi_ep->dst_tcpport = ntohs(daddr_in->sin_port); + beiscsi_ep->ip_type = BE2_IPV4; + } else if (dst_addr->sa_family == PF_INET6) { + req->ip_address.ip_type = BE2_IPV6; + memcpy(&req->ip_address.ip_address, + &daddr_in6->sin6_addr.in6_u.u6_addr8, 16); + req->tcp_port = ntohs(daddr_in6->sin6_port); + beiscsi_ep->dst_tcpport = ntohs(daddr_in6->sin6_port); + memcpy(&beiscsi_ep->dst6_addr, + &daddr_in6->sin6_addr.in6_u.u6_addr8, 16); + beiscsi_ep->ip_type = BE2_IPV6; + } else{ + shost_printk(KERN_ERR, phba->shost, "unknown addr family %d\n", + dst_addr->sa_family); + spin_unlock(&ctrl->mbox_lock); + return -EINVAL; + + } + req->cid = cid; + req->cq_id = phwi_context->be_cq.id; + req->defq_id = def_hdr_id; + req->hdr_ring_id = def_hdr_id; + req->data_ring_id = def_data_id; + req->do_offload = 1; + req->dataout_template_pa.lo = ptemplate_address->lo; + req->dataout_template_pa.hi = ptemplate_address->hi; + status = be_mbox_notify(ctrl); + if (!status) { + struct iscsi_endpoint *ep; + struct tcp_connect_and_offload_out *ptcpcnct_out = + embedded_payload(wrb); + + ep = phba->ep_array[ptcpcnct_out->cid]; + beiscsi_ep = ep->dd_data; + beiscsi_ep->fw_handle = 0; + beiscsi_ep->cid_vld = 1; + SE_DEBUG(DBG_LVL_8, "mgmt_open_connection Success\n"); + } else + SE_DEBUG(DBG_LVL_1, "mgmt_open_connection Failed\n"); + spin_unlock(&ctrl->mbox_lock); + return status; +} diff --git a/drivers/scsi/be2iscsi/be_mgmt.h b/drivers/scsi/be2iscsi/be_mgmt.h new file mode 100644 index 000000000000..00e816ee8070 --- /dev/null +++ b/drivers/scsi/be2iscsi/be_mgmt.h @@ -0,0 +1,249 @@ +/** + * Copyright (C) 2005 - 2009 ServerEngines + * All rights reserved. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version 2 + * as published by the Free Software Foundation. The full GNU General + * Public License is included in this distribution in the file called COPYING. + * + * Written by: Jayamohan Kallickal (jayamohank@serverengines.com) + * + * Contact Information: + * linux-drivers@serverengines.com + * + * ServerEngines + * 209 N. 
Fair Oaks Ave + * Sunnyvale, CA 94085 + * + */ + +#ifndef _BEISCSI_MGMT_ +#define _BEISCSI_MGMT_ + +#include +#include +#include "be_iscsi.h" +#include "be_main.h" + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_mcc_sge { + u8 pa_lo[32]; /* dword 0 */ + u8 pa_hi[32]; /* dword 1 */ + u8 length[32]; /* DWORD 2 */ +} __packed; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_mcc_wrb_payload { + union { + struct amap_mcc_sge sgl[19]; + u8 embedded[59 * 32]; /* DWORDS 57 to 115 */ + } u; +} __packed; + +/** + * Pseudo amap definition in which each bit of the actual structure is defined + * as a byte: used to calculate offset/shift/mask of each field + */ +struct amap_mcc_wrb { + u8 embedded; /* DWORD 0 */ + u8 rsvd0[2]; /* DWORD 0 */ + u8 sge_count[5]; /* DWORD 0 */ + u8 rsvd1[16]; /* DWORD 0 */ + u8 special[8]; /* DWORD 0 */ + u8 payload_length[32]; + u8 tag[64]; /* DWORD 2 */ + u8 rsvd2[32]; /* DWORD 4 */ + struct amap_mcc_wrb_payload payload; +}; + +struct mcc_sge { + u32 pa_lo; /* dword 0 */ + u32 pa_hi; /* dword 1 */ + u32 length; /* DWORD 2 */ +} __packed; + +struct mcc_wrb_payload { + union { + struct mcc_sge sgl[19]; + u32 embedded[59]; /* DWORDS 57 to 115 */ + } u; +} __packed; + +#define MCC_WRB_EMBEDDED_MASK 0x00000001 + +struct mcc_wrb { + u32 dw[0]; /* DWORD 0 */ + u32 payload_length; + u32 tag[2]; /* DWORD 2 */ + u32 rsvd2[1]; /* DWORD 4 */ + struct mcc_wrb_payload payload; +}; + +unsigned char mgmt_epfw_cleanup(struct beiscsi_hba *phba, unsigned short chute); +int mgmt_open_connection(struct beiscsi_hba *phba, struct sockaddr *dst_addr, + struct beiscsi_endpoint *beiscsi_ep); + +unsigned char mgmt_upload_connection(struct beiscsi_hba *phba, + unsigned short cid, + unsigned int upload_flag); +unsigned char mgmt_invalidate_icds(struct beiscsi_hba *phba, + unsigned int icd, unsigned int cid); + +struct iscsi_invalidate_connection_params_in { + struct be_cmd_req_hdr hdr; + unsigned int session_handle; + unsigned short cid; + unsigned short unused; + unsigned short cleanup_type; + unsigned short save_cfg; +} __packed; + +struct iscsi_invalidate_connection_params_out { + unsigned int session_handle; + unsigned short cid; + unsigned short unused; +} __packed; + +union iscsi_invalidate_connection_params { + struct iscsi_invalidate_connection_params_in request; + struct iscsi_invalidate_connection_params_out response; +} __packed; + +struct invalidate_command_table { + unsigned short icd; + unsigned short cid; +} __packed; + +struct invalidate_commands_params_in { + struct be_cmd_req_hdr hdr; + unsigned int ref_handle; + unsigned int icd_count; + struct invalidate_command_table table[128]; + unsigned short cleanup_type; + unsigned short unused; +} __packed; + +struct invalidate_commands_params_out { + unsigned int ref_handle; + unsigned int icd_count; + unsigned int icd_status[128]; +} __packed; + +union invalidate_commands_params { + struct invalidate_commands_params_in request; + struct invalidate_commands_params_out response; +} __packed; + +struct mgmt_hba_attributes { + u8 flashrom_version_string[32]; + u8 manufacturer_name[32]; + u32 supported_modes; + u8 seeprom_version_lo; + u8 seeprom_version_hi; + u8 rsvd0[2]; + u32 fw_cmd_data_struct_version; + u32 ep_fw_data_struct_version; + u32 future_reserved[12]; + u32 default_extended_timeout; + u8 
controller_model_number[32]; + u8 controller_description[64]; + u8 controller_serial_number[32]; + u8 ip_version_string[32]; + u8 firmware_version_string[32]; + u8 bios_version_string[32]; + u8 redboot_version_string[32]; + u8 driver_version_string[32]; + u8 fw_on_flash_version_string[32]; + u32 functionalities_supported; + u16 max_cdblength; + u8 asic_revision; + u8 generational_guid[16]; + u8 hba_port_count; + u16 default_link_down_timeout; + u8 iscsi_ver_min_max; + u8 multifunction_device; + u8 cache_valid; + u8 hba_status; + u8 max_domains_supported; + u8 phy_port; + u32 firmware_post_status; + u32 hba_mtu[8]; + u32 future_u32[4]; +} __packed; + +struct mgmt_controller_attributes { + struct mgmt_hba_attributes hba_attribs; + u16 pci_vendor_id; + u16 pci_device_id; + u16 pci_sub_vendor_id; + u16 pci_sub_system_id; + u8 pci_bus_number; + u8 pci_device_number; + u8 pci_function_number; + u8 interface_type; + u64 unique_identifier; + u8 netfilters; + u8 rsvd0[3]; + u8 future_u32[4]; +} __packed; + +struct be_mgmt_controller_attributes { + struct be_cmd_req_hdr hdr; + struct mgmt_controller_attributes params; +} __packed; + +struct be_mgmt_controller_attributes_resp { + struct be_cmd_resp_hdr hdr; + struct mgmt_controller_attributes params; +} __packed; + +/* configuration management */ + +#define GET_MGMT_CONTROLLER_WS(phba) (phba->pmgmt_ws) + +/* MGMT CMD flags */ + +#define MGMT_CMDH_FREE (1<<0) + +/* --- MGMT_ERROR_CODES --- */ +/* Error Codes returned in the status field of the CMD response header */ +#define MGMT_STATUS_SUCCESS 0 /* The CMD completed without errors */ +#define MGMT_STATUS_FAILED 1 /* Error status in the Status field of */ + /* the CMD_RESPONSE_HEADER */ + +#define ISCSI_GET_PDU_TEMPLATE_ADDRESS(pc, pa) {\ + pa->lo = phba->init_mem[ISCSI_MEM_GLOBAL_HEADER].mem_array[0].\ + bus_address.u.a32.address_lo; \ + pa->hi = phba->init_mem[ISCSI_MEM_GLOBAL_HEADER].mem_array[0].\ + bus_address.u.a32.address_hi; \ +} + +struct beiscsi_endpoint { + struct beiscsi_hba *phba; + struct beiscsi_sess *sess; + struct beiscsi_conn *conn; + unsigned short ip_type; + char dst6_addr[ISCSI_ADDRESS_BUF_LEN]; + unsigned long dst_addr; + unsigned short ep_cid; + unsigned int fw_handle; + u16 dst_tcpport; + u16 cid_vld; +}; + +unsigned char mgmt_get_fw_config(struct be_ctrl_info *ctrl, + struct beiscsi_hba *phba); + +unsigned char mgmt_invalidate_connection(struct beiscsi_hba *phba, + struct beiscsi_endpoint *beiscsi_ep, + unsigned short cid, + unsigned short issue_reset, + unsigned short savecfg_flag); +#endif -- cgit v1.2.3 From 6e43650cee644b5d7957ecd50f1e41ff71f87734 Mon Sep 17 00:00:00 2001 From: Neil Horman Date: Mon, 5 Oct 2009 03:56:55 +0000 Subject: add maintainer for network drop monitor kernel service I was getting ribbed about this earlier, so I figured I'd make it official. Add myself as the maintainer of the drop monitor bits, so people don't just gripe at Dave when it breaks (I'm sure it will never break, but just in case :) ). Signed-off-by: Neil Horman Signed-off-by: David S. Miller --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index e797c4d48cf1..4c5b920e0725 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3619,6 +3619,13 @@ F: Documentation/blockdev/nbd.txt F: drivers/block/nbd.c F: include/linux/nbd.h +NETWORK DROP MONITOR +M: Neil Horman +L: netdev@vger.kernel.org +S: Maintained +W: https://fedorahosted.org/dropwatch/ +F: net/core/drop_monitor.c + NETWORKING [GENERAL] M: "David S. 
Miller" L: netdev@vger.kernel.org -- cgit v1.2.3 From 941500954470e04679ff6e3cff0f82d8dbd2b6c3 Mon Sep 17 00:00:00 2001 From: Hubert Feurstein Date: Wed, 7 Oct 2009 08:36:07 +0100 Subject: ARM: 5749/1: ep93xx/micro9: Update maintainer Update Contec Micro9 maintainer and add entry in MAINTAINERS Cc: Ryan Mallon Requires: 5744/1 Signed-off-by: Hubert Feurstein Acked-by: H Hartley Sweeten Acked-by: Manfred Gruber Signed-off-by: Russell King --- MAINTAINERS | 5 +++++ arch/arm/mach-ep93xx/micro9.c | 10 ++++++---- 2 files changed, 11 insertions(+), 4 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 09a2028bab7f..1401a85b93ae 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -577,6 +577,11 @@ M: Mike Rapoport L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained +ARM/CONTEC MICRO9 MACHINE SUPPORT +M: Hubert Feurstein +S: Maintained +F: arch/arm/mach-ep93xx/micro9.c + ARM/CORGI MACHINE SUPPORT M: Richard Purdie S: Maintained diff --git a/arch/arm/mach-ep93xx/micro9.c b/arch/arm/mach-ep93xx/micro9.c index 0a313e82fb74..72e7a7dfd7bf 100644 --- a/arch/arm/mach-ep93xx/micro9.c +++ b/arch/arm/mach-ep93xx/micro9.c @@ -2,7 +2,9 @@ * linux/arch/arm/mach-ep93xx/micro9.c * * Copyright (C) 2006 Contec Steuerungstechnik & Automation GmbH - * Manfred Gruber + * Manfred Gruber + * Copyright (C) 2009 Contec Steuerungstechnik & Automation GmbH + * Hubert Feurstein * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -66,7 +68,7 @@ static void __init micro9h_init_machine(void) } MACHINE_START(MICRO9, "Contec Hypercontrol Micro9-H") - /* Maintainer: Manfred Gruber */ + /* Maintainer: Hubert Feurstein */ .phys_io = EP93XX_APB_PHYS_BASE, .io_pg_offst = ((EP93XX_APB_VIRT_BASE) >> 18) & 0xfffc, .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, @@ -88,7 +90,7 @@ static void __init micro9m_init_machine(void) } MACHINE_START(MICRO9M, "Contec Hypercontrol Micro9-M") - /* Maintainer: Manfred Gruber */ + /* Maintainer: Hubert Feurstein */ .phys_io = EP93XX_APB_PHYS_BASE, .io_pg_offst = ((EP93XX_APB_VIRT_BASE) >> 18) & 0xfffc, .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, @@ -110,7 +112,7 @@ static void __init micro9l_init_machine(void) } MACHINE_START(MICRO9L, "Contec Hypercontrol Micro9-L") - /* Maintainer: Manfred Gruber */ + /* Maintainer: Hubert Feurstein */ .phys_io = EP93XX_APB_PHYS_BASE, .io_pg_offst = ((EP93XX_APB_VIRT_BASE) >> 18) & 0xfffc, .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, -- cgit v1.2.3 From 544df55d6c1590bc21c86119b89a1689b1eb5e75 Mon Sep 17 00:00:00 2001 From: Stefan Richter Date: Sat, 3 Oct 2009 10:17:29 +0200 Subject: ieee1394: add documentation entry to MAINTAINERS Add the file pattern of drivers/ieee1394/init_ohci1394_dma.c's documentation to the maintainers database. init_ohci1394_dma.c is not really part of the IEEE 1394 subsystem, but this maintainers contact seems to be better than none at all. 
Signed-off-by: Stefan Richter --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8dca9d89c6c1..6a387f6ea02a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2519,6 +2519,7 @@ L: linux1394-devel@lists.sourceforge.net W: http://www.linux1394.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git S: Maintained +F: Documentation/debugging-via-ohci1394.txt F: drivers/ieee1394/ IEEE 1394 RAW I/O DRIVER -- cgit v1.2.3 From 05576a1e38e2d06dece32974c5218528d3fbc6e2 Mon Sep 17 00:00:00 2001 From: Jean Delvare Date: Fri, 9 Oct 2009 20:35:19 +0200 Subject: MAINTAINERS: Fix Riku Voipio's address Signed-off-by: Jean Delvare Acked-by: Riku Voipio --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index e1da925b38c8..16c51fc5d764 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2058,7 +2058,7 @@ S: Maintained F: fs/* FINTEK F75375S HARDWARE MONITOR AND FAN CONTROLLER DRIVER -M: Riku Voipio +M: Riku Voipio L: lm-sensors@lm-sensors.org S: Maintained F: drivers/hwmon/f75375s.c -- cgit v1.2.3 From d1f6803a58e827fda7b810dcb7cbdb490d32ab9e Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Sat, 10 Oct 2009 21:40:54 +0000 Subject: net: Add patchwork URL to MAINTAINERS Signed-off-by: David S. Miller --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 4c5b920e0725..c34b728bbf0e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3630,6 +3630,7 @@ NETWORKING [GENERAL] M: "David S. Miller" L: netdev@vger.kernel.org W: http://www.linuxfoundation.org/en/Net +W: http://patchwork.ozlabs.org/project/netdev/list/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6.git S: Maintained F: net/ -- cgit v1.2.3 From d1a890fa37f27d6aca3abc6e25e4148efc3223a6 Mon Sep 17 00:00:00 2001 From: Shreyas Bhatewara Date: Tue, 13 Oct 2009 00:15:51 -0700 Subject: net: VMware virtual Ethernet NIC driver: vmxnet3 Ethernet NIC driver for VMware's vmxnet3 From: Shreyas Bhatewara This patch adds driver support for VMware's virtual Ethernet NIC: vmxnet3 Guests running on VMware hypervisors supporting vmxnet3 device will thus have access to improved network functionalities and performance. Signed-off-by: Shreyas Bhatewara Signed-off-by: Bhavesh Davda Signed-off-by: Ronghua Zhang Signed-off-by: David S. Miller --- MAINTAINERS | 7 + drivers/net/Kconfig | 8 + drivers/net/Makefile | 1 + drivers/net/vmxnet3/Makefile | 35 + drivers/net/vmxnet3/upt1_defs.h | 96 ++ drivers/net/vmxnet3/vmxnet3_defs.h | 535 +++++++ drivers/net/vmxnet3/vmxnet3_drv.c | 2556 +++++++++++++++++++++++++++++++++ drivers/net/vmxnet3/vmxnet3_ethtool.c | 566 ++++++++ drivers/net/vmxnet3/vmxnet3_int.h | 389 +++++ 9 files changed, 4193 insertions(+) create mode 100644 drivers/net/vmxnet3/Makefile create mode 100644 drivers/net/vmxnet3/upt1_defs.h create mode 100644 drivers/net/vmxnet3/vmxnet3_defs.h create mode 100644 drivers/net/vmxnet3/vmxnet3_drv.c create mode 100644 drivers/net/vmxnet3/vmxnet3_ethtool.c create mode 100644 drivers/net/vmxnet3/vmxnet3_int.h (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c34b728bbf0e..70bc7ded9444 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5612,6 +5612,13 @@ S: Maintained F: drivers/vlynq/vlynq.c F: include/linux/vlynq.h +VMWARE VMXNET3 ETHERNET DRIVER +M: Shreyas Bhatewara +M: VMware, Inc. 
+L: netdev@vger.kernel.org +S: Maintained +F: drivers/net/vmxnet3/ + VOLTAGE AND CURRENT REGULATOR FRAMEWORK M: Liam Girdwood M: Mark Brown diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 712776089b46..9789de234130 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -3230,4 +3230,12 @@ config VIRTIO_NET This is the virtual network driver for virtio. It can be used with lguest or QEMU based VMMs (like KVM or Xen). Say Y or M. +config VMXNET3 + tristate "VMware VMXNET3 ethernet driver" + depends on PCI && X86 + help + This driver supports VMware's vmxnet3 virtual ethernet NIC. + To compile this driver as a module, choose M here: the + module will be called vmxnet3. + endif # NETDEVICES diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 48d82e901aad..fc6c8bb92c50 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -30,6 +30,7 @@ obj-$(CONFIG_TEHUTI) += tehuti.o obj-$(CONFIG_ENIC) += enic/ obj-$(CONFIG_JME) += jme.o obj-$(CONFIG_BE2NET) += benet/ +obj-$(CONFIG_VMXNET3) += vmxnet3/ gianfar_driver-objs := gianfar.o \ gianfar_ethtool.o \ diff --git a/drivers/net/vmxnet3/Makefile b/drivers/net/vmxnet3/Makefile new file mode 100644 index 000000000000..880f5098eac9 --- /dev/null +++ b/drivers/net/vmxnet3/Makefile @@ -0,0 +1,35 @@ +################################################################################ +# +# Linux driver for VMware's vmxnet3 ethernet NIC. +# +# Copyright (C) 2007-2009, VMware, Inc. All Rights Reserved. +# +# This program is free software; you can redistribute it and/or modify it +# under the terms of the GNU General Public License as published by the +# Free Software Foundation; version 2 of the License and no later version. +# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or +# NON INFRINGEMENT. See the GNU General Public License for more +# details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. +# +# The full GNU General Public License is included in this distribution in +# the file called "COPYING". +# +# Maintained by: Shreyas Bhatewara +# +# +################################################################################ + +# +# Makefile for the VMware vmxnet3 ethernet NIC driver +# + +obj-$(CONFIG_VMXNET3) += vmxnet3.o + +vmxnet3-objs := vmxnet3_drv.o vmxnet3_ethtool.o diff --git a/drivers/net/vmxnet3/upt1_defs.h b/drivers/net/vmxnet3/upt1_defs.h new file mode 100644 index 000000000000..37108fb226d3 --- /dev/null +++ b/drivers/net/vmxnet3/upt1_defs.h @@ -0,0 +1,96 @@ +/* + * Linux driver for VMware's vmxnet3 ethernet NIC. + * + * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; version 2 of the License and no later version. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or + * NON INFRINGEMENT. See the GNU General Public License for more + * details. 
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Maintained by: Shreyas Bhatewara
+ *
+ */
+
+#ifndef _UPT1_DEFS_H
+#define _UPT1_DEFS_H
+
+struct UPT1_TxStats {
+ u64 TSOPktsTxOK; /* TSO pkts post-segmentation */
+ u64 TSOBytesTxOK;
+ u64 ucastPktsTxOK;
+ u64 ucastBytesTxOK;
+ u64 mcastPktsTxOK;
+ u64 mcastBytesTxOK;
+ u64 bcastPktsTxOK;
+ u64 bcastBytesTxOK;
+ u64 pktsTxError;
+ u64 pktsTxDiscard;
+};
+
+struct UPT1_RxStats {
+ u64 LROPktsRxOK; /* LRO pkts */
+ u64 LROBytesRxOK; /* bytes from LRO pkts */
+ /* the following counters are for pkts from the wire, i.e., pre-LRO */
+ u64 ucastPktsRxOK;
+ u64 ucastBytesRxOK;
+ u64 mcastPktsRxOK;
+ u64 mcastBytesRxOK;
+ u64 bcastPktsRxOK;
+ u64 bcastBytesRxOK;
+ u64 pktsRxOutOfBuf;
+ u64 pktsRxError;
+};
+
+/* interrupt moderation level */
+enum {
+ UPT1_IML_NONE = 0, /* no interrupt moderation */
+ UPT1_IML_HIGHEST = 7, /* least intr generated */
+ UPT1_IML_ADAPTIVE = 8, /* adaptive intr moderation */
+};
+/* values for UPT1_RSSConf.hashFunc */
+enum {
+ UPT1_RSS_HASH_TYPE_NONE = 0x0,
+ UPT1_RSS_HASH_TYPE_IPV4 = 0x01,
+ UPT1_RSS_HASH_TYPE_TCP_IPV4 = 0x02,
+ UPT1_RSS_HASH_TYPE_IPV6 = 0x04,
+ UPT1_RSS_HASH_TYPE_TCP_IPV6 = 0x08,
+};
+
+enum {
+ UPT1_RSS_HASH_FUNC_NONE = 0x0,
+ UPT1_RSS_HASH_FUNC_TOEPLITZ = 0x01,
+};
+
+#define UPT1_RSS_MAX_KEY_SIZE 40
+#define UPT1_RSS_MAX_IND_TABLE_SIZE 128
+
+struct UPT1_RSSConf {
+ u16 hashType;
+ u16 hashFunc;
+ u16 hashKeySize;
+ u16 indTableSize;
+ u8 hashKey[UPT1_RSS_MAX_KEY_SIZE];
+ u8 indTable[UPT1_RSS_MAX_IND_TABLE_SIZE];
+};
+
+/* features */
+enum {
+ UPT1_F_RXCSUM = 0x0001, /* rx csum verification */
+ UPT1_F_RSS = 0x0002,
+ UPT1_F_RXVLAN = 0x0004, /* VLAN tag stripping */
+ UPT1_F_LRO = 0x0008,
+};
+#endif diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h new file mode 100644 index 000000000000..dc8ee4438a4f --- /dev/null +++ b/drivers/net/vmxnet3/vmxnet3_defs.h @@ -0,0 +1,535 @@ +/*
+ * Linux driver for VMware's vmxnet3 ethernet NIC.
+ *
+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; version 2 of the License and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for more
+ * details. 
+ * + * Maintained by: Shreyas Bhatewara + * + */ + +#ifndef _VMXNET3_DEFS_H_ +#define _VMXNET3_DEFS_H_ + +#include "upt1_defs.h" + +/* all registers are 32 bit wide */ +/* BAR 1 */ +enum { + VMXNET3_REG_VRRS = 0x0, /* Vmxnet3 Revision Report Selection */ + VMXNET3_REG_UVRS = 0x8, /* UPT Version Report Selection */ + VMXNET3_REG_DSAL = 0x10, /* Driver Shared Address Low */ + VMXNET3_REG_DSAH = 0x18, /* Driver Shared Address High */ + VMXNET3_REG_CMD = 0x20, /* Command */ + VMXNET3_REG_MACL = 0x28, /* MAC Address Low */ + VMXNET3_REG_MACH = 0x30, /* MAC Address High */ + VMXNET3_REG_ICR = 0x38, /* Interrupt Cause Register */ + VMXNET3_REG_ECR = 0x40 /* Event Cause Register */ +}; + +/* BAR 0 */ +enum { + VMXNET3_REG_IMR = 0x0, /* Interrupt Mask Register */ + VMXNET3_REG_TXPROD = 0x600, /* Tx Producer Index */ + VMXNET3_REG_RXPROD = 0x800, /* Rx Producer Index for ring 1 */ + VMXNET3_REG_RXPROD2 = 0xA00 /* Rx Producer Index for ring 2 */ +}; + +#define VMXNET3_PT_REG_SIZE 4096 /* BAR 0 */ +#define VMXNET3_VD_REG_SIZE 4096 /* BAR 1 */ + +#define VMXNET3_REG_ALIGN 8 /* All registers are 8-byte aligned. */ +#define VMXNET3_REG_ALIGN_MASK 0x7 + +/* I/O Mapped access to registers */ +#define VMXNET3_IO_TYPE_PT 0 +#define VMXNET3_IO_TYPE_VD 1 +#define VMXNET3_IO_ADDR(type, reg) (((type) << 24) | ((reg) & 0xFFFFFF)) +#define VMXNET3_IO_TYPE(addr) ((addr) >> 24) +#define VMXNET3_IO_REG(addr) ((addr) & 0xFFFFFF) + +enum { + VMXNET3_CMD_FIRST_SET = 0xCAFE0000, + VMXNET3_CMD_ACTIVATE_DEV = VMXNET3_CMD_FIRST_SET, + VMXNET3_CMD_QUIESCE_DEV, + VMXNET3_CMD_RESET_DEV, + VMXNET3_CMD_UPDATE_RX_MODE, + VMXNET3_CMD_UPDATE_MAC_FILTERS, + VMXNET3_CMD_UPDATE_VLAN_FILTERS, + VMXNET3_CMD_UPDATE_RSSIDT, + VMXNET3_CMD_UPDATE_IML, + VMXNET3_CMD_UPDATE_PMCFG, + VMXNET3_CMD_UPDATE_FEATURE, + VMXNET3_CMD_LOAD_PLUGIN, + + VMXNET3_CMD_FIRST_GET = 0xF00D0000, + VMXNET3_CMD_GET_QUEUE_STATUS = VMXNET3_CMD_FIRST_GET, + VMXNET3_CMD_GET_STATS, + VMXNET3_CMD_GET_LINK, + VMXNET3_CMD_GET_PERM_MAC_LO, + VMXNET3_CMD_GET_PERM_MAC_HI, + VMXNET3_CMD_GET_DID_LO, + VMXNET3_CMD_GET_DID_HI, + VMXNET3_CMD_GET_DEV_EXTRA_INFO, + VMXNET3_CMD_GET_CONF_INTR +}; + +struct Vmxnet3_TxDesc { + u64 addr; + + u32 len:14; + u32 gen:1; /* generation bit */ + u32 rsvd:1; + u32 dtype:1; /* descriptor type */ + u32 ext1:1; + u32 msscof:14; /* MSS, checksum offset, flags */ + + u32 hlen:10; /* header len */ + u32 om:2; /* offload mode */ + u32 eop:1; /* End Of Packet */ + u32 cq:1; /* completion request */ + u32 ext2:1; + u32 ti:1; /* VLAN Tag Insertion */ + u32 tci:16; /* Tag to Insert */ +}; + +/* TxDesc.OM values */ +#define VMXNET3_OM_NONE 0 +#define VMXNET3_OM_CSUM 2 +#define VMXNET3_OM_TSO 3 + +/* fields in TxDesc we access w/o using bit fields */ +#define VMXNET3_TXD_EOP_SHIFT 12 +#define VMXNET3_TXD_CQ_SHIFT 13 +#define VMXNET3_TXD_GEN_SHIFT 14 + +#define VMXNET3_TXD_CQ (1 << VMXNET3_TXD_CQ_SHIFT) +#define VMXNET3_TXD_EOP (1 << VMXNET3_TXD_EOP_SHIFT) +#define VMXNET3_TXD_GEN (1 << VMXNET3_TXD_GEN_SHIFT) + +#define VMXNET3_HDR_COPY_SIZE 128 + + +struct Vmxnet3_TxDataDesc { + u8 data[VMXNET3_HDR_COPY_SIZE]; +}; + + +struct Vmxnet3_TxCompDesc { + u32 txdIdx:12; /* Index of the EOP TxDesc */ + u32 ext1:20; + + u32 ext2; + u32 ext3; + + u32 rsvd:24; + u32 type:7; /* completion type */ + u32 gen:1; /* generation bit */ +}; + + +struct Vmxnet3_RxDesc { + u64 addr; + + u32 len:14; + u32 btype:1; /* Buffer Type */ + u32 dtype:1; /* Descriptor type */ + u32 rsvd:15; + u32 gen:1; /* Generation bit */ + + u32 ext1; +}; + +/* values of RXD.BTYPE */ +#define 
VMXNET3_RXD_BTYPE_HEAD 0 /* head only */ +#define VMXNET3_RXD_BTYPE_BODY 1 /* body only */ + +/* fields in RxDesc we access w/o using bit fields */ +#define VMXNET3_RXD_BTYPE_SHIFT 14 +#define VMXNET3_RXD_GEN_SHIFT 31 + + +struct Vmxnet3_RxCompDesc { + u32 rxdIdx:12; /* Index of the RxDesc */ + u32 ext1:2; + u32 eop:1; /* End of Packet */ + u32 sop:1; /* Start of Packet */ + u32 rqID:10; /* rx queue/ring ID */ + u32 rssType:4; /* RSS hash type used */ + u32 cnc:1; /* Checksum Not Calculated */ + u32 ext2:1; + + u32 rssHash; /* RSS hash value */ + + u32 len:14; /* data length */ + u32 err:1; /* Error */ + u32 ts:1; /* Tag is stripped */ + u32 tci:16; /* Tag stripped */ + + u32 csum:16; + u32 tuc:1; /* TCP/UDP Checksum Correct */ + u32 udp:1; /* UDP packet */ + u32 tcp:1; /* TCP packet */ + u32 ipc:1; /* IP Checksum Correct */ + u32 v6:1; /* IPv6 */ + u32 v4:1; /* IPv4 */ + u32 frg:1; /* IP Fragment */ + u32 fcs:1; /* Frame CRC correct */ + u32 type:7; /* completion type */ + u32 gen:1; /* generation bit */ +}; + +/* fields in RxCompDesc we access via Vmxnet3_GenericDesc.dword[3] */ +#define VMXNET3_RCD_TUC_SHIFT 16 +#define VMXNET3_RCD_IPC_SHIFT 19 + +/* fields in RxCompDesc we access via Vmxnet3_GenericDesc.qword[1] */ +#define VMXNET3_RCD_TYPE_SHIFT 56 +#define VMXNET3_RCD_GEN_SHIFT 63 + +/* csum OK for TCP/UDP pkts over IP */ +#define VMXNET3_RCD_CSUM_OK (1 << VMXNET3_RCD_TUC_SHIFT | \ + 1 << VMXNET3_RCD_IPC_SHIFT) + +/* value of RxCompDesc.rssType */ +enum { + VMXNET3_RCD_RSS_TYPE_NONE = 0, + VMXNET3_RCD_RSS_TYPE_IPV4 = 1, + VMXNET3_RCD_RSS_TYPE_TCPIPV4 = 2, + VMXNET3_RCD_RSS_TYPE_IPV6 = 3, + VMXNET3_RCD_RSS_TYPE_TCPIPV6 = 4, +}; + + +/* a union for accessing all cmd/completion descriptors */ +union Vmxnet3_GenericDesc { + u64 qword[2]; + u32 dword[4]; + u16 word[8]; + struct Vmxnet3_TxDesc txd; + struct Vmxnet3_RxDesc rxd; + struct Vmxnet3_TxCompDesc tcd; + struct Vmxnet3_RxCompDesc rcd; +}; + +#define VMXNET3_INIT_GEN 1 + +/* Max size of a single tx buffer */ +#define VMXNET3_MAX_TX_BUF_SIZE (1 << 14) + +/* # of tx desc needed for a tx buffer size */ +#define VMXNET3_TXD_NEEDED(size) (((size) + VMXNET3_MAX_TX_BUF_SIZE - 1) / \ + VMXNET3_MAX_TX_BUF_SIZE) + +/* max # of tx descs for a non-tso pkt */ +#define VMXNET3_MAX_TXD_PER_PKT 16 + +/* Max size of a single rx buffer */ +#define VMXNET3_MAX_RX_BUF_SIZE ((1 << 14) - 1) +/* Minimum size of a type 0 buffer */ +#define VMXNET3_MIN_T0_BUF_SIZE 128 +#define VMXNET3_MAX_CSUM_OFFSET 1024 + +/* Ring base address alignment */ +#define VMXNET3_RING_BA_ALIGN 512 +#define VMXNET3_RING_BA_MASK (VMXNET3_RING_BA_ALIGN - 1) + +/* Ring size must be a multiple of 32 */ +#define VMXNET3_RING_SIZE_ALIGN 32 +#define VMXNET3_RING_SIZE_MASK (VMXNET3_RING_SIZE_ALIGN - 1) + +/* Max ring size */ +#define VMXNET3_TX_RING_MAX_SIZE 4096 +#define VMXNET3_TC_RING_MAX_SIZE 4096 +#define VMXNET3_RX_RING_MAX_SIZE 4096 +#define VMXNET3_RC_RING_MAX_SIZE 8192 + +/* a list of reasons for queue stop */ + +enum { + VMXNET3_ERR_NOEOP = 0x80000000, /* cannot find the EOP desc of a pkt */ + VMXNET3_ERR_TXD_REUSE = 0x80000001, /* reuse TxDesc before tx completion */ + VMXNET3_ERR_BIG_PKT = 0x80000002, /* too many TxDesc for a pkt */ + VMXNET3_ERR_DESC_NOT_SPT = 0x80000003, /* descriptor type not supported */ + VMXNET3_ERR_SMALL_BUF = 0x80000004, /* type 0 buffer too small */ + VMXNET3_ERR_STRESS = 0x80000005, /* stress option firing in vmkernel */ + VMXNET3_ERR_SWITCH = 0x80000006, /* mode switch failure */ + VMXNET3_ERR_TXD_INVALID = 0x80000007, /* invalid TxDesc */ +}; + +/* 
completion descriptor types */ +#define VMXNET3_CDTYPE_TXCOMP 0 /* Tx Completion Descriptor */ +#define VMXNET3_CDTYPE_RXCOMP 3 /* Rx Completion Descriptor */ + +enum { + VMXNET3_GOS_BITS_UNK = 0, /* unknown */ + VMXNET3_GOS_BITS_32 = 1, + VMXNET3_GOS_BITS_64 = 2, +}; + +#define VMXNET3_GOS_TYPE_LINUX 1 + + +struct Vmxnet3_GOSInfo { + u32 gosBits:2; /* 32-bit or 64-bit? */ + u32 gosType:4; /* which guest */ + u32 gosVer:16; /* gos version */ + u32 gosMisc:10; /* other info about gos */ +}; + + +struct Vmxnet3_DriverInfo { + u32 version; + struct Vmxnet3_GOSInfo gos; + u32 vmxnet3RevSpt; + u32 uptVerSpt; +}; + + +#define VMXNET3_REV1_MAGIC 0xbabefee1 + +/* + * QueueDescPA must be 128 bytes aligned. It points to an array of + * Vmxnet3_TxQueueDesc followed by an array of Vmxnet3_RxQueueDesc. + * The number of Vmxnet3_TxQueueDesc/Vmxnet3_RxQueueDesc are specified by + * Vmxnet3_MiscConf.numTxQueues/numRxQueues, respectively. + */ +#define VMXNET3_QUEUE_DESC_ALIGN 128 + + +struct Vmxnet3_MiscConf { + struct Vmxnet3_DriverInfo driverInfo; + u64 uptFeatures; + u64 ddPA; /* driver data PA */ + u64 queueDescPA; /* queue descriptor table PA */ + u32 ddLen; /* driver data len */ + u32 queueDescLen; /* queue desc. table len in bytes */ + u32 mtu; + u16 maxNumRxSG; + u8 numTxQueues; + u8 numRxQueues; + u32 reserved[4]; +}; + + +struct Vmxnet3_TxQueueConf { + u64 txRingBasePA; + u64 dataRingBasePA; + u64 compRingBasePA; + u64 ddPA; /* driver data */ + u64 reserved; + u32 txRingSize; /* # of tx desc */ + u32 dataRingSize; /* # of data desc */ + u32 compRingSize; /* # of comp desc */ + u32 ddLen; /* size of driver data */ + u8 intrIdx; + u8 _pad[7]; +}; + + +struct Vmxnet3_RxQueueConf { + u64 rxRingBasePA[2]; + u64 compRingBasePA; + u64 ddPA; /* driver data */ + u64 reserved; + u32 rxRingSize[2]; /* # of rx desc */ + u32 compRingSize; /* # of rx comp desc */ + u32 ddLen; /* size of driver data */ + u8 intrIdx; + u8 _pad[7]; +}; + + +enum vmxnet3_intr_mask_mode { + VMXNET3_IMM_AUTO = 0, + VMXNET3_IMM_ACTIVE = 1, + VMXNET3_IMM_LAZY = 2 +}; + +enum vmxnet3_intr_type { + VMXNET3_IT_AUTO = 0, + VMXNET3_IT_INTX = 1, + VMXNET3_IT_MSI = 2, + VMXNET3_IT_MSIX = 3 +}; + +#define VMXNET3_MAX_TX_QUEUES 8 +#define VMXNET3_MAX_RX_QUEUES 16 +/* addition 1 for events */ +#define VMXNET3_MAX_INTRS 25 + + +struct Vmxnet3_IntrConf { + bool autoMask; + u8 numIntrs; /* # of interrupts */ + u8 eventIntrIdx; + u8 modLevels[VMXNET3_MAX_INTRS]; /* moderation level for + * each intr */ + u32 reserved[3]; +}; + +/* one bit per VLAN ID, the size is in the units of u32 */ +#define VMXNET3_VFT_SIZE (4096 / (sizeof(u32) * 8)) + + +struct Vmxnet3_QueueStatus { + bool stopped; + u8 _pad[3]; + u32 error; +}; + + +struct Vmxnet3_TxQueueCtrl { + u32 txNumDeferred; + u32 txThreshold; + u64 reserved; +}; + + +struct Vmxnet3_RxQueueCtrl { + bool updateRxProd; + u8 _pad[7]; + u64 reserved; +}; + +enum { + VMXNET3_RXM_UCAST = 0x01, /* unicast only */ + VMXNET3_RXM_MCAST = 0x02, /* multicast passing the filters */ + VMXNET3_RXM_BCAST = 0x04, /* broadcast only */ + VMXNET3_RXM_ALL_MULTI = 0x08, /* all multicast */ + VMXNET3_RXM_PROMISC = 0x10 /* promiscuous */ +}; + +struct Vmxnet3_RxFilterConf { + u32 rxMode; /* VMXNET3_RXM_xxx */ + u16 mfTableLen; /* size of the multicast filter table */ + u16 _pad1; + u64 mfTablePA; /* PA of the multicast filters table */ + u32 vfTable[VMXNET3_VFT_SIZE]; /* vlan filter */ +}; + + +#define VMXNET3_PM_MAX_FILTERS 6 +#define VMXNET3_PM_MAX_PATTERN_SIZE 128 +#define VMXNET3_PM_MAX_MASK_SIZE 
(VMXNET3_PM_MAX_PATTERN_SIZE / 8)
+
+#define VMXNET3_PM_WAKEUP_MAGIC 0x01 /* wake up on magic pkts */
+#define VMXNET3_PM_WAKEUP_FILTER 0x02 /* wake up on pkts matching
+ * filters */
+
+
+struct Vmxnet3_PM_PktFilter {
+ u8 maskSize;
+ u8 patternSize;
+ u8 mask[VMXNET3_PM_MAX_MASK_SIZE];
+ u8 pattern[VMXNET3_PM_MAX_PATTERN_SIZE];
+ u8 pad[6];
+};
+
+
+struct Vmxnet3_PMConf {
+ u16 wakeUpEvents; /* VMXNET3_PM_WAKEUP_xxx */
+ u8 numFilters;
+ u8 pad[5];
+ struct Vmxnet3_PM_PktFilter filters[VMXNET3_PM_MAX_FILTERS];
+};
+
+
+struct Vmxnet3_VariableLenConfDesc {
+ u32 confVer;
+ u32 confLen;
+ u64 confPA;
+};
+
+
+struct Vmxnet3_TxQueueDesc {
+ struct Vmxnet3_TxQueueCtrl ctrl;
+ struct Vmxnet3_TxQueueConf conf;
+
+ /* Driver read after a GET command */
+ struct Vmxnet3_QueueStatus status;
+ struct UPT1_TxStats stats;
+ u8 _pad[88]; /* 128 aligned */
+};
+
+
+struct Vmxnet3_RxQueueDesc {
+ struct Vmxnet3_RxQueueCtrl ctrl;
+ struct Vmxnet3_RxQueueConf conf;
+ /* Driver read after a GET command */
+ struct Vmxnet3_QueueStatus status;
+ struct UPT1_RxStats stats;
+ u8 __pad[88]; /* 128 aligned */
+};
+
+
+struct Vmxnet3_DSDevRead {
+ /* read-only region for device, read by dev in response to a SET cmd */
+ struct Vmxnet3_MiscConf misc;
+ struct Vmxnet3_IntrConf intrConf;
+ struct Vmxnet3_RxFilterConf rxFilterConf;
+ struct Vmxnet3_VariableLenConfDesc rssConfDesc;
+ struct Vmxnet3_VariableLenConfDesc pmConfDesc;
+ struct Vmxnet3_VariableLenConfDesc pluginConfDesc;
+};
+
+/* All structures in DriverShared are padded to multiples of 8 bytes */
+struct Vmxnet3_DriverShared {
+ u32 magic;
+ /* make devRead start at 64bit boundaries */
+ u32 pad;
+ struct Vmxnet3_DSDevRead devRead;
+ u32 ecr;
+ u32 reserved[5];
+};
+
+
+#define VMXNET3_ECR_RQERR (1 << 0)
+#define VMXNET3_ECR_TQERR (1 << 1)
+#define VMXNET3_ECR_LINK (1 << 2)
+#define VMXNET3_ECR_DIC (1 << 3)
+#define VMXNET3_ECR_DEBUG (1 << 4)
+
+/* flip the gen bit of a ring */
+#define VMXNET3_FLIP_RING_GEN(gen) ((gen) = (gen) ^ 0x1)
+
+/* only use this if moving the idx won't affect the gen bit */
+#define VMXNET3_INC_RING_IDX_ONLY(idx, ring_size) \
+ do {\
+ (idx)++;\
+ if (unlikely((idx) == (ring_size))) {\
+ (idx) = 0;\
+ } \
+ } while (0)
+
+#define VMXNET3_SET_VFTABLE_ENTRY(vfTable, vid) \
+ (vfTable[vid >> 5] |= (1 << (vid & 31)))
+#define VMXNET3_CLEAR_VFTABLE_ENTRY(vfTable, vid) \
+ (vfTable[vid >> 5] &= ~(1 << (vid & 31)))
+
+#define VMXNET3_VFTABLE_ENTRY_IS_SET(vfTable, vid) \
+ ((vfTable[vid >> 5] & (1 << (vid & 31))) != 0)
+
+#define VMXNET3_MAX_MTU 9000
+#define VMXNET3_MIN_MTU 60
+
+#define VMXNET3_LINK_UP (10000 << 16 | 1) /* 10 Gbps, up */
+#define VMXNET3_LINK_DOWN 0
+
+#endif /* _VMXNET3_DEFS_H_ */ diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c new file mode 100644 index 000000000000..44fb0c5a2800 --- /dev/null +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -0,0 +1,2556 @@ +/*
+ * Linux driver for VMware's vmxnet3 ethernet NIC.
+ *
+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; version 2 of the License and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for more
+ * details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * Maintained by: Shreyas Bhatewara + * + */ + +#include "vmxnet3_int.h" + +char vmxnet3_driver_name[] = "vmxnet3"; +#define VMXNET3_DRIVER_DESC "VMware vmxnet3 virtual NIC driver" + + +/* + * PCI Device ID Table + * Last entry must be all 0s + */ +static const struct pci_device_id vmxnet3_pciid_table[] = { + {PCI_VDEVICE(VMWARE, PCI_DEVICE_ID_VMWARE_VMXNET3)}, + {0} +}; + +MODULE_DEVICE_TABLE(pci, vmxnet3_pciid_table); + +static atomic_t devices_found; + + +/* + * Enable/Disable the given intr + */ +static void +vmxnet3_enable_intr(struct vmxnet3_adapter *adapter, unsigned intr_idx) +{ + VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_IMR + intr_idx * 8, 0); +} + + +static void +vmxnet3_disable_intr(struct vmxnet3_adapter *adapter, unsigned intr_idx) +{ + VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_IMR + intr_idx * 8, 1); +} + + +/* + * Enable/Disable all intrs used by the device + */ +static void +vmxnet3_enable_all_intrs(struct vmxnet3_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->intr.num_intrs; i++) + vmxnet3_enable_intr(adapter, i); +} + + +static void +vmxnet3_disable_all_intrs(struct vmxnet3_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->intr.num_intrs; i++) + vmxnet3_disable_intr(adapter, i); +} + + +static void +vmxnet3_ack_events(struct vmxnet3_adapter *adapter, u32 events) +{ + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_ECR, events); +} + + +static bool +vmxnet3_tq_stopped(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter) +{ + return netif_queue_stopped(adapter->netdev); +} + + +static void +vmxnet3_tq_start(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter) +{ + tq->stopped = false; + netif_start_queue(adapter->netdev); +} + + +static void +vmxnet3_tq_wake(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter) +{ + tq->stopped = false; + netif_wake_queue(adapter->netdev); +} + + +static void +vmxnet3_tq_stop(struct vmxnet3_tx_queue *tq, struct vmxnet3_adapter *adapter) +{ + tq->stopped = true; + tq->num_stop++; + netif_stop_queue(adapter->netdev); +} + + +/* + * Check the link state. This may start or stop the tx queue. + */ +static void +vmxnet3_check_link(struct vmxnet3_adapter *adapter) +{ + u32 ret; + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_LINK); + ret = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); + adapter->link_speed = ret >> 16; + if (ret & 1) { /* Link is up. 
*/ + printk(KERN_INFO "%s: NIC Link is Up %d Mbps\n", + adapter->netdev->name, adapter->link_speed); + if (!netif_carrier_ok(adapter->netdev)) + netif_carrier_on(adapter->netdev); + + vmxnet3_tq_start(&adapter->tx_queue, adapter); + } else { + printk(KERN_INFO "%s: NIC Link is Down\n", + adapter->netdev->name); + if (netif_carrier_ok(adapter->netdev)) + netif_carrier_off(adapter->netdev); + + vmxnet3_tq_stop(&adapter->tx_queue, adapter); + } +} + + +static void +vmxnet3_process_events(struct vmxnet3_adapter *adapter) +{ + u32 events = adapter->shared->ecr; + if (!events) + return; + + vmxnet3_ack_events(adapter, events); + + /* Check if link state has changed */ + if (events & VMXNET3_ECR_LINK) + vmxnet3_check_link(adapter); + + /* Check if there is an error on xmit/recv queues */ + if (events & (VMXNET3_ECR_TQERR | VMXNET3_ECR_RQERR)) { + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_GET_QUEUE_STATUS); + + if (adapter->tqd_start->status.stopped) { + printk(KERN_ERR "%s: tq error 0x%x\n", + adapter->netdev->name, + adapter->tqd_start->status.error); + } + if (adapter->rqd_start->status.stopped) { + printk(KERN_ERR "%s: rq error 0x%x\n", + adapter->netdev->name, + adapter->rqd_start->status.error); + } + + schedule_work(&adapter->work); + } +} + + +static void +vmxnet3_unmap_tx_buf(struct vmxnet3_tx_buf_info *tbi, + struct pci_dev *pdev) +{ + if (tbi->map_type == VMXNET3_MAP_SINGLE) + pci_unmap_single(pdev, tbi->dma_addr, tbi->len, + PCI_DMA_TODEVICE); + else if (tbi->map_type == VMXNET3_MAP_PAGE) + pci_unmap_page(pdev, tbi->dma_addr, tbi->len, + PCI_DMA_TODEVICE); + else + BUG_ON(tbi->map_type != VMXNET3_MAP_NONE); + + tbi->map_type = VMXNET3_MAP_NONE; /* to help debugging */ +} + + +static int +vmxnet3_unmap_pkt(u32 eop_idx, struct vmxnet3_tx_queue *tq, + struct pci_dev *pdev, struct vmxnet3_adapter *adapter) +{ + struct sk_buff *skb; + int entries = 0; + + /* no out of order completion */ + BUG_ON(tq->buf_info[eop_idx].sop_idx != tq->tx_ring.next2comp); + BUG_ON(tq->tx_ring.base[eop_idx].txd.eop != 1); + + skb = tq->buf_info[eop_idx].skb; + BUG_ON(skb == NULL); + tq->buf_info[eop_idx].skb = NULL; + + VMXNET3_INC_RING_IDX_ONLY(eop_idx, tq->tx_ring.size); + + while (tq->tx_ring.next2comp != eop_idx) { + vmxnet3_unmap_tx_buf(tq->buf_info + tq->tx_ring.next2comp, + pdev); + + /* update next2comp w/o tx_lock. Since we are marking more, + * instead of less, tx ring entries avail, the worst case is + * that the tx routine incorrectly re-queues a pkt due to + * insufficient tx ring entries. 
+ */ + vmxnet3_cmd_ring_adv_next2comp(&tq->tx_ring); + entries++; + } + + dev_kfree_skb_any(skb); + return entries; +} + + +static int +vmxnet3_tq_tx_complete(struct vmxnet3_tx_queue *tq, + struct vmxnet3_adapter *adapter) +{ + int completed = 0; + union Vmxnet3_GenericDesc *gdesc; + + gdesc = tq->comp_ring.base + tq->comp_ring.next2proc; + while (gdesc->tcd.gen == tq->comp_ring.gen) { + completed += vmxnet3_unmap_pkt(gdesc->tcd.txdIdx, tq, + adapter->pdev, adapter); + + vmxnet3_comp_ring_adv_next2proc(&tq->comp_ring); + gdesc = tq->comp_ring.base + tq->comp_ring.next2proc; + } + + if (completed) { + spin_lock(&tq->tx_lock); + if (unlikely(vmxnet3_tq_stopped(tq, adapter) && + vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) > + VMXNET3_WAKE_QUEUE_THRESHOLD(tq) && + netif_carrier_ok(adapter->netdev))) { + vmxnet3_tq_wake(tq, adapter); + } + spin_unlock(&tq->tx_lock); + } + return completed; +} + + +static void +vmxnet3_tq_cleanup(struct vmxnet3_tx_queue *tq, + struct vmxnet3_adapter *adapter) +{ + int i; + + while (tq->tx_ring.next2comp != tq->tx_ring.next2fill) { + struct vmxnet3_tx_buf_info *tbi; + union Vmxnet3_GenericDesc *gdesc; + + tbi = tq->buf_info + tq->tx_ring.next2comp; + gdesc = tq->tx_ring.base + tq->tx_ring.next2comp; + + vmxnet3_unmap_tx_buf(tbi, adapter->pdev); + if (tbi->skb) { + dev_kfree_skb_any(tbi->skb); + tbi->skb = NULL; + } + vmxnet3_cmd_ring_adv_next2comp(&tq->tx_ring); + } + + /* sanity check, verify all buffers are indeed unmapped and freed */ + for (i = 0; i < tq->tx_ring.size; i++) { + BUG_ON(tq->buf_info[i].skb != NULL || + tq->buf_info[i].map_type != VMXNET3_MAP_NONE); + } + + tq->tx_ring.gen = VMXNET3_INIT_GEN; + tq->tx_ring.next2fill = tq->tx_ring.next2comp = 0; + + tq->comp_ring.gen = VMXNET3_INIT_GEN; + tq->comp_ring.next2proc = 0; +} + + +void +vmxnet3_tq_destroy(struct vmxnet3_tx_queue *tq, + struct vmxnet3_adapter *adapter) +{ + if (tq->tx_ring.base) { + pci_free_consistent(adapter->pdev, tq->tx_ring.size * + sizeof(struct Vmxnet3_TxDesc), + tq->tx_ring.base, tq->tx_ring.basePA); + tq->tx_ring.base = NULL; + } + if (tq->data_ring.base) { + pci_free_consistent(adapter->pdev, tq->data_ring.size * + sizeof(struct Vmxnet3_TxDataDesc), + tq->data_ring.base, tq->data_ring.basePA); + tq->data_ring.base = NULL; + } + if (tq->comp_ring.base) { + pci_free_consistent(adapter->pdev, tq->comp_ring.size * + sizeof(struct Vmxnet3_TxCompDesc), + tq->comp_ring.base, tq->comp_ring.basePA); + tq->comp_ring.base = NULL; + } + kfree(tq->buf_info); + tq->buf_info = NULL; +} + + +static void +vmxnet3_tq_init(struct vmxnet3_tx_queue *tq, + struct vmxnet3_adapter *adapter) +{ + int i; + + /* reset the tx ring contents to 0 and reset the tx ring states */ + memset(tq->tx_ring.base, 0, tq->tx_ring.size * + sizeof(struct Vmxnet3_TxDesc)); + tq->tx_ring.next2fill = tq->tx_ring.next2comp = 0; + tq->tx_ring.gen = VMXNET3_INIT_GEN; + + memset(tq->data_ring.base, 0, tq->data_ring.size * + sizeof(struct Vmxnet3_TxDataDesc)); + + /* reset the tx comp ring contents to 0 and reset comp ring states */ + memset(tq->comp_ring.base, 0, tq->comp_ring.size * + sizeof(struct Vmxnet3_TxCompDesc)); + tq->comp_ring.next2proc = 0; + tq->comp_ring.gen = VMXNET3_INIT_GEN; + + /* reset the bookkeeping data */ + memset(tq->buf_info, 0, sizeof(tq->buf_info[0]) * tq->tx_ring.size); + for (i = 0; i < tq->tx_ring.size; i++) + tq->buf_info[i].map_type = VMXNET3_MAP_NONE; + + /* stats are not reset */ +} + + +static int +vmxnet3_tq_create(struct vmxnet3_tx_queue *tq, + struct vmxnet3_adapter *adapter) +{ + 
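/*
+ * a queue being created must not own any rings or bookkeeping
+ * memory yet; each ring below comes from a DMA-consistent
+ * allocation, and a partial failure is unwound via the err path
+ */
+ 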
BUG_ON(tq->tx_ring.base || tq->data_ring.base ||
+ tq->comp_ring.base || tq->buf_info);
+
+ tq->tx_ring.base = pci_alloc_consistent(adapter->pdev, tq->tx_ring.size
+ * sizeof(struct Vmxnet3_TxDesc),
+ &tq->tx_ring.basePA);
+ if (!tq->tx_ring.base) {
+ printk(KERN_ERR "%s: failed to allocate tx ring\n",
+ adapter->netdev->name);
+ goto err;
+ }
+
+ tq->data_ring.base = pci_alloc_consistent(adapter->pdev,
+ tq->data_ring.size *
+ sizeof(struct Vmxnet3_TxDataDesc),
+ &tq->data_ring.basePA);
+ if (!tq->data_ring.base) {
+ printk(KERN_ERR "%s: failed to allocate data ring\n",
+ adapter->netdev->name);
+ goto err;
+ }
+
+ tq->comp_ring.base = pci_alloc_consistent(adapter->pdev,
+ tq->comp_ring.size *
+ sizeof(struct Vmxnet3_TxCompDesc),
+ &tq->comp_ring.basePA);
+ if (!tq->comp_ring.base) {
+ printk(KERN_ERR "%s: failed to allocate tx comp ring\n",
+ adapter->netdev->name);
+ goto err;
+ }
+
+ tq->buf_info = kcalloc(tq->tx_ring.size, sizeof(tq->buf_info[0]),
+ GFP_KERNEL);
+ if (!tq->buf_info) {
+ printk(KERN_ERR "%s: failed to allocate tx bufinfo\n",
+ adapter->netdev->name);
+ goto err;
+ }
+
+ return 0;
+
+err:
+ vmxnet3_tq_destroy(tq, adapter);
+ return -ENOMEM;
+}
+
+
+/*
+ * starting from ring->next2fill, allocate rx buffers for the given ring
+ * of the rx queue and update the rx desc. stop after @num_to_alloc buffers
+ * are allocated or allocation fails
+ */
+
+static int
+vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,
+ int num_to_alloc, struct vmxnet3_adapter *adapter)
+{
+ int num_allocated = 0;
+ struct vmxnet3_rx_buf_info *rbi_base = rq->buf_info[ring_idx];
+ struct vmxnet3_cmd_ring *ring = &rq->rx_ring[ring_idx];
+ u32 val;
+
+ while (num_allocated < num_to_alloc) {
+ struct vmxnet3_rx_buf_info *rbi;
+ union Vmxnet3_GenericDesc *gd;
+
+ rbi = rbi_base + ring->next2fill;
+ gd = ring->base + ring->next2fill;
+
+ if (rbi->buf_type == VMXNET3_RX_BUF_SKB) {
+ if (rbi->skb == NULL) {
+ rbi->skb = dev_alloc_skb(rbi->len +
+ NET_IP_ALIGN);
+ if (unlikely(rbi->skb == NULL)) {
+ rq->stats.rx_buf_alloc_failure++;
+ break;
+ }
+ rbi->skb->dev = adapter->netdev;
+
+ skb_reserve(rbi->skb, NET_IP_ALIGN);
+ rbi->dma_addr = pci_map_single(adapter->pdev,
+ rbi->skb->data, rbi->len,
+ PCI_DMA_FROMDEVICE);
+ } else {
+ /* rx buffer skipped by the device */
+ }
+ val = VMXNET3_RXD_BTYPE_HEAD << VMXNET3_RXD_BTYPE_SHIFT;
+ } else {
+ BUG_ON(rbi->buf_type != VMXNET3_RX_BUF_PAGE ||
+ rbi->len != PAGE_SIZE);
+
+ if (rbi->page == NULL) {
+ rbi->page = alloc_page(GFP_ATOMIC);
+ if (unlikely(rbi->page == NULL)) {
+ rq->stats.rx_buf_alloc_failure++;
+ break;
+ }
+ rbi->dma_addr = pci_map_page(adapter->pdev,
+ rbi->page, 0, PAGE_SIZE,
+ PCI_DMA_FROMDEVICE);
+ } else {
+ /* rx buffers skipped by the device */
+ }
+ val = VMXNET3_RXD_BTYPE_BODY << VMXNET3_RXD_BTYPE_SHIFT;
+ }
+
+ BUG_ON(rbi->dma_addr == 0);
+ gd->rxd.addr = rbi->dma_addr;
+ gd->dword[2] = (ring->gen << VMXNET3_RXD_GEN_SHIFT) | val |
+ rbi->len;
+
+ num_allocated++;
+ vmxnet3_cmd_ring_adv_next2fill(ring);
+ }
+ rq->uncommitted[ring_idx] += num_allocated;
+
+ dprintk(KERN_ERR "alloc_rx_buf: %d allocated, next2fill %u, next2comp "
+ "%u, uncommitted %u\n", num_allocated, ring->next2fill,
+ ring->next2comp, rq->uncommitted[ring_idx]);
+
+ /* so that the device can distinguish a full ring and an empty ring */
+ BUG_ON(num_allocated != 0 && ring->next2fill == ring->next2comp);
+
+ return num_allocated;
+}
+
+
+static void
+vmxnet3_append_frag(struct sk_buff *skb, struct Vmxnet3_RxCompDesc *rcd,
+ struct vmxnet3_rx_buf_info *rbi)
+{
+ 
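/*
+ * attach the completed rx buffer's page to the skb as its next
+ * page fragment; the BUG_ON below guards against overrunning
+ * the MAX_SKB_FRAGS slots in the frags[] array
+ */
+ 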
+static void
+vmxnet3_append_frag(struct sk_buff *skb, struct Vmxnet3_RxCompDesc *rcd,
+		    struct vmxnet3_rx_buf_info *rbi)
+{
+	struct skb_frag_struct *frag = skb_shinfo(skb)->frags +
+				       skb_shinfo(skb)->nr_frags;
+
+	BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
+
+	frag->page = rbi->page;
+	frag->page_offset = 0;
+	frag->size = rcd->len;
+	skb->data_len += frag->size;
+	skb_shinfo(skb)->nr_frags++;
+}
+
+
+static void
+vmxnet3_map_pkt(struct sk_buff *skb, struct vmxnet3_tx_ctx *ctx,
+		struct vmxnet3_tx_queue *tq, struct pci_dev *pdev,
+		struct vmxnet3_adapter *adapter)
+{
+	u32 dw2, len;
+	unsigned long buf_offset;
+	int i;
+	union Vmxnet3_GenericDesc *gdesc;
+	struct vmxnet3_tx_buf_info *tbi = NULL;
+
+	BUG_ON(ctx->copy_size > skb_headlen(skb));
+
+	/* use the previous gen bit for the SOP desc */
+	dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
+
+	ctx->sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;
+	gdesc = ctx->sop_txd; /* both loops below can be skipped */
+
+	/* no need to map the buffer if headers are copied */
+	if (ctx->copy_size) {
+		ctx->sop_txd->txd.addr = tq->data_ring.basePA +
+					 tq->tx_ring.next2fill *
+					 sizeof(struct Vmxnet3_TxDataDesc);
+		ctx->sop_txd->dword[2] = dw2 | ctx->copy_size;
+		ctx->sop_txd->dword[3] = 0;
+
+		tbi = tq->buf_info + tq->tx_ring.next2fill;
+		tbi->map_type = VMXNET3_MAP_NONE;
+
+		dprintk(KERN_ERR "txd[%u]: 0x%Lx 0x%x 0x%x\n",
+			tq->tx_ring.next2fill, ctx->sop_txd->txd.addr,
+			ctx->sop_txd->dword[2], ctx->sop_txd->dword[3]);
+		vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
+
+		/* use the right gen for non-SOP desc */
+		dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+	}
+
+	/* linear part can use multiple tx desc if it's big */
+	len = skb_headlen(skb) - ctx->copy_size;
+	buf_offset = ctx->copy_size;
+	while (len) {
+		u32 buf_size;
+
+		buf_size = len > VMXNET3_MAX_TX_BUF_SIZE ?
+			   VMXNET3_MAX_TX_BUF_SIZE : len;
+
+		tbi = tq->buf_info + tq->tx_ring.next2fill;
+		tbi->map_type = VMXNET3_MAP_SINGLE;
+		tbi->dma_addr = pci_map_single(adapter->pdev,
+				skb->data + buf_offset, buf_size,
+				PCI_DMA_TODEVICE);
+
+		tbi->len = buf_size; /* this automatically converts 2^14 to 0 */
+
+		gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
+		BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
+
+		gdesc->txd.addr = tbi->dma_addr;
+		gdesc->dword[2] = dw2 | buf_size;
+		gdesc->dword[3] = 0;
+
+		dprintk(KERN_ERR "txd[%u]: 0x%Lx 0x%x 0x%x\n",
+			tq->tx_ring.next2fill, gdesc->txd.addr,
+			gdesc->dword[2], gdesc->dword[3]);
+		vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
+		dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+
+		len -= buf_size;
+		buf_offset += buf_size;
+	}
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+
+		tbi = tq->buf_info + tq->tx_ring.next2fill;
+		tbi->map_type = VMXNET3_MAP_PAGE;
+		tbi->dma_addr = pci_map_page(adapter->pdev, frag->page,
+					     frag->page_offset, frag->size,
+					     PCI_DMA_TODEVICE);
+
+		tbi->len = frag->size;
+
+		gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
+		BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
+
+		gdesc->txd.addr = tbi->dma_addr;
+		gdesc->dword[2] = dw2 | frag->size;
+		gdesc->dword[3] = 0;
+
+		dprintk(KERN_ERR "txd[%u]: 0x%Lx 0x%x 0x%x\n",
+			tq->tx_ring.next2fill, gdesc->txd.addr,
+			gdesc->dword[2], gdesc->dword[3]);
+		vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
+		dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+	}
+
+	ctx->eop_txd = gdesc;
+
+	/* set the last buf_info for the pkt */
+	tbi->skb = skb;
+	tbi->sop_idx = ctx->sop_txd - tq->tx_ring.base;
+}
+
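+/*
+ * [Editor's note] Worth calling out in vmxnet3_map_pkt() above: the
+ * descriptor length field is presumably 14 bits wide, so a buffer of
+ * exactly VMXNET3_MAX_TX_BUF_SIZE (2^14) bytes is stored as 0 -- that
+ * is what the "automatically converts 2^14 to 0" comment refers to.
+ * A sketch of the split arithmetic under that assumption, with an
+ * illustrative name that is not part of this driver:
+ *
+ *	// number of tx descriptors needed for a linear buffer of
+ *	// `len' bytes, each descriptor carrying at most 2^14 bytes
+ *	static u32 example_txd_needed(u32 len)
+ *	{
+ *		return (len + (1 << 14) - 1) >> 14;
+ *	}
+ */
+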
+/*
+ * parse and copy relevant protocol headers:
+ *      For a tso pkt, relevant headers are L2/3/4 including options
+ *      For a pkt requesting csum offloading, they are L2/3 and may include L4
+ *      if it's a TCP/UDP pkt
+ *
+ * Returns:
+ *    -1:  an error occurred during parsing
+ *     0:  protocol headers parsed, but too big to be copied
+ *     1:  protocol headers parsed and copied
+ *
+ * Other effects:
+ *    1. related *ctx fields are updated.
+ *    2. ctx->copy_size is # of bytes copied
+ *    3. the portion copied is guaranteed to be in the linear part
+ *
+ */
+static int
+vmxnet3_parse_and_copy_hdr(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
+			   struct vmxnet3_tx_ctx *ctx,
+			   struct vmxnet3_adapter *adapter)
+{
+	struct Vmxnet3_TxDataDesc *tdd;
+
+	if (ctx->mss) {
+		ctx->eth_ip_hdr_size = skb_transport_offset(skb);
+		ctx->l4_hdr_size = ((struct tcphdr *)
+				    skb_transport_header(skb))->doff * 4;
+		ctx->copy_size = ctx->eth_ip_hdr_size + ctx->l4_hdr_size;
+	} else {
+		unsigned int pull_size;
+
+		if (skb->ip_summed == CHECKSUM_PARTIAL) {
+			ctx->eth_ip_hdr_size = skb_transport_offset(skb);
+
+			if (ctx->ipv4) {
+				struct iphdr *iph = (struct iphdr *)
+						    skb_network_header(skb);
+				if (iph->protocol == IPPROTO_TCP) {
+					pull_size = ctx->eth_ip_hdr_size +
+						    sizeof(struct tcphdr);
+
+					if (unlikely(!pskb_may_pull(skb,
+								pull_size))) {
+						goto err;
+					}
+					ctx->l4_hdr_size = ((struct tcphdr *)
+					   skb_transport_header(skb))->doff * 4;
+				} else if (iph->protocol == IPPROTO_UDP) {
+					ctx->l4_hdr_size =
+							sizeof(struct udphdr);
+				} else {
+					ctx->l4_hdr_size = 0;
+				}
+			} else {
+				/* for simplicity, don't copy L4 headers */
+				ctx->l4_hdr_size = 0;
+			}
+			ctx->copy_size = ctx->eth_ip_hdr_size +
+					 ctx->l4_hdr_size;
+		} else {
+			ctx->eth_ip_hdr_size = 0;
+			ctx->l4_hdr_size = 0;
+			/* copy as much as allowed */
+			ctx->copy_size = min((unsigned int)VMXNET3_HDR_COPY_SIZE,
+					     skb_headlen(skb));
+		}
+
+		/* make sure headers are accessible directly */
+		if (unlikely(!pskb_may_pull(skb, ctx->copy_size)))
+			goto err;
+	}
+
+	if (unlikely(ctx->copy_size > VMXNET3_HDR_COPY_SIZE)) {
+		tq->stats.oversized_hdr++;
+		ctx->copy_size = 0;
+		return 0;
+	}
+
+	tdd = tq->data_ring.base + tq->tx_ring.next2fill;
+
+	memcpy(tdd->data, skb->data, ctx->copy_size);
+	dprintk(KERN_ERR "copy %u bytes to dataRing[%u]\n",
+		ctx->copy_size, tq->tx_ring.next2fill);
+	return 1;
+
+err:
+	return -1;
+}
+
+
+static void
+vmxnet3_prepare_tso(struct sk_buff *skb,
+		    struct vmxnet3_tx_ctx *ctx)
+{
+	struct tcphdr *tcph = (struct tcphdr *)skb_transport_header(skb);
+	if (ctx->ipv4) {
+		struct iphdr *iph = (struct iphdr *)skb_network_header(skb);
+		iph->check = 0;
+		tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, 0,
+						 IPPROTO_TCP, 0);
+	} else {
+		struct ipv6hdr *iph = (struct ipv6hdr *)skb_network_header(skb);
+		tcph->check = ~csum_ipv6_magic(&iph->saddr, &iph->daddr, 0,
+					       IPPROTO_TCP, 0);
+	}
+}
+
+
+/*
+ * Transmits a pkt through a given tq
+ * Returns:
+ *    NETDEV_TX_OK:      descriptors are setup successfully
+ *    NETDEV_TX_OK:      error occurred, the pkt is dropped
+ *    NETDEV_TX_BUSY:    tx ring is full, queue is stopped
+ *
+ * Side-effects:
+ *    1. tx ring may be changed
+ *    2. tq stats may be updated accordingly
+ *    3.
shared->txNumDeferred may be updated + */ + +static int +vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq, + struct vmxnet3_adapter *adapter, struct net_device *netdev) +{ + int ret; + u32 count; + unsigned long flags; + struct vmxnet3_tx_ctx ctx; + union Vmxnet3_GenericDesc *gdesc; + + /* conservatively estimate # of descriptors to use */ + count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + + skb_shinfo(skb)->nr_frags + 1; + + ctx.ipv4 = (skb->protocol == __constant_ntohs(ETH_P_IP)); + + ctx.mss = skb_shinfo(skb)->gso_size; + if (ctx.mss) { + if (skb_header_cloned(skb)) { + if (unlikely(pskb_expand_head(skb, 0, 0, + GFP_ATOMIC) != 0)) { + tq->stats.drop_tso++; + goto drop_pkt; + } + tq->stats.copy_skb_header++; + } + vmxnet3_prepare_tso(skb, &ctx); + } else { + if (unlikely(count > VMXNET3_MAX_TXD_PER_PKT)) { + + /* non-tso pkts must not use more than + * VMXNET3_MAX_TXD_PER_PKT entries + */ + if (skb_linearize(skb) != 0) { + tq->stats.drop_too_many_frags++; + goto drop_pkt; + } + tq->stats.linearized++; + + /* recalculate the # of descriptors to use */ + count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + 1; + } + } + + ret = vmxnet3_parse_and_copy_hdr(skb, tq, &ctx, adapter); + if (ret >= 0) { + BUG_ON(ret <= 0 && ctx.copy_size != 0); + /* hdrs parsed, check against other limits */ + if (ctx.mss) { + if (unlikely(ctx.eth_ip_hdr_size + ctx.l4_hdr_size > + VMXNET3_MAX_TX_BUF_SIZE)) { + goto hdr_too_big; + } + } else { + if (skb->ip_summed == CHECKSUM_PARTIAL) { + if (unlikely(ctx.eth_ip_hdr_size + + skb->csum_offset > + VMXNET3_MAX_CSUM_OFFSET)) { + goto hdr_too_big; + } + } + } + } else { + tq->stats.drop_hdr_inspect_err++; + goto drop_pkt; + } + + spin_lock_irqsave(&tq->tx_lock, flags); + + if (count > vmxnet3_cmd_ring_desc_avail(&tq->tx_ring)) { + tq->stats.tx_ring_full++; + dprintk(KERN_ERR "tx queue stopped on %s, next2comp %u" + " next2fill %u\n", adapter->netdev->name, + tq->tx_ring.next2comp, tq->tx_ring.next2fill); + + vmxnet3_tq_stop(tq, adapter); + spin_unlock_irqrestore(&tq->tx_lock, flags); + return NETDEV_TX_BUSY; + } + + /* fill tx descs related to addr & len */ + vmxnet3_map_pkt(skb, &ctx, tq, adapter->pdev, adapter); + + /* setup the EOP desc */ + ctx.eop_txd->dword[3] = VMXNET3_TXD_CQ | VMXNET3_TXD_EOP; + + /* setup the SOP desc */ + gdesc = ctx.sop_txd; + if (ctx.mss) { + gdesc->txd.hlen = ctx.eth_ip_hdr_size + ctx.l4_hdr_size; + gdesc->txd.om = VMXNET3_OM_TSO; + gdesc->txd.msscof = ctx.mss; + tq->shared->txNumDeferred += (skb->len - gdesc->txd.hlen + + ctx.mss - 1) / ctx.mss; + } else { + if (skb->ip_summed == CHECKSUM_PARTIAL) { + gdesc->txd.hlen = ctx.eth_ip_hdr_size; + gdesc->txd.om = VMXNET3_OM_CSUM; + gdesc->txd.msscof = ctx.eth_ip_hdr_size + + skb->csum_offset; + } else { + gdesc->txd.om = 0; + gdesc->txd.msscof = 0; + } + tq->shared->txNumDeferred++; + } + + if (vlan_tx_tag_present(skb)) { + gdesc->txd.ti = 1; + gdesc->txd.tci = vlan_tx_tag_get(skb); + } + + wmb(); + + /* finally flips the GEN bit of the SOP desc */ + gdesc->dword[2] ^= VMXNET3_TXD_GEN; + dprintk(KERN_ERR "txd[%u]: SOP 0x%Lx 0x%x 0x%x\n", + (u32)((union Vmxnet3_GenericDesc *)ctx.sop_txd - + tq->tx_ring.base), gdesc->txd.addr, gdesc->dword[2], + gdesc->dword[3]); + + spin_unlock_irqrestore(&tq->tx_lock, flags); + + if (tq->shared->txNumDeferred >= tq->shared->txThreshold) { + tq->shared->txNumDeferred = 0; + VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_TXPROD, + tq->tx_ring.next2fill); + } + netdev->trans_start = jiffies; + + return NETDEV_TX_OK; + +hdr_too_big: + 
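+	/*
+	 * [Editor's note] For TSO the code above charges
+	 * (skb->len - hlen + mss - 1) / mss deferred doorbell credits,
+	 * i.e. the ceiling of payload/mss -- one credit per segment the
+	 * device will emit.  Worked example: a 9000-byte skb with 54 bytes
+	 * of headers and mss 1460 defers ceil(8946/1460) = 7.  The doorbell
+	 * (VMXNET3_REG_TXPROD) is only rung once txNumDeferred crosses
+	 * txThreshold, batching MMIO writes.  Note also that the drop paths
+	 * below return NETDEV_TX_OK rather than an error: returning
+	 * NETDEV_TX_BUSY would make the stack requeue an skb that can
+	 * never succeed.
+	 */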
tq->stats.drop_oversized_hdr++; +drop_pkt: + tq->stats.drop_total++; + dev_kfree_skb(skb); + return NETDEV_TX_OK; +} + + +static netdev_tx_t +vmxnet3_xmit_frame(struct sk_buff *skb, struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + struct vmxnet3_tx_queue *tq = &adapter->tx_queue; + + return vmxnet3_tq_xmit(skb, tq, adapter, netdev); +} + + +static void +vmxnet3_rx_csum(struct vmxnet3_adapter *adapter, + struct sk_buff *skb, + union Vmxnet3_GenericDesc *gdesc) +{ + if (!gdesc->rcd.cnc && adapter->rxcsum) { + /* typical case: TCP/UDP over IP and both csums are correct */ + if ((gdesc->dword[3] & VMXNET3_RCD_CSUM_OK) == + VMXNET3_RCD_CSUM_OK) { + skb->ip_summed = CHECKSUM_UNNECESSARY; + BUG_ON(!(gdesc->rcd.tcp || gdesc->rcd.udp)); + BUG_ON(!(gdesc->rcd.v4 || gdesc->rcd.v6)); + BUG_ON(gdesc->rcd.frg); + } else { + if (gdesc->rcd.csum) { + skb->csum = htons(gdesc->rcd.csum); + skb->ip_summed = CHECKSUM_PARTIAL; + } else { + skb->ip_summed = CHECKSUM_NONE; + } + } + } else { + skb->ip_summed = CHECKSUM_NONE; + } +} + + +static void +vmxnet3_rx_error(struct vmxnet3_rx_queue *rq, struct Vmxnet3_RxCompDesc *rcd, + struct vmxnet3_rx_ctx *ctx, struct vmxnet3_adapter *adapter) +{ + rq->stats.drop_err++; + if (!rcd->fcs) + rq->stats.drop_fcs++; + + rq->stats.drop_total++; + + /* + * We do not unmap and chain the rx buffer to the skb. + * We basically pretend this buffer is not used and will be recycled + * by vmxnet3_rq_alloc_rx_buf() + */ + + /* + * ctx->skb may be NULL if this is the first and the only one + * desc for the pkt + */ + if (ctx->skb) + dev_kfree_skb_irq(ctx->skb); + + ctx->skb = NULL; +} + + +static int +vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq, + struct vmxnet3_adapter *adapter, int quota) +{ + static u32 rxprod_reg[2] = {VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2}; + u32 num_rxd = 0; + struct Vmxnet3_RxCompDesc *rcd; + struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx; + + rcd = &rq->comp_ring.base[rq->comp_ring.next2proc].rcd; + while (rcd->gen == rq->comp_ring.gen) { + struct vmxnet3_rx_buf_info *rbi; + struct sk_buff *skb; + int num_to_alloc; + struct Vmxnet3_RxDesc *rxd; + u32 idx, ring_idx; + + if (num_rxd >= quota) { + /* we may stop even before we see the EOP desc of + * the current pkt + */ + break; + } + num_rxd++; + + idx = rcd->rxdIdx; + ring_idx = rcd->rqID == rq->qid ? 0 : 1; + + rxd = &rq->rx_ring[ring_idx].base[idx].rxd; + rbi = rq->buf_info[ring_idx] + idx; + + BUG_ON(rxd->addr != rbi->dma_addr || rxd->len != rbi->len); + + if (unlikely(rcd->eop && rcd->err)) { + vmxnet3_rx_error(rq, rcd, ctx, adapter); + goto rcd_done; + } + + if (rcd->sop) { /* first buf of the pkt */ + BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_HEAD || + rcd->rqID != rq->qid); + + BUG_ON(rbi->buf_type != VMXNET3_RX_BUF_SKB); + BUG_ON(ctx->skb != NULL || rbi->skb == NULL); + + if (unlikely(rcd->len == 0)) { + /* Pretend the rx buffer is skipped. 
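+			 * [Editor's note] The device can complete a
+			 * descriptor without consuming its buffer
+			 * (rcd->len == 0); the buffer stays DMA-mapped and
+			 * rbi->skb is kept, so the fill loop in
+			 * vmxnet3_rq_alloc_rx_buf() hands the same buffer
+			 * back to the device on its next pass -- the
+			 * "rx buffer skipped by the device" branches above.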
*/ + BUG_ON(!(rcd->sop && rcd->eop)); + dprintk(KERN_ERR "rxRing[%u][%u] 0 length\n", + ring_idx, idx); + goto rcd_done; + } + + ctx->skb = rbi->skb; + rbi->skb = NULL; + + pci_unmap_single(adapter->pdev, rbi->dma_addr, rbi->len, + PCI_DMA_FROMDEVICE); + + skb_put(ctx->skb, rcd->len); + } else { + BUG_ON(ctx->skb == NULL); + /* non SOP buffer must be type 1 in most cases */ + if (rbi->buf_type == VMXNET3_RX_BUF_PAGE) { + BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_BODY); + + if (rcd->len) { + pci_unmap_page(adapter->pdev, + rbi->dma_addr, rbi->len, + PCI_DMA_FROMDEVICE); + + vmxnet3_append_frag(ctx->skb, rcd, rbi); + rbi->page = NULL; + } + } else { + /* + * The only time a non-SOP buffer is type 0 is + * when it's EOP and error flag is raised, which + * has already been handled. + */ + BUG_ON(true); + } + } + + skb = ctx->skb; + if (rcd->eop) { + skb->len += skb->data_len; + skb->truesize += skb->data_len; + + vmxnet3_rx_csum(adapter, skb, + (union Vmxnet3_GenericDesc *)rcd); + skb->protocol = eth_type_trans(skb, adapter->netdev); + + if (unlikely(adapter->vlan_grp && rcd->ts)) { + vlan_hwaccel_receive_skb(skb, + adapter->vlan_grp, rcd->tci); + } else { + netif_receive_skb(skb); + } + + adapter->netdev->last_rx = jiffies; + ctx->skb = NULL; + } + +rcd_done: + /* device may skip some rx descs */ + rq->rx_ring[ring_idx].next2comp = idx; + VMXNET3_INC_RING_IDX_ONLY(rq->rx_ring[ring_idx].next2comp, + rq->rx_ring[ring_idx].size); + + /* refill rx buffers frequently to avoid starving the h/w */ + num_to_alloc = vmxnet3_cmd_ring_desc_avail(rq->rx_ring + + ring_idx); + if (unlikely(num_to_alloc > VMXNET3_RX_ALLOC_THRESHOLD(rq, + ring_idx, adapter))) { + vmxnet3_rq_alloc_rx_buf(rq, ring_idx, num_to_alloc, + adapter); + + /* if needed, update the register */ + if (unlikely(rq->shared->updateRxProd)) { + VMXNET3_WRITE_BAR0_REG(adapter, + rxprod_reg[ring_idx] + rq->qid * 8, + rq->rx_ring[ring_idx].next2fill); + rq->uncommitted[ring_idx] = 0; + } + } + + vmxnet3_comp_ring_adv_next2proc(&rq->comp_ring); + rcd = &rq->comp_ring.base[rq->comp_ring.next2proc].rcd; + } + + return num_rxd; +} + + +static void +vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq, + struct vmxnet3_adapter *adapter) +{ + u32 i, ring_idx; + struct Vmxnet3_RxDesc *rxd; + + for (ring_idx = 0; ring_idx < 2; ring_idx++) { + for (i = 0; i < rq->rx_ring[ring_idx].size; i++) { + rxd = &rq->rx_ring[ring_idx].base[i].rxd; + + if (rxd->btype == VMXNET3_RXD_BTYPE_HEAD && + rq->buf_info[ring_idx][i].skb) { + pci_unmap_single(adapter->pdev, rxd->addr, + rxd->len, PCI_DMA_FROMDEVICE); + dev_kfree_skb(rq->buf_info[ring_idx][i].skb); + rq->buf_info[ring_idx][i].skb = NULL; + } else if (rxd->btype == VMXNET3_RXD_BTYPE_BODY && + rq->buf_info[ring_idx][i].page) { + pci_unmap_page(adapter->pdev, rxd->addr, + rxd->len, PCI_DMA_FROMDEVICE); + put_page(rq->buf_info[ring_idx][i].page); + rq->buf_info[ring_idx][i].page = NULL; + } + } + + rq->rx_ring[ring_idx].gen = VMXNET3_INIT_GEN; + rq->rx_ring[ring_idx].next2fill = + rq->rx_ring[ring_idx].next2comp = 0; + rq->uncommitted[ring_idx] = 0; + } + + rq->comp_ring.gen = VMXNET3_INIT_GEN; + rq->comp_ring.next2proc = 0; +} + + +void vmxnet3_rq_destroy(struct vmxnet3_rx_queue *rq, + struct vmxnet3_adapter *adapter) +{ + int i; + int j; + + /* all rx buffers must have already been freed */ + for (i = 0; i < 2; i++) { + if (rq->buf_info[i]) { + for (j = 0; j < rq->rx_ring[i].size; j++) + BUG_ON(rq->buf_info[i][j].page != NULL); + } + } + + + kfree(rq->buf_info[0]); + + for (i = 0; i < 2; i++) { + if (rq->rx_ring[i].base) 
{ + pci_free_consistent(adapter->pdev, rq->rx_ring[i].size + * sizeof(struct Vmxnet3_RxDesc), + rq->rx_ring[i].base, + rq->rx_ring[i].basePA); + rq->rx_ring[i].base = NULL; + } + rq->buf_info[i] = NULL; + } + + if (rq->comp_ring.base) { + pci_free_consistent(adapter->pdev, rq->comp_ring.size * + sizeof(struct Vmxnet3_RxCompDesc), + rq->comp_ring.base, rq->comp_ring.basePA); + rq->comp_ring.base = NULL; + } +} + + +static int +vmxnet3_rq_init(struct vmxnet3_rx_queue *rq, + struct vmxnet3_adapter *adapter) +{ + int i; + + /* initialize buf_info */ + for (i = 0; i < rq->rx_ring[0].size; i++) { + + /* 1st buf for a pkt is skbuff */ + if (i % adapter->rx_buf_per_pkt == 0) { + rq->buf_info[0][i].buf_type = VMXNET3_RX_BUF_SKB; + rq->buf_info[0][i].len = adapter->skb_buf_size; + } else { /* subsequent bufs for a pkt is frag */ + rq->buf_info[0][i].buf_type = VMXNET3_RX_BUF_PAGE; + rq->buf_info[0][i].len = PAGE_SIZE; + } + } + for (i = 0; i < rq->rx_ring[1].size; i++) { + rq->buf_info[1][i].buf_type = VMXNET3_RX_BUF_PAGE; + rq->buf_info[1][i].len = PAGE_SIZE; + } + + /* reset internal state and allocate buffers for both rings */ + for (i = 0; i < 2; i++) { + rq->rx_ring[i].next2fill = rq->rx_ring[i].next2comp = 0; + rq->uncommitted[i] = 0; + + memset(rq->rx_ring[i].base, 0, rq->rx_ring[i].size * + sizeof(struct Vmxnet3_RxDesc)); + rq->rx_ring[i].gen = VMXNET3_INIT_GEN; + } + if (vmxnet3_rq_alloc_rx_buf(rq, 0, rq->rx_ring[0].size - 1, + adapter) == 0) { + /* at least has 1 rx buffer for the 1st ring */ + return -ENOMEM; + } + vmxnet3_rq_alloc_rx_buf(rq, 1, rq->rx_ring[1].size - 1, adapter); + + /* reset the comp ring */ + rq->comp_ring.next2proc = 0; + memset(rq->comp_ring.base, 0, rq->comp_ring.size * + sizeof(struct Vmxnet3_RxCompDesc)); + rq->comp_ring.gen = VMXNET3_INIT_GEN; + + /* reset rxctx */ + rq->rx_ctx.skb = NULL; + + /* stats are not reset */ + return 0; +} + + +static int +vmxnet3_rq_create(struct vmxnet3_rx_queue *rq, struct vmxnet3_adapter *adapter) +{ + int i; + size_t sz; + struct vmxnet3_rx_buf_info *bi; + + for (i = 0; i < 2; i++) { + + sz = rq->rx_ring[i].size * sizeof(struct Vmxnet3_RxDesc); + rq->rx_ring[i].base = pci_alloc_consistent(adapter->pdev, sz, + &rq->rx_ring[i].basePA); + if (!rq->rx_ring[i].base) { + printk(KERN_ERR "%s: failed to allocate rx ring %d\n", + adapter->netdev->name, i); + goto err; + } + } + + sz = rq->comp_ring.size * sizeof(struct Vmxnet3_RxCompDesc); + rq->comp_ring.base = pci_alloc_consistent(adapter->pdev, sz, + &rq->comp_ring.basePA); + if (!rq->comp_ring.base) { + printk(KERN_ERR "%s: failed to allocate rx comp ring\n", + adapter->netdev->name); + goto err; + } + + sz = sizeof(struct vmxnet3_rx_buf_info) * (rq->rx_ring[0].size + + rq->rx_ring[1].size); + bi = kmalloc(sz, GFP_KERNEL); + if (!bi) { + printk(KERN_ERR "%s: failed to allocate rx bufinfo\n", + adapter->netdev->name); + goto err; + } + memset(bi, 0, sz); + rq->buf_info[0] = bi; + rq->buf_info[1] = bi + rq->rx_ring[0].size; + + return 0; + +err: + vmxnet3_rq_destroy(rq, adapter); + return -ENOMEM; +} + + +static int +vmxnet3_do_poll(struct vmxnet3_adapter *adapter, int budget) +{ + if (unlikely(adapter->shared->ecr)) + vmxnet3_process_events(adapter); + + vmxnet3_tq_tx_complete(&adapter->tx_queue, adapter); + return vmxnet3_rq_rx_complete(&adapter->rx_queue, adapter, budget); +} + + +static int +vmxnet3_poll(struct napi_struct *napi, int budget) +{ + struct vmxnet3_adapter *adapter = container_of(napi, + struct vmxnet3_adapter, napi); + int rxd_done; + + rxd_done = 
vmxnet3_do_poll(adapter, budget); + + if (rxd_done < budget) { + napi_complete(napi); + vmxnet3_enable_intr(adapter, 0); + } + return rxd_done; +} + + +/* Interrupt handler for vmxnet3 */ +static irqreturn_t +vmxnet3_intr(int irq, void *dev_id) +{ + struct net_device *dev = dev_id; + struct vmxnet3_adapter *adapter = netdev_priv(dev); + + if (unlikely(adapter->intr.type == VMXNET3_IT_INTX)) { + u32 icr = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_ICR); + if (unlikely(icr == 0)) + /* not ours */ + return IRQ_NONE; + } + + + /* disable intr if needed */ + if (adapter->intr.mask_mode == VMXNET3_IMM_ACTIVE) + vmxnet3_disable_intr(adapter, 0); + + napi_schedule(&adapter->napi); + + return IRQ_HANDLED; +} + +#ifdef CONFIG_NET_POLL_CONTROLLER + + +/* netpoll callback. */ +static void +vmxnet3_netpoll(struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + int irq; + + if (adapter->intr.type == VMXNET3_IT_MSIX) + irq = adapter->intr.msix_entries[0].vector; + else + irq = adapter->pdev->irq; + + disable_irq(irq); + vmxnet3_intr(irq, netdev); + enable_irq(irq); +} +#endif + +static int +vmxnet3_request_irqs(struct vmxnet3_adapter *adapter) +{ + int err; + + if (adapter->intr.type == VMXNET3_IT_MSIX) { + /* we only use 1 MSI-X vector */ + err = request_irq(adapter->intr.msix_entries[0].vector, + vmxnet3_intr, 0, adapter->netdev->name, + adapter->netdev); + } else if (adapter->intr.type == VMXNET3_IT_MSI) { + err = request_irq(adapter->pdev->irq, vmxnet3_intr, 0, + adapter->netdev->name, adapter->netdev); + } else { + err = request_irq(adapter->pdev->irq, vmxnet3_intr, + IRQF_SHARED, adapter->netdev->name, + adapter->netdev); + } + + if (err) + printk(KERN_ERR "Failed to request irq %s (intr type:%d), error" + ":%d\n", adapter->netdev->name, adapter->intr.type, err); + + + if (!err) { + int i; + /* init our intr settings */ + for (i = 0; i < adapter->intr.num_intrs; i++) + adapter->intr.mod_levels[i] = UPT1_IML_ADAPTIVE; + + /* next setup intr index for all intr sources */ + adapter->tx_queue.comp_ring.intr_idx = 0; + adapter->rx_queue.comp_ring.intr_idx = 0; + adapter->intr.event_intr_idx = 0; + + printk(KERN_INFO "%s: intr type %u, mode %u, %u vectors " + "allocated\n", adapter->netdev->name, adapter->intr.type, + adapter->intr.mask_mode, adapter->intr.num_intrs); + } + + return err; +} + + +static void +vmxnet3_free_irqs(struct vmxnet3_adapter *adapter) +{ + BUG_ON(adapter->intr.type == VMXNET3_IT_AUTO || + adapter->intr.num_intrs <= 0); + + switch (adapter->intr.type) { + case VMXNET3_IT_MSIX: + { + int i; + + for (i = 0; i < adapter->intr.num_intrs; i++) + free_irq(adapter->intr.msix_entries[i].vector, + adapter->netdev); + break; + } + case VMXNET3_IT_MSI: + free_irq(adapter->pdev->irq, adapter->netdev); + break; + case VMXNET3_IT_INTX: + free_irq(adapter->pdev->irq, adapter->netdev); + break; + default: + BUG_ON(true); + } +} + + +static void +vmxnet3_vlan_rx_register(struct net_device *netdev, struct vlan_group *grp) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + struct Vmxnet3_DriverShared *shared = adapter->shared; + u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable; + + if (grp) { + /* add vlan rx stripping. 
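+		 * [Editor's note] vfTable below is the device's VLAN filter
+		 * table: VMXNET3_VFT_SIZE u32 words treated as a bitmap over
+		 * the 4096 possible VLAN IDs, with VID 0 standing in for
+		 * untagged frames.  Presumably VMXNET3_SET_VFTABLE_ENTRY()
+		 * reduces to something like
+		 *
+		 *	vfTable[vid >> 5] |= 1 << (vid & 31);
+		 *
+		 * -- an assumption for illustration; the macro's definition
+		 * in the driver headers is authoritative.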
*/ + if (adapter->netdev->features & NETIF_F_HW_VLAN_RX) { + int i; + struct Vmxnet3_DSDevRead *devRead = &shared->devRead; + adapter->vlan_grp = grp; + + /* update FEATURES to device */ + devRead->misc.uptFeatures |= UPT1_F_RXVLAN; + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_FEATURE); + /* + * Clear entire vfTable; then enable untagged pkts. + * Note: setting one entry in vfTable to non-zero turns + * on VLAN rx filtering. + */ + for (i = 0; i < VMXNET3_VFT_SIZE; i++) + vfTable[i] = 0; + + VMXNET3_SET_VFTABLE_ENTRY(vfTable, 0); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_VLAN_FILTERS); + } else { + printk(KERN_ERR "%s: vlan_rx_register when device has " + "no NETIF_F_HW_VLAN_RX\n", netdev->name); + } + } else { + /* remove vlan rx stripping. */ + struct Vmxnet3_DSDevRead *devRead = &shared->devRead; + adapter->vlan_grp = NULL; + + if (devRead->misc.uptFeatures & UPT1_F_RXVLAN) { + int i; + + for (i = 0; i < VMXNET3_VFT_SIZE; i++) { + /* clear entire vfTable; this also disables + * VLAN rx filtering + */ + vfTable[i] = 0; + } + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_VLAN_FILTERS); + + /* update FEATURES to device */ + devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN; + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_FEATURE); + } + } +} + + +static void +vmxnet3_restore_vlan(struct vmxnet3_adapter *adapter) +{ + if (adapter->vlan_grp) { + u16 vid; + u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable; + bool activeVlan = false; + + for (vid = 0; vid < VLAN_GROUP_ARRAY_LEN; vid++) { + if (vlan_group_get_device(adapter->vlan_grp, vid)) { + VMXNET3_SET_VFTABLE_ENTRY(vfTable, vid); + activeVlan = true; + } + } + if (activeVlan) { + /* continue to allow untagged pkts */ + VMXNET3_SET_VFTABLE_ENTRY(vfTable, 0); + } + } +} + + +static void +vmxnet3_vlan_rx_add_vid(struct net_device *netdev, u16 vid) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable; + + VMXNET3_SET_VFTABLE_ENTRY(vfTable, vid); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_VLAN_FILTERS); +} + + +static void +vmxnet3_vlan_rx_kill_vid(struct net_device *netdev, u16 vid) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + u32 *vfTable = adapter->shared->devRead.rxFilterConf.vfTable; + + VMXNET3_CLEAR_VFTABLE_ENTRY(vfTable, vid); + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_VLAN_FILTERS); +} + + +static u8 * +vmxnet3_copy_mc(struct net_device *netdev) +{ + u8 *buf = NULL; + u32 sz = netdev->mc_count * ETH_ALEN; + + /* struct Vmxnet3_RxFilterConf.mfTableLen is u16. 
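+	 * [Editor's note] Hence the 0xffff bound below: with ETH_ALEN == 6
+	 * the flat table can describe at most 0xffff / 6 = 10922 multicast
+	 * addresses before its byte length overflows the u16; when the copy
+	 * fails, vmxnet3_set_mc() falls back to ALL_MULTI instead.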
*/ + if (sz <= 0xffff) { + /* We may be called with BH disabled */ + buf = kmalloc(sz, GFP_ATOMIC); + if (buf) { + int i; + struct dev_mc_list *mc = netdev->mc_list; + + for (i = 0; i < netdev->mc_count; i++) { + BUG_ON(!mc); + memcpy(buf + i * ETH_ALEN, mc->dmi_addr, + ETH_ALEN); + mc = mc->next; + } + } + } + return buf; +} + + +static void +vmxnet3_set_mc(struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + struct Vmxnet3_RxFilterConf *rxConf = + &adapter->shared->devRead.rxFilterConf; + u8 *new_table = NULL; + u32 new_mode = VMXNET3_RXM_UCAST; + + if (netdev->flags & IFF_PROMISC) + new_mode |= VMXNET3_RXM_PROMISC; + + if (netdev->flags & IFF_BROADCAST) + new_mode |= VMXNET3_RXM_BCAST; + + if (netdev->flags & IFF_ALLMULTI) + new_mode |= VMXNET3_RXM_ALL_MULTI; + else + if (netdev->mc_count > 0) { + new_table = vmxnet3_copy_mc(netdev); + if (new_table) { + new_mode |= VMXNET3_RXM_MCAST; + rxConf->mfTableLen = netdev->mc_count * + ETH_ALEN; + rxConf->mfTablePA = virt_to_phys(new_table); + } else { + printk(KERN_INFO "%s: failed to copy mcast list" + ", setting ALL_MULTI\n", netdev->name); + new_mode |= VMXNET3_RXM_ALL_MULTI; + } + } + + + if (!(new_mode & VMXNET3_RXM_MCAST)) { + rxConf->mfTableLen = 0; + rxConf->mfTablePA = 0; + } + + if (new_mode != rxConf->rxMode) { + rxConf->rxMode = new_mode; + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_RX_MODE); + } + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_MAC_FILTERS); + + kfree(new_table); +} + + +/* + * Set up driver_shared based on settings in adapter. + */ + +static void +vmxnet3_setup_driver_shared(struct vmxnet3_adapter *adapter) +{ + struct Vmxnet3_DriverShared *shared = adapter->shared; + struct Vmxnet3_DSDevRead *devRead = &shared->devRead; + struct Vmxnet3_TxQueueConf *tqc; + struct Vmxnet3_RxQueueConf *rqc; + int i; + + memset(shared, 0, sizeof(*shared)); + + /* driver settings */ + shared->magic = VMXNET3_REV1_MAGIC; + devRead->misc.driverInfo.version = VMXNET3_DRIVER_VERSION_NUM; + devRead->misc.driverInfo.gos.gosBits = (sizeof(void *) == 4 ? 
+ VMXNET3_GOS_BITS_32 : VMXNET3_GOS_BITS_64); + devRead->misc.driverInfo.gos.gosType = VMXNET3_GOS_TYPE_LINUX; + devRead->misc.driverInfo.vmxnet3RevSpt = 1; + devRead->misc.driverInfo.uptVerSpt = 1; + + devRead->misc.ddPA = virt_to_phys(adapter); + devRead->misc.ddLen = sizeof(struct vmxnet3_adapter); + + /* set up feature flags */ + if (adapter->rxcsum) + devRead->misc.uptFeatures |= UPT1_F_RXCSUM; + + if (adapter->lro) { + devRead->misc.uptFeatures |= UPT1_F_LRO; + devRead->misc.maxNumRxSG = 1 + MAX_SKB_FRAGS; + } + if ((adapter->netdev->features & NETIF_F_HW_VLAN_RX) + && adapter->vlan_grp) { + devRead->misc.uptFeatures |= UPT1_F_RXVLAN; + } + + devRead->misc.mtu = adapter->netdev->mtu; + devRead->misc.queueDescPA = adapter->queue_desc_pa; + devRead->misc.queueDescLen = sizeof(struct Vmxnet3_TxQueueDesc) + + sizeof(struct Vmxnet3_RxQueueDesc); + + /* tx queue settings */ + BUG_ON(adapter->tx_queue.tx_ring.base == NULL); + + devRead->misc.numTxQueues = 1; + tqc = &adapter->tqd_start->conf; + tqc->txRingBasePA = adapter->tx_queue.tx_ring.basePA; + tqc->dataRingBasePA = adapter->tx_queue.data_ring.basePA; + tqc->compRingBasePA = adapter->tx_queue.comp_ring.basePA; + tqc->ddPA = virt_to_phys(adapter->tx_queue.buf_info); + tqc->txRingSize = adapter->tx_queue.tx_ring.size; + tqc->dataRingSize = adapter->tx_queue.data_ring.size; + tqc->compRingSize = adapter->tx_queue.comp_ring.size; + tqc->ddLen = sizeof(struct vmxnet3_tx_buf_info) * + tqc->txRingSize; + tqc->intrIdx = adapter->tx_queue.comp_ring.intr_idx; + + /* rx queue settings */ + devRead->misc.numRxQueues = 1; + rqc = &adapter->rqd_start->conf; + rqc->rxRingBasePA[0] = adapter->rx_queue.rx_ring[0].basePA; + rqc->rxRingBasePA[1] = adapter->rx_queue.rx_ring[1].basePA; + rqc->compRingBasePA = adapter->rx_queue.comp_ring.basePA; + rqc->ddPA = virt_to_phys(adapter->rx_queue.buf_info); + rqc->rxRingSize[0] = adapter->rx_queue.rx_ring[0].size; + rqc->rxRingSize[1] = adapter->rx_queue.rx_ring[1].size; + rqc->compRingSize = adapter->rx_queue.comp_ring.size; + rqc->ddLen = sizeof(struct vmxnet3_rx_buf_info) * + (rqc->rxRingSize[0] + rqc->rxRingSize[1]); + rqc->intrIdx = adapter->rx_queue.comp_ring.intr_idx; + + /* intr settings */ + devRead->intrConf.autoMask = adapter->intr.mask_mode == + VMXNET3_IMM_AUTO; + devRead->intrConf.numIntrs = adapter->intr.num_intrs; + for (i = 0; i < adapter->intr.num_intrs; i++) + devRead->intrConf.modLevels[i] = adapter->intr.mod_levels[i]; + + devRead->intrConf.eventIntrIdx = adapter->intr.event_intr_idx; + + /* rx filter settings */ + devRead->rxFilterConf.rxMode = 0; + vmxnet3_restore_vlan(adapter); + /* the rest are already zeroed */ +} + + +int +vmxnet3_activate_dev(struct vmxnet3_adapter *adapter) +{ + int err; + u32 ret; + + dprintk(KERN_ERR "%s: skb_buf_size %d, rx_buf_per_pkt %d, ring sizes" + " %u %u %u\n", adapter->netdev->name, adapter->skb_buf_size, + adapter->rx_buf_per_pkt, adapter->tx_queue.tx_ring.size, + adapter->rx_queue.rx_ring[0].size, + adapter->rx_queue.rx_ring[1].size); + + vmxnet3_tq_init(&adapter->tx_queue, adapter); + err = vmxnet3_rq_init(&adapter->rx_queue, adapter); + if (err) { + printk(KERN_ERR "Failed to init rx queue for %s: error %d\n", + adapter->netdev->name, err); + goto rq_err; + } + + err = vmxnet3_request_irqs(adapter); + if (err) { + printk(KERN_ERR "Failed to setup irq for %s: error %d\n", + adapter->netdev->name, err); + goto irq_err; + } + + vmxnet3_setup_driver_shared(adapter); + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAL, + 
VMXNET3_GET_ADDR_LO(adapter->shared_pa));
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAH,
+			       VMXNET3_GET_ADDR_HI(adapter->shared_pa));
+
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+			       VMXNET3_CMD_ACTIVATE_DEV);
+	ret = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD);
+
+	if (ret != 0) {
+		printk(KERN_ERR "Failed to activate dev %s: error %u\n",
+		       adapter->netdev->name, ret);
+		err = -EINVAL;
+		goto activate_err;
+	}
+	VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_RXPROD,
+			       adapter->rx_queue.rx_ring[0].next2fill);
+	VMXNET3_WRITE_BAR0_REG(adapter, VMXNET3_REG_RXPROD2,
+			       adapter->rx_queue.rx_ring[1].next2fill);
+
+	/* Apply the rx filter settings last. */
+	vmxnet3_set_mc(adapter->netdev);
+
+	/*
+	 * Check link state when first activating device. It will start the
+	 * tx queue if the link is up.
+	 */
+	vmxnet3_check_link(adapter);
+
+	napi_enable(&adapter->napi);
+	vmxnet3_enable_all_intrs(adapter);
+	clear_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state);
+	return 0;
+
+activate_err:
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAL, 0);
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_DSAH, 0);
+	vmxnet3_free_irqs(adapter);
+irq_err:
+rq_err:
+	/* free up buffers we allocated */
+	vmxnet3_rq_cleanup(&adapter->rx_queue, adapter);
+	return err;
+}
+
+
+void
+vmxnet3_reset_dev(struct vmxnet3_adapter *adapter)
+{
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_RESET_DEV);
+}
+
+
+int
+vmxnet3_quiesce_dev(struct vmxnet3_adapter *adapter)
+{
+	if (test_and_set_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state))
+		return 0;
+
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+			       VMXNET3_CMD_QUIESCE_DEV);
+	vmxnet3_disable_all_intrs(adapter);
+
+	napi_disable(&adapter->napi);
+	netif_tx_disable(adapter->netdev);
+	adapter->link_speed = 0;
+	netif_carrier_off(adapter->netdev);
+
+	vmxnet3_tq_cleanup(&adapter->tx_queue, adapter);
+	vmxnet3_rq_cleanup(&adapter->rx_queue, adapter);
+	vmxnet3_free_irqs(adapter);
+	return 0;
+}
+
+
+static void
+vmxnet3_write_mac_addr(struct vmxnet3_adapter *adapter, u8 *mac)
+{
+	u32 tmp;
+
+	tmp = *(u32 *)mac;
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_MACL, tmp);
+
+	tmp = (mac[5] << 8) | mac[4];
+	VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_MACH, tmp);
+}
+
+
+static int
+vmxnet3_set_mac_addr(struct net_device *netdev, void *p)
+{
+	struct sockaddr *addr = p;
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+
+	memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+	vmxnet3_write_mac_addr(adapter, addr->sa_data);
+
+	return 0;
+}
+
+
+/* ==================== initialization and cleanup routines ============ */
+
+static int
+vmxnet3_alloc_pci_resources(struct vmxnet3_adapter *adapter, bool *dma64)
+{
+	int err;
+	unsigned long mmio_start, mmio_len;
+	struct pci_dev *pdev = adapter->pdev;
+
+	err = pci_enable_device(pdev);
+	if (err) {
+		printk(KERN_ERR "Failed to enable adapter %s: error %d\n",
+		       pci_name(pdev), err);
+		return err;
+	}
+
+	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) == 0) {
+		if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) {
+			printk(KERN_ERR "pci_set_consistent_dma_mask failed "
+			       "for adapter %s\n", pci_name(pdev));
+			err = -EIO;
+			goto err_set_mask;
+		}
+		*dma64 = true;
+	} else {
+		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) {
+			printk(KERN_ERR "pci_set_dma_mask failed for adapter "
+			       "%s\n", pci_name(pdev));
+			err = -EIO;
+			goto err_set_mask;
+		}
+		*dma64 = false;
+	}
+
+	err = pci_request_selected_regions(pdev, (1 << 2) - 1,
+					   vmxnet3_driver_name);
+	if (err) {
+		printk(KERN_ERR "Failed to
request region for adapter %s: " + "error %d\n", pci_name(pdev), err); + goto err_set_mask; + } + + pci_set_master(pdev); + + mmio_start = pci_resource_start(pdev, 0); + mmio_len = pci_resource_len(pdev, 0); + adapter->hw_addr0 = ioremap(mmio_start, mmio_len); + if (!adapter->hw_addr0) { + printk(KERN_ERR "Failed to map bar0 for adapter %s\n", + pci_name(pdev)); + err = -EIO; + goto err_ioremap; + } + + mmio_start = pci_resource_start(pdev, 1); + mmio_len = pci_resource_len(pdev, 1); + adapter->hw_addr1 = ioremap(mmio_start, mmio_len); + if (!adapter->hw_addr1) { + printk(KERN_ERR "Failed to map bar1 for adapter %s\n", + pci_name(pdev)); + err = -EIO; + goto err_bar1; + } + return 0; + +err_bar1: + iounmap(adapter->hw_addr0); +err_ioremap: + pci_release_selected_regions(pdev, (1 << 2) - 1); +err_set_mask: + pci_disable_device(pdev); + return err; +} + + +static void +vmxnet3_free_pci_resources(struct vmxnet3_adapter *adapter) +{ + BUG_ON(!adapter->pdev); + + iounmap(adapter->hw_addr0); + iounmap(adapter->hw_addr1); + pci_release_selected_regions(adapter->pdev, (1 << 2) - 1); + pci_disable_device(adapter->pdev); +} + + +static void +vmxnet3_adjust_rx_ring_size(struct vmxnet3_adapter *adapter) +{ + size_t sz; + + if (adapter->netdev->mtu <= VMXNET3_MAX_SKB_BUF_SIZE - + VMXNET3_MAX_ETH_HDR_SIZE) { + adapter->skb_buf_size = adapter->netdev->mtu + + VMXNET3_MAX_ETH_HDR_SIZE; + if (adapter->skb_buf_size < VMXNET3_MIN_T0_BUF_SIZE) + adapter->skb_buf_size = VMXNET3_MIN_T0_BUF_SIZE; + + adapter->rx_buf_per_pkt = 1; + } else { + adapter->skb_buf_size = VMXNET3_MAX_SKB_BUF_SIZE; + sz = adapter->netdev->mtu - VMXNET3_MAX_SKB_BUF_SIZE + + VMXNET3_MAX_ETH_HDR_SIZE; + adapter->rx_buf_per_pkt = 1 + (sz + PAGE_SIZE - 1) / PAGE_SIZE; + } + + /* + * for simplicity, force the ring0 size to be a multiple of + * rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN + */ + sz = adapter->rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN; + adapter->rx_queue.rx_ring[0].size = (adapter->rx_queue.rx_ring[0].size + + sz - 1) / sz * sz; + adapter->rx_queue.rx_ring[0].size = min_t(u32, + adapter->rx_queue.rx_ring[0].size, + VMXNET3_RX_RING_MAX_SIZE / sz * sz); +} + + +int +vmxnet3_create_queues(struct vmxnet3_adapter *adapter, u32 tx_ring_size, + u32 rx_ring_size, u32 rx_ring2_size) +{ + int err; + + adapter->tx_queue.tx_ring.size = tx_ring_size; + adapter->tx_queue.data_ring.size = tx_ring_size; + adapter->tx_queue.comp_ring.size = tx_ring_size; + adapter->tx_queue.shared = &adapter->tqd_start->ctrl; + adapter->tx_queue.stopped = true; + err = vmxnet3_tq_create(&adapter->tx_queue, adapter); + if (err) + return err; + + adapter->rx_queue.rx_ring[0].size = rx_ring_size; + adapter->rx_queue.rx_ring[1].size = rx_ring2_size; + vmxnet3_adjust_rx_ring_size(adapter); + adapter->rx_queue.comp_ring.size = adapter->rx_queue.rx_ring[0].size + + adapter->rx_queue.rx_ring[1].size; + adapter->rx_queue.qid = 0; + adapter->rx_queue.qid2 = 1; + adapter->rx_queue.shared = &adapter->rqd_start->ctrl; + err = vmxnet3_rq_create(&adapter->rx_queue, adapter); + if (err) + vmxnet3_tq_destroy(&adapter->tx_queue, adapter); + + return err; +} + +static int +vmxnet3_open(struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter; + int err; + + adapter = netdev_priv(netdev); + + spin_lock_init(&adapter->tx_queue.tx_lock); + + err = vmxnet3_create_queues(adapter, VMXNET3_DEF_TX_RING_SIZE, + VMXNET3_DEF_RX_RING_SIZE, + VMXNET3_DEF_RX_RING_SIZE); + if (err) + goto queue_err; + + err = vmxnet3_activate_dev(adapter); + if (err) + goto activate_err; + + return 0; + 
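+	/*
+	 * [Editor's note] The unwind labels below run in reverse order of
+	 * the setup above: a failure in vmxnet3_activate_dev() must destroy
+	 * both queues created by vmxnet3_create_queues(), while a failure in
+	 * queue creation has nothing to undo.  The same last-in, first-out
+	 * pattern governs every goto-based error path in this file.
+	 */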
+activate_err: + vmxnet3_rq_destroy(&adapter->rx_queue, adapter); + vmxnet3_tq_destroy(&adapter->tx_queue, adapter); +queue_err: + return err; +} + + +static int +vmxnet3_close(struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + /* + * Reset_work may be in the middle of resetting the device, wait for its + * completion. + */ + while (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state)) + msleep(1); + + vmxnet3_quiesce_dev(adapter); + + vmxnet3_rq_destroy(&adapter->rx_queue, adapter); + vmxnet3_tq_destroy(&adapter->tx_queue, adapter); + + clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state); + + + return 0; +} + + +void +vmxnet3_force_close(struct vmxnet3_adapter *adapter) +{ + /* + * we must clear VMXNET3_STATE_BIT_RESETTING, otherwise + * vmxnet3_close() will deadlock. + */ + BUG_ON(test_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state)); + + /* we need to enable NAPI, otherwise dev_close will deadlock */ + napi_enable(&adapter->napi); + dev_close(adapter->netdev); +} + + +static int +vmxnet3_change_mtu(struct net_device *netdev, int new_mtu) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + int err = 0; + + if (new_mtu < VMXNET3_MIN_MTU || new_mtu > VMXNET3_MAX_MTU) + return -EINVAL; + + if (new_mtu > 1500 && !adapter->jumbo_frame) + return -EINVAL; + + netdev->mtu = new_mtu; + + /* + * Reset_work may be in the middle of resetting the device, wait for its + * completion. + */ + while (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state)) + msleep(1); + + if (netif_running(netdev)) { + vmxnet3_quiesce_dev(adapter); + vmxnet3_reset_dev(adapter); + + /* we need to re-create the rx queue based on the new mtu */ + vmxnet3_rq_destroy(&adapter->rx_queue, adapter); + vmxnet3_adjust_rx_ring_size(adapter); + adapter->rx_queue.comp_ring.size = + adapter->rx_queue.rx_ring[0].size + + adapter->rx_queue.rx_ring[1].size; + err = vmxnet3_rq_create(&adapter->rx_queue, adapter); + if (err) { + printk(KERN_ERR "%s: failed to re-create rx queue," + " error %d. Closing it.\n", netdev->name, err); + goto out; + } + + err = vmxnet3_activate_dev(adapter); + if (err) { + printk(KERN_ERR "%s: failed to re-activate, error %d. 
" + "Closing it\n", netdev->name, err); + goto out; + } + } + +out: + clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state); + if (err) + vmxnet3_force_close(adapter); + + return err; +} + + +static void +vmxnet3_declare_features(struct vmxnet3_adapter *adapter, bool dma64) +{ + struct net_device *netdev = adapter->netdev; + + netdev->features = NETIF_F_SG | + NETIF_F_HW_CSUM | + NETIF_F_HW_VLAN_TX | + NETIF_F_HW_VLAN_RX | + NETIF_F_HW_VLAN_FILTER | + NETIF_F_TSO | + NETIF_F_TSO6 | + NETIF_F_LRO; + + printk(KERN_INFO "features: sg csum vlan jf tso tsoIPv6 lro"); + + adapter->rxcsum = true; + adapter->jumbo_frame = true; + adapter->lro = true; + + if (dma64) { + netdev->features |= NETIF_F_HIGHDMA; + printk(" highDMA"); + } + + netdev->vlan_features = netdev->features; + printk("\n"); +} + + +static void +vmxnet3_read_mac_addr(struct vmxnet3_adapter *adapter, u8 *mac) +{ + u32 tmp; + + tmp = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_MACL); + *(u32 *)mac = tmp; + + tmp = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_MACH); + mac[4] = tmp & 0xff; + mac[5] = (tmp >> 8) & 0xff; +} + + +static void +vmxnet3_alloc_intr_resources(struct vmxnet3_adapter *adapter) +{ + u32 cfg; + + /* intr settings */ + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_GET_CONF_INTR); + cfg = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_CMD); + adapter->intr.type = cfg & 0x3; + adapter->intr.mask_mode = (cfg >> 2) & 0x3; + + if (adapter->intr.type == VMXNET3_IT_AUTO) { + int err; + + adapter->intr.msix_entries[0].entry = 0; + err = pci_enable_msix(adapter->pdev, adapter->intr.msix_entries, + VMXNET3_LINUX_MAX_MSIX_VECT); + if (!err) { + adapter->intr.num_intrs = 1; + adapter->intr.type = VMXNET3_IT_MSIX; + return; + } + + err = pci_enable_msi(adapter->pdev); + if (!err) { + adapter->intr.num_intrs = 1; + adapter->intr.type = VMXNET3_IT_MSI; + return; + } + } + + adapter->intr.type = VMXNET3_IT_INTX; + + /* INT-X related setting */ + adapter->intr.num_intrs = 1; +} + + +static void +vmxnet3_free_intr_resources(struct vmxnet3_adapter *adapter) +{ + if (adapter->intr.type == VMXNET3_IT_MSIX) + pci_disable_msix(adapter->pdev); + else if (adapter->intr.type == VMXNET3_IT_MSI) + pci_disable_msi(adapter->pdev); + else + BUG_ON(adapter->intr.type != VMXNET3_IT_INTX); +} + + +static void +vmxnet3_tx_timeout(struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + adapter->tx_timeout_count++; + + printk(KERN_ERR "%s: tx hang\n", adapter->netdev->name); + schedule_work(&adapter->work); +} + + +static void +vmxnet3_reset_work(struct work_struct *data) +{ + struct vmxnet3_adapter *adapter; + + adapter = container_of(data, struct vmxnet3_adapter, work); + + /* if another thread is resetting the device, no need to proceed */ + if (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state)) + return; + + /* if the device is closed, we must leave it alone */ + if (netif_running(adapter->netdev)) { + printk(KERN_INFO "%s: resetting\n", adapter->netdev->name); + vmxnet3_quiesce_dev(adapter); + vmxnet3_reset_dev(adapter); + vmxnet3_activate_dev(adapter); + } else { + printk(KERN_INFO "%s: already closed\n", adapter->netdev->name); + } + + clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state); +} + + +static int __devinit +vmxnet3_probe_device(struct pci_dev *pdev, + const struct pci_device_id *id) +{ + static const struct net_device_ops vmxnet3_netdev_ops = { + .ndo_open = vmxnet3_open, + .ndo_stop = vmxnet3_close, + .ndo_start_xmit = vmxnet3_xmit_frame, + .ndo_set_mac_address = 
vmxnet3_set_mac_addr, + .ndo_change_mtu = vmxnet3_change_mtu, + .ndo_get_stats = vmxnet3_get_stats, + .ndo_tx_timeout = vmxnet3_tx_timeout, + .ndo_set_multicast_list = vmxnet3_set_mc, + .ndo_vlan_rx_register = vmxnet3_vlan_rx_register, + .ndo_vlan_rx_add_vid = vmxnet3_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = vmxnet3_vlan_rx_kill_vid, +#ifdef CONFIG_NET_POLL_CONTROLLER + .ndo_poll_controller = vmxnet3_netpoll, +#endif + }; + int err; + bool dma64 = false; /* stupid gcc */ + u32 ver; + struct net_device *netdev; + struct vmxnet3_adapter *adapter; + u8 mac[ETH_ALEN]; + + netdev = alloc_etherdev(sizeof(struct vmxnet3_adapter)); + if (!netdev) { + printk(KERN_ERR "Failed to alloc ethernet device for adapter " + "%s\n", pci_name(pdev)); + return -ENOMEM; + } + + pci_set_drvdata(pdev, netdev); + adapter = netdev_priv(netdev); + adapter->netdev = netdev; + adapter->pdev = pdev; + + adapter->shared = pci_alloc_consistent(adapter->pdev, + sizeof(struct Vmxnet3_DriverShared), + &adapter->shared_pa); + if (!adapter->shared) { + printk(KERN_ERR "Failed to allocate memory for %s\n", + pci_name(pdev)); + err = -ENOMEM; + goto err_alloc_shared; + } + + adapter->tqd_start = pci_alloc_consistent(adapter->pdev, + sizeof(struct Vmxnet3_TxQueueDesc) + + sizeof(struct Vmxnet3_RxQueueDesc), + &adapter->queue_desc_pa); + + if (!adapter->tqd_start) { + printk(KERN_ERR "Failed to allocate memory for %s\n", + pci_name(pdev)); + err = -ENOMEM; + goto err_alloc_queue_desc; + } + adapter->rqd_start = (struct Vmxnet3_RxQueueDesc *)(adapter->tqd_start + + 1); + + adapter->pm_conf = kmalloc(sizeof(struct Vmxnet3_PMConf), GFP_KERNEL); + if (adapter->pm_conf == NULL) { + printk(KERN_ERR "Failed to allocate memory for %s\n", + pci_name(pdev)); + err = -ENOMEM; + goto err_alloc_pm; + } + + err = vmxnet3_alloc_pci_resources(adapter, &dma64); + if (err < 0) + goto err_alloc_pci; + + ver = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_VRRS); + if (ver & 1) { + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_VRRS, 1); + } else { + printk(KERN_ERR "Incompatible h/w version (0x%x) for adapter" + " %s\n", ver, pci_name(pdev)); + err = -EBUSY; + goto err_ver; + } + + ver = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_UVRS); + if (ver & 1) { + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_UVRS, 1); + } else { + printk(KERN_ERR "Incompatible upt version (0x%x) for " + "adapter %s\n", ver, pci_name(pdev)); + err = -EBUSY; + goto err_ver; + } + + vmxnet3_declare_features(adapter, dma64); + + adapter->dev_number = atomic_read(&devices_found); + vmxnet3_alloc_intr_resources(adapter); + + vmxnet3_read_mac_addr(adapter, mac); + memcpy(netdev->dev_addr, mac, netdev->addr_len); + + netdev->netdev_ops = &vmxnet3_netdev_ops; + netdev->watchdog_timeo = 5 * HZ; + vmxnet3_set_ethtool_ops(netdev); + + INIT_WORK(&adapter->work, vmxnet3_reset_work); + + netif_napi_add(netdev, &adapter->napi, vmxnet3_poll, 64); + SET_NETDEV_DEV(netdev, &pdev->dev); + err = register_netdev(netdev); + + if (err) { + printk(KERN_ERR "Failed to register adapter %s\n", + pci_name(pdev)); + goto err_register; + } + + set_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state); + atomic_inc(&devices_found); + return 0; + +err_register: + vmxnet3_free_intr_resources(adapter); +err_ver: + vmxnet3_free_pci_resources(adapter); +err_alloc_pci: + kfree(adapter->pm_conf); +err_alloc_pm: + pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_TxQueueDesc) + + sizeof(struct Vmxnet3_RxQueueDesc), + adapter->tqd_start, adapter->queue_desc_pa); +err_alloc_queue_desc: + 
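+	/*
+	 * [Editor's note] As elsewhere, the probe error path frees in
+	 * reverse allocation order: the queue descriptors were carved out
+	 * after the DriverShared region, so they are released before it,
+	 * and the netdev, allocated first, is freed last.
+	 */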
pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_DriverShared), + adapter->shared, adapter->shared_pa); +err_alloc_shared: + pci_set_drvdata(pdev, NULL); + free_netdev(netdev); + return err; +} + + +static void __devexit +vmxnet3_remove_device(struct pci_dev *pdev) +{ + struct net_device *netdev = pci_get_drvdata(pdev); + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + flush_scheduled_work(); + + unregister_netdev(netdev); + + vmxnet3_free_intr_resources(adapter); + vmxnet3_free_pci_resources(adapter); + kfree(adapter->pm_conf); + pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_TxQueueDesc) + + sizeof(struct Vmxnet3_RxQueueDesc), + adapter->tqd_start, adapter->queue_desc_pa); + pci_free_consistent(adapter->pdev, sizeof(struct Vmxnet3_DriverShared), + adapter->shared, adapter->shared_pa); + free_netdev(netdev); +} + + +#ifdef CONFIG_PM + +static int +vmxnet3_suspend(struct device *device) +{ + struct pci_dev *pdev = to_pci_dev(device); + struct net_device *netdev = pci_get_drvdata(pdev); + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + struct Vmxnet3_PMConf *pmConf; + struct ethhdr *ehdr; + struct arphdr *ahdr; + u8 *arpreq; + struct in_device *in_dev; + struct in_ifaddr *ifa; + int i = 0; + + if (!netif_running(netdev)) + return 0; + + vmxnet3_disable_all_intrs(adapter); + vmxnet3_free_irqs(adapter); + vmxnet3_free_intr_resources(adapter); + + netif_device_detach(netdev); + netif_stop_queue(netdev); + + /* Create wake-up filters. */ + pmConf = adapter->pm_conf; + memset(pmConf, 0, sizeof(*pmConf)); + + if (adapter->wol & WAKE_UCAST) { + pmConf->filters[i].patternSize = ETH_ALEN; + pmConf->filters[i].maskSize = 1; + memcpy(pmConf->filters[i].pattern, netdev->dev_addr, ETH_ALEN); + pmConf->filters[i].mask[0] = 0x3F; /* LSB ETH_ALEN bits */ + + pmConf->wakeUpEvents |= VMXNET3_PM_WAKEUP_FILTER; + i++; + } + + if (adapter->wol & WAKE_ARP) { + in_dev = in_dev_get(netdev); + if (!in_dev) + goto skip_arp; + + ifa = (struct in_ifaddr *)in_dev->ifa_list; + if (!ifa) + goto skip_arp; + + pmConf->filters[i].patternSize = ETH_HLEN + /* Ethernet header*/ + sizeof(struct arphdr) + /* ARP header */ + 2 * ETH_ALEN + /* 2 Ethernet addresses*/ + 2 * sizeof(u32); /*2 IPv4 addresses */ + pmConf->filters[i].maskSize = + (pmConf->filters[i].patternSize - 1) / 8 + 1; + + /* ETH_P_ARP in Ethernet header. */ + ehdr = (struct ethhdr *)pmConf->filters[i].pattern; + ehdr->h_proto = htons(ETH_P_ARP); + + /* ARPOP_REQUEST in ARP header. */ + ahdr = (struct arphdr *)&pmConf->filters[i].pattern[ETH_HLEN]; + ahdr->ar_op = htons(ARPOP_REQUEST); + arpreq = (u8 *)(ahdr + 1); + + /* The Unicast IPv4 address in 'tip' field. */ + arpreq += 2 * ETH_ALEN + sizeof(u32); + *(u32 *)arpreq = ifa->ifa_address; + + /* The mask for the relevant bits. 
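+		 * [Editor's note] Each mask byte covers eight pattern
+		 * bytes: bit n of mask[k] selects pattern byte 8*k + n.
+		 * For this 42-byte ARP pattern (14 Ethernet + 8 ARP +
+		 * 2*6 MAC + 2*4 IPv4 bytes), mask[1] = 0x30 matches bytes
+		 * 12-13 (the ETH_P_ARP ethertype), mask[2] = 0x30 bytes
+		 * 20-21 (ar_op), and mask[4] = 0xC0 / mask[5] = 0x03
+		 * bytes 38-41 (the target IP written into 'tip' above).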
*/ + pmConf->filters[i].mask[0] = 0x00; + pmConf->filters[i].mask[1] = 0x30; /* ETH_P_ARP */ + pmConf->filters[i].mask[2] = 0x30; /* ARPOP_REQUEST */ + pmConf->filters[i].mask[3] = 0x00; + pmConf->filters[i].mask[4] = 0xC0; /* IPv4 TIP */ + pmConf->filters[i].mask[5] = 0x03; /* IPv4 TIP */ + in_dev_put(in_dev); + + pmConf->wakeUpEvents |= VMXNET3_PM_WAKEUP_FILTER; + i++; + } + +skip_arp: + if (adapter->wol & WAKE_MAGIC) + pmConf->wakeUpEvents |= VMXNET3_PM_WAKEUP_MAGIC; + + pmConf->numFilters = i; + + adapter->shared->devRead.pmConfDesc.confVer = 1; + adapter->shared->devRead.pmConfDesc.confLen = sizeof(*pmConf); + adapter->shared->devRead.pmConfDesc.confPA = virt_to_phys(pmConf); + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_PMCFG); + + pci_save_state(pdev); + pci_enable_wake(pdev, pci_choose_state(pdev, PMSG_SUSPEND), + adapter->wol); + pci_disable_device(pdev); + pci_set_power_state(pdev, pci_choose_state(pdev, PMSG_SUSPEND)); + + return 0; +} + + +static int +vmxnet3_resume(struct device *device) +{ + int err; + struct pci_dev *pdev = to_pci_dev(device); + struct net_device *netdev = pci_get_drvdata(pdev); + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + struct Vmxnet3_PMConf *pmConf; + + if (!netif_running(netdev)) + return 0; + + /* Destroy wake-up filters. */ + pmConf = adapter->pm_conf; + memset(pmConf, 0, sizeof(*pmConf)); + + adapter->shared->devRead.pmConfDesc.confVer = 1; + adapter->shared->devRead.pmConfDesc.confLen = sizeof(*pmConf); + adapter->shared->devRead.pmConfDesc.confPA = virt_to_phys(pmConf); + + netif_device_attach(netdev); + pci_set_power_state(pdev, PCI_D0); + pci_restore_state(pdev); + err = pci_enable_device_mem(pdev); + if (err != 0) + return err; + + pci_enable_wake(pdev, PCI_D0, 0); + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_PMCFG); + vmxnet3_alloc_intr_resources(adapter); + vmxnet3_request_irqs(adapter); + vmxnet3_enable_all_intrs(adapter); + + return 0; +} + +static struct dev_pm_ops vmxnet3_pm_ops = { + .suspend = vmxnet3_suspend, + .resume = vmxnet3_resume, +}; +#endif + +static struct pci_driver vmxnet3_driver = { + .name = vmxnet3_driver_name, + .id_table = vmxnet3_pciid_table, + .probe = vmxnet3_probe_device, + .remove = __devexit_p(vmxnet3_remove_device), +#ifdef CONFIG_PM + .driver.pm = &vmxnet3_pm_ops, +#endif +}; + + +static int __init +vmxnet3_init_module(void) +{ + printk(KERN_INFO "%s - version %s\n", VMXNET3_DRIVER_DESC, + VMXNET3_DRIVER_VERSION_REPORT); + return pci_register_driver(&vmxnet3_driver); +} + +module_init(vmxnet3_init_module); + + +static void +vmxnet3_exit_module(void) +{ + pci_unregister_driver(&vmxnet3_driver); +} + +module_exit(vmxnet3_exit_module); + +MODULE_AUTHOR("VMware, Inc."); +MODULE_DESCRIPTION(VMXNET3_DRIVER_DESC); +MODULE_LICENSE("GPL v2"); +MODULE_VERSION(VMXNET3_DRIVER_VERSION_STRING); diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c new file mode 100644 index 000000000000..c2c15e4cafc7 --- /dev/null +++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c @@ -0,0 +1,566 @@ +/* + * Linux driver for VMware's vmxnet3 ethernet NIC. + * + * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; version 2 of the License and no later version. 
+ * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or + * NON INFRINGEMENT. See the GNU General Public License for more + * details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * Maintained by: Shreyas Bhatewara + * + */ + + +#include "vmxnet3_int.h" + +struct vmxnet3_stat_desc { + char desc[ETH_GSTRING_LEN]; + int offset; +}; + + +static u32 +vmxnet3_get_rx_csum(struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + return adapter->rxcsum; +} + + +static int +vmxnet3_set_rx_csum(struct net_device *netdev, u32 val) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + if (adapter->rxcsum != val) { + adapter->rxcsum = val; + if (netif_running(netdev)) { + if (val) + adapter->shared->devRead.misc.uptFeatures |= + UPT1_F_RXCSUM; + else + adapter->shared->devRead.misc.uptFeatures &= + ~UPT1_F_RXCSUM; + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_FEATURE); + } + } + return 0; +} + + +/* per tq stats maintained by the device */ +static const struct vmxnet3_stat_desc +vmxnet3_tq_dev_stats[] = { + /* description, offset */ + { "TSO pkts tx", offsetof(struct UPT1_TxStats, TSOPktsTxOK) }, + { "TSO bytes tx", offsetof(struct UPT1_TxStats, TSOBytesTxOK) }, + { "ucast pkts tx", offsetof(struct UPT1_TxStats, ucastPktsTxOK) }, + { "ucast bytes tx", offsetof(struct UPT1_TxStats, ucastBytesTxOK) }, + { "mcast pkts tx", offsetof(struct UPT1_TxStats, mcastPktsTxOK) }, + { "mcast bytes tx", offsetof(struct UPT1_TxStats, mcastBytesTxOK) }, + { "bcast pkts tx", offsetof(struct UPT1_TxStats, bcastPktsTxOK) }, + { "bcast bytes tx", offsetof(struct UPT1_TxStats, bcastBytesTxOK) }, + { "pkts tx err", offsetof(struct UPT1_TxStats, pktsTxError) }, + { "pkts tx discard", offsetof(struct UPT1_TxStats, pktsTxDiscard) }, +}; + +/* per tq stats maintained by the driver */ +static const struct vmxnet3_stat_desc +vmxnet3_tq_driver_stats[] = { + /* description, offset */ + {"drv dropped tx total", offsetof(struct vmxnet3_tq_driver_stats, + drop_total) }, + { " too many frags", offsetof(struct vmxnet3_tq_driver_stats, + drop_too_many_frags) }, + { " giant hdr", offsetof(struct vmxnet3_tq_driver_stats, + drop_oversized_hdr) }, + { " hdr err", offsetof(struct vmxnet3_tq_driver_stats, + drop_hdr_inspect_err) }, + { " tso", offsetof(struct vmxnet3_tq_driver_stats, + drop_tso) }, + { "ring full", offsetof(struct vmxnet3_tq_driver_stats, + tx_ring_full) }, + { "pkts linearized", offsetof(struct vmxnet3_tq_driver_stats, + linearized) }, + { "hdr cloned", offsetof(struct vmxnet3_tq_driver_stats, + copy_skb_header) }, + { "giant hdr", offsetof(struct vmxnet3_tq_driver_stats, + oversized_hdr) }, +}; + +/* per rq stats maintained by the device */ +static const struct vmxnet3_stat_desc +vmxnet3_rq_dev_stats[] = { + { "LRO pkts rx", offsetof(struct UPT1_RxStats, LROPktsRxOK) }, + { "LRO byte rx", offsetof(struct UPT1_RxStats, LROBytesRxOK) }, + { "ucast pkts rx", offsetof(struct UPT1_RxStats, ucastPktsRxOK) }, + { "ucast bytes rx", offsetof(struct UPT1_RxStats, ucastBytesRxOK) }, + { "mcast pkts rx", offsetof(struct UPT1_RxStats, 
mcastPktsRxOK) }, + { "mcast bytes rx", offsetof(struct UPT1_RxStats, mcastBytesRxOK) }, + { "bcast pkts rx", offsetof(struct UPT1_RxStats, bcastPktsRxOK) }, + { "bcast bytes rx", offsetof(struct UPT1_RxStats, bcastBytesRxOK) }, + { "pkts rx out of buf", offsetof(struct UPT1_RxStats, pktsRxOutOfBuf) }, + { "pkts rx err", offsetof(struct UPT1_RxStats, pktsRxError) }, +}; + +/* per rq stats maintained by the driver */ +static const struct vmxnet3_stat_desc +vmxnet3_rq_driver_stats[] = { + /* description, offset */ + { "drv dropped rx total", offsetof(struct vmxnet3_rq_driver_stats, + drop_total) }, + { " err", offsetof(struct vmxnet3_rq_driver_stats, + drop_err) }, + { " fcs", offsetof(struct vmxnet3_rq_driver_stats, + drop_fcs) }, + { "rx buf alloc fail", offsetof(struct vmxnet3_rq_driver_stats, + rx_buf_alloc_failure) }, +}; + +/* global stats maintained by the driver */ +static const struct vmxnet3_stat_desc +vmxnet3_global_stats[] = { + /* description, offset */ + { "tx timeout count", offsetof(struct vmxnet3_adapter, + tx_timeout_count) } +}; + + +struct net_device_stats * +vmxnet3_get_stats(struct net_device *netdev) +{ + struct vmxnet3_adapter *adapter; + struct vmxnet3_tq_driver_stats *drvTxStats; + struct vmxnet3_rq_driver_stats *drvRxStats; + struct UPT1_TxStats *devTxStats; + struct UPT1_RxStats *devRxStats; + struct net_device_stats *net_stats = &netdev->stats; + + adapter = netdev_priv(netdev); + + /* Collect the dev stats into the shared area */ + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS); + + /* Assuming that we have a single queue device */ + devTxStats = &adapter->tqd_start->stats; + devRxStats = &adapter->rqd_start->stats; + + /* Get access to the driver stats per queue */ + drvTxStats = &adapter->tx_queue.stats; + drvRxStats = &adapter->rx_queue.stats; + + memset(net_stats, 0, sizeof(*net_stats)); + + net_stats->rx_packets = devRxStats->ucastPktsRxOK + + devRxStats->mcastPktsRxOK + + devRxStats->bcastPktsRxOK; + + net_stats->tx_packets = devTxStats->ucastPktsTxOK + + devTxStats->mcastPktsTxOK + + devTxStats->bcastPktsTxOK; + + net_stats->rx_bytes = devRxStats->ucastBytesRxOK + + devRxStats->mcastBytesRxOK + + devRxStats->bcastBytesRxOK; + + net_stats->tx_bytes = devTxStats->ucastBytesTxOK + + devTxStats->mcastBytesTxOK + + devTxStats->bcastBytesTxOK; + + net_stats->rx_errors = devRxStats->pktsRxError; + net_stats->tx_errors = devTxStats->pktsTxError; + net_stats->rx_dropped = drvRxStats->drop_total; + net_stats->tx_dropped = drvTxStats->drop_total; + net_stats->multicast = devRxStats->mcastPktsRxOK; + + return net_stats; +} + +static int +vmxnet3_get_sset_count(struct net_device *netdev, int sset) +{ + switch (sset) { + case ETH_SS_STATS: + return ARRAY_SIZE(vmxnet3_tq_dev_stats) + + ARRAY_SIZE(vmxnet3_tq_driver_stats) + + ARRAY_SIZE(vmxnet3_rq_dev_stats) + + ARRAY_SIZE(vmxnet3_rq_driver_stats) + + ARRAY_SIZE(vmxnet3_global_stats); + default: + return -EOPNOTSUPP; + } +} + + +static int +vmxnet3_get_regs_len(struct net_device *netdev) +{ + return 20 * sizeof(u32); +} + + +static void +vmxnet3_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + strlcpy(drvinfo->driver, vmxnet3_driver_name, sizeof(drvinfo->driver)); + drvinfo->driver[sizeof(drvinfo->driver) - 1] = '\0'; + + strlcpy(drvinfo->version, VMXNET3_DRIVER_VERSION_REPORT, + sizeof(drvinfo->version)); + drvinfo->version[sizeof(drvinfo->version) - 1] = '\0'; + + strlcpy(drvinfo->fw_version, "N/A",
sizeof(drvinfo->fw_version)); + drvinfo->fw_version[sizeof(drvinfo->fw_version) - 1] = '\0'; + + strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), + ETHTOOL_BUSINFO_LEN); + drvinfo->n_stats = vmxnet3_get_sset_count(netdev, ETH_SS_STATS); + drvinfo->testinfo_len = 0; + drvinfo->eedump_len = 0; + drvinfo->regdump_len = vmxnet3_get_regs_len(netdev); +} + + +static void +vmxnet3_get_strings(struct net_device *netdev, u32 stringset, u8 *buf) +{ + if (stringset == ETH_SS_STATS) { + int i; + + for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++) { + memcpy(buf, vmxnet3_tq_dev_stats[i].desc, + ETH_GSTRING_LEN); + buf += ETH_GSTRING_LEN; + } + for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats); i++) { + memcpy(buf, vmxnet3_tq_driver_stats[i].desc, + ETH_GSTRING_LEN); + buf += ETH_GSTRING_LEN; + } + for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++) { + memcpy(buf, vmxnet3_rq_dev_stats[i].desc, + ETH_GSTRING_LEN); + buf += ETH_GSTRING_LEN; + } + for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats); i++) { + memcpy(buf, vmxnet3_rq_driver_stats[i].desc, + ETH_GSTRING_LEN); + buf += ETH_GSTRING_LEN; + } + for (i = 0; i < ARRAY_SIZE(vmxnet3_global_stats); i++) { + memcpy(buf, vmxnet3_global_stats[i].desc, + ETH_GSTRING_LEN); + buf += ETH_GSTRING_LEN; + } + } +} + +static u32 +vmxnet3_get_flags(struct net_device *netdev) { + return netdev->features; +} + +static int +vmxnet3_set_flags(struct net_device *netdev, u32 data) { + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + u8 lro_requested = (data & ETH_FLAG_LRO) == 0 ? 0 : 1; + u8 lro_present = (netdev->features & NETIF_F_LRO) == 0 ? 0 : 1; + + if (lro_requested ^ lro_present) { + /* toggle the LRO feature */ + netdev->features ^= NETIF_F_LRO; + + /* update hardware LRO capability accordingly */ + if (lro_requested) + adapter->shared->devRead.misc.uptFeatures |= UPT1_F_LRO; + else + adapter->shared->devRead.misc.uptFeatures &= + ~UPT1_F_LRO; + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, + VMXNET3_CMD_UPDATE_FEATURE); + } + return 0; +} + +static void +vmxnet3_get_ethtool_stats(struct net_device *netdev, + struct ethtool_stats *stats, u64 *buf) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + u8 *base; + int i; + + VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS); + + /* this does assume each counter is 64-bit wide */ + + base = (u8 *)&adapter->tqd_start->stats; + for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_dev_stats); i++) + *buf++ = *(u64 *)(base + vmxnet3_tq_dev_stats[i].offset); + + base = (u8 *)&adapter->tx_queue.stats; + for (i = 0; i < ARRAY_SIZE(vmxnet3_tq_driver_stats); i++) + *buf++ = *(u64 *)(base + vmxnet3_tq_driver_stats[i].offset); + + base = (u8 *)&adapter->rqd_start->stats; + for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++) + *buf++ = *(u64 *)(base + vmxnet3_rq_dev_stats[i].offset); + + base = (u8 *)&adapter->rx_queue.stats; + for (i = 0; i < ARRAY_SIZE(vmxnet3_rq_driver_stats); i++) + *buf++ = *(u64 *)(base + vmxnet3_rq_driver_stats[i].offset); + + base = (u8 *)adapter; + for (i = 0; i < ARRAY_SIZE(vmxnet3_global_stats); i++) + *buf++ = *(u64 *)(base + vmxnet3_global_stats[i].offset); +} + + +static void +vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + u32 *buf = p; + + memset(p, 0, vmxnet3_get_regs_len(netdev)); + + regs->version = 1; + + /* Update vmxnet3_get_regs_len if we want to dump more registers */ + + /* make each ring use multiple of 16 bytes */ + buf[0] =
adapter->tx_queue.tx_ring.next2fill; + buf[1] = adapter->tx_queue.tx_ring.next2comp; + buf[2] = adapter->tx_queue.tx_ring.gen; + buf[3] = 0; + + buf[4] = adapter->tx_queue.comp_ring.next2proc; + buf[5] = adapter->tx_queue.comp_ring.gen; + buf[6] = adapter->tx_queue.stopped; + buf[7] = 0; + + buf[8] = adapter->rx_queue.rx_ring[0].next2fill; + buf[9] = adapter->rx_queue.rx_ring[0].next2comp; + buf[10] = adapter->rx_queue.rx_ring[0].gen; + buf[11] = 0; + + buf[12] = adapter->rx_queue.rx_ring[1].next2fill; + buf[13] = adapter->rx_queue.rx_ring[1].next2comp; + buf[14] = adapter->rx_queue.rx_ring[1].gen; + buf[15] = 0; + + buf[16] = adapter->rx_queue.comp_ring.next2proc; + buf[17] = adapter->rx_queue.comp_ring.gen; + buf[18] = 0; + buf[19] = 0; +} + + +static void +vmxnet3_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + wol->supported = WAKE_UCAST | WAKE_ARP | WAKE_MAGIC; + wol->wolopts = adapter->wol; +} + + +static int +vmxnet3_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + if (wol->wolopts & (WAKE_PHY | WAKE_MCAST | WAKE_BCAST | + WAKE_MAGICSECURE)) { + return -EOPNOTSUPP; + } + + adapter->wol = wol->wolopts; + + device_set_wakeup_enable(&adapter->pdev->dev, adapter->wol); + + return 0; +} + + +static int +vmxnet3_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + ecmd->supported = SUPPORTED_10000baseT_Full | SUPPORTED_1000baseT_Full | + SUPPORTED_TP; + ecmd->advertising = ADVERTISED_TP; + ecmd->port = PORT_TP; + ecmd->transceiver = XCVR_INTERNAL; + + if (adapter->link_speed) { + ecmd->speed = adapter->link_speed; + ecmd->duplex = DUPLEX_FULL; + } else { + ecmd->speed = -1; + ecmd->duplex = -1; + } + return 0; +} + + +static void +vmxnet3_get_ringparam(struct net_device *netdev, + struct ethtool_ringparam *param) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + + param->rx_max_pending = VMXNET3_RX_RING_MAX_SIZE; + param->tx_max_pending = VMXNET3_TX_RING_MAX_SIZE; + param->rx_mini_max_pending = 0; + param->rx_jumbo_max_pending = 0; + + param->rx_pending = adapter->rx_queue.rx_ring[0].size; + param->tx_pending = adapter->tx_queue.tx_ring.size; + param->rx_mini_pending = 0; + param->rx_jumbo_pending = 0; +} + + +static int +vmxnet3_set_ringparam(struct net_device *netdev, + struct ethtool_ringparam *param) +{ + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + u32 new_tx_ring_size, new_rx_ring_size; + u32 sz; + int err = 0; + + if (param->tx_pending == 0 || param->tx_pending > + VMXNET3_TX_RING_MAX_SIZE) + return -EINVAL; + + if (param->rx_pending == 0 || param->rx_pending > + VMXNET3_RX_RING_MAX_SIZE) + return -EINVAL; + + + /* round it up to a multiple of VMXNET3_RING_SIZE_ALIGN */ + new_tx_ring_size = (param->tx_pending + VMXNET3_RING_SIZE_MASK) & + ~VMXNET3_RING_SIZE_MASK; + new_tx_ring_size = min_t(u32, new_tx_ring_size, + VMXNET3_TX_RING_MAX_SIZE); + if (new_tx_ring_size > VMXNET3_TX_RING_MAX_SIZE || (new_tx_ring_size % + VMXNET3_RING_SIZE_ALIGN) != 0) + return -EINVAL; + + /* ring0 has to be a multiple of + * rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN + */ + sz = adapter->rx_buf_per_pkt * VMXNET3_RING_SIZE_ALIGN; + new_rx_ring_size = (param->rx_pending + sz - 1) / sz * sz; + new_rx_ring_size = min_t(u32, new_rx_ring_size, + VMXNET3_RX_RING_MAX_SIZE / sz * sz); + if (new_rx_ring_size > VMXNET3_RX_RING_MAX_SIZE || (new_rx_ring_size % + 
sz) != 0) + return -EINVAL; + + if (new_tx_ring_size == adapter->tx_queue.tx_ring.size && + new_rx_ring_size == adapter->rx_queue.rx_ring[0].size) { + return 0; + } + + /* + * Reset_work may be in the middle of resetting the device, wait for its + * completion. + */ + while (test_and_set_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state)) + msleep(1); + + if (netif_running(netdev)) { + vmxnet3_quiesce_dev(adapter); + vmxnet3_reset_dev(adapter); + + /* recreate the rx queue and the tx queue based on the + * new sizes */ + vmxnet3_tq_destroy(&adapter->tx_queue, adapter); + vmxnet3_rq_destroy(&adapter->rx_queue, adapter); + + err = vmxnet3_create_queues(adapter, new_tx_ring_size, + new_rx_ring_size, VMXNET3_DEF_RX_RING_SIZE); + if (err) { + /* failed, most likely because of OOM, try default + * size */ + printk(KERN_ERR "%s: failed to apply new sizes, try the" + " default ones\n", netdev->name); + err = vmxnet3_create_queues(adapter, + VMXNET3_DEF_TX_RING_SIZE, + VMXNET3_DEF_RX_RING_SIZE, + VMXNET3_DEF_RX_RING_SIZE); + if (err) { + printk(KERN_ERR "%s: failed to create queues " + "with default sizes. Closing it\n", + netdev->name); + goto out; + } + } + + err = vmxnet3_activate_dev(adapter); + if (err) + printk(KERN_ERR "%s: failed to re-activate, error %d." + " Closing it\n", netdev->name, err); + } + +out: + clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state); + if (err) + vmxnet3_force_close(adapter); + + return err; +} + + +static struct ethtool_ops vmxnet3_ethtool_ops = { + .get_settings = vmxnet3_get_settings, + .get_drvinfo = vmxnet3_get_drvinfo, + .get_regs_len = vmxnet3_get_regs_len, + .get_regs = vmxnet3_get_regs, + .get_wol = vmxnet3_get_wol, + .set_wol = vmxnet3_set_wol, + .get_link = ethtool_op_get_link, + .get_rx_csum = vmxnet3_get_rx_csum, + .set_rx_csum = vmxnet3_set_rx_csum, + .get_tx_csum = ethtool_op_get_tx_csum, + .set_tx_csum = ethtool_op_set_tx_hw_csum, + .get_sg = ethtool_op_get_sg, + .set_sg = ethtool_op_set_sg, + .get_tso = ethtool_op_get_tso, + .set_tso = ethtool_op_set_tso, + .get_strings = vmxnet3_get_strings, + .get_flags = vmxnet3_get_flags, + .set_flags = vmxnet3_set_flags, + .get_sset_count = vmxnet3_get_sset_count, + .get_ethtool_stats = vmxnet3_get_ethtool_stats, + .get_ringparam = vmxnet3_get_ringparam, + .set_ringparam = vmxnet3_set_ringparam, +}; + +void vmxnet3_set_ethtool_ops(struct net_device *netdev) +{ + SET_ETHTOOL_OPS(netdev, &vmxnet3_ethtool_ops); +} diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h new file mode 100644 index 000000000000..6bb91576e999 --- /dev/null +++ b/drivers/net/vmxnet3/vmxnet3_int.h @@ -0,0 +1,389 @@ +/* + * Linux driver for VMware's vmxnet3 ethernet NIC. + * + * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; version 2 of the License and no later version. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or + * NON INFRINGEMENT. See the GNU General Public License for more + * details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 
+ * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * Maintained by: Shreyas Bhatewara + * + */ + +#ifndef _VMXNET3_INT_H +#define _VMXNET3_INT_H + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "vmxnet3_defs.h" + +#ifdef DEBUG +# define VMXNET3_DRIVER_VERSION_REPORT VMXNET3_DRIVER_VERSION_STRING"-NAPI(debug)" +#else +# define VMXNET3_DRIVER_VERSION_REPORT VMXNET3_DRIVER_VERSION_STRING"-NAPI" +#endif + + +/* + * Version numbers + */ +#define VMXNET3_DRIVER_VERSION_STRING "1.0.5.0-k" + +/* a 32-bit int, each byte encodes a version number in VMXNET3_DRIVER_VERSION */ +#define VMXNET3_DRIVER_VERSION_NUM 0x01000500 + + +/* + * Capabilities + */ + +enum { + VMNET_CAP_SG = 0x0001, /* Can do scatter-gather transmits. */ + VMNET_CAP_IP4_CSUM = 0x0002, /* Can checksum only TCP/UDP over + * IPv4 */ + VMNET_CAP_HW_CSUM = 0x0004, /* Can checksum all packets. */ + VMNET_CAP_HIGH_DMA = 0x0008, /* Can DMA to high memory. */ + VMNET_CAP_TOE = 0x0010, /* Supports TCP/IP offload. */ + VMNET_CAP_TSO = 0x0020, /* Supports TCP Segmentation + * offload */ + VMNET_CAP_SW_TSO = 0x0040, /* Supports SW TCP Segmentation */ + VMNET_CAP_VMXNET_APROM = 0x0080, /* Vmxnet APROM support */ + VMNET_CAP_HW_TX_VLAN = 0x0100, /* Can we do VLAN tagging in HW */ + VMNET_CAP_HW_RX_VLAN = 0x0200, /* Can we do VLAN untagging in HW */ + VMNET_CAP_SW_VLAN = 0x0400, /* VLAN tagging/untagging in SW */ + VMNET_CAP_WAKE_PCKT_RCV = 0x0800, /* Can wake on network packet recv? */ + VMNET_CAP_ENABLE_INT_INLINE = 0x1000, /* Enable Interrupt Inline */ + VMNET_CAP_ENABLE_HEADER_COPY = 0x2000, /* copy header for vmkernel */ + VMNET_CAP_TX_CHAIN = 0x4000, /* Guest can use multiple tx entries + * for a pkt */ + VMNET_CAP_RX_CHAIN = 0x8000, /* pkt can span multiple rx entries */ + VMNET_CAP_LPD = 0x10000, /* large pkt delivery */ + VMNET_CAP_BPF = 0x20000, /* BPF Support in VMXNET Virtual HW*/ + VMNET_CAP_SG_SPAN_PAGES = 0x40000, /* Scatter-gather can span multiple*/ + /* pages transmits */ + VMNET_CAP_IP6_CSUM = 0x80000, /* Can do IPv6 csum offload. */ + VMNET_CAP_TSO6 = 0x100000, /* TSO seg. offload for IPv6 pkts. */ + VMNET_CAP_TSO256k = 0x200000, /* Can do TSO seg offload for */ + /* pkts up to 256kB. */ + VMNET_CAP_UPT = 0x400000 /* Support UPT */ +}; + +/* + * PCI vendor and device IDs. + */ +#define PCI_VENDOR_ID_VMWARE 0x15AD +#define PCI_DEVICE_ID_VMWARE_VMXNET3 0x07B0 +#define MAX_ETHERNET_CARDS 10 +#define MAX_PCI_PASSTHRU_DEVICE 6 + +struct vmxnet3_cmd_ring { + union Vmxnet3_GenericDesc *base; + u32 size; + u32 next2fill; + u32 next2comp; + u8 gen; + dma_addr_t basePA; +}; + +static inline void +vmxnet3_cmd_ring_adv_next2fill(struct vmxnet3_cmd_ring *ring) +{ + ring->next2fill++; + if (unlikely(ring->next2fill == ring->size)) { + ring->next2fill = 0; + VMXNET3_FLIP_RING_GEN(ring->gen); + } +} + +static inline void +vmxnet3_cmd_ring_adv_next2comp(struct vmxnet3_cmd_ring *ring) +{ + VMXNET3_INC_RING_IDX_ONLY(ring->next2comp, ring->size); +} + +static inline int +vmxnet3_cmd_ring_desc_avail(struct vmxnet3_cmd_ring *ring) +{ + return (ring->next2comp > ring->next2fill ?
0 : ring->size) + + ring->next2comp - ring->next2fill - 1; +} + +struct vmxnet3_comp_ring { + union Vmxnet3_GenericDesc *base; + u32 size; + u32 next2proc; + u8 gen; + u8 intr_idx; + dma_addr_t basePA; +}; + +static inline void +vmxnet3_comp_ring_adv_next2proc(struct vmxnet3_comp_ring *ring) +{ + ring->next2proc++; + if (unlikely(ring->next2proc == ring->size)) { + ring->next2proc = 0; + VMXNET3_FLIP_RING_GEN(ring->gen); + } +} + +struct vmxnet3_tx_data_ring { + struct Vmxnet3_TxDataDesc *base; + u32 size; + dma_addr_t basePA; +}; + +enum vmxnet3_buf_map_type { + VMXNET3_MAP_INVALID = 0, + VMXNET3_MAP_NONE, + VMXNET3_MAP_SINGLE, + VMXNET3_MAP_PAGE, +}; + +struct vmxnet3_tx_buf_info { + u32 map_type; + u16 len; + u16 sop_idx; + dma_addr_t dma_addr; + struct sk_buff *skb; +}; + +struct vmxnet3_tq_driver_stats { + u64 drop_total; /* # of pkts dropped by the driver, the + * counters below track droppings due to + * different reasons + */ + u64 drop_too_many_frags; + u64 drop_oversized_hdr; + u64 drop_hdr_inspect_err; + u64 drop_tso; + + u64 tx_ring_full; + u64 linearized; /* # of pkts linearized */ + u64 copy_skb_header; /* # of times we have to copy skb header */ + u64 oversized_hdr; +}; + +struct vmxnet3_tx_ctx { + bool ipv4; + u16 mss; + u32 eth_ip_hdr_size; /* only valid for pkts requesting tso or csum + * offloading + */ + u32 l4_hdr_size; /* only valid if mss != 0 */ + u32 copy_size; /* # of bytes copied into the data ring */ + union Vmxnet3_GenericDesc *sop_txd; + union Vmxnet3_GenericDesc *eop_txd; +}; + +struct vmxnet3_tx_queue { + spinlock_t tx_lock; + struct vmxnet3_cmd_ring tx_ring; + struct vmxnet3_tx_buf_info *buf_info; + struct vmxnet3_tx_data_ring data_ring; + struct vmxnet3_comp_ring comp_ring; + struct Vmxnet3_TxQueueCtrl *shared; + struct vmxnet3_tq_driver_stats stats; + bool stopped; + int num_stop; /* # of times the queue is + * stopped */ +} __attribute__((__aligned__(SMP_CACHE_BYTES))); + +enum vmxnet3_rx_buf_type { + VMXNET3_RX_BUF_NONE = 0, + VMXNET3_RX_BUF_SKB = 1, + VMXNET3_RX_BUF_PAGE = 2 +}; + +struct vmxnet3_rx_buf_info { + enum vmxnet3_rx_buf_type buf_type; + u16 len; + union { + struct sk_buff *skb; + struct page *page; + }; + dma_addr_t dma_addr; +}; + +struct vmxnet3_rx_ctx { + struct sk_buff *skb; + u32 sop_idx; +}; + +struct vmxnet3_rq_driver_stats { + u64 drop_total; + u64 drop_err; + u64 drop_fcs; + u64 rx_buf_alloc_failure; +}; + +struct vmxnet3_rx_queue { + struct vmxnet3_cmd_ring rx_ring[2]; + struct vmxnet3_comp_ring comp_ring; + struct vmxnet3_rx_ctx rx_ctx; + u32 qid; /* rqID in RCD for buffer from 1st ring */ + u32 qid2; /* rqID in RCD for buffer from 2nd ring */ + u32 uncommitted[2]; /* # of buffers allocated since last RXPROD + * update */ + struct vmxnet3_rx_buf_info *buf_info[2]; + struct Vmxnet3_RxQueueCtrl *shared; + struct vmxnet3_rq_driver_stats stats; +} __attribute__((__aligned__(SMP_CACHE_BYTES))); + +#define VMXNET3_LINUX_MAX_MSIX_VECT 1 + +struct vmxnet3_intr { + enum vmxnet3_intr_mask_mode mask_mode; + enum vmxnet3_intr_type type; /* MSI-X, MSI, or INTx? 
*/ + u8 num_intrs; /* # of intr vectors */ + u8 event_intr_idx; /* idx of the intr vector for event */ + u8 mod_levels[VMXNET3_LINUX_MAX_MSIX_VECT]; /* moderation level */ +#ifdef CONFIG_PCI_MSI + struct msix_entry msix_entries[VMXNET3_LINUX_MAX_MSIX_VECT]; +#endif +}; + +#define VMXNET3_STATE_BIT_RESETTING 0 +#define VMXNET3_STATE_BIT_QUIESCED 1 +struct vmxnet3_adapter { + struct vmxnet3_tx_queue tx_queue; + struct vmxnet3_rx_queue rx_queue; + struct napi_struct napi; + struct vlan_group *vlan_grp; + + struct vmxnet3_intr intr; + + struct Vmxnet3_DriverShared *shared; + struct Vmxnet3_PMConf *pm_conf; + struct Vmxnet3_TxQueueDesc *tqd_start; /* first tx queue desc */ + struct Vmxnet3_RxQueueDesc *rqd_start; /* first rx queue desc */ + struct net_device *netdev; + struct pci_dev *pdev; + + u8 *hw_addr0; /* for BAR 0 */ + u8 *hw_addr1; /* for BAR 1 */ + + /* feature control */ + bool rxcsum; + bool lro; + bool jumbo_frame; + + /* rx buffer related */ + unsigned skb_buf_size; + int rx_buf_per_pkt; /* only apply to the 1st ring */ + dma_addr_t shared_pa; + dma_addr_t queue_desc_pa; + + /* Wake-on-LAN */ + u32 wol; + + /* Link speed */ + u32 link_speed; /* in mbps */ + + u64 tx_timeout_count; + struct work_struct work; + + unsigned long state; /* VMXNET3_STATE_BIT_xxx */ + + int dev_number; +}; + +#define VMXNET3_WRITE_BAR0_REG(adapter, reg, val) \ + writel((val), (adapter)->hw_addr0 + (reg)) +#define VMXNET3_READ_BAR0_REG(adapter, reg) \ + readl((adapter)->hw_addr0 + (reg)) + +#define VMXNET3_WRITE_BAR1_REG(adapter, reg, val) \ + writel((val), (adapter)->hw_addr1 + (reg)) +#define VMXNET3_READ_BAR1_REG(adapter, reg) \ + readl((adapter)->hw_addr1 + (reg)) + +#define VMXNET3_WAKE_QUEUE_THRESHOLD(tq) (5) +#define VMXNET3_RX_ALLOC_THRESHOLD(rq, ring_idx, adapter) \ + ((rq)->rx_ring[ring_idx].size >> 3) + +#define VMXNET3_GET_ADDR_LO(dma) ((u32)(dma)) +#define VMXNET3_GET_ADDR_HI(dma) ((u32)(((u64)(dma)) >> 32)) + +/* must be a multiple of VMXNET3_RING_SIZE_ALIGN */ +#define VMXNET3_DEF_TX_RING_SIZE 512 +#define VMXNET3_DEF_RX_RING_SIZE 256 + +#define VMXNET3_MAX_ETH_HDR_SIZE 22 +#define VMXNET3_MAX_SKB_BUF_SIZE (3*1024) + +int +vmxnet3_quiesce_dev(struct vmxnet3_adapter *adapter); + +int +vmxnet3_activate_dev(struct vmxnet3_adapter *adapter); + +void +vmxnet3_force_close(struct vmxnet3_adapter *adapter); + +void +vmxnet3_reset_dev(struct vmxnet3_adapter *adapter); + +void +vmxnet3_tq_destroy(struct vmxnet3_tx_queue *tq, + struct vmxnet3_adapter *adapter); + +void +vmxnet3_rq_destroy(struct vmxnet3_rx_queue *rq, + struct vmxnet3_adapter *adapter); + +int +vmxnet3_create_queues(struct vmxnet3_adapter *adapter, + u32 tx_ring_size, u32 rx_ring_size, u32 rx_ring2_size); + +extern void vmxnet3_set_ethtool_ops(struct net_device *netdev); +extern struct net_device_stats *vmxnet3_get_stats(struct net_device *netdev); + +extern char vmxnet3_driver_name[]; +#endif -- cgit v1.2.3 From a003236c32706f3c1f74d4e3b98c58cf0d9a9d8f Mon Sep 17 00:00:00 2001 From: Vincent Legoll Date: Tue, 13 Oct 2009 14:48:14 +0200 Subject: perf events: Update MAINTAINERS entry file patterns Add file patterns that match relevant files for this subsystem. 
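The F: entries added above are shell-style path patterns, one path component per '*'; the matching rules are spelled out in the "Descriptions of section entries" commit later in this series, and get_maintainer.pl implements them in perl. As a rough, hypothetical illustration of those wildcard rules (not the script's actual code), POSIX fnmatch(3) with FNM_PATHNAME behaves the same way, since that flag stops '*' at '/':

#include <fnmatch.h>
#include <stdio.h>

/* Returns 1 when 'path' matches an F:-style 'pattern'. FNM_PATHNAME keeps
 * '*' from crossing a '/', which gives the documented behaviour that
 * "drivers/net/*" matches files in drivers/net but not below it. */
static int f_entry_matches(const char *pattern, const char *path)
{
	return fnmatch(pattern, path, FNM_PATHNAME) == 0;
}

int main(void)
{
	/* a pattern from the perf entry above: '*' spans one component */
	printf("%d\n", f_entry_matches("arch/*/include/asm/perf_event.h",
				       "arch/x86/include/asm/perf_event.h")); /* 1 */
	/* the documented "but not below" case */
	printf("%d\n", f_entry_matches("drivers/net/*",
				       "drivers/net/vmxnet3/vmxnet3_drv.c"));  /* 0 */
	return 0;
}

A trailing slash, as in the "F: tools/perf/" entry, is a different case: per the section-entry description it is a plain prefix match covering all files and subdirectories below that path.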
Signed-off-by: Vincent Legoll Cc: Linus Torvalds Cc: paulus@samba.org Cc: a.p.zijlstra@chello.nl LKML-Reference: <4727185d0910130548p325f0185vf4e23b5491c730a0@mail.gmail.com> Signed-off-by: Ingo Molnar --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 69e31aab1308..cf690913f2b0 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4071,6 +4071,13 @@ M: Peter Zijlstra M: Paul Mackerras M: Ingo Molnar S: Supported +F: kernel/perf_event.c +F: include/linux/perf_event.h +F: arch/*/*/kernel/perf_event.c +F: arch/*/include/asm/perf_event.h +F: arch/*/lib/perf_event.c +F: arch/*/kernel/perf_callchain.c +F: tools/perf/ PERSONALITY HANDLING M: Christoph Hellwig -- cgit v1.2.3 From 57784dfa82fe032cf64613e154f3ae8748e3fa3d Mon Sep 17 00:00:00 2001 From: Huaxu Wan Date: Sat, 24 Oct 2009 13:28:45 +0200 Subject: hwmon: (coretemp) Maintainer update Intel will help maintaining the coretemp driver. Signed-off-by: Jean Delvare --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 88241154f4ce..46a18efc4ed3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1480,6 +1480,7 @@ F: mm/*cgroup* CORETEMP HARDWARE MONITORING DRIVER M: Rudolf Marek +M: Huaxu Wan L: lm-sensors@lm-sensors.org S: Maintained F: Documentation/hwmon/coretemp -- cgit v1.2.3 From 83fc9c8938cdb55f06aa6e14f4747136b747be74 Mon Sep 17 00:00:00 2001 From: Bartlomiej Zolnierkiewicz Date: Mon, 26 Oct 2009 20:14:33 +0100 Subject: MAINTAINERS: rt2x00 list is moderated Cc: users@rt2x00.serialmonkey.com Signed-off-by: Bartlomiej Zolnierkiewicz Signed-off-by: John W. Linville --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 70bc7ded9444..cdbbaf59a43b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4280,7 +4280,7 @@ F: drivers/video/aty/aty128fb.c RALINK RT2X00 WIRELESS LAN DRIVER P: rt2x00 project L: linux-wireless@vger.kernel.org -L: users@rt2x00.serialmonkey.com +L: users@rt2x00.serialmonkey.com (moderated for non-subscribers) W: http://rt2x00.serialmonkey.com/ S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/ivd/rt2x00.git -- cgit v1.2.3 From c32bec78e1af751a279ec1614eadd36aab532e56 Mon Sep 17 00:00:00 2001 From: Michael Buesch Date: Fri, 9 Oct 2009 20:48:16 +0200 Subject: b43: Remove me as maintainer Remove me Signed-off-by: Michael Buesch Signed-off-by: John W. Linville --- MAINTAINERS | 1 - 1 file changed, 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ca4131ea5f7b..950bd3294313 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1086,7 +1086,6 @@ F: include/net/ax25.h F: net/ax25/ B43 WIRELESS DRIVER -M: Michael Buesch M: Stefano Brivio L: linux-wireless@vger.kernel.org W: http://linuxwireless.org/en/users/Drivers/b43 -- cgit v1.2.3 From bda2562c34c338c516750b4f6e956c138115cce2 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:39 -0700 Subject: MAINTAINERS: update GENERIC UIO FOR PCI DEVICES Quote a name with a period remove L: linux-kernel@vger.kernel.org Signed-off-by: Joe Perches Cc: "Michael S. 
Tsirkin" Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 88241154f4ce..be2124b30693 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2272,9 +2272,8 @@ S: Maintained F: include/asm-generic GENERIC UIO DRIVER FOR PCI DEVICES -M: Michael S. Tsirkin +M: "Michael S. Tsirkin" L: kvm@vger.kernel.org -L: linux-kernel@vger.kernel.org S: Supported F: drivers/uio/uio_pci_generic.c -- cgit v1.2.3 From d6f005a108983292f191a177314c591a95fc2233 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:40 -0700 Subject: MAINTAINERS: update TRACING section Move to alphabetic position Use single line F: entries Signed-off-by: Joe Perches Cc: Steven Rostedt Cc: Ingo Molnar Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 26 ++++++++++++++------------ 1 file changed, 14 insertions(+), 12 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index be2124b30693..2c2ca16c4363 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2202,18 +2202,6 @@ F: Documentation/filesystems/caching/ F: fs/fscache/ F: include/linux/fscache*.h -TRACING -M: Steven Rostedt -M: Frederic Weisbecker -M: Ingo Molnar -T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git tracing/core -S: Maintained -F: Documentation/trace/ftrace.txt -F: arch/*/*/*/ftrace.h -F: arch/*/kernel/ftrace.c -F: include/*/ftrace.h include/trace/ include/linux/trace*.h -F: kernel/trace/ - FUJITSU FR-V (FRV) PORT M: David Howells S: Maintained @@ -5176,6 +5164,20 @@ L: tpmdd-devel@lists.sourceforge.net (moderated for non-subscribers) S: Maintained F: drivers/char/tpm/ +TRACING +M: Steven Rostedt +M: Frederic Weisbecker +M: Ingo Molnar +T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git tracing/core +S: Maintained +F: Documentation/trace/ftrace.txt +F: arch/*/*/*/ftrace.h +F: arch/*/kernel/ftrace.c +F: include/*/ftrace.h +F: include/linux/trace*.h +F: include/trace/ +F: kernel/trace/ + TRIVIAL PATCHES M: Jiri Kosina T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial.git -- cgit v1.2.3 From 0e24bdd49d33cc857918de8faabb7fd722410e71 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:40 -0700 Subject: MAINTAINERS: update OMAP Tony Lindgren email name Which had an embedded and duplicated email address Signed-off-by: Joe Perches Cc: Tony Lindgren Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 2c2ca16c4363..1cba3e9e21b6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3772,7 +3772,7 @@ F: drivers/video/riva/ F: drivers/video/nvidia/ OMAP SUPPORT -M: "Tony Lindgren " +M: Tony Lindgren L: linux-omap@vger.kernel.org W: http://www.muru.com/linux/omap/ W: http://linux.omap.com/ -- cgit v1.2.3 From 476604de5a9d4e782523c6bb0f4894f690768c87 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:41 -0700 Subject: MAINTAINERS: change ATM mailing list to moderated Signed-off-by: Joe Perches Cc: Chas Williams Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 1cba3e9e21b6..d0c16a166e47 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -992,7 +992,7 @@ F: drivers/net/atlx/ ATM M: Chas 
Williams -L: linux-atm-general@lists.sourceforge.net (subscribers-only) +L: linux-atm-general@lists.sourceforge.net (moderated for non-subscribers) L: netdev@vger.kernel.org W: http://linux-atm.sourceforge.net S: Maintained -- cgit v1.2.3 From 7a241d6ecd3ffbe5aa987176fc4466250ff3ed3b Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:42 -0700 Subject: MAINTAINERS: use tab not spaces after field types Signed-off-by: Joe Perches Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d0c16a166e47..100b448c93a1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4599,20 +4599,20 @@ S: Maintained F: drivers/mmc/host/sdricoh_cs.c SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) DRIVER -S: Orphan -L: linux-mmc@vger.kernel.org -F: drivers/mmc/host/sdhci.* +S: Orphan +L: linux-mmc@vger.kernel.org +F: drivers/mmc/host/sdhci.* SECURE DIGITAL HOST CONTROLLER INTERFACE, OPEN FIRMWARE BINDINGS (SDHCI-OF) M: Anton Vorontsov L: linuxppc-dev@ozlabs.org -L: linux-mmc@vger.kernel.org +L: linux-mmc@vger.kernel.org S: Maintained -F: drivers/mmc/host/sdhci-of.* +F: drivers/mmc/host/sdhci-of.* SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) SAMSUNG DRIVER M: Ben Dooks -L: linux-mmc@vger.kernel.org +L: linux-mmc@vger.kernel.org S: Maintained F: drivers/mmc/host/sdhci-s3c.c -- cgit v1.2.3 From ee709b0c62bd163fd3b5aff2320e8724cd8e3d8b Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:43 -0700 Subject: MAINTAINERS: update Kernel Janitors after mismerge Fix the mismerge of the W: URL and the S: status fields. Signed-off-by: Joe Perches Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 100b448c93a1..88dffb461498 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2991,8 +2991,8 @@ F: scripts/Makefile.* KERNEL JANITORS L: kernel-janitors@vger.kernel.org -W: http://www.kerneljanitors.org/ -S: Maintained +W: http://janitor.kernelnewbies.org/ +S: Odd Fixes KERNEL NFSD, SUNRPC, AND LOCKD SERVERS M: "J. 
Bruce Fields" -- cgit v1.2.3 From a2681a75426552c185b7c594116cc44ef8302aba Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:43 -0700 Subject: MAINTAINERS: update SCORE architecture name style and add file pattern Signed-off-by: Joe Perches Cc: Chen Liqin Cc: Lennox Wu Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 88dffb461498..8ea20fc88d09 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4521,12 +4521,11 @@ F: kernel/sched* F: include/linux/sched.h SCORE ARCHITECTURE -P: Chen Liqin -M: liqin.chen@sunplusct.com -P: Lennox Wu -M: lennox.wu@gmail.com +M: Chen Liqin +M: Lennox Wu W: http://www.sunplusct.com S: Supported +F: arch/score/ SCSI CDROM DRIVER M: Jens Axboe -- cgit v1.2.3 From 2bf822d79f81c5923bce5630d2815857b5b98a6e Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:44 -0700 Subject: MAINTAINERS: SIMPLE FIRMWARE INTERFACE: update email style Signed-off-by: Joe Perches Cc: Len Brown Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8ea20fc88d09..cbea223d6bb9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4714,8 +4714,7 @@ F: drivers/usb/gadget/lh7a40* F: drivers/usb/host/ohci-lh7a40* SIMPLE FIRMWARE INTERFACE (SFI) -P: Len Brown -M: lenb@kernel.org +M: Len Brown L: sfi-devel@simplefirmware.org W: http://simplefirmware.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-sfi-2.6.git -- cgit v1.2.3 From 364e9e1887da25ce1bd682f4926f536e88b084ad Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:45 -0700 Subject: MAINTAINERS: WINBOND CIR - Integrate P:/M: lines, fixup David Härdeman's name MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Joe Perches Cc: David Härdeman Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index cbea223d6bb9..4d809fff4b27 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5742,8 +5742,7 @@ S: Maintained F: drivers/scsi/wd7000.c WINBOND CIR DRIVER -P: David Härdeman -M: david@hardeman.nu +M: David Härdeman S: Maintained F: drivers/input/misc/winbond-cir.c -- cgit v1.2.3 From b55ef929cb0b7fc8c10651396b46286e505806c3 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:46 -0700 Subject: MAINTAINERS: fix up PERIPHERAL spelling Signed-off-by: Joe Perches Cc: Li Yang Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 4d809fff4b27..5ad2a7666f0a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2151,7 +2151,7 @@ S: Supported F: arch/powerpc/sysdev/qe_lib/ F: arch/powerpc/include/asm/*qe.h -FREESCALE USB PERIPHERIAL DRIVERS +FREESCALE USB PERIPHERAL DRIVERS M: Li Yang L: linux-usb@vger.kernel.org L: linuxppc-dev@ozlabs.org -- cgit v1.2.3 From 27480ccc29c84206ad53f1990d4a22ff6236de91 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:47 -0700 Subject: MAINTAINERS: update WOLFSON MICROELECTRONICS Integrate P:/M: lines Remove L: linux-kernel@vger.kernel.org Signed-off-by: Joe Perches Cc: Mark Brown 
Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 5ad2a7666f0a..6a3f5ab209e1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5799,9 +5799,7 @@ F: drivers/input/touchscreen/*wm97* F: include/linux/wm97xx.h WOLFSON MICROELECTRONICS PMIC DRIVERS -P: Mark Brown -M: broonie@opensource.wolfsonmicro.com -L: linux-kernel@vger.kernel.org +M: Mark Brown T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus W: http://opensource.wolfsonmicro.com/node/8 S: Supported -- cgit v1.2.3 From c7c4fb18d0026bdb70d287a97ba9c38a649bf39e Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 26 Oct 2009 16:49:48 -0700 Subject: MAINTAINERS: document new "K:" entry type K: is for keyword. Syntax is perl extended regex. Reorganized header documentation and indent the section entry descriptions so that the first K: would not be considered a regex to match by get_maintainer.pl Signed-off-by: Joe Perches Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 70 ++++++++++++++++++++++++++++++++++--------------------------- 1 file changed, 39 insertions(+), 31 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 6a3f5ab209e1..23a61d9d0e78 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -65,43 +65,51 @@ trivial patch so apply some common sense. 8. Happy hacking. - ----------------------------------- - -Maintainers List (try to look for most precise areas first) +Descriptions of section entries: + + P: Person (obsolete) + M: Mail patches to: FullName + L: Mailing list that is relevant to this area + W: Web-page with status/info + T: SCM tree type and location. Type is one of: git, hg, quilt, stgit. + S: Status, one of the following: + Supported: Someone is actually paid to look after this. + Maintained: Someone actually looks after it. + Odd Fixes: It has a maintainer but they don't have time to do + much other than throw the odd patch in. See below.. + Orphan: No current maintainer [but maybe you could take the + role as you write your new code]. + Obsolete: Old code. Something tagged obsolete generally means + it has been replaced by a better system and you + should be using that. + F: Files and directories with wildcard patterns. + A trailing slash includes all files and subdirectory files. + F: drivers/net/ all files in and below drivers/net + F: drivers/net/* all files in drivers/net, but not below + F: */net/* all files in "any top level directory"/net + One pattern per line. Multiple F: lines acceptable. + X: Files and directories that are NOT maintained, same rules as F: + Files exclusions are tested before file matches. + Can be useful for excluding a specific subdirectory, for instance: + F: net/ + X: net/ipv6/ + matches all files in and below net excluding net/ipv6/ + K: Keyword perl extended regex pattern to match content in a + patch or file. For instance: + K: of_get_profile + matches patches or files that contain "of_get_profile" + K: \b(printk|pr_(info|err))\b + matches patches or files that contain one or more of the words + printk, pr_info or pr_err + One regex pattern per line. Multiple K: lines acceptable. Note: For the hard of thinking, this list is meant to remain in alphabetical order. 
If you could add yourselves to it in alphabetical order that would be so much easier [Ed] -P: Person (obsolete) -M: Mail patches to: FullName -L: Mailing list that is relevant to this area -W: Web-page with status/info -T: SCM tree type and location. Type is one of: git, hg, quilt, stgit. -S: Status, one of the following: - - Supported: Someone is actually paid to look after this. - Maintained: Someone actually looks after it. - Odd Fixes: It has a maintainer but they don't have time to do - much other than throw the odd patch in. See below.. - Orphan: No current maintainer [but maybe you could take the - role as you write your new code]. - Obsolete: Old code. Something tagged obsolete generally means - it has been replaced by a better system and you - should be using that. +Maintainers List (try to look for most precise areas first) -F: Files and directories with wildcard patterns. - A trailing slash includes all files and subdirectory files. - F: drivers/net/ all files in and below drivers/net - F: drivers/net/* all files in drivers/net, but not below - F: */net/* all files in "any top level directory"/net - One pattern per line. Multiple F: lines acceptable. -X: Files and directories that are NOT maintained, same rules as F: - Files exclusions are tested before file matches. - Can be useful for excluding a specific subdirectory, for instance: - F: net/ - X: net/ipv6/ - matches all files in and below net excluding net/ipv6/ + ----------------------------------- 3C505 NETWORK DRIVER M: Philip Blundell -- cgit v1.2.3 From 860c44c1968bf9ac401f058549b687fde72ff02a Mon Sep 17 00:00:00 2001 From: Grant Likely Date: Mon, 26 Oct 2009 16:49:49 -0700 Subject: MAINTAINERS: add Open Firmware / Flattened Device Tree entry Signed-off-by: Grant Likely Cc: Joe Perches Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 9 +++++++++ 1 file changed, 9 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 23a61d9d0e78..63008663f4ad 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3885,6 +3885,15 @@ S: Maintained F: Documentation/i2c/busses/i2c-ocores F: drivers/i2c/busses/i2c-ocores.c +OPEN FIRMWARE AND FLATTENED DEVICE TREE +M: Grant Likely +L: devicetree-discuss@lists.ozlabs.org +W: http://fdt.secretlab.ca +S: Maintained +F: drivers/of +F: include/linux/of*.h +K: of_get_property + OPROFILE M: Robert Richter L: oprofile-list@lists.sf.net -- cgit v1.2.3 From b5654f5e7fc414a6e69b3647db2b043257c9e62e Mon Sep 17 00:00:00 2001 From: Bartlomiej Zolnierkiewicz Date: Mon, 26 Oct 2009 16:49:50 -0700 Subject: MAINTAINERS: rt2x00 list is moderated Signed-off-by: Bartlomiej Zolnierkiewicz Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 63008663f4ad..ece9f60dc5d5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4329,7 +4329,7 @@ F: drivers/video/aty/aty128fb.c RALINK RT2X00 WIRELESS LAN DRIVER P: rt2x00 project L: linux-wireless@vger.kernel.org -L: users@rt2x00.serialmonkey.com +L: users@rt2x00.serialmonkey.com (moderated for non-subscribers) W: http://rt2x00.serialmonkey.com/ S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/ivd/rt2x00.git -- cgit v1.2.3 From dffc14365bb07812567ee7f3f8699277ef19aaa8 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 4 Nov 2009 09:38:58 -0800 Subject: MAINTAINERS: Add git net-next-2.6 Add a reference to the the git tree where most of the forward going network 
development occurs. Signed-off-by: Joe Perches Signed-off-by: David S. Miller --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index cdbbaf59a43b..c856aee0f1e2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3632,6 +3632,7 @@ L: netdev@vger.kernel.org W: http://www.linuxfoundation.org/en/Net W: http://patchwork.ozlabs.org/project/netdev/list/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6.git +T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6.git S: Maintained F: net/ F: include/net/ -- cgit v1.2.3 From e1a6542f24fad84a132f79e13ca452c37df857c4 Mon Sep 17 00:00:00 2001 From: Ivo van Doorn Date: Sat, 7 Nov 2009 19:14:47 +0100 Subject: rt2x00: update MAINTAINERS Although I have always been the active maintainer of the rt2x00 drivers, I was not mentioned explicitely in the MAINTAINERS file as such. Update the rt2x00 entry in the MAINTAINERS file to add my name and email address. Signed-off-by: Ivo van Doorn Signed-off-by: John W. Linville --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c856aee0f1e2..b37e0cdd7632 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4280,6 +4280,7 @@ F: drivers/video/aty/aty128fb.c RALINK RT2X00 WIRELESS LAN DRIVER P: rt2x00 project +M: Ivo van Doorn L: linux-wireless@vger.kernel.org L: users@rt2x00.serialmonkey.com (moderated for non-subscribers) W: http://rt2x00.serialmonkey.com/ -- cgit v1.2.3 From 52e650407d709fd02091f0d735434def96a545bc Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 11 Nov 2009 14:26:09 -0800 Subject: MAINTAINERS: ASUS ACPI EXTRAS - remove F:arch/x86/kernel/acpi/boot.c Oops. How did that get there? (Don't look, it's my original pattern commit...) 
Signed-off-by: Joe Perches Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 1 - 1 file changed, 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8264e6bddaa1..ba8ded198b2b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -906,7 +906,6 @@ M: Karol Kozimor L: acpi4asus-user@lists.sourceforge.net W: http://acpi4asus.sf.net S: Maintained -F: arch/x86/kernel/acpi/boot.c F: drivers/platform/x86/asus_acpi.c ASUS ASB100 HARDWARE MONITOR DRIVER -- cgit v1.2.3 From 455518e7b5ff53478821fe9185531e0a2135d076 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 11 Nov 2009 14:26:10 -0800 Subject: MAINTAINERS: BROCADE BFA - Use single line M: and tabs Integrate P:/M: to single M: Use tab for spacing Signed-off-by: Joe Perches Cc: Jing Huang Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ba8ded198b2b..b6bcc2f14083 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1244,11 +1244,10 @@ S: Supported F: drivers/net/tg3.* BROCADE BFA FC SCSI DRIVER -P: Jing Huang -M: huangj@brocade.com -L: linux-scsi@vger.kernel.org -S: Supported -F: drivers/scsi/bfa/ +M: Jing Huang +L: linux-scsi@vger.kernel.org +S: Supported +F: drivers/scsi/bfa/ BSG (block layer generic sg v4 driver) M: FUJITA Tomonori -- cgit v1.2.3 From 3387f656309dd427023c7e00cad901a89d40bc1c Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 11 Nov 2009 14:26:11 -0800 Subject: MAINTAINERS: SERVER ENGINES 10Gbps iSCSI - Use single line M: Integrate P:/M: lines to single M: Use tabs not spaces Signed-off-by: Joe Perches Cc: Jayamohan Kallickal Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index b6bcc2f14083..e5d11054e78b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4671,12 +4671,11 @@ F: include/linux/ata.h F: include/linux/libata.h SERVER ENGINES 10Gbps iSCSI - BladeEngine 2 DRIVER -P: Jayamohan Kallickal -M: jayamohank@serverengines.com -L: linux-scsi@vger.kernel.org -W: http://www.serverengines.com -S: Supported -F: drivers/scsi/be2iscsi/ +M: Jayamohan Kallickal +L: linux-scsi@vger.kernel.org +W: http://www.serverengines.com +S: Supported +F: drivers/scsi/be2iscsi/ SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER M: Sathya Perla -- cgit v1.2.3 From 65c8bb5b9f25d869b814ca4238935a2e609f81cf Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 11 Nov 2009 14:26:12 -0800 Subject: MAINTAINERS: VMWARE VMXNET3 - Quote name with comma and period, use tabs Names with periods or commas need to be quoted Use tab not spaces Signed-off-by: Joe Perches Cc: Shreyas Bhatewara Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index e5d11054e78b..5577259911d7 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5681,11 +5681,11 @@ F: drivers/vlynq/vlynq.c F: include/linux/vlynq.h VMWARE VMXNET3 ETHERNET DRIVER -M: Shreyas Bhatewara -M: VMware, Inc. -L: netdev@vger.kernel.org -S: Maintained -F: drivers/net/vmxnet3/ +M: Shreyas Bhatewara +M: "VMware, Inc." 
+L: netdev@vger.kernel.org +S: Maintained +F: drivers/net/vmxnet3/ VOLTAGE AND CURRENT REGULATOR FRAMEWORK M: Liam Girdwood -- cgit v1.2.3 From b0c9065324b7c1242b97b27a1407da18471b9b23 Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Wed, 11 Nov 2009 14:26:13 -0800 Subject: MAINTAINERS: openipmi list is moderated openipmi list is moderated. Signed-off-by: Randy Dunlap Acked-by: Corey Minyard Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 5577259911d7..bdbfdcdbee0e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2823,7 +2823,7 @@ F: drivers/infiniband/hw/ipath/ IPMI SUBSYSTEM M: Corey Minyard -L: openipmi-developer@lists.sourceforge.net +L: openipmi-developer@lists.sourceforge.net (moderated for non-subscribers) W: http://openipmi.sourceforge.net/ S: Supported F: Documentation/IPMI.txt -- cgit v1.2.3 From eeba444a5d5646553c094f79235b50e757aa1424 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 11 Nov 2009 14:26:44 -0800 Subject: MAINTAINERS: correct 9P FILE SYSTEM git entry Signed-off-by: Joe Perches Cc: Latchesar Ionkov Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index bdbfdcdbee0e..c13e66f771c5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -182,7 +182,7 @@ M: Ron Minnich M: Latchesar Ionkov L: v9fs-developer@lists.sourceforge.net W: http://swik.net/v9fs -T: git git://git.kernel.org/pub/scm/linux/kernel/ericvh/v9fs.git +T: git git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs.git S: Maintained F: Documentation/filesystems/9p.txt F: fs/9p/ -- cgit v1.2.3 From 34693710601146d683018a14817f93791dfb2737 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 11 Nov 2009 14:26:45 -0800 Subject: MAINTAINERS: correct NETFILTER git entry format Signed-off-by: Joe Perches Cc: Patrick McHardy Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c13e66f771c5..765c3c50f81a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3616,7 +3616,7 @@ L: netfilter@vger.kernel.org L: coreteam@netfilter.org W: http://www.netfilter.org/ W: http://www.iptables.org/ -T: git://git.kernel.org/pub/scm/linux/kernel/git/kaber/nf-2.6.git +T: git git://git.kernel.org/pub/scm/linux/kernel/git/kaber/nf-2.6.git S: Supported F: include/linux/netfilter* F: include/linux/netfilter/ -- cgit v1.2.3 From df69f0da187815bc116860be4a29c7c953d22c5a Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Wed, 11 Nov 2009 14:26:46 -0800 Subject: MAINTAINERS: correct SECURITY SUBSYSTEM git entry Use git.kernel.org not www.kernel.org Signed-off-by: Joe Perches Cc: James Morris Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 765c3c50f81a..81d68d5b7eea 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4635,7 +4635,7 @@ F: drivers/mmc/host/sdhci-s3c.c SECURITY SUBSYSTEM M: James Morris L: linux-security-module@vger.kernel.org (suggested Cc:) -T: git git://www.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6.git +T: git git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6.git W: http://security.wiki.kernel.org/ S: 
Supported F: security/ -- cgit v1.2.3 From 4a7bd3ec7aa305048b0e4791d056c52ac1f43ddf Mon Sep 17 00:00:00 2001 From: Gertjan van Wingerde Date: Sun, 8 Nov 2009 12:31:20 +0100 Subject: rt2x00: Update MAINTAINERS Add myself to the list of maintainers for the rt2x00 driver. Signed-off-by: Gertjan van Wingerde Signed-off-by: John W. Linville --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index b37e0cdd7632..223d6976c08c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4281,6 +4281,7 @@ F: drivers/video/aty/aty128fb.c RALINK RT2X00 WIRELESS LAN DRIVER P: rt2x00 project M: Ivo van Doorn +M: Gertjan van Wingerde L: linux-wireless@vger.kernel.org L: users@rt2x00.serialmonkey.com (moderated for non-subscribers) W: http://rt2x00.serialmonkey.com/ -- cgit v1.2.3 From 505c92470bda486be589729514d3ed5302fb0551 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Thu, 12 Nov 2009 14:55:00 -0800 Subject: MAINTAINERS: RFKILL - Fix pattern entry missing colon Signed-off-by: Joe Perches Signed-off-by: John W. Linville --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 223d6976c08c..887b50b27cce 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4369,7 +4369,7 @@ RFKILL M: Johannes Berg L: linux-wireless@vger.kernel.org S: Maintained -F Documentation/rfkill.txt +F: Documentation/rfkill.txt F: net/rfkill/ RISCOM8 DRIVER -- cgit v1.2.3 From 9c2b5bdee125d840b2023dcc7625353c8e5bb10c Mon Sep 17 00:00:00 2001 From: Amit Kumar Salecha Date: Fri, 13 Nov 2009 16:37:37 +0000 Subject: netxen: update MAINTAINERS Changing MAINTAINERS for netxen nic driver. Signed-off-by: Amit Kumar Salecha Signed-off-by: David S. Miller --- MAINTAINERS | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 7f2f29cf75ff..fe1cd41b660f 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3704,9 +3704,9 @@ F: include/linux/if_* F: include/linux/*device.h NETXEN (1/10) GbE SUPPORT -M: Dhananjay Phadke +M: Amit Kumar Salecha L: netdev@vger.kernel.org -W: http://www.netxen.com +W: http://www.qlogic.com S: Supported F: drivers/net/netxen/ -- cgit v1.2.3 From 410d7a979e0bc8386bf26316303809fa5688238c Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Tue, 17 Nov 2009 14:06:15 -0800 Subject: MAINTAINERS: KMEMCHECK: add file patterns, use M: for Pekka's name and address Signed-off-by: Joe Perches Cc: Vegard Nossum Cc: Pekka Enberg Cc: Ingo Molnar Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 81d68d5b7eea..1ba4f98c9cbc 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3084,9 +3084,13 @@ F: kernel/kgdb.c KMEMCHECK M: Vegard Nossum -P Pekka Enberg -M: penberg@cs.helsinki.fi +M: Pekka Enberg S: Maintained +F: Documentation/kmemcheck.txt +F: arch/x86/include/asm/kmemcheck.h +F: arch/x86/mm/kmemcheck/ +F: include/linux/kmemcheck.h +F: mm/kmemcheck.c KMEMLEAK M: Catalin Marinas -- cgit v1.2.3 From cefbf4ea629427af2fb4709bab9fe126dcddc234 Mon Sep 17 00:00:00 2001 From: Russell King Date: Sun, 22 Nov 2009 17:40:28 +0000 Subject: MAINTAINERS: add maintainer information for AMBA primecell drivers Signed-off-by: Russell King --- MAINTAINERS | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 
c824b4d62754..e9fcac2cf1a8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -512,10 +512,32 @@ W: http://www.arm.linux.org.uk/ S: Maintained F: arch/arm/ +ARM PRIMECELL AACI PL041 DRIVER +M: Russell King +S: Maintained +F: sound/arm/aaci.* + +ARM PRIMECELL CLCD PL110 DRIVER +M: Russell King +S: Maintained +F: drivers/video/amba-clcd.* + +ARM PRIMECELL KMI PL050 DRIVER +M: Russell King +S: Maintained +F: drivers/input/serio/ambakmi.* +F: include/linux/amba/kmi.h + ARM PRIMECELL MMCI PL180/1 DRIVER S: Orphan F: drivers/mmc/host/mmci.* +ARM PRIMECELL BUS SUPPORT +M: Russell King +S: Maintained +F: drivers/amba/ +F: include/linux/amba/bus.h + ARM/ADI ROADRUNNER MACHINE SUPPORT M: Lennert Buytenhek L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) -- cgit v1.2.3 From 949cb6232d5fc9fa77cfa441418e12d6f9de163e Mon Sep 17 00:00:00 2001 From: Artem Bityutskiy Date: Tue, 20 Oct 2009 10:21:45 +0300 Subject: MAINTAINERS: change e-mail of Artem Bityutskiy Nowadays I have all my comunity-related stuff at gmail, so update my e-mail address. Signed-off-by: Artem Bityutskiy --- MAINTAINERS | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c824b4d62754..8a158e1a9c4c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5241,7 +5241,7 @@ S: Maintained F: drivers/scsi/u14-34f.c UBI FILE SYSTEM (UBIFS) -M: Artem Bityutskiy +M: Artem Bityutskiy M: Adrian Hunter L: linux-mtd@lists.infradead.org T: git git://git.infradead.org/ubifs-2.6.git @@ -5292,7 +5292,7 @@ F: drivers/cdrom/cdrom.c F: include/linux/cdrom.h UNSORTED BLOCK IMAGES (UBI) -M: Artem Bityutskiy +M: Artem Bityutskiy W: http://www.linux-mtd.infradead.org/ L: linux-mtd@lists.infradead.org T: git git://git.infradead.org/ubi-2.6.git -- cgit v1.2.3 From 03b70d625c10d1605012d41489d9df18467c5f55 Mon Sep 17 00:00:00 2001 From: Jean Delvare Date: Thu, 26 Nov 2009 09:22:32 +0100 Subject: MAINTAINERS: Add missing i2c files Add missing header files to the i2c subsystem section. 
Signed-off-by: Jean Delvare --- MAINTAINERS | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c824b4d62754..d5e8d5ad115d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2533,8 +2533,7 @@ S: Maintained F: Documentation/i2c/ F: drivers/i2c/ F: include/linux/i2c.h -F: include/linux/i2c-dev.h -F: include/linux/i2c-id.h +F: include/linux/i2c-*.h I2C-TINY-USB DRIVER M: Till Harbaum -- cgit v1.2.3 From 4b3fa3c486f53ff267505b115ebf611726217da8 Mon Sep 17 00:00:00 2001 From: Olivier Lorin Date: Sat, 3 Oct 2009 19:09:10 -0300 Subject: V4L/DVB (13372a): MAINTAINERS: addition of gspca_gl860 driver MAINTAINERS: addition of gspca_gl860 driver - addition of gspca_gl860 driver Signed-off-by: Olivier Lorin Signed-off-by: Mauro Carvalho Chehab --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c824b4d62754..4982eb0baed1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2312,6 +2312,13 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git S: Maintained F: drivers/media/video/gspca/finepix.c +GSPCA GL860 SUBDRIVER +M: Olivier Lorin +L: linux-media@vger.kernel.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git +S: Maintained +F: drivers/media/video/gspca/gl860/ + GSPCA M5602 SUBDRIVER M: Erik Andren L: linux-media@vger.kernel.org -- cgit v1.2.3 From f6cd53c6a4204a3d4a274546449a70a766a99b6e Mon Sep 17 00:00:00 2001 From: Samuel Ortiz Date: Tue, 24 Nov 2009 11:33:26 +0800 Subject: MAINTAINERS: Add iwmc3200wifi entry Update MAINTAINERS with the Intel supported iwmc3200wifi entry. Signed-off-by: Samuel Ortiz Signed-off-by: Zhu Yi Signed-off-by: John W. Linville --- MAINTAINERS | 9 +++++++++ 1 file changed, 9 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 808e4dfb34be..5531e6fd10dd 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2788,6 +2788,15 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi-2.6.git S: Supported F: drivers/net/wireless/iwlwifi/ +INTEL WIRELESS MULTICOMM 3200 WIFI (iwmc3200wifi) +M: Samuel Ortiz +M: Zhu Yi +M: Intel Linux Wireless +L: linux-wireless@vger.kernel.org +S: Supported +W: http://wireless.kernel.org/en/users/Drivers/iwmc3200wifi +F: drivers/net/wireless/iwmc3200wifi/ + IOC3 ETHERNET DRIVER M: Ralf Baechle L: linux-mips@linux-mips.org -- cgit v1.2.3 From 7f49a7f7011f3a59b51dd6003714d7aed72d7718 Mon Sep 17 00:00:00 2001 From: Jonathan Cameron Date: Thu, 15 Oct 2009 16:39:30 +0100 Subject: MAINTAINERS: Add entries for IMote 2 and Stargate 2 Signed-off-by: Jonathan Cameron Signed-off-by: Eric Miao --- MAINTAINERS | 13 +++++++++++++ 1 file changed, 13 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c824b4d62754..54f7d4afdbb1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -707,6 +707,19 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/mach-ixp4xx/ +ARM/INTEL RESEARCH IMOTE 2 MACHINE SUPPORT +M: Jonathan Cameron +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) +S: Maintained +F: arch/arm/mach-pxa/imote2.c + +ARM/INTEL RESEARCH STARGATE 2 MACHINE SUPPORT +M: Jonathan Cameron +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) +S: Maintained +F: arch/arm/mach-pxa/stargate2.c +F: drivers/pcmcia/pxa2xx_stargate2.c + ARM/INTEL XSC3 (MANZANO) ARM CORE M: Lennert Buytenhek M: Dan Williams -- cgit v1.2.3 From 
c69f677cc852f3f7b2342ab2f1598670a463d576 Mon Sep 17 00:00:00 2001 From: Geert Uytterhoeven Date: Fri, 20 Nov 2009 20:48:31 +0100 Subject: fbdev: Migrate mailing lists to vger The fbdev mailing lists at SourceForge have been migrated to a single mailing list at kernel.org: linux-fbdev@vger.kernel.org. Signed-off-by: Geert Uytterhoeven Acked-by: Jean Delvare Signed-off-by: Linus Torvalds --- Documentation/fb/framebuffer.txt | 6 ++---- MAINTAINERS | 28 ++++++++++++++-------------- 2 files changed, 16 insertions(+), 18 deletions(-) (limited to 'MAINTAINERS') diff --git a/Documentation/fb/framebuffer.txt b/Documentation/fb/framebuffer.txt index b3e3a0356839..fe79e3c8847d 100644 --- a/Documentation/fb/framebuffer.txt +++ b/Documentation/fb/framebuffer.txt @@ -312,10 +312,8 @@ and to the following documentation: 8. Mailing list --------------- -There are several frame buffer device related mailing lists at SourceForge: - - linux-fbdev-announce@lists.sourceforge.net, for announcements, - - linux-fbdev-user@lists.sourceforge.net, for generic user support, - - linux-fbdev-devel@lists.sourceforge.net, for project developers. +There is a frame buffer device related mailing list at kernel.org: +linux-fbdev@vger.kernel.org. Point your web browser to http://sourceforge.net/projects/linux-fbdev/ for subscription information and archive browsing. diff --git a/MAINTAINERS b/MAINTAINERS index c824b4d62754..5726ffc8c43d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1027,7 +1027,7 @@ F: drivers/serial/atmel_serial.c ATMEL LCDFB DRIVER M: Nicolas Ferre -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/atmel_lcdfb.c F: include/video/atmel_lcdc.h @@ -2113,7 +2113,7 @@ F: drivers/net/wan/dlci.c F: drivers/net/wan/sdla.c FRAMEBUFFER LAYER -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org W: http://linux-fbdev.sourceforge.net/ S: Orphan F: Documentation/fb/ @@ -2136,7 +2136,7 @@ F: drivers/i2c/busses/i2c-cpm.c FREESCALE IMX / MXC FRAMEBUFFER DRIVER M: Sascha Hauer -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained F: arch/arm/plat-mxc/include/mach/imxfb.h @@ -2635,7 +2635,7 @@ S: Supported F: security/integrity/ima/ IMS TWINTURBO FRAMEBUFFER DRIVER -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Orphan F: drivers/video/imsttfb.c @@ -2670,14 +2670,14 @@ F: drivers/input/ INTEL FRAMEBUFFER DRIVER (excluding 810 and 815) M: Sylvain Meyer -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: Documentation/fb/intelfb.txt F: drivers/video/intelfb/ INTEL 810/815 FRAMEBUFFER DRIVER M: Antonino Daplas -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/i810/ @@ -3391,7 +3391,7 @@ S: Supported MATROX FRAMEBUFFER DRIVER M: Petr Vandrovec -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/matrox/matroxfb_* F: include/linux/matroxfb.h @@ -3778,7 +3778,7 @@ F: fs/ntfs/ NVIDIA (rivafb and nvidiafb) FRAMEBUFFER DRIVER M: Antonino Daplas -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org 
S: Maintained F: drivers/video/riva/ F: drivers/video/nvidia/ @@ -3813,7 +3813,7 @@ F: sound/soc/omap/ OMAP FRAMEBUFFER SUPPORT M: Imre Deak -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org L: linux-omap@vger.kernel.org S: Maintained F: drivers/video/omap/ @@ -4319,14 +4319,14 @@ F: include/linux/qnxtypes.h RADEON FRAMEBUFFER DISPLAY DRIVER M: Benjamin Herrenschmidt -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/aty/radeon* F: include/linux/radeonfb.h RAGE128 FRAMEBUFFER DISPLAY DRIVER M: Paul Mackerras -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/aty/aty128fb.c @@ -4465,7 +4465,7 @@ F: drivers/net/wireless/rtl818x/rtl8187* S3 SAVAGE FRAMEBUFFER DRIVER M: Antonino Daplas -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/savage/ @@ -5628,7 +5628,7 @@ S: Maintained UVESAFB DRIVER M: Michal Januszewski -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org W: http://dev.gentoo.org/~spock/projects/uvesafb/ S: Maintained F: Documentation/fb/uvesafb.txt @@ -5661,7 +5661,7 @@ F: drivers/mmc/host/via-sdmmc.c VIA UNICHROME(PRO)/CHROME9 FRAMEBUFFER DRIVER M: Joseph Chan M: Scott Fang -L: linux-fbdev-devel@lists.sourceforge.net (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained F: drivers/video/via/ -- cgit v1.2.3 From 870725d9fcdecb23eab696d405fa90df46151865 Mon Sep 17 00:00:00 2001 From: Srinidhi Kasagar Date: Tue, 1 Dec 2009 13:41:24 +0100 Subject: ARM: 5836/1: add MAINTAINERS entry Add MAINTAINERS entry for U8500 SoC Signed-off-by: srinidhi kasagar Acked-by: Andrea Gallo Signed-off-by: Russell King --- MAINTAINERS | 6 ++++++ 1 file changed, 6 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c824b4d62754..2c719e1fca71 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -893,6 +893,12 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) W: http://www.mcuos.com S: Maintained +ARM/U8500 ARM ARCHITECTURE +M: Srinidhi Kasagar +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) +S: Maintained +F: arch/arm/mach-ux500/ + ARM/VFP SUPPORT M: Russell King L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) -- cgit v1.2.3 From 974348435c6d7b65fd6d8857b4809b0ff208d323 Mon Sep 17 00:00:00 2001 From: Sam Ravnborg Date: Tue, 1 Dec 2009 13:17:44 -0800 Subject: kbuild: stepping down as maintainer It has been fun, but for the last year or more it has been a duty and a burden. So I leave it open for others to take over.
Signed-off-by: Sam Ravnborg Cc: Anibal Monsalve Salazar Cc: Steven Rostedt Cc: Michal Marek Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 16129e67678a..4f96ac81089c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3015,11 +3015,8 @@ S: Maintained F: fs/autofs4/ KERNEL BUILD -M: Sam Ravnborg -T: git git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-next.git -T: git git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-fixes.git L: linux-kbuild@vger.kernel.org -S: Maintained +S: Orphan F: Documentation/kbuild/ F: Makefile F: scripts/Makefile.* -- cgit v1.2.3 From b16533d3331e3b5905f31b00399933f956936c97 Mon Sep 17 00:00:00 2001 From: Uwe Kleine-König Date: Mon, 30 Nov 2009 16:53:24 +0100 Subject: MAINTAINERS: add tree and file pattern for ARM IMX MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Uwe Kleine-König Cc: Sascha Hauer Signed-off-by: Sascha Hauer --- MAINTAINERS | 3 +++ 1 file changed, 3 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index a1a2aceca5bd..d3d90ffed718 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -637,6 +637,9 @@ ARM/FREESCALE IMX / MXC ARM ARCHITECTURE M: Sascha Hauer L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained +T: git://git.pengutronix.de/git/imx/linux-2.6.git +F: arch/arm/mach-mx*/ +F: arch/arm/plat-mxc/ ARM/GLOMATION GESBC9312SX MACHINE SUPPORT M: Lennert Buytenhek -- cgit v1.2.3 From e0ee98513d1a2e24d2ddbdecf4216bcca29d1158 Mon Sep 17 00:00:00 2001 From: Alessandro Rubini Date: Wed, 2 Dec 2009 14:01:03 +0100 Subject: ARM: 5846/1: MAINTAINERS: Add arm Nomadik support Signed-off-by: Alessandro Rubini Acked-by: Andrea Gallo Signed-off-by: Russell King --- MAINTAINERS | 8 ++++++++ 1 file changed, 8 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 88241154f4ce..807aeadf0c4a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -741,6 +741,14 @@ ARM/NEC MOBILEPRO 900/c MACHINE SUPPORT M: Michael Petchkovsky S: Maintained +ARM/NOMADIK ARCHITECTURE +M: Alessandro Rubini +M: STEricsson +L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) +S: Maintained +F: arch/arm/mach-nomadik/ +F: arch/arm/plat-nomadik/ + ARM/OPENMOKO NEO FREERUNNER (GTA02) MACHINE SUPPORT M: Nelson Castillo L: openmoko-kernel@lists.openmoko.org (subscribers-only) -- cgit v1.2.3 From dbf9bfe615717d1145f263c0049fe2328e6ed395 Mon Sep 17 00:00:00 2001 From: jack wang Date: Wed, 14 Oct 2009 16:19:21 +0800 Subject: [SCSI] pm8001: add SAS/SATA HBA driver This driver supports PMC-Sierra PCIe SAS/SATA 8x6G SPC 8001 chip based host adapters. 
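The driver's first source file below, pm8001_ctl.c, is mostly one repeating shape: a read-only sysfs shost attribute whose show routine walks from the class device back to the pm8001 HBA structure before formatting a field of the firmware's main configuration table. A minimal sketch of that pattern, assuming only what the diff itself defines (the attribute name example_attr is hypothetical):

/*
 * Hedged sketch of the DEVICE_ATTR pattern used throughout pm8001_ctl.c.
 * "example_attr" is a made-up name; the cdev -> shost -> sas_ha -> lldd_ha
 * walk is exactly what the show routines in the diff below do.
 */
static ssize_t pm8001_ctl_example_show(struct device *cdev,
	struct device_attribute *attr, char *buf)
{
	struct Scsi_Host *shost = class_to_shost(cdev);     /* class device to host */
	struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); /* host to libsas HA */
	struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;   /* HA to LLDD private data */

	return snprintf(buf, PAGE_SIZE, "%d\n",
		pm8001_ha->main_cfg_tbl.interface_rev);
}
static DEVICE_ATTR(example_attr, S_IRUGO, pm8001_ctl_example_show, NULL);

Each such attribute is then collected in the pm8001_host_attrs[] array at the end of the file, which the SCSI midlayer exposes under /sys/class/scsi_host/hostN/.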
Signed-off-by: Jack Wang Signed-off-by: Lindar Liu Signed-off-by: Tom Peng Signed-off-by: Kevin Ao Signed-off-by: James Bottomley --- MAINTAINERS | 7 + drivers/scsi/Kconfig | 8 + drivers/scsi/Makefile | 1 + drivers/scsi/pm8001/Makefile | 12 + drivers/scsi/pm8001/pm8001_chips.h | 89 + drivers/scsi/pm8001/pm8001_ctl.c | 573 +++++ drivers/scsi/pm8001/pm8001_ctl.h | 67 + drivers/scsi/pm8001/pm8001_defs.h | 112 + drivers/scsi/pm8001/pm8001_hwi.c | 4371 ++++++++++++++++++++++++++++++++++++ drivers/scsi/pm8001/pm8001_hwi.h | 1011 +++++++++ drivers/scsi/pm8001/pm8001_init.c | 888 ++++++++ drivers/scsi/pm8001/pm8001_sas.c | 1104 +++++++++ drivers/scsi/pm8001/pm8001_sas.h | 480 ++++ include/linux/pci_ids.h | 2 + 14 files changed, 8725 insertions(+) create mode 100644 drivers/scsi/pm8001/Makefile create mode 100644 drivers/scsi/pm8001/pm8001_chips.h create mode 100644 drivers/scsi/pm8001/pm8001_ctl.c create mode 100644 drivers/scsi/pm8001/pm8001_ctl.h create mode 100644 drivers/scsi/pm8001/pm8001_defs.h create mode 100644 drivers/scsi/pm8001/pm8001_hwi.c create mode 100644 drivers/scsi/pm8001/pm8001_hwi.h create mode 100644 drivers/scsi/pm8001/pm8001_init.c create mode 100644 drivers/scsi/pm8001/pm8001_sas.c create mode 100644 drivers/scsi/pm8001/pm8001_sas.h (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index a1a2aceca5bd..016411cadc9a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4116,6 +4116,13 @@ W: http://www.pmc-sierra.com/ S: Supported F: drivers/scsi/pmcraid.* +PMC SIERRA PM8001 DRIVER +M: jack_wang@usish.com +M: lindar_liu@usish.com +L: linux-scsi@vger.kernel.org +S: Supported +F: drivers/scsi/pm8001/ + POSIX CLOCKS and TIMERS M: Thomas Gleixner S: Supported diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index e11cca4c784c..2e4f7d0ee639 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -1818,6 +1818,14 @@ config SCSI_PMCRAID ---help--- This driver supports the PMC SIERRA MaxRAID adapters. +config SCSI_PM8001 + tristate "PMC-Sierra SPC 8001 SAS/SATA Based Host Adapter driver" + depends on PCI && SCSI + select SCSI_SAS_LIBSAS + help + This driver supports PMC-Sierra PCIE SAS/SATA 8x6G SPC 8001 chip + based host adapters. + config SCSI_SRP tristate "SCSI RDMA Protocol helper library" depends on SCSI && PCI diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index 3ad61db5e3fa..53b1dac7e7d9 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -70,6 +70,7 @@ obj-$(CONFIG_SCSI_AIC79XX) += aic7xxx/ obj-$(CONFIG_SCSI_AACRAID) += aacraid/ obj-$(CONFIG_SCSI_AIC7XXX_OLD) += aic7xxx_old.o obj-$(CONFIG_SCSI_AIC94XX) += aic94xx/ +obj-$(CONFIG_SCSI_PM8001) += pm8001/ obj-$(CONFIG_SCSI_IPS) += ips.o obj-$(CONFIG_SCSI_FD_MCS) += fd_mcs.o obj-$(CONFIG_SCSI_FUTURE_DOMAIN)+= fdomain.o diff --git a/drivers/scsi/pm8001/Makefile b/drivers/scsi/pm8001/Makefile new file mode 100644 index 000000000000..52f04296171c --- /dev/null +++ b/drivers/scsi/pm8001/Makefile @@ -0,0 +1,12 @@ +# +# Kernel configuration file for the PM8001 SAS/SATA 8x6G based HBA driver +# +# Copyright (C) 2008-2009 USI Co., Ltd. + + +obj-$(CONFIG_SCSI_PM8001) += pm8001.o +pm8001-y += pm8001_init.o \ + pm8001_sas.o \ + pm8001_ctl.o \ + pm8001_hwi.o + diff --git a/drivers/scsi/pm8001/pm8001_chips.h b/drivers/scsi/pm8001/pm8001_chips.h new file mode 100644 index 000000000000..4efa4d0950e5 --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_chips.h @@ -0,0 +1,89 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. 
+ * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. + * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ + +#ifndef _PM8001_CHIPS_H_ +#define _PM8001_CHIPS_H_ + +static inline u32 pm8001_read_32(void *virt_addr) +{ + return *((u32 *)virt_addr); +} + +static inline void pm8001_write_32(void *addr, u32 offset, u32 val) +{ + *((u32 *)(addr + offset)) = val; +} + +static inline u32 pm8001_cr32(struct pm8001_hba_info *pm8001_ha, u32 bar, + u32 offset) +{ + return readl(pm8001_ha->io_mem[bar].memvirtaddr + offset); +} + +static inline void pm8001_cw32(struct pm8001_hba_info *pm8001_ha, u32 bar, + u32 addr, u32 val) +{ + writel(val, pm8001_ha->io_mem[bar].memvirtaddr + addr); +} +static inline u32 pm8001_mr32(void __iomem *addr, u32 offset) +{ + return readl(addr + offset); +} +static inline void pm8001_mw32(void __iomem *addr, u32 offset, u32 val) +{ + writel(val, addr + offset); +} +static inline u32 get_pci_bar_index(u32 pcibar) +{ + switch (pcibar) { + case 0x18: + case 0x1C: + return 1; + case 0x20: + return 2; + case 0x24: + return 3; + default: + return 0; + } +} + +#endif /* _PM8001_CHIPS_H_ */ + diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c new file mode 100644 index 000000000000..14b13acae6dd --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_ctl.c @@ -0,0 +1,573 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. 
Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. + * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ +#include <linux/firmware.h> +#include "pm8001_sas.h" +#include "pm8001_ctl.h" + +/* scsi host attributes */ + +/** + * pm8001_ctl_mpi_interface_rev_show - MPI interface revision number + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute. + */ +static ssize_t pm8001_ctl_mpi_interface_rev_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + + return snprintf(buf, PAGE_SIZE, "%d\n", + pm8001_ha->main_cfg_tbl.interface_rev); +} +static +DEVICE_ATTR(interface_rev, S_IRUGO, pm8001_ctl_mpi_interface_rev_show, NULL); + +/** + * pm8001_ctl_fw_version_show - firmware version + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute. + */ +static ssize_t pm8001_ctl_fw_version_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + + return snprintf(buf, PAGE_SIZE, "%02x.%02x.%02x.%02x\n", + (u8)(pm8001_ha->main_cfg_tbl.firmware_rev >> 24), + (u8)(pm8001_ha->main_cfg_tbl.firmware_rev >> 16), + (u8)(pm8001_ha->main_cfg_tbl.firmware_rev >> 8), + (u8)(pm8001_ha->main_cfg_tbl.firmware_rev)); +} +static DEVICE_ATTR(fw_version, S_IRUGO, pm8001_ctl_fw_version_show, NULL); +/** + * pm8001_ctl_max_out_io_show - max outstanding io supported + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute.
+ */ +static ssize_t pm8001_ctl_max_out_io_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + + return snprintf(buf, PAGE_SIZE, "%d\n", + pm8001_ha->main_cfg_tbl.max_out_io); +} +static DEVICE_ATTR(max_out_io, S_IRUGO, pm8001_ctl_max_out_io_show, NULL); +/** + * pm8001_ctl_max_devices_show - max devices supported + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute. + */ +static ssize_t pm8001_ctl_max_devices_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + + return snprintf(buf, PAGE_SIZE, "%04d\n", + (u16)(pm8001_ha->main_cfg_tbl.max_sgl >> 16)); +} +static DEVICE_ATTR(max_devices, S_IRUGO, pm8001_ctl_max_devices_show, NULL); +/** + * pm8001_ctl_max_sg_list_show - max sg list supported; 0 means no + * hardware limitation + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute. + */ +static ssize_t pm8001_ctl_max_sg_list_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + + return snprintf(buf, PAGE_SIZE, "%04d\n", + pm8001_ha->main_cfg_tbl.max_sgl & 0x0000FFFF); +} +static DEVICE_ATTR(max_sg_list, S_IRUGO, pm8001_ctl_max_sg_list_show, NULL); + +#define SAS_1_0 0x1 +#define SAS_1_1 0x2 +#define SAS_2_0 0x4 + +static ssize_t +show_sas_spec_support_status(unsigned int mode, char *buf) +{ + ssize_t len = 0; + + if (mode & SAS_1_1) + len = sprintf(buf, "%s", "SAS1.1"); + if (mode & SAS_2_0) + len += sprintf(buf + len, "%s%s", len ? ", " : "", "SAS2.0"); + len += sprintf(buf + len, "\n"); + + return len; +} + +/** + * pm8001_ctl_sas_spec_support_show - sas spec supported + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute. + */ +static ssize_t pm8001_ctl_sas_spec_support_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + unsigned int mode; + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + mode = (pm8001_ha->main_cfg_tbl.ctrl_cap_flag & 0xfe000000)>>25; + return show_sas_spec_support_status(mode, buf); +} +static DEVICE_ATTR(sas_spec_support, S_IRUGO, + pm8001_ctl_sas_spec_support_show, NULL); + +/** + * pm8001_ctl_host_sas_address_show - sas address + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * This is the controller sas address + * + * A sysfs 'read-only' shost attribute.
+ */ +static ssize_t pm8001_ctl_host_sas_address_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + return snprintf(buf, PAGE_SIZE, "0x%016llx\n", + be64_to_cpu(*(__be64 *)pm8001_ha->sas_addr)); +} +static DEVICE_ATTR(host_sas_address, S_IRUGO, + pm8001_ctl_host_sas_address_show, NULL); + +/** + * pm8001_ctl_logging_level_show - logging level + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read/write' shost attribute. + */ +static ssize_t pm8001_ctl_logging_level_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + + return snprintf(buf, PAGE_SIZE, "%08xh\n", pm8001_ha->logging_level); +} +static ssize_t pm8001_ctl_logging_level_store(struct device *cdev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + int val = 0; + + if (sscanf(buf, "%x", &val) != 1) + return -EINVAL; + + pm8001_ha->logging_level = val; + return strlen(buf); +} + +static DEVICE_ATTR(logging_level, S_IRUGO | S_IWUSR, + pm8001_ctl_logging_level_show, pm8001_ctl_logging_level_store); +/** + * pm8001_ctl_aap_log_show - aap1 event log + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute. + */ +static ssize_t pm8001_ctl_aap_log_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + int i; +#define AAP1_MEMMAP(r, c) \ + (*(u32 *)((u8*)pm8001_ha->memoryMap.region[AAP1].virt_ptr + (r) * 32 \ + + (c))) + + char *str = buf; + int max = 2; + for (i = 0; i < max; i++) { + str += sprintf(str, "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" + "0x%08x 0x%08x\n", + AAP1_MEMMAP(i, 0), + AAP1_MEMMAP(i, 4), + AAP1_MEMMAP(i, 8), + AAP1_MEMMAP(i, 12), + AAP1_MEMMAP(i, 16), + AAP1_MEMMAP(i, 20), + AAP1_MEMMAP(i, 24), + AAP1_MEMMAP(i, 28)); + } + + return str - buf; +} +static DEVICE_ATTR(aap_log, S_IRUGO, pm8001_ctl_aap_log_show, NULL); +/** + * pm8001_ctl_iop_log_show - IOP event log + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * A sysfs 'read-only' shost attribute.
+ */ +static ssize_t pm8001_ctl_iop_log_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; +#define IOP_MEMMAP(r, c) \ + (*(u32 *)((u8*)pm8001_ha->memoryMap.region[IOP].virt_ptr + (r) * 32 \ + + (c))) + int i; + char *str = buf; + int max = 2; + for (i = 0; i < max; i++) { + str += sprintf(str, "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" + "0x%08x 0x%08x\n", + IOP_MEMMAP(i, 0), + IOP_MEMMAP(i, 4), + IOP_MEMMAP(i, 8), + IOP_MEMMAP(i, 12), + IOP_MEMMAP(i, 16), + IOP_MEMMAP(i, 20), + IOP_MEMMAP(i, 24), + IOP_MEMMAP(i, 28)); + } + + return str - buf; +} +static DEVICE_ATTR(iop_log, S_IRUGO, pm8001_ctl_iop_log_show, NULL); + +#define FLASH_CMD_NONE 0x00 +#define FLASH_CMD_UPDATE 0x01 +#define FLASH_CMD_SET_NVMD 0x02 + +struct flash_command { + u8 command[8]; + int code; +}; + +static struct flash_command flash_command_table[] = +{ + {"set_nvmd", FLASH_CMD_SET_NVMD}, + {"update", FLASH_CMD_UPDATE}, + {"", FLASH_CMD_NONE} /* Last entry should be NULL. */ +}; + +struct error_fw { + char *reason; + int err_code; +}; + +static struct error_fw flash_error_table[] = +{ + {"Failed to open fw image file", FAIL_OPEN_BIOS_FILE}, + {"image header mismatch", FLASH_UPDATE_HDR_ERR}, + {"image offset mismatch", FLASH_UPDATE_OFFSET_ERR}, + {"image CRC Error", FLASH_UPDATE_CRC_ERR}, + {"image length Error.", FLASH_UPDATE_LENGTH_ERR}, + {"Failed to program flash chip", FLASH_UPDATE_HW_ERR}, + {"Flash chip not supported.", FLASH_UPDATE_DNLD_NOT_SUPPORTED}, + {"Flash update disabled.", FLASH_UPDATE_DISABLED}, + {"Flash in progress", FLASH_IN_PROGRESS}, + {"Image file size Error", FAIL_FILE_SIZE}, + {"Input parameter error", FAIL_PARAMETERS}, + {"Out of memory", FAIL_OUT_MEMORY}, + {"OK", 0} /* Last entry err_code = 0. 
*/ +}; + +static int pm8001_set_nvmd(struct pm8001_hba_info *pm8001_ha) +{ + struct pm8001_ioctl_payload *payload; + DECLARE_COMPLETION_ONSTACK(completion); + u8 *ioctlbuffer = NULL; + u32 length = 0; + u32 ret = 0; + + length = 1024 * 5 + sizeof(*payload) - 1; + ioctlbuffer = kzalloc(length, GFP_KERNEL); + if (!ioctlbuffer) + return -ENOMEM; + if ((pm8001_ha->fw_image->size <= 0) || + (pm8001_ha->fw_image->size > 4096)) { + ret = FAIL_FILE_SIZE; + goto out; + } + payload = (struct pm8001_ioctl_payload *)ioctlbuffer; + memcpy((u8 *)payload->func_specific, (u8 *)pm8001_ha->fw_image->data, + pm8001_ha->fw_image->size); + payload->length = pm8001_ha->fw_image->size; + payload->id = 0; + pm8001_ha->nvmd_completion = &completion; + ret = PM8001_CHIP_DISP->set_nvmd_req(pm8001_ha, payload); + wait_for_completion(&completion); +out: + kfree(ioctlbuffer); + return ret; +} + +static int pm8001_update_flash(struct pm8001_hba_info *pm8001_ha) +{ + struct pm8001_ioctl_payload *payload; + DECLARE_COMPLETION_ONSTACK(completion); + u8 *ioctlbuffer = NULL; + u32 length = 0; + struct fw_control_info *fwControl; + u32 loopNumber, loopcount = 0; + u32 sizeRead = 0; + u32 partitionSize, partitionSizeTmp; + u32 ret = 0; + u32 partitionNumber = 0; + struct pm8001_fw_image_header *image_hdr; + + length = 1024 * 16 + sizeof(*payload) - 1; + ioctlbuffer = kzalloc(length, GFP_KERNEL); + image_hdr = (struct pm8001_fw_image_header *)pm8001_ha->fw_image->data; + if (!ioctlbuffer) + return -ENOMEM; + if (pm8001_ha->fw_image->size < 28) { + ret = FAIL_FILE_SIZE; + goto out; + } + + while (sizeRead < pm8001_ha->fw_image->size) { + partitionSizeTmp = + *(u32 *)((u8 *)&image_hdr->image_length + sizeRead); + partitionSize = be32_to_cpu(partitionSizeTmp); + loopcount = (partitionSize + HEADER_LEN)/IOCTL_BUF_SIZE; + if (loopcount % IOCTL_BUF_SIZE) + loopcount++; + if (loopcount == 0) + loopcount++; + for (loopNumber = 0; loopNumber < loopcount; loopNumber++) { + payload = (struct pm8001_ioctl_payload *)ioctlbuffer; + payload->length = 1024*16; + payload->id = 0; + fwControl = + (struct fw_control_info *)payload->func_specific; + fwControl->len = IOCTL_BUF_SIZE; /* IN */ + fwControl->size = partitionSize + HEADER_LEN;/* IN */ + fwControl->retcode = 0;/* OUT */ + fwControl->offset = loopNumber * IOCTL_BUF_SIZE;/*OUT */ + + /* for the last chunk of data in case file size is not even with + 4k, load only the rest*/ + if (((loopcount-loopNumber) == 1) && + ((partitionSize + HEADER_LEN) % IOCTL_BUF_SIZE)) { + fwControl->len = + (partitionSize + HEADER_LEN) % IOCTL_BUF_SIZE; + memcpy((u8 *)fwControl->buffer, + (u8 *)pm8001_ha->fw_image->data + sizeRead, + (partitionSize + HEADER_LEN) % IOCTL_BUF_SIZE); + sizeRead += + (partitionSize + HEADER_LEN) % IOCTL_BUF_SIZE; + } else { + memcpy((u8 *)fwControl->buffer, + (u8 *)pm8001_ha->fw_image->data + sizeRead, + IOCTL_BUF_SIZE); + sizeRead += IOCTL_BUF_SIZE; + } + + pm8001_ha->nvmd_completion = &completion; + ret = PM8001_CHIP_DISP->fw_flash_update_req(pm8001_ha, payload); + wait_for_completion(&completion); + if (ret || (fwControl->retcode > FLASH_UPDATE_IN_PROGRESS)) { + ret = fwControl->retcode; + kfree(ioctlbuffer); + ioctlbuffer = NULL; + break; + } + } + if (ret) + break; + partitionNumber++; +} +out: + kfree(ioctlbuffer); + return ret; +} +static ssize_t pm8001_store_update_fw(struct device *cdev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct 
pm8001_hba_info *pm8001_ha = sha->lldd_ha; + char *cmd_ptr, *filename_ptr; + int res, i; + int flash_command = FLASH_CMD_NONE; + int err = 0; + if (!capable(CAP_SYS_ADMIN)) + return -EACCES; + + cmd_ptr = kzalloc(count*2, GFP_KERNEL); + + if (!cmd_ptr) { + err = FAIL_OUT_MEMORY; + goto out; + } + + filename_ptr = cmd_ptr + count; + res = sscanf(buf, "%s %s", cmd_ptr, filename_ptr); + if (res != 2) { + err = FAIL_PARAMETERS; + goto out1; + } + + for (i = 0; flash_command_table[i].code != FLASH_CMD_NONE; i++) { + if (!memcmp(flash_command_table[i].command, + cmd_ptr, strlen(cmd_ptr))) { + flash_command = flash_command_table[i].code; + break; + } + } + if (flash_command == FLASH_CMD_NONE) { + err = FAIL_PARAMETERS; + goto out1; + } + + if (pm8001_ha->fw_status == FLASH_IN_PROGRESS) { + err = FLASH_IN_PROGRESS; + goto out1; + } + err = request_firmware(&pm8001_ha->fw_image, + filename_ptr, + pm8001_ha->dev); + + if (err) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Failed to load firmware image file %s," + " error %d\n", filename_ptr, err)); + err = FAIL_OPEN_BIOS_FILE; + goto out1; + } + + switch (flash_command) { + case FLASH_CMD_UPDATE: + pm8001_ha->fw_status = FLASH_IN_PROGRESS; + err = pm8001_update_flash(pm8001_ha); + break; + case FLASH_CMD_SET_NVMD: + pm8001_ha->fw_status = FLASH_IN_PROGRESS; + err = pm8001_set_nvmd(pm8001_ha); + break; + default: + pm8001_ha->fw_status = FAIL_PARAMETERS; + err = FAIL_PARAMETERS; + break; + } + release_firmware(pm8001_ha->fw_image); +out1: + kfree(cmd_ptr); +out: + pm8001_ha->fw_status = err; + + if (!err) + return count; + else + return -err; +} + +static ssize_t pm8001_show_update_fw(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + int i; + struct Scsi_Host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + + for (i = 0; flash_error_table[i].err_code != 0; i++) { + if (flash_error_table[i].err_code == pm8001_ha->fw_status) + break; + } + if (pm8001_ha->fw_status != FLASH_IN_PROGRESS) + pm8001_ha->fw_status = FLASH_OK; + + return snprintf(buf, PAGE_SIZE, "status=%x %s\n", + flash_error_table[i].err_code, + flash_error_table[i].reason); +} + +static DEVICE_ATTR(update_fw, S_IRUGO|S_IWUGO, + pm8001_show_update_fw, pm8001_store_update_fw); +struct device_attribute *pm8001_host_attrs[] = { + &dev_attr_interface_rev, + &dev_attr_fw_version, + &dev_attr_update_fw, + &dev_attr_aap_log, + &dev_attr_iop_log, + &dev_attr_max_out_io, + &dev_attr_max_devices, + &dev_attr_max_sg_list, + &dev_attr_sas_spec_support, + &dev_attr_logging_level, + &dev_attr_host_sas_address, + NULL, +}; + diff --git a/drivers/scsi/pm8001/pm8001_ctl.h b/drivers/scsi/pm8001/pm8001_ctl.h new file mode 100644 index 000000000000..22644de26399 --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_ctl.h @@ -0,0 +1,67 @@ + /* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. 
Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. + * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ + +#ifndef PM8001_CTL_H_INCLUDED +#define PM8001_CTL_H_INCLUDED + +#define IOCTL_BUF_SIZE 4096 +#define HEADER_LEN 28 +#define SIZE_OFFSET 16 + +struct pm8001_ioctl_payload { + u32 signature; + u16 major_function; + u16 minor_function; + u16 length; + u16 status; + u16 offset; + u16 id; + u8 func_specific[1]; +}; + +#define FLASH_OK 0x000000 +#define FAIL_OPEN_BIOS_FILE 0x000100 +#define FAIL_FILE_SIZE 0x000a00 +#define FAIL_PARAMETERS 0x000b00 +#define FAIL_OUT_MEMORY 0x000c00 +#define FLASH_IN_PROGRESS 0x001000 + +#endif /* PM8001_CTL_H_INCLUDED */ + diff --git a/drivers/scsi/pm8001/pm8001_defs.h b/drivers/scsi/pm8001/pm8001_defs.h new file mode 100644 index 000000000000..944afada61ee --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_defs.h @@ -0,0 +1,112 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. 
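pm8001_update_flash() above walks the firmware image one partition at a time, pushing each partition plus its 28-byte header to the controller in IOCTL_BUF_SIZE chunks and trimming the final chunk to the remainder. A self-contained sketch of just that offset/length arithmetic, reusing the IOCTL_BUF_SIZE and HEADER_LEN values from pm8001_ctl.h; plan_chunks() and its printf reporting are illustrative only, and the sketch rounds up on the byte remainder, which is what the copy logic in the driver's loop ultimately relies on:

#include <stdio.h>

#define IOCTL_BUF_SIZE 4096 /* from pm8001_ctl.h */
#define HEADER_LEN 28       /* from pm8001_ctl.h */

/* Hypothetical helper: print the chunk plan for one flash partition of
 * 'partition_size' payload bytes, mirroring pm8001_update_flash(). */
static void plan_chunks(unsigned int partition_size)
{
	unsigned int total = partition_size + HEADER_LEN;
	unsigned int loopcount = total / IOCTL_BUF_SIZE;
	unsigned int i, len;

	if (total % IOCTL_BUF_SIZE) /* partial tail chunk */
		loopcount++;
	if (loopcount == 0)         /* a tiny partition still takes one pass */
		loopcount++;

	for (i = 0; i < loopcount; i++) {
		/* the last chunk carries only the remainder, as in the driver */
		if (i == loopcount - 1 && total % IOCTL_BUF_SIZE)
			len = total % IOCTL_BUF_SIZE;
		else
			len = IOCTL_BUF_SIZE;
		printf("chunk %u: offset %u, length %u\n",
		       i, i * IOCTL_BUF_SIZE, len);
	}
}

int main(void)
{
	plan_chunks(10000); /* 10000 + 28 bytes -> chunks of 4096 + 4096 + 1836 */
	return 0;
}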
+ * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ + +#ifndef _PM8001_DEFS_H_ +#define _PM8001_DEFS_H_ + +enum chip_flavors { + chip_8001, +}; +#define USI_MAX_MEMCNT 9 +#define PM8001_MAX_DMA_SG SG_ALL +enum phy_speed { + PHY_SPEED_15 = 0x01, + PHY_SPEED_30 = 0x02, + PHY_SPEED_60 = 0x04, +}; + +enum data_direction { + DATA_DIR_NONE = 0x0, /* NO TRANSFER */ + DATA_DIR_IN = 0x01, /* INBOUND */ + DATA_DIR_OUT = 0x02, /* OUTBOUND */ + DATA_DIR_BYRECIPIENT = 0x04, /* UNSPECIFIED */ +}; + +enum port_type { + PORT_TYPE_SAS = (1L << 1), + PORT_TYPE_SATA = (1L << 0), +}; + +/* driver compile-time configuration */ +#define PM8001_MAX_CCB 512 /* max ccbs supported */ +#define PM8001_MAX_INB_NUM 1 +#define PM8001_MAX_OUTB_NUM 1 +#define PM8001_CAN_QUEUE 128 /* SCSI Queue depth */ + +/* unchangeable hardware details */ +#define PM8001_MAX_PHYS 8 /* max. possible phys */ +#define PM8001_MAX_PORTS 8 /* max. possible ports */ +#define PM8001_MAX_DEVICES 1024 /* max supported device */ + +enum memory_region_num { + AAP1 = 0x0, /* application acceleration processor */ + IOP, /* IO processor */ + CI, /* consumer index */ + PI, /* producer index */ + IB, /* inbound queue */ + OB, /* outbound queue */ + NVMD, /* NVM device */ + DEV_MEM, /* memory for devices */ + CCB_MEM, /* memory for command control block */ +}; +#define PM8001_EVENT_LOG_SIZE (128 * 1024) + +/*error code*/ +enum mpi_err { + MPI_IO_STATUS_SUCCESS = 0x0, + MPI_IO_STATUS_BUSY = 0x01, + MPI_IO_STATUS_FAIL = 0x02, +}; + +/** + * Phy Control constants + */ +enum phy_control_type { + PHY_LINK_RESET = 0x01, + PHY_HARD_RESET = 0x02, + PHY_NOTIFY_ENABLE_SPINUP = 0x10, +}; + +enum pm8001_hba_info_flags { + PM8001F_INIT_TIME = (1U << 0), + PM8001F_RUN_TIME = (1U << 1), +}; + +#endif diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c new file mode 100644 index 000000000000..aa5756fe0574 --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_hwi.c @@ -0,0 +1,4371 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. 
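The memory_region_num enum just above names the nine (USI_MAX_MEMCNT) DMA regions that the pm8001_hwi.c code which follows indexes as pm8001_ha->memoryMap.region[AAP1], region[IOP], and so on. The real descriptor lives in pm8001_sas.h, which this excerpt does not show; a hypothetical mirror of only the fields the hwi.c diff actually touches, using the kernel's u32 type:

/* Hypothetical mirror of the per-region descriptor implied by uses such as
 * memoryMap.region[AAP1].phys_addr_hi in pm8001_hwi.c; the authoritative
 * definition is in pm8001_sas.h and may carry additional fields. */
struct example_mpi_mem {
	void *virt_ptr;    /* CPU-visible mapping of the region */
	u32 phys_addr_hi;  /* upper 32 bits of the DMA address */
	u32 phys_addr_lo;  /* lower 32 bits of the DMA address */
	u32 total_len;     /* length handed to the SPC firmware */
};

struct example_mpi_mem_req {
	/* indexed by the memory_region_num values AAP1 .. CCB_MEM */
	struct example_mpi_mem region[USI_MAX_MEMCNT];
};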
Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. + * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ + #include "pm8001_sas.h" + #include "pm8001_hwi.h" + #include "pm8001_chips.h" + #include "pm8001_ctl.h" + +/** + * read_main_config_table - read the configure table and save it. + * @pm8001_ha: our hba card information + */ +static void __devinit read_main_config_table(struct pm8001_hba_info *pm8001_ha) +{ + void __iomem *address = pm8001_ha->main_cfg_tbl_addr; + pm8001_ha->main_cfg_tbl.signature = pm8001_mr32(address, 0x00); + pm8001_ha->main_cfg_tbl.interface_rev = pm8001_mr32(address, 0x04); + pm8001_ha->main_cfg_tbl.firmware_rev = pm8001_mr32(address, 0x08); + pm8001_ha->main_cfg_tbl.max_out_io = pm8001_mr32(address, 0x0C); + pm8001_ha->main_cfg_tbl.max_sgl = pm8001_mr32(address, 0x10); + pm8001_ha->main_cfg_tbl.ctrl_cap_flag = pm8001_mr32(address, 0x14); + pm8001_ha->main_cfg_tbl.gst_offset = pm8001_mr32(address, 0x18); + pm8001_ha->main_cfg_tbl.inbound_queue_offset = + pm8001_mr32(address, 0x1C); + pm8001_ha->main_cfg_tbl.outbound_queue_offset = + pm8001_mr32(address, 0x20); + pm8001_ha->main_cfg_tbl.hda_mode_flag = + pm8001_mr32(address, MAIN_HDA_FLAGS_OFFSET); + + /* read analog Setting offset from the configuration table */ + pm8001_ha->main_cfg_tbl.anolog_setup_table_offset = + pm8001_mr32(address, MAIN_ANALOG_SETUP_OFFSET); + + /* read Error Dump Offset and Length */ + pm8001_ha->main_cfg_tbl.fatal_err_dump_offset0 = + pm8001_mr32(address, MAIN_FATAL_ERROR_RDUMP0_OFFSET); + pm8001_ha->main_cfg_tbl.fatal_err_dump_length0 = + pm8001_mr32(address, MAIN_FATAL_ERROR_RDUMP0_LENGTH); + pm8001_ha->main_cfg_tbl.fatal_err_dump_offset1 = + pm8001_mr32(address, MAIN_FATAL_ERROR_RDUMP1_OFFSET); + pm8001_ha->main_cfg_tbl.fatal_err_dump_length1 = + pm8001_mr32(address, MAIN_FATAL_ERROR_RDUMP1_LENGTH); +} + +/** + * read_general_status_table - read the general status table and save it. 
+ * @pm8001_ha: our hba card information + */ +static void __devinit +read_general_status_table(struct pm8001_hba_info *pm8001_ha) +{ + void __iomem *address = pm8001_ha->general_stat_tbl_addr; + pm8001_ha->gs_tbl.gst_len_mpistate = pm8001_mr32(address, 0x00); + pm8001_ha->gs_tbl.iq_freeze_state0 = pm8001_mr32(address, 0x04); + pm8001_ha->gs_tbl.iq_freeze_state1 = pm8001_mr32(address, 0x08); + pm8001_ha->gs_tbl.msgu_tcnt = pm8001_mr32(address, 0x0C); + pm8001_ha->gs_tbl.iop_tcnt = pm8001_mr32(address, 0x10); + pm8001_ha->gs_tbl.reserved = pm8001_mr32(address, 0x14); + pm8001_ha->gs_tbl.phy_state[0] = pm8001_mr32(address, 0x18); + pm8001_ha->gs_tbl.phy_state[1] = pm8001_mr32(address, 0x1C); + pm8001_ha->gs_tbl.phy_state[2] = pm8001_mr32(address, 0x20); + pm8001_ha->gs_tbl.phy_state[3] = pm8001_mr32(address, 0x24); + pm8001_ha->gs_tbl.phy_state[4] = pm8001_mr32(address, 0x28); + pm8001_ha->gs_tbl.phy_state[5] = pm8001_mr32(address, 0x2C); + pm8001_ha->gs_tbl.phy_state[6] = pm8001_mr32(address, 0x30); + pm8001_ha->gs_tbl.phy_state[7] = pm8001_mr32(address, 0x34); + pm8001_ha->gs_tbl.reserved1 = pm8001_mr32(address, 0x38); + pm8001_ha->gs_tbl.reserved2 = pm8001_mr32(address, 0x3C); + pm8001_ha->gs_tbl.reserved3 = pm8001_mr32(address, 0x40); + pm8001_ha->gs_tbl.recover_err_info[0] = pm8001_mr32(address, 0x44); + pm8001_ha->gs_tbl.recover_err_info[1] = pm8001_mr32(address, 0x48); + pm8001_ha->gs_tbl.recover_err_info[2] = pm8001_mr32(address, 0x4C); + pm8001_ha->gs_tbl.recover_err_info[3] = pm8001_mr32(address, 0x50); + pm8001_ha->gs_tbl.recover_err_info[4] = pm8001_mr32(address, 0x54); + pm8001_ha->gs_tbl.recover_err_info[5] = pm8001_mr32(address, 0x58); + pm8001_ha->gs_tbl.recover_err_info[6] = pm8001_mr32(address, 0x5C); + pm8001_ha->gs_tbl.recover_err_info[7] = pm8001_mr32(address, 0x60); +} + +/** + * read_inbnd_queue_table - read the inbound queue table and save it. + * @pm8001_ha: our hba card information + */ +static void __devinit +read_inbnd_queue_table(struct pm8001_hba_info *pm8001_ha) +{ + int inbQ_num = 1; + int i; + void __iomem *address = pm8001_ha->inbnd_q_tbl_addr; + for (i = 0; i < inbQ_num; i++) { + u32 offset = i * 0x24; + pm8001_ha->inbnd_q_tbl[i].pi_pci_bar = + get_pci_bar_index(pm8001_mr32(address, (offset + 0x14))); + pm8001_ha->inbnd_q_tbl[i].pi_offset = + pm8001_mr32(address, (offset + 0x18)); + } +} + +/** + * read_outbnd_queue_table - read the outbound queue table and save it. + * @pm8001_ha: our hba card information + */ +static void __devinit +read_outbnd_queue_table(struct pm8001_hba_info *pm8001_ha) +{ + int outbQ_num = 1; + int i; + void __iomem *address = pm8001_ha->outbnd_q_tbl_addr; + for (i = 0; i < outbQ_num; i++) { + u32 offset = i * 0x24; + pm8001_ha->outbnd_q_tbl[i].ci_pci_bar = + get_pci_bar_index(pm8001_mr32(address, (offset + 0x14))); + pm8001_ha->outbnd_q_tbl[i].ci_offset = + pm8001_mr32(address, (offset + 0x18)); + } +} + +/** + * init_default_table_values - init the default table. 
+ * @pm8001_ha: our hba card information + */ +static void __devinit +init_default_table_values(struct pm8001_hba_info *pm8001_ha) +{ + int qn = 1; + int i; + u32 offsetib, offsetob; + void __iomem *addressib = pm8001_ha->inbnd_q_tbl_addr; + void __iomem *addressob = pm8001_ha->outbnd_q_tbl_addr; + + pm8001_ha->main_cfg_tbl.inbound_q_nppd_hppd = 0; + pm8001_ha->main_cfg_tbl.outbound_hw_event_pid0_3 = 0; + pm8001_ha->main_cfg_tbl.outbound_hw_event_pid4_7 = 0; + pm8001_ha->main_cfg_tbl.outbound_ncq_event_pid0_3 = 0; + pm8001_ha->main_cfg_tbl.outbound_ncq_event_pid4_7 = 0; + pm8001_ha->main_cfg_tbl.outbound_tgt_ITNexus_event_pid0_3 = 0; + pm8001_ha->main_cfg_tbl.outbound_tgt_ITNexus_event_pid4_7 = 0; + pm8001_ha->main_cfg_tbl.outbound_tgt_ssp_event_pid0_3 = 0; + pm8001_ha->main_cfg_tbl.outbound_tgt_ssp_event_pid4_7 = 0; + pm8001_ha->main_cfg_tbl.outbound_tgt_smp_event_pid0_3 = 0; + pm8001_ha->main_cfg_tbl.outbound_tgt_smp_event_pid4_7 = 0; + + pm8001_ha->main_cfg_tbl.upper_event_log_addr = + pm8001_ha->memoryMap.region[AAP1].phys_addr_hi; + pm8001_ha->main_cfg_tbl.lower_event_log_addr = + pm8001_ha->memoryMap.region[AAP1].phys_addr_lo; + pm8001_ha->main_cfg_tbl.event_log_size = PM8001_EVENT_LOG_SIZE; + pm8001_ha->main_cfg_tbl.event_log_option = 0x01; + pm8001_ha->main_cfg_tbl.upper_iop_event_log_addr = + pm8001_ha->memoryMap.region[IOP].phys_addr_hi; + pm8001_ha->main_cfg_tbl.lower_iop_event_log_addr = + pm8001_ha->memoryMap.region[IOP].phys_addr_lo; + pm8001_ha->main_cfg_tbl.iop_event_log_size = PM8001_EVENT_LOG_SIZE; + pm8001_ha->main_cfg_tbl.iop_event_log_option = 0x01; + pm8001_ha->main_cfg_tbl.fatal_err_interrupt = 0x01; + for (i = 0; i < qn; i++) { + pm8001_ha->inbnd_q_tbl[i].element_pri_size_cnt = + 0x00000100 | (0x00000040 << 16) | (0x00<<30); + pm8001_ha->inbnd_q_tbl[i].upper_base_addr = + pm8001_ha->memoryMap.region[IB].phys_addr_hi; + pm8001_ha->inbnd_q_tbl[i].lower_base_addr = + pm8001_ha->memoryMap.region[IB].phys_addr_lo; + pm8001_ha->inbnd_q_tbl[i].base_virt = + (u8 *)pm8001_ha->memoryMap.region[IB].virt_ptr; + pm8001_ha->inbnd_q_tbl[i].total_length = + pm8001_ha->memoryMap.region[IB].total_len; + pm8001_ha->inbnd_q_tbl[i].ci_upper_base_addr = + pm8001_ha->memoryMap.region[CI].phys_addr_hi; + pm8001_ha->inbnd_q_tbl[i].ci_lower_base_addr = + pm8001_ha->memoryMap.region[CI].phys_addr_lo; + pm8001_ha->inbnd_q_tbl[i].ci_virt = + pm8001_ha->memoryMap.region[CI].virt_ptr; + offsetib = i * 0x20; + pm8001_ha->inbnd_q_tbl[i].pi_pci_bar = + get_pci_bar_index(pm8001_mr32(addressib, + (offsetib + 0x14))); + pm8001_ha->inbnd_q_tbl[i].pi_offset = + pm8001_mr32(addressib, (offsetib + 0x18)); + pm8001_ha->inbnd_q_tbl[i].producer_idx = 0; + pm8001_ha->inbnd_q_tbl[i].consumer_index = 0; + } + for (i = 0; i < qn; i++) { + pm8001_ha->outbnd_q_tbl[i].element_size_cnt = + 256 | (64 << 16) | (1<<30); + pm8001_ha->outbnd_q_tbl[i].upper_base_addr = + pm8001_ha->memoryMap.region[OB].phys_addr_hi; + pm8001_ha->outbnd_q_tbl[i].lower_base_addr = + pm8001_ha->memoryMap.region[OB].phys_addr_lo; + pm8001_ha->outbnd_q_tbl[i].base_virt = + (u8 *)pm8001_ha->memoryMap.region[OB].virt_ptr; + pm8001_ha->outbnd_q_tbl[i].total_length = + pm8001_ha->memoryMap.region[OB].total_len; + pm8001_ha->outbnd_q_tbl[i].pi_upper_base_addr = + pm8001_ha->memoryMap.region[PI].phys_addr_hi; + pm8001_ha->outbnd_q_tbl[i].pi_lower_base_addr = + pm8001_ha->memoryMap.region[PI].phys_addr_lo; + pm8001_ha->outbnd_q_tbl[i].interrup_vec_cnt_delay = + 0 | (0 << 16) | (0 << 24); + pm8001_ha->outbnd_q_tbl[i].pi_virt = + 
pm8001_ha->memoryMap.region[PI].virt_ptr; + offsetob = i * 0x24; + pm8001_ha->outbnd_q_tbl[i].ci_pci_bar = + get_pci_bar_index(pm8001_mr32(addressob, + offsetob + 0x14)); + pm8001_ha->outbnd_q_tbl[i].ci_offset = + pm8001_mr32(addressob, (offsetob + 0x18)); + pm8001_ha->outbnd_q_tbl[i].consumer_idx = 0; + pm8001_ha->outbnd_q_tbl[i].producer_index = 0; + } +} + +/** + * update_main_config_table - update the main default table to the HBA. + * @pm8001_ha: our hba card information + */ +static void __devinit +update_main_config_table(struct pm8001_hba_info *pm8001_ha) +{ + void __iomem *address = pm8001_ha->main_cfg_tbl_addr; + pm8001_mw32(address, 0x24, + pm8001_ha->main_cfg_tbl.inbound_q_nppd_hppd); + pm8001_mw32(address, 0x28, + pm8001_ha->main_cfg_tbl.outbound_hw_event_pid0_3); + pm8001_mw32(address, 0x2C, + pm8001_ha->main_cfg_tbl.outbound_hw_event_pid4_7); + pm8001_mw32(address, 0x30, + pm8001_ha->main_cfg_tbl.outbound_ncq_event_pid0_3); + pm8001_mw32(address, 0x34, + pm8001_ha->main_cfg_tbl.outbound_ncq_event_pid4_7); + pm8001_mw32(address, 0x38, + pm8001_ha->main_cfg_tbl.outbound_tgt_ITNexus_event_pid0_3); + pm8001_mw32(address, 0x3C, + pm8001_ha->main_cfg_tbl.outbound_tgt_ITNexus_event_pid4_7); + pm8001_mw32(address, 0x40, + pm8001_ha->main_cfg_tbl.outbound_tgt_ssp_event_pid0_3); + pm8001_mw32(address, 0x44, + pm8001_ha->main_cfg_tbl.outbound_tgt_ssp_event_pid4_7); + pm8001_mw32(address, 0x48, + pm8001_ha->main_cfg_tbl.outbound_tgt_smp_event_pid0_3); + pm8001_mw32(address, 0x4C, + pm8001_ha->main_cfg_tbl.outbound_tgt_smp_event_pid4_7); + pm8001_mw32(address, 0x50, + pm8001_ha->main_cfg_tbl.upper_event_log_addr); + pm8001_mw32(address, 0x54, + pm8001_ha->main_cfg_tbl.lower_event_log_addr); + pm8001_mw32(address, 0x58, pm8001_ha->main_cfg_tbl.event_log_size); + pm8001_mw32(address, 0x5C, pm8001_ha->main_cfg_tbl.event_log_option); + pm8001_mw32(address, 0x60, + pm8001_ha->main_cfg_tbl.upper_iop_event_log_addr); + pm8001_mw32(address, 0x64, + pm8001_ha->main_cfg_tbl.lower_iop_event_log_addr); + pm8001_mw32(address, 0x68, pm8001_ha->main_cfg_tbl.iop_event_log_size); + pm8001_mw32(address, 0x6C, + pm8001_ha->main_cfg_tbl.iop_event_log_option); + pm8001_mw32(address, 0x70, + pm8001_ha->main_cfg_tbl.fatal_err_interrupt); +} + +/** + * update_inbnd_queue_table - update the inbound queue table to the HBA. + * @pm8001_ha: our hba card information + */ +static void __devinit +update_inbnd_queue_table(struct pm8001_hba_info *pm8001_ha, int number) +{ + void __iomem *address = pm8001_ha->inbnd_q_tbl_addr; + u16 offset = number * 0x20; + pm8001_mw32(address, offset + 0x00, + pm8001_ha->inbnd_q_tbl[number].element_pri_size_cnt); + pm8001_mw32(address, offset + 0x04, + pm8001_ha->inbnd_q_tbl[number].upper_base_addr); + pm8001_mw32(address, offset + 0x08, + pm8001_ha->inbnd_q_tbl[number].lower_base_addr); + pm8001_mw32(address, offset + 0x0C, + pm8001_ha->inbnd_q_tbl[number].ci_upper_base_addr); + pm8001_mw32(address, offset + 0x10, + pm8001_ha->inbnd_q_tbl[number].ci_lower_base_addr); +} + +/** + * update_outbnd_queue_table - update the outbound queue table to the HBA. 
+ * @pm8001_ha: our hba card information + * @number: the outbound queue number + */ +static void __devinit +update_outbnd_queue_table(struct pm8001_hba_info *pm8001_ha, int number) +{ + void __iomem *address = pm8001_ha->outbnd_q_tbl_addr; + u16 offset = number * 0x24; + pm8001_mw32(address, offset + 0x00, + pm8001_ha->outbnd_q_tbl[number].element_size_cnt); + pm8001_mw32(address, offset + 0x04, + pm8001_ha->outbnd_q_tbl[number].upper_base_addr); + pm8001_mw32(address, offset + 0x08, + pm8001_ha->outbnd_q_tbl[number].lower_base_addr); + pm8001_mw32(address, offset + 0x0C, + pm8001_ha->outbnd_q_tbl[number].pi_upper_base_addr); + pm8001_mw32(address, offset + 0x10, + pm8001_ha->outbnd_q_tbl[number].pi_lower_base_addr); + pm8001_mw32(address, offset + 0x1C, + pm8001_ha->outbnd_q_tbl[number].interrup_vec_cnt_delay); +} + +/** + * bar4_shift - function is called to shift the BAR base address + * @pm8001_ha: our hba card information + * @shiftValue: shifting value in memory bar. + */ +static u32 bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shiftValue) +{ + u32 regVal; + u32 max_wait_count; + + /* program the inbound AXI translation Lower Address */ + pm8001_cw32(pm8001_ha, 1, SPC_IBW_AXI_TRANSLATION_LOW, shiftValue); + + /* confirm the setting is written */ + max_wait_count = 1 * 1000 * 1000; /* 1 sec */ + do { + udelay(1); + regVal = pm8001_cr32(pm8001_ha, 1, SPC_IBW_AXI_TRANSLATION_LOW); + } while ((regVal != shiftValue) && (--max_wait_count)); + + if (!max_wait_count) { + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("TIMEOUT:SPC_IBW_AXI_TRANSLATION_LOW" + " = 0x%x\n", regVal)); + return -1; + } + return 0; +} + +/** + * mpi_set_phys_g3_with_ssc + * @pm8001_ha: our hba card information + * @SSCbit: set SSCbit to 0 to disable all phys ssc; 1 to enable all phys ssc. + */ +static void __devinit +mpi_set_phys_g3_with_ssc(struct pm8001_hba_info *pm8001_ha, u32 SSCbit) +{ + u32 offset; + u32 value; + u32 i; + +#define SAS2_SETTINGS_LOCAL_PHY_0_3_SHIFT_ADDR 0x00030000 +#define SAS2_SETTINGS_LOCAL_PHY_4_7_SHIFT_ADDR 0x00040000 +#define SAS2_SETTINGS_LOCAL_PHY_0_3_OFFSET 0x1074 +#define SAS2_SETTINGS_LOCAL_PHY_4_7_OFFSET 0x1074 +#define PHY_SSC_BIT_SHIFT 13 + + /* + * Using shifted destination address 0x3_0000:0x1074 + 0x4000*N (N=0:3) + * Using shifted destination address 0x4_0000:0x1074 + 0x4000*(N-4) (N=4:7) + */ + if (-1 == bar4_shift(pm8001_ha, SAS2_SETTINGS_LOCAL_PHY_0_3_SHIFT_ADDR)) + return; + /* set SSC bit of PHY 0 - 3 */ + for (i = 0; i < 4; i++) { + offset = SAS2_SETTINGS_LOCAL_PHY_0_3_OFFSET + 0x4000 * i; + value = pm8001_cr32(pm8001_ha, 2, offset); + if (SSCbit) + value = value | (0x00000001 << PHY_SSC_BIT_SHIFT); + else + value = value & (~(0x00000001 << PHY_SSC_BIT_SHIFT)); + pm8001_cw32(pm8001_ha, 2, offset, value); + } + + /* shift the bar for SAS2_SETTINGS_LOCAL_PHY 4 - 7 */ + if (-1 == bar4_shift(pm8001_ha, SAS2_SETTINGS_LOCAL_PHY_4_7_SHIFT_ADDR)) + return; + /* set SSC bit of PHY 4 - 7 */ + for (i = 4; i < 8; i++) { + offset = SAS2_SETTINGS_LOCAL_PHY_4_7_OFFSET + 0x4000 * (i - 4); + value = pm8001_cr32(pm8001_ha, 2, offset); + if (SSCbit) + value = value | (0x00000001 << PHY_SSC_BIT_SHIFT); + else + value = value & (~(0x00000001 << PHY_SSC_BIT_SHIFT)); + pm8001_cw32(pm8001_ha, 2, offset, value); + } + /* set the shifted destination address to 0x0 to avoid error operation */ + bar4_shift(pm8001_ha, 0x0); +} + +/** + * mpi_set_open_retry_interval_reg + * @pm8001_ha: our hba card information + * @interval: interval time for each OPEN_REJECT (RETRY). The unit is 100us. + */ +static void __devinit +mpi_set_open_retry_interval_reg(struct pm8001_hba_info *pm8001_ha, + u32 interval) +{ + u32 offset; + u32 value; + u32 i; + +#define OPEN_RETRY_INTERVAL_PHY_0_3_SHIFT_ADDR 0x00030000 +#define OPEN_RETRY_INTERVAL_PHY_4_7_SHIFT_ADDR 0x00040000 +#define OPEN_RETRY_INTERVAL_PHY_0_3_OFFSET 0x30B4 +#define OPEN_RETRY_INTERVAL_PHY_4_7_OFFSET 0x30B4 +#define OPEN_RETRY_INTERVAL_REG_MASK 0x0000FFFF + + value = interval & OPEN_RETRY_INTERVAL_REG_MASK; + /* set the OPEN_REJECT(RETRY) interval time of PHY 0 - 3 */ + if (-1 == bar4_shift(pm8001_ha, OPEN_RETRY_INTERVAL_PHY_0_3_SHIFT_ADDR)) + return; + for (i = 0; i < 4; i++) { + offset = OPEN_RETRY_INTERVAL_PHY_0_3_OFFSET + 0x4000 * i; + pm8001_cw32(pm8001_ha, 2, offset, value); + } + /* set the OPEN_REJECT(RETRY) interval time of PHY 4 - 7 */ + if (-1 == bar4_shift(pm8001_ha, OPEN_RETRY_INTERVAL_PHY_4_7_SHIFT_ADDR)) + return; + for (i = 4; i < 8; i++) { + offset = OPEN_RETRY_INTERVAL_PHY_4_7_OFFSET + 0x4000 * (i - 4); + pm8001_cw32(pm8001_ha, 2, offset, value); + } + /* set the shifted destination address to 0x0 to avoid error operation */ + bar4_shift(pm8001_ha, 0x0); +} + +/** + * mpi_init_check - check the firmware initialization status. + * @pm8001_ha: our hba card information + */ +static int mpi_init_check(struct pm8001_hba_info *pm8001_ha) +{ + u32 max_wait_count; + u32 value; + u32 gst_len_mpistate; + /* Write bit0=1 to Inbound DoorBell Register to tell the SPC FW the + table is updated */ + pm8001_cw32(pm8001_ha, 0, MSGU_IBDB_SET, SPC_MSGU_CFG_TABLE_UPDATE); + /* wait until Inbound DoorBell Clear Register toggled */ + max_wait_count = 1 * 1000 * 1000;/* 1 sec */ + do { + udelay(1); + value = pm8001_cr32(pm8001_ha, 0, MSGU_IBDB_SET); + value &= SPC_MSGU_CFG_TABLE_UPDATE; + } while ((value != 0) && (--max_wait_count)); + + if (!max_wait_count) + return -1; + /* check the MPI-State for initialization */ + gst_len_mpistate = + pm8001_mr32(pm8001_ha->general_stat_tbl_addr, + GST_GSTLEN_MPIS_OFFSET); + if (GST_MPI_STATE_INIT != (gst_len_mpistate & GST_MPI_STATE_MASK)) + return -1; + /* check MPI Initialization error */ + gst_len_mpistate = gst_len_mpistate >> 16; + if (0x0000 != gst_len_mpistate) + return -1; + return 0; +} + +/** + * check_fw_ready - The LLDD checks if the FW is ready; if not, return an error.
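+ * + * Scratch pad 1 publishes the AAP1 state and scratch pad 2 the IOP state: + * the low state bits must reach the RDY pattern and the remaining state + * bits must stay clear, otherwise the firmware is stuck in an error state.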
+ * @pm8001_ha: our hba card information + */ +static int check_fw_ready(struct pm8001_hba_info *pm8001_ha) +{ + u32 value, value1; + u32 max_wait_count; + /* check error state */ + value = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1); + value1 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2); + /* check AAP error */ + if (SCRATCH_PAD1_ERR == (value & SCRATCH_PAD_STATE_MASK)) { + /* error state */ + value = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0); + return -1; + } + + /* check IOP error */ + if (SCRATCH_PAD2_ERR == (value1 & SCRATCH_PAD_STATE_MASK)) { + /* error state */ + value1 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_3); + return -1; + } + + /* bit 4-31 of scratch pad1 should be zeros if it is not + in error state*/ + if (value & SCRATCH_PAD1_STATE_MASK) { + /* error case */ + pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0); + return -1; + } + + /* bit 2, 4-31 of scratch pad2 should be zeros if it is not + in error state */ + if (value1 & SCRATCH_PAD2_STATE_MASK) { + /* error case */ + return -1; + } + + max_wait_count = 1 * 1000 * 1000;/* 1 sec timeout */ + + /* wait until scratch pad 1 and 2 registers in ready state */ + do { + udelay(1); + value = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1) + & SCRATCH_PAD1_RDY; + value1 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2) + & SCRATCH_PAD2_RDY; + if ((--max_wait_count) == 0) + return -1; + } while ((value != SCRATCH_PAD1_RDY) || (value1 != SCRATCH_PAD2_RDY)); + return 0; +} + +static void init_pci_device_addresses(struct pm8001_hba_info *pm8001_ha) +{ + void __iomem *base_addr; + u32 value; + u32 offset; + u32 pcibar; + u32 pcilogic; + + value = pm8001_cr32(pm8001_ha, 0, 0x44); + offset = value & 0x03FFFFFF; + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("Scratchpad 0 Offset: %x \n", offset)); + pcilogic = (value & 0xFC000000) >> 26; + pcibar = get_pci_bar_index(pcilogic); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("Scratchpad 0 PCI BAR: %d \n", pcibar)); + pm8001_ha->main_cfg_tbl_addr = base_addr = + pm8001_ha->io_mem[pcibar].memvirtaddr + offset; + pm8001_ha->general_stat_tbl_addr = + base_addr + pm8001_cr32(pm8001_ha, pcibar, offset + 0x18); + pm8001_ha->inbnd_q_tbl_addr = + base_addr + pm8001_cr32(pm8001_ha, pcibar, offset + 0x1C); + pm8001_ha->outbnd_q_tbl_addr = + base_addr + pm8001_cr32(pm8001_ha, pcibar, offset + 0x20); +} + +/** + * pm8001_chip_init - the main init function that initialize whole PM8001 chip. 
+ * @pm8001_ha: our hba card information + */ +static int __devinit pm8001_chip_init(struct pm8001_hba_info *pm8001_ha) +{ + /* check the firmware status */ + if (-1 == check_fw_ready(pm8001_ha)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Firmware is not ready!\n")); + return -EBUSY; + } + + /* Initialize pci space address eg: mpi offset */ + init_pci_device_addresses(pm8001_ha); + init_default_table_values(pm8001_ha); + read_main_config_table(pm8001_ha); + read_general_status_table(pm8001_ha); + read_inbnd_queue_table(pm8001_ha); + read_outbnd_queue_table(pm8001_ha); + /* update main config table, inbound table and outbound table */ + update_main_config_table(pm8001_ha); + update_inbnd_queue_table(pm8001_ha, 0); + update_outbnd_queue_table(pm8001_ha, 0); + mpi_set_phys_g3_with_ssc(pm8001_ha, 0); + mpi_set_open_retry_interval_reg(pm8001_ha, 7); + /* notify firmware update finished and check initialization status */ + if (0 == mpi_init_check(pm8001_ha)) { + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("MPI initialize successful!\n")); + } else + return -EBUSY; + /*This register is a 16-bit timer with a resolution of 1us. This is the + timer used for interrupt delay/coalescing in the PCIe Application Layer. + Zero is not a valid value. A value of 1 in the register will cause the + interrupts to be normal. A value greater than 1 will cause coalescing + delays.*/ + pm8001_cw32(pm8001_ha, 1, 0x0033c0, 0x1); + pm8001_cw32(pm8001_ha, 1, 0x0033c4, 0x0); + return 0; +} + +static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha) +{ + u32 max_wait_count; + u32 value; + u32 gst_len_mpistate; + init_pci_device_addresses(pm8001_ha); + /* Write bit1=1 to Inbound DoorBell Register to tell the SPC FW the + table is stopped */ + pm8001_cw32(pm8001_ha, 0, MSGU_IBDB_SET, SPC_MSGU_CFG_TABLE_RESET); + + /* wait until Inbound DoorBell Clear Register toggled */ + max_wait_count = 1 * 1000 * 1000;/* 1 sec */ + do { + udelay(1); + value = pm8001_cr32(pm8001_ha, 0, MSGU_IBDB_SET); + value &= SPC_MSGU_CFG_TABLE_RESET; + } while ((value != 0) && (--max_wait_count)); + + if (!max_wait_count) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("TIMEOUT: IBDB value = 0x%x\n", value)); + return -1; + } + + /* check the MPI-State for termination in progress */ + /* wait until Inbound DoorBell Clear Register toggled */ + max_wait_count = 1 * 1000 * 1000; /* 1 sec */ + do { + udelay(1); + gst_len_mpistate = + pm8001_mr32(pm8001_ha->general_stat_tbl_addr, + GST_GSTLEN_MPIS_OFFSET); + if (GST_MPI_STATE_UNINIT == + (gst_len_mpistate & GST_MPI_STATE_MASK)) + break; + } while (--max_wait_count); + if (!max_wait_count) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("TIMEOUT: MPI State = 0x%x\n", + gst_len_mpistate & GST_MPI_STATE_MASK)); + return -1; + } + return 0; +} + +/** + * soft_reset_ready_check - Function to check if the FW is ready for soft reset. + * @pm8001_ha: our hba card information + */ +static u32 soft_reset_ready_check(struct pm8001_hba_info *pm8001_ha) +{ + u32 regVal, regVal1, regVal2; + if (mpi_uninit_check(pm8001_ha) != 0) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("MPI state is not ready\n")); + return -1; + } + /* read the scratch pad 2 register bit 2 */ + regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2) + & SCRATCH_PAD2_FWRDY_RST; + if (regVal == SCRATCH_PAD2_FWRDY_RST) { + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("Firmware is ready for reset.\n")); + } else { + /* Trigger NMI twice via RB6 */ + if (-1 == bar4_shift(pm8001_ha, RB6_ACCESS_REG)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Shift Bar4 to 0x%x failed\n", + RB6_ACCESS_REG)); + return -1; + } + pm8001_cw32(pm8001_ha, 2, SPC_RB6_OFFSET, + RB6_MAGIC_NUMBER_RST); + pm8001_cw32(pm8001_ha, 2, SPC_RB6_OFFSET, RB6_MAGIC_NUMBER_RST); + /* wait for 100 ms */ + mdelay(100); + regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2) & + SCRATCH_PAD2_FWRDY_RST; + if (regVal != SCRATCH_PAD2_FWRDY_RST) { + regVal1 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1); + regVal2 = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("TIMEOUT:MSGU_SCRATCH_PAD1" + "=0x%x, MSGU_SCRATCH_PAD2=0x%x\n", + regVal1, regVal2)); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("SCRATCH_PAD0 value = 0x%x\n", + pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0))); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("SCRATCH_PAD3 value = 0x%x\n", + pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_3))); + return -1; + } + } + return 0; +} + +/** + * pm8001_chip_soft_rst - soft reset the PM8001 chip, so that all FW register + * state is restored to its original state. + * @pm8001_ha: our hba card information + * @signature: signature in host scratch pad0 register.
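+ * + * The FW watches host scratch pad 0: writing SPC_SOFT_RESET_SIGNATURE selects + * the normal-mode reset path (steps 15 - 17 below wait for the scratch pad 1 + * toggle and for the FW to become ready again), while an HDA-mode signature + * skips that wait.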
+ */ +static int +pm8001_chip_soft_rst(struct pm8001_hba_info *pm8001_ha, u32 signature) +{ + u32 regVal, toggleVal; + u32 max_wait_count; + u32 regVal1, regVal2, regVal3; + + /* step1: Check FW is ready for soft reset */ + if (soft_reset_ready_check(pm8001_ha) != 0) { + PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("FW is not ready\n")); + return -1; + } + + /* step 2: clear NMI status register on AAP1 and IOP, write the same + value to clear */ + /* map 0x60000 to BAR4(0x20), BAR2(win) */ + if (-1 == bar4_shift(pm8001_ha, MBIC_AAP1_ADDR_BASE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Shift Bar4 to 0x%x failed\n", + MBIC_AAP1_ADDR_BASE)); + return -1; + } + regVal = pm8001_cr32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_IOP); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("MBIC - NMI Enable VPE0 (IOP)= 0x%x\n", regVal)); + pm8001_cw32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_IOP, 0x0); + /* map 0x70000 to BAR4(0x20), BAR2(win) */ + if (-1 == bar4_shift(pm8001_ha, MBIC_IOP_ADDR_BASE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Shift Bar4 to 0x%x failed\n", + MBIC_IOP_ADDR_BASE)); + return -1; + } + regVal = pm8001_cr32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_AAP1); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("MBIC - NMI Enable VPE0 (AAP1)= 0x%x\n", regVal)); + pm8001_cw32(pm8001_ha, 2, MBIC_NMI_ENABLE_VPE0_AAP1, 0x0); + + regVal = pm8001_cr32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT_ENABLE); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("PCIE -Event Interrupt Enable = 0x%x\n", regVal)); + pm8001_cw32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT_ENABLE, 0x0); + + regVal = pm8001_cr32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("PCIE - Event Interrupt = 0x%x\n", regVal)); + pm8001_cw32(pm8001_ha, 1, PCIE_EVENT_INTERRUPT, regVal); + + regVal = pm8001_cr32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT_ENABLE); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("PCIE -Error Interrupt Enable = 0x%x\n", regVal)); + pm8001_cw32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT_ENABLE, 0x0); + + regVal = pm8001_cr32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("PCIE - Error Interrupt = 0x%x\n", regVal)); + pm8001_cw32(pm8001_ha, 1, PCIE_ERROR_INTERRUPT, regVal); + + /* read the scratch pad 1 register bit 2 */ + regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1) + & SCRATCH_PAD1_RST; + toggleVal = regVal ^ SCRATCH_PAD1_RST; + + /* set signature in host scratch pad0 register to tell SPC that the + host performs the soft reset */ + pm8001_cw32(pm8001_ha, 0, MSGU_HOST_SCRATCH_PAD_0, signature); + + /* read required registers for confirmming */ + /* map 0x0700000 to BAR4(0x20), BAR2(win) */ + if (-1 == bar4_shift(pm8001_ha, GSM_ADDR_BASE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Shift Bar4 to 0x%x failed\n", + GSM_ADDR_BASE)); + return -1; + } + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x0(0x00007b88)-GSM Configuration and" + " Reset = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET))); + + /* step 3: host read GSM Configuration and Reset register */ + regVal = pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET); + /* Put those bits to low */ + /* GSM XCBI offset = 0x70 0000 + 0x00 Bit 13 COM_SLV_SW_RSTB 1 + 0x00 Bit 12 QSSP_SW_RSTB 1 + 0x00 Bit 11 RAAE_SW_RSTB 1 + 0x00 Bit 9 RB_1_SW_RSTB 1 + 0x00 Bit 8 SM_SW_RSTB 1 + */ + regVal &= ~(0x00003b00); + /* host write GSM Configuration and Reset register */ + pm8001_cw32(pm8001_ha, 2, GSM_CONFIG_RESET, regVal); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x0 (0x00007b88 ==> 0x00004088) - GSM " + "Configuration and Reset is 
set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET))); + + /* step 4: */ + /* disable GSM - Read Address Parity Check */ + regVal1 = pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700038 - Read Address Parity Check " + "Enable = 0x%x\n", regVal1)); + pm8001_cw32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK, 0x0); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700038 - Read Address Parity Check Enable" + "is set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK))); + + /* disable GSM - Write Address Parity Check */ + regVal2 = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700040 - Write Address Parity Check" + " Enable = 0x%x\n", regVal2)); + pm8001_cw32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK, 0x0); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700040 - Write Address Parity Check " + "Enable is set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK))); + + /* disable GSM - Write Data Parity Check */ + regVal3 = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x300048 - Write Data Parity Check" + " Enable = 0x%x\n", regVal3)); + pm8001_cw32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK, 0x0); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x300048 - Write Data Parity Check Enable" + "is set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK))); + + /* step 5: delay 10 usec */ + udelay(10); + /* step 5-b: set GPIO-0 output control to tristate anyway */ + if (-1 == bar4_shift(pm8001_ha, GPIO_ADDR_BASE)) { + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("Shift Bar4 to 0x%x failed\n", + GPIO_ADDR_BASE)); + return -1; + } + regVal = pm8001_cr32(pm8001_ha, 2, GPIO_GPIO_0_0UTPUT_CTL_OFFSET); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GPIO Output Control Register:" + " = 0x%x\n", regVal)); + /* set GPIO-0 output control to tri-state */ + regVal &= 0xFFFFFFFC; + pm8001_cw32(pm8001_ha, 2, GPIO_GPIO_0_0UTPUT_CTL_OFFSET, regVal); + + /* Step 6: Reset the IOP and AAP1 */ + /* map 0x00000 to BAR4(0x20), BAR2(win) */ + if (-1 == bar4_shift(pm8001_ha, SPC_TOP_LEVEL_ADDR_BASE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("SPC Shift Bar4 to 0x%x failed\n", + SPC_TOP_LEVEL_ADDR_BASE)); + return -1; + } + regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("Top Register before resetting IOP/AAP1" + ":= 0x%x\n", regVal)); + regVal &= ~(SPC_REG_RESET_PCS_IOP_SS | SPC_REG_RESET_PCS_AAP1_SS); + pm8001_cw32(pm8001_ha, 2, SPC_REG_RESET, regVal); + + /* step 7: Reset the BDMA/OSSP */ + regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("Top Register before resetting BDMA/OSSP" + ": = 0x%x\n", regVal)); + regVal &= ~(SPC_REG_RESET_BDMA_CORE | SPC_REG_RESET_OSSP); + pm8001_cw32(pm8001_ha, 2, SPC_REG_RESET, regVal); + + /* step 8: delay 10 usec */ + udelay(10); + + /* step 9: bring the BDMA and OSSP out of reset */ + regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("Top Register before bringing up BDMA/OSSP" + ":= 0x%x\n", regVal)); + regVal |= (SPC_REG_RESET_BDMA_CORE | SPC_REG_RESET_OSSP); + pm8001_cw32(pm8001_ha, 2, SPC_REG_RESET, regVal); + + /* step 10: delay 10 usec */ + udelay(10); + + /* step 11: reads and sets the GSM Configuration and Reset Register */ + /* map 0x0700000 to BAR4(0x20), BAR2(win) */ + if (-1 == 
bar4_shift(pm8001_ha, GSM_ADDR_BASE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("SPC Shift Bar4 to 0x%x failed\n", + GSM_ADDR_BASE)); + return -1; + } + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x0 (0x00007b88)-GSM Configuration and " + "Reset = 0x%x\n", pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET))); + regVal = pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET); + /* Put those bits to high */ + /* GSM XCBI offset = 0x70 0000 + 0x00 Bit 13 COM_SLV_SW_RSTB 1 + 0x00 Bit 12 QSSP_SW_RSTB 1 + 0x00 Bit 11 RAAE_SW_RSTB 1 + 0x00 Bit 9 RB_1_SW_RSTB 1 + 0x00 Bit 8 SM_SW_RSTB 1 + */ + regVal |= (GSM_CONFIG_RESET_VALUE); + pm8001_cw32(pm8001_ha, 2, GSM_CONFIG_RESET, regVal); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM (0x00004088 ==> 0x00007b88) - GSM" + " Configuration and Reset is set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_CONFIG_RESET))); + + /* step 12: Restore GSM - Read Address Parity Check */ + regVal = pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK); + /* just for debugging */ + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700038 - Read Address Parity Check Enable" + " = 0x%x\n", regVal)); + pm8001_cw32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK, regVal1); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700038 - Read Address Parity" + " Check Enable is set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_READ_ADDR_PARITY_CHECK))); + /* Restore GSM - Write Address Parity Check */ + regVal = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK); + pm8001_cw32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK, regVal2); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700040 - Write Address Parity Check" + " Enable is set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_WRITE_ADDR_PARITY_CHECK))); + /* Restore GSM - Write Data Parity Check */ + regVal = pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK); + pm8001_cw32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK, regVal3); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("GSM 0x700048 - Write Data Parity Check Enable" + "is set to = 0x%x\n", + pm8001_cr32(pm8001_ha, 2, GSM_WRITE_DATA_PARITY_CHECK))); + + /* step 13: bring the IOP and AAP1 out of reset */ + /* map 0x00000 to BAR4(0x20), BAR2(win) */ + if (-1 == bar4_shift(pm8001_ha, SPC_TOP_LEVEL_ADDR_BASE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Shift Bar4 to 0x%x failed\n", + SPC_TOP_LEVEL_ADDR_BASE)); + return -1; + } + regVal = pm8001_cr32(pm8001_ha, 2, SPC_REG_RESET); + regVal |= (SPC_REG_RESET_PCS_IOP_SS | SPC_REG_RESET_PCS_AAP1_SS); + pm8001_cw32(pm8001_ha, 2, SPC_REG_RESET, regVal); + + /* step 14: delay 10 usec - Normal Mode */ + udelay(10); + /* check Soft Reset Normal mode or Soft Reset HDA mode */ + if (signature == SPC_SOFT_RESET_SIGNATURE) { + /* step 15 (Normal Mode): wait until scratch pad1 register + bit 2 toggled */ + max_wait_count = 2 * 1000 * 1000;/* 2 sec */ + do { + udelay(1); + regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1) & + SCRATCH_PAD1_RST; + } while ((regVal != toggleVal) && (--max_wait_count)); + + if (!max_wait_count) { + regVal = pm8001_cr32(pm8001_ha, 0, + MSGU_SCRATCH_PAD_1); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("TIMEOUT : ToggleVal 0x%x," + "MSGU_SCRATCH_PAD1 = 0x%x\n", + toggleVal, regVal)); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("SCRATCH_PAD0 value = 0x%x\n", + pm8001_cr32(pm8001_ha, 0, + MSGU_SCRATCH_PAD_0))); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("SCRATCH_PAD2 value = 0x%x\n", + pm8001_cr32(pm8001_ha, 0, + MSGU_SCRATCH_PAD_2))); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("SCRATCH_PAD3 value = 0x%x\n", 
+ pm8001_cr32(pm8001_ha, 0, + MSGU_SCRATCH_PAD_3))); + return -1; + } + + /* step 16 (Normal) - Clear ODMR and ODCR */ + pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL); + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL); + + /* step 17 (Normal Mode): wait for the FW and IOP to get + ready - 1 sec timeout */ + /* Wait for the SPC Configuration Table to be ready */ + if (check_fw_ready(pm8001_ha) == -1) { + regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_1); + /* return error if MPI Configuration Table not ready */ + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("FW not ready SCRATCH_PAD1" + " = 0x%x\n", regVal)); + regVal = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_2); + /* return error if MPI Configuration Table not ready */ + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("FW not ready SCRATCH_PAD2" + " = 0x%x\n", regVal)); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("SCRATCH_PAD0 value = 0x%x\n", + pm8001_cr32(pm8001_ha, 0, + MSGU_SCRATCH_PAD_0))); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("SCRATCH_PAD3 value = 0x%x\n", + pm8001_cr32(pm8001_ha, 0, + MSGU_SCRATCH_PAD_3))); + return -1; + } + } + + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("SPC soft reset Complete\n")); + return 0; +} + +static void pm8001_hw_chip_rst(struct pm8001_hba_info *pm8001_ha) +{ + u32 i; + u32 regVal; + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("chip reset start\n")); + + /* do SPC chip reset. */ + regVal = pm8001_cr32(pm8001_ha, 1, SPC_REG_RESET); + regVal &= ~(SPC_REG_RESET_DEVICE); + pm8001_cw32(pm8001_ha, 1, SPC_REG_RESET, regVal); + + /* delay 10 usec */ + udelay(10); + + /* bring the chip out of reset */ + regVal = pm8001_cr32(pm8001_ha, 1, SPC_REG_RESET); + regVal |= SPC_REG_RESET_DEVICE; + pm8001_cw32(pm8001_ha, 1, SPC_REG_RESET, regVal); + + /* delay 10 usec */ + udelay(10); + + /* wait for 20 msec until the firmware gets reloaded */ + i = 20; + do { + mdelay(1); + } while ((--i) != 0); + + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("chip reset finished\n")); +} + +/** + * pm8001_chip_iounmap - unmap the regions that were mapped at initialization. + * @pm8001_ha: our hba card information + */ +static void pm8001_chip_iounmap(struct pm8001_hba_info *pm8001_ha) +{ + s8 bar, logical = 0; + for (bar = 0; bar < 6; bar++) { + /* + ** logical BARs for SPC: + ** bar 0 and 1 - logical BAR0 + ** bar 2 and 3 - logical BAR1 + ** bar4 - logical BAR2 + ** bar5 - logical BAR3 + ** Skip the appropriate assignments: + */ + if ((bar == 1) || (bar == 3)) + continue; + if (pm8001_ha->io_mem[logical].memvirtaddr) { + iounmap(pm8001_ha->io_mem[logical].memvirtaddr); + logical++; + } + } +} + +/** + * pm8001_chip_intx_interrupt_enable - enable PM8001 chip interrupt + * @pm8001_ha: our hba card information + */ +static void +pm8001_chip_intx_interrupt_enable(struct pm8001_hba_info *pm8001_ha) +{ + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL); + pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL); +} + +/** + * pm8001_chip_intx_interrupt_disable - disable PM8001 chip interrupt + * @pm8001_ha: our hba card information + */ +static void +pm8001_chip_intx_interrupt_disable(struct pm8001_hba_info *pm8001_ha) +{ + pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_MASK_ALL); +} + +/** + * pm8001_chip_msix_interrupt_enable - enable PM8001 chip MSI-X interrupt + * @pm8001_ha: our hba card information + * @int_vec_idx: interrupt vector index + */ +static void +pm8001_chip_msix_interrupt_enable(struct pm8001_hba_info *pm8001_ha, + u32 int_vec_idx) +{ + u32 msi_index; + u32 value; + msi_index = int_vec_idx * MSIX_TABLE_ELEMENT_SIZE; + msi_index += MSIX_TABLE_BASE; + pm8001_cw32(pm8001_ha, 0, msi_index, MSIX_INTERRUPT_ENABLE); + value = (1 << int_vec_idx); + pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, value); + +} + +/** + * pm8001_chip_msix_interrupt_disable - disable PM8001 chip MSI-X interrupt + * @pm8001_ha: our hba card information + * @int_vec_idx: interrupt vector index + */ +static void +pm8001_chip_msix_interrupt_disable(struct pm8001_hba_info *pm8001_ha, + u32 int_vec_idx) +{ + u32 msi_index; + msi_index = int_vec_idx * MSIX_TABLE_ELEMENT_SIZE; + msi_index += MSIX_TABLE_BASE; + pm8001_cw32(pm8001_ha, 0, msi_index, MSIX_INTERRUPT_DISABLE); + +} +/** + * pm8001_chip_interrupt_enable - enable PM8001 chip interrupt + * @pm8001_ha: our hba card information + */ +static void +pm8001_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha) +{ +#ifdef PM8001_USE_MSIX + pm8001_chip_msix_interrupt_enable(pm8001_ha, 0); + return; +#endif + pm8001_chip_intx_interrupt_enable(pm8001_ha); + +} + +/** + * pm8001_chip_interrupt_disable - disable PM8001 chip interrupt + * @pm8001_ha: our hba card information + */ +static void +pm8001_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha) +{ +#ifdef PM8001_USE_MSIX + pm8001_chip_msix_interrupt_disable(pm8001_ha, 0); + return; +#endif + pm8001_chip_intx_interrupt_disable(pm8001_ha); + +} + +/** + * mpi_msg_free_get - get a free message buffer for the inbound queue transfer. + * @circularQ: the inbound queue we want to transfer to HBA. + * @messageSize: the message size of this transfer, normally it is 64 bytes + * @messagePtr: the pointer to message. + */ +static u32 mpi_msg_free_get(struct inbound_queue_table *circularQ, + u16 messageSize, void **messagePtr) +{ + u32 offset, consumer_index; + struct mpi_msg_hdr *msgHeader; + u8 bcCount = 1; /* only support single buffer */ + + /* Check whether the requested message size can be allocated in this queue */ + if (messageSize > 64) { + *messagePtr = NULL; + return -1; + } + + /* Stores the new consumer index */ + consumer_index = pm8001_read_32(circularQ->ci_virt); + circularQ->consumer_index = cpu_to_le32(consumer_index); + if (((circularQ->producer_idx + bcCount) % 256) == + circularQ->consumer_index) { + *messagePtr = NULL; + return -1; + } + /* get memory IOMB buffer address */ + offset = circularQ->producer_idx * 64; + /* increment to next bcCount element */ + circularQ->producer_idx = (circularQ->producer_idx + bcCount) % 256; + /* Adds that distance to the base of the region virtual address plus + the message header size*/ + msgHeader = (struct mpi_msg_hdr *)(circularQ->base_virt + offset); + *messagePtr = ((void *)msgHeader) + sizeof(struct mpi_msg_hdr); + return 0; +} + +/** + * mpi_build_cmd - build the message queue for transfer, update the PI to FW + * to tell the fw to get this message from IOMB. + * @pm8001_ha: our hba card information + * @circularQ: the inbound queue we want to transfer to HBA. + * @opCode: the operation code for commands that the LLDD and FW recognize. + * @payload: the command payload of each operation command. + */ +static u32 mpi_build_cmd(struct pm8001_hba_info *pm8001_ha, + struct inbound_queue_table *circularQ, + u32 opCode, void *payload) +{ + u32 Header = 0, hpriority = 0, bc = 1, category = 0x02; + u32 responseQueue = 0; + void *pMessage; + + if (mpi_msg_free_get(circularQ, 64, &pMessage) < 0) { + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("No free mpi buffer\n")); + return -1; + } + + /*Copy to the payload*/ + memcpy(pMessage, payload, (64 - sizeof(struct mpi_msg_hdr))); + + /*Build the header*/ + Header = ((1 << 31) | (hpriority << 30) | ((bc & 0x1f) << 24) + | ((responseQueue & 0x3F) << 16) + | ((category & 0xF) << 12) | (opCode & 0xFFF)); + + pm8001_write_32((pMessage - 4), 0, cpu_to_le32(Header)); + /*Update the PI to the firmware*/ + pm8001_cw32(pm8001_ha, circularQ->pi_pci_bar, + circularQ->pi_offset, circularQ->producer_idx); + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("after PI= %d CI= %d\n", circularQ->producer_idx, + circularQ->consumer_index)); + return 0; +} + +static u32 mpi_msg_free_set(struct pm8001_hba_info *pm8001_ha, + struct outbound_queue_table *circularQ, u8 bc) +{ + u32 producer_index; + /* free the circular queue buffer elements associated with the message*/ + circularQ->consumer_idx = (circularQ->consumer_idx + bc) % 256; + /* update the CI of outbound queue */ + pm8001_cw32(pm8001_ha, circularQ->ci_pci_bar, circularQ->ci_offset, + circularQ->consumer_idx); + /* Update the producer index from SPC*/ + producer_index = pm8001_read_32(circularQ->pi_virt); + circularQ->producer_index = cpu_to_le32(producer_index); + PM8001_IO_DBG(pm8001_ha, + pm8001_printk(" CI=%d PI=%d\n", circularQ->consumer_idx, + circularQ->producer_index)); + return 0; +} + +/** + * mpi_msg_consume - get the MPI message from outbound queue message table. + * @pm8001_ha: our hba card information + * @circularQ: the outbound queue table. + * @messagePtr1: the message contents of this outbound message. + * @pBC: the message size.
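+ * + * Walk the outbound ring from the current consumer index: an IOMB whose + * header has the valid bit (bit 31) set is ready to be processed; + * OPC_OUB_SKIP_ENTRY IOMBs are consumed in place, any other valid IOMB is + * returned to the caller together with its buffer count.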
+ */ +static u32 mpi_msg_consume(struct pm8001_hba_info *pm8001_ha, + struct outbound_queue_table *circularQ, + void **messagePtr1, u8 *pBC) +{ + struct mpi_msg_hdr *msgHeader; + __le32 msgHeader_tmp; + u32 header_tmp; + do { + /* If there are not-yet-delivered messages ... */ + if (circularQ->producer_index != circularQ->consumer_idx) { + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("process an IOMB\n")); + /*Get the pointer to the circular queue buffer element*/ + msgHeader = (struct mpi_msg_hdr *) + (circularQ->base_virt + + circularQ->consumer_idx * 64); + /* read header */ + header_tmp = pm8001_read_32(msgHeader); + msgHeader_tmp = cpu_to_le32(header_tmp); + if (0 != (msgHeader_tmp & 0x80000000)) { + if (OPC_OUB_SKIP_ENTRY != + (msgHeader_tmp & 0xfff)) { + *messagePtr1 = + ((u8 *)msgHeader) + + sizeof(struct mpi_msg_hdr); + *pBC = (u8)((msgHeader_tmp >> 24) & + 0x1f); + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("mpi_msg_consume" + ": CI=%d PI=%d msgHeader=%x\n", + circularQ->consumer_idx, + circularQ->producer_index, + msgHeader_tmp)); + return MPI_IO_STATUS_SUCCESS; + } else { + u32 producer_index; + void *pi_virt = circularQ->pi_virt; + /* free the circular queue buffer + elements associated with the message*/ + circularQ->consumer_idx = + (circularQ->consumer_idx + + ((msgHeader_tmp >> 24) & 0x1f)) + % 256; + /* update the CI of outbound queue */ + pm8001_cw32(pm8001_ha, + circularQ->ci_pci_bar, + circularQ->ci_offset, + circularQ->consumer_idx); + /* Update the producer index from SPC */ + producer_index = + pm8001_read_32(pi_virt); + circularQ->producer_index = + cpu_to_le32(producer_index); + } + } else + return MPI_IO_STATUS_FAIL; + } + } while (circularQ->producer_index != circularQ->consumer_idx); + /* while we don't have any more not-yet-delivered message */ + /* report empty */ + return MPI_IO_STATUS_BUSY; +} + +static void pm8001_work_queue(struct work_struct *work) +{ + struct delayed_work *dw = container_of(work, struct delayed_work, work); + struct pm8001_wq *wq = container_of(dw, struct pm8001_wq, work_q); + struct pm8001_device *pm8001_dev; + struct domain_device *dev; + + switch (wq->handler) { + case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS: + pm8001_dev = wq->data; + dev = pm8001_dev->sas_device; + pm8001_I_T_nexus_reset(dev); + break; + case IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY: + pm8001_dev = wq->data; + dev = pm8001_dev->sas_device; + pm8001_I_T_nexus_reset(dev); + break; + case IO_DS_IN_ERROR: + pm8001_dev = wq->data; + dev = pm8001_dev->sas_device; + pm8001_I_T_nexus_reset(dev); + break; + case IO_DS_NON_OPERATIONAL: + pm8001_dev = wq->data; + dev = pm8001_dev->sas_device; + pm8001_I_T_nexus_reset(dev); + break; + } + list_del(&wq->entry); + kfree(wq); +} + +static int pm8001_handle_event(struct pm8001_hba_info *pm8001_ha, void *data, + int handler) +{ + struct pm8001_wq *wq; + int ret = 0; + + wq = kmalloc(sizeof(struct pm8001_wq), GFP_ATOMIC); + if (wq) { + wq->pm8001_ha = pm8001_ha; + wq->data = data; + wq->handler = handler; + INIT_DELAYED_WORK(&wq->work_q, pm8001_work_queue); + list_add_tail(&wq->entry, &pm8001_ha->wq_list); + schedule_delayed_work(&wq->work_q, 0); + } else + ret = -ENOMEM; + + return ret; +} + +/** + * mpi_ssp_completion- process the event that FW response to the SSP request. + * @pm8001_ha: our hba card information + * @piomb: the message contents of this outbound message. 
+ * + * When the FW has completed an SSP request, for example an IO request, and + * has filled the SG buffers with the data, it triggers this event to signal + * that it has finished the job; please check the corresponding buffer. + * We then tell the caller, which may be waiting for the result, so that it + * can notify the upper layer that the task has finished. + */ +static int +mpi_ssp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb) +{ + struct sas_task *t; + struct pm8001_ccb_info *ccb; + unsigned long flags; + u32 status; + u32 param; + u32 tag; + struct ssp_completion_resp *psspPayload; + struct task_status_struct *ts; + struct ssp_response_iu *iu; + struct pm8001_device *pm8001_dev; + psspPayload = (struct ssp_completion_resp *)(piomb + 4); + status = le32_to_cpu(psspPayload->status); + tag = le32_to_cpu(psspPayload->tag); + ccb = &pm8001_ha->ccb_info[tag]; + pm8001_dev = ccb->device; + param = le32_to_cpu(psspPayload->param); + + PM8001_IO_DBG(pm8001_ha, pm8001_printk("OPC_OUB_SSP_COMP\n")); + t = ccb->task; + + if (status) + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("sas IO status 0x%x\n", status)); + if (unlikely(!t || !t->lldd_task || !t->dev)) + return -1; + ts = &t->task_status; + switch (status) { + case IO_SUCCESS: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS" + ",param = %d \n", param)); + if (param == 0) { + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAM_GOOD; + } else { + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_PROTO_RESPONSE; + ts->residual = param; + iu = &psspPayload->ssp_resp_iu; + sas_ssp_task_response(pm8001_ha->dev, t, iu); + } + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_ABORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_ABORTED IOMB Tag \n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_ABORTED_TASK; + break; + case IO_UNDERFLOW: + /* SSP Completion with error */ + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW" + ",param = %d \n", param)); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_UNDERRUN; + ts->residual = param; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_NO_DEVICE: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_NO_DEVICE\n")); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_PHY_DOWN; + break; + case IO_XFER_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_XFER_ERROR_PHY_NOT_READY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_EPROTO; + break; + case IO_OPEN_CNX_ERROR_ZONE_VIOLATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + break; + case IO_OPEN_CNX_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_CONT0; + break; + case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + if (!t->uldd_task) + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); + break; + case IO_OPEN_CNX_ERROR_BAD_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_BAD_DEST; + break; + case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_" + "NOT_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_CONN_RATE; + break; + case IO_OPEN_CNX_ERROR_WRONG_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n")); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_WRONG_DEST; + break; + case IO_XFER_ERROR_NAK_RECEIVED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_XFER_ERROR_ACK_NAK_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_NAK_R_ERR; + break; + case IO_XFER_ERROR_DMA: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_DMA\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_XFER_OPEN_RETRY_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_XFER_ERROR_OFFSET_MISMATCH: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_PORT_IN_RESET: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_PORT_IN_RESET\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_DS_NON_OPERATIONAL: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_DS_NON_OPERATIONAL\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + if (!t->uldd_task) + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_DS_NON_OPERATIONAL); + break; + case IO_DS_IN_RECOVERY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_DS_IN_RECOVERY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_TM_TAG_NOT_FOUND: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_TM_TAG_NOT_FOUND\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_SSP_EXT_IU_ZERO_LEN_ERROR: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_SSP_EXT_IU_ZERO_LEN_ERROR\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + default: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("Unknown status 0x%x\n", status)); + /* not allowed case. Therefore, return failed status */ + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + } + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("scsi_status = %x\n", + psspPayload->ssp_resp_iu.status)); + spin_lock_irqsave(&t->task_state_lock, flags); + t->task_state_flags &= ~SAS_TASK_STATE_PENDING; + t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; + t->task_state_flags |= SAS_TASK_STATE_DONE; + if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { + spin_unlock_irqrestore(&t->task_state_lock, flags); + PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("task 0x%p done with" + " io_status 0x%x resp 0x%x " + "stat 0x%x but aborted by upper layer!\n", + t, status, ts->resp, ts->stat)); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + } else { + spin_unlock_irqrestore(&t->task_state_lock, flags); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/* in order to force CPU ordering */ + t->task_done(t); + } + return 0; +} + +/*See the comments for mpi_ssp_completion */ +static int mpi_ssp_event(struct pm8001_hba_info *pm8001_ha, void *piomb) +{ + struct sas_task *t; + unsigned long flags; + struct task_status_struct *ts; + struct pm8001_ccb_info *ccb; + struct pm8001_device *pm8001_dev; + struct ssp_event_resp *psspPayload = + (struct ssp_event_resp *)(piomb + 4); + u32 event = le32_to_cpu(psspPayload->event); + u32 tag = le32_to_cpu(psspPayload->tag); + u32 port_id = le32_to_cpu(psspPayload->port_id); + u32 dev_id = le32_to_cpu(psspPayload->device_id); + + ccb = &pm8001_ha->ccb_info[tag]; + t = ccb->task; + pm8001_dev = ccb->device; + if (event) + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("sas IO status 0x%x\n", event)); + if (unlikely(!t || !t->lldd_task || !t->dev)) + return -1; + ts = &t->task_status; + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("port_id = %x,device_id = %x\n", + port_id, dev_id)); + switch (event) { + case IO_OVERFLOW: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_OVERFLOW\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + ts->residual = 0; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_XFER_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_INTERRUPTED; + break; + case IO_XFER_ERROR_PHY_NOT_READY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT" + "_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_EPROTO; + break; + case IO_OPEN_CNX_ERROR_ZONE_VIOLATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + break; + case IO_OPEN_CNX_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_CONT0; + break; + case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + if (!t->uldd_task) + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); + break; + case
IO_OPEN_CNX_ERROR_BAD_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_BAD_DEST; + break; + case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_" + "NOT_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_CONN_RATE; + break; + case IO_OPEN_CNX_ERROR_WRONG_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_WRONG_DEST; + break; + case IO_XFER_ERROR_NAK_RECEIVED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + break; + case IO_XFER_ERROR_ACK_NAK_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_NAK_R_ERR; + break; + case IO_XFER_OPEN_RETRY_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_XFER_ERROR_UNEXPECTED_PHASE: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_UNEXPECTED_PHASE\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + break; + case IO_XFER_ERROR_XFER_RDY_OVERRUN: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_XFER_RDY_OVERRUN\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + break; + case IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + break; + case IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + break; + case IO_XFER_ERROR_OFFSET_MISMATCH: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + break; + case IO_XFER_ERROR_XFER_ZERO_DATA_LEN: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + break; + case IO_XFER_CMD_FRAME_ISSUED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk(" IO_XFER_CMD_FRAME_ISSUED\n")); + return 0; + default: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("Unknown status 0x%x\n", event)); + /* not allowed case. 
Therefore, return failed status */ + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + break; + } + spin_lock_irqsave(&t->task_state_lock, flags); + t->task_state_flags &= ~SAS_TASK_STATE_PENDING; + t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; + t->task_state_flags |= SAS_TASK_STATE_DONE; + if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { + spin_unlock_irqrestore(&t->task_state_lock, flags); + PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("task 0x%p done with" + " event 0x%x resp 0x%x " + "stat 0x%x but aborted by upper layer!\n", + t, event, ts->resp, ts->stat)); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + } else { + spin_unlock_irqrestore(&t->task_state_lock, flags); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/* in order to force CPU ordering */ + t->task_done(t); + } + return 0; +} + +/*See the comments for mpi_ssp_completion */ +static int +mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb) +{ + struct sas_task *t; + struct pm8001_ccb_info *ccb; + unsigned long flags; + u32 param; + u32 status; + u32 tag; + struct sata_completion_resp *psataPayload; + struct task_status_struct *ts; + struct ata_task_resp *resp; + u32 *sata_resp; + struct pm8001_device *pm8001_dev; + + psataPayload = (struct sata_completion_resp *)(piomb + 4); + status = le32_to_cpu(psataPayload->status); + tag = le32_to_cpu(psataPayload->tag); + + ccb = &pm8001_ha->ccb_info[tag]; + param = le32_to_cpu(psataPayload->param); + t = ccb->task; + ts = &t->task_status; + pm8001_dev = ccb->device; + if (status) + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("sata IO status 0x%x\n", status)); + if (unlikely(!t || !t->lldd_task || !t->dev)) + return -1; + + switch (status) { + case IO_SUCCESS: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n")); + if (param == 0) { + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAM_GOOD; + } else { + u8 len; + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_PROTO_RESPONSE; + ts->residual = param; + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("SAS_PROTO_RESPONSE len = %d\n", + param)); + sata_resp = &psataPayload->sata_resp[0]; + resp = (struct ata_task_resp *)ts->buf; + if (t->ata_task.dma_xfer == 0 && + t->data_dir == PCI_DMA_FROMDEVICE) { + len = sizeof(struct pio_setup_fis); + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("PIO read len = %d\n", len)); + } else if (t->ata_task.use_ncq) { + len = sizeof(struct set_dev_bits_fis); + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("FPDMA len = %d\n", len)); + } else { + len = sizeof(struct dev_to_host_fis); + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("other len = %d\n", len)); + } + if (SAS_STATUS_BUF_SIZE >= sizeof(*resp)) { + resp->frame_len = len; + memcpy(&resp->ending_fis[0], sata_resp, len); + ts->buf_valid_size = sizeof(*resp); + } else + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("response too large\n")); + } + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_ABORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_ABORTED IOMB Tag \n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_ABORTED_TASK; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + /* the following cases are to-do cases */ + case IO_UNDERFLOW: + /* SATA Completion with error */ + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_UNDERFLOW param = %d\n", param)); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_UNDERRUN; + ts->residual = param; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_NO_DEVICE: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_NO_DEVICE\n")); + ts->resp =
SAS_TASK_UNDELIVERED; + ts->stat = SAS_PHY_DOWN; + break; + case IO_XFER_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_INTERRUPTED; + break; + case IO_XFER_ERROR_PHY_NOT_READY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT" + "_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_EPROTO; + break; + case IO_OPEN_CNX_ERROR_ZONE_VIOLATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + break; + case IO_OPEN_CNX_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_CONT0; + break; + case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + if (!t->uldd_task) { + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_QUEUE_FULL; + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/*in order to force CPU ordering*/ + t->task_done(t); + return 0; + } + break; + case IO_OPEN_CNX_ERROR_BAD_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n")); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_BAD_DEST; + if (!t->uldd_task) { + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_QUEUE_FULL; + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/*ditto*/ + t->task_done(t); + return 0; + } + break; + case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_" + "NOT_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_CONN_RATE; + break; + case IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_STP_RESOURCES" + "_BUSY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + if (!t->uldd_task) { + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_QUEUE_FULL; + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/* ditto*/ + t->task_done(t); + return 0; + } + break; + case IO_OPEN_CNX_ERROR_WRONG_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_WRONG_DEST; + break; + case IO_XFER_ERROR_NAK_RECEIVED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_NAK_R_ERR; + break; + case IO_XFER_ERROR_ACK_NAK_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_ACK_NAK_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_NAK_R_ERR; + break; 
+ case IO_XFER_ERROR_DMA: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_DMA\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_ABORTED_TASK; + break; + case IO_XFER_ERROR_SATA_LINK_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_SATA_LINK_TIMEOUT\n")); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_DEV_NO_RESPONSE; + break; + case IO_XFER_ERROR_REJECTED_NCQ_MODE: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_REJECTED_NCQ_MODE\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_UNDERRUN; + break; + case IO_XFER_OPEN_RETRY_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + case IO_PORT_IN_RESET: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_PORT_IN_RESET\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + break; + case IO_DS_NON_OPERATIONAL: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_DS_NON_OPERATIONAL\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + if (!t->uldd_task) { + pm8001_handle_event(pm8001_ha, pm8001_dev, + IO_DS_NON_OPERATIONAL); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_QUEUE_FULL; + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/*ditto*/ + t->task_done(t); + return 0; + } + break; + case IO_DS_IN_RECOVERY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_DS_IN_RECOVERY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + break; + case IO_DS_IN_ERROR: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_DS_IN_ERROR\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + if (!t->uldd_task) { + pm8001_handle_event(pm8001_ha, pm8001_dev, + IO_DS_IN_ERROR); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_QUEUE_FULL; + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/*ditto*/ + t->task_done(t); + return 0; + } + break; + case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + default: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("Unknown status 0x%x\n", status)); + /* not allowed case. Therefore, return failed status */ + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + break; + } + spin_lock_irqsave(&t->task_state_lock, flags); + t->task_state_flags &= ~SAS_TASK_STATE_PENDING; + t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; + t->task_state_flags |= SAS_TASK_STATE_DONE; + if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { + spin_unlock_irqrestore(&t->task_state_lock, flags); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("task 0x%p done with io_status 0x%x" + " resp 0x%x stat 0x%x but aborted by upper layer!\n", + t, status, ts->resp, ts->stat)); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + } else { + spin_unlock_irqrestore(&t->task_state_lock, flags); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/* ditto */ + t->task_done(t); + } + return 0; +} + +/*See the comments for mpi_ssp_completion */ +static int mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb) +{ + struct sas_task *t; + unsigned long flags; + struct task_status_struct *ts; + struct pm8001_ccb_info *ccb; + struct pm8001_device *pm8001_dev; + struct sata_event_resp *psataPayload = + (struct sata_event_resp *)(piomb + 4); + u32 event = le32_to_cpu(psataPayload->event); + u32 tag = le32_to_cpu(psataPayload->tag); + u32 port_id = le32_to_cpu(psataPayload->port_id); + u32 dev_id = le32_to_cpu(psataPayload->device_id); + + ccb = &pm8001_ha->ccb_info[tag]; + t = ccb->task; + pm8001_dev = ccb->device; + if (event) + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("sata IO status 0x%x\n", event)); + if (unlikely(!t || !t->lldd_task || !t->dev)) + return -1; + ts = &t->task_status; + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("port_id = %x,device_id = %x\n", + port_id, dev_id)); + switch (event) { + case IO_OVERFLOW: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_OVERFLOW\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + ts->residual = 0; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_XFER_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_INTERRUPTED; + break; + case IO_XFER_ERROR_PHY_NOT_READY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT" + "_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_EPROTO; + break; + case IO_OPEN_CNX_ERROR_ZONE_VIOLATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + break; + case IO_OPEN_CNX_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_CONT0; + break; + case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n")); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_DEV_NO_RESPONSE; + if (!t->uldd_task) { + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_QUEUE_FULL; + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/*ditto*/ + t->task_done(t); + return 0; + } + break; + case
IO_OPEN_CNX_ERROR_BAD_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n")); + ts->resp = SAS_TASK_UNDELIVERED; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_BAD_DEST; + break; + case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_" + "NOT_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_CONN_RATE; + break; + case IO_OPEN_CNX_ERROR_WRONG_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_WRONG_DEST; + break; + case IO_XFER_ERROR_NAK_RECEIVED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_NAK_RECEIVED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_NAK_R_ERR; + break; + case IO_XFER_ERROR_PEER_ABORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_PEER_ABORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_NAK_R_ERR; + break; + case IO_XFER_ERROR_REJECTED_NCQ_MODE: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_REJECTED_NCQ_MODE\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_UNDERRUN; + break; + case IO_XFER_OPEN_RETRY_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + case IO_XFER_ERROR_UNEXPECTED_PHASE: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_UNEXPECTED_PHASE\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + case IO_XFER_ERROR_XFER_RDY_OVERRUN: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_XFER_RDY_OVERRUN\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + case IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + case IO_XFER_ERROR_OFFSET_MISMATCH: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_OFFSET_MISMATCH\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + case IO_XFER_ERROR_XFER_ZERO_DATA_LEN: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_XFER_ZERO_DATA_LEN\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + case IO_XFER_CMD_FRAME_ISSUED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_CMD_FRAME_ISSUED\n")); + break; + case IO_XFER_PIO_SETUP_ERROR: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_PIO_SETUP_ERROR\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + default: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("Unknown status 0x%x\n", event)); + /* not allowed case. 
Therefore, return failed status */ + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_TO; + break; + } + spin_lock_irqsave(&t->task_state_lock, flags); + t->task_state_flags &= ~SAS_TASK_STATE_PENDING; + t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; + t->task_state_flags |= SAS_TASK_STATE_DONE; + if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { + spin_unlock_irqrestore(&t->task_state_lock, flags); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("task 0x%p done with io_status 0x%x" + " resp 0x%x stat 0x%x but aborted by upper layer!\n", + t, event, ts->resp, ts->stat)); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + } else { + spin_unlock_irqrestore(&t->task_state_lock, flags); + pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); + mb();/* in order to force CPU ordering */ + t->task_done(t); + } + return 0; +} + +/*See the comments for mpi_ssp_completion */ +static int +mpi_smp_completion(struct pm8001_hba_info *pm8001_ha, void *piomb) +{ + u32 param; + struct sas_task *t; + struct pm8001_ccb_info *ccb; + unsigned long flags; + u32 status; + u32 tag; + struct smp_completion_resp *psmpPayload; + struct task_status_struct *ts; + struct pm8001_device *pm8001_dev; + + psmpPayload = (struct smp_completion_resp *)(piomb + 4); + status = le32_to_cpu(psmpPayload->status); + tag = le32_to_cpu(psmpPayload->tag); + + ccb = &pm8001_ha->ccb_info[tag]; + param = le32_to_cpu(psmpPayload->param); + t = ccb->task; + ts = &t->task_status; + pm8001_dev = ccb->device; + if (status) + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("smp IO status 0x%x\n", status)); + if (unlikely(!t || !t->lldd_task || !t->dev)) + return -1; + + switch (status) { + case IO_SUCCESS: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAM_GOOD; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_ABORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_ABORTED IOMB\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_ABORTED_TASK; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_OVERFLOW: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_UNDERFLOW\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DATA_OVERRUN; + ts->residual = 0; + if (pm8001_dev) + pm8001_dev->running_req--; + break; + case IO_NO_DEVICE: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_NO_DEVICE\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_PHY_DOWN; + break; + case IO_ERROR_HW_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_ERROR_HW_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAM_BUSY; + break; + case IO_XFER_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAM_BUSY; + break; + case IO_XFER_ERROR_PHY_NOT_READY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAM_BUSY; + break; + case IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + break; + case IO_OPEN_CNX_ERROR_ZONE_VIOLATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_ZONE_VIOLATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + break; + case IO_OPEN_CNX_ERROR_BREAK: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BREAK\n")); + ts->resp = SAS_TASK_COMPLETE; 
+ ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_CONT0; + break; + case IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_UNKNOWN; + pm8001_handle_event(pm8001_ha, + pm8001_dev, + IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS); + break; + case IO_OPEN_CNX_ERROR_BAD_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_BAD_DESTINATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_BAD_DEST; + break; + case IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_CONNECTION_RATE_" + "NOT_SUPPORTED\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_CONN_RATE; + break; + case IO_OPEN_CNX_ERROR_WRONG_DESTINATION: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_WRONG_DESTINATION\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_WRONG_DEST; + break; + case IO_XFER_ERROR_RX_FRAME: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_ERROR_RX_FRAME\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + break; + case IO_XFER_OPEN_RETRY_TIMEOUT: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_XFER_OPEN_RETRY_TIMEOUT\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_ERROR_INTERNAL_SMP_RESOURCE: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_ERROR_INTERNAL_SMP_RESOURCE\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_QUEUE_FULL; + break; + case IO_PORT_IN_RESET: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_PORT_IN_RESET\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_DS_NON_OPERATIONAL: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_DS_NON_OPERATIONAL\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + break; + case IO_DS_IN_RECOVERY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_DS_IN_RECOVERY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + case IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + break; + default: + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("Unknown status 0x%x\n", status)); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAS_DEV_NO_RESPONSE; + /* not allowed case. 
Therefore, return failed status */
+		break;
+	}
+	spin_lock_irqsave(&t->task_state_lock, flags);
+	t->task_state_flags &= ~SAS_TASK_STATE_PENDING;
+	t->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
+	t->task_state_flags |= SAS_TASK_STATE_DONE;
+	if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) {
+		spin_unlock_irqrestore(&t->task_state_lock, flags);
+		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("task 0x%p done with"
+			" io_status 0x%x resp 0x%x "
+			"stat 0x%x but aborted by upper layer!\n",
+			t, status, ts->resp, ts->stat));
+		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+	} else {
+		spin_unlock_irqrestore(&t->task_state_lock, flags);
+		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+		mb();/* in order to force CPU ordering */
+		t->task_done(t);
+	}
+	return 0;
+}
+
+static void
+mpi_set_dev_state_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	struct set_dev_state_resp *pPayload =
+		(struct set_dev_state_resp *)(piomb + 4);
+	u32 tag = le32_to_cpu(pPayload->tag);
+	struct pm8001_ccb_info *ccb = &pm8001_ha->ccb_info[tag];
+	struct pm8001_device *pm8001_dev = ccb->device;
+	u32 status = le32_to_cpu(pPayload->status);
+	u32 device_id = le32_to_cpu(pPayload->device_id);
+	u8 pds = le32_to_cpu(pPayload->pds_nds) & PDS_BITS;
+	u8 nds = le32_to_cpu(pPayload->pds_nds) & NDS_BITS;
+	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("Set device id = 0x%x state "
+		"from 0x%x to 0x%x status = 0x%x!\n",
+		device_id, pds, nds, status));
+	complete(pm8001_dev->setds_completion);
+	ccb->task = NULL;
+	ccb->ccb_tag = 0xFFFFFFFF;
+	pm8001_ccb_free(pm8001_ha, tag);
+}
+
+static void
+mpi_set_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	struct get_nvm_data_resp *pPayload =
+		(struct get_nvm_data_resp *)(piomb + 4);
+	u32 tag = le32_to_cpu(pPayload->tag);
+	struct pm8001_ccb_info *ccb = &pm8001_ha->ccb_info[tag];
+	u32 dlen_status = le32_to_cpu(pPayload->dlen_status);
+	complete(pm8001_ha->nvmd_completion);
+	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("Set nvm data complete!\n"));
+	if ((dlen_status & NVMD_STAT) != 0) {
+		PM8001_FAIL_DBG(pm8001_ha,
+			pm8001_printk("Set nvm data error!\n"));
+		return;
+	}
+	ccb->task = NULL;
+	ccb->ccb_tag = 0xFFFFFFFF;
+	pm8001_ccb_free(pm8001_ha, tag);
+}
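/*
 * Illustrative sketch, not part of the patch: both NVMD completion handlers
 * here split the same dlen_status word -- an error status selected by the
 * driver's NVMD_STAT mask and, for reads, the returned data length in bits
 * [31:24], as used by the "(dlen_status & NVMD_LEN) >> 24" expression in
 * mpi_get_nvmd_resp() below.
 */
static inline u32 example_nvmd_data_len(u32 dlen_status)
{
	return (dlen_status & NVMD_LEN) >> 24;	/* length of valid data */
}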
+
+static void
+mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	struct fw_control_ex *fw_control_context;
+	struct get_nvm_data_resp *pPayload =
+		(struct get_nvm_data_resp *)(piomb + 4);
+	u32 tag = le32_to_cpu(pPayload->tag);
+	struct pm8001_ccb_info *ccb = &pm8001_ha->ccb_info[tag];
+	u32 dlen_status = le32_to_cpu(pPayload->dlen_status);
+	u32 ir_tds_bn_dps_das_nvm =
+		le32_to_cpu(pPayload->ir_tda_bn_dps_das_nvm);
+	void *virt_addr = pm8001_ha->memoryMap.region[NVMD].virt_ptr;
+	fw_control_context = ccb->fw_control_context;
+
+	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("Get nvm data complete!\n"));
+	if ((dlen_status & NVMD_STAT) != 0) {
+		PM8001_FAIL_DBG(pm8001_ha,
+			pm8001_printk("Get nvm data error!\n"));
+		complete(pm8001_ha->nvmd_completion);
+		return;
+	}
+
+	if (ir_tds_bn_dps_das_nvm & IPMode) {
+		/* indirect mode - IR bit set */
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("Get NVMD success, IR=1\n"));
+		if ((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == TWI_DEVICE) {
+			if (ir_tds_bn_dps_das_nvm == 0x80a80200) {
+				memcpy(pm8001_ha->sas_addr,
+					((u8 *)virt_addr + 4),
+					SAS_ADDR_SIZE);
+				PM8001_MSG_DBG(pm8001_ha,
+					pm8001_printk("Get SAS address"
+					" from VPD successfully!\n"));
+			}
+		} else if (((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == C_SEEPROM)
+			|| ((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == VPD_FLASH)
+			|| ((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == EXPAN_ROM)) {
+			;
+		} else if (((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == AAP1_RDUMP)
+			|| ((ir_tds_bn_dps_das_nvm & NVMD_TYPE) == IOP_RDUMP)) {
+			;
+		} else {
+			/* This should not happen */
+			PM8001_MSG_DBG(pm8001_ha,
+				pm8001_printk("(IR=1)Wrong Device type 0x%x\n",
+				ir_tds_bn_dps_das_nvm));
+		}
+	} else { /* direct mode */
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("Get NVMD success, IR=0, dataLen=%d\n",
+			(dlen_status & NVMD_LEN) >> 24));
+	}
+	memcpy((void *)(fw_control_context->usrAddr),
+		(void *)(pm8001_ha->memoryMap.region[NVMD].virt_ptr),
+		fw_control_context->len);
+	complete(pm8001_ha->nvmd_completion);
+	ccb->task = NULL;
+	ccb->ccb_tag = 0xFFFFFFFF;
+	pm8001_ccb_free(pm8001_ha, tag);
+}
+
+static int mpi_local_phy_ctl(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	struct local_phy_ctl_resp *pPayload =
+		(struct local_phy_ctl_resp *)(piomb + 4);
+	u32 status = le32_to_cpu(pPayload->status);
+	u32 phy_id = le32_to_cpu(pPayload->phyop_phyid) & ID_BITS;
+	u32 phy_op = le32_to_cpu(pPayload->phyop_phyid) & OP_BITS;
+	if (status != 0) {
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("%x phy execute %x phy op failed!\n",
+			phy_id, phy_op));
+	} else
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("%x phy execute %x phy op success!\n",
+			phy_id, phy_op));
+	return 0;
+}
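/*
 * Illustrative sketch, not part of the patch: the phyop_phyid word that
 * mpi_local_phy_ctl() above takes apart with ID_BITS and OP_BITS is built
 * on the request side exactly as pm8001_chip_phy_ctl_req() does later in
 * this file -- operation in bits [15:8], phy id in bits [3:0].
 */
static inline u32 example_pack_phyop_phyid(u32 phy_op, u32 phy_id)
{
	return ((phy_op & 0xff) << 8) | (phy_id & 0x0f);
}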
+
+/**
+ * pm8001_bytes_dmaed - one of the interface functions for communicating
+ * with libsas
+ * @pm8001_ha: our hba card information
+ * @i: which phy received the event.
+ *
+ * When the HBA driver receives the identify-done event, or an initial FIS
+ * received event (for SATA), it invokes this function to notify the sas
+ * layer that the sas topology has formed and the whole sas domain should
+ * be discovered; when a broadcast (change) primitive is received, it only
+ * tells the sas layer to discover the changed domain rather than the whole
+ * domain.
+ */
+static void pm8001_bytes_dmaed(struct pm8001_hba_info *pm8001_ha, int i)
+{
+	struct pm8001_phy *phy = &pm8001_ha->phy[i];
+	struct asd_sas_phy *sas_phy = &phy->sas_phy;
+	struct sas_ha_struct *sas_ha;
+	if (!phy->phy_attached)
+		return;
+
+	sas_ha = pm8001_ha->sas;
+	if (sas_phy->phy) {
+		struct sas_phy *sphy = sas_phy->phy;
+		sphy->negotiated_linkrate = sas_phy->linkrate;
+		sphy->minimum_linkrate = phy->minimum_linkrate;
+		sphy->minimum_linkrate_hw = SAS_LINK_RATE_1_5_GBPS;
+		sphy->maximum_linkrate = phy->maximum_linkrate;
+		sphy->maximum_linkrate_hw = phy->maximum_linkrate;
+	}
+
+	if (phy->phy_type & PORT_TYPE_SAS) {
+		struct sas_identify_frame *id;
+		id = (struct sas_identify_frame *)phy->frame_rcvd;
+		id->dev_type = phy->identify.device_type;
+		id->initiator_bits = SAS_PROTOCOL_ALL;
+		id->target_bits = phy->identify.target_port_protocols;
+	} else if (phy->phy_type & PORT_TYPE_SATA) {
+		/* Nothing */
+	}
+	PM8001_MSG_DBG(pm8001_ha, pm8001_printk("phy %d bytes dmaed.\n", i));
+
+	sas_phy->frame_rcvd_size = phy->frame_rcvd_size;
+	pm8001_ha->sas->notify_port_event(sas_phy, PORTE_BYTES_DMAED);
+}
+
+/* Get the link rate speed */
+static void get_lrate_mode(struct pm8001_phy *phy, u8 link_rate)
+{
+	struct sas_phy *sas_phy = phy->sas_phy.phy;
+
+	switch (link_rate) {
+	case PHY_SPEED_60:
+		phy->sas_phy.linkrate = SAS_LINK_RATE_6_0_GBPS;
+		phy->sas_phy.phy->negotiated_linkrate = SAS_LINK_RATE_6_0_GBPS;
+		break;
+	case PHY_SPEED_30:
+		phy->sas_phy.linkrate = SAS_LINK_RATE_3_0_GBPS;
+		phy->sas_phy.phy->negotiated_linkrate = SAS_LINK_RATE_3_0_GBPS;
+		break;
+	case PHY_SPEED_15:
+		phy->sas_phy.linkrate = SAS_LINK_RATE_1_5_GBPS;
+		phy->sas_phy.phy->negotiated_linkrate = SAS_LINK_RATE_1_5_GBPS;
+		break;
+	}
+	sas_phy->negotiated_linkrate = phy->sas_phy.linkrate;
+	sas_phy->maximum_linkrate_hw = SAS_LINK_RATE_6_0_GBPS;
+	sas_phy->minimum_linkrate_hw = SAS_LINK_RATE_1_5_GBPS;
+	sas_phy->maximum_linkrate = SAS_LINK_RATE_6_0_GBPS;
+	sas_phy->minimum_linkrate = SAS_LINK_RATE_1_5_GBPS;
+}
+
+/**
+ * pm8001_get_attached_sas_addr - extract/generate attached SAS address
+ * @phy: pointer to our phy
+ * @sas_addr: pointer to buffer where the SAS address is to be written
+ *
+ * This function extracts the SAS address from a received IDENTIFY frame.
+ * If the OOB is SATA, then a SAS address is generated from the HA tables.
+ *
+ * LOCKING: the frame_rcvd_lock needs to be held since this parses the frame
+ * buffer.
+ */
+static void pm8001_get_attached_sas_addr(struct pm8001_phy *phy,
+	u8 *sas_addr)
+{
+	if (phy->sas_phy.frame_rcvd[0] == 0x34
+		&& phy->sas_phy.oob_mode == SATA_OOB_MODE) {
+		struct pm8001_hba_info *pm8001_ha = phy->sas_phy.ha->lldd_ha;
+		/* FIS device-to-host */
+		u64 addr = be64_to_cpu(*(__be64 *)pm8001_ha->sas_addr);
+		addr += phy->sas_phy.id;
+		*(__be64 *)sas_addr = cpu_to_be64(addr);
+	} else {
+		struct sas_identify_frame *idframe =
+			(void *) phy->sas_phy.frame_rcvd;
+		memcpy(sas_addr, idframe->sas_addr, SAS_ADDR_SIZE);
+	}
+}
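/*
 * Illustrative sketch, not part of the patch: for a directly attached SATA
 * device there is no IDENTIFY frame to parse, so pm8001_get_attached_sas_addr()
 * above fabricates a per-phy address by adding the phy id to the HBA's own
 * SAS address.  A stand-alone helper doing the same arithmetic:
 */
static inline void example_sata_attached_addr(const u8 hba_sas_addr[8],
					      int phy_id, u8 attached[8])
{
	u64 addr = be64_to_cpu(*(const __be64 *)hba_sas_addr) + phy_id;

	*(__be64 *)attached = cpu_to_be64(addr);
}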
+
+/**
+ * pm8001_hw_event_ack_req - for PM8001, some events need to be acknowledged
+ * to the FW.
+ * @pm8001_ha: our hba card information
+ * @Qnum: the outbound queue message number.
+ * @SEA: source of event to ack
+ * @port_id: port id.
+ * @phyId: phy id.
+ * @param0: parameter 0.
+ * @param1: parameter 1.
+ */
+static void pm8001_hw_event_ack_req(struct pm8001_hba_info *pm8001_ha,
+	u32 Qnum, u32 SEA, u32 port_id, u32 phyId, u32 param0, u32 param1)
+{
+	struct hw_event_ack_req payload;
+	u32 opc = OPC_INB_SAS_HW_EVENT_ACK;
+
+	struct inbound_queue_table *circularQ;
+
+	memset((u8 *)&payload, 0, sizeof(payload));
+	circularQ = &pm8001_ha->inbnd_q_tbl[Qnum];
+	payload.tag = 1;
+	payload.sea_phyid_portid = cpu_to_le32(((SEA & 0xFFFF) << 8) |
+		((phyId & 0x0F) << 4) | (port_id & 0x0F));
+	payload.param0 = cpu_to_le32(param0);
+	payload.param1 = cpu_to_le32(param1);
+	mpi_build_cmd(pm8001_ha, circularQ, opc, &payload);
+}
+
+static int pm8001_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
+	u32 phyId, u32 phy_op);
+
+/**
+ * hw_event_sas_phy_up - the FW tells us a SAS phy is up.
+ * @pm8001_ha: our hba card information
+ * @piomb: IO message buffer
+ */
+static void
+hw_event_sas_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	struct hw_event_resp *pPayload =
+		(struct hw_event_resp *)(piomb + 4);
+	u32 lr_evt_status_phyid_portid =
+		le32_to_cpu(pPayload->lr_evt_status_phyid_portid);
+	u8 link_rate =
+		(u8)((lr_evt_status_phyid_portid & 0xF0000000) >> 28);
+	u8 phy_id =
+		(u8)((lr_evt_status_phyid_portid & 0x000000F0) >> 4);
+	struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+	unsigned long flags;
+	u8 deviceType = pPayload->sas_identify.dev_type;
+
+	PM8001_MSG_DBG(pm8001_ha,
+		pm8001_printk("HW_EVENT_SAS_PHY_UP\n"));
+
+	switch (deviceType) {
+	case SAS_PHY_UNUSED:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("device type no device.\n"));
+		break;
+	case SAS_END_DEVICE:
+		PM8001_MSG_DBG(pm8001_ha, pm8001_printk("end device.\n"));
+		pm8001_chip_phy_ctl_req(pm8001_ha, phy_id,
+			PHY_NOTIFY_ENABLE_SPINUP);
+		get_lrate_mode(phy, link_rate);
+		break;
+	case SAS_EDGE_EXPANDER_DEVICE:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("expander device.\n"));
+		get_lrate_mode(phy, link_rate);
+		break;
+	case SAS_FANOUT_EXPANDER_DEVICE:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("fanout expander device.\n"));
+		get_lrate_mode(phy, link_rate);
+		break;
+	default:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("unknown device type (%x)\n", deviceType));
+		break;
+	}
+	phy->phy_type |= PORT_TYPE_SAS;
+	phy->identify.device_type = deviceType;
+	phy->phy_attached = 1;
+	if (phy->identify.device_type == SAS_END_DEV)
+		phy->identify.target_port_protocols = SAS_PROTOCOL_SSP;
+	else if (phy->identify.device_type != NO_DEVICE)
+		phy->identify.target_port_protocols = SAS_PROTOCOL_SMP;
+	phy->sas_phy.oob_mode = SAS_OOB_MODE;
+	sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+	spin_lock_irqsave(&phy->sas_phy.frame_rcvd_lock, flags);
+	memcpy(phy->frame_rcvd, &pPayload->sas_identify,
+		sizeof(struct sas_identify_frame)-4);
+	phy->frame_rcvd_size = sizeof(struct sas_identify_frame) - 4;
+	pm8001_get_attached_sas_addr(phy, phy->sas_phy.attached_sas_addr);
+	spin_unlock_irqrestore(&phy->sas_phy.frame_rcvd_lock, flags);
+	if (pm8001_ha->flags == PM8001F_RUN_TIME)
+		mdelay(200); /* delay a moment to wait for the disk to spin up */
+	pm8001_bytes_dmaed(pm8001_ha, phy_id);
+}
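/*
 * Illustrative sketch, not part of the patch: the lr_evt_status_phyid_portid
 * word unpacked piecemeal in hw_event_sas_phy_up() above (and again in
 * hw_event_sata_phy_up(), hw_event_phy_down() and mpi_hw_event() below)
 * carries five fields:
 *
 *	[31:28] link rate	[27:24] status	[23:8] event type
 *	[7:4]	phy id		[3:0]	port id
 */
static inline void example_decode_hw_event(u32 w, u8 *link_rate, u8 *status,
					   u16 *event_type, u8 *phy_id,
					   u8 *port_id)
{
	*link_rate  = (w & 0xF0000000) >> 28;
	*status     = (w & 0x0F000000) >> 24;
	*event_type = (w & 0x00FFFF00) >> 8;
	*phy_id     = (w & 0x000000F0) >> 4;
	*port_id    = (w & 0x0000000F);
}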
+
+/**
+ * hw_event_sata_phy_up - the FW tells us a SATA phy is up.
+ * @pm8001_ha: our hba card information
+ * @piomb: IO message buffer
+ */
+static void
+hw_event_sata_phy_up(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	struct hw_event_resp *pPayload =
+		(struct hw_event_resp *)(piomb + 4);
+	u32 lr_evt_status_phyid_portid =
+		le32_to_cpu(pPayload->lr_evt_status_phyid_portid);
+	u8 link_rate =
+		(u8)((lr_evt_status_phyid_portid & 0xF0000000) >> 28);
+	u8 phy_id =
+		(u8)((lr_evt_status_phyid_portid & 0x000000F0) >> 4);
+	struct sas_ha_struct *sas_ha = pm8001_ha->sas;
+	struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
+	unsigned long flags;
+	get_lrate_mode(phy, link_rate);
+	phy->phy_type |= PORT_TYPE_SATA;
+	phy->phy_attached = 1;
+	phy->sas_phy.oob_mode = SATA_OOB_MODE;
+	sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+	spin_lock_irqsave(&phy->sas_phy.frame_rcvd_lock, flags);
+	memcpy(phy->frame_rcvd, ((u8 *)&pPayload->sata_fis - 4),
+		sizeof(struct dev_to_host_fis));
+	phy->frame_rcvd_size = sizeof(struct dev_to_host_fis);
+	phy->identify.target_port_protocols = SAS_PROTOCOL_SATA;
+	phy->identify.device_type = SATA_DEV;
+	pm8001_get_attached_sas_addr(phy, phy->sas_phy.attached_sas_addr);
+	spin_unlock_irqrestore(&phy->sas_phy.frame_rcvd_lock, flags);
+	pm8001_bytes_dmaed(pm8001_ha, phy_id);
+}
+
+/**
+ * hw_event_phy_down - we should notify libsas that the phy is down.
+ * @pm8001_ha: our hba card information
+ * @piomb: IO message buffer
+ */
+static void
+hw_event_phy_down(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	struct hw_event_resp *pPayload =
+		(struct hw_event_resp *)(piomb + 4);
+	u32 lr_evt_status_phyid_portid =
+		le32_to_cpu(pPayload->lr_evt_status_phyid_portid);
+	u8 port_id = (u8)(lr_evt_status_phyid_portid & 0x0000000F);
+	u8 phy_id =
+		(u8)((lr_evt_status_phyid_portid & 0x000000F0) >> 4);
+	u32 npip_portstate = le32_to_cpu(pPayload->npip_portstate);
+	u8 portstate = (u8)(npip_portstate & 0x0000000F);
+
+	switch (portstate) {
+	case PORT_VALID:
+		break;
+	case PORT_INVALID:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" PortInvalid portID %d\n", port_id));
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" Last phy Down and port invalid\n"));
+		pm8001_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN,
+			port_id, phy_id, 0, 0);
+		break;
+	case PORT_IN_RESET:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" PortInReset portID %d\n", port_id));
+		break;
+	case PORT_NOT_ESTABLISHED:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" phy Down and PORT_NOT_ESTABLISHED\n"));
+		break;
+	case PORT_LOSTCOMM:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" phy Down and PORT_LOSTCOMM\n"));
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" Last phy Down and port invalid\n"));
+		pm8001_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN,
+			port_id, phy_id, 0, 0);
+		break;
+	default:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" phy Down and (default) = %x\n",
+			portstate));
+		break;
+	}
+}
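/*
 * Illustrative sketch, not part of the patch: hw_event_phy_down() above
 * acknowledges PHY_DOWN through pm8001_hw_event_ack_req(), whose
 * sea_phyid_portid word packs the source of event to ack (SEA), the phy id
 * and the port id the same way the driver does:
 */
static inline u32 example_pack_sea_phyid_portid(u32 sea, u32 phy_id,
						u32 port_id)
{
	return ((sea & 0xFFFF) << 8) | ((phy_id & 0x0F) << 4) |
		(port_id & 0x0F);
}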
+
+/**
+ * mpi_reg_resp - process the register device ID response.
+ * @pm8001_ha: our hba card information
+ * @piomb: IO message buffer
+ *
+ * When the sas layer finds a device it notifies the LLDD, and the driver
+ * registers the domain device with the FW; this event returns the device ID
+ * that the FW has assigned.  From now on, inter-communication with the FW
+ * no longer uses the SAS address but the device ID the FW assigned.
+ */
+static int mpi_reg_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	u32 status;
+	u32 device_id;
+	u32 htag;
+	struct pm8001_ccb_info *ccb;
+	struct pm8001_device *pm8001_dev;
+	struct dev_reg_resp *registerRespPayload =
+		(struct dev_reg_resp *)(piomb + 4);
+
+	htag = le32_to_cpu(registerRespPayload->tag);
+	ccb = &pm8001_ha->ccb_info[htag];
+	pm8001_dev = ccb->device;
+	status = le32_to_cpu(registerRespPayload->status);
+	device_id = le32_to_cpu(registerRespPayload->device_id);
+	PM8001_MSG_DBG(pm8001_ha,
+		pm8001_printk(" register device status = %d\n", status));
+	switch (status) {
+	case DEVREG_SUCCESS:
+		PM8001_MSG_DBG(pm8001_ha, pm8001_printk("DEVREG_SUCCESS\n"));
+		pm8001_dev->device_id = device_id;
+		break;
+	case DEVREG_FAILURE_OUT_OF_RESOURCE:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_OUT_OF_RESOURCE\n"));
+		break;
+	case DEVREG_FAILURE_DEVICE_ALREADY_REGISTERED:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_DEVICE_ALREADY_REGISTERED\n"));
+		break;
+	case DEVREG_FAILURE_INVALID_PHY_ID:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_INVALID_PHY_ID\n"));
+		break;
+	case DEVREG_FAILURE_PHY_ID_ALREADY_REGISTERED:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_PHY_ID_ALREADY_REGISTERED\n"));
+		break;
+	case DEVREG_FAILURE_PORT_ID_OUT_OF_RANGE:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_PORT_ID_OUT_OF_RANGE\n"));
+		break;
+	case DEVREG_FAILURE_PORT_NOT_VALID_STATE:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_PORT_NOT_VALID_STATE\n"));
+		break;
+	case DEVREG_FAILURE_DEVICE_TYPE_NOT_VALID:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_DEVICE_TYPE_NOT_VALID\n"));
+		break;
+	default:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk("DEVREG_FAILURE_DEVICE_TYPE_NOT_SUPPORTED\n"));
+		break;
+	}
+	complete(pm8001_dev->dcompletion);
+	ccb->task = NULL;
+	ccb->ccb_tag = 0xFFFFFFFF;
+	pm8001_ccb_free(pm8001_ha, htag);
+	return 0;
+}
+
+static int mpi_dereg_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	u32 status;
+	u32 device_id;
+	struct dev_reg_resp *registerRespPayload =
+		(struct dev_reg_resp *)(piomb + 4);
+
+	status = le32_to_cpu(registerRespPayload->status);
+	device_id = le32_to_cpu(registerRespPayload->device_id);
+	if (status != 0)
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(" deregister device failed, status = %x"
+			", device_id = %x\n", status, device_id));
+	return 0;
+}
+
+static int
+mpi_fw_flash_update_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+{
+	u32 status;
+	struct fw_control_ex fw_control_context;
+	struct fw_flash_Update_resp *ppayload =
+		(struct fw_flash_Update_resp *)(piomb + 4);
+	u32 tag = le32_to_cpu(ppayload->tag);
+	struct pm8001_ccb_info *ccb = &pm8001_ha->ccb_info[tag];
+	status = le32_to_cpu(ppayload->status);
+	memcpy(&fw_control_context,
+		ccb->fw_control_context,
+		sizeof(fw_control_context));
+	switch (status) {
+	case FLASH_UPDATE_COMPLETE_PENDING_REBOOT:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(": FLASH_UPDATE_COMPLETE_PENDING_REBOOT\n"));
+		break;
+	case FLASH_UPDATE_IN_PROGRESS:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(": FLASH_UPDATE_IN_PROGRESS\n"));
+		break;
+	case FLASH_UPDATE_HDR_ERR:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(": FLASH_UPDATE_HDR_ERR\n"));
+		break;
+	case FLASH_UPDATE_OFFSET_ERR:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(": FLASH_UPDATE_OFFSET_ERR\n"));
+		break;
+	case FLASH_UPDATE_CRC_ERR:
+		PM8001_MSG_DBG(pm8001_ha,
+			pm8001_printk(": FLASH_UPDATE_CRC_ERR\n"));
+
break; + case FLASH_UPDATE_LENGTH_ERR: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk(": FLASH_UPDATE_LENGTH_ERR\n")); + break; + case FLASH_UPDATE_HW_ERR: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk(": FLASH_UPDATE_HW_ERR\n")); + break; + case FLASH_UPDATE_DNLD_NOT_SUPPORTED: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk(": FLASH_UPDATE_DNLD_NOT_SUPPORTED\n")); + break; + case FLASH_UPDATE_DISABLED: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk(": FLASH_UPDATE_DISABLED\n")); + break; + default: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("No matched status = %d\n", status)); + break; + } + ccb->fw_control_context->fw_control->retcode = status; + pci_free_consistent(pm8001_ha->pdev, + fw_control_context.len, + fw_control_context.virtAddr, + fw_control_context.phys_addr); + complete(pm8001_ha->nvmd_completion); + ccb->task = NULL; + ccb->ccb_tag = 0xFFFFFFFF; + pm8001_ccb_free(pm8001_ha, tag); + return 0; +} + +static int +mpi_general_event(struct pm8001_hba_info *pm8001_ha , void *piomb) +{ + u32 status; + int i; + struct general_event_resp *pPayload = + (struct general_event_resp *)(piomb + 4); + status = le32_to_cpu(pPayload->status); + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk(" status = 0x%x\n", status)); + for (i = 0; i < GENERAL_EVENT_PAYLOAD; i++) + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("inb_IOMB_payload[0x%x] 0x%x, \n", i, + pPayload->inb_IOMB_payload[i])); + return 0; +} + +static int +mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb) +{ + struct sas_task *t; + struct pm8001_ccb_info *ccb; + unsigned long flags; + u32 status ; + u32 tag, scp; + struct task_status_struct *ts; + + struct task_abort_resp *pPayload = + (struct task_abort_resp *)(piomb + 4); + ccb = &pm8001_ha->ccb_info[pPayload->tag]; + t = ccb->task; + ts = &t->task_status; + + if (t == NULL) + return -1; + + status = le32_to_cpu(pPayload->status); + tag = le32_to_cpu(pPayload->tag); + scp = le32_to_cpu(pPayload->scp); + PM8001_IO_DBG(pm8001_ha, + pm8001_printk(" status = 0x%x\n", status)); + if (status != 0) + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("task abort failed tag = 0x%x," + " scp= 0x%x\n", tag, scp)); + switch (status) { + case IO_SUCCESS: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_SUCCESS\n")); + ts->resp = SAS_TASK_COMPLETE; + ts->stat = SAM_GOOD; + break; + case IO_NOT_VALID: + PM8001_IO_DBG(pm8001_ha, pm8001_printk("IO_NOT_VALID\n")); + ts->resp = TMF_RESP_FUNC_FAILED; + break; + } + spin_lock_irqsave(&t->task_state_lock, flags); + t->task_state_flags &= ~SAS_TASK_STATE_PENDING; + t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; + t->task_state_flags |= SAS_TASK_STATE_DONE; + spin_unlock_irqrestore(&t->task_state_lock, flags); + pm8001_ccb_task_free(pm8001_ha, t, ccb, pPayload->tag); + mb(); + t->task_done(t); + return 0; +} + +/** + * mpi_hw_event -The hw event has come. 
+ * @pm8001_ha: our hba card information + * @piomb: IO message buffer + */ +static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void* piomb) +{ + unsigned long flags; + struct hw_event_resp *pPayload = + (struct hw_event_resp *)(piomb + 4); + u32 lr_evt_status_phyid_portid = + le32_to_cpu(pPayload->lr_evt_status_phyid_portid); + u8 port_id = (u8)(lr_evt_status_phyid_portid & 0x0000000F); + u8 phy_id = + (u8)((lr_evt_status_phyid_portid & 0x000000F0) >> 4); + u16 eventType = + (u16)((lr_evt_status_phyid_portid & 0x00FFFF00) >> 8); + u8 status = + (u8)((lr_evt_status_phyid_portid & 0x0F000000) >> 24); + struct sas_ha_struct *sas_ha = pm8001_ha->sas; + struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; + struct asd_sas_phy *sas_phy = sas_ha->sas_phy[phy_id]; + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("outbound queue HW event & event type : ")); + switch (eventType) { + case HW_EVENT_PHY_START_STATUS: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PHY_START_STATUS" + " status = %x\n", status)); + if (status == 0) { + phy->phy_state = 1; + if (pm8001_ha->flags == PM8001F_RUN_TIME) + complete(phy->enable_completion); + } + break; + case HW_EVENT_SAS_PHY_UP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PHY_START_STATUS \n")); + hw_event_sas_phy_up(pm8001_ha, piomb); + break; + case HW_EVENT_SATA_PHY_UP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_SATA_PHY_UP \n")); + hw_event_sata_phy_up(pm8001_ha, piomb); + break; + case HW_EVENT_PHY_STOP_STATUS: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PHY_STOP_STATUS " + "status = %x\n", status)); + if (status == 0) + phy->phy_state = 0; + break; + case HW_EVENT_SATA_SPINUP_HOLD: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_SATA_SPINUP_HOLD \n")); + sas_ha->notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD); + break; + case HW_EVENT_PHY_DOWN: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PHY_DOWN \n")); + sas_ha->notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL); + phy->phy_attached = 0; + phy->phy_state = 0; + hw_event_phy_down(pm8001_ha, piomb); + break; + case HW_EVENT_PORT_INVALID: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PORT_INVALID\n")); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + /* the broadcast change primitive received, tell the LIBSAS this event + to revalidate the sas domain*/ + case HW_EVENT_BROADCAST_CHANGE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_BROADCAST_CHANGE\n")); + pm8001_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_BROADCAST_CHANGE, + port_id, phy_id, 1, 0); + spin_lock_irqsave(&sas_phy->sas_prim_lock, flags); + sas_phy->sas_prim = HW_EVENT_BROADCAST_CHANGE; + spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags); + sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD); + break; + case HW_EVENT_PHY_ERROR: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PHY_ERROR\n")); + sas_phy_disconnected(&phy->sas_phy); + phy->phy_attached = 0; + sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR); + break; + case HW_EVENT_BROADCAST_EXP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_BROADCAST_EXP\n")); + spin_lock_irqsave(&sas_phy->sas_prim_lock, flags); + sas_phy->sas_prim = HW_EVENT_BROADCAST_EXP; + spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags); + sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD); + break; + case HW_EVENT_LINK_ERR_INVALID_DWORD: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_LINK_ERR_INVALID_DWORD\n")); + 
pm8001_hw_event_ack_req(pm8001_ha, 0, + HW_EVENT_LINK_ERR_INVALID_DWORD, port_id, phy_id, 0, 0); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_LINK_ERR_DISPARITY_ERROR: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_LINK_ERR_DISPARITY_ERROR\n")); + pm8001_hw_event_ack_req(pm8001_ha, 0, + HW_EVENT_LINK_ERR_DISPARITY_ERROR, + port_id, phy_id, 0, 0); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_LINK_ERR_CODE_VIOLATION: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_LINK_ERR_CODE_VIOLATION\n")); + pm8001_hw_event_ack_req(pm8001_ha, 0, + HW_EVENT_LINK_ERR_CODE_VIOLATION, + port_id, phy_id, 0, 0); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH\n")); + pm8001_hw_event_ack_req(pm8001_ha, 0, + HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH, + port_id, phy_id, 0, 0); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_MALFUNCTION: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_MALFUNCTION\n")); + break; + case HW_EVENT_BROADCAST_SES: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_BROADCAST_SES\n")); + spin_lock_irqsave(&sas_phy->sas_prim_lock, flags); + sas_phy->sas_prim = HW_EVENT_BROADCAST_SES; + spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags); + sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD); + break; + case HW_EVENT_INBOUND_CRC_ERROR: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_INBOUND_CRC_ERROR\n")); + pm8001_hw_event_ack_req(pm8001_ha, 0, + HW_EVENT_INBOUND_CRC_ERROR, + port_id, phy_id, 0, 0); + break; + case HW_EVENT_HARD_RESET_RECEIVED: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_HARD_RESET_RECEIVED\n")); + sas_ha->notify_port_event(sas_phy, PORTE_HARD_RESET); + break; + case HW_EVENT_ID_FRAME_TIMEOUT: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_ID_FRAME_TIMEOUT\n")); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_LINK_ERR_PHY_RESET_FAILED: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_LINK_ERR_PHY_RESET_FAILED \n")); + pm8001_hw_event_ack_req(pm8001_ha, 0, + HW_EVENT_LINK_ERR_PHY_RESET_FAILED, + port_id, phy_id, 0, 0); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_PORT_RESET_TIMER_TMO: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PORT_RESET_TIMER_TMO \n")); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_PORT_RECOVERY_TIMER_TMO: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PORT_RECOVERY_TIMER_TMO \n")); + sas_phy_disconnected(sas_phy); + phy->phy_attached = 0; + sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); + break; + case HW_EVENT_PORT_RECOVER: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PORT_RECOVER \n")); + break; + case HW_EVENT_PORT_RESET_COMPLETE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("HW_EVENT_PORT_RESET_COMPLETE \n")); + break; + case EVENT_BROADCAST_ASYNCH_EVENT: + PM8001_MSG_DBG(pm8001_ha, + 
pm8001_printk("EVENT_BROADCAST_ASYNCH_EVENT\n")); + break; + default: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("Unknown event type = %x\n", eventType)); + break; + } + return 0; +} + +/** + * process_one_iomb - process one outbound Queue memory block + * @pm8001_ha: our hba card information + * @piomb: IO message buffer + */ +static void process_one_iomb(struct pm8001_hba_info *pm8001_ha, void *piomb) +{ + u32 pHeader = (u32)*(u32 *)piomb; + u8 opc = (u8)((le32_to_cpu(pHeader)) & 0xFFF); + + PM8001_MSG_DBG(pm8001_ha, pm8001_printk("process_one_iomb:\n")); + + switch (opc) { + case OPC_OUB_ECHO: + PM8001_MSG_DBG(pm8001_ha, pm8001_printk("OPC_OUB_ECHO \n")); + break; + case OPC_OUB_HW_EVENT: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_HW_EVENT \n")); + mpi_hw_event(pm8001_ha, piomb); + break; + case OPC_OUB_SSP_COMP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SSP_COMP \n")); + mpi_ssp_completion(pm8001_ha, piomb); + break; + case OPC_OUB_SMP_COMP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SMP_COMP \n")); + mpi_smp_completion(pm8001_ha, piomb); + break; + case OPC_OUB_LOCAL_PHY_CNTRL: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_LOCAL_PHY_CNTRL\n")); + mpi_local_phy_ctl(pm8001_ha, piomb); + break; + case OPC_OUB_DEV_REGIST: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_DEV_REGIST \n")); + mpi_reg_resp(pm8001_ha, piomb); + break; + case OPC_OUB_DEREG_DEV: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("unresgister the deviece \n")); + mpi_dereg_resp(pm8001_ha, piomb); + break; + case OPC_OUB_GET_DEV_HANDLE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_GET_DEV_HANDLE \n")); + break; + case OPC_OUB_SATA_COMP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SATA_COMP \n")); + mpi_sata_completion(pm8001_ha, piomb); + break; + case OPC_OUB_SATA_EVENT: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SATA_EVENT \n")); + mpi_sata_event(pm8001_ha, piomb); + break; + case OPC_OUB_SSP_EVENT: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SSP_EVENT\n")); + mpi_ssp_event(pm8001_ha, piomb); + break; + case OPC_OUB_DEV_HANDLE_ARRIV: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_DEV_HANDLE_ARRIV\n")); + /*This is for target*/ + break; + case OPC_OUB_SSP_RECV_EVENT: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SSP_RECV_EVENT\n")); + /*This is for target*/ + break; + case OPC_OUB_DEV_INFO: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_DEV_INFO\n")); + break; + case OPC_OUB_FW_FLASH_UPDATE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_FW_FLASH_UPDATE\n")); + mpi_fw_flash_update_resp(pm8001_ha, piomb); + break; + case OPC_OUB_GPIO_RESPONSE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_GPIO_RESPONSE\n")); + break; + case OPC_OUB_GPIO_EVENT: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_GPIO_EVENT\n")); + break; + case OPC_OUB_GENERAL_EVENT: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_GENERAL_EVENT\n")); + mpi_general_event(pm8001_ha, piomb); + break; + case OPC_OUB_SSP_ABORT_RSP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SSP_ABORT_RSP\n")); + mpi_task_abort_resp(pm8001_ha, piomb); + break; + case OPC_OUB_SATA_ABORT_RSP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SATA_ABORT_RSP\n")); + mpi_task_abort_resp(pm8001_ha, piomb); + break; + case OPC_OUB_SAS_DIAG_MODE_START_END: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SAS_DIAG_MODE_START_END\n")); + break; + case OPC_OUB_SAS_DIAG_EXECUTE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SAS_DIAG_EXECUTE\n")); + 
break; + case OPC_OUB_GET_TIME_STAMP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_GET_TIME_STAMP\n")); + break; + case OPC_OUB_SAS_HW_EVENT_ACK: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SAS_HW_EVENT_ACK\n")); + break; + case OPC_OUB_PORT_CONTROL: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_PORT_CONTROL\n")); + break; + case OPC_OUB_SMP_ABORT_RSP: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SMP_ABORT_RSP\n")); + mpi_task_abort_resp(pm8001_ha, piomb); + break; + case OPC_OUB_GET_NVMD_DATA: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_GET_NVMD_DATA\n")); + mpi_get_nvmd_resp(pm8001_ha, piomb); + break; + case OPC_OUB_SET_NVMD_DATA: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SET_NVMD_DATA\n")); + mpi_set_nvmd_resp(pm8001_ha, piomb); + break; + case OPC_OUB_DEVICE_HANDLE_REMOVAL: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_DEVICE_HANDLE_REMOVAL\n")); + break; + case OPC_OUB_SET_DEVICE_STATE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SET_DEVICE_STATE\n")); + mpi_set_dev_state_resp(pm8001_ha, piomb); + break; + case OPC_OUB_GET_DEVICE_STATE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_GET_DEVICE_STATE\n")); + break; + case OPC_OUB_SET_DEV_INFO: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SET_DEV_INFO\n")); + break; + case OPC_OUB_SAS_RE_INITIALIZE: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("OPC_OUB_SAS_RE_INITIALIZE\n")); + break; + default: + PM8001_MSG_DBG(pm8001_ha, + pm8001_printk("Unknown outbound Queue IOMB OPC = %x\n", + opc)); + break; + } +} + +static int process_oq(struct pm8001_hba_info *pm8001_ha) +{ + struct outbound_queue_table *circularQ; + void *pMsg1 = NULL; + u8 bc = 0; + u32 ret = MPI_IO_STATUS_FAIL, processedMsgCount = 0; + + circularQ = &pm8001_ha->outbnd_q_tbl[0]; + do { + ret = mpi_msg_consume(pm8001_ha, circularQ, &pMsg1, &bc); + if (MPI_IO_STATUS_SUCCESS == ret) { + /* process the outbound message */ + process_one_iomb(pm8001_ha, (void *)((u8 *)pMsg1 - 4)); + /* free the message from the outbound circular buffer */ + mpi_msg_free_set(pm8001_ha, circularQ, bc); + processedMsgCount++; + } + if (MPI_IO_STATUS_BUSY == ret) { + u32 producer_idx; + /* Update the producer index from SPC */ + producer_idx = pm8001_read_32(circularQ->pi_virt); + circularQ->producer_index = cpu_to_le32(producer_idx); + if (circularQ->producer_index == + circularQ->consumer_idx) + /* OQ is empty */ + break; + } + } while (100 > processedMsgCount);/*end message processing if hit the + count*/ + return ret; +} + +/* PCI_DMA_... to our direction translation. */ +static const u8 data_dir_flags[] = { + [PCI_DMA_BIDIRECTIONAL] = DATA_DIR_BYRECIPIENT,/* UNSPECIFIED */ + [PCI_DMA_TODEVICE] = DATA_DIR_OUT,/* OUTBOUND */ + [PCI_DMA_FROMDEVICE] = DATA_DIR_IN,/* INBOUND */ + [PCI_DMA_NONE] = DATA_DIR_NONE,/* NO TRANSFER */ +}; +static void +pm8001_chip_make_sg(struct scatterlist *scatter, int nr, void *prd) +{ + int i; + struct scatterlist *sg; + struct pm8001_prd *buf_prd = prd; + + for_each_sg(scatter, sg, nr, i) { + buf_prd->addr = cpu_to_le64(sg_dma_address(sg)); + buf_prd->im_len.len = cpu_to_le32(sg_dma_len(sg)); + buf_prd->im_len.e = 0; + buf_prd++; + } +} + +static void build_smp_cmd(u32 deviceID, u32 hTag, struct smp_req *psmp_cmd) +{ + psmp_cmd->tag = cpu_to_le32(hTag); + psmp_cmd->device_id = cpu_to_le32(deviceID); + psmp_cmd->len_ip_ir = cpu_to_le32(1|(1 << 1)); +} + +/** + * pm8001_chip_smp_req - send a SMP task to FW + * @pm8001_ha: our hba card information. 
+ * @ccb: the ccb information this request used. + */ +static int pm8001_chip_smp_req(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb) +{ + int elem, rc; + struct sas_task *task = ccb->task; + struct domain_device *dev = task->dev; + struct pm8001_device *pm8001_dev = dev->lldd_dev; + struct scatterlist *sg_req, *sg_resp; + u32 req_len, resp_len; + struct smp_req smp_cmd; + u32 opc; + struct inbound_queue_table *circularQ; + + memset(&smp_cmd, 0, sizeof(smp_cmd)); + /* + * DMA-map SMP request, response buffers + */ + sg_req = &task->smp_task.smp_req; + elem = dma_map_sg(pm8001_ha->dev, sg_req, 1, PCI_DMA_TODEVICE); + if (!elem) + return -ENOMEM; + req_len = sg_dma_len(sg_req); + + sg_resp = &task->smp_task.smp_resp; + elem = dma_map_sg(pm8001_ha->dev, sg_resp, 1, PCI_DMA_FROMDEVICE); + if (!elem) { + rc = -ENOMEM; + goto err_out; + } + resp_len = sg_dma_len(sg_resp); + /* must be in dwords */ + if ((req_len & 0x3) || (resp_len & 0x3)) { + rc = -EINVAL; + goto err_out_2; + } + + opc = OPC_INB_SMP_REQUEST; + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + smp_cmd.tag = cpu_to_le32(ccb->ccb_tag); + smp_cmd.long_smp_req.long_req_addr = + cpu_to_le64((u64)sg_dma_address(&task->smp_task.smp_req)); + smp_cmd.long_smp_req.long_req_size = + cpu_to_le32((u32)sg_dma_len(&task->smp_task.smp_req)-4); + smp_cmd.long_smp_req.long_resp_addr = + cpu_to_le64((u64)sg_dma_address(&task->smp_task.smp_resp)); + smp_cmd.long_smp_req.long_resp_size = + cpu_to_le32((u32)sg_dma_len(&task->smp_task.smp_resp)-4); + build_smp_cmd(pm8001_dev->device_id, smp_cmd.tag, &smp_cmd); + mpi_build_cmd(pm8001_ha, circularQ, opc, (u32 *)&smp_cmd); + return 0; + +err_out_2: + dma_unmap_sg(pm8001_ha->dev, &ccb->task->smp_task.smp_resp, 1, + PCI_DMA_FROMDEVICE); +err_out: + dma_unmap_sg(pm8001_ha->dev, &ccb->task->smp_task.smp_req, 1, + PCI_DMA_TODEVICE); + return rc; +} + +/** + * pm8001_chip_ssp_io_req - send a SSP task to FW + * @pm8001_ha: our hba card information. + * @ccb: the ccb information this request used. 
+ */ +static int pm8001_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb) +{ + struct sas_task *task = ccb->task; + struct domain_device *dev = task->dev; + struct pm8001_device *pm8001_dev = dev->lldd_dev; + struct ssp_ini_io_start_req ssp_cmd; + u32 tag = ccb->ccb_tag; + __le64 phys_addr; + struct inbound_queue_table *circularQ; + u32 opc = OPC_INB_SSPINIIOSTART; + memset(&ssp_cmd, 0, sizeof(ssp_cmd)); + memcpy(ssp_cmd.ssp_iu.lun, task->ssp_task.LUN, 8); + ssp_cmd.dir_m_tlr = data_dir_flags[task->data_dir] << 8 | 0x0;/*0 for + SAS 1.1 compatible TLR*/ + ssp_cmd.data_len = cpu_to_le32(task->total_xfer_len); + ssp_cmd.device_id = cpu_to_le32(pm8001_dev->device_id); + ssp_cmd.tag = cpu_to_le32(tag); + if (task->ssp_task.enable_first_burst) + ssp_cmd.ssp_iu.efb_prio_attr |= 0x80; + ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_prio << 3); + ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_attr & 7); + memcpy(ssp_cmd.ssp_iu.cdb, task->ssp_task.cdb, 16); + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + + /* fill in PRD (scatter/gather) table, if any */ + if (task->num_scatter > 1) { + pm8001_chip_make_sg(task->scatter, ccb->n_elem, ccb->buf_prd); + phys_addr = cpu_to_le64(ccb->ccb_dma_handle + + offsetof(struct pm8001_ccb_info, buf_prd[0])); + ssp_cmd.addr_low = lower_32_bits(phys_addr); + ssp_cmd.addr_high = upper_32_bits(phys_addr); + ssp_cmd.esgl = cpu_to_le32(1<<31); + } else if (task->num_scatter == 1) { + __le64 dma_addr = cpu_to_le64(sg_dma_address(task->scatter)); + ssp_cmd.addr_low = lower_32_bits(dma_addr); + ssp_cmd.addr_high = upper_32_bits(dma_addr); + ssp_cmd.len = cpu_to_le32(task->total_xfer_len); + ssp_cmd.esgl = 0; + } else if (task->num_scatter == 0) { + ssp_cmd.addr_low = 0; + ssp_cmd.addr_high = 0; + ssp_cmd.len = cpu_to_le32(task->total_xfer_len); + ssp_cmd.esgl = 0; + } + mpi_build_cmd(pm8001_ha, circularQ, opc, &ssp_cmd); + return 0; +} + +static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb) +{ + struct sas_task *task = ccb->task; + struct domain_device *dev = task->dev; + struct pm8001_device *pm8001_ha_dev = dev->lldd_dev; + u32 tag = ccb->ccb_tag; + struct sata_start_req sata_cmd; + u32 hdr_tag, ncg_tag = 0; + __le64 phys_addr; + u32 ATAP = 0x0; + u32 dir; + struct inbound_queue_table *circularQ; + u32 opc = OPC_INB_SATA_HOST_OPSTART; + memset(&sata_cmd, 0, sizeof(sata_cmd)); + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + if (task->data_dir == PCI_DMA_NONE) { + ATAP = 0x04; /* no data*/ + PM8001_IO_DBG(pm8001_ha, pm8001_printk("no data \n")); + } else if (likely(!task->ata_task.device_control_reg_update)) { + if (task->ata_task.dma_xfer) { + ATAP = 0x06; /* DMA */ + PM8001_IO_DBG(pm8001_ha, pm8001_printk("DMA \n")); + } else { + ATAP = 0x05; /* PIO*/ + PM8001_IO_DBG(pm8001_ha, pm8001_printk("PIO \n")); + } + if (task->ata_task.use_ncq && + dev->sata_dev.command_set != ATAPI_COMMAND_SET) { + ATAP = 0x07; /* FPDMA */ + PM8001_IO_DBG(pm8001_ha, pm8001_printk("FPDMA \n")); + } + } + if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) + ncg_tag = cpu_to_le32(hdr_tag); + dir = data_dir_flags[task->data_dir] << 8; + sata_cmd.tag = cpu_to_le32(tag); + sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id); + sata_cmd.data_len = cpu_to_le32(task->total_xfer_len); + sata_cmd.ncqtag_atap_dir_m = + cpu_to_le32(((ncg_tag & 0xff)<<16)|((ATAP & 0x3f) << 10) | dir); + sata_cmd.sata_fis = task->ata_task.fis; + if (likely(!task->ata_task.device_control_reg_update)) + 
sata_cmd.sata_fis.flags |= 0x80; /* C=1: update ATA cmd reg */
+	sata_cmd.sata_fis.flags &= 0xF0; /* PM_PORT field shall be 0 */
+	/* fill in PRD (scatter/gather) table, if any */
+	if (task->num_scatter > 1) {
+		pm8001_chip_make_sg(task->scatter, ccb->n_elem, ccb->buf_prd);
+		phys_addr = cpu_to_le64(ccb->ccb_dma_handle +
+				offsetof(struct pm8001_ccb_info, buf_prd[0]));
+		sata_cmd.addr_low = lower_32_bits(phys_addr);
+		sata_cmd.addr_high = upper_32_bits(phys_addr);
+		sata_cmd.esgl = cpu_to_le32(1 << 31);
+	} else if (task->num_scatter == 1) {
+		__le64 dma_addr = cpu_to_le64(sg_dma_address(task->scatter));
+		sata_cmd.addr_low = lower_32_bits(dma_addr);
+		sata_cmd.addr_high = upper_32_bits(dma_addr);
+		sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+		sata_cmd.esgl = 0;
+	} else if (task->num_scatter == 0) {
+		sata_cmd.addr_low = 0;
+		sata_cmd.addr_high = 0;
+		sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+		sata_cmd.esgl = 0;
+	}
+	mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd);
+	return 0;
+}
+
+/**
+ * pm8001_chip_phy_start_req - start a phy via the PHY_START command
+ * @pm8001_ha: our hba card information.
+ * @phy_id: the phy id which we want to start up.
+ */
+static int
+pm8001_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+{
+	struct phy_start_req payload;
+	struct inbound_queue_table *circularQ;
+	u32 tag = 0x01;
+	u32 opcode = OPC_INB_PHYSTART;
+	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+	memset(&payload, 0, sizeof(payload));
+	payload.tag = cpu_to_le32(tag);
+	/*
+	 * [0:7]   PHY Identifier
+	 * [8:11]  link rate 1.5G, 3G, 6G
+	 * [12:13] link mode 01b SAS mode; 10b SATA mode; 11b both
+	 * [14]    0b disable spin up hold; 1b enable spin up hold
+	 */
+	payload.ase_sh_lm_slr_phyid = cpu_to_le32(SPINHOLD_DISABLE |
+		LINKMODE_AUTO | LINKRATE_15 |
+		LINKRATE_30 | LINKRATE_60 | phy_id);
+	payload.sas_identify.dev_type = SAS_END_DEV;
+	payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL;
+	memcpy(payload.sas_identify.sas_addr,
+		pm8001_ha->sas_addr, SAS_ADDR_SIZE);
+	payload.sas_identify.phy_id = phy_id;
+	mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload);
+	return 0;
+}
+
+/**
+ * pm8001_chip_phy_stop_req - stop a phy via the PHY_STOP command
+ * @pm8001_ha: our hba card information.
+ * @phy_id: the phy id which we want to stop.
+ */
+static int pm8001_chip_phy_stop_req(struct pm8001_hba_info *pm8001_ha,
+	u8 phy_id)
+{
+	struct phy_stop_req payload;
+	struct inbound_queue_table *circularQ;
+	u32 tag = 0x01;
+	u32 opcode = OPC_INB_PHYSTOP;
+	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+	memset(&payload, 0, sizeof(payload));
+	payload.tag = cpu_to_le32(tag);
+	payload.phy_id = cpu_to_le32(phy_id);
+	mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload);
+	return 0;
+}
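/*
 * Illustrative sketch, not part of the patch: how the ase_sh_lm_slr_phyid
 * word of the PHY_START request above could be assembled field by field,
 * following the bit layout documented inside pm8001_chip_phy_start_req().
 * The driver itself ORs pre-shifted constants (SPINHOLD_DISABLE,
 * LINKMODE_AUTO, LINKRATE_15/30/60) together with the phy id; the raw
 * shifts below are an assumption derived from that comment, for
 * illustration only.
 */
static inline u32 example_phy_start_word(u32 phy_id, u32 link_rates,
					 u32 link_mode, bool spinup_hold)
{
	return (phy_id & 0xFF) |		/* [0:7]   phy identifier    */
	       ((link_rates & 0xF) << 8) |	/* [8:11]  1.5G/3G/6G bits   */
	       ((link_mode & 0x3) << 12) |	/* [12:13] SAS/SATA/both     */
	       ((spinup_hold ? 1 : 0) << 14);	/* [14]    spin-up hold      */
}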
+
+/**
+ * see comments on mpi_reg_resp.
+ */
+static int pm8001_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
+	struct pm8001_device *pm8001_dev, u32 flag)
+{
+	struct reg_dev_req payload;
+	u32 opc;
+	u32 stp_sspsmp_sata = 0x4;
+	struct inbound_queue_table *circularQ;
+	u32 linkrate, phy_id;
+	u32 rc, tag = 0xdeadbeef;
+	struct pm8001_ccb_info *ccb;
+	u8 retryFlag = 0x1;
+	u16 firstBurstSize = 0;
+	u16 ITNT = 2000;
+	struct domain_device *dev = pm8001_dev->sas_device;
+	struct domain_device *parent_dev = dev->parent;
+	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+
+	memset(&payload, 0, sizeof(payload));
+	rc = pm8001_tag_alloc(pm8001_ha, &tag);
+	if (rc)
+		return rc;
+	ccb = &pm8001_ha->ccb_info[tag];
+	ccb->device = pm8001_dev;
+	ccb->ccb_tag = tag;
+	payload.tag = cpu_to_le32(tag);
+	if (flag == 1)
+		stp_sspsmp_sata = 0x02; /* direct attached sata */
+	else {
+		if (pm8001_dev->dev_type == SATA_DEV)
+			stp_sspsmp_sata = 0x00; /* stp */
+		else if (pm8001_dev->dev_type == SAS_END_DEV ||
+			pm8001_dev->dev_type == EDGE_DEV ||
+			pm8001_dev->dev_type == FANOUT_DEV)
+			stp_sspsmp_sata = 0x01; /* ssp or smp */
+	}
+	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+		phy_id = parent_dev->ex_dev.ex_phy->phy_id;
+	else
+		phy_id = pm8001_dev->attached_phy;
+	opc = OPC_INB_REG_DEV;
+	linkrate = (pm8001_dev->sas_device->linkrate < dev->port->linkrate) ?
+			pm8001_dev->sas_device->linkrate : dev->port->linkrate;
+	payload.phyid_portid =
+		cpu_to_le32(((pm8001_dev->sas_device->port->id) & 0x0F) |
+		((phy_id & 0x0F) << 4));
+	payload.dtype_dlr_retry = cpu_to_le32((retryFlag & 0x01) |
+		((linkrate & 0x0F) * 0x1000000) |
+		((stp_sspsmp_sata & 0x03) * 0x10000000));
+	payload.firstburstsize_ITNexustimeout =
+		cpu_to_le32(ITNT | (firstBurstSize * 0x10000));
+	memcpy(&payload.sas_addr_hi, pm8001_dev->sas_device->sas_addr,
+		SAS_ADDR_SIZE);
+	mpi_build_cmd(pm8001_ha, circularQ, opc, &payload);
+	return 0;
+}
+
+/**
+ * see comments on mpi_reg_resp.
+ */
+static int pm8001_chip_dereg_dev_req(struct pm8001_hba_info *pm8001_ha,
+	u32 device_id)
+{
+	struct dereg_dev_req payload;
+	u32 opc = OPC_INB_DEREG_DEV_HANDLE;
+	struct inbound_queue_table *circularQ;
+
+	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+	memset((u8 *)&payload, 0, sizeof(payload));
+	payload.tag = 1;
+	payload.device_id = cpu_to_le32(device_id);
+	PM8001_MSG_DBG(pm8001_ha,
+		pm8001_printk("unregister device, device_id = %d\n", device_id));
+	mpi_build_cmd(pm8001_ha, circularQ, opc, &payload);
+	return 0;
+}
+
+/**
+ * pm8001_chip_phy_ctl_req - support the local phy operation
+ * @pm8001_ha: our hba card information.
+ * @phyId: the phy id we want to operate on
+ * @phy_op: the phy operation to carry out
+ */
+static int pm8001_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
+	u32 phyId, u32 phy_op)
+{
+	struct local_phy_ctl_req payload;
+	struct inbound_queue_table *circularQ;
+	u32 opc = OPC_INB_LOCAL_PHY_CONTROL;
+	memset((u8 *)&payload, 0, sizeof(payload));
+	circularQ = &pm8001_ha->inbnd_q_tbl[0];
+	payload.tag = 1;
+	payload.phyop_phyid =
+		cpu_to_le32(((phy_op & 0xff) << 8) | (phyId & 0x0F));
+	mpi_build_cmd(pm8001_ha, circularQ, opc, &payload);
+	return 0;
+}
+
+static u32 pm8001_chip_is_our_interupt(struct pm8001_hba_info *pm8001_ha)
+{
+	u32 value;
+#ifdef PM8001_USE_MSIX
+	return 1;
+#endif
+	value = pm8001_cr32(pm8001_ha, 0, MSGU_ODR);
+	if (value)
+		return 1;
+	return 0;
+}
+
+/**
+ * pm8001_chip_isr - PM8001 isr handler.
+ * @pm8001_ha: our hba card information.
+ */ +static void +pm8001_chip_isr(struct pm8001_hba_info *pm8001_ha) +{ + pm8001_chip_interrupt_disable(pm8001_ha); + process_oq(pm8001_ha); + pm8001_chip_interrupt_enable(pm8001_ha); +} + +static int send_task_abort(struct pm8001_hba_info *pm8001_ha, u32 opc, + u32 dev_id, u8 flag, u32 task_tag, u32 cmd_tag) +{ + struct task_abort_req task_abort; + struct inbound_queue_table *circularQ; + + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + memset(&task_abort, 0, sizeof(task_abort)); + if (ABORT_SINGLE == (flag & ABORT_MASK)) { + task_abort.abort_all = 0; + task_abort.device_id = cpu_to_le32(dev_id); + task_abort.tag_to_abort = cpu_to_le32(task_tag); + task_abort.tag = cpu_to_le32(cmd_tag); + } else if (ABORT_ALL == (flag & ABORT_MASK)) { + task_abort.abort_all = cpu_to_le32(1); + task_abort.device_id = cpu_to_le32(dev_id); + task_abort.tag = cpu_to_le32(cmd_tag); + } + mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort); + return 0; +} + +/** + * pm8001_chip_abort_task - SAS abort task when error or exception happened. + * @pm8001_ha: our hba card information. + * @pm8001_dev: the device the task to abort runs on. + * @flag: the abort flag. + * @task_tag: the tag of the task we want to abort. + * @cmd_tag: the tag of this abort request itself. + */ +static int pm8001_chip_abort_task(struct pm8001_hba_info *pm8001_ha, + struct pm8001_device *pm8001_dev, u8 flag, u32 task_tag, u32 cmd_tag) +{ + u32 opc, device_id; + int rc = TMF_RESP_FUNC_FAILED; + PM8001_IO_DBG(pm8001_ha, pm8001_printk("Abort tag[%x]", task_tag)); + if (pm8001_dev->dev_type == SAS_END_DEV) + opc = OPC_INB_SSP_ABORT; + else if (pm8001_dev->dev_type == SATA_DEV) + opc = OPC_INB_SATA_ABORT; + else + opc = OPC_INB_SMP_ABORT;/* SMP */ + device_id = pm8001_dev->device_id; + rc = send_task_abort(pm8001_ha, opc, device_id, flag, + task_tag, cmd_tag); + if (rc != TMF_RESP_FUNC_COMPLETE) + PM8001_IO_DBG(pm8001_ha, pm8001_printk("rc= %d\n", rc)); + return rc; +} + +/** + * pm8001_chip_ssp_tm_req - build the task management command. + * @pm8001_ha: our hba card information. + * @ccb: the ccb information. + * @tmf: task management function.
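+ * + * An illustrative reading of the fields set below: relate_tag carries the + * tag of the task being managed (meaningful for functions such as ABORT + * TASK), tmf is the SSP task management function code, and the LUN is + * copied from the sas_task.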
+ */ +static int pm8001_chip_ssp_tm_req(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb, struct pm8001_tmf_task *tmf) +{ + struct sas_task *task = ccb->task; + struct domain_device *dev = task->dev; + struct pm8001_device *pm8001_dev = dev->lldd_dev; + u32 opc = OPC_INB_SSPINITMSTART; + struct inbound_queue_table *circularQ; + struct ssp_ini_tm_start_req sspTMCmd; + + memset(&sspTMCmd, 0, sizeof(sspTMCmd)); + sspTMCmd.device_id = cpu_to_le32(pm8001_dev->device_id); + sspTMCmd.relate_tag = cpu_to_le32(tmf->tag_of_task_to_be_managed); + sspTMCmd.tmf = cpu_to_le32(tmf->tmf); + sspTMCmd.ds_ads_m = cpu_to_le32(1 << 2); + memcpy(sspTMCmd.lun, task->ssp_task.LUN, 8); + sspTMCmd.tag = cpu_to_le32(ccb->ccb_tag); + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + mpi_build_cmd(pm8001_ha, circularQ, opc, &sspTMCmd); + return 0; +} + +static int pm8001_chip_get_nvmd_req(struct pm8001_hba_info *pm8001_ha, + void *payload) +{ + u32 opc = OPC_INB_GET_NVMD_DATA; + u32 nvmd_type; + u32 rc; + u32 tag; + struct pm8001_ccb_info *ccb; + struct inbound_queue_table *circularQ; + struct get_nvm_data_req nvmd_req; + struct fw_control_ex *fw_control_context; + struct pm8001_ioctl_payload *ioctl_payload = payload; + + nvmd_type = ioctl_payload->minor_function; + fw_control_context = kzalloc(sizeof(struct fw_control_ex), GFP_KERNEL); + fw_control_context->usrAddr = (u8 *)&ioctl_payload->func_specific[0]; + fw_control_context->len = ioctl_payload->length; + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + memset(&nvmd_req, 0, sizeof(nvmd_req)); + rc = pm8001_tag_alloc(pm8001_ha, &tag); + if (rc) + return rc; + ccb = &pm8001_ha->ccb_info[tag]; + ccb->ccb_tag = tag; + ccb->fw_control_context = fw_control_context; + nvmd_req.tag = cpu_to_le32(tag); + + switch (nvmd_type) { + case TWI_DEVICE: { + u32 twi_addr, twi_page_size; + twi_addr = 0xa8; + twi_page_size = 2; + + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | twi_addr << 16 | + twi_page_size << 8 | TWI_DEVICE); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + } + case C_SEEPROM: { + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | C_SEEPROM); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + } + case VPD_FLASH: { + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | VPD_FLASH); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + } + case EXPAN_ROM: { + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | EXPAN_ROM); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + } + default: + break; + } + mpi_build_cmd(pm8001_ha, circularQ, opc, &nvmd_req); + return 0; +} + +static int pm8001_chip_set_nvmd_req(struct pm8001_hba_info *pm8001_ha, + void *payload) +{ + u32 opc = OPC_INB_SET_NVMD_DATA; + u32 nvmd_type; + u32 rc; + u32 tag; + struct pm8001_ccb_info *ccb; + struct inbound_queue_table *circularQ; + struct 
set_nvm_data_req nvmd_req; + struct fw_control_ex *fw_control_context; + struct pm8001_ioctl_payload *ioctl_payload = payload; + + nvmd_type = ioctl_payload->minor_function; + fw_control_context = kzalloc(sizeof(struct fw_control_ex), GFP_KERNEL); + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + memcpy(pm8001_ha->memoryMap.region[NVMD].virt_ptr, + ioctl_payload->func_specific, + ioctl_payload->length); + memset(&nvmd_req, 0, sizeof(nvmd_req)); + rc = pm8001_tag_alloc(pm8001_ha, &tag); + if (rc) + return rc; + ccb = &pm8001_ha->ccb_info[tag]; + ccb->fw_control_context = fw_control_context; + ccb->ccb_tag = tag; + nvmd_req.tag = cpu_to_le32(tag); + switch (nvmd_type) { + case TWI_DEVICE: { + u32 twi_addr, twi_page_size; + twi_addr = 0xa8; + twi_page_size = 2; + nvmd_req.reserved[0] = cpu_to_le32(0xFEDCBA98); + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | twi_addr << 16 | + twi_page_size << 8 | TWI_DEVICE); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + } + case C_SEEPROM: + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | C_SEEPROM); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.reserved[0] = cpu_to_le32(0xFEDCBA98); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + case VPD_FLASH: + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | VPD_FLASH); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.reserved[0] = cpu_to_le32(0xFEDCBA98); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + case EXPAN_ROM: + nvmd_req.len_ir_vpdd = cpu_to_le32(IPMode | EXPAN_ROM); + nvmd_req.resp_len = cpu_to_le32(ioctl_payload->length); + nvmd_req.reserved[0] = cpu_to_le32(0xFEDCBA98); + nvmd_req.resp_addr_hi = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_hi); + nvmd_req.resp_addr_lo = + cpu_to_le32(pm8001_ha->memoryMap.region[NVMD].phys_addr_lo); + break; + default: + break; + } + mpi_build_cmd(pm8001_ha, circularQ, opc, &nvmd_req); + return 0; +} + +/** + * pm8001_chip_fw_flash_update_build - support the firmware update operation + * @pm8001_ha: our hba card information. 
+ * @fw_flash_updata_info: firmware flash update param + */ +static int +pm8001_chip_fw_flash_update_build(struct pm8001_hba_info *pm8001_ha, + void *fw_flash_updata_info, u32 tag) +{ + struct fw_flash_Update_req payload; + struct fw_flash_updata_info *info; + struct inbound_queue_table *circularQ; + u32 opc = OPC_INB_FW_FLASH_UPDATE; + + memset((u8 *)&payload, 0, sizeof(struct fw_flash_Update_req)); + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + info = fw_flash_updata_info; + payload.tag = cpu_to_le32(tag); + payload.cur_image_len = cpu_to_le32(info->cur_image_len); + payload.cur_image_offset = cpu_to_le32(info->cur_image_offset); + payload.total_image_len = cpu_to_le32(info->total_image_len); + payload.len = info->sgl.im_len.len ; + payload.sgl_addr_lo = lower_32_bits(info->sgl.addr); + payload.sgl_addr_hi = upper_32_bits(info->sgl.addr); + mpi_build_cmd(pm8001_ha, circularQ, opc, &payload); + return 0; +} + +static int +pm8001_chip_fw_flash_update_req(struct pm8001_hba_info *pm8001_ha, + void *payload) +{ + struct fw_flash_updata_info flash_update_info; + struct fw_control_info *fw_control; + struct fw_control_ex *fw_control_context; + u32 rc; + u32 tag; + struct pm8001_ccb_info *ccb; + void *buffer = NULL; + dma_addr_t phys_addr; + u32 phys_addr_hi; + u32 phys_addr_lo; + struct pm8001_ioctl_payload *ioctl_payload = payload; + + fw_control_context = kzalloc(sizeof(struct fw_control_ex), GFP_KERNEL); + fw_control = (struct fw_control_info *)&ioctl_payload->func_specific[0]; + if (fw_control->len != 0) { + if (pm8001_mem_alloc(pm8001_ha->pdev, + (void **)&buffer, + &phys_addr, + &phys_addr_hi, + &phys_addr_lo, + fw_control->len, 0) != 0) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Mem alloc failure\n")); + return -ENOMEM; + } + } + memset((void *)buffer, 0, fw_control->len); + memcpy((void *)buffer, fw_control->buffer, fw_control->len); + flash_update_info.sgl.addr = cpu_to_le64(phys_addr); + flash_update_info.sgl.im_len.len = cpu_to_le32(fw_control->len); + flash_update_info.sgl.im_len.e = 0; + flash_update_info.cur_image_offset = fw_control->offset; + flash_update_info.cur_image_len = fw_control->len; + flash_update_info.total_image_len = fw_control->size; + fw_control_context->fw_control = fw_control; + fw_control_context->virtAddr = buffer; + fw_control_context->len = fw_control->len; + rc = pm8001_tag_alloc(pm8001_ha, &tag); + if (rc) + return rc; + ccb = &pm8001_ha->ccb_info[tag]; + ccb->fw_control_context = fw_control_context; + ccb->ccb_tag = tag; + pm8001_chip_fw_flash_update_build(pm8001_ha, &flash_update_info, tag); + return 0; +} + +static int +pm8001_chip_set_dev_state_req(struct pm8001_hba_info *pm8001_ha, + struct pm8001_device *pm8001_dev, u32 state) +{ + struct set_dev_state_req payload; + struct inbound_queue_table *circularQ; + struct pm8001_ccb_info *ccb; + u32 rc; + u32 tag; + u32 opc = OPC_INB_SET_DEVICE_STATE; + memset((u8 *)&payload, 0, sizeof(payload)); + rc = pm8001_tag_alloc(pm8001_ha, &tag); + if (rc) + return -1; + ccb = &pm8001_ha->ccb_info[tag]; + ccb->ccb_tag = tag; + ccb->device = pm8001_dev; + circularQ = &pm8001_ha->inbnd_q_tbl[0]; + payload.tag = cpu_to_le32(tag); + payload.device_id = cpu_to_le32(pm8001_dev->device_id); + payload.nds = cpu_to_le32(state); + mpi_build_cmd(pm8001_ha, circularQ, opc, &payload); + return 0; + +} + +const struct pm8001_dispatch pm8001_8001_dispatch = { + .name = "pmc8001", + .chip_init = pm8001_chip_init, + .chip_soft_rst = pm8001_chip_soft_rst, + .chip_rst = pm8001_hw_chip_rst, + .chip_iounmap = pm8001_chip_iounmap, + 
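/* interrupt handling and outbound-queue processing hooks */ +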
.isr = pm8001_chip_isr, + .is_our_interupt = pm8001_chip_is_our_interupt, + .isr_process_oq = process_oq, + .interrupt_enable = pm8001_chip_interrupt_enable, + .interrupt_disable = pm8001_chip_interrupt_disable, + .make_prd = pm8001_chip_make_sg, + .smp_req = pm8001_chip_smp_req, + .ssp_io_req = pm8001_chip_ssp_io_req, + .sata_req = pm8001_chip_sata_req, + .phy_start_req = pm8001_chip_phy_start_req, + .phy_stop_req = pm8001_chip_phy_stop_req, + .reg_dev_req = pm8001_chip_reg_dev_req, + .dereg_dev_req = pm8001_chip_dereg_dev_req, + .phy_ctl_req = pm8001_chip_phy_ctl_req, + .task_abort = pm8001_chip_abort_task, + .ssp_tm_req = pm8001_chip_ssp_tm_req, + .get_nvmd_req = pm8001_chip_get_nvmd_req, + .set_nvmd_req = pm8001_chip_set_nvmd_req, + .fw_flash_update_req = pm8001_chip_fw_flash_update_req, + .set_dev_state_req = pm8001_chip_set_dev_state_req, +}; + diff --git a/drivers/scsi/pm8001/pm8001_hwi.h b/drivers/scsi/pm8001/pm8001_hwi.h new file mode 100644 index 000000000000..3690a2ba0eb2 --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_hwi.h @@ -0,0 +1,1011 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. + * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. 
+ * + */ +#ifndef _PMC8001_REG_H_ +#define _PMC8001_REG_H_ + +#include <linux/types.h> +#include <scsi/libsas.h> + + +/* for Request Opcode of IOMB */ +#define OPC_INB_ECHO 1 /* 0x000 */ +#define OPC_INB_PHYSTART 4 /* 0x004 */ +#define OPC_INB_PHYSTOP 5 /* 0x005 */ +#define OPC_INB_SSPINIIOSTART 6 /* 0x006 */ +#define OPC_INB_SSPINITMSTART 7 /* 0x007 */ +#define OPC_INB_SSPINIEXTIOSTART 8 /* 0x008 */ +#define OPC_INB_DEV_HANDLE_ACCEPT 9 /* 0x009 */ +#define OPC_INB_SSPTGTIOSTART 10 /* 0x00A */ +#define OPC_INB_SSPTGTRSPSTART 11 /* 0x00B */ +#define OPC_INB_SSPINIEDCIOSTART 12 /* 0x00C */ +#define OPC_INB_SSPINIEXTEDCIOSTART 13 /* 0x00D */ +#define OPC_INB_SSPTGTEDCIOSTART 14 /* 0x00E */ +#define OPC_INB_SSP_ABORT 15 /* 0x00F */ +#define OPC_INB_DEREG_DEV_HANDLE 16 /* 0x010 */ +#define OPC_INB_GET_DEV_HANDLE 17 /* 0x011 */ +#define OPC_INB_SMP_REQUEST 18 /* 0x012 */ +/* SMP_RESPONSE is removed */ +#define OPC_INB_SMP_RESPONSE 19 /* 0x013 */ +#define OPC_INB_SMP_ABORT 20 /* 0x014 */ +#define OPC_INB_REG_DEV 22 /* 0x016 */ +#define OPC_INB_SATA_HOST_OPSTART 23 /* 0x017 */ +#define OPC_INB_SATA_ABORT 24 /* 0x018 */ +#define OPC_INB_LOCAL_PHY_CONTROL 25 /* 0x019 */ +#define OPC_INB_GET_DEV_INFO 26 /* 0x01A */ +#define OPC_INB_FW_FLASH_UPDATE 32 /* 0x020 */ +#define OPC_INB_GPIO 34 /* 0x022 */ +#define OPC_INB_SAS_DIAG_MODE_START_END 35 /* 0x023 */ +#define OPC_INB_SAS_DIAG_EXECUTE 36 /* 0x024 */ +#define OPC_INB_SAS_HW_EVENT_ACK 37 /* 0x025 */ +#define OPC_INB_GET_TIME_STAMP 38 /* 0x026 */ +#define OPC_INB_PORT_CONTROL 39 /* 0x027 */ +#define OPC_INB_GET_NVMD_DATA 40 /* 0x028 */ +#define OPC_INB_SET_NVMD_DATA 41 /* 0x029 */ +#define OPC_INB_SET_DEVICE_STATE 42 /* 0x02A */ +#define OPC_INB_GET_DEVICE_STATE 43 /* 0x02B */ +#define OPC_INB_SET_DEV_INFO 44 /* 0x02C */ +#define OPC_INB_SAS_RE_INITIALIZE 45 /* 0x02D */ + +/* for Response Opcode of IOMB */ +#define OPC_OUB_ECHO 1 /* 0x001 */ +#define OPC_OUB_HW_EVENT 4 /* 0x004 */ +#define OPC_OUB_SSP_COMP 5 /* 0x005 */ +#define OPC_OUB_SMP_COMP 6 /* 0x006 */ +#define OPC_OUB_LOCAL_PHY_CNTRL 7 /* 0x007 */ +#define OPC_OUB_DEV_REGIST 10 /* 0x00A */ +#define OPC_OUB_DEREG_DEV 11 /* 0x00B */ +#define OPC_OUB_GET_DEV_HANDLE 12 /* 0x00C */ +#define OPC_OUB_SATA_COMP 13 /* 0x00D */ +#define OPC_OUB_SATA_EVENT 14 /* 0x00E */ +#define OPC_OUB_SSP_EVENT 15 /* 0x00F */ +#define OPC_OUB_DEV_HANDLE_ARRIV 16 /* 0x010 */ +/* SMP_RECEIVED Notification is removed */ +#define OPC_OUB_SMP_RECV_EVENT 17 /* 0x011 */ +#define OPC_OUB_SSP_RECV_EVENT 18 /* 0x012 */ +#define OPC_OUB_DEV_INFO 19 /* 0x013 */ +#define OPC_OUB_FW_FLASH_UPDATE 20 /* 0x014 */ +#define OPC_OUB_GPIO_RESPONSE 22 /* 0x016 */ +#define OPC_OUB_GPIO_EVENT 23 /* 0x017 */ +#define OPC_OUB_GENERAL_EVENT 24 /* 0x018 */ +#define OPC_OUB_SSP_ABORT_RSP 26 /* 0x01A */ +#define OPC_OUB_SATA_ABORT_RSP 27 /* 0x01B */ +#define OPC_OUB_SAS_DIAG_MODE_START_END 28 /* 0x01C */ +#define OPC_OUB_SAS_DIAG_EXECUTE 29 /* 0x01D */ +#define OPC_OUB_GET_TIME_STAMP 30 /* 0x01E */ +#define OPC_OUB_SAS_HW_EVENT_ACK 31 /* 0x01F */ +#define OPC_OUB_PORT_CONTROL 32 /* 0x020 */ +#define OPC_OUB_SKIP_ENTRY 33 /* 0x021 */ +#define OPC_OUB_SMP_ABORT_RSP 34 /* 0x022 */ +#define OPC_OUB_GET_NVMD_DATA 35 /* 0x023 */ +#define OPC_OUB_SET_NVMD_DATA 36 /* 0x024 */ +#define OPC_OUB_DEVICE_HANDLE_REMOVAL 37 /* 0x025 */ +#define OPC_OUB_SET_DEVICE_STATE 38 /* 0x026 */ +#define OPC_OUB_GET_DEVICE_STATE 39 /* 0x027 */ +#define OPC_OUB_SET_DEV_INFO 40 /* 0x028 */ +#define OPC_OUB_SAS_RE_INITIALIZE 41 /* 0x029 */ + +/* for phy start*/ +#define SPINHOLD_DISABLE (0x00 << 14) +#define
SPINHOLD_ENABLE (0x01 << 14) +#define LINKMODE_SAS (0x01 << 12) +#define LINKMODE_DSATA (0x02 << 12) +#define LINKMODE_AUTO (0x03 << 12) +#define LINKRATE_15 (0x01 << 8) +#define LINKRATE_30 (0x02 << 8) +#define LINKRATE_60 (0x04 << 8) + +struct mpi_msg_hdr{ + __le32 header; /* Bits [11:0] - Message operation code */ + /* Bits [15:12] - Message Category */ + /* Bits [21:16] - outbound queue ID for the + operation completion message */ + /* Bits [23:22] - Reserved */ + /* Bits [28:24] - Buffer Count, indicates how + many buffers are allocated for the message */ + /* Bits [30:29] - Reserved */ + /* Bits [31] - Message Valid bit */ +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of PHY Start Command + * use to enable the phy (64 bytes) + */ +struct phy_start_req { + __le32 tag; + __le32 ase_sh_lm_slr_phyid; + struct sas_identify_frame sas_identify; + u32 reserved[5]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of PHY Stop Command + * use to disable the phy (64 bytes) + */ +struct phy_stop_req { + __le32 tag; + __le32 phy_id; + u32 reserved[13]; +} __attribute__((packed, aligned(4))); + + +/* set device bits fis - device to host */ +struct set_dev_bits_fis { + u8 fis_type; /* 0xA1*/ + u8 n_i_pmport; + /* b7 : n Bit. Notification bit. If set device needs attention. */ + /* b6 : i Bit. Interrupt Bit */ + /* b5-b4: reserved2 */ + /* b3-b0: PM Port */ + u8 status; + u8 error; + u32 _r_a; +} __attribute__ ((packed)); +/* PIO setup FIS - device to host */ +struct pio_setup_fis { + u8 fis_type; /* 0x5f */ + u8 i_d_pmPort; + /* b7 : reserved */ + /* b6 : i bit. Interrupt bit */ + /* b5 : d bit. data transfer direction. set to 1 for device to host + xfer */ + /* b4 : reserved */ + /* b3-b0: PM Port */ + u8 status; + u8 error; + u8 lbal; + u8 lbam; + u8 lbah; + u8 device; + u8 lbal_exp; + u8 lbam_exp; + u8 lbah_exp; + u8 _r_a; + u8 sector_count; + u8 sector_count_exp; + u8 _r_b; + u8 e_status; + u8 _r_c[2]; + u8 transfer_count; +} __attribute__ ((packed)); + +/* + * brief the data structure of SATA Completion Response + * use to describe the sata task response (64 bytes) + */ +struct sata_completion_resp { + __le32 tag; + __le32 status; + __le32 param; + u32 sata_resp[12]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of SAS HW Event Notification + * use to alert the host about the hardware event (64 bytes) + */ +struct hw_event_resp { + __le32 lr_evt_status_phyid_portid; + __le32 evt_param; + __le32 npip_portstate; + struct sas_identify_frame sas_identify; + struct dev_to_host_fis sata_fis; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of REGISTER DEVICE Command + * use to describe MPI REGISTER DEVICE Command (64 bytes) + */ + +struct reg_dev_req { + __le32 tag; + __le32 phyid_portid; + __le32 dtype_dlr_retry; + __le32 firstburstsize_ITNexustimeout; + u32 sas_addr_hi; + u32 sas_addr_low; + __le32 upper_device_id; + u32 reserved[8]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of DEREGISTER DEVICE Command + * use to request spc to remove all internal resources associated + * with the device id (64 bytes) + */ + +struct dereg_dev_req { + __le32 tag; + __le32 device_id; + u32 reserved[13]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of DEVICE_REGISTRATION Response + * use to notify the completion of the device registration (64 bytes) + */ + +struct dev_reg_resp { + __le32 tag; + __le32 status; + __le32 device_id;
+ u32 reserved[12]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of Local PHY Control Command + * use to issue PHY CONTROL to local phy (64 bytes) + */ +struct local_phy_ctl_req { + __le32 tag; + __le32 phyop_phyid; + u32 reserved1[13]; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of Local Phy Control Response + * use to describe MPI Local Phy Control Response (64 bytes) + */ +struct local_phy_ctl_resp { + __le32 tag; + __le32 phyop_phyid; + __le32 status; + u32 reserved[12]; +} __attribute__((packed, aligned(4))); + + +#define OP_BITS 0x0000FF00 +#define ID_BITS 0x0000000F + +/* + * brief the data structure of PORT Control Command + * use to control port properties (64 bytes) + */ + +struct port_ctl_req { + __le32 tag; + __le32 portop_portid; + __le32 param0; + __le32 param1; + u32 reserved1[11]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of HW Event Ack Command + * use to acknowledge a received HW event (64 bytes) + */ + +struct hw_event_ack_req { + __le32 tag; + __le32 sea_phyid_portid; + __le32 param0; + __le32 param1; + u32 reserved1[11]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of SSP Completion Response + * use to indicate a SSP Completion (n bytes) + */ +struct ssp_completion_resp { + __le32 tag; + __le32 status; + __le32 param; + __le32 ssptag_rescv_rescpad; + struct ssp_response_iu ssp_resp_iu; + __le32 residual_count; +} __attribute__((packed, aligned(4))); + + +#define SSP_RESCV_BIT 0x00010000 + +/* + * brief the data structure of SATA EVENT Response + * use to indicate a SATA Completion (64 bytes) + */ + +struct sata_event_resp { + __le32 tag; + __le32 event; + __le32 port_id; + __le32 device_id; + u32 reserved[11]; +} __attribute__((packed, aligned(4))); + +/* + * brief the data structure of SSP EVENT Response + * use to indicate a SSP Completion (64 bytes) + */ + +struct ssp_event_resp { + __le32 tag; + __le32 event; + __le32 port_id; + __le32 device_id; + u32 reserved[11]; +} __attribute__((packed, aligned(4))); + +/** + * brief the data structure of General Event Notification Response + * use to describe MPI General Event Notification Response (64 bytes) + */ +struct general_event_resp { + __le32 status; + __le32 inb_IOMB_payload[14]; +} __attribute__((packed, aligned(4))); + + +#define GENERAL_EVENT_PAYLOAD 14 +#define OPCODE_BITS 0x00000fff + +/* + * brief the data structure of SMP Request Command + * use to describe MPI SMP REQUEST Command (64 bytes) + */ +struct smp_req { + __le32 tag; + __le32 device_id; + __le32 len_ip_ir; + /* Bits [0] - Indirect response */ + /* Bits [1] - Indirect Payload */ + /* Bits [15:2] - Reserved */ + /* Bits [23:16] - direct payload Len */ + /* Bits [31:24] - Reserved */ + u8 smp_req16[16]; + union { + u8 smp_req[32]; + struct { + __le64 long_req_addr;/* sg dma address, LE */ + __le32 long_req_size;/* LE */ + u32 _r_a; + __le64 long_resp_addr;/* sg dma address, LE */ + __le32 long_resp_size;/* LE */ + u32 _r_b; + } long_smp_req;/* sequencer extension */ + }; +} __attribute__((packed, aligned(4))); +/* + * brief the data structure of SMP Completion Response + * use to describe MPI SMP Completion Response (64 bytes) + */ +struct smp_completion_resp { + __le32 tag; + __le32 status; + __le32 param; + __le32 _r_a[12]; +} __attribute__((packed, aligned(4))); + +/* + * brief the data structure of SSP SMP SATA Abort Command + * use to describe MPI SSP SMP & SATA Abort Command (64 bytes) + */ +struct task_abort_req { +
__le32 tag; + __le32 device_id; + __le32 tag_to_abort; + __le32 abort_all; + u32 reserved[11]; +} __attribute__((packed, aligned(4))); + +/* These flags used for SSP SMP & SATA Abort */ +#define ABORT_MASK 0x3 +#define ABORT_SINGLE 0x0 +#define ABORT_ALL 0x1 + +/** + * brief the data structure of SSP SATA SMP Abort Response + * use to describe SSP SMP & SATA Abort Response ( 64 bytes) + */ +struct task_abort_resp { + __le32 tag; + __le32 status; + __le32 scp; + u32 reserved[12]; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of SAS Diagnostic Start/End Command + * use to describe MPI SAS Diagnostic Start/End Command (64 bytes) + */ +struct sas_diag_start_end_req { + __le32 tag; + __le32 operation_phyid; + u32 reserved[13]; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of SAS Diagnostic Execute Command + * use to describe MPI SAS Diagnostic Execute Command (64 bytes) + */ +struct sas_diag_execute_req{ + __le32 tag; + __le32 cmdtype_cmddesc_phyid; + __le32 pat1_pat2; + __le32 threshold; + __le32 codepat_errmsk; + __le32 pmon; + __le32 pERF1CTL; + u32 reserved[8]; +} __attribute__((packed, aligned(4))); + + +#define SAS_DIAG_PARAM_BYTES 24 + +/* + * brief the data structure of Set Device State Command + * use to describe MPI Set Device State Command (64 bytes) + */ +struct set_dev_state_req { + __le32 tag; + __le32 device_id; + __le32 nds; + u32 reserved[12]; +} __attribute__((packed, aligned(4))); + + +/* + * brief the data structure of SATA Start Command + * use to describe MPI SATA IO Start Command (64 bytes) + */ + +struct sata_start_req { + __le32 tag; + __le32 device_id; + __le32 data_len; + __le32 ncqtag_atap_dir_m; + struct host_to_dev_fis sata_fis; + u32 reserved1; + u32 reserved2; + u32 addr_low; + u32 addr_high; + __le32 len; + __le32 esgl; +} __attribute__((packed, aligned(4))); + +/** + * brief the data structure of SSP INI TM Start Command + * use to describe MPI SSP INI TM Start Command (64 bytes) + */ +struct ssp_ini_tm_start_req { + __le32 tag; + __le32 device_id; + __le32 relate_tag; + __le32 tmf; + u8 lun[8]; + __le32 ds_ads_m; + u32 reserved[8]; +} __attribute__((packed, aligned(4))); + + +struct ssp_info_unit { + u8 lun[8];/* SCSI Logical Unit Number */ + u8 reserved1;/* reserved */ + u8 efb_prio_attr; + /* B7 : enabledFirstBurst */ + /* B6-3 : taskPriority */ + /* B2-0 : taskAttribute */ + u8 reserved2; /* reserved */ + u8 additional_cdb_len; + /* B7-2 : additional_cdb_len */ + /* B1-0 : reserved */ + u8 cdb[16];/* The SCSI CDB up to 16 bytes length */ +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of SSP INI IO Start Command + * use to describe MPI SSP INI IO Start Command (64 bytes) + */ +struct ssp_ini_io_start_req { + __le32 tag; + __le32 device_id; + __le32 data_len; + __le32 dir_m_tlr; + struct ssp_info_unit ssp_iu; + __le32 addr_low; + __le32 addr_high; + __le32 len; + __le32 esgl; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of Firmware download + * use to describe MPI FW DOWNLOAD Command (64 bytes) + */ +struct fw_flash_Update_req { + __le32 tag; + __le32 cur_image_offset; + __le32 cur_image_len; + __le32 total_image_len; + u32 reserved0[7]; + __le32 sgl_addr_lo; + __le32 sgl_addr_hi; + __le32 len; + __le32 ext_reserved; +} __attribute__((packed, aligned(4))); + + +#define FWFLASH_IOMB_RESERVED_LEN 0x07 +/** + * brief the data structure of FW_FLASH_UPDATE Response + * use to describe MPI FW_FLASH_UPDATE Response (64 bytes) + * + */ 
+struct fw_flash_Update_resp { + dma_addr_t tag; + __le32 status; + u32 reserved[13]; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of Get NVM Data Command + * use to get data from NVM in HBA(64 bytes) + */ +struct get_nvm_data_req { + __le32 tag; + __le32 len_ir_vpdd; + __le32 vpd_offset; + u32 reserved[8]; + __le32 resp_addr_lo; + __le32 resp_addr_hi; + __le32 resp_len; + u32 reserved1; +} __attribute__((packed, aligned(4))); + + +struct set_nvm_data_req { + __le32 tag; + __le32 len_ir_vpdd; + __le32 vpd_offset; + u32 reserved[8]; + __le32 resp_addr_lo; + __le32 resp_addr_hi; + __le32 resp_len; + u32 reserved1; +} __attribute__((packed, aligned(4))); + + +#define TWI_DEVICE 0x0 +#define C_SEEPROM 0x1 +#define VPD_FLASH 0x4 +#define AAP1_RDUMP 0x5 +#define IOP_RDUMP 0x6 +#define EXPAN_ROM 0x7 + +#define IPMode 0x80000000 +#define NVMD_TYPE 0x0000000F +#define NVMD_STAT 0x0000FFFF +#define NVMD_LEN 0xFF000000 +/** + * brief the data structure of Get NVMD Data Response + * use to describe MPI Get NVMD Data Response (64 bytes) + */ +struct get_nvm_data_resp { + __le32 tag; + __le32 ir_tda_bn_dps_das_nvm; + __le32 dlen_status; + __le32 nvm_data[12]; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of SAS Diagnostic Start/End Response + * use to describe MPI SAS Diagnostic Start/End Response (64 bytes) + * + */ +struct sas_diag_start_end_resp { + __le32 tag; + __le32 status; + u32 reserved[13]; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of SAS Diagnostic Execute Response + * use to describe MPI SAS Diagnostic Execute Response (64 bytes) + * + */ +struct sas_diag_execute_resp { + __le32 tag; + __le32 cmdtype_cmddesc_phyid; + __le32 Status; + __le32 ReportData; + u32 reserved[11]; +} __attribute__((packed, aligned(4))); + + +/** + * brief the data structure of Set Device State Response + * use to describe MPI Set Device State Response (64 bytes) + * + */ +struct set_dev_state_resp { + __le32 tag; + __le32 status; + __le32 device_id; + __le32 pds_nds; + u32 reserved[11]; +} __attribute__((packed, aligned(4))); + + +#define NDS_BITS 0x0F +#define PDS_BITS 0xF0 + +/* + * HW Events type + */ + +#define HW_EVENT_RESET_START 0x01 +#define HW_EVENT_CHIP_RESET_COMPLETE 0x02 +#define HW_EVENT_PHY_STOP_STATUS 0x03 +#define HW_EVENT_SAS_PHY_UP 0x04 +#define HW_EVENT_SATA_PHY_UP 0x05 +#define HW_EVENT_SATA_SPINUP_HOLD 0x06 +#define HW_EVENT_PHY_DOWN 0x07 +#define HW_EVENT_PORT_INVALID 0x08 +#define HW_EVENT_BROADCAST_CHANGE 0x09 +#define HW_EVENT_PHY_ERROR 0x0A +#define HW_EVENT_BROADCAST_SES 0x0B +#define HW_EVENT_INBOUND_CRC_ERROR 0x0C +#define HW_EVENT_HARD_RESET_RECEIVED 0x0D +#define HW_EVENT_MALFUNCTION 0x0E +#define HW_EVENT_ID_FRAME_TIMEOUT 0x0F +#define HW_EVENT_BROADCAST_EXP 0x10 +#define HW_EVENT_PHY_START_STATUS 0x11 +#define HW_EVENT_LINK_ERR_INVALID_DWORD 0x12 +#define HW_EVENT_LINK_ERR_DISPARITY_ERROR 0x13 +#define HW_EVENT_LINK_ERR_CODE_VIOLATION 0x14 +#define HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH 0x15 +#define HW_EVENT_LINK_ERR_PHY_RESET_FAILED 0x16 +#define HW_EVENT_PORT_RECOVERY_TIMER_TMO 0x17 +#define HW_EVENT_PORT_RECOVER 0x18 +#define HW_EVENT_PORT_RESET_TIMER_TMO 0x19 +#define HW_EVENT_PORT_RESET_COMPLETE 0x20 +#define EVENT_BROADCAST_ASYNCH_EVENT 0x21 + +/* port state */ +#define PORT_NOT_ESTABLISHED 0x00 +#define PORT_VALID 0x01 +#define PORT_LOSTCOMM 0x02 +#define PORT_IN_RESET 0x04 +#define PORT_INVALID 0x08 + +/* + * SSP/SMP/SATA IO Completion Status values + */ + +#define IO_SUCCESS 
0x00 +#define IO_ABORTED 0x01 +#define IO_OVERFLOW 0x02 +#define IO_UNDERFLOW 0x03 +#define IO_FAILED 0x04 +#define IO_ABORT_RESET 0x05 +#define IO_NOT_VALID 0x06 +#define IO_NO_DEVICE 0x07 +#define IO_ILLEGAL_PARAMETER 0x08 +#define IO_LINK_FAILURE 0x09 +#define IO_PROG_ERROR 0x0A +#define IO_EDC_IN_ERROR 0x0B +#define IO_EDC_OUT_ERROR 0x0C +#define IO_ERROR_HW_TIMEOUT 0x0D +#define IO_XFER_ERROR_BREAK 0x0E +#define IO_XFER_ERROR_PHY_NOT_READY 0x0F +#define IO_OPEN_CNX_ERROR_PROTOCOL_NOT_SUPPORTED 0x10 +#define IO_OPEN_CNX_ERROR_ZONE_VIOLATION 0x11 +#define IO_OPEN_CNX_ERROR_BREAK 0x12 +#define IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS 0x13 +#define IO_OPEN_CNX_ERROR_BAD_DESTINATION 0x14 +#define IO_OPEN_CNX_ERROR_CONNECTION_RATE_NOT_SUPPORTED 0x15 +#define IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY 0x16 +#define IO_OPEN_CNX_ERROR_WRONG_DESTINATION 0x17 +#define IO_OPEN_CNX_ERROR_UNKNOWN_ERROR 0x18 +#define IO_XFER_ERROR_NAK_RECEIVED 0x19 +#define IO_XFER_ERROR_ACK_NAK_TIMEOUT 0x1A +#define IO_XFER_ERROR_PEER_ABORTED 0x1B +#define IO_XFER_ERROR_RX_FRAME 0x1C +#define IO_XFER_ERROR_DMA 0x1D +#define IO_XFER_ERROR_CREDIT_TIMEOUT 0x1E +#define IO_XFER_ERROR_SATA_LINK_TIMEOUT 0x1F +#define IO_XFER_ERROR_SATA 0x20 +#define IO_XFER_ERROR_ABORTED_DUE_TO_SRST 0x22 +#define IO_XFER_ERROR_REJECTED_NCQ_MODE 0x21 +#define IO_XFER_ERROR_ABORTED_NCQ_MODE 0x23 +#define IO_XFER_OPEN_RETRY_TIMEOUT 0x24 +#define IO_XFER_SMP_RESP_CONNECTION_ERROR 0x25 +#define IO_XFER_ERROR_UNEXPECTED_PHASE 0x26 +#define IO_XFER_ERROR_XFER_RDY_OVERRUN 0x27 +#define IO_XFER_ERROR_XFER_RDY_NOT_EXPECTED 0x28 + +#define IO_XFER_ERROR_CMD_ISSUE_ACK_NAK_TIMEOUT 0x30 +#define IO_XFER_ERROR_CMD_ISSUE_BREAK_BEFORE_ACK_NAK 0x31 +#define IO_XFER_ERROR_CMD_ISSUE_PHY_DOWN_BEFORE_ACK_NAK 0x32 + +#define IO_XFER_ERROR_OFFSET_MISMATCH 0x34 +#define IO_XFER_ERROR_XFER_ZERO_DATA_LEN 0x35 +#define IO_XFER_CMD_FRAME_ISSUED 0x36 +#define IO_ERROR_INTERNAL_SMP_RESOURCE 0x37 +#define IO_PORT_IN_RESET 0x38 +#define IO_DS_NON_OPERATIONAL 0x39 +#define IO_DS_IN_RECOVERY 0x3A +#define IO_TM_TAG_NOT_FOUND 0x3B +#define IO_XFER_PIO_SETUP_ERROR 0x3C +#define IO_SSP_EXT_IU_ZERO_LEN_ERROR 0x3D +#define IO_DS_IN_ERROR 0x3E +#define IO_OPEN_CNX_ERROR_HW_RESOURCE_BUSY 0x3F +#define IO_ABORT_IN_PROGRESS 0x40 +#define IO_ABORT_DELAYED 0x41 +#define IO_INVALID_LENGTH 0x42 + +/* WARNING: This error code must always be the last number. 
+ * If you add an error code, update this value as well, + * since it is used as an index. + */ +#define IO_ERROR_UNKNOWN_GENERIC 0x43 + +/* MSGU CONFIGURATION TABLE*/ + +#define SPC_MSGU_CFG_TABLE_UPDATE 0x01/* Inbound doorbell bit0 */ +#define SPC_MSGU_CFG_TABLE_RESET 0x02/* Inbound doorbell bit1 */ +#define SPC_MSGU_CFG_TABLE_FREEZE 0x04/* Inbound doorbell bit2 */ +#define SPC_MSGU_CFG_TABLE_UNFREEZE 0x08/* Inbound doorbell bit3 */ +#define MSGU_IBDB_SET 0x04 +#define MSGU_HOST_INT_STATUS 0x08 +#define MSGU_HOST_INT_MASK 0x0C +#define MSGU_IOPIB_INT_STATUS 0x18 +#define MSGU_IOPIB_INT_MASK 0x1C +#define MSGU_IBDB_CLEAR 0x20/* RevB - Host not use */ +#define MSGU_MSGU_CONTROL 0x24 +#define MSGU_ODR 0x3C/* RevB */ +#define MSGU_ODCR 0x40/* RevB */ +#define MSGU_SCRATCH_PAD_0 0x44 +#define MSGU_SCRATCH_PAD_1 0x48 +#define MSGU_SCRATCH_PAD_2 0x4C +#define MSGU_SCRATCH_PAD_3 0x50 +#define MSGU_HOST_SCRATCH_PAD_0 0x54 +#define MSGU_HOST_SCRATCH_PAD_1 0x58 +#define MSGU_HOST_SCRATCH_PAD_2 0x5C +#define MSGU_HOST_SCRATCH_PAD_3 0x60 +#define MSGU_HOST_SCRATCH_PAD_4 0x64 +#define MSGU_HOST_SCRATCH_PAD_5 0x68 +#define MSGU_HOST_SCRATCH_PAD_6 0x6C +#define MSGU_HOST_SCRATCH_PAD_7 0x70 +#define MSGU_ODMR 0x74/* RevB */ + +/* bit definition for ODMR register */ +#define ODMR_MASK_ALL 0xFFFFFFFF/* mask all + interrupt vector */ +#define ODMR_CLEAR_ALL 0/* clear all + interrupt vector */ +/* bit definition for ODCR register */ +#define ODCR_CLEAR_ALL 0xFFFFFFFF /* mask all + interrupt vector*/ +/* MSI-X Interrupts */ +#define MSIX_TABLE_OFFSET 0x2000 +#define MSIX_TABLE_ELEMENT_SIZE 0x10 +#define MSIX_INTERRUPT_CONTROL_OFFSET 0xC +#define MSIX_TABLE_BASE (MSIX_TABLE_OFFSET + MSIX_INTERRUPT_CONTROL_OFFSET) +#define MSIX_INTERRUPT_DISABLE 0x1 +#define MSIX_INTERRUPT_ENABLE 0x0 + + +/* state definition for Scratch Pad1 register */ +#define SCRATCH_PAD1_POR 0x00 /* power on reset state */ +#define SCRATCH_PAD1_SFR 0x01 /* soft reset state */ +#define SCRATCH_PAD1_ERR 0x02 /* error state */ +#define SCRATCH_PAD1_RDY 0x03 /* ready state */ +#define SCRATCH_PAD1_RST 0x04 /* soft reset toggle flag */ +#define SCRATCH_PAD1_AAP1RDY_RST 0x08 /* AAP1 ready for soft reset */ +#define SCRATCH_PAD1_STATE_MASK 0xFFFFFFF0 /* ScratchPad1 + Mask, bit1-0 State, bit2 Soft Reset, bit3 FW RDY for Soft Reset */ +#define SCRATCH_PAD1_RESERVED 0x000003F8 /* Scratch Pad1 + Reserved bit 3 to 9 */ + + /* state definition for Scratch Pad2 register */ +#define SCRATCH_PAD2_POR 0x00 /* power on state */ +#define SCRATCH_PAD2_SFR 0x01 /* soft reset state */ +#define SCRATCH_PAD2_ERR 0x02 /* error state */ +#define SCRATCH_PAD2_RDY 0x03 /* ready state */ +#define SCRATCH_PAD2_FWRDY_RST 0x04 /* FW ready for soft reset flag*/ +#define SCRATCH_PAD2_IOPRDY_RST 0x08 /* IOP ready for soft reset */ +#define SCRATCH_PAD2_STATE_MASK 0xFFFFFFF4 /* ScratchPad 2 + Mask, bit1-0 State */ +#define SCRATCH_PAD2_RESERVED 0x000003FC /* Scratch Pad1 + Reserved bit 2 to 9 */ + +#define SCRATCH_PAD_ERROR_MASK 0xFFFFFC00 /* Error mask bits */ +#define SCRATCH_PAD_STATE_MASK 0x00000003 /* State Mask bits */ + +/* main configuration offset - byte offset */ +#define MAIN_SIGNATURE_OFFSET 0x00/* DWORD 0x00 */ +#define MAIN_INTERFACE_REVISION 0x04/* DWORD 0x01 */ +#define MAIN_FW_REVISION 0x08/* DWORD 0x02 */ +#define MAIN_MAX_OUTSTANDING_IO_OFFSET 0x0C/* DWORD 0x03 */ +#define MAIN_MAX_SGL_OFFSET 0x10/* DWORD 0x04 */ +#define MAIN_CNTRL_CAP_OFFSET 0x14/* DWORD 0x05 */ +#define MAIN_GST_OFFSET 0x18/* DWORD 0x06 */ +#define MAIN_IBQ_OFFSET 0x1C/* DWORD 0x07 */ +#define
MAIN_OBQ_OFFSET 0x20/* DWORD 0x08 */ +#define MAIN_IQNPPD_HPPD_OFFSET 0x24/* DWORD 0x09 */ +#define MAIN_OB_HW_EVENT_PID03_OFFSET 0x28/* DWORD 0x0A */ +#define MAIN_OB_HW_EVENT_PID47_OFFSET 0x2C/* DWORD 0x0B */ +#define MAIN_OB_NCQ_EVENT_PID03_OFFSET 0x30/* DWORD 0x0C */ +#define MAIN_OB_NCQ_EVENT_PID47_OFFSET 0x34/* DWORD 0x0D */ +#define MAIN_TITNX_EVENT_PID03_OFFSET 0x38/* DWORD 0x0E */ +#define MAIN_TITNX_EVENT_PID47_OFFSET 0x3C/* DWORD 0x0F */ +#define MAIN_OB_SSP_EVENT_PID03_OFFSET 0x40/* DWORD 0x10 */ +#define MAIN_OB_SSP_EVENT_PID47_OFFSET 0x44/* DWORD 0x11 */ +#define MAIN_OB_SMP_EVENT_PID03_OFFSET 0x48/* DWORD 0x12 */ +#define MAIN_OB_SMP_EVENT_PID47_OFFSET 0x4C/* DWORD 0x13 */ +#define MAIN_EVENT_LOG_ADDR_HI 0x50/* DWORD 0x14 */ +#define MAIN_EVENT_LOG_ADDR_LO 0x54/* DWORD 0x15 */ +#define MAIN_EVENT_LOG_BUFF_SIZE 0x58/* DWORD 0x16 */ +#define MAIN_EVENT_LOG_OPTION 0x5C/* DWORD 0x17 */ +#define MAIN_IOP_EVENT_LOG_ADDR_HI 0x60/* DWORD 0x18 */ +#define MAIN_IOP_EVENT_LOG_ADDR_LO 0x64/* DWORD 0x19 */ +#define MAIN_IOP_EVENT_LOG_BUFF_SIZE 0x68/* DWORD 0x1A */ +#define MAIN_IOP_EVENT_LOG_OPTION 0x6C/* DWORD 0x1B */ +#define MAIN_FATAL_ERROR_INTERRUPT 0x70/* DWORD 0x1C */ +#define MAIN_FATAL_ERROR_RDUMP0_OFFSET 0x74/* DWORD 0x1D */ +#define MAIN_FATAL_ERROR_RDUMP0_LENGTH 0x78/* DWORD 0x1E */ +#define MAIN_FATAL_ERROR_RDUMP1_OFFSET 0x7C/* DWORD 0x1F */ +#define MAIN_FATAL_ERROR_RDUMP1_LENGTH 0x80/* DWORD 0x20 */ +#define MAIN_HDA_FLAGS_OFFSET 0x84/* DWORD 0x21 */ +#define MAIN_ANALOG_SETUP_OFFSET 0x88/* DWORD 0x22 */ + +/* General Status Table offset - byte offset */ +#define GST_GSTLEN_MPIS_OFFSET 0x00 +#define GST_IQ_FREEZE_STATE0_OFFSET 0x04 +#define GST_IQ_FREEZE_STATE1_OFFSET 0x08 +#define GST_MSGUTCNT_OFFSET 0x0C +#define GST_IOPTCNT_OFFSET 0x10 +#define GST_PHYSTATE_OFFSET 0x18 +#define GST_PHYSTATE0_OFFSET 0x18 +#define GST_PHYSTATE1_OFFSET 0x1C +#define GST_PHYSTATE2_OFFSET 0x20 +#define GST_PHYSTATE3_OFFSET 0x24 +#define GST_PHYSTATE4_OFFSET 0x28 +#define GST_PHYSTATE5_OFFSET 0x2C +#define GST_PHYSTATE6_OFFSET 0x30 +#define GST_PHYSTATE7_OFFSET 0x34 +#define GST_RERRINFO_OFFSET 0x44 + +/* General Status Table - MPI state */ +#define GST_MPI_STATE_UNINIT 0x00 +#define GST_MPI_STATE_INIT 0x01 +#define GST_MPI_STATE_TERMINATION 0x02 +#define GST_MPI_STATE_ERROR 0x03 +#define GST_MPI_STATE_MASK 0x07 + +#define MBIC_NMI_ENABLE_VPE0_IOP 0x000418 +#define MBIC_NMI_ENABLE_VPE0_AAP1 0x000418 +/* PCIE registers - BAR2(0x18), BAR1(win) 0x010000 */ +#define PCIE_EVENT_INTERRUPT_ENABLE 0x003040 +#define PCIE_EVENT_INTERRUPT 0x003044 +#define PCIE_ERROR_INTERRUPT_ENABLE 0x003048 +#define PCIE_ERROR_INTERRUPT 0x00304C +/* signature definition for host scratch pad0 register */ +#define SPC_SOFT_RESET_SIGNATURE 0x252acbcd +/* Signature for Soft Reset */ + +/* SPC Reset register - BAR4(0x20), BAR2(win) (need dynamic mapping) */ +#define SPC_REG_RESET 0x000000/* reset register */ + +/* bit definition for SPC_RESET register */ +#define SPC_REG_RESET_OSSP 0x00000001 +#define SPC_REG_RESET_RAAE 0x00000002 +#define SPC_REG_RESET_PCS_SPBC 0x00000004 +#define SPC_REG_RESET_PCS_IOP_SS 0x00000008 +#define SPC_REG_RESET_PCS_AAP1_SS 0x00000010 +#define SPC_REG_RESET_PCS_AAP2_SS 0x00000020 +#define SPC_REG_RESET_PCS_LM 0x00000040 +#define SPC_REG_RESET_PCS 0x00000080 +#define SPC_REG_RESET_GSM 0x00000100 +#define SPC_REG_RESET_DDR2 0x00010000 +#define SPC_REG_RESET_BDMA_CORE 0x00020000 +#define SPC_REG_RESET_BDMA_SXCBI 0x00040000 +#define SPC_REG_RESET_PCIE_AL_SXCBI 0x00080000 +#define
SPC_REG_RESET_PCIE_PWR 0x00100000 +#define SPC_REG_RESET_PCIE_SFT 0x00200000 +#define SPC_REG_RESET_PCS_SXCBI 0x00400000 +#define SPC_REG_RESET_LMS_SXCBI 0x00800000 +#define SPC_REG_RESET_PMIC_SXCBI 0x01000000 +#define SPC_REG_RESET_PMIC_CORE 0x02000000 +#define SPC_REG_RESET_PCIE_PC_SXCBI 0x04000000 +#define SPC_REG_RESET_DEVICE 0x80000000 + +/* registers for BAR Shifting - BAR2(0x18), BAR1(win) */ +#define SPC_IBW_AXI_TRANSLATION_LOW 0x003258 + +#define MBIC_AAP1_ADDR_BASE 0x060000 +#define MBIC_IOP_ADDR_BASE 0x070000 +#define GSM_ADDR_BASE 0x0700000 +/* Dynamic map through Bar4 - 0x00700000 */ +#define GSM_CONFIG_RESET 0x00000000 +#define RAM_ECC_DB_ERR 0x00000018 +#define GSM_READ_ADDR_PARITY_INDIC 0x00000058 +#define GSM_WRITE_ADDR_PARITY_INDIC 0x00000060 +#define GSM_WRITE_DATA_PARITY_INDIC 0x00000068 +#define GSM_READ_ADDR_PARITY_CHECK 0x00000038 +#define GSM_WRITE_ADDR_PARITY_CHECK 0x00000040 +#define GSM_WRITE_DATA_PARITY_CHECK 0x00000048 + +#define RB6_ACCESS_REG 0x6A0000 +#define HDAC_EXEC_CMD 0x0002 +#define HDA_C_PA 0xcb +#define HDA_SEQ_ID_BITS 0x00ff0000 +#define HDA_GSM_OFFSET_BITS 0x00FFFFFF +#define MBIC_AAP1_ADDR_BASE 0x060000 +#define MBIC_IOP_ADDR_BASE 0x070000 +#define GSM_ADDR_BASE 0x0700000 +#define SPC_TOP_LEVEL_ADDR_BASE 0x000000 +#define GSM_CONFIG_RESET_VALUE 0x00003b00 +#define GPIO_ADDR_BASE 0x00090000 +#define GPIO_GPIO_0_0UTPUT_CTL_OFFSET 0x0000010c + +/* RB6 offset */ +#define SPC_RB6_OFFSET 0x80C0 +/* Magic number of soft reset for RB6 */ +#define RB6_MAGIC_NUMBER_RST 0x1234 + +/* Device Register status */ +#define DEVREG_SUCCESS 0x00 +#define DEVREG_FAILURE_OUT_OF_RESOURCE 0x01 +#define DEVREG_FAILURE_DEVICE_ALREADY_REGISTERED 0x02 +#define DEVREG_FAILURE_INVALID_PHY_ID 0x03 +#define DEVREG_FAILURE_PHY_ID_ALREADY_REGISTERED 0x04 +#define DEVREG_FAILURE_PORT_ID_OUT_OF_RANGE 0x05 +#define DEVREG_FAILURE_PORT_NOT_VALID_STATE 0x06 +#define DEVREG_FAILURE_DEVICE_TYPE_NOT_VALID 0x07 + +#endif + diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c new file mode 100644 index 000000000000..811b5d36d5f0 --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_init.c @@ -0,0 +1,888 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. 
+ * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ + +#include "pm8001_sas.h" +#include "pm8001_chips.h" + +static struct scsi_transport_template *pm8001_stt; + +static const struct pm8001_chip_info pm8001_chips[] = { + [chip_8001] = { 8, &pm8001_8001_dispatch,}, +}; +static int pm8001_id; + +LIST_HEAD(hba_list); + +/** + * The main structure which the LLDD must register with the scsi core. + */ +static struct scsi_host_template pm8001_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .queuecommand = sas_queuecommand, + .target_alloc = sas_target_alloc, + .slave_configure = pm8001_slave_configure, + .slave_destroy = sas_slave_destroy, + .scan_finished = pm8001_scan_finished, + .scan_start = pm8001_scan_start, + .change_queue_depth = sas_change_queue_depth, + .change_queue_type = sas_change_queue_type, + .bios_param = sas_bios_param, + .can_queue = 1, + .cmd_per_lun = 1, + .this_id = -1, + .sg_tablesize = SG_ALL, + .max_sectors = SCSI_DEFAULT_MAX_SECTORS, + .use_clustering = ENABLE_CLUSTERING, + .eh_device_reset_handler = sas_eh_device_reset_handler, + .eh_bus_reset_handler = sas_eh_bus_reset_handler, + .slave_alloc = pm8001_slave_alloc, + .target_destroy = sas_target_destroy, + .ioctl = sas_ioctl, + .shost_attrs = pm8001_host_attrs, +}; + +/** + * The SAS layer calls these functions to execute specific tasks. + */ +static struct sas_domain_function_template pm8001_transport_ops = { + .lldd_dev_found = pm8001_dev_found, + .lldd_dev_gone = pm8001_dev_gone, + + .lldd_execute_task = pm8001_queue_command, + .lldd_control_phy = pm8001_phy_control, + + .lldd_abort_task = pm8001_abort_task, + .lldd_abort_task_set = pm8001_abort_task_set, + .lldd_clear_aca = pm8001_clear_aca, + .lldd_clear_task_set = pm8001_clear_task_set, + .lldd_I_T_nexus_reset = pm8001_I_T_nexus_reset, + .lldd_lu_reset = pm8001_lu_reset, + .lldd_query_task = pm8001_query_task, +}; + +/** + *pm8001_phy_init - initialize our adapter phys + *@pm8001_ha: our hba structure. + *@phy_id: phy id. + */ +static void __devinit pm8001_phy_init(struct pm8001_hba_info *pm8001_ha, + int phy_id) +{ + struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; + struct asd_sas_phy *sas_phy = &phy->sas_phy; + phy->phy_state = 0; + phy->pm8001_ha = pm8001_ha; + sas_phy->enabled = (phy_id < pm8001_ha->chip->n_phy) ? 1 : 0; + sas_phy->class = SAS; + sas_phy->iproto = SAS_PROTOCOL_ALL; + sas_phy->tproto = 0; + sas_phy->type = PHY_TYPE_PHYSICAL; + sas_phy->role = PHY_ROLE_INITIATOR; + sas_phy->oob_mode = OOB_NOT_CONNECTED; + sas_phy->linkrate = SAS_LINK_RATE_UNKNOWN; + sas_phy->id = phy_id; + sas_phy->sas_addr = &pm8001_ha->sas_addr[0]; + sas_phy->frame_rcvd = &phy->frame_rcvd[0]; + sas_phy->ha = (struct sas_ha_struct *)pm8001_ha->shost->hostdata; + sas_phy->lldd_phy = phy; +} + +/** + *pm8001_free - free hba + *@pm8001_ha: our hba structure.
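+ * + * Tear-down order, as implemented below: free the DMA-coherent memory + * regions, unmap the chip BARs, drop the scsi_host reference, cancel any + * queued delayed work, and free the tag bitmap.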
+ * + */ +static void pm8001_free(struct pm8001_hba_info *pm8001_ha) +{ + int i; + struct pm8001_wq *wq; + + if (!pm8001_ha) + return; + + for (i = 0; i < USI_MAX_MEMCNT; i++) { + if (pm8001_ha->memoryMap.region[i].virt_ptr != NULL) { + pci_free_consistent(pm8001_ha->pdev, + pm8001_ha->memoryMap.region[i].element_size, + pm8001_ha->memoryMap.region[i].virt_ptr, + pm8001_ha->memoryMap.region[i].phys_addr); + } + } + PM8001_CHIP_DISP->chip_iounmap(pm8001_ha); + if (pm8001_ha->shost) + scsi_host_put(pm8001_ha->shost); + list_for_each_entry(wq, &pm8001_ha->wq_list, entry) + cancel_delayed_work(&wq->work_q); + kfree(pm8001_ha->tags); + kfree(pm8001_ha); +} + +#ifdef PM8001_USE_TASKLET +static void pm8001_tasklet(unsigned long opaque) +{ + struct pm8001_hba_info *pm8001_ha; + pm8001_ha = (struct pm8001_hba_info *)opaque; + if (unlikely(!pm8001_ha)) + BUG_ON(1); + PM8001_CHIP_DISP->isr(pm8001_ha); +} +#endif + + + /** + * pm8001_interrupt - when the HBA originates an interrupt, we should invoke + * this dispatcher to handle each case. + * @irq: irq number. + * @opaque: the passed general host adapter struct + */ +static irqreturn_t pm8001_interrupt(int irq, void *opaque) +{ + struct pm8001_hba_info *pm8001_ha; + irqreturn_t ret = IRQ_HANDLED; + struct sas_ha_struct *sha = opaque; + pm8001_ha = sha->lldd_ha; + if (unlikely(!pm8001_ha)) + return IRQ_NONE; + if (!PM8001_CHIP_DISP->is_our_interupt(pm8001_ha)) + return IRQ_NONE; +#ifdef PM8001_USE_TASKLET + tasklet_schedule(&pm8001_ha->tasklet); +#else + ret = PM8001_CHIP_DISP->isr(pm8001_ha); +#endif + return ret; +} + +/** + * pm8001_alloc - initialize our hba structure and its DMA memory areas. + * @pm8001_ha: our hba structure. + * + */ +static int __devinit pm8001_alloc(struct pm8001_hba_info *pm8001_ha) +{ + int i; + spin_lock_init(&pm8001_ha->lock); + for (i = 0; i < pm8001_ha->chip->n_phy; i++) + pm8001_phy_init(pm8001_ha, i); + + pm8001_ha->tags = kmalloc(sizeof(*pm8001_ha->tags)*PM8001_MAX_DEVICES, + GFP_KERNEL); + + /* MPI Memory region 1 for AAP Event Log for fw */ + pm8001_ha->memoryMap.region[AAP1].num_elements = 1; + pm8001_ha->memoryMap.region[AAP1].element_size = PM8001_EVENT_LOG_SIZE; + pm8001_ha->memoryMap.region[AAP1].total_len = PM8001_EVENT_LOG_SIZE; + pm8001_ha->memoryMap.region[AAP1].alignment = 32; + + /* MPI Memory region 2 for IOP Event Log for fw */ + pm8001_ha->memoryMap.region[IOP].num_elements = 1; + pm8001_ha->memoryMap.region[IOP].element_size = PM8001_EVENT_LOG_SIZE; + pm8001_ha->memoryMap.region[IOP].total_len = PM8001_EVENT_LOG_SIZE; + pm8001_ha->memoryMap.region[IOP].alignment = 32; + + /* MPI Memory region 3 for consumer Index of inbound queues */ + pm8001_ha->memoryMap.region[CI].num_elements = 1; + pm8001_ha->memoryMap.region[CI].element_size = 4; + pm8001_ha->memoryMap.region[CI].total_len = 4; + pm8001_ha->memoryMap.region[CI].alignment = 4; + + /* MPI Memory region 4 for producer Index of outbound queues */ + pm8001_ha->memoryMap.region[PI].num_elements = 1; + pm8001_ha->memoryMap.region[PI].element_size = 4; + pm8001_ha->memoryMap.region[PI].total_len = 4; + pm8001_ha->memoryMap.region[PI].alignment = 4; + + /* MPI Memory region 5 inbound queues */ + pm8001_ha->memoryMap.region[IB].num_elements = 256; + pm8001_ha->memoryMap.region[IB].element_size = 64; + pm8001_ha->memoryMap.region[IB].total_len = 256 * 64; + pm8001_ha->memoryMap.region[IB].alignment = 64; + + /* MPI Memory region 6 outbound queues */ + pm8001_ha->memoryMap.region[OB].num_elements = 256; + pm8001_ha->memoryMap.region[OB].element_size = 64; +
pm8001_ha->memoryMap.region[OB].total_len = 256 * 64; + pm8001_ha->memoryMap.region[OB].alignment = 64; + + /* Memory region write DMA*/ + pm8001_ha->memoryMap.region[NVMD].num_elements = 1; + pm8001_ha->memoryMap.region[NVMD].element_size = 4096; + pm8001_ha->memoryMap.region[NVMD].total_len = 4096; + /* Memory region for devices*/ + pm8001_ha->memoryMap.region[DEV_MEM].num_elements = 1; + pm8001_ha->memoryMap.region[DEV_MEM].element_size = PM8001_MAX_DEVICES * + sizeof(struct pm8001_device); + pm8001_ha->memoryMap.region[DEV_MEM].total_len = PM8001_MAX_DEVICES * + sizeof(struct pm8001_device); + + /* Memory region for ccb_info*/ + pm8001_ha->memoryMap.region[CCB_MEM].num_elements = 1; + pm8001_ha->memoryMap.region[CCB_MEM].element_size = PM8001_MAX_CCB * + sizeof(struct pm8001_ccb_info); + pm8001_ha->memoryMap.region[CCB_MEM].total_len = PM8001_MAX_CCB * + sizeof(struct pm8001_ccb_info); + + for (i = 0; i < USI_MAX_MEMCNT; i++) { + if (pm8001_mem_alloc(pm8001_ha->pdev, + &pm8001_ha->memoryMap.region[i].virt_ptr, + &pm8001_ha->memoryMap.region[i].phys_addr, + &pm8001_ha->memoryMap.region[i].phys_addr_hi, + &pm8001_ha->memoryMap.region[i].phys_addr_lo, + pm8001_ha->memoryMap.region[i].total_len, + pm8001_ha->memoryMap.region[i].alignment) != 0) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Mem%d alloc failed\n", + i)); + goto err_out; + } + } + + pm8001_ha->devices = pm8001_ha->memoryMap.region[DEV_MEM].virt_ptr; + for (i = 0; i < PM8001_MAX_DEVICES; i++) { + pm8001_ha->devices[i].dev_type = NO_DEVICE; + pm8001_ha->devices[i].id = i; + pm8001_ha->devices[i].device_id = PM8001_MAX_DEVICES; + pm8001_ha->devices[i].running_req = 0; + } + pm8001_ha->ccb_info = pm8001_ha->memoryMap.region[CCB_MEM].virt_ptr; + for (i = 0; i < PM8001_MAX_CCB; i++) { + pm8001_ha->ccb_info[i].ccb_dma_handle = + pm8001_ha->memoryMap.region[CCB_MEM].phys_addr + + i * sizeof(struct pm8001_ccb_info); + ++pm8001_ha->tags_num; + } + pm8001_ha->flags = PM8001F_INIT_TIME; + /* Initialize tags */ + pm8001_tag_init(pm8001_ha); + return 0; +err_out: + return 1; +} + +/** + * pm8001_ioremap - remap the pci high physical address to kernel virtual + * address so that we can access them. + * @pm8001_ha: our hba structure.
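+ * + * Note (derived from the mapping loop below): the six PCI BARs collapse + * into four logical windows -- BARs 0/1 form logical BAR0, BARs 2/3 form + * logical BAR1, BAR4 is logical BAR2 and BAR5 is logical BAR3.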
+ */ +static int pm8001_ioremap(struct pm8001_hba_info *pm8001_ha) +{ + u32 bar; + u32 logicalBar = 0; + struct pci_dev *pdev; + + pdev = pm8001_ha->pdev; + /* map pci mem (PMC pci base 0-3)*/ + for (bar = 0; bar < 6; bar++) { + /* + ** logical BARs for SPC: + ** bar 0 and 1 - logical BAR0 + ** bar 2 and 3 - logical BAR1 + ** bar4 - logical BAR2 + ** bar5 - logical BAR3 + ** Skip the appropriate assignments: + */ + if ((bar == 1) || (bar == 3)) + continue; + if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) { + pm8001_ha->io_mem[logicalBar].membase = + pci_resource_start(pdev, bar); + pm8001_ha->io_mem[logicalBar].membase &= + (u32)PCI_BASE_ADDRESS_MEM_MASK; + pm8001_ha->io_mem[logicalBar].memsize = + pci_resource_len(pdev, bar); + pm8001_ha->io_mem[logicalBar].memvirtaddr = + ioremap(pm8001_ha->io_mem[logicalBar].membase, + pm8001_ha->io_mem[logicalBar].memsize); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("PCI: bar %d, logicalBar %d " + "virt_addr=%lx,len=%d\n", bar, logicalBar, + (unsigned long) + pm8001_ha->io_mem[logicalBar].memvirtaddr, + pm8001_ha->io_mem[logicalBar].memsize)); + } else { + pm8001_ha->io_mem[logicalBar].membase = 0; + pm8001_ha->io_mem[logicalBar].memsize = 0; + pm8001_ha->io_mem[logicalBar].memvirtaddr = 0; + } + logicalBar++; + } + return 0; +} + +/** + * pm8001_pci_alloc - initialize our ha card structure + * @pdev: pci device. + * @chip_id: chip id; index into the pm8001_chips table. + * @shost: scsi host struct which has been initialized before. + */ +static struct pm8001_hba_info *__devinit +pm8001_pci_alloc(struct pci_dev *pdev, u32 chip_id, struct Scsi_Host *shost) +{ + struct pm8001_hba_info *pm8001_ha; + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + + + pm8001_ha = sha->lldd_ha; + if (!pm8001_ha) + return NULL; + + pm8001_ha->pdev = pdev; + pm8001_ha->dev = &pdev->dev; + pm8001_ha->chip_id = chip_id; + pm8001_ha->chip = &pm8001_chips[pm8001_ha->chip_id]; + pm8001_ha->irq = pdev->irq; + pm8001_ha->sas = sha; + pm8001_ha->shost = shost; + pm8001_ha->id = pm8001_id++; + INIT_LIST_HEAD(&pm8001_ha->wq_list); + pm8001_ha->logging_level = 0x01; + sprintf(pm8001_ha->name, "%s%d", DRV_NAME, pm8001_ha->id); +#ifdef PM8001_USE_TASKLET + tasklet_init(&pm8001_ha->tasklet, pm8001_tasklet, + (unsigned long)pm8001_ha); +#endif + pm8001_ioremap(pm8001_ha); + if (!pm8001_alloc(pm8001_ha)) + return pm8001_ha; + pm8001_free(pm8001_ha); + return NULL; +} + +/** + * pci_go_44 - pm8001 specific: its DMA mask is 44 bits rather than 64 bits. + * @pdev: pci device. + */ +static int pci_go_44(struct pci_dev *pdev) +{ + int rc; + + if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(44))) { + rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(44)); + if (rc) { + rc = pci_set_consistent_dma_mask(pdev, + DMA_BIT_MASK(32)); + if (rc) { + dev_printk(KERN_ERR, &pdev->dev, + "44-bit DMA enable failed\n"); + return rc; + } + } + } else { + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); + if (rc) { + dev_printk(KERN_ERR, &pdev->dev, + "32-bit DMA enable failed\n"); + return rc; + } + rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); + if (rc) { + dev_printk(KERN_ERR, &pdev->dev, + "32-bit consistent DMA enable failed\n"); + return rc; + } + } + return rc; +} + +/** + * pm8001_prep_sas_ha_init - allocate memory in the general hba struct and + * initialize it. + * @shost: scsi host which has been allocated outside. + * @chip_info: our ha struct.
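+ * + * Returns 0 on success and -1 if the phy array, port array or hba + * structure cannot be allocated (see the body below).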
+ */ +static int __devinit pm8001_prep_sas_ha_init(struct Scsi_Host * shost, + const struct pm8001_chip_info *chip_info) +{ + int phy_nr, port_nr; + struct asd_sas_phy **arr_phy; + struct asd_sas_port **arr_port; + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + + phy_nr = chip_info->n_phy; + port_nr = phy_nr; + memset(sha, 0x00, sizeof(*sha)); + arr_phy = kcalloc(phy_nr, sizeof(void *), GFP_KERNEL); + if (!arr_phy) + goto exit; + arr_port = kcalloc(port_nr, sizeof(void *), GFP_KERNEL); + if (!arr_port) + goto exit_free2; + + sha->sas_phy = arr_phy; + sha->sas_port = arr_port; + sha->lldd_ha = kzalloc(sizeof(struct pm8001_hba_info), GFP_KERNEL); + if (!sha->lldd_ha) + goto exit_free1; + + shost->transportt = pm8001_stt; + shost->max_id = PM8001_MAX_DEVICES; + shost->max_lun = 8; + shost->max_channel = 0; + shost->unique_id = pm8001_id; + shost->max_cmd_len = 16; + shost->can_queue = PM8001_CAN_QUEUE; + shost->cmd_per_lun = 32; + return 0; +exit_free1: + kfree(arr_port); +exit_free2: + kfree(arr_phy); +exit: + return -1; +} + +/** + * pm8001_post_sas_ha_init - initialize general hba struct defined in libsas + * @shost: scsi host which has been allocated outside + * @chip_info: our ha struct. + */ +static void __devinit pm8001_post_sas_ha_init(struct Scsi_Host *shost, + const struct pm8001_chip_info *chip_info) +{ + int i = 0; + struct pm8001_hba_info *pm8001_ha; + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); + + pm8001_ha = sha->lldd_ha; + for (i = 0; i < chip_info->n_phy; i++) { + sha->sas_phy[i] = &pm8001_ha->phy[i].sas_phy; + sha->sas_port[i] = &pm8001_ha->port[i].sas_port; + } + sha->sas_ha_name = DRV_NAME; + sha->dev = pm8001_ha->dev; + + sha->lldd_module = THIS_MODULE; + sha->sas_addr = &pm8001_ha->sas_addr[0]; + sha->num_phys = chip_info->n_phy; + sha->lldd_max_execute_num = 1; + sha->lldd_queue_size = PM8001_CAN_QUEUE; + sha->core.shost = shost; +} + +/** + * pm8001_init_sas_add - initialize sas address + * @chip_info: our ha struct. + * + * Currently we just set the fixed SAS address to our HBA,for manufacture, + * it should read from the EEPROM + */ +static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha) +{ + u8 i; +#ifdef PM8001_READ_VPD + DECLARE_COMPLETION_ONSTACK(completion); + pm8001_ha->nvmd_completion = &completion; + PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, 0, 0); + wait_for_completion(&completion); + for (i = 0; i < pm8001_ha->chip->n_phy; i++) { + memcpy(&pm8001_ha->phy[i].dev_sas_addr, pm8001_ha->sas_addr, + SAS_ADDR_SIZE); + PM8001_INIT_DBG(pm8001_ha, + pm8001_printk("phy %d sas_addr = %x \n", i, + (u64)pm8001_ha->phy[i].dev_sas_addr)); + } +#else + for (i = 0; i < pm8001_ha->chip->n_phy; i++) { + pm8001_ha->phy[i].dev_sas_addr = 0x500e004010000004ULL; + pm8001_ha->phy[i].dev_sas_addr = + cpu_to_be64((u64) + (*(u64 *)&pm8001_ha->phy[i].dev_sas_addr)); + } + memcpy(pm8001_ha->sas_addr, &pm8001_ha->phy[0].dev_sas_addr, + SAS_ADDR_SIZE); +#endif +} + +#ifdef PM8001_USE_MSIX +/** + * pm8001_setup_msix - enable MSI-X interrupt + * @chip_info: our ha struct. 
+ * @irq_handler: irq_handler
+ */
+static u32 pm8001_setup_msix(struct pm8001_hba_info *pm8001_ha,
+	irq_handler_t irq_handler)
+{
+	u32 i = 0, j = 0;
+	u32 number_of_intr = 1;
+	int flag = 0;
+	u32 max_entry;
+	int rc;
+	max_entry = sizeof(pm8001_ha->msix_entries) /
+		sizeof(pm8001_ha->msix_entries[0]);
+	flag |= IRQF_DISABLED;
+	for (i = 0; i < max_entry; i++)
+		pm8001_ha->msix_entries[i].entry = i;
+	rc = pci_enable_msix(pm8001_ha->pdev, pm8001_ha->msix_entries,
+		number_of_intr);
+	pm8001_ha->number_of_intr = number_of_intr;
+	if (!rc) {
+		for (i = 0; i < number_of_intr; i++) {
+			if (request_irq(pm8001_ha->msix_entries[i].vector,
+				irq_handler, flag, DRV_NAME,
+				SHOST_TO_SAS_HA(pm8001_ha->shost))) {
+				for (j = 0; j < i; j++)
+					free_irq(
+					pm8001_ha->msix_entries[j].vector,
+					SHOST_TO_SAS_HA(pm8001_ha->shost));
+				pci_disable_msix(pm8001_ha->pdev);
+				break;
+			}
+		}
+	}
+	return rc;
+}
+#endif
+
+/**
+ * pm8001_request_irq - register interrupt
+ * @pm8001_ha: our ha struct.
+ */
+static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
+{
+	struct pci_dev *pdev;
+	irq_handler_t irq_handler = pm8001_interrupt;
+	u32 rc;
+
+	pdev = pm8001_ha->pdev;
+
+#ifdef PM8001_USE_MSIX
+	if (pci_find_capability(pdev, PCI_CAP_ID_MSIX))
+		return pm8001_setup_msix(pm8001_ha, irq_handler);
+	else
+		goto intx;
+#endif
+
+intx:
+	/* initialize the INT-X interrupt */
+	rc = request_irq(pdev->irq, irq_handler, IRQF_SHARED, DRV_NAME,
+		SHOST_TO_SAS_HA(pm8001_ha->shost));
+	return rc;
+}
+
+/**
+ * pm8001_pci_probe - probe supported device
+ * @pdev: the pci device which the kernel has prepared for us.
+ * @ent: pci device id
+ *
+ * This function is the main initialization function: it is invoked once for
+ * each supported device found; all structure and hardware initialization is
+ * done here, and the interrupt is registered as well.
+ */
+static int __devinit pm8001_pci_probe(struct pci_dev *pdev,
+	const struct pci_device_id *ent)
+{
+	unsigned int rc;
+	u32 pci_reg;
+	struct pm8001_hba_info *pm8001_ha;
+	struct Scsi_Host *shost = NULL;
+	const struct pm8001_chip_info *chip;
+
+	dev_printk(KERN_INFO, &pdev->dev,
+		"pm8001: driver version %s\n", DRV_VERSION);
+	rc = pci_enable_device(pdev);
+	if (rc)
+		goto err_out_enable;
+	pci_set_master(pdev);
+	/*
+	 * Enable pci slot busmaster by setting the pci command register.
+	 * This is required by the FW for the Cyclone card.
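+	 * The value 0x157 sets the I/O Space (0x1), Memory Space (0x2),
+	 * Bus Master (0x4), Memory Write and Invalidate (0x10), Parity Error
+	 * Response (0x40) and SERR# Enable (0x100) bits of PCI_COMMAND.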
+ */ + + pci_read_config_dword(pdev, PCI_COMMAND, &pci_reg); + pci_reg |= 0x157; + pci_write_config_dword(pdev, PCI_COMMAND, pci_reg); + rc = pci_request_regions(pdev, DRV_NAME); + if (rc) + goto err_out_disable; + rc = pci_go_44(pdev); + if (rc) + goto err_out_regions; + + shost = scsi_host_alloc(&pm8001_sht, sizeof(void *)); + if (!shost) { + rc = -ENOMEM; + goto err_out_regions; + } + chip = &pm8001_chips[ent->driver_data]; + SHOST_TO_SAS_HA(shost) = + kcalloc(1, sizeof(struct sas_ha_struct), GFP_KERNEL); + if (!SHOST_TO_SAS_HA(shost)) { + rc = -ENOMEM; + goto err_out_free_host; + } + + rc = pm8001_prep_sas_ha_init(shost, chip); + if (rc) { + rc = -ENOMEM; + goto err_out_free; + } + pci_set_drvdata(pdev, SHOST_TO_SAS_HA(shost)); + pm8001_ha = pm8001_pci_alloc(pdev, chip_8001, shost); + if (!pm8001_ha) { + rc = -ENOMEM; + goto err_out_free; + } + list_add_tail(&pm8001_ha->list, &hba_list); + PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha, 0x252acbcd); + rc = PM8001_CHIP_DISP->chip_init(pm8001_ha); + if (rc) + goto err_out_ha_free; + + rc = scsi_add_host(shost, &pdev->dev); + if (rc) + goto err_out_ha_free; + rc = pm8001_request_irq(pm8001_ha); + if (rc) + goto err_out_shost; + + PM8001_CHIP_DISP->interrupt_enable(pm8001_ha); + pm8001_init_sas_add(pm8001_ha); + pm8001_post_sas_ha_init(shost, chip); + rc = sas_register_ha(SHOST_TO_SAS_HA(shost)); + if (rc) + goto err_out_shost; + scsi_scan_host(pm8001_ha->shost); + return 0; + +err_out_shost: + scsi_remove_host(pm8001_ha->shost); +err_out_ha_free: + pm8001_free(pm8001_ha); +err_out_free: + kfree(SHOST_TO_SAS_HA(shost)); +err_out_free_host: + kfree(shost); +err_out_regions: + pci_release_regions(pdev); +err_out_disable: + pci_disable_device(pdev); +err_out_enable: + return rc; +} + +static void __devexit pm8001_pci_remove(struct pci_dev *pdev) +{ + struct sas_ha_struct *sha = pci_get_drvdata(pdev); + struct pm8001_hba_info *pm8001_ha; + int i; + pm8001_ha = sha->lldd_ha; + pci_set_drvdata(pdev, NULL); + sas_unregister_ha(sha); + sas_remove_host(pm8001_ha->shost); + list_del(&pm8001_ha->list); + scsi_remove_host(pm8001_ha->shost); + PM8001_CHIP_DISP->interrupt_disable(pm8001_ha); + PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha, 0x252acbcd); + +#ifdef PM8001_USE_MSIX + for (i = 0; i < pm8001_ha->number_of_intr; i++) + synchronize_irq(pm8001_ha->msix_entries[i].vector); + for (i = 0; i < pm8001_ha->number_of_intr; i++) + free_irq(pm8001_ha->msix_entries[i].vector, sha); + pci_disable_msix(pdev); +#else + free_irq(pm8001_ha->irq, sha); +#endif +#ifdef PM8001_USE_TASKLET + tasklet_kill(&pm8001_ha->tasklet); +#endif + pm8001_free(pm8001_ha); + kfree(sha->sas_phy); + kfree(sha->sas_port); + kfree(sha); + pci_release_regions(pdev); + pci_disable_device(pdev); +} + +/** + * pm8001_pci_suspend - power management suspend main entry point + * @pdev: PCI device struct + * @state: PM state change to (usually PCI_D3) + * + * Returns 0 success, anything else error. 
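+ *
+ * The teardown mirrors pm8001_pci_remove(): the chip is quiesced with a
+ * soft reset, the MSI-X (or INT-X) vectors and the tasklet are torn down,
+ * and only then is the PCI state saved and the device powered down.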
+ */ +static int pm8001_pci_suspend(struct pci_dev *pdev, pm_message_t state) +{ + struct sas_ha_struct *sha = pci_get_drvdata(pdev); + struct pm8001_hba_info *pm8001_ha; + int i , pos; + u32 device_state; + pm8001_ha = sha->lldd_ha; + flush_scheduled_work(); + scsi_block_requests(pm8001_ha->shost); + pos = pci_find_capability(pdev, PCI_CAP_ID_PM); + if (pos == 0) { + printk(KERN_ERR " PCI PM not supported\n"); + return -ENODEV; + } + PM8001_CHIP_DISP->interrupt_disable(pm8001_ha); + PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha, 0x252acbcd); +#ifdef PM8001_USE_MSIX + for (i = 0; i < pm8001_ha->number_of_intr; i++) + synchronize_irq(pm8001_ha->msix_entries[i].vector); + for (i = 0; i < pm8001_ha->number_of_intr; i++) + free_irq(pm8001_ha->msix_entries[i].vector, sha); + pci_disable_msix(pdev); +#else + free_irq(pm8001_ha->irq, sha); +#endif +#ifdef PM8001_USE_TASKLET + tasklet_kill(&pm8001_ha->tasklet); +#endif + device_state = pci_choose_state(pdev, state); + pm8001_printk("pdev=0x%p, slot=%s, entering " + "operating state [D%d]\n", pdev, + pm8001_ha->name, device_state); + pci_save_state(pdev); + pci_disable_device(pdev); + pci_set_power_state(pdev, device_state); + return 0; +} + +/** + * pm8001_pci_resume - power management resume main entry point + * @pdev: PCI device struct + * + * Returns 0 success, anything else error. + */ +static int pm8001_pci_resume(struct pci_dev *pdev) +{ + struct sas_ha_struct *sha = pci_get_drvdata(pdev); + struct pm8001_hba_info *pm8001_ha; + int rc; + u32 device_state; + pm8001_ha = sha->lldd_ha; + device_state = pdev->current_state; + + pm8001_printk("pdev=0x%p, slot=%s, resuming from previous " + "operating state [D%d]\n", pdev, pm8001_ha->name, device_state); + + pci_set_power_state(pdev, PCI_D0); + pci_enable_wake(pdev, PCI_D0, 0); + pci_restore_state(pdev); + rc = pci_enable_device(pdev); + if (rc) { + pm8001_printk("slot=%s Enable device failed during resume\n", + pm8001_ha->name); + goto err_out_enable; + } + + pci_set_master(pdev); + rc = pci_go_44(pdev); + if (rc) + goto err_out_disable; + + PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha, 0x252acbcd); + rc = PM8001_CHIP_DISP->chip_init(pm8001_ha); + if (rc) + goto err_out_disable; + PM8001_CHIP_DISP->interrupt_disable(pm8001_ha); + rc = pm8001_request_irq(pm8001_ha); + if (rc) + goto err_out_disable; + #ifdef PM8001_USE_TASKLET + tasklet_init(&pm8001_ha->tasklet, pm8001_tasklet, + (unsigned long)pm8001_ha); + #endif + PM8001_CHIP_DISP->interrupt_enable(pm8001_ha); + scsi_unblock_requests(pm8001_ha->shost); + return 0; + +err_out_disable: + scsi_remove_host(pm8001_ha->shost); + pci_disable_device(pdev); +err_out_enable: + return rc; +} + +static struct pci_device_id __devinitdata pm8001_pci_table[] = { + { + PCI_VDEVICE(PMC_Sierra, 0x8001), chip_8001 + }, + { + PCI_DEVICE(0x117c, 0x0042), + .driver_data = chip_8001 + }, + {} /* terminate list */ +}; + +static struct pci_driver pm8001_pci_driver = { + .name = DRV_NAME, + .id_table = pm8001_pci_table, + .probe = pm8001_pci_probe, + .remove = __devexit_p(pm8001_pci_remove), + .suspend = pm8001_pci_suspend, + .resume = pm8001_pci_resume, +}; + +/** + * pm8001_init - initialize scsi transport template + */ +static int __init pm8001_init(void) +{ + int rc; + pm8001_id = 0; + pm8001_stt = sas_domain_attach_transport(&pm8001_transport_ops); + if (!pm8001_stt) + return -ENOMEM; + rc = pci_register_driver(&pm8001_pci_driver); + if (rc) + goto err_out; + return 0; +err_out: + sas_release_transport(pm8001_stt); + return rc; +} + +static void __exit pm8001_exit(void) 
+{ + pci_unregister_driver(&pm8001_pci_driver); + sas_release_transport(pm8001_stt); +} + +module_init(pm8001_init); +module_exit(pm8001_exit); + +MODULE_AUTHOR("Jack Wang "); +MODULE_DESCRIPTION("PMC-Sierra PM8001 SAS/SATA controller driver"); +MODULE_VERSION(DRV_VERSION); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, pm8001_pci_table); + diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c new file mode 100644 index 000000000000..7bf30fa6963a --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_sas.c @@ -0,0 +1,1104 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. + * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ + +#include "pm8001_sas.h" + +/** + * pm8001_find_tag - from sas task to find out tag that belongs to this task + * @task: the task sent to the LLDD + * @tag: the found tag associated with the task + */ +static int pm8001_find_tag(struct sas_task *task, u32 *tag) +{ + if (task->lldd_task) { + struct pm8001_ccb_info *ccb; + ccb = task->lldd_task; + *tag = ccb->ccb_tag; + return 1; + } + return 0; +} + +/** + * pm8001_tag_clear - clear the tags bitmap + * @pm8001_ha: our hba struct + * @tag: the found tag associated with the task + */ +static void pm8001_tag_clear(struct pm8001_hba_info *pm8001_ha, u32 tag) +{ + void *bitmap = pm8001_ha->tags; + clear_bit(tag, bitmap); +} + +static void pm8001_tag_free(struct pm8001_hba_info *pm8001_ha, u32 tag) +{ + pm8001_tag_clear(pm8001_ha, tag); +} + +static void pm8001_tag_set(struct pm8001_hba_info *pm8001_ha, u32 tag) +{ + void *bitmap = pm8001_ha->tags; + set_bit(tag, bitmap); +} + +/** + * pm8001_tag_alloc - allocate a empty tag for task used. 
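+ * (a tag indexes both the tag bitmap and the ccb_info array, so a free tag
+ * is also a free CCB slot)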
+ * @pm8001_ha: our hba struct
+ * @tag_out: the found empty tag.
+ */
+inline int pm8001_tag_alloc(struct pm8001_hba_info *pm8001_ha, u32 *tag_out)
+{
+	unsigned int index, tag;
+	void *bitmap = pm8001_ha->tags;
+
+	index = find_first_zero_bit(bitmap, pm8001_ha->tags_num);
+	tag = index;
+	if (tag >= pm8001_ha->tags_num)
+		return -SAS_QUEUE_FULL;
+	pm8001_tag_set(pm8001_ha, tag);
+	*tag_out = tag;
+	return 0;
+}
+
+void pm8001_tag_init(struct pm8001_hba_info *pm8001_ha)
+{
+	int i;
+	for (i = 0; i < pm8001_ha->tags_num; ++i)
+		pm8001_tag_clear(pm8001_ha, i);
+}
+
+/**
+ * pm8001_mem_alloc - allocate DMA-coherent memory for pm8001.
+ * @pdev: pci device.
+ * @virt_addr: the allocated, suitably aligned virtual address
+ * @pphys_addr: the bus address of the (unaligned) allocation
+ * @pphys_addr_hi: the high 32 bits of the aligned bus address.
+ * @pphys_addr_lo: the low 32 bits of the aligned bus address.
+ * @mem_size: memory size.
+ * @align: required alignment, in bytes (a power of two).
+ */
+int pm8001_mem_alloc(struct pci_dev *pdev, void **virt_addr,
+	dma_addr_t *pphys_addr, u32 *pphys_addr_hi,
+	u32 *pphys_addr_lo, u32 mem_size, u32 align)
+{
+	caddr_t mem_virt_alloc;
+	dma_addr_t mem_dma_handle;
+	u64 phys_align;
+	u64 align_offset = 0;
+	if (align)
+		align_offset = (dma_addr_t)align - 1;
+	/* over-allocate by @align bytes so the buffer can be rounded up */
+	mem_virt_alloc =
+		pci_alloc_consistent(pdev, mem_size + align, &mem_dma_handle);
+	if (!mem_virt_alloc) {
+		pm8001_printk("memory allocation error\n");
+		return -1;
+	}
+	memset((void *)mem_virt_alloc, 0, mem_size + align);
+	*pphys_addr = mem_dma_handle;
+	/* round the bus address up to the requested alignment and shift
+	 * the virtual address by the same amount */
+	phys_align = (*pphys_addr + align_offset) & ~align_offset;
+	*virt_addr = (void *)mem_virt_alloc + phys_align - *pphys_addr;
+	*pphys_addr_hi = upper_32_bits(phys_align);
+	*pphys_addr_lo = lower_32_bits(phys_align);
+	return 0;
+}
+/**
+ * pm8001_find_ha_by_dev - find our hba struct from a domain device handed
+ * down by the sas layer.
+ * @dev: the domain device which came from the sas layer.
+ */
+static
+struct pm8001_hba_info *pm8001_find_ha_by_dev(struct domain_device *dev)
+{
+	struct sas_ha_struct *sha = dev->port->ha;
+	struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
+	return pm8001_ha;
+}
+
+/**
+ * pm8001_phy_control - registered in sas_domain_function_template for libsas
+ * to use. Note: this only controls the HBA's own phys; to control an
+ * expander phy, an SMP command should be used instead.
+ * @sas_phy: which phy in HBA phys.
+ * @func: the operation.
+ * @funcdata: always NULL.
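+ *
+ * Phy operations complete asynchronously: a phy that is not yet started is
+ * brought up first and waited for via @enable_completion, and every request
+ * is followed by an msleep(300) grace period before returning to libsas.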
+ */ +int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func, + void *funcdata) +{ + int rc = 0, phy_id = sas_phy->id; + struct pm8001_hba_info *pm8001_ha = NULL; + struct sas_phy_linkrates *rates; + DECLARE_COMPLETION_ONSTACK(completion); + pm8001_ha = sas_phy->ha->lldd_ha; + pm8001_ha->phy[phy_id].enable_completion = &completion; + switch (func) { + case PHY_FUNC_SET_LINK_RATE: + rates = funcdata; + if (rates->minimum_linkrate) { + pm8001_ha->phy[phy_id].minimum_linkrate = + rates->minimum_linkrate; + } + if (rates->maximum_linkrate) { + pm8001_ha->phy[phy_id].maximum_linkrate = + rates->maximum_linkrate; + } + if (pm8001_ha->phy[phy_id].phy_state == 0) { + PM8001_CHIP_DISP->phy_start_req(pm8001_ha, phy_id); + wait_for_completion(&completion); + } + PM8001_CHIP_DISP->phy_ctl_req(pm8001_ha, phy_id, + PHY_LINK_RESET); + break; + case PHY_FUNC_HARD_RESET: + if (pm8001_ha->phy[phy_id].phy_state == 0) { + PM8001_CHIP_DISP->phy_start_req(pm8001_ha, phy_id); + wait_for_completion(&completion); + } + PM8001_CHIP_DISP->phy_ctl_req(pm8001_ha, phy_id, + PHY_HARD_RESET); + break; + case PHY_FUNC_LINK_RESET: + if (pm8001_ha->phy[phy_id].phy_state == 0) { + PM8001_CHIP_DISP->phy_start_req(pm8001_ha, phy_id); + wait_for_completion(&completion); + } + PM8001_CHIP_DISP->phy_ctl_req(pm8001_ha, phy_id, + PHY_LINK_RESET); + break; + case PHY_FUNC_RELEASE_SPINUP_HOLD: + PM8001_CHIP_DISP->phy_ctl_req(pm8001_ha, phy_id, + PHY_LINK_RESET); + break; + case PHY_FUNC_DISABLE: + PM8001_CHIP_DISP->phy_stop_req(pm8001_ha, phy_id); + break; + default: + rc = -EOPNOTSUPP; + } + msleep(300); + return rc; +} + +int pm8001_slave_alloc(struct scsi_device *scsi_dev) +{ + struct domain_device *dev = sdev_to_domain_dev(scsi_dev); + if (dev_is_sata(dev)) { + /* We don't need to rescan targets + * if REPORT_LUNS request is failed + */ + if (scsi_dev->lun > 0) + return -ENXIO; + scsi_dev->tagged_supported = 1; + } + return sas_slave_alloc(scsi_dev); +} + +/** + * pm8001_scan_start - we should enable all HBA phys by sending the phy_start + * command to HBA. + * @shost: the scsi host data. 
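+ *
+ * Phy bring-up is asynchronous: the phy-up events arrive by interrupt, and
+ * pm8001_scan_finished() below gives them about one second (empirically
+ * enough) before declaring the scan complete.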
+ */
+void pm8001_scan_start(struct Scsi_Host *shost)
+{
+	int i;
+	struct pm8001_hba_info *pm8001_ha;
+	struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
+	pm8001_ha = sha->lldd_ha;
+	for (i = 0; i < pm8001_ha->chip->n_phy; ++i)
+		PM8001_CHIP_DISP->phy_start_req(pm8001_ha, i);
+}
+
+int pm8001_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+	/* give the phy enabling interrupt event time to come in (1s
+	 * is empirically about all it takes) */
+	if (time < HZ)
+		return 0;
+	/* Wait for discovery to finish */
+	scsi_flush_work(shost);
+	return 1;
+}
+
+/**
+ * pm8001_task_prep_smp - the dispatcher function, prepares data for an smp task
+ * @pm8001_ha: our hba card information
+ * @ccb: the ccb which is attached to the smp task
+ */
+static int pm8001_task_prep_smp(struct pm8001_hba_info *pm8001_ha,
+	struct pm8001_ccb_info *ccb)
+{
+	return PM8001_CHIP_DISP->smp_req(pm8001_ha, ccb);
+}
+
+u32 pm8001_get_ncq_tag(struct sas_task *task, u32 *tag)
+{
+	struct ata_queued_cmd *qc = task->uldd_task;
+	if (qc) {
+		if (qc->tf.command == ATA_CMD_FPDMA_WRITE ||
+			qc->tf.command == ATA_CMD_FPDMA_READ) {
+			*tag = qc->tag;
+			return 1;
+		}
+	}
+	return 0;
+}
+
+/**
+ * pm8001_task_prep_ata - the dispatcher function, prepares data for a sata task
+ * @pm8001_ha: our hba card information
+ * @ccb: the ccb which is attached to the sata task
+ */
+static int pm8001_task_prep_ata(struct pm8001_hba_info *pm8001_ha,
+	struct pm8001_ccb_info *ccb)
+{
+	return PM8001_CHIP_DISP->sata_req(pm8001_ha, ccb);
+}
+
+/**
+ * pm8001_task_prep_ssp_tm - the dispatcher function, prepares task management data
+ * @pm8001_ha: our hba card information
+ * @ccb: the ccb which is attached to the TM
+ * @tmf: the task management IU
+ */
+static int pm8001_task_prep_ssp_tm(struct pm8001_hba_info *pm8001_ha,
+	struct pm8001_ccb_info *ccb, struct pm8001_tmf_task *tmf)
+{
+	return PM8001_CHIP_DISP->ssp_tm_req(pm8001_ha, ccb, tmf);
+}
+
+/**
+ * pm8001_task_prep_ssp - the dispatcher function, prepares ssp data for an ssp task
+ * @pm8001_ha: our hba card information
+ * @ccb: the ccb which is attached to the ssp task
+ */
+static int pm8001_task_prep_ssp(struct pm8001_hba_info *pm8001_ha,
+	struct pm8001_ccb_info *ccb)
+{
+	return PM8001_CHIP_DISP->ssp_io_req(pm8001_ha, ccb);
+}
+int pm8001_slave_configure(struct scsi_device *sdev)
+{
+	struct domain_device *dev = sdev_to_domain_dev(sdev);
+	int ret = sas_slave_configure(sdev);
+	if (ret)
+		return ret;
+	if (dev_is_sata(dev)) {
+	#ifdef PM8001_DISABLE_NCQ
+		struct ata_port *ap = dev->sata_dev.ap;
+		struct ata_device *adev = ap->link.device;
+		adev->flags |= ATA_DFLAG_NCQ_OFF;
+		scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, 1);
+	#endif
+	}
+	return 0;
+}
+/**
+ * pm8001_task_exec - execute a task that comes from an upper level: send the
+ * command or data to the DMA area and then increase the CI. For
+ * queuecommand (ssp) the task comes from the upper layer, for an smp command
+ * it comes from libsas, and for an ata command it comes from libata.
+ * @task: the task to be executed.
+ * @num: if can_queue is greater than 1, tasks can be queued up; for an SMP
+ * task, we always execute one at a time.
+ * @gfp_flags: gfp_flags.
+ * @is_tmf: whether this is a task management task.
+ * @tmf: the task management IU + */ +#define DEV_IS_GONE(pm8001_dev) \ + ((!pm8001_dev || (pm8001_dev->dev_type == NO_DEVICE))) +static int pm8001_task_exec(struct sas_task *task, const int num, + gfp_t gfp_flags, int is_tmf, struct pm8001_tmf_task *tmf) +{ + struct domain_device *dev = task->dev; + struct pm8001_hba_info *pm8001_ha; + struct pm8001_device *pm8001_dev; + struct sas_task *t = task; + struct pm8001_ccb_info *ccb; + u32 tag = 0xdeadbeef, rc, n_elem = 0; + u32 n = num; + unsigned long flags = 0; + + if (!dev->port) { + struct task_status_struct *tsm = &t->task_status; + tsm->resp = SAS_TASK_UNDELIVERED; + tsm->stat = SAS_PHY_DOWN; + if (dev->dev_type != SATA_DEV) + t->task_done(t); + return 0; + } + pm8001_ha = pm8001_find_ha_by_dev(task->dev); + PM8001_IO_DBG(pm8001_ha, pm8001_printk("pm8001_task_exec device \n ")); + spin_lock_irqsave(&pm8001_ha->lock, flags); + do { + dev = t->dev; + pm8001_dev = dev->lldd_dev; + if (DEV_IS_GONE(pm8001_dev)) { + if (pm8001_dev) { + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("device %d not ready.\n", + pm8001_dev->device_id)); + } else { + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("device %016llx not " + "ready.\n", SAS_ADDR(dev->sas_addr))); + } + rc = SAS_PHY_DOWN; + goto out_done; + } + rc = pm8001_tag_alloc(pm8001_ha, &tag); + if (rc) + goto err_out; + ccb = &pm8001_ha->ccb_info[tag]; + + if (!sas_protocol_ata(t->task_proto)) { + if (t->num_scatter) { + n_elem = dma_map_sg(pm8001_ha->dev, + t->scatter, + t->num_scatter, + t->data_dir); + if (!n_elem) { + rc = -ENOMEM; + goto err_out; + } + } + } else { + n_elem = t->num_scatter; + } + + t->lldd_task = NULL; + ccb->n_elem = n_elem; + ccb->ccb_tag = tag; + ccb->task = t; + switch (t->task_proto) { + case SAS_PROTOCOL_SMP: + rc = pm8001_task_prep_smp(pm8001_ha, ccb); + break; + case SAS_PROTOCOL_SSP: + if (is_tmf) + rc = pm8001_task_prep_ssp_tm(pm8001_ha, + ccb, tmf); + else + rc = pm8001_task_prep_ssp(pm8001_ha, ccb); + break; + case SAS_PROTOCOL_SATA: + case SAS_PROTOCOL_STP: + case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP: + rc = pm8001_task_prep_ata(pm8001_ha, ccb); + break; + default: + dev_printk(KERN_ERR, pm8001_ha->dev, + "unknown sas_task proto: 0x%x\n", + t->task_proto); + rc = -EINVAL; + break; + } + + if (rc) { + PM8001_IO_DBG(pm8001_ha, + pm8001_printk("rc is %x\n", rc)); + goto err_out_tag; + } + t->lldd_task = ccb; + /* TODO: select normal or high priority */ + spin_lock(&t->task_state_lock); + t->task_state_flags |= SAS_TASK_AT_INITIATOR; + spin_unlock(&t->task_state_lock); + pm8001_dev->running_req++; + if (n > 1) + t = list_entry(t->list.next, struct sas_task, list); + } while (--n); + rc = 0; + goto out_done; + +err_out_tag: + pm8001_tag_free(pm8001_ha, tag); +err_out: + dev_printk(KERN_ERR, pm8001_ha->dev, "pm8001 exec failed[%d]!\n", rc); + if (!sas_protocol_ata(t->task_proto)) + if (n_elem) + dma_unmap_sg(pm8001_ha->dev, t->scatter, n_elem, + t->data_dir); +out_done: + spin_unlock_irqrestore(&pm8001_ha->lock, flags); + return rc; +} + +/** + * pm8001_queue_command - register for upper layer used, all IO commands sent + * to HBA are from this interface. + * @task: the task to be execute. + * @num: if can_queue great than 1, the task can be queued up. 
for an SMP task,
+ * we always execute one at a time
+ * @gfp_flags: gfp_flags
+ */
+int pm8001_queue_command(struct sas_task *task, const int num,
+		gfp_t gfp_flags)
+{
+	return pm8001_task_exec(task, num, gfp_flags, 0, NULL);
+}
+
+void pm8001_ccb_free(struct pm8001_hba_info *pm8001_ha, u32 ccb_idx)
+{
+	pm8001_tag_clear(pm8001_ha, ccb_idx);
+}
+
+/**
+ * pm8001_ccb_task_free - free the sg for an ssp or smp command, then free the ccb.
+ * @pm8001_ha: our hba card information
+ * @ccb: the ccb which is attached to the ssp task
+ * @task: the task to be freed.
+ * @ccb_idx: ccb index.
+ */
+void pm8001_ccb_task_free(struct pm8001_hba_info *pm8001_ha,
+	struct sas_task *task, struct pm8001_ccb_info *ccb, u32 ccb_idx)
+{
+	if (!ccb->task)
+		return;
+	if (!sas_protocol_ata(task->task_proto))
+		if (ccb->n_elem)
+			dma_unmap_sg(pm8001_ha->dev, task->scatter,
+				task->num_scatter, task->data_dir);
+
+	switch (task->task_proto) {
+	case SAS_PROTOCOL_SMP:
+		dma_unmap_sg(pm8001_ha->dev, &task->smp_task.smp_resp, 1,
+			PCI_DMA_FROMDEVICE);
+		dma_unmap_sg(pm8001_ha->dev, &task->smp_task.smp_req, 1,
+			PCI_DMA_TODEVICE);
+		break;
+
+	case SAS_PROTOCOL_SATA:
+	case SAS_PROTOCOL_STP:
+	case SAS_PROTOCOL_SSP:
+	default:
+		/* do nothing */
+		break;
+	}
+	task->lldd_task = NULL;
+	ccb->task = NULL;
+	ccb->ccb_tag = 0xFFFFFFFF;
+	pm8001_ccb_free(pm8001_ha, ccb_idx);
+}
+
+/**
+ * pm8001_alloc_dev - find an empty pm8001_device structure and return it
+ * for use.
+ * @pm8001_ha: our hba card information
+ */
+struct pm8001_device *pm8001_alloc_dev(struct pm8001_hba_info *pm8001_ha)
+{
+	u32 dev;
+	for (dev = 0; dev < PM8001_MAX_DEVICES; dev++) {
+		if (pm8001_ha->devices[dev].dev_type == NO_DEVICE) {
+			pm8001_ha->devices[dev].id = dev;
+			return &pm8001_ha->devices[dev];
+		}
+	}
+	if (dev == PM8001_MAX_DEVICES) {
+		PM8001_FAIL_DBG(pm8001_ha,
+			pm8001_printk("max support %d devices, ignore ..\n",
+			PM8001_MAX_DEVICES));
+	}
+	return NULL;
+}
+
+static void pm8001_free_dev(struct pm8001_device *pm8001_dev)
+{
+	u32 id = pm8001_dev->id;
+	memset(pm8001_dev, 0, sizeof(*pm8001_dev));
+	pm8001_dev->id = id;
+	pm8001_dev->dev_type = NO_DEVICE;
+	pm8001_dev->device_id = PM8001_MAX_DEVICES;
+	pm8001_dev->sas_device = NULL;
+}
+
+/**
+ * pm8001_dev_found_notify - when libsas finds a sas domain device, it tells
+ * the LLDD that a device has been found; the LLDD then registers the device
+ * with the HBA FW via the "OPC_INB_REG_DEV" command, after which the HBA
+ * assigns a device ID (derived from the device's sas address) and returns it
+ * to the LLDD. From then on we talk to the HBA FW using the assigned device
+ * ID rather than the sas address. This step is mandatory for our HBA,
+ * although it is optional for other HBA drivers.
+ * @dev: the device structure which the sas layer uses.
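+ *
+ * Registration is asynchronous: reg_dev_req() posts the request and this
+ * function sleeps on @dcompletion, which the completion path (not shown
+ * here) signals once the firmware has assigned the device ID.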
+ */
+static int pm8001_dev_found_notify(struct domain_device *dev)
+{
+	unsigned long flags = 0;
+	int res = 0;
+	struct pm8001_hba_info *pm8001_ha = NULL;
+	struct domain_device *parent_dev = dev->parent;
+	struct pm8001_device *pm8001_device;
+	DECLARE_COMPLETION_ONSTACK(completion);
+	u32 flag = 0;
+	pm8001_ha = pm8001_find_ha_by_dev(dev);
+	spin_lock_irqsave(&pm8001_ha->lock, flags);
+
+	pm8001_device = pm8001_alloc_dev(pm8001_ha);
+	if (!pm8001_device) {
+		res = -1;
+		goto found_out;
+	}
+	/* only dereference the new device after the NULL check above */
+	pm8001_device->sas_device = dev;
+	dev->lldd_dev = pm8001_device;
+	pm8001_device->dev_type = dev->dev_type;
+	pm8001_device->dcompletion = &completion;
+	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
+		int phy_id;
+		struct ex_phy *phy;
+		for (phy_id = 0; phy_id < parent_dev->ex_dev.num_phys;
+		phy_id++) {
+			phy = &parent_dev->ex_dev.ex_phy[phy_id];
+			if (SAS_ADDR(phy->attached_sas_addr)
+				== SAS_ADDR(dev->sas_addr)) {
+				pm8001_device->attached_phy = phy_id;
+				break;
+			}
+		}
+		if (phy_id == parent_dev->ex_dev.num_phys) {
+			PM8001_FAIL_DBG(pm8001_ha,
+			pm8001_printk("Error: no attached dev:%016llx"
+			" at ex:%016llx.\n", SAS_ADDR(dev->sas_addr),
+				SAS_ADDR(parent_dev->sas_addr)));
+			res = -1;
+		}
+	} else {
+		if (dev->dev_type == SATA_DEV) {
+			pm8001_device->attached_phy =
+				dev->rphy->identify.phy_identifier;
+			flag = 1; /* direct-attached sata */
+		}
+	}
+	/* register this device with the HBA */
+	PM8001_DISC_DBG(pm8001_ha, pm8001_printk("Found device\n"));
+	PM8001_CHIP_DISP->reg_dev_req(pm8001_ha, pm8001_device, flag);
+	spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+	wait_for_completion(&completion);
+	if (dev->dev_type == SAS_END_DEV)
+		msleep(50);
+	pm8001_ha->flags = PM8001F_RUN_TIME;
+	return 0;
+found_out:
+	spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+	return res;
+}
+
+int pm8001_dev_found(struct domain_device *dev)
+{
+	return pm8001_dev_found_notify(dev);
+}
+
+/**
+ * pm8001_alloc_task - allocate a task structure for TMF
+ */
+static struct sas_task *pm8001_alloc_task(void)
+{
+	struct sas_task *task = kzalloc(sizeof(*task), GFP_KERNEL);
+	if (task) {
+		INIT_LIST_HEAD(&task->list);
+		spin_lock_init(&task->task_state_lock);
+		task->task_state_flags = SAS_TASK_STATE_PENDING;
+		init_timer(&task->timer);
+		init_completion(&task->completion);
+	}
+	return task;
+}
+
+static void pm8001_free_task(struct sas_task *task)
+{
+	if (task) {
+		BUG_ON(!list_empty(&task->list));
+		kfree(task);
+	}
+}
+
+static void pm8001_task_done(struct sas_task *task)
+{
+	if (!del_timer(&task->timer))
+		return;
+	complete(&task->completion);
+}
+
+static void pm8001_tmf_timedout(unsigned long data)
+{
+	struct sas_task *task = (struct sas_task *)data;
+
+	task->task_state_flags |= SAS_TASK_STATE_ABORTED;
+	complete(&task->completion);
+}
+
+#define PM8001_TASK_TIMEOUT 20
+/**
+ * pm8001_exec_internal_tmf_task - when an error or exception happens, we may
+ * want to react, for example by aborting the task whose failure caused the
+ * exception; that is done through this function. Note that it goes through
+ * the same task-execute interface as normal I/O.
+ * @dev: the target device.
+ * @tmf: which task management function is requested.
+ * @para_len: length of @parameter.
+ * @parameter: ssp task parameter.
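+ *
+ * Up to three attempts are made; each one arms a PM8001_TASK_TIMEOUT-second
+ * timer whose handler, pm8001_tmf_timedout(), marks the task aborted and
+ * completes it, so the wait_for_completion() below is always bounded.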
+ */ +static int pm8001_exec_internal_tmf_task(struct domain_device *dev, + void *parameter, u32 para_len, struct pm8001_tmf_task *tmf) +{ + int res, retry; + struct sas_task *task = NULL; + struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev); + + for (retry = 0; retry < 3; retry++) { + task = pm8001_alloc_task(); + if (!task) + return -ENOMEM; + + task->dev = dev; + task->task_proto = dev->tproto; + memcpy(&task->ssp_task, parameter, para_len); + task->task_done = pm8001_task_done; + task->timer.data = (unsigned long)task; + task->timer.function = pm8001_tmf_timedout; + task->timer.expires = jiffies + PM8001_TASK_TIMEOUT*HZ; + add_timer(&task->timer); + + res = pm8001_task_exec(task, 1, GFP_KERNEL, 1, tmf); + + if (res) { + del_timer(&task->timer); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Executing internal task " + "failed\n")); + goto ex_err; + } + wait_for_completion(&task->completion); + res = -TMF_RESP_FUNC_FAILED; + /* Even TMF timed out, return direct. */ + if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) { + if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("TMF task[%x]timeout.\n", + tmf->tmf)); + goto ex_err; + } + } + + if (task->task_status.resp == SAS_TASK_COMPLETE && + task->task_status.stat == SAM_GOOD) { + res = TMF_RESP_FUNC_COMPLETE; + break; + } + + if (task->task_status.resp == SAS_TASK_COMPLETE && + task->task_status.stat == SAS_DATA_UNDERRUN) { + /* no error, but return the number of bytes of + * underrun */ + res = task->task_status.residual; + break; + } + + if (task->task_status.resp == SAS_TASK_COMPLETE && + task->task_status.stat == SAS_DATA_OVERRUN) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Blocked task error.\n")); + res = -EMSGSIZE; + break; + } else { + PM8001_IO_DBG(pm8001_ha, + pm8001_printk(" Task to dev %016llx response: 0x%x" + "status 0x%x\n", + SAS_ADDR(dev->sas_addr), + task->task_status.resp, + task->task_status.stat)); + pm8001_free_task(task); + task = NULL; + } + } +ex_err: + BUG_ON(retry == 3 && task != NULL); + if (task != NULL) + pm8001_free_task(task); + return res; +} + +static int +pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha, + struct pm8001_device *pm8001_dev, struct domain_device *dev, u32 flag, + u32 task_tag) +{ + int res, retry; + u32 rc, ccb_tag; + struct pm8001_ccb_info *ccb; + struct sas_task *task = NULL; + + for (retry = 0; retry < 3; retry++) { + task = pm8001_alloc_task(); + if (!task) + return -ENOMEM; + + task->dev = dev; + task->task_proto = dev->tproto; + task->task_done = pm8001_task_done; + task->timer.data = (unsigned long)task; + task->timer.function = pm8001_tmf_timedout; + task->timer.expires = jiffies + PM8001_TASK_TIMEOUT*HZ; + add_timer(&task->timer); + + rc = pm8001_tag_alloc(pm8001_ha, &ccb_tag); + if (rc) + return rc; + ccb = &pm8001_ha->ccb_info[ccb_tag]; + ccb->device = pm8001_dev; + ccb->ccb_tag = ccb_tag; + ccb->task = task; + + res = PM8001_CHIP_DISP->task_abort(pm8001_ha, + pm8001_dev, flag, task_tag, ccb_tag); + + if (res) { + del_timer(&task->timer); + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("Executing internal task " + "failed\n")); + goto ex_err; + } + wait_for_completion(&task->completion); + res = TMF_RESP_FUNC_FAILED; + /* Even TMF timed out, return direct. 
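The timer
+		 * handler has already set SAS_TASK_STATE_ABORTED; if
+		 * SAS_TASK_STATE_DONE is not also set, the firmware never
+		 * answered, so bail out instead of retrying.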
*/ + if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) { + if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) { + PM8001_FAIL_DBG(pm8001_ha, + pm8001_printk("TMF task timeout.\n")); + goto ex_err; + } + } + + if (task->task_status.resp == SAS_TASK_COMPLETE && + task->task_status.stat == SAM_GOOD) { + res = TMF_RESP_FUNC_COMPLETE; + break; + + } else { + PM8001_IO_DBG(pm8001_ha, + pm8001_printk(" Task to dev %016llx response: " + "0x%x status 0x%x\n", + SAS_ADDR(dev->sas_addr), + task->task_status.resp, + task->task_status.stat)); + pm8001_free_task(task); + task = NULL; + } + } +ex_err: + BUG_ON(retry == 3 && task != NULL); + if (task != NULL) + pm8001_free_task(task); + return res; +} + +/** + * pm8001_dev_gone_notify - see the comments for "pm8001_dev_found_notify" + * @dev: the device structure which sas layer used. + */ +static void pm8001_dev_gone_notify(struct domain_device *dev) +{ + unsigned long flags = 0; + u32 tag; + struct pm8001_hba_info *pm8001_ha; + struct pm8001_device *pm8001_dev = dev->lldd_dev; + u32 device_id = pm8001_dev->device_id; + pm8001_ha = pm8001_find_ha_by_dev(dev); + spin_lock_irqsave(&pm8001_ha->lock, flags); + pm8001_tag_alloc(pm8001_ha, &tag); + if (pm8001_dev) { + PM8001_DISC_DBG(pm8001_ha, + pm8001_printk("found dev[%d:%x] is gone.\n", + pm8001_dev->device_id, pm8001_dev->dev_type)); + if (pm8001_dev->running_req) { + spin_unlock_irqrestore(&pm8001_ha->lock, flags); + pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev , + dev, 1, 0); + spin_lock_irqsave(&pm8001_ha->lock, flags); + } + PM8001_CHIP_DISP->dereg_dev_req(pm8001_ha, device_id); + pm8001_free_dev(pm8001_dev); + } else { + PM8001_DISC_DBG(pm8001_ha, + pm8001_printk("Found dev has gone.\n")); + } + dev->lldd_dev = NULL; + spin_unlock_irqrestore(&pm8001_ha->lock, flags); +} + +void pm8001_dev_gone(struct domain_device *dev) +{ + pm8001_dev_gone_notify(dev); +} + +static int pm8001_issue_ssp_tmf(struct domain_device *dev, + u8 *lun, struct pm8001_tmf_task *tmf) +{ + struct sas_ssp_task ssp_task; + if (!(dev->tproto & SAS_PROTOCOL_SSP)) + return TMF_RESP_FUNC_ESUPP; + + strncpy((u8 *)&ssp_task.LUN, lun, 8); + return pm8001_exec_internal_tmf_task(dev, &ssp_task, sizeof(ssp_task), + tmf); +} + +/** + * Standard mandates link reset for ATA (type 0) and hard reset for + * SSP (type 1) , only for RECOVERY + */ +int pm8001_I_T_nexus_reset(struct domain_device *dev) +{ + int rc = TMF_RESP_FUNC_FAILED; + struct pm8001_device *pm8001_dev; + struct pm8001_hba_info *pm8001_ha; + struct sas_phy *phy; + if (!dev || !dev->lldd_dev) + return -1; + + pm8001_dev = dev->lldd_dev; + pm8001_ha = pm8001_find_ha_by_dev(dev); + phy = sas_find_local_phy(dev); + + if (dev_is_sata(dev)) { + DECLARE_COMPLETION_ONSTACK(completion_setstate); + rc = sas_phy_reset(phy, 1); + msleep(2000); + rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev , + dev, 1, 0); + pm8001_dev->setds_completion = &completion_setstate; + rc = PM8001_CHIP_DISP->set_dev_state_req(pm8001_ha, + pm8001_dev, 0x01); + wait_for_completion(&completion_setstate); + } else{ + rc = sas_phy_reset(phy, 1); + msleep(2000); + } + PM8001_EH_DBG(pm8001_ha, pm8001_printk(" for device[%x]:rc=%d\n", + pm8001_dev->device_id, rc)); + return rc; +} + +/* mandatory SAM-3, the task reset the specified LUN*/ +int pm8001_lu_reset(struct domain_device *dev, u8 *lun) +{ + int rc = TMF_RESP_FUNC_FAILED; + struct pm8001_tmf_task tmf_task; + struct pm8001_device *pm8001_dev = dev->lldd_dev; + struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev); + if 
(dev_is_sata(dev)) { + struct sas_phy *phy = sas_find_local_phy(dev); + rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev , + dev, 1, 0); + rc = sas_phy_reset(phy, 1); + rc = PM8001_CHIP_DISP->set_dev_state_req(pm8001_ha, + pm8001_dev, 0x01); + msleep(2000); + } else { + tmf_task.tmf = TMF_LU_RESET; + rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task); + } + /* If failed, fall-through I_T_Nexus reset */ + PM8001_EH_DBG(pm8001_ha, pm8001_printk("for device[%x]:rc=%d\n", + pm8001_dev->device_id, rc)); + return rc; +} + +/* optional SAM-3 */ +int pm8001_query_task(struct sas_task *task) +{ + u32 tag = 0xdeadbeef; + int i = 0; + struct scsi_lun lun; + struct pm8001_tmf_task tmf_task; + int rc = TMF_RESP_FUNC_FAILED; + if (unlikely(!task || !task->lldd_task || !task->dev)) + return rc; + + if (task->task_proto & SAS_PROTOCOL_SSP) { + struct scsi_cmnd *cmnd = task->uldd_task; + struct domain_device *dev = task->dev; + struct pm8001_hba_info *pm8001_ha = + pm8001_find_ha_by_dev(dev); + + int_to_scsilun(cmnd->device->lun, &lun); + rc = pm8001_find_tag(task, &tag); + if (rc == 0) { + rc = TMF_RESP_FUNC_FAILED; + return rc; + } + PM8001_EH_DBG(pm8001_ha, pm8001_printk("Query:[")); + for (i = 0; i < 16; i++) + printk(KERN_INFO "%02x ", cmnd->cmnd[i]); + printk(KERN_INFO "]\n"); + tmf_task.tmf = TMF_QUERY_TASK; + tmf_task.tag_of_task_to_be_managed = tag; + + rc = pm8001_issue_ssp_tmf(dev, lun.scsi_lun, &tmf_task); + switch (rc) { + /* The task is still in Lun, release it then */ + case TMF_RESP_FUNC_SUCC: + PM8001_EH_DBG(pm8001_ha, + pm8001_printk("The task is still in Lun \n")); + /* The task is not in Lun or failed, reset the phy */ + case TMF_RESP_FUNC_FAILED: + case TMF_RESP_FUNC_COMPLETE: + PM8001_EH_DBG(pm8001_ha, + pm8001_printk("The task is not in Lun or failed," + " reset the phy \n")); + break; + } + } + pm8001_printk(":rc= %d\n", rc); + return rc; +} + +/* mandatory SAM-3, still need free task/ccb info, abord the specified task */ +int pm8001_abort_task(struct sas_task *task) +{ + unsigned long flags; + u32 tag = 0xdeadbeef; + u32 device_id; + struct domain_device *dev ; + struct pm8001_hba_info *pm8001_ha = NULL; + struct pm8001_ccb_info *ccb; + struct scsi_lun lun; + struct pm8001_device *pm8001_dev; + struct pm8001_tmf_task tmf_task; + int rc = TMF_RESP_FUNC_FAILED; + if (unlikely(!task || !task->lldd_task || !task->dev)) + return rc; + spin_lock_irqsave(&task->task_state_lock, flags); + if (task->task_state_flags & SAS_TASK_STATE_DONE) { + spin_unlock_irqrestore(&task->task_state_lock, flags); + rc = TMF_RESP_FUNC_COMPLETE; + goto out; + } + spin_unlock_irqrestore(&task->task_state_lock, flags); + if (task->task_proto & SAS_PROTOCOL_SSP) { + struct scsi_cmnd *cmnd = task->uldd_task; + dev = task->dev; + ccb = task->lldd_task; + pm8001_dev = dev->lldd_dev; + pm8001_ha = pm8001_find_ha_by_dev(dev); + int_to_scsilun(cmnd->device->lun, &lun); + rc = pm8001_find_tag(task, &tag); + if (rc == 0) { + printk(KERN_INFO "No such tag in %s\n", __func__); + rc = TMF_RESP_FUNC_FAILED; + return rc; + } + device_id = pm8001_dev->device_id; + PM8001_EH_DBG(pm8001_ha, + pm8001_printk("abort io to device_id = %d\n", device_id)); + tmf_task.tmf = TMF_ABORT_TASK; + tmf_task.tag_of_task_to_be_managed = tag; + rc = pm8001_issue_ssp_tmf(dev, lun.scsi_lun, &tmf_task); + rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev, + pm8001_dev->sas_device, 0, tag); + } else if (task->task_proto & SAS_PROTOCOL_SATA || + task->task_proto & SAS_PROTOCOL_STP) { + dev = task->dev; + pm8001_dev = dev->lldd_dev; + 
pm8001_ha = pm8001_find_ha_by_dev(dev); + rc = pm8001_find_tag(task, &tag); + if (rc == 0) { + printk(KERN_INFO "No such tag in %s\n", __func__); + rc = TMF_RESP_FUNC_FAILED; + return rc; + } + rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev, + pm8001_dev->sas_device, 0, tag); + } else if (task->task_proto & SAS_PROTOCOL_SMP) { + /* SMP */ + dev = task->dev; + pm8001_dev = dev->lldd_dev; + pm8001_ha = pm8001_find_ha_by_dev(dev); + rc = pm8001_find_tag(task, &tag); + if (rc == 0) { + printk(KERN_INFO "No such tag in %s\n", __func__); + rc = TMF_RESP_FUNC_FAILED; + return rc; + } + rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev, + pm8001_dev->sas_device, 0, tag); + + } +out: + if (rc != TMF_RESP_FUNC_COMPLETE) + pm8001_printk("rc= %d\n", rc); + return rc; +} + +int pm8001_abort_task_set(struct domain_device *dev, u8 *lun) +{ + int rc = TMF_RESP_FUNC_FAILED; + struct pm8001_tmf_task tmf_task; + + tmf_task.tmf = TMF_ABORT_TASK_SET; + rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task); + return rc; +} + +int pm8001_clear_aca(struct domain_device *dev, u8 *lun) +{ + int rc = TMF_RESP_FUNC_FAILED; + struct pm8001_tmf_task tmf_task; + + tmf_task.tmf = TMF_CLEAR_ACA; + rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task); + + return rc; +} + +int pm8001_clear_task_set(struct domain_device *dev, u8 *lun) +{ + int rc = TMF_RESP_FUNC_FAILED; + struct pm8001_tmf_task tmf_task; + struct pm8001_device *pm8001_dev = dev->lldd_dev; + struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev); + + PM8001_EH_DBG(pm8001_ha, + pm8001_printk("I_T_L_Q clear task set[%x]\n", + pm8001_dev->device_id)); + tmf_task.tmf = TMF_CLEAR_TASK_SET; + rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task); + return rc; +} + diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h new file mode 100644 index 000000000000..14c676bbb533 --- /dev/null +++ b/drivers/scsi/pm8001/pm8001_sas.h @@ -0,0 +1,480 @@ +/* + * PMC-Sierra SPC 8001 SAS/SATA based host adapters driver + * + * Copyright (c) 2008-2009 USI Co., Ltd. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification. + * 2. Redistributions in binary form must reproduce at minimum a disclaimer + * substantially similar to the "NO WARRANTY" disclaimer below + * ("Disclaimer") and any redistribution must be conditioned upon + * including a substantially similar Disclaimer requirement for further + * binary redistribution. + * 3. Neither the names of the above-listed copyright holders nor the names + * of any contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * Alternatively, this software may be distributed under the terms of the + * GNU General Public License ("GPL") version 2 as published by the Free + * Software Foundation. + * + * NO WARRANTY + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING + * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGES. + * + */ + +#ifndef _PM8001_SAS_H_ +#define _PM8001_SAS_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "pm8001_defs.h" + +#define DRV_NAME "pm8001" +#define DRV_VERSION "0.1.36" +#define PM8001_FAIL_LOGGING 0x01 /* libsas EH function logging */ +#define PM8001_INIT_LOGGING 0x02 /* driver init logging */ +#define PM8001_DISC_LOGGING 0x04 /* discovery layer logging */ +#define PM8001_IO_LOGGING 0x08 /* I/O path logging */ +#define PM8001_EH_LOGGING 0x10 /* Error message logging */ +#define PM8001_IOCTL_LOGGING 0x20 /* IOCTL message logging */ +#define PM8001_MSG_LOGGING 0x40 /* misc message logging */ +#define pm8001_printk(format, arg...) printk(KERN_INFO "%s %d:" format,\ + __func__, __LINE__, ## arg) +#define PM8001_CHECK_LOGGING(HBA, LEVEL, CMD) \ +do { \ + if (unlikely(HBA->logging_level & LEVEL)) \ + do { \ + CMD; \ + } while (0); \ +} while (0); + +#define PM8001_EH_DBG(HBA, CMD) \ + PM8001_CHECK_LOGGING(HBA, PM8001_EH_LOGGING, CMD) + +#define PM8001_INIT_DBG(HBA, CMD) \ + PM8001_CHECK_LOGGING(HBA, PM8001_INIT_LOGGING, CMD) + +#define PM8001_DISC_DBG(HBA, CMD) \ + PM8001_CHECK_LOGGING(HBA, PM8001_DISC_LOGGING, CMD) + +#define PM8001_IO_DBG(HBA, CMD) \ + PM8001_CHECK_LOGGING(HBA, PM8001_IO_LOGGING, CMD) + +#define PM8001_FAIL_DBG(HBA, CMD) \ + PM8001_CHECK_LOGGING(HBA, PM8001_FAIL_LOGGING, CMD) + +#define PM8001_IOCTL_DBG(HBA, CMD) \ + PM8001_CHECK_LOGGING(HBA, PM8001_IOCTL_LOGGING, CMD) + +#define PM8001_MSG_DBG(HBA, CMD) \ + PM8001_CHECK_LOGGING(HBA, PM8001_MSG_LOGGING, CMD) + + +#define PM8001_USE_TASKLET +#define PM8001_USE_MSIX + + +#define DEV_IS_EXPANDER(type) ((type == EDGE_DEV) || (type == FANOUT_DEV)) + +#define PM8001_NAME_LENGTH 32/* generic length of strings */ +extern struct list_head hba_list; +extern const struct pm8001_dispatch pm8001_8001_dispatch; + +struct pm8001_hba_info; +struct pm8001_ccb_info; +struct pm8001_device; +struct pm8001_tmf_task; +struct pm8001_dispatch { + char *name; + int (*chip_init)(struct pm8001_hba_info *pm8001_ha); + int (*chip_soft_rst)(struct pm8001_hba_info *pm8001_ha, u32 signature); + void (*chip_rst)(struct pm8001_hba_info *pm8001_ha); + int (*chip_ioremap)(struct pm8001_hba_info *pm8001_ha); + void (*chip_iounmap)(struct pm8001_hba_info *pm8001_ha); + void (*isr)(struct pm8001_hba_info *pm8001_ha); + u32 (*is_our_interupt)(struct pm8001_hba_info *pm8001_ha); + int (*isr_process_oq)(struct pm8001_hba_info *pm8001_ha); + void (*interrupt_enable)(struct pm8001_hba_info *pm8001_ha); + void (*interrupt_disable)(struct pm8001_hba_info *pm8001_ha); + void (*make_prd)(struct scatterlist *scatter, int nr, void *prd); + int (*smp_req)(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb); + int (*ssp_io_req)(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb); + int (*sata_req)(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb); + int (*phy_start_req)(struct pm8001_hba_info *pm8001_ha, u8 phy_id); + 
int (*phy_stop_req)(struct pm8001_hba_info *pm8001_ha, u8 phy_id); + int (*reg_dev_req)(struct pm8001_hba_info *pm8001_ha, + struct pm8001_device *pm8001_dev, u32 flag); + int (*dereg_dev_req)(struct pm8001_hba_info *pm8001_ha, u32 device_id); + int (*phy_ctl_req)(struct pm8001_hba_info *pm8001_ha, + u32 phy_id, u32 phy_op); + int (*task_abort)(struct pm8001_hba_info *pm8001_ha, + struct pm8001_device *pm8001_dev, u8 flag, u32 task_tag, + u32 cmd_tag); + int (*ssp_tm_req)(struct pm8001_hba_info *pm8001_ha, + struct pm8001_ccb_info *ccb, struct pm8001_tmf_task *tmf); + int (*get_nvmd_req)(struct pm8001_hba_info *pm8001_ha, void *payload); + int (*set_nvmd_req)(struct pm8001_hba_info *pm8001_ha, void *payload); + int (*fw_flash_update_req)(struct pm8001_hba_info *pm8001_ha, + void *payload); + int (*set_dev_state_req)(struct pm8001_hba_info *pm8001_ha, + struct pm8001_device *pm8001_dev, u32 state); + int (*sas_diag_start_end_req)(struct pm8001_hba_info *pm8001_ha, + u32 state); + int (*sas_diag_execute_req)(struct pm8001_hba_info *pm8001_ha, + u32 state); +}; + +struct pm8001_chip_info { + u32 n_phy; + const struct pm8001_dispatch *dispatch; +}; +#define PM8001_CHIP_DISP (pm8001_ha->chip->dispatch) + +struct pm8001_port { + struct asd_sas_port sas_port; +}; + +struct pm8001_phy { + struct pm8001_hba_info *pm8001_ha; + struct pm8001_port *port; + struct asd_sas_phy sas_phy; + struct sas_identify identify; + struct scsi_device *sdev; + u64 dev_sas_addr; + u32 phy_type; + struct completion *enable_completion; + u32 frame_rcvd_size; + u8 frame_rcvd[32]; + u8 phy_attached; + u8 phy_state; + enum sas_linkrate minimum_linkrate; + enum sas_linkrate maximum_linkrate; +}; + +struct pm8001_device { + enum sas_dev_type dev_type; + struct domain_device *sas_device; + u32 attached_phy; + u32 id; + struct completion *dcompletion; + struct completion *setds_completion; + u32 device_id; + u32 running_req; +}; + +struct pm8001_prd_imt { + __le32 len; + __le32 e; +}; + +struct pm8001_prd { + __le64 addr; /* 64-bit buffer address */ + struct pm8001_prd_imt im_len; /* 64-bit length */ +} __attribute__ ((packed)); +/* + * CCB(Command Control Block) + */ +struct pm8001_ccb_info { + struct list_head entry; + struct sas_task *task; + u32 n_elem; + u32 ccb_tag; + dma_addr_t ccb_dma_handle; + struct pm8001_device *device; + struct pm8001_prd buf_prd[PM8001_MAX_DMA_SG]; + struct fw_control_ex *fw_control_context; +}; + +struct mpi_mem { + void *virt_ptr; + dma_addr_t phys_addr; + u32 phys_addr_hi; + u32 phys_addr_lo; + u32 total_len; + u32 num_elements; + u32 element_size; + u32 alignment; +}; + +struct mpi_mem_req { + /* The number of element in the mpiMemory array */ + u32 count; + /* The array of structures that define memroy regions*/ + struct mpi_mem region[USI_MAX_MEMCNT]; +}; + +struct main_cfg_table { + u32 signature; + u32 interface_rev; + u32 firmware_rev; + u32 max_out_io; + u32 max_sgl; + u32 ctrl_cap_flag; + u32 gst_offset; + u32 inbound_queue_offset; + u32 outbound_queue_offset; + u32 inbound_q_nppd_hppd; + u32 outbound_hw_event_pid0_3; + u32 outbound_hw_event_pid4_7; + u32 outbound_ncq_event_pid0_3; + u32 outbound_ncq_event_pid4_7; + u32 outbound_tgt_ITNexus_event_pid0_3; + u32 outbound_tgt_ITNexus_event_pid4_7; + u32 outbound_tgt_ssp_event_pid0_3; + u32 outbound_tgt_ssp_event_pid4_7; + u32 outbound_tgt_smp_event_pid0_3; + u32 outbound_tgt_smp_event_pid4_7; + u32 upper_event_log_addr; + u32 lower_event_log_addr; + u32 event_log_size; + u32 event_log_option; + u32 upper_iop_event_log_addr; + u32 
lower_iop_event_log_addr; + u32 iop_event_log_size; + u32 iop_event_log_option; + u32 fatal_err_interrupt; + u32 fatal_err_dump_offset0; + u32 fatal_err_dump_length0; + u32 fatal_err_dump_offset1; + u32 fatal_err_dump_length1; + u32 hda_mode_flag; + u32 anolog_setup_table_offset; +}; +struct general_status_table { + u32 gst_len_mpistate; + u32 iq_freeze_state0; + u32 iq_freeze_state1; + u32 msgu_tcnt; + u32 iop_tcnt; + u32 reserved; + u32 phy_state[8]; + u32 reserved1; + u32 reserved2; + u32 reserved3; + u32 recover_err_info[8]; +}; +struct inbound_queue_table { + u32 element_pri_size_cnt; + u32 upper_base_addr; + u32 lower_base_addr; + u32 ci_upper_base_addr; + u32 ci_lower_base_addr; + u32 pi_pci_bar; + u32 pi_offset; + u32 total_length; + void *base_virt; + void *ci_virt; + u32 reserved; + __le32 consumer_index; + u32 producer_idx; +}; +struct outbound_queue_table { + u32 element_size_cnt; + u32 upper_base_addr; + u32 lower_base_addr; + void *base_virt; + u32 pi_upper_base_addr; + u32 pi_lower_base_addr; + u32 ci_pci_bar; + u32 ci_offset; + u32 total_length; + void *pi_virt; + u32 interrup_vec_cnt_delay; + u32 dinterrup_to_pci_offset; + __le32 producer_index; + u32 consumer_idx; +}; +struct pm8001_hba_memspace { + void __iomem *memvirtaddr; + u64 membase; + u32 memsize; +}; +struct pm8001_hba_info { + char name[PM8001_NAME_LENGTH]; + struct list_head list; + unsigned long flags; + spinlock_t lock;/* host-wide lock */ + struct pci_dev *pdev;/* our device */ + struct device *dev; + struct pm8001_hba_memspace io_mem[6]; + struct mpi_mem_req memoryMap; + void __iomem *msg_unit_tbl_addr;/*Message Unit Table Addr*/ + void __iomem *main_cfg_tbl_addr;/*Main Config Table Addr*/ + void __iomem *general_stat_tbl_addr;/*General Status Table Addr*/ + void __iomem *inbnd_q_tbl_addr;/*Inbound Queue Config Table Addr*/ + void __iomem *outbnd_q_tbl_addr;/*Outbound Queue Config Table Addr*/ + struct main_cfg_table main_cfg_tbl; + struct general_status_table gs_tbl; + struct inbound_queue_table inbnd_q_tbl[PM8001_MAX_INB_NUM]; + struct outbound_queue_table outbnd_q_tbl[PM8001_MAX_OUTB_NUM]; + u8 sas_addr[SAS_ADDR_SIZE]; + struct sas_ha_struct *sas;/* SCSI/SAS glue */ + struct Scsi_Host *shost; + u32 chip_id; + const struct pm8001_chip_info *chip; + struct completion *nvmd_completion; + int tags_num; + unsigned long *tags; + struct pm8001_phy phy[PM8001_MAX_PHYS]; + struct pm8001_port port[PM8001_MAX_PHYS]; + u32 id; + u32 irq; + struct pm8001_device *devices; + struct pm8001_ccb_info *ccb_info; +#ifdef PM8001_USE_MSIX + struct msix_entry msix_entries[16];/*for msi-x interrupt*/ + int number_of_intr;/*will be used in remove()*/ +#endif +#ifdef PM8001_USE_TASKLET + struct tasklet_struct tasklet; +#endif + struct list_head wq_list; + u32 logging_level; + u32 fw_status; + const struct firmware *fw_image; +}; + +struct pm8001_wq { + struct delayed_work work_q; + struct pm8001_hba_info *pm8001_ha; + void *data; + int handler; + struct list_head entry; +}; + +struct pm8001_fw_image_header { + u8 vender_id[8]; + u8 product_id; + u8 hardware_rev; + u8 dest_partition; + u8 reserved; + u8 fw_rev[4]; + __be32 image_length; + __be32 image_crc; + __be32 startup_entry; +} __attribute__((packed, aligned(4))); + +/* define task management IU */ +struct pm8001_tmf_task { + u8 tmf; + u32 tag_of_task_to_be_managed; +}; +/** + * FW Flash Update status values + */ +#define FLASH_UPDATE_COMPLETE_PENDING_REBOOT 0x00 +#define FLASH_UPDATE_IN_PROGRESS 0x01 +#define FLASH_UPDATE_HDR_ERR 0x02 +#define FLASH_UPDATE_OFFSET_ERR 0x03 
+#define FLASH_UPDATE_CRC_ERR 0x04 +#define FLASH_UPDATE_LENGTH_ERR 0x05 +#define FLASH_UPDATE_HW_ERR 0x06 +#define FLASH_UPDATE_DNLD_NOT_SUPPORTED 0x10 +#define FLASH_UPDATE_DISABLED 0x11 + +/** + * brief param structure for firmware flash update. + */ +struct fw_flash_updata_info { + u32 cur_image_offset; + u32 cur_image_len; + u32 total_image_len; + struct pm8001_prd sgl; +}; + +struct fw_control_info { + u32 retcode;/*ret code (status)*/ + u32 phase;/*ret code phase*/ + u32 phaseCmplt;/*percent complete for the current + update phase */ + u32 version;/*Hex encoded firmware version number*/ + u32 offset;/*Used for downloading firmware */ + u32 len; /*len of buffer*/ + u32 size;/* Used in OS VPD and Trace get size + operations.*/ + u32 reserved;/* padding required for 64 bit + alignment */ + u8 buffer[1];/* Start of buffer */ +}; +struct fw_control_ex { + struct fw_control_info *fw_control; + void *buffer;/* keep buffer pointer to be + freed when the responce comes*/ + void *virtAddr;/* keep virtual address of the data */ + void *usrAddr;/* keep virtual address of the + user data */ + dma_addr_t phys_addr; + u32 len; /* len of buffer */ + void *payload; /* pointer to IOCTL Payload */ + u8 inProgress;/*if 1 - the IOCTL request is in + progress */ + void *param1; + void *param2; + void *param3; +}; + +/******************** function prototype *********************/ +int pm8001_tag_alloc(struct pm8001_hba_info *pm8001_ha, u32 *tag_out); +void pm8001_tag_init(struct pm8001_hba_info *pm8001_ha); +u32 pm8001_get_ncq_tag(struct sas_task *task, u32 *tag); +void pm8001_ccb_free(struct pm8001_hba_info *pm8001_ha, u32 ccb_idx); +void pm8001_ccb_task_free(struct pm8001_hba_info *pm8001_ha, + struct sas_task *task, struct pm8001_ccb_info *ccb, u32 ccb_idx); +int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func, + void *funcdata); +int pm8001_slave_alloc(struct scsi_device *scsi_dev); +int pm8001_slave_configure(struct scsi_device *sdev); +void pm8001_scan_start(struct Scsi_Host *shost); +int pm8001_scan_finished(struct Scsi_Host *shost, unsigned long time); +int pm8001_queue_command(struct sas_task *task, const int num, + gfp_t gfp_flags); +int pm8001_abort_task(struct sas_task *task); +int pm8001_abort_task_set(struct domain_device *dev, u8 *lun); +int pm8001_clear_aca(struct domain_device *dev, u8 *lun); +int pm8001_clear_task_set(struct domain_device *dev, u8 *lun); +int pm8001_dev_found(struct domain_device *dev); +void pm8001_dev_gone(struct domain_device *dev); +int pm8001_lu_reset(struct domain_device *dev, u8 *lun); +int pm8001_I_T_nexus_reset(struct domain_device *dev); +int pm8001_query_task(struct sas_task *task); +int pm8001_mem_alloc(struct pci_dev *pdev, void **virt_addr, + dma_addr_t *pphys_addr, u32 *pphys_addr_hi, u32 *pphys_addr_lo, + u32 mem_size, u32 align); + + +/* ctl shared API */ +extern struct device_attribute *pm8001_host_attrs[]; + +#endif + diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index b0f0f3851cd4..161fadb291d1 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -1586,6 +1586,8 @@ #define PCI_VENDOR_ID_COMPEX 0x11f6 #define PCI_DEVICE_ID_COMPEX_ENET100VG4 0x0112 +#define PCI_VENDOR_ID_PMC_Sierra 0x11f8 + #define PCI_VENDOR_ID_RP 0x11fe #define PCI_DEVICE_ID_RP32INTF 0x0001 #define PCI_DEVICE_ID_RP8INTF 0x0002 -- cgit v1.2.3 From 851b164231d1117673aa44c00c7622e48b7dfcf4 Mon Sep 17 00:00:00 2001 From: Alok Kataria Date: Tue, 13 Oct 2009 14:51:05 -0700 Subject: [SCSI] vmw_pvscsi: SCSI driver for VMware's virtual HBA. 
This is a driver for VMware's paravirtualized SCSI device, which should improve disk performance for guests running under control of VMware hypervisors that support such devices. Signed-off-by: Alok N Kataria Signed-off-by: James Bottomley --- MAINTAINERS | 8 + drivers/scsi/Kconfig | 8 + drivers/scsi/Makefile | 1 + drivers/scsi/vmw_pvscsi.c | 1407 +++++++++++++++++++++++++++++++++++++++++++++ drivers/scsi/vmw_pvscsi.h | 397 +++++++++++++ 5 files changed, 1821 insertions(+) create mode 100644 drivers/scsi/vmw_pvscsi.c create mode 100644 drivers/scsi/vmw_pvscsi.h (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 016411cadc9a..d1a5cfd2c379 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5696,6 +5696,14 @@ L: netdev@vger.kernel.org S: Maintained F: drivers/net/vmxnet3/ +VMware PVSCSI driver +M: Alok Kataria +M: VMware PV-Drivers +L: linux-scsi@vger.kernel.org +S: Maintained +F: drivers/scsi/vmw_pvscsi.c +F: drivers/scsi/vmw_pvscsi.h + VOLTAGE AND CURRENT REGULATOR FRAMEWORK M: Liam Girdwood M: Mark Brown diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 2e4f7d0ee639..1895259fff0f 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -621,6 +621,14 @@ config SCSI_FLASHPOINT substantial, so users of MultiMaster Host Adapters may not wish to include it. +config VMWARE_PVSCSI + tristate "VMware PVSCSI driver support" + depends on PCI && SCSI && X86 + help + This driver supports VMware's para virtualized SCSI HBA. + To compile this driver as a module, choose M here: the + module will be called vmw_pvscsi. + config LIBFC tristate "LibFC module" select SCSI_FC_ATTRS diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index 53b1dac7e7d9..5026bdc7b2b7 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -134,6 +134,7 @@ obj-$(CONFIG_SCSI_CXGB3_ISCSI) += libiscsi.o libiscsi_tcp.o cxgb3i/ obj-$(CONFIG_SCSI_BNX2_ISCSI) += libiscsi.o bnx2i/ obj-$(CONFIG_BE2ISCSI) += libiscsi.o be2iscsi/ obj-$(CONFIG_SCSI_PMCRAID) += pmcraid.o +obj-$(CONFIG_VMWARE_PVSCSI) += vmw_pvscsi.o obj-$(CONFIG_ARM) += arm/ diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c new file mode 100644 index 000000000000..d2604c813a20 --- /dev/null +++ b/drivers/scsi/vmw_pvscsi.c @@ -0,0 +1,1407 @@ +/* + * Linux driver for VMware's para-virtualized SCSI HBA. + * + * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; version 2 of the License and no later version. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or + * NON INFRINGEMENT. See the GNU General Public License for more + * details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 
+ *
+ * Maintained by: Alok N Kataria
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/pci.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+
+#include "vmw_pvscsi.h"
+
+#define PVSCSI_LINUX_DRIVER_DESC "VMware PVSCSI driver"
+
+MODULE_DESCRIPTION(PVSCSI_LINUX_DRIVER_DESC);
+MODULE_AUTHOR("VMware, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(PVSCSI_DRIVER_VERSION_STRING);
+
+#define PVSCSI_DEFAULT_NUM_PAGES_PER_RING 8
+#define PVSCSI_DEFAULT_NUM_PAGES_MSG_RING 1
+#define PVSCSI_DEFAULT_QUEUE_DEPTH 64
+#define SGL_SIZE PAGE_SIZE
+
+struct pvscsi_sg_list {
+	struct PVSCSISGElement sge[PVSCSI_MAX_NUM_SG_ENTRIES_PER_SEGMENT];
+};
+
+struct pvscsi_ctx {
+	/*
+	 * The index of the context in cmd_map serves as the context ID for a
+	 * 1-to-1 mapping of completions back to requests.
+	 */
+	struct scsi_cmnd *cmd;
+	struct pvscsi_sg_list *sgl;
+	struct list_head list;
+	dma_addr_t dataPA;
+	dma_addr_t sensePA;
+	dma_addr_t sglPA;
+};
+
+struct pvscsi_adapter {
+	char *mmioBase;
+	unsigned int irq;
+	u8 rev;
+	bool use_msi;
+	bool use_msix;
+	bool use_msg;
+
+	spinlock_t hw_lock;
+
+	struct workqueue_struct *workqueue;
+	struct work_struct work;
+
+	struct PVSCSIRingReqDesc *req_ring;
+	unsigned req_pages;
+	unsigned req_depth;
+	dma_addr_t reqRingPA;
+
+	struct PVSCSIRingCmpDesc *cmp_ring;
+	unsigned cmp_pages;
+	dma_addr_t cmpRingPA;
+
+	struct PVSCSIRingMsgDesc *msg_ring;
+	unsigned msg_pages;
+	dma_addr_t msgRingPA;
+
+	struct PVSCSIRingsState *rings_state;
+	dma_addr_t ringStatePA;
+
+	struct pci_dev *dev;
+	struct Scsi_Host *host;
+
+	struct list_head cmd_pool;
+	struct pvscsi_ctx *cmd_map;
+};
+
+
+/* Command line parameters */
+static int pvscsi_ring_pages = PVSCSI_DEFAULT_NUM_PAGES_PER_RING;
+static int pvscsi_msg_ring_pages = PVSCSI_DEFAULT_NUM_PAGES_MSG_RING;
+static int pvscsi_cmd_per_lun = PVSCSI_DEFAULT_QUEUE_DEPTH;
+static bool pvscsi_disable_msi;
+static bool pvscsi_disable_msix;
+static bool pvscsi_use_msg = true;
+
+#define PVSCSI_RW (S_IRUSR | S_IWUSR)
+
+module_param_named(ring_pages, pvscsi_ring_pages, int, PVSCSI_RW);
+MODULE_PARM_DESC(ring_pages, "Number of pages per req/cmp ring - (default="
+		 __stringify(PVSCSI_DEFAULT_NUM_PAGES_PER_RING) ")");
+
+module_param_named(msg_ring_pages, pvscsi_msg_ring_pages, int, PVSCSI_RW);
+MODULE_PARM_DESC(msg_ring_pages, "Number of pages for the msg ring - (default="
+		 __stringify(PVSCSI_DEFAULT_NUM_PAGES_MSG_RING) ")");
+
+module_param_named(cmd_per_lun, pvscsi_cmd_per_lun, int, PVSCSI_RW);
+MODULE_PARM_DESC(cmd_per_lun, "Maximum commands per lun - (default="
+		 __stringify(PVSCSI_MAX_REQ_QUEUE_DEPTH) ")");
+
+module_param_named(disable_msi, pvscsi_disable_msi, bool, PVSCSI_RW);
+MODULE_PARM_DESC(disable_msi, "Disable MSI use in driver - (default=0)");
+
+module_param_named(disable_msix, pvscsi_disable_msix, bool, PVSCSI_RW);
+MODULE_PARM_DESC(disable_msix, "Disable MSI-X use in driver - (default=0)");
+
+module_param_named(use_msg, pvscsi_use_msg, bool, PVSCSI_RW);
+MODULE_PARM_DESC(use_msg, "Use msg ring when available - (default=1)");
+
+static const struct pci_device_id pvscsi_pci_tbl[] = {
+	{ PCI_VDEVICE(VMWARE, PCI_DEVICE_ID_VMWARE_PVSCSI) },
+	{ 0 }
+};
+
+MODULE_DEVICE_TABLE(pci, pvscsi_pci_tbl);
+
+static struct device *
+pvscsi_dev(const struct pvscsi_adapter *adapter)
+{
+	return &(adapter->dev->dev);
+}
+
+static struct pvscsi_ctx *
+pvscsi_find_context(const struct pvscsi_adapter *adapter, struct scsi_cmnd *cmd)
+{
+	struct pvscsi_ctx *ctx, *end;
+
+	end = &adapter->cmd_map[adapter->req_depth];
+	for (ctx = adapter->cmd_map; ctx
< end; ctx++) + if (ctx->cmd == cmd) + return ctx; + + return NULL; +} + +static struct pvscsi_ctx * +pvscsi_acquire_context(struct pvscsi_adapter *adapter, struct scsi_cmnd *cmd) +{ + struct pvscsi_ctx *ctx; + + if (list_empty(&adapter->cmd_pool)) + return NULL; + + ctx = list_first_entry(&adapter->cmd_pool, struct pvscsi_ctx, list); + ctx->cmd = cmd; + list_del(&ctx->list); + + return ctx; +} + +static void pvscsi_release_context(struct pvscsi_adapter *adapter, + struct pvscsi_ctx *ctx) +{ + ctx->cmd = NULL; + list_add(&ctx->list, &adapter->cmd_pool); +} + +/* + * Map a pvscsi_ctx struct to a context ID field value; we map to a simple + * non-zero integer. ctx always points to an entry in cmd_map array, hence + * the return value is always >=1. + */ +static u64 pvscsi_map_context(const struct pvscsi_adapter *adapter, + const struct pvscsi_ctx *ctx) +{ + return ctx - adapter->cmd_map + 1; +} + +static struct pvscsi_ctx * +pvscsi_get_context(const struct pvscsi_adapter *adapter, u64 context) +{ + return &adapter->cmd_map[context - 1]; +} + +static void pvscsi_reg_write(const struct pvscsi_adapter *adapter, + u32 offset, u32 val) +{ + writel(val, adapter->mmioBase + offset); +} + +static u32 pvscsi_reg_read(const struct pvscsi_adapter *adapter, u32 offset) +{ + return readl(adapter->mmioBase + offset); +} + +static u32 pvscsi_read_intr_status(const struct pvscsi_adapter *adapter) +{ + return pvscsi_reg_read(adapter, PVSCSI_REG_OFFSET_INTR_STATUS); +} + +static void pvscsi_write_intr_status(const struct pvscsi_adapter *adapter, + u32 val) +{ + pvscsi_reg_write(adapter, PVSCSI_REG_OFFSET_INTR_STATUS, val); +} + +static void pvscsi_unmask_intr(const struct pvscsi_adapter *adapter) +{ + u32 intr_bits; + + intr_bits = PVSCSI_INTR_CMPL_MASK; + if (adapter->use_msg) + intr_bits |= PVSCSI_INTR_MSG_MASK; + + pvscsi_reg_write(adapter, PVSCSI_REG_OFFSET_INTR_MASK, intr_bits); +} + +static void pvscsi_mask_intr(const struct pvscsi_adapter *adapter) +{ + pvscsi_reg_write(adapter, PVSCSI_REG_OFFSET_INTR_MASK, 0); +} + +static void pvscsi_write_cmd_desc(const struct pvscsi_adapter *adapter, + u32 cmd, const void *desc, size_t len) +{ + const u32 *ptr = desc; + size_t i; + + len /= sizeof(*ptr); + pvscsi_reg_write(adapter, PVSCSI_REG_OFFSET_COMMAND, cmd); + for (i = 0; i < len; i++) + pvscsi_reg_write(adapter, + PVSCSI_REG_OFFSET_COMMAND_DATA, ptr[i]); +} + +static void pvscsi_abort_cmd(const struct pvscsi_adapter *adapter, + const struct pvscsi_ctx *ctx) +{ + struct PVSCSICmdDescAbortCmd cmd = { 0 }; + + cmd.target = ctx->cmd->device->id; + cmd.context = pvscsi_map_context(adapter, ctx); + + pvscsi_write_cmd_desc(adapter, PVSCSI_CMD_ABORT_CMD, &cmd, sizeof(cmd)); +} + +static void pvscsi_kick_rw_io(const struct pvscsi_adapter *adapter) +{ + pvscsi_reg_write(adapter, PVSCSI_REG_OFFSET_KICK_RW_IO, 0); +} + +static void pvscsi_process_request_ring(const struct pvscsi_adapter *adapter) +{ + pvscsi_reg_write(adapter, PVSCSI_REG_OFFSET_KICK_NON_RW_IO, 0); +} + +static int scsi_is_rw(unsigned char op) +{ + return op == READ_6 || op == WRITE_6 || + op == READ_10 || op == WRITE_10 || + op == READ_12 || op == WRITE_12 || + op == READ_16 || op == WRITE_16; +} + +static void pvscsi_kick_io(const struct pvscsi_adapter *adapter, + unsigned char op) +{ + if (scsi_is_rw(op)) + pvscsi_kick_rw_io(adapter); + else + pvscsi_process_request_ring(adapter); +} + +static void ll_adapter_reset(const struct pvscsi_adapter *adapter) +{ + dev_dbg(pvscsi_dev(adapter), "Adapter Reset on %p\n", adapter); + + 
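+	/* PVSCSI_CMD_ADAPTER_RESET carries no payload: desc is NULL, len is 0. */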
pvscsi_write_cmd_desc(adapter, PVSCSI_CMD_ADAPTER_RESET, NULL, 0);
+}
+
+static void ll_bus_reset(const struct pvscsi_adapter *adapter)
+{
+	dev_dbg(pvscsi_dev(adapter), "Resetting bus on %p\n", adapter);
+
+	pvscsi_write_cmd_desc(adapter, PVSCSI_CMD_RESET_BUS, NULL, 0);
+}
+
+static void ll_device_reset(const struct pvscsi_adapter *adapter, u32 target)
+{
+	struct PVSCSICmdDescResetDevice cmd = { 0 };
+
+	dev_dbg(pvscsi_dev(adapter), "Resetting device: target=%u\n", target);
+
+	cmd.target = target;
+
+	pvscsi_write_cmd_desc(adapter, PVSCSI_CMD_RESET_DEVICE,
+			      &cmd, sizeof(cmd));
+}
+
+static void pvscsi_create_sg(struct pvscsi_ctx *ctx,
+			     struct scatterlist *sg, unsigned count)
+{
+	unsigned i;
+	struct PVSCSISGElement *sge;
+
+	BUG_ON(count > PVSCSI_MAX_NUM_SG_ENTRIES_PER_SEGMENT);
+
+	sge = &ctx->sgl->sge[0];
+	for (i = 0; i < count; i++, sg++) {
+		sge[i].addr = sg_dma_address(sg);
+		sge[i].length = sg_dma_len(sg);
+		sge[i].flags = 0;
+	}
+}
+
+/*
+ * Map all data buffers for a command into PCI space and
+ * set up the scatter/gather list if needed.
+ */
+static void pvscsi_map_buffers(struct pvscsi_adapter *adapter,
+			       struct pvscsi_ctx *ctx, struct scsi_cmnd *cmd,
+			       struct PVSCSIRingReqDesc *e)
+{
+	unsigned count;
+	unsigned bufflen = scsi_bufflen(cmd);
+	struct scatterlist *sg;
+
+	e->dataLen = bufflen;
+	e->dataAddr = 0;
+	if (bufflen == 0)
+		return;
+
+	sg = scsi_sglist(cmd);
+	count = scsi_sg_count(cmd);
+	if (count != 0) {
+		int segs = scsi_dma_map(cmd);
+		if (segs > 1) {
+			pvscsi_create_sg(ctx, sg, segs);
+
+			e->flags |= PVSCSI_FLAG_CMD_WITH_SG_LIST;
+			ctx->sglPA = pci_map_single(adapter->dev, ctx->sgl,
+						    SGL_SIZE, PCI_DMA_TODEVICE);
+			e->dataAddr = ctx->sglPA;
+		} else
+			e->dataAddr = sg_dma_address(sg);
+	} else {
+		/*
+		 * In case there is no S/G list, scsi_sglist points
+		 * directly to the buffer.
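+		 * We map that buffer with pci_map_single() in the command's
+		 * DMA direction and hand its bus address to the device.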
+ */ + ctx->dataPA = pci_map_single(adapter->dev, sg, bufflen, + cmd->sc_data_direction); + e->dataAddr = ctx->dataPA; + } +} + +static void pvscsi_unmap_buffers(const struct pvscsi_adapter *adapter, + struct pvscsi_ctx *ctx) +{ + struct scsi_cmnd *cmd; + unsigned bufflen; + + cmd = ctx->cmd; + bufflen = scsi_bufflen(cmd); + + if (bufflen != 0) { + unsigned count = scsi_sg_count(cmd); + + if (count != 0) { + scsi_dma_unmap(cmd); + if (ctx->sglPA) { + pci_unmap_single(adapter->dev, ctx->sglPA, + SGL_SIZE, PCI_DMA_TODEVICE); + ctx->sglPA = 0; + } + } else + pci_unmap_single(adapter->dev, ctx->dataPA, bufflen, + cmd->sc_data_direction); + } + if (cmd->sense_buffer) + pci_unmap_single(adapter->dev, ctx->sensePA, + SCSI_SENSE_BUFFERSIZE, PCI_DMA_FROMDEVICE); +} + +static int __devinit pvscsi_allocate_rings(struct pvscsi_adapter *adapter) +{ + adapter->rings_state = pci_alloc_consistent(adapter->dev, PAGE_SIZE, + &adapter->ringStatePA); + if (!adapter->rings_state) + return -ENOMEM; + + adapter->req_pages = min(PVSCSI_MAX_NUM_PAGES_REQ_RING, + pvscsi_ring_pages); + adapter->req_depth = adapter->req_pages + * PVSCSI_MAX_NUM_REQ_ENTRIES_PER_PAGE; + adapter->req_ring = pci_alloc_consistent(adapter->dev, + adapter->req_pages * PAGE_SIZE, + &adapter->reqRingPA); + if (!adapter->req_ring) + return -ENOMEM; + + adapter->cmp_pages = min(PVSCSI_MAX_NUM_PAGES_CMP_RING, + pvscsi_ring_pages); + adapter->cmp_ring = pci_alloc_consistent(adapter->dev, + adapter->cmp_pages * PAGE_SIZE, + &adapter->cmpRingPA); + if (!adapter->cmp_ring) + return -ENOMEM; + + BUG_ON(!IS_ALIGNED(adapter->ringStatePA, PAGE_SIZE)); + BUG_ON(!IS_ALIGNED(adapter->reqRingPA, PAGE_SIZE)); + BUG_ON(!IS_ALIGNED(adapter->cmpRingPA, PAGE_SIZE)); + + if (!adapter->use_msg) + return 0; + + adapter->msg_pages = min(PVSCSI_MAX_NUM_PAGES_MSG_RING, + pvscsi_msg_ring_pages); + adapter->msg_ring = pci_alloc_consistent(adapter->dev, + adapter->msg_pages * PAGE_SIZE, + &adapter->msgRingPA); + if (!adapter->msg_ring) + return -ENOMEM; + BUG_ON(!IS_ALIGNED(adapter->msgRingPA, PAGE_SIZE)); + + return 0; +} + +static void pvscsi_setup_all_rings(const struct pvscsi_adapter *adapter) +{ + struct PVSCSICmdDescSetupRings cmd = { 0 }; + dma_addr_t base; + unsigned i; + + cmd.ringsStatePPN = adapter->ringStatePA >> PAGE_SHIFT; + cmd.reqRingNumPages = adapter->req_pages; + cmd.cmpRingNumPages = adapter->cmp_pages; + + base = adapter->reqRingPA; + for (i = 0; i < adapter->req_pages; i++) { + cmd.reqRingPPNs[i] = base >> PAGE_SHIFT; + base += PAGE_SIZE; + } + + base = adapter->cmpRingPA; + for (i = 0; i < adapter->cmp_pages; i++) { + cmd.cmpRingPPNs[i] = base >> PAGE_SHIFT; + base += PAGE_SIZE; + } + + memset(adapter->rings_state, 0, PAGE_SIZE); + memset(adapter->req_ring, 0, adapter->req_pages * PAGE_SIZE); + memset(adapter->cmp_ring, 0, adapter->cmp_pages * PAGE_SIZE); + + pvscsi_write_cmd_desc(adapter, PVSCSI_CMD_SETUP_RINGS, + &cmd, sizeof(cmd)); + + if (adapter->use_msg) { + struct PVSCSICmdDescSetupMsgRing cmd_msg = { 0 }; + + cmd_msg.numPages = adapter->msg_pages; + + base = adapter->msgRingPA; + for (i = 0; i < adapter->msg_pages; i++) { + cmd_msg.ringPPNs[i] = base >> PAGE_SHIFT; + base += PAGE_SIZE; + } + memset(adapter->msg_ring, 0, adapter->msg_pages * PAGE_SIZE); + + pvscsi_write_cmd_desc(adapter, PVSCSI_CMD_SETUP_MSG_RING, + &cmd_msg, sizeof(cmd_msg)); + } +} + +/* + * Pull a completion descriptor off and pass the completion back + * to the SCSI mid layer. 
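+ * The adapter status (hostStatus) and SCSI status (scsiStatus) from the
+ * descriptor are folded into cmd->result before cmd->scsi_done() is called.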
+ */ +static void pvscsi_complete_request(struct pvscsi_adapter *adapter, + const struct PVSCSIRingCmpDesc *e) +{ + struct pvscsi_ctx *ctx; + struct scsi_cmnd *cmd; + u32 btstat = e->hostStatus; + u32 sdstat = e->scsiStatus; + + ctx = pvscsi_get_context(adapter, e->context); + cmd = ctx->cmd; + pvscsi_unmap_buffers(adapter, ctx); + pvscsi_release_context(adapter, ctx); + cmd->result = 0; + + if (sdstat != SAM_STAT_GOOD && + (btstat == BTSTAT_SUCCESS || + btstat == BTSTAT_LINKED_COMMAND_COMPLETED || + btstat == BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG)) { + cmd->result = (DID_OK << 16) | sdstat; + if (sdstat == SAM_STAT_CHECK_CONDITION && cmd->sense_buffer) + cmd->result |= (DRIVER_SENSE << 24); + } else + switch (btstat) { + case BTSTAT_SUCCESS: + case BTSTAT_LINKED_COMMAND_COMPLETED: + case BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG: + /* If everything went fine, let's move on.. */ + cmd->result = (DID_OK << 16); + break; + + case BTSTAT_DATARUN: + case BTSTAT_DATA_UNDERRUN: + /* Report residual data in underruns */ + scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen); + cmd->result = (DID_ERROR << 16); + break; + + case BTSTAT_SELTIMEO: + /* Our emulation returns this for non-connected devs */ + cmd->result = (DID_BAD_TARGET << 16); + break; + + case BTSTAT_LUNMISMATCH: + case BTSTAT_TAGREJECT: + case BTSTAT_BADMSG: + cmd->result = (DRIVER_INVALID << 24); + /* fall through */ + + case BTSTAT_HAHARDWARE: + case BTSTAT_INVPHASE: + case BTSTAT_HATIMEOUT: + case BTSTAT_NORESPONSE: + case BTSTAT_DISCONNECT: + case BTSTAT_HASOFTWARE: + case BTSTAT_BUSFREE: + case BTSTAT_SENSFAILED: + cmd->result |= (DID_ERROR << 16); + break; + + case BTSTAT_SENTRST: + case BTSTAT_RECVRST: + case BTSTAT_BUSRESET: + cmd->result = (DID_RESET << 16); + break; + + case BTSTAT_ABORTQUEUE: + cmd->result = (DID_ABORT << 16); + break; + + case BTSTAT_SCSIPARITY: + cmd->result = (DID_PARITY << 16); + break; + + default: + cmd->result = (DID_ERROR << 16); + scmd_printk(KERN_DEBUG, cmd, + "Unknown completion status: 0x%x\n", + btstat); + } + + dev_dbg(&cmd->device->sdev_gendev, + "cmd=%p %x ctx=%p result=0x%x status=0x%x,%x\n", + cmd, cmd->cmnd[0], ctx, cmd->result, btstat, sdstat); + + cmd->scsi_done(cmd); +} + +/* + * barrier usage : Since the PVSCSI device is emulated, there could be cases + * where we may want to serialize some accesses between the driver and the + * emulation layer. We use compiler barriers instead of the more expensive + * memory barriers because PVSCSI is only supported on X86 which has strong + * memory access ordering. + */ +static void pvscsi_process_completion_ring(struct pvscsi_adapter *adapter) +{ + struct PVSCSIRingsState *s = adapter->rings_state; + struct PVSCSIRingCmpDesc *ring = adapter->cmp_ring; + u32 cmp_entries = s->cmpNumEntriesLog2; + + while (s->cmpConsIdx != s->cmpProdIdx) { + struct PVSCSIRingCmpDesc *e = ring + (s->cmpConsIdx & + MASK(cmp_entries)); + /* + * This barrier() ensures that *e is not dereferenced while + * the device emulation still writes data into the slot. + * Since the device emulation advances s->cmpProdIdx only after + * updating the slot we want to check it first. + */ + barrier(); + pvscsi_complete_request(adapter, e); + /* + * This barrier() ensures that compiler doesn't reorder write + * to s->cmpConsIdx before the read of (*e) inside + * pvscsi_complete_request. Otherwise, device emulation may + * overwrite *e before we had a chance to read it. 
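+		 * Between the two barriers each slot is effectively private
+		 * to the driver: it is read only after the producer index
+		 * has been sampled and before the consumer index is bumped.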
+ */ + barrier(); + s->cmpConsIdx++; + } +} + +/* + * Translate a Linux SCSI request into a request ring entry. + */ +static int pvscsi_queue_ring(struct pvscsi_adapter *adapter, + struct pvscsi_ctx *ctx, struct scsi_cmnd *cmd) +{ + struct PVSCSIRingsState *s; + struct PVSCSIRingReqDesc *e; + struct scsi_device *sdev; + u32 req_entries; + + s = adapter->rings_state; + sdev = cmd->device; + req_entries = s->reqNumEntriesLog2; + + /* + * If this condition holds, we might have room on the request ring, but + * we might not have room on the completion ring for the response. + * However, we have already ruled out this possibility - we would not + * have successfully allocated a context if it were true, since we only + * have one context per request entry. Check for it anyway, since it + * would be a serious bug. + */ + if (s->reqProdIdx - s->cmpConsIdx >= 1 << req_entries) { + scmd_printk(KERN_ERR, cmd, "vmw_pvscsi: " + "ring full: reqProdIdx=%d cmpConsIdx=%d\n", + s->reqProdIdx, s->cmpConsIdx); + return -1; + } + + e = adapter->req_ring + (s->reqProdIdx & MASK(req_entries)); + + e->bus = sdev->channel; + e->target = sdev->id; + memset(e->lun, 0, sizeof(e->lun)); + e->lun[1] = sdev->lun; + + if (cmd->sense_buffer) { + ctx->sensePA = pci_map_single(adapter->dev, cmd->sense_buffer, + SCSI_SENSE_BUFFERSIZE, + PCI_DMA_FROMDEVICE); + e->senseAddr = ctx->sensePA; + e->senseLen = SCSI_SENSE_BUFFERSIZE; + } else { + e->senseLen = 0; + e->senseAddr = 0; + } + e->cdbLen = cmd->cmd_len; + e->vcpuHint = smp_processor_id(); + memcpy(e->cdb, cmd->cmnd, e->cdbLen); + + e->tag = SIMPLE_QUEUE_TAG; + if (sdev->tagged_supported && + (cmd->tag == HEAD_OF_QUEUE_TAG || + cmd->tag == ORDERED_QUEUE_TAG)) + e->tag = cmd->tag; + + if (cmd->sc_data_direction == DMA_FROM_DEVICE) + e->flags = PVSCSI_FLAG_CMD_DIR_TOHOST; + else if (cmd->sc_data_direction == DMA_TO_DEVICE) + e->flags = PVSCSI_FLAG_CMD_DIR_TODEVICE; + else if (cmd->sc_data_direction == DMA_NONE) + e->flags = PVSCSI_FLAG_CMD_DIR_NONE; + else + e->flags = 0; + + pvscsi_map_buffers(adapter, ctx, cmd, e); + + e->context = pvscsi_map_context(adapter, ctx); + + barrier(); + + s->reqProdIdx++; + + return 0; +} + +static int pvscsi_queue(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) +{ + struct Scsi_Host *host = cmd->device->host; + struct pvscsi_adapter *adapter = shost_priv(host); + struct pvscsi_ctx *ctx; + unsigned long flags; + + spin_lock_irqsave(&adapter->hw_lock, flags); + + ctx = pvscsi_acquire_context(adapter, cmd); + if (!ctx || pvscsi_queue_ring(adapter, ctx, cmd) != 0) { + if (ctx) + pvscsi_release_context(adapter, ctx); + spin_unlock_irqrestore(&adapter->hw_lock, flags); + return SCSI_MLQUEUE_HOST_BUSY; + } + + cmd->scsi_done = done; + + dev_dbg(&cmd->device->sdev_gendev, + "queued cmd %p, ctx %p, op=%x\n", cmd, ctx, cmd->cmnd[0]); + + spin_unlock_irqrestore(&adapter->hw_lock, flags); + + pvscsi_kick_io(adapter, cmd->cmnd[0]); + + return 0; +} + +static int pvscsi_abort(struct scsi_cmnd *cmd) +{ + struct pvscsi_adapter *adapter = shost_priv(cmd->device->host); + struct pvscsi_ctx *ctx; + unsigned long flags; + + scmd_printk(KERN_DEBUG, cmd, "task abort on host %u, %p\n", + adapter->host->host_no, cmd); + + spin_lock_irqsave(&adapter->hw_lock, flags); + + /* + * Poll the completion ring first - we might be trying to abort + * a command that is waiting to be dispatched in the completion ring. 
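+	 * If its completion is already pending, delivering it here leaves
+	 * no context behind and the abort below becomes a no-op.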
+ */ + pvscsi_process_completion_ring(adapter); + + /* + * If there is no context for the command, it either already succeeded + * or else was never properly issued. Not our problem. + */ + ctx = pvscsi_find_context(adapter, cmd); + if (!ctx) { + scmd_printk(KERN_DEBUG, cmd, "Failed to abort cmd %p\n", cmd); + goto out; + } + + pvscsi_abort_cmd(adapter, ctx); + + pvscsi_process_completion_ring(adapter); + +out: + spin_unlock_irqrestore(&adapter->hw_lock, flags); + return SUCCESS; +} + +/* + * Abort all outstanding requests. This is only safe to use if the completion + * ring will never be walked again or the device has been reset, because it + * destroys the 1-1 mapping between context field passed to emulation and our + * request structure. + */ +static void pvscsi_reset_all(struct pvscsi_adapter *adapter) +{ + unsigned i; + + for (i = 0; i < adapter->req_depth; i++) { + struct pvscsi_ctx *ctx = &adapter->cmd_map[i]; + struct scsi_cmnd *cmd = ctx->cmd; + if (cmd) { + scmd_printk(KERN_ERR, cmd, + "Forced reset on cmd %p\n", cmd); + pvscsi_unmap_buffers(adapter, ctx); + pvscsi_release_context(adapter, ctx); + cmd->result = (DID_RESET << 16); + cmd->scsi_done(cmd); + } + } +} + +static int pvscsi_host_reset(struct scsi_cmnd *cmd) +{ + struct Scsi_Host *host = cmd->device->host; + struct pvscsi_adapter *adapter = shost_priv(host); + unsigned long flags; + bool use_msg; + + scmd_printk(KERN_INFO, cmd, "SCSI Host reset\n"); + + spin_lock_irqsave(&adapter->hw_lock, flags); + + use_msg = adapter->use_msg; + + if (use_msg) { + adapter->use_msg = 0; + spin_unlock_irqrestore(&adapter->hw_lock, flags); + + /* + * Now that we know that the ISR won't add more work on the + * workqueue we can safely flush any outstanding work. + */ + flush_workqueue(adapter->workqueue); + spin_lock_irqsave(&adapter->hw_lock, flags); + } + + /* + * We're going to tear down the entire ring structure and set it back + * up, so stalling new requests until all completions are flushed and + * the rings are back in place. + */ + + pvscsi_process_request_ring(adapter); + + ll_adapter_reset(adapter); + + /* + * Now process any completions. Note we do this AFTER adapter reset, + * which is strange, but stops races where completions get posted + * between processing the ring and issuing the reset. The backend will + * not touch the ring memory after reset, so the immediately pre-reset + * completion ring state is still valid. + */ + pvscsi_process_completion_ring(adapter); + + pvscsi_reset_all(adapter); + adapter->use_msg = use_msg; + pvscsi_setup_all_rings(adapter); + pvscsi_unmask_intr(adapter); + + spin_unlock_irqrestore(&adapter->hw_lock, flags); + + return SUCCESS; +} + +static int pvscsi_bus_reset(struct scsi_cmnd *cmd) +{ + struct Scsi_Host *host = cmd->device->host; + struct pvscsi_adapter *adapter = shost_priv(host); + unsigned long flags; + + scmd_printk(KERN_INFO, cmd, "SCSI Bus reset\n"); + + /* + * We don't want to queue new requests for this bus after + * flushing all pending requests to emulation, since new + * requests could then sneak in during this bus reset phase, + * so take the lock now. 
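+	 * The request-ring flush, the reset command and the completion
+	 * drain below therefore all run with hw_lock held.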
+ */ + spin_lock_irqsave(&adapter->hw_lock, flags); + + pvscsi_process_request_ring(adapter); + ll_bus_reset(adapter); + pvscsi_process_completion_ring(adapter); + + spin_unlock_irqrestore(&adapter->hw_lock, flags); + + return SUCCESS; +} + +static int pvscsi_device_reset(struct scsi_cmnd *cmd) +{ + struct Scsi_Host *host = cmd->device->host; + struct pvscsi_adapter *adapter = shost_priv(host); + unsigned long flags; + + scmd_printk(KERN_INFO, cmd, "SCSI device reset on scsi%u:%u\n", + host->host_no, cmd->device->id); + + /* + * We don't want to queue new requests for this device after flushing + * all pending requests to emulation, since new requests could then + * sneak in during this device reset phase, so take the lock now. + */ + spin_lock_irqsave(&adapter->hw_lock, flags); + + pvscsi_process_request_ring(adapter); + ll_device_reset(adapter, cmd->device->id); + pvscsi_process_completion_ring(adapter); + + spin_unlock_irqrestore(&adapter->hw_lock, flags); + + return SUCCESS; +} + +static struct scsi_host_template pvscsi_template; + +static const char *pvscsi_info(struct Scsi_Host *host) +{ + struct pvscsi_adapter *adapter = shost_priv(host); + static char buf[256]; + + sprintf(buf, "VMware PVSCSI storage adapter rev %d, req/cmp/msg rings: " + "%u/%u/%u pages, cmd_per_lun=%u", adapter->rev, + adapter->req_pages, adapter->cmp_pages, adapter->msg_pages, + pvscsi_template.cmd_per_lun); + + return buf; +} + +static struct scsi_host_template pvscsi_template = { + .module = THIS_MODULE, + .name = "VMware PVSCSI Host Adapter", + .proc_name = "vmw_pvscsi", + .info = pvscsi_info, + .queuecommand = pvscsi_queue, + .this_id = -1, + .sg_tablesize = PVSCSI_MAX_NUM_SG_ENTRIES_PER_SEGMENT, + .dma_boundary = UINT_MAX, + .max_sectors = 0xffff, + .use_clustering = ENABLE_CLUSTERING, + .eh_abort_handler = pvscsi_abort, + .eh_device_reset_handler = pvscsi_device_reset, + .eh_bus_reset_handler = pvscsi_bus_reset, + .eh_host_reset_handler = pvscsi_host_reset, +}; + +static void pvscsi_process_msg(const struct pvscsi_adapter *adapter, + const struct PVSCSIRingMsgDesc *e) +{ + struct PVSCSIRingsState *s = adapter->rings_state; + struct Scsi_Host *host = adapter->host; + struct scsi_device *sdev; + + printk(KERN_INFO "vmw_pvscsi: msg type: 0x%x - MSG RING: %u/%u (%u) \n", + e->type, s->msgProdIdx, s->msgConsIdx, s->msgNumEntriesLog2); + + BUILD_BUG_ON(PVSCSI_MSG_LAST != 2); + + if (e->type == PVSCSI_MSG_DEV_ADDED) { + struct PVSCSIMsgDescDevStatusChanged *desc; + desc = (struct PVSCSIMsgDescDevStatusChanged *)e; + + printk(KERN_INFO + "vmw_pvscsi: msg: device added at scsi%u:%u:%u\n", + desc->bus, desc->target, desc->lun[1]); + + if (!scsi_host_get(host)) + return; + + sdev = scsi_device_lookup(host, desc->bus, desc->target, + desc->lun[1]); + if (sdev) { + printk(KERN_INFO "vmw_pvscsi: device already exists\n"); + scsi_device_put(sdev); + } else + scsi_add_device(adapter->host, desc->bus, + desc->target, desc->lun[1]); + + scsi_host_put(host); + } else if (e->type == PVSCSI_MSG_DEV_REMOVED) { + struct PVSCSIMsgDescDevStatusChanged *desc; + desc = (struct PVSCSIMsgDescDevStatusChanged *)e; + + printk(KERN_INFO + "vmw_pvscsi: msg: device removed at scsi%u:%u:%u\n", + desc->bus, desc->target, desc->lun[1]); + + if (!scsi_host_get(host)) + return; + + sdev = scsi_device_lookup(host, desc->bus, desc->target, + desc->lun[1]); + if (sdev) { + scsi_remove_device(sdev); + scsi_device_put(sdev); + } else + printk(KERN_INFO + "vmw_pvscsi: failed to lookup scsi%u:%u:%u\n", + desc->bus, desc->target, desc->lun[1]); + + 
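+		/* Drop the reference taken by scsi_host_get() above. */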
scsi_host_put(host); + } +} + +static int pvscsi_msg_pending(const struct pvscsi_adapter *adapter) +{ + struct PVSCSIRingsState *s = adapter->rings_state; + + return s->msgProdIdx != s->msgConsIdx; +} + +static void pvscsi_process_msg_ring(const struct pvscsi_adapter *adapter) +{ + struct PVSCSIRingsState *s = adapter->rings_state; + struct PVSCSIRingMsgDesc *ring = adapter->msg_ring; + u32 msg_entries = s->msgNumEntriesLog2; + + while (pvscsi_msg_pending(adapter)) { + struct PVSCSIRingMsgDesc *e = ring + (s->msgConsIdx & + MASK(msg_entries)); + + barrier(); + pvscsi_process_msg(adapter, e); + barrier(); + s->msgConsIdx++; + } +} + +static void pvscsi_msg_workqueue_handler(struct work_struct *data) +{ + struct pvscsi_adapter *adapter; + + adapter = container_of(data, struct pvscsi_adapter, work); + + pvscsi_process_msg_ring(adapter); +} + +static int pvscsi_setup_msg_workqueue(struct pvscsi_adapter *adapter) +{ + char name[32]; + + if (!pvscsi_use_msg) + return 0; + + pvscsi_reg_write(adapter, PVSCSI_REG_OFFSET_COMMAND, + PVSCSI_CMD_SETUP_MSG_RING); + + if (pvscsi_reg_read(adapter, PVSCSI_REG_OFFSET_COMMAND_STATUS) == -1) + return 0; + + snprintf(name, sizeof(name), + "vmw_pvscsi_wq_%u", adapter->host->host_no); + + adapter->workqueue = create_singlethread_workqueue(name); + if (!adapter->workqueue) { + printk(KERN_ERR "vmw_pvscsi: failed to create work queue\n"); + return 0; + } + INIT_WORK(&adapter->work, pvscsi_msg_workqueue_handler); + + return 1; +} + +static irqreturn_t pvscsi_isr(int irq, void *devp) +{ + struct pvscsi_adapter *adapter = devp; + int handled; + + if (adapter->use_msi || adapter->use_msix) + handled = true; + else { + u32 val = pvscsi_read_intr_status(adapter); + handled = (val & PVSCSI_INTR_ALL_SUPPORTED) != 0; + if (handled) + pvscsi_write_intr_status(devp, val); + } + + if (handled) { + unsigned long flags; + + spin_lock_irqsave(&adapter->hw_lock, flags); + + pvscsi_process_completion_ring(adapter); + if (adapter->use_msg && pvscsi_msg_pending(adapter)) + queue_work(adapter->workqueue, &adapter->work); + + spin_unlock_irqrestore(&adapter->hw_lock, flags); + } + + return IRQ_RETVAL(handled); +} + +static void pvscsi_free_sgls(const struct pvscsi_adapter *adapter) +{ + struct pvscsi_ctx *ctx = adapter->cmd_map; + unsigned i; + + for (i = 0; i < adapter->req_depth; ++i, ++ctx) + free_pages((unsigned long)ctx->sgl, get_order(SGL_SIZE)); +} + +static int pvscsi_setup_msix(const struct pvscsi_adapter *adapter, int *irq) +{ + struct msix_entry entry = { 0, PVSCSI_VECTOR_COMPLETION }; + int ret; + + ret = pci_enable_msix(adapter->dev, &entry, 1); + if (ret) + return ret; + + *irq = entry.vector; + + return 0; +} + +static void pvscsi_shutdown_intr(struct pvscsi_adapter *adapter) +{ + if (adapter->irq) { + free_irq(adapter->irq, adapter); + adapter->irq = 0; + } + if (adapter->use_msi) { + pci_disable_msi(adapter->dev); + adapter->use_msi = 0; + } else if (adapter->use_msix) { + pci_disable_msix(adapter->dev); + adapter->use_msix = 0; + } +} + +static void pvscsi_release_resources(struct pvscsi_adapter *adapter) +{ + pvscsi_shutdown_intr(adapter); + + if (adapter->workqueue) + destroy_workqueue(adapter->workqueue); + + if (adapter->mmioBase) + pci_iounmap(adapter->dev, adapter->mmioBase); + + pci_release_regions(adapter->dev); + + if (adapter->cmd_map) { + pvscsi_free_sgls(adapter); + kfree(adapter->cmd_map); + } + + if (adapter->rings_state) + pci_free_consistent(adapter->dev, PAGE_SIZE, + adapter->rings_state, adapter->ringStatePA); + + if (adapter->req_ring) + 
pci_free_consistent(adapter->dev,
+				    adapter->req_pages * PAGE_SIZE,
+				    adapter->req_ring, adapter->reqRingPA);
+
+	if (adapter->cmp_ring)
+		pci_free_consistent(adapter->dev,
+				    adapter->cmp_pages * PAGE_SIZE,
+				    adapter->cmp_ring, adapter->cmpRingPA);
+
+	if (adapter->msg_ring)
+		pci_free_consistent(adapter->dev,
+				    adapter->msg_pages * PAGE_SIZE,
+				    adapter->msg_ring, adapter->msgRingPA);
+}
+
+/*
+ * Allocate scatter gather lists.
+ *
+ * These are statically allocated. Trying to be clever was not worth it.
+ *
+ * Dynamic allocation can fail, and we can't go deep into the memory
+ * allocator, since we're a SCSI driver, and trying too hard to allocate
+ * memory might generate disk I/O. We also don't want to fail disk I/O
+ * in that case because we can't get an allocation - the I/O could be
+ * trying to swap out data to free memory. Since that is pathological,
+ * just use a statically allocated scatter list.
+ *
+ */
+static int __devinit pvscsi_allocate_sg(struct pvscsi_adapter *adapter)
+{
+	struct pvscsi_ctx *ctx;
+	int i;
+
+	ctx = adapter->cmd_map;
+	BUILD_BUG_ON(sizeof(struct pvscsi_sg_list) > SGL_SIZE);
+
+	for (i = 0; i < adapter->req_depth; ++i, ++ctx) {
+		ctx->sgl = (void *)__get_free_pages(GFP_KERNEL,
+						    get_order(SGL_SIZE));
+		ctx->sglPA = 0;
+		BUG_ON(!IS_ALIGNED(((unsigned long)ctx->sgl), PAGE_SIZE));
+		if (!ctx->sgl) {
+			for (; i >= 0; --i, --ctx) {
+				free_pages((unsigned long)ctx->sgl,
+					   get_order(SGL_SIZE));
+				ctx->sgl = NULL;
+			}
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+static int __devinit pvscsi_probe(struct pci_dev *pdev,
+				  const struct pci_device_id *id)
+{
+	struct pvscsi_adapter *adapter;
+	struct Scsi_Host *host;
+	unsigned int i;
+	unsigned long flags = 0;
+	int error;
+
+	error = -ENODEV;
+
+	if (pci_enable_device(pdev))
+		return error;
+
+	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) == 0 &&
+	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) == 0) {
+		printk(KERN_INFO "vmw_pvscsi: using 64bit dma\n");
+	} else if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) == 0 &&
+		   pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)) == 0) {
+		printk(KERN_INFO "vmw_pvscsi: using 32bit dma\n");
+	} else {
+		printk(KERN_ERR "vmw_pvscsi: failed to set DMA mask\n");
+		goto out_disable_device;
+	}
+
+	pvscsi_template.can_queue =
+		min(PVSCSI_MAX_NUM_PAGES_REQ_RING, pvscsi_ring_pages) *
+		PVSCSI_MAX_NUM_REQ_ENTRIES_PER_PAGE;
+	pvscsi_template.cmd_per_lun =
+		min(pvscsi_template.can_queue, pvscsi_cmd_per_lun);
+	host = scsi_host_alloc(&pvscsi_template, sizeof(struct pvscsi_adapter));
+	if (!host) {
+		printk(KERN_ERR "vmw_pvscsi: failed to allocate host\n");
+		goto out_disable_device;
+	}
+
+	adapter = shost_priv(host);
+	memset(adapter, 0, sizeof(*adapter));
+	adapter->dev = pdev;
+	adapter->host = host;
+
+	spin_lock_init(&adapter->hw_lock);
+
+	host->max_channel = 0;
+	host->max_id = 16;
+	host->max_lun = 1;
+	host->max_cmd_len = 16;
+
+	adapter->rev = pdev->revision;
+
+	if (pci_request_regions(pdev, "vmw_pvscsi")) {
+		printk(KERN_ERR "vmw_pvscsi: pci memory selection failed\n");
+		goto out_free_host;
+	}
+
+	for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
+		if ((pci_resource_flags(pdev, i) & PCI_BASE_ADDRESS_SPACE_IO))
+			continue;
+
+		if (pci_resource_len(pdev, i) < PVSCSI_MEM_SPACE_SIZE)
+			continue;
+
+		break;
+	}
+
+	if (i == DEVICE_COUNT_RESOURCE) {
+		printk(KERN_ERR
+		       "vmw_pvscsi: adapter has no suitable MMIO region\n");
+		goto out_release_resources;
+	}
+
+	adapter->mmioBase = pci_iomap(pdev, i, PVSCSI_MEM_SPACE_SIZE);
+
+	if (!adapter->mmioBase) {
+		printk(KERN_ERR
+
"vmw_pvscsi: can't iomap for BAR %d memsize %lu\n", + i, PVSCSI_MEM_SPACE_SIZE); + goto out_release_resources; + } + + pci_set_master(pdev); + pci_set_drvdata(pdev, host); + + ll_adapter_reset(adapter); + + adapter->use_msg = pvscsi_setup_msg_workqueue(adapter); + + error = pvscsi_allocate_rings(adapter); + if (error) { + printk(KERN_ERR "vmw_pvscsi: unable to allocate ring memory\n"); + goto out_release_resources; + } + + /* + * From this point on we should reset the adapter if anything goes + * wrong. + */ + pvscsi_setup_all_rings(adapter); + + adapter->cmd_map = kcalloc(adapter->req_depth, + sizeof(struct pvscsi_ctx), GFP_KERNEL); + if (!adapter->cmd_map) { + printk(KERN_ERR "vmw_pvscsi: failed to allocate memory.\n"); + error = -ENOMEM; + goto out_reset_adapter; + } + + INIT_LIST_HEAD(&adapter->cmd_pool); + for (i = 0; i < adapter->req_depth; i++) { + struct pvscsi_ctx *ctx = adapter->cmd_map + i; + list_add(&ctx->list, &adapter->cmd_pool); + } + + error = pvscsi_allocate_sg(adapter); + if (error) { + printk(KERN_ERR "vmw_pvscsi: unable to allocate s/g table\n"); + goto out_reset_adapter; + } + + if (!pvscsi_disable_msix && + pvscsi_setup_msix(adapter, &adapter->irq) == 0) { + printk(KERN_INFO "vmw_pvscsi: using MSI-X\n"); + adapter->use_msix = 1; + } else if (!pvscsi_disable_msi && pci_enable_msi(pdev) == 0) { + printk(KERN_INFO "vmw_pvscsi: using MSI\n"); + adapter->use_msi = 1; + adapter->irq = pdev->irq; + } else { + printk(KERN_INFO "vmw_pvscsi: using INTx\n"); + adapter->irq = pdev->irq; + flags = IRQF_SHARED; + } + + error = request_irq(adapter->irq, pvscsi_isr, flags, + "vmw_pvscsi", adapter); + if (error) { + printk(KERN_ERR + "vmw_pvscsi: unable to request IRQ: %d\n", error); + adapter->irq = 0; + goto out_reset_adapter; + } + + error = scsi_add_host(host, &pdev->dev); + if (error) { + printk(KERN_ERR + "vmw_pvscsi: scsi_add_host failed: %d\n", error); + goto out_reset_adapter; + } + + dev_info(&pdev->dev, "VMware PVSCSI rev %d host #%u\n", + adapter->rev, host->host_no); + + pvscsi_unmask_intr(adapter); + + scsi_scan_host(host); + + return 0; + +out_reset_adapter: + ll_adapter_reset(adapter); +out_release_resources: + pvscsi_release_resources(adapter); +out_free_host: + scsi_host_put(host); +out_disable_device: + pci_set_drvdata(pdev, NULL); + pci_disable_device(pdev); + + return error; +} + +static void __pvscsi_shutdown(struct pvscsi_adapter *adapter) +{ + pvscsi_mask_intr(adapter); + + if (adapter->workqueue) + flush_workqueue(adapter->workqueue); + + pvscsi_shutdown_intr(adapter); + + pvscsi_process_request_ring(adapter); + pvscsi_process_completion_ring(adapter); + ll_adapter_reset(adapter); +} + +static void pvscsi_shutdown(struct pci_dev *dev) +{ + struct Scsi_Host *host = pci_get_drvdata(dev); + struct pvscsi_adapter *adapter = shost_priv(host); + + __pvscsi_shutdown(adapter); +} + +static void pvscsi_remove(struct pci_dev *pdev) +{ + struct Scsi_Host *host = pci_get_drvdata(pdev); + struct pvscsi_adapter *adapter = shost_priv(host); + + scsi_remove_host(host); + + __pvscsi_shutdown(adapter); + pvscsi_release_resources(adapter); + + scsi_host_put(host); + + pci_set_drvdata(pdev, NULL); + pci_disable_device(pdev); +} + +static struct pci_driver pvscsi_pci_driver = { + .name = "vmw_pvscsi", + .id_table = pvscsi_pci_tbl, + .probe = pvscsi_probe, + .remove = __devexit_p(pvscsi_remove), + .shutdown = pvscsi_shutdown, +}; + +static int __init pvscsi_init(void) +{ + pr_info("%s - version %s\n", + PVSCSI_LINUX_DRIVER_DESC, PVSCSI_DRIVER_VERSION_STRING); + return 
pci_register_driver(&pvscsi_pci_driver);
+}
+
+static void __exit pvscsi_exit(void)
+{
+	pci_unregister_driver(&pvscsi_pci_driver);
+}
+
+module_init(pvscsi_init);
+module_exit(pvscsi_exit);

diff --git a/drivers/scsi/vmw_pvscsi.h b/drivers/scsi/vmw_pvscsi.h
new file mode 100644
index 000000000000..62e36e75715e
--- /dev/null
+++ b/drivers/scsi/vmw_pvscsi.h
@@ -0,0 +1,397 @@
+/*
+ * VMware PVSCSI header file
+ *
+ * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; version 2 of the License and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Maintained by: Alok N Kataria
+ *
+ */
+
+#ifndef _VMW_PVSCSI_H_
+#define _VMW_PVSCSI_H_
+
+#include <linux/types.h>
+
+#define PVSCSI_DRIVER_VERSION_STRING "1.0.1.0-k"
+
+#define PVSCSI_MAX_NUM_SG_ENTRIES_PER_SEGMENT 128
+
+#define MASK(n) ((1 << (n)) - 1) /* make an n-bit mask */
+
+#define PCI_VENDOR_ID_VMWARE 0x15AD
+#define PCI_DEVICE_ID_VMWARE_PVSCSI 0x07C0
+
+/*
+ * host adapter status/error codes
+ */
+enum HostBusAdapterStatus {
+	BTSTAT_SUCCESS = 0x00, /* CCB complete normally with no errors */
+	BTSTAT_LINKED_COMMAND_COMPLETED = 0x0a,
+	BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG = 0x0b,
+	BTSTAT_DATA_UNDERRUN = 0x0c,
+	BTSTAT_SELTIMEO = 0x11, /* SCSI selection timeout */
+	BTSTAT_DATARUN = 0x12, /* data overrun/underrun */
+	BTSTAT_BUSFREE = 0x13, /* unexpected bus free */
+	BTSTAT_INVPHASE = 0x14, /* invalid bus phase or sequence requested by target */
+	BTSTAT_LUNMISMATCH = 0x17, /* linked CCB has different LUN from first CCB */
+	BTSTAT_SENSFAILED = 0x1b, /* auto request sense failed */
+	BTSTAT_TAGREJECT = 0x1c, /* SCSI II tagged queueing message rejected by target */
+	BTSTAT_BADMSG = 0x1d, /* unsupported message received by the host adapter */
+	BTSTAT_HAHARDWARE = 0x20, /* host adapter hardware failed */
+	BTSTAT_NORESPONSE = 0x21, /* target did not respond to SCSI ATN, sent a SCSI RST */
+	BTSTAT_SENTRST = 0x22, /* host adapter asserted a SCSI RST */
+	BTSTAT_RECVRST = 0x23, /* other SCSI devices asserted a SCSI RST */
+	BTSTAT_DISCONNECT = 0x24, /* target device reconnected improperly (w/o tag) */
+	BTSTAT_BUSRESET = 0x25, /* host adapter issued BUS device reset */
+	BTSTAT_ABORTQUEUE = 0x26, /* abort queue generated */
+	BTSTAT_HASOFTWARE = 0x27, /* host adapter software error */
+	BTSTAT_HATIMEOUT = 0x30, /* host adapter hardware timeout error */
+	BTSTAT_SCSIPARITY = 0x34, /* SCSI parity error detected */
+};
+
+/*
+ * Register offsets.
+ *
+ * These registers are accessible both via i/o space and mm i/o.
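+ * This driver only uses the MMIO mapping set up with pci_iomap()
+ * (see pvscsi_reg_read/pvscsi_reg_write).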
+ */
+
+enum PVSCSIRegOffset {
+	PVSCSI_REG_OFFSET_COMMAND = 0x0,
+	PVSCSI_REG_OFFSET_COMMAND_DATA = 0x4,
+	PVSCSI_REG_OFFSET_COMMAND_STATUS = 0x8,
+	PVSCSI_REG_OFFSET_LAST_STS_0 = 0x100,
+	PVSCSI_REG_OFFSET_LAST_STS_1 = 0x104,
+	PVSCSI_REG_OFFSET_LAST_STS_2 = 0x108,
+	PVSCSI_REG_OFFSET_LAST_STS_3 = 0x10c,
+	PVSCSI_REG_OFFSET_INTR_STATUS = 0x100c,
+	PVSCSI_REG_OFFSET_INTR_MASK = 0x2010,
+	PVSCSI_REG_OFFSET_KICK_NON_RW_IO = 0x3014,
+	PVSCSI_REG_OFFSET_DEBUG = 0x3018,
+	PVSCSI_REG_OFFSET_KICK_RW_IO = 0x4018,
+};
+
+/*
+ * Virtual h/w commands.
+ */
+
+enum PVSCSICommands {
+	PVSCSI_CMD_FIRST = 0, /* has to be first */
+
+	PVSCSI_CMD_ADAPTER_RESET = 1,
+	PVSCSI_CMD_ISSUE_SCSI = 2,
+	PVSCSI_CMD_SETUP_RINGS = 3,
+	PVSCSI_CMD_RESET_BUS = 4,
+	PVSCSI_CMD_RESET_DEVICE = 5,
+	PVSCSI_CMD_ABORT_CMD = 6,
+	PVSCSI_CMD_CONFIG = 7,
+	PVSCSI_CMD_SETUP_MSG_RING = 8,
+	PVSCSI_CMD_DEVICE_UNPLUG = 9,
+
+	PVSCSI_CMD_LAST = 10 /* has to be last */
+};
+
+/*
+ * Command descriptor for PVSCSI_CMD_RESET_DEVICE --
+ */
+
+struct PVSCSICmdDescResetDevice {
+	u32 target;
+	u8 lun[8];
+} __packed;
+
+/*
+ * Command descriptor for PVSCSI_CMD_ABORT_CMD --
+ *
+ * - currently does not support specifying the LUN.
+ * - _pad should be 0.
+ */
+
+struct PVSCSICmdDescAbortCmd {
+	u64 context;
+	u32 target;
+	u32 _pad;
+} __packed;
+
+/*
+ * Command descriptor for PVSCSI_CMD_SETUP_RINGS --
+ *
+ * Notes:
+ * - reqRingNumPages and cmpRingNumPages must each be a power of two,
+ * - reqRingNumPages and cmpRingNumPages must be non-zero,
+ * - reqRingNumPages and cmpRingNumPages must not exceed
+ *   PVSCSI_SETUP_RINGS_MAX_NUM_PAGES.
+ */
+
+#define PVSCSI_SETUP_RINGS_MAX_NUM_PAGES 32
+struct PVSCSICmdDescSetupRings {
+	u32 reqRingNumPages;
+	u32 cmpRingNumPages;
+	u64 ringsStatePPN;
+	u64 reqRingPPNs[PVSCSI_SETUP_RINGS_MAX_NUM_PAGES];
+	u64 cmpRingPPNs[PVSCSI_SETUP_RINGS_MAX_NUM_PAGES];
+} __packed;
+
+/*
+ * Command descriptor for PVSCSI_CMD_SETUP_MSG_RING --
+ *
+ * Notes:
+ * - this command was not supported in the initial revision of the h/w
+ *   interface. Before using it, you need to check that it is supported by
+ *   writing PVSCSI_CMD_SETUP_MSG_RING to the 'command' register, then
+ *   immediately after read the 'command status' register:
+ *   * a value of -1 means that the cmd is NOT supported,
+ *   * a value != -1 means that the cmd IS supported.
+ *   If it's supported the 'command status' register should return:
+ *   sizeof(PVSCSICmdDescSetupMsgRing) / sizeof(u32).
+ * - this command should be issued _after_ the usual SETUP_RINGS so that the
+ *   RingsState page is already set up. If not, the command is a nop.
+ * - numPages must be a power of two,
+ * - numPages must be non-zero,
+ * - _pad should be zero.
+ */
+
+#define PVSCSI_SETUP_MSG_RING_MAX_NUM_PAGES 16
+
+struct PVSCSICmdDescSetupMsgRing {
+	u32 numPages;
+	u32 _pad;
+	u64 ringPPNs[PVSCSI_SETUP_MSG_RING_MAX_NUM_PAGES];
+} __packed;
+
+enum PVSCSIMsgType {
+	PVSCSI_MSG_DEV_ADDED = 0,
+	PVSCSI_MSG_DEV_REMOVED = 1,
+	PVSCSI_MSG_LAST = 2,
+};
+
+/*
+ * Msg descriptor.
+ *
+ * sizeof(struct PVSCSIRingMsgDesc) == 128.
+ *
+ * - type is of type enum PVSCSIMsgType.
+ * - the content of args depends on the type of event being delivered.
+ */
+
+struct PVSCSIRingMsgDesc {
+	u32 type;
+	u32 args[31];
+} __packed;
+
+struct PVSCSIMsgDescDevStatusChanged {
+	u32 type; /* PVSCSI_MSG_DEV _ADDED / _REMOVED */
+	u32 bus;
+	u32 target;
+	u8 lun[8];
+	u32 pad[27];
+} __packed;
+
+/*
+ * Rings state.
+ *
+ * - the fields:
+ *	. msgProdIdx,
+ *	.
msgConsIdx,
+ *	. msgNumEntriesLog2,
+ *	.. are only used once the SETUP_MSG_RING cmd has been issued.
+ * - '_pad' helps to ensure that the msg related fields are on their own
+ *   cache-line.
+ */
+
+struct PVSCSIRingsState {
+	u32 reqProdIdx;
+	u32 reqConsIdx;
+	u32 reqNumEntriesLog2;
+
+	u32 cmpProdIdx;
+	u32 cmpConsIdx;
+	u32 cmpNumEntriesLog2;
+
+	u8 _pad[104];
+
+	u32 msgProdIdx;
+	u32 msgConsIdx;
+	u32 msgNumEntriesLog2;
+} __packed;
+
+/*
+ * Request descriptor.
+ *
+ * sizeof(RingReqDesc) = 128
+ *
+ * - context: is a unique identifier of a command. It could normally be any
+ *   64bit value, however we currently store it in the serialNumber variable
+ *   of struct SCSI_Command, so we have the following restrictions due to the
+ *   way this field is handled in the vmkernel storage stack:
+ *   * this value can't be 0,
+ *   * the upper 32bit need to be 0 since serialNumber is a u32.
+ *   Currently tracked as PR 292060.
+ * - dataLen: contains the total number of bytes that need to be transferred.
+ * - dataAddr:
+ *   * if PVSCSI_FLAG_CMD_WITH_SG_LIST is set: dataAddr is the PA of the first
+ *     s/g table segment, each s/g segment is entirely contained on a single
+ *     page of physical memory,
+ *   * if PVSCSI_FLAG_CMD_WITH_SG_LIST is NOT set, then dataAddr is the PA of
+ *     the buffer used for the DMA transfer,
+ * - flags:
+ *   * PVSCSI_FLAG_CMD_WITH_SG_LIST: see dataAddr above,
+ *   * PVSCSI_FLAG_CMD_DIR_NONE: no DMA involved,
+ *   * PVSCSI_FLAG_CMD_DIR_TOHOST: transfer from device to main memory,
+ *   * PVSCSI_FLAG_CMD_DIR_TODEVICE: transfer from main memory to device,
+ *   * PVSCSI_FLAG_CMD_OUT_OF_BAND_CDB: reserved to handle CDBs larger than
+ *     16bytes. To be specified.
+ * - vcpuHint: vcpuId of the processor that will be most likely waiting for the
+ *   completion of the i/o. For guest OSes that use lowest priority message
+ *   delivery mode (such as windows), we use this "hint" to deliver the
+ *   completion action to the proper vcpu. For now, we can use the vcpuId of
+ *   the processor that initiated the i/o as a likely candidate for the vcpu
+ *   that will be waiting for the completion.
+ * - bus should be 0: we currently only support bus 0.
+ * - unused should be zero'd.
+ */
+
+#define PVSCSI_FLAG_CMD_WITH_SG_LIST (1 << 0)
+#define PVSCSI_FLAG_CMD_OUT_OF_BAND_CDB (1 << 1)
+#define PVSCSI_FLAG_CMD_DIR_NONE (1 << 2)
+#define PVSCSI_FLAG_CMD_DIR_TOHOST (1 << 3)
+#define PVSCSI_FLAG_CMD_DIR_TODEVICE (1 << 4)
+
+struct PVSCSIRingReqDesc {
+	u64 context;
+	u64 dataAddr;
+	u64 dataLen;
+	u64 senseAddr;
+	u32 senseLen;
+	u32 flags;
+	u8 cdb[16];
+	u8 cdbLen;
+	u8 lun[8];
+	u8 tag;
+	u8 bus;
+	u8 target;
+	u8 vcpuHint;
+	u8 unused[59];
+} __packed;
+
+/*
+ * Scatter-gather list management.
+ *
+ * As described above, when PVSCSI_FLAG_CMD_WITH_SG_LIST is set in the
+ * RingReqDesc.flags, then RingReqDesc.dataAddr is the PA of the first s/g
+ * table segment.
+ *
+ * - each segment of the s/g table contains a succession of struct
+ *   PVSCSISGElement.
+ * - each segment is entirely contained on a single physical page of memory.
+ * - a "chain" s/g element has the flag PVSCSI_SGE_FLAG_CHAIN_ELEMENT set in
+ *   PVSCSISGElement.flags and in this case:
+ *   * addr is the PA of the next s/g segment,
+ *   * length is undefined, assumed to be 0.
+ */
+
+struct PVSCSISGElement {
+	u64 addr;
+	u32 length;
+	u32 flags;
+} __packed;
+
+/*
+ * Completion descriptor.
+ *
+ * sizeof(RingCmpDesc) = 32
+ *
+ * - context: identifier of the command.
The same thing that was specified + * under "context" as part of struct RingReqDesc at initiation time, + * - dataLen: number of bytes transferred for the actual i/o operation, + * - senseLen: number of bytes written into the sense buffer, + * - hostStatus: adapter status, + * - scsiStatus: device status, + * - _pad should be zero. + */ + +struct PVSCSIRingCmpDesc { + u64 context; + u64 dataLen; + u32 senseLen; + u16 hostStatus; + u16 scsiStatus; + u32 _pad[2]; +} __packed; + +/* + * Interrupt status / IRQ bits. + */ + +#define PVSCSI_INTR_CMPL_0 (1 << 0) +#define PVSCSI_INTR_CMPL_1 (1 << 1) +#define PVSCSI_INTR_CMPL_MASK MASK(2) + +#define PVSCSI_INTR_MSG_0 (1 << 2) +#define PVSCSI_INTR_MSG_1 (1 << 3) +#define PVSCSI_INTR_MSG_MASK (MASK(2) << 2) + +#define PVSCSI_INTR_ALL_SUPPORTED MASK(4) + +/* + * Number of MSI-X vectors supported. + */ +#define PVSCSI_MAX_INTRS 24 + +/* + * Enumeration of supported MSI-X vectors + */ +#define PVSCSI_VECTOR_COMPLETION 0 + +/* + * Misc constants for the rings. + */ + +#define PVSCSI_MAX_NUM_PAGES_REQ_RING PVSCSI_SETUP_RINGS_MAX_NUM_PAGES +#define PVSCSI_MAX_NUM_PAGES_CMP_RING PVSCSI_SETUP_RINGS_MAX_NUM_PAGES +#define PVSCSI_MAX_NUM_PAGES_MSG_RING PVSCSI_SETUP_MSG_RING_MAX_NUM_PAGES + +#define PVSCSI_MAX_NUM_REQ_ENTRIES_PER_PAGE \ + (PAGE_SIZE / sizeof(struct PVSCSIRingReqDesc)) + +#define PVSCSI_MAX_REQ_QUEUE_DEPTH \ + (PVSCSI_MAX_NUM_PAGES_REQ_RING * PVSCSI_MAX_NUM_REQ_ENTRIES_PER_PAGE) + +#define PVSCSI_MEM_SPACE_COMMAND_NUM_PAGES 1 +#define PVSCSI_MEM_SPACE_INTR_STATUS_NUM_PAGES 1 +#define PVSCSI_MEM_SPACE_MISC_NUM_PAGES 2 +#define PVSCSI_MEM_SPACE_KICK_IO_NUM_PAGES 2 +#define PVSCSI_MEM_SPACE_MSIX_NUM_PAGES 2 + +enum PVSCSIMemSpace { + PVSCSI_MEM_SPACE_COMMAND_PAGE = 0, + PVSCSI_MEM_SPACE_INTR_STATUS_PAGE = 1, + PVSCSI_MEM_SPACE_MISC_PAGE = 2, + PVSCSI_MEM_SPACE_KICK_IO_PAGE = 4, + PVSCSI_MEM_SPACE_MSIX_TABLE_PAGE = 6, + PVSCSI_MEM_SPACE_MSIX_PBA_PAGE = 7, +}; + +#define PVSCSI_MEM_SPACE_NUM_PAGES \ + (PVSCSI_MEM_SPACE_COMMAND_NUM_PAGES + \ + PVSCSI_MEM_SPACE_INTR_STATUS_NUM_PAGES + \ + PVSCSI_MEM_SPACE_MISC_NUM_PAGES + \ + PVSCSI_MEM_SPACE_KICK_IO_NUM_PAGES + \ + PVSCSI_MEM_SPACE_MSIX_NUM_PAGES) + +#define PVSCSI_MEM_SPACE_SIZE (PVSCSI_MEM_SPACE_NUM_PAGES * PAGE_SIZE) + +#endif /* _VMW_PVSCSI_H_ */ -- cgit v1.2.3 From a968cd3ef1d315b8c4c48ea65ab5aac8421c2612 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 7 Dec 2009 12:52:10 +0100 Subject: [S390] MAINTAINERS: Add s390 drivers block There are currently 4 directories in drivers/s390 (block, char, cio, kvm) without maintainers. Add drivers/s390/ to the s390 kvm section. Add the rest to the default s390 section. 
Add a W: link for drivers/s390/crypto/

Signed-off-by: Joe Perches
Signed-off-by: Heiko Carstens
Signed-off-by: Martin Schwidefsky
---
 MAINTAINERS | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 4f96ac81089c..ea3c88ed3ade 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3088,6 +3088,7 @@ S: Supported
 F: Documentation/s390/kvm.txt
 F: arch/s390/include/asm/kvm*
 F: arch/s390/kvm/
+F: drivers/s390/kvm/
 
 KEXEC
 M: Eric Biederman
@@ -4502,6 +4503,7 @@ L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
 F: arch/s390/
+F: drivers/s390/
 
 S390 NETWORK DRIVERS
 M: Ursula Braun
@@ -4517,6 +4519,7 @@ M: Felix Beck
 M: Ralph Wuerthner
 M: linux390@de.ibm.com
 L: linux-s390@vger.kernel.org
+W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
 F: drivers/s390/crypto/
-- cgit v1.2.3 

From 1f26978afd123deb22dd3c7dc75771a02f6e03f6 Mon Sep 17 00:00:00 2001
From: Johannes Berg
Date: Sun, 6 Dec 2009 11:42:09 -0800
Subject: Input: appletouch - give up maintainership

I no longer have a machine with this, and as such am not really able to
help out with this driver any more. Remove the entire appletouch section
and let the driver fall under the general input subsystem.

Signed-off-by: Johannes Berg
Signed-off-by: Dmitry Torokhov
---
 MAINTAINERS | 7 -------
 1 file changed, 7 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 4f96ac81089c..85e1734f9a50 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -486,13 +486,6 @@ S: Maintained
 F: drivers/net/appletalk/
 F: net/appletalk/
 
-APPLETOUCH TOUCHPAD DRIVER
-M: Johannes Berg
-L: linux-input@vger.kernel.org
-S: Maintained
-F: Documentation/input/appletouch.txt
-F: drivers/input/mouse/appletouch.c
-
 ARC FRAMEBUFFER DRIVER
 M: Jaya Kumar
 S: Maintained
-- cgit v1.2.3 

From d33c861e71c57dd69d39d88b84a672adf86a2144 Mon Sep 17 00:00:00 2001
From: Grant Likely
Date: Wed, 25 Nov 2009 07:32:25 -0700
Subject: MAINTAINERS: add SPI co-maintainer.

Signed-off-by: Grant Likely
Acked-by: Andrew Morton
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index a1a2aceca5bd..2b697cb84d54 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4937,6 +4937,7 @@ F: drivers/char/specialix*
 
 SPI SUBSYSTEM
 M: David Brownell
+M: Grant Likely
 L: spi-devel-general@lists.sourceforge.net
 S: Maintained
 F: Documentation/spi/
-- cgit v1.2.3 

From 11c34c7deaeeebcee342cbc35e1bb2a6711b2431 Mon Sep 17 00:00:00 2001
From: Joe Perches
Date: Fri, 4 Dec 2009 07:16:59 +0000
Subject: MAINTAINERS: Add PowerPC patterns

On Fri, 2009-12-04 at 20:59 +1100, Benjamin Herrenschmidt wrote:
> On Fri, 2009-12-04 at 10:34 +0100, Jean Delvare wrote:
> > I've sent it to linuxppc-dev@ozlabs.org on October 14th. This is the
> > address which is listed 22 times in MAINTAINERS. If it isn't correct,
> > then please update MAINTAINERS.
> No it's fine, both should work. Your patches are there, just waiting for
> me to pick them up, I was just firing a reminder to the rest of the CC
> list :-) (and I do remember fwd'ing a couple of your patches to the
> list, for some reason they didn't make it to patchwork back then, that
> was a few months ago).
> Anyways, I've been stretched thin with all sort of stuff lately, so bear
> with me if I'm a bit slow at taking or testing stuff, I'm doing my best.

Adding patterns to the PowerPC sections of MAINTAINERS is useful.
Signed-off-by: Joe Perches Acked-by: Josh Boyer Acked-by: Grant Likely Acked-by: Olof Johansson Acked-by: Benjamin Herrenschmidt Signed-off-by: Benjamin Herrenschmidt --- MAINTAINERS | 15 +++++++++++++++ 1 file changed, 15 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c824b4d62754..5dd672213903 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3168,6 +3168,7 @@ LINUX FOR IBM pSERIES (RS/6000) M: Paul Mackerras W: http://www.ibm.com/linux/ltc/projects/ppc S: Supported +F: arch/powerpc/boot/rs6000.h LINUX FOR POWERPC (32-BIT AND 64-BIT) M: Benjamin Herrenschmidt @@ -3176,18 +3177,24 @@ W: http://www.penguinppc.org/ L: linuxppc-dev@ozlabs.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc.git S: Supported +F: Documentation/powerpc/ +F: arch/powerpc/ LINUX FOR POWER MACINTOSH M: Benjamin Herrenschmidt W: http://www.penguinppc.org/ L: linuxppc-dev@ozlabs.org S: Maintained +F: arch/powerpc/platforms/powermac/ +F: drivers/macintosh/ LINUX FOR POWERPC EMBEDDED MPC5XXX M: Grant Likely L: linuxppc-dev@ozlabs.org T: git git://git.secretlab.ca/git/linux-2.6.git S: Maintained +F: arch/powerpc/platforms/512x/ +F: arch/powerpc/platforms/52xx/ LINUX FOR POWERPC EMBEDDED PPC4XX M: Josh Boyer @@ -3196,6 +3203,8 @@ W: http://www.penguinppc.org/ L: linuxppc-dev@ozlabs.org T: git git://git.kernel.org/pub/scm/linux/kernel/git/jwboyer/powerpc-4xx.git S: Maintained +F: arch/powerpc/platforms/40x/ +F: arch/powerpc/platforms/44x/ LINUX FOR POWERPC EMBEDDED XILINX VIRTEX M: Grant Likely @@ -3203,6 +3212,8 @@ W: http://wiki.secretlab.ca/index.php/Linux_on_Xilinx_Virtex L: linuxppc-dev@ozlabs.org T: git git://git.secretlab.ca/git/linux-2.6.git S: Maintained +F: arch/powerpc/*/*virtex* +F: arch/powerpc/*/*/*virtex* LINUX FOR POWERPC EMBEDDED PPC8XX M: Vitaly Bordug @@ -3216,12 +3227,16 @@ M: Kumar Gala W: http://www.penguinppc.org/ L: linuxppc-dev@ozlabs.org S: Maintained +F: arch/powerpc/platforms/83xx/ LINUX FOR POWERPC PA SEMI PWRFICIENT M: Olof Johansson W: http://www.pasemi.com/ L: linuxppc-dev@ozlabs.org S: Supported +F: arch/powerpc/platforms/pasemi/ +F: drivers/*/*pasemi* +F: drivers/*/*/*pasemi* LINUX SECURITY MODULE (LSM) FRAMEWORK M: Chris Wright -- cgit v1.2.3 From 178ff4c9175db447f93b7343954b1d44707c881b Mon Sep 17 00:00:00 2001 From: Tomi Valkeinen Date: Tue, 22 Sep 2009 13:30:24 +0300 Subject: MAINTAINERS: Add OMAP2/3 DSS and OMAPFB maintainer Signed-off-by: Tomi Valkeinen --- MAINTAINERS | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ea781c1cfb5a..c34f772d73d3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3894,6 +3894,23 @@ L: linux-omap@vger.kernel.org S: Maintained F: drivers/video/omap/ +OMAP DISPLAY SUBSYSTEM SUPPORT (DSS2) +M: Tomi Valkeinen +L: linux-omap@vger.kernel.org +L: linux-fbdev@vger.kernel.org (moderated for non-subscribers) +S: Maintained +F: drivers/video/omap2/dss/ +F: drivers/video/omap2/vrfb.c +F: drivers/video/omap2/vram.c +F: Documentation/arm/OMAP/DSS + +OMAP FRAMEBUFFER SUPPORT (FOR DSS2) +M: Tomi Valkeinen +L: linux-omap@vger.kernel.org +L: linux-fbdev@vger.kernel.org (moderated for non-subscribers) +S: Maintained +F: drivers/video/omap2/omapfb/ + OMAP MMC SUPPORT M: Jarkko Lavinen L: linux-omap@vger.kernel.org -- cgit v1.2.3 From 047f4ec29453ad8a785ccab7d8098f28149828e2 Mon Sep 17 00:00:00 2001 From: Jean Delvare Date: Wed, 9 Dec 2009 20:35:46 +0100 Subject: MAINTAINERS: Add missing hwmon files Add missing documentation and header files to 
the hardware monitoring subsystem section. Signed-off-by: Jean Delvare --- MAINTAINERS | 2 ++ 1 file changed, 2 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ea781c1cfb5a..83398d8a2e8c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2423,7 +2423,9 @@ HARDWARE MONITORING L: lm-sensors@lm-sensors.org W: http://www.lm-sensors.org/ S: Orphan +F: Documentation/hwmon/ F: drivers/hwmon/ +F: include/linux/hwmon*.h HARDWARE RANDOM NUMBER GENERATOR CORE M: Matt Mackall -- cgit v1.2.3 From 4e233cbed249ea94d989b8be08eac0414dbdc44b Mon Sep 17 00:00:00 2001 From: Adrien Demarez Date: Wed, 9 Dec 2009 20:35:50 +0100 Subject: hwmon: New driver for the National Semiconductor LM73 The National Semiconductor LM73 is a single temperature sensor, much like the famous LM75. Signed-off-by: Adrien Demarez Signed-off-by: Jean Delvare --- MAINTAINERS | 6 ++ drivers/hwmon/Kconfig | 9 +++ drivers/hwmon/Makefile | 1 + drivers/hwmon/lm73.c | 205 +++++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 221 insertions(+) create mode 100644 drivers/hwmon/lm73.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 83398d8a2e8c..1f6155c2c9fc 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3320,6 +3320,12 @@ S: Maintained F: Documentation/hwmon/lis3lv02d F: drivers/hwmon/lis3lv02d.* +LM73 HARDWARE MONITOR DRIVER +M: Guillaume Ligneul +L: lm-sensors@lm-sensors.org +S: Maintained +F: drivers/hwmon/lm73.c + LM83 HARDWARE MONITOR DRIVER M: Jean Delvare L: lm-sensors@lm-sensors.org diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig index 700e93adeb33..87184b5f401d 100644 --- a/drivers/hwmon/Kconfig +++ b/drivers/hwmon/Kconfig @@ -442,6 +442,15 @@ config SENSORS_LM70 This driver can also be built as a module. If so, the module will be called lm70. +config SENSORS_LM73 + tristate "National Semiconductor LM73" + depends on I2C + help + If you say yes here you get support for National Semiconductor LM73 + sensor chips. + This driver can also be built as a module. If so, the module + will be called lm73. + config SENSORS_LM75 tristate "National Semiconductor LM75 and compatibles" depends on I2C diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile index 9f46cb019cc6..a24a52d12dc5 100644 --- a/drivers/hwmon/Makefile +++ b/drivers/hwmon/Makefile @@ -57,6 +57,7 @@ obj-$(CONFIG_SENSORS_LIS3LV02D) += lis3lv02d.o hp_accel.o obj-$(CONFIG_SENSORS_LIS3_SPI) += lis3lv02d.o lis3lv02d_spi.o obj-$(CONFIG_SENSORS_LM63) += lm63.o obj-$(CONFIG_SENSORS_LM70) += lm70.o +obj-$(CONFIG_SENSORS_LM73) += lm73.o obj-$(CONFIG_SENSORS_LM75) += lm75.o obj-$(CONFIG_SENSORS_LM77) += lm77.o obj-$(CONFIG_SENSORS_LM78) += lm78.o diff --git a/drivers/hwmon/lm73.c b/drivers/hwmon/lm73.c new file mode 100644 index 000000000000..0bf8b2a8e9f0 --- /dev/null +++ b/drivers/hwmon/lm73.c @@ -0,0 +1,205 @@ +/* + * LM73 Sensor driver + * Based on LM75 + * + * Copyright (C) 2007, CenoSYS (www.cenosys.com). + * Copyright (C) 2009, Bollore telecom (www.bolloretelecom.eu). 
+ * + * Guillaume Ligneul + * Adrien Demarez + * Jeremy Laine + * + * This software program is licensed subject to the GNU General Public License + * (GPL).Version 2,June 1991, available at + * http://www.gnu.org/licenses/old-licenses/gpl-2.0.html + */ + +#include +#include +#include +#include +#include +#include +#include + + +/* Addresses scanned */ +static const unsigned short normal_i2c[] = { 0x48, 0x49, 0x4a, 0x4c, + 0x4d, 0x4e, I2C_CLIENT_END }; + +/* Insmod parameters */ +I2C_CLIENT_INSMOD_1(lm73); + +/* LM73 registers */ +#define LM73_REG_INPUT 0x00 +#define LM73_REG_CONF 0x01 +#define LM73_REG_MAX 0x02 +#define LM73_REG_MIN 0x03 +#define LM73_REG_CTRL 0x04 +#define LM73_REG_ID 0x07 + +#define LM73_ID 0x9001 /* or 0x190 after a swab16() */ +#define DRVNAME "lm73" +#define LM73_TEMP_MIN (-40) +#define LM73_TEMP_MAX 150 + +/*-----------------------------------------------------------------------*/ + + +static ssize_t set_temp(struct device *dev, struct device_attribute *da, + const char *buf, size_t count) +{ + struct sensor_device_attribute *attr = to_sensor_dev_attr(da); + struct i2c_client *client = to_i2c_client(dev); + long temp; + short value; + + int status = strict_strtol(buf, 10, &temp); + if (status < 0) + return status; + + /* Write value */ + value = (short) SENSORS_LIMIT(temp/250, (LM73_TEMP_MIN*4), + (LM73_TEMP_MAX*4)) << 5; + i2c_smbus_write_word_data(client, attr->index, swab16(value)); + return count; +} + +static ssize_t show_temp(struct device *dev, struct device_attribute *da, + char *buf) +{ + struct sensor_device_attribute *attr = to_sensor_dev_attr(da); + struct i2c_client *client = to_i2c_client(dev); + /* use integer division instead of equivalent right shift to + guarantee arithmetic shift and preserve the sign */ + int temp = ((s16) (swab16(i2c_smbus_read_word_data(client, + attr->index)))*250) / 32; + return sprintf(buf, "%d\n", temp); +} + + +/*-----------------------------------------------------------------------*/ + +/* sysfs attributes for hwmon */ + +static SENSOR_DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO, + show_temp, set_temp, LM73_REG_MAX); +static SENSOR_DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO, + show_temp, set_temp, LM73_REG_MIN); +static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, + show_temp, NULL, LM73_REG_INPUT); + + +static struct attribute *lm73_attributes[] = { + &sensor_dev_attr_temp1_input.dev_attr.attr, + &sensor_dev_attr_temp1_max.dev_attr.attr, + &sensor_dev_attr_temp1_min.dev_attr.attr, + + NULL +}; + +static const struct attribute_group lm73_group = { + .attrs = lm73_attributes, +}; + +/*-----------------------------------------------------------------------*/ + +/* device probe and removal */ + +static int +lm73_probe(struct i2c_client *client, const struct i2c_device_id *id) +{ + struct device *hwmon_dev; + int status; + + /* Register sysfs hooks */ + status = sysfs_create_group(&client->dev.kobj, &lm73_group); + if (status) + return status; + + hwmon_dev = hwmon_device_register(&client->dev); + if (IS_ERR(hwmon_dev)) { + status = PTR_ERR(hwmon_dev); + goto exit_remove; + } + i2c_set_clientdata(client, hwmon_dev); + + dev_info(&client->dev, "%s: sensor '%s'\n", + dev_name(hwmon_dev), client->name); + + return 0; + +exit_remove: + sysfs_remove_group(&client->dev.kobj, &lm73_group); + return status; +} + +static int lm73_remove(struct i2c_client *client) +{ + struct device *hwmon_dev = i2c_get_clientdata(client); + + hwmon_device_unregister(hwmon_dev); + sysfs_remove_group(&client->dev.kobj, &lm73_group); + i2c_set_clientdata(client, 
NULL); + return 0; +} + +static const struct i2c_device_id lm73_ids[] = { + { "lm73", lm73 }, + { /* LIST END */ } +}; +MODULE_DEVICE_TABLE(i2c, lm73_ids); + +/* Return 0 if detection is successful, -ENODEV otherwise */ +static int lm73_detect(struct i2c_client *new_client, int kind, + struct i2c_board_info *info) +{ + struct i2c_adapter *adapter = new_client->adapter; + u16 id; + u8 ctrl; + + if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA | + I2C_FUNC_SMBUS_WORD_DATA)) + return -ENODEV; + + /* Check device ID */ + id = i2c_smbus_read_word_data(new_client, LM73_REG_ID); + ctrl = i2c_smbus_read_byte_data(new_client, LM73_REG_CTRL); + if ((id != LM73_ID) || (ctrl & 0x10)) + return -ENODEV; + + strlcpy(info->type, "lm73", I2C_NAME_SIZE); + + return 0; +} + +static struct i2c_driver lm73_driver = { + .class = I2C_CLASS_HWMON, + .driver = { + .name = "lm73", + }, + .probe = lm73_probe, + .remove = lm73_remove, + .id_table = lm73_ids, + .detect = lm73_detect, + .address_data = &addr_data, +}; + +/* module glue */ + +static int __init sensors_lm73_init(void) +{ + return i2c_add_driver(&lm73_driver); +} + +static void __exit sensors_lm73_exit(void) +{ + i2c_del_driver(&lm73_driver); +} + +MODULE_AUTHOR("Guillaume Ligneul "); +MODULE_DESCRIPTION("LM73 driver"); +MODULE_LICENSE("GPL"); + +module_init(sensors_lm73_init); +module_exit(sensors_lm73_exit); -- cgit v1.2.3 From b058b8596136d97b9469366f1f97fb35accf3d66 Mon Sep 17 00:00:00 2001 From: Jean Delvare Date: Wed, 9 Dec 2009 20:36:08 +0100 Subject: hwmon: (adt7475) Add an entry in MAINTAINERS As I've just done a lot of changes to the adt7475 driver, I volunteer to maintain it for the year to come. Signed-off-by: Jean Delvare Cc: Hans de Goede Cc: Jordan Crouse Cc: "Darrick J. Wong" --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 1f6155c2c9fc..9cba1feaf448 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -327,6 +327,13 @@ M: Colin Leroy S: Maintained F: drivers/macintosh/therm_adt746x.c +ADT7475 HARDWARE MONITOR DRIVER +M: Jean Delvare +L: lm-sensors@lm-sensors.org +S: Maintained +F: Documentation/hwmon/adt7475 +F: drivers/hwmon/adt7475.c + ADVANSYS SCSI DRIVER M: Matthew Wilcox L: linux-scsi@vger.kernel.org S: Maintained -- cgit v1.2.3 From 0c19d21e801bef90618a1f4fd0a13d4194609804 Mon Sep 17 00:00:00 2001 From: Daniel Walker Date: Mon, 7 Dec 2009 16:53:51 -0800 Subject: Add arm msm maintainer entry This adds a maintainer entry for the arch/arm/mach-msm sub-architecture and all its components in various locations.
Signed-off-by: David Brown Signed-off-by: Bryan Huntsman Signed-off-by: Daniel Walker --- MAINTAINERS | 13 +++++++++++++ 1 file changed, 13 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index a1a2aceca5bd..5a058b59cf13 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -755,6 +755,19 @@ L: openmoko-kernel@lists.openmoko.org (subscribers-only) W: http://wiki.openmoko.org/wiki/Neo_FreeRunner S: Supported +ARM/QUALCOMM MSM MACHINE SUPPORT +M: David Brown +M: Daniel Walker +M: Bryan Huntsman +F: arch/arm/mach-msm/ +F: drivers/video/msm/ +F: drivers/mmc/host/msm_sdcc.c +F: drivers/mmc/host/msm_sdcc.h +F: drivers/serial/msm_serial.h +F: drivers/serial/msm_serial.c +T: git git://codeaurora.org/quic/kernel/dwalker/linux-msm.git +S: Maintained + ARM/TOSA MACHINE SUPPORT M: Dmitry Eremin-Solenikov M: Dirk Opfer -- cgit v1.2.3 From 36d0344c254a7b333272757f858c403ea3a2d29f Mon Sep 17 00:00:00 2001 From: Sarah Sharp Date: Tue, 1 Dec 2009 10:37:07 -0800 Subject: USB: xhci: Add correct email and files to MAINTAINERS entry. Add the xHCI driver files to its MAINTAINERS entry so that I'm Cc'd on cleanup patches. Update the email address to one I actually use for sending patches and responding to Linux mailing list emails. Signed-off-by: Sarah Sharp Cc: stable Signed-off-by: Greg Kroah-Hartman --- MAINTAINERS | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d7f8668b7a72..a383281d04cd 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5663,9 +5663,11 @@ S: Maintained F: drivers/net/wireless/rndis_wlan.c USB XHCI DRIVER -M: Sarah Sharp +M: Sarah Sharp L: linux-usb@vger.kernel.org S: Supported +F: drivers/usb/host/xhci* +F: drivers/usb/host/pci-quirks* USB ZC0301 DRIVER M: Luca Risolia -- cgit v1.2.3 From f0348d44a0af14b00c6cfef02aa7478eb19663ff Mon Sep 17 00:00:00 2001 From: Oliver Neukum Date: Fri, 11 Dec 2009 15:01:26 -0800 Subject: MAINTAINERS: Transferring maintainership of cdc-ether Oliver Neukum takes over from Greg KH Signed-off-by: Oliver Neukum Signed-off-by: Greg Kroah-Hartman Signed-off-by: David S. Miller --- MAINTAINERS | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ea781c1cfb5a..04390e971c30 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5391,10 +5391,9 @@ S: Supported F: drivers/block/ub.c USB CDC ETHERNET DRIVER -M: Greg Kroah-Hartman +M: Oliver Neukum L: linux-usb@vger.kernel.org S: Maintained -W: http://www.kroah.com/linux-usb/ F: drivers/net/usb/cdc_*.c F: include/linux/usb/cdc.h -- cgit v1.2.3 From d8379ab1dde371f13d7fdddf05346840a82c2b61 Mon Sep 17 00:00:00 2001 From: Tony Finch Date: Fri, 27 Nov 2009 15:50:30 +0000 Subject: unifdef: update to upstream revision 1.190 Fix handling of input files (e.g. with no newline at EOF) that could make unifdef get into an unexpected state and call abort(). The new -B option compresses blank lines around a deleted section so that blank lines around "paragraphs" of code don't get doubled. The evaluator can now handle macros with arguments, and unbracketed arguments to the "defined" operator. Add myself to MAINTAINERS for unifdef.
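As a rough before/after sketch of the -B behaviour (a made-up input file, not taken from the kernel tree):

/* example.c */
int kept;

#ifdef GONE
int dropped;
#endif

int also_kept;

Running unifdef -UGONE -B example.c should drop the #ifdef block and, thanks to -B, compress the blank lines left behind so that a single blank line separates the two declarations instead of two. Likewise, the reworked evaluator should no longer abort on expressions such as #if defined GONE && GONE(x) > 1, i.e. defined used without parentheses and a macro invoked with arguments.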
Signed-off-by: Tony Finch Acked-by: Sam Ravnborg Signed-off-by: Michal Marek --- MAINTAINERS | 6 + scripts/unifdef.c | 341 +++++++++++++++++++++++++++++++++--------------------- 2 files changed, 213 insertions(+), 134 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d58fa703ec16..94381eddccfa 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5407,6 +5407,12 @@ F: drivers/uwb/* F: include/linux/uwb.h F: include/linux/uwb/ +UNIFDEF +M: Tony Finch +W: http://dotat.at/prog/unifdef +S: Maintained +F: scripts/unifdef.c + UNIFORM CDROM DRIVER M: Jens Axboe W: http://www.kernel.dk diff --git a/scripts/unifdef.c b/scripts/unifdef.c index 30d459fb0709..44d39785e50d 100644 --- a/scripts/unifdef.c +++ b/scripts/unifdef.c @@ -1,13 +1,5 @@ /* - * Copyright (c) 2002 - 2005 Tony Finch . All rights reserved. - * - * This code is derived from software contributed to Berkeley by Dave Yost. - * It was rewritten to support ANSI C by Tony Finch. The original version of - * unifdef carried the following copyright notice. None of its code remains - * in this version (though some of the names remain). - * - * Copyright (c) 1985, 1993 - * The Regents of the University of California. All rights reserved. + * Copyright (c) 2002 - 2009 Tony Finch * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions @@ -31,23 +23,20 @@ * SUCH DAMAGE. */ -#include +/* + * This code was derived from software contributed to Berkeley by Dave Yost. + * It was rewritten to support ANSI C by Tony Finch. The original version + * of unifdef carried the 4-clause BSD copyright licence. None of its code + * remains in this version (though some of the names remain) so it now + * carries a more liberal licence. + * + * The latest version is available from http://dotat.at/prog/unifdef + */ -#ifndef lint -#if 0 -static const char copyright[] = -"@(#) Copyright (c) 1985, 1993\n\ - The Regents of the University of California. All rights reserved.\n"; -#endif -#ifdef __IDSTRING -__IDSTRING(Berkeley, "@(#)unifdef.c 8.1 (Berkeley) 6/6/93"); -__IDSTRING(NetBSD, "$NetBSD: unifdef.c,v 1.8 2000/07/03 02:51:36 matt Exp $"); -__IDSTRING(dotat, "$dotat: things/unifdef.c,v 1.171 2005/03/08 12:38:48 fanf2 Exp $"); -#endif -#endif /* not lint */ -#ifdef __FBSDID -__FBSDID("$FreeBSD: /repoman/r/ncvs/src/usr.bin/unifdef/unifdef.c,v 1.20 2005/05/21 09:55:09 ru Exp $"); -#endif +static const char * const copyright[] = { + "@(#) Copyright (c) 2002 - 2009 Tony Finch \n", + "$dotat: unifdef/unifdef.c,v 1.190 2009/11/27 17:21:26 fanf2 Exp $", +}; /* * unifdef - remove ifdef'ed lines @@ -72,8 +61,6 @@ __FBSDID("$FreeBSD: /repoman/r/ncvs/src/usr.bin/unifdef/unifdef.c,v 1.20 2005/05 #include #include -size_t strlcpy(char *dst, const char *src, size_t siz); - /* types of input lines: */ typedef enum { LT_TRUEI, /* a true #if with ignore flag */ @@ -90,6 +77,7 @@ typedef enum { LT_DODGY_LAST = LT_DODGY + LT_ENDIF, LT_PLAIN, /* ordinary line */ LT_EOF, /* end of file */ + LT_ERROR, /* unevaluable #if */ LT_COUNT } Linetype; @@ -100,7 +88,7 @@ static char const * const linetype_name[] = { "DODGY IF", "DODGY TRUE", "DODGY FALSE", "DODGY ELIF", "DODGY ELTRUE", "DODGY ELFALSE", "DODGY ELSE", "DODGY ENDIF", - "PLAIN", "EOF" + "PLAIN", "EOF", "ERROR" }; /* state of #if processing */ @@ -168,11 +156,13 @@ static char const * const linestate_name[] = { * Globals. 
*/ +static bool compblank; /* -B: compress blank lines */ +static bool lnblank; /* -b: blank deleted lines */ static bool complement; /* -c: do the complement */ static bool debugging; /* -d: debugging reports */ static bool iocccok; /* -e: fewer IOCCC errors */ +static bool strictlogic; /* -K: keep ambiguous #ifs */ static bool killconsts; /* -k: eval constant #ifs */ -static bool lnblank; /* -l: blank deleted lines */ static bool lnnum; /* -n: add #line directives */ static bool symlist; /* -s: output symbol list */ static bool text; /* -t: this is a text file */ @@ -196,7 +186,9 @@ static bool ignoring[MAXDEPTH]; /* ignore comments state */ static int stifline[MAXDEPTH]; /* start of current #if */ static int depth; /* current #if nesting */ static int delcount; /* count of deleted lines */ -static bool keepthis; /* don't delete constant #if */ +static unsigned blankcount; /* count of blank lines */ +static unsigned blankmax; /* maximum recent blankcount */ +static bool constexpr; /* constant #if expression */ static int exitstat; /* program exit status */ @@ -206,13 +198,14 @@ static void done(void); static void error(const char *); static int findsym(const char *); static void flushline(bool); -static Linetype get_line(void); +static Linetype parseline(void); static Linetype ifeval(const char **); static void ignoreoff(void); static void ignoreon(void); static void keywordedit(const char *); static void nest(void); static void process(void); +static const char *skipargs(const char *); static const char *skipcomment(const char *); static const char *skipsym(const char *); static void state(Ifstate); @@ -220,7 +213,7 @@ static int strlcmp(const char *, const char *, size_t); static void unnest(void); static void usage(void); -#define endsym(c) (!isalpha((unsigned char)c) && !isdigit((unsigned char)c) && c != '_') +#define endsym(c) (!isalnum((unsigned char)c) && c != '_') /* * The main program. @@ -230,7 +223,7 @@ main(int argc, char *argv[]) { int opt; - while ((opt = getopt(argc, argv, "i:D:U:I:cdeklnst")) != -1) + while ((opt = getopt(argc, argv, "i:D:U:I:BbcdeKklnst")) != -1) switch (opt) { case 'i': /* treat stuff controlled by these symbols as text */ /* @@ -255,6 +248,13 @@ main(int argc, char *argv[]) case 'I': /* no-op for compatibility with cpp */ break; + case 'B': /* compress blank lines around removed section */ + compblank = true; + break; + case 'b': /* blank deleted lines instead of omitting them */ + case 'l': /* backwards compatibility */ + lnblank = true; + break; case 'c': /* treat -D as -U and vice versa */ complement = true; break; @@ -264,12 +264,12 @@ main(int argc, char *argv[]) case 'e': /* fewer errors from dodgy lines */ iocccok = true; break; + case 'K': /* keep ambiguous #ifs */ + strictlogic = true; + break; case 'k': /* process constant #ifs */ killconsts = true; break; - case 'l': /* blank deleted lines instead of omitting them */ - lnblank = true; - break; case 'n': /* add #line directive after deleted lines */ lnnum = true; break; @@ -284,6 +284,8 @@ main(int argc, char *argv[]) } argc -= optind; argv += optind; + if (compblank && lnblank) + errx(2, "-B and -b are mutually exclusive"); if (argc > 1) { errx(2, "can only do one file"); } else if (argc == 1 && strcmp(*argv, "-") != 0) { @@ -302,7 +304,7 @@ main(int argc, char *argv[]) static void usage(void) { - fprintf(stderr, "usage: unifdef [-cdeklnst] [-Ipath]" + fprintf(stderr, "usage: unifdef [-BbcdeKknst] [-Ipath]" " [-Dsym[=val]] [-Usym] [-iDsym[=val]] [-iUsym] ... 
[file]\n"); exit(2); } @@ -383,46 +385,46 @@ static state_fn * const trans_table[IS_COUNT][LT_COUNT] = { /* IS_OUTSIDE */ { Itrue, Ifalse,Fpass, Ftrue, Ffalse,Eelif, Eelif, Eelif, Eelse, Eendif, Oiffy, Oiffy, Fpass, Oif, Oif, Eelif, Eelif, Eelif, Eelse, Eendif, - print, done }, + print, done, abort }, /* IS_FALSE_PREFIX */ { Idrop, Idrop, Fdrop, Fdrop, Fdrop, Mpass, Strue, Sfalse,Selse, Dendif, Idrop, Idrop, Fdrop, Fdrop, Fdrop, Mpass, Eioccc,Eioccc,Eioccc,Eioccc, - drop, Eeof }, + drop, Eeof, abort }, /* IS_TRUE_PREFIX */ { Itrue, Ifalse,Fpass, Ftrue, Ffalse,Dfalse,Dfalse,Dfalse,Delse, Dendif, Oiffy, Oiffy, Fpass, Oif, Oif, Eioccc,Eioccc,Eioccc,Eioccc,Eioccc, - print, Eeof }, + print, Eeof, abort }, /* IS_PASS_MIDDLE */ { Itrue, Ifalse,Fpass, Ftrue, Ffalse,Pelif, Mtrue, Delif, Pelse, Pendif, Oiffy, Oiffy, Fpass, Oif, Oif, Pelif, Oelif, Oelif, Pelse, Pendif, - print, Eeof }, + print, Eeof, abort }, /* IS_FALSE_MIDDLE */ { Idrop, Idrop, Fdrop, Fdrop, Fdrop, Pelif, Mtrue, Delif, Pelse, Pendif, Idrop, Idrop, Fdrop, Fdrop, Fdrop, Eioccc,Eioccc,Eioccc,Eioccc,Eioccc, - drop, Eeof }, + drop, Eeof, abort }, /* IS_TRUE_MIDDLE */ { Itrue, Ifalse,Fpass, Ftrue, Ffalse,Melif, Melif, Melif, Melse, Pendif, Oiffy, Oiffy, Fpass, Oif, Oif, Eioccc,Eioccc,Eioccc,Eioccc,Pendif, - print, Eeof }, + print, Eeof, abort }, /* IS_PASS_ELSE */ { Itrue, Ifalse,Fpass, Ftrue, Ffalse,Eelif, Eelif, Eelif, Eelse, Pendif, Oiffy, Oiffy, Fpass, Oif, Oif, Eelif, Eelif, Eelif, Eelse, Pendif, - print, Eeof }, + print, Eeof, abort }, /* IS_FALSE_ELSE */ { Idrop, Idrop, Fdrop, Fdrop, Fdrop, Eelif, Eelif, Eelif, Eelse, Dendif, Idrop, Idrop, Fdrop, Fdrop, Fdrop, Eelif, Eelif, Eelif, Eelse, Eioccc, - drop, Eeof }, + drop, Eeof, abort }, /* IS_TRUE_ELSE */ { Itrue, Ifalse,Fpass, Ftrue, Ffalse,Eelif, Eelif, Eelif, Eelse, Dendif, Oiffy, Oiffy, Fpass, Oif, Oif, Eelif, Eelif, Eelif, Eelse, Eioccc, - print, Eeof }, + print, Eeof, abort }, /* IS_FALSE_TRAILER */ { Idrop, Idrop, Fdrop, Fdrop, Fdrop, Dfalse,Dfalse,Dfalse,Delse, Dendif, Idrop, Idrop, Fdrop, Fdrop, Fdrop, Dfalse,Dfalse,Dfalse,Delse, Eioccc, - drop, Eeof } + drop, Eeof, abort } /*TRUEI FALSEI IF TRUE FALSE ELIF ELTRUE ELFALSE ELSE ENDIF TRUEI FALSEI IF TRUE FALSE ELIF ELTRUE ELFALSE ELSE ENDIF (DODGY) - PLAIN EOF */ + PLAIN EOF ERROR */ }; /* @@ -463,9 +465,11 @@ keywordedit(const char *replacement) static void nest(void) { - depth += 1; - if (depth >= MAXDEPTH) + if (depth > MAXDEPTH-1) + abort(); /* bug */ + if (depth == MAXDEPTH-1) error("Too many levels of nesting"); + depth += 1; stifline[depth] = linenum; } static void @@ -490,15 +494,23 @@ flushline(bool keep) if (symlist) return; if (keep ^ complement) { - if (lnnum && delcount > 0) - printf("#line %d\n", linenum); - fputs(tline, stdout); - delcount = 0; + bool blankline = tline[strspn(tline, " \t\n")] == '\0'; + if (blankline && compblank && blankcount != blankmax) { + delcount += 1; + blankcount += 1; + } else { + if (lnnum && delcount > 0) + printf("#line %d\n", linenum); + fputs(tline, stdout); + delcount = 0; + blankmax = blankcount = blankline ? blankcount + 1 : 0; + } } else { if (lnblank) putc('\n', stdout); exitstat = 1; delcount += 1; + blankcount = 0; } } @@ -510,9 +522,12 @@ process(void) { Linetype lineval; + /* When compressing blank lines, act as if the file + is preceded by a large number of blank lines. 
*/ + blankmax = blankcount = 1000; for (;;) { linenum++; - lineval = get_line(); + lineval = parseline(); trans_table[ifstate[depth]][lineval](); debug("process %s -> %s depth %d", linetype_name[lineval], @@ -526,7 +541,7 @@ process(void) * help from skipcomment(). */ static Linetype -get_line(void) +parseline(void) { const char *cp; int cursym; @@ -595,9 +610,21 @@ get_line(void) if (incomment) linestate = LS_DIRTY; } - /* skipcomment should have changed the state */ - if (linestate == LS_HASH) - abort(); /* bug */ + /* skipcomment normally changes the state, except + if the last line of the file lacks a newline, or + if there is too much whitespace in a directive */ + if (linestate == LS_HASH) { + size_t len = cp - tline; + if (fgets(tline + len, MAXLINE - len, input) == NULL) { + /* append the missing newline */ + tline[len+0] = '\n'; + tline[len+1] = '\0'; + cp++; + linestate = LS_START; + } else { + linestate = LS_DIRTY; + } + } } if (linestate == LS_DIRTY) { while (*cp != '\0') @@ -610,17 +637,40 @@ get_line(void) /* * These are the binary operators that are supported by the expression - * evaluator. Note that if support for division is added then we also - * need short-circuiting booleans because of divide-by-zero. + * evaluator. */ -static int op_lt(int a, int b) { return (a < b); } -static int op_gt(int a, int b) { return (a > b); } -static int op_le(int a, int b) { return (a <= b); } -static int op_ge(int a, int b) { return (a >= b); } -static int op_eq(int a, int b) { return (a == b); } -static int op_ne(int a, int b) { return (a != b); } -static int op_or(int a, int b) { return (a || b); } -static int op_and(int a, int b) { return (a && b); } +static Linetype op_strict(int *p, int v, Linetype at, Linetype bt) { + if(at == LT_IF || bt == LT_IF) return (LT_IF); + return (*p = v, v ? LT_TRUE : LT_FALSE); +} +static Linetype op_lt(int *p, Linetype at, int a, Linetype bt, int b) { + return op_strict(p, a < b, at, bt); +} +static Linetype op_gt(int *p, Linetype at, int a, Linetype bt, int b) { + return op_strict(p, a > b, at, bt); +} +static Linetype op_le(int *p, Linetype at, int a, Linetype bt, int b) { + return op_strict(p, a <= b, at, bt); +} +static Linetype op_ge(int *p, Linetype at, int a, Linetype bt, int b) { + return op_strict(p, a >= b, at, bt); +} +static Linetype op_eq(int *p, Linetype at, int a, Linetype bt, int b) { + return op_strict(p, a == b, at, bt); +} +static Linetype op_ne(int *p, Linetype at, int a, Linetype bt, int b) { + return op_strict(p, a != b, at, bt); +} +static Linetype op_or(int *p, Linetype at, int a, Linetype bt, int b) { + if (!strictlogic && (at == LT_TRUE || bt == LT_TRUE)) + return (*p = 1, LT_TRUE); + return op_strict(p, a || b, at, bt); +} +static Linetype op_and(int *p, Linetype at, int a, Linetype bt, int b) { + if (!strictlogic && (at == LT_FALSE || bt == LT_FALSE)) + return (*p = 0, LT_FALSE); + return op_strict(p, a && b, at, bt); +} /* * An evaluation function takes three arguments, as follows: (1) a pointer to @@ -629,8 +679,8 @@ static int op_and(int a, int b) { return (a && b); } * value of the expression; and (3) a pointer to a char* that points to the * expression to be evaluated and that is updated to the end of the expression * when evaluation is complete. The function returns LT_FALSE if the value of - * the expression is zero, LT_TRUE if it is non-zero, or LT_IF if the - * expression could not be evaluated. 
+ * the expression is zero, LT_TRUE if it is non-zero, LT_IF if the expression + * depends on an unknown symbol, or LT_ERROR if there is a parse failure. */ struct ops; @@ -649,7 +699,7 @@ static const struct ops { eval_fn *inner; struct op { const char *str; - int (*fn)(int, int); + Linetype (*fn)(int *, Linetype, int, Linetype, int); } op[5]; } eval_ops[] = { { eval_table, { { "||", op_or } } }, @@ -664,8 +714,8 @@ static const struct ops { /* * Function for evaluating the innermost parts of expressions, - * viz. !expr (expr) defined(symbol) symbol number - * We reset the keepthis flag when we find a non-constant subexpression. + * viz. !expr (expr) number defined(symbol) symbol + * We reset the constexpr flag in the last two cases. */ static Linetype eval_unary(const struct ops *ops, int *valp, const char **cpp) @@ -673,68 +723,83 @@ eval_unary(const struct ops *ops, int *valp, const char **cpp) const char *cp; char *ep; int sym; + bool defparen; + Linetype lt; cp = skipcomment(*cpp); if (*cp == '!') { debug("eval%d !", ops - eval_ops); cp++; - if (eval_unary(ops, valp, &cp) == LT_IF) { - *cpp = cp; - return (LT_IF); + lt = eval_unary(ops, valp, &cp); + if (lt == LT_ERROR) + return (LT_ERROR); + if (lt != LT_IF) { + *valp = !*valp; + lt = *valp ? LT_TRUE : LT_FALSE; } - *valp = !*valp; } else if (*cp == '(') { cp++; debug("eval%d (", ops - eval_ops); - if (eval_table(eval_ops, valp, &cp) == LT_IF) - return (LT_IF); + lt = eval_table(eval_ops, valp, &cp); + if (lt == LT_ERROR) + return (LT_ERROR); cp = skipcomment(cp); if (*cp++ != ')') - return (LT_IF); + return (LT_ERROR); } else if (isdigit((unsigned char)*cp)) { debug("eval%d number", ops - eval_ops); *valp = strtol(cp, &ep, 0); + if (ep == cp) + return (LT_ERROR); + lt = *valp ? LT_TRUE : LT_FALSE; cp = skipsym(cp); } else if (strncmp(cp, "defined", 7) == 0 && endsym(cp[7])) { cp = skipcomment(cp+7); debug("eval%d defined", ops - eval_ops); - if (*cp++ != '(') - return (LT_IF); - cp = skipcomment(cp); + if (*cp == '(') { + cp = skipcomment(cp+1); + defparen = true; + } else { + defparen = false; + } sym = findsym(cp); - cp = skipsym(cp); - cp = skipcomment(cp); - if (*cp++ != ')') - return (LT_IF); - if (sym >= 0) + if (sym < 0) { + lt = LT_IF; + } else { *valp = (value[sym] != NULL); - else { - *cpp = cp; - return (LT_IF); + lt = *valp ? LT_TRUE : LT_FALSE; } - keepthis = false; + cp = skipsym(cp); + cp = skipcomment(cp); + if (defparen && *cp++ != ')') + return (LT_ERROR); + constexpr = false; } else if (!endsym(*cp)) { debug("eval%d symbol", ops - eval_ops); sym = findsym(cp); - if (sym < 0) - return (LT_IF); - if (value[sym] == NULL) + cp = skipsym(cp); + if (sym < 0) { + lt = LT_IF; + cp = skipargs(cp); + } else if (value[sym] == NULL) { *valp = 0; - else { + lt = LT_FALSE; + } else { *valp = strtol(value[sym], &ep, 0); if (*ep != '\0' || ep == value[sym]) - return (LT_IF); + return (LT_ERROR); + lt = *valp ? LT_TRUE : LT_FALSE; + cp = skipargs(cp); } - cp = skipsym(cp); - keepthis = false; + constexpr = false; } else { debug("eval%d bad expr", ops - eval_ops); - return (LT_IF); + return (LT_ERROR); } *cpp = cp; debug("eval%d = %d", ops - eval_ops, *valp); - return (*valp ? 
LT_TRUE : LT_FALSE); + return (lt); } /* @@ -746,11 +811,13 @@ eval_table(const struct ops *ops, int *valp, const char **cpp) const struct op *op; const char *cp; int val; - Linetype lhs, rhs; + Linetype lt, rt; debug("eval%d", ops - eval_ops); cp = *cpp; - lhs = ops->inner(ops+1, valp, &cp); + lt = ops->inner(ops+1, valp, &cp); + if (lt == LT_ERROR) + return (LT_ERROR); for (;;) { cp = skipcomment(cp); for (op = ops->op; op->str != NULL; op++) @@ -760,32 +827,16 @@ eval_table(const struct ops *ops, int *valp, const char **cpp) break; cp += strlen(op->str); debug("eval%d %s", ops - eval_ops, op->str); - rhs = ops->inner(ops+1, &val, &cp); - if (op->fn == op_and && (lhs == LT_FALSE || rhs == LT_FALSE)) { - debug("eval%d: and always false", ops - eval_ops); - if (lhs == LT_IF) - *valp = val; - lhs = LT_FALSE; - continue; - } - if (op->fn == op_or && (lhs == LT_TRUE || rhs == LT_TRUE)) { - debug("eval%d: or always true", ops - eval_ops); - if (lhs == LT_IF) - *valp = val; - lhs = LT_TRUE; - continue; - } - if (rhs == LT_IF) - lhs = LT_IF; - if (lhs != LT_IF) - *valp = op->fn(*valp, val); + rt = ops->inner(ops+1, &val, &cp); + if (rt == LT_ERROR) + return (LT_ERROR); + lt = op->fn(valp, lt, *valp, rt, val); } *cpp = cp; debug("eval%d = %d", ops - eval_ops, *valp); - if (lhs != LT_IF) - lhs = (*valp ? LT_TRUE : LT_FALSE); - return lhs; + debug("eval%d lt = %s", ops - eval_ops, linetype_name[lt]); + return (lt); } /* @@ -796,17 +847,14 @@ eval_table(const struct ops *ops, int *valp, const char **cpp) static Linetype ifeval(const char **cpp) { - const char *cp = *cpp; int ret; - int val; + int val = 0; debug("eval %s", *cpp); - keepthis = killconsts ? false : true; - ret = eval_table(eval_ops, &val, &cp); - if (ret != LT_IF) - *cpp = cp; + constexpr = killconsts ? false : true; + ret = eval_table(eval_ops, &val, cpp); debug("eval = %d", val); - return (keepthis ? LT_IF : ret); + return (constexpr ? LT_IF : ret == LT_ERROR ? LT_IF : ret); } /* @@ -917,6 +965,31 @@ skipcomment(const char *cp) return (cp); } +/* + * Skip macro arguments. + */ +static const char * +skipargs(const char *cp) +{ + const char *ocp = cp; + int level = 0; + cp = skipcomment(cp); + if (*cp != '(') + return (cp); + do { + if (*cp == '(') + level++; + if (*cp == ')') + level--; + cp = skipcomment(cp+1); + } while (level != 0 && *cp != '\0'); + if (level == 0) + return (cp); + else + /* Rewind and re-detect the syntax error later. */ + return (ocp); +} + /* * Skip over an identifier. */ @@ -929,7 +1002,7 @@ skipsym(const char *cp) } /* - * Look for the symbol in the symbol table. If is is found, we return + * Look for the symbol in the symbol table. If it is found, we return * the symbol table index, else we return -1. */ static int -- cgit v1.2.3 From 8bf2805918e9ee3ccdc18590fec0416a01d635c9 Mon Sep 17 00:00:00 2001 From: Arnd Bergmann Date: Mon, 14 Dec 2009 01:53:41 +0000 Subject: x25: Update maintainer. On Monday 14 December 2009, andrew hendry wrote: > Thanks, I didn't know X.25 was actively maintained. I get bounces. > Is the maintainers entry out of date? From looking at the posts on the x.25 mailing list and the changes that went into the kernel during the last three years in that area, I think it is safe to say that you are now the maintainer ;-). The last mail on this topic from Henner Eisen was around 2001. > AX.25 NETWORK LAYER > M: Ralf Baechle > > X.25 NETWORK LAYER > M: Henner Eisen How about this change? Signed-off-by: Andrew Hendry Signed-off-by: David S.
Miller --- MAINTAINERS | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 93a074330782..256139e0c231 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5959,9 +5959,9 @@ F: sound/soc/codecs/wm8350.c F: sound/soc/codecs/wm8400.c X.25 NETWORK LAYER -M: Henner Eisen +M: Andrew Hendry L: linux-x25@vger.kernel.org -S: Maintained +S: Odd Fixes F: Documentation/networking/x25* F: include/net/x25* F: net/x25/ -- cgit v1.2.3 From 5ce45962b26ae867e98e60177f62f9695b49a936 Mon Sep 17 00:00:00 2001 From: Michal Marek Date: Mon, 14 Dec 2009 17:57:43 -0800 Subject: MAINTAINERS: new kbuild maintainer Sam was fine with handing over kbuild maintainership to me. The git trees are already in linux-next, a merge request will follow shortly. Acked-by: Sam Ravnborg Signed-off-by: Michal Marek Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 1f21c34124db..8acf149164e3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3081,8 +3081,11 @@ S: Maintained F: fs/autofs4/ KERNEL BUILD +M: Michal Marek +T: git git://repo.or.cz/linux-kbuild.git for-next +T: git git://repo.or.cz/linux-kbuild.git for-linus L: linux-kbuild@vger.kernel.org -S: Orphan +S: Maintained F: Documentation/kbuild/ F: Makefile F: scripts/Makefile.* -- cgit v1.2.3 From d1f28953ca338a9af6d91d5896060179142de0d1 Mon Sep 17 00:00:00 2001 From: KOSAKI Motohiro Date: Mon, 14 Dec 2009 18:00:52 -0800 Subject: MAINTAINERS: mark cifs mailing list as "moderated for non-subscribers" If non-subscribers post a bug report to the CIFS mailing list, they will get the following message. Your mail to 'linux-cifs-client' with the subject [PATCH x/x] cifs: xxxxxxxxxxxxx Is being held until the list moderator can review it for approval. The reason it is being held: Post by non-member to a members-only list Either the message will get posted to the list, or you will receive notification of the moderator's decision. If you would like to cancel this posting, please visit the following URL: A members-only list should be marked as such in the MAINTAINERS file.
Signed-off-by: KOSAKI Motohiro Cc: Steve French Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8acf149164e3..9a80011db083 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1482,8 +1482,8 @@ F: include/linux/coda*.h COMMON INTERNET FILE SYSTEM (CIFS) M: Steve French -L: linux-cifs-client@lists.samba.org -L: samba-technical@lists.samba.org +L: linux-cifs-client@lists.samba.org (moderated for non-subscribers) +L: samba-technical@lists.samba.org (moderated for non-subscribers) W: http://linux-cifs.samba.org/ T: git git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6.git S: Supported -- cgit v1.2.3 From b57fe924740b30cdbaaf20da8b7375f167f6ac35 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 14 Dec 2009 18:00:52 -0800 Subject: MAINTAINERS: rename PALM TREO section and file patterns Signed-off-by: Joe Perches Cc: Tomas Cech Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 9a80011db083..833f4c6ef038 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -835,13 +835,13 @@ F: arch/arm/mach-pxa/palmte2.c F: arch/arm/mach-pxa/include/mach/palmtc.h F: arch/arm/mach-pxa/palmtc.c -ARM/PALM TREO 680 SUPPORT +ARM/PALM TREO SUPPORT M: Tomas Cech L: linux-arm-kernel@lists.infradead.org W: http://hackndev.com S: Maintained -F: arch/arm/mach-pxa/include/mach/treo680.h -F: arch/arm/mach-pxa/treo680.c +F: arch/arm/mach-pxa/include/mach/palmtreo.h +F: arch/arm/mach-pxa/palmtreo.c ARM/PALMZ72 SUPPORT M: Sergey Lapin -- cgit v1.2.3 From e74b98e58f57d293afa2ab86770b02981f50076e Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 14 Dec 2009 18:00:53 -0800 Subject: MAINTAINERS: remove file pattern from KERNEL VIRTUAL MACHINE (KVM) FOR AMD-V Commit 6c8166a77c98f473eb91e96a61c3cf78ac617278 ("KVM: SVM: Fold kvm_svm.h info svm.c") folded this file away. Signed-off-by: Joe Perches Cc: Avi Kivity Cc: Joerg Roedel Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 1 - 1 file changed, 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 833f4c6ef038..ab6bc7b5587b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3127,7 +3127,6 @@ L: kvm@vger.kernel.org W: http://kvm.qumranet.com S: Supported F: arch/x86/include/asm/svm.h -F: arch/x86/kvm/kvm_svm.h F: arch/x86/kvm/svm.c KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC -- cgit v1.2.3 From 3768f0b1d18369bbb4ddc3adca791d26704e8047 Mon Sep 17 00:00:00 2001 From: Joe Perches Date: Mon, 14 Dec 2009 18:00:54 -0800 Subject: MAINTAINERS: Update file patterns for WOLFSON MICROELECTRONICS PMIC DRIVERS One of the includes pointed to a non-existent directory Add Documentation/hwmon/wm83?? Add sound/soc/codecs/wm(8350|8400).h files Signed-off-by: Joe Perches Acked-by: Mark Brown Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- MAINTAINERS | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index ab6bc7b5587b..0a32c3ec6b1c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5976,6 +5976,7 @@ M: Mark Brown T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus W: http://opensource.wolfsonmicro.com/node/8 S: Supported +F: Documentation/hwmon/wm83?? 
F: drivers/leds/leds-wm83*.c F: drivers/mfd/wm8*.c F: drivers/power/wm83*.c @@ -5985,9 +5986,9 @@ F: drivers/video/backlight/wm83*_bl.c F: drivers/watchdog/wm83*_wdt.c F: include/linux/mfd/wm831x/ F: include/linux/mfd/wm8350/ -F: include/linux/mfd/wm8400/ -F: sound/soc/codecs/wm8350.c -F: sound/soc/codecs/wm8400.c +F: include/linux/mfd/wm8400* +F: sound/soc/codecs/wm8350.* +F: sound/soc/codecs/wm8400.* X.25 NETWORK LAYER M: Henner Eisen -- cgit v1.2.3 From 7bc98b97ed5dfe710025414de771baa674998892 Mon Sep 17 00:00:00 2001 From: Andi Kleen Date: Wed, 16 Dec 2009 12:19:57 +0100 Subject: HWPOISON: Add Andi Kleen as hwpoison maintainer to MAINTAINERS Signed-off-by: Andi Kleen --- MAINTAINERS | 9 +++++++++ 1 file changed, 9 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 0a32c3ec6b1c..99a0e513cd7c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2377,6 +2377,15 @@ W: http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/ S: Maintained F: drivers/hwmon/hdaps.c +HWPOISON MEMORY FAILURE HANDLING +M: Andi Kleen +L: linux-mm@kvack.org +L: linux-kernel@vger.kernel.org +T: git git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6.git hwpoison +S: Maintained +F: mm/memory-failure.c +F: mm/hwpoison-inject.c + HYPERVISOR VIRTUAL CONSOLE DRIVER L: linuxppc-dev@ozlabs.org S: Odd Fixes -- cgit v1.2.3 From 355ffe69cb329bb9ab9eca3ebef369268162220d Mon Sep 17 00:00:00 2001 From: David Vrabel Date: Tue, 22 Dec 2009 13:13:28 +0000 Subject: MAINTAINERS: update entries for WUSB, UWB and WLP subsystems Update the file patterns for the WUSB, UWB and WLP subsystems and add netdev@vger as the list for the WLP subsystem. Signed-off-by: David Vrabel --- MAINTAINERS | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index efd2ef2c2660..d5244f1580bc 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1402,6 +1402,8 @@ L: linux-usb@vger.kernel.org S: Supported F: Documentation/usb/WUSB-Design-overview.txt F: Documentation/usb/wusb-cbaf +F: drivers/usb/host/hwa-hc.c +F: drivers/usb/host/whci/ F: drivers/usb/wusbcore/ F: include/linux/usb/wusb* @@ -5430,7 +5432,10 @@ ULTRA-WIDEBAND (UWB) SUBSYSTEM: M: David Vrabel L: linux-usb@vger.kernel.org S: Supported -F: drivers/uwb/* +F: drivers/uwb/ +X: drivers/uwb/wlp/ +X: drivers/uwb/i1480/i1480u-wlp/ +X: drivers/uwb/i1480/i1480-wlp.h F: include/linux/uwb.h F: include/linux/uwb/ @@ -5943,9 +5948,12 @@ W: http://linuxwimax.org WIMEDIA LLC PROTOCOL (WLP) SUBSYSTEM M: David Vrabel +L: netdev@vger.kernel.org S: Maintained F: include/linux/wlp.h F: drivers/uwb/wlp/ +F: drivers/uwb/i1480/i1480u-wlp/ +F: drivers/uwb/i1480/i1480-wlp.h WISTRON LAPTOP BUTTON DRIVER M: Miloslav Trmac -- cgit v1.2.3 From 0f1006b1f2b7862e35973c53cc4f99bea65d5a1d Mon Sep 17 00:00:00 2001 From: Anisse Astier Date: Thu, 17 Dec 2009 11:28:49 +0100 Subject: MAINTAINERS: add maintainer for msi-wmi driver Signed-off-by: Anisse Astier Signed-off-by: Len Brown --- MAINTAINERS | 5 +++++ 1 file changed, 5 insertions(+) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index efd2ef2c2660..eefe43927898 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3644,6 +3644,11 @@ W: http://0pointer.de/lennart/tchibo.html S: Maintained F: drivers/platform/x86/msi-laptop.c +MSI WMI SUPPORT +M: Anisse Astier +S: Supported +F: drivers/platform/x86/msi-wmi.c + MULTIFUNCTION DEVICES (MFD) M: Samuel Ortiz T: git git://git.kernel.org/pub/scm/linux/kernel/git/sameo/mfd-2.6.git -- cgit v1.2.3 From 
8a700f3d0d34a79c9cb25f2c66552c181f9c5737 Mon Sep 17 00:00:00 2001 From: Felipe Balbi Date: Tue, 15 Dec 2009 11:08:45 +0200 Subject: USB: musb: MAINTAINERS: Fix my tree's address The tree is now at a new address. Signed-off-by: Felipe Balbi Signed-off-by: Greg Kroah-Hartman --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index d5244f1580bc..745643b8c344 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3679,7 +3679,7 @@ F: include/linux/isicom.h MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER M: Felipe Balbi L: linux-usb@vger.kernel.org -T: git git://gitorious.org/musb/mainline.git +T: git git://gitorious.org/usb/usb.git S: Maintained F: drivers/usb/musb/ -- cgit v1.2.3 From 529aa8cb0a59367d08883f818e8c47028e819d0d Mon Sep 17 00:00:00 2001 From: Thadeu Lima de Souza Cascardo Date: Mon, 21 Dec 2009 16:20:01 -0800 Subject: classmate-laptop: add support for Classmate PC ACPI devices This adds support for devices like keyboard, backlight, tablet and accelerometer. This work is supported by International Syst S/A. [randy.dunlap@oracle.com: cmpc_acpi: depends on ACPI] [akpm@linux-foundation.org: readability tweaks] [akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Andrew Morton Signed-off-by: Len Brown --- MAINTAINERS | 6 + drivers/platform/x86/Kconfig | 12 + drivers/platform/x86/Makefile | 1 + drivers/platform/x86/classmate-laptop.c | 609 ++++++++++++++++++++++++++++++++ 4 files changed, 628 insertions(+) create mode 100644 drivers/platform/x86/classmate-laptop.c (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index efd2ef2c2660..9ddfc97b1ded 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1470,6 +1470,12 @@ L: linux-scsi@vger.kernel.org S: Supported F: drivers/scsi/fnic/ +CMPC ACPI DRIVER +M: Thadeu Lima de Souza Cascardo +M: Daniel Oliveira Nascimento +S: Supported +F: drivers/platform/x86/classmate-laptop.c + CODA FILE SYSTEM M: Jan Harkes M: coda@cs.cmu.edu diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig index fc5bf9d2a3f3..ec4faffe6b05 100644 --- a/drivers/platform/x86/Kconfig +++ b/drivers/platform/x86/Kconfig @@ -464,4 +464,16 @@ config TOSHIBA_BT_RFKILL If you have a modern Toshiba laptop with a Bluetooth and an RFKill switch (such as the Portege R500), say Y. + +config ACPI_CMPC + tristate "CMPC Laptop Extras" + depends on X86 && ACPI + select INPUT + select BACKLIGHT_CLASS_DEVICE + default n + help + Support for Intel Classmate PC ACPI devices, including some + keys as input device, backlight device, tablet and accelerometer + devices.
+ endif # X86_PLATFORM_DEVICES diff --git a/drivers/platform/x86/Makefile b/drivers/platform/x86/Makefile index b7474b6a8bf1..9cd9fa0a27e6 100644 --- a/drivers/platform/x86/Makefile +++ b/drivers/platform/x86/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_ASUS_LAPTOP) += asus-laptop.o obj-$(CONFIG_EEEPC_LAPTOP) += eeepc-laptop.o obj-$(CONFIG_MSI_LAPTOP) += msi-laptop.o +obj-$(CONFIG_ACPI_CMPC) += classmate-laptop.o obj-$(CONFIG_COMPAL_LAPTOP) += compal-laptop.o obj-$(CONFIG_DELL_LAPTOP) += dell-laptop.o obj-$(CONFIG_DELL_WMI) += dell-wmi.o diff --git a/drivers/platform/x86/classmate-laptop.c b/drivers/platform/x86/classmate-laptop.c new file mode 100644 index 000000000000..ed90082cdf1d --- /dev/null +++ b/drivers/platform/x86/classmate-laptop.c @@ -0,0 +1,609 @@ +/* + * Copyright (C) 2009 Thadeu Lima de Souza Cascardo + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. + */ + + +#include +#include +#include +#include +#include +#include + +MODULE_LICENSE("GPL"); + + +struct cmpc_accel { + int sensitivity; +}; + +#define CMPC_ACCEL_SENSITIVITY_DEFAULT 5 + + +/* + * Generic input device code. + */ + +typedef void (*input_device_init)(struct input_dev *dev); + +static int cmpc_add_acpi_notify_device(struct acpi_device *acpi, char *name, + input_device_init idev_init) +{ + struct input_dev *inputdev; + int error; + + inputdev = input_allocate_device(); + if (!inputdev) + return -ENOMEM; + inputdev->name = name; + inputdev->dev.parent = &acpi->dev; + idev_init(inputdev); + error = input_register_device(inputdev); + if (error) { + input_free_device(inputdev); + return error; + } + dev_set_drvdata(&acpi->dev, inputdev); + return 0; +} + +static int cmpc_remove_acpi_notify_device(struct acpi_device *acpi) +{ + struct input_dev *inputdev = dev_get_drvdata(&acpi->dev); + input_unregister_device(inputdev); + return 0; +} + +/* + * Accelerometer code. 
+ */ +static acpi_status cmpc_start_accel(acpi_handle handle) +{ + union acpi_object param[2]; + struct acpi_object_list input; + acpi_status status; + + param[0].type = ACPI_TYPE_INTEGER; + param[0].integer.value = 0x3; + param[1].type = ACPI_TYPE_INTEGER; + input.count = 2; + input.pointer = param; + status = acpi_evaluate_object(handle, "ACMD", &input, NULL); + return status; +} + +static acpi_status cmpc_stop_accel(acpi_handle handle) +{ + union acpi_object param[2]; + struct acpi_object_list input; + acpi_status status; + + param[0].type = ACPI_TYPE_INTEGER; + param[0].integer.value = 0x4; + param[1].type = ACPI_TYPE_INTEGER; + input.count = 2; + input.pointer = param; + status = acpi_evaluate_object(handle, "ACMD", &input, NULL); + return status; +} + +static acpi_status cmpc_accel_set_sensitivity(acpi_handle handle, int val) +{ + union acpi_object param[2]; + struct acpi_object_list input; + + param[0].type = ACPI_TYPE_INTEGER; + param[0].integer.value = 0x02; + param[1].type = ACPI_TYPE_INTEGER; + param[1].integer.value = val; + input.count = 2; + input.pointer = param; + return acpi_evaluate_object(handle, "ACMD", &input, NULL); +} + +static acpi_status cmpc_get_accel(acpi_handle handle, + unsigned char *x, + unsigned char *y, + unsigned char *z) +{ + union acpi_object param[2]; + struct acpi_object_list input; + struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, 0 }; + unsigned char *locs; + acpi_status status; + + param[0].type = ACPI_TYPE_INTEGER; + param[0].integer.value = 0x01; + param[1].type = ACPI_TYPE_INTEGER; + input.count = 2; + input.pointer = param; + status = acpi_evaluate_object(handle, "ACMD", &input, &output); + if (ACPI_SUCCESS(status)) { + union acpi_object *obj; + obj = output.pointer; + locs = obj->buffer.pointer; + *x = locs[0]; + *y = locs[1]; + *z = locs[2]; + kfree(output.pointer); + } + return status; +} + +static void cmpc_accel_handler(struct acpi_device *dev, u32 event) +{ + if (event == 0x81) { + unsigned char x, y, z; + acpi_status status; + + status = cmpc_get_accel(dev->handle, &x, &y, &z); + if (ACPI_SUCCESS(status)) { + struct input_dev *inputdev = dev_get_drvdata(&dev->dev); + + input_report_abs(inputdev, ABS_X, x); + input_report_abs(inputdev, ABS_Y, y); + input_report_abs(inputdev, ABS_Z, z); + input_sync(inputdev); + } + } +} + +static ssize_t cmpc_accel_sensitivity_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct acpi_device *acpi; + struct input_dev *inputdev; + struct cmpc_accel *accel; + + acpi = to_acpi_device(dev); + inputdev = dev_get_drvdata(&acpi->dev); + accel = dev_get_drvdata(&inputdev->dev); + + return sprintf(buf, "%d\n", accel->sensitivity); +} + +static ssize_t cmpc_accel_sensitivity_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct acpi_device *acpi; + struct input_dev *inputdev; + struct cmpc_accel *accel; + unsigned long sensitivity; + int r; + + acpi = to_acpi_device(dev); + inputdev = dev_get_drvdata(&acpi->dev); + accel = dev_get_drvdata(&inputdev->dev); + + r = strict_strtoul(buf, 0, &sensitivity); + if (r) + return r; + + accel->sensitivity = sensitivity; + cmpc_accel_set_sensitivity(acpi->handle, sensitivity); + + return strnlen(buf, count); +} + +struct device_attribute cmpc_accel_sensitivity_attr = { + .attr = { .name = "sensitivity", .mode = 0660 }, + .show = cmpc_accel_sensitivity_show, + .store = cmpc_accel_sensitivity_store +}; + +static int cmpc_accel_open(struct input_dev *input) +{ + struct acpi_device *acpi; + + acpi = 
to_acpi_device(input->dev.parent); + if (ACPI_SUCCESS(cmpc_start_accel(acpi->handle))) + return 0; + return -EIO; +} + +static void cmpc_accel_close(struct input_dev *input) +{ + struct acpi_device *acpi; + + acpi = to_acpi_device(input->dev.parent); + cmpc_stop_accel(acpi->handle); +} + +static void cmpc_accel_idev_init(struct input_dev *inputdev) +{ + set_bit(EV_ABS, inputdev->evbit); + input_set_abs_params(inputdev, ABS_X, 0, 255, 8, 0); + input_set_abs_params(inputdev, ABS_Y, 0, 255, 8, 0); + input_set_abs_params(inputdev, ABS_Z, 0, 255, 8, 0); + inputdev->open = cmpc_accel_open; + inputdev->close = cmpc_accel_close; +} + +static int cmpc_accel_add(struct acpi_device *acpi) +{ + int error; + struct input_dev *inputdev; + struct cmpc_accel *accel; + + accel = kmalloc(sizeof(*accel), GFP_KERNEL); + if (!accel) + return -ENOMEM; + + accel->sensitivity = CMPC_ACCEL_SENSITIVITY_DEFAULT; + cmpc_accel_set_sensitivity(acpi->handle, accel->sensitivity); + + error = device_create_file(&acpi->dev, &cmpc_accel_sensitivity_attr); + if (error) + goto failed_file; + + error = cmpc_add_acpi_notify_device(acpi, "cmpc_accel", + cmpc_accel_idev_init); + if (error) + goto failed_input; + + inputdev = dev_get_drvdata(&acpi->dev); + dev_set_drvdata(&inputdev->dev, accel); + + return 0; + +failed_input: + device_remove_file(&acpi->dev, &cmpc_accel_sensitivity_attr); +failed_file: + kfree(accel); + return error; +} + +static int cmpc_accel_remove(struct acpi_device *acpi, int type) +{ + struct input_dev *inputdev; + struct cmpc_accel *accel; + + inputdev = dev_get_drvdata(&acpi->dev); + accel = dev_get_drvdata(&inputdev->dev); + + device_remove_file(&acpi->dev, &cmpc_accel_sensitivity_attr); + return cmpc_remove_acpi_notify_device(acpi); +} + +static const struct acpi_device_id cmpc_accel_device_ids[] = { + {"ACCE0000", 0}, + {"", 0} +}; +MODULE_DEVICE_TABLE(acpi, cmpc_accel_device_ids); + +static struct acpi_driver cmpc_accel_acpi_driver = { + .owner = THIS_MODULE, + .name = "cmpc_accel", + .class = "cmpc_accel", + .ids = cmpc_accel_device_ids, + .ops = { + .add = cmpc_accel_add, + .remove = cmpc_accel_remove, + .notify = cmpc_accel_handler, + } +}; + + +/* + * Tablet mode code. 
+ */ +static acpi_status cmpc_get_tablet(acpi_handle handle, + unsigned long long *value) +{ + union acpi_object param; + struct acpi_object_list input; + unsigned long long output; + acpi_status status; + + param.type = ACPI_TYPE_INTEGER; + param.integer.value = 0x01; + input.count = 1; + input.pointer = &param; + status = acpi_evaluate_integer(handle, "TCMD", &input, &output); + if (ACPI_SUCCESS(status)) + *value = output; + return status; +} + +static void cmpc_tablet_handler(struct acpi_device *dev, u32 event) +{ + unsigned long long val = 0; + struct input_dev *inputdev = dev_get_drvdata(&dev->dev); + + if (event == 0x81) { + if (ACPI_SUCCESS(cmpc_get_tablet(dev->handle, &val))) + input_report_switch(inputdev, SW_TABLET_MODE, !val); + } +} + +static void cmpc_tablet_idev_init(struct input_dev *inputdev) +{ + unsigned long long val = 0; + struct acpi_device *acpi; + + set_bit(EV_SW, inputdev->evbit); + set_bit(SW_TABLET_MODE, inputdev->swbit); + + acpi = to_acpi_device(inputdev->dev.parent); + if (ACPI_SUCCESS(cmpc_get_tablet(acpi->handle, &val))) + input_report_switch(inputdev, SW_TABLET_MODE, !val); +} + +static int cmpc_tablet_add(struct acpi_device *acpi) +{ + return cmpc_add_acpi_notify_device(acpi, "cmpc_tablet", + cmpc_tablet_idev_init); +} + +static int cmpc_tablet_remove(struct acpi_device *acpi, int type) +{ + return cmpc_remove_acpi_notify_device(acpi); +} + +static int cmpc_tablet_resume(struct acpi_device *acpi) +{ + struct input_dev *inputdev = dev_get_drvdata(&acpi->dev); + unsigned long long val = 0; + if (ACPI_SUCCESS(cmpc_get_tablet(acpi->handle, &val))) + input_report_switch(inputdev, SW_TABLET_MODE, !val); + return 0; +} + +static const struct acpi_device_id cmpc_tablet_device_ids[] = { + {"TBLT0000", 0}, + {"", 0} +}; +MODULE_DEVICE_TABLE(acpi, cmpc_tablet_device_ids); + +static struct acpi_driver cmpc_tablet_acpi_driver = { + .owner = THIS_MODULE, + .name = "cmpc_tablet", + .class = "cmpc_tablet", + .ids = cmpc_tablet_device_ids, + .ops = { + .add = cmpc_tablet_add, + .remove = cmpc_tablet_remove, + .resume = cmpc_tablet_resume, + .notify = cmpc_tablet_handler, + } +}; + + +/* + * Backlight code.
+ */ + +static acpi_status cmpc_get_brightness(acpi_handle handle, + unsigned long long *value) +{ + union acpi_object param; + struct acpi_object_list input; + unsigned long long output; + acpi_status status; + + param.type = ACPI_TYPE_INTEGER; + param.integer.value = 0xC0; + input.count = 1; + input.pointer = &param; + status = acpi_evaluate_integer(handle, "GRDI", &input, &output); + if (ACPI_SUCCESS(status)) + *value = output; + return status; +} + +static acpi_status cmpc_set_brightness(acpi_handle handle, + unsigned long long value) +{ + union acpi_object param[2]; + struct acpi_object_list input; + acpi_status status; + unsigned long long output; + + param[0].type = ACPI_TYPE_INTEGER; + param[0].integer.value = 0xC0; + param[1].type = ACPI_TYPE_INTEGER; + param[1].integer.value = value; + input.count = 2; + input.pointer = param; + status = acpi_evaluate_integer(handle, "GWRI", &input, &output); + return status; +} + +static int cmpc_bl_get_brightness(struct backlight_device *bd) +{ + acpi_status status; + acpi_handle handle; + unsigned long long brightness; + + handle = bl_get_data(bd); + status = cmpc_get_brightness(handle, &brightness); + if (ACPI_SUCCESS(status)) + return brightness; + else + return -1; +} + +static int cmpc_bl_update_status(struct backlight_device *bd) +{ + acpi_status status; + acpi_handle handle; + + handle = bl_get_data(bd); + status = cmpc_set_brightness(handle, bd->props.brightness); + if (ACPI_SUCCESS(status)) + return 0; + else + return -1; +} + +static struct backlight_ops cmpc_bl_ops = { + .get_brightness = cmpc_bl_get_brightness, + .update_status = cmpc_bl_update_status +}; + +static int cmpc_bl_add(struct acpi_device *acpi) +{ + struct backlight_device *bd; + + bd = backlight_device_register("cmpc_bl", &acpi->dev, + acpi->handle, &cmpc_bl_ops); + bd->props.max_brightness = 7; + dev_set_drvdata(&acpi->dev, bd); + return 0; +} + +static int cmpc_bl_remove(struct acpi_device *acpi, int type) +{ + struct backlight_device *bd; + + bd = dev_get_drvdata(&acpi->dev); + backlight_device_unregister(bd); + return 0; +} + +static const struct acpi_device_id cmpc_device_ids[] = { + {"IPML200", 0}, + {"", 0} +}; +MODULE_DEVICE_TABLE(acpi, cmpc_device_ids); + +static struct acpi_driver cmpc_bl_acpi_driver = { + .owner = THIS_MODULE, + .name = "cmpc", + .class = "cmpc", + .ids = cmpc_device_ids, + .ops = { + .add = cmpc_bl_add, + .remove = cmpc_bl_remove + } +}; + + +/* + * Extra keys code.
+/*
+ * Extra keys code.
+ */
+static int cmpc_keys_codes[] = {
+        KEY_UNKNOWN,
+        KEY_WLAN,
+        KEY_SWITCHVIDEOMODE,
+        KEY_BRIGHTNESSDOWN,
+        KEY_BRIGHTNESSUP,
+        KEY_VENDOR,
+        KEY_MAX
+};
+
+static void cmpc_keys_handler(struct acpi_device *dev, u32 event)
+{
+        struct input_dev *inputdev;
+        int code = KEY_MAX;
+
+        if ((event & 0x0F) < ARRAY_SIZE(cmpc_keys_codes))
+                code = cmpc_keys_codes[event & 0x0F];
+        inputdev = dev_get_drvdata(&dev->dev);
+        input_report_key(inputdev, code, !(event & 0x10));
+}
+
+static void cmpc_keys_idev_init(struct input_dev *inputdev)
+{
+        int i;
+
+        set_bit(EV_KEY, inputdev->evbit);
+        for (i = 0; cmpc_keys_codes[i] != KEY_MAX; i++)
+                set_bit(cmpc_keys_codes[i], inputdev->keybit);
+}
+
+static int cmpc_keys_add(struct acpi_device *acpi)
+{
+        return cmpc_add_acpi_notify_device(acpi, "cmpc_keys",
+                                           cmpc_keys_idev_init);
+}
+
+static int cmpc_keys_remove(struct acpi_device *acpi, int type)
+{
+        return cmpc_remove_acpi_notify_device(acpi);
+}
+
+static const struct acpi_device_id cmpc_keys_device_ids[] = {
+        {"FnBT0000", 0},
+        {"", 0}
+};
+MODULE_DEVICE_TABLE(acpi, cmpc_keys_device_ids);
+
+static struct acpi_driver cmpc_keys_acpi_driver = {
+        .owner = THIS_MODULE,
+        .name = "cmpc_keys",
+        .class = "cmpc_keys",
+        .ids = cmpc_keys_device_ids,
+        .ops = {
+                .add = cmpc_keys_add,
+                .remove = cmpc_keys_remove,
+                .notify = cmpc_keys_handler,
+        }
+};
+
+
+/*
+ * General init/exit code.
+ */
+
+static int cmpc_init(void)
+{
+        int r;
+
+        r = acpi_bus_register_driver(&cmpc_keys_acpi_driver);
+        if (r)
+                goto failed_keys;
+
+        r = acpi_bus_register_driver(&cmpc_bl_acpi_driver);
+        if (r)
+                goto failed_bl;
+
+        r = acpi_bus_register_driver(&cmpc_tablet_acpi_driver);
+        if (r)
+                goto failed_tablet;
+
+        r = acpi_bus_register_driver(&cmpc_accel_acpi_driver);
+        if (r)
+                goto failed_accel;
+
+        return r;
+
+failed_accel:
+        acpi_bus_unregister_driver(&cmpc_tablet_acpi_driver);
+
+failed_tablet:
+        acpi_bus_unregister_driver(&cmpc_bl_acpi_driver);
+
+failed_bl:
+        acpi_bus_unregister_driver(&cmpc_keys_acpi_driver);
+
+failed_keys:
+        return r;
+}
+
+static void cmpc_exit(void)
+{
+        acpi_bus_unregister_driver(&cmpc_accel_acpi_driver);
+        acpi_bus_unregister_driver(&cmpc_tablet_acpi_driver);
+        acpi_bus_unregister_driver(&cmpc_bl_acpi_driver);
+        acpi_bus_unregister_driver(&cmpc_keys_acpi_driver);
+}
+
+module_init(cmpc_init);
+module_exit(cmpc_exit);
--
cgit v1.2.3
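The firmware event decoded by cmpc_keys_handler() in the patch above packs
a table index into the low nibble of the event byte and a release flag into
bit 4. A small standalone sketch of the same decoding, using a hypothetical
sample event byte:

#include <stdio.h>

/* Mirrors the driver's table and bit layout: the low nibble indexes the
 * key table, and bit 4 set means the key was released (the driver reports
 * !(event & 0x10) as the press state). */
static const char *cmpc_key_names[] = {
        "UNKNOWN", "WLAN", "SWITCHVIDEOMODE",
        "BRIGHTNESSDOWN", "BRIGHTNESSUP", "VENDOR",
};

int main(void)
{
        unsigned int event = 0x13;      /* hypothetical sample event byte */
        unsigned int idx = event & 0x0F;

        if (idx < sizeof(cmpc_key_names) / sizeof(cmpc_key_names[0]))
                printf("%s %s\n", cmpc_key_names[idx],
                       (event & 0x10) ? "released" : "pressed");
        return 0;
}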
From 958a29cb1c9161d61dcd4ced51f260578e19823c Mon Sep 17 00:00:00 2001
From: Stefan Richter
Date: Sat, 26 Dec 2009 01:36:12 +0100
Subject: firewire, ieee1394: update MAINTAINERS entries

Ben and Kristian have not been involved in maintenance of the IEEE 1394
drivers for quite some time; submitters are not required to Cc them on
patches.

The linux1394.org domain has been dead for a while and is no longer under
the control of a Linux developer. The current web site of the Linux 1394
project is http://ieee1394.wiki.kernel.org/.

The classic drivers/ieee1394/ stack is now obsolete from the development
point of view, though still a useful alternative in production use. But
nobody should attempt to submit style cleanup patches for it or to develop
new drivers on top of this stack, hence mark its MAINTAINERS entry as
Obsolete.

drivers/ieee1394/raw1394*, like the rest of the old stack, no longer
receives bigger code changes, hence shrink the MAINTAINERS database a bit
by dropping raw1394's special entry. If something important and urgent
comes up for raw1394, I will make sure that Dan is notified of it beyond
linux1394-devel.

Signed-off-by: Stefan Richter
Signed-off-by: Dan Dennedy
Cc: Ben Collins
Cc: Kristian Hoegsberg
---
 MAINTAINERS | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 66f5f7dab285..c22d597e6ace 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2169,10 +2169,9 @@ F: drivers/hwmon/f75375s.c
 F: include/linux/f75375s.h

 FIREWIRE SUBSYSTEM
-M: Kristian Hoegsberg
 M: Stefan Richter
 L: linux1394-devel@lists.sourceforge.net
-W: http://www.linux1394.org/
+W: http://ieee1394.wiki.kernel.org/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
 S: Maintained
 F: drivers/firewire/
@@ -2705,22 +2704,14 @@ S: Supported
 F: drivers/idle/i7300_idle.c

 IEEE 1394 SUBSYSTEM
-M: Ben Collins
 M: Stefan Richter
 L: linux1394-devel@lists.sourceforge.net
-W: http://www.linux1394.org/
+W: http://ieee1394.wiki.kernel.org/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
-S: Maintained
+S: Obsolete
 F: Documentation/debugging-via-ohci1394.txt
 F: drivers/ieee1394/

-IEEE 1394 RAW I/O DRIVER
-M: Dan Dennedy
-M: Stefan Richter
-L: linux1394-devel@lists.sourceforge.net
-S: Maintained
-F: drivers/ieee1394/raw1394*
-
 IEEE 802.15.4 SUBSYSTEM
 M: Dmitry Eremin-Solenikov
 M: Sergey Lapin
--
cgit v1.2.3

From 6aff43f817ddc54fcd6f0215bfba5d334b0bbbbd Mon Sep 17 00:00:00 2001
From: Ryusuke Konishi
Date: Sat, 2 Jan 2010 21:41:53 +0900
Subject: nilfs2: update mailing list address

This changes the list address for nilfs discussion from users at
nilfs.org to linux-nilfs at vger.kernel.org.

Signed-off-by: Ryusuke Konishi
---
 Documentation/filesystems/nilfs2.txt | 2 +-
 MAINTAINERS | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/Documentation/filesystems/nilfs2.txt b/Documentation/filesystems/nilfs2.txt
index 4949fcaa6b6a..839efd8a8a8c 100644
--- a/Documentation/filesystems/nilfs2.txt
+++ b/Documentation/filesystems/nilfs2.txt
@@ -28,7 +28,7 @@ described in the man pages included in the package.
Project web page: http://www.nilfs.org/en/ Download page: http://www.nilfs.org/en/download.html Git tree web page: http://www.nilfs.org/git/ -NILFS mailing lists: http://www.nilfs.org/mailman/listinfo/users +List info: http://vger.kernel.org/vger-lists.html#linux-nilfs Caveats ======= diff --git a/MAINTAINERS b/MAINTAINERS index 66f5f7dab285..0121c61d67d5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3882,7 +3882,7 @@ F: drivers/net/ni5010.* NILFS2 FILESYSTEM M: KONISHI Ryusuke -L: users@nilfs.org +L: linux-nilfs@vger.kernel.org W: http://www.nilfs.org/en/ S: Supported F: Documentation/filesystems/nilfs2.txt -- cgit v1.2.3 From 3043c10a7e6a133dd79636d6ec4f650a6b2848ae Mon Sep 17 00:00:00 2001 From: Tomi Valkeinen Date: Thu, 7 Jan 2010 13:06:51 +0200 Subject: MAINTAINERS: change omapfb maintainer Signed-off-by: Tomi Valkeinen Acked-by: Imre Deak --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 66f5f7dab285..8d2fceeaa2b6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3949,7 +3949,7 @@ S: Maintained F: sound/soc/omap/ OMAP FRAMEBUFFER SUPPORT -M: Imre Deak +M: Tomi Valkeinen L: linux-fbdev@vger.kernel.org L: linux-omap@vger.kernel.org S: Maintained -- cgit v1.2.3 From 676eec0daf87614eadbcd82d3876b09b65e1ddf9 Mon Sep 17 00:00:00 2001 From: Tomi Valkeinen Date: Thu, 7 Jan 2010 13:18:04 +0200 Subject: MAINTAINERS: Combine DSS2 and OMAPFB2 into one entry There isn't really any reason to divide those. Signed-off-by: Tomi Valkeinen --- MAINTAINERS | 15 +++------------ 1 file changed, 3 insertions(+), 12 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index 8d2fceeaa2b6..b888cd6696f8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3955,23 +3955,14 @@ L: linux-omap@vger.kernel.org S: Maintained F: drivers/video/omap/ -OMAP DISPLAY SUBSYSTEM SUPPORT (DSS2) +OMAP DISPLAY SUBSYSTEM and FRAMEBUFFER SUPPORT (DSS2) M: Tomi Valkeinen L: linux-omap@vger.kernel.org -L: linux-fbdev@vger.kernel.org (moderated for non-subscribers) +L: linux-fbdev@vger.kernel.org S: Maintained -F: drivers/video/omap2/dss/ -F: drivers/video/omap2/vrfb.c -F: drivers/video/omap2/vram.c +F: drivers/video/omap2/ F: Documentation/arm/OMAP/DSS -OMAP FRAMEBUFFER SUPPORT (FOR DSS2) -M: Tomi Valkeinen -L: linux-omap@vger.kernel.org -L: linux-fbdev@vger.kernel.org (moderated for non-subscribers) -S: Maintained -F: drivers/video/omap2/omapfb/ - OMAP MMC SUPPORT M: Jarkko Lavinen L: linux-omap@vger.kernel.org -- cgit v1.2.3 From abd4d609057dd4faa22837376fdef2433e4c33b1 Mon Sep 17 00:00:00 2001 From: Matt Turner Date: Thu, 14 Jan 2010 13:15:20 -0500 Subject: alpha: add myself as a maintainer, and drop mention of 2.4 CC: Richard Henderson CC: Ivan Kokshaysky CC: linux-alpha@vger.kernel.org Signed-off-by: Matt Turner --- MAINTAINERS | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'MAINTAINERS') diff --git a/MAINTAINERS b/MAINTAINERS index c8f47bf154f4..3f591629e953 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -410,9 +410,8 @@ F: drivers/i2c/busses/i2c-ali1563.c ALPHA PORT M: Richard Henderson -S: Odd Fixes for 2.4; Maintained for 2.6. M: Ivan Kokshaysky -S: Maintained for 2.4; PCI support for 2.6. 
+M: Matt Turner
 L: linux-alpha@vger.kernel.org
 F: arch/alpha/
--
cgit v1.2.3

From 9fe3b691282691a46abbb25a1e0a8e5f96486d18 Mon Sep 17 00:00:00 2001
From: Dan Williams
Date: Fri, 15 Jan 2010 01:47:37 -0800
Subject: MAINTAINERS: transfer maintainership of I/OAT

Dan Williams takes over I/OAT from Maciej Sosnowski.

Signed-off-by: Dan Williams
Signed-off-by: Maciej Sosnowski
Signed-off-by: David S. Miller
---
 MAINTAINERS | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 256139e0c231..ef40d2c3e55b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -975,7 +975,6 @@ F: drivers/platform/x86/asus-laptop.c

 ASYNCHRONOUS TRANSFERS/TRANSFORMS (IOAT) API
 M: Dan Williams
-M: Maciej Sosnowski
 W: http://sourceforge.net/projects/xscaleiop
 S: Supported
 F: Documentation/crypto/async-tx-api.txt
@@ -1804,7 +1803,6 @@ S: Supported
 F: fs/dlm/

 DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
-M: Maciej Sosnowski
 M: Dan Williams
 S: Supported
 F: drivers/dma/
@@ -2767,7 +2765,7 @@ F: arch/x86/kernel/microcode_core.c
 F: arch/x86/kernel/microcode_intel.c

 INTEL I/OAT DMA DRIVER
-M: Maciej Sosnowski
+M: Dan Williams
 S: Supported
 F: drivers/dma/ioat*
--
cgit v1.2.3

From 8e21cbbcacd1949c0e6929a739cf9d689cc38857 Mon Sep 17 00:00:00 2001
From: Hans Verkuil
Date: Wed, 13 Jan 2010 16:56:48 -0200
Subject: MAINTAINERS: Andy Walls is the new ivtv maintainer

Replace Hans Verkuil with Andy Walls as the ivtv maintainer. After
4 1/2 years, Hans decided to hand over the ivtv driver to Andy. Andy
was already doing more work on ivtv than he was, so this just makes
official what was happening in practice.

Signed-off-by: Hans Verkuil
Signed-off-by: Andy Walls
Signed-off-by: Mauro Carvalho Chehab
---
 MAINTAINERS | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 3f591629e953..47c1474db052 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1637,7 +1637,6 @@ S: Maintained
 F: sound/pci/cs5535audio/

 CX18 VIDEO4LINUX DRIVER
-M: Hans Verkuil
 M: Andy Walls
 L: ivtv-devel@ivtvdriver.org
 L: linux-media@vger.kernel.org
@@ -3011,7 +3010,7 @@ S: Maintained
 F: drivers/isdn/hardware/eicon/

 IVTV VIDEO4LINUX DRIVER
-M: Hans Verkuil
+M: Andy Walls
 L: ivtv-devel@ivtvdriver.org
 L: linux-media@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
--
cgit v1.2.3

From c42405096bd804c82c7ac9addcbadea7390158e4 Mon Sep 17 00:00:00 2001
From: Jiri Slaby
Date: Wed, 13 Jan 2010 19:39:16 -0200
Subject: MAINTAINERS: ivtv-devel is moderated

Mark ivtv-devel@ivtvdriver.org as 'moderated for non-subscribers'.
Signed-off-by: Jiri Slaby
Acked-by: Andy Walls
Signed-off-by: Mauro Carvalho Chehab
---
 MAINTAINERS | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 47c1474db052..1858646b52e3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1638,7 +1638,7 @@ F: sound/pci/cs5535audio/

 CX18 VIDEO4LINUX DRIVER
 M: Andy Walls
-L: ivtv-devel@ivtvdriver.org
+L: ivtv-devel@ivtvdriver.org (moderated for non-subscribers)
 L: linux-media@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
 W: http://linuxtv.org
@@ -3011,7 +3011,7 @@ F: drivers/isdn/hardware/eicon/

 IVTV VIDEO4LINUX DRIVER
 M: Andy Walls
-L: ivtv-devel@ivtvdriver.org
+L: ivtv-devel@ivtvdriver.org (moderated for non-subscribers)
 L: linux-media@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
 W: http://www.ivtvdriver.org
--
cgit v1.2.3

From de4fc07aff770743b2c3e3ee30a23a691450a4f6 Mon Sep 17 00:00:00 2001
From: Jeff Kirsher
Date: Sat, 23 Jan 2010 01:20:22 -0800
Subject: MAINTAINERS: Add Intel igbvf maintainer

Add igbvf to the list of supported Intel drivers and Alex to the list
of maintainers.

Signed-off-by: Jeff Kirsher
Signed-off-by: David S. Miller
---
 MAINTAINERS | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 4067c2e50d48..03f38c18f323 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2822,10 +2822,11 @@ L: netdev@vger.kernel.org
 S: Maintained
 F: drivers/net/ixp2000/

-INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/ixgb/ixgbe)
+INTEL ETHERNET DRIVERS (e100/e1000/e1000e/igb/igbvf/ixgb/ixgbe)
 M: Jeff Kirsher
 M: Jesse Brandeburg
 M: Bruce Allan
+M: Alex Duyck
 M: PJ Waskiewicz
 M: John Ronciak
 L: e1000-devel@lists.sourceforge.net
@@ -2835,6 +2836,7 @@ F: drivers/net/e100.c
 F: drivers/net/e1000/
 F: drivers/net/e1000e/
 F: drivers/net/igb/
+F: drivers/net/igbvf/
 F: drivers/net/ixgb/
 F: drivers/net/ixgbe/
--
cgit v1.2.3

From 3af26f58d1920d904da87c3897d23070fe2266b4 Mon Sep 17 00:00:00 2001
From: Joe Perches
Date: Mon, 8 Feb 2010 22:42:40 -0800
Subject: MAINTAINERS: networking drivers - Add git net-next tree

During the rc period, patches that are not bugfixes should be
submitted against the net-next tree.

Signed-off-by: Joe Perches
Signed-off-by: David S. Miller
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 03f38c18f323..602022d2c7a5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3836,6 +3836,7 @@ NETWORKING DRIVERS
 L: netdev@vger.kernel.org
 W: http://www.linuxfoundation.org/en/Net
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6.git
 S: Odd Fixes
 F: drivers/net/
 F: include/linux/if_*
--
cgit v1.2.3

From 763458e0dbfb4d562d62823149cf62e8b8eca82b Mon Sep 17 00:00:00 2001
From: Rishikesh
Date: Wed, 10 Feb 2010 13:56:40 -0800
Subject: MAINTAINERS: changed LTP maintainership responsibilities

Change the LTP maintainer responsibilities as of 2010.
Ref: http://marc.info/?l=ltp-list&m=126502242912536&w=2
Signed-off-by: Rishikesh K Rajak
Cc: Subrata Modak
Cc: Mike Frysinger
Cc: Garrett Cooper
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 MAINTAINERS | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 602022d2c7a5..412eff60c33d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3411,8 +3411,10 @@ S: Maintained
 F: drivers/scsi/sym53c8xx_2/

 LTP (Linux Test Project)
-M: Subrata Modak
-M: Mike Frysinger
+M: Rishikesh K Rajak
+M: Garrett Cooper
+M: Mike Frysinger
+M: Subrata Modak
 L: ltp-list@lists.sourceforge.net (subscribers-only)
 W: http://ltp.sourceforge.net/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/galak/ltp.git
--
cgit v1.2.3

From f8b55f251012e104093e105483c45c5d85ad3040 Mon Sep 17 00:00:00 2001
From: Christine Caulfield
Date: Thu, 18 Feb 2010 11:33:13 +0000
Subject: Orphan DECnet

Due to lack of time, space, motivation, hardware and probably expertise,
I have reluctantly decided to orphan the DECnet code in the kernel.
Judging by the deafening silence on the linux-decnet mailing list, I
suspect it's either not being used anyway, or the few people that are
using it are happy with their older kernels.

Signed-off-by: Christine Caulfield
Signed-off-by: Linus Torvalds
---
 MAINTAINERS | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

(limited to 'MAINTAINERS')

diff --git a/MAINTAINERS b/MAINTAINERS
index 412eff60c33d..8ed3d0a8b3f4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1733,10 +1733,9 @@ F: include/linux/tfrc.h
 F: net/dccp/

 DECnet NETWORK LAYER
-M: Christine Caulfield
 W: http://linux-decnet.sourceforge.net
 L: linux-decnet-user@lists.sourceforge.net
-S: Maintained
+S: Orphan
 F: Documentation/networking/decnet.txt
 F: net/decnet/
--
cgit v1.2.3