Bitstream loading with ZynqMP/UltraScale+ fpga_manager

David Banks
Fri, Feb 1, 2019 8:12 PM

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading for
ZynqMP/UltraScale+ using "fpga_manager". In general, we followed the
instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in the release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for ZynqMP,
you can diff them between release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra;
$ cd opencpi;
$ git fetch origin release_1.4:release_1.4;
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mk;

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the *.bin
bitstream file and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the state
of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is confirmed to
be "operating" in isProgrammed().
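
As a rough illustration (not OpenCPI code), the same sysfs sequence can be
sketched in shell. The FPGA_MGR and FW_DIR variables are parameterized here
only so the steps can be walked through without real hardware; on a target
they would default to the paths above:

```shell
#!/bin/sh
# Hedged sketch of the sysfs sequence load_fpga_manager() performs.
# FPGA_MGR/FW_DIR default to the real paths but can be overridden so the
# sequence can be exercised without an actual fpga_manager device.
load_bitstream() {
  bitstream="$1"
  fpga_mgr="${FPGA_MGR:-/sys/class/fpga_manager/fpga0}"
  fw_dir="${FW_DIR:-/lib/firmware}"
  mkdir -p "$fw_dir"
  cp "$bitstream" "$fw_dir/opencpi_temp.bin"
  echo 0 > "$fpga_mgr/flags"             # 0 selects full (not partial) reconfiguration
  # Writing the firmware *name* (relative to /lib/firmware) triggers the load:
  echo opencpi_temp.bin > "$fpga_mgr/firmware"
  rm "$fw_dir/opencpi_temp.bin"          # temporary copy no longer needed
  cat "$fpga_mgr/state"                  # "operating" indicates success
}
```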

fpga_manager requires that bitstreams be in *.bin format in order to write
them to the PL. So, some changes were made to vivado.mk to add a make rule
for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to
convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

    load_fpga_manager(const char *fileName, std::string &error) {
      if (!file_exists("/lib/firmware")) {
        mkdir("/lib/firmware", 0666);
      }
      int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
      gzFile bin_file;
      int bfd;
      uint8_t buf[8*1024];

      if ((bfd = ::open(fileName, O_RDONLY)) < 0)
        OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                   fileName, strerror(errno), errno);
      if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
        OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                   fileName, strerror(errno), errno);
      do {
        int n = ::gzread(bin_file, buf, sizeof(buf));
        if (n < 0)
          return true;
        if (n & 3)
          return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                             fileName);
        if (n == 0)
          break;
        if (write(out_file, buf, n) <= 0)
          return OU::eformat(error,
                             "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                             strerror(errno), errno, n);
      } while (1);
      close(out_file);
      std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
      std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
      fpga_flags << 0 << std::endl;
      fpga_firmware << "opencpi_temp.bin" << std::endl;

      remove("/lib/firmware/opencpi_temp.bin");
      return isProgrammed(error) ? init(error) : true;
    }

The isProgrammed() function just checks whether or not the fpga_manager
state is 'operating', although we are not entirely confident this is a
robust check:

    isProgrammed(...) {
      ...
      const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
      ...
      return val == "operating";
    }
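
A minimal shell equivalent of that check (the state path is parameterized
purely for illustration; the real path is /sys/class/fpga_manager/fpga0/state):

```shell
#!/bin/sh
# Sketch of the isProgrammed() check: is the fpga_manager state "operating"?
is_programmed() {
  state_file="${1:-/sys/class/fpga_manager/fpga0/state}"
  [ "$(cat "$state_file" 2>/dev/null)" = "operating" ]
}
```

Note that, as the email says, "operating" only indicates that *some*
bitstream programmed successfully, not that it is the one you intended.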

vivado.mk's *.bin make rule uses bootgen to convert .bit to .bin. This is
necessary in Vivado 2018.2, but in later versions you may be able to
generate the correct *.bin file directly via an option to write_bitstream:
$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
	$(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado \
	  bitstream file $$@ with BIN extension using "bootgen".
	$(AT)echo all: > $$(call BifName,$1,$3,$6); \
	  echo "{" >> $$(call BifName,$1,$3,$6); \
	  echo "      [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
	  echo "}" >> $$(call BifName,$1,$3,$6)
	$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC

James Kulp
Fri, Feb 1, 2019 8:27 PM

David,

This is great work. Thanks.

Since I believe the fpga_manager framework is really an attribute of later
Linux kernels, I don't think it is really a ZynqMP thing, but just a
later Linux kernel thing.
I am currently bringing up the quite ancient Zedboard using the latest
Vivado and Xilinx Linux and will try to use this same code.
There are two things I am looking into, now that you have done the hard
work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you to
    inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to avoid
    the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference between
old and new bitstream loading (and building) can be minimized, the loading
process made faster, and no extra file system space required.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:


Chris Hinkey
Fri, Feb 1, 2019 8:37 PM

In response to point 1 here: we attempted using the code that converted
from .bit to .bin on the fly. This did not work on these newer platforms
using fpga_manager, so we decided to use the vendor-provided tools rather
than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all
Zynq and ZynqMP platforms need the .bin file format, wouldn't it make more
sense to just use .bin files rather than converting them on the fly every
time?


From: discuss discuss-bounces@lists.opencpi.org on behalf of James Kulp jek@parera.com
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager


JK
James Kulp
Fri, Feb 1, 2019 9:17 PM

On 2/1/19 3:37 PM, Chris Hinkey wrote:

In response to Point 1 here: we attempted using the existing code that converted from .bit to .bin on the fly. This did not work on these newer platforms using fpga_manager, so we decided to use the vendor-provided tools rather than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all Zynq and ZynqMP platforms need the .bin file format, wouldn't it make more sense to just use .bin files rather than converting them on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple file
formats all the time.  .bit files are necessary for JTAG loading, and
.bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be
mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag
loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both
formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single
format of Xilinx bitstream files, including between ISE and Vivado and
all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and
use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion", after a header, is just a 32-bit
endian swap, it doesn't matter much either way.
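The word swap Jim describes can be sketched in a few lines. This is an illustrative stand-alone helper, not the actual OpenCPI code (swapWords is a hypothetical name), showing the 32-bit byte swap that would be applied to the payload after the header:

```cpp
#include <cstdint>
#include <cstddef>

// Reverse the byte order of each 32-bit word in place.
// len must be a multiple of 4 (the loader itself checks this with "n & 3").
static void swapWords(uint8_t *buf, size_t len) {
  for (size_t i = 0; i + 3 < len; i += 4) {
    uint8_t t0 = buf[i], t1 = buf[i + 1];
    buf[i]     = buf[i + 3];
    buf[i + 1] = buf[i + 2];
    buf[i + 2] = t1;
    buf[i + 3] = t0;
  }
}
```

Applied after skipping the .bit header, a loop like this is essentially the whole payload transform, which is consistent with the on-the-fly conversion having been only about 30 SLOC.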

If it ends up being a truly nasty reverse engineering exercise now, I
would reconsider.


From: discuss <discuss-bounces@lists.opencpi.org> on behalf of James Kulp <jek@parera.com>
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of later
linux kernels, I don't think it is really a ZynqMP thing, but just a
later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the latest
Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done the hard
work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you to
    inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to avoid
    the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference between
old and new bitstream loading (and building) can be minimized and
the loading process faster and requiring no extra file system space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading for
ZynqMP/UltraScale+ using "fpga_manager". In general, we followed the
instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for ZynqMP,
you can diff them between release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra;
$ cd opencpi;
$ git fetch origin release_1.4:release_1.4;
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mk;

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the *.bin
bitstream file and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the state
of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is confirmed to
be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in the *.bin format in order to
write them to the PL. So, some changes were made to vivado.mk to add a
make rule for the *.bin file. This make rule (BinName) uses Vivado's
"bootgen" to convert the bitstream from *.bit to *.bin.
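Concretely, the BIF file that the rule generates, and the resulting bootgen invocation, boil down to something like the following (my_assembly is a placeholder name, and zynqmp is the BootgenArch value one would expect for these parts):

```
# my_assembly.bif -- as written by the BinName rule
all:
{
  [destination_device = pl] my_assembly.bit
}

# then: bootgen -image my_assembly.bif -arch zynqmp -o my_assembly.bin -w
```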

Most of the relevant code is pasted or summarized below:

      load_fpga_manager(const char *fileName, std::string &error) {
        if (!file_exists("/lib/firmware"))
          mkdir("/lib/firmware", 0666);
        int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
        gzFile bin_file;
        int bfd, zerror;
        uint8_t buf[8*1024];

        if ((bfd = ::open(fileName, O_RDONLY)) < 0)
          OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                     fileName, strerror(errno), errno);
        if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
          OU::format(error, "Can't open compressed bin file '%s' for reading: %s(%u)",
                     fileName, strerror(errno), errno);
        do {
          uint8_t *bit_buf = buf;
          int n = ::gzread(bin_file, bit_buf, sizeof(buf));
          if (n < 0)
            return true;
          if (n & 3)
            return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                               fileName);
          if (n == 0)
            break;
          if (write(out_file, buf, n) <= 0)
            return OU::eformat(error,
                               "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                               strerror(errno), errno, n);
        } while (1);
        close(out_file);
        std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
        std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
        fpga_flags << 0 << std::endl;
        fpga_firmware << "opencpi_temp.bin" << std::endl;

        remove("/lib/firmware/opencpi_temp.bin");
        return isProgrammed(error) ? init(error) : true;
      }

The isProgrammed() function just checks whether or not the fpga_manager
state is 'operating', although we are not entirely confident this is a
robust check:

      isProgrammed(...) {
        ...
        const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
        ...
        return val == "operating";
      }

vivado.mk's *.bin make rule uses bootgen to convert .bit to .bin. This is
necessary in Vivado 2018.2, but in later versions you may be able to
directly generate the correct *.bin file via an option to write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
	$(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
	$(AT)echo all: > $$(call BifName,$1,$3,$6); \
	  echo "{" >> $$(call BifName,$1,$3,$6); \
	  echo "  [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
	  echo "}" >> $$(call BifName,$1,$3,$6);
	$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC

MR
Munro, Robert M.
Fri, Aug 2, 2019 8:15 PM

Are there any required flag or environment variable settings that must be configured before building the framework to utilize this functionality?  I have a platform built that produces the following output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading'.  This leads me to believe that it is still running the xdevcfg code present in HdlBusDriver.cxx.

Use of the release_1.4_zynq_ultra branch, and the presence of the /sys/class/fpga_manager loading code in HdlBusDriver.cxx, has been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager


JK
James Kulp
Fri, Aug 2, 2019 8:27 PM

That code is not integrated into the main line of OpenCPI yet, but in
that code there is:
          if (file_exists("/dev/xdevcfg")) {
            ret_val = load_xdevconfig(fileName, error);
          } else if (file_exists("/sys/class/fpga_manager/fpga0/")) {
            ret_val = load_fpga_manager(fileName, error);
          }
So it looks like the presence of /dev/xdevcfg is what causes it to look
for /sys/class/xdevcfg/xdevcfg/device/prog_done.
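For illustration, the probe order above can be paraphrased like this. This is only a sketch: the real code is C++ in HdlBusDriver.cxx, and the `root` parameter is my addition so the order can be exercised off-target.

```python
import os

def pick_loader(root="/"):
    """Mirror the driver's probe order: the xdevcfg device (older Zynq
    kernels) wins over fpga_manager (newer kernels) when both exist."""
    if os.path.exists(os.path.join(root, "dev/xdevcfg")):
        return "load_xdevconfig"
    if os.path.isdir(os.path.join(root, "sys/class/fpga_manager/fpga0")):
        return "load_fpga_manager"
    return None  # no PL programming interface found
```

On a kernel exposing /dev/xdevcfg, the xdevcfg path is chosen first, which matches the prog_done lookup described above.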

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flags or environment variable settings that must be set before building the framework to utilize this functionality? I have a platform built that produces this output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading'. This leads me to believe that it is still running the xdevcfg code present in HdlBusDriver.cxx.

Use of the release_1.4_zynq_ultra branch and the presence of the /sys/class/fpga_manager loading code in HdlBusDriver.cxx have been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

In response to Point 1 here: we attempted using the code that converts from bit to bin on the fly. This did not work on these newer platforms using fpga_manager, so we decided to use the vendor-provided tools rather than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all zynq and zynqMP platforms need a .bin file format, wouldn't it make more sense to just use .bin files rather than converting them on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple file formats all the time.  .bit files are necessary for JTAG loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single format of Xilinx bitstream files, including between ISE and Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion", after a header, is just a 32-bit endian swap, it doesn't matter much either way.
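As an aside, the 32-bit swap Jim mentions can be illustrated like this. This is a sketch of the general idea only, not OpenCPI's actual conversion code, which also has to deal with the .bit header:

```python
import array

def swap32(data: bytes) -> bytes:
    """Byte-reverse each 32-bit word, the core of the bit<->bin
    payload conversion (header handling omitted)."""
    if len(data) % 4:
        raise ValueError("payload must be a multiple of 4 bytes")
    words = array.array("I", data)   # view the payload as 32-bit words
    words.byteswap()                 # reverse byte order within each word
    return words.tobytes()
```

Note the operation is its own inverse, which is why converting in either direction is equally cheap.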

If it ends up being a truly nasty reverse engineering exercise now, I would reconsider.


From: discuss discuss-bounces@lists.opencpi.org on behalf of James
Kulp jek@parera.com
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of later
linux kernels, I don't think it is really a ZynqMP thing, but just a
later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the latest
Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to avoid
    the whole /lib/firmware thing.

So if those two things work (no guarantees), the difference between
old and new bitstream loading (and building) can be minimized, and the
loading process made faster while requiring no extra file system space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for
ZynqMP, you can diff them between release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
$ cd opencpi
$ git fetch origin release_1.4:release_1.4
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().
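That sequence can be sketched outside the driver as follows; here `firmware_dir` and `fpga0_dir` are hypothetical stand-ins for /lib/firmware and /sys/class/fpga_manager/fpga0, so the ordering can be shown without real hardware:

```python
import os
import shutil

def program_pl(bitstream_bin, firmware_dir, fpga0_dir):
    """Sketch of the fpga_manager sequence described above.
    firmware_dir stands in for /lib/firmware and fpga0_dir for
    /sys/class/fpga_manager/fpga0 (both hypothetical stand-ins)."""
    os.makedirs(firmware_dir, exist_ok=True)
    temp = os.path.join(firmware_dir, "opencpi_temp.bin")
    shutil.copyfile(bitstream_bin, temp)  # stage the *.bin where the kernel looks
    with open(os.path.join(fpga0_dir, "flags"), "w") as f:
        f.write("0\n")                    # 0 = full (non-partial) reconfiguration
    with open(os.path.join(fpga0_dir, "firmware"), "w") as f:
        f.write("opencpi_temp.bin\n")     # triggers the kernel to load it
    os.remove(temp)                       # clean up the staged copy
```

The key point is the ordering: the bitstream must be staged before the write to the firmware attribute, since that write is what triggers loading.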

fpga_manager requires that bitstreams be in *.bin format in order to write
them to the PL. So, some changes were made to vivado.mk to add a make
rule for the *.bin file. This make rule (BinName) uses Vivado's
"bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

        *load_fpga_manager*(const char *fileName, std::string &error) {
          if (!file_exists("/lib/firmware")) {
            mkdir("/lib/firmware", 0666);
          }
          int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
          gzFile bin_file;
          int bfd, zerror;
          uint8_t buf[8*1024];

          if ((bfd = ::open(fileName, O_RDONLY)) < 0)
            OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                       fileName, strerror(errno), errno);
          if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
            OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                       fileName, strerror(errno), errno);
          do {
            uint8_t *bit_buf = buf;
            int n = ::gzread(bin_file, bit_buf, sizeof(buf));
            if (n < 0)
              return true;
            if (n & 3)
              return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                 fileName);
            if (n == 0)
              break;
            if (write(out_file, buf, n) <= 0)
              return OU::eformat(error,
                                 "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                 strerror(errno), errno, n);
          } while (1);
          close(out_file);
          std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
          std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
          fpga_flags << 0 << std::endl;
          fpga_firmware << "opencpi_temp.bin" << std::endl;

          remove("/lib/firmware/opencpi_temp.bin");
          return isProgrammed(error) ? init(error) : true;
        }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

        *isProgrammed*(...) {
          ...
          const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
          ...
          return val == "operating";
        }
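A stand-alone sketch of that check (the `state_path` parameter is my addition, purely for illustration):

```python
def is_programmed(state_path="/sys/class/fpga_manager/fpga0/state"):
    """True when fpga_manager reports the PL as configured.
    As noted above, 'operating' may not be a fully robust indicator."""
    try:
        with open(state_path) as f:
            return f.read().strip() == "operating"
    except OSError:
        return False  # sysfs attribute missing or unreadable
```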

vivado.mk's *.bin make rule uses bootgen to convert .bit to .bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to generate the correct *.bin file directly via an option to write_bitstream:
$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
        $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
             echo "{" >> $$(call BifName,$1,$3,$6); \
             echo "      [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
             echo "}" >> $$(call BifName,$1,$3,$6)
        $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org



MR
Munro, Robert M.
Mon, Aug 5, 2019 2:48 PM

Jim,

The given block of code is not the root cause of the issue because the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as when running hello.xml.  I've attached a copy of the output from the command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output, I believe the system is calling OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128, which calls OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line 484, which in turn calls Driver::open in the same file at line 499, which then outputs the 'When searching for PL device ...' error at line 509. This then returns to the HdlDriver.cxx search() function, which outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device, and I am not familiar enough with this codebase to adjust preprocessor definitions with confidence that no other code section will be affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp jek@parera.com
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but in that code there is:
          if (file_exists("/dev/xdevcfg")){
            ret_val= load_xdevconfig(fileName, error);
          }
          else if (file_exists("/sys/class/fpga_manager/fpga0/")){
            ret_val= load_fpga_manager(fileName, error);
          }
So it looks like the presence of /dev/xdevcfg is what causes it to look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that must be done before building the framework to utilize this functionality?  I have a platform built that is producing an output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading' .  This leads me to believe that it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James
Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that on the fly was attempting to convert from bit to bin.  This did not work on these newer platforms using fpga_manager so we decided to use the vendor provided tools rather then to reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality and given that all zynq and zynqMP platforms need a .bin file format wouldn't it make more sense to just use .bin files rather then converting them on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple file formats all the time.  .bit files are necessary for JTAG loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single format of Xilinx bitstream files, including between ISE and Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion:" after a header, is just a 32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now, I would reconsider.


From: discuss discuss-bounces@lists.opencpi.org on behalf of James
Kulp jek@parera.com
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing, but
just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be minimized
and the loading process faster and requiring no extra file system space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mk
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you can
diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mk;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to write
them to the PL. So, some changes were made to vivado.mk to add a
make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

        *load_fpga_manager*(const char *fileName, std::string &error) {
          if (!file_exists("/lib/firmware")) {
            mkdir("/lib/firmware", 0666);
          }
          int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
          gzFile bin_file;
          int bfd, zerror;
          uint8_t buf[8*1024];

          if ((bfd = ::open(fileName, O_RDONLY)) < 0)
            OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                       fileName, strerror(errno), errno);
          if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
            OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                       fileName, strerror(errno), errno);
          do {
            uint8_t *bit_buf = buf;
            int n = ::gzread(bin_file, bit_buf, sizeof(buf));
            if (n < 0)
              return true;
            if (n & 3)
              return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                 fileName);
            if (n == 0)
              break;
            if (write(out_file, buf, n) <= 0)
              return OU::eformat(error,
                                 "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                 strerror(errno), errno, n);
          } while (1);
          close(out_file);
          std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
          std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
          fpga_flags << 0 << std::endl;
          fpga_firmware << "opencpi_temp.bin" << std::endl;

          remove("/lib/firmware/opencpi_temp.bin");
          return isProgrammed(error) ? init(error) : true;
        }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

        *isProgrammed*(...) {
          ...
          const char *e = OU::file2String(val,
                                          "/sys/class/fpga_manager/fpga0/state", '|');
          ...
          return val == "operating";
        }
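Since the isProgrammed() body above is elided, here is a self-contained sketch of the same state check. The helper name and the path parameter are illustrative only; the real code reads /sys/class/fpga_manager/fpga0/state directly:

```cpp
#include <fstream>
#include <string>

// Sketch of the isProgrammed() state check: read the fpga_manager
// "state" sysfs attribute and compare it to "operating".  The path is
// a parameter here only so the logic can be exercised off-target.
static bool stateIsOperating(const std::string &statePath) {
  std::ifstream f(statePath);
  if (!f)
    return false;            // no fpga_manager: treat as not programmed
  std::string val;
  std::getline(f, val);      // e.g. "operating", "unknown", "power off"
  return val == "operating";
}
```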

vivado.mk's *.bin make rule uses bootgen to convert .bit to .bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to write_bitstream:
$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
        $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
        echo "{" >> $$(call BifName,$1,$3,$6); \
        echo "      [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
        echo "}" >> $$(call BifName,$1,$3,$6);
        $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org



MR
Munro, Robert M.
Thu, Aug 8, 2019 5:36 PM

Jim or others,

Is there any further input or feedback on the source or resolution of this issue?

As it stands, I do not believe the OCPI runtime software will be able to successfully load HDL assemblies on the N310 platform. My familiarity with this codebase is limited, and we would appreciate any guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp <jek@parera.com>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as when running hello.xml.  I've attached a copy of the output from the command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output, I believe the system is calling OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128, which calls OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line 484, which in turn calls Driver::open in the same file at line 499, which then outputs the 'When searching for PL device ...' error at line 509.  This then returns to the HdlDriver.cxx search() function and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this codebase to adjust precompiler definitions with confidence that some other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp <jek@parera.com>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but in that code there is:
          if (file_exists("/dev/xdevcfg")) {
            ret_val = load_xdevconfig(fileName, error);
          } else if (file_exists("/sys/class/fpga_manager/fpga0/")) {
            ret_val = load_fpga_manager(fileName, error);
          }
So it looks like the presence of /dev/xdevcfg is what causes it to look for /sys/class/xdevcfg/xdevcfg/device/prog_done
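That selection logic is easy to mimic in a standalone probe. The helper below is only a sketch of the quoted code; the path parameters are an assumption added so the choice can be exercised off-target (the real driver hard-codes /dev/xdevcfg and /sys/class/fpga_manager/fpga0/):

```cpp
#include <string>
#include <sys/stat.h>

// Sketch of the loader-selection logic quoted above: prefer the older
// xdevcfg device node if present, otherwise the newer fpga_manager
// sysfs directory, otherwise report that neither interface exists.
static std::string chooseLoader(const std::string &xdevcfgNode,
                                const std::string &fpgaMgrDir) {
  struct stat st;
  if (stat(xdevcfgNode.c_str(), &st) == 0)
    return "xdevcfg";
  if (stat(fpgaMgrDir.c_str(), &st) == 0)
    return "fpga_manager";
  return "none";
}
```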

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that must be done before building the framework to utilize this functionality?  I have a platform built that is producing an output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading' .  This leads me to believe that it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and the presence of the /sys/class/fpga_manager loading code in HdlBusDriver.cxx has been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James
Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

In response to point 1 here: we attempted using the code that converted from bit to bin on the fly. This did not work on these newer platforms using fpga_manager, so we decided to use the vendor-provided tools rather than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all Zynq and ZynqMP platforms need the .bin file format, wouldn't it make more sense to just use .bin files rather than converting them on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple file formats all the time.  .bit files are necessary for JTAG loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single format of Xilinx bitstream files, including between ISE and Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion", after a header, is just a 32-bit endian swap, it doesn't matter much either way.
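As an illustration of that point, the word swap at the core of the conversion can be sketched in a few lines. This is not the actual OpenCPI conversion code, which also deals with the .bit header; it only shows the 32-bit endian swap itself:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative 32-bit endian swap over a word-aligned buffer, the core
// of the .bit <-> .bin conversion described above (header handling and
// file I/O omitted).  The length must be a multiple of 4 bytes.
static void swapWords(uint8_t *buf, size_t len) {
  for (size_t i = 0; i + 3 < len; i += 4) {
    uint8_t t = buf[i];
    buf[i] = buf[i + 3];
    buf[i + 3] = t;
    t = buf[i + 1];
    buf[i + 1] = buf[i + 2];
    buf[i + 2] = t;
  }
}
```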

If it ends up being a truly nasty reverse engineering exercise now, I would reconsider.


From: discuss <discuss-bounces@lists.opencpi.org> on behalf of James
Kulp <jek@parera.com>
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga_manager stuff is really an attribute of
later Linux kernels, I don't think it is really a ZynqMP thing, but
just a later Linux kernel thing.
I am currently bringing up the quite ancient Zedboard using the
latest Vivado and Xilinx Linux and will try to use this same code.
There are two things I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be minimized
and the loading process faster and requiring no extra file system space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for
ZynqMP, you can diff them between release_1.4 and
release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
$ cd opencpi
$ git fetch origin release_1.4:release_1.4
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and
isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file, and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed, and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin format in order to
write them to the PL. So, some changes were made to vivado.mk to add
a make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

        *load_fpga_manager*(const char *fileName, std::string &error) {
          if (!file_exists("/lib/firmware")) {
            mkdir("/lib/firmware", 0666);
          }
          int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
          gzFile bin_file;
          int bfd, zerror;
          uint8_t buf[8*1024];

          if ((bfd = ::open(fileName, O_RDONLY)) < 0)
            OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                       fileName, strerror(errno), errno);
          if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
            OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                       fileName, strerror(errno), errno);
          do {
            uint8_t *bit_buf = buf;
            int n = ::gzread(bin_file, bit_buf, sizeof(buf));
            if (n < 0)
              return true;
            if (n & 3)
              return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                 fileName);
            if (n == 0)
              break;
            if (write(out_file, buf, n) <= 0)
              return OU::eformat(error,
                                 "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                 strerror(errno), errno, n);
          } while (1);
          close(out_file);
          std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
          std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
          fpga_flags << 0 << std::endl;
          fpga_firmware << "opencpi_temp.bin" << std::endl;

          remove("/lib/firmware/opencpi_temp.bin");
          return isProgrammed(error) ? init(error) : true;
        }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

        *isProgrammed*(...) {
          ...
          const char *e = OU::file2String(val,
                                          "/sys/class/fpga_manager/fpga0/state", '|');
          ...
          return val == "operating";
        }

vivado.mk's *.bin make rule uses bootgen to convert .bit to .bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to write_bitstream:
$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
        $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
        echo "{" >> $$(call BifName,$1,$3,$6); \
        echo "      [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
        echo "}" >> $$(call BifName,$1,$3,$6);
        $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL: http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190805/d9b4f229/attachment.txt



>>> >>> Most of the relevant code is pasted or summarized below: >>> >>> *load_fpga_manager*(const char *fileName, std::string &error) { >>> if (!file_exists("/lib/firmware")){ >>> mkdir("/lib/firmware",0666); >>> } >>> int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666); >>> gzFile bin_file; >>> int bfd, zerror; >>> uint8_t buf[8*1024]; >>> >>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) >>> OU::format(error, "Can't open bitstream file '%s' for reading: >>> %s(%d)", >>> fileName, strerror(errno), errno); >>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) >>> OU::format(error, "Can't open compressed bin file '%s' for : >>> %s(%u)", >>> fileName, strerror(errno), errno); >>> do { >>> uint8_t *bit_buf = buf; >>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); >>> if (n < 0) >>> return true; >>> if (n & 3) >>> return OU::eformat(error, "Bitstream data in is '%s' >>> not a multiple of 3 bytes", >>> fileName); >>> if (n == 0) >>> break; >>> if (write(out_file, buf, n) <= 0) >>> return OU::eformat(error, >>> "Error writing to /lib/firmware/opencpi_temp.bin >>> for bin >>> loading: %s(%u/%d)", >>> strerror(errno), errno, n); >>> } while (1); >>> close(out_file); >>> std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags"); >>> std::ofstream >>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); >>> fpga_flags << 0 << std::endl; >>> fpga_firmware << "opencpi_temp.bin" << std::endl; >>> >>> remove("/lib/firmware/opencpi_temp.bin"); >>> return isProgrammed(error) ? init(error) : true; >>> } >>> >>> The isProgrammed() function just checks whether or not the >>> fpga_manager state is 'operating' although we are not entirely >>> confident this is a robust check: >>> >>> *isProgrammed*(...) { >>> ... >>> const char *e = OU::file2String(val, >>> "/sys/class/fpga_manager/fpga0/state", '|'); >>> ... >>> return val == "operating"; >>> } >>> >>> vivado.mk's *bin make-rule uses bootgen to convert bit to bin. 
This >>> is necessary in Vivado 2018.2, but in later versions you may be able >>> to directly generate the correct *.bin file via an option to write_bitstream: >>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) >>> $(AT)echo -n For $2 on $5 using config $4: Generating >>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". >>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ >>> echo "{" >> $$(call BifName,$1,$3,$6); \ >>> echo " [destination_device = pl] $(notdir $(call >>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ >>> echo "}" >> $$(call BifName,$1,$3,$6); >>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir $(call >>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call >>> BinName,$1,$3,$6)) -w,bin) >>> >>> Hope this is useful! >>> >>> Regards, >>> David Banks >>> dbanks@geontech.com >>> Geon Technologies, LLC >>> -------------- next part -------------- An HTML attachment was >>> scrubbed... >>> URL: >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attach >>> m ents/20190201/4b49675d/attachment.html> >>> _______________________________________________ >>> discuss mailing list >>> discuss@lists.opencpi.org >>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... 
>> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm >> e nts/20190201/64e4ea45/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: hello_n310_log_output.txt URL: <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190805/d9b4f229/attachment.txt> _______________________________________________ discuss mailing list discuss@lists.opencpi.org http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
JK
James Kulp
Thu, Aug 8, 2019 7:02 PM

We’ll keep looking at this and provide more input tomorrow.

On Aug 8, 2019, at 13:36, Munro, Robert M. Robert.Munro@jhuapl.edu wrote:

Jim or others,

Is there any further input or feedback on the source or resolution of this issue?

As it stands I do not believe that the OCPI runtime software will be able to successfully load HDL assemblies on the N310 platform.  My familiarity with this codebase is limited and we would appreciate any guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp jek@parera.com; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as when running hello.xml.  I've attached a copy of the output from the command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output, I believe the system is calling OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128, which calls OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line 484, which in turn calls Driver::open in the same file at line 499, which then outputs the 'When searching for PL device ...' error at line 509.  This then returns to the HdlDriver.cxx search() function and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this codebase to adjust precompiler definitions with confidence that some other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp jek@parera.com
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but in that code there is:
if (file_exists("/dev/xdevcfg")){
ret_val= load_xdevconfig(fileName, error);
}
else if (file_exists("/sys/class/fpga_manager/fpga0/")){
ret_val= load_fpga_manager(fileName, error);
}
So it looks like the presence of /dev/xdevcfg is what causes it to look for /sys/class/xdevcfg/xdevcfg/device/prog_done
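For anyone bringing up a new board, here is a minimal standalone sketch of that same probe order. The two paths are the ones checked in the HdlBusDriver.cxx snippet above; the function name is mine, not the driver's:

```cpp
#include <string>
#include <sys/stat.h>

// Probe in the same order as the driver snippet above: the presence of
// /dev/xdevcfg selects the older xdevcfg loader; otherwise the mainline
// fpga_manager class directory selects the new path.
static std::string plLoaderKind() {
  struct stat st;
  if (stat("/dev/xdevcfg", &st) == 0)
    return "xdevcfg";        // older Xilinx kernels
  if (stat("/sys/class/fpga_manager/fpga0", &st) == 0)
    return "fpga_manager";   // kernels with the fpga_manager class
  return "none";             // no PL programming interface visible
}
```

Running this on the target (rather than the build host) tells you which loading code path the runtime would take.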

On 8/2/19 4:15 PM, Munro, Robert M. wrote:
Are there any required flag or environment variable settings that must be done before building the framework to utilize this functionality?  I have a platform built that is producing an output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading' .  This leads me to believe that it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the /sys/class/fpga_manager loading code in HdlBusDriver.cxx has been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James
Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:
in response to Point 1 here.  We attempted using the existing code that converted from .bit to .bin on the fly.  This did not work on these newer platforms using fpga_manager, so we decided to use the vendor-provided tools rather than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all zynq and zynqMP platforms need a .bin file format, wouldn't it make more sense to just use .bin files rather than converting them on the fly every time?
A sensible question for sure.

When this was done originally, it was to avoid generating multiple file formats all the time.  .bit files are necessary for JTAG loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single format of Xilinx bitstream files, including between ISE and Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion", after a header, is just a 32-bit endian swap, it doesn't matter much either way.
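As a concrete illustration of that conversion core, here is a sketch under the assumption stated above: the .bit header handling is omitted, and the helper name is mine, not from the OpenCPI source:

```cpp
#include <cstddef>
#include <cstdint>

// After the .bit header is stripped, converting the payload to .bin is a
// 32-bit endian swap of every word. nBytes should be a multiple of 4,
// which matches the "n & 3" check in load_fpga_manager() below.
static void swapWords32(uint8_t *data, size_t nBytes) {
  for (size_t i = 0; i + 3 < nBytes; i += 4) {
    uint8_t b0 = data[i], b1 = data[i + 1];
    data[i]     = data[i + 3];
    data[i + 1] = data[i + 2];
    data[i + 2] = b1;
    data[i + 3] = b0;
  }
}
```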

If it ends up being a truly nasty reverse engineering exercise now, I would reconsider.


From: discuss discuss-bounces@lists.opencpi.org on behalf of James
Kulp jek@parera.com
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing, but
just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be minimized
and the loading process faster and requiring no extra file system space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:
OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you can
diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
$ cd opencpi
$ git fetch origin release_1.4:release_1.4
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to write
them to the PL. So, some changes were made to vivado.mk to add a
make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

      load_fpga_manager(const char *fileName, std::string &error) {
        if (!file_exists("/lib/firmware"))
          mkdir("/lib/firmware", 0666);
        int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
        gzFile bin_file;
        int bfd;
        uint8_t buf[8*1024];

        if ((bfd = ::open(fileName, O_RDONLY)) < 0)
          OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                     fileName, strerror(errno), errno);
        if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
          OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                     fileName, strerror(errno), errno);
        do {
          int n = ::gzread(bin_file, buf, sizeof(buf));
          if (n < 0)
            return true;
          if (n & 3)
            return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                               fileName);
          if (n == 0)
            break;
          if (write(out_file, buf, n) <= 0)
            return OU::eformat(error,
                               "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                               strerror(errno), errno, n);
        } while (1);
        close(out_file);
        std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
        std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
        fpga_flags << 0 << std::endl;
        fpga_firmware << "opencpi_temp.bin" << std::endl;

        remove("/lib/firmware/opencpi_temp.bin");
        return isProgrammed(error) ? init(error) : true;
      }
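The flags/firmware/state handshake above can be isolated into a small helper. This is a hedged sketch: the sysfs directory is a parameter so it can be dry-run against a scratch directory instead of the real /sys/class/fpga_manager/fpga0, and programAndCheck is my name, not the driver's:

```cpp
#include <fstream>
#include <string>

// Replays the sysfs sequence from load_fpga_manager(): write 0 to "flags"
// (full reconfiguration), write the firmware file name to "firmware" (the
// kernel then loads it from /lib/firmware), and read back "state", which
// should report "operating" on success.
static std::string programAndCheck(const std::string &fpgaDir,
                                   const std::string &firmwareName) {
  {
    std::ofstream flags(fpgaDir + "/flags");
    flags << 0 << std::endl;
  }
  {
    std::ofstream firmware(fpgaDir + "/firmware");
    firmware << firmwareName << std::endl;
  }
  std::ifstream state(fpgaDir + "/state");
  std::string s;
  std::getline(state, s);
  return s;
}
```

On real hardware the write to "firmware" blocks until programming finishes, so reading "state" immediately afterwards is the same check isProgrammed() performs.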

The isProgrammed() function just checks whether the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

      isProgrammed(...) {
        ...
        const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
        ...
        return val == "operating";
      }

vivado.mk's *.bin make rule uses bootgen to convert .bit to .bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to write_bitstream:
$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
	$(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
	$(AT)echo all: > $$(call BifName,$1,$3,$6); \
	  echo "{" >> $$(call BifName,$1,$3,$6); \
	  echo "  [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
	  echo "}" >> $$(call BifName,$1,$3,$6);
	$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org





JK
James Kulp
Mon, Aug 12, 2019 1:00 PM

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for later
Linux kernels with the fpga_manager, and the other for the UltraScale+
chip itself.
The N310 is not UltraScale+, so we need to separate the two issues, which
were not separated before.
So it's not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the Xilinx tools (2019.1)
and the Xilinx Linux kernel (4.19) without dealing with UltraScale+,
which is intended to work with a baseline Zed board, but with current
tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible with
this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution of this issue?

As it stands I do not believe that the OCPI runtime software will be able to successfully load HDL assemblies on the N310 platform.  My familiarity with this codebase is limited and we would appreciate any guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp jek@parera.com; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as when running hello.xml.  I've attached a copy of the output from the command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output, I believe the system is calling OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128, which calls OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line 484, which in turn calls Driver::open in the same file at line 499, which then outputs the 'When searching for PL device ...' error at line 509.  Control then returns to the HdlDriver.cxx search() function, which outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this codebase to adjust precompiler definitions with confidence that some other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp jek@parera.com
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but in that code there is:
          if (file_exists("/dev/xdevcfg")) {
            ret_val = load_xdevconfig(fileName, error);
          }
          else if (file_exists("/sys/class/fpga_manager/fpga0/")) {
            ret_val = load_fpga_manager(fileName, error);
          }
So it looks like the presence of /dev/xdevcfg is what causes it to look for /sys/class/xdevcfg/xdevcfg/device/prog_done.

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flags or environment variable settings that must be set before building the framework to utilize this functionality?  I have a platform built that produces this output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading'.  This leads me to believe that it is still running the xdevcfg code present in HdlBusDriver.cxx.

Use of the release_1.4_zynq_ultra branch and the presence of the /sys/class/fpga_manager loading code in HdlBusDriver.cxx have been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James
Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

In response to Point 1 here: we attempted using the code that tried to convert from bit to bin on the fly.  This did not work on these newer platforms using fpga_manager, so we decided to use the vendor-provided tools rather than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all Zynq and ZynqMP platforms need the .bin file format, wouldn't it make more sense to just use .bin files rather than converting them on the fly every time?
A sensible question for sure.

When this was done originally, it was to avoid generating multiple file formats all the time.  .bit files are necessary for JTAG loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single format of Xilinx bitstream files, including between ISE and Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion", after a header, is just a 32-bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now, I would reconsider.
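[Editor's note: to make the 32-bit endian swap Jim describes concrete, here is a hypothetical sketch.  This is not OpenCPI's actual conversion code; the function name and layout are invented for illustration.]

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper (not OpenCPI's actual code): after the .bit header
// is stripped, the bit-to-bin "conversion" is just swapping the byte
// order of each 32-bit word in place.
// 'bytes' must be a multiple of 4, the same (n & 3) check the loader uses.
inline void swap32_buffer(uint8_t *buf, size_t bytes) {
  for (size_t i = 0; i + 3 < bytes; i += 4) {
    uint8_t b0 = buf[i], b1 = buf[i + 1];
    buf[i]     = buf[i + 3];
    buf[i + 1] = buf[i + 2];
    buf[i + 2] = b1;
    buf[i + 3] = b0;
  }
}
```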


From: discuss discuss-bounces@lists.opencpi.org on behalf of James
Kulp jek@parera.com
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing, but
just a later linux kernel thing.
I am currently bringing up the quite ancient Zedboard using the
latest Vivado and Xilinx Linux and will try to use this same code.
There are two things I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be minimized
and the loading process faster and requiring no extra file system space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for
ZynqMP, you can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
$ cd opencpi
$ git fetch origin release_1.4:release_1.4
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file, and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().
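[Editor's note: the sysfs sequence described above can be sketched as follows.  This is a minimal illustration, not the actual OpenCPI code; the function name and the directory parameter are invented so the sequence can be shown (and exercised) outside a real /sys tree.]

```cpp
#include <fstream>
#include <string>

// Minimal sketch of the fpga_manager sysfs handshake: write the flags
// value, then the firmware file name.  On a real system fpga_dir would be
// /sys/class/fpga_manager/fpga0 and the named file must already exist
// under /lib/firmware.
inline bool program_via_fpga_manager(const std::string &fpga_dir,
                                     const std::string &firmware_name) {
  std::ofstream flags(fpga_dir + "/flags");
  std::ofstream firmware(fpga_dir + "/firmware");
  if (!flags || !firmware)
    return false;
  flags << 0 << std::endl;                 // 0 => full (not partial) reconfiguration
  firmware << firmware_name << std::endl;  // kernel loads /lib/firmware/<name>
  return flags.good() && firmware.good();
}
```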

fpga_manager requires that bitstreams be in *.bin in order to write
them to the PL. So, some changes were made to vivado.mk to add a
make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

        load_fpga_manager(const char *fileName, std::string &error) {
          if (!file_exists("/lib/firmware")) {
            mkdir("/lib/firmware", 0666);
          }
          int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
          gzFile bin_file;
          int bfd, zerror;
          uint8_t buf[8*1024];

          if ((bfd = ::open(fileName, O_RDONLY)) < 0)
            OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                       fileName, strerror(errno), errno);
          if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
            OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                       fileName, strerror(errno), errno);
          do {
            uint8_t *bit_buf = buf;
            int n = ::gzread(bin_file, bit_buf, sizeof(buf));
            if (n < 0)
              return true;
            if (n & 3)   // fpga_manager writes must be whole 32-bit words
              return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                 fileName);
            if (n == 0)
              break;
            if (write(out_file, buf, n) <= 0)
              return OU::eformat(error,
                                 "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                 strerror(errno), errno, n);
          } while (1);
          close(out_file);
          std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
          std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
          fpga_flags << 0 << std::endl;
          fpga_firmware << "opencpi_temp.bin" << std::endl;

          remove("/lib/firmware/opencpi_temp.bin");
          return isProgrammed(error) ? init(error) : true;
        }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

        isProgrammed(...) {
          ...
          const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
          ...
          return val == "operating";
        }

vivado.mk's *.bin make rule uses bootgen to convert .bit to .bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to write_bitstream:
$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
        $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
             echo "{" >> $$(call BifName,$1,$3,$6); \
             echo "  [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
             echo "}" >> $$(call BifName,$1,$3,$6);
        $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)
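[Editor's note: for reference, the BIF file those echo commands generate looks like the following; the bitstream file name here is a placeholder, since the real name comes from $(call BitName,...).]

```
all:
{
  [destination_device = pl] my_assembly.bit
}
```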

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org


