Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Munro, Robert M.
Mon, Aug 12, 2019 1:37 PM

Jim,

This is the only branch with the modifications required for use with the FPGA Manager driver.  This is required for use with the Linux kernel provided for the N310.  The Xilinx toolset being used is 2018_2 and the kernel being used is generated via the N310 build container using v3.14.0.0.

Thanks,
Robert Munro

From: James Kulp <jek@parera.com>
Date: Monday, Aug 12, 2019, 9:00 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>, discuss@lists.opencpi.org <discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for later
linux kernels with the fpga manager, and the other for the ultrascale
chip itself.
The N310 is not ultrascale, so we need to separate the two issues, which
were not separated before.
So it's not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with
a baseline zed board, but with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible with
this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution of this issue?

As it stands I do not believe that the OCPI runtime software will be able to successfully load HDL assemblies on the N310 platform.  My familiarity with this codebase is limited and we would appreciate any guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp <jek@parera.com>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as when running hello.xml.  I've attached a copy of the output from the command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output, I believe the system is calling OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128, which calls OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line 484, which in turn calls Driver::open in the same file at line 499, which then outputs the 'When searching for PL device ...' error at line 509.  Control then returns to the search() function in HdlDriver.cxx, which outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device, and I am not familiar enough with this codebase to adjust the preprocessor definitions with confidence that no other code section will be affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp <jek@parera.com>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but in that code there is:
if (file_exists("/dev/xdevcfg")){
ret_val= load_xdevconfig(fileName, error);
}
else if (file_exists("/sys/class/fpga_manager/fpga0/")){
ret_val= load_fpga_manager(fileName, error);
}
So it looks like the presence of /dev/xdevcfg is what causes it to look for /sys/class/xdevcfg/xdevcfg/device/prog_done
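
As a quick sanity check on the target, you can confirm which of the two interfaces is present, and therefore which path the code above takes (paths exactly as in the code above):

    # ls /dev/xdevcfg
    # ls /sys/class/fpga_manager/fpga0/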

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that must be done before building the framework to utilize this functionality?  I have a platform built that is producing this output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading'.  This leads me to believe that it is running the xdevcfg code still present in HdlBusDriver.cxx.

Use of the release_1.4_zynq_ultra branch and the presence of the /sys/class/fpga_manager loading code in HdlBusDriver.cxx have been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James
Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

In response to Point 1 here: we attempted to use the existing code that converts from bit to bin on the fly.  This did not work on these newer platforms using fpga_manager, so we decided to use the vendor-provided tools rather than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all Zynq and ZynqMP platforms need the .bin file format, wouldn't it make more sense to just use .bin files rather than converting them on the fly every time?
A sensible question for sure.

When this was done originally, it was to avoid generating multiple file formats all the time.  .bit files are necessary for JTAG loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single format of Xilinx bitstream files, including between ISE and Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion", after a header, is just a 32-bit endian swap, it doesn't matter much either way.
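
A minimal sketch of that conversion core, assuming the header has already been skipped (illustrative only, not the actual OpenCPI code):

    #include <cstdint>
    #include <cstddef>

    // Byte-swap each 32-bit configuration word in place; as noted above,
    // after the header the .bit and .bin payloads differ only by this swap.
    static void bitToBinWords(uint32_t *words, size_t nWords) {
      for (size_t i = 0; i < nWords; ++i) {
        uint32_t w = words[i];
        words[i] = (w >> 24) | ((w >> 8) & 0x0000ff00u) |
                   ((w << 8) & 0x00ff0000u) | (w << 24);
      }
    }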

If it ends up being a truly nasty reverse engineering exercise now, I would reconsider.


From: discuss <discuss-bounces@lists.opencpi.org> on behalf of James
Kulp <jek@parera.com>
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing, but
just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.
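
For what it's worth, a minimal sketch of idea 2 against the in-kernel fpga_mgr API (Linux ~4.16 and later; the OpenCPI-side function here is hypothetical, while the fpga_mgr calls are the kernel's documented entry points):

    #include <linux/err.h>
    #include <linux/fpga/fpga-mgr.h>

    /* Hypothetical driver hook: load a bitstream already held in kernel
     * memory, with no staging copy under /lib/firmware. */
    static int ocpi_load_pl_from_buf(struct device *fpga_dev,
                                     const char *buf, size_t count)
    {
      struct fpga_manager *mgr = fpga_mgr_get(fpga_dev); /* e.g. fpga0's device */
      struct fpga_image_info *info;
      int ret;

      if (IS_ERR(mgr))
        return PTR_ERR(mgr);
      info = fpga_image_info_alloc(fpga_dev);
      if (!info) {
        fpga_mgr_put(mgr);
        return -ENOMEM;
      }
      info->buf = buf;     /* in-memory image instead of a firmware file name */
      info->count = count;
      ret = fpga_mgr_lock(mgr);
      if (!ret) {
        ret = fpga_mgr_load(mgr, info);
        fpga_mgr_unlock(mgr);
      }
      fpga_image_info_free(info);
      fpga_mgr_put(mgr);
      return ret;
    }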

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be minimized,
and the loading process will be faster and require no extra file system space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in the release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for ZynqMP,
you can diff them between release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
$ cd opencpi
$ git fetch origin release_1.4:release_1.4
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file, and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().
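
For reference, the same sequence done by hand on the target would look roughly like this (the bitstream name is hypothetical; this mirrors the steps described above rather than any OpenCPI tooling):

    # mkdir -p /lib/firmware
    # cp my_assembly.bin /lib/firmware/
    # echo 0 > /sys/class/fpga_manager/fpga0/flags
    # echo my_assembly.bin > /sys/class/fpga_manager/fpga0/firmware
    # cat /sys/class/fpga_manager/fpga0/state
    operating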

fpga_manager requires that bitstreams be in the *.bin format in order to write
them to the PL. So, some changes were made to vivado.mk to add a
make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.
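
For concreteness, the BIF file this rule generates and the resulting bootgen invocation would look roughly like the following (file names are hypothetical; the -arch value is zynqmp for UltraScale+ parts and zynq for 7000-series):

    all:
    {
      [destination_device = pl] my_assembly.bit
    }

    $ bootgen -image my_assembly.bif -arch zynqmp -o my_assembly.bin -w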

Most of the relevant code is pasted or summarized below:

    load_fpga_manager(const char *fileName, std::string &error) {
      if (!file_exists("/lib/firmware")) {
        mkdir("/lib/firmware", 0666);
      }
      int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
      gzFile bin_file;
      int bfd, zerror;
      uint8_t buf[8*1024];

      if ((bfd = ::open(fileName, O_RDONLY)) < 0)
        OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                   fileName, strerror(errno), errno);
      if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
        OU::format(error, "Can't open compressed bin file '%s' for : %s(%u)",
                   fileName, strerror(errno), errno);
      do {
        uint8_t *bit_buf = buf;
        int n = ::gzread(bin_file, bit_buf, sizeof(buf));
        if (n < 0)
          return true;
        if (n & 3) // the data must be whole 32-bit words
          return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                             fileName);
        if (n == 0)
          break;
        if (write(out_file, buf, n) <= 0)
          return OU::eformat(error,
                             "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                             strerror(errno), errno, n);
      } while (1);
      close(out_file);
      std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
      std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
      fpga_flags << 0 << std::endl;
      fpga_firmware << "opencpi_temp.bin" << std::endl;

      remove("/lib/firmware/opencpi_temp.bin");
      return isProgrammed(error) ? init(error) : true;
    }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

    isProgrammed(...) {
      ...
      const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
      ...
      return val == "operating";
    }

vivado.mk's *.bin make rule uses bootgen to convert bit to bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to write_bitstream:

    $(call BinName,$1,$3,$6): $(call BitName,$1,$3)
        $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
             echo "{" >> $$(call BifName,$1,$3,$6); \
             echo "      [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
             echo "}" >> $$(call BifName,$1,$3,$6);
        $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC
James Kulp
Mon, Aug 12, 2019 3:11 PM

On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform, tools,
and kernel is not yet supported in either the mainline of OpenCPI or
the third-party branch you are trying to use.

It is probably not a big problem, but it needs to be debugged by someone
who has the time and skills to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be
supported in a patch from the mainline "soon", probably in a month,
since it is being actively worked on.

That does not guarantee functionality on your exact kernel (and thus
your version of the fpga manager), but it does guarantee it will work on
the latest Xilinx-supported kernel.

Jim

Chris Hinkey
Tue, Aug 13, 2019 2:01 PM

I think when I implemented this code I probably made the assumption that if
we are using fpga_manager we are also using ARCH=arm64.  This met our needs,
as we only cared about the fpga manager on UltraScale devices at the time.
We also assumed that the tools created a tarred bin file
instead of a bit file, because we could not get the bit-to-bin conversion
working with the existing OpenCPI code (this might cause you problems later
when actually trying to load the FPGA).

The original problem you were running into is certainly because of an ifdef
on line 226, which checks the old driver's "done" pin when building for arm
but not for arm64:

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

To move forward for now, you can change this line to "#if 0" and rebuild
the framework. Note that this will cause other Zynq-based platforms (zed,
matchstiq, etc.) to no longer work with this patch, but maybe you don't care
for now while Jim tries to get this into the mainline in a more generic
way.

There may be some similar patches you need to make to the same file, but the
full diff that I needed to make to HdlBusDriver.cxx against the 1.4 mainline
can be seen here, in case you didn't already know:
https://github.com/opencpi/opencpi/pull/17/files

Hope this helps.

On Mon, Aug 12, 2019 at 11:12 AM James Kulp jek@parera.com wrote:

On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools
and kernel is not yet supported in either the mainline of OpenCPI or
the third-party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has
the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be
supported in a patch from the mainline "soon", probably in a month,
since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus
version of the fpga manager), but it does guarantee it working on the
latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp <jek@parera.com>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M. <Robert.Munro@jhuapl.edu>, discuss@lists.opencpi.org
*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for later
linux kernels with the fpga manager, and the other for the ultrascale
chip itself.
The N310 is not ultrascale, so we need to separate the two issues, which
were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with
a baseline zed board, but with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible with
this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution
of this issue?

As it stands I do not believe that the OCPI runtime software will be
able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of
Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp jek@parera.com; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because
the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being
compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as
when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling
OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this
codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp jek@parera.com
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. Robert.Munro@jhuapl.edu;
discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but
in that code there is:
    if (file_exists("/dev/xdevcfg")) {
      ret_val = load_xdevconfig(fileName, error);
    } else if (file_exists("/sys/class/fpga_manager/fpga0/")) {
      ret_val = load_fpga_manager(fileName, error);
    }
So it looks like the presence of /dev/xdevcfg is what causes it to
look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that
must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the
/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James
Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that on
the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager, so we decided to use the
vendor-provided tools rather than reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality, and given
that all zynq and zynqMP platforms need a .bin file format, wouldn't it
make more sense to just use .bin files rather than converting them on
the fly every time?
A sensible question for sure.

When this was done originally, it was to avoid generating multiple
file formats all the time.  .bit files are necessary for JTAG loading,
and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be
mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag
loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both
formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a
single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way
and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion", after a header, is just a
32-bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,
I would reconsider.


From: discuss discuss-bounces@lists.opencpi.org on behalf of James
Kulp jek@parera.com
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing, but
just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be minimized,
the loading process made faster, and no extra file system space
required.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important
contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at

https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream
.

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra
branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you can
diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mk;

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to
/lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and then
the filename "opencpi_temp.bin" to
/sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to write
them to the PL. So, some changes were made to vivado.mk to add a
make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

        load_fpga_manager(const char *fileName, std::string &error) {
          if (!file_exists("/lib/firmware"))
            mkdir("/lib/firmware", 0666);
          int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
          gzFile bin_file;
          int bfd;
          uint8_t buf[8*1024];

          if ((bfd = ::open(fileName, O_RDONLY)) < 0)
            OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                       fileName, strerror(errno), errno);
          if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
            OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                       fileName, strerror(errno), errno);
          do {
            uint8_t *bit_buf = buf;
            int n = ::gzread(bin_file, bit_buf, sizeof(buf));
            if (n < 0)
              return true;
            if (n & 3) // the PL expects whole 32-bit words
              return OU::eformat(error,
                                 "Bitstream data in '%s' is not a multiple of 4 bytes",
                                 fileName);
            if (n == 0)
              break;
            if (write(out_file, buf, n) <= 0)
              return OU::eformat(error,
                                 "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                 strerror(errno), errno, n);
          } while (1);
          close(out_file);
          // flags = 0 requests a full (not partial) reconfiguration;
          // writing the firmware name is what triggers the actual load
          std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
          std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
          fpga_flags << 0 << std::endl;
          fpga_firmware << "opencpi_temp.bin" << std::endl;

          remove("/lib/firmware/opencpi_temp.bin");
          return isProgrammed(error) ? init(error) : true;
        }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

        isProgrammed(...) {
          ...
          const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
          ...
          return val == "operating";
        }

vivado.mk's *bin make-rule uses bootgen to convert bit to bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to
write_bitstream:
    $(call BinName,$1,$3,$6): $(call BitName,$1,$3)
          $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
          $(AT)echo all: > $$(call BifName,$1,$3,$6); \
               echo "{" >> $$(call BifName,$1,$3,$6); \
               echo " [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
               echo "}" >> $$(call BifName,$1,$3,$6);
          $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC

MR
Munro, Robert M.
Tue, Aug 13, 2019 2:55 PM

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but I was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff against the 1.4 mainline, making the appropriate modifications, and testing with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs, as we only cared about the fpga manager on UltraScale devices at the time.  We also assumed that the tools created a tarred bin file instead of a bit file, because we could not get the bit-to-bin conversion working with the existing OpenCPI code (this might cause you problems later when actually trying to load the FPGA).

The original problem you were running into is certainly because of an ifdef on line 226, which checks the old driver's "done" pin when building for arm but not for arm64:

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

To move forward for now, you can change this line to "#if 0" and rebuild the framework. Note that this will cause other Zynq-based platforms (zed, matchstiq, etc.) to no longer work with this patch, but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
There may be some similar patches you need to make to the same file, but the full diff that I needed to make to HdlBusDriver.cxx against the 1.4 mainline can be seen here, in case you didn't already know: https://github.com/opencpi/opencpi/pull/17/files
Hope this helps.

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools
and kernel is not yet supported in either the mainline of OpenCPI or
the third-party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has
the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be
supported in a patch from the mainline "soon", probably in a month,
since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus
version of the fpga manager), but it does guarantee it working on the
latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp <jek@parera.com>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M. <Robert.Munro@jhuapl.edu>, discuss@lists.opencpi.org
*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for later
linux kernels with the fpga manager, and the other for the ultrascale
chip itself.
The N310 is not ultrascale, so we need to separate the two issues, which
were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with
a baseline zed board, but with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible with
this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of

Munro, Robert M.

Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp <jek@parera.com>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp <jek@parera.com>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")) {
           ret_val = load_xdevconfig(fileName, error);
         } else if (file_exists("/sys/class/fpga_manager/fpga0/")) {
           ret_val = load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James
Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that on
the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager, so we decided to use the
vendor-provided tools rather than reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality, and given
that all zynq and zynqMP platforms need a .bin file format, wouldn't it
make more sense to just use .bin files rather than converting them on
the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG loading,
and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion", after a header, is just a
32-bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss <discuss-bounces@lists.opencpi.org> on behalf of James
Kulp <jek@parera.com>
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing, but
just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done the
hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but I
    think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow you
    to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be minimized,
the loading process made faster, and no extra file system space
required.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading
for ZynqMP/UltraScale+ using "fpga_manager". In general, we
followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream .

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you can
diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mk;

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and then
the filename "opencpi_temp.bin" to
/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to write
them to the PL. So, some changes were made to vivado.mk to add a
make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

        load_fpga_manager(const char *fileName, std::string &error) {
          if (!file_exists("/lib/firmware"))
            mkdir("/lib/firmware", 0666);
          int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
          gzFile bin_file;
          int bfd;
          uint8_t buf[8*1024];

          if ((bfd = ::open(fileName, O_RDONLY)) < 0)
            OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                       fileName, strerror(errno), errno);
          if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
            OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                       fileName, strerror(errno), errno);
          do {
            uint8_t *bit_buf = buf;
            int n = ::gzread(bin_file, bit_buf, sizeof(buf));
            if (n < 0)
              return true;
            if (n & 3) // the PL expects whole 32-bit words
              return OU::eformat(error,
                                 "Bitstream data in '%s' is not a multiple of 4 bytes",
                                 fileName);
            if (n == 0)
              break;
            if (write(out_file, buf, n) <= 0)
              return OU::eformat(error,
                                 "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                 strerror(errno), errno, n);
          } while (1);
          close(out_file);
          // flags = 0 requests a full (not partial) reconfiguration;
          // writing the firmware name is what triggers the actual load
          std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
          std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
          fpga_flags << 0 << std::endl;
          fpga_firmware << "opencpi_temp.bin" << std::endl;

          remove("/lib/firmware/opencpi_temp.bin");
          return isProgrammed(error) ? init(error) : true;
        }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

        isProgrammed(...) {
          ...
          const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
          ...
          return val == "operating";
        }

vivado.mk's *bin make-rule uses bootgen to convert bit to bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

    $(call BinName,$1,$3,$6): $(call BitName,$1,$3)
          $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
          $(AT)echo all: > $$(call BifName,$1,$3,$6); \
               echo "{" >> $$(call BifName,$1,$3,$6); \
               echo " [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
               echo "}" >> $$(call BifName,$1,$3,$6);
          $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC


To see the changes made to these files for ZynqMP, you can > >>>> diff them between > >>>> *release_1.4* and *release_1.4_zynq_ultra*: > >>>> $ git clone https://github.com/Geontech/opencpi.git --branch > >>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin > >>>> release_1.4:release_1.4; $ git diff release_1.4 -- > >>>> runtime/hdl/src/HdlBusDriver.cxx > >>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk>; > >>>> > >>>> > >>>> The directly relevant functions are *load_fpga_manager()* and i > >>>> *sProgrammed()*. > >>>> load_fpga_manager() ensures that /lib/firmware exists, reads the > >>>> *.bin bitstream file and writes its contents to > /lib/firmware/opencpi_temp.bin. > >>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the > >>>> the filename "opencpi_temp.bin" to > /sys/class/fpga_manager/fpga0/firmware. > >>>> Finally, the temporary opencpi_temp.bin bitstream is removed and the > >>>> state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is > >>>> confirmed to be "operating" in isProgrammed(). > >>>> > >>>> fpga_manager requires that bitstreams be in *.bin in order to write > >>>> them to the PL. So, some changes were made to vivado.mk<http://vivado.mk> to add a > >>>> make rule for the *.bin file. This make rule (*BinName*) uses > >>>> Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. > >>>> > >>>> Most of the relevant code is pasted or summarized below: > >>>> > >>>> *load_fpga_manager*(const char *fileName, std::string > &error) { > >>>> if (!file_exists("/lib/firmware")){ > >>>> mkdir("/lib/firmware",0666); > >>>> } > >>>> int out_file = > creat("/lib/firmware/opencpi_temp.bin", 0666); > >>>> gzFile bin_file; > >>>> int bfd, zerror; > >>>> uint8_t buf[8*1024]; > >>>> > >>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) > >>>> OU::format(error, "Can't open bitstream file '%s' > for reading: > >>>> %s(%d)", > >>>> fileName, strerror(errno), errno); > >>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) > >>>> OU::format(error, "Can't open compressed bin file > '%s' for : > >>>> %s(%u)", > >>>> fileName, strerror(errno), errno); > >>>> do { > >>>> uint8_t *bit_buf = buf; > >>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); > >>>> if (n < 0) > >>>> return true; > >>>> if (n & 3) > >>>> return OU::eformat(error, "Bitstream data in is '%s' > >>>> not a multiple of 3 bytes", > >>>> fileName); > >>>> if (n == 0) > >>>> break; > >>>> if (write(out_file, buf, n) <= 0) > >>>> return OU::eformat(error, > >>>> "Error writing to /lib/firmware/opencpi_temp.bin > >>>> for bin > >>>> loading: %s(%u/%d)", > >>>> strerror(errno), errno, n); > >>>> } while (1); > >>>> close(out_file); > >>>> std::ofstream > fpga_flags("/sys/class/fpga_manager/fpga0/flags"); > >>>> std::ofstream > >>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); > >>>> fpga_flags << 0 << std::endl; > >>>> fpga_firmware << "opencpi_temp.bin" << std::endl; > >>>> > >>>> remove("/lib/firmware/opencpi_temp.bin"); > >>>> return isProgrammed(error) ? init(error) : true; > >>>> } > >>>> > >>>> The isProgrammed() function just checks whether or not the > >>>> fpga_manager state is 'operating' although we are not entirely > >>>> confident this is a robust check: > >>>> > >>>> *isProgrammed*(...) { > >>>> ... > >>>> const char *e = OU::file2String(val, > >>>> "/sys/class/fpga_manager/fpga0/state", '|'); > >>>> ... > >>>> return val == "operating"; > >>>> } > >>>> > >>>> vivado.mk<http://vivado.mk>'s *bin make-rule uses bootgen to convert bit to bin. 
This > >>>> is necessary in Vivado 2018.2, but in later versions you may be able > >>>> to directly generate the correct *.bin file via an option to > write_bitstream: > >>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) > >>>> $(AT)echo -n For $2 on $5 using config $4: Generating > >>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". > >>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ > >>>> echo "{" >> $$(call BifName,$1,$3,$6); \ > >>>> echo " [destination_device = pl] $(notdir $(call > >>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ > >>>> echo "}" >> $$(call BifName,$1,$3,$6); > >>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir $(call > >>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call > >>>> BinName,$1,$3,$6)) -w,bin) > >>>> > >>>> Hope this is useful! > >>>> > >>>> Regards, > >>>> David Banks > >>>> dbanks@geontech.com<mailto:dbanks@geontech.com> > >>>> Geon Technologies, LLC > >>>> -------------- next part -------------- An HTML attachment was > >>>> scrubbed... > >>>> URL: > >>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attach > >>>> m ents/20190201/4b49675d/attachment.html> > >>>> _______________________________________________ > >>>> discuss mailing list > >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >>> _______________________________________________ > >>> discuss mailing list > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >>> -------------- next part -------------- An HTML attachment was > >>> scrubbed... > >>> URL: > >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm > >>> e nts/20190201/64e4ea45/attachment.html> > >>> _______________________________________________ > >>> discuss mailing list > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > -------------- next part -------------- > > An embedded and charset-unspecified text was scrubbed... > > Name: hello_n310_log_output.txt > > URL: > <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190805/d9b4f229/attachment.txt> > > _______________________________________________ > > discuss mailing list > > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > _______________________________________________ discuss mailing list discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
MR
Munro, Robert M.
Wed, Aug 28, 2019 9:55 PM

Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process, I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw assembly is being used as a known-good reference for this purpose.  The new sections of vivado.mk were merged in to build the HDL using the framework, but no .bin file was generated when using ocpidev build with the --hdl-assembly argument.  I then attempted to replicate the vivado.mk commands manually, following the Xilinx guidelines for generating a .bin from a .bit: https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager .

The steps were (sketched concretely below):

  • generate a .bif file similar to the documentation's Full_Bitstream.bif, using the correct filename
  • run a bootgen command similar to vivado.mk's: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w
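
For reference, the .bif contents and the bootgen invocation looked roughly like this (fsk_filerw here is a stand-in for the actual bitstream name):

  $ cat fsk_filerw.bif
  all:
  {
    [destination_device = pl] fsk_filerw.bit
  }
  $ bootgen -image fsk_filerw.bif -arch zynq -o fsk_filerw.bin -w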

This generated a .bin file as desired, which was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognized the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ', but the impact of this was not understood until attempting to load HDL.  When attempting to run fsk_filerw via the ocpirun command, it did not appear to recognize the assembly when listing the resources found, and it reported that a suitable candidate for an HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly; the same '...OCCP signature: magic: ...' output was observed, followed finally by ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there some other modification required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached so that the code modifications can be verified as correct.  The log output from the ocpihdl load command is also attached in case it can provide further insight into the behavior or the required steps.

Thanks,
Rob

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey chinkey@geontech.com; James Kulp jek@parera.com
Cc: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an ifdef on line 226 where it will check the old driver done pin if it is on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

To move forward for now you can change this line to an "#if 0" and rebuild the framework; note this will cause other zynq based platforms (zed, matchstiq etc.) to no longer work with this patch, but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
There may be some similar patches you need to make to the same file, but the full diff that I needed to make to BusDriver.cxx against the 1.4 mainline can be seen here: https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
Hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI or the third-party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

From: James Kulp <jek@parera.com>
Date: Monday, Aug 12, 2019, 9:00 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>, discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with a baseline zed board, but with current
tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution
of this issue?

As it stands I do not believe that the OCPI runtime software will be
able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp <jek@parera.com>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because
the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being
compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as
when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling
OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this
codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp <jek@parera.com>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but
in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to
look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that
must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the
/sys/class/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

In response to Point 1 here: we attempted using the code that on
the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager, so we decided to use the
vendor-provided tools rather than to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality, and given
that all zynq and zynqMP platforms need a .bin file format, wouldn't it
make more sense to just use .bin files rather than converting them on
the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple
file formats all the time.  .bit files are necessary for JTAG loading,
and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be
mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag
loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both
formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a
single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way
and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion", after a header, is just a
32-bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,
I would reconsider.


From: discuss <discuss-bounces@lists.opencpi.org> on behalf of James Kulp <jek@parera.com>
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process made faster, requiring no extra
file system space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important
contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in the release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for ZynqMP,
you can diff them between release_1.4 and release_1.4_zynq_ultra:

  $ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
  $ cd opencpi
  $ git fetch origin release_1.4:release_1.4
  $ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the *.bin
bitstream file and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and then the
filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and the
state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is
confirmed to be "operating" in isProgrammed().
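
The same sequence can be exercised by hand from a shell on the target (a
minimal sketch; my_assembly.bin is a placeholder for an uncompressed *.bin
bitstream):

  mkdir -p /lib/firmware
  cp my_assembly.bin /lib/firmware/opencpi_temp.bin
  echo 0 > /sys/class/fpga_manager/fpga0/flags       # 0 selects full (non-partial) reconfiguration
  echo opencpi_temp.bin > /sys/class/fpga_manager/fpga0/firmware
  cat /sys/class/fpga_manager/fpga0/state            # "operating" on success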

fpga_manager requires that bitstreams be in *.bin in order to write them
to the PL. So, some changes were made to vivado.mk to add a make rule
for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to
convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         load_fpga_manager(const char *fileName, std::string &error) {
           if (!file_exists("/lib/firmware")) {
             mkdir("/lib/firmware", 0666);
           }
           int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                        fileName, strerror(errno), errno);
           if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
             OU::format(error, "Can't open compressed bin file '%s' for: %s(%u)",
                        fileName, strerror(errno), errno);
           do {
             uint8_t *bit_buf = buf;
             int n = ::gzread(bin_file, bit_buf, sizeof(buf));
             if (n < 0)
               return true;
             if (n & 3)
               return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                  fileName);
             if (n == 0)
               break;
             if (write(out_file, buf, n) <= 0)
               return OU::eformat(error,
                                  "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                  strerror(errno), errno, n);
           } while (1);
           close(out_file);
           std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
           std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
           fpga_flags << 0 << std::endl;
           fpga_firmware << "opencpi_temp.bin" << std::endl;

           remove("/lib/firmware/opencpi_temp.bin");
           return isProgrammed(error) ? init(error) : true;
         }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         isProgrammed(...) {
           ...
           const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
           ...
           return val == "operating";
         }

vivado.mk's *bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able to
directly generate the correct *.bin file via an option to write_bitstream:

         $(call BinName,$1,$3,$6): $(call BitName,$1,$3)
                 $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
                 $(AT)echo all: > $$(call BifName,$1,$3,$6); \
                   echo "{" >> $$(call BifName,$1,$3,$6); \
                   echo " [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
                   echo "}" >> $$(call BifName,$1,$3,$6);
                 $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC

CH
Chris Hinkey
Thu, Aug 29, 2019 12:05 PM

It looks like you loaded something successfully, but the control plane is
not hooked up quite right.

As an early part of the running process, OpenCPI reads a register across
the control plane that should contain the ASCII string "OpenCPI(NULL)".
The expected value given in the error message, (sb 0x435049004f70656e),
is that string printed as a single 64-bit hex number, which reads
"CPI(NULL)Open"; the value actually read back (0x18000afe187003) does not
resemble it at all.  This is the magic the message is referring to: it
requires "OpenCPI" to be readable at address 0 of the control plane
address space to proceed.
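
If it helps, here is a tiny standalone program (illustration only, not
OpenCPI source; the two constants are the values from your error message)
that prints what those hex values spell:

  #include <cstdint>
  #include <cstdio>

  // Print the ASCII bytes of a 64-bit OCCP magic value, most-significant
  // byte first; non-printable bytes are shown as (hex).
  static void decode(uint64_t v) {
    for (int i = 7; i >= 0; --i) {
      unsigned c = unsigned(v >> (8 * i)) & 0xffu;
      if (c >= 0x20 && c < 0x7f)
        printf("%c", (int)c);
      else
        printf("(%02x)", c);
    }
    printf("\n");
  }

  int main() {
    decode(0x435049004f70656e); // expected ("sb") value: prints CPI(00)Open
    decode(0x0018000afe187003); // value your N310 read back: mostly non-printable
    return 0;
  }

The expected value is just the ASCII "OpenCPI" plus a NUL split across two
32-bit words, which is why the two halves appear swapped when it is
printed as one 64-bit number.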

I think we ran into this problem before and decided it was because the bus
on the ultrascale was set up to be 32 bits wide when it needed to be 64
bits for the HDL that we implemented to work correctly.  Remind me, which
platform are you using: a zynq ultrascale or a 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. Robert.Munro@jhuapl.edu
wrote:

Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of
the file and going through the build process, I am encountering a new error
when attempting to load HDL on the N310.  The fsk_filerw assembly is being
used as a known-good reference for this purpose.  The new sections of vivado.mk
were merged in to attempt building the HDL using the framework, but it did
not generate the .bin file when using ocpidev build with the --hdl-assembly
argument.  I then attempted to replicate the commands in vivado.mk manually,
following the Xilinx documentation guidelines for generating a .bin from a .bit:
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager

The steps were:

  • generate a .bif file similar to the documentation's Full_Bitstream.bif
    using the correct filename
  • run a bootgen command similar to vivado.mk's (see the sketch just below):
    bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w
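
For reference, the .bif ends up looking essentially like what the vivado.mk
rule generates (the bitstream filename here is a placeholder):

    all:
    {
      [destination_device = pl] my_assembly.bit
    }

followed by, e.g.: bootgen -image my_assembly.bif -arch zynq -o my_assembly.bin -w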

This generated a .bin file as desired and was copied to the artifacts
directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL
container as being available, and the hello application was able to run
successfully.  The command output contained ' HDL Device 'PL:0' responds,
but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) '
but the impact of this was not understood until attempting to load HDL.
When attempting to run fsk_filerw from the ocpirun command, it did not
appear to recognize the assembly when listing the resources found in the
output, and it reported that a suitable candidate for an HDL-implemented
component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the
HDL assembly; the same '...OCCP signature: magic: ...' output was observed,
and finally ' Exiting for problem: error loading device pl:0: Magic numbers
in admin space do not match'.

Is there some other step that must be taken during the generation of the
.bin file?  Is there any other software modification that is required of
the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx
is attached to make sure that the required code modifications are performed
correctly.  The log output from the ocpihdl load command is attached in
case that can provide further insight regarding performance or required
steps.

Thanks,
Rob

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of Munro,
Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey chinkey@geontech.com; James Kulp jek@parera.com
Cc: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+
fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the
#define could be overridden to provide the desired functionality for the
platform, but was not comfortable making the changes without proper
familiarity.  I will move forward by looking at the diff to the 1.4
mainline, make the appropriate modifications, and test with the modified
framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+
fpga_manager

I think when I implemented this code I probably made the assumption that
if we are using fpga_manager we are also using ARCH=arm64.  This met our
needs, as we only cared about the fpga manager on ultrascale devices at the
time.  We also made the assumption that the tools created a tarred bin file
instead of a bit file because we could not get the bit-to-bin conversion
working with the existing OpenCPI code (this might cause you problems later
when actually trying to load the FPGA).

The original problem you were running into is certainly because of an
ifdef on line 226, which checks the old driver's done pin if it is on
an arm and not an arm64:

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

To move forward for now you can change this line to an "#if 0" and rebuild
the framework.  Note this will cause other Zynq-based platforms (zed,
matchstiq, etc.) to no longer work with this patch, but maybe you don't care
for now while Jim tries to get this into the mainline in a more generic way
(one possible shape of which is sketched just below).
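
Purely as an illustration of "a more generic way" (my sketch, not code from
the PR; the two isProgrammed* helper names are invented), the compile-time
arch guard could become a runtime choice, mirroring the file_exists()
selection already used on the load path:

    // Hypothetical sketch: pick the "done" check at runtime instead of by
    // CPU architecture; file_exists() is the same helper the loader uses.
    bool done;
    if (file_exists("/dev/xdevcfg"))
      done = isProgrammedXdevcfg(error);       // old xdevcfg prog_done pin
    else if (file_exists("/sys/class/fpga_manager/fpga0/"))
      done = isProgrammedFpgaManager(error);   // fpga_manager "state" file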
There may be some similar patches you need to make to the same file, but
the full diff that I needed to make to BusDriver.cxx against the 1.4 mainline
can be seen here: https://github.com/opencpi/opencpi/pull/17/files in case
you didn't already know.
Hope this helps.

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform, tools, and
kernel is not yet supported in either the mainline of OpenCPI or the
third-party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the
time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be
supported in a patch from the mainline "soon", probably in a month, since
it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus
version of the fpga manager), but it does guarantee it working on the
latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp <jek@parera.com>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M. <Robert.Munro@jhuapl.edu>,
discuss@lists.opencpi.org
*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So it's not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with a baseline zed board, but with current
tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution
of this issue?

As it stands I do not believe that the OCPI runtime software will be
able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of
Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp <jek@parera.com>;
discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because
the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being
compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as
when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling
OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this
codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp <jek@parera.com>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edu>;
discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but
in that code there is:
    if (file_exists("/dev/xdevcfg")) {
      ret_val = load_xdevconfig(fileName, error);
    } else if (file_exists("/sys/class/fpga_manager/fpga0/")) {
      ret_val = load_fpga_manager(fileName, error);
    }
So it looks like the presence of /dev/xdevcfg is what causes it to
look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that
must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the
/sys/class/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that
converts from bit to bin on the fly.  This did not work
on these newer platforms using fpga_manager, so we decided to use the
vendor-provided tools rather than to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality, and given
that all Zynq and ZynqMP platforms need the .bin file format, wouldn't it
make more sense to just use .bin files rather than converting them on
the fly every time?
A sensible question for sure.

When this was done originally, it was to avoid generating multiple
file formats all the time.  .bit files are necessary for JTAG loading,
and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be
mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag
loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both
formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a
single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way
and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion", after a header, is just a
32-bit endian swap, it doesn't matter much either way.
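
As a minimal sketch of that core step (my own illustration; real code must
also parse and skip the .bit header, which is not shown):

    #include <cstddef>
    #include <cstdint>

    // Byte-reverse each 32-bit configuration word in place; len is assumed
    // to be a multiple of 4, with the .bit header already stripped.
    void bitToBinWords(std::uint8_t *buf, std::size_t len) {
      for (std::size_t i = 0; i + 3 < len; i += 4) {
        std::uint8_t t = buf[i];  buf[i]     = buf[i + 3]; buf[i + 3] = t;
        t = buf[i + 1];           buf[i + 1] = buf[i + 2]; buf[i + 2] = t;
      }
    }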

If it ends up being a truly nasty reverse engineering exercise now,
I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.org> on behalf of James Kulp
<jek@parera.com>
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized, the loading process made faster, and no extra file
system space required.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important
contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream
.

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra
branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
$ cd opencpi
$ git fetch origin release_1.4:release_1.4
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file, and writes its contents to
/lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to
/sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed, and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be
"operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin format in order to
write them to the PL. So, some changes were made to
vivado.mk to add a make rule for the *.bin
file. This make rule (BinName) uses Vivado's "bootgen" to
convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

        load_fpga_manager(const char *fileName, std::string &error) {
          if (!file_exists("/lib/firmware")) {
            mkdir("/lib/firmware", 0666);
          }
          int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
          gzFile bin_file;
          int bfd, zerror;
          uint8_t buf[8*1024];

          if ((bfd = ::open(fileName, O_RDONLY)) < 0)
            OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                       fileName, strerror(errno), errno);
          if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
            OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                       fileName, strerror(errno), errno);
          do {
            uint8_t *bit_buf = buf;
            int n = ::gzread(bin_file, bit_buf, sizeof(buf));
            if (n < 0)
              return true;
            if (n & 3) // config words are 32 bits, so the size must be a multiple of 4
              return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                 fileName);
            if (n == 0)
              break;
            if (write(out_file, buf, n) <= 0)
              return OU::eformat(error,
                                 "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                 strerror(errno), errno, n);
          } while (1);
          close(out_file);
          std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
          std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
          fpga_flags << 0 << std::endl;
          fpga_firmware << "opencpi_temp.bin" << std::endl;

          remove("/lib/firmware/opencpi_temp.bin");
          return isProgrammed(error) ? init(error) : true;
        }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

        isProgrammed(...) {
          ...
          const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
          ...
          return val == "operating";
        }
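
If a more robust check is wanted, one hypothetical hardening (a sketch of
mine, not code from the branch; the retry count and delay are invented) is
to poll the state file briefly instead of reading it once:

    #include <chrono>
    #include <fstream>
    #include <string>
    #include <thread>

    // Poll /sys/class/fpga_manager/fpga0/state until it reads "operating"
    // or the (made-up) retry budget is exhausted.
    static bool waitForOperating(int attempts = 20) {
      for (int i = 0; i < attempts; ++i) {
        std::ifstream state("/sys/class/fpga_manager/fpga0/state");
        std::string val;
        if (std::getline(state, val) && val == "operating")
          return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
      }
      return false;
    }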

vivado.mk's *.bin make rule uses bootgen to
convert bit to bin. This is necessary in Vivado 2018.2, but in
later versions you may be able to directly generate the correct
*.bin file via an option to write_bitstream:

        $(call BinName,$1,$3,$6): $(call BitName,$1,$3)
          $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
          $(AT)echo all: > $$(call BifName,$1,$3,$6); \
            echo "{" >> $$(call BifName,$1,$3,$6); \
            echo " [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
            echo "}" >> $$(call BifName,$1,$3,$6);
          $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC
MR
Munro, Robert M.
Thu, Aug 29, 2019 1:08 PM

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is an XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double-check the specifications regarding address widths to verify that it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.com>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>
Cc: James Kulp <jek@parera.com>, discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something successfully, but the control plane is not hooked up quite right.

As an early part of the running process, OpenCPI reads a register across the control plane that contains the ASCII string "OpenCPI(NULL)", and in your case you are reading "CPI(NULL)Open"; this is given by the data in the error message (sb 0x435049004f70656e).  This is the magic value the message is referring to: it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem, and we decided it was because the bus on the UltraScale was set up to be 32 bits and needed to be 64 bits for the HDL that we implemented to work correctly.  Remind me, which platform are you using: a Zynq UltraScale or a 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process, I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw assembly is being used as a known-good reference for this purpose.  The new sections of vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  I then attempted to replicate the commands in vivado.mk manually, following the Xilinx documentation guidelines for generating a .bin from a .bit: https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager

The steps were:

  • generate a .bif file similar to the documentation's Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to vivado.mk's: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run fsk_filerw from the ocpirun command, it did not appear to recognize the assembly when listing the resources found in the output, and it reported that a suitable candidate for an HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly; the same '...OCCP signature: magic: ...' output was observed, and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey <chinkey@geontech.com>; James Kulp <jek@parera.com>
Cc: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.com>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp <jek@parera.com>
Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs, as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit-to-bin conversion working with the existing OpenCPI code (this might cause you problems later when actually trying to load the FPGA).

The original problem you were running into is certainly because of an ifdef on line 226, which checks the old driver's done pin if it is on an arm and not an arm64:

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

To move forward for now you can change this line to an "#if 0" and rebuild the framework.  Note this will cause other Zynq-based platforms (zed, matchstiq, etc.) to no longer work with this patch, but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
There may be some similar patches you need to make to the same file, but the full diff that I needed to make to BusDriver.cxx against the 1.4 mainline can be seen here: https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
Hope this helps.

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform, tools, and kernel is not yet supported in either the mainline of OpenCPI or the third-party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp <jek@parera.com>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M. <Robert.Munro@jhuapl.edu>,
discuss@lists.opencpi.org
*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So it's not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with a baseline zed board, but with current
tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution
of this issue?

As it stands I do not believe that the OCPI runtime software will be
able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org> On Behalf Of
Munro, Robert M.

Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp <jek@parera.com>;
discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because
the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being
compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as
when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling
OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this
codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp <jek@parera.com>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org

Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that must be done before building the framework to utilize this functionality?  I have a platform built that is producing an output during environment load: 'When searching for PL device '0': Can't process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: file could not be open for reading'.  This leads me to believe that it is running the xdevcfg code still present in HdlBusDriver.cxx.

Use of the release_1.4_zynq_ultra branch and presence of the /sys/class/fpga_manager loading code in HdlBusDriver.cxx has been verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

In response to Point 1 here: we attempted using the code that was converting from bit to bin on the fly.  This did not work on these newer platforms using fpga_manager, so we decided to use the vendor-provided tools rather than reverse engineer what was wrong with the existing code.

If changes need to be made to create more commonality, and given that all zynq and zynqMP platforms need a .bin file format, wouldn't it make more sense to just use .bin files rather than converting them on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple file formats all the time.  .bit files are necessary for JTAG loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata, compressed) regardless of whether we were hardware loading or jtag loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time, since both formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a single format of Xilinx bitstream files, including between ISE and Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way and use .bin files uniformly and only convert to .bit format for JTAG loading.

But since the core of the "conversion", after a header, is just a 32-bit endian swap, it doesn't matter much either way.
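
For illustration, that on-the-fly conversion amounts to something like the following (a minimal sketch, not OpenCPI's actual code; it assumes the .bit header has already been parsed and skipped, and swapWords is our name for it):

         // Byte-reverse each 32-bit word of the bitstream payload in place.
         #include <cstdint>
         #include <cstddef>
         #include <utility>
         static void swapWords(uint8_t *buf, size_t nBytes) {
           for (size_t i = 0; i + 4 <= nBytes; i += 4) {
             std::swap(buf[i], buf[i + 3]);     // byte 0 <-> byte 3
             std::swap(buf[i + 1], buf[i + 2]); // byte 1 <-> byte 2
           }
         }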

If it ends up being a truly nasty reverse engineering exercise now, I would reconsider.


From: discuss <discuss-bounces@lists.opencpi.org> on behalf of James Kulp <jek@parera.com>
Sent: Friday, February 1, 2019 3:27 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two things I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference between old and new bitstream loading (and building) can be minimized, and the loading process made faster while requiring no extra file system space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in OpenCPI's bitstream loading for ZynqMP/UltraScale+ using "fpga_manager".  In general, we followed the instructions at
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream
I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at https://github.com/Geontech/opencpi.git in the release_1.4_zynq_ultra branch.

Firstly, all fpga_manager code is located in runtime/hdl/src/HdlBusDriver.cxx.  There were also changes in runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the correct *.bin format.  To see the changes made to these files for ZynqMP, you can diff them between release_1.4 and release_1.4_zynq_ultra:

$ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
$ cd opencpi
$ git fetch origin release_1.4:release_1.4
$ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the *.bin bitstream file, and writes its contents to /lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and then the filename "opencpi_temp.bin" to /sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed, and the state of the fpga_manager (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin format in order to write them to the PL.  So, some changes were made to vivado.mk to add a make rule for the *.bin file.  This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         load_fpga_manager(const char *fileName, std::string &error) {
           if (!file_exists("/lib/firmware")) {
             mkdir("/lib/firmware", 0666);
           }
           int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                        fileName, strerror(errno), errno);
           if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
             OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                        fileName, strerror(errno), errno);
           do {
             uint8_t *bit_buf = buf;
             int n = ::gzread(bin_file, bit_buf, sizeof(buf));
             if (n < 0)
               return true;
             if (n & 3)
               return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                  fileName);
             if (n == 0)
               break;
             if (write(out_file, buf, n) <= 0)
               return OU::eformat(error,
                                  "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                  strerror(errno), errno, n);
           } while (1);
           close(out_file);
           std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
           std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
           fpga_flags << 0 << std::endl;
           fpga_firmware << "opencpi_temp.bin" << std::endl;

           remove("/lib/firmware/opencpi_temp.bin");
           return isProgrammed(error) ? init(error) : true;
         }

The isProgrammed() function just checks whether or not the fpga_manager state is 'operating', although we are not entirely confident this is a robust check:

         isProgrammed(...) {
           ...
           const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
           ...
           return val == "operating";
         }

vivado.mk's *.bin make-rule uses bootgen to convert .bit to .bin.  This is necessary in Vivado 2018.2, but in later versions you may be able to directly generate the correct *.bin file via an option to write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
        $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
          echo "{" >> $$(call BifName,$1,$3,$6); \
          echo " [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
          echo "}" >> $$(call BifName,$1,$3,$6);
        $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)
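
(If it helps: later Vivado releases accept 'write_bitstream -bin_file', which may remove the need for the separate bootgen step; whether the .bin it emits works with fpga_manager here is an assumption we have not verified.)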

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL: <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190805/d9b4f229/attachment.txt>

MR
Munro, Robert M.
Thu, Aug 29, 2019 3:42 PM

Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts, with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories', whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory'.

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.com>
Cc: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is an XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double-check the specifications regarding address widths to verify that it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.com>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>
Cc: James Kulp <jek@parera.com>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something successfully, but the control plane is not hooked up quite right.

As an early part of the running process, opencpi reads a register across the control plane that contains the ASCII "OpenCPI(NULL)", and in your case you are reading "CPI(NULL)Open".  This is given by the data in the error message - (sb 0x435049004f70656e).  This is the magic that the message is referring to; it requires "OpenCPI" to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was set up to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  Remind me what platform you are using: is it a zynq ultrascale or 7000 series?
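
To make the byte-order point concrete: the "should be" value 0x435049004f70656e spells "CPI\0" then "Open" in its hex bytes, and exchanging its two 32-bit halves gives 0x4f70656e43504900, which spells "Open" then "CPI\0".  That is exactly the word swap described above.  A minimal sketch (not OpenCPI code):

         #include <cstdint>
         #include <cstdio>
         int main() {
           const uint64_t expected = 0x435049004f70656eULL; // (sb ...) value from the log
           const uint32_t hi = uint32_t(expected >> 32);    // 0x43504900 -> "CPI\0"
           const uint32_t lo = uint32_t(expected);          // 0x4f70656e -> "Open"
           const uint64_t swapped = (uint64_t(lo) << 32) | hi; // halves exchanged
           std::printf("expected 0x%016llx, word-swapped 0x%016llx\n",
                       (unsigned long long)expected, (unsigned long long)swapped);
           return 0; // prints 0x435049004f70656e and 0x4f70656e43504900
         }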

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process, I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw assembly is being used as a known good reference for this purpose.  The new sections of vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt was made to replicate the commands in vivado.mk manually while following the Xilinx documentation's guidelines for generating a .bin from a .bit: https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager

The steps were:

  • generate a .bif file similar to the documentation's Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to vivado.mk's: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.
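
For anyone reproducing this, a .bif following the documentation's Full_Bitstream.bif shape would look like the following (the filenames here are placeholders, not the actual N310 artifact names), followed by the bootgen invocation from the steps above:

         all:
         {
           [destination_device = pl] my_assembly.bit
         }

         $ bootgen -image my_assembly.bif -arch zynq -o my_assembly.bin -w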

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained 'HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e)', but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command, it did not appear to recognize the assembly when listing resources found in the output, and it reported that a suitable candidate for an HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly; the same '...OCCP signature: magic: ...' output was observed, and finally 'Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached so that the required code modifications can be checked.  The log output from the ocpihdl load command is attached in case it can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey <chinkey@geontech.com>; James Kulp <jek@parera.com>
Cc: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but I was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff against the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.com>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp <jek@parera.com>
Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs, as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file, because we could not get the bit-to-bin conversion working with the existing OpenCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an ifdef on line 226, where it will check the old driver's done pin if it is on an arm and not an arm64:

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

To move forward for now, you can change this line to an "#if 0" and rebuild the framework.  Note this will cause other zynq-based platforms (zed, matchstiq, etc.) to no longer work with this patch, but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
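
Concretely, the suggested stopgap edit looks like this (a sketch against the release_1.4_zynq_ultra HdlBusDriver.cxx; only the changed line is shown):

         -#if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)
         +#if 0 // force the fpga_manager path; breaks xdevcfg-based zynq platforms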
There may be some similar patches you need to make to the same file, but the full diff that I needed to make to BusDriver.cxx against the 1.4 mainline can be seen here, in case you didn't already know: https://github.com/opencpi/opencpi/pull/17/files
Hope this helps.

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI or the third-party branch you are trying to use.

It is probably not a big problem, but someone who has the time and skills to dig as deep as necessary has to debug it.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<ma
ilto:jek@parera.com>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<ma
ilto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with a baseline zed board, but with current
tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open
cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li
sts.open> cpi.orghttp://cpi.org>> On Behalf Of

Munro, Robert M.

Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:
jek@parera.com>>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:d
iscuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:
jek@parera.com>>>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robe
rt.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>;

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope
ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l
ists.ope> ncpi.orghttp://ncpi.org>> On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that on

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't it
make more sense to just use .bin files rather then converting them on
the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG loading,
and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op
encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@
lists.op> encpi.orghttp://encpi.org>> on behalf of James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailt
o:jek@parera.com>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
runtime/hdl-support/xilinx/vivado.mk to generate a bitstream in the
correct *.bin format. To see the changes made to these files for
ZynqMP, you can diff them between release_1.4 and
release_1.4_zynq_ultra:

    $ git clone https://github.com/Geontech/opencpi.git --branch release_1.4_zynq_ultra
    $ cd opencpi
    $ git fetch origin release_1.4:release_1.4
    $ git diff release_1.4 -- runtime/hdl/src/HdlBusDriver.cxx runtime/hdl-support/xilinx/vivado.mk

The directly relevant functions are load_fpga_manager() and
isProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file, and writes its contents to
/lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
filename "opencpi_temp.bin" to
/sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed, and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().
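
[Editor's note: for reference, the same sequence can be exercised by
hand from a shell on the target, assuming the stock fpga_manager sysfs
interface (the bitstream file name here is hypothetical):]

    $ mkdir -p /lib/firmware
    $ cp my_assembly.bin /lib/firmware/opencpi_temp.bin
    $ echo 0 > /sys/class/fpga_manager/fpga0/flags
    $ echo opencpi_temp.bin > /sys/class/fpga_manager/fpga0/firmware
    $ cat /sys/class/fpga_manager/fpga0/state   # expect "operating"
    $ rm /lib/firmware/opencpi_temp.bin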

fpga_manager requires that bitstreams be in *.bin format in order to
write them to the PL. So, some changes were made to vivado.mk to add
a make rule for the *.bin file. This make rule (BinName) uses
Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         load_fpga_manager(const char *fileName, std::string &error) {
           if (!file_exists("/lib/firmware"))
             mkdir("/lib/firmware", 0666);
           int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
           gzFile bin_file;
           int bfd;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
                        fileName, strerror(errno), errno);
           if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
             OU::format(error, "Can't open compressed bin file '%s': %s(%u)",
                        fileName, strerror(errno), errno);
           do {
             uint8_t *bit_buf = buf;
             int n = ::gzread(bin_file, bit_buf, sizeof(buf));
             if (n < 0)
               return true;
             if (n & 3)
               return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
                                  fileName);
             if (n == 0)
               break;
             if (write(out_file, buf, n) <= 0)
               return OU::eformat(error,
                                  "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
                                  strerror(errno), errno, n);
           } while (1);
           close(out_file);
           std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
           std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
           fpga_flags << 0 << std::endl;
           fpga_firmware << "opencpi_temp.bin" << std::endl;

           remove("/lib/firmware/opencpi_temp.bin");
           return isProgrammed(error) ? init(error) : true;
         }

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating', although we are not entirely
confident this is a robust check:

         isProgrammed(...) {
           ...
           const char *e = OU::file2String(val, "/sys/class/fpga_manager/fpga0/state", '|');
           ...
           return val == "operating";
         }

vivado.mk's bin make rule uses bootgen to convert .bit to .bin. This
is necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to
write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
        $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
             echo "{" >> $$(call BifName,$1,$3,$6); \
             echo " [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
             echo "}" >> $$(call BifName,$1,$3,$6);
        $(AT)$(call DoXilinx,bootgen,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)
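
[Editor's note: expanded by hand, the rule writes a minimal .bif
wrapper around the .bit file and then runs bootgen on it. With a
hypothetical assembly name, and the zynq architecture Rob uses later
in this thread for a 7000-series part, the generated .bif and the
equivalent manual command come out roughly as:]

    my_assembly.bif:
    all:
    {
      [destination_device = pl] my_assembly.bit
    }

    $ bootgen -image my_assembly.bif -arch zynq -o my_assembly.bin -w

[In later Vivado versions, write_bitstream's -bin_file option may
produce the *.bin directly, per the note above.]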

Hope this is useful!

Regards,
David Banks
dbanks@geontech.com
Geon Technologies, LLC

Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq
parts, with the external memory interface having '16-bit or 32-bit
interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories', whereas the
ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or
LPDDR3 memories, and 32-bit interface to LPDDR4 memory'.

Is it possible that other changes are needed from the 1.4_zynq_ultra
branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.com>
Cc: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals. The FPGA part on
this platform is an XC7Z100. I purposefully did not pull in changes
that I believed were related to addressing. I can double-check the
specifications regarding address widths to verify it should be
unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.com>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edu>
Cc: James Kulp <jek@parera.com>, discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something successfully, but the control plane
is not hooked up quite right. As an early part of the running process,
OpenCPI reads a register across the control plane that contains ASCII
"OpenCPI(NULL)", and in your case you are reading "CPI(NULL)Open"; this
is given by the data in the error message (sb 0x435049004f70656e).
This is the magic that the message is referring to; it requires
OpenCPI to be at address 0 of the control plane address space to
proceed.

I think we ran into this problem, and we decided it was because the
bus on the UltraScale was set up to be 32 bits and needed to be 64
bits for the HDL that we implemented to work correctly. Remind me what
platform you are using: is it a Zynq UltraScale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu> wrote:

Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version
of the file and going through the build process, I am encountering a
new error when attempting to load HDL on the N310. The fsk_filerw
assembly is being used as a known good reference for this purpose. The
new sections of vivado.mk were merged in to attempt building the HDL
using the framework, but it did not generate the .bin file when using
ocpidev build with the --hdl-assembly argument. An attempt was made to
replicate the commands in vivado.mk manually, following the Xilinx
documentation guidelines for generating a .bin from a .bit:
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager
The steps were:
- generate a .bif file similar to the documentation's Full_Bitstream.bif, using the correct filename
- run a bootgen command similar to vivado.mk: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired, which was copied to the
artifacts directory in the OpenCPI folder structure. The built OpenCPI
environment loaded successfully, recognizes the HDL container as being
available, and the hello application was able to run successfully. The
command output contained 'HDL Device 'PL:0' responds, but the OCCP
signature: magic: 0x18000afe187003 (sb 0x435049004f70656e)', but the
impact of this was not understood until attempting to load HDL.

When attempting to run fsk_filerw from the ocpirun command, it did not
appear to recognize the assembly when listing resources found in the
output, and reported that a suitable candidate for an HDL-implemented
component was not available. The command 'ocpihdl load' was then
attempted to force the loading of the HDL assembly; the same '...OCCP
signature: magic: ...' output was observed, and finally 'Exiting for
problem: error loading device pl:0: Magic numbers in admin space do
not match'.

Is there some other step that must be taken during the generation of
the .bin file? Is there any other software modification that is
required of the OpenCPI runtime code?

The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to
make sure that the required code modifications are performed
correctly. The log output from the ocpihdl load command is attached in
case that can provide further insight.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey <chinkey@geontech.com>; James Kulp <jek@parera.com>
Cc: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight. My thinking was that
the #define could be overridden to provide the desired functionality
for the platform, but I was not comfortable making the changes without
proper familiarity. I will move forward by looking at the diff to the
1.4 mainline, make the appropriate modifications, and test with the
modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.com>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp <jek@parera.com>
Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption
that if we are using fpga_manager we are also using ARCH=arm64. This
met our needs, as we only cared about the fpga manager on UltraScale
devices at the time. We also made the assumption that the tools
created a tarred bin file instead of a bit file, because we could not
get the bit-to-bin conversion working with the existing OpenCPI code
(this might cause you problems later when actually trying to load the
FPGA).
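
[Editor's note: Chris's point about the compile-time arm/arm64
assumption, and the "#if 0" workaround he suggests just below, amount
to replacing a compile-time choice with a runtime probe of the loading
interfaces. A minimal sketch of that idea follows; loadBitstream is a
hypothetical wrapper around the branch's load_xdevconfig() and
load_fpga_manager(), not code from the branch itself:]

    // Hypothetical sketch: choose the PL loading path at runtime instead
    // of via #if defined(OCPI_ARCH_arm), so one binary serves both kernels.
    #include <string>
    #include <sys/stat.h>

    // Declarations for the branch's loaders (defined in HdlBusDriver.cxx).
    bool load_xdevconfig(const char *fileName, std::string &error);
    bool load_fpga_manager(const char *fileName, std::string &error);

    static bool fileExists(const char *path) {
      struct stat st;
      return ::stat(path, &st) == 0;
    }

    bool loadBitstream(const char *fileName, std::string &error) {
      if (fileExists("/dev/xdevcfg"))                    // older xdevcfg kernels
        return load_xdevconfig(fileName, error);
      if (fileExists("/sys/class/fpga_manager/fpga0/"))  // fpga_manager kernels
        return load_fpga_manager(fileName, error);
      error = "no supported PL loading mechanism found";
      return true;                                       // true indicates error here
    }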
The original problem you were running into is certainly because of an
ifdef on line 226, where it will check the old driver done pin if it
is on an arm and not an arm64:

#if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

To move forward for now, you can change this line to an "#if 0" and
rebuild the framework. Note this will cause other Zynq-based platforms
(zed, matchstiq, etc.) to no longer work with this patch, but maybe
you don't care for now while Jim tries to get this into the mainline
in a more generic way. There may be some similar patches you need to
make to the same file, but the full diff that I needed to make to
BusDriver.cxx against the 1.4 mainline can be seen here, in case you
didn't already know:
https://github.com/opencpi/opencpi/pull/17/files

Hope this helps.

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com> wrote:

Ok. The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform, tools,
and kernel is not yet supported in either the mainline of OpenCPI or
the third-party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has
the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later Linux kernels will definitely be
supported in a patch from the mainline "soon", probably in a month,
since it is being actively worked. That does not guarantee
functionality on your exact kernel (and thus version of the fpga
manager), but it does guarantee it working on the latest
Xilinx-supported kernel.

Jim
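
[Editor's note: the word swap Chris describes in his Aug 29 message
above can be checked numerically. This small standalone program (not
part of OpenCPI) decodes the expected magic from the log line and
shows that exchanging its 32-bit halves turns "CPI\0Open" into
"OpenCPI\0", which is why a 32- vs. 64-bit control-plane bus mismatch
shows up as a swapped magic value:]

    #include <cstdint>
    #include <cstdio>

    // Print a 64-bit value as eight ASCII bytes, most significant first,
    // with '.' standing in for the NUL byte.
    static void printAscii(uint64_t v) {
      for (int shift = 56; shift >= 0; shift -= 8) {
        char c = char((v >> shift) & 0xff);
        putchar(c ? c : '.');
      }
      putchar('\n');
    }

    int main() {
      uint64_t magic = 0x435049004f70656eULL;    // "sb" value from the log
      printAscii(magic);                         // prints: CPI.Open
      printAscii((magic << 32) | (magic >> 32)); // prints: OpenCPI.
      return 0;
    }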