Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

MR
Munro, Robert M.
Thu, Sep 5, 2019 9:47 PM

Chris,

Would this be the GP0 AXI slave or master registers that are being accessed in this scenario?  I don’t believe these are configured in the FSBL, but in the FPGA image.  This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. Robert.Munro@jhuapl.edu
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA.  i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case.  I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same.
the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu> wrote:
Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is a XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double check the specifications regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
Cc: James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>, discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org <discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is not hooked up quite right.

as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open"  this is given by the data in the error message - (sb 0x435049004f70656e).  this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  remind me what platform you are using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw is being used as a known good reference for this purpose.  The new sections of vivado.mkhttp://vivado.mkhttp://vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt to replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mk manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken.

The steps were:

  • generate a .bif file similar to the documentation's Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>; James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>
Cc: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>; discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an ifdef on line 226 where it will check the old driver done pin if it is on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<ma
ilto:jek@parera.commailto:ilto%3Ajek@parera.com>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilto:discuss@lists.opencpi.orgmailto:ilto%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>>
*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with a baseline zed board, but with current
tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Munro, Robert M.

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opemailto:discuss-bounces@lists.ope
ncpi.orghttp://ncpi.org><mailto:discuss-bounces@lists.opemailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lmailto:discuss-bounces@l
ists.ope> ncpi.orghttp://ncpi.orghttp://ncpi.org>> On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that on

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't it
make more sense to just use .bin files rather then converting them on
the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG loading,
and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailto:discuss-bounces@mailto:discuss-bounces@
lists.op> encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailt
o:jek@parera.commailto:o%3Ajek@parera.com>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://vi
vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://viv
ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk to add a make rule
for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName, std::string

&error) {

           if (!file_exists("/lib/firmware")){

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

           std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val,

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's *bin make-rule
uses bootgen to convert bit to bin. This is necessary in Vivado
2018.2, but in later versions you may be able to directly
generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
$(AT)echo all: > $$(call BifName,$1,$3,$6);
echo "{" >> $$(call BifName,$1,$3,$6);
echo " [destination_device = pl] $(notdir $(call
BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com><mailto:dbanks@geomailto:dbanks@geo
ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g

-------------- next part -------------- An embedded and
charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL:

<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190805/d9b4f229/attachment.txt>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190813/4516c872/attachment.html


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190829/b99ae3e0/attachment.html


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

Chris, Would this be the GP0 AXI slave or master registers that are being accessed in this scenario? I don’t believe these are configured in the FSBL, but in the FPGA image. This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image. Is there a listing of the OCPI required FPGA facilities? Thanks, Rob From: Chris Hinkey <chinkey@geontech.com> Sent: Thursday, August 29, 2019 11:58 AM To: Munro, Robert M. <Robert.Munro@jhuapl.edu> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA. i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case. I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same. the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document). On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: Chris, Looking at the Zynq and ZynqMP datasheets: https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' . Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in? Thanks, Rob -----Original Message----- From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M. Sent: Thursday, August 29, 2019 9:09 AM To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager Chris, Thanks for the information regarding the internals. The FPGA part on this platform is a XC7Z100. I purposefully did not pull in changes that I believed were related to addressing. I can double check the specifications regarding address widths to verify it should be unchanged. Please let me know if there are any other changes or steps missed. Thanks, Rob From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>> Date: Thursday, Aug 29, 2019, 8:05 AM To: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> Cc: James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>, discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager It looks like you loaded something sucessfully but the control plan is not hooked up quite right. as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open" this is given by the data in the error message - (sb 0x435049004f70656e). this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed. I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly. remind me what platform you are using is it a zynq ultrascale or 7000 series? On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: Chris, After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310. The fsk_filerw is being used as a known good reference for this purpose. The new sections of vivado.mk<http://vivado.mk><http://vivado.mk> were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument. An attempt to replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk> manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken. The steps were: - generate a .bif file similar to the documentation's Full_Bitstream.bif using the correct filename - run a bootgen command similar to vivado.mk<http://vivado.mk><http://vivado.mk>: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure. The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully. The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL. When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available. The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'. Is there some other step that must be taken during the generation of the .bin file? Is there any other software modification that is required of the ocpi runtime code? The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly. The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps. Thanks, Rob -----Original Message----- From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M. Sent: Tuesday, August 13, 2019 10:56 AM To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>; James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager Chris, Thank you for your helpful response and insight. My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity. I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310. Thanks again for your help. Thanks, Rob From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>> Sent: Tuesday, August 13, 2019 10:02 AM To: James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>> Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>; discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64. This met our needs as we only cared about the fpga manager on ultrascale devices at the time. We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga). The original problem you were running into is certainly because of an ifdef on line 226 where it will check the old driver done pin if it is on an arm and not an arm64 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way. there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know. hope this helps On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>> wrote: On 8/12/19 9:37 AM, Munro, Robert M. wrote: > Jim, > > This is the only branch with the modifications required for use with > the FPGA Manager driver. This is required for use with the Linux > kernel provided for the N310. The Xilinx toolset being used is 2018_2 > and the kernel being used is generated via the N310 build container > using v3.14.0.0 . Ok. The default Xilinx kernel associated with 2018_2 is 4.14. I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use. It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary. The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked. That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel. Jim > > Thanks, > Robert Munro > > *From: *James Kulp > <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> > k@parera.com<mailto:k@parera.com>>> > <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><ma > ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com>>>>> > *Date: *Monday, Aug 12, 2019, 9:00 AM > *To: *Munro, Robert M. > <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> > .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> > <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto > :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>, > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> > cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> > scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma > ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>>> > *Subject: *Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > I was a bit confused about your use of the "ultrascale" branch. > So you are using a branch with two types of patches in it: one for > later linux kernels with the fpga manager, and the other for the > ultrascale chip itself. > The N310 is not ultrascale, so we need to separate the two issues, > which were not separated before. > So its not really a surprise that the branch you are using is not yet > happy with the system you are trying to run it on. > > I am working on a branch that simply updates the xilinx tools (2019-1) > and the xilinx linux kernel (4.19) without dealing with ultrascale, > which is intended to work with a baseline zed board, but with current > tools and kernels. > > The N310 uses a 7000-series part (7100) which should be compatible > with this. > > Which kernel and which xilinx tools are you using? > > Jim > > > > On 8/8/19 1:36 PM, Munro, Robert M. wrote: > > Jim or others, > > > > Is there any further input or feedback on the source or resolution > of this issue? > > > > As it stands I do not believe that the OCPI runtime software will be > able to successfully load HDL assemblies on the N310 platform. My > familiarity with this codebase is limited and we would appreciate any > guidance available toward investigating or resolving this issue. > > > > Thank you, > > Robert Munro > > > > -----Original Message----- > > From: discuss > > <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open> > > cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-bounces@li> > > sts.open> cpi.org<http://cpi.org><http://cpi.org>>> On Behalf Of > Munro, Robert M. > > Sent: Monday, August 5, 2019 10:49 AM > > To: James Kulp > > <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: > > jek@parera.com<mailto:jek@parera.com>>>>; > > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> > > iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > > Subject: Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > > > Jim, > > > > The given block of code is not the root cause of the issue because > the file system does not have a /dev/xdevcfg device. > > > > I suspect there is some functional code similar to this being > compiled incorrectly: > > #if (OCPI_ARCH_arm) > > // do xdevcfg loading stuff > > #else > > // do fpga_manager loading stuff > > #endif > > > > This error is being output at environment initialization as well as > when running hello.xml. I've attached a copy of the output from the > command 'ocpirun -v -l 20 hello.xml' for further investigation. > > > > From looking at the output I believe the system is calling > OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is > calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line > 484 which in turn is calling Driver::open in the same file at line 499 > which then outputs the 'When searching for PL device ...' error at > line 509. This then returns to the HdlDriver.cxx search() function and > outputs the '... got Zynq search error ...' error at line 141. > > > > This is an ARM device and I am not familiar enough with this > codebase to adjust precompiler definitions with confidence that some > other code section will become affected. > > > > Thanks, > > Robert Munro > > > > -----Original Message----- > > From: James Kulp > > <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: > > jek@parera.com<mailto:jek@parera.com>>>> > > Sent: Friday, August 2, 2019 4:27 PM > > To: Munro, Robert M. > > <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mailto:Robe> > > rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> > cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > > Subject: Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > > > That code is not integrated into the main line of OpenCPI yet, but > in that code there is: > > if (file_exists("/dev/xdevcfg")){ > > ret_val= load_xdevconfig(fileName, error); > > } > > else if (file_exists("/sys/class/fpga_manager/fpga0/")){ > > ret_val= load_fpga_manager(fileName, error); > > } > > So it looks like the presence of /dev/xdevcfg is what causes it to > look for /sys/class/xdevcfg/xdevcfg/device/prog_done > > > > On 8/2/19 4:15 PM, Munro, Robert M. wrote: > >> Are there any required flag or environment variable settings that > must be done before building the framework to utilize this > functionality? I have a platform built that is producing an output > during environment load: 'When searching for PL device '0': Can't > process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: > file could not be open for reading' . This leads me to believe that > it is running the xdevcfg code still present in HdlBusDriver.cxx . > >> > >> Use of the release_1.4_zynq_ultra branch and presence of the > /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been > verified for the environment used to generate the executables. > >> > >> Thanks, > >> Robert Munro > >> > >> -----Original Message----- > >> From: discuss > >> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope> > >> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-bounces@l> > >> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org>>> On Behalf Of James Kulp > >> Sent: Friday, February 1, 2019 4:18 PM > >> To: > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> On 2/1/19 3:37 PM, Chris Hinkey wrote: > >>> in response to Point 1 here. We attempted using the code that on > the fly was attempting to convert from bit to bin. This did not work > on these newer platforms using fpga_manager so we decided to use the > vendor provided tools rather then to reverse engineer what was wrong > with the existing code. > >>> > >>> If changes need to be made to create more commonality and given > that all zynq and zynqMP platforms need a .bin file format wouldn't it > make more sense to just use .bin files rather then converting them on > the fly every time? > >> A sensible question for sure. > >> > >> When this was done originally, it was to avoid generating multiple > file formats all the time. .bit files are necessary for JTAG loading, > and .bin files are necessary for zynq hardware loading. > >> > >> Even on Zynq, some debugging using jtag is done, and having that be > mostly transparent (using the same bitstream files) is convenient. > >> > >> So we preferred having a single bitstream file (with metadata, > >> compressed) regardless of whether we were hardware loading or jtag > loading, zynq or virtex6 or spartan3, ISE or Vivado. > >> > >> In fact, there was no reverse engineering the last time since both > formats, at the level we were operating at, were documented by Xilinx. > >> > >> It seemed to be worth the 30 SLOC to convert on the fly to keep a > single format of Xilinx bitstream files, including between ISE and > Vivado and all Xilinx FPGA types. > >> > >> Of course it might make sense to switch things around the other way > and use .bin files uniformly and only convert to .bit format for JTAG > loading. > >> > >> But since the core of the "conversion:" after a header, is just a > 32 bit endian swap, it doesn't matter much either way. > >> > >> If it ends up being a truly nasty reverse engineering exercise now, > I would reconsider. > >> > >>> ________________________________ > >>> From: discuss > >>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op> > >>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss-bounces@> > >>> lists.op> encpi.org<http://encpi.org><http://encpi.org>>> on behalf of James Kulp > >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailt > >>> o:jek@parera.com<mailto:o%3Ajek@parera.com>>>> > >>> Sent: Friday, February 1, 2019 3:27 PM > >>> To: > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto > >>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>> ZynqMP/UltraScale+ fpga_manager > >>> > >>> David, > >>> > >>> This is great work. Thanks. > >>> > >>> Since I believe the fpga manager stuff is really an attribute of > >>> later linux kernels, I don't think it is really a ZynqMP thing, > >>> but just a later linux kernel thing. > >>> I am currently bringing up the quite ancient zedboard using the > >>> latest Vivado and Xilinx linux and will try to use this same code. > >>> There are two thinigs I am looking into, now that you have done > >>> the hard work of getting to a working solution: > >>> > >>> 1. The bit vs bin thing existed with the old bitstream loader, but > >>> I think we were converting on the fly, so I will try that here. > >>> (To avoid the bin format altogether). > >>> > >>> 2. The fpga manager has entry points from kernel mode that allow > >>> you to inject the bitstream without making a copy in /lib/firmware. > >>> Since we already have a kernel driver, I will try to use that to > >>> avoid the whole /lib/firmware thing. > >>> > >>> So if those two things can work (no guarantees), the difference > >>> between old and new bitstream loading (and building) can be > >>> minimized and the loading process faster and requiring no extra > >>> file system > space. > >>> This will make merging easier too. > >>> > >>> We'll see. Thanks again to you and Geon for this important > contribution. > >>> > >>> Jim > >>> > >>> > >>> On 2/1/19 3:12 PM, David Banks wrote: > >>>> OpenCPI users interested in ZynqMP fpga_manager, > >>>> > >>>> I know some users are interested in the OpenCPI's bitstream > >>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In > >>>> general, we followed the instructions at > >>>> > https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream. > >>>> I will give a short explanation here: > >>>> > >>>> Reminder: All ZynqMP/UltraScale+ changes are located at > >>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra > branch. > >>>> > >>>> Firstly, all *fpga_manager *code is located in > >>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in > >>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://vi > >>>> vado.mk<http://vado.mk>> > >>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin > >>>> format. To see the changes made to these files for ZynqMP, you > >>>> can diff them between > >>>> *release_1.4* and *release_1.4_zynq_ultra*: > >>>> $ git clone https://github.com/Geontech/opencpi.git --branch > >>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin > >>>> release_1.4:release_1.4; $ git diff release_1.4 -- > >>>> runtime/hdl/src/HdlBusDriver.cxx > >>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://viv > >>>> ado.mk<http://ado.mk>>; > >>>> > >>>> > >>>> The directly relevant functions are *load_fpga_manager()* and i > >>>> *sProgrammed()*. > >>>> load_fpga_manager() ensures that /lib/firmware exists, reads the > >>>> *.bin bitstream file and writes its contents to > /lib/firmware/opencpi_temp.bin. > >>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the > >>>> the filename "opencpi_temp.bin" to > /sys/class/fpga_manager/fpga0/firmware. > >>>> Finally, the temporary opencpi_temp.bin bitstream is removed and > >>>> the state of the fpga_manager > >>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed(). > >>>> > >>>> fpga_manager requires that bitstreams be in *.bin in order to > >>>> write them to the PL. So, some changes were made to > >>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> to add a make rule > >>>> for the *.bin file. This make rule (*BinName*) uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. > >>>> > >>>> Most of the relevant code is pasted or summarized below: > >>>> > >>>> *load_fpga_manager*(const char *fileName, std::string > &error) { > >>>> if (!file_exists("/lib/firmware")){ > >>>> mkdir("/lib/firmware",0666); > >>>> } > >>>> int out_file = > creat("/lib/firmware/opencpi_temp.bin", 0666); > >>>> gzFile bin_file; > >>>> int bfd, zerror; > >>>> uint8_t buf[8*1024]; > >>>> > >>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) > >>>> OU::format(error, "Can't open bitstream file '%s' > for reading: > >>>> %s(%d)", > >>>> fileName, strerror(errno), errno); > >>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) > >>>> OU::format(error, "Can't open compressed bin file > '%s' for : > >>>> %s(%u)", > >>>> fileName, strerror(errno), errno); > >>>> do { > >>>> uint8_t *bit_buf = buf; > >>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); > >>>> if (n < 0) > >>>> return true; > >>>> if (n & 3) > >>>> return OU::eformat(error, "Bitstream data in is '%s' > >>>> not a multiple of 3 bytes", > >>>> fileName); > >>>> if (n == 0) > >>>> break; > >>>> if (write(out_file, buf, n) <= 0) > >>>> return OU::eformat(error, > >>>> "Error writing to > >>>> /lib/firmware/opencpi_temp.bin for bin > >>>> loading: %s(%u/%d)", > >>>> strerror(errno), errno, n); > >>>> } while (1); > >>>> close(out_file); > >>>> std::ofstream > fpga_flags("/sys/class/fpga_manager/fpga0/flags"); > >>>> std::ofstream > >>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); > >>>> fpga_flags << 0 << std::endl; > >>>> fpga_firmware << "opencpi_temp.bin" << std::endl; > >>>> > >>>> remove("/lib/firmware/opencpi_temp.bin"); > >>>> return isProgrammed(error) ? init(error) : true; > >>>> } > >>>> > >>>> The isProgrammed() function just checks whether or not the > >>>> fpga_manager state is 'operating' although we are not entirely > >>>> confident this is a robust check: > >>>> > >>>> *isProgrammed*(...) { > >>>> ... > >>>> const char *e = OU::file2String(val, > >>>> "/sys/class/fpga_manager/fpga0/state", '|'); > >>>> ... > >>>> return val == "operating"; > >>>> } > >>>> > >>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>'s *bin make-rule > >>>> uses bootgen to convert bit to bin. This is necessary in Vivado > >>>> 2018.2, but in later versions you may be able to directly > >>>> generate the correct *.bin file via an option to > write_bitstream: > >>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) > >>>> $(AT)echo -n For $2 on $5 using config $4: Generating > >>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". > >>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ > >>>> echo "{" >> $$(call BifName,$1,$3,$6); \ > >>>> echo " [destination_device = pl] $(notdir $(call > >>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ > >>>> echo "}" >> $$(call BifName,$1,$3,$6); > >>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir > >>>> $(call > >>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call > >>>> BinName,$1,$3,$6)) -w,bin) > >>>> > >>>> Hope this is useful! > >>>> > >>>> Regards, > >>>> David Banks > >>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:dbanks@geo> > >>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>>> > >>>> Geon Technologies, LLC > >>>> -------------- next part -------------- An HTML attachment was > >>>> scrubbed... > >>>> URL: > >>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att > >>>> ach m ents/20190201/4b49675d/attachment.html> > >>>> _______________________________________________ > >>>> discuss mailing list > >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt > >>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o > >>>> rg > >>> _______________________________________________ > >>> discuss mailing list > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto > >>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or > >>> g > >>> -------------- next part -------------- An HTML attachment was > >>> scrubbed... > >>> URL: > >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta > >>> chm e nts/20190201/64e4ea45/attachment.html> > >>> _______________________________________________ > >>> discuss mailing list > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto > >>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or > >>> g > >> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > -------------- next part -------------- An embedded and > > charset-unspecified text was scrubbed... > > Name: hello_n310_log_output.txt > > URL: > <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > nts/20190805/d9b4f229/attachment.txt> > > _______________________________________________ > > discuss mailing list > > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> > > iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > _______________________________________________ discuss mailing list discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190813/4516c872/attachment.html> _______________________________________________ discuss mailing list discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190829/b99ae3e0/attachment.html> _______________________________________________ discuss mailing list discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
JK
James Kulp
Thu, Sep 5, 2019 9:59 PM

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL
sides of Zynq are controlled by registers written by the processor and
not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default
values that are not necessarily the right values for how OpenCPI uses
the PL/FPGA.
The ocpizynq utility program does dump out some of these registers, and
you could modify it pretty easily if you want to know what some other
registers are set to.
All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:

Chris,

Would this be the GP0 AXI slave or master registers that are being accessed in this scenario?  I don’t believe these are configured in the FSBL, but in the FPGA image.  This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. Robert.Munro@jhuapl.edu
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA.  i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case.  I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same.
the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu> wrote:
Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is a XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double check the specifications regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
Cc: James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>, discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org <discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is not hooked up quite right.

as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open"  this is given by the data in the error message - (sb 0x435049004f70656e).  this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  remind me what platform you are using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw is being used as a known good reference for this purpose.  The new sections of vivado.mkhttp://vivado.mkhttp://vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt to replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mk manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken.

The steps were:

  • generate a .bif file similar to the documentation's Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>; James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>
Cc: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>; discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an ifdef on line 226 where it will check the old driver done pin if it is on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is 2018_2
and the kernel being used is generated via the N310 build container
using v3.14.0.0 .
Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<ma
ilto:jek@parera.commailto:ilto%3Ajek@parera.com>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilto:discuss@lists.opencpi.orgmailto:ilto%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>>
*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools (2019-1)
and the xilinx linux kernel (4.19) without dealing with ultrascale,
which is intended to work with a baseline zed board, but with current
tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution
of this issue?
As it stands I do not believe that the OCPI runtime software will be
able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.
Thank you,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.openmailto:discuss-bounces@lists.open
cpi.orghttp://cpi.org><mailto:discuss-bounces@lists.openmailto:discuss-bounces@lists.open<mailto:discuss-bounces@limailto:discuss-bounces@li
sts.open> cpi.orghttp://cpi.orghttp://cpi.org>> On Behalf Of
Munro, Robert M.
Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:
jek@parera.commailto:jek@parera.com>>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dmailto:d
iscuss@lists.opencpi.orgmailto:iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager
Jim,

The given block of code is not the root cause of the issue because
the file system does not have a /dev/xdevcfg device.
I suspect there is some functional code similar to this being
compiled incorrectly:
#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff
#endif

This error is being output at environment initialization as well as
when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.
From looking at the output I believe the system is calling
OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line 499
which then outputs the 'When searching for PL device ...' error at
line 509. This then returns to the HdlDriver.cxx search() function and
outputs the '... got Zynq search error ...' error at line 141.
This is an ARM device and I am not familiar enough with this
codebase to adjust precompiler definitions with confidence that some
other code section will become affected.
Thanks,
Robert Munro

-----Original Message-----
From: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:
jek@parera.commailto:jek@parera.com>>>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robemailto:Robe
rt.Munro@jhuapl.edumailto:rt.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager
That code is not integrated into the main line of OpenCPI yet, but
in that code there is:
if (file_exists("/dev/xdevcfg")){
ret_val= load_xdevconfig(fileName, error);
}
else if (file_exists("/sys/class/fpga_manager/fpga0/")){
ret_val= load_fpga_manager(fileName, error);
}
So it looks like the presence of /dev/xdevcfg is what causes it to
look for /sys/class/xdevcfg/xdevcfg/device/prog_done
On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that
must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .
Use of the release_1.4_zynq_ultra branch and presence of the
/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.
Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opemailto:discuss-bounces@lists.ope
ncpi.orghttp://ncpi.org><mailto:discuss-bounces@lists.opemailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lmailto:discuss-bounces@l
ists.ope> ncpi.orghttp://ncpi.orghttp://ncpi.org>> On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that on
the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.
If changes need to be made to create more commonality and given
that all zynq and zynqMP platforms need a .bin file format wouldn't it
make more sense to just use .bin files rather then converting them on
the fly every time?
A sensible question for sure.

When this was done originally, it was to avoid generating multiple
file formats all the time.  .bit files are necessary for JTAG loading,
and .bin files are necessary for zynq hardware loading.
Even on Zynq, some debugging using jtag is done, and having that be
mostly transparent (using the same bitstream files) is convenient.
So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag
loading, zynq or virtex6 or spartan3, ISE or Vivado.
In fact, there was no reverse engineering the last time since both
formats, at the level we were operating at, were documented by Xilinx.
It seemed to be worth the 30 SLOC to convert on the fly to keep a
single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.
Of course it might make sense to switch things around the other way
and use .bin files uniformly and only convert to .bit format for JTAG
loading.
But since the core of the "conversion:" after a header, is just a
32 bit endian swap, it doesn't matter much either way.
If it ends up being a truly nasty reverse engineering exercise now,
I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailto:discuss-bounces@mailto:discuss-bounces@
lists.op> encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailt
o:jek@parera.commailto:o%3Ajek@parera.com>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system
space.
This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important
contribution.
Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream.

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra
branch.
Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://vi
vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://viv
ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to
/lib/firmware/opencpi_temp.bin.
It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to
/sys/class/fpga_manager/fpga0/firmware.
Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk to add a make rule
for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName, std::string

&error) {

           if (!file_exists("/lib/firmware")){

mkdir("/lib/firmware",0666);
}
int out_file =
creat("/lib/firmware/opencpi_temp.bin", 0666);
gzFile bin_file;
int bfd, zerror;
uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin file
'%s' for :
%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream
fpga_flags("/sys/class/fpga_manager/fpga0/flags");
std::ofstream
fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val,

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's *bin make-rule
uses bootgen to convert bit to bin. This is necessary in Vivado
2018.2, but in later versions you may be able to directly
generate the correct *.bin file via an option to
write_bitstream:
$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
$(AT)echo all: > $$(call BifName,$1,$3,$6);
echo "{" >> $$(call BifName,$1,$3,$6);
echo " [destination_device = pl] $(notdir $(call
BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com><mailto:dbanks@geomailto:dbanks@geo
ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An embedded and
charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190805/d9b4f229/attachment.txt>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dmailto:d
iscuss@lists.opencpi.orgmailto:iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

Hi Rob, Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and *not* in the FPGA bitstream. The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA. The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to. All these registers are pretty well documented in the Zynq TRM. Jim On 9/5/19 5:47 PM, Munro, Robert M. wrote: > Chris, > > Would this be the GP0 AXI slave or master registers that are being accessed in this scenario? I don’t believe these are configured in the FSBL, but in the FPGA image. This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image. Is there a listing of the OCPI required FPGA facilities? > > Thanks, > Rob > > From: Chris Hinkey <chinkey@geontech.com> > Sent: Thursday, August 29, 2019 11:58 AM > To: Munro, Robert M. <Robert.Munro@jhuapl.edu> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA. i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case. I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same. > the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document). > > On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: > Chris, > > Looking at the Zynq and ZynqMP datasheets: > https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf > https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf > > It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' . > > Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in? > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M. > Sent: Thursday, August 29, 2019 9:09 AM > To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> > Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > Chris, > > Thanks for the information regarding the internals. The FPGA part on this platform is a XC7Z100. I purposefully did not pull in changes that I believed were related to addressing. I can double check the specifications regarding address widths to verify it should be unchanged. > > Please let me know if there are any other changes or steps missed. > > Thanks, > Rob > > > From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>> > Date: Thursday, Aug 29, 2019, 8:05 AM > To: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> > Cc: James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>, discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > It looks like you loaded something sucessfully but the control plan is not hooked up quite right. > > as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open" this is given by the data in the error message - (sb 0x435049004f70656e). this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed. > > I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly. remind me what platform you are using is it a zynq ultrascale or 7000 series? > > On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: > Chris, > > After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310. The fsk_filerw is being used as a known good reference for this purpose. The new sections of vivado.mk<http://vivado.mk><http://vivado.mk> were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument. An attempt to replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk> manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken. > > The steps were: > - generate a .bif file similar to the documentation's Full_Bitstream.bif using the correct filename > - run a bootgen command similar to vivado.mk<http://vivado.mk><http://vivado.mk>: bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w > > This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure. > > The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully. The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL. When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available. > > The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'. > > Is there some other step that must be taken during the generation of the .bin file? Is there any other software modification that is required of the ocpi runtime code? The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly. The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps. > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M. > Sent: Tuesday, August 13, 2019 10:56 AM > To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>; James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>> > Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > Chris, > > Thank you for your helpful response and insight. My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity. I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310. > > Thanks again for your help. > > Thanks, > Rob > > From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>> > Sent: Tuesday, August 13, 2019 10:02 AM > To: James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>> > Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>; discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64. This met our needs as we only cared about the fpga manager on ultrascale devices at the time. We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga). > > The original problem you were running into is certainly because of an ifdef on line 226 where it will check the old driver done pin if it is on an arm and not an arm64 > > 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) > > to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way. > there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know. > hope this helps > > On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>> wrote: > On 8/12/19 9:37 AM, Munro, Robert M. wrote: >> Jim, >> >> This is the only branch with the modifications required for use with >> the FPGA Manager driver. This is required for use with the Linux >> kernel provided for the N310. The Xilinx toolset being used is 2018_2 >> and the kernel being used is generated via the N310 build container >> using v3.14.0.0 . > Ok. The default Xilinx kernel associated with 2018_2 is 4.14. > > I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use. > > It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary. > > The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked. > > That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel. > > Jim > > > > > > > >> Thanks, >> Robert Munro >> >> *From: *James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>> >> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><ma >> ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com>>>>> >> *Date: *Monday, Aug 12, 2019, 9:00 AM >> *To: *Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> >> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto >> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>, >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma >> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>>> >> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> I was a bit confused about your use of the "ultrascale" branch. >> So you are using a branch with two types of patches in it: one for >> later linux kernels with the fpga manager, and the other for the >> ultrascale chip itself. >> The N310 is not ultrascale, so we need to separate the two issues, >> which were not separated before. >> So its not really a surprise that the branch you are using is not yet >> happy with the system you are trying to run it on. >> >> I am working on a branch that simply updates the xilinx tools (2019-1) >> and the xilinx linux kernel (4.19) without dealing with ultrascale, >> which is intended to work with a baseline zed board, but with current >> tools and kernels. >> >> The N310 uses a 7000-series part (7100) which should be compatible >> with this. >> >> Which kernel and which xilinx tools are you using? >> >> Jim >> >> >> >> On 8/8/19 1:36 PM, Munro, Robert M. wrote: >>> Jim or others, >>> >>> Is there any further input or feedback on the source or resolution >> of this issue? >>> As it stands I do not believe that the OCPI runtime software will be >> able to successfully load HDL assemblies on the N310 platform. My >> familiarity with this codebase is limited and we would appreciate any >> guidance available toward investigating or resolving this issue. >>> Thank you, >>> Robert Munro >>> >>> -----Original Message----- >>> From: discuss >>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open> >>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-bounces@li> >>> sts.open> cpi.org<http://cpi.org><http://cpi.org>>> On Behalf Of >> Munro, Robert M. >>> Sent: Monday, August 5, 2019 10:49 AM >>> To: James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: >>> jek@parera.com<mailto:jek@parera.com>>>>; >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >>> Jim, >>> >>> The given block of code is not the root cause of the issue because >> the file system does not have a /dev/xdevcfg device. >>> I suspect there is some functional code similar to this being >> compiled incorrectly: >>> #if (OCPI_ARCH_arm) >>> // do xdevcfg loading stuff >>> #else >>> // do fpga_manager loading stuff >>> #endif >>> >>> This error is being output at environment initialization as well as >> when running hello.xml. I've attached a copy of the output from the >> command 'ocpirun -v -l 20 hello.xml' for further investigation. >>> From looking at the output I believe the system is calling >> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is >> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line >> 484 which in turn is calling Driver::open in the same file at line 499 >> which then outputs the 'When searching for PL device ...' error at >> line 509. This then returns to the HdlDriver.cxx search() function and >> outputs the '... got Zynq search error ...' error at line 141. >>> This is an ARM device and I am not familiar enough with this >> codebase to adjust precompiler definitions with confidence that some >> other code section will become affected. >>> Thanks, >>> Robert Munro >>> >>> -----Original Message----- >>> From: James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: >>> jek@parera.com<mailto:jek@parera.com>>>> >>> Sent: Friday, August 2, 2019 4:27 PM >>> To: Munro, Robert M. >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mailto:Robe> >>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >>> That code is not integrated into the main line of OpenCPI yet, but >> in that code there is: >>> if (file_exists("/dev/xdevcfg")){ >>> ret_val= load_xdevconfig(fileName, error); >>> } >>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ >>> ret_val= load_fpga_manager(fileName, error); >>> } >>> So it looks like the presence of /dev/xdevcfg is what causes it to >> look for /sys/class/xdevcfg/xdevcfg/device/prog_done >>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: >>>> Are there any required flag or environment variable settings that >> must be done before building the framework to utilize this >> functionality? I have a platform built that is producing an output >> during environment load: 'When searching for PL device '0': Can't >> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: >> file could not be open for reading' . This leads me to believe that >> it is running the xdevcfg code still present in HdlBusDriver.cxx . >>>> Use of the release_1.4_zynq_ultra branch and presence of the >> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been >> verified for the environment used to generate the executables. >>>> Thanks, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: discuss >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope> >>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-bounces@l> >>>> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org>>> On Behalf Of James Kulp >>>> Sent: Friday, February 1, 2019 4:18 PM >>>> To: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>> ZynqMP/UltraScale+ fpga_manager >>>> >>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: >>>>> in response to Point 1 here. We attempted using the code that on >> the fly was attempting to convert from bit to bin. This did not work >> on these newer platforms using fpga_manager so we decided to use the >> vendor provided tools rather then to reverse engineer what was wrong >> with the existing code. >>>>> If changes need to be made to create more commonality and given >> that all zynq and zynqMP platforms need a .bin file format wouldn't it >> make more sense to just use .bin files rather then converting them on >> the fly every time? >>>> A sensible question for sure. >>>> >>>> When this was done originally, it was to avoid generating multiple >> file formats all the time. .bit files are necessary for JTAG loading, >> and .bin files are necessary for zynq hardware loading. >>>> Even on Zynq, some debugging using jtag is done, and having that be >> mostly transparent (using the same bitstream files) is convenient. >>>> So we preferred having a single bitstream file (with metadata, >>>> compressed) regardless of whether we were hardware loading or jtag >> loading, zynq or virtex6 or spartan3, ISE or Vivado. >>>> In fact, there was no reverse engineering the last time since both >> formats, at the level we were operating at, were documented by Xilinx. >>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a >> single format of Xilinx bitstream files, including between ISE and >> Vivado and all Xilinx FPGA types. >>>> Of course it might make sense to switch things around the other way >> and use .bin files uniformly and only convert to .bit format for JTAG >> loading. >>>> But since the core of the "conversion:" after a header, is just a >> 32 bit endian swap, it doesn't matter much either way. >>>> If it ends up being a truly nasty reverse engineering exercise now, >> I would reconsider. >>>>> ________________________________ >>>>> From: discuss >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op> >>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss-bounces@> >>>>> lists.op> encpi.org<http://encpi.org><http://encpi.org>>> on behalf of James Kulp >>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailt >>>>> o:jek@parera.com<mailto:o%3Ajek@parera.com>>>> >>>>> Sent: Friday, February 1, 2019 3:27 PM >>>>> To: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>> ZynqMP/UltraScale+ fpga_manager >>>>> >>>>> David, >>>>> >>>>> This is great work. Thanks. >>>>> >>>>> Since I believe the fpga manager stuff is really an attribute of >>>>> later linux kernels, I don't think it is really a ZynqMP thing, >>>>> but just a later linux kernel thing. >>>>> I am currently bringing up the quite ancient zedboard using the >>>>> latest Vivado and Xilinx linux and will try to use this same code. >>>>> There are two thinigs I am looking into, now that you have done >>>>> the hard work of getting to a working solution: >>>>> >>>>> 1. The bit vs bin thing existed with the old bitstream loader, but >>>>> I think we were converting on the fly, so I will try that here. >>>>> (To avoid the bin format altogether). >>>>> >>>>> 2. The fpga manager has entry points from kernel mode that allow >>>>> you to inject the bitstream without making a copy in /lib/firmware. >>>>> Since we already have a kernel driver, I will try to use that to >>>>> avoid the whole /lib/firmware thing. >>>>> >>>>> So if those two things can work (no guarantees), the difference >>>>> between old and new bitstream loading (and building) can be >>>>> minimized and the loading process faster and requiring no extra >>>>> file system >> space. >>>>> This will make merging easier too. >>>>> >>>>> We'll see. Thanks again to you and Geon for this important >> contribution. >>>>> Jim >>>>> >>>>> >>>>> On 2/1/19 3:12 PM, David Banks wrote: >>>>>> OpenCPI users interested in ZynqMP fpga_manager, >>>>>> >>>>>> I know some users are interested in the OpenCPI's bitstream >>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In >>>>>> general, we followed the instructions at >>>>>> >> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream. >>>>>> I will give a short explanation here: >>>>>> >>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at >>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra >> branch. >>>>>> Firstly, all *fpga_manager *code is located in >>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in >>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://vi >>>>>> vado.mk<http://vado.mk>> >>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin >>>>>> format. To see the changes made to these files for ZynqMP, you >>>>>> can diff them between >>>>>> *release_1.4* and *release_1.4_zynq_ultra*: >>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch >>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin >>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- >>>>>> runtime/hdl/src/HdlBusDriver.cxx >>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://viv >>>>>> ado.mk<http://ado.mk>>; >>>>>> >>>>>> >>>>>> The directly relevant functions are *load_fpga_manager()* and i >>>>>> *sProgrammed()*. >>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the >>>>>> *.bin bitstream file and writes its contents to >> /lib/firmware/opencpi_temp.bin. >>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the >>>>>> the filename "opencpi_temp.bin" to >> /sys/class/fpga_manager/fpga0/firmware. >>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and >>>>>> the state of the fpga_manager >>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed(). >>>>>> >>>>>> fpga_manager requires that bitstreams be in *.bin in order to >>>>>> write them to the PL. So, some changes were made to >>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> to add a make rule >>>>>> for the *.bin file. This make rule (*BinName*) uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. >>>>>> >>>>>> Most of the relevant code is pasted or summarized below: >>>>>> >>>>>> *load_fpga_manager*(const char *fileName, std::string >> &error) { >>>>>> if (!file_exists("/lib/firmware")){ >>>>>> mkdir("/lib/firmware",0666); >>>>>> } >>>>>> int out_file = >> creat("/lib/firmware/opencpi_temp.bin", 0666); >>>>>> gzFile bin_file; >>>>>> int bfd, zerror; >>>>>> uint8_t buf[8*1024]; >>>>>> >>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) >>>>>> OU::format(error, "Can't open bitstream file '%s' >> for reading: >>>>>> %s(%d)", >>>>>> fileName, strerror(errno), errno); >>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) >>>>>> OU::format(error, "Can't open compressed bin file >> '%s' for : >>>>>> %s(%u)", >>>>>> fileName, strerror(errno), errno); >>>>>> do { >>>>>> uint8_t *bit_buf = buf; >>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); >>>>>> if (n < 0) >>>>>> return true; >>>>>> if (n & 3) >>>>>> return OU::eformat(error, "Bitstream data in is '%s' >>>>>> not a multiple of 3 bytes", >>>>>> fileName); >>>>>> if (n == 0) >>>>>> break; >>>>>> if (write(out_file, buf, n) <= 0) >>>>>> return OU::eformat(error, >>>>>> "Error writing to >>>>>> /lib/firmware/opencpi_temp.bin for bin >>>>>> loading: %s(%u/%d)", >>>>>> strerror(errno), errno, n); >>>>>> } while (1); >>>>>> close(out_file); >>>>>> std::ofstream >> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); >>>>>> std::ofstream >>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); >>>>>> fpga_flags << 0 << std::endl; >>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; >>>>>> >>>>>> remove("/lib/firmware/opencpi_temp.bin"); >>>>>> return isProgrammed(error) ? init(error) : true; >>>>>> } >>>>>> >>>>>> The isProgrammed() function just checks whether or not the >>>>>> fpga_manager state is 'operating' although we are not entirely >>>>>> confident this is a robust check: >>>>>> >>>>>> *isProgrammed*(...) { >>>>>> ... >>>>>> const char *e = OU::file2String(val, >>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); >>>>>> ... >>>>>> return val == "operating"; >>>>>> } >>>>>> >>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>'s *bin make-rule >>>>>> uses bootgen to convert bit to bin. This is necessary in Vivado >>>>>> 2018.2, but in later versions you may be able to directly >>>>>> generate the correct *.bin file via an option to >> write_bitstream: >>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) >>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating >>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". >>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ >>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ >>>>>> echo " [destination_device = pl] $(notdir $(call >>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ >>>>>> echo "}" >> $$(call BifName,$1,$3,$6); >>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir >>>>>> $(call >>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call >>>>>> BinName,$1,$3,$6)) -w,bin) >>>>>> >>>>>> Hope this is useful! >>>>>> >>>>>> Regards, >>>>>> David Banks >>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:dbanks@geo> >>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>>> >>>>>> Geon Technologies, LLC >>>>>> -------------- next part -------------- An HTML attachment was >>>>>> scrubbed... >>>>>> URL: >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att >>>>>> ach m ents/20190201/4b49675d/attachment.html> >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o >>>>>> rg >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>> g >>>>> -------------- next part -------------- An HTML attachment was >>>>> scrubbed... >>>>> URL: >>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta >>>>> chm e nts/20190201/64e4ea45/attachment.html> >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>> g >>>> _______________________________________________ >>>> discuss mailing list >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>> -------------- next part -------------- An embedded and >>> charset-unspecified text was scrubbed... >>> Name: hello_n310_log_output.txt >>> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190805/d9b4f229/attachment.txt> >>> _______________________________________________ >>> discuss mailing list >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >
MR
Munro, Robert M.
Thu, Sep 5, 2019 10:19 PM

Jim,

Does the ocpizynq utility list all the available interfaces that can dumped?

Thanks,
Rob

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James Kulp
Sent: Thursday, September 5, 2019 5:59 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and
not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA.
The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to.
All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:

Chris,

Would this be the GP0 AXI slave or master registers that are being accessed in this scenario?  I don’t believe these are configured in the FSBL, but in the FPGA image.  This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. Robert.Munro@jhuapl.edu
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA.  i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case.  I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same.
the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu> wrote:
Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70
00-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul
trascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is a XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double check the specifications regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
Cc: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is not hooked up quite right.

as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open"  this is given by the data in the error message - (sb 0x435049004f70656e).  this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  remind me what platform you are using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw is being used as a known good reference for this purpose.  The new sections of vivado.mkhttp://vivado.mkhttp://vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt to replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mk manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken.

The steps were:

  • generate a .bif file similar to the documentation's
    Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to
    vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image
    <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>; James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an
ifdef on line 226 where it will check the old driver done pin if it is
on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>> wrote:
On 8/12/19 9:37 AM, Munro, Robert M. wrote:

Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is
2018_2 and the kernel being used is generated via the N310 build
container using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:j
ek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:j
emailto:je
k@parera.commailto:k@parera.com>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<m
ailto:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<m
a ilto:jek@parera.commailto:ilto%3Ajek@parera.com>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robert<mai lto:Robert>
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl
.edumailto:Robert.Munro@jhuapl.edu>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailt
o:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dis
mailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:d
iscuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:di
mailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discus
s@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<m
ailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or
g><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org

*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools
(2019-1) and the xilinx linux kernel (4.19) without dealing with
ultrascale, which is intended to work with a baseline zed board, but
with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:

Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open
cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li
sts.open>
cpi.orghttp://cpi.org><mailto:discuss-bounces@lists.open<mailto:di
scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b
ounces@li> sts.open> cpi.orghttp://cpi.orghttp://cpi.org>> On
Behalf Of

Munro, Robert M.

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff #endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line
499 which then outputs the 'When searching for PL device ...' error
at line 509. This then returns to the HdlDriver.cxx search() function
and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:
jek@parera.commailto:jek@parera.com>>>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robe
rt.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robe<mai lto:Robe>
rt.Munro@jhuapl.edumailto:rt.Munro@jhuapl.edu<mailto:Robert.Munro@
jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>;

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

          if (file_exists("/dev/xdevcfg")){
            ret_val= load_xdevconfig(fileName, error);
          }
          else if (file_exists("/sys/class/fpga_manager/fpga0/")){
            ret_val= load_fpga_manager(fileName, error);
          }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:

Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope
ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l
ists.ope>
ncpi.orghttp://ncpi.org><mailto:discuss-bounces@lists.ope<mailto:
discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-
bounces@l> ists.ope> ncpi.orghttp://ncpi.orghttp://ncpi.org>>
On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:

in response to Point 1 here.  We attempted using the code that on

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't
it make more sense to just use .bin files rather then converting them
on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG
loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op
encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@
lists.op>
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.op<mailt
o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss
-bounces@> lists.op>
encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James
Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailt
o:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<ma
ilt o:jek@parera.commailto:o%3Ajek@parera.com>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:

OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://vi
vado.mk><http://vi
vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://viv
ado.mk><http://viv
ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk
to add a make rule for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

          *load_fpga_manager*(const char *fileName, 

std::string

&error) {

            if (!file_exists("/lib/firmware")){ 

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

            gzFile bin_file;
            int bfd, zerror;
            uint8_t buf[8*1024];

            if ((bfd = ::open(fileName, O_RDONLY)) < 0)
              OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin
file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

            std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

          *isProgrammed*(...) {
            ...
            const char *e = OU::file2String(val, 

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's
*bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
$(AT)echo all: > $$(call BifName,$1,$3,$6);
echo "{" >> $$(call BifName,$1,$3,$6);
echo " [destination_device = pl] $(notdir $(call
BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geo
ntech.commailto:dbanks@geontech.com><mailto:dbanks@geo<mailto:d
banks@geo>
ntech.comhttp://ntech.com<mailto:dbanks@geontech.com<mailto:dba
nks@geontech.com>>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org
<mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or
g>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g

-------------- next part -------------- An embedded and
charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL:

<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm
e nts/20190805/d9b4f229/attachment.txt>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discu
ss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190813/4516c872/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190829/b99ae3e0/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190905/0b9a1953/attachment.html>


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

Jim, Does the ocpizynq utility list all the available interfaces that can dumped? Thanks, Rob -----Original Message----- From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp Sent: Thursday, September 5, 2019 5:59 PM To: discuss@lists.opencpi.org Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager Hi Rob, Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and *not* in the FPGA bitstream. The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA. The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to. All these registers are pretty well documented in the Zynq TRM. Jim On 9/5/19 5:47 PM, Munro, Robert M. wrote: > Chris, > > Would this be the GP0 AXI slave or master registers that are being accessed in this scenario? I don’t believe these are configured in the FSBL, but in the FPGA image. This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image. Is there a listing of the OCPI required FPGA facilities? > > Thanks, > Rob > > From: Chris Hinkey <chinkey@geontech.com> > Sent: Thursday, August 29, 2019 11:58 AM > To: Munro, Robert M. <Robert.Munro@jhuapl.edu> > Subject: Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA. i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case. I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same. > the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document). > > On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: > Chris, > > Looking at the Zynq and ZynqMP datasheets: > https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70 > 00-Overview.pdf > https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul > trascale-plus-overview.pdf > > It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' . > > Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in? > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M. > Sent: Thursday, August 29, 2019 9:09 AM > To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> > Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > Subject: Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > Chris, > > Thanks for the information regarding the internals. The FPGA part on this platform is a XC7Z100. I purposefully did not pull in changes that I believed were related to addressing. I can double check the specifications regarding address widths to verify it should be unchanged. > > Please let me know if there are any other changes or steps missed. > > Thanks, > Rob > > > From: Chris Hinkey > <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > tech.com<mailto:chinkey@geontech.com>>> > Date: Thursday, Aug 29, 2019, 8:05 AM > To: Munro, Robert M. > <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert > .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> > Cc: James Kulp > <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > k@parera.com>>>, > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di > scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > Subject: Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > It looks like you loaded something sucessfully but the control plan is not hooked up quite right. > > as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open" this is given by the data in the error message - (sb 0x435049004f70656e). this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed. > > I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly. remind me what platform you are using is it a zynq ultrascale or 7000 series? > > On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: > Chris, > > After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310. The fsk_filerw is being used as a known good reference for this purpose. The new sections of vivado.mk<http://vivado.mk><http://vivado.mk> were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument. An attempt to replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk> manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken. > > The steps were: > - generate a .bif file similar to the documentation's > Full_Bitstream.bif using the correct filename > - run a bootgen command similar to > vivado.mk<http://vivado.mk><http://vivado.mk>: bootgen -image > <bif_filename> -arch zynq -o <bin_filename> -w > > This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure. > > The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully. The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL. When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available. > > The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'. > > Is there some other step that must be taken during the generation of the .bin file? Is there any other software modification that is required of the ocpi runtime code? The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly. The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps. > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M. > Sent: Tuesday, August 13, 2019 10:56 AM > To: Chris Hinkey > <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > tech.com<mailto:chinkey@geontech.com>>>; James Kulp > <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > k@parera.com>>> > Cc: > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > Subject: Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > Chris, > > Thank you for your helpful response and insight. My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity. I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310. > > Thanks again for your help. > > Thanks, > Rob > > From: Chris Hinkey > <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > tech.com<mailto:chinkey@geontech.com>>> > Sent: Tuesday, August 13, 2019 10:02 AM > To: James Kulp > <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > k@parera.com>>> > Cc: Munro, Robert M. > <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert > .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>; > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > Subject: Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64. This met our needs as we only cared about the fpga manager on ultrascale devices at the time. We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga). > > The original problem you were running into is certainly because of an > ifdef on line 226 where it will check the old driver done pin if it is > on an arm and not an arm64 > > 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) > > to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way. > there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know. > hope this helps > > On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>> wrote: > On 8/12/19 9:37 AM, Munro, Robert M. wrote: >> Jim, >> >> This is the only branch with the modifications required for use with >> the FPGA Manager driver. This is required for use with the Linux >> kernel provided for the N310. The Xilinx toolset being used is >> 2018_2 and the kernel being used is generated via the N310 build >> container using v3.14.0.0 . > Ok. The default Xilinx kernel associated with 2018_2 is 4.14. > > I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use. > > It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary. > > The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked. > > That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel. > > Jim > > > > > > > >> Thanks, >> Robert Munro >> >> *From: *James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:j >> ek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:j >> e<mailto:je> >> k@parera.com<mailto:k@parera.com>>> >> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<m >> ailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><m >> a ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com>>>>> >> *Date: *Monday, Aug 12, 2019, 9:00 AM >> *To: *Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober >> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mai >> lto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl >> .edu<mailto:Robert.Munro@jhuapl.edu>>> >> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailt >> o:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto >> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober >> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>, >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di >> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis >> <mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ >> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d >> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di >> <mailto:di> >> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discus >> s@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><m >> ailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma >> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or >> g><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >> >>>> >> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> I was a bit confused about your use of the "ultrascale" branch. >> So you are using a branch with two types of patches in it: one for >> later linux kernels with the fpga manager, and the other for the >> ultrascale chip itself. >> The N310 is not ultrascale, so we need to separate the two issues, >> which were not separated before. >> So its not really a surprise that the branch you are using is not yet >> happy with the system you are trying to run it on. >> >> I am working on a branch that simply updates the xilinx tools >> (2019-1) and the xilinx linux kernel (4.19) without dealing with >> ultrascale, which is intended to work with a baseline zed board, but >> with current tools and kernels. >> >> The N310 uses a 7000-series part (7100) which should be compatible >> with this. >> >> Which kernel and which xilinx tools are you using? >> >> Jim >> >> >> >> On 8/8/19 1:36 PM, Munro, Robert M. wrote: >>> Jim or others, >>> >>> Is there any further input or feedback on the source or resolution >> of this issue? >>> As it stands I do not believe that the OCPI runtime software will be >> able to successfully load HDL assemblies on the N310 platform. My >> familiarity with this codebase is limited and we would appreciate any >> guidance available toward investigating or resolving this issue. >>> Thank you, >>> Robert Munro >>> >>> -----Original Message----- >>> From: discuss >>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open >>> cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li >>> sts.open> >>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:di >>> scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b >>> ounces@li> sts.open> cpi.org<http://cpi.org><http://cpi.org>>> On >>> Behalf Of >> Munro, Robert M. >>> Sent: Monday, August 5, 2019 10:49 AM >>> To: James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: >>> jek@parera.com<mailto:jek@parera.com>>>>; >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d >>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d >>> <mailto:d> >>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis >>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >>> Jim, >>> >>> The given block of code is not the root cause of the issue because >> the file system does not have a /dev/xdevcfg device. >>> I suspect there is some functional code similar to this being >> compiled incorrectly: >>> #if (OCPI_ARCH_arm) >>> // do xdevcfg loading stuff >>> #else >>> // do fpga_manager loading stuff #endif >>> >>> This error is being output at environment initialization as well as >> when running hello.xml. I've attached a copy of the output from the >> command 'ocpirun -v -l 20 hello.xml' for further investigation. >>> From looking at the output I believe the system is calling >> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is >> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line >> 484 which in turn is calling Driver::open in the same file at line >> 499 which then outputs the 'When searching for PL device ...' error >> at line 509. This then returns to the HdlDriver.cxx search() function >> and outputs the '... got Zynq search error ...' error at line 141. >>> This is an ARM device and I am not familiar enough with this >> codebase to adjust precompiler definitions with confidence that some >> other code section will become affected. >>> Thanks, >>> Robert Munro >>> >>> -----Original Message----- >>> From: James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: >>> jek@parera.com<mailto:jek@parera.com>>>> >>> Sent: Friday, August 2, 2019 4:27 PM >>> To: Munro, Robert M. >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robe >>> rt.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mai >>> lto:Robe> >>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@ >>> jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di >> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis >> <mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ >> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >>> That code is not integrated into the main line of OpenCPI yet, but >> in that code there is: >>> if (file_exists("/dev/xdevcfg")){ >>> ret_val= load_xdevconfig(fileName, error); >>> } >>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ >>> ret_val= load_fpga_manager(fileName, error); >>> } >>> So it looks like the presence of /dev/xdevcfg is what causes it to >> look for /sys/class/xdevcfg/xdevcfg/device/prog_done >>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: >>>> Are there any required flag or environment variable settings that >> must be done before building the framework to utilize this >> functionality? I have a platform built that is producing an output >> during environment load: 'When searching for PL device '0': Can't >> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: >> file could not be open for reading' . This leads me to believe that >> it is running the xdevcfg code still present in HdlBusDriver.cxx . >>>> Use of the release_1.4_zynq_ultra branch and presence of the >> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been >> verified for the environment used to generate the executables. >>>> Thanks, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: discuss >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope >>>> ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l >>>> ists.ope> >>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto: >>>> discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss- >>>> bounces@l> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org>>> >>>> On Behalf Of James Kulp >>>> Sent: Friday, February 1, 2019 4:18 PM >>>> To: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>> ZynqMP/UltraScale+ fpga_manager >>>> >>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: >>>>> in response to Point 1 here. We attempted using the code that on >> the fly was attempting to convert from bit to bin. This did not work >> on these newer platforms using fpga_manager so we decided to use the >> vendor provided tools rather then to reverse engineer what was wrong >> with the existing code. >>>>> If changes need to be made to create more commonality and given >> that all zynq and zynqMP platforms need a .bin file format wouldn't >> it make more sense to just use .bin files rather then converting them >> on the fly every time? >>>> A sensible question for sure. >>>> >>>> When this was done originally, it was to avoid generating multiple >> file formats all the time. .bit files are necessary for JTAG >> loading, and .bin files are necessary for zynq hardware loading. >>>> Even on Zynq, some debugging using jtag is done, and having that be >> mostly transparent (using the same bitstream files) is convenient. >>>> So we preferred having a single bitstream file (with metadata, >>>> compressed) regardless of whether we were hardware loading or jtag >> loading, zynq or virtex6 or spartan3, ISE or Vivado. >>>> In fact, there was no reverse engineering the last time since both >> formats, at the level we were operating at, were documented by Xilinx. >>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a >> single format of Xilinx bitstream files, including between ISE and >> Vivado and all Xilinx FPGA types. >>>> Of course it might make sense to switch things around the other way >> and use .bin files uniformly and only convert to .bit format for JTAG >> loading. >>>> But since the core of the "conversion:" after a header, is just a >> 32 bit endian swap, it doesn't matter much either way. >>>> If it ends up being a truly nasty reverse engineering exercise now, >> I would reconsider. >>>>> ________________________________ >>>>> From: discuss >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op >>>>> encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@ >>>>> lists.op> >>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailt >>>>> o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss >>>>> -bounces@> lists.op> >>>>> encpi.org<http://encpi.org><http://encpi.org>>> on behalf of James >>>>> Kulp >>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailt >>>>> o:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><ma >>>>> ilt o:jek@parera.com<mailto:o%3Ajek@parera.com>>>> >>>>> Sent: Friday, February 1, 2019 3:27 PM >>>>> To: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail >>>>> to >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>> ZynqMP/UltraScale+ fpga_manager >>>>> >>>>> David, >>>>> >>>>> This is great work. Thanks. >>>>> >>>>> Since I believe the fpga manager stuff is really an attribute of >>>>> later linux kernels, I don't think it is really a ZynqMP thing, >>>>> but just a later linux kernel thing. >>>>> I am currently bringing up the quite ancient zedboard using the >>>>> latest Vivado and Xilinx linux and will try to use this same code. >>>>> There are two thinigs I am looking into, now that you have done >>>>> the hard work of getting to a working solution: >>>>> >>>>> 1. The bit vs bin thing existed with the old bitstream loader, but >>>>> I think we were converting on the fly, so I will try that here. >>>>> (To avoid the bin format altogether). >>>>> >>>>> 2. The fpga manager has entry points from kernel mode that allow >>>>> you to inject the bitstream without making a copy in /lib/firmware. >>>>> Since we already have a kernel driver, I will try to use that to >>>>> avoid the whole /lib/firmware thing. >>>>> >>>>> So if those two things can work (no guarantees), the difference >>>>> between old and new bitstream loading (and building) can be >>>>> minimized and the loading process faster and requiring no extra >>>>> file system >> space. >>>>> This will make merging easier too. >>>>> >>>>> We'll see. Thanks again to you and Geon for this important >> contribution. >>>>> Jim >>>>> >>>>> >>>>> On 2/1/19 3:12 PM, David Banks wrote: >>>>>> OpenCPI users interested in ZynqMP fpga_manager, >>>>>> >>>>>> I know some users are interested in the OpenCPI's bitstream >>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In >>>>>> general, we followed the instructions at >>>>>> >> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream. >>>>>> I will give a short explanation here: >>>>>> >>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at >>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra >> branch. >>>>>> Firstly, all *fpga_manager *code is located in >>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in >>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vi >>>>>> vado.mk><http://vi >>>>>> vado.mk<http://vado.mk>> >>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin >>>>>> format. To see the changes made to these files for ZynqMP, you >>>>>> can diff them between >>>>>> *release_1.4* and *release_1.4_zynq_ultra*: >>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch >>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin >>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- >>>>>> runtime/hdl/src/HdlBusDriver.cxx >>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://viv >>>>>> ado.mk><http://viv >>>>>> ado.mk<http://ado.mk>>; >>>>>> >>>>>> >>>>>> The directly relevant functions are *load_fpga_manager()* and i >>>>>> *sProgrammed()*. >>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the >>>>>> *.bin bitstream file and writes its contents to >> /lib/firmware/opencpi_temp.bin. >>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the >>>>>> the filename "opencpi_temp.bin" to >> /sys/class/fpga_manager/fpga0/firmware. >>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and >>>>>> the state of the fpga_manager >>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed(). >>>>>> >>>>>> fpga_manager requires that bitstreams be in *.bin in order to >>>>>> write them to the PL. So, some changes were made to >>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> >>>>>> to add a make rule for the *.bin file. This make rule (*BinName*) uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. >>>>>> >>>>>> Most of the relevant code is pasted or summarized below: >>>>>> >>>>>> *load_fpga_manager*(const char *fileName, >>>>>> std::string >> &error) { >>>>>> if (!file_exists("/lib/firmware")){ >>>>>> mkdir("/lib/firmware",0666); >>>>>> } >>>>>> int out_file = >> creat("/lib/firmware/opencpi_temp.bin", 0666); >>>>>> gzFile bin_file; >>>>>> int bfd, zerror; >>>>>> uint8_t buf[8*1024]; >>>>>> >>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) >>>>>> OU::format(error, "Can't open bitstream file '%s' >> for reading: >>>>>> %s(%d)", >>>>>> fileName, strerror(errno), errno); >>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) >>>>>> OU::format(error, "Can't open compressed bin >>>>>> file >> '%s' for : >>>>>> %s(%u)", >>>>>> fileName, strerror(errno), errno); >>>>>> do { >>>>>> uint8_t *bit_buf = buf; >>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); >>>>>> if (n < 0) >>>>>> return true; >>>>>> if (n & 3) >>>>>> return OU::eformat(error, "Bitstream data in is '%s' >>>>>> not a multiple of 3 bytes", >>>>>> fileName); >>>>>> if (n == 0) >>>>>> break; >>>>>> if (write(out_file, buf, n) <= 0) >>>>>> return OU::eformat(error, >>>>>> "Error writing to >>>>>> /lib/firmware/opencpi_temp.bin for bin >>>>>> loading: %s(%u/%d)", >>>>>> strerror(errno), errno, n); >>>>>> } while (1); >>>>>> close(out_file); >>>>>> std::ofstream >> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); >>>>>> std::ofstream >>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); >>>>>> fpga_flags << 0 << std::endl; >>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; >>>>>> >>>>>> remove("/lib/firmware/opencpi_temp.bin"); >>>>>> return isProgrammed(error) ? init(error) : true; >>>>>> } >>>>>> >>>>>> The isProgrammed() function just checks whether or not the >>>>>> fpga_manager state is 'operating' although we are not entirely >>>>>> confident this is a robust check: >>>>>> >>>>>> *isProgrammed*(...) { >>>>>> ... >>>>>> const char *e = OU::file2String(val, >>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); >>>>>> ... >>>>>> return val == "operating"; >>>>>> } >>>>>> >>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>'s >>>>>> *bin make-rule uses bootgen to convert bit to bin. This is >>>>>> necessary in Vivado 2018.2, but in later versions you may be able >>>>>> to directly generate the correct *.bin file via an option to >> write_bitstream: >>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) >>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating >>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". >>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ >>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ >>>>>> echo " [destination_device = pl] $(notdir $(call >>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ >>>>>> echo "}" >> $$(call BifName,$1,$3,$6); >>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir >>>>>> $(call >>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call >>>>>> BinName,$1,$3,$6)) -w,bin) >>>>>> >>>>>> Hope this is useful! >>>>>> >>>>>> Regards, >>>>>> David Banks >>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geo >>>>>> ntech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:d >>>>>> banks@geo> >>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dba >>>>>> nks@geontech.com>>> >>>>>> Geon Technologies, LLC >>>>>> -------------- next part -------------- An HTML attachment was >>>>>> scrubbed... >>>>>> URL: >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att >>>>>> ach m ents/20190201/4b49675d/attachment.html> >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma >>>>>> ilt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org> >>>>>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or >>>>>> g>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o >>>>>> rg >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail >>>>> to >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>> g >>>>> -------------- next part -------------- An HTML attachment was >>>>> scrubbed... >>>>> URL: >>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta >>>>> chm e nts/20190201/64e4ea45/attachment.html> >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail >>>>> to >>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>> g >>>> _______________________________________________ >>>> discuss mailing list >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>> -------------- next part -------------- An embedded and >>> charset-unspecified text was scrubbed... >>> Name: hello_n310_log_output.txt >>> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm >> e nts/20190805/d9b4f229/attachment.txt> >>> _______________________________________________ >>> discuss mailing list >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d >>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d >>> <mailto:d> >>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis >>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discu > ss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@ > lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > -------------- next part -------------- An HTML attachment was > scrubbed... > URL: > <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > nts/20190813/4516c872/attachment.html> > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > -------------- next part -------------- An HTML attachment was > scrubbed... > URL: > <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > nts/20190829/b99ae3e0/attachment.html> > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > -------------- next part -------------- An HTML attachment was > scrubbed... > URL: > <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > nts/20190905/0b9a1953/attachment.html> > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org _______________________________________________ discuss mailing list discuss@lists.opencpi.org http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
JK
James Kulp
Thu, Sep 5, 2019 11:37 PM

If you invoke the command with no arguments it tells you what it can do, like most opencpi commands.  We mostly use it to find out how the FPGA clocks are initialized.

On Sep 5, 2019, at 18:19, Munro, Robert M. Robert.Munro@jhuapl.edu wrote:

Jim,

Does the ocpizynq utility list all the available interfaces that can dumped?

Thanks,
Rob

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James Kulp
Sent: Thursday, September 5, 2019 5:59 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and
not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA.
The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to.
All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:
Chris,

Would this be the GP0 AXI slave or master registers that are being accessed in this scenario?  I don’t believe these are configured in the FSBL, but in the FPGA image.  This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. Robert.Munro@jhuapl.edu
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA.  i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case.  I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same.
the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu> wrote:
Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70
00-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul
trascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is a XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double check the specifications regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
Cc: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is not hooked up quite right.

as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open"  this is given by the data in the error message - (sb 0x435049004f70656e).  this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  remind me what platform you are using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw is being used as a known good reference for this purpose.  The new sections of vivado.mkhttp://vivado.mkhttp://vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt to replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mk manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken.

The steps were:

  • generate a .bif file similar to the documentation's
    Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to
    vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image
    <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>; James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an
ifdef on line 226 where it will check the old driver done pin if it is
on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>> wrote:

On 8/12/19 9:37 AM, Munro, Robert M. wrote:
Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is
2018_2 and the kernel being used is generated via the N310 build
container using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:j
ek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:j
emailto:je
k@parera.commailto:k@parera.com>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<m
ailto:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<m
a ilto:jek@parera.commailto:ilto%3Ajek@parera.com>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robert<mai lto:Robert>
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl
.edumailto:Robert.Munro@jhuapl.edu>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailt
o:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dis
mailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:d
iscuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:di
mailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discus
s@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<m
ailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or
g><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org

*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools
(2019-1) and the xilinx linux kernel (4.19) without dealing with
ultrascale, which is intended to work with a baseline zed board, but
with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:
Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open
cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li
sts.open>
cpi.orghttp://cpi.org><mailto:discuss-bounces@lists.open<mailto:di
scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b
ounces@li> sts.open> cpi.orghttp://cpi.orghttp://cpi.org>> On
Behalf Of

Munro, Robert M.

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff #endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line
499 which then outputs the 'When searching for PL device ...' error
at line 509. This then returns to the HdlDriver.cxx search() function
and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:
jek@parera.commailto:jek@parera.com>>>
Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robe
rt.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robe<mai lto:Robe>
rt.Munro@jhuapl.edumailto:rt.Munro@jhuapl.edu<mailto:Robert.Munro@
jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>;

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:
Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope
ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l
ists.ope>
ncpi.orghttp://ncpi.org><mailto:discuss-bounces@lists.ope<mailto:
discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-
bounces@l> ists.ope> ncpi.orghttp://ncpi.orghttp://ncpi.org>>
On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:
in response to Point 1 here.  We attempted using the code that on

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't
it make more sense to just use .bin files rather then converting them
on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG
loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op
encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@
lists.op>
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.op<mailt
o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss
-bounces@> lists.op>
encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James
Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailt
o:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<ma
ilt o:jek@parera.commailto:o%3Ajek@parera.com>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:
OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://vi
vado.mk><http://vi
vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://viv
ado.mk><http://viv
ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk
to add a make rule for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName, 

std::string

&error) {

           if (!file_exists("/lib/firmware")){ 

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin
file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

           std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val, 

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's
*bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
$(AT)echo all: > $$(call BifName,$1,$3,$6);
echo "{" >> $$(call BifName,$1,$3,$6);
echo " [destination_device = pl] $(notdir $(call
BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geo
ntech.commailto:dbanks@geontech.com><mailto:dbanks@geo<mailto:d
banks@geo>
ntech.comhttp://ntech.com<mailto:dbanks@geontech.com<mailto:dba
nks@geontech.com>>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org
<mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or
g>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g

-------------- next part -------------- An embedded and
charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL:

<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm
e nts/20190805/d9b4f229/attachment.txt>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discu
ss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190813/4516c872/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190829/b99ae3e0/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190905/0b9a1953/attachment.html>


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

If you invoke the command with no arguments it tells you what it can do, like most opencpi commands. We mostly use it to find out how the FPGA clocks are initialized. > On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edu> wrote: > > Jim, > > Does the ocpizynq utility list all the available interfaces that can dumped? > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp > Sent: Thursday, September 5, 2019 5:59 PM > To: discuss@lists.opencpi.org > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > Hi Rob, > > Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and > *not* in the FPGA bitstream. > The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA. > The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to. > All these registers are pretty well documented in the Zynq TRM. > > Jim > >> On 9/5/19 5:47 PM, Munro, Robert M. wrote: >> Chris, >> >> Would this be the GP0 AXI slave or master registers that are being accessed in this scenario? I don’t believe these are configured in the FSBL, but in the FPGA image. This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image. Is there a listing of the OCPI required FPGA facilities? >> >> Thanks, >> Rob >> >> From: Chris Hinkey <chinkey@geontech.com> >> Sent: Thursday, August 29, 2019 11:58 AM >> To: Munro, Robert M. <Robert.Munro@jhuapl.edu> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA. i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case. I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same. >> the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document). >> >> On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: >> Chris, >> >> Looking at the Zynq and ZynqMP datasheets: >> https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70 >> 00-Overview.pdf >> https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul >> trascale-plus-overview.pdf >> >> It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' . >> >> Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in? >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M. >> Sent: Thursday, August 29, 2019 9:09 AM >> To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> >> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thanks for the information regarding the internals. The FPGA part on this platform is a XC7Z100. I purposefully did not pull in changes that I believed were related to addressing. I can double check the specifications regarding address widths to verify it should be unchanged. >> >> Please let me know if there are any other changes or steps missed. >> >> Thanks, >> Rob >> >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon >> tech.com<mailto:chinkey@geontech.com>>> >> Date: Thursday, Aug 29, 2019, 8:05 AM >> To: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert >> .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> >> Cc: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je >> k@parera.com>>>, >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di >> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> It looks like you loaded something sucessfully but the control plan is not hooked up quite right. >> >> as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open" this is given by the data in the error message - (sb 0x435049004f70656e). this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed. >> >> I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly. remind me what platform you are using is it a zynq ultrascale or 7000 series? >> >> On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: >> Chris, >> >> After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310. The fsk_filerw is being used as a known good reference for this purpose. The new sections of vivado.mk<http://vivado.mk><http://vivado.mk> were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument. An attempt to replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk> manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken. >> >> The steps were: >> - generate a .bif file similar to the documentation's >> Full_Bitstream.bif using the correct filename >> - run a bootgen command similar to >> vivado.mk<http://vivado.mk><http://vivado.mk>: bootgen -image >> <bif_filename> -arch zynq -o <bin_filename> -w >> >> This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure. >> >> The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully. The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL. When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available. >> >> The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'. >> >> Is there some other step that must be taken during the generation of the .bin file? Is there any other software modification that is required of the ocpi runtime code? The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly. The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps. >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M. >> Sent: Tuesday, August 13, 2019 10:56 AM >> To: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon >> tech.com<mailto:chinkey@geontech.com>>>; James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je >> k@parera.com>>> >> Cc: >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thank you for your helpful response and insight. My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity. I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310. >> >> Thanks again for your help. >> >> Thanks, >> Rob >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon >> tech.com<mailto:chinkey@geontech.com>>> >> Sent: Tuesday, August 13, 2019 10:02 AM >> To: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je >> k@parera.com>>> >> Cc: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert >> .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>; >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64. This met our needs as we only cared about the fpga manager on ultrascale devices at the time. We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga). >> >> The original problem you were running into is certainly because of an >> ifdef on line 226 where it will check the old driver done pin if it is >> on an arm and not an arm64 >> >> 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) >> >> to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way. >> there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know. >> hope this helps >> >> On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>> wrote: >>> On 8/12/19 9:37 AM, Munro, Robert M. wrote: >>> Jim, >>> >>> This is the only branch with the modifications required for use with >>> the FPGA Manager driver. This is required for use with the Linux >>> kernel provided for the N310. The Xilinx toolset being used is >>> 2018_2 and the kernel being used is generated via the N310 build >>> container using v3.14.0.0 . >> Ok. The default Xilinx kernel associated with 2018_2 is 4.14. >> >> I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use. >> >> It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary. >> >> The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked. >> >> That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel. >> >> Jim >> >> >> >> >> >> >> >>> Thanks, >>> Robert Munro >>> >>> *From: *James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:j >>> ek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:j >>> e<mailto:je> >>> k@parera.com<mailto:k@parera.com>>> >>> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<m >>> ailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><m >>> a ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com>>>>> >>> *Date: *Monday, Aug 12, 2019, 9:00 AM >>> *To: *Munro, Robert M. >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober >>> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mai >>> lto:Robert> >>> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl >>> .edu<mailto:Robert.Munro@jhuapl.edu>>> >>> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailt >>> o:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto >>> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober >>> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>, >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di >>> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis >>> <mailto:dis> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ >>> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d >>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di >>> <mailto:di> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discus >>> s@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><m >>> ailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma >>> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or >>> g><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >>>>>>> >>> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>> >>> I was a bit confused about your use of the "ultrascale" branch. >>> So you are using a branch with two types of patches in it: one for >>> later linux kernels with the fpga manager, and the other for the >>> ultrascale chip itself. >>> The N310 is not ultrascale, so we need to separate the two issues, >>> which were not separated before. >>> So its not really a surprise that the branch you are using is not yet >>> happy with the system you are trying to run it on. >>> >>> I am working on a branch that simply updates the xilinx tools >>> (2019-1) and the xilinx linux kernel (4.19) without dealing with >>> ultrascale, which is intended to work with a baseline zed board, but >>> with current tools and kernels. >>> >>> The N310 uses a 7000-series part (7100) which should be compatible >>> with this. >>> >>> Which kernel and which xilinx tools are you using? >>> >>> Jim >>> >>> >>> >>>> On 8/8/19 1:36 PM, Munro, Robert M. wrote: >>>> Jim or others, >>>> >>>> Is there any further input or feedback on the source or resolution >>> of this issue? >>>> As it stands I do not believe that the OCPI runtime software will be >>> able to successfully load HDL assemblies on the N310 platform. My >>> familiarity with this codebase is limited and we would appreciate any >>> guidance available toward investigating or resolving this issue. >>>> Thank you, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: discuss >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open >>>> cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li >>>> sts.open> >>>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:di >>>> scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b >>>> ounces@li> sts.open> cpi.org<http://cpi.org><http://cpi.org>>> On >>>> Behalf Of >>> Munro, Robert M. >>>> Sent: Monday, August 5, 2019 10:49 AM >>>> To: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: >>>> jek@parera.com<mailto:jek@parera.com>>>>; >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d >>>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d >>>> <mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis >>>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> Jim, >>>> >>>> The given block of code is not the root cause of the issue because >>> the file system does not have a /dev/xdevcfg device. >>>> I suspect there is some functional code similar to this being >>> compiled incorrectly: >>>> #if (OCPI_ARCH_arm) >>>> // do xdevcfg loading stuff >>>> #else >>>> // do fpga_manager loading stuff #endif >>>> >>>> This error is being output at environment initialization as well as >>> when running hello.xml. I've attached a copy of the output from the >>> command 'ocpirun -v -l 20 hello.xml' for further investigation. >>>> From looking at the output I believe the system is calling >>> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is >>> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line >>> 484 which in turn is calling Driver::open in the same file at line >>> 499 which then outputs the 'When searching for PL device ...' error >>> at line 509. This then returns to the HdlDriver.cxx search() function >>> and outputs the '... got Zynq search error ...' error at line 141. >>>> This is an ARM device and I am not familiar enough with this >>> codebase to adjust precompiler definitions with confidence that some >>> other code section will become affected. >>>> Thanks, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: >>>> jek@parera.com<mailto:jek@parera.com>>>> >>>> Sent: Friday, August 2, 2019 4:27 PM >>>> To: Munro, Robert M. >>>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robe >>>> rt.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mai >>>> lto:Robe> >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@ >>>> jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di >>> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis >>> <mailto:dis> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ >>> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> That code is not integrated into the main line of OpenCPI yet, but >>> in that code there is: >>>> if (file_exists("/dev/xdevcfg")){ >>>> ret_val= load_xdevconfig(fileName, error); >>>> } >>>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ >>>> ret_val= load_fpga_manager(fileName, error); >>>> } >>>> So it looks like the presence of /dev/xdevcfg is what causes it to >>> look for /sys/class/xdevcfg/xdevcfg/device/prog_done >>>>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: >>>>> Are there any required flag or environment variable settings that >>> must be done before building the framework to utilize this >>> functionality? I have a platform built that is producing an output >>> during environment load: 'When searching for PL device '0': Can't >>> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: >>> file could not be open for reading' . This leads me to believe that >>> it is running the xdevcfg code still present in HdlBusDriver.cxx . >>>>> Use of the release_1.4_zynq_ultra branch and presence of the >>> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been >>> verified for the environment used to generate the executables. >>>>> Thanks, >>>>> Robert Munro >>>>> >>>>> -----Original Message----- >>>>> From: discuss >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope >>>>> ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l >>>>> ists.ope> >>>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto: >>>>> discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss- >>>>> bounces@l> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org>>> >>>>> On Behalf Of James Kulp >>>>> Sent: Friday, February 1, 2019 4:18 PM >>>>> To: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>> ZynqMP/UltraScale+ fpga_manager >>>>> >>>>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: >>>>>> in response to Point 1 here. We attempted using the code that on >>> the fly was attempting to convert from bit to bin. This did not work >>> on these newer platforms using fpga_manager so we decided to use the >>> vendor provided tools rather then to reverse engineer what was wrong >>> with the existing code. >>>>>> If changes need to be made to create more commonality and given >>> that all zynq and zynqMP platforms need a .bin file format wouldn't >>> it make more sense to just use .bin files rather then converting them >>> on the fly every time? >>>>> A sensible question for sure. >>>>> >>>>> When this was done originally, it was to avoid generating multiple >>> file formats all the time. .bit files are necessary for JTAG >>> loading, and .bin files are necessary for zynq hardware loading. >>>>> Even on Zynq, some debugging using jtag is done, and having that be >>> mostly transparent (using the same bitstream files) is convenient. >>>>> So we preferred having a single bitstream file (with metadata, >>>>> compressed) regardless of whether we were hardware loading or jtag >>> loading, zynq or virtex6 or spartan3, ISE or Vivado. >>>>> In fact, there was no reverse engineering the last time since both >>> formats, at the level we were operating at, were documented by Xilinx. >>>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a >>> single format of Xilinx bitstream files, including between ISE and >>> Vivado and all Xilinx FPGA types. >>>>> Of course it might make sense to switch things around the other way >>> and use .bin files uniformly and only convert to .bit format for JTAG >>> loading. >>>>> But since the core of the "conversion:" after a header, is just a >>> 32 bit endian swap, it doesn't matter much either way. >>>>> If it ends up being a truly nasty reverse engineering exercise now, >>> I would reconsider. >>>>>> ________________________________ >>>>>> From: discuss >>>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op >>>>>> encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@ >>>>>> lists.op> >>>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailt >>>>>> o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss >>>>>> -bounces@> lists.op> >>>>>> encpi.org<http://encpi.org><http://encpi.org>>> on behalf of James >>>>>> Kulp >>>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailt >>>>>> o:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><ma >>>>>> ilt o:jek@parera.com<mailto:o%3Ajek@parera.com>>>> >>>>>> Sent: Friday, February 1, 2019 3:27 PM >>>>>> To: >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>>> ZynqMP/UltraScale+ fpga_manager >>>>>> >>>>>> David, >>>>>> >>>>>> This is great work. Thanks. >>>>>> >>>>>> Since I believe the fpga manager stuff is really an attribute of >>>>>> later linux kernels, I don't think it is really a ZynqMP thing, >>>>>> but just a later linux kernel thing. >>>>>> I am currently bringing up the quite ancient zedboard using the >>>>>> latest Vivado and Xilinx linux and will try to use this same code. >>>>>> There are two thinigs I am looking into, now that you have done >>>>>> the hard work of getting to a working solution: >>>>>> >>>>>> 1. The bit vs bin thing existed with the old bitstream loader, but >>>>>> I think we were converting on the fly, so I will try that here. >>>>>> (To avoid the bin format altogether). >>>>>> >>>>>> 2. The fpga manager has entry points from kernel mode that allow >>>>>> you to inject the bitstream without making a copy in /lib/firmware. >>>>>> Since we already have a kernel driver, I will try to use that to >>>>>> avoid the whole /lib/firmware thing. >>>>>> >>>>>> So if those two things can work (no guarantees), the difference >>>>>> between old and new bitstream loading (and building) can be >>>>>> minimized and the loading process faster and requiring no extra >>>>>> file system >>> space. >>>>>> This will make merging easier too. >>>>>> >>>>>> We'll see. Thanks again to you and Geon for this important >>> contribution. >>>>>> Jim >>>>>> >>>>>> >>>>>>> On 2/1/19 3:12 PM, David Banks wrote: >>>>>>> OpenCPI users interested in ZynqMP fpga_manager, >>>>>>> >>>>>>> I know some users are interested in the OpenCPI's bitstream >>>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In >>>>>>> general, we followed the instructions at >>>>>>> >>> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream. >>>>>>> I will give a short explanation here: >>>>>>> >>>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at >>>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra >>> branch. >>>>>>> Firstly, all *fpga_manager *code is located in >>>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in >>>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vi >>>>>>> vado.mk><http://vi >>>>>>> vado.mk<http://vado.mk>> >>>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin >>>>>>> format. To see the changes made to these files for ZynqMP, you >>>>>>> can diff them between >>>>>>> *release_1.4* and *release_1.4_zynq_ultra*: >>>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch >>>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin >>>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- >>>>>>> runtime/hdl/src/HdlBusDriver.cxx >>>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://viv >>>>>>> ado.mk><http://viv >>>>>>> ado.mk<http://ado.mk>>; >>>>>>> >>>>>>> >>>>>>> The directly relevant functions are *load_fpga_manager()* and i >>>>>>> *sProgrammed()*. >>>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the >>>>>>> *.bin bitstream file and writes its contents to >>> /lib/firmware/opencpi_temp.bin. >>>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the >>>>>>> the filename "opencpi_temp.bin" to >>> /sys/class/fpga_manager/fpga0/firmware. >>>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and >>>>>>> the state of the fpga_manager >>>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed(). >>>>>>> >>>>>>> fpga_manager requires that bitstreams be in *.bin in order to >>>>>>> write them to the PL. So, some changes were made to >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> >>>>>>> to add a make rule for the *.bin file. This make rule (*BinName*) uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. >>>>>>> >>>>>>> Most of the relevant code is pasted or summarized below: >>>>>>> >>>>>>> *load_fpga_manager*(const char *fileName, >>>>>>> std::string >>> &error) { >>>>>>> if (!file_exists("/lib/firmware")){ >>>>>>> mkdir("/lib/firmware",0666); >>>>>>> } >>>>>>> int out_file = >>> creat("/lib/firmware/opencpi_temp.bin", 0666); >>>>>>> gzFile bin_file; >>>>>>> int bfd, zerror; >>>>>>> uint8_t buf[8*1024]; >>>>>>> >>>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) >>>>>>> OU::format(error, "Can't open bitstream file '%s' >>> for reading: >>>>>>> %s(%d)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) >>>>>>> OU::format(error, "Can't open compressed bin >>>>>>> file >>> '%s' for : >>>>>>> %s(%u)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> do { >>>>>>> uint8_t *bit_buf = buf; >>>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); >>>>>>> if (n < 0) >>>>>>> return true; >>>>>>> if (n & 3) >>>>>>> return OU::eformat(error, "Bitstream data in is '%s' >>>>>>> not a multiple of 3 bytes", >>>>>>> fileName); >>>>>>> if (n == 0) >>>>>>> break; >>>>>>> if (write(out_file, buf, n) <= 0) >>>>>>> return OU::eformat(error, >>>>>>> "Error writing to >>>>>>> /lib/firmware/opencpi_temp.bin for bin >>>>>>> loading: %s(%u/%d)", >>>>>>> strerror(errno), errno, n); >>>>>>> } while (1); >>>>>>> close(out_file); >>>>>>> std::ofstream >>> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); >>>>>>> std::ofstream >>>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); >>>>>>> fpga_flags << 0 << std::endl; >>>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; >>>>>>> >>>>>>> remove("/lib/firmware/opencpi_temp.bin"); >>>>>>> return isProgrammed(error) ? init(error) : true; >>>>>>> } >>>>>>> >>>>>>> The isProgrammed() function just checks whether or not the >>>>>>> fpga_manager state is 'operating' although we are not entirely >>>>>>> confident this is a robust check: >>>>>>> >>>>>>> *isProgrammed*(...) { >>>>>>> ... >>>>>>> const char *e = OU::file2String(val, >>>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); >>>>>>> ... >>>>>>> return val == "operating"; >>>>>>> } >>>>>>> >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>'s >>>>>>> *bin make-rule uses bootgen to convert bit to bin. This is >>>>>>> necessary in Vivado 2018.2, but in later versions you may be able >>>>>>> to directly generate the correct *.bin file via an option to >>> write_bitstream: >>>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) >>>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating >>>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". >>>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ >>>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo " [destination_device = pl] $(notdir $(call >>>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo "}" >> $$(call BifName,$1,$3,$6); >>>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir >>>>>>> $(call >>>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call >>>>>>> BinName,$1,$3,$6)) -w,bin) >>>>>>> >>>>>>> Hope this is useful! >>>>>>> >>>>>>> Regards, >>>>>>> David Banks >>>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geo >>>>>>> ntech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:d >>>>>>> banks@geo> >>>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dba >>>>>>> nks@geontech.com>>> >>>>>>> Geon Technologies, LLC >>>>>>> -------------- next part -------------- An HTML attachment was >>>>>>> scrubbed... >>>>>>> URL: >>>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att >>>>>>> ach m ents/20190201/4b49675d/attachment.html> >>>>>>> _______________________________________________ >>>>>>> discuss mailing list >>>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma >>>>>>> ilt >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org> >>>>>>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or >>>>>>> g>>> >>>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o >>>>>>> rg >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>>> -------------- next part -------------- An HTML attachment was >>>>>> scrubbed... >>>>>> URL: >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta >>>>>> chm e nts/20190201/64e4ea45/attachment.html> >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>>> -------------- next part -------------- An embedded and >>>> charset-unspecified text was scrubbed... >>>> Name: hello_n310_log_output.txt >>>> URL: >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm >>> e nts/20190805/d9b4f229/attachment.txt> >>>> _______________________________________________ >>>> discuss mailing list >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d >>>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d >>>> <mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis >>>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>> >> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discu >> ss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@ >> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190813/4516c872/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190829/b99ae3e0/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190905/0b9a1953/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
CH
Chris Hinkey
Fri, Sep 6, 2019 12:09 PM

iirc it gives clocks and interdictions on which axi ports are enabled but
not which direction is master (you would have to look up which register/bit
this is set by in the TRM).  i don't remember the axi ports being
configurable which side is the master but i very well might be mistake.

On Thu, Sep 5, 2019 at 7:38 PM James Kulp jek@parera.com wrote:

If you invoke the command with no arguments it tells you what it can do,
like most opencpi commands.  We mostly use it to find out how the FPGA
clocks are initialized.

On Sep 5, 2019, at 18:19, Munro, Robert M. Robert.Munro@jhuapl.edu

wrote:

Jim,

Does the ocpizynq utility list all the available interfaces that can

dumped?

Thanks,
Rob

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James

Kulp

Sent: Thursday, September 5, 2019 5:59 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+

fpga_manager

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL

sides of Zynq are controlled by registers written by the processor and

not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default

values that are not necessarily the right values for how OpenCPI uses the
PL/FPGA.

The ocpizynq utility program does dump out some of these registers, and

you could modify it pretty easily if you want to know what some other
registers are set to.

All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:
Chris,

Would this be the GP0 AXI slave or master registers that are being

accessed in this scenario?  I don’t believe these are configured in the
FSBL, but in the FPGA image.  This could indicate that a facility required
by the OCPI framework is not enabled in the FPGA image built into the N310
image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. Robert.Munro@jhuapl.edu
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing

axi_gp0's adress space a register directly on the FPGA.  i would suspect
that that something is wrong with how GP0 is setup from the fsbl in this
case.  I don't think anything would need to change on the opencpi software
side given that 7100 vs 7020 should be the same.

the information on all the register maps and where everything is

located is somewhere in the Xilinx Technical reference manual (be warned
this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <

Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70
00-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul
trascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq

parts with the external memory interface having '16-bit or 32-bit
interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has
'32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and
32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra

branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org<mailto:

discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.

Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on

this platform is a XC7Z100.  I purposefully did not pull in changes that I
believed were related to addressing.  I can double check the specifications
regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
Cc: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is

not hooked up quite right.

as an eraly part of the running process opencpi reads a register across

the control plan that contains ascii "OpenCPI(NULL)" and in your case you
are reading "CPI(NULL)Open"  this is given by the data in the error message

  • (sb 0x435049004f70656e).  this is the magic that the message is referring
    to it requires OpenCPI to be at address 0 of the control plane address
    space to proceed.

I think we ran into this problem and we decided it was because the bus

on the ultrascale was setup to be 32 bits and needed to be 64 bits for the
hdl that we implemented to work correctly.  remind me what platform you are
using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <

Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of

the file and going through the build process I am encountering a new error
when attempting to load HDL on the N310.  The fsk_filerw is being used as a
known good reference for this purpose.  The new sections of vivado.mk<
http://vivado.mk>http://vivado.mk were merged in to attempt building
the HDL using the framework, but it did not generate the .bin file when
using ocpidev build with the --hdl-assembly argument.  An attempt to
replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mk
manually while following the guidelines for generating a .bin from a .bit
from Xilinx documentation
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager
was taken.

The steps were:

  • generate a .bif file similar to the documentation's
    Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to
    vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image
    <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts

directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL

container as being available, and the hello application was able to run
successfully.  The command output contained ' HDL Device 'PL:0' responds,
but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) '
but the impact of this was not understood until attempting to load HDL.
When attempting to run the fsk_filerw from the ocpirun command it did not
appear to recognize the assembly when listing resources found in the output
and reported that suitable candidate for a HDL-implemented component was
not available.

The command 'ocpihdl load' was then attempted to force the loading of

the HDL assembly and the same '...OCCP signature: magic: ...' output
observed and finally ' Exiting for problem: error loading device pl:0:
Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of

the .bin file?  Is there any other software modification that is required
of the ocpi runtime code?  The diff patch of the modified 1.4
HdlBusDriver.cxx is attached to make sure that the required code
modifications are performed correctly.  The log output from the ocpihdl
load command is attached in case that can provide further insight regarding
performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org<mailto:

Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>; James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that

the #define could be overridden to provide the desired functionality for
the platform, but was not comfortable making the changes without proper
familiarity.  I will move forward by looking at the diff to the 1.4
mainline, make the appropriate modifications, and test with the modified
framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption

that if we are using fpga_manager we are also using ARCH=arm64.  This met
our needs as we only cared about the fpga manager on ultrascale devices at
the time.  We also made the assumption that the tools created a tarred bin
file instead of a bit file because we could not get the bit to bin
conversion working with the existing openCPI code (this might cause you
problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an
ifdef on line 226 where it will check the old driver done pin if it is
on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and

rebuild the framework, not this will cause other zynq based platforms(zed,
matchstiq etc..) to no longer work with this patch but maybe you don't care
for now while Jim tries to get this into the mainline in a more generic way.

there may be some similar patches you need to make to the same file but

the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline
can be seen here https://github.com/opencpi/opencpi/pull/17/files in case
you didn't already know.

hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:

On 8/12/19 9:37 AM, Munro, Robert M. wrote:
Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is
2018_2 and the kernel being used is generated via the N310 build
container using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools

and kernel is not yet supported in either the mainline of OpenCPI and the
third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has

the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be

supported in a patch from the mainline "soon", probably in a month, since
it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus

version of the fpga manager), but it does guarantee it working on the
latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:j
ek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:j
emailto:je
k@parera.commailto:k@parera.com>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<m
ailto:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<m
a ilto:jek@parera.commailto:ilto%3Ajek@parera.com>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robert<mai lto:Robert>
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl
.edumailto:Robert.Munro@jhuapl.edu>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailt
o:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dis
mailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:d
iscuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:di
mailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discus
s@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<m
ailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or
g><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org

*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools
(2019-1) and the xilinx linux kernel (4.19) without dealing with
ultrascale, which is intended to work with a baseline zed board, but
with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:
Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open
cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li
sts.open>
cpi.orghttp://cpi.org><mailto:discuss-bounces@lists.open<mailto:di
scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b
ounces@li> sts.open> cpi.orghttp://cpi.orghttp://cpi.org>> On
Behalf Of

Munro, Robert M.

Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff #endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line
499 which then outputs the 'When searching for PL device ...' error
at line 509. This then returns to the HdlDriver.cxx search() function
and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:
Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope
ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l
ists.ope>
ncpi.orghttp://ncpi.org><mailto:discuss-bounces@lists.ope<mailto:
discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-
bounces@l> ists.ope> ncpi.orghttp://ncpi.orghttp://ncpi.org>>
On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:

discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:
in response to Point 1 here.  We attempted using the code that on

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't
it make more sense to just use .bin files rather then converting them
on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG
loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op
encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@
lists.op>
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.op<mailt
o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss
-bounces@> lists.op>
encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James
Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailt
o:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<ma
ilt o:jek@parera.commailto:o%3Ajek@parera.com>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:
OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://vi
vado.mk><http://vi
vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://viv
ado.mk><http://viv
ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be

"operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk
to add a make rule for the *.bin file. This make rule (BinName)

uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName,

std::string

&error) {

           if (!file_exists("/lib/firmware")){

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin
file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

           std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val,

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's
*bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using

"bootgen".

        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
             echo "{" >> $$(call BifName,$1,$3,$6); \
             echo " [destination_device = pl] $(notdir $(call

BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geo
ntech.commailto:dbanks@geontech.com><mailto:dbanks@geo<mailto:d
banks@geo>
ntech.comhttp://ntech.com<mailto:dbanks@geontech.com<mailto:dba
nks@geontech.com>>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org
<mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or
g>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discu
ss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190813/4516c872/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190829/b99ae3e0/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190905/0b9a1953/attachment.html>


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

iirc it gives clocks and interdictions on which axi ports are enabled but not which direction is master (you would have to look up which register/bit this is set by in the TRM). i don't remember the axi ports being configurable which side is the master but i very well might be mistake. On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.com> wrote: > If you invoke the command with no arguments it tells you what it can do, > like most opencpi commands. We mostly use it to find out how the FPGA > clocks are initialized. > > > > On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edu> > wrote: > > > > Jim, > > > > Does the ocpizynq utility list all the available interfaces that can > dumped? > > > > Thanks, > > Rob > > > > -----Original Message----- > > From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James > Kulp > > Sent: Thursday, September 5, 2019 5:59 PM > > To: discuss@lists.opencpi.org > > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ > fpga_manager > > > > Hi Rob, > > > > Nearly all aspects of the boundary hardware between the PS and the PL > sides of Zynq are controlled by registers written by the processor and > > *not* in the FPGA bitstream. > > The FSBL does typically initialize these registers to some default > values that are not necessarily the right values for how OpenCPI uses the > PL/FPGA. > > The ocpizynq utility program does dump out some of these registers, and > you could modify it pretty easily if you want to know what some other > registers are set to. > > All these registers are pretty well documented in the Zynq TRM. > > > > Jim > > > >> On 9/5/19 5:47 PM, Munro, Robert M. wrote: > >> Chris, > >> > >> Would this be the GP0 AXI slave or master registers that are being > accessed in this scenario? I don’t believe these are configured in the > FSBL, but in the FPGA image. This could indicate that a facility required > by the OCPI framework is not enabled in the FPGA image built into the N310 > image. Is there a listing of the OCPI required FPGA facilities? > >> > >> Thanks, > >> Rob > >> > >> From: Chris Hinkey <chinkey@geontech.com> > >> Sent: Thursday, August 29, 2019 11:58 AM > >> To: Munro, Robert M. <Robert.Munro@jhuapl.edu> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> you are not accessing external memory in this case you are accessing > axi_gp0's adress space a register directly on the FPGA. i would suspect > that that something is wrong with how GP0 is setup from the fsbl in this > case. I don't think anything would need to change on the opencpi software > side given that 7100 vs 7020 should be the same. > >> the information on all the register maps and where everything is > located is somewhere in the Xilinx Technical reference manual (be warned > this is a very large document). > >> > >> On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. < > Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: > >> Chris, > >> > >> Looking at the Zynq and ZynqMP datasheets: > >> https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70 > >> 00-Overview.pdf > >> https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul > >> trascale-plus-overview.pdf > >> > >> It looks like the Z-7100 has the same memory interfaces as other Zynq > parts with the external memory interface having '16-bit or 32-bit > interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has > '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and > 32-bit interface to LPDDR4 memory' . > >> > >> Is it possible that other changes are needed from the 1.4_zynq_ultra > branch that I have not pulled in? > >> > >> Thanks, > >> Rob > >> > >> -----Original Message----- > >> From: discuss <discuss-bounces@lists.opencpi.org<mailto: > discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M. > >> Sent: Thursday, August 29, 2019 9:09 AM > >> To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> > >> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> Chris, > >> > >> Thanks for the information regarding the internals. The FPGA part on > this platform is a XC7Z100. I purposefully did not pull in changes that I > believed were related to addressing. I can double check the specifications > regarding address widths to verify it should be unchanged. > >> > >> Please let me know if there are any other changes or steps missed. > >> > >> Thanks, > >> Rob > >> > >> > >> From: Chris Hinkey > >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > >> tech.com<mailto:chinkey@geontech.com>>> > >> Date: Thursday, Aug 29, 2019, 8:05 AM > >> To: Munro, Robert M. > >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert > >> .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> > >> Cc: James Kulp > >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > >> k@parera.com>>>, > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di > >> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> It looks like you loaded something sucessfully but the control plan is > not hooked up quite right. > >> > >> as an eraly part of the running process opencpi reads a register across > the control plan that contains ascii "OpenCPI(NULL)" and in your case you > are reading "CPI(NULL)Open" this is given by the data in the error message > - (sb 0x435049004f70656e). this is the magic that the message is referring > to it requires OpenCPI to be at address 0 of the control plane address > space to proceed. > >> > >> I think we ran into this problem and we decided it was because the bus > on the ultrascale was setup to be 32 bits and needed to be 64 bits for the > hdl that we implemented to work correctly. remind me what platform you are > using is it a zynq ultrascale or 7000 series? > >> > >> On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. < > Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto: > Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: > >> Chris, > >> > >> After merging some sections of HdlBusDriver.cxx into the 1.4 version of > the file and going through the build process I am encountering a new error > when attempting to load HDL on the N310. The fsk_filerw is being used as a > known good reference for this purpose. The new sections of vivado.mk< > http://vivado.mk><http://vivado.mk> were merged in to attempt building > the HDL using the framework, but it did not generate the .bin file when > using ocpidev build with the --hdl-assembly argument. An attempt to > replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk> > manually while following the guidelines for generating a .bin from a .bit > from Xilinx documentation > https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager > was taken. > >> > >> The steps were: > >> - generate a .bif file similar to the documentation's > >> Full_Bitstream.bif using the correct filename > >> - run a bootgen command similar to > >> vivado.mk<http://vivado.mk><http://vivado.mk>: bootgen -image > >> <bif_filename> -arch zynq -o <bin_filename> -w > >> > >> This generated a .bin file as desired and was copied to the artifacts > directory in the ocpi folder structure. > >> > >> The built ocpi environment loaded successfully, recognizes the HDL > container as being available, and the hello application was able to run > successfully. The command output contained ' HDL Device 'PL:0' responds, > but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' > but the impact of this was not understood until attempting to load HDL. > When attempting to run the fsk_filerw from the ocpirun command it did not > appear to recognize the assembly when listing resources found in the output > and reported that suitable candidate for a HDL-implemented component was > not available. > >> > >> The command 'ocpihdl load' was then attempted to force the loading of > the HDL assembly and the same '...OCCP signature: magic: ...' output > observed and finally ' Exiting for problem: error loading device pl:0: > Magic numbers in admin space do not match'. > >> > >> Is there some other step that must be taken during the generation of > the .bin file? Is there any other software modification that is required > of the ocpi runtime code? The diff patch of the modified 1.4 > HdlBusDriver.cxx is attached to make sure that the required code > modifications are performed correctly. The log output from the ocpihdl > load command is attached in case that can provide further insight regarding > performance or required steps. > >> > >> Thanks, > >> Rob > >> > >> -----Original Message----- > >> From: discuss <discuss-bounces@lists.opencpi.org<mailto: > discuss-bounces@lists.opencpi.org><mailto: > discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> > On Behalf Of Munro, Robert M. > >> Sent: Tuesday, August 13, 2019 10:56 AM > >> To: Chris Hinkey > >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > >> tech.com<mailto:chinkey@geontech.com>>>; James Kulp > >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > >> k@parera.com>>> > >> Cc: > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> Chris, > >> > >> Thank you for your helpful response and insight. My thinking was that > the #define could be overridden to provide the desired functionality for > the platform, but was not comfortable making the changes without proper > familiarity. I will move forward by looking at the diff to the 1.4 > mainline, make the appropriate modifications, and test with the modified > framework on the N310. > >> > >> Thanks again for your help. > >> > >> Thanks, > >> Rob > >> > >> From: Chris Hinkey > >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > >> tech.com<mailto:chinkey@geontech.com>>> > >> Sent: Tuesday, August 13, 2019 10:02 AM > >> To: James Kulp > >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > >> k@parera.com>>> > >> Cc: Munro, Robert M. > >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert > >> .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>; > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> I think when I implemented this code I probably made the assumption > that if we are using fpga_manager we are also using ARCH=arm64. This met > our needs as we only cared about the fpga manager on ultrascale devices at > the time. We also made the assumption that the tools created a tarred bin > file instead of a bit file because we could not get the bit to bin > conversion working with the existing openCPI code (this might cause you > problems later when actually trying to load the fpga). > >> > >> The original problem you were running into is certainly because of an > >> ifdef on line 226 where it will check the old driver done pin if it is > >> on an arm and not an arm64 > >> > >> 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) > >> > >> to move forward for now you can change this line to an "#if 0" and > rebuild the framework, not this will cause other zynq based platforms(zed, > matchstiq etc..) to no longer work with this patch but maybe you don't care > for now while Jim tries to get this into the mainline in a more generic way. > >> there may be some similar patches you need to make to the same file but > the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline > can be seen here https://github.com/opencpi/opencpi/pull/17/files in case > you didn't already know. > >> hope this helps > >> > >> On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto: > jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: > jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto: > jek@parera.com>>>> wrote: > >>> On 8/12/19 9:37 AM, Munro, Robert M. wrote: > >>> Jim, > >>> > >>> This is the only branch with the modifications required for use with > >>> the FPGA Manager driver. This is required for use with the Linux > >>> kernel provided for the N310. The Xilinx toolset being used is > >>> 2018_2 and the kernel being used is generated via the N310 build > >>> container using v3.14.0.0 . > >> Ok. The default Xilinx kernel associated with 2018_2 is 4.14. > >> > >> I guess the bottom line is that this combination of platform and tools > and kernel is not yet supported in either the mainline of OpenCPI and the > third party branch you are trying to use. > >> > >> It is probably not a big problem, but someone has to debug it that has > the time and skills necessary to dig as deep as necessary. > >> > >> The fpga manager in the various later linux kernels will definitely be > supported in a patch from the mainline "soon", probably in a month, since > it is being actively worked. > >> > >> That does not guarantee functionality on your exact kernel (and thus > version of the fpga manager), but it does guarantee it working on the > latest Xilinx-supported kernel. > >> > >> Jim > >> > >> > >> > >> > >> > >> > >> > >>> Thanks, > >>> Robert Munro > >>> > >>> *From: *James Kulp > >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:j > >>> ek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:j > >>> e<mailto:je> > >>> k@parera.com<mailto:k@parera.com>>> > >>> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<m > >>> ailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><m > >>> a ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com>>>>> > >>> *Date: *Monday, Aug 12, 2019, 9:00 AM > >>> *To: *Munro, Robert M. > >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober > >>> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mai > >>> lto:Robert> > >>> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl > >>> .edu<mailto:Robert.Munro@jhuapl.edu>>> > >>> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailt > >>> o:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto > >>> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober > >>> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>, > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di > >>> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis > >>> <mailto:dis> > >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ > >>> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d > >>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di > >>> <mailto:di> > >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discus > >>> s@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><m > >>> ailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma > >>> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or > >>> g><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >>>>>>> > >>> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with > >>> ZynqMP/UltraScale+ fpga_manager > >>> > >>> I was a bit confused about your use of the "ultrascale" branch. > >>> So you are using a branch with two types of patches in it: one for > >>> later linux kernels with the fpga manager, and the other for the > >>> ultrascale chip itself. > >>> The N310 is not ultrascale, so we need to separate the two issues, > >>> which were not separated before. > >>> So its not really a surprise that the branch you are using is not yet > >>> happy with the system you are trying to run it on. > >>> > >>> I am working on a branch that simply updates the xilinx tools > >>> (2019-1) and the xilinx linux kernel (4.19) without dealing with > >>> ultrascale, which is intended to work with a baseline zed board, but > >>> with current tools and kernels. > >>> > >>> The N310 uses a 7000-series part (7100) which should be compatible > >>> with this. > >>> > >>> Which kernel and which xilinx tools are you using? > >>> > >>> Jim > >>> > >>> > >>> > >>>> On 8/8/19 1:36 PM, Munro, Robert M. wrote: > >>>> Jim or others, > >>>> > >>>> Is there any further input or feedback on the source or resolution > >>> of this issue? > >>>> As it stands I do not believe that the OCPI runtime software will be > >>> able to successfully load HDL assemblies on the N310 platform. My > >>> familiarity with this codebase is limited and we would appreciate any > >>> guidance available toward investigating or resolving this issue. > >>>> Thank you, > >>>> Robert Munro > >>>> > >>>> -----Original Message----- > >>>> From: discuss > >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open > >>>> cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li > >>>> sts.open> > >>>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:di > >>>> scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b > >>>> ounces@li> sts.open> cpi.org<http://cpi.org><http://cpi.org>>> On > >>>> Behalf Of > >>> Munro, Robert M. > >>>> Sent: Monday, August 5, 2019 10:49 AM > >>>> To: James Kulp > >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto: > jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: > >>>> jek@parera.com<mailto:jek@parera.com>>>>; > >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d > >>>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d > >>>> <mailto:d> > >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis > >>>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>> ZynqMP/UltraScale+ fpga_manager > >>>> Jim, > >>>> > >>>> The given block of code is not the root cause of the issue because > >>> the file system does not have a /dev/xdevcfg device. > >>>> I suspect there is some functional code similar to this being > >>> compiled incorrectly: > >>>> #if (OCPI_ARCH_arm) > >>>> // do xdevcfg loading stuff > >>>> #else > >>>> // do fpga_manager loading stuff #endif > >>>> > >>>> This error is being output at environment initialization as well as > >>> when running hello.xml. I've attached a copy of the output from the > >>> command 'ocpirun -v -l 20 hello.xml' for further investigation. > >>>> From looking at the output I believe the system is calling > >>> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is > >>> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line > >>> 484 which in turn is calling Driver::open in the same file at line > >>> 499 which then outputs the 'When searching for PL device ...' error > >>> at line 509. This then returns to the HdlDriver.cxx search() function > >>> and outputs the '... got Zynq search error ...' error at line 141. > >>>> This is an ARM device and I am not familiar enough with this > >>> codebase to adjust precompiler definitions with confidence that some > >>> other code section will become affected. > >>>> Thanks, > >>>> Robert Munro > >>>> > >>>> -----Original Message----- > >>>> From: James Kulp > >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto: > jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: > >>>> jek@parera.com<mailto:jek@parera.com>>>> > >>>> Sent: Friday, August 2, 2019 4:27 PM > >>>> To: Munro, Robert M. > >>>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robe > >>>> rt.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mai > >>>> lto:Robe> > >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@ > >>>> jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di > >>> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis > >>> <mailto:dis> > >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ > >>> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>> ZynqMP/UltraScale+ fpga_manager > >>>> That code is not integrated into the main line of OpenCPI yet, but > >>> in that code there is: > >>>> if (file_exists("/dev/xdevcfg")){ > >>>> ret_val= load_xdevconfig(fileName, error); > >>>> } > >>>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ > >>>> ret_val= load_fpga_manager(fileName, error); > >>>> } > >>>> So it looks like the presence of /dev/xdevcfg is what causes it to > >>> look for /sys/class/xdevcfg/xdevcfg/device/prog_done > >>>>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: > >>>>> Are there any required flag or environment variable settings that > >>> must be done before building the framework to utilize this > >>> functionality? I have a platform built that is producing an output > >>> during environment load: 'When searching for PL device '0': Can't > >>> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: > >>> file could not be open for reading' . This leads me to believe that > >>> it is running the xdevcfg code still present in HdlBusDriver.cxx . > >>>>> Use of the release_1.4_zynq_ultra branch and presence of the > >>> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been > >>> verified for the environment used to generate the executables. > >>>>> Thanks, > >>>>> Robert Munro > >>>>> > >>>>> -----Original Message----- > >>>>> From: discuss > >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope > >>>>> ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l > >>>>> ists.ope> > >>>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto: > >>>>> discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss- > >>>>> bounces@l> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org>>> > >>>>> On Behalf Of James Kulp > >>>>> Sent: Friday, February 1, 2019 4:18 PM > >>>>> To: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>>>> ZynqMP/UltraScale+ fpga_manager > >>>>> > >>>>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: > >>>>>> in response to Point 1 here. We attempted using the code that on > >>> the fly was attempting to convert from bit to bin. This did not work > >>> on these newer platforms using fpga_manager so we decided to use the > >>> vendor provided tools rather then to reverse engineer what was wrong > >>> with the existing code. > >>>>>> If changes need to be made to create more commonality and given > >>> that all zynq and zynqMP platforms need a .bin file format wouldn't > >>> it make more sense to just use .bin files rather then converting them > >>> on the fly every time? > >>>>> A sensible question for sure. > >>>>> > >>>>> When this was done originally, it was to avoid generating multiple > >>> file formats all the time. .bit files are necessary for JTAG > >>> loading, and .bin files are necessary for zynq hardware loading. > >>>>> Even on Zynq, some debugging using jtag is done, and having that be > >>> mostly transparent (using the same bitstream files) is convenient. > >>>>> So we preferred having a single bitstream file (with metadata, > >>>>> compressed) regardless of whether we were hardware loading or jtag > >>> loading, zynq or virtex6 or spartan3, ISE or Vivado. > >>>>> In fact, there was no reverse engineering the last time since both > >>> formats, at the level we were operating at, were documented by Xilinx. > >>>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a > >>> single format of Xilinx bitstream files, including between ISE and > >>> Vivado and all Xilinx FPGA types. > >>>>> Of course it might make sense to switch things around the other way > >>> and use .bin files uniformly and only convert to .bit format for JTAG > >>> loading. > >>>>> But since the core of the "conversion:" after a header, is just a > >>> 32 bit endian swap, it doesn't matter much either way. > >>>>> If it ends up being a truly nasty reverse engineering exercise now, > >>> I would reconsider. > >>>>>> ________________________________ > >>>>>> From: discuss > >>>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op > >>>>>> encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@ > >>>>>> lists.op> > >>>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailt > >>>>>> o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss > >>>>>> -bounces@> lists.op> > >>>>>> encpi.org<http://encpi.org><http://encpi.org>>> on behalf of James > >>>>>> Kulp > >>>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailt > >>>>>> o:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><ma > >>>>>> ilt o:jek@parera.com<mailto:o%3Ajek@parera.com>>>> > >>>>>> Sent: Friday, February 1, 2019 3:27 PM > >>>>>> To: > >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail > >>>>>> to > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>>>>> ZynqMP/UltraScale+ fpga_manager > >>>>>> > >>>>>> David, > >>>>>> > >>>>>> This is great work. Thanks. > >>>>>> > >>>>>> Since I believe the fpga manager stuff is really an attribute of > >>>>>> later linux kernels, I don't think it is really a ZynqMP thing, > >>>>>> but just a later linux kernel thing. > >>>>>> I am currently bringing up the quite ancient zedboard using the > >>>>>> latest Vivado and Xilinx linux and will try to use this same code. > >>>>>> There are two thinigs I am looking into, now that you have done > >>>>>> the hard work of getting to a working solution: > >>>>>> > >>>>>> 1. The bit vs bin thing existed with the old bitstream loader, but > >>>>>> I think we were converting on the fly, so I will try that here. > >>>>>> (To avoid the bin format altogether). > >>>>>> > >>>>>> 2. The fpga manager has entry points from kernel mode that allow > >>>>>> you to inject the bitstream without making a copy in /lib/firmware. > >>>>>> Since we already have a kernel driver, I will try to use that to > >>>>>> avoid the whole /lib/firmware thing. > >>>>>> > >>>>>> So if those two things can work (no guarantees), the difference > >>>>>> between old and new bitstream loading (and building) can be > >>>>>> minimized and the loading process faster and requiring no extra > >>>>>> file system > >>> space. > >>>>>> This will make merging easier too. > >>>>>> > >>>>>> We'll see. Thanks again to you and Geon for this important > >>> contribution. > >>>>>> Jim > >>>>>> > >>>>>> > >>>>>>> On 2/1/19 3:12 PM, David Banks wrote: > >>>>>>> OpenCPI users interested in ZynqMP fpga_manager, > >>>>>>> > >>>>>>> I know some users are interested in the OpenCPI's bitstream > >>>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In > >>>>>>> general, we followed the instructions at > >>>>>>> > >>> > https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream > . > >>>>>>> I will give a short explanation here: > >>>>>>> > >>>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at > >>>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra > >>> branch. > >>>>>>> Firstly, all *fpga_manager *code is located in > >>>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in > >>>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vi > >>>>>>> vado.mk><http://vi > >>>>>>> vado.mk<http://vado.mk>> > >>>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin > >>>>>>> format. To see the changes made to these files for ZynqMP, you > >>>>>>> can diff them between > >>>>>>> *release_1.4* and *release_1.4_zynq_ultra*: > >>>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch > >>>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin > >>>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- > >>>>>>> runtime/hdl/src/HdlBusDriver.cxx > >>>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://viv > >>>>>>> ado.mk><http://viv > >>>>>>> ado.mk<http://ado.mk>>; > >>>>>>> > >>>>>>> > >>>>>>> The directly relevant functions are *load_fpga_manager()* and i > >>>>>>> *sProgrammed()*. > >>>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the > >>>>>>> *.bin bitstream file and writes its contents to > >>> /lib/firmware/opencpi_temp.bin. > >>>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the > >>>>>>> the filename "opencpi_temp.bin" to > >>> /sys/class/fpga_manager/fpga0/firmware. > >>>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and > >>>>>>> the state of the fpga_manager > >>>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be > "operating" in isProgrammed(). > >>>>>>> > >>>>>>> fpga_manager requires that bitstreams be in *.bin in order to > >>>>>>> write them to the PL. So, some changes were made to > >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> > >>>>>>> to add a make rule for the *.bin file. This make rule (*BinName*) > uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. > >>>>>>> > >>>>>>> Most of the relevant code is pasted or summarized below: > >>>>>>> > >>>>>>> *load_fpga_manager*(const char *fileName, > >>>>>>> std::string > >>> &error) { > >>>>>>> if (!file_exists("/lib/firmware")){ > >>>>>>> mkdir("/lib/firmware",0666); > >>>>>>> } > >>>>>>> int out_file = > >>> creat("/lib/firmware/opencpi_temp.bin", 0666); > >>>>>>> gzFile bin_file; > >>>>>>> int bfd, zerror; > >>>>>>> uint8_t buf[8*1024]; > >>>>>>> > >>>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) > >>>>>>> OU::format(error, "Can't open bitstream file '%s' > >>> for reading: > >>>>>>> %s(%d)", > >>>>>>> fileName, strerror(errno), errno); > >>>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) > >>>>>>> OU::format(error, "Can't open compressed bin > >>>>>>> file > >>> '%s' for : > >>>>>>> %s(%u)", > >>>>>>> fileName, strerror(errno), errno); > >>>>>>> do { > >>>>>>> uint8_t *bit_buf = buf; > >>>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); > >>>>>>> if (n < 0) > >>>>>>> return true; > >>>>>>> if (n & 3) > >>>>>>> return OU::eformat(error, "Bitstream data in is '%s' > >>>>>>> not a multiple of 3 bytes", > >>>>>>> fileName); > >>>>>>> if (n == 0) > >>>>>>> break; > >>>>>>> if (write(out_file, buf, n) <= 0) > >>>>>>> return OU::eformat(error, > >>>>>>> "Error writing to > >>>>>>> /lib/firmware/opencpi_temp.bin for bin > >>>>>>> loading: %s(%u/%d)", > >>>>>>> strerror(errno), errno, n); > >>>>>>> } while (1); > >>>>>>> close(out_file); > >>>>>>> std::ofstream > >>> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); > >>>>>>> std::ofstream > >>>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); > >>>>>>> fpga_flags << 0 << std::endl; > >>>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; > >>>>>>> > >>>>>>> remove("/lib/firmware/opencpi_temp.bin"); > >>>>>>> return isProgrammed(error) ? init(error) : true; > >>>>>>> } > >>>>>>> > >>>>>>> The isProgrammed() function just checks whether or not the > >>>>>>> fpga_manager state is 'operating' although we are not entirely > >>>>>>> confident this is a robust check: > >>>>>>> > >>>>>>> *isProgrammed*(...) { > >>>>>>> ... > >>>>>>> const char *e = OU::file2String(val, > >>>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); > >>>>>>> ... > >>>>>>> return val == "operating"; > >>>>>>> } > >>>>>>> > >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>'s > >>>>>>> *bin make-rule uses bootgen to convert bit to bin. This is > >>>>>>> necessary in Vivado 2018.2, but in later versions you may be able > >>>>>>> to directly generate the correct *.bin file via an option to > >>> write_bitstream: > >>>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) > >>>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating > >>>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using > "bootgen". > >>>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ > >>>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ > >>>>>>> echo " [destination_device = pl] $(notdir $(call > >>>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ > >>>>>>> echo "}" >> $$(call BifName,$1,$3,$6); > >>>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir > >>>>>>> $(call > >>>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call > >>>>>>> BinName,$1,$3,$6)) -w,bin) > >>>>>>> > >>>>>>> Hope this is useful! > >>>>>>> > >>>>>>> Regards, > >>>>>>> David Banks > >>>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geo > >>>>>>> ntech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:d > >>>>>>> banks@geo> > >>>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dba > >>>>>>> nks@geontech.com>>> > >>>>>>> Geon Technologies, LLC > >>>>>>> -------------- next part -------------- An HTML attachment was > >>>>>>> scrubbed... > >>>>>>> URL: > >>>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att > >>>>>>> ach m ents/20190201/4b49675d/attachment.html> > >>>>>>> _______________________________________________ > >>>>>>> discuss mailing list > >>>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma > >>>>>>> ilt > >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org> > >>>>>>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or > >>>>>>> g>>> > >>>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o > >>>>>>> rg > >>>>>> _______________________________________________ > >>>>>> discuss mailing list > >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail > >>>>>> to > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or > >>>>>> g > >>>>>> -------------- next part -------------- An HTML attachment was > >>>>>> scrubbed... > >>>>>> URL: > >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta > >>>>>> chm e nts/20190201/64e4ea45/attachment.html> > >>>>>> _______________________________________________ > >>>>>> discuss mailing list > >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail > >>>>>> to > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or > >>>>>> g > >>>>> _______________________________________________ > >>>>> discuss mailing list > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >>>> -------------- next part -------------- An embedded and > >>>> charset-unspecified text was scrubbed... > >>>> Name: hello_n310_log_output.txt > >>>> URL: > >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm > >>> e nts/20190805/d9b4f229/attachment.txt> > >>>> _______________________________________________ > >>>> discuss mailing list > >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d > >>>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d > >>>> <mailto:d> > >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis > >>>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >>> > >> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discu > >> ss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@ > >> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >> -------------- next part -------------- An HTML attachment was > >> scrubbed... > >> URL: > >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > >> nts/20190813/4516c872/attachment.html> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >> -------------- next part -------------- An HTML attachment was > >> scrubbed... > >> URL: > >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > >> nts/20190829/b99ae3e0/attachment.html> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >> -------------- next part -------------- An HTML attachment was > >> scrubbed... > >> URL: > >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > >> nts/20190905/0b9a1953/attachment.html> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > > > > > _______________________________________________ > > discuss mailing list > > discuss@lists.opencpi.org > > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >
MR
Munro, Robert M.
Fri, Sep 6, 2019 6:29 PM

It appears there was some resource contention in the GP0 area that was not allowing the OCPI system to set the OccpAdminRegister.magic value during operation.  If the FPGA load is prevented during the boot process, the magic number mismatch error is no longer output.  Looking through the TRM was showing no configuration settings for GP0 other than enabling communication using LVL_SHFTR_EN.

If there is some required configuration of AXI_GP0 configuration registers for OCPI to work properly, please provide it for future reference.

I am further trying to understand the code that was producing the output by looking at the at the source.  The magic number mismatch output on lines 82-83 look to be outputting a #define value in the (sb ….) area of the output and the ‘magic’ variable there is giving the value that was read from the OccpAdminRegister area.  Am I understanding the code correctly?  If so, that would indicate that the (sb …) number is the expected value and its orientation should be correct. https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82

Is the code being understood correctly that the OccpAdminRegister is a memory mapped data structure that is being written and read as part of the OCPI control interface?  If so, can you explain how and where this is being mapped and at what base address it should be expected?

After preventing the FPGA load at boot time the OCPI commands no longer output the magic number mismatch error.  The command ‘ocpihdl load <fsk_filerw bin>’ does not succeed however.  The output from the command states ‘Exiting for problem: error loading device pl:0’ .  What further steps can be taken to debug this?

I have also found that the FPGA loading approach coded in HdlBusDriver.cxx does not work on this platform when attempting to run manually.  The command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh: /sys/class/fpga_manager/fpga0/flags: Permission denied’ .  A manual command using the DT overlay approach does appear to work however.

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Friday, September 6, 2019 8:09 AM
To: James Kulp jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

iirc it gives clocks and interdictions on which axi ports are enabled but not which direction is master (you would have to look up which register/bit this is set by in the TRM).  i don't remember the axi ports being configurable which side is the master but i very well might be mistake.

On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.commailto:jek@parera.com> wrote:
If you invoke the command with no arguments it tells you what it can do, like most opencpi commands.  We mostly use it to find out how the FPGA clocks are initialized.

On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu> wrote:

Jim,

Does the ocpizynq utility list all the available interfaces that can dumped?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp
Sent: Thursday, September 5, 2019 5:59 PM
To: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and
not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA.
The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to.
All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:
Chris,

Would this be the GP0 AXI slave or master registers that are being accessed in this scenario?  I don’t believe these are configured in the FSBL, but in the FPGA image.  This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA.  i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case.  I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same.
the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>> wrote:
Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70
00-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul
trascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is a XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double check the specifications regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>
Cc: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is not hooked up quite right.

as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open"  this is given by the data in the error message - (sb 0x435049004f70656e).  this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  remind me what platform you are using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw is being used as a known good reference for this purpose.  The new sections of vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt to replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken.

The steps were:

  • generate a .bif file similar to the documentation's
    Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to
    vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image
    <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>; James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>
Cc:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>
Cc: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an
ifdef on line 226 where it will check the old driver done pin if it is
on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>>> wrote:

On 8/12/19 9:37 AM, Munro, Robert M. wrote:
Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is
2018_2 and the kernel being used is generated via the N310 build
container using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jmailto:j
ek@parera.commailto:ek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jmailto:j
e<mailto:jemailto:je>
k@parera.commailto:k@parera.com<mailto:k@parera.commailto:k@parera.com>>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<m
ailto:jek@parera.commailto:ailto%3Ajek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><m
a ilto:jek@parera.commailto:ilto%3Ajek@parera.com<mailto:ilto%3Ajek@parera.commailto:ilto%253Ajek@parera.com>>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robermailto:Rober
t.Munro@jhuapl.edumailto:t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>><mailto:Robertmailto:Robert<mai lto:Robert>
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Munro@jhuapl.edumailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuaplmailto:Robert.Munro@jhuapl
.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailt
o:Robert.Munro@jhuapl.edumailto:o%3ARobert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robermailto:Rober
t.Munro@jhuapl.edumailto:t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:dismailto:dis
<mailto:dismailto:dis>
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org><mailto:discuss@mailto:discuss@
lists.opencpi.orghttp://lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dmailto:d
iscuss@lists.opencpi.orgmailto:iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:dimailto:di
<mailto:dimailto:di>
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org><mailto:discusmailto:discus
s@lists.opencpi.orgmailto:s@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><m
ailto:discuss@lists.opencpi.orgmailto:ailto%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><ma
ilto:discuss@lists.opencpi.orgmailto:ilto%3Adiscuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.ormailto:ilto%253Adiscuss@lists.opencpi.or
g><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>

*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools
(2019-1) and the xilinx linux kernel (4.19) without dealing with
ultrascale, which is intended to work with a baseline zed board, but
with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:
Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Munro, Robert M.

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff #endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line
499 which then outputs the 'When searching for PL device ...' error
at line 509. This then returns to the HdlDriver.cxx search() function
and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:
Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't
it make more sense to just use .bin files rather then converting them
on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG
loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailto:discuss-bounces@mailto:discuss-bounces@
lists.op>
encpi.orghttp://encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailt
o:discuss-bounces@lists.op><mailto:discuss-bounces@mailto:discuss-bounces@<mailto:discussmailto:discuss
-bounces@> lists.op>
encpi.orghttp://encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James
Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailt
o:jek@parera.commailto:o%3Ajek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><ma
ilt o:jek@parera.commailto:o%3Ajek@parera.com<mailto:o%3Ajek@parera.commailto:o%253Ajek@parera.com>>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:
OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://vi
vado.mkhttp://vado.mk><http://vi
vado.mkhttp://vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://viv
ado.mkhttp://ado.mk><http://viv
ado.mkhttp://ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk
to add a make rule for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName,

std::string

&error) {

           if (!file_exists("/lib/firmware")){

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin
file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

           std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val,

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's
*bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
$(AT)echo all: > $$(call BifName,$1,$3,$6);
echo "{" >> $$(call BifName,$1,$3,$6);
echo " [destination_device = pl] $(notdir $(call
BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com><mailto:dbanks@geomailto:dbanks@geo
ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com>><mailto:dbanks@geomailto:dbanks@geo<mailto:dmailto:d
banks@geo>
ntech.comhttp://ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbamailto:dba
nks@geontech.commailto:nks@geontech.com>>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><ma
ilt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.orgmailto:o%253Adiscuss@lists.opencpi.org>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.ormailto:discuss@lists.opencpi.or
g>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g

-------------- next part -------------- An embedded and
charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL:

<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm
e nts/20190805/d9b4f229/attachment.txt>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:discumailto:discu
ss@lists.opencpi.orgmailto:ss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discuss@mailto:discuss@
lists.opencpi.orghttp://lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190813/4516c872/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190829/b99ae3e0/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190905/0b9a1953/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

It appears there was some resource contention in the GP0 area that was not allowing the OCPI system to set the OccpAdminRegister.magic value during operation. If the FPGA load is prevented during the boot process, the magic number mismatch error is no longer output. Looking through the TRM was showing no configuration settings for GP0 other than enabling communication using LVL_SHFTR_EN. If there is some required configuration of AXI_GP0 configuration registers for OCPI to work properly, please provide it for future reference. I am further trying to understand the code that was producing the output by looking at the at the source. The magic number mismatch output on lines 82-83 look to be outputting a #define value in the (sb ….) area of the output and the ‘magic’ variable there is giving the value that was read from the OccpAdminRegister area. Am I understanding the code correctly? If so, that would indicate that the (sb …) number is the expected value and its orientation should be correct. https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82 Is the code being understood correctly that the OccpAdminRegister is a memory mapped data structure that is being written and read as part of the OCPI control interface? If so, can you explain how and where this is being mapped and at what base address it should be expected? After preventing the FPGA load at boot time the OCPI commands no longer output the magic number mismatch error. The command ‘ocpihdl load <fsk_filerw bin>’ does not succeed however. The output from the command states ‘Exiting for problem: error loading device pl:0’ . What further steps can be taken to debug this? I have also found that the FPGA loading approach coded in HdlBusDriver.cxx does not work on this platform when attempting to run manually. The command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh: /sys/class/fpga_manager/fpga0/flags: Permission denied’ . A manual command using the DT overlay approach does appear to work however. Thanks, Rob From: Chris Hinkey <chinkey@geontech.com> Sent: Friday, September 6, 2019 8:09 AM To: James Kulp <jek@parera.com> Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager iirc it gives clocks and interdictions on which axi ports are enabled but not which direction is master (you would have to look up which register/bit this is set by in the TRM). i don't remember the axi ports being configurable which side is the master but i very well might be mistake. On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.com<mailto:jek@parera.com>> wrote: If you invoke the command with no arguments it tells you what it can do, like most opencpi commands. We mostly use it to find out how the FPGA clocks are initialized. > On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: > > Jim, > > Does the ocpizynq utility list all the available interfaces that can dumped? > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>> On Behalf Of James Kulp > Sent: Thursday, September 5, 2019 5:59 PM > To: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > Hi Rob, > > Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and > *not* in the FPGA bitstream. > The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA. > The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to. > All these registers are pretty well documented in the Zynq TRM. > > Jim > >> On 9/5/19 5:47 PM, Munro, Robert M. wrote: >> Chris, >> >> Would this be the GP0 AXI slave or master registers that are being accessed in this scenario? I don’t believe these are configured in the FSBL, but in the FPGA image. This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image. Is there a listing of the OCPI required FPGA facilities? >> >> Thanks, >> Rob >> >> From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> >> Sent: Thursday, August 29, 2019 11:58 AM >> To: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA. i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case. I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same. >> the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document). >> >> On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: >> Chris, >> >> Looking at the Zynq and ZynqMP datasheets: >> https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70 >> 00-Overview.pdf >> https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul >> trascale-plus-overview.pdf >> >> It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' . >> >> Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in? >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M. >> Sent: Thursday, August 29, 2019 9:09 AM >> To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>> >> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thanks for the information regarding the internals. The FPGA part on this platform is a XC7Z100. I purposefully did not pull in changes that I believed were related to addressing. I can double check the specifications regarding address widths to verify it should be unchanged. >> >> Please let me know if there are any other changes or steps missed. >> >> Thanks, >> Rob >> >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>> >> Date: Thursday, Aug 29, 2019, 8:05 AM >> To: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> >> Cc: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>>, >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> It looks like you loaded something sucessfully but the control plan is not hooked up quite right. >> >> as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open" this is given by the data in the error message - (sb 0x435049004f70656e). this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed. >> >> I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly. remind me what platform you are using is it a zynq ultrascale or 7000 series? >> >> On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> wrote: >> Chris, >> >> After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310. The fsk_filerw is being used as a known good reference for this purpose. The new sections of vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument. An attempt to replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken. >> >> The steps were: >> - generate a .bif file similar to the documentation's >> Full_Bitstream.bif using the correct filename >> - run a bootgen command similar to >> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>: bootgen -image >> <bif_filename> -arch zynq -o <bin_filename> -w >> >> This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure. >> >> The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully. The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL. When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available. >> >> The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'. >> >> Is there some other step that must be taken during the generation of the .bin file? Is there any other software modification that is required of the ocpi runtime code? The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly. The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps. >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>>> On Behalf Of Munro, Robert M. >> Sent: Tuesday, August 13, 2019 10:56 AM >> To: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>>; James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>> >> Cc: >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thank you for your helpful response and insight. My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity. I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310. >> >> Thanks again for your help. >> >> Thanks, >> Rob >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>> >> Sent: Tuesday, August 13, 2019 10:02 AM >> To: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>> >> Cc: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64. This met our needs as we only cared about the fpga manager on ultrascale devices at the time. We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga). >> >> The original problem you were running into is certainly because of an >> ifdef on line 226 where it will check the old driver done pin if it is >> on an arm and not an arm64 >> >> 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) >> >> to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way. >> there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know. >> hope this helps >> >> On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>> wrote: >>> On 8/12/19 9:37 AM, Munro, Robert M. wrote: >>> Jim, >>> >>> This is the only branch with the modifications required for use with >>> the FPGA Manager driver. This is required for use with the Linux >>> kernel provided for the N310. The Xilinx toolset being used is >>> 2018_2 and the kernel being used is generated via the N310 build >>> container using v3.14.0.0 . >> Ok. The default Xilinx kernel associated with 2018_2 is 4.14. >> >> I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use. >> >> It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary. >> >> The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked. >> >> That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel. >> >> Jim >> >> >> >> >> >> >> >>> Thanks, >>> Robert Munro >>> >>> *From: *James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:j<mailto:j> >>> ek@parera.com<mailto:ek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:j<mailto:j> >>> e<mailto:je<mailto:je>> >>> k@parera.com<mailto:k@parera.com><mailto:k@parera.com<mailto:k@parera.com>>>> >>> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><m >>> ailto:jek@parera.com<mailto:ailto%3Ajek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><m >>> a ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com><mailto:ilto%3Ajek@parera.com<mailto:ilto%253Ajek@parera.com>>>>>> >>> *Date: *Monday, Aug 12, 2019, 9:00 AM >>> *To: *Munro, Robert M. >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Rober<mailto:Rober> >>> t.Munro@jhuapl.edu<mailto:t.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto:Robert<mailto:Robert><mai >>> lto:Robert> >>> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Munro@jhuapl.edu<mailto:Munro@jhuapl.edu>><mailto:Robert.Munro@jhuapl<mailto:Robert.Munro@jhuapl> >>> .edu<mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> >>> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailt >>> o:Robert.Munro@jhuapl.edu<mailto:o%3ARobert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto >>> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Rober<mailto:Rober> >>> t.Munro@jhuapl.edu<mailto:t.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>>, >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:dis<mailto:dis> >>> <mailto:dis<mailto:dis>> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >>> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:di<mailto:di> >>> <mailto:di<mailto:di>> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org>><mailto:discus<mailto:discus> >>> s@lists.opencpi.org<mailto:s@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><m >>> ailto:discuss@lists.opencpi.org<mailto:ailto%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><ma >>> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.org><mailto:ilto%3Adiscuss@lists.opencpi.or<mailto:ilto%253Adiscuss@lists.opencpi.or> >>> g><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >>>>>>> >>> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>> >>> I was a bit confused about your use of the "ultrascale" branch. >>> So you are using a branch with two types of patches in it: one for >>> later linux kernels with the fpga manager, and the other for the >>> ultrascale chip itself. >>> The N310 is not ultrascale, so we need to separate the two issues, >>> which were not separated before. >>> So its not really a surprise that the branch you are using is not yet >>> happy with the system you are trying to run it on. >>> >>> I am working on a branch that simply updates the xilinx tools >>> (2019-1) and the xilinx linux kernel (4.19) without dealing with >>> ultrascale, which is intended to work with a baseline zed board, but >>> with current tools and kernels. >>> >>> The N310 uses a 7000-series part (7100) which should be compatible >>> with this. >>> >>> Which kernel and which xilinx tools are you using? >>> >>> Jim >>> >>> >>> >>>> On 8/8/19 1:36 PM, Munro, Robert M. wrote: >>>> Jim or others, >>>> >>>> Is there any further input or feedback on the source or resolution >>> of this issue? >>>> As it stands I do not believe that the OCPI runtime software will be >>> able to successfully load HDL assemblies on the N310 platform. My >>> familiarity with this codebase is limited and we would appreciate any >>> guidance available toward investigating or resolving this issue. >>>> Thank you, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: discuss >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open> >>>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-bounces@li> >>>> sts.open> >>>> cpi.org<http://cpi.org><http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:di<mailto:di> >>>> scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-bounces@li><mailto:discuss-b<mailto:discuss-b> >>>> ounces@li> sts.open> cpi.org<http://cpi.org><http://cpi.org><http://cpi.org>>> On >>>> Behalf Of >>> Munro, Robert M. >>>> Sent: Monday, August 5, 2019 10:49 AM >>>> To: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: <mailto:%0b>>>>> jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>>; >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:d<mailto:d> >>>> <mailto:d<mailto:d>> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org>><mailto:dis<mailto:dis> >>>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> Jim, >>>> >>>> The given block of code is not the root cause of the issue because >>> the file system does not have a /dev/xdevcfg device. >>>> I suspect there is some functional code similar to this being >>> compiled incorrectly: >>>> #if (OCPI_ARCH_arm) >>>> // do xdevcfg loading stuff >>>> #else >>>> // do fpga_manager loading stuff #endif >>>> >>>> This error is being output at environment initialization as well as >>> when running hello.xml. I've attached a copy of the output from the >>> command 'ocpirun -v -l 20 hello.xml' for further investigation. >>>> From looking at the output I believe the system is calling >>> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is >>> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line >>> 484 which in turn is calling Driver::open in the same file at line >>> 499 which then outputs the 'When searching for PL device ...' error >>> at line 509. This then returns to the HdlDriver.cxx search() function >>> and outputs the '... got Zynq search error ...' error at line 141. >>>> This is an ARM device and I am not familiar enough with this >>> codebase to adjust precompiler definitions with confidence that some >>> other code section will become affected. >>>> Thanks, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: <mailto:%0b>>>>> jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>> >>>> Sent: Friday, August 2, 2019 4:27 PM >>>> To: Munro, Robert M. >>>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mailto:Robe> >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto:Robe<mailto:Robe><mai >>>> lto:Robe> >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu>><mailto:Robert.Munro@<mailto:Robert.Munro@> >>>> jhuapl.edu<http://jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>; >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:dis<mailto:dis> >>> <mailto:dis<mailto:dis>> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >>> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> That code is not integrated into the main line of OpenCPI yet, but >>> in that code there is: >>>> if (file_exists("/dev/xdevcfg")){ >>>> ret_val= load_xdevconfig(fileName, error); >>>> } >>>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ >>>> ret_val= load_fpga_manager(fileName, error); >>>> } >>>> So it looks like the presence of /dev/xdevcfg is what causes it to >>> look for /sys/class/xdevcfg/xdevcfg/device/prog_done >>>>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: >>>>> Are there any required flag or environment variable settings that >>> must be done before building the framework to utilize this >>> functionality? I have a platform built that is producing an output >>> during environment load: 'When searching for PL device '0': Can't >>> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: >>> file could not be open for reading' . This leads me to believe that >>> it is running the xdevcfg code still present in HdlBusDriver.cxx . >>>>> Use of the release_1.4_zynq_ultra branch and presence of the >>> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been >>> verified for the environment used to generate the executables. >>>>> Thanks, >>>>> Robert Munro >>>>> >>>>> -----Original Message----- >>>>> From: discuss >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope> >>>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-bounces@l> >>>>> ists.ope> >>>>> ncpi.org<http://ncpi.org><http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto: >>>>> discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-bounces@l><mailto:discuss-<mailto:discuss-> >>>>> bounces@l> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org><http://ncpi.org>>> >>>>> On Behalf Of James Kulp >>>>> Sent: Friday, February 1, 2019 4:18 PM >>>>> To: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>> ZynqMP/UltraScale+ fpga_manager >>>>> >>>>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: >>>>>> in response to Point 1 here. We attempted using the code that on >>> the fly was attempting to convert from bit to bin. This did not work >>> on these newer platforms using fpga_manager so we decided to use the >>> vendor provided tools rather then to reverse engineer what was wrong >>> with the existing code. >>>>>> If changes need to be made to create more commonality and given >>> that all zynq and zynqMP platforms need a .bin file format wouldn't >>> it make more sense to just use .bin files rather then converting them >>> on the fly every time? >>>>> A sensible question for sure. >>>>> >>>>> When this was done originally, it was to avoid generating multiple >>> file formats all the time. .bit files are necessary for JTAG >>> loading, and .bin files are necessary for zynq hardware loading. >>>>> Even on Zynq, some debugging using jtag is done, and having that be >>> mostly transparent (using the same bitstream files) is convenient. >>>>> So we preferred having a single bitstream file (with metadata, >>>>> compressed) regardless of whether we were hardware loading or jtag >>> loading, zynq or virtex6 or spartan3, ISE or Vivado. >>>>> In fact, there was no reverse engineering the last time since both >>> formats, at the level we were operating at, were documented by Xilinx. >>>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a >>> single format of Xilinx bitstream files, including between ISE and >>> Vivado and all Xilinx FPGA types. >>>>> Of course it might make sense to switch things around the other way >>> and use .bin files uniformly and only convert to .bit format for JTAG >>> loading. >>>>> But since the core of the "conversion:" after a header, is just a >>> 32 bit endian swap, it doesn't matter much either way. >>>>> If it ends up being a truly nasty reverse engineering exercise now, >>> I would reconsider. >>>>>> ________________________________ >>>>>> From: discuss >>>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op> >>>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss-bounces@> >>>>>> lists.op> >>>>>> encpi.org<http://encpi.org><http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailt >>>>>> o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss-bounces@><mailto:discuss<mailto:discuss> >>>>>> -bounces@> lists.op> >>>>>> encpi.org<http://encpi.org><http://encpi.org><http://encpi.org>>> on behalf of James >>>>>> Kulp >>>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailt >>>>>> o:jek@parera.com<mailto:o%3Ajek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><ma >>>>>> ilt o:jek@parera.com<mailto:o%3Ajek@parera.com><mailto:o%3Ajek@parera.com<mailto:o%253Ajek@parera.com>>>>> >>>>>> Sent: Friday, February 1, 2019 3:27 PM >>>>>> To: >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>>> ZynqMP/UltraScale+ fpga_manager >>>>>> >>>>>> David, >>>>>> >>>>>> This is great work. Thanks. >>>>>> >>>>>> Since I believe the fpga manager stuff is really an attribute of >>>>>> later linux kernels, I don't think it is really a ZynqMP thing, >>>>>> but just a later linux kernel thing. >>>>>> I am currently bringing up the quite ancient zedboard using the >>>>>> latest Vivado and Xilinx linux and will try to use this same code. >>>>>> There are two thinigs I am looking into, now that you have done >>>>>> the hard work of getting to a working solution: >>>>>> >>>>>> 1. The bit vs bin thing existed with the old bitstream loader, but >>>>>> I think we were converting on the fly, so I will try that here. >>>>>> (To avoid the bin format altogether). >>>>>> >>>>>> 2. The fpga manager has entry points from kernel mode that allow >>>>>> you to inject the bitstream without making a copy in /lib/firmware. >>>>>> Since we already have a kernel driver, I will try to use that to >>>>>> avoid the whole /lib/firmware thing. >>>>>> >>>>>> So if those two things can work (no guarantees), the difference >>>>>> between old and new bitstream loading (and building) can be >>>>>> minimized and the loading process faster and requiring no extra >>>>>> file system >>> space. >>>>>> This will make merging easier too. >>>>>> >>>>>> We'll see. Thanks again to you and Geon for this important >>> contribution. >>>>>> Jim >>>>>> >>>>>> >>>>>>> On 2/1/19 3:12 PM, David Banks wrote: >>>>>>> OpenCPI users interested in ZynqMP fpga_manager, >>>>>>> >>>>>>> I know some users are interested in the OpenCPI's bitstream >>>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In >>>>>>> general, we followed the instructions at >>>>>>> >>> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream. >>>>>>> I will give a short explanation here: >>>>>>> >>>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at >>>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra >>> branch. >>>>>>> Firstly, all *fpga_manager *code is located in >>>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in >>>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://vi >>>>>>> vado.mk<http://vado.mk>><http://vi >>>>>>> vado.mk<http://vado.mk><http://vado.mk>> >>>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin >>>>>>> format. To see the changes made to these files for ZynqMP, you >>>>>>> can diff them between >>>>>>> *release_1.4* and *release_1.4_zynq_ultra*: >>>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch >>>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin >>>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- >>>>>>> runtime/hdl/src/HdlBusDriver.cxx >>>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://viv >>>>>>> ado.mk<http://ado.mk>><http://viv >>>>>>> ado.mk<http://ado.mk><http://ado.mk>>; >>>>>>> >>>>>>> >>>>>>> The directly relevant functions are *load_fpga_manager()* and i >>>>>>> *sProgrammed()*. >>>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the >>>>>>> *.bin bitstream file and writes its contents to >>> /lib/firmware/opencpi_temp.bin. >>>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the >>>>>>> the filename "opencpi_temp.bin" to >>> /sys/class/fpga_manager/fpga0/firmware. >>>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and >>>>>>> the state of the fpga_manager >>>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed(). >>>>>>> >>>>>>> fpga_manager requires that bitstreams be in *.bin in order to >>>>>>> write them to the PL. So, some changes were made to >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk><http://vivado.mk> >>>>>>> to add a make rule for the *.bin file. This make rule (*BinName*) uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. >>>>>>> >>>>>>> Most of the relevant code is pasted or summarized below: >>>>>>> >>>>>>> *load_fpga_manager*(const char *fileName, >>>>>>> std::string >>> &error) { >>>>>>> if (!file_exists("/lib/firmware")){ >>>>>>> mkdir("/lib/firmware",0666); >>>>>>> } >>>>>>> int out_file = >>> creat("/lib/firmware/opencpi_temp.bin", 0666); >>>>>>> gzFile bin_file; >>>>>>> int bfd, zerror; >>>>>>> uint8_t buf[8*1024]; >>>>>>> >>>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) >>>>>>> OU::format(error, "Can't open bitstream file '%s' >>> for reading: >>>>>>> %s(%d)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) >>>>>>> OU::format(error, "Can't open compressed bin >>>>>>> file >>> '%s' for : >>>>>>> %s(%u)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> do { >>>>>>> uint8_t *bit_buf = buf; >>>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); >>>>>>> if (n < 0) >>>>>>> return true; >>>>>>> if (n & 3) >>>>>>> return OU::eformat(error, "Bitstream data in is '%s' >>>>>>> not a multiple of 3 bytes", >>>>>>> fileName); >>>>>>> if (n == 0) >>>>>>> break; >>>>>>> if (write(out_file, buf, n) <= 0) >>>>>>> return OU::eformat(error, >>>>>>> "Error writing to >>>>>>> /lib/firmware/opencpi_temp.bin for bin >>>>>>> loading: %s(%u/%d)", >>>>>>> strerror(errno), errno, n); >>>>>>> } while (1); >>>>>>> close(out_file); >>>>>>> std::ofstream >>> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); >>>>>>> std::ofstream >>>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); >>>>>>> fpga_flags << 0 << std::endl; >>>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; >>>>>>> >>>>>>> remove("/lib/firmware/opencpi_temp.bin"); >>>>>>> return isProgrammed(error) ? init(error) : true; >>>>>>> } >>>>>>> >>>>>>> The isProgrammed() function just checks whether or not the >>>>>>> fpga_manager state is 'operating' although we are not entirely >>>>>>> confident this is a robust check: >>>>>>> >>>>>>> *isProgrammed*(...) { >>>>>>> ... >>>>>>> const char *e = OU::file2String(val, >>>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); >>>>>>> ... >>>>>>> return val == "operating"; >>>>>>> } >>>>>>> >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk><http://vivado.mk>'s >>>>>>> *bin make-rule uses bootgen to convert bit to bin. This is >>>>>>> necessary in Vivado 2018.2, but in later versions you may be able >>>>>>> to directly generate the correct *.bin file via an option to >>> write_bitstream: >>>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) >>>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating >>>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". >>>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ >>>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo " [destination_device = pl] $(notdir $(call >>>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo "}" >> $$(call BifName,$1,$3,$6); >>>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir >>>>>>> $(call >>>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call >>>>>>> BinName,$1,$3,$6)) -w,bin) >>>>>>> >>>>>>> Hope this is useful! >>>>>>> >>>>>>> Regards, >>>>>>> David Banks >>>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:dbanks@geo> >>>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>>><mailto:dbanks@geo<mailto:dbanks@geo><mailto:d<mailto:d> >>>>>>> banks@geo> >>>>>>> ntech.com<http://ntech.com><http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dba<mailto:dba> >>>>>>> nks@geontech.com<mailto:nks@geontech.com>>>> >>>>>>> Geon Technologies, LLC >>>>>>> -------------- next part -------------- An HTML attachment was >>>>>>> scrubbed... >>>>>>> URL: >>>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att >>>>>>> ach m ents/20190201/4b49675d/attachment.html> >>>>>>> _______________________________________________ >>>>>>> discuss mailing list >>>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><ma >>>>>>> ilt >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:o%3Adiscuss@lists.opencpi.org<mailto:o%253Adiscuss@lists.opencpi.org>> >>>>>>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.or<mailto:discuss@lists.opencpi.or> >>>>>>> g>>> >>>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o >>>>>>> rg >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>>> -------------- next part -------------- An HTML attachment was >>>>>> scrubbed... >>>>>> URL: >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta >>>>>> chm e nts/20190201/64e4ea45/attachment.html> >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>>> -------------- next part -------------- An embedded and >>>> charset-unspecified text was scrubbed... >>>> Name: hello_n310_log_output.txt >>>> URL: >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm >>> e nts/20190805/d9b4f229/attachment.txt> >>>> _______________________________________________ >>>> discuss mailing list >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:d<mailto:d> >>>> <mailto:d<mailto:d>> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org>><mailto:dis<mailto:dis> >>>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>> >> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:discu<mailto:discu> >> ss@lists.opencpi.org<mailto:ss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190813/4516c872/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190829/b99ae3e0/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190905/0b9a1953/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org _______________________________________________ discuss mailing list discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
JK
James Kulp
Fri, Sep 6, 2019 7:12 PM
An HTML attachment was scrubbed... URL: <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachments/20190906/d57b3ba8/attachment.html>
CH
Chris Hinkey
Fri, Sep 6, 2019 7:19 PM

looks like i responded just to robert not to the discussion list, oops

The magic number is a register that is on the fpga and is set to ascii
"OpenCPI" (0x4f70656e435049) which is a 64 bit read across the axi bus and
it is the first address across the bus  this value is not set during
operation, it is a hardcoded register that is built-in to the bitfile.  it
is never being written to from the software side.

when I get to this point on a new platform what I will do is step outside
of the framework and use the tool devmem to read the physical address
across the fpga boundary.
this will ensure that the 64 bits here are returning the correct "magic".

I expect that your problems with the Exiting for problem: error loading
device pl:0’ .  What further steps can be taken to debug this? error is
from something with fpga_manager not acting correctly.  we had a similar
problem recently and we forced opencpi to think that it always had an
opencpi bitstream loaded.

I would check that /sys/class/fpga_manager/fpga0/state is returning
something reasonable.  the fact that you don't have  permissions to access
the other parts of the fpga_manger is suspect as well, might be related.

On Fri, Sep 6, 2019 at 3:12 PM James Kulp jek@parera.com wrote:

On 9/6/19 2:29 PM, Munro, Robert M. wrote:

It appears there was some resource contention in the GP0 area that was not
allowing the OCPI system to set the OccpAdminRegister.magic value during
operation.

This value is hardwired into the OpenCPI  FPGA load and is read-only.

The software memory maps the area where the GP0 interface is a slave to
the CPU at:

   const uint32_t GP0_PADDR = 0x40000000;

And reads from offset 0.

It firsts the 8 byte MAGIC a byte at a time, then if it matches, it reads
again as a single 64 bit value to make sure 32-bit endian swapping is
right.  If both those reads from offset 0 at 0x40000000 come back correct,
it believes that the FPGA is loaded with an OpenCPI bitstream.

If there is already a non-OpenCPI bitstream loaded, we expect that this
test will fail.

If this failure occurs when there is a bitstream loaded, on Zynq, it still
assumes the FPGA is available for subsequent loading.

If the FPGA load is prevented during the boot process, the magic number
mismatch error is no longer output.  Looking through the TRM was showing no
configuration settings for GP0 other than enabling communication using
LVL_SHFTR_EN.

If there is some required configuration of AXI_GP0 configuration registers
for OCPI to work properly, please provide it for future reference.

I will check this.

I am further trying to understand the code that was producing the output
by looking at the at the source.  The magic number mismatch output on lines
82-83 look to be outputting a #define value in the (sb ….) area of the
output and the ‘magic’ variable there is giving the value that was read
from the OccpAdminRegister area.  Am I understanding the code correctly?
If so, that would indicate that the (sb …) number is the expected value and
its orientation should be correct.
https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82

Is the code being understood correctly that the OccpAdminRegister is a
memory mapped data structure that is being written and read as part of the
OCPI control interface?  If so, can you explain how and where this is being
mapped and at what base address it should be expected?

See above - it is never written.

After preventing the FPGA load at boot time the OCPI commands no longer
output the magic number mismatch error.  The command ‘ocpihdl load
<fsk_filerw bin>’ does not succeed however.  The output from the command
states ‘Exiting for problem: error loading device pl:0’ .  What further
steps can be taken to debug this?

What FPGA load at boot time are you refering to? The native manufacturer's
bitstream?

AFAIK OpenCPI has no "boot time FPGA load".

What you appear to be debugginig is the Geon ultrascale fpga manager
loading code on a non-ultrascale Zynq.

I should have this particular function running on a zedboard Zynq next
week.

I have also found that the FPGA loading approach coded in HdlBusDriver.cxx
does not work on this platform when attempting to run manually.  The
command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh:
/sys/class/fpga_manager/fpga0/flags: Permission denied’ .  A manual command
using the DT overlay approach does appear to work however.

I'm sorry you are the guinea pig on this particular configuration.

The reason we did not immediately integrate the Geon code into OpenCPI is
that it was taking two steps (fpga manager + ultra-scale) at once and we
needed to take them one step at a time.  We are taking that first step,
unfortunately not on a schedule that helps you.

Jim

Thanks,

Rob

From: Chris Hinkey chinkey@geontech.com chinkey@geontech.com
Sent: Friday, September 6, 2019 8:09 AM
To: James Kulp jek@parera.com jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edu Robert.Munro@jhuapl.edu;
discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

iirc it gives clocks and interdictions on which axi ports are enabled but
not which direction is master (you would have to look up which register/bit
this is set by in the TRM).  i don't remember the axi ports being
configurable which side is the master but i very well might be mistake.

On Thu, Sep 5, 2019 at 7:38 PM James Kulp jek@parera.com wrote:

If you invoke the command with no arguments it tells you what it can do,
like most opencpi commands.  We mostly use it to find out how the FPGA
clocks are initialized.

On Sep 5, 2019, at 18:19, Munro, Robert M. Robert.Munro@jhuapl.edu

wrote:

Jim,

Does the ocpizynq utility list all the available interfaces that can

dumped?

Thanks,
Rob

-----Original Message-----
From: discuss discuss-bounces@lists.opencpi.org On Behalf Of James

Kulp

Sent: Thursday, September 5, 2019 5:59 PM
To: discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+

fpga_manager

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL

sides of Zynq are controlled by registers written by the processor and

not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default

values that are not necessarily the right values for how OpenCPI uses the
PL/FPGA.

The ocpizynq utility program does dump out some of these registers, and

you could modify it pretty easily if you want to know what some other
registers are set to.

All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:
Chris,

Would this be the GP0 AXI slave or master registers that are being

accessed in this scenario?  I don’t believe these are configured in the
FSBL, but in the FPGA image.  This could indicate that a facility required
by the OCPI framework is not enabled in the FPGA image built into the N310
image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. Robert.Munro@jhuapl.edu
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing

axi_gp0's adress space a register directly on the FPGA.  i would suspect
that that something is wrong with how GP0 is setup from the fsbl in this
case.  I don't think anything would need to change on the opencpi software
side given that 7100 vs 7020 should be the same.

the information on all the register maps and where everything is

located is somewhere in the Xilinx Technical reference manual (be warned
this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <

Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70
00-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul
trascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq

parts with the external memory interface having '16-bit or 32-bit
interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has
'32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and
32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra

branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org<mailto:

discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.

Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on

this platform is a XC7Z100.  I purposefully did not pull in changes that I
believed were related to addressing.  I can double check the specifications
regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>
Cc: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is

not hooked up quite right.

as an eraly part of the running process opencpi reads a register across

the control plan that contains ascii "OpenCPI(NULL)" and in your case you
are reading "CPI(NULL)Open"  this is given by the data in the error message

  • (sb 0x435049004f70656e).  this is the magic that the message is referring
    to it requires OpenCPI to be at address 0 of the control plane address
    space to proceed.

I think we ran into this problem and we decided it was because the bus

on the ultrascale was setup to be 32 bits and needed to be 64 bits for the
hdl that we implemented to work correctly.  remind me what platform you are
using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <

Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of

the file and going through the build process I am encountering a new error
when attempting to load HDL on the N310.  The fsk_filerw is being used as a
known good reference for this purpose.  The new sections of vivado.mk<
http://vivado.mk>http://vivado.mk were merged in to attempt building
the HDL using the framework, but it did not generate the .bin file when
using ocpidev build with the --hdl-assembly argument.  An attempt to
replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mk
manually while following the guidelines for generating a .bin from a .bit
from Xilinx documentation
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager
was taken.

The steps were:

  • generate a .bif file similar to the documentation's
    Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to
    vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image
    <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts

directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL

container as being available, and the hello application was able to run
successfully.  The command output contained ' HDL Device 'PL:0' responds,
but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) '
but the impact of this was not understood until attempting to load HDL.
When attempting to run the fsk_filerw from the ocpirun command it did not
appear to recognize the assembly when listing resources found in the output
and reported that suitable candidate for a HDL-implemented component was
not available.

The command 'ocpihdl load' was then attempted to force the loading of

the HDL assembly and the same '...OCCP signature: magic: ...' output
observed and finally ' Exiting for problem: error loading device pl:0:
Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of

the .bin file?  Is there any other software modification that is required
of the ocpi runtime code?  The diff patch of the modified 1.4
HdlBusDriver.cxx is attached to make sure that the required code
modifications are performed correctly.  The log output from the ocpihdl
load command is attached in case that can provide further insight regarding
performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.org<mailto:

Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>; James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that

the #define could be overridden to provide the desired functionality for
the platform, but was not comfortable making the changes without proper
familiarity.  I will move forward by looking at the diff to the 1.4
mainline, make the appropriate modifications, and test with the modified
framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geon
tech.commailto:chinkey@geontech.com>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:je
k@parera.com>>>
Cc: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert
.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption

that if we are using fpga_manager we are also using ARCH=arm64.  This met
our needs as we only cared about the fpga manager on ultrascale devices at
the time.  We also made the assumption that the tools created a tarred bin
file instead of a bit file because we could not get the bit to bin
conversion working with the existing openCPI code (this might cause you
problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an
ifdef on line 226 where it will check the old driver done pin if it is
on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and

rebuild the framework, not this will cause other zynq based platforms(zed,
matchstiq etc..) to no longer work with this patch but maybe you don't care
for now while Jim tries to get this into the mainline in a more generic way.

there may be some similar patches you need to make to the same file but

the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline
can be seen here https://github.com/opencpi/opencpi/pull/17/files in case
you didn't already know.

hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:

On 8/12/19 9:37 AM, Munro, Robert M. wrote:
Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is
2018_2 and the kernel being used is generated via the N310 build
container using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools

and kernel is not yet supported in either the mainline of OpenCPI and the
third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has

the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be

supported in a patch from the mainline "soon", probably in a month, since
it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus

version of the fpga manager), but it does guarantee it working on the
latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:j
ek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:j
emailto:je
k@parera.commailto:k@parera.com>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<m
ailto:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<m
a ilto:jek@parera.commailto:ilto%3Ajek@parera.com>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robert<mai lto:Robert>
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl
.edumailto:Robert.Munro@jhuapl.edu>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailt
o:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Rober
t.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:di
scuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dis
mailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:d
iscuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:di
mailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discus
s@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<m
ailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or
g><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org

*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools
(2019-1) and the xilinx linux kernel (4.19) without dealing with
ultrascale, which is intended to work with a baseline zed board, but
with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:
Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Thank you,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open
cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li
sts.open>
cpi.orghttp://cpi.org><mailto:discuss-bounces@lists.open<mailto:di
scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b
ounces@li> sts.open> cpi.orghttp://cpi.orghttp://cpi.org>> On
Behalf Of

Munro, Robert M.

Sent: Monday, August 5, 2019 10:49 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff #endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line
499 which then outputs the 'When searching for PL device ...' error
at line 509. This then returns to the HdlDriver.cxx search() function
and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Thanks,
Robert Munro

-----Original Message-----
From: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailto:

Sent: Friday, August 2, 2019 4:27 PM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robe
rt.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robe<mai lto:Robe>
rt.Munro@jhuapl.edumailto:rt.Munro@jhuapl.edu<mailto:Robert.Munro@
jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>;

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:
Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

Thanks,
Robert Munro

-----Original Message-----
From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope
ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l
ists.ope>
ncpi.orghttp://ncpi.org><mailto:discuss-bounces@lists.ope<mailto:
discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-
bounces@l> ists.ope> ncpi.orghttp://ncpi.orghttp://ncpi.org>>
On Behalf Of James Kulp
Sent: Friday, February 1, 2019 4:18 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:

discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

On 2/1/19 3:37 PM, Chris Hinkey wrote:
in response to Point 1 here.  We attempted using the code that on

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't
it make more sense to just use .bin files rather then converting them
on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG
loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op
encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@
lists.op>
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.op<mailt
o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss
-bounces@> lists.op>
encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James
Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.com<mailt
o:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<ma
ilt o:jek@parera.commailto:o%3Ajek@parera.com>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:
OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://vi
vado.mk><http://vi
vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mk<http://viv
ado.mk><http://viv
ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be

"operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk
to add a make rule for the *.bin file. This make rule (BinName)

uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName,

std::string

&error) {

           if (!file_exists("/lib/firmware")){

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin
file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

           std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val,

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's
*bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using

"bootgen".

        $(AT)echo all: > $$(call BifName,$1,$3,$6); \
             echo "{" >> $$(call BifName,$1,$3,$6); \
             echo " [destination_device = pl] $(notdir $(call

BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geo
ntech.commailto:dbanks@geontech.com><mailto:dbanks@geo<mailto:d
banks@geo>
ntech.comhttp://ntech.com<mailto:dbanks@geontech.com<mailto:dba
nks@geontech.com>>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><ma
ilt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org
<mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or
g>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailt
o:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discu
ss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@
lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190813/4516c872/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:dis
cuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190829/b99ae3e0/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190905/0b9a1953/attachment.html>


discuss mailing list
discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

looks like i responded just to robert not to the discussion list, oops The magic number is a register that is on the fpga and is set to ascii "OpenCPI" (0x4f70656e435049) which is a 64 bit read across the axi bus and it is the first address across the bus this value is not set during operation, it is a hardcoded register that is built-in to the bitfile. it is never being written to from the software side. when I get to this point on a new platform what I will do is step outside of the framework and use the tool devmem to read the physical address across the fpga boundary. this will ensure that the 64 bits here are returning the correct "magic". I expect that your problems with the Exiting for problem: error loading device pl:0’ . What further steps can be taken to debug this? error is from something with fpga_manager not acting correctly. we had a similar problem recently and we forced opencpi to think that it always had an opencpi bitstream loaded. I would check that /sys/class/fpga_manager/fpga0/state is returning something reasonable. the fact that you don't have permissions to access the other parts of the fpga_manger is suspect as well, might be related. On Fri, Sep 6, 2019 at 3:12 PM James Kulp <jek@parera.com> wrote: > On 9/6/19 2:29 PM, Munro, Robert M. wrote: > > It appears there was some resource contention in the GP0 area that was not > allowing the OCPI system to set the OccpAdminRegister.magic value during > operation. > > This value is hardwired into the OpenCPI FPGA load and is read-only. > > The software memory maps the area where the GP0 interface is a slave to > the CPU at: > > const uint32_t GP0_PADDR = 0x40000000; > > And reads from offset 0. > > It firsts the 8 byte MAGIC a byte at a time, then if it matches, it reads > again as a single 64 bit value to make sure 32-bit endian swapping is > right. If both those reads from offset 0 at 0x40000000 come back correct, > it believes that the FPGA is loaded with an OpenCPI bitstream. > > If there is already a non-OpenCPI bitstream loaded, we expect that this > test will fail. > > If this failure occurs when there is a bitstream loaded, on Zynq, it still > assumes the FPGA is available for subsequent loading. > > > If the FPGA load is prevented during the boot process, the magic number > mismatch error is no longer output. Looking through the TRM was showing no > configuration settings for GP0 other than enabling communication using > LVL_SHFTR_EN. > > > > > If there is some required configuration of AXI_GP0 configuration registers > for OCPI to work properly, please provide it for future reference. > > I will check this. > > > > I am further trying to understand the code that was producing the output > by looking at the at the source. The magic number mismatch output on lines > 82-83 look to be outputting a #define value in the (sb ….) area of the > output and the ‘magic’ variable there is giving the value that was read > from the OccpAdminRegister area. Am I understanding the code correctly? > If so, that would indicate that the (sb …) number is the expected value and > its orientation should be correct. > https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82 > > > > Is the code being understood correctly that the OccpAdminRegister is a > memory mapped data structure that is being written and read as part of the > OCPI control interface? If so, can you explain how and where this is being > mapped and at what base address it should be expected? > > See above - it is never written. > > > > After preventing the FPGA load at boot time the OCPI commands no longer > output the magic number mismatch error. The command ‘ocpihdl load > <fsk_filerw bin>’ does not succeed however. The output from the command > states ‘Exiting for problem: error loading device pl:0’ . What further > steps can be taken to debug this? > > What FPGA load at boot time are you refering to? The native manufacturer's > bitstream? > > AFAIK OpenCPI has no "boot time FPGA load". > > What you appear to be debugginig is the Geon ultrascale fpga manager > loading code on a non-ultrascale Zynq. > > I should have this particular function running on a zedboard Zynq next > week. > > > > I have also found that the FPGA loading approach coded in HdlBusDriver.cxx > does not work on this platform when attempting to run manually. The > command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh: > /sys/class/fpga_manager/fpga0/flags: Permission denied’ . A manual command > using the DT overlay approach does appear to work however. > > I'm sorry you are the guinea pig on this particular configuration. > > The reason we did not immediately integrate the Geon code into OpenCPI is > that it was taking two steps (fpga manager + ultra-scale) at once and we > needed to take them one step at a time. We are taking that first step, > unfortunately not on a schedule that helps you. > > Jim > > > > > > Thanks, > > Rob > > > > > > *From:* Chris Hinkey <chinkey@geontech.com> <chinkey@geontech.com> > *Sent:* Friday, September 6, 2019 8:09 AM > *To:* James Kulp <jek@parera.com> <jek@parera.com> > *Cc:* Munro, Robert M. <Robert.Munro@jhuapl.edu> <Robert.Munro@jhuapl.edu>; > discuss@lists.opencpi.org > *Subject:* Re: [Discuss OpenCPI] Bitstream loading with > ZynqMP/UltraScale+ fpga_manager > > > > iirc it gives clocks and interdictions on which axi ports are enabled but > not which direction is master (you would have to look up which register/bit > this is set by in the TRM). i don't remember the axi ports being > configurable which side is the master but i very well might be mistake. > > > > On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.com> wrote: > > If you invoke the command with no arguments it tells you what it can do, > like most opencpi commands. We mostly use it to find out how the FPGA > clocks are initialized. > > > > On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edu> > wrote: > > > > Jim, > > > > Does the ocpizynq utility list all the available interfaces that can > dumped? > > > > Thanks, > > Rob > > > > -----Original Message----- > > From: discuss <discuss-bounces@lists.opencpi.org> On Behalf Of James > Kulp > > Sent: Thursday, September 5, 2019 5:59 PM > > To: discuss@lists.opencpi.org > > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ > fpga_manager > > > > Hi Rob, > > > > Nearly all aspects of the boundary hardware between the PS and the PL > sides of Zynq are controlled by registers written by the processor and > > *not* in the FPGA bitstream. > > The FSBL does typically initialize these registers to some default > values that are not necessarily the right values for how OpenCPI uses the > PL/FPGA. > > The ocpizynq utility program does dump out some of these registers, and > you could modify it pretty easily if you want to know what some other > registers are set to. > > All these registers are pretty well documented in the Zynq TRM. > > > > Jim > > > >> On 9/5/19 5:47 PM, Munro, Robert M. wrote: > >> Chris, > >> > >> Would this be the GP0 AXI slave or master registers that are being > accessed in this scenario? I don’t believe these are configured in the > FSBL, but in the FPGA image. This could indicate that a facility required > by the OCPI framework is not enabled in the FPGA image built into the N310 > image. Is there a listing of the OCPI required FPGA facilities? > >> > >> Thanks, > >> Rob > >> > >> From: Chris Hinkey <chinkey@geontech.com> > >> Sent: Thursday, August 29, 2019 11:58 AM > >> To: Munro, Robert M. <Robert.Munro@jhuapl.edu> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> you are not accessing external memory in this case you are accessing > axi_gp0's adress space a register directly on the FPGA. i would suspect > that that something is wrong with how GP0 is setup from the fsbl in this > case. I don't think anything would need to change on the opencpi software > side given that 7100 vs 7020 should be the same. > >> the information on all the register maps and where everything is > located is somewhere in the Xilinx Technical reference manual (be warned > this is a very large document). > >> > >> On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. < > Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: > >> Chris, > >> > >> Looking at the Zynq and ZynqMP datasheets: > >> https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70 > >> 00-Overview.pdf > >> https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul > >> trascale-plus-overview.pdf > >> > >> It looks like the Z-7100 has the same memory interfaces as other Zynq > parts with the external memory interface having '16-bit or 32-bit > interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has > '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and > 32-bit interface to LPDDR4 memory' . > >> > >> Is it possible that other changes are needed from the 1.4_zynq_ultra > branch that I have not pulled in? > >> > >> Thanks, > >> Rob > >> > >> -----Original Message----- > >> From: discuss <discuss-bounces@lists.opencpi.org<mailto: > discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M. > >> Sent: Thursday, August 29, 2019 9:09 AM > >> To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> > >> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> Chris, > >> > >> Thanks for the information regarding the internals. The FPGA part on > this platform is a XC7Z100. I purposefully did not pull in changes that I > believed were related to addressing. I can double check the specifications > regarding address widths to verify it should be unchanged. > >> > >> Please let me know if there are any other changes or steps missed. > >> > >> Thanks, > >> Rob > >> > >> > >> From: Chris Hinkey > >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > >> tech.com<mailto:chinkey@geontech.com>>> > >> Date: Thursday, Aug 29, 2019, 8:05 AM > >> To: Munro, Robert M. > >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert > >> .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> > >> Cc: James Kulp > >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > >> k@parera.com>>>, > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di > >> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> It looks like you loaded something sucessfully but the control plan is > not hooked up quite right. > >> > >> as an eraly part of the running process opencpi reads a register across > the control plan that contains ascii "OpenCPI(NULL)" and in your case you > are reading "CPI(NULL)Open" this is given by the data in the error message > - (sb 0x435049004f70656e). this is the magic that the message is referring > to it requires OpenCPI to be at address 0 of the control plane address > space to proceed. > >> > >> I think we ran into this problem and we decided it was because the bus > on the ultrascale was setup to be 32 bits and needed to be 64 bits for the > hdl that we implemented to work correctly. remind me what platform you are > using is it a zynq ultrascale or 7000 series? > >> > >> On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. < > Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto: > Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: > >> Chris, > >> > >> After merging some sections of HdlBusDriver.cxx into the 1.4 version of > the file and going through the build process I am encountering a new error > when attempting to load HDL on the N310. The fsk_filerw is being used as a > known good reference for this purpose. The new sections of vivado.mk< > http://vivado.mk><http://vivado.mk> were merged in to attempt building > the HDL using the framework, but it did not generate the .bin file when > using ocpidev build with the --hdl-assembly argument. An attempt to > replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk> > manually while following the guidelines for generating a .bin from a .bit > from Xilinx documentation > https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager > was taken. > >> > >> The steps were: > >> - generate a .bif file similar to the documentation's > >> Full_Bitstream.bif using the correct filename > >> - run a bootgen command similar to > >> vivado.mk<http://vivado.mk><http://vivado.mk>: bootgen -image > >> <bif_filename> -arch zynq -o <bin_filename> -w > >> > >> This generated a .bin file as desired and was copied to the artifacts > directory in the ocpi folder structure. > >> > >> The built ocpi environment loaded successfully, recognizes the HDL > container as being available, and the hello application was able to run > successfully. The command output contained ' HDL Device 'PL:0' responds, > but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' > but the impact of this was not understood until attempting to load HDL. > When attempting to run the fsk_filerw from the ocpirun command it did not > appear to recognize the assembly when listing resources found in the output > and reported that suitable candidate for a HDL-implemented component was > not available. > >> > >> The command 'ocpihdl load' was then attempted to force the loading of > the HDL assembly and the same '...OCCP signature: magic: ...' output > observed and finally ' Exiting for problem: error loading device pl:0: > Magic numbers in admin space do not match'. > >> > >> Is there some other step that must be taken during the generation of > the .bin file? Is there any other software modification that is required > of the ocpi runtime code? The diff patch of the modified 1.4 > HdlBusDriver.cxx is attached to make sure that the required code > modifications are performed correctly. The log output from the ocpihdl > load command is attached in case that can provide further insight regarding > performance or required steps. > >> > >> Thanks, > >> Rob > >> > >> -----Original Message----- > >> From: discuss <discuss-bounces@lists.opencpi.org<mailto: > discuss-bounces@lists.opencpi.org><mailto: > discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> > On Behalf Of Munro, Robert M. > >> Sent: Tuesday, August 13, 2019 10:56 AM > >> To: Chris Hinkey > >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > >> tech.com<mailto:chinkey@geontech.com>>>; James Kulp > >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > >> k@parera.com>>> > >> Cc: > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> Chris, > >> > >> Thank you for your helpful response and insight. My thinking was that > the #define could be overridden to provide the desired functionality for > the platform, but was not comfortable making the changes without proper > familiarity. I will move forward by looking at the diff to the 1.4 > mainline, make the appropriate modifications, and test with the modified > framework on the N310. > >> > >> Thanks again for your help. > >> > >> Thanks, > >> Rob > >> > >> From: Chris Hinkey > >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geon > >> tech.com<mailto:chinkey@geontech.com>>> > >> Sent: Tuesday, August 13, 2019 10:02 AM > >> To: James Kulp > >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:je > >> k@parera.com>>> > >> Cc: Munro, Robert M. > >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert > >> .Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>; > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > >> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >> ZynqMP/UltraScale+ fpga_manager > >> > >> I think when I implemented this code I probably made the assumption > that if we are using fpga_manager we are also using ARCH=arm64. This met > our needs as we only cared about the fpga manager on ultrascale devices at > the time. We also made the assumption that the tools created a tarred bin > file instead of a bit file because we could not get the bit to bin > conversion working with the existing openCPI code (this might cause you > problems later when actually trying to load the fpga). > >> > >> The original problem you were running into is certainly because of an > >> ifdef on line 226 where it will check the old driver done pin if it is > >> on an arm and not an arm64 > >> > >> 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) > >> > >> to move forward for now you can change this line to an "#if 0" and > rebuild the framework, not this will cause other zynq based platforms(zed, > matchstiq etc..) to no longer work with this patch but maybe you don't care > for now while Jim tries to get this into the mainline in a more generic way. > >> there may be some similar patches you need to make to the same file but > the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline > can be seen here https://github.com/opencpi/opencpi/pull/17/files in case > you didn't already know. > >> hope this helps > >> > >> On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto: > jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: > jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto: > jek@parera.com>>>> wrote: > >>> On 8/12/19 9:37 AM, Munro, Robert M. wrote: > >>> Jim, > >>> > >>> This is the only branch with the modifications required for use with > >>> the FPGA Manager driver. This is required for use with the Linux > >>> kernel provided for the N310. The Xilinx toolset being used is > >>> 2018_2 and the kernel being used is generated via the N310 build > >>> container using v3.14.0.0 . > >> Ok. The default Xilinx kernel associated with 2018_2 is 4.14. > >> > >> I guess the bottom line is that this combination of platform and tools > and kernel is not yet supported in either the mainline of OpenCPI and the > third party branch you are trying to use. > >> > >> It is probably not a big problem, but someone has to debug it that has > the time and skills necessary to dig as deep as necessary. > >> > >> The fpga manager in the various later linux kernels will definitely be > supported in a patch from the mainline "soon", probably in a month, since > it is being actively worked. > >> > >> That does not guarantee functionality on your exact kernel (and thus > version of the fpga manager), but it does guarantee it working on the > latest Xilinx-supported kernel. > >> > >> Jim > >> > >> > >> > >> > >> > >> > >> > >>> Thanks, > >>> Robert Munro > >>> > >>> *From: *James Kulp > >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:j > >>> ek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:j > >>> e<mailto:je> > >>> k@parera.com<mailto:k@parera.com>>> > >>> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<m > >>> ailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><m > >>> a ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com>>>>> > >>> *Date: *Monday, Aug 12, 2019, 9:00 AM > >>> *To: *Munro, Robert M. > >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober > >>> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mai > >>> lto:Robert> > >>> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl > >>> .edu<mailto:Robert.Munro@jhuapl.edu>>> > >>> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailt > >>> o:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto > >>> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Rober > >>> t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>, > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di > >>> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis > >>> <mailto:dis> > >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ > >>> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d > >>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di > >>> <mailto:di> > >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discus > >>> s@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><m > >>> ailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma > >>> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.or > >>> g><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >>>>>>> > >>> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with > >>> ZynqMP/UltraScale+ fpga_manager > >>> > >>> I was a bit confused about your use of the "ultrascale" branch. > >>> So you are using a branch with two types of patches in it: one for > >>> later linux kernels with the fpga manager, and the other for the > >>> ultrascale chip itself. > >>> The N310 is not ultrascale, so we need to separate the two issues, > >>> which were not separated before. > >>> So its not really a surprise that the branch you are using is not yet > >>> happy with the system you are trying to run it on. > >>> > >>> I am working on a branch that simply updates the xilinx tools > >>> (2019-1) and the xilinx linux kernel (4.19) without dealing with > >>> ultrascale, which is intended to work with a baseline zed board, but > >>> with current tools and kernels. > >>> > >>> The N310 uses a 7000-series part (7100) which should be compatible > >>> with this. > >>> > >>> Which kernel and which xilinx tools are you using? > >>> > >>> Jim > >>> > >>> > >>> > >>>> On 8/8/19 1:36 PM, Munro, Robert M. wrote: > >>>> Jim or others, > >>>> > >>>> Is there any further input or feedback on the source or resolution > >>> of this issue? > >>>> As it stands I do not believe that the OCPI runtime software will be > >>> able to successfully load HDL assemblies on the N310 platform. My > >>> familiarity with this codebase is limited and we would appreciate any > >>> guidance available toward investigating or resolving this issue. > >>>> Thank you, > >>>> Robert Munro > >>>> > >>>> -----Original Message----- > >>>> From: discuss > >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.open > >>>> cpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@li > >>>> sts.open> > >>>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:di > >>>> scuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-b > >>>> ounces@li> sts.open> cpi.org<http://cpi.org><http://cpi.org>>> On > >>>> Behalf Of > >>> Munro, Robert M. > >>>> Sent: Monday, August 5, 2019 10:49 AM > >>>> To: James Kulp > >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto: > jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: > <%0b>>>>> jek@parera.com<mailto:jek@parera.com>>>>; > >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d > >>>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d > >>>> <mailto:d> > >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis > >>>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>> ZynqMP/UltraScale+ fpga_manager > >>>> Jim, > >>>> > >>>> The given block of code is not the root cause of the issue because > >>> the file system does not have a /dev/xdevcfg device. > >>>> I suspect there is some functional code similar to this being > >>> compiled incorrectly: > >>>> #if (OCPI_ARCH_arm) > >>>> // do xdevcfg loading stuff > >>>> #else > >>>> // do fpga_manager loading stuff #endif > >>>> > >>>> This error is being output at environment initialization as well as > >>> when running hello.xml. I've attached a copy of the output from the > >>> command 'ocpirun -v -l 20 hello.xml' for further investigation. > >>>> From looking at the output I believe the system is calling > >>> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is > >>> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line > >>> 484 which in turn is calling Driver::open in the same file at line > >>> 499 which then outputs the 'When searching for PL device ...' error > >>> at line 509. This then returns to the HdlDriver.cxx search() function > >>> and outputs the '... got Zynq search error ...' error at line 141. > >>>> This is an ARM device and I am not familiar enough with this > >>> codebase to adjust precompiler definitions with confidence that some > >>> other code section will become affected. > >>>> Thanks, > >>>> Robert Munro > >>>> > >>>> -----Original Message----- > >>>> From: James Kulp > >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto: > jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto: > <%0b>>>>> jek@parera.com<mailto:jek@parera.com>>>> > >>>> Sent: Friday, August 2, 2019 4:27 PM > >>>> To: Munro, Robert M. > >>>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robe > >>>> rt.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mai > >>>> lto:Robe> > >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@ > >>>> jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; > >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:di > >>> scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis > >>> <mailto:dis> > >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@ > >>> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>> ZynqMP/UltraScale+ fpga_manager > >>>> That code is not integrated into the main line of OpenCPI yet, but > >>> in that code there is: > >>>> if (file_exists("/dev/xdevcfg")){ > >>>> ret_val= load_xdevconfig(fileName, error); > >>>> } > >>>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ > >>>> ret_val= load_fpga_manager(fileName, error); > >>>> } > >>>> So it looks like the presence of /dev/xdevcfg is what causes it to > >>> look for /sys/class/xdevcfg/xdevcfg/device/prog_done > >>>>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: > >>>>> Are there any required flag or environment variable settings that > >>> must be done before building the framework to utilize this > >>> functionality? I have a platform built that is producing an output > >>> during environment load: 'When searching for PL device '0': Can't > >>> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: > >>> file could not be open for reading' . This leads me to believe that > >>> it is running the xdevcfg code still present in HdlBusDriver.cxx . > >>>>> Use of the release_1.4_zynq_ultra branch and presence of the > >>> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been > >>> verified for the environment used to generate the executables. > >>>>> Thanks, > >>>>> Robert Munro > >>>>> > >>>>> -----Original Message----- > >>>>> From: discuss > >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.ope > >>>>> ncpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@l > >>>>> ists.ope> > >>>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto: > >>>>> discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss- > >>>>> bounces@l> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org>>> > >>>>> On Behalf Of James Kulp > >>>>> Sent: Friday, February 1, 2019 4:18 PM > >>>>> To: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>>>> ZynqMP/UltraScale+ fpga_manager > >>>>> > >>>>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: > >>>>>> in response to Point 1 here. We attempted using the code that on > >>> the fly was attempting to convert from bit to bin. This did not work > >>> on these newer platforms using fpga_manager so we decided to use the > >>> vendor provided tools rather then to reverse engineer what was wrong > >>> with the existing code. > >>>>>> If changes need to be made to create more commonality and given > >>> that all zynq and zynqMP platforms need a .bin file format wouldn't > >>> it make more sense to just use .bin files rather then converting them > >>> on the fly every time? > >>>>> A sensible question for sure. > >>>>> > >>>>> When this was done originally, it was to avoid generating multiple > >>> file formats all the time. .bit files are necessary for JTAG > >>> loading, and .bin files are necessary for zynq hardware loading. > >>>>> Even on Zynq, some debugging using jtag is done, and having that be > >>> mostly transparent (using the same bitstream files) is convenient. > >>>>> So we preferred having a single bitstream file (with metadata, > >>>>> compressed) regardless of whether we were hardware loading or jtag > >>> loading, zynq or virtex6 or spartan3, ISE or Vivado. > >>>>> In fact, there was no reverse engineering the last time since both > >>> formats, at the level we were operating at, were documented by Xilinx. > >>>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a > >>> single format of Xilinx bitstream files, including between ISE and > >>> Vivado and all Xilinx FPGA types. > >>>>> Of course it might make sense to switch things around the other way > >>> and use .bin files uniformly and only convert to .bit format for JTAG > >>> loading. > >>>>> But since the core of the "conversion:" after a header, is just a > >>> 32 bit endian swap, it doesn't matter much either way. > >>>>> If it ends up being a truly nasty reverse engineering exercise now, > >>> I would reconsider. > >>>>>> ________________________________ > >>>>>> From: discuss > >>>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.op > >>>>>> encpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@ > >>>>>> lists.op> > >>>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailt > >>>>>> o:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss > >>>>>> -bounces@> lists.op> > >>>>>> encpi.org<http://encpi.org><http://encpi.org>>> on behalf of James > >>>>>> Kulp > >>>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailt > >>>>>> o:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><ma > >>>>>> ilt o:jek@parera.com<mailto:o%3Ajek@parera.com>>>> > >>>>>> Sent: Friday, February 1, 2019 3:27 PM > >>>>>> To: > >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail > >>>>>> to > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with > >>>>>> ZynqMP/UltraScale+ fpga_manager > >>>>>> > >>>>>> David, > >>>>>> > >>>>>> This is great work. Thanks. > >>>>>> > >>>>>> Since I believe the fpga manager stuff is really an attribute of > >>>>>> later linux kernels, I don't think it is really a ZynqMP thing, > >>>>>> but just a later linux kernel thing. > >>>>>> I am currently bringing up the quite ancient zedboard using the > >>>>>> latest Vivado and Xilinx linux and will try to use this same code. > >>>>>> There are two thinigs I am looking into, now that you have done > >>>>>> the hard work of getting to a working solution: > >>>>>> > >>>>>> 1. The bit vs bin thing existed with the old bitstream loader, but > >>>>>> I think we were converting on the fly, so I will try that here. > >>>>>> (To avoid the bin format altogether). > >>>>>> > >>>>>> 2. The fpga manager has entry points from kernel mode that allow > >>>>>> you to inject the bitstream without making a copy in /lib/firmware. > >>>>>> Since we already have a kernel driver, I will try to use that to > >>>>>> avoid the whole /lib/firmware thing. > >>>>>> > >>>>>> So if those two things can work (no guarantees), the difference > >>>>>> between old and new bitstream loading (and building) can be > >>>>>> minimized and the loading process faster and requiring no extra > >>>>>> file system > >>> space. > >>>>>> This will make merging easier too. > >>>>>> > >>>>>> We'll see. Thanks again to you and Geon for this important > >>> contribution. > >>>>>> Jim > >>>>>> > >>>>>> > >>>>>>> On 2/1/19 3:12 PM, David Banks wrote: > >>>>>>> OpenCPI users interested in ZynqMP fpga_manager, > >>>>>>> > >>>>>>> I know some users are interested in the OpenCPI's bitstream > >>>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In > >>>>>>> general, we followed the instructions at > >>>>>>> > >>> > https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream > . > >>>>>>> I will give a short explanation here: > >>>>>>> > >>>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at > >>>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra > >>> branch. > >>>>>>> Firstly, all *fpga_manager *code is located in > >>>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in > >>>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vi > >>>>>>> vado.mk><http://vi > >>>>>>> vado.mk<http://vado.mk>> > >>>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin > >>>>>>> format. To see the changes made to these files for ZynqMP, you > >>>>>>> can diff them between > >>>>>>> *release_1.4* and *release_1.4_zynq_ultra*: > >>>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch > >>>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin > >>>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- > >>>>>>> runtime/hdl/src/HdlBusDriver.cxx > >>>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://viv > >>>>>>> ado.mk><http://viv > >>>>>>> ado.mk<http://ado.mk>>; > >>>>>>> > >>>>>>> > >>>>>>> The directly relevant functions are *load_fpga_manager()* and i > >>>>>>> *sProgrammed()*. > >>>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the > >>>>>>> *.bin bitstream file and writes its contents to > >>> /lib/firmware/opencpi_temp.bin. > >>>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the > >>>>>>> the filename "opencpi_temp.bin" to > >>> /sys/class/fpga_manager/fpga0/firmware. > >>>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and > >>>>>>> the state of the fpga_manager > >>>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be > "operating" in isProgrammed(). > >>>>>>> > >>>>>>> fpga_manager requires that bitstreams be in *.bin in order to > >>>>>>> write them to the PL. So, some changes were made to > >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> > >>>>>>> to add a make rule for the *.bin file. This make rule (*BinName*) > uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. > >>>>>>> > >>>>>>> Most of the relevant code is pasted or summarized below: > >>>>>>> > >>>>>>> *load_fpga_manager*(const char *fileName, > >>>>>>> std::string > >>> &error) { > >>>>>>> if (!file_exists("/lib/firmware")){ > >>>>>>> mkdir("/lib/firmware",0666); > >>>>>>> } > >>>>>>> int out_file = > >>> creat("/lib/firmware/opencpi_temp.bin", 0666); > >>>>>>> gzFile bin_file; > >>>>>>> int bfd, zerror; > >>>>>>> uint8_t buf[8*1024]; > >>>>>>> > >>>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) > >>>>>>> OU::format(error, "Can't open bitstream file '%s' > >>> for reading: > >>>>>>> %s(%d)", > >>>>>>> fileName, strerror(errno), errno); > >>>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) > >>>>>>> OU::format(error, "Can't open compressed bin > >>>>>>> file > >>> '%s' for : > >>>>>>> %s(%u)", > >>>>>>> fileName, strerror(errno), errno); > >>>>>>> do { > >>>>>>> uint8_t *bit_buf = buf; > >>>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); > >>>>>>> if (n < 0) > >>>>>>> return true; > >>>>>>> if (n & 3) > >>>>>>> return OU::eformat(error, "Bitstream data in is '%s' > >>>>>>> not a multiple of 3 bytes", > >>>>>>> fileName); > >>>>>>> if (n == 0) > >>>>>>> break; > >>>>>>> if (write(out_file, buf, n) <= 0) > >>>>>>> return OU::eformat(error, > >>>>>>> "Error writing to > >>>>>>> /lib/firmware/opencpi_temp.bin for bin > >>>>>>> loading: %s(%u/%d)", > >>>>>>> strerror(errno), errno, n); > >>>>>>> } while (1); > >>>>>>> close(out_file); > >>>>>>> std::ofstream > >>> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); > >>>>>>> std::ofstream > >>>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); > >>>>>>> fpga_flags << 0 << std::endl; > >>>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; > >>>>>>> > >>>>>>> remove("/lib/firmware/opencpi_temp.bin"); > >>>>>>> return isProgrammed(error) ? init(error) : true; > >>>>>>> } > >>>>>>> > >>>>>>> The isProgrammed() function just checks whether or not the > >>>>>>> fpga_manager state is 'operating' although we are not entirely > >>>>>>> confident this is a robust check: > >>>>>>> > >>>>>>> *isProgrammed*(...) { > >>>>>>> ... > >>>>>>> const char *e = OU::file2String(val, > >>>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); > >>>>>>> ... > >>>>>>> return val == "operating"; > >>>>>>> } > >>>>>>> > >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>'s > >>>>>>> *bin make-rule uses bootgen to convert bit to bin. This is > >>>>>>> necessary in Vivado 2018.2, but in later versions you may be able > >>>>>>> to directly generate the correct *.bin file via an option to > >>> write_bitstream: > >>>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) > >>>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating > >>>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using > "bootgen". > >>>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ > >>>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ > >>>>>>> echo " [destination_device = pl] $(notdir $(call > >>>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ > >>>>>>> echo "}" >> $$(call BifName,$1,$3,$6); > >>>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir > >>>>>>> $(call > >>>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call > >>>>>>> BinName,$1,$3,$6)) -w,bin) > >>>>>>> > >>>>>>> Hope this is useful! > >>>>>>> > >>>>>>> Regards, > >>>>>>> David Banks > >>>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geo > >>>>>>> ntech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:d > >>>>>>> banks@geo> > >>>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dba > >>>>>>> nks@geontech.com>>> > >>>>>>> Geon Technologies, LLC > >>>>>>> -------------- next part -------------- An HTML attachment was > >>>>>>> scrubbed... > >>>>>>> URL: > >>>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att > >>>>>>> ach m ents/20190201/4b49675d/attachment.html> > >>>>>>> _______________________________________________ > >>>>>>> discuss mailing list > >>>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><ma > >>>>>>> ilt > >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org> > >>>>>>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.or > >>>>>>> g>>> > >>>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o > >>>>>>> rg > >>>>>> _______________________________________________ > >>>>>> discuss mailing list > >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail > >>>>>> to > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or > >>>>>> g > >>>>>> -------------- next part -------------- An HTML attachment was > >>>>>> scrubbed... > >>>>>> URL: > >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta > >>>>>> chm e nts/20190201/64e4ea45/attachment.html> > >>>>>> _______________________________________________ > >>>>>> discuss mailing list > >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mail > >>>>>> to > >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailt > >>>>>> o:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or > >>>>>> g > >>>>> _______________________________________________ > >>>>> discuss mailing list > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto: > >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >>>> -------------- next part -------------- An embedded and > >>>> charset-unspecified text was scrubbed... > >>>> Name: hello_n310_log_output.txt > >>>> URL: > >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm > >>> e nts/20190805/d9b4f229/attachment.txt> > >>>> _______________________________________________ > >>>> discuss mailing list > >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:d > >>>> iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d > >>>> <mailto:d> > >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:dis > >>>> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >>> > >> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discu > >> ss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@ > >> lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >> -------------- next part -------------- An HTML attachment was > >> scrubbed... > >> URL: > >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > >> nts/20190813/4516c872/attachment.html> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:dis > >> cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >> -------------- next part -------------- An HTML attachment was > >> scrubbed... > >> URL: > >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > >> nts/20190829/b99ae3e0/attachment.html> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > >> -------------- next part -------------- An HTML attachment was > >> scrubbed... > >> URL: > >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme > >> nts/20190905/0b9a1953/attachment.html> > >> _______________________________________________ > >> discuss mailing list > >> discuss@lists.opencpi.org > >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > > > > > _______________________________________________ > > discuss mailing list > > discuss@lists.opencpi.org > > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > >
MR
Munro, Robert M.
Mon, Sep 9, 2019 8:22 PM

The FPGA load that can be prevented at boot time is the vendor’s FPGA build.

My suspicion is that the ‘ocpihdl load …’ call was not able to successfully load the FPGA.  When it subsequently attempted reading the magic number it was getting an incorrect value because the vendor’s FPGA load was already loaded during the boot process.  When preventing the vendor’s FPGA load during boot the magic number mismatch was no longer being output but the software was reporting the load was unsuccessful.

Does an application that does not require FPGA load such as hello.xml is run, does the system attempt to load anything to the FPGA?  I noticed that it was outputting the magic number mismatch when the vendor’s FPGA build was loaded in this case as well.

I am now looking into what is required to use the DTO loading approach for this platform.

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.com
Sent: Friday, September 6, 2019 3:20 PM
To: James Kulp jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edu; discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

looks like i responded just to robert not to the discussion list, oops

The magic number is a register that is on the fpga and is set to ascii "OpenCPI" (0x4f70656e435049) which is a 64 bit read across the axi bus and it is the first address across the bus  this value is not set during operation, it is a hardcoded register that is built-in to the bitfile.  it is never being written to from the software side.

when I get to this point on a new platform what I will do is step outside of the framework and use the tool devmem to read the physical address across the fpga boundary.
this will ensure that the 64 bits here are returning the correct "magic".

I expect that your problems with the Exiting for problem: error loading device pl:0’ .  What further steps can be taken to debug this? error is from something with fpga_manager not acting correctly.  we had a similar problem recently and we forced opencpi to think that it always had an opencpi bitstream loaded.
I would check that /sys/class/fpga_manager/fpga0/state is returning something reasonable.  the fact that you don't have  permissions to access the other parts of the fpga_manger is suspect as well, might be related.

On Fri, Sep 6, 2019 at 3:12 PM James Kulp <jek@parera.commailto:jek@parera.com> wrote:
On 9/6/19 2:29 PM, Munro, Robert M. wrote:
It appears there was some resource contention in the GP0 area that was not allowing the OCPI system to set the OccpAdminRegister.magic value during operation.

This value is hardwired into the OpenCPI  FPGA load and is read-only.

The software memory maps the area where the GP0 interface is a slave to the CPU at:

  const uint32_t GP0_PADDR = 0x40000000;

And reads from offset 0.

It firsts the 8 byte MAGIC a byte at a time, then if it matches, it reads again as a single 64 bit value to make sure 32-bit endian swapping is right.  If both those reads from offset 0 at 0x40000000 come back correct, it believes that the FPGA is loaded with an OpenCPI bitstream.

If there is already a non-OpenCPI bitstream loaded, we expect that this test will fail.

If this failure occurs when there is a bitstream loaded, on Zynq, it still assumes the FPGA is available for subsequent loading.

If the FPGA load is prevented during the boot process, the magic number mismatch error is no longer output.  Looking through the TRM was showing no configuration settings for GP0 other than enabling communication using LVL_SHFTR_EN.

If there is some required configuration of AXI_GP0 configuration registers for OCPI to work properly, please provide it for future reference.
I will check this.

I am further trying to understand the code that was producing the output by looking at the at the source.  The magic number mismatch output on lines 82-83 look to be outputting a #define value in the (sb ….) area of the output and the ‘magic’ variable there is giving the value that was read from the OccpAdminRegister area.  Am I understanding the code correctly?  If so, that would indicate that the (sb …) number is the expected value and its orientation should be correct. https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82

Is the code being understood correctly that the OccpAdminRegister is a memory mapped data structure that is being written and read as part of the OCPI control interface?  If so, can you explain how and where this is being mapped and at what base address it should be expected?
See above - it is never written.

After preventing the FPGA load at boot time the OCPI commands no longer output the magic number mismatch error.  The command ‘ocpihdl load <fsk_filerw bin>’ does not succeed however.  The output from the command states ‘Exiting for problem: error loading device pl:0’ .  What further steps can be taken to debug this?

What FPGA load at boot time are you refering to? The native manufacturer's bitstream?

AFAIK OpenCPI has no "boot time FPGA load".

What you appear to be debugginig is the Geon ultrascale fpga manager loading code on a non-ultrascale Zynq.

I should have this particular function running on a zedboard Zynq next week.

I have also found that the FPGA loading approach coded in HdlBusDriver.cxx does not work on this platform when attempting to run manually.  The command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh: /sys/class/fpga_manager/fpga0/flags: Permission denied’ .  A manual command using the DT overlay approach does appear to work however.

I'm sorry you are the guinea pig on this particular configuration.

The reason we did not immediately integrate the Geon code into OpenCPI is that it was taking two steps (fpga manager + ultra-scale) at once and we needed to take them one step at a time.  We are taking that first step, unfortunately not on a schedule that helps you.

Jim

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.commailto:chinkey@geontech.com
Sent: Friday, September 6, 2019 8:09 AM
To: James Kulp jek@parera.commailto:jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu; discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

iirc it gives clocks and interdictions on which axi ports are enabled but not which direction is master (you would have to look up which register/bit this is set by in the TRM).  i don't remember the axi ports being configurable which side is the master but i very well might be mistake.

On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.commailto:jek@parera.com> wrote:
If you invoke the command with no arguments it tells you what it can do, like most opencpi commands.  We mostly use it to find out how the FPGA clocks are initialized.

On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu> wrote:

Jim,

Does the ocpizynq utility list all the available interfaces that can dumped?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp
Sent: Thursday, September 5, 2019 5:59 PM
To: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and
not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA.
The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to.
All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:
Chris,

Would this be the GP0 AXI slave or master registers that are being accessed in this scenario?  I don’t believe these are configured in the FSBL, but in the FPGA image.  This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA.  i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case.  I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same.
the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>> wrote:
Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70
00-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul
trascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is a XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double check the specifications regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>
Cc: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is not hooked up quite right.

as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open"  this is given by the data in the error message - (sb 0x435049004f70656e).  this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  remind me what platform you are using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw is being used as a known good reference for this purpose.  The new sections of vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt to replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken.

The steps were:

  • generate a .bif file similar to the documentation's
    Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to
    vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image
    <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>; James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>
Cc:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>
Cc: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an
ifdef on line 226 where it will check the old driver done pin if it is
on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>>> wrote:

On 8/12/19 9:37 AM, Munro, Robert M. wrote:
Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is
2018_2 and the kernel being used is generated via the N310 build
container using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jmailto:j
ek@parera.commailto:ek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jmailto:j
e<mailto:jemailto:je>
k@parera.commailto:k@parera.com<mailto:k@parera.commailto:k@parera.com>>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<m
ailto:jek@parera.commailto:ailto%3Ajek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><m
a ilto:jek@parera.commailto:ilto%3Ajek@parera.com<mailto:ilto%3Ajek@parera.commailto:ilto%253Ajek@parera.com>>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robermailto:Rober
t.Munro@jhuapl.edumailto:t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>><mailto:Robertmailto:Robert<mai lto:Robert>
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Munro@jhuapl.edumailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuaplmailto:Robert.Munro@jhuapl
.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailt
o:Robert.Munro@jhuapl.edumailto:o%3ARobert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robermailto:Rober
t.Munro@jhuapl.edumailto:t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:dismailto:dis
<mailto:dismailto:dis>
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org><mailto:discuss@mailto:discuss@
lists.opencpi.orghttp://lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dmailto:d
iscuss@lists.opencpi.orgmailto:iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:dimailto:di
<mailto:dimailto:di>
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org><mailto:discusmailto:discus
s@lists.opencpi.orgmailto:s@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><m
ailto:discuss@lists.opencpi.orgmailto:ailto%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><ma
ilto:discuss@lists.opencpi.orgmailto:ilto%3Adiscuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.ormailto:ilto%253Adiscuss@lists.opencpi.or
g><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>

*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools
(2019-1) and the xilinx linux kernel (4.19) without dealing with
ultrascale, which is intended to work with a baseline zed board, but
with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:
Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Munro, Robert M.

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff #endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line
499 which then outputs the 'When searching for PL device ...' error
at line 509. This then returns to the HdlDriver.cxx search() function
and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:
Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't
it make more sense to just use .bin files rather then converting them
on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG
loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailto:discuss-bounces@mailto:discuss-bounces@
lists.op>
encpi.orghttp://encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailt
o:discuss-bounces@lists.opmailto:o:discuss-bounces@lists.op><mailto:discuss-bounces@mailto:discuss-bounces@<mailto:discussmailto:discuss
-bounces@> lists.op>
encpi.orghttp://encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James
Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailt
o:jek@parera.commailto:o%3Ajek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><ma
ilt o:jek@parera.commailto:o%3Ajek@parera.com<mailto:o%3Ajek@parera.commailto:o%253Ajek@parera.com>>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:
OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://vi
vado.mkhttp://vado.mk><http://vi
vado.mkhttp://vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://viv
ado.mkhttp://ado.mk><http://viv
ado.mkhttp://ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk
to add a make rule for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName,

std::string

&error) {

           if (!file_exists("/lib/firmware")){

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin
file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

           std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val,

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's
*bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
$(AT)echo all: > $$(call BifName,$1,$3,$6);
echo "{" >> $$(call BifName,$1,$3,$6);
echo " [destination_device = pl] $(notdir $(call
BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com><mailto:dbanks@geomailto:dbanks@geo
ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com>><mailto:dbanks@geomailto:dbanks@geo<mailto:dmailto:d
banks@geo>
ntech.comhttp://ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbamailto:dba
nks@geontech.commailto:nks@geontech.com>>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><ma
ilt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.orgmailto:o%253Adiscuss@lists.opencpi.org>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.ormailto:discuss@lists.opencpi.or
g>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g

-------------- next part -------------- An embedded and
charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL:

<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm
e nts/20190805/d9b4f229/attachment.txt>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:discumailto:discu
ss@lists.opencpi.orgmailto:ss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discuss@mailto:discuss@
lists.opencpi.orghttp://lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190813/4516c872/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190829/b99ae3e0/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190905/0b9a1953/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

The FPGA load that can be prevented at boot time is the vendor’s FPGA build. My suspicion is that the ‘ocpihdl load …’ call was not able to successfully load the FPGA. When it subsequently attempted reading the magic number it was getting an incorrect value because the vendor’s FPGA load was already loaded during the boot process. When preventing the vendor’s FPGA load during boot the magic number mismatch was no longer being output but the software was reporting the load was unsuccessful. Does an application that does not require FPGA load such as hello.xml is run, does the system attempt to load anything to the FPGA? I noticed that it was outputting the magic number mismatch when the vendor’s FPGA build was loaded in this case as well. I am now looking into what is required to use the DTO loading approach for this platform. Thanks, Rob From: Chris Hinkey <chinkey@geontech.com> Sent: Friday, September 6, 2019 3:20 PM To: James Kulp <jek@parera.com> Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager looks like i responded just to robert not to the discussion list, oops The magic number is a register that is on the fpga and is set to ascii "OpenCPI" (0x4f70656e435049) which is a 64 bit read across the axi bus and it is the first address across the bus this value is not set during operation, it is a hardcoded register that is built-in to the bitfile. it is never being written to from the software side. when I get to this point on a new platform what I will do is step outside of the framework and use the tool devmem to read the physical address across the fpga boundary. this will ensure that the 64 bits here are returning the correct "magic". I expect that your problems with the Exiting for problem: error loading device pl:0’ . What further steps can be taken to debug this? error is from something with fpga_manager not acting correctly. we had a similar problem recently and we forced opencpi to think that it always had an opencpi bitstream loaded. I would check that /sys/class/fpga_manager/fpga0/state is returning something reasonable. the fact that you don't have permissions to access the other parts of the fpga_manger is suspect as well, might be related. On Fri, Sep 6, 2019 at 3:12 PM James Kulp <jek@parera.com<mailto:jek@parera.com>> wrote: On 9/6/19 2:29 PM, Munro, Robert M. wrote: It appears there was some resource contention in the GP0 area that was not allowing the OCPI system to set the OccpAdminRegister.magic value during operation. This value is hardwired into the OpenCPI FPGA load and is read-only. The software memory maps the area where the GP0 interface is a slave to the CPU at: const uint32_t GP0_PADDR = 0x40000000; And reads from offset 0. It firsts the 8 byte MAGIC a byte at a time, then if it matches, it reads again as a single 64 bit value to make sure 32-bit endian swapping is right. If both those reads from offset 0 at 0x40000000 come back correct, it believes that the FPGA is loaded with an OpenCPI bitstream. If there is already a non-OpenCPI bitstream loaded, we expect that this test will fail. If this failure occurs when there is a bitstream loaded, on Zynq, it still assumes the FPGA is available for subsequent loading. If the FPGA load is prevented during the boot process, the magic number mismatch error is no longer output. Looking through the TRM was showing no configuration settings for GP0 other than enabling communication using LVL_SHFTR_EN. If there is some required configuration of AXI_GP0 configuration registers for OCPI to work properly, please provide it for future reference. I will check this. I am further trying to understand the code that was producing the output by looking at the at the source. The magic number mismatch output on lines 82-83 look to be outputting a #define value in the (sb ….) area of the output and the ‘magic’ variable there is giving the value that was read from the OccpAdminRegister area. Am I understanding the code correctly? If so, that would indicate that the (sb …) number is the expected value and its orientation should be correct. https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82 Is the code being understood correctly that the OccpAdminRegister is a memory mapped data structure that is being written and read as part of the OCPI control interface? If so, can you explain how and where this is being mapped and at what base address it should be expected? See above - it is never written. After preventing the FPGA load at boot time the OCPI commands no longer output the magic number mismatch error. The command ‘ocpihdl load <fsk_filerw bin>’ does not succeed however. The output from the command states ‘Exiting for problem: error loading device pl:0’ . What further steps can be taken to debug this? What FPGA load at boot time are you refering to? The native manufacturer's bitstream? AFAIK OpenCPI has no "boot time FPGA load". What you appear to be debugginig is the Geon ultrascale fpga manager loading code on a non-ultrascale Zynq. I should have this particular function running on a zedboard Zynq next week. I have also found that the FPGA loading approach coded in HdlBusDriver.cxx does not work on this platform when attempting to run manually. The command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh: /sys/class/fpga_manager/fpga0/flags: Permission denied’ . A manual command using the DT overlay approach does appear to work however. I'm sorry you are the guinea pig on this particular configuration. The reason we did not immediately integrate the Geon code into OpenCPI is that it was taking two steps (fpga manager + ultra-scale) at once and we needed to take them one step at a time. We are taking that first step, unfortunately not on a schedule that helps you. Jim Thanks, Rob From: Chris Hinkey <chinkey@geontech.com><mailto:chinkey@geontech.com> Sent: Friday, September 6, 2019 8:09 AM To: James Kulp <jek@parera.com><mailto:jek@parera.com> Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager iirc it gives clocks and interdictions on which axi ports are enabled but not which direction is master (you would have to look up which register/bit this is set by in the TRM). i don't remember the axi ports being configurable which side is the master but i very well might be mistake. On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.com<mailto:jek@parera.com>> wrote: If you invoke the command with no arguments it tells you what it can do, like most opencpi commands. We mostly use it to find out how the FPGA clocks are initialized. > On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: > > Jim, > > Does the ocpizynq utility list all the available interfaces that can dumped? > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>> On Behalf Of James Kulp > Sent: Thursday, September 5, 2019 5:59 PM > To: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > Hi Rob, > > Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and > *not* in the FPGA bitstream. > The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA. > The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to. > All these registers are pretty well documented in the Zynq TRM. > > Jim > >> On 9/5/19 5:47 PM, Munro, Robert M. wrote: >> Chris, >> >> Would this be the GP0 AXI slave or master registers that are being accessed in this scenario? I don’t believe these are configured in the FSBL, but in the FPGA image. This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image. Is there a listing of the OCPI required FPGA facilities? >> >> Thanks, >> Rob >> >> From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> >> Sent: Thursday, August 29, 2019 11:58 AM >> To: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA. i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case. I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same. >> the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document). >> >> On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: >> Chris, >> >> Looking at the Zynq and ZynqMP datasheets: >> https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70 >> 00-Overview.pdf >> https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul >> trascale-plus-overview.pdf >> >> It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' . >> >> Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in? >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M. >> Sent: Thursday, August 29, 2019 9:09 AM >> To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>> >> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thanks for the information regarding the internals. The FPGA part on this platform is a XC7Z100. I purposefully did not pull in changes that I believed were related to addressing. I can double check the specifications regarding address widths to verify it should be unchanged. >> >> Please let me know if there are any other changes or steps missed. >> >> Thanks, >> Rob >> >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>> >> Date: Thursday, Aug 29, 2019, 8:05 AM >> To: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> >> Cc: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>>, >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> It looks like you loaded something sucessfully but the control plan is not hooked up quite right. >> >> as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open" this is given by the data in the error message - (sb 0x435049004f70656e). this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed. >> >> I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly. remind me what platform you are using is it a zynq ultrascale or 7000 series? >> >> On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> wrote: >> Chris, >> >> After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310. The fsk_filerw is being used as a known good reference for this purpose. The new sections of vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument. An attempt to replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken. >> >> The steps were: >> - generate a .bif file similar to the documentation's >> Full_Bitstream.bif using the correct filename >> - run a bootgen command similar to >> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>: bootgen -image >> <bif_filename> -arch zynq -o <bin_filename> -w >> >> This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure. >> >> The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully. The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL. When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available. >> >> The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'. >> >> Is there some other step that must be taken during the generation of the .bin file? Is there any other software modification that is required of the ocpi runtime code? The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly. The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps. >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>>> On Behalf Of Munro, Robert M. >> Sent: Tuesday, August 13, 2019 10:56 AM >> To: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>>; James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>> >> Cc: >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thank you for your helpful response and insight. My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity. I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310. >> >> Thanks again for your help. >> >> Thanks, >> Rob >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>> >> Sent: Tuesday, August 13, 2019 10:02 AM >> To: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>> >> Cc: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64. This met our needs as we only cared about the fpga manager on ultrascale devices at the time. We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga). >> >> The original problem you were running into is certainly because of an >> ifdef on line 226 where it will check the old driver done pin if it is >> on an arm and not an arm64 >> >> 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) >> >> to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way. >> there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know. >> hope this helps >> >> On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>> wrote: >>> On 8/12/19 9:37 AM, Munro, Robert M. wrote: >>> Jim, >>> >>> This is the only branch with the modifications required for use with >>> the FPGA Manager driver. This is required for use with the Linux >>> kernel provided for the N310. The Xilinx toolset being used is >>> 2018_2 and the kernel being used is generated via the N310 build >>> container using v3.14.0.0 . >> Ok. The default Xilinx kernel associated with 2018_2 is 4.14. >> >> I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use. >> >> It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary. >> >> The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked. >> >> That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel. >> >> Jim >> >> >> >> >> >> >> >>> Thanks, >>> Robert Munro >>> >>> *From: *James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:j<mailto:j> >>> ek@parera.com<mailto:ek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:j<mailto:j> >>> e<mailto:je<mailto:je>> >>> k@parera.com<mailto:k@parera.com><mailto:k@parera.com<mailto:k@parera.com>>>> >>> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><m >>> ailto:jek@parera.com<mailto:ailto%3Ajek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><m >>> a ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com><mailto:ilto%3Ajek@parera.com<mailto:ilto%253Ajek@parera.com>>>>>> >>> *Date: *Monday, Aug 12, 2019, 9:00 AM >>> *To: *Munro, Robert M. >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Rober<mailto:Rober> >>> t.Munro@jhuapl.edu<mailto:t.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto:Robert<mailto:Robert><mai >>> lto:Robert> >>> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Munro@jhuapl.edu<mailto:Munro@jhuapl.edu>><mailto:Robert.Munro@jhuapl<mailto:Robert.Munro@jhuapl> >>> .edu<mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> >>> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailt >>> o:Robert.Munro@jhuapl.edu<mailto:o%3ARobert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto >>> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Rober<mailto:Rober> >>> t.Munro@jhuapl.edu<mailto:t.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>>, >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:dis<mailto:dis> >>> <mailto:dis<mailto:dis>> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >>> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:di<mailto:di> >>> <mailto:di<mailto:di>> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org>><mailto:discus<mailto:discus> >>> s@lists.opencpi.org<mailto:s@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><m >>> ailto:discuss@lists.opencpi.org<mailto:ailto%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><ma >>> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.org><mailto:ilto%3Adiscuss@lists.opencpi.or<mailto:ilto%253Adiscuss@lists.opencpi.or> >>> g><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >>>>>>> >>> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>> >>> I was a bit confused about your use of the "ultrascale" branch. >>> So you are using a branch with two types of patches in it: one for >>> later linux kernels with the fpga manager, and the other for the >>> ultrascale chip itself. >>> The N310 is not ultrascale, so we need to separate the two issues, >>> which were not separated before. >>> So its not really a surprise that the branch you are using is not yet >>> happy with the system you are trying to run it on. >>> >>> I am working on a branch that simply updates the xilinx tools >>> (2019-1) and the xilinx linux kernel (4.19) without dealing with >>> ultrascale, which is intended to work with a baseline zed board, but >>> with current tools and kernels. >>> >>> The N310 uses a 7000-series part (7100) which should be compatible >>> with this. >>> >>> Which kernel and which xilinx tools are you using? >>> >>> Jim >>> >>> >>> >>>> On 8/8/19 1:36 PM, Munro, Robert M. wrote: >>>> Jim or others, >>>> >>>> Is there any further input or feedback on the source or resolution >>> of this issue? >>>> As it stands I do not believe that the OCPI runtime software will be >>> able to successfully load HDL assemblies on the N310 platform. My >>> familiarity with this codebase is limited and we would appreciate any >>> guidance available toward investigating or resolving this issue. >>>> Thank you, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: discuss >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open> >>>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-bounces@li> >>>> sts.open> >>>> cpi.org<http://cpi.org><http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:di<mailto:di> >>>> scuss-bounces@lists.open<mailto:scuss-bounces@lists.open>><mailto:discuss-bounces@li<mailto:discuss-bounces@li><mailto:discuss-b<mailto:discuss-b> >>>> ounces@li> sts.open> cpi.org<http://cpi.org><http://cpi.org><http://cpi.org>>> On >>>> Behalf Of >>> Munro, Robert M. >>>> Sent: Monday, August 5, 2019 10:49 AM >>>> To: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: <mailto:%0b>>>>> jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>>; >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:d<mailto:d> >>>> <mailto:d<mailto:d>> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org>><mailto:dis<mailto:dis> >>>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> Jim, >>>> >>>> The given block of code is not the root cause of the issue because >>> the file system does not have a /dev/xdevcfg device. >>>> I suspect there is some functional code similar to this being >>> compiled incorrectly: >>>> #if (OCPI_ARCH_arm) >>>> // do xdevcfg loading stuff >>>> #else >>>> // do fpga_manager loading stuff #endif >>>> >>>> This error is being output at environment initialization as well as >>> when running hello.xml. I've attached a copy of the output from the >>> command 'ocpirun -v -l 20 hello.xml' for further investigation. >>>> From looking at the output I believe the system is calling >>> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is >>> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line >>> 484 which in turn is calling Driver::open in the same file at line >>> 499 which then outputs the 'When searching for PL device ...' error >>> at line 509. This then returns to the HdlDriver.cxx search() function >>> and outputs the '... got Zynq search error ...' error at line 141. >>>> This is an ARM device and I am not familiar enough with this >>> codebase to adjust precompiler definitions with confidence that some >>> other code section will become affected. >>>> Thanks, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: <mailto:%0b>>>>> jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>> >>>> Sent: Friday, August 2, 2019 4:27 PM >>>> To: Munro, Robert M. >>>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mailto:Robe> >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto:Robe<mailto:Robe><mai >>>> lto:Robe> >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu>><mailto:Robert.Munro@<mailto:Robert.Munro@> >>>> jhuapl.edu<http://jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>; >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:dis<mailto:dis> >>> <mailto:dis<mailto:dis>> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >>> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> That code is not integrated into the main line of OpenCPI yet, but >>> in that code there is: >>>> if (file_exists("/dev/xdevcfg")){ >>>> ret_val= load_xdevconfig(fileName, error); >>>> } >>>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ >>>> ret_val= load_fpga_manager(fileName, error); >>>> } >>>> So it looks like the presence of /dev/xdevcfg is what causes it to >>> look for /sys/class/xdevcfg/xdevcfg/device/prog_done >>>>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: >>>>> Are there any required flag or environment variable settings that >>> must be done before building the framework to utilize this >>> functionality? I have a platform built that is producing an output >>> during environment load: 'When searching for PL device '0': Can't >>> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: >>> file could not be open for reading' . This leads me to believe that >>> it is running the xdevcfg code still present in HdlBusDriver.cxx . >>>>> Use of the release_1.4_zynq_ultra branch and presence of the >>> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been >>> verified for the environment used to generate the executables. >>>>> Thanks, >>>>> Robert Munro >>>>> >>>>> -----Original Message----- >>>>> From: discuss >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope> >>>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-bounces@l> >>>>> ists.ope> >>>>> ncpi.org<http://ncpi.org><http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto: >>>>> discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope>><mailto:discuss-bounces@l<mailto:discuss-bounces@l><mailto:discuss-<mailto:discuss-> >>>>> bounces@l> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org><http://ncpi.org>>> >>>>> On Behalf Of James Kulp >>>>> Sent: Friday, February 1, 2019 4:18 PM >>>>> To: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>> ZynqMP/UltraScale+ fpga_manager >>>>> >>>>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: >>>>>> in response to Point 1 here. We attempted using the code that on >>> the fly was attempting to convert from bit to bin. This did not work >>> on these newer platforms using fpga_manager so we decided to use the >>> vendor provided tools rather then to reverse engineer what was wrong >>> with the existing code. >>>>>> If changes need to be made to create more commonality and given >>> that all zynq and zynqMP platforms need a .bin file format wouldn't >>> it make more sense to just use .bin files rather then converting them >>> on the fly every time? >>>>> A sensible question for sure. >>>>> >>>>> When this was done originally, it was to avoid generating multiple >>> file formats all the time. .bit files are necessary for JTAG >>> loading, and .bin files are necessary for zynq hardware loading. >>>>> Even on Zynq, some debugging using jtag is done, and having that be >>> mostly transparent (using the same bitstream files) is convenient. >>>>> So we preferred having a single bitstream file (with metadata, >>>>> compressed) regardless of whether we were hardware loading or jtag >>> loading, zynq or virtex6 or spartan3, ISE or Vivado. >>>>> In fact, there was no reverse engineering the last time since both >>> formats, at the level we were operating at, were documented by Xilinx. >>>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a >>> single format of Xilinx bitstream files, including between ISE and >>> Vivado and all Xilinx FPGA types. >>>>> Of course it might make sense to switch things around the other way >>> and use .bin files uniformly and only convert to .bit format for JTAG >>> loading. >>>>> But since the core of the "conversion:" after a header, is just a >>> 32 bit endian swap, it doesn't matter much either way. >>>>> If it ends up being a truly nasty reverse engineering exercise now, >>> I would reconsider. >>>>>> ________________________________ >>>>>> From: discuss >>>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op> >>>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss-bounces@> >>>>>> lists.op> >>>>>> encpi.org<http://encpi.org><http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailt >>>>>> o:discuss-bounces@lists.op<mailto:o:discuss-bounces@lists.op>><mailto:discuss-bounces@<mailto:discuss-bounces@><mailto:discuss<mailto:discuss> >>>>>> -bounces@> lists.op> >>>>>> encpi.org<http://encpi.org><http://encpi.org><http://encpi.org>>> on behalf of James >>>>>> Kulp >>>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailt >>>>>> o:jek@parera.com<mailto:o%3Ajek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><ma >>>>>> ilt o:jek@parera.com<mailto:o%3Ajek@parera.com><mailto:o%3Ajek@parera.com<mailto:o%253Ajek@parera.com>>>>> >>>>>> Sent: Friday, February 1, 2019 3:27 PM >>>>>> To: >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>>> ZynqMP/UltraScale+ fpga_manager >>>>>> >>>>>> David, >>>>>> >>>>>> This is great work. Thanks. >>>>>> >>>>>> Since I believe the fpga manager stuff is really an attribute of >>>>>> later linux kernels, I don't think it is really a ZynqMP thing, >>>>>> but just a later linux kernel thing. >>>>>> I am currently bringing up the quite ancient zedboard using the >>>>>> latest Vivado and Xilinx linux and will try to use this same code. >>>>>> There are two thinigs I am looking into, now that you have done >>>>>> the hard work of getting to a working solution: >>>>>> >>>>>> 1. The bit vs bin thing existed with the old bitstream loader, but >>>>>> I think we were converting on the fly, so I will try that here. >>>>>> (To avoid the bin format altogether). >>>>>> >>>>>> 2. The fpga manager has entry points from kernel mode that allow >>>>>> you to inject the bitstream without making a copy in /lib/firmware. >>>>>> Since we already have a kernel driver, I will try to use that to >>>>>> avoid the whole /lib/firmware thing. >>>>>> >>>>>> So if those two things can work (no guarantees), the difference >>>>>> between old and new bitstream loading (and building) can be >>>>>> minimized and the loading process faster and requiring no extra >>>>>> file system >>> space. >>>>>> This will make merging easier too. >>>>>> >>>>>> We'll see. Thanks again to you and Geon for this important >>> contribution. >>>>>> Jim >>>>>> >>>>>> >>>>>>> On 2/1/19 3:12 PM, David Banks wrote: >>>>>>> OpenCPI users interested in ZynqMP fpga_manager, >>>>>>> >>>>>>> I know some users are interested in the OpenCPI's bitstream >>>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In >>>>>>> general, we followed the instructions at >>>>>>> >>> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream. >>>>>>> I will give a short explanation here: >>>>>>> >>>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at >>>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra >>> branch. >>>>>>> Firstly, all *fpga_manager *code is located in >>>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in >>>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://vi >>>>>>> vado.mk<http://vado.mk>><http://vi >>>>>>> vado.mk<http://vado.mk><http://vado.mk>> >>>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin >>>>>>> format. To see the changes made to these files for ZynqMP, you >>>>>>> can diff them between >>>>>>> *release_1.4* and *release_1.4_zynq_ultra*: >>>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch >>>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin >>>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- >>>>>>> runtime/hdl/src/HdlBusDriver.cxx >>>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://viv >>>>>>> ado.mk<http://ado.mk>><http://viv >>>>>>> ado.mk<http://ado.mk><http://ado.mk>>; >>>>>>> >>>>>>> >>>>>>> The directly relevant functions are *load_fpga_manager()* and i >>>>>>> *sProgrammed()*. >>>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the >>>>>>> *.bin bitstream file and writes its contents to >>> /lib/firmware/opencpi_temp.bin. >>>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the >>>>>>> the filename "opencpi_temp.bin" to >>> /sys/class/fpga_manager/fpga0/firmware. >>>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and >>>>>>> the state of the fpga_manager >>>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed(). >>>>>>> >>>>>>> fpga_manager requires that bitstreams be in *.bin in order to >>>>>>> write them to the PL. So, some changes were made to >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk><http://vivado.mk> >>>>>>> to add a make rule for the *.bin file. This make rule (*BinName*) uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. >>>>>>> >>>>>>> Most of the relevant code is pasted or summarized below: >>>>>>> >>>>>>> *load_fpga_manager*(const char *fileName, >>>>>>> std::string >>> &error) { >>>>>>> if (!file_exists("/lib/firmware")){ >>>>>>> mkdir("/lib/firmware",0666); >>>>>>> } >>>>>>> int out_file = >>> creat("/lib/firmware/opencpi_temp.bin", 0666); >>>>>>> gzFile bin_file; >>>>>>> int bfd, zerror; >>>>>>> uint8_t buf[8*1024]; >>>>>>> >>>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) >>>>>>> OU::format(error, "Can't open bitstream file '%s' >>> for reading: >>>>>>> %s(%d)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) >>>>>>> OU::format(error, "Can't open compressed bin >>>>>>> file >>> '%s' for : >>>>>>> %s(%u)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> do { >>>>>>> uint8_t *bit_buf = buf; >>>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); >>>>>>> if (n < 0) >>>>>>> return true; >>>>>>> if (n & 3) >>>>>>> return OU::eformat(error, "Bitstream data in is '%s' >>>>>>> not a multiple of 3 bytes", >>>>>>> fileName); >>>>>>> if (n == 0) >>>>>>> break; >>>>>>> if (write(out_file, buf, n) <= 0) >>>>>>> return OU::eformat(error, >>>>>>> "Error writing to >>>>>>> /lib/firmware/opencpi_temp.bin for bin >>>>>>> loading: %s(%u/%d)", >>>>>>> strerror(errno), errno, n); >>>>>>> } while (1); >>>>>>> close(out_file); >>>>>>> std::ofstream >>> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); >>>>>>> std::ofstream >>>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); >>>>>>> fpga_flags << 0 << std::endl; >>>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; >>>>>>> >>>>>>> remove("/lib/firmware/opencpi_temp.bin"); >>>>>>> return isProgrammed(error) ? init(error) : true; >>>>>>> } >>>>>>> >>>>>>> The isProgrammed() function just checks whether or not the >>>>>>> fpga_manager state is 'operating' although we are not entirely >>>>>>> confident this is a robust check: >>>>>>> >>>>>>> *isProgrammed*(...) { >>>>>>> ... >>>>>>> const char *e = OU::file2String(val, >>>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); >>>>>>> ... >>>>>>> return val == "operating"; >>>>>>> } >>>>>>> >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk><http://vivado.mk>'s >>>>>>> *bin make-rule uses bootgen to convert bit to bin. This is >>>>>>> necessary in Vivado 2018.2, but in later versions you may be able >>>>>>> to directly generate the correct *.bin file via an option to >>> write_bitstream: >>>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) >>>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating >>>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". >>>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ >>>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo " [destination_device = pl] $(notdir $(call >>>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo "}" >> $$(call BifName,$1,$3,$6); >>>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir >>>>>>> $(call >>>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call >>>>>>> BinName,$1,$3,$6)) -w,bin) >>>>>>> >>>>>>> Hope this is useful! >>>>>>> >>>>>>> Regards, >>>>>>> David Banks >>>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:dbanks@geo> >>>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>>><mailto:dbanks@geo<mailto:dbanks@geo><mailto:d<mailto:d> >>>>>>> banks@geo> >>>>>>> ntech.com<http://ntech.com><http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dba<mailto:dba> >>>>>>> nks@geontech.com<mailto:nks@geontech.com>>>> >>>>>>> Geon Technologies, LLC >>>>>>> -------------- next part -------------- An HTML attachment was >>>>>>> scrubbed... >>>>>>> URL: >>>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att >>>>>>> ach m ents/20190201/4b49675d/attachment.html> >>>>>>> _______________________________________________ >>>>>>> discuss mailing list >>>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><ma >>>>>>> ilt >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:o%3Adiscuss@lists.opencpi.org<mailto:o%253Adiscuss@lists.opencpi.org>> >>>>>>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.or<mailto:discuss@lists.opencpi.or> >>>>>>> g>>> >>>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o >>>>>>> rg >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>>> -------------- next part -------------- An HTML attachment was >>>>>> scrubbed... >>>>>> URL: >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta >>>>>> chm e nts/20190201/64e4ea45/attachment.html> >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>>> -------------- next part -------------- An embedded and >>>> charset-unspecified text was scrubbed... >>>> Name: hello_n310_log_output.txt >>>> URL: >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm >>> e nts/20190805/d9b4f229/attachment.txt> >>>> _______________________________________________ >>>> discuss mailing list >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:d<mailto:d> >>>> <mailto:d<mailto:d>> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org>><mailto:dis<mailto:dis> >>>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>> >> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:discu<mailto:discu> >> ss@lists.opencpi.org<mailto:ss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190813/4516c872/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190829/b99ae3e0/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190905/0b9a1953/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org _______________________________________________ discuss mailing list discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
MR
Munro, Robert M.
Tue, Sep 10, 2019 9:52 PM

The N310 does allow arbitrary HDL loads using the DTO approach.  A basic Vivado project was created to test this loading capability.  The general flow followed:

  •     Build simple Vivado HDL project: block design w/ Zynq PS, AXI GPIO
    
  •     Export hardware -> .hdf file
    
  •     Vivado SDK project (may be unnecessary)
    
  •     Use XSCT w/ TCL scripts to generate device tree overlay files as discussed here https://forums.xilinx.com/t5/Embedded-Linux/Unable-to-download-dt-overaly-tcl/td-p/924363
    
  •     Modify generated .dtsi files as required
    
  •     Generate .bif, .bin, .dtbo files as described in Xilinx documentation here https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager
    
  •     Copy bitstream.bit.bin and pl.dtbo onto target
    
  •     Follow DTO loading procedure described in Xilinx documentation above
    

o  Note: commands ‘echo <something> > /sys/class/fpga_manager/fpga0/…’ did not work and reported permission denied; as a result these operations were skipped

-Rob

From: Munro, Robert M.
Sent: Monday, September 9, 2019 4:22 PM
To: 'Chris Hinkey' chinkey@geontech.com; James Kulp jek@parera.com
Cc: discuss@lists.opencpi.org
Subject: RE: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

The FPGA load that can be prevented at boot time is the vendor’s FPGA build.

My suspicion is that the ‘ocpihdl load …’ call was not able to successfully load the FPGA.  When it subsequently attempted reading the magic number it was getting an incorrect value because the vendor’s FPGA load was already loaded during the boot process.  When preventing the vendor’s FPGA load during boot the magic number mismatch was no longer being output but the software was reporting the load was unsuccessful.

Does an application that does not require FPGA load such as hello.xml is run, does the system attempt to load anything to the FPGA?  I noticed that it was outputting the magic number mismatch when the vendor’s FPGA build was loaded in this case as well.

I am now looking into what is required to use the DTO loading approach for this platform.

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Sent: Friday, September 6, 2019 3:20 PM
To: James Kulp <jek@parera.commailto:jek@parera.com>
Cc: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

looks like i responded just to robert not to the discussion list, oops

The magic number is a register that is on the fpga and is set to ascii "OpenCPI" (0x4f70656e435049) which is a 64 bit read across the axi bus and it is the first address across the bus  this value is not set during operation, it is a hardcoded register that is built-in to the bitfile.  it is never being written to from the software side.

when I get to this point on a new platform what I will do is step outside of the framework and use the tool devmem to read the physical address across the fpga boundary.
this will ensure that the 64 bits here are returning the correct "magic".

I expect that your problems with the Exiting for problem: error loading device pl:0’ .  What further steps can be taken to debug this? error is from something with fpga_manager not acting correctly.  we had a similar problem recently and we forced opencpi to think that it always had an opencpi bitstream loaded.
I would check that /sys/class/fpga_manager/fpga0/state is returning something reasonable.  the fact that you don't have  permissions to access the other parts of the fpga_manger is suspect as well, might be related.

On Fri, Sep 6, 2019 at 3:12 PM James Kulp <jek@parera.commailto:jek@parera.com> wrote:
On 9/6/19 2:29 PM, Munro, Robert M. wrote:
It appears there was some resource contention in the GP0 area that was not allowing the OCPI system to set the OccpAdminRegister.magic value during operation.

This value is hardwired into the OpenCPI  FPGA load and is read-only.

The software memory maps the area where the GP0 interface is a slave to the CPU at:

  const uint32_t GP0_PADDR = 0x40000000;

And reads from offset 0.

It firsts the 8 byte MAGIC a byte at a time, then if it matches, it reads again as a single 64 bit value to make sure 32-bit endian swapping is right.  If both those reads from offset 0 at 0x40000000 come back correct, it believes that the FPGA is loaded with an OpenCPI bitstream.

If there is already a non-OpenCPI bitstream loaded, we expect that this test will fail.

If this failure occurs when there is a bitstream loaded, on Zynq, it still assumes the FPGA is available for subsequent loading.

If the FPGA load is prevented during the boot process, the magic number mismatch error is no longer output.  Looking through the TRM was showing no configuration settings for GP0 other than enabling communication using LVL_SHFTR_EN.

If there is some required configuration of AXI_GP0 configuration registers for OCPI to work properly, please provide it for future reference.
I will check this.

I am further trying to understand the code that was producing the output by looking at the at the source.  The magic number mismatch output on lines 82-83 look to be outputting a #define value in the (sb ….) area of the output and the ‘magic’ variable there is giving the value that was read from the OccpAdminRegister area.  Am I understanding the code correctly?  If so, that would indicate that the (sb …) number is the expected value and its orientation should be correct. https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82

Is the code being understood correctly that the OccpAdminRegister is a memory mapped data structure that is being written and read as part of the OCPI control interface?  If so, can you explain how and where this is being mapped and at what base address it should be expected?
See above - it is never written.

After preventing the FPGA load at boot time the OCPI commands no longer output the magic number mismatch error.  The command ‘ocpihdl load <fsk_filerw bin>’ does not succeed however.  The output from the command states ‘Exiting for problem: error loading device pl:0’ .  What further steps can be taken to debug this?

What FPGA load at boot time are you refering to? The native manufacturer's bitstream?

AFAIK OpenCPI has no "boot time FPGA load".

What you appear to be debugginig is the Geon ultrascale fpga manager loading code on a non-ultrascale Zynq.

I should have this particular function running on a zedboard Zynq next week.

I have also found that the FPGA loading approach coded in HdlBusDriver.cxx does not work on this platform when attempting to run manually.  The command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh: /sys/class/fpga_manager/fpga0/flags: Permission denied’ .  A manual command using the DT overlay approach does appear to work however.

I'm sorry you are the guinea pig on this particular configuration.

The reason we did not immediately integrate the Geon code into OpenCPI is that it was taking two steps (fpga manager + ultra-scale) at once and we needed to take them one step at a time.  We are taking that first step, unfortunately not on a schedule that helps you.

Jim

Thanks,
Rob

From: Chris Hinkey chinkey@geontech.commailto:chinkey@geontech.com
Sent: Friday, September 6, 2019 8:09 AM
To: James Kulp jek@parera.commailto:jek@parera.com
Cc: Munro, Robert M. Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu; discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

iirc it gives clocks and interdictions on which axi ports are enabled but not which direction is master (you would have to look up which register/bit this is set by in the TRM).  i don't remember the axi ports being configurable which side is the master but i very well might be mistake.

On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.commailto:jek@parera.com> wrote:
If you invoke the command with no arguments it tells you what it can do, like most opencpi commands.  We mostly use it to find out how the FPGA clocks are initialized.

On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu> wrote:

Jim,

Does the ocpizynq utility list all the available interfaces that can dumped?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org> On Behalf Of James Kulp
Sent: Thursday, September 5, 2019 5:59 PM
To: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Hi Rob,

Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and
not in the FPGA bitstream.
The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA.
The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to.
All these registers are pretty well documented in the Zynq TRM.

Jim

On 9/5/19 5:47 PM, Munro, Robert M. wrote:
Chris,

Would this be the GP0 AXI slave or master registers that are being accessed in this scenario?  I don’t believe these are configured in the FSBL, but in the FPGA image.  This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image.  Is there a listing of the OCPI required FPGA facilities?

Thanks,
Rob

From: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com>
Sent: Thursday, August 29, 2019 11:58 AM
To: Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA.  i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case.  I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same.
the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document).

On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>> wrote:
Chris,

Looking at the Zynq and ZynqMP datasheets:
https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70
00-Overview.pdf
https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul
trascale-plus-overview.pdf

It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' .

Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in?

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>> On Behalf Of Munro, Robert M.
Sent: Thursday, August 29, 2019 9:09 AM
To: Chris Hinkey <chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>
Cc: discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thanks for the information regarding the internals.  The FPGA part on this platform is a XC7Z100.  I purposefully did not pull in changes that I believed were related to addressing.  I can double check the specifications regarding address widths to verify it should be unchanged.

Please let me know if there are any other changes or steps missed.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>
Date: Thursday, Aug 29, 2019, 8:05 AM
To: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>
Cc: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

It looks like you loaded something sucessfully but the control plan is not hooked up quite right.

as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open"  this is given by the data in the error message - (sb 0x435049004f70656e).  this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed.

I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly.  remind me what platform you are using is it a zynq ultrascale or 7000 series?

On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>> wrote:
Chris,

After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310.  The fsk_filerw is being used as a known good reference for this purpose.  The new sections of vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument.  An attempt to replicate the commands in vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken.

The steps were:

  • generate a .bif file similar to the documentation's
    Full_Bitstream.bif using the correct filename
  • run a bootgen command similar to
    vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk: bootgen -image
    <bif_filename> -arch zynq -o <bin_filename> -w

This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure.

The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully.  The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL.  When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available.

The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'.

Is there some other step that must be taken during the generation of the .bin file?  Is there any other software modification that is required of the ocpi runtime code?  The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly.  The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps.

Thanks,
Rob

-----Original Message-----
From: discuss <discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M.
Sent: Tuesday, August 13, 2019 10:56 AM
To: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>; James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>
Cc:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

Chris,

Thank you for your helpful response and insight.  My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity.  I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310.

Thanks again for your help.

Thanks,
Rob

From: Chris Hinkey
<chinkey@geontech.commailto:chinkey@geontech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com><mailto:chinkey@geonmailto:chinkey@geon
tech.comhttp://tech.com<mailto:chinkey@geontech.commailto:chinkey@geontech.com>>>
Sent: Tuesday, August 13, 2019 10:02 AM
To: James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jemailto:je
k@parera.commailto:k@parera.com>>>
Cc: Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robertmailto:Robert
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>;
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64.  This met our needs as we only cared about the fpga manager on ultrascale devices at the time.  We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga).

The original problem you were running into is certainly because of an
ifdef on line 226 where it will check the old driver done pin if it is
on an arm and not an arm64

226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)

to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way.
there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know.
hope this helps

On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com>>>> wrote:

On 8/12/19 9:37 AM, Munro, Robert M. wrote:
Jim,

This is the only branch with the modifications required for use with
the FPGA Manager driver.  This is required for use with the Linux
kernel provided for the N310.  The Xilinx toolset being used is
2018_2 and the kernel being used is generated via the N310 build
container using v3.14.0.0 .

Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.

I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use.

It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary.

The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked.

That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel.

Jim

Thanks,
Robert Munro

*From: *James Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailto:jmailto:j
ek@parera.commailto:ek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jmailto:j
e<mailto:jemailto:je>
k@parera.commailto:k@parera.com<mailto:k@parera.commailto:k@parera.com>>>
<mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<m
ailto:jek@parera.commailto:ailto%3Ajek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><m
a ilto:jek@parera.commailto:ilto%3Ajek@parera.com<mailto:ilto%3Ajek@parera.commailto:ilto%253Ajek@parera.com>>>>>
*Date: *Monday, Aug 12, 2019, 9:00 AM
*To: *Munro, Robert M.
<Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robermailto:Rober
t.Munro@jhuapl.edumailto:t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>><mailto:Robertmailto:Robert<mai lto:Robert>
.Munro@jhuapl.edumailto:Munro@jhuapl.edu<mailto:Munro@jhuapl.edumailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuaplmailto:Robert.Munro@jhuapl
.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>
<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailt
o:Robert.Munro@jhuapl.edumailto:o%3ARobert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>><mailto
:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu><mailto:Robermailto:Rober
t.Munro@jhuapl.edumailto:t.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edumailto:Robert.Munro@jhuapl.edu>>>>>,
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dimailto:di
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:dismailto:dis
<mailto:dismailto:dis>
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org><mailto:discuss@mailto:discuss@
lists.opencpi.orghttp://lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
<discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dmailto:d
iscuss@lists.opencpi.orgmailto:iscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:dimailto:di
<mailto:dimailto:di>
scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.orgmailto:scuss@lists.opencpi.org><mailto:discusmailto:discus
s@lists.opencpi.orgmailto:s@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><m
ailto:discuss@lists.opencpi.orgmailto:ailto%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><ma
ilto:discuss@lists.opencpi.orgmailto:ilto%3Adiscuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.ormailto:ilto%253Adiscuss@lists.opencpi.or
g><mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>

*Subject: *Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

I was a bit confused about your use of the "ultrascale" branch.
So you are using a branch with two types of patches in it: one for
later linux kernels with the fpga manager, and the other for the
ultrascale chip itself.
The N310 is not ultrascale, so we need to separate the two issues,
which were not separated before.
So its not really a surprise that the branch you are using is not yet
happy with the system you are trying to run it on.

I am working on a branch that simply updates the xilinx tools
(2019-1) and the xilinx linux kernel (4.19) without dealing with
ultrascale, which is intended to work with a baseline zed board, but
with current tools and kernels.

The N310 uses a 7000-series part (7100) which should be compatible
with this.

Which kernel and which xilinx tools are you using?

Jim

On 8/8/19 1:36 PM, Munro, Robert M. wrote:
Jim or others,

Is there any further input or feedback on the source or resolution

of this issue?

As it stands I do not believe that the OCPI runtime software will be

able to successfully load HDL assemblies on the N310 platform.  My
familiarity with this codebase is limited and we would appreciate any
guidance available toward investigating or resolving this issue.

Munro, Robert M.

ZynqMP/UltraScale+ fpga_manager

Jim,

The given block of code is not the root cause of the issue because

the file system does not have a /dev/xdevcfg device.

I suspect there is some functional code similar to this being

compiled incorrectly:

#if (OCPI_ARCH_arm)
// do xdevcfg loading stuff
#else
// do fpga_manager loading stuff #endif

This error is being output at environment initialization as well as

when running hello.xml.  I've attached a copy of the output from the
command 'ocpirun -v -l 20 hello.xml' for further investigation.

From looking at the output I believe the system is calling

OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
484 which in turn is calling Driver::open in the same file at line
499 which then outputs the 'When searching for PL device ...' error
at line 509. This then returns to the HdlDriver.cxx search() function
and outputs the '... got Zynq search error ...' error at line 141.

This is an ARM device and I am not familiar enough with this

codebase to adjust precompiler definitions with confidence that some
other code section will become affected.

Subject: Re: [Discuss OpenCPI] Bitstream loading with

ZynqMP/UltraScale+ fpga_manager

That code is not integrated into the main line of OpenCPI yet, but

in that code there is:

         if (file_exists("/dev/xdevcfg")){
           ret_val= load_xdevconfig(fileName, error);
         }
         else if (file_exists("/sys/class/fpga_manager/fpga0/")){
           ret_val= load_fpga_manager(fileName, error);
         }

So it looks like the presence of /dev/xdevcfg is what causes it to

look for /sys/class/xdevcfg/xdevcfg/device/prog_done

On 8/2/19 4:15 PM, Munro, Robert M. wrote:
Are there any required flag or environment variable settings that

must be done before building the framework to utilize this
functionality?  I have a platform built that is producing an output
during environment load: 'When searching for PL device '0': Can't
process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
file could not be open for reading' .  This leads me to believe that
it is running the xdevcfg code still present in HdlBusDriver.cxx .

Use of the release_1.4_zynq_ultra branch and presence of the

/sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been
verified for the environment used to generate the executables.

the fly was attempting to convert from bit to bin.  This did not work
on these newer platforms using fpga_manager so we decided to use the
vendor provided tools rather then to reverse engineer what was wrong
with the existing code.

If changes need to be made to create more commonality and given

that all zynq and zynqMP platforms need a .bin file format wouldn't
it make more sense to just use .bin files rather then converting them
on the fly every time?

A sensible question for sure.

When this was done originally, it was to avoid generating multiple

file formats all the time.  .bit files are necessary for JTAG
loading, and .bin files are necessary for zynq hardware loading.

Even on Zynq, some debugging using jtag is done, and having that be

mostly transparent (using the same bitstream files) is convenient.

So we preferred having a single bitstream file (with metadata,
compressed) regardless of whether we were hardware loading or jtag

loading, zynq or virtex6 or spartan3, ISE or Vivado.

In fact, there was no reverse engineering the last time since both

formats, at the level we were operating at, were documented by Xilinx.

It seemed to be worth the 30 SLOC to convert on the fly to keep a

single format of Xilinx bitstream files, including between ISE and
Vivado and all Xilinx FPGA types.

Of course it might make sense to switch things around the other way

and use .bin files uniformly and only convert to .bit format for JTAG
loading.

But since the core of the "conversion:" after a header, is just a

32 bit endian swap, it doesn't matter much either way.

If it ends up being a truly nasty reverse engineering exercise now,

I would reconsider.


From: discuss
<discuss-bounces@lists.opencpi.orgmailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op
encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailto:discuss-bounces@mailto:discuss-bounces@
lists.op>
encpi.orghttp://encpi.orghttp://encpi.org><mailto:discuss-bounces@lists.opmailto:discuss-bounces@lists.op<mailt
o:discuss-bounces@lists.opmailto:o:discuss-bounces@lists.op><mailto:discuss-bounces@mailto:discuss-bounces@<mailto:discussmailto:discuss
-bounces@> lists.op>
encpi.orghttp://encpi.orghttp://encpi.orghttp://encpi.org>> on behalf of James
Kulp
<jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><mailto:jek@parera.commailto:jek@parera.com<mailt
o:jek@parera.commailto:o%3Ajek@parera.com>><mailto:jek@parera.commailto:jek@parera.com<mailto:jek@parera.commailto:jek@parera.com><ma
ilt o:jek@parera.commailto:o%3Ajek@parera.com<mailto:o%3Ajek@parera.commailto:o%253Ajek@parera.com>>>>
Sent: Friday, February 1, 2019 3:27 PM
To:
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
Subject: Re: [Discuss OpenCPI] Bitstream loading with
ZynqMP/UltraScale+ fpga_manager

David,

This is great work. Thanks.

Since I believe the fpga manager stuff is really an attribute of
later linux kernels, I don't think it is really a ZynqMP thing,
but just a later linux kernel thing.
I am currently bringing up the quite ancient zedboard using the
latest Vivado and Xilinx linux and will try to use this same code.
There are two thinigs I am looking into, now that you have done
the hard work of getting to a working solution:

  1. The bit vs bin thing existed with the old bitstream loader, but
    I think we were converting on the fly, so I will try that here.
    (To avoid the bin format altogether).

  2. The fpga manager has entry points from kernel mode that allow
    you to inject the bitstream without making a copy in /lib/firmware.
    Since we already have a kernel driver, I will try to use that to
    avoid the whole /lib/firmware thing.

So if those two things can work (no guarantees), the difference
between old and new bitstream loading (and building) can be
minimized and the loading process faster and requiring no extra
file system

space.

This will make merging easier too.

We'll see.  Thanks again to you and Geon for this important

contribution.

Jim

On 2/1/19 3:12 PM, David Banks wrote:
OpenCPI users interested in ZynqMP fpga_manager,

I know some users are interested in the OpenCPI's bitstream
loading for ZynqMP/UltraScale+ using "fpga_manager". In
general, we followed the instructions at

I will give a short explanation here:

Reminder: All ZynqMP/UltraScale+ changes are located at
https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra

branch.

Firstly, all fpga_manager code is located in
runtime/hdl/src/HdlBusDriver.cxx. There were also changes in
r
untime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://vi
vado.mkhttp://vado.mk><http://vi
vado.mkhttp://vado.mkhttp://vado.mk>
http://vivado.mk
to generate a bitstream in the correct *.bin
format. To see the changes made to these files for ZynqMP, you
can diff them between
release_1.4 and release_1.4_zynq_ultra:
$ git clone https://github.com/Geontech/opencpi.git --branch
release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
release_1.4:release_1.4; $ git diff release_1.4 --
runtime/hdl/src/HdlBusDriver.cxx
runtime/hdl-support/xilinx/vivado.mkhttp://vivado.mkhttp://vivado.mk<http://viv
ado.mkhttp://ado.mk><http://viv
ado.mkhttp://ado.mkhttp://ado.mk>;

The directly relevant functions are load_fpga_manager() and i
sProgrammed().
load_fpga_manager() ensures that /lib/firmware exists, reads the
*.bin bitstream file and writes its contents to

/lib/firmware/opencpi_temp.bin.

It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the
the filename "opencpi_temp.bin" to

/sys/class/fpga_manager/fpga0/firmware.

Finally, the temporary opencpi_temp.bin bitstream is removed and
the state of the fpga_manager
(/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed().

fpga_manager requires that bitstreams be in *.bin in order to
write them to the PL. So, some changes were made to
vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk
to add a make rule for the *.bin file. This make rule (BinName) uses Vivado's "bootgen" to convert the bitstream from *.bit to *.bin.

Most of the relevant code is pasted or summarized below:

         *load_fpga_manager*(const char *fileName,

std::string

&error) {

           if (!file_exists("/lib/firmware")){

mkdir("/lib/firmware",0666);
}
int out_file =

creat("/lib/firmware/opencpi_temp.bin", 0666);

           gzFile bin_file;
           int bfd, zerror;
           uint8_t buf[8*1024];

           if ((bfd = ::open(fileName, O_RDONLY)) < 0)
             OU::format(error, "Can't open bitstream file '%s'

for reading:

%s(%d)",
fileName, strerror(errno), errno);
if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
OU::format(error, "Can't open compressed bin
file

'%s' for :

%s(%u)",
fileName, strerror(errno), errno);
do {
uint8_t *bit_buf = buf;
int n = ::gzread(bin_file, bit_buf, sizeof(buf));
if (n < 0)
return true;
if (n & 3)
return OU::eformat(error, "Bitstream data in is '%s'
not a multiple of 3 bytes",
fileName);
if (n == 0)
break;
if (write(out_file, buf, n) <= 0)
return OU::eformat(error,
"Error writing to
/lib/firmware/opencpi_temp.bin for bin
loading: %s(%u/%d)",
strerror(errno), errno, n);
} while (1);
close(out_file);
std::ofstream

fpga_flags("/sys/class/fpga_manager/fpga0/flags");

           std::ofstream

fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
fpga_flags << 0 << std::endl;
fpga_firmware << "opencpi_temp.bin" << std::endl;

remove("/lib/firmware/opencpi_temp.bin");
return isProgrammed(error) ? init(error) : true;
}

The isProgrammed() function just checks whether or not the
fpga_manager state is 'operating' although we are not entirely
confident this is a robust check:

         *isProgrammed*(...) {
           ...
           const char *e = OU::file2String(val,

"/sys/class/fpga_manager/fpga0/state", '|');
...
return val == "operating";
}

vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mkhttp://vivado.mk's
*bin make-rule uses bootgen to convert bit to bin. This is
necessary in Vivado 2018.2, but in later versions you may be able
to directly generate the correct *.bin file via an option to

write_bitstream:

$(call BinName,$1,$3,$6): $(call BitName,$1,$3)
$(AT)echo -n For $2 on $5 using config $4: Generating
Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
$(AT)echo all: > $$(call BifName,$1,$3,$6);
echo "{" >> $$(call BifName,$1,$3,$6);
echo " [destination_device = pl] $(notdir $(call
BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6);
echo "}" >> $$(call BifName,$1,$3,$6);
$(AT)$(call DoXilinx,bootgen,$1,-image $(notdir
$(call
BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call
BinName,$1,$3,$6)) -w,bin)

Hope this is useful!

Regards,
David Banks
dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com><mailto:dbanks@geomailto:dbanks@geo
ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com>><mailto:dbanks@geomailto:dbanks@geo<mailto:dmailto:d
banks@geo>
ntech.comhttp://ntech.comhttp://ntech.com<mailto:dbanks@geontech.commailto:dbanks@geontech.com<mailto:dbamailto:dba
nks@geontech.commailto:nks@geontech.com>>>
Geon Technologies, LLC
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att
ach m ents/20190201/4b49675d/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><ma
ilt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.orgmailto:o%253Adiscuss@lists.opencpi.org>
<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.ormailto:discuss@lists.opencpi.or
g>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o
rg


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta
chm e nts/20190201/64e4ea45/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mail
to
:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailt
o:discuss@lists.opencpi.orgmailto:o%3Adiscuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or
g

-------------- next part -------------- An embedded and
charset-unspecified text was scrubbed...
Name: hello_n310_log_output.txt
URL:

<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm
e nts/20190805/d9b4f229/attachment.txt>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>><mailto:discumailto:discu
ss@lists.opencpi.orgmailto:ss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:discuss@mailto:discuss@
lists.opencpi.orghttp://lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190813/4516c872/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org><mailto:dismailto:dis
cuss@lists.opencpi.orgmailto:cuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190829/b99ae3e0/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org>
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
-------------- next part -------------- An HTML attachment was
scrubbed...
URL:
<http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme
nts/20190905/0b9a1953/attachment.html>


discuss mailing list
discuss@lists.opencpi.orgmailto:discuss@lists.opencpi.org
http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org

The N310 does allow arbitrary HDL loads using the DTO approach. A basic Vivado project was created to test this loading capability. The general flow followed: - Build simple Vivado HDL project: block design w/ Zynq PS, AXI GPIO - Export hardware -> .hdf file - Vivado SDK project (may be unnecessary) - Use XSCT w/ TCL scripts to generate device tree overlay files as discussed here https://forums.xilinx.com/t5/Embedded-Linux/Unable-to-download-dt-overaly-tcl/td-p/924363 - Modify generated .dtsi files as required - Generate .bif, .bin, .dtbo files as described in Xilinx documentation here https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager - Copy bitstream.bit.bin and pl.dtbo onto target - Follow DTO loading procedure described in Xilinx documentation above o Note: commands ‘echo <something> > /sys/class/fpga_manager/fpga0/…’ did not work and reported permission denied; as a result these operations were skipped -Rob From: Munro, Robert M. Sent: Monday, September 9, 2019 4:22 PM To: 'Chris Hinkey' <chinkey@geontech.com>; James Kulp <jek@parera.com> Cc: discuss@lists.opencpi.org Subject: RE: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager The FPGA load that can be prevented at boot time is the vendor’s FPGA build. My suspicion is that the ‘ocpihdl load …’ call was not able to successfully load the FPGA. When it subsequently attempted reading the magic number it was getting an incorrect value because the vendor’s FPGA load was already loaded during the boot process. When preventing the vendor’s FPGA load during boot the magic number mismatch was no longer being output but the software was reporting the load was unsuccessful. Does an application that does not require FPGA load such as hello.xml is run, does the system attempt to load anything to the FPGA? I noticed that it was outputting the magic number mismatch when the vendor’s FPGA build was loaded in this case as well. I am now looking into what is required to use the DTO loading approach for this platform. Thanks, Rob From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> Sent: Friday, September 6, 2019 3:20 PM To: James Kulp <jek@parera.com<mailto:jek@parera.com>> Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>; discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager looks like i responded just to robert not to the discussion list, oops The magic number is a register that is on the fpga and is set to ascii "OpenCPI" (0x4f70656e435049) which is a 64 bit read across the axi bus and it is the first address across the bus this value is not set during operation, it is a hardcoded register that is built-in to the bitfile. it is never being written to from the software side. when I get to this point on a new platform what I will do is step outside of the framework and use the tool devmem to read the physical address across the fpga boundary. this will ensure that the 64 bits here are returning the correct "magic". I expect that your problems with the Exiting for problem: error loading device pl:0’ . What further steps can be taken to debug this? error is from something with fpga_manager not acting correctly. we had a similar problem recently and we forced opencpi to think that it always had an opencpi bitstream loaded. I would check that /sys/class/fpga_manager/fpga0/state is returning something reasonable. the fact that you don't have permissions to access the other parts of the fpga_manger is suspect as well, might be related. On Fri, Sep 6, 2019 at 3:12 PM James Kulp <jek@parera.com<mailto:jek@parera.com>> wrote: On 9/6/19 2:29 PM, Munro, Robert M. wrote: It appears there was some resource contention in the GP0 area that was not allowing the OCPI system to set the OccpAdminRegister.magic value during operation. This value is hardwired into the OpenCPI FPGA load and is read-only. The software memory maps the area where the GP0 interface is a slave to the CPU at: const uint32_t GP0_PADDR = 0x40000000; And reads from offset 0. It firsts the 8 byte MAGIC a byte at a time, then if it matches, it reads again as a single 64 bit value to make sure 32-bit endian swapping is right. If both those reads from offset 0 at 0x40000000 come back correct, it believes that the FPGA is loaded with an OpenCPI bitstream. If there is already a non-OpenCPI bitstream loaded, we expect that this test will fail. If this failure occurs when there is a bitstream loaded, on Zynq, it still assumes the FPGA is available for subsequent loading. If the FPGA load is prevented during the boot process, the magic number mismatch error is no longer output. Looking through the TRM was showing no configuration settings for GP0 other than enabling communication using LVL_SHFTR_EN. If there is some required configuration of AXI_GP0 configuration registers for OCPI to work properly, please provide it for future reference. I will check this. I am further trying to understand the code that was producing the output by looking at the at the source. The magic number mismatch output on lines 82-83 look to be outputting a #define value in the (sb ….) area of the output and the ‘magic’ variable there is giving the value that was read from the OccpAdminRegister area. Am I understanding the code correctly? If so, that would indicate that the (sb …) number is the expected value and its orientation should be correct. https://github.com/Geontech/opencpi/blob/6c7f48352ef9dcb1213302f470ce803643cc604d/runtime/hdl/src/HdlDevice.cxx#L82 Is the code being understood correctly that the OccpAdminRegister is a memory mapped data structure that is being written and read as part of the OCPI control interface? If so, can you explain how and where this is being mapped and at what base address it should be expected? See above - it is never written. After preventing the FPGA load at boot time the OCPI commands no longer output the magic number mismatch error. The command ‘ocpihdl load <fsk_filerw bin>’ does not succeed however. The output from the command states ‘Exiting for problem: error loading device pl:0’ . What further steps can be taken to debug this? What FPGA load at boot time are you refering to? The native manufacturer's bitstream? AFAIK OpenCPI has no "boot time FPGA load". What you appear to be debugginig is the Geon ultrascale fpga manager loading code on a non-ultrascale Zynq. I should have this particular function running on a zedboard Zynq next week. I have also found that the FPGA loading approach coded in HdlBusDriver.cxx does not work on this platform when attempting to run manually. The command ‘echo 0 > /sys/class/fpga_manager/fpga0/flags’ returns ‘-sh: /sys/class/fpga_manager/fpga0/flags: Permission denied’ . A manual command using the DT overlay approach does appear to work however. I'm sorry you are the guinea pig on this particular configuration. The reason we did not immediately integrate the Geon code into OpenCPI is that it was taking two steps (fpga manager + ultra-scale) at once and we needed to take them one step at a time. We are taking that first step, unfortunately not on a schedule that helps you. Jim Thanks, Rob From: Chris Hinkey <chinkey@geontech.com><mailto:chinkey@geontech.com> Sent: Friday, September 6, 2019 8:09 AM To: James Kulp <jek@parera.com><mailto:jek@parera.com> Cc: Munro, Robert M. <Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu>; discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager iirc it gives clocks and interdictions on which axi ports are enabled but not which direction is master (you would have to look up which register/bit this is set by in the TRM). i don't remember the axi ports being configurable which side is the master but i very well might be mistake. On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek@parera.com<mailto:jek@parera.com>> wrote: If you invoke the command with no arguments it tells you what it can do, like most opencpi commands. We mostly use it to find out how the FPGA clocks are initialized. > On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> wrote: > > Jim, > > Does the ocpizynq utility list all the available interfaces that can dumped? > > Thanks, > Rob > > -----Original Message----- > From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>> On Behalf Of James Kulp > Sent: Thursday, September 5, 2019 5:59 PM > To: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager > > Hi Rob, > > Nearly all aspects of the boundary hardware between the PS and the PL sides of Zynq are controlled by registers written by the processor and > *not* in the FPGA bitstream. > The FSBL does typically initialize these registers to some default values that are not necessarily the right values for how OpenCPI uses the PL/FPGA. > The ocpizynq utility program does dump out some of these registers, and you could modify it pretty easily if you want to know what some other registers are set to. > All these registers are pretty well documented in the Zynq TRM. > > Jim > >> On 9/5/19 5:47 PM, Munro, Robert M. wrote: >> Chris, >> >> Would this be the GP0 AXI slave or master registers that are being accessed in this scenario? I don’t believe these are configured in the FSBL, but in the FPGA image. This could indicate that a facility required by the OCPI framework is not enabled in the FPGA image built into the N310 image. Is there a listing of the OCPI required FPGA facilities? >> >> Thanks, >> Rob >> >> From: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com>> >> Sent: Thursday, August 29, 2019 11:58 AM >> To: Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> you are not accessing external memory in this case you are accessing axi_gp0's adress space a register directly on the FPGA. i would suspect that that something is wrong with how GP0 is setup from the fsbl in this case. I don't think anything would need to change on the opencpi software side given that 7100 vs 7020 should be the same. >> the information on all the register maps and where everything is located is somewhere in the Xilinx Technical reference manual (be warned this is a very large document). >> >> On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>> wrote: >> Chris, >> >> Looking at the Zynq and ZynqMP datasheets: >> https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-70 >> 00-Overview.pdf >> https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ul >> trascale-plus-overview.pdf >> >> It looks like the Z-7100 has the same memory interfaces as other Zynq parts with the external memory interface having '16-bit or 32-bit interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and 32-bit interface to LPDDR4 memory' . >> >> Is it possible that other changes are needed from the 1.4_zynq_ultra branch that I have not pulled in? >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>> On Behalf Of Munro, Robert M. >> Sent: Thursday, August 29, 2019 9:09 AM >> To: Chris Hinkey <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>> >> Cc: discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thanks for the information regarding the internals. The FPGA part on this platform is a XC7Z100. I purposefully did not pull in changes that I believed were related to addressing. I can double check the specifications regarding address widths to verify it should be unchanged. >> >> Please let me know if there are any other changes or steps missed. >> >> Thanks, >> Rob >> >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>> >> Date: Thursday, Aug 29, 2019, 8:05 AM >> To: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> >> Cc: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>>, >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> It looks like you loaded something sucessfully but the control plan is not hooked up quite right. >> >> as an eraly part of the running process opencpi reads a register across the control plan that contains ascii "OpenCPI(NULL)" and in your case you are reading "CPI(NULL)Open" this is given by the data in the error message - (sb 0x435049004f70656e). this is the magic that the message is referring to it requires OpenCPI to be at address 0 of the control plane address space to proceed. >> >> I think we ran into this problem and we decided it was because the bus on the ultrascale was setup to be 32 bits and needed to be 64 bits for the hdl that we implemented to work correctly. remind me what platform you are using is it a zynq ultrascale or 7000 series? >> >> On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> wrote: >> Chris, >> >> After merging some sections of HdlBusDriver.cxx into the 1.4 version of the file and going through the build process I am encountering a new error when attempting to load HDL on the N310. The fsk_filerw is being used as a known good reference for this purpose. The new sections of vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> were merged in to attempt building the HDL using the framework, but it did not generate the .bin file when using ocpidev build with the --hdl-assembly argument. An attempt to replicate the commands in vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk> manually while following the guidelines for generating a .bin from a .bit from Xilinx documentation https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager was taken. >> >> The steps were: >> - generate a .bif file similar to the documentation's >> Full_Bitstream.bif using the correct filename >> - run a bootgen command similar to >> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk>: bootgen -image >> <bif_filename> -arch zynq -o <bin_filename> -w >> >> This generated a .bin file as desired and was copied to the artifacts directory in the ocpi folder structure. >> >> The built ocpi environment loaded successfully, recognizes the HDL container as being available, and the hello application was able to run successfully. The command output contained ' HDL Device 'PL:0' responds, but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ' but the impact of this was not understood until attempting to load HDL. When attempting to run the fsk_filerw from the ocpirun command it did not appear to recognize the assembly when listing resources found in the output and reported that suitable candidate for a HDL-implemented component was not available. >> >> The command 'ocpihdl load' was then attempted to force the loading of the HDL assembly and the same '...OCCP signature: magic: ...' output observed and finally ' Exiting for problem: error loading device pl:0: Magic numbers in admin space do not match'. >> >> Is there some other step that must be taken during the generation of the .bin file? Is there any other software modification that is required of the ocpi runtime code? The diff patch of the modified 1.4 HdlBusDriver.cxx is attached to make sure that the required code modifications are performed correctly. The log output from the ocpihdl load command is attached in case that can provide further insight regarding performance or required steps. >> >> Thanks, >> Rob >> >> -----Original Message----- >> From: discuss <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org>>>> On Behalf Of Munro, Robert M. >> Sent: Tuesday, August 13, 2019 10:56 AM >> To: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>>; James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>> >> Cc: >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> Chris, >> >> Thank you for your helpful response and insight. My thinking was that the #define could be overridden to provide the desired functionality for the platform, but was not comfortable making the changes without proper familiarity. I will move forward by looking at the diff to the 1.4 mainline, make the appropriate modifications, and test with the modified framework on the N310. >> >> Thanks again for your help. >> >> Thanks, >> Rob >> >> From: Chris Hinkey >> <chinkey@geontech.com<mailto:chinkey@geontech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>><mailto:chinkey@geon<mailto:chinkey@geon> >> tech.com<http://tech.com><mailto:chinkey@geontech.com<mailto:chinkey@geontech.com>>>> >> Sent: Tuesday, August 13, 2019 10:02 AM >> To: James Kulp >> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:je<mailto:je> >> k@parera.com<mailto:k@parera.com>>>> >> Cc: Munro, Robert M. >> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robert<mailto:Robert> >> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>; >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with >> ZynqMP/UltraScale+ fpga_manager >> >> I think when I implemented this code I probably made the assumption that if we are using fpga_manager we are also using ARCH=arm64. This met our needs as we only cared about the fpga manager on ultrascale devices at the time. We also made the assumption that the tools created a tarred bin file instead of a bit file because we could not get the bit to bin conversion working with the existing openCPI code (this might cause you problems later when actually trying to load the fpga). >> >> The original problem you were running into is certainly because of an >> ifdef on line 226 where it will check the old driver done pin if it is >> on an arm and not an arm64 >> >> 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs) >> >> to move forward for now you can change this line to an "#if 0" and rebuild the framework, not this will cause other zynq based platforms(zed, matchstiq etc..) to no longer work with this patch but maybe you don't care for now while Jim tries to get this into the mainline in a more generic way. >> there may be some similar patches you need to make to the same file but the full diff that I needed to make to BusDriver.cxx to the 1.4 mainline can be seen here https://github.com/opencpi/opencpi/pull/17/files in case you didn't already know. >> hope this helps >> >> On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>> wrote: >>> On 8/12/19 9:37 AM, Munro, Robert M. wrote: >>> Jim, >>> >>> This is the only branch with the modifications required for use with >>> the FPGA Manager driver. This is required for use with the Linux >>> kernel provided for the N310. The Xilinx toolset being used is >>> 2018_2 and the kernel being used is generated via the N310 build >>> container using v3.14.0.0 . >> Ok. The default Xilinx kernel associated with 2018_2 is 4.14. >> >> I guess the bottom line is that this combination of platform and tools and kernel is not yet supported in either the mainline of OpenCPI and the third party branch you are trying to use. >> >> It is probably not a big problem, but someone has to debug it that has the time and skills necessary to dig as deep as necessary. >> >> The fpga manager in the various later linux kernels will definitely be supported in a patch from the mainline "soon", probably in a month, since it is being actively worked. >> >> That does not guarantee functionality on your exact kernel (and thus version of the fpga manager), but it does guarantee it working on the latest Xilinx-supported kernel. >> >> Jim >> >> >> >> >> >> >> >>> Thanks, >>> Robert Munro >>> >>> *From: *James Kulp >>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:j<mailto:j> >>> ek@parera.com<mailto:ek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:j<mailto:j> >>> e<mailto:je<mailto:je>> >>> k@parera.com<mailto:k@parera.com><mailto:k@parera.com<mailto:k@parera.com>>>> >>> <mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><m >>> ailto:jek@parera.com<mailto:ailto%3Ajek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><m >>> a ilto:jek@parera.com<mailto:ilto%3Ajek@parera.com><mailto:ilto%3Ajek@parera.com<mailto:ilto%253Ajek@parera.com>>>>>> >>> *Date: *Monday, Aug 12, 2019, 9:00 AM >>> *To: *Munro, Robert M. >>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Rober<mailto:Rober> >>> t.Munro@jhuapl.edu<mailto:t.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto:Robert<mailto:Robert><mai >>> lto:Robert> >>> .Munro@jhuapl.edu<mailto:Munro@jhuapl.edu><mailto:Munro@jhuapl.edu<mailto:Munro@jhuapl.edu>><mailto:Robert.Munro@jhuapl<mailto:Robert.Munro@jhuapl> >>> .edu<mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>> >>> <mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailt >>> o:Robert.Munro@jhuapl.edu<mailto:o%3ARobert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto >>> :Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Rober<mailto:Rober> >>> t.Munro@jhuapl.edu<mailto:t.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>>, >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:dis<mailto:dis> >>> <mailto:dis<mailto:dis>> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >>> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>> <discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:di<mailto:di> >>> <mailto:di<mailto:di>> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org>><mailto:discus<mailto:discus> >>> s@lists.opencpi.org<mailto:s@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><m >>> ailto:discuss@lists.opencpi.org<mailto:ailto%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><ma >>> ilto:discuss@lists.opencpi.org<mailto:ilto%3Adiscuss@lists.opencpi.org><mailto:ilto%3Adiscuss@lists.opencpi.or<mailto:ilto%253Adiscuss@lists.opencpi.or> >>> g><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >>>>>>> >>> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>> >>> I was a bit confused about your use of the "ultrascale" branch. >>> So you are using a branch with two types of patches in it: one for >>> later linux kernels with the fpga manager, and the other for the >>> ultrascale chip itself. >>> The N310 is not ultrascale, so we need to separate the two issues, >>> which were not separated before. >>> So its not really a surprise that the branch you are using is not yet >>> happy with the system you are trying to run it on. >>> >>> I am working on a branch that simply updates the xilinx tools >>> (2019-1) and the xilinx linux kernel (4.19) without dealing with >>> ultrascale, which is intended to work with a baseline zed board, but >>> with current tools and kernels. >>> >>> The N310 uses a 7000-series part (7100) which should be compatible >>> with this. >>> >>> Which kernel and which xilinx tools are you using? >>> >>> Jim >>> >>> >>> >>>> On 8/8/19 1:36 PM, Munro, Robert M. wrote: >>>> Jim or others, >>>> >>>> Is there any further input or feedback on the source or resolution >>> of this issue? >>>> As it stands I do not believe that the OCPI runtime software will be >>> able to successfully load HDL assemblies on the N310 platform. My >>> familiarity with this codebase is limited and we would appreciate any >>> guidance available toward investigating or resolving this issue. >>>> Thank you, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: discuss >>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open> >>>> cpi.org<http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:discuss-bounces@li<mailto:discuss-bounces@li> >>>> sts.open> >>>> cpi.org<http://cpi.org><http://cpi.org>><mailto:discuss-bounces@lists.open<mailto:discuss-bounces@lists.open><mailto:di<mailto:di> >>>> scuss-bounces@lists.open<mailto:scuss-bounces@lists.open>><mailto:discuss-bounces@li<mailto:discuss-bounces@li><mailto:discuss-b<mailto:discuss-b> >>>> ounces@li> sts.open> cpi.org<http://cpi.org><http://cpi.org><http://cpi.org>>> On >>>> Behalf Of >>> Munro, Robert M. >>>> Sent: Monday, August 5, 2019 10:49 AM >>>> To: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: <mailto:%0b>>>>> jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>>; >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:d<mailto:d> >>>> <mailto:d<mailto:d>> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org>><mailto:dis<mailto:dis> >>>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> Jim, >>>> >>>> The given block of code is not the root cause of the issue because >>> the file system does not have a /dev/xdevcfg device. >>>> I suspect there is some functional code similar to this being >>> compiled incorrectly: >>>> #if (OCPI_ARCH_arm) >>>> // do xdevcfg loading stuff >>>> #else >>>> // do fpga_manager loading stuff #endif >>>> >>>> This error is being output at environment initialization as well as >>> when running hello.xml. I've attached a copy of the output from the >>> command 'ocpirun -v -l 20 hello.xml' for further investigation. >>>> From looking at the output I believe the system is calling >>> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is >>> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line >>> 484 which in turn is calling Driver::open in the same file at line >>> 499 which then outputs the 'When searching for PL device ...' error >>> at line 509. This then returns to the HdlDriver.cxx search() function >>> and outputs the '... got Zynq search error ...' error at line 141. >>>> This is an ARM device and I am not familiar enough with this >>> codebase to adjust precompiler definitions with confidence that some >>> other code section will become affected. >>>> Thanks, >>>> Robert Munro >>>> >>>> -----Original Message----- >>>> From: James Kulp >>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto: <mailto:%0b>>>>> jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>>>>> >>>> Sent: Friday, August 2, 2019 4:27 PM >>>> To: Munro, Robert M. >>>> <Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>><mailto:Robe<mailto:Robe> >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>><mailto:Robe<mailto:Robe><mai >>>> lto:Robe> >>>> rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu><mailto:rt.Munro@jhuapl.edu<mailto:rt.Munro@jhuapl.edu>><mailto:Robert.Munro@<mailto:Robert.Munro@> >>>> jhuapl.edu<http://jhuapl.edu><mailto:Robert.Munro@jhuapl.edu<mailto:Robert.Munro@jhuapl.edu>>>>>; >>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:di<mailto:di> >>> scuss@lists.opencpi.org<mailto:scuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:dis<mailto:dis> >>> <mailto:dis<mailto:dis>> >>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >>> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>> ZynqMP/UltraScale+ fpga_manager >>>> That code is not integrated into the main line of OpenCPI yet, but >>> in that code there is: >>>> if (file_exists("/dev/xdevcfg")){ >>>> ret_val= load_xdevconfig(fileName, error); >>>> } >>>> else if (file_exists("/sys/class/fpga_manager/fpga0/")){ >>>> ret_val= load_fpga_manager(fileName, error); >>>> } >>>> So it looks like the presence of /dev/xdevcfg is what causes it to >>> look for /sys/class/xdevcfg/xdevcfg/device/prog_done >>>>> On 8/2/19 4:15 PM, Munro, Robert M. wrote: >>>>> Are there any required flag or environment variable settings that >>> must be done before building the framework to utilize this >>> functionality? I have a platform built that is producing an output >>> during environment load: 'When searching for PL device '0': Can't >>> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string: >>> file could not be open for reading' . This leads me to believe that >>> it is running the xdevcfg code still present in HdlBusDriver.cxx . >>>>> Use of the release_1.4_zynq_ultra branch and presence of the >>> /sys/clas/fpga_manager loading code in HdlBusDriver.cxx has been >>> verified for the environment used to generate the executables. >>>>> Thanks, >>>>> Robert Munro >>>>> >>>>> -----Original Message----- >>>>> From: discuss >>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope> >>>>> ncpi.org<http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto:discuss-bounces@l<mailto:discuss-bounces@l> >>>>> ists.ope> >>>>> ncpi.org<http://ncpi.org><http://ncpi.org>><mailto:discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope><mailto: >>>>> discuss-bounces@lists.ope<mailto:discuss-bounces@lists.ope>><mailto:discuss-bounces@l<mailto:discuss-bounces@l><mailto:discuss-<mailto:discuss-> >>>>> bounces@l> ists.ope> ncpi.org<http://ncpi.org><http://ncpi.org><http://ncpi.org>>> >>>>> On Behalf Of James Kulp >>>>> Sent: Friday, February 1, 2019 4:18 PM >>>>> To: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>> ZynqMP/UltraScale+ fpga_manager >>>>> >>>>>> On 2/1/19 3:37 PM, Chris Hinkey wrote: >>>>>> in response to Point 1 here. We attempted using the code that on >>> the fly was attempting to convert from bit to bin. This did not work >>> on these newer platforms using fpga_manager so we decided to use the >>> vendor provided tools rather then to reverse engineer what was wrong >>> with the existing code. >>>>>> If changes need to be made to create more commonality and given >>> that all zynq and zynqMP platforms need a .bin file format wouldn't >>> it make more sense to just use .bin files rather then converting them >>> on the fly every time? >>>>> A sensible question for sure. >>>>> >>>>> When this was done originally, it was to avoid generating multiple >>> file formats all the time. .bit files are necessary for JTAG >>> loading, and .bin files are necessary for zynq hardware loading. >>>>> Even on Zynq, some debugging using jtag is done, and having that be >>> mostly transparent (using the same bitstream files) is convenient. >>>>> So we preferred having a single bitstream file (with metadata, >>>>> compressed) regardless of whether we were hardware loading or jtag >>> loading, zynq or virtex6 or spartan3, ISE or Vivado. >>>>> In fact, there was no reverse engineering the last time since both >>> formats, at the level we were operating at, were documented by Xilinx. >>>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a >>> single format of Xilinx bitstream files, including between ISE and >>> Vivado and all Xilinx FPGA types. >>>>> Of course it might make sense to switch things around the other way >>> and use .bin files uniformly and only convert to .bit format for JTAG >>> loading. >>>>> But since the core of the "conversion:" after a header, is just a >>> 32 bit endian swap, it doesn't matter much either way. >>>>> If it ends up being a truly nasty reverse engineering exercise now, >>> I would reconsider. >>>>>> ________________________________ >>>>>> From: discuss >>>>>> <discuss-bounces@lists.opencpi.org<mailto:discuss-bounces@lists.opencpi.org><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op> >>>>>> encpi.org<http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailto:discuss-bounces@<mailto:discuss-bounces@> >>>>>> lists.op> >>>>>> encpi.org<http://encpi.org><http://encpi.org>><mailto:discuss-bounces@lists.op<mailto:discuss-bounces@lists.op><mailt >>>>>> o:discuss-bounces@lists.op<mailto:o:discuss-bounces@lists.op>><mailto:discuss-bounces@<mailto:discuss-bounces@><mailto:discuss<mailto:discuss> >>>>>> -bounces@> lists.op> >>>>>> encpi.org<http://encpi.org><http://encpi.org><http://encpi.org>>> on behalf of James >>>>>> Kulp >>>>>> <jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><mailto:jek@parera.com<mailto:jek@parera.com><mailt >>>>>> o:jek@parera.com<mailto:o%3Ajek@parera.com>>><mailto:jek@parera.com<mailto:jek@parera.com><mailto:jek@parera.com<mailto:jek@parera.com>><ma >>>>>> ilt o:jek@parera.com<mailto:o%3Ajek@parera.com><mailto:o%3Ajek@parera.com<mailto:o%253Ajek@parera.com>>>>> >>>>>> Sent: Friday, February 1, 2019 3:27 PM >>>>>> To: >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with >>>>>> ZynqMP/UltraScale+ fpga_manager >>>>>> >>>>>> David, >>>>>> >>>>>> This is great work. Thanks. >>>>>> >>>>>> Since I believe the fpga manager stuff is really an attribute of >>>>>> later linux kernels, I don't think it is really a ZynqMP thing, >>>>>> but just a later linux kernel thing. >>>>>> I am currently bringing up the quite ancient zedboard using the >>>>>> latest Vivado and Xilinx linux and will try to use this same code. >>>>>> There are two thinigs I am looking into, now that you have done >>>>>> the hard work of getting to a working solution: >>>>>> >>>>>> 1. The bit vs bin thing existed with the old bitstream loader, but >>>>>> I think we were converting on the fly, so I will try that here. >>>>>> (To avoid the bin format altogether). >>>>>> >>>>>> 2. The fpga manager has entry points from kernel mode that allow >>>>>> you to inject the bitstream without making a copy in /lib/firmware. >>>>>> Since we already have a kernel driver, I will try to use that to >>>>>> avoid the whole /lib/firmware thing. >>>>>> >>>>>> So if those two things can work (no guarantees), the difference >>>>>> between old and new bitstream loading (and building) can be >>>>>> minimized and the loading process faster and requiring no extra >>>>>> file system >>> space. >>>>>> This will make merging easier too. >>>>>> >>>>>> We'll see. Thanks again to you and Geon for this important >>> contribution. >>>>>> Jim >>>>>> >>>>>> >>>>>>> On 2/1/19 3:12 PM, David Banks wrote: >>>>>>> OpenCPI users interested in ZynqMP fpga_manager, >>>>>>> >>>>>>> I know some users are interested in the OpenCPI's bitstream >>>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In >>>>>>> general, we followed the instructions at >>>>>>> >>> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream. >>>>>>> I will give a short explanation here: >>>>>>> >>>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at >>>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra >>> branch. >>>>>>> Firstly, all *fpga_manager *code is located in >>>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in >>>>>>> r*untime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://vi >>>>>>> vado.mk<http://vado.mk>><http://vi >>>>>>> vado.mk<http://vado.mk><http://vado.mk>> >>>>>>> <http://vivado.mk>* to generate a bitstream in the correct *.bin >>>>>>> format. To see the changes made to these files for ZynqMP, you >>>>>>> can diff them between >>>>>>> *release_1.4* and *release_1.4_zynq_ultra*: >>>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch >>>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin >>>>>>> release_1.4:release_1.4; $ git diff release_1.4 -- >>>>>>> runtime/hdl/src/HdlBusDriver.cxx >>>>>>> runtime/hdl-support/xilinx/vivado.mk<http://vivado.mk><http://vivado.mk><http://viv >>>>>>> ado.mk<http://ado.mk>><http://viv >>>>>>> ado.mk<http://ado.mk><http://ado.mk>>; >>>>>>> >>>>>>> >>>>>>> The directly relevant functions are *load_fpga_manager()* and i >>>>>>> *sProgrammed()*. >>>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the >>>>>>> *.bin bitstream file and writes its contents to >>> /lib/firmware/opencpi_temp.bin. >>>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and the >>>>>>> the filename "opencpi_temp.bin" to >>> /sys/class/fpga_manager/fpga0/firmware. >>>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and >>>>>>> the state of the fpga_manager >>>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be "operating" in isProgrammed(). >>>>>>> >>>>>>> fpga_manager requires that bitstreams be in *.bin in order to >>>>>>> write them to the PL. So, some changes were made to >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk><http://vivado.mk> >>>>>>> to add a make rule for the *.bin file. This make rule (*BinName*) uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin. >>>>>>> >>>>>>> Most of the relevant code is pasted or summarized below: >>>>>>> >>>>>>> *load_fpga_manager*(const char *fileName, >>>>>>> std::string >>> &error) { >>>>>>> if (!file_exists("/lib/firmware")){ >>>>>>> mkdir("/lib/firmware",0666); >>>>>>> } >>>>>>> int out_file = >>> creat("/lib/firmware/opencpi_temp.bin", 0666); >>>>>>> gzFile bin_file; >>>>>>> int bfd, zerror; >>>>>>> uint8_t buf[8*1024]; >>>>>>> >>>>>>> if ((bfd = ::open(fileName, O_RDONLY)) < 0) >>>>>>> OU::format(error, "Can't open bitstream file '%s' >>> for reading: >>>>>>> %s(%d)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> if ((bin_file = ::gzdopen(bfd, "rb")) == NULL) >>>>>>> OU::format(error, "Can't open compressed bin >>>>>>> file >>> '%s' for : >>>>>>> %s(%u)", >>>>>>> fileName, strerror(errno), errno); >>>>>>> do { >>>>>>> uint8_t *bit_buf = buf; >>>>>>> int n = ::gzread(bin_file, bit_buf, sizeof(buf)); >>>>>>> if (n < 0) >>>>>>> return true; >>>>>>> if (n & 3) >>>>>>> return OU::eformat(error, "Bitstream data in is '%s' >>>>>>> not a multiple of 3 bytes", >>>>>>> fileName); >>>>>>> if (n == 0) >>>>>>> break; >>>>>>> if (write(out_file, buf, n) <= 0) >>>>>>> return OU::eformat(error, >>>>>>> "Error writing to >>>>>>> /lib/firmware/opencpi_temp.bin for bin >>>>>>> loading: %s(%u/%d)", >>>>>>> strerror(errno), errno, n); >>>>>>> } while (1); >>>>>>> close(out_file); >>>>>>> std::ofstream >>> fpga_flags("/sys/class/fpga_manager/fpga0/flags"); >>>>>>> std::ofstream >>>>>>> fpga_firmware("/sys/class/fpga_manager/fpga0/firmware"); >>>>>>> fpga_flags << 0 << std::endl; >>>>>>> fpga_firmware << "opencpi_temp.bin" << std::endl; >>>>>>> >>>>>>> remove("/lib/firmware/opencpi_temp.bin"); >>>>>>> return isProgrammed(error) ? init(error) : true; >>>>>>> } >>>>>>> >>>>>>> The isProgrammed() function just checks whether or not the >>>>>>> fpga_manager state is 'operating' although we are not entirely >>>>>>> confident this is a robust check: >>>>>>> >>>>>>> *isProgrammed*(...) { >>>>>>> ... >>>>>>> const char *e = OU::file2String(val, >>>>>>> "/sys/class/fpga_manager/fpga0/state", '|'); >>>>>>> ... >>>>>>> return val == "operating"; >>>>>>> } >>>>>>> >>>>>>> vivado.mk<http://vivado.mk><http://vivado.mk><http://vivado.mk><http://vivado.mk>'s >>>>>>> *bin make-rule uses bootgen to convert bit to bin. This is >>>>>>> necessary in Vivado 2018.2, but in later versions you may be able >>>>>>> to directly generate the correct *.bin file via an option to >>> write_bitstream: >>>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3) >>>>>>> $(AT)echo -n For $2 on $5 using config $4: Generating >>>>>>> Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen". >>>>>>> $(AT)echo all: > $$(call BifName,$1,$3,$6); \ >>>>>>> echo "{" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo " [destination_device = pl] $(notdir $(call >>>>>>> BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \ >>>>>>> echo "}" >> $$(call BifName,$1,$3,$6); >>>>>>> $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir >>>>>>> $(call >>>>>>> BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call >>>>>>> BinName,$1,$3,$6)) -w,bin) >>>>>>> >>>>>>> Hope this is useful! >>>>>>> >>>>>>> Regards, >>>>>>> David Banks >>>>>>> dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>><mailto:dbanks@geo<mailto:dbanks@geo> >>>>>>> ntech.com<http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com>>><mailto:dbanks@geo<mailto:dbanks@geo><mailto:d<mailto:d> >>>>>>> banks@geo> >>>>>>> ntech.com<http://ntech.com><http://ntech.com><mailto:dbanks@geontech.com<mailto:dbanks@geontech.com><mailto:dba<mailto:dba> >>>>>>> nks@geontech.com<mailto:nks@geontech.com>>>> >>>>>>> Geon Technologies, LLC >>>>>>> -------------- next part -------------- An HTML attachment was >>>>>>> scrubbed... >>>>>>> URL: >>>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/att >>>>>>> ach m ents/20190201/4b49675d/attachment.html> >>>>>>> _______________________________________________ >>>>>>> discuss mailing list >>>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><ma >>>>>>> ilt >>>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:o%3Adiscuss@lists.opencpi.org<mailto:o%253Adiscuss@lists.opencpi.org>> >>>>>>> <mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.or<mailto:discuss@lists.opencpi.or> >>>>>>> g>>> >>>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.o >>>>>>> rg >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>>> -------------- next part -------------- An HTML attachment was >>>>>> scrubbed... >>>>>> URL: >>>>>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/atta >>>>>> chm e nts/20190201/64e4ea45/attachment.html> >>>>>> _______________________________________________ >>>>>> discuss mailing list >>>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mail >>>>>> to >>>>>> :discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailt >>>>>> o:discuss@lists.opencpi.org<mailto:o%3Adiscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.or >>>>>> g >>>>> _______________________________________________ >>>>> discuss mailing list >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto: >>>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>>> -------------- next part -------------- An embedded and >>>> charset-unspecified text was scrubbed... >>>> Name: hello_n310_log_output.txt >>>> URL: >>> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachm >>> e nts/20190805/d9b4f229/attachment.txt> >>>> _______________________________________________ >>>> discuss mailing list >>>> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:d<mailto:d> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:d<mailto:d> >>>> <mailto:d<mailto:d>> >>>> iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org><mailto:iscuss@lists.opencpi.org<mailto:iscuss@lists.opencpi.org>><mailto:dis<mailto:dis> >>>> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >>> >> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>><mailto:discu<mailto:discu> >> ss@lists.opencpi.org<mailto:ss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:discuss@<mailto:discuss@> >> lists.opencpi.org<http://lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190813/4516c872/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>><mailto:dis<mailto:dis> >> cuss@lists.opencpi.org<mailto:cuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190829/b99ae3e0/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org><mailto:discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org>> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org >> -------------- next part -------------- An HTML attachment was >> scrubbed... >> URL: >> <http://lists.opencpi.org/pipermail/discuss_lists.opencpi.org/attachme >> nts/20190905/0b9a1953/attachment.html> >> _______________________________________________ >> discuss mailing list >> discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> >> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org > > > > _______________________________________________ > discuss mailing list > discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> > http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org _______________________________________________ discuss mailing list discuss@lists.opencpi.org<mailto:discuss@lists.opencpi.org> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org