[Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+ fpga_manager

Chris Hinkey chinkey at geontech.com
Fri Sep 6 08:09:17 EDT 2019


iirc it gives clocks and indications of which axi ports are enabled but
not which direction is master (you would have to look up which register/bit
this is set by in the TRM).  i don't remember the axi ports being
configurable as to which side is the master but i very well might be mistaken.

On Thu, Sep 5, 2019 at 7:38 PM James Kulp <jek at parera.com> wrote:

> If you invoke the command with no arguments it tells you what it can do,
> like most opencpi commands.  We mostly use it to find out how the FPGA
> clocks are initialized.
>
>
> > On Sep 5, 2019, at 18:19, Munro, Robert M. <Robert.Munro at jhuapl.edu>
> wrote:
> >
> > Jim,
> >
> > Does the ocpizynq utility list all the available interfaces that can
> be dumped?
> >
> > Thanks,
> > Rob
> >
> > -----Original Message-----
> > From: discuss <discuss-bounces at lists.opencpi.org> On Behalf Of James
> Kulp
> > Sent: Thursday, September 5, 2019 5:59 PM
> > To: discuss at lists.opencpi.org
> > Subject: Re: [Discuss OpenCPI] Bitstream loading with ZynqMP/UltraScale+
> fpga_manager
> >
> > Hi Rob,
> >
> > Nearly all aspects of the boundary hardware between the PS and the PL
> sides of Zynq are controlled by registers written by the processor and
> > *not* in the FPGA bitstream.
> > The FSBL does typically initialize these registers to some default
> values that are not necessarily the right values for how OpenCPI uses the
> PL/FPGA.
> > The ocpizynq utility program does dump out some of these registers, and
> you could modify it pretty easily if you want to know what some other
> registers are set to.
> > All these registers are pretty well documented in the Zynq TRM.
> >
> > Jim
> >
> >> On 9/5/19 5:47 PM, Munro, Robert M. wrote:
> >> Chris,
> >>
> >> Would these be the GP0 AXI slave or master registers that are being
> accessed in this scenario?  I don't believe these are configured in the
> FSBL, but in the FPGA image.  This could indicate that a facility required
> by the OCPI framework is not enabled in the FPGA image built into the N310
> image.  Is there a listing of the OCPI-required FPGA facilities?
> >>
> >> Thanks,
> >> Rob
> >>
> >> From: Chris Hinkey <chinkey at geontech.com>
> >> Sent: Thursday, August 29, 2019 11:58 AM
> >> To: Munro, Robert M. <Robert.Munro at jhuapl.edu>
> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >> ZynqMP/UltraScale+ fpga_manager
> >>
> >> You are not accessing external memory in this case; you are accessing
> axi_gp0's address space, a register directly on the FPGA.  I would suspect
> that something is wrong with how GP0 is set up from the FSBL in this
> case.  I don't think anything would need to change on the OpenCPI software
> side, given that the 7100 vs the 7020 should be the same.
> >> The information on all the register maps and where everything is
> located is somewhere in the Xilinx Technical Reference Manual (be warned,
> this is a very large document).
> >>
> >> On Thu, Aug 29, 2019 at 11:42 AM Munro, Robert M. <Robert.Munro at jhuapl.edu> wrote:
> >> Chris,
> >>
> >> Looking at the Zynq and ZynqMP datasheets:
> >> https://www.xilinx.com/support/documentation/data_sheets/ds190-Zynq-7000-Overview.pdf
> >> https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf
> >>
> >> It looks like the Z-7100 has the same memory interfaces as other Zynq
> parts with the external memory interface having '16-bit or 32-bit
> interfaces to DDR3, DDR3L, DDR2, or LPDDR2 memories' whereas the ZynqMP has
> '32-bit or 64-bit interfaces to DDR4, DDR3, DDR3L, or LPDDR3 memories, and
> 32-bit interface to LPDDR4 memory' .
> >>
> >> Is it possible that other changes are needed from the 1.4_zynq_ultra
> branch that I have not pulled in?
> >>
> >> Thanks,
> >> Rob
> >>
> >> -----Original Message-----
> >> From: discuss <discuss-bounces at lists.opencpi.org> On Behalf Of Munro, Robert M.
> >> Sent: Thursday, August 29, 2019 9:09 AM
> >> To: Chris Hinkey <chinkey at geontech.com>
> >> Cc: discuss at lists.opencpi.org
> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >> ZynqMP/UltraScale+ fpga_manager
> >>
> >> Chris,
> >>
> >> Thanks for the information regarding the internals.  The FPGA part on
> this platform is a XC7Z100.  I purposefully did not pull in changes that I
> believed were related to addressing.  I can double check the specifications
> regarding address widths to verify it should be unchanged.
> >>
> >> Please let me know if there are any other changes or steps missed.
> >>
> >> Thanks,
> >> Rob
> >>
> >>
> >> From: Chris Hinkey <chinkey at geontech.com>
> >> Date: Thursday, Aug 29, 2019, 8:05 AM
> >> To: Munro, Robert M. <Robert.Munro at jhuapl.edu>
> >> Cc: James Kulp <jek at parera.com>, discuss at lists.opencpi.org
> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >> ZynqMP/UltraScale+ fpga_manager
> >>
> >> It looks like you loaded something successfully, but the control plane is
> not hooked up quite right.
> >>
> >> As an early part of the running process, OpenCPI reads a register across
> the control plane that contains the ASCII string "OpenCPI(NULL)", and in your case you
> are reading "CPI(NULL)Open".  This is given by the data in the error message
> (sb 0x435049004f70656e).  This is the magic value the message is referring to;
> it requires "OpenCPI" to be at address 0 of the control plane address
> space to proceed.
> >>
> >> I think we ran into this problem, and we decided it was because the bus
> on the UltraScale was set up to be 32 bits and needed to be 64 bits for the
> HDL that we implemented to work correctly.  Remind me what platform you are
> using: is it a Zynq UltraScale or a 7000 series?
> >>
> >> On Wed, Aug 28, 2019 at 5:55 PM Munro, Robert M. <Robert.Munro at jhuapl.edu> wrote:
> >> Chris,
> >>
> >> After merging some sections of HdlBusDriver.cxx into the 1.4 version of
> the file and going through the build process, I am encountering a new error
> when attempting to load HDL on the N310.  The fsk_filerw application is being
> used as a known-good reference for this purpose.  The new sections of
> vivado.mk were merged in to attempt building the HDL using the framework,
> but it did not generate the .bin file when using ocpidev build with the
> --hdl-assembly argument.  I then attempted to replicate the commands in
> vivado.mk manually, following the Xilinx guidelines for generating a .bin
> from a .bit:
> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841645/Solution+Zynq+PL+Programming+With+FPGA+Manager
> >>
> >> The steps were:
> >> - generate a .bif file similar to the documentation's
> >> Full_Bitstream.bif using the correct filename
> >> - run a bootgen command similar to vivado.mk:
> >> bootgen -image <bif_filename> -arch zynq -o <bin_filename> -w
> >>
> >> This generated a .bin file as desired, which was copied to the artifacts
> directory in the ocpi folder structure.
> >>
> >> The built ocpi environment loaded successfully, recognized the HDL
> container as being available, and the hello application ran
> successfully.  The command output contained ' HDL Device 'PL:0' responds,
> but the OCCP signature: magic: 0x18000afe187003 (sb 0x435049004f70656e) ',
> but the impact of this was not understood until attempting to load HDL.
> When attempting to run fsk_filerw from the ocpirun command, it did not
> appear to recognize the assembly when listing resources found in the output,
> and reported that a suitable candidate for an HDL-implemented component was
> not available.
> >>
> >> The command 'ocpihdl load' was then attempted to force the loading of
> the HDL assembly; the same '...OCCP signature: magic: ...' output was
> observed, and finally ' Exiting for problem: error loading device pl:0:
> Magic numbers in admin space do not match'.
> >>
> >> Is there some other step that must be taken during the generation of
> the .bin file?  Is there any other software modification that is required
> of the ocpi runtime code?  The diff patch of the modified 1.4
> HdlBusDriver.cxx is attached to make sure that the required code
> modifications are performed correctly.  The log output from the ocpihdl
> load command is attached in case that can provide further insight regarding
> performance or required steps.
> >>
> >> Thanks,
> >> Rob
> >>
> >> -----Original Message-----
> >> From: discuss <discuss-bounces at lists.opencpi.org> On Behalf Of Munro, Robert M.
> >> Sent: Tuesday, August 13, 2019 10:56 AM
> >> To: Chris Hinkey <chinkey at geontech.com>; James Kulp <jek at parera.com>
> >> Cc: discuss at lists.opencpi.org
> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >> ZynqMP/UltraScale+ fpga_manager
> >>
> >> Chris,
> >>
> >> Thank you for your helpful response and insight.  My thinking was that
> the #define could be overridden to provide the desired functionality for
> the platform, but I was not comfortable making the changes without proper
> familiarity.  I will move forward by looking at the diff against the 1.4
> mainline, make the appropriate modifications, and test with the modified
> framework on the N310.
> >>
> >> Thanks again for your help.
> >>
> >> Thanks,
> >> Rob
> >>
> >> From: Chris Hinkey <chinkey at geontech.com>
> >> Sent: Tuesday, August 13, 2019 10:02 AM
> >> To: James Kulp <jek at parera.com>
> >> Cc: Munro, Robert M. <Robert.Munro at jhuapl.edu>; discuss at lists.opencpi.org
> >> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >> ZynqMP/UltraScale+ fpga_manager
> >>
> >> I think when I implemented this code I probably made the assumption
> that if we are using fpga_manager we are also using ARCH=arm64.  This met
> our needs, as we only cared about the fpga manager on UltraScale devices at
> the time.  We also made the assumption that the tools created a tarred .bin
> file instead of a .bit file, because we could not get the bit-to-bin
> conversion working with the existing OpenCPI code (this might cause you
> problems later when actually trying to load the FPGA).
> >>
> >> The original problem you were running into is certainly because of an
> >> ifdef on line 226, where it will check the old driver done pin if it is
> >> on an arm and not an arm64:
> >>
> >> 226 #if defined(OCPI_ARCH_arm) || defined(OCPI_ARCH_arm_cs)
> >>
> >> To move forward for now you can change this line to an "#if 0" and
> rebuild the framework.  Note this will cause other Zynq-based platforms (zed,
> matchstiq, etc.) to no longer work with this patch, but maybe you don't care
> for now while Jim tries to get this into the mainline in a more generic way.
> >> There may be some similar patches you need to make to the same file, but
> the full diff that I needed to make to HdlBusDriver.cxx against the 1.4 mainline
> can be seen here: https://github.com/opencpi/opencpi/pull/17/files in case
> you didn't already know.
> >> Hope this helps.
> >>
> >> On Mon, Aug 12, 2019 at 11:12 AM James Kulp <jek at parera.com> wrote:
> >>> On 8/12/19 9:37 AM, Munro, Robert M. wrote:
> >>> Jim,
> >>>
> >>> This is the only branch with the modifications required for use with
> >>> the FPGA Manager driver.  This is required for use with the Linux
> >>> kernel provided for the N310.  The Xilinx toolset being used is
> >>> 2018_2 and the kernel being used is generated via the N310 build
> >>> container using v3.14.0.0 .
> >> Ok.  The default Xilinx kernel associated with 2018_2 is 4.14.
> >>
> >> I guess the bottom line is that this combination of platform and tools
> and kernel is not yet supported in either the mainline of OpenCPI or the
> third-party branch you are trying to use.
> >>
> >> It is probably not a big problem, but someone has to debug it that has
> the time and skills necessary to dig as deep as necessary.
> >>
> >> The fpga manager in the various later linux kernels will definitely be
> supported in a patch from the mainline "soon", probably in a month, since
> it is being actively worked.
> >>
> >> That does not guarantee functionality on your exact kernel (and thus
> version of the fpga manager), but it does guarantee it working on the
> latest Xilinx-supported kernel.
> >>
> >> Jim
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>> Thanks,
> >>> Robert Munro
> >>>
> >>> *From: *James Kulp <jek at parera.com>
> >>> *Date: *Monday, Aug 12, 2019, 9:00 AM
> >>> *To: *Munro, Robert M. <Robert.Munro at jhuapl.edu>,
> >>> discuss at lists.opencpi.org
> >>> *Subject: *Re: [Discuss OpenCPI] Bitstream loading with
> >>> ZynqMP/UltraScale+ fpga_manager
> >>>
> >>> I was a bit confused about your use of the "ultrascale" branch.
> >>> So you are using a branch with two types of patches in it: one for
> >>> later linux kernels with the fpga manager, and the other for the
> >>> ultrascale chip itself.
> >>> The N310 is not ultrascale, so we need to separate the two issues,
> >>> which were not separated before.
> >>> So it's not really a surprise that the branch you are using is not yet
> >>> happy with the system you are trying to run it on.
> >>>
> >>> I am working on a branch that simply updates the xilinx tools
> >>> (2019-1) and the xilinx linux kernel (4.19) without dealing with
> >>> ultrascale, which is intended to work with a baseline zed board, but
> >>> with current tools and kernels.
> >>>
> >>> The N310 uses a 7000-series part (7100) which should be compatible
> >>> with this.
> >>>
> >>> Which kernel and which xilinx tools are you using?
> >>>
> >>> Jim
> >>>
> >>>
> >>>
> >>>> On 8/8/19 1:36 PM, Munro, Robert M. wrote:
> >>>> Jim or others,
> >>>>
> >>>> Is there any further input or feedback on the source or resolution
> >>> of this issue?
> >>>> As it stands I do not believe that the OCPI runtime software will be
> >>> able to successfully load HDL assemblies on the N310 platform.  My
> >>> familiarity with this codebase is limited and we would appreciate any
> >>> guidance available toward investigating or resolving this issue.
> >>>> Thank you,
> >>>> Robert Munro
> >>>>
> >>>> -----Original Message-----
> >>>> From: discuss <discuss-bounces at lists.opencpi.org> On Behalf Of
> >>> Munro, Robert M.
> >>>> Sent: Monday, August 5, 2019 10:49 AM
> >>>> To: James Kulp
> >>>> <jek at parera.com>;
> >>>> discuss at lists.opencpi.org
> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >>> ZynqMP/UltraScale+ fpga_manager
> >>>> Jim,
> >>>>
> >>>> The given block of code is not the root cause of the issue because
> >>> the file system does not have a /dev/xdevcfg device.
> >>>> I suspect there is some functional code similar to this being
> >>> compiled incorrectly:
> >>>> #if (OCPI_ARCH_arm)
> >>>>    // do xdevcfg loading stuff
> >>>> #else
> >>>>    // do fpga_manager loading stuff
> >>>> #endif
> >>>>
> >>>> This error is being output at environment initialization as well as
> >>> when running hello.xml.  I've attached a copy of the output from the
> >>> command 'ocpirun -v -l 20 hello.xml' for further investigation.
> >>>>  From looking at the output I believe the system is calling
> >>> OCPI::HDL::Driver::search() in HdlDriver.cxx at line 128 which is
> >>> calling OCPI::HDL::Zynq::Driver::search() in HdlBusDriver.cxx at line
> >>> 484 which in turn is calling Driver::open in the same file at line
> >>> 499 which then outputs the 'When searching for PL device ...' error
> >>> at line 509. This then returns to the HdlDriver.cxx search() function
> >>> and outputs the '... got Zynq search error ...' error at line 141.
> >>>> This is an ARM device, and I am not familiar enough with this
> >>> codebase to adjust precompiler definitions with confidence that no
> >>> other code section will be affected.
> >>>> Thanks,
> >>>> Robert Munro
> >>>>
> >>>> -----Original Message-----
> >>>> From: James Kulp
> >>>> <jek at parera.com>
> >>>> Sent: Friday, August 2, 2019 4:27 PM
> >>>> To: Munro, Robert M.
> >>>> <Robert.Munro at jhuapl.edu>;
> >>> discuss at lists.opencpi.org
> >>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >>> ZynqMP/UltraScale+ fpga_manager
> >>>> That code is not integrated into the main line of OpenCPI yet, but
> >>> in that code there is:
> >>>>             if (file_exists("/dev/xdevcfg")){
> >>>>               ret_val= load_xdevconfig(fileName, error);
> >>>>             }
> >>>>             else if (file_exists("/sys/class/fpga_manager/fpga0/")){
> >>>>               ret_val= load_fpga_manager(fileName, error);
> >>>>             }
> >>>> So it looks like the presence of /dev/xdevcfg is what causes it to
> >>> look for /sys/class/xdevcfg/xdevcfg/device/prog_done
> >>>>> On 8/2/19 4:15 PM, Munro, Robert M. wrote:
> >>>>> Are there any required flag or environment variable settings that
> >>> must be done before building the framework to utilize this
> >>> functionality?  I have a platform built that is producing an output
> >>> during environment load: 'When searching for PL device '0': Can't
> >>> process file "/sys/class/xdevcfg/xdevcfg/device/prog_done" for string:
> >>> file could not be open for reading' .  This leads me to believe that
> >>> it is running the xdevcfg code still present in HdlBusDriver.cxx .
> >>>>> Use of the release_1.4_zynq_ultra branch and presence of the
> >>> /sys/class/fpga_manager loading code in HdlBusDriver.cxx have been
> >>> verified for the environment used to generate the executables.
> >>>>> Thanks,
> >>>>> Robert Munro
> >>>>>
> >>>>> -----Original Message-----
> >>>>> From: discuss <discuss-bounces at lists.opencpi.org>
> >>>>> On Behalf Of James Kulp
> >>>>> Sent: Friday, February 1, 2019 4:18 PM
> >>>>> To:
> >>>>> discuss at lists.opencpi.org
> >>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >>>>> ZynqMP/UltraScale+ fpga_manager
> >>>>>
> >>>>>> On 2/1/19 3:37 PM, Chris Hinkey wrote:
> >>>>>> In response to Point 1 here: we attempted using the code that
> >>> converts from .bit to .bin on the fly.  This did not work
> >>> on these newer platforms using fpga_manager, so we decided to use the
> >>> vendor-provided tools rather than reverse engineer what was wrong
> >>> with the existing code.
> >>>>>> If changes need to be made to create more commonality, and given
> >>> that all Zynq and ZynqMP platforms need a .bin file format, wouldn't
> >>> it make more sense to just use .bin files rather than converting them
> >>> on the fly every time?
> >>>>> A sensible question for sure.
> >>>>>
> >>>>> When this was done originally, it was to avoid generating multiple
> >>> file formats all the time.  .bit files are necessary for JTAG
> >>> loading, and .bin files are necessary for zynq hardware loading.
> >>>>> Even on Zynq, some debugging using jtag is done, and having that be
> >>> mostly transparent (using the same bitstream files) is convenient.
> >>>>> So we preferred having a single bitstream file (with metadata,
> >>>>> compressed) regardless of whether we were hardware loading or jtag
> >>> loading, zynq or virtex6 or spartan3, ISE or Vivado.
> >>>>> In fact, there was no reverse engineering the last time since both
> >>> formats, at the level we were operating at, were documented by Xilinx.
> >>>>> It seemed to be worth the 30 SLOC to convert on the fly to keep a
> >>> single format of Xilinx bitstream files, including between ISE and
> >>> Vivado and all Xilinx FPGA types.
> >>>>> Of course it might make sense to switch things around the other way
> >>> and use .bin files uniformly and only convert to .bit format for JTAG
> >>> loading.
> >>>>> But since the core of the "conversion", after a header, is just a
> >>> 32-bit endian swap, it doesn't matter much either way.
> >>>>> If it ends up being a truly nasty reverse engineering exercise now,
> >>> I would reconsider.
> >>>>>> ________________________________
> >>>>>> From: discuss <discuss-bounces at lists.opencpi.org> on behalf of
> >>>>>> James Kulp <jek at parera.com>
> >>>>>> Sent: Friday, February 1, 2019 3:27 PM
> >>>>>> To: discuss at lists.opencpi.org
> >>>>>> Subject: Re: [Discuss OpenCPI] Bitstream loading with
> >>>>>> ZynqMP/UltraScale+ fpga_manager
> >>>>>>
> >>>>>> David,
> >>>>>>
> >>>>>> This is great work. Thanks.
> >>>>>>
> >>>>>> Since I believe the fpga manager stuff is really an attribute of
> >>>>>> later linux kernels, I don't think it is really a ZynqMP thing,
> >>>>>> but just a later linux kernel thing.
> >>>>>> I am currently bringing up the quite ancient zedboard using the
> >>>>>> latest Vivado and Xilinx linux and will try to use this same code.
> >>>>>> There are two things I am looking into, now that you have done
> >>>>>> the hard work of getting to a working solution:
> >>>>>>
> >>>>>> 1. The bit vs bin thing existed with the old bitstream loader, but
> >>>>>> I think we were converting on the fly, so I will try that here.
> >>>>>> (To avoid the bin format altogether).
> >>>>>>
> >>>>>> 2. The fpga manager has entry points from kernel mode that allow
> >>>>>> you to inject the bitstream without making a copy in /lib/firmware.
> >>>>>> Since we already have a kernel driver, I will try to use that to
> >>>>>> avoid the whole /lib/firmware thing.
> >>>>>>
> >>>>>> So if those two things can work (no guarantees), the difference
> >>>>>> between old and new bitstream loading (and building) can be
> >>>>>> minimized and the loading process faster and requiring no extra
> >>>>>> file system
> >>> space.
> >>>>>> This will make merging easier too.
> >>>>>>
> >>>>>> We'll see.  Thanks again to you and Geon for this important
> >>> contribution.
> >>>>>> Jim
> >>>>>>
> >>>>>>
> >>>>>>> On 2/1/19 3:12 PM, David Banks wrote:
> >>>>>>> OpenCPI users interested in ZynqMP fpga_manager,
> >>>>>>>
> >>>>>>> I know some users are interested in the OpenCPI's bitstream
> >>>>>>> loading for ZynqMP/UltraScale+ using "*fpga_manager*". In
> >>>>>>> general, we followed the instructions at
> >>>>>>> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841847/Solution+ZynqMP+PL+Programming#SolutionZynqMPPLProgramming-StepsforprogrammingtheFullBitstream .
> >>>>>>> I will give a short explanation here:
> >>>>>>>
> >>>>>>> Reminder: All ZynqMP/UltraScale+ changes are located at
> >>>>>>> https://github.com/Geontech/opencpi.git in release_1.4_zynq_ultra
> >>> branch.
> >>>>>>> Firstly, all *fpga_manager *code is located in
> >>>>>>> *runtime/hdl/src/HdlBusDriver.cxx*. There were also changes in
> >>>>>>> *runtime/hdl-support/xilinx/vivado.mk* to generate a bitstream in the correct *.bin
> >>>>>>> format. To see the changes made to these files for ZynqMP, you
> >>>>>>> can diff them between
> >>>>>>> *release_1.4* and *release_1.4_zynq_ultra*:
> >>>>>>> $ git clone https://github.com/Geontech/opencpi.git --branch
> >>>>>>> release_1.4_zynq_ultra; $ cd opencpi; $ git fetch origin
> >>>>>>> release_1.4:release_1.4; $ git diff release_1.4 --
> >>>>>>> runtime/hdl/src/HdlBusDriver.cxx
> >>>>>>> runtime/hdl-support/xilinx/vivado.mk
> >>>>>>>
> >>>>>>>
> >>>>>>> The directly relevant functions are *load_fpga_manager()* and
> >>>>>>> *isProgrammed()*.
> >>>>>>> load_fpga_manager() ensures that /lib/firmware exists, reads the
> >>>>>>> *.bin bitstream file and writes its contents to
> >>> /lib/firmware/opencpi_temp.bin.
> >>>>>>> It then writes "0" to /sys/class/fpga_manager/fpga0/flags and
> >>>>>>> the filename "opencpi_temp.bin" to
> >>> /sys/class/fpga_manager/fpga0/firmware.
> >>>>>>> Finally, the temporary opencpi_temp.bin bitstream is removed and
> >>>>>>> the state of the fpga_manager
> >>>>>>> (/sys/class/fpga_manager/fpga0/state) is confirmed to be
> "operating" in isProgrammed().
> >>>>>>>
> >>>>>>> fpga_manager requires that bitstreams be in *.bin format in order
> >>>>>>> to write them to the PL. So, some changes were made to vivado.mk
> >>>>>>> to add a make rule for the *.bin file. This make rule (*BinName*)
> >>>>>>> uses Vivado's "*bootgen*" to convert the bitstream from *.bit to *.bin.
> >>>>>>>
> >>>>>>> Most of the relevant code is pasted or summarized below:
> >>>>>>>
> >>>>>>>             *load_fpga_manager*(const char *fileName, std::string &error) {
> >>>>>>>               if (!file_exists("/lib/firmware"))
> >>>>>>>                 mkdir("/lib/firmware", 0666);
> >>>>>>>               int out_file = creat("/lib/firmware/opencpi_temp.bin", 0666);
> >>>>>>>               gzFile bin_file;
> >>>>>>>               int bfd, zerror;
> >>>>>>>               uint8_t buf[8*1024];
> >>>>>>>
> >>>>>>>               if ((bfd = ::open(fileName, O_RDONLY)) < 0)
> >>>>>>>                 OU::format(error, "Can't open bitstream file '%s' for reading: %s(%d)",
> >>>>>>>                            fileName, strerror(errno), errno);
> >>>>>>>               if ((bin_file = ::gzdopen(bfd, "rb")) == NULL)
> >>>>>>>                 OU::format(error, "Can't open compressed bin file '%s' for reading: %s(%u)",
> >>>>>>>                            fileName, strerror(errno), errno);
> >>>>>>>               do {
> >>>>>>>                 uint8_t *bit_buf = buf;
> >>>>>>>                 int n = ::gzread(bin_file, bit_buf, sizeof(buf));
> >>>>>>>                 if (n < 0)
> >>>>>>>                   return true;
> >>>>>>>                 if (n & 3)
> >>>>>>>                   return OU::eformat(error, "Bitstream data in '%s' is not a multiple of 4 bytes",
> >>>>>>>                                      fileName);
> >>>>>>>                 if (n == 0)
> >>>>>>>                   break;
> >>>>>>>                 if (write(out_file, buf, n) <= 0)
> >>>>>>>                   return OU::eformat(error,
> >>>>>>>                                      "Error writing to /lib/firmware/opencpi_temp.bin for bin loading: %s(%u/%d)",
> >>>>>>>                                      strerror(errno), errno, n);
> >>>>>>>               } while (1);
> >>>>>>>               close(out_file);
> >>>>>>>               std::ofstream fpga_flags("/sys/class/fpga_manager/fpga0/flags");
> >>>>>>>               std::ofstream fpga_firmware("/sys/class/fpga_manager/fpga0/firmware");
> >>>>>>>               fpga_flags << 0 << std::endl;
> >>>>>>>               fpga_firmware << "opencpi_temp.bin" << std::endl;
> >>>>>>>
> >>>>>>>               remove("/lib/firmware/opencpi_temp.bin");
> >>>>>>>               return isProgrammed(error) ? init(error) : true;
> >>>>>>>             }
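As a side note on the `n & 3` test in that loop: the PL configuration interface consumes 32-bit words, so the copy rejects data whose length is not a multiple of 4 bytes. A simplified stand-in for the decompress-and-copy loop is sketched below, using plain file streams instead of zlib's gzread (which the real code uses so gzip-compressed bitstream files are handled transparently); the function name and error handling here are illustrative, not OpenCPI's.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Simplified stand-in for the copy loop above: read the source file in
// chunks, append each chunk to the destination, and reject data that is
// not a whole number of 32-bit words.
bool copyWordAligned(const std::string &src, const std::string &dst, std::string &error) {
  std::ifstream in(src, std::ios::binary);
  std::ofstream out(dst, std::ios::binary);
  if (!in || !out) {
    error = "cannot open '" + src + "' or '" + dst + "'";
    return false;
  }
  std::vector<char> buf(8 * 1024);
  size_t total = 0;
  while (in.read(buf.data(), (std::streamsize)buf.size()) || in.gcount() > 0) {
    std::streamsize n = in.gcount();
    total += (size_t)n;
    out.write(buf.data(), n);
  }
  if (total & 3) { // same check as 'n & 3' above: must be a multiple of 4 bytes
    error = "bitstream data is not a multiple of 4 bytes";
    return false;
  }
  return true;
}
```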
> >>>>>>>
> >>>>>>> The isProgrammed() function just checks whether or not the
> >>>>>>> fpga_manager state is 'operating', although we are not entirely
> >>>>>>> confident this is a robust check:
> >>>>>>>             *isProgrammed*(...) {
> >>>>>>>               ...
> >>>>>>>               const char *e = OU::file2String(val,
> >>>>>>> "/sys/class/fpga_manager/fpga0/state", '|');
> >>>>>>>               ...
> >>>>>>>               return val == "operating";
> >>>>>>>             }
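A freestanding version of that state check might look like the following. This is a sketch: OU::file2String is OpenCPI's helper, replaced here with a plain ifstream, and the path is a parameter so the check can be exercised against an ordinary file.

```cpp
#include <fstream>
#include <string>

// Sketch of the isProgrammed() check: read the fpga_manager 'state'
// attribute and compare it to "operating". On real hardware, statePath
// would be "/sys/class/fpga_manager/fpga0/state".
bool isOperating(const std::string &statePath) {
  std::ifstream f(statePath);
  std::string state;
  std::getline(f, state); // sysfs attributes are newline-terminated
  return state == "operating";
}
```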
> >>>>>>>
> >>>>>>> vivado.mk's *.bin make rule uses bootgen to convert *.bit to *.bin.
> >>>>>>> This is necessary in Vivado 2018.2, but in later versions you may
> >>>>>>> be able to generate the correct *.bin file directly via an option
> >>>>>>> to write_bitstream:
> >>>>>>> $(call *BinName*,$1,$3,$6): $(call BitName,$1,$3)
> >>>>>>>            $(AT)echo -n For $2 on $5 using config $4: Generating Xilinx Vivado bitstream file $$@ with BIN extension using "bootgen".
> >>>>>>>            $(AT)echo all: > $$(call BifName,$1,$3,$6); \
> >>>>>>>                 echo "{" >> $$(call BifName,$1,$3,$6); \
> >>>>>>>                 echo " [destination_device = pl] $(notdir $(call BitName,$1,$3,$6))" >> $$(call BifName,$1,$3,$6); \
> >>>>>>>                 echo "}" >> $$(call BifName,$1,$3,$6);
> >>>>>>>            $(AT)$(call DoXilinx,*bootgen*,$1,-image $(notdir $(call BifName,$1,$3,$6)) -arch $(BootgenArch) -o $(notdir $(call BinName,$1,$3,$6)) -w,bin)
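Expanded, the .bif file those echo commands generate looks like the following (illustrative only; the actual bitstream filename depends on the assembly and container names, so `my_assembly.bit` here is a placeholder):

```
all:
{
 [destination_device = pl] my_assembly.bit
}
```

bootgen then consumes it roughly as `bootgen -image my_assembly.bif -arch zynqmp -o my_assembly.bin -w` (assuming $(BootgenArch) expands to zynqmp for a ZynqMP/UltraScale+ part; -w permits overwriting an existing output file).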
> >>>>>>>
> >>>>>>> Hope this is useful!
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>> David Banks
> >>>>>>> dbanks at geontech.com
> >>>>>>> Geon Technologies, LLC
> >>>>>>> _______________________________________________
> >>>>>>> discuss mailing list
> >>>>>>> discuss at lists.opencpi.org
> >>>>>>> http://lists.opencpi.org/mailman/listinfo/discuss_lists.opencpi.org
> >>>> [Attachment scrubbed: hello_n310_log_output.txt]