Sharing workload between multiple workers

dwp@md1tech.co.uk
Thu, May 18, 2023 3:26 PM

Hello,

My goal is to speed up a certain process through parallelisation. I have used an FFT as an example, but this is more about exploring the capabilities of OpenCPI.

I was advised that creating new threads from within a worker is not recommended, and that each worker exists in its own thread, so it seemed logical to create multiple instances of a worker that each handle a portion of the workload and have those instances run in parallel.
See the attached screenshot showing the 3 components I created and how data is sent between them.

It is clear that the current implementation is not achieving this: instead, the Partial FFT workers enter their run methods one after another, and CPU usage is stuck on a single core.

Is this something that is possible using OpenCPI?

What issues can arise from creating threads within workers that are terminated before the run method is exited?

Thanks in advance,
Dan

Aaron Olivarez
Thu, May 18, 2023 4:00 PM

How are you executing your application? Are you using an OpenCPI Application
Specification (OAS) with the ocpirun command-line tool, or a C++ program
using the Application Control Interface (ACI)?

If you are using an OAS, you can create additional RCC containers by passing
the -n (or --processors) argument to ocpirun, which specifies the number of
RCC containers to create at execution. It defaults to 1; if you specify more
than 1, the workers are assigned round-robin across the available containers.
Another option is to pin a worker to a specific container using the -P
argument. The syntax is -P <instance-name>=<platform-name>, for example
-P partial_fft0=rcc0 -P partial_fft1=rcc1, etc. You can also specify the
platform in the OAS XML by adding a Platform='rcc0' attribute to the
instance.

By passing the -v argument to ocpirun you can see where each worker ran.
Below is an example of running an application with 5 workers using 5
containers, letting OpenCPI decide which container runs each worker.

~/opencpi-v2.4.6/projects/assets/applications$ ocpirun -v -n5 testbias5.xml
Available containers are:  0: rcc0 [model: rcc os: linux platform:
ubuntu18_04], 1: rcc1 [model: rcc os: linux platform: ubuntu18_04], 2: rcc2
[model: rcc os: linux platform: ubuntu18_04], 3: rcc3 [model: rcc os: linux
platform: ubuntu18_04], 4: rcc4 [model: rcc os: linux platform: ubuntu18_04]
Actual deployment is:
  Instance  0 file_read (spec ocpi.core.file_read) on rcc container 0:
rcc0, using file_read in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.file_read.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:20 2023
  Instance  1 bias0 (spec ocpi.core.bias) on rcc container 1: rcc1, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  2 bias1 (spec ocpi.core.bias) on rcc container 2: rcc2, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  3 bias2 (spec ocpi.core.bias) on rcc container 3: rcc3, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  4 bias3 (spec ocpi.core.bias) on rcc container 4: rcc4, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  5 bias4 (spec ocpi.core.bias) on rcc container 0: rcc0, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  6 file_write (spec ocpi.core.file_write) on rcc container 1:
rcc1, using file_write in
/home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.file_write.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:23 2023
Application XML parsed and deployments (containers and artifacts) chosen [0
s 2 ms]
Application established: containers, workers, connections all created [0 s
0 ms]
Application started/running [0 s 0 ms]
Waiting for application to finish (no time limit)
Application finished [0 s 10 ms]

Let me know if you are using an ACI application. It's a similar process.

Aaron

discuss mailing list -- discuss@lists.opencpi.org
To unsubscribe send an email to discuss-leave@lists.opencpi.org

dwp@md1tech.co.uk
Fri, May 19, 2023 8:16 AM

Hi Aaron, thanks for your help, this is really useful.

Yes, my application uses an OAS. So far I have been using ocpidev run application <app> to run my applications.

I am getting an error when I instead try to use ocpirun:

OCPI( 8:532.0158): For instance  0: "file_read", finding and checking candidate implementations/workers
OCPI( 8:532.0158): Error Exception: No acceptable implementations found in any libraries for "ocpi.core.file_read".  Use log level 8 for more detail.

When I run the same application with ocpidev run it finds it fine:

OCPI( 8:585.0106): For instance  0: "file_read", finding and checking candidate implementations/workers
OCPI( 8:585.0106):   Considering implementation "file_read" from artifact "../../imports/ocpi.core/exports/artifacts/ocpi.core.file_read.rcc.0.ubuntu22_04.so"
OCPI( 8:585.0106):     Accepted implementation before connectivity checks with score 1

Dominic Walters
Fri, May 19, 2023 8:24 AM

You need to set the OCPI_LIBRARY_PATH environment variable manually when
using ocpirun. It needs to be a list of paths pointing at the artifacts
folders of all the projects you need assets from; in this case you need
core:

OCPI_LIBRARY_PATH=$OCPI_ROOT_DIR/projects/core/artifacts

Then add whatever other paths are needed, separating with colons.
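For example, a setting covering two projects might look like this. The paths below assume a default source-tree layout under OCPI_ROOT_DIR; adjust them to your install:

```shell
# Root of the OpenCPI tree (assumed location; adjust to your install).
OCPI_ROOT_DIR=${OCPI_ROOT_DIR:-$HOME/opencpi}
# Colon-separated list of artifact directories ocpirun searches:
export OCPI_LIBRARY_PATH="$OCPI_ROOT_DIR/projects/core/artifacts:$OCPI_ROOT_DIR/projects/assets/artifacts"
echo "$OCPI_LIBRARY_PATH"
```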

ocpidev run understands project structure, so it does this for you.


dwp@md1tech.co.uk
Fri, May 19, 2023 8:36 AM

Thank you, it now works.

Ian Chodera
Fri, May 19, 2023 1:16 PM

Hi Aaron

You said: "Let me know if you are using an ACI application. It's a similar process. "

Presumably this is done using the PValues argument to the application initialisation call? The only thing is, there doesn't seem to be an equivalent ACI PValue option for the ocpirun -n option listed on page 86 of the Application Development Guide: https://opencpi.gitlab.io/releases/latest/docs/OpenCPI_Application_Development_Guide.pdf

Ian Chodera
MD1 Technology Ltd.

e: iac@md1tech.co.uk
w: www.md1tech.co.uk

MD1 Technology Ltd. is registered in England & Wales with company number 09378746.
Registered address: Cheltenham Film & Photographic Studios, Hatherley Lane, Cheltenham, Gloucestershire, England. GL51 6PN.
VAT registration number: GB 206 3877 05


Aaron Olivarez
Fri, May 19, 2023 1:37 PM

Hi Ian,

The creation of multiple RCC containers is similar. You're right, the 'n'
option is not exposed via PValues. This is how ocpirun does it:

  //  OA::Container *c;
  if (options.processors())
    for (unsigned n = 1; n < options.processors(); n++) {
      std::string name;
      OU::formatString(name, "rcc%d", n);
      OA::ContainerManager::find("rcc", name.c_str());
    }

You could use ContainerManager from an ACI application. The find function's
name is misleading, but it is what creates rccN when you use the -n option
in ocpirun.
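As a self-contained illustration of that loop, here is a sketch of just the
container-name generation. The ContainerManager::find call itself needs the
OpenCPI headers, so it appears only in a comment:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// What ocpirun's --processors loop amounts to: for -nN, format the names
// "rcc1" .. "rcc(N-1)" ("rcc0" always exists) and hand each one to
// OA::ContainerManager::find("rcc", name.c_str()) before constructing the
// OA::Application. Here we only build the names.
std::vector<std::string> extraRccContainerNames(unsigned processors) {
  std::vector<std::string> names;
  for (unsigned n = 1; n < processors; n++) {
    char buf[16];
    std::snprintf(buf, sizeof buf, "rcc%u", n);
    names.emplace_back(buf);
  }
  return names;
}
```

In a real ACI program you would run this creation step once, before the
application is constructed, so deployment can see all the containers.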

Aaron

On Fri, May 19, 2023 at 8:16 AM Ian Chodera iac@md1tech.co.uk wrote:

Hi Aaron

You said: "Let me know if you are using an ACI application. It's a
similar process. "

Presumably this is using the PValues argument to the application
initialisation call? The only thing is; there doesn’t seem to be an ACI
Value equivalent option for the ocpirun ’n’ option listed on page 86 of
the application development guide:
https://opencpi.gitlab.io/releases/latest/docs/OpenCPI_Application_Development_Guide.pdf

Ian Chodera
MD1 Technology Ltd.

e: iac@md1tech.co.uk
w: www.md1tech.co.uk

MD1TechnologyLtd. is registered in England & Wales with company number
09378746.
Registered address: Cheltenham Film & Photographic Studios, Hatherley
Lane, Cheltenham, Gloucestershire, England. GL51 6PN.
VAT registration number: GB 206 3877 05

On 18 May 2023, at 17:00, Aaron Olivarez aaron@olivarez.info wrote:

How are you executing your application? Is it using an OpencPI application
Specification (OAS) and ocpirun command line tool or utilizing a C++
program using the application control interface (ACI).

If you are using an OAS you can specify additional RCC containers using
the -n or '--processors' argument on ocpirun to specify the number of
rcc containers to create at execution.
It defaults to 1 but if you specify more than 1 it will round robin the
workers based on the numbers of available containers. Another option you
have is to specify a worker to run on a specific container using -P
argument. For syntax is -P <instance-name>=<platformname> as an example -P
parital_fft0=rcc0 -P partial_fft1=rcc1 etc. You can specify the platform in
XML as well as by adding an attribute to the instance Platform='rcc0' This
can also be done through XML.

By using the -v argument when executing ocpirun you can see where the
worker was ran. Below is an example of running an application with 5
workers using 5 containers and allowing OpenCPI to dictate which container
to run the worker.

~/opencpi-v2.4.6/projects/assets/applications$ ocpirun -v -n5 testbias5.xml
Available containers are:  0: rcc0 [model: rcc os: linux platform:
ubuntu18_04], 1: rcc1 [model: rcc os: linux platform: ubuntu18_04], 2: rcc2
[model: rcc os: linux platform: ubuntu18_04], 3: rcc3 [model: rcc os: linux
platform: ubuntu18_04], 4: rcc4 [model: rcc os: linux platform: ubuntu18_04]
Actual deployment is:
  Instance  0 file_read (spec ocpi.core.file_read) on rcc container 0:
rcc0, using file_read in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.file_read.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:20 2023
  Instance  1 bias0 (spec ocpi.core.bias) on rcc container 1: rcc1, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  2 bias1 (spec ocpi.core.bias) on rcc container 2: rcc2, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  3 bias2 (spec ocpi.core.bias) on rcc container 3: rcc3, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  4 bias3 (spec ocpi.core.bias) on rcc container 4: rcc4, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  5 bias4 (spec ocpi.core.bias) on rcc container 0: rcc0, using
bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  6 file_write (spec ocpi.core.file_write) on rcc container 1:
rcc1, using file_write in
/home/aaron/opencpi-v2.4.6/projects/core/artifacts/
ocpi.core.file_write.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:23 2023
Application XML parsed and deployments (containers and artifacts) chosen
[0 s 2 ms]
Application established: containers, workers, connections all created [0 s
0 ms]
Application started/running [0 s 0 ms]
Waiting for application to finish (no time limit)
Application finished [0 s 10 ms]

Let me know if you are using an ACI application. It's a similar process.

Aaron

On Thu, May 18, 2023 at 10:26 AM dwp@md1tech.co.uk wrote:

Hello,

My goal is to speed up a certain process through parallelisation. I have
used an FFT as an example but this was more about exploring the
capabilities of openCPI.

I was advised that creating new threads from within a worker is not
recommended and that each worker exists in its own thread so it seemed
logical to create multiple instances of a worker that can handle a portion
of the workload and have these instances run in parallel.
See the screenshot attached showing the 3 components I created and how
data is sent between them.

It is clear that in the current implementation this is not being
achieved, instead the Partial FFT workers are entering their run methods
one after the other and CPU usage is stuck at using a single core.

Is this something that is possible using openCPI?

What issues can arise from creating threads within workers that are
terminated before the run method is exited?

Thanks in advance,
Dan


discuss mailing list -- discuss@lists.opencpi.org
To unsubscribe send an email to discuss-leave@lists.opencpi.org



IC
Ian Chodera
Fri, May 19, 2023 2:03 PM

Thanks

Ian Chodera
MD1 Technology Ltd.

e: iac@md1tech.co.uk
w: www.md1tech.co.uk

MD1 Technology Ltd. is registered in England & Wales with company number 09378746.
Registered address: Cheltenham Film & Photographic Studios, Hatherley Lane, Cheltenham, Gloucestershire, England. GL51 6PN.
VAT registration number: GB 206 3877 05

On 19 May 2023, at 14:37, Aaron Olivarez <aaron@olivarez.info> wrote:

Hi Ian,

The creation of multiple RCC containers is similar. You're right, the 'n' option is not exposed via PValues. This is how ocpirun does it:

  //  OA::Container *c;
  if (options.processors())
    for (unsigned n = 1; n < options.processors(); n++) {
      std::string name;
      OU::formatString(name, "rcc%d", n);
      OA::ContainerManager::find("rcc", name.c_str());
    }

You could use ContainerManager from an ACI application. The find function name is misleading but it's what creates rccN when you use the -n option in ocpirun.

Aaron

On Fri, May 19, 2023 at 8:16 AM Ian Chodera <iac@md1tech.co.uk> wrote:

Hi Aaron

You said: "Let me know if you are using an ACI application. It's a similar process. "

Presumably this is using the PValues argument to the application initialisation call? The only thing is, there doesn’t seem to be an ACI PValue equivalent for the ocpirun ’n’ option listed on page 86 of the application development guide: https://opencpi.gitlab.io/releases/latest/docs/OpenCPI_Application_Development_Guide.pdf

Ian Chodera
MD1 Technology Ltd.

e: iac@md1tech.co.uk
w: www.md1tech.co.uk

MD1 Technology Ltd. is registered in England & Wales with company number 09378746.
Registered address: Cheltenham Film & Photographic Studios, Hatherley Lane, Cheltenham, Gloucestershire, England. GL51 6PN.
VAT registration number: GB 206 3877 05

On 18 May 2023, at 17:00, Aaron Olivarez <aaron@olivarez.info> wrote:

How are you executing your application? Is it using an OpenCPI Application Specification (OAS) and the ocpirun command-line tool, or a C++ program using the Application Control Interface (ACI)?

If you are using an OAS you can specify additional RCC containers using the -n or --processors argument on ocpirun to set the number of RCC containers to create at execution.
It defaults to 1, but if you specify more than 1 it will round-robin the workers across the available containers. Another option you have is to specify a worker to run on a specific container using the -P argument. The syntax is -P <instance-name>=<platformname>, for example -P partial_fft0=rcc0 -P partial_fft1=rcc1, and so on. You can also specify this in the application XML by adding an attribute to the instance, e.g. Platform='rcc0'.

By using the -v argument when executing ocpirun you can see where each worker was run. Below is an example of running an application with 5 workers across 5 containers, letting OpenCPI decide which container each worker runs on.

~/opencpi-v2.4.6/projects/assets/applications$ ocpirun -v -n5 testbias5.xml
Available containers are:  0: rcc0 [model: rcc os: linux platform: ubuntu18_04], 1: rcc1 [model: rcc os: linux platform: ubuntu18_04], 2: rcc2 [model: rcc os: linux platform: ubuntu18_04], 3: rcc3 [model: rcc os: linux platform: ubuntu18_04], 4: rcc4 [model: rcc os: linux platform: ubuntu18_04]
Actual deployment is:
  Instance  0 file_read (spec ocpi.core.file_read) on rcc container 0: rcc0, using file_read in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/ocpi.core.file_read.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:20 2023
  Instance  1 bias0 (spec ocpi.core.bias) on rcc container 1: rcc1, using bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  2 bias1 (spec ocpi.core.bias) on rcc container 2: rcc2, using bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  3 bias2 (spec ocpi.core.bias) on rcc container 3: rcc3, using bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  4 bias3 (spec ocpi.core.bias) on rcc container 4: rcc4, using bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  5 bias4 (spec ocpi.core.bias) on rcc container 0: rcc0, using bias_cc in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/ocpi.core.bias_cc.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:24 2023
  Instance  6 file_write (spec ocpi.core.file_write) on rcc container 1: rcc1, using file_write in /home/aaron/opencpi-v2.4.6/projects/core/artifacts/ocpi.core.file_write.rcc.0.ubuntu18_04.so dated Fri Mar 31 11:25:23 2023
Application XML parsed and deployments (containers and artifacts) chosen [0 s 2 ms]
Application established: containers, workers, connections all created [0 s 0 ms]
Application started/running [0 s 0 ms]
Waiting for application to finish (no time limit)
Application finished [0 s 10 ms]

Let me know if you are using an ACI application. It's a similar process.

Aaron

On Thu, May 18, 2023 at 10:26 AM <dwp@md1tech.co.uk> wrote:

Hello,

My goal is to speed up a certain process through parallelisation. I have used an FFT as an example but this was more about exploring the capabilities of openCPI.

I was advised that creating new threads from within a worker is not recommended and that each worker exists in its own thread so it seemed logical to create multiple instances of a worker that can handle a portion of the workload and have these instances run in parallel.
See the screenshot attached showing the 3 components I created and how data is sent between them.

It is clear that in the current implementation this is not being achieved, instead the Partial FFT workers are entering their run methods one after the other and CPU usage is stuck at using a single core.

Is this something that is possible using openCPI?

What issues can arise from creating threads within workers that are terminated before the run method is exited?

Thanks in advance,
Dan


discuss mailing list -- discuss@lists.opencpi.org
To unsubscribe send an email to discuss-leave@lists.opencpi.org
