# native

Basil Mathew

09/27/2022, 6:03 PM
Hello everyone. I am new to this forum, very new to Gradle, and also a relative beginner in software engineering in general. I also have no previous experience with Groovy/Kotlin, so please excuse (and correct) me if my questions are basic or do not make sense. I have been asked to create a POC to migrate our embedded build for a native C project from the Make/SCons world to the Gradle world in our organization. I have been going through the Gradle documentation (what is available) and the questions in this Slack channel for the last 2 days. My current state is one of extreme confusion, as I do not seem to get the overall picture of how support for native C projects is provided in Gradle; this is probably due to my inexperience and lack of knowledge.

All I want to do as a first step is to compile a set of C files, create a library out of a subset of the previously compiled files, and link the library and the remaining object files into an executable using the TASKING compiler for the TRICORE target.

To start with, I am confused about the different approaches to native builds, namely:
• the latest cpp-application / cpp-library plugins
• the old native build
• the Nokee plugins

What I have read so far is that one should use the latest plugins when starting a new native project, but I see that some configuration is still reused from the old native build. It is also confusing whether to use Nokee or the cpp-application / cpp-library plugins; what is the difference?

Secondly, configuring Gradle to work with the TASKING compiler for the TRICORE target seems to be complicated. I have read about toolchains, target platforms and such, but I am not sure I have understood this properly. Things in the Make/SCons world are much simpler: one just configures the command line to be executed with the appropriate arguments and things just work. Is there an example I can have a look at?

I understand that I may be asking a very broad and/or basic question, but any help regarding where to start and how to proceed will be very helpful for me. Thank you

daniel

09/28/2022, 12:54 PM
I understand the confusion and it’s not an ideal situation. Historically, the “old native build” (often referred to as the software model) was the first implementation. When Gradle shifted away from the software model, the newer cpp-application and cpp-library plugins were introduced; they are simpler but rely on the software model for the toolchain configuration (reuse over reimplementing). When I left Gradle, I started Nokee because my passion has always been native support in Gradle. Nokee is superior to the newer core plugins. Compared to the software model, there are still a few features Nokee is missing, mainly around edge cases; in general, though, we should be able to make things work without much issue. Nokee is in full-time development, which is its advantage over the other native support. I’m working with Gradle to clear up the confusion, but at the moment it’s a bit messy. 😞

There’s a fundamental difference between Make/SCons and Gradle. In Make/SCons, users focus on how the software is built. In Gradle, we focus on what is being built. In your example, what you are creating is an executable linked against a library, all implemented in C, targeting TRICORE. With recent changes in Nokee, it should be possible to achieve this. If you could point me to a working sample in Make/SCons with the TASKING compiler (hopefully available freely), then I can create a demonstration of how it’s done.

The reason Gradle doesn’t simply allow users to specify the command line to execute lies in the fact that the command line can and will be different depending on the target. For example, building an executable for Linux, macOS or Windows all follows the same basic concept, but the resulting command line will be different (picture GCC/Clang vs MSVC). The idea is to abstract away how the command line looks and focus on what is being built. You can still modify the command line for each variant. It’s the same for dependencies: regardless of where a dependency comes from (built locally, prebuilt, 3rd-party package, system library, etc.), it is always treated the same way, as some kind of artifact with a set of attributes.
In terms of samples, Nokee has the following samples: https://docs.nokee.dev/samples/ We are working on adding more samples soon.
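For illustration, a minimal sketch of that “declare what is built” style, using the core cpp-application plugin (the core plugins cover C++; Nokee provides analogous plugins for plain C). The target machines and flag below are illustrative only, and an out-of-the-ordinary toolchain such as TASKING would still need its own toolchain integration, which is the hard part discussed in this thread:

```kotlin
// build.gradle.kts -- declarative sketch: describe the "what", let Gradle
// derive the per-variant command lines from the selected toolchain.
plugins {
    id("cpp-application")
}

application {
    // Declare which machines are targeted; one variant is created per machine.
    targetMachines.set(listOf(machines.windows.x86_64, machines.linux.x86_64))

    // The command line can still be tweaked per variant when needed.
    binaries.configureEach {
        compileTask.get().compilerArgs.add("-O2")
    }
}
```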

Basil Mathew

09/28/2022, 7:20 PM
Hello Daniel, thank you so very much for the reply and for your offer of help. The TASKING compiler trial version is available at https://www.infineon.com/cms/en/tools/aurix-tools/Compilers/TASKING/ I am not sure what to provide you from the Make/SCons world; would you prefer the configuration file, or the full command-line calls of the executables? For now, please find a small snippet showing the executables we use from the TASKING compiler suite.
```python
# Excerpt from our SCons platform configuration (the Platform base class and
# env come from the surrounding SCons setup).
import posixpath

class CtcAurix_TC27xx(Platform):
    def __init__(self, env):
        Platform.__init__(self, env)
        """
        Paths to executables used by this platform
        """
        # Path to the compiler suite bin folder
        bin_path = posixpath.join(env['TOOLSPATH'],'CTC_AURIX',env['TOOLVERSION'],'ctc','bin')
        # Path to the compiler executable
        self.CCPATH = posixpath.join(bin_path,'ctc.exe')
        # Path to the assembler executable
        self.ASMPATH = posixpath.join(bin_path,'astc.exe')
        # Path to the archiver executable
        self.ARPATH = posixpath.join(bin_path,'artc.exe')
        # Path to the linker executable
        self.LINKPATH = posixpath.join(bin_path,'ltc.exe')
        # Path to the MCS executables
        mcs_bin_path = posixpath.join(env['TOOLSPATH'],'CTC_AURIX',env['TOOLVERSION'],'cmcs','bin')
        # Path to the MCS compiler executable
        self.MCSCOMPPATH = posixpath.join(mcs_bin_path,'cmcs.exe')
        # Path to the MCS assembler executable
        self.MCSASPATH = posixpath.join(mcs_bin_path,'asmcs.exe')
        # Path to the c51 executables
        scr_bin_path = posixpath.join(env['TOOLSPATH'],'CTC_AURIX',env['TOOLVERSION'],'c51','bin')
        # Path to the SCR compiler executable
        self.SCRCOMPPATH = posixpath.join(scr_bin_path,'c51.exe')
        # Path to the SCR assembler executable
        self.SCRASPATH = posixpath.join(scr_bin_path,'as51.exe')
```
A typical command line looks like this. We perform a split compilation in which the C file is first converted to an assembly file and then assembled to generate the object file.
```
c:\legacyapp/CTC_AURIX/6.2r1p2/ctc/bin/ctc.exe work\bsw\arch\arlib\bfx\bfx\pi\i\bfx_main.c --error-file=../log/err/bfx_main.c_err -g -OacefgIklmnopRsuvwy --eabi=DCfHnsW --immediate-in-code -t3 -F -N0 -Z0 -Y0 --core=tc1.6.2 --iso=99 --no-macs --switch=linear -f./includes.txt -Ic:\legacyapp/CTC_AURIX/6.2r1p2/ctc/include -o work\bsw\arch\arlib\bfx\bfx\pi\i\bfx_main.src
```
```
c:\legacyapp/CTC_AURIX/6.2r1p2/ctc/bin/astc.exe --error-file=../log/err/bfx_main.s_err -il --core=tc1.6.2 -f./includes.txt -Ic:\legacyapp/CTC_AURIX/6.2r1p2/ctc/include -o work\bsw\arch\arlib\bfx\bfx\pi\i\bfx_main.o work\bsw\arch\arlib\bfx\bfx\pi\i\bfx_main.src
```
We usually copy the installation package to our local machine and execute the build using the executables within the compiler suite. If you have a look at the SCons configuration file, you will see many executables being used for the build. This is basically because, besides the normal compilation, some C files are used for specific features such as the XC800 standby controller (SCR), the MCS/GTM timer module of AURIX TC3xx, and the Peripheral Control Processor (PCP), and these need to be compiled with the relevant executables. The link step is also handled specially: the object files produced from these special C files have to be passed with separate arguments in the linker command line. Please let me know if you were expecting more information.

daniel

09/29/2022, 2:35 PM
Is converting C to ASM and then assembling the ASM to OBJ the usual way to compile, or can the toolchain compile C to OBJ in one invocation? It is not impossible to do, simply unusual compared to other toolchains. For the two-step process, we would probably consider ASM -> OBJ as the compile step and C -> ASM as source generation. The project would basically be an assembly-language project, but we can still map the C files so IDEs take those into account (rather than the generated ASM).
For the special features, my understanding is that you need to use a different C-to-ASM executable depending on the feature. Is that accurate? In terms of linking, all OBJ files can be linked together, but they require different flags depending on the feature they use? Is that accurate? Do you have an example?
It’s a very interesting domain. I knew automotive was quite special, but the process is quite fascinating. It’s a legit process, and I feel it would be more along the lines of “composing” the build logic to perform the build vs out-of-the-box plugins, given how specific the result has to be. Allowing easy composition is really where I want to bring Nokee. We just need to figure out which pieces we need to offer so it’s easy to do.
A couple more questions: 1) What IDE do you typically use to develop, and 2) what test framework do you use, or how is testing usually performed? Testing is an important aspect: some developers use emulators, others use the real hardware. Some even compile the code for the host with some mocking. All approaches are legit; they vary in complexity. One of Nokee’s goals is to simplify these approaches so they behave mostly the same from the user/developer’s perspective, aka ./gradlew test
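As an illustration of the idea above (treat .c -> .src as source generation and .src -> .o as the compile step), a hand-rolled sketch with plain custom tasks could look like the following. This is not Nokee’s API; task names, tool paths and flags are placeholders for the ctc/astc invocations shown earlier in the thread:

```kotlin
// build.gradle.kts -- sketch only: chain the two TASKING steps through
// declared inputs/outputs so Gradle infers ordering and up-to-date checks.
import javax.inject.Inject
import org.gradle.process.ExecOperations

abstract class TaskingCToAsm @Inject constructor(private val execOps: ExecOperations) : DefaultTask() {
    @get:InputFile abstract val sourceFile: RegularFileProperty
    @get:OutputFile abstract val asmFile: RegularFileProperty

    @TaskAction
    fun run() {
        execOps.exec {
            // Placeholder invocation of the TASKING C compiler (.c -> .src)
            commandLine("ctc.exe", "--core=tc1.6.2", "--iso=99",
                "-o", asmFile.get().asFile.absolutePath,
                sourceFile.get().asFile.absolutePath)
        }
    }
}

abstract class TaskingAsmToObj @Inject constructor(private val execOps: ExecOperations) : DefaultTask() {
    @get:InputFile abstract val asmFile: RegularFileProperty
    @get:OutputFile abstract val objectFile: RegularFileProperty

    @TaskAction
    fun run() {
        execOps.exec {
            // Placeholder invocation of the TASKING assembler (.src -> .o)
            commandLine("astc.exe", "--core=tc1.6.2",
                "-o", objectFile.get().asFile.absolutePath,
                asmFile.get().asFile.absolutePath)
        }
    }
}

val generateAsm by tasks.registering(TaskingCToAsm::class) {
    sourceFile.set(layout.projectDirectory.file("src/bfx_main.c"))
    asmFile.set(layout.buildDirectory.file("asm/bfx_main.src"))
}

val compileObj by tasks.registering(TaskingAsmToObj::class) {
    // Wiring the output provider creates the task dependency automatically.
    asmFile.set(generateAsm.flatMap { it.asmFile })
    objectFile.set(layout.buildDirectory.file("objs/bfx_main.o"))
}
```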

Basil Mathew

09/29/2022, 5:33 PM
Hello Daniel, thank you for the reply. Regarding the two-step compilation process (.c -> .src and .src -> .o), this has been our convention for more than 8 years. The compiler can compile directly from .c to .o like any normal C compiler. When I joined the team about 8 years ago, I asked our compiler team (we get our compilation command lines and arguments from a dedicated compiler team that deals with compiler validation) the same question, for the obvious reason of build-time improvement. They gave me some reasons at the time which I do not fully remember now (if I remember correctly, they mentioned something about the binary code generated from the object files being much better optimized, but I could be wrong; I will get back to you on this). I think I understood what you are saying about the two-step compilation process.
Regarding the special features, yes, you are correct. We make use of special executables depending on the feature. For example, for the MCS feature, code can be written either in C or in the specific assembly language for the MCS core; these C files and assembler files for MCS need to be compiled with cmcs.exe and asmcs.exe respectively, and they are linked differently than the normal object files in the final link. Please find below an assembler command line for an MCS assembler file:
```
c:\legacyapp/CTC_AURIX/6.2r1p2/cmcs/bin/asmcs.exe --error-file=../log/err/iopt_sent.mcs_err -Os -il -mt -f./includes.txt -Ic:\legacyapp/CTC_AURIX/6.2r1p2/ctc/include work\bsw\firmware\sent\sent\pd_gtm\i\iopt_sent.mcs -o work\bsw\firmware\sent\sent\pd_gtm\i\iopt_sent.o
```
Please find below a sample linker command line that we are using. Note the linker arguments starting with "--new-task --core=mpe:mcs00"; they represent the MCS object file(s), while the normal object file(s) are passed inside a text file as "-f ./objects.txt".
```
c:\legacyapp/CTC_AURIX/6.2r1p2/ctc/bin/ltc.exe -f ./objects.txt --warnings-as-errors -OcLTXY -M -mCdFiKlmNOQRSU --auto-base-register -Cmpe:vtc -lcs_fpu -lrt -lfp_fpu -Lc:/legacyapp/CTC_AURIX/6.2r1p2//ctc/lib/tc162 --error-file=../log/err/FS_0GT3_0U0_166.err --map-file=_out/FS_0GT3_0U0_166.map -d _lcf/loc_opt_pp.def -o _out\FS_0GT3_0U0_166.elf -o _out/FS_0GT3_0U0_166.tmp:SREC -o _out/FS_0GT3_0U0_166.hex:IHEX --new-task --core=mpe:mcs00 --map-file=_out/mcs00.map _FS_0gt3_0u0_CTC_developer-normal\proc\ARPROC\out\iopt_gtm_mcs_00.o work\bsw\firmware\sent\sent\pd_gtm\i\iopt_sent.o --new-task --core=mpe:mcs03 --map-file=_out/mcs03.map _FS_0gt3_0u0_CTC_developer-normal\proc\ARPROC\out\iopt_gtm_mcs_03.o work\bsw\test\angpwm\i\iopt_mcs_ang_pwm.o
```
Regarding the automotive domain being special, I think you are correct. Sometimes the peripheral controller has an altogether different core than the main controller. At times, we end up using two versions of the same compiler from the same vendor to build different functionalities; we internally call this the mixed-mode scenario. It happens when a specific controller feature, never used until that point, is requested by a customer very close to the serial-production timeline. If the project has been using an old version of the compiler that does not support that feature, it is forced to move to a newer compiler version to compile the code for the newly introduced feature. But since the timeline is so close to serial production, the project will not move everything to the newer compiler version, because that would entail a huge overall testing overhead given that the binaries would have changed. So the project uses the latest compiler version only for the new feature and finally links everything together.

Also, at times, after we perform the first compile and link, a build step runs that updates the generated object files and libraries for binary-code optimization (in terms of core resource optimization for multi-core targets), and the updated objects and libraries are re-linked to form the final binary that gets flashed to the engine control unit. This is done after the first compile-and-link because the strategy to optimize the final binary is derived from the ELF and MAP files of the first compile-and-link.

So yes, I agree with you: our current approach is more along the lines of “composing” the build logic to perform the build vs out-of-the-box solutions from the frameworks.
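A rough sketch of how that first-link / optimize / re-link flow could be wired as chained tasks. The optimizer executable name and all paths below are hypothetical placeholders; the point is only that connecting outputs to inputs lets Gradle order the steps and skip them when nothing changed:

```kotlin
// build.gradle.kts -- sketch of the two-pass link flow (placeholder tools/paths).
val firstLink by tasks.registering(Exec::class) {
    inputs.files(fileTree("work") { include("**/*.o") })
    outputs.files(layout.buildDirectory.file("out/first.elf"),
                  layout.buildDirectory.file("out/first.map"))
    commandLine("ltc.exe", "-f", "objects.txt",
                "-o", "build/out/first.elf", "--map-file=build/out/first.map")
}

val optimizeObjects by tasks.registering(Exec::class) {
    inputs.files(firstLink)          // consume the ELF/MAP from the first pass
    outputs.dir(layout.buildDirectory.dir("optimized-objs"))
    // "core-optimizer.exe" is a hypothetical stand-in for the internal tool
    commandLine("core-optimizer.exe", "build/out/first.map", "build/optimized-objs")
}

val finalLink by tasks.registering(Exec::class) {
    inputs.files(optimizeObjects)    // re-link the rewritten objects
    outputs.file(layout.buildDirectory.file("out/final.elf"))
    commandLine("ltc.exe", "build/optimized-objs", "-o", "build/out/final.elf")
}
```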
Regarding the IDE: as of now, we use a customized IDE based on Eclipse for our project build workflow. We use Eclipse as the GUI front-end, for authoring code, and for executing the build workflow. The build workflow engine is custom-developed in Java; besides the compile and link process, many other build steps run as code generators before compilation and linking, and further build steps run after the compile-and-link stage and work on the generated binary. The compile-and-link build step is based on SCons and forms one step in a rather long build workflow. You could say this is not ideal, but it is what we have currently, so the idea now is to move the entire build workflow to Gradle along with the native compile and link. This is what we are trying to achieve.
Regarding the testing frameworks, there are three levels of testing:
1. Unit testing using Tessy or the Rational Real Time framework. This is kicked off from the IDE either as part of the build workflow or independently. These are special setups configured to work with the cross-compilation needs.
2. Automated testing on the HIL systems or workbench.
3. Manual unit testing on the workbench.

daniel

09/29/2022, 7:14 PM
Hmm, I see. It’s pretty interesting. The first thing that comes to mind is finding a composable way to model the files and their features/capabilities. I feel the most important aspect is ensuring that, as files move along the process, some information follows them to ensure 1) the right tools are selected and 2) they are not mixed in incompatible ways. Confidence in how things were built, aka an auditable build manifest, would be very important. The second most important aspect is composition and testing of the build logic. There seem to be multiple competing features which, between them, create a mini dependency graph in terms of what needs to be done in which order. Each of those needs to be tested individually, and the composition needs to perform as expected, or fail fast when it doesn’t. Let me have a second read of what you wrote and look into the testing frameworks. I have a general idea but would like to learn a bit more. It seems that most features in Nokee are useful here, but I get the feeling there would be a need for a partial blank slate for some of the configuration. For example, I don’t see the user writing the variant-handling code, but each variant may differ wildly (the optimization process is most likely only required for release binaries vs debug binaries).
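To make the release-vs-debug point concrete, the current core plugins already allow conditioning configuration on the variant; a small sketch (it assumes the core cpp-application plugin from the earlier example, and the flag is a placeholder; Nokee’s DSL would differ):

```kotlin
// build.gradle.kts -- sketch: per-variant configuration with the core plugin.
application {
    binaries.configureEach {
        // Only optimized (release-like) binaries get the extra flag.
        if (isOptimized) {
            compileTask.get().compilerArgs.add("--placeholder-release-only-flag")
        }
    }
}
```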
For the IDE, could you tell me which Eclipse version you are using? The Gradle team maintains Buildship, which is the official Gradle integration; ideally, you would just have to use that plugin. However, in terms of native support, I would have to talk with the Gradle team to make sure Nokee offers the right hook points.

Basil Mathew

09/30/2022, 9:14 AM
Hello Daniel, we have not yet made a final decision on which IDE will be used with the Gradle build, but the initial thought is to use Visual Studio Code. This depends on the outcome of our proof of concept and its acceptance by the internal super-user community. We are already using the Eclipse Buildship plugin to create custom Gradle plugins for the POC. We identified 5 goals for the POC (listed below). We have completed 4 of them (thanks to many Gradle community members who have helped: Tom Gregory, James Justinic, Jendrik Johannes); the native build is the one task pending. James (my mentor in the Gradle Fellowship program) is the member who recommended Nokee to me.
• Chaining of build processes based on input and output files
• Incremental execution of build processes
• Parallelization of build process steps
• Reuse of previously built artifacts (build cache mechanism)
• Support for C SW project builds
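For reference, the first four goals map onto basic Gradle task machinery: declared inputs/outputs give chaining and incremental execution, @CacheableTask opts a task into the build cache, and independent tasks can run in parallel. A tiny sketch with purely illustrative names (the generator is a stand-in for one of the real code-generation steps in the workflow):

```kotlin
// build.gradle.kts -- sketch only.
@CacheableTask
abstract class GenerateInterfaceHeader : DefaultTask() {
    @get:InputFile
    @get:PathSensitive(PathSensitivity.RELATIVE)   // required for relocatable cache hits
    abstract val definitionFile: RegularFileProperty

    @get:OutputFile
    abstract val headerFile: RegularFileProperty

    @TaskAction
    fun generate() {
        // Placeholder logic; a real generator would parse the definition file.
        headerFile.get().asFile.parentFile.mkdirs()
        headerFile.get().asFile.writeText(
            "/* generated from ${definitionFile.get().asFile.name} */\n")
    }
}

tasks.register<GenerateInterfaceHeader>("generateInterfaceHeader") {
    definitionFile.set(layout.projectDirectory.file("defs/interfaces.txt"))
    headerFile.set(layout.buildDirectory.file("generated/interfaces.h"))
}
```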