nursecosmo wrote:
gmctd wrote:
One slight fault with that logic sequence: the ECM has no way to measure the actual fuel rate input, not even one O2 sensor in the loop (some engines have 4-5!) - what you observe with a scanner is the ECM's estimated fuel rate, based on operating conditions at that instant - in this case, the fuel rate was offset by the EDGE Trail module, such that, even though the indicated fuel rate was ~10mm3, the actual fuel rate was greater, hence the resemblance to a Navy smog layer
How is that a fault? That is exactly what I was illustrating. The computer can only monitor "perceived" rail pressure and MAP vs. MAF (unless the MAF has been disabled with an ORM or SEGR). The computer "thought" it was putting out the correct amount of fuel for a clean idle, but in fact was not, because it was deceived by the EDGE changing the rail pressure signal.
gmctd wrote:
Also, few diesel engines have enough exhaust energy at idle to spool up the turbo enough to make boost - it takes a lot of power to compress large volumes of air. Even under normal conditions at high altitudes, for which the turbocharger was developed, the ECM just reduces fuel based on barometric pressure, sensed by the pressure sensor in the airbox and compared against the MAP sensor in the intake manifold. With artificially increased fuel rates at idle, the engine just blows raw fuel vapors, usually white to blue, as idle combustion temps are not high enough to make black smoke - proof: idle EGTs of ~250-295*. However (and there's always a however!), if turbocharger duty cycle is monitored at high altitudes, it may be noted that even the idle percentage is much greater than for us flatlanders, which allows for much quicker spool-up much earlier as exhaust energy increases with rpm and load.
Yes exactly. In a conventional Turbodiesel, if the fuel rate is raised from 10mm3 to 12mm3, a small amount of boost will be generated to compensate for the increased fuel. That will result in an RPM increase from ~750 to ~1000 or so.
In our CRD (which is far from conventional) the ECU wants the RPM to ALWAYS be at ~750 when the Throttle Position Sensor is at 0 degrees of depression. It accomplishes this by dropping the rail pressure close to the lower acceptable pressure limit preset into the ECU's fueling map. If it starts sensing too high an RPM it can drop the rail pressure to the lowest setting, but it can only go so far based on the pre-programmed fuel map (injector timing and pulse width are already at their narrowest settings). The other thing that is occurring is that the ECU will only allow the turbine to compress a small amount of pressure, usually ~15psi absolute, and no more, based on the pressure map parameters programmed into the computer to account for altitude, filter restriction, etc.
I think that your "operator error" was a great teaching tool to help others understand some of the basics of their CRD's operation.
Every time I think about the complexity of modern diesel engines, I fall in love with my old 12-valve, P-pumped, 100% mechanical, Cummins-powered Ram a little bit more.
I think that there are some misconceptions here; I'll try to explain it somewhat.
The boost control is completely decoupled from the engine fueling situation. Obviously, with the vanes fully closed, there is only so much boost that can be made, just as with a fixed-geometry turbo. The boost is controlled by a setpoint map which uses various inputs to compute the desired pressure.
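To make the "setpoint map" idea concrete, here is a minimal sketch of what a boost setpoint lookup can look like. The breakpoints, map values, and units are all invented for illustration; a real calibration has far more axes (coolant temp, altitude, etc.) and different numbers.

```python
# Hypothetical boost setpoint map: rows indexed by engine speed,
# columns by injected fuel quantity. All values are made up.
RPM_AXIS = [800, 1600, 2400, 3200]        # engine speed breakpoints
FUEL_AXIS = [5.0, 20.0, 40.0, 60.0]       # injected quantity, mm3/stroke
SETPOINT_MAP = [                          # desired absolute boost, kPa
    [100, 105, 110, 115],
    [105, 120, 150, 170],
    [110, 140, 190, 220],
    [110, 150, 210, 240],
]

def _interp(axis, x):
    """Return (lower index, fraction) for 1-D lookup, clamped to the axis."""
    if x <= axis[0]:
        return 0, 0.0
    if x >= axis[-1]:
        return len(axis) - 2, 1.0
    for i in range(len(axis) - 1):
        if axis[i] <= x <= axis[i + 1]:
            return i, (x - axis[i]) / (axis[i + 1] - axis[i])

def boost_setpoint(rpm, fuel_mm3):
    """Bilinear interpolation into the setpoint map."""
    i, fx = _interp(RPM_AXIS, rpm)
    j, fy = _interp(FUEL_AXIS, fuel_mm3)
    top = SETPOINT_MAP[i][j] * (1 - fy) + SETPOINT_MAP[i][j + 1] * fy
    bot = SETPOINT_MAP[i + 1][j] * (1 - fy) + SETPOINT_MAP[i + 1][j + 1] * fy
    return top * (1 - fx) + bot * fx
```

The closed-loop part then drives the VGT vanes until the measured manifold pressure matches whatever this lookup returns.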
For the low-idle scenario that you presented above, that's not entirely true. Let's say the KJ application has the 750 rpm idle that you indicated. While rail pressure does play a large role in the energizing time of injection (rail pressure vs. desired fuel quantity = energizing time, or rail pressure vs. energizing time = desired fuel quantity; it works both ways), the rail pressure isn't used for engine speed control. The actual rail pressure setpoint is important because it affects NVH-type things. In any case, a PID structure is generally employed to maintain closed-loop control of the engine speed by varying the fuel input (energizing time).
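The PID idle governor described above can be sketched as a toy simulation. The gains, the fuel limits, and the crude "engine model" here are all invented purely to show the structure (fuel in, speed out, error fed back); they bear no relation to real EDC calibration data.

```python
# Toy PID idle-speed governor: vary fuel quantity to hold 750 rpm.
# All gains and model constants are made up for illustration.
TARGET_RPM = 750.0
KP, KI, KD = 0.01, 0.002, 0.001   # invented gains

def pid_step(rpm, state):
    """One governor iteration; returns (fuel_mm3, new_state)."""
    error = TARGET_RPM - rpm
    integral = state["integral"] + error
    derivative = error - state["prev_error"]
    fuel = KP * error + KI * integral + KD * derivative
    fuel = max(2.0, min(fuel, 12.0))   # clamp to plausible idle fuel limits
    return fuel, {"integral": integral, "prev_error": error}

def fake_engine(rpm, fuel_mm3):
    """Crude first-order model: more fuel sustains a higher speed."""
    equilibrium = 250.0 + 100.0 * fuel_mm3   # rpm this fuel would sustain
    return rpm + 0.2 * (equilibrium - rpm)   # engine lags toward it

rpm, state = 600.0, {"integral": 0.0, "prev_error": 0.0}
for _ in range(300):
    fuel, state = pid_step(rpm, state)
    rpm = fake_engine(rpm, fuel)
# rpm has now settled at the 750 target; fuel carries the load
```

The point of the sketch is that engine speed is governed by adjusting fuel quantity (hence energizing time), not by moving the rail pressure setpoint.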
For increasing engine speed, fueling has everything to do with it, not boost. When you push on the pedal, the fuel ramps up slowly; the increased fuel quantity creates more combustion pressure and the crankshaft accelerates. Too much fuel at a given time creates black smoke (based more on fresh air flow than on pressure), so the fuel can be limited at certain operating points so that no visible smoke (or only faintly visible smoke) is produced.
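The smoke limiter mentioned above amounts to capping injected quantity against available fresh air. A back-of-the-envelope sketch, with an assumed lambda limit and fuel density (real limiters are mapped over speed/boost, not a single constant):

```python
# Hypothetical smoke limiter: never let the air/fuel ratio drop
# below a lean limit. All constants are illustrative assumptions.
STOICH_AFR = 14.5     # approx. stoichiometric air/fuel ratio for diesel
LAMBDA_LIMIT = 1.3    # stay this far lean of stoichiometric (assumed)
FUEL_DENSITY = 0.835  # mg per mm3 of diesel fuel, roughly

def smoke_limited_fuel(air_mg_per_stroke, requested_mm3):
    """Return the fuel quantity after the smoke limiter, mm3/stroke."""
    max_fuel_mg = air_mg_per_stroke / (STOICH_AFR * LAMBDA_LIMIT)
    max_fuel_mm3 = max_fuel_mg / FUEL_DENSITY
    return min(requested_mm3, max_fuel_mm3)
```

At low airflow (idle, low boost) the cap bites hard, which is why artificially added fuel in that region shows up as smoke instead of power.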
With regard to actual fuel rate, this value is very precise, thanks in part to the injector quantity adjustment code on the top of every injector and precise manufacturing tolerances in the injector nozzles (EDM).
The EDGE works easily because rail pressure and boost pressure are closed-loop controlled against a setpoint value. If you trick the ECU into thinking that the values it sees are lower than they really are, the governing will work to meet the setpoint, but if you measured the actual fluid pressure it would be much higher. This is why the engine smokes: the energizing time is the same because the ECU thinks the rail pressure is where it should be, but the actual rail pressure is much higher. With a given energizing time and a higher rail pressure, a higher fuel quantity is delivered. By applying the same principle to the boost, the vanes will be closed more to "meet the setpoint" while the actual value is higher. In theory, if you raise the boost enough along with the increased fuel quantity it won't smoke (maybe a little, because your timing isn't optimized for the increased fuel delivery/power output), but in the lower load areas it's hard for the turbo to meet the setpoint, and so the engine smokes, especially in regions important for emissions where you'll find the timing more retarded.
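The mechanism can be shown with a toy closed-loop simulation: scale the sensor signal the ECU sees, and the governor dutifully drives the *reported* value to the setpoint while the *actual* value settles higher. The plant model, PI gains, and the 0.8 scale factor are all invented for illustration and are not real EDGE or EDC values.

```python
# Toy demo of an in-line signal offset: the ECU governs the reported
# pressure, so scaling the sensor down raises the real pressure.
SETPOINT = 400.0   # desired rail pressure, bar (illustrative)

def simulate(scale, steps=500):
    """PI governor on the *reported* pressure; returns actual pressure."""
    actual, integral = 0.0, 0.0
    for _ in range(steps):
        reported = actual * scale              # what the ECU "sees"
        error = SETPOINT - reported
        integral += error
        command = 0.5 * error + 0.05 * integral  # simple PI governor
        actual += 0.1 * (command - actual)       # lagging pump response
    return actual

stock = simulate(1.0)    # reported == actual: settles at the setpoint
tricked = simulate(0.8)  # ECU still "sees" 400, actual settles 25% higher
```

With `scale = 0.8` the loop only reaches zero error when the true pressure is `SETPOINT / 0.8`, which is exactly why the same energizing time then delivers more fuel.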