POPULARITY
On the cosmic evolution of AGN obscuration and the X-ray luminosity function: XMM-Newton and Chandra spectral analysis of the 31 3 deg ^2 Stripe 82X by Alessandro Peca et al. on Wednesday 23 November We present X-ray spectral analysis of XMM and Chandra observations in the 31.3 deg$^2$ Stripe-82X (S82X) field. Of the 6181 X-ray sources in this field, we analyze a sample of 2937 active galactic nuclei (AGN) with solid redshifts and sufficient counts determined by simulations. Our results show a population with median values of spectral index $Gamma=1.94_{-0.39}^{+0.31}$, column density log$,N_{mathrm{H}}/mathrm{cm}^{-2}=20.7_{-0.5}^{+1.2}$ and intrinsic, de-absorbed, 2-10 keV luminosity log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}=44.0_{-1.0}^{+0.7}$, in the redshift range 0-4. We derive the intrinsic fraction of AGN that are obscured ($22leqmathrm{log},N_{mathrm{H}}/mathrm{cm}^{-2}43$. This work constrains the AGN obscuration and spectral shape of the still uncertain high-luminosity and high-redshift regimes (log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}>45.5$, $z>3$), where the obscured AGN fraction rises to $64pm12%$. We report a luminosity and density evolution of the X-ray luminosity function, with obscured AGN dominating at all luminosities at $z>2$ and unobscured sources prevailing at log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}>45$ at lower redshifts. Our results agree with evolutionary models in which the bulk of AGN activity is triggered by gas-rich environments and in a downsizing scenario. Also, the black hole accretion density (BHAD) is found to evolve similarly to the star formation rate density, confirming the co-evolution between AGN and host-galaxy, but suggesting different time scales in their growing history. The derived BHAD evolution shows that Compton-thick AGN contribute to the accretion history of AGN as much as all other AGN populations combined. arXiv: http://arxiv.org/abs/http://arxiv.org/abs/2210.08030v2
On the cosmic evolution of AGN obscuration and the X-ray luminosity function: XMM-Newton and Chandra spectral analysis of the 31 3 deg ^2 Stripe 82X by Alessandro Peca et al. on Tuesday 22 November We present X-ray spectral analysis of XMM and Chandra observations in the 31.3 deg$^2$ Stripe-82X (S82X) field. Of the 6181 X-ray sources in this field, we analyze a sample of 2937 active galactic nuclei (AGN) with solid redshifts and sufficient counts determined by simulations. Our results show a population with median values of spectral index $Gamma=1.94_{-0.39}^{+0.31}$, column density log$,N_{mathrm{H}}/mathrm{cm}^{-2}=20.7_{-0.5}^{+1.2}$ and intrinsic, de-absorbed, 2-10 keV luminosity log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}=44.0_{-1.0}^{+0.7}$, in the redshift range 0-4. We derive the intrinsic fraction of AGN that are obscured ($22leqmathrm{log},N_{mathrm{H}}/mathrm{cm}^{-2}43$. This work constrains the AGN obscuration and spectral shape of the still uncertain high-luminosity and high-redshift regimes (log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}>45.5$, $z>3$), where the obscured AGN fraction rises to $64pm12%$. We report a luminosity and density evolution of the X-ray luminosity function, with obscured AGN dominating at all luminosities at $z>2$ and unobscured sources prevailing at log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}>45$ at lower redshifts. Our results agree with evolutionary models in which the bulk of AGN activity is triggered by gas-rich environments and in a downsizing scenario. Also, the black hole accretion density (BHAD) is found to evolve similarly to the star formation rate density, confirming the co-evolution between AGN and host-galaxy, but suggesting different time scales in their growing history. The derived BHAD evolution shows that Compton-thick AGN contribute to the accretion history of AGN as much as all other AGN populations combined. arXiv: http://arxiv.org/abs/http://arxiv.org/abs/2210.08030v2
On the cosmic evolution of AGN obscuration and the X-ray luminosity function: XMM-Newton and Chandra spectral analysis of the 31 3 deg ^2 Stripe 82X by Alessandro Peca et al. on Tuesday 22 November We present X-ray spectral analysis of XMM and Chandra observations in the 31.3 deg$^2$ Stripe-82X (S82X) field. Of the 6181 X-ray sources in this field, we analyze a sample of 2937 active galactic nuclei (AGN) with solid redshifts and sufficient counts determined by simulations. Our results show a population with median values of spectral index $Gamma=1.94_{-0.39}^{+0.31}$, column density log$,N_{mathrm{H}}/mathrm{cm}^{-2}=20.7_{-0.5}^{+1.2}$ and intrinsic, de-absorbed, 2-10 keV luminosity log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}=44.0_{-1.0}^{+0.7}$, in the redshift range 0-4. We derive the intrinsic fraction of AGN that are obscured ($22leqmathrm{log},N_{mathrm{H}}/mathrm{cm}^{-2}43$. This work constrains the AGN obscuration and spectral shape of the still uncertain high-luminosity and high-redshift regimes (log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}>45.5$, $z>3$), where the obscured AGN fraction rises to $64pm12%$. We report a luminosity and density evolution of the X-ray luminosity function, with obscured AGN dominating at all luminosities at $z>2$ and unobscured sources prevailing at log$,L_{mathrm{X}}/mathrm{erg,s}^{-1}>45$ at lower redshifts. Our results agree with evolutionary models in which the bulk of AGN activity is triggered by gas-rich environments and in a downsizing scenario. Also, the black hole accretion density (BHAD) is found to evolve similarly to the star formation rate density, confirming the co-evolution between AGN and host-galaxy, but suggesting different time scales in their growing history. The derived BHAD evolution shows that Compton-thick AGN contribute to the accretion history of AGN as much as all other AGN populations combined. arXiv: http://arxiv.org/abs/http://arxiv.org/abs/2210.08030v2
Timescale-dependent X-ray to UV time lags of NGC 4593 using high-intensity XMM-Newton observations with Swift and AstroSat by Max W. J. Beard et al. on Monday 21 November We present a 140ks observation of NGC 4593 with XMM-Newton providing simultaneous and continuous PN X-ray and OM UV (UVW1 2910AA) lightcurves which sample short-timescale variations better than previous observations. These observations were simultaneous with 22d of Swift X-ray and UV/optical monitoring, reported previously, and 4d of AstroSat X-ray (SXT), far (FUV 1541AA), and near (NUV 2632AA) UV allowing lag measurements between them and the highly-sampled XMM. From the XMM we find that UVW1 lags behind the X-rays by 29.5$pm$1.3ks, $sim$half the lag previously determined from the Swift monitoring. Re-examination of the textit{Swift} data reveals a bimodal lag distribution, with evidence for both the long and short lags. However if we detrend the Swift lightcurves by LOWESS filtering with a 5d width, only the shorter lag (23.8$pm$21.2ks) remains. The NUV observations, compared to PN and SXT, confirm the $sim$30ks lag found by XMM and, after 4d filtering is applied to remove the long-timescale component, the FUV shows a lag of $sim$23ks. The resultant new UVW1, FUV, and NUV lag spectrum extends to the X-ray band without requiring additional X-ray to UV lag offset, which if the UV arises from reprocessing of X-rays, implies direct illumination of the reprocessor. By referencing previous Swift and HST lag measurements, we obtain an X-ray to optical lag spectrum which agrees with a model using the KYNreverb disc-reprocessing code, assuming the accepted mass of $7.63times10^{6}M_{odot}$ and a spin approaching maximum. Previously noted lag contribution from the BLR in the Balmer and Paschen continua are still prominent. arXiv: http://arxiv.org/abs/http://arxiv.org/abs/2211.10229v1
On the cosmic evolution of AGN obscuration and the X-ray luminosity function: XMM-Newton and Chandra spectral analysis of the 31 3 deg ^2 Stripe 82X by Alessandro Peca et al. on Monday 17 October We present X-ray spectral analysis of XMM and Chandra observations in the 31.3 deg$^2$ Stripe-82X (S82X) field. Of the 6181 unique X-ray sources in this field, we select and analyze a sample of 2937 candidate active galactic nuclei (AGN) with solid redshifts and sufficient counts determined by simulations. Our results show an observed population with median values of spectral index $Gamma=1.94_{-0.39}^{+0.31}$, column density log $N_H/rm{cm}^{-2}=20.7_{-0.5}^{+1.2}$ ($21.6_{-0.9}^{+0.7}$ considering upper limits) and intrinsic, de-absorbed, 2-10 keV luminosity log $L_X/rm{erg,s}^{-1}=44.0_{-1.0}^{+0.7}$, in the redshift range 0-4. Correcting for observational biases, we derive the intrinsic, model-independent, fraction of AGN that are obscured ($22leqrm{log},N_H/rm{cm}^{-2}43$. This work constrains the AGN obscuration and spectral shape of the still uncertain high-luminosity and high-redshift regimes (log $L_X/rm{erg,s}^{-1}>45.5$, $z>3$), where the obscured AGN fraction rises to $64pm12%$. The total, obscured, and unobscured X-ray luminosity functions (XLFs) are determined out to $z=4$. We report a luminosity and density evolution of the total XLF, with obscured AGN dominating at all luminosities at $z>2$ and unobscured sources prevailing at log $L_X/rm{erg,s}^{-1}>45$ at lower redshifts. Our results agree with the evolutionary models in which the bulk of AGN activity is triggered by gas-rich environments and in a downsizing scenario. Also, the black hole accretion density is found to evolve similarly to the star formation rate density, confirming the co-evolution scenario between AGN and host galaxy, but suggesting different time scales in their growing history. arXiv: http://arxiv.org/abs/http://arxiv.org/abs/2210.08030v1
Il primo speciale estivo è una puntata crossover con gli amici di Digitalia, Franco Solerio e Francesco Facconi.. Si parla di satelliti, missioni ESA, software di controllo dei satelliti e tanti aneddoti spaziali e astronautici in vera salsa astronauti-digitaliana. Links ForumAstronautico.itAstronautiNEWS – Le notizie in diretta dallo spazioAstronautiCAST – Il primo podcast di astronautica in lingua italianaAstronautiCAST su TwitterFoto dall'ESACebreros - ESA 30 metres antennaIntegral MissionXMM-Newton operationsGaiaCluster II operationsEnvisat Gingilli del giorno Gaia Sky - open source universe simulator
News Un nuovo telescopio per indagare i misteri della Via Lattea [Link]Il giorno in cui abbiamo quasi perso INTEGRAL [Link] I supporter di questo episodio Grazie a Fiorello P., Gianpaolo F. per il supporto. Rubriche Le storie di Nonno Apollo: Il razzo lockheed X-17 Link della settimana 12 novembre 2014, Philae atterrava sulla cometa 67/P CG [Link]Aperitivo con la cometa [Link] AstronauticAgenda Versione a griglia, Google Calendar e Timeline La puntata su YouTube https://www.youtube.com/watch?v=JWkIPI2yiV4 Sigle e musiche di accompagnamento Sigla iniziale: Discov2 di eslade (https://www.jamendo.com/track/467466/discov2)Sigla finale: Prometheus di ANtarticbreeze (https://www.jamendo.com/track/1229086/prometheus)
For 500 million years, the galaxy known as XMM 2599 churned out the equivalent of a thousand suns per year. Then, about 12 billion years ago, the star factory shut down — the galaxy stopped making stars at all. It’s possible that the galaxy used up all the gas for making stars. But it’s also possible that the gas was blown out of the galaxy — by a supermassive black hole. Black holes themselves only pull things in. Anything that falls into a black hole is lost — nothing can come back out. But as gas and dust fall toward a black hole, they form a wide, thin disk around it. As material piles up in the disk it gets hot — up to millions of degrees. At that temperature, it produces a strong “wind” of charged particles. The black holes in the hearts of galaxies are millions or billions of times the mass of the Sun. So the disks around them can be huge and brilliant. And their winds can be massive. They can “blow” outward at up to a quarter of the speed of light. And like a strong desert breeze blowing sand, a black-hole wind can blow away the gas and dust around it — hundreds of times the mass of the Sun per year. That can shut down starbirth in the surrounding galaxy. But the picture is complicated. Black-hole winds can also cause the birth of new stars. The winds can squeeze clouds of gas and dust, causing them to collapse and create stars — all in the hot wind from a black hole. More about black holes tomorrow. Script by Damond Benningfield
For 500 million years, the galaxy known as XMM 2599 churned out the equivalent of a thousand suns per year. Then, about 12 billion years ago, the star factory shut down — the galaxy stopped making stars at all. It’s possible that the galaxy used up all the gas for making stars. But it’s also possible that the gas was blown out of the galaxy — by a supermassive black hole. Black holes themselves only pull things in. Anything that falls into a black hole is lost — nothing can come back out. But as gas and dust fall toward a black hole, they form a wide, thin disk around it. As material piles up in the disk it gets hot — up to millions of degrees. At that temperature, it produces a strong “wind” of charged particles. The black holes in the hearts of galaxies are millions or billions of times the mass of the Sun. So the disks around them can be huge and brilliant. And their winds can be massive. They can “blow” outward at up to a quarter of the speed of light. And like a strong desert breeze blowing sand, a black-hole wind can blow away the gas and dust around it — hundreds of times the mass of the Sun per year. That can shut down starbirth in the surrounding galaxy. But the picture is complicated. Black-hole winds can also cause the birth of new stars. The winds can squeeze clouds of gas and dust, causing them to collapse and create stars — all in the hot wind from a black hole. More about black holes tomorrow. Script by Damond Benningfield Support McDonald Observatory
In this week's episode of Astronomy News with The Cosmic Companion, we meet the first European space telescope designed to study the Sun, and a massive young world is found in our galactic neighborhood. We also take a look up at Betelgeuse as one of the most familiar stars in the night sky may be preparing to explode, and we examine an odd radio signal from space which repeats every 16 days, leaving astronomers baffled.I also interview Dr. Gillian Wilson of the University of California, Riverside about her discovery of XMM-2599, a galaxy that lived fast and died young in the early Universe. Full interview in podcast only. Video version of this podcast:On February 10, the Solar Orbiter from the European Space Agency lifted off from Cape Canaveral on a mission to explore the Sun. This vehicle carries 10 instruments, each designed to study a different characteristic of our parent star. This is Europe's first mission to the Sun, and the spacecraft will work with NASA's Parker Solar Probe, attempting to understand solar activity which produces space weather that can affect Earth.A massive young planet has been discovered by astronomers just 330 light years from Earth. This world, known as 2MASS 1155–7919 b, is roughly 10 times larger than Jupiter, and orbits its parent star at a distance 600 times greater than the distance between the Earth and Sun. Just a handful of planets this size are known to astronomers, and this world is the closest yet found to our home world.On February 25th, I will interview Annie Dickson Vandevelde of the Rochester Institute of Technology about her discovery of this unusual planet. Listen to this full interview next week on the Astronomy News with the Cosmic Companion podcast.For several months, the normally bright star, Betelgeuse, seen in the constellation of Orion, has been noticeably dimming. This has led many astronomers, both professional and amateur, to speculate that this massive red giant star may be about to explode as a supernova. New observations by astronomers at the European Southern Observatory show this star is also changing shape, becoming more elongated. It is uncertain what is causing this, or if the star will be seen erupting in the immediate future, although chances of such an eruption seem slim at this time.Radio astronomers in Canada have recently discovered a source of radio waves from space which turns on and off on a 16-day cycle. Roughly once an hour for four days, the source emits a radio signal, which is then followed by twelve days of silence. Astronomers are uncertain what could be causing this unusual phenomenon, but the CHIME radio telescope in Canada which found the source uses technology which could help uncover its nature. This signal appears to be a unique type of fast radio burst, which were first discovered in 2007. Remember to rune in next week when I interview Dorothy Dickson-Vandervelde of the Rochester Institute of Technology about her discovery of 2MASS 1155–7919 b, the massive young exoplanet in our galactic neighborhood. Did you like this episode? Subscribe to The Cosmic Companion Newsletter! Astronomy News with The Cosmic Companion is also available as a podcast from all major podcast providers. Or, add this show to your flash briefings on Amazon Alexa.See you around the Cosmos!- James Get full access to The Cosmic Companion at thecosmiccompanion.substack.com/subscribe --- Send in a voice message: https://anchor.fm/the-cosmic-companion/message Support this podcast: https://anchor.fm/the-cosmic-companion/support
In this week's episode of The Cosmic Companion, we look at how the smallest subatomic particles could be responsible for all the matter in the Universe, the icy heart of Pluto could control the climate on that world, an ancient galaxy is discovered that lived fast and died young, The CHEOPS Space Telescope takes its first image, and the Cocoon Galaxy is found to have a rare double core.Video version of this podcast:When matter first formed in the early Universe, theories suggest antimatter should have been created in the same, identical proportions. These two families of particles should have completely annihilated each other long ago, according to current theories. However, the Universe consists almost entirely of matter.This may be explained if neutrinos, which only rarely interact with matter, changed just one in a billion particles of antimatter into matter, a new study suggests. This process may have produced gravitational waves which could be visible to a new generation of observatories. Finding such waves could prove this new theory, researchers suggest.---An artist's impression of CHEOPS in space. Image credit: ESA/ATG Media LabThe first space telescope from the European Space Agency dedicated to studying planets around other stars has returned its first image. The CHaracterising ExOPlanet Satellite, or CHEOPS, was launched on December 18th, on a mission to study exoplanets discovered by other telescopes. This first image was created to test systems on the spacecraft and on the ground, and further testing on the orbiting observatory will the carried out over the course of the next two months.---A giant heart-shaped feature on Pluto, named Tombaugh Regio, may play a significant role in driving climate on that world, a new study reveals. As the Heart of Pluto warms during the day, nitrogen is driven into the atmosphere. At night, this gas cools, falling back to Pluto as frozen nitrogen, in a regular cycle similar to a heartbeat, altering the climate of the dwarf planet.---Astronomers believe the cocoon galaxy and its smaller companion galaxy, called NGC 4485, are the products of an ancient collision between a pair of small spiral galaxies. Now, Iowa State astronomers have recognized a second galactic core within the larger galaxy. One of the cores is seen in visible light and has long been known to astronomers, while the newly-recognized second core is obscured by clouds, and is only visible in radio wavelengths.---An ancient galaxy recently discovered by astronomers apparently lived fast and died young. This family of stars thrived just one billion years after the Big Bang, experiencing a period of active star formation. Just 800 million years later, star production had ceased, leaving behind a dead galaxy. Researchers are uncertain why this galaxy, known as XMM-2599, died so quickly or what became of this stellar grouping after star production ceased. ---On February 18th, I will interview Dr. Gillian Wilson of the University of California Riverside, about her work on the recent discovery of this fast-living galaxy. Ma --- Send in a voice message: https://anchor.fm/the-cosmic-companion/message Support this podcast: https://anchor.fm/the-cosmic-companion/support
This week on BSDNow, reports from AsiaBSDcon, TrueOS and FreeBSD news, Optimizing IllumOS Kernel, your questions and more. This episode was brought to you by Headlines AsiaBSDcon Reports and Reviews () AsiaBSDcon schedule (https://2017.asiabsdcon.org/program.html.en) Schedule and slides from the 4th bhyvecon (http://bhyvecon.org/) Michael Dexter's trip report on the iXsystems blog (https://www.ixsystems.com/blog/ixsystems-attends-asiabsdcon-2017) NetBSD AsiaBSDcon booth report (http://mail-index.netbsd.org/netbsd-advocacy/2017/03/13/msg000729.html) *** TrueOS Community Guidelines are here! (https://www.trueos.org/blog/trueos-community-guidelines/) TrueOS has published its new Community Guidelines The TrueOS Project has existed for over ten years. Until now, there was no formally defined process for interested individuals in the TrueOS community to earn contributor status as an active committer to this long-standing project. The current core TrueOS developers (Kris Moore, Ken Moore, and Joe Maloney) want to provide the community more opportunities to directly impact the TrueOS Project, and wish to formalize the process for interested people to gain full commit access to the TrueOS repositories. These describe what is expected of community members and committers They also describe the process of getting commit access to the TrueOS repo: Previously, Kris directly handed out commit bits. Now, the Core developers have provided a small list of requirements for gaining a TrueOS commit bit: Create five or more pull requests in a TrueOS Project repository within a single six month period. Stay active in the TrueOS community through at least one of the available community channels (Gitter, Discourse, IRC, etc.). Request commit access from the core developers via core@trueos.org OR Core developers contact you concerning commit access. Pull requests can be any contribution to the project, from minor documentation tweaks to creating full utilities. At the end of every month, the core developers review the commit logs, removing elements that break the Project or deviate too far from its intended purpose. Additionally, outstanding pull requests with no active dissension are immediately merged, if possible. For example, a user submits a pull request which adds a little-used OpenRC script. No one from the community comments on the request or otherwise argues against its inclusion, resulting in an automatic merge at the end of the month. In this manner, solid contributions are routinely added to the project and never left in a state of “limbo”. The page also describes the perks of being a TrueOS committer: Contributors to the TrueOS Project enjoy a number of benefits, including: A personal TrueOS email alias: @trueos.org Full access for managing TrueOS issues on GitHub. Regular meetings with the core developers and other contributors. Access to private chat channels with the core developers. Recognition as part of an online Who's Who of TrueOS developers. The eternal gratitude of the core developers of TrueOS. A warm, fuzzy feeling. Intel Donates 250.000 $ to the FreeBSD Foundation (https://www.freebsdfoundation.org/news-and-events/latest-news/new-uranium-level-donation-and-collaborative-partnership-with-intel/) More details about the deal: Systems Thinking: Intel and the FreeBSD Project (https://www.freebsdfoundation.org/blog/systems-thinking-intel-and-the-freebsd-project/) Intel will be more actively engaging with the FreeBSD Foundation and the FreeBSD Project to deliver more timely support for Intel products and technologies in FreeBSD. Intel has contributed code to FreeBSD for individual device drivers (i.e. NICs) in the past, but is now seeking a more holistic “systems thinking” approach. Intel Blog Post (https://01.org/blogs/imad/2017/intel-increases-support-freebsd-project) We will work closely with the FreeBSD Foundation to ensure the drivers, tools, and applications needed on Intel® SSD-based storage appliances are available to the community. This collaboration will also provide timely support for future Intel® 3D XPoint™ products. Thank you very much, Intel! *** Applied FreeBSD: Basic iSCSI (https://globalengineer.wordpress.com/2017/03/05/applied-freebsd-basic-iscsi/) iSCSI is often touted as a low-cost replacement for fibre-channel (FC) Storage Area Networks (SANs). Instead of having to setup a separate fibre-channel network for the SAN, or invest in the infrastructure to run Fibre-Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network could be utilized for the storage as well. This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, failover controllers, etc. are not covered. The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it. Then, enable ctld and start it: sysrc ctld_enable=”YES” service ctld start You can use the ctladm command to see what is going on: root@bsdtarget:/dev # ctladm lunlist (7:0:0/0): Fixed Direct Access SPC-4 SCSI device (7:0:1/1): Fixed Direct Access SPC-4 SCSI device root@bsdtarget:/dev # ctladm devlist LUN Backend Size (Blocks) BS Serial Number Device ID 0 block 10485760 512 MYSERIAL 0 MYDEVID 0 1 block 10485760 512 MYSERIAL 1 MYDEVID 1 Now, let's configure the client side: In order for a FreeBSD host to become an iSCSI Initiator, the iscsd daemon needs to be started. sysrc iscsid_enable=”YES” service iscsid start Next, the iSCSI Initiator can manually connect to the iSCSI target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add the configuration to the /etc/iscsi.conf file (see man page for this file). For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session: + iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget You should now have a new device (check dmesg), in this case, da1 The guide them walks through partitioning the disk, and laying down a UFS file system, and mounting it This it walks through how to disconnect iscsi, incase you don't want it anymore This all looked nice and easy, and it works very well. Now lets see what happens when you try to mount the iSCSI from Windows Ok, that wasn't so bad. Now, instead of sharing an entire space disk on the host via iSCSI, share a zvol. Now your windows machine can be backed by ZFS. All of your problems are solved. Interview - Philipp Buehler - pbuehler@sysfive.com (mailto:pbuehler@sysfive.com) Technical Lead at SysFive, and Former OpenBSD Committer News Roundup Half a dozen new features in mandoc -T html (http://undeadly.org/cgi?action=article&sid=20170316080827) mandoc (http://man.openbsd.org/mandoc.1)'s HTML output mode got some new features Even though mdoc(7) is a semantic markup language, traditionally none of the semantic annotations were communicated to the reader. [...] Now, at least in -T html output mode, you can see the semantic function of marked-up words by hovering your mouse over them. In terminal output modes, we have the ctags(1)-like internal search facility built around the less(1) tag jump (:t) feature for quite some time now. We now have a similar feature in -T html output mode. To jump to (almost) the same places in the text, go to the address bar of the browser, type a hash mark ('#') after the URI, then the name of the option, command, variable, error code etc. you want to jump to, and hit enter. Check out the full report by Ingo Schwarze (schwarze@) and try out these new features *** Optimizing IllumOS Kernel Crypto (http://zfs-create.blogspot.com/2014/05/optimizing-illumos-kernel-crypto.html) Sašo Kiselkov, of ZFS fame, looked into the performance of the OpenSolaris kernel crypto framework and found it lacking. The article also spends a few minutes on the different modes and how they work. Recently I've had some motivation to look into the KCF on Illumos and discovered that, unbeknownst to me, we already had an AES-NI implementation that was automatically enabled when running on Intel and AMD CPUs with AES-NI support. This work was done back in 2010 by Dan Anderson.This was great news, so I set out to test the performance in Illumos in a VM on my Mac with a Core i5 3210M (2.5GHz normal, 3.1GHz turbo). The initial tests of “what the hardware can do” were done in OpenSSL So now comes the test for the KCF. I wrote a quick'n'dirty crypto test module that just performed a bunch of encryption operations and timed the results. KCF got around 100 MB/s for each algorithm, except half that for AES-GCM OpenSSL had done over 3000 MB/s for CTR mode, 500 MB/s for CBC, and 1000 MB/s for GCM What the hell is that?! This is just plain unacceptable. Obviously we must have hit some nasty performance snag somewhere, because this is comical. And sure enough, we did. When looking around in the AES-NI implementation I came across this bit in aes_intel.s that performed the CLTS instruction. This is a problem: 3.1.2 Instructions That Cause VM Exits ConditionallyCLTS. The CLTS instruction causes a VM exit if the bits in position 3 (corresponding to CR0.TS) are set in both the CR0 guest/host mask and the CR0 read shadow. The CLTS instruction signals to the CPU that we're about to use FPU registers (which is needed for AES-NI), which in VMware causes an exit into the hypervisor. And we've been doing it for every single AES block! Needless to say, performing the equivalent of a very expensive context switch every 16 bytes is going to hurt encryption performance a bit. The reason why the kernel is issuing CLTS is because for performance reasons, the kernel doesn't save and restore FPU register state on kernel thread context switches. So whenever we need to use FPU registers inside the kernel, we must disable kernel thread preemption via a call to kpreemptdisable() and kpreemptenable() and save and restore FPU register state manually. During this time, we cannot be descheduled (because if we were, some other thread might clobber our FPU registers), so if a thread does this for too long, it can lead to unexpected latency bubbles The solution was to restructure the AES and KCF block crypto implementations in such a way that we execute encryption in meaningfully small chunks. I opted for 32k bytes, for reasons which I'll explain below. Unfortunately, doing this restructuring work was a bit more complicated than one would imagine, since in the KCF the implementation of the AES encryption algorithm and the block cipher modes is separated into two separate modules that interact through an internal API, which wasn't really conducive to high performance (we'll get to that later). Anyway, having fixed the issue here and running the code at near native speed, this is what I get: AES-128/CTR: 439 MB/s AES-128/CBC: 483 MB/s AES-128/GCM: 252 MB/s Not disastrous anymore, but still, very, very bad. Of course, you've got keep in mind, the thing we're comparing it to, OpenSSL, is no slouch. It's got hand-written highly optimized inline assembly implementations of most of these encryption functions and their specific modes, for lots of platforms. That's a ton of code to maintain and optimize, but I'll be damned if I let this kind of performance gap persist. Fixing this, however, is not so trivial anymore. It pertains to how the KCF's block cipher mode API interacts with the cipher algorithms. It is beautifully designed and implemented in a fashion that creates minimum code duplication, but this also means that it's inherently inefficient. ECB, CBC and CTR gained the ability to pass an algorithm-specific "fastpath" implementation of the block cipher mode, because these functions benefit greatly from pipelining multiple cipher calls into a single place. ECB, CTR and CBC decryption benefit enormously from being able to exploit the wide XMM register file on Intel to perform encryption/decryption operations on 8 blocks at the same time in a non-interlocking manner. The performance gains here are on the order of 5-8x.CBC encryption benefits from not having to copy the previously encrypted ciphertext blocks into memory and back into registers to XOR them with the subsequent plaintext blocks, though here the gains are more modest, around 1.3-1.5x. After all of this work, this is how the results now look on Illumos, even inside of a VM: Algorithm/Mode 128k ops AES-128/CTR: 3121 MB/s AES-128/CBC: 691 MB/s AES-128/GCM: 1053 MB/s So the CTR and GCM speeds have actually caught up to OpenSSL, and CBC is actually faster than OpenSSL. On the decryption side of things, CBC decryption also jumped from 627 MB/s to 3011 MB/s. Seeing these performance numbers, you can see why I chose 32k for the operation size in between kernel preemption barriers. Even on the slowest hardware with AES-NI, we can expect at least 300-400 MB/s/core of throughput, so even in the worst case, we'll be hogging the CPU for at most ~0.1ms per run. Overall, we're even a little bit faster than OpenSSL in some tests, though that's probably down to us encrypting 128k blocks vs 8k in the "openssl speed" utility. Anyway, having fixed this monstrous atrocity of a performance bug, I can now finally get some sleep. To made these tests repeatable, and to ensure that the changes didn't break the crypto algorithms, Saso created a crypto_test kernel module. I have recently created a FreeBSD version of crypto_test.ko, for much the same purposes Initial performance on FreeBSD is not as bad, if you have the aesni.ko module loaded, but it is not up to speed with OpenSSL. You cannot directly compare to the benchmarks Saso did, because the CPUs are vastly different. Performance results (https://wiki.freebsd.org/OpenCryptoPerformance) I hope to do some more tests on a range of different sized CPUs in order to determine how the algorithms scale across different clock speeds. I also want to look at, or get help and have someone else look at, implementing some of the same optimizations that Saso did. It currently seems like there isn't a way to perform addition crypto operations in the same session without regenerating the key table. Processing additional buffers in an existing session might offer a number of optimizations for bulk operations, although in many cases, each block is encrypted with a different key and/or IV, so it might not be very useful. *** Brendan Gregg's special freeware tools for sysadmins (http://www.brendangregg.com/specials.html) These tools need to be in every (not so) serious sysadmins toolbox. Triple ROT13 encryption algorithm (beware: export restrictions may apply) /usr/bin/maybe, in case true and false don't provide too little choice... The bottom command lists you all the processes using the least CPU cycles. Check out the rest of the tools. You wrote similar tools and want us to cover them in the show? Send us an email to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) *** A look at 2038 (http://www.lieberbiber.de/2017/03/14/a-look-at-the-year-20362038-problems-and-time-proofness-in-various-systems/) I remember the Y2K problem quite vividly. The world was going crazy for years, paying insane amounts of money to experts to fix critical legacy systems, and there was a neverending stream of predictions from the media on how it's all going to fail. Most didn't even understand what the problem was, and I remember one magazine writing something like the following: Most systems store the current year as a two-digit value to save space. When the value rolls over on New Year's Eve 1999, those two digits will be “00”, and “00” means “halt operation” in the machine language of many central processing units. If you're in an elevator at this time, it will stop working and you may fall to your death. I still don't know why they thought a computer would suddenly interpret data as code, but people believed them. We could see a nearby hydropower plant from my parents' house, and we expected it to go up in flames as soon as the clock passed midnight, while at least two airplanes crashed in our garden at the same time. Then nothing happened. I think one of the most “severe” problems was the police not being able to open their car garages the next day because their RFID tokens had both a start and end date for validity, and the system clock had actually rolled over to 1900, so the tokens were “not yet valid”. That was 17 years ago. One of the reasons why Y2K wasn't as bad as it could have been is that many systems had never used the “two-digit-year” representation internally, but use some form of “timestamp” relative to a fixed date (the “epoch”). The actual problem with time and dates rolling over is that systems calculate timestamp differences all day. Since a timestamp derived from the system clock seemingly only increases with each query, it is very common to just calculate diff = now - before and never care about the fact that now could suddenly be lower than before because the system clock has rolled over. In this case diff is suddenly negative, and if other parts of the code make further use of the suddenly negative value, things can go horribly wrong. A good example was a bug in the generator control units (GCUs) aboard Boeing 787 “Dreamliner” aircrafts, discovered in 2015. An internal timestamp counter would overflow roughly 248 days after the system had been powered on, triggering a shut down to “safe mode”. The aircraft has four generator units, but if all were powered up at the same time, they would all fail at the same time. This sounds like an overflow caused by a signed 32-bit counter counting the number of centiseconds since boot, overflowing after 248.55 days, and luckily no airline had been using their Boing 787 models for such a long time between maintenance intervals. The “obvious” solution is to simply switch to 64-Bit values and call it day, which would push overflow dates far into the future (as long as you don't do it like the IBM S/370 mentioned before). But as we've learned from the Y2K problem, you have to assume that computer systems, computer software and stored data (which often contains timestamps in some form) will stay with us for much longer than we might think. The years 2036 and 2038 might be far in the future, but we have to assume that many of the things we make and sell today are going to be used and supported for more than just 19 years. Also many systems have to store dates which are far in the future. A 30 year mortgage taken out in 2008 could have already triggered the bug, and for some banks it supposedly did. sysgettimeofday() is one of the most used system calls on a generic Linux system and returns the current time in form of an UNIX timestamp (timet data type) plus fraction (susecondst data type). Many applications have to know the current time and date to do things, e.g. displaying it, using it in game timing loops, invalidating caches after their lifetime ends, perform an action after a specific moment has passed, etc. In a 32-Bit UNIX system, timet is usually defined as a signed 32-Bit Integer. When kernel, libraries and applications are compiled, the compiler will turn this assumption machine code and all components later have to match each other. So a 32-Bit Linux application or library still expects the kernel to return a 32-Bit value even if the kernel is running on a 64-Bit architecture and has 32-Bit compatibility. The same holds true for applications calling into libraries. This is a major problem, because there will be a lot of legacy software running in 2038. Systems which used an unsigned 32-Bit Integer for timet push the problem back to 2106, but I don't know about many of those. The developers of the GNU C library (glibc), the default standard C library for many GNU/Linux systems, have come up with a design for year 2038 proofness for their library. Besides the timet data type itself, a number of other data structures have fields based on timet or the combined struct timespec and struct timeval types. Many methods beside those intended for setting and querying the current time use timestamps 32-Bit Windows applications, or Windows applications defining _USE32BITTIMET, can be hit by the year 2038 problem too if they use the timet data type. The _time64t data type had been available since Visual C 7.1, but only Visual C 8 (default with Visual Studio 2015) expanded timet to 64 bits by default. The change will only be effective after a recompilation, legacy applications will continue to be affected. If you live in a 64-Bit world and use a 64-Bit kernel with 64-Bit only applications, you might think you can just ignore the problem. In such a constellation all instances of the standard time_t data type for system calls, libraries and applications are signed 64-Bit Integers which will overflow in around 292 billion years. But many data formats, file systems and network protocols still specify 32-Bit time fields, and you might have to read/write this data or talk to legacy systems after 2038. So solving the problem on your side alone is not enough. Then the article goes on to describe how all of this will break your file systems. Not to mention your databases and other file formats. Also see Theo De Raadt's EuroBSDCon 2013 Presentation (https://www.openbsd.org/papers/eurobsdcon_2013_time_t/mgp00001.html) *** Beastie Bits Michael Lucas: Get your name in “Absolute FreeBSD 3rd Edition” (https://blather.michaelwlucas.com/archives/2895) ZFS compressed ARC stats to top (https://svnweb.freebsd.org/base?view=revision&revision=r315435) Matthew Dillon discovered HAMMER was repeating itself when writing to disk. Fixing that issue doubled write speeds (https://www.dragonflydigest.com/2017/03/14/19452.html) TedU on Meaningful Short Names (http://www.tedunangst.com/flak/post/shrt-nms-fr-clrty) vBSDcon and EuroBSDcon Call for Papers are open (https://www.freebsdfoundation.org/blog/submit-your-work-vbsdcon-and-eurobsdcon-cfps-now-open/) Feedback/Questions Craig asks about BSD server management (http://pastebin.com/NMshpZ7n) Michael asks about jails as a router between networks (http://pastebin.com/UqRwMcRk) Todd asks about connecting jails (http://pastebin.com/i1ZD6eXN) Dave writes in with an interesting link (http://pastebin.com/QzW5c9wV) > applications crash more often due to errors than corruptions. In the case of corruption, a few applications (e.g., Log-Cabin, ZooKeeper) can use checksums and redundancy to recover, leading to a correct behavior; however, when the corruption is transformed into an error, these applications crash, resulting in reduced availability. ***
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 04/05
One of the most fundamental correlations between the properties of galaxies in the local Universe is the so-called morphology-density relation (Dressler 1980). A plethora of studies utilizing multi-wavelength tracers of activity have shown that late type star forming galaxies favour low density regions in the local Universe (e.g. G´omez et al. 2003). In particular, the cores of massive galaxy clusters are galaxy graveyards full of massive spheroids that are dominated by old stellar populations. A variety of physical processes might be effective in suppressing star formation and affecting the morphology of cluster and group galaxies. Broadly speaking, these can be grouped in two big families: (i) interactions with other cluster members and/or with the cluster gravitational potential and (ii) interactions with the hot gas that permeates massive galaxy systems. Galaxy groups are the most common galaxy environment in our Universe, bridging the gap between the low density field and the crowded galaxy clusters. Indeed, as many as 50%-70% of galaxies reside in galaxy groups in the nearby Universe (Huchra & Geller 1982; Eke et al. 2004), while only a few percent are contained in the denser cluster cores. In addition, in the current bottom-up paradigm of structure formation, galaxy groups are the building blocks of more massive systems: they merge to form clusters. As structures grow, galaxies join more and more massive systems, spending most of their life in galaxy groups before entering the cluster environment. Thus, it is plausible to ask if group-related processes may drive the observed relations between galaxy properties and their environment. To shed light on this topic we have built the largest X-ray selected samples of galaxy groups with secure spectroscopic identification on the major blank field surveys. For this purpose, we combine deep X-ray Chandra and XMM data of the four major blank fields (All-wavelength Extended Groth Strip International Survey (AEGIS), the COSMOS field, the Extended Chandra Deep Field South (ECDFS), and the Chandra Deep Field North (CDFN) ). The group catalog in each field is created by associating any X-ray extended emission to a galaxy overdensity in the 3D space. This is feasible given the extremely rich spectroscopic coverage of these fields. Our identification method and the dynamical analysis used to identify the galaxy group members and to estimate the group velocity dispersion is extensively tested on the AEGIS field and with mock catalogs extracted from the Millennium Simulation (Springel et al. 2005). The effect of dynamical complexity, substructure, shape of X-ray emission, different radial and redshift cuts have been explored on the LX −sigma relation. We also discover a high redshift group at z~1.54 in the AEGIS field. This detection illustrates that mega-second Chandra exposures are required for detecting such objects in the volume of deep fields. We provide an accurate measure of the Star Formation Rate (SFR) of galaxies by using the deepest available Herschel PACS and Spitzer MIPS data available for the considered fields. We also provide a well-calibrated estimate of the SFR derived by using the SED fitting technique for undetected sources in mid- and far-infrared observations. Using this unique sample, we conduct a comprehensive analysis of the dependence of the total SFR , total stellar masses and halo occupation distribution (HOD) of massive galaxies (M*>10^10 M_sun) on the halo mass of the groups with rigorous consideration of uncertainties. We observe a clear evolution in the level of star formation (SF) activity in galaxy groups. Indeed, the total star formation activity in high redshift (0.5
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 03/05
We are experiencing a unique epoch in the history of galaxy cluster studies. We have now open windows across the whole electromagnetic spectrum which offer us complementary approaches for cluster detection and analyses. Almost forty years after its theoretical prediction, first large radio telescopes started to scan the sky looking for massive clusters as "shadows" in the cosmic microwave background imprinted there by their hot gas content via the Sunyaev-Zel'dovich effect (SZE). In X-rays this hot plasma can be observed also directly. Optical and infrared telescopes give us a view on the galaxy population of clusters and through gravitational lensing also on its dominant, invisible component - the dark matter. The advent of multi-wavelength cluster surveys brings also the necessity to compare and cross-calibrate each cluster detection approach. This is the main aim of this work carried out in the framework of the XMM-emph{Newton}-Blanco Cosmology Survey project (XMM-BCS). This project is a coordinated multi-wavelength survey in a 14~deg$^2$ test region covered in the optical band by the Blanco Cosmology Survey, in the mid-infrared by the emph{Spitzer} Space Telescope and in X-rays by XMM-emph{Newton}. This area is also part of the sky scanned by both SZE survey instruments: the South Pole Telescope (SPT) and the Atacama Cosmology Telescope (ACT). In the first part of the thesis I describe the analysis of the initial 6~deg$^2$ core of the X-ray survey field. From the detected extended sources a cluster catalog comprising 46 objects is constructed. These cluster candidates are confirmed as significant galaxy overdensities in the optical data, their photometric redshifts are measured and for a subsample confirmed with spectroscopic measurements. I provide physical parameters of the clusters derived from X-ray luminosity and carry out a first comparison with optical studies. The cluster catalog will be useful for direct cross-comparison with optical/mid-infrared catalogs, for the investigation of the survey selection functions, stacking analysis of the SZE signal and for cosmological analyses after combing with clusters detected in the extension of the survey. The extension of the survey to 14~deg$^2$ is a first scientific utilization of the novel XMM-emph{Newton} mosaic mode observations. I have developed a data analysis pipeline for this operation mode and report on the discovery of two galaxy clusters, SPT-CL~J2332-5358 and SPT-CL~J2342-5411, in X-rays. The clusters were also independently detected through their SZE signal by the SPT and in the optical band in the BCS data. They are thus the first clusters detected under survey conditions by all major cluster search approaches. This work also demonstrates the potential of the mosaic mode observations to effectively cover large sky areas and detect massive clusters out to redshifts $sim1$ even with shallow exposures. The last part of the thesis provides an example of a multi-wavelength analysis of two high-redshift ($z>1$) systems in the framework of the XMM-emph{Newton} Distant Cluster Project. With the detection and studies of these high redshift systems we are for the first time able to see the assembly phase of the galaxy population of the clusters, which in nearby systems is totally passive, but at these high redshifts still show signatures of star formation.
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 03/05
In the framework of the current cosmological paradigm, cosmic evolution is mostly driven by gravity through the hierarchical growth of cold dark matter structures. However, the evolution of the directly observed luminous component involves complex non-gravitational processes such as cooling, star formation and feedback mechanisms involving the conventional matter well known to us, termed shortly as baryons. Clusters of galaxies are the largest virialized systems in the Universe, hence are ideal laboratories to study the evolution of baryons. The baryon content of clusters accounts for roughly 15% of their total mass, encompassing a “cold phase” in the form of luminous galaxy masses, and a “hot phase” corresponding to the X-ray emitting intracluster medium (ICM). The thermodynamics of baryons is affected by non-trivial phenomena and the interplay of the intricate processes between these two phases remains, to a large extent, unclear. In this thesis I investigate the properties of both the ICM and the underlying galaxy populations in X-ray selected distant clusters, with the aim of constraining physical processes governing the evolution of clusters and their galaxies. The inner regions of local clusters often exhibit radiative cooling, termed cool cores (CC). I have made an important step in investigating the abundance of cool cores in the distant cluster population, by devising efficient methods to characterize local CCs, that were applied to the highest redshift cluster sample currently available (0.7 < z < 1.4) from the Chandra archive. The fraction of CCs seems to decrease with redshift, since I find that the majority of the distant clusters are in an intermediate state of cooling. High-z (z∼ >1) clusters are hard to find. The XMM-Newton Distant Cluster Project (XDCP) is a survey aimed to construct a complete sample of z∼1 clusters from the XMM-Newton archive. Within this scope a large effort has been done to confirm potential distant cluster candidates by exploring a new optical & near-infrared imaging technique to identify overdensities of galaxies. Twenty-two cluster candidates were imaged during two runs at the ESO/La Silla Observatory, of which around half are potential distant clusters, based on their I-H images. The applied photometric technique has thus proven to be reliable in identifying z≥0.8 overdensities of galaxies. The formation and evolution of massive early-type galaxies (ETGs) is still an open question, since the observational data cannot be easily reconciled with the preferred, hierarchical galaxy formation scenario. Using high resolution Hubble Space Telescope/ACS imaging and VLT/FORS2 spectra, I studied the galaxy population of XMM1229 at z=0.975, discovered in the XDCP. The results show a red-sequence populated by galaxies with stellar masses in the range 5e10 - 2e11 solar masses and old (3-4 Gyr) underlying stellar population formed at zf ∼ 4. The color-magnitude relation at this high redshift is found to be already very tight (with a 0.04 spread similar to the local Coma cluster). This confirms that ETGs in clusters assembled early on and in short timescales, and their star formation processes have already completed the essential part of their chemical enrichment, as elucidated by the high metal abundance (Z ∼ 0.3 solar) of the ICM, measured with XMM spectra.
Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 01/05
In dieser Arbeit habe ich die Eigenschaften der interstellaren Staubpartikel untersucht, wie sie sich aus deren Wechselwirkung mit Roentgen strahlung ergeben. Tatsaechlich werden Photonen, die von einer entfernten Punktquelle stammen, von den Staubteilchen nicht nur absorbiert sondern auch in Vorwaertsrichtung gestreut. Ich habe mehrere Quellen untersucht, die mit unterschiedlichen Instrumenten an Bord der Roentgen satelliten Chandra und XMM beobachtet wurden. Dabei lag der Schwerpunkt sowohl auf den Absorptionsmerkmalen, die das interstellare Medium den Spektren eingepraegt hat, als auch auf der spektralen und raeumlichen Untersuchung der gestreuten Strahlung, die einen Halo aus schwacher diffuser Emission um die Punktquelle erzeugt. Als vorlaeufigen Schritt habe ich die instrumentelle Punktbildfunktion der EPIC-pn-Kamera und der ACIS-Kamera (an Bord von XMM beziehungsweise Chandra) bestimmt unter Benutzung von Daten aus der Flugphase, und diese mit Vorhersagen aus Bodenkalibrationen verglichen. Eine genaue Kenntnis der Punktbildfunktion ist unerlaeßlich fuer eine korrekte Bestimmung der Flaechenhelligkeitsverteilung der ausgedehnten gestreuten Emission. Aus der Analyse von sieben Chandra-Quellen (beobachtet mit ACIS-S und ACIS-I) ergibt sich, daß fuer einige Quellen (namentlich Cen X-3 und der Große Annihilator die Form der Flaechenhelligkeitsverteilung eine einfache gleichförmige Verteilung der Staubkörner entlang der Sichtlinie ausschließt und stattdessen ein Modell mit einem geklumpten Medium bevorzugt wird. Dies ist in uebereinstimmung mit der Geometrie der Galaxis selbst: ein Sehstrahl kann einen oder mehrere Spiralarme durchdringen oder auch mehrere Wolken. Ich habe einige Bedingungen fuer die Lage dieser Staubklumpen aufstellen können. Die Untersuchung der ausgewaehlten Quellen, zusammen mit Daten von frueheren Missionen, hat es mir erlaubt, die innere Struktur der Staubkörner einzugrenzen, und die Grenzen der Streutheorie zu analysieren, wenn diese auf astrophysikalische Objekte angewandt wird. Ich habe mit einer weiteren Auswahl von Chandra-Quellen, die mit dem HETG-Spektrometer beobachtet worden waren, ein besonderes Absorptionsmerkmal (die sogenannten XAFS) untersucht, das von den festen Teilchen im interstellaren Medium verursacht wird. Die am meisten absorbierten Quellen erscheinen als die besten Kandidaten fuer eine erflgreiche Erkennung der XAFS. Da der Absorptionsquerschnitt fuer Staub den fuer Gas oberhalb von 1.3 keV uebersteigt, sind die Elemente mit erwarteten XAFS Magnesium und Silizium. Mittels Beobachtungen von XMM habe ich anhand von Daten des RGS-Spektrometers und der EPIC-pn-Kamera zwei Roentgen-Doppelstenssysteme (LMXB) untersucht ({em Cyg X-2, GX,339-4}). Cyg X-2 ist eine "schwache Haloquelle", was bedeutet, daß die Staubsaeulendichte relativ gering ist. Wegen dieser bei weichen Roentgen energien nur moderaten Absorption konnte der Bereich unterhalb 1 keV sowohl durch Streuung (mittels des Halospektrums) als auch durch Absorption (durch das hochaufgelöste RGS-Spektrum der absorbierten Quelle). Von diesem gestreuten Spektrum konnte -- erstmals -- the Streueigenschaften der Elemente im Staub bestimmen. Die Daten konnten gut an eine Mischung aus Graphit und Silikaten angepaßt werden. Bei der Untersuchung des RGS-Spektrums lag der Schwerpunkt auf der komplexen Struktur der Sauerstoff-Kante und der Eisen-L-Kante, wo viele Absorptionsmerkmale gefunden wurden, und ich habe die beobachteten resonanten Uebergaenge im Lichte neuer Labormessungen identifiziert.