Nextcloud Upgrade, Android Tablet Experiment, Jellystat, OpenSSH 9.5, Audiobookshelf, StirlingPDF, LubeLog, Tailscale Integration, Apple Notes Killer, Memos, Reverse Proxy, Zigbee Plugs Recommendation. Sponsored By: Tailscale: Tailscale is a zero-config VPN. It installs on any device in minutes, manages firewall rules for you, and works from anywhere. Get 3 users and 100 devices for free. Support Self-Hosted. Links: ⚡ Grab Sats with Strike Around the World — Strike is a lightning-powered app that lets you quickly and cheaply grab sats in over 36 countries.
The Oracle Autonomous Database Dedicated deployment is a good choice for customers who want to implement a private database cloud in their own dedicated Exadata infrastructure. That dedicated infrastructure can either be in the Oracle Public Cloud or in the customer's own data center via Oracle Exadata Cloud@Customer. In a dedicated environment, the Exadata infrastructure is entirely dedicated to the subscribing customer, isolated from other cloud tenants, with no shared processor, storage, or memory resources. In this episode, hosts Lois Houston and Nikita Abraham speak with Oracle Database experts about how Autonomous Database Dedicated offers greater control of the software and infrastructure life cycle, customizable policies for separation of database workload, software update schedules and versioning, workload consolidation, availability policies, and much more. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Tamal Chatterjee, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! This is our second episode on Oracle's Autonomous Database, and today we're going to spend time discussing Autonomous Database on Dedicated Infrastructure. We'll be talking with three of our colleagues: Maria Colgan, Kamryn Vinson, and Kay Malcolm. 00:53 Nikita: Maria is a Distinguished Product Manager for Oracle Database, Kamryn is a Database Product Manager, and Kay is a Senior Director of Database Product Management. Lois: Hi Maria! Thanks for joining us today. We know that Oracle Autonomous Database offers two deployment choices: serverless and dedicated Exadata infrastructure. We spoke about serverless infrastructure last week, but for anyone who missed that episode, can you give us a quick recap of what it is? 01:22 Maria: With Autonomous Database Serverless, Oracle automates all aspects of the infrastructure and database management for you. That includes provisioning, configuring, monitoring, backing up, and tuning. You simply select what type of database you want, maybe a data warehouse, transaction processing, or a JSON document store, which region in the Oracle Public Cloud you want that database deployed in, and the base compute and storage resources necessary. Oracle automatically takes care of everything else. Once provisioned, the database can be instantly scaled through our UI, our APIs, or automatically based on your workload needs. All scaling activities happen completely online while the database remains open for business. 02:11 Nikita: Ok, so now that we know what serverless is, let's move on to dedicated infrastructure. What can you tell us about it? Maria: Autonomous Database Dedicated allows customers to implement a private database cloud running on their own dedicated Exadata infrastructure.
That dedicated infrastructure can be in Oracle's Public Cloud or in the customer's own data center via Oracle Exadata Cloud@Customer. It makes an ideal platform to consolidate multiple databases regardless of their workload type or their size. And it also allows you to offer database as a service within your enterprise. 02:50 Lois: What are the primary benefits of Autonomous Database Dedicated infrastructure? Maria: With the dedicated deployment option, you must first subscribe to Dedicated Exadata Cloud Infrastructure that is isolated from other tenants, with no shared processors, memory, network, or storage resources. This infrastructure choice offers greater control of both the software and the infrastructure life cycle. Customers can specify their own policies for workload separation, software update schedules, and availability. One of the key benefits of an autonomous database is a lower total cost of ownership through more automation and operational delegation to Oracle. Remember, it's a fully managed service. All database operations, such as backup, software updates, upgrades, OS maintenance, incident management, and health monitoring, will be automatically done for you by Oracle. Its maximum availability architecture protects you from any hardware failures, and in the event of a full outage, the service will be automatically failed over to your standby site. Built-in application continuity ensures zero downtime during standard software updates or in the event of a failover. 04:09 Nikita: And how is this billed? Maria: Autonomous Database also has true pay-per-use billing, so even when autoscale is enabled, you'll only pay for those additional resources when you use them. And we make it incredibly simple to develop on this environment with managed developer add-ons like our low-code development environment, APEX, and our REST data services. This means you don't need any additional development environments in order to get started with a new application. 04:40 Lois: Ok. So, it looks like the dedicated option offers more control and customization. Maria, how do we access a dedicated database over a network? Maria: The network path is through a VCN, or Virtual Cloud Network, and the subnet that's defined by the Exadata infrastructure hosting the database. By default, this subnet is defined as private, meaning there's no public internet access to those databases. This ensures only your company can access your Exadata infrastructure and your databases. Autonomous Database Dedicated can also take advantage of network services provided by OCI, including subnet or VCN peering, as well as connections to on-prem databases through IPSec VPN and FastConnect dedicated corporate network connections. 05:33 Maria: You can also take advantage of the Oracle Microsoft partnership that enables customers to connect their Oracle Cloud Infrastructure resources and Microsoft Azure resources through a dedicated private connection. However, for some customers, a move to the public cloud is just not possible. Perhaps it's due to industry regulations, performance concerns, or integration with legacy on-prem applications. For these types of customers, Exadata Cloud@Customer should meet their requirements for strict data sovereignty and security by delivering high-performance Exadata Cloud Service capabilities in their data center behind their own firewall. 06:16 Nikita: What are the benefits of Autonomous Database on Exadata Cloud@Customer? How's it different?
Maria: Autonomous Database on Exadata Cloud@Customer provides the same service as Autonomous Database Dedicated in the public cloud. So you get the same simplicity, agility, performance, and elasticity that you get in the cloud. But it also provides a very fast and simple transition to an autonomous cloud because you can easily migrate on-prem databases to Exadata Cloud@Customer. Once a database is migrated, any existing applications can simply reconnect to that new database and run without any application changes being needed. And the data never leaves your data center, making it a very safe way to adopt a cloud model. 07:04 Lois: So, how do we manage communication to and from the public cloud? Maria: Each Cloud@Customer rack includes two local control plane servers to manage the communication to and from the public cloud. The local control plane acts on behalf of requests from the public cloud, keeping communications consolidated and secure. Platform control plane commands are sent to the Exadata Cloud@Customer system through a dedicated WebSocket secure tunnel. Oracle Cloud operations staff use that same tunnel to monitor the autonomous database on Exadata Cloud@Customer, both for maintenance and for troubleshooting. The two control plane servers installed in the Exadata Cloud@Customer rack host that secure tunnel endpoint and act as a gateway for access to the infrastructure. They also host components that orchestrate the cloud automation, aggregate and route telemetry messages from the Exadata Cloud@Customer platform to the Oracle Support Service infrastructure, and host images for server patching. 08:13 Maria: The Exadata Database Server is connected to the customer-managed switches via either 10 gigabit or 25 gigabit Ethernet. Customers have access to the customer Virtual Machine, or VM, via a pair of layer 2 network connections that are implemented as Virtual Network Interface Cards, or vNICs, which are also VLAN-tagged. The physical network connections are implemented for high availability in an active-standby configuration. Autonomous Database on Exadata Cloud@Customer provides the best of both worlds: all of the automation, including patching, backing up, scaling, and management of a database, that you get with a cloud service, but without the data ever leaving the customer's data center. 09:01 Nikita: That's interesting. And, what happens if a dedicated database loses network connectivity to the OCI control plane? Maria: In the event an autonomous database on Exadata Cloud@Customer loses network connectivity to the OCI control plane, the Autonomous Database will actually continue to be available for your applications. And operations such as backups and autoscaling will not be impacted by that loss of network connectivity. However, the management and monitoring of the Autonomous Database via the OCI console and APIs, as well as access by the Oracle Cloud operations team, will not be available until that network is reconnected. 09:43 Maria: The capabilities suspended in the case of a lost network connection include, as I said, infrastructure management, so that's the manual scaling of an Autonomous Database via the UI, our OCI CLI, or REST APIs, as well as Terraform scripts. They won't be available. Neither will the ability for Oracle Cloud ops to access and perform maintenance activities, such as patching. Nor will we be able to monitor the Oracle infrastructure during the time the system is not connected. 10:20 Lois: That's good to know, Maria.
What about data encryption and backup options? Maria: All Oracle Autonomous Databases encrypt data at rest. Data is automatically encrypted as it's written to storage. But this encryption is transparent to authorized users and applications because the database automatically decrypts the data when it's being read from storage. There are several options for backing up Autonomous Database on Cloud@Customer, including using a Zero Data Loss Recovery Appliance, or ZDLRA. You can back it up to locally mounted NFS storage or back it up to the Oracle Public Cloud. 10:57 Nikita: I want to ask you about the typical workflow for Autonomous Database Dedicated infrastructure. What are the main steps here? Maria: In the typical workflow, the fleet administrator role performs the following steps. They provision the Exadata infrastructure by specifying its size, availability domain, and region within the Oracle Cloud. Once the hardware has been provisioned, the fleet administrator partitions the system by provisioning clusters and container databases. Then the developers, DBAs, or anyone who needs a database can provision databases within those container databases. Billing is based on the size of the Exadata infrastructure that's provisioned, so whether that's a quarter rack, half rack, or full rack. It also depends on the number of CPUs that are being consumed. Remember, it's also possible for customers to use their existing Oracle Database licenses with this service to reduce the cost. 11:53 Lois: And what Exadata infrastructure models and shapes does Autonomous Database Dedicated support? Maria: That's the X7, X8, and X8M, and you can get all of those in either a quarter, half, or full Exadata rack. Currently, you can create a maximum of 12 VM clusters on an Autonomous Database Dedicated infrastructure. We also advise that you limit the number of databases you provision to meet your preferred SLA. To meet the high availability SLA, we recommend a maximum of 100 databases. To meet the extreme availability SLA, we recommend a maximum of 25 databases. 12:35 Nikita: Ok, so now that I know all this, how do I actually get started with Autonomous Database on dedicated infrastructure? Maria: You need to increase your service limit to include that Exadata infrastructure, and then you need to create the fleet and DBA service roles. You also need to create the necessary network model, VM clusters, and container databases for your organization. Finally, you need to provide access to the end users who want to create and use those Autonomous Databases. Autonomous Database requires a subscription to that Exadata infrastructure for a minimum of 48 hours. But once subscribed, you can test out ideas and then terminate the subscription with no ongoing costs. While subscribed, you can control where you place the resources, to perhaps manage latency-sensitive applications. 13:29 Maria: You can also have control over patching schedules and software versions, so you can be sure that you're testing exactly what you need to. You can also migrate databases to the Autonomous Database via our export/import capabilities through the object store, or through Data Pump or GoldenGate. As with any Autonomous Database, once it's provisioned, you've got full access to both autoscaling and all our cloning capabilities. 13:57 Lois: Maria, I've heard you talk about the importance of clean role separation in managing a private cloud. Can you elaborate on that, please?
Maria: A successful private cloud is set up and managed using clean role separation between the fleet administration group and the developer, or DBA, groups. The fleet administration group establishes the governance constraints, including things like budgeting, capacity compliance, and SLAs, according to the business structure. The physical resources are also logically grouped to align with this business structure, and then groups of users are given self-service access to the resources within these groups. So a good example of this would be that the developer and DBA groups use self-service database resources within these constraints. 14:46 Nikita: I see. So, what exactly does a fleet administrator do? Maria: Fleet administrators allocate budget by department and are responsible for the creation, monitoring, and management of the Autonomous Exadata Infrastructure, the Autonomous Exadata VM clusters, and the Autonomous Container Databases. To perform these duties, the fleet administrators must have an Oracle Cloud account or user, and that user must have permissions to manage these resources and be permitted to use the network resources that need to be specified when you create these other resources. 15:24 Nikita: And what about database administrators? Maria: Database administrators create, monitor, and manage Autonomous Databases. They, too, need to have an Oracle Cloud account or be an Oracle Cloud user. Now, those accounts need to have the necessary permissions in order to create and access databases. They also need to be able to access Autonomous Backups, have permission to access the Autonomous Container Databases inside which these Autonomous Databases will be created, and have all of the necessary permissions to be able to create those databases, as I said. While creating Autonomous Databases, the database administrators will define and gain access to an admin user account inside the database. It's through this account that they will actually get the necessary permissions to be able to create and control database users. 16:24 Lois: How do developers fit into the picture? Maria: Database users and developers who write applications that will use or access an Autonomous Database don't actually need Oracle Cloud accounts. They'll actually be given the network connectivity and authorization information they need to access those databases by the database administrators. 16:45 Lois: Maria, you mentioned the various ways to manage the lifecycle of an autonomous dedicated service. Can you tell us more about that? Maria: You can manage the lifecycle of an autonomous dedicated service through the Cloud UI, the Command Line Interface, our REST APIs, or one of the several language SDKs. The lifecycle operations that you can manage include capacity planning and setup, the provisioning and partitioning of Exadata infrastructure, the provisioning and management of databases, the scaling of CPU, storage, and other resources, the scheduling of updates for the infrastructure, the VMs, and the database, as well as monitoring through event notifications. 17:30 Lois: And how do policies come into play? Maria: OCI allows fine-grained control over resources through the application of policies to groups. These policies are applicable to any member of the group. For Oracle Autonomous Database on dedicated infrastructure, the resources in question are Autonomous Exadata Infrastructure, Autonomous Container Databases, Autonomous Databases, and Autonomous Backups. Lois: Thanks so much, Maria.
That was great information. 18:05 The Oracle University Learning Community is a great place for you to collaborate and learn with experts, peers, and practitioners. Grow your skills, inspire innovation, and celebrate your successes. The more you participate, the more recognition you can earn. All of your activities, from liking a post to answering questions and sharing with others, will help you earn badges and ranks, and be recognized within the community. If you are already an Oracle MyLearn user, go to MyLearn to join the community. You will need to log in first. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 18:44 Nikita: Welcome back! Hi Kamryn, thanks for joining us on the podcast. So, in an Autonomous Database environment where most DBA tasks are automated, what exactly does an application DBA do? Kamryn: While Autonomous Database automates most of the repetitive tasks that DBAs perform, the application DBA will still want to monitor and diagnose databases for applications to maintain the highest performance and the greatest security possible. Tasks the application DBA performs include operations on databases: cloning, movement, monitoring, and creating alerts. When required, the application DBA performs low-level diagnostics for application performance and looks for insights on performance and capacity trends. 19:36 Nikita: I see. And which tools do they use for these tasks? Kamryn: There are several tools at the application DBA's disposal, including Enterprise Manager, Performance Hub, and the OCI Console. For Autonomous Dedicated, all the database operations are exposed through the console UI and available through REST API calls, including provisioning, stop/start, lifecycle operations for dedicated database types, unscheduled on-demand backups and restores, CPU scaling and storage management, providing connectivity information, including wallets, and scheduling updates. 20:17 Lois: So, Kamryn, what tools can DBAs use for deeper exploration? Kamryn: For deeper exploration of the databases themselves, Autonomous Database DBAs can use SQL Developer Web, Performance Hub, and Enterprise Manager. 20:31 Nikita: Let's bring Kay into the conversation. Hi Kay! With Autonomous Database Dedicated, I've heard that customers have more control over patching. Can you tell us a little more about that? Kay: With Autonomous Database Dedicated, customers get to determine the update or patching schedule if they wish. Oracle automatically manages all patching activity, but with the ADB-Dedicated service, customers have the option of customizing the patching schedule. You can specify which month in every quarter you want, which week in that month, which day in that week, and which patching window within that day. You can also dynamically change the scheduled patching date and time for a specific database if the originally scheduled time becomes inconvenient. 21:22 Lois: That's great! So, how often are updates published, and what options do customers have when it comes to applying these updates? Kay: Every quarter, updates are published to the console, and OCI notifications are sent out. ADB-Dedicated allows for greater control over updates by allowing you to choose to apply the current update, or stay with the previous version and skip to the next release. And the latest update can be applied immediately. This provides fleet administrators with the option to maintain test and production systems at different patch levels.
A fleet administrator or a database admin sets up the software version policy at the Autonomous Container Database level during provisioning, although the defaults can be modified at any time for an existing Autonomous Container Database. At the bottom of the Autonomous Exadata Infrastructure provisioning screen, you will see a Configure the Automatic Maintenance section, where you should click Modify Schedule. 22:34 Nikita: What happens if a customer doesn't customize their patching schedule? Kay: If you do not customize a schedule, it behaves like Autonomous Serverless, and Oracle will set a schedule for you. ADB-Dedicated customers get to choose the patching schedule that fits their business. 22:52 Lois: Back to you, Kamryn, I know a bit about Transparent Data Encryption, but I'm curious to learn more. Can you tell me what it does and how it helps protect data? Kamryn: Transparent Data Encryption, TDE, enables you to encrypt sensitive data that you store in tables and tablespaces. After the data is encrypted, it is transparently decrypted for authorized users or applications when they access it. TDE helps protect data stored on media, also called data at rest, in the event that the storage media or data file is stolen. Oracle Database uses authentication, authorization, and auditing mechanisms to secure data in the database, but not in the operating system data files where data is stored. To protect these data files, Oracle Database provides TDE. 23:45 Nikita: That sounds important for data security. So, how does TDE protect data files? Kamryn: TDE encrypts sensitive data stored in data files. To prevent unauthorized decryption, TDE stores the encryption keys in a security module external to the database, called a keystore. You can configure Oracle Key Vault as part of the TDE implementation. This enables you to centrally manage TDE keystores, called TDE wallets, in Oracle Key Vault in your enterprise. For example, you can upload a software keystore to Oracle Key Vault and then make the contents of this keystore available to other TDE-enabled databases. 24:28 Lois: What about Oracle Autonomous Database? How does it handle encryption? Kamryn: Oracle Autonomous Database uses always-on encryption that protects data at rest and in transit. All data stored in Oracle Cloud and all network communication with Oracle Cloud is encrypted by default. Encryption cannot be turned off. By default, Oracle Autonomous Database creates and manages all the master encryption keys used to protect your data, storing them in a secure PKCS#12 keystore on the same Exadata systems where the databases reside. If your company's security policies require it, Oracle Autonomous Database can instead use keys you create and manage. Customers can control key generation and rotation of the keys. 25:19 Kamryn: The Autonomous Databases you create automatically use customer-managed keys when the Autonomous Container Database in which they are created is configured to use customer-managed keys. Thus, those users who create and manage Autonomous Databases do not have to worry about configuring their databases to use customer-managed keys. 25:41 Nikita: Thank you so much, Kamryn, Kay, and Maria for taking the time to give us your insights. To learn more about provisioning Autonomous Database Dedicated resources, head over to mylearn.oracle.com and search for the Oracle Autonomous Database Administration Workshop. Lois: In our next episode, we will discuss Autonomous Database tools.
Until then, this is Lois Houston… Nikita: …and Nikita Abraham signing off. 26:07 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
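For listeners who want to see what the SDK route Maria mentions looks like in practice, here is a minimal sketch using the OCI Python SDK to provision an Autonomous Database inside an existing Autonomous Container Database. All OCIDs, the database name, and the password below are placeholder assumptions, and exact field names may vary by SDK version:

```python
# Minimal sketch: provisioning a dedicated Autonomous Database with the
# OCI Python SDK (pip install oci). OCIDs and password are placeholders.
import oci

# Reads ~/.oci/config: tenancy, user, API key, and region.
config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",            # placeholder
    autonomous_container_database_id="ocid1.acd.oc1..example",  # placeholder
    db_name="APPDB1",                                           # placeholder
    cpu_core_count=2,
    data_storage_size_in_tbs=1,
    admin_password="ChangeMe#12345",                            # placeholder
    is_dedicated=True,  # target dedicated Exadata infrastructure
)

response = db_client.create_autonomous_database(details)
print(response.data.lifecycle_state)  # e.g. PROVISIONING
```

The same lifecycle operations discussed in the episode (stop/start, scaling, cloning) follow the same pattern through the other DatabaseClient calls.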
Among the 72 new AWS releases of the past two weeks, I've picked out 7 for you, plus a book and a blog post. In this episode, we talk about Bedrock, its agents, and its vector knowledge bases. We talk about Java and about chaos (there is no connection between these two news items). IAM Roles Anywhere now supports PKCS#11; what is that? And this week I also spotted a new book on AWS that looks interesting, and a blog post in French that explains how to deploy web apps on AWS Lambda without changing your code.
Here are my 100 interesting things to learn about cryptography: For a 128-bit encryption key, there are 340 billion billion billion billion possible keys. [Calc: 2**128/(1e9**4)] For a 256-bit encryption key, there are 115,792 billion billion billion billion billion billion billion billion possible keys. [Calc: 2**256/(1e9**8)] To crack a 128-bit encryption key with brute force using a cracker running at 1 Teracrack/second will take, on average, 5 million million million years. Tera is 1,000 billion. [Calc: 2**128/1e12/2/60/60/24/365/(1e6**3)] For a 256-bit key, this is 1,835 million million million million million million million million million years. For the brute-force cracking of a 35-bit symmetric key (such as AES), you only need to pay for the energy to boil a teaspoon of water. For a 50-bit key, you just need enough money to pay for boiling the water for a shower. For a 90-bit symmetric key, you would need the energy to boil a sea, and for a 105-bit symmetric key, you need the energy to boil an ocean. For a 128-bit key, there just isn't enough water on the planet to boil. Ref: here. With symmetric key encryption, anything below 72 bits is relatively inexpensive to crack with brute force. One of the first symmetric key encryption methods was the LUCIFER cipher, created by Horst Feistel at IBM. It was further developed into the DES encryption method. Many, at the time of the adoption of DES, felt that its 56-bit key was too small to be secure and that the NSA had a role in limiting it. With a block cipher, we only have to deal with a fixed size of blocks. DES and 3DES use a 64-bit (eight-byte) block size, and AES uses a 128-bit block size (16 bytes). With symmetric key methods, we either have block ciphers, such as DES, AES CBC and AES ECB, or stream ciphers, such as ChaCha20 and RC4. In order to enhance security, AES has a number of rounds where parts of the key are applied. With 128-bit AES we have 10 rounds, and 14 rounds for 256-bit AES. In AES, we use an S-box to scramble the bytes, and it is applied in each round. When decrypting, we use the inverse of the S-box used in the encrypting process. A salt/nonce or Initialisation Vector (IV) is used with an encryption key in order to change the ciphertext for the same given input. Stream ciphers are generally much faster than block ciphers, and can generally be processed in parallel. With the Diffie-Hellman method, Bob creates x and shares g^x (mod p), and Alice creates y and shares g^y (mod p). The shared key is g^{xy} (mod p). Ralph Merkle, the boy genius, submitted a patent on 5 Sept 1979 which outlined the Merkle hash. This is used to create a block hash. Ralph Merkle's PhD supervisor was Martin Hellman (famous as the co-creator of the Diffie-Hellman method). Adi Shamir defined a secret share method, which defines a mathematical equation with the sharing of points (x,y), and where a constant value in the equation is the secret. With Shamir Secret Shares (SSS), for a quadratic equation of y=x²+5x+6, the secret is 6. We can share three points at x=1, x=2 and x=3, which gives y=12, y=20, and y=30, respectively. With the points of (1,12), (2,20), and (3,30), we can recover the value of 6. Adi Shamir broke the Merkle-Hellman knapsack method at a live event at a rump session of a conference. With secret shares, with the highest polynomial power of n, we need n+1 points to come together to regenerate the secret. For example, y=2x+5 needs two points to come together, while y=x²+15x+4 needs three points. A minimal sketch of this recovery appears below.
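To make the threshold idea concrete, here is a small Python sketch using the quadratic y=x²+5x+6 and the three shares from above, recovering the secret with plain Lagrange interpolation over the rationals (a real implementation would work modulo a large prime):

```python
from fractions import Fraction

# Shares from the quadratic y = x^2 + 5x + 6; the secret is the constant 6.
shares = [(1, 12), (2, 20), (3, 30)]

def recover_secret(points):
    """Lagrange-interpolate the polynomial at x = 0 to reveal the secret."""
    secret = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                # Lagrange basis polynomial evaluated at x = 0:
                # (0 - xj) / (xi - xj)
                term *= Fraction(-xj, xi - xj)
        secret += term
    return secret

print(int(recover_secret(shares)))  # 6: any three shares rebuild the quadratic
```

With only two of the three points, the quadratic is underdetermined and the constant term cannot be recovered, which is exactly the n+1 threshold property described above.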
The first usable public key method was RSA, created by Rivest, Shamir and Adleman. It was first published in 1977 and defined in the RSA patent entitled "Cryptographic Communications System and Method". In public key encryption, we use the public key to encrypt data and the private key to decrypt it. In digital signing, we use the private key to sign a hash and create a digital signature, and then the associated public key to verify the signature. Len Adleman, the "A" in the RSA method, thought that the RSA paper would be one of the least significant papers he would ever publish. The RSA method came to Ron Rivest while he slept on a couch. Martin Gardner published information on the RSA method in his Scientific American article. Initially, there were 4,000 requests for the paper (which rose to 7,000), and it took until December 1977 for them to be posted. The security of RSA is based on the multiplication of two random prime numbers (p and q) to give a public modulus (N). The difficulty of RSA is the difficulty of factorizing this modulus. Once factorized, it is easy to decrypt a ciphertext that has been encrypted using the related modulus. In RSA, we have a public key of (e,N) and a private key of (d,N). e is the public exponent and d is the private exponent. The public exponent is normally set at 65,537. The binary value of 65,537 is 10000000000000001; this sparse form makes modular exponentiation efficient. In RSA, the ciphertext is computed from a message M as C=M^e (mod N), and is decrypted with M=C^d (mod N). We compute the private exponent (d) as the inverse of the public exponent (e) modulo PHI, where PHI is (p-1)*(q-1). If we can determine p and q, we can compute PHI. Anything below a 768-bit public modulus is relatively inexpensive to crack for RSA. To crack 2K RSA at the current time, we would need the energy to boil every ocean on the planet. RSA requires padding for security. A popular method has been PKCS#1 v1.5, but this is not provably secure and is susceptible to Bleichenbacher's attack. An improved method is Optimal Asymmetric Encryption Padding (OAEP), defined by Bellare and Rogaway and standardized in PKCS#1 v2. The main entity contained in a digital certificate is the public key of a named entity. This is either an RSA or an Elliptic Curve key. A digital certificate is signed with the private key of a trusted entity, Trent. The public key of Trent is then used to prove the integrity and trust of the associated public key. For an elliptic curve of y²=x³+ax+b (mod p), not every (x,y) point is possible. The total number of points is defined as the order (n). ECC (Elliptic Curve Cryptography) was invented by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms did not take off until 2004. In ECC, the public key is a point on the elliptic curve. For secp256k1, we have a 256-bit private key and a 512-bit (x,y) point for the public key. A "04" in the public key denotes an uncompressed public key, and "02" and "03" denote compressed versions containing only the x-coordinate and whether the y-coordinate is odd or even. Satoshi selected the secp256k1 curve for Bitcoin, which gives the equivalent of 128-bit security. The secp256k1 curve uses the mapping of y²=x³+7 (mod p), and is known as a Short Weierstrass ("Vier-strass") curve. The prime number used with secp256k1 is 2²⁵⁶-2³²-2⁹-2⁸-2⁷-2⁶-2⁴-1. A sketch of deriving a secp256k1 public key from a private key appears below.
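As a small illustration of the curve operations described here, the following Python sketch derives a secp256k1 public key from a private key using textbook affine point arithmetic and double-and-add (fine for learning, but far too slow and side-channel-prone for production; the example private key is an arbitrary toy value):

```python
# Textbook secp256k1: y^2 = x^3 + 7 (mod p). Educational only.
p = 2**256 - 2**32 - 977  # the secp256k1 prime from the item above
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G = (Gx, Gy)

def point_add(P, Q):
    """Affine point addition/doubling; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p      # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Compute k.P by double-and-add over the bits of k."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

priv = 0xC0FFEE                          # arbitrary toy private key
x, y = scalar_mult(priv, G)
prefix = "02" if y % 2 == 0 else "03"    # compressed-key prefix
print(prefix + format(x, "064x"))        # compressed public key
```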
An uncompressed secp256k1 public key has 512 bits and is an (x,y) point on the curve. The point starts with a "04". A compressed secp256k1 public key only stores the x-coordinate value and whether the y-coordinate is odd or even. It starts with a "02" if the y-coordinate is even; otherwise, it starts with a "03". In computing the public key in ECC as a.G, we use the Montgomery multiplication method, which was created by Peter Montgomery in 1985 in a paper entitled "Modular Multiplication without Trial Division". Elliptic curve methods use two basic operations: point addition (P+Q) and point doubling (2.P). These can be combined to provide the scalar operation of a.G. In 1999, Don Johnson and Alfred Menezes published a classic paper on "The Elliptic Curve Digital Signature Algorithm (ECDSA)". It was based on the DSA (Digital Signature Algorithm), created by David W. Kravitz in a patent which was assigned to the US government. ECDSA is a digital signature method that requires a random nonce value (k), which should never be reused or repeated. ECDSA is an elliptic curve conversion of the DSA signature method. Digital signatures are defined in FIPS (Federal Information Processing Standard) 186-5. NIST approved the Rijndael method (led by Joan Daemen and Vincent Rijmen) for the Advanced Encryption Standard (AES). Other contenders included Serpent (led by Ross Anderson), Twofish (led by Bruce Schneier), MARS (led by IBM), and RC6 (led by Ron Rivest). ChaCha20 is a stream cipher that is based on Salsa20 and was developed by Daniel J. Bernstein. MD5 has a 128-bit hash, SHA-1 has 160 bits, and SHA-256 has 256 bits. It is relatively easy to create a hash collision with MD5. Google showed that it was possible to create a signature collision for a document with SHA-1. It is highly unlikely to get a hash collision for SHA-256. In 2015, NIST defined SHA-3 as a standard, built on the Keccak hashing family, which uses a different construction to SHA-2. The Keccak hash family uses a sponge function and was created by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, and standardized by NIST in August 2015 as SHA-3. Hash functions such as MD5, SHA-1 and SHA-256 have a fixed hash length, whereas an eXtendable-Output Function (XOF) produces a bit string that can be of any length. Examples are SHAKE128, SHAKE256, BLAKE2XB and BLAKE2XS. BLAKE3 is the fastest cryptographically secure hashing method and was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. Hashing methods can be slowed down with a number of rounds. These slower hashing methods include Bcrypt, PBKDF2 and scrypt. Argon2 uses methods to try to defeat GPU cracking, such as requiring a given amount of memory and defining the CPU utilization. To speed up the operation of the SHA-3 hash, the team reduced the number of rounds from 24 to 12 (with a security level of around 128 bits); the result is the KangarooTwelve hashing method. Integrated Encryption Scheme (IES) is a hybrid encryption scheme in which Alice gets Bob's public key and generates a symmetric encryption key based on it, and Bob uses his private key to recover that symmetric key. With ECIES, we use elliptic curve methods for the public key part. A MAC (Message Authentication Code) uses a symmetric key to sign a hash, where Bob and Alice share the same secret key. The most popular method is HMAC (hash-based message authentication code); a short HMAC example appears below.
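Since HMAC comes up here, a quick sketch using Python's standard library (the key and message are arbitrary examples):

```python
import hmac
import hashlib

key = b"shared-secret-between-alice-and-bob"   # arbitrary example key
message = b"amount=100&to=bob"

# Alice computes the tag and sends it along with the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Bob recomputes the tag over the received message and compares
# in constant time; any flipped bit in the message changes the tag.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))      # True: message is authentic
```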
The AES block cipher can be converted into a stream cipher using modes such as GCM (Galois Counter Mode) and CCM (counter with cipher block chaining message authentication code; counter with CBC-MAC). A MAC is added to a symmetric key method in order to stop the ciphertext from being attacked by flipping bits. Plain counter (CTR) mode does not have a MAC and is thus susceptible to this attack; GCM and CCM are more secure, as GCM adds a Galois authentication tag and CCM contains a CBC-MAC. With symmetric key encryption, we must remove the encryption keys in the reverse order they were applied. Commutative encryption overcomes this by allowing the keys to be removed in any order. It is estimated that Bitcoin miners consume 17.05 GW of electrical power, or 149.46 TWh per year. A KDF (Key Derivation Function) is used to convert a passphrase or secret into an encryption key. The most popular methods are HKDF, PBKDF2 and Bcrypt. RSA, ECC and Discrete Log methods will all be cracked by quantum computers using Shor's algorithm. Lattice methods represent bit values as polynomial values; for example, 1001 is x³+1 as a polynomial. Taher Elgamal, the sole inventor of the ElGamal encryption method, and Paul Kocher were the creators of SSL, and developed it for the Netscape browser. David Chaum is considered a founder of electronic payments and, in 1983, created ECASH, along with publishing a paper on "Blind signatures for untraceable payments". Satoshi Nakamoto worked with Hal Finney on the first versions of Bitcoin, which were created for a Microsoft Windows environment. Blockchains can either be permissioned (requiring rights to access the blockchain) or permissionless (open to anyone to use). Bitcoin and Ethereum are the two most popular permissionless blockchains, and Hyperledger is the most popular permissioned ledger. In 1992, Eric Hughes, Timothy May, and John Gilmore set up the cypherpunk movement and declared, "We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money." In Bitcoin and Ethereum, a private key (x) is converted to a public key with x.G, where G is the base point on the secp256k1 curve. Ethereum was first conceived in 2013 by Vitalik Buterin, Gavin Wood, Charles Hoskinson, Anthony Di Iorio and Joseph Lubin. It introduced smaller blocks, improved proof of work, and smart contracts. NI-ZKPs involve a prover (Peggy), a verifier (Victor) and a witness (Wendy), and were first defined by Manuel Blum, Paul Feldman, and Silvio Micali in their paper entitled "Non-interactive zero-knowledge and its applications". Popular ZKP methods include ZK-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge) and ZK-STARKs (Zero-Knowledge Scalable Transparent Argument of Knowledge). Bitcoin and Ethereum are pseudo-anonymised, where the sender and recipient of a transaction, and its value, can be traced. Privacy coins enable anonymous transactions. These include Zcash and Monero. In 1992, David Chaum and Torben Pryds Pedersen published "Wallet databases with observers", outlining a method of shielding the details of a monetary transaction. In 1979, Adi Shamir (the "S" in RSA) published a paper on "How to share a secret" in the Communications of the ACM. This supported the splitting of a secret into a number of shares (n), where a threshold value (t) could be defined for the minimum number of shares that need to be brought back together to reveal the secret.
These are known as Shamir Secret Shares (SSS). In 1991, Torben P. Pedersen published a paper entitled "Non-interactive and information-theoretic secure verifiable secret sharing", which is now known as the Pedersen Commitment. This is where we produce our commitment and then show the message that matches the commitment. Distributed Key Generation (DKG) methods allow a private key to be shared by a number of trusted nodes. These nodes can then each sign for a part of the ECDSA signature by producing a partial signature with their share of the key. Not all blockchains use ECDSA. The IOTA blockchain uses the EdDSA signature, which uses Curve25519. This is a more lightweight signature version and has better support for signature aggregation. It uses Twisted Edwards Curves. The core signing method used in EdDSA is based on the Schnorr signature scheme, which was created by Claus Schnorr in 1989. This was patented as a "Method for identifying subscribers and for generating and verifying electronic signatures in a data exchange system". The patent ran out in 2008. Curve25519 uses the prime number of 2²⁵⁵-19 and was created by Daniel J. Bernstein. Peter Shor showed that elliptic curve methods can be broken with quantum computers. To overcome the cracking of the ECDSA signature by quantum computers, NIST is standardising a number of methods. At present, this focuses on CRYSTALS-Dilithium, which is a lattice cryptography method. Bulletproofs were created in 2017 by Stanford's Applied Cryptography Group (ACG). They define a zero-knowledge proof where a value can be checked to see that it lies within a given range. The name "Bulletproofs" reflects that they are short, like a bullet, with bulletproof security assumptions. Homomorphic encryption methods allow for the processing of encrypted values using arithmetic operations. A public key is used to encrypt the data, which can then be processed using an arithmetic circuit on the encrypted data. The owner of the associated private key can then decrypt the result. Some traditional public key methods enable partial homomorphic encryption. RSA and ElGamal allow for multiplication and division, whilst Paillier allows for homomorphic addition and subtraction (see the Paillier sketch after this list). Full homomorphic encryption (FHE) supports all of the arithmetic operations and includes Fan-Vercauteren (FV) and BFV (Brakerski/Fan-Vercauteren) for integer operations and HEAAN (Homomorphic Encryption for Arithmetic of Approximate Numbers) for floating point operations. Most of the full homomorphic encryption methods use lattice cryptography. Some blockchain applications use Barreto-Lynn-Scott (BLS) curves, which are pairing-friendly. They can be used to implement bilinear groups, which are a triplet of groups (G1, G2 and GT), so that we can implement a function e() such that e(g1^x, g2^y)=gT^{xy}. Pairing-based cryptography is used in ZKPs. The main BLS curves used are BLS12-381, BLS12-446, BLS12-455, BLS12-638 and BLS24-477. An accumulator can be used for zero-knowledge proof of knowledge, such as using a BLS curve to add and remove proofs of knowledge. Metamask is one of the most widely used blockchain wallets and can integrate with many blockchains. Most wallets generate the seed using randomness from the operating system; in a browser this can use the Crypto.getRandomValues function, which is supported by most browsers. With a Verifiable Delay Function (VDF), we can prove that a given amount of work has been done by a prover (Peggy).
A verifier (Victor) can then take a proof value from the prover and compute a result which verifies the work has been done, without the verifier needing to redo the work. A Physical Unclonable Function (PUF) is a one-way function which creates a unique signature pattern based on the inherent delays within the wires and transistors. This can be used to link a device to an NFT.
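To make the partial-homomorphic idea mentioned earlier in the list concrete, here is a toy Paillier sketch in Python. The tiny hard-coded primes are for demonstration only (real keys use large random primes); it shows that multiplying two ciphertexts adds the underlying plaintexts:

```python
import math
import random

# Toy Paillier keypair with tiny primes (demonstration only).
p_, q_ = 17, 19
n = p_ * q_                      # public modulus
n2 = n * n
g = n + 1                        # standard generator choice
lam = math.lcm(p_ - 1, q_ - 1)   # private key lambda
mu = pow(lam, -1, n)             # precomputed inverse for decryption

def encrypt(m):
    """c = g^m * r^n mod n^2, with a random r coprime to n."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(20), encrypt(22)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
print(decrypt(c1 * c2 % n2))     # 42
```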
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.02.25.530040v1?rss=1 Authors: Xiao, J., Eleid, L., Buenaventura, T., Boutry, R., Bonnefond, A., Jones, B., Rutter, G. A., Froguel, P., Tomas, A. Abstract: Aim: To determine the kinase activity profiles of human pancreatic beta cells downstream of GLP-1R balanced versus biased agonist stimulations. Materials and methods: This study analysed the kinomic profiles of human EndoC-βH1 cells following vehicle and glucagon-like peptide-1 receptor (GLP-1R) stimulation with the pharmacological agonist exendin-4, as well as the exendin-4-based biased derivatives exendin-phe1 and exendin-asp3, for acute (10-minute) versus sustained (120-minute) responses, using PamChip protein tyrosine kinase (PTK) and serine/threonine kinase (STK) assays. The raw data were filtered and normalised using BioNavigator. The kinase analyses were conducted with R, mainly including kinase-substrate mapping and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. Results: The present analysis reveals that kinomic responses are distinct for acute versus sustained GLP-1R agonist (GLP-1RA) exposure, with individual responses associated with agonists presenting specific bias profiles. According to pathway analysis, several kinases, including JNKs, PKCs, INSR and LKB1, are important GLP-1R signalling mediators, constituting potential targets for further research on biased GLP-1R downstream signalling. Conclusion: Results from this study suggest that the differentially biased exendin-phe1 and exendin-asp3 can modulate distinct kinase interaction networks. Further understanding of these mechanisms will have important implications for the selection of appropriate anti-T2D therapies with optimised downstream kinomic profiles. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
On The Cloud Pod this week, the team is running at half-duplex without Peter and Ryan. Plus: Cloudflare R2 is here, Facebook died for a day, and AWS releases the Cloud Control API. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located. This week's highlights
PHP Internals News: Episode 60: OpenSSL CMS Support London, UK Thursday, July 2nd 2020, 09:23 BST In this episode of "PHP Internals News" I chat with Eliot Lear (Twitter, GitHub, Website) about OpenSSL CMS support, which he has contributed to PHP. The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news Transcript Derick Rethans 0:16 Hi, I'm Derick, and this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 60. Today I'm talking with Eliot Lear about adding OpenSSL CMS support to PHP. Hello Eliot, would you please introduce yourself? Eliot Lear 0:34 Hi Derick, it's great to be here. My name is Eliot Lear, I'm a principal engineer for Cisco Systems working on IoT security. Derick Rethans 0:41 I saw somewhere on the internet, Wikipedia I believe, that you also did some RFCs, not PHP RFCs, but internet RFCs. Eliot Lear 0:49 That's correct. I have a few out there. I'm a jack of all trades but master of none. Derick Rethans 0:53 The one that piqued my interest was the one for the timezone database, because I added timezone support to PHP a long, long time ago. Eliot Lear 1:01 That's right, there's a whole funny story about that RFC, we will have to save it for another time, but there are a lot of heroes out there in the volunteer world who keep that database up to date. Currently they're corralled and coordinated by a lovely gentleman by the name of Paul Eggert, and if you're not a member of that community, it's really a wonderful contribution to make, and they need people all around the world to send in information. But I guess that's not why we're here today. Derick Rethans 1:29 But I'm happy to chat about that at some other point in the future. Now today we're talking about CMS support in OpenSSL, and the first time I saw CMS I thought of a content management system, but I don't think that's what it means here. Eliot Lear 1:41 No, it stands for cryptographic message syntax, and it is the follow-on to earlier work which people will know as PKCS#7. So it's a way in which one can transmit and receive encrypted information or just signed information. Derick Rethans 1:58 How do CMS and PKCS#7 differ from each other? Eliot Lear 2:03 Actually, not too many differences. Externally, the envelope or the structure of the message is slightly better formed, and the people who worked on that at the Internet Engineering Task Force were essentially just making incremental improvements to make sure that there was good interoperability, good support for email, encrypted email, and signed email, and for other purposes as well. So they're relatively modest but important improvements over PKCS#7. Derick Rethans 2:39 How old are these two standards? Eliot Lear 2:42 Goodness. I'm not sure actually how old PKCS#7 is, but CMS dates back, gosh, probably a decade or so, I'd have to go look. I'm sorry if I don't have the answer to that one. Derick Rethans 2:56 A ballpark figure works fine for me. Why would you want to use CMS over the older PKCS#7? Eliot Lear 3:02 You know, truthfully, I'm not a cryptographer, so the reason I used it was because it was the latest and greatest thing when you're doing this sort of work. I'm an interdisciplinary person, so what I do is I go find the experts and they tell me what to use.
And believe it or not, I went and found the person who's the expert on cryptographic signatures, which is what I need, and I said: What should I use? He said: You should use CMS, and so that's what I did. But I ran into some trouble, which is that some of the tooling doesn't support CMS. So, in particular, PHP didn't support CMS. So that's why I got involved in the PHP project. Derick Rethans 3:40 You are a new contributor to the PHP project. What did you think of its interactions? Eliot Lear 3:45 I had a wonderful time doing the development. There was a fair amount of coding involved, and one has to understand that the underlying code here is OpenSSL, and OpenSSL's documentation for some of its interfaces could stand a little bit of improvement. I needed to do a fair amount of work and I needed a fair amount of review, so I got a lot of support from Jakub in particular, who looks after the OpenSSL code base as one of the maintainers. I really enjoyed the CI/CD integration, which allowed me to check the numerous environments that PHP runs on. I really enjoyed the community review. And even though I didn't really have to do one in my case, I did do an RFC as part of the PHP development process, which essentially forced me to write really good documentation, or at least I hope it's really good, for all of the caller interfaces that I defined. So it was a really enjoyable experience. I really liked working with the team. Derick Rethans 4:47 That's good to hear. Although an RFC wasn't particularly necessary here, I always find writing down the requirements that I have for my own software first, even though this doesn't get publicized and nobody's going to review it, very useful to just clear my head and see what's going on. Eliot Lear 5:06 Yeah, I think that's a good approach. Derick Rethans 5:07 During the review, was there a lot of feedback where you weren't quite sure, or what was the best feedback that you got during this process? Eliot Lear 5:15 The biggest issue that we had was how to handle streaming. We have some code in there now for streaming, but it's unlikely to get really heavily exercised in the way that the interfaces are defined right now. It's essentially a files-in/files-out interface, which mirrors the PKCS#7 interface. One of the future activities that I would like to take on, if I can find a little bit more time, is to move away from the files-in/files-out interface and use an in-memory structure or in-memory interface instead, so that it can actually take advantage of streaming and can be more memory efficient over time. Derick Rethans 5:56 When you say file, you actually provide a file name to the functions? Eliot Lear 6:00 That's right. Depending on which of the interfaces you're using, there's an encrypt call, there's a decrypt call, there's a sign call, and there's a validate, or verify, call, and each of them has a slightly different interface. But if you're encrypting, you need to have the destination that you're encrypting to. These are all public key, PKI-based approaches, so you have to have the certificates of the destination that you're sending to. If you're verifying, you need, I'm sorry, you need to have the public key chain, and if you're decrypting, you need to have the private key to do all this.
So they're all filenames that are passed, and it's a bit of a limitation of the original interface, in that you probably don't really want to be passing file names to most of your functions; you'd rather be passing objects that are a bit better structured than that. Derick Rethans 6:53 Is the underlying OpenSSL interface similar, or does that allow for streaming in general? Eliot Lear 6:59 The C API allows for streaming and such. The command line interface, it doesn't seem to me that they do any particular things with streaming. If you look at the cryptographic interface that we did for CMS, mostly it is an attempt to provide the capability that you would otherwise have using the OpenSSL command line interface, and I think the nice thing here is that we can evolve from that point. Derick Rethans 7:26 And the improvements wouldn't only be implemented for the CMS mechanism, but also for PKCS#7, as well as others that are also available? Eliot Lear 7:35 Yes. Another area that I would like to look at, and I'm not sure how easy it will be, we didn't try it this time, was to try and combine the code bases, because they are so close, and be a little bit more code efficient. But there are just slight enough differences in the caller interfaces between PKCS#7 and CMS that I'm not sure I could get away with using void functions for everything I have. I might have to have a lot of switches, or conditionals, in the code. But what I am interested in doing for both sets of code is, again, providing new interfaces where, instead of passing file names, you're passing memory structures of some form that can be used to stream. That's the future. Derick Rethans 8:22 I've been writing quite a bit of Go code in the last couple of months, and that interface is exactly the same: you provide file names to it, which I find kind of annoying, because I'm going to have to distribute these binaries at some point, and I don't really want any other dependencies in the form of files, so I need to figure out a way how to do that without also providing those key files at some point. Eliot Lear 8:43 Indeed, that's an issue. And for those of us who are web developers, well, I did this because I was doing some web development: a lot of the stuff that I want to do, I just want to do in memory and then pass right back to the client, and I don't really want to have to go to the file system. And right now, I'll have to take an extra step to go to the file system, and that's all right, it's not a big deal, but it'll be a little bit more elegant when I get away from that. We'll do that, you know, at an appropriate time. Derick Rethans 9:11 Yes, that sounds lovely. I'm not an expert in cryptography either. I saw that the RFC mentions X.509. How does it tie in with CMS and PKCS#7? Eliot Lear 9:21 X.509 is essentially a certificate standard. In fact, that's really what it is. A certificate essentially has a bunch of attributes, with a subject being one of those attributes, and a signature on top of the whole structure. And the signature comes from a signer, and the signer is essentially asserting all of these attributes on behalf of whoever sent the request. X.509 certificates are, for example, the core of our web authentication infrastructure.
When you go to the bank online, it uses an X.509 certificate to prove to you that it is the bank that you intended to visit; that's the basis of this. CMS and PKCS#7 are structures that allow the X.509 standard to be serialized, so there are the Distinguished Encoding Rules that are used underneath PKCS#7 and CMS. CMS was essentially designed, at least in part, for mail transmission. So how is it that you indicate the certificate, the subject name, the content of the message? All of this information had to be formally described, and it had to be done in a way that is scalable. The nice thing about X.509, as compared to, say, just using naked public keys, is that with naked public keys the verifier or the recipient has to have each individual public key, whereas X.509 uses the certificate hierarchy such that you only need to have the top of the chain, if you will, in order to validate a certificate. So X.509 scales amazingly well; we see that success all throughout the web. And that's what CMS and PKCS#7 help support. Derick Rethans 11:24 Like I said, I've never really done enough research into this, but I think many web developers should really know how this works, because it comes back not only with mail, but also with HTTPS. Eliot Lear 11:35 That's another part of the code, right. CMS isn't directly used for supporting TLS connections; there's a whole set of code inside of PHP for that. Derick Rethans 11:44 Would you have anything else to add? Eliot Lear 11:46 I would say a couple of things. The basis of this work was that I was attempting to create signatures for something called manufacturer usage descriptions. The reason I got involved with PHP is that I'm doing tooling that supports an IoT protection project, and a manufacturer usage description essentially describes what an IoT device needs in terms of network access. The purpose of using PHP and adding the code that I added was so that those descriptions could be signed, and that's why Cisco, my employer, supported my activity. Now, Cisco loves giving back to the community. This was one way we could do so, and it's something I'm very proud of when it comes to our company. So we're very happy to participate with the PHP project. I really enjoyed working with the team. Derick Rethans 12:33 Good to hear. I'm looking forward to some other API improvements, because I agree that the interfaces the OpenSSL extension has aren't always the easiest to use, and I think it's important that encryption is easy to use, because then more people will use it. Eliot Lear 12:49 I have to say, in my opinion, the encryption interfaces that we have today are still relatively immature. And not just CMS, the code that I wrote, which is really fresh, it just got committed, but the whole category of interfaces is something that will evolve over time. It's important that it do so, because the threats are evolving over time and people need to be able to use these interfaces. We can't all be cryptographic experts; I'm not. I just use the code, though I needed to write some in order to use it in my case. But as we go on, I think we'll enjoy richer and easier-to-use interfaces that normal developers can use without being experts.
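To make the certificate-chain idea concrete, here is a minimal PHP sketch of parsing a certificate's attributes and verifying its signature against a CA's public key, the "top of the chain" check described above. This is hedged: it assumes PHP 7.4 or later with the OpenSSL extension, and both PEM file paths are hypothetical placeholders rather than anything from the episode.

<?php
// Hedged sketch: inspect an X.509 certificate and check that it was signed
// by a given CA. Assumes PHP >= 7.4 with OpenSSL; the two PEM paths are
// hypothetical placeholders.
$cert = openssl_x509_read('file://bank-cert.pem');  // the leaf certificate
$ca   = openssl_x509_read('file://root-ca.pem');    // the top of the chain

// A certificate is a bag of attributes plus a signature over them.
$info = openssl_x509_parse($cert);
echo ($info['subject']['CN'] ?? '(no CN)'), PHP_EOL; // who the cert names
echo ($info['issuer']['CN'] ?? '(no CN)'), PHP_EOL;  // who signed it

// Verify the signature with the CA's public key:
// 1 = valid, 0 = invalid, -1 = error.
var_dump(openssl_x509_verify($cert, openssl_pkey_get_public($ca)));

The scaling property is visible here: the verifier only ever needs the CA certificate at the top of the chain, not a copy of every leaf certificate it might encounter.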
Derick Rethans 13:38 PHP has been going that way already a little bit, because we now have a simple randomness interface and a simple way of doing hashes and verifying hashes, to make these things a lot easier. We saw that lots of people were implementing their own ways in PHP code and pretty much messing it up, because, as you say, not everybody's a cryptographer. Eliot Lear 13:56 That's right. And that's a really good thing that PHP did, because, as you pointed out, it eliminates all the people going onto the net looking for the little snippet of code that they're going to include in PHP, whether that snippet is correct or not, which is a big issue. Derick Rethans 14:11 Absolutely. And cryptography is not something that you want to get wrong. Eliot Lear 14:15 That's right, because for every line of code that you've written in this space, there's going to be somebody who's going to want to attack it, maybe several. Derick Rethans 14:23 Absolutely. Thank you, Eliot, for taking the time this morning to talk to me about CMS support. Eliot Lear 14:28 It's been my pleasure, Derick, and thanks for having me on. And again, it was really enjoyable to work with the PHP team, and I'm looking forward to doing more. Derick Rethans 14:38 Thanks for listening to this instalment of PHP internals news, the weekly podcast dedicated to demystifying the development of the PHP language. I maintain a Patreon account for supporters of this podcast, as well as the Xdebug debugging tool. You can sign up for Patreon at https://drck.me/patreon. If you have comments or suggestions, feel free to email them to derick@phpinternals.news. Thank you for listening, and I'll see you next week. Show Notes RFC: Add CMS Support Credits Music: Chipper Doodle v2 — Kevin MacLeod (incompetech.com) — Creative Commons: By Attribution 3.0
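The show notes above link the Add CMS Support RFC; the sketch below illustrates the files-in/files-out shape the interview keeps returning to, signing a message and then verifying it. It is hedged: it assumes PHP 8.0 or later with the OpenSSL extension, every filename is a hypothetical placeholder, and the exact parameter order should be checked against the manual.

<?php
// Hedged sketch of the files-in/files-out CMS interface: sign, then verify.
// Assumes PHP >= 8.0 with OpenSSL; all paths are hypothetical placeholders.

// Sign message.txt, writing the S/MIME-encoded CMS structure to signed.cms.
$signed = openssl_cms_sign(
    'message.txt',            // input: the content to sign
    'signed.cms',             // output: the signed structure
    'file://signer-cert.pem', // signer certificate
    'file://signer-key.pem',  // signer private key
    null                      // no additional S/MIME headers
);

// Verify against a CA bundle; the interface mirrors openssl_pkcs7_verify().
$verified = openssl_cms_verify(
    'signed.cms',             // input: the signed structure
    0,                        // default flags
    'signers.pem',            // output: the signers' certificates
    ['ca-bundle.pem'],        // trusted CA information
    null,                     // untrusted certificates file (unused here)
    'content.txt'             // output: the extracted payload
);
var_dump($signed, $verified);

Every argument is a filename, which is exactly the limitation, and the motivation for a future in-memory interface, that the episode discusses.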
Encryption is the process of scrambling data to protect personal files, secure communication, hide identities, and much more. In this video we will learn about the different types of encryption: we will talk about symmetric encryption and asymmetric encryption, what each is used for, and the pros and cons of each one. Chapters: symmetric encryption; asymmetric encryption; pros and cons of symmetric vs. asymmetric. Symmetric encryption: might as well just call it classic encryption, I would argue, and I think this is the first encryption known to us. I have something I don't want anyone to see, so I use a key to lock it, and only someone holding that same key can open it. The same key you use to encrypt is the same key you use to decrypt. Examples of popular symmetric-key algorithms include AES (Rijndael), Twofish, Serpent, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, SAFER+/++ (Bluetooth), and IDEA. Asymmetric encryption: we had symmetric encryption for a long time, then the internet and networking came along and we needed to encrypt messages going back and forth. We said, cool, let's use AES. Then we said, wait a second... the other computer doesn't actually have my key, so how do we share the key securely in the first place? Asymmetric encryption, also called public-key encryption, solves this. Examples include Rivest–Shamir–Adleman (RSA, 1977), the Diffie–Hellman key exchange protocol, DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm, ElGamal, various elliptic curve techniques, various password-authenticated key agreement techniques, the Paillier cryptosystem, the RSA encryption algorithm (PKCS#1), the Cramer–Shoup cryptosystem, and the YAK authenticated key agreement protocol. --- Send in a voice message: https://anchor.fm/hnasr/message
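As a hedged illustration of the two families described above, the PHP sketch below puts symmetric AES (one secret key encrypts and decrypts) next to asymmetric RSA (the public key encrypts, only the private key decrypts). It assumes PHP with the OpenSSL extension; the messages and key sizes are illustrative only.

<?php
// Symmetric: one shared key, here AES-256-CBC with a random IV.
$key    = random_bytes(32); // 256-bit secret key
$iv     = random_bytes(openssl_cipher_iv_length('aes-256-cbc'));
$cipher = openssl_encrypt('hello', 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
$plain  = openssl_decrypt($cipher, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);

// Asymmetric: generate an RSA key pair; encrypt with the public half,
// decrypt with the private half.
$pair = openssl_pkey_new([
    'private_key_type' => OPENSSL_KEYTYPE_RSA,
    'private_key_bits' => 2048,
]);
$pub = openssl_pkey_get_details($pair)['key']; // PEM-encoded public key

openssl_public_encrypt('hello', $encrypted, $pub);
openssl_private_decrypt($encrypted, $decrypted, $pair);

var_dump($plain === 'hello', $decrypted === 'hello'); // bool(true) bool(true)

This is also why the two are usually combined in practice: asymmetric encryption solves the key-sharing problem by protecting a symmetric key in transit, and the bulk data then travels under fast AES.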
This week on BSDNow, we have a very special guest joining us to tell us a tale of the early days in BSD history. That plus some new OpenSSH goodness, shell scripting utilities and much more. Stay tuned for your place to B...SD! This episode was brought to you by Headlines Call For Testing: OpenSSH 7.4 (http://marc.info/?l=openssh-unix-dev&m=148167688911316&w=2) Getting ready to head into the holidays for the end of 2016 means some of us will have spare time on our hands. What a perfect time to get some call-for-testing work done! Damien Miller has issued a public CFT for the upcoming OpenSSH 7.4 release, which, considering how much we all rely on SSH, I would expect will get some eager volunteers for testing. What are some of the potential breakers? “* This release removes server support for the SSH v.1 protocol. * ssh(1): Remove 3des-cbc from the client's default proposal. 64-bit block ciphers are not safe in 2016 and we don't want to wait until attacks like SWEET32 are extended to SSH. As 3des-cbc was the only mandatory cipher in the SSH RFCs, this may cause problems connecting to older devices using the default configuration, but it's highly likely that such devices already need explicit configuration for key exchange and hostkey algorithms anyway. * sshd(8): Remove support for pre-authentication compression. Doing compression early in the protocol probably seemed reasonable in the 1990s, but today it's clearly a bad idea in terms of both cryptography (cf. multiple compression oracle attacks in TLS) and attack surface. Pre-auth compression support has been disabled by default for >10 years. Support remains in the client. * ssh-agent will refuse to load PKCS#11 modules outside a whitelist of trusted paths by default. The path whitelist may be specified at run-time. * sshd(8): When a forced-command appears in both a certificate and an authorized keys/principals command= restriction, sshd will now refuse to accept the certificate unless they are identical. The previous (documented) behaviour of having the certificate forced-command override the other could be a bit confusing and error-prone. * sshd(8): Remove the UseLogin configuration directive and support for having /bin/login manage login sessions.” What about new features? 7.4 has some of those to wake you up also: “* ssh(1): Add a proxy multiplexing mode to ssh(1) inspired by the version in PuTTY by Simon Tatham. This allows a multiplexing client to communicate with the master process using a subset of the SSH packet and channels protocol over a Unix-domain socket, with the main process acting as a proxy that translates channel IDs, etc. This allows multiplexing mode to run on systems that lack file-descriptor passing (used by current multiplexing code) and potentially, in conjunction with Unix-domain socket forwarding, with the client and multiplexing master process on different machines. Multiplexing proxy mode may be invoked using "ssh -O proxy ..." * sshd(8): Add an sshd_config DisableForwarding option that disables X11, agent, TCP, tunnel and Unix domain socket forwarding, as well as anything else we might implement in the future. Like the 'restrict' authorized_keys flag, this is intended to be a simple and future-proof way of restricting an account. * sshd(8), ssh(1): Support the "curve25519-sha256" key exchange method. This is identical to the currently-supported method named "curve25519-sha256@libssh.org".
* sshd(8): Improve handling of SIGHUP by checking to see if sshd is already daemonised at startup and skipping the call to daemon(3) if it is. This ensures that a SIGHUP restart of sshd(8) will retain the same process-ID as the initial execution. sshd(8) will also now unlink the PidFile prior to SIGHUP restart and re-create it after a successful restart, rather than leaving a stale file in the case of a configuration error. bz#2641 * sshd(8): Allow ClientAliveInterval and ClientAliveCountMax directives to appear in sshd_config Match blocks. * sshd(8): Add %-escapes to AuthorizedPrincipalsCommand to match those supported by AuthorizedKeysCommand (key, key type, fingerprint, etc.) and a few more to provide access to the contents of the certificate being offered. * Added regression tests for string matching, address matching and string sanitisation functions. * Improved the key exchange fuzzer harness.” Get those tests done and be sure to send feedback, both positive and negative. *** How My Printer Caused Excessive Syscalls & UDP Traffic (https://zinascii.com/2014/how-my-printer-caused-excessive-syscalls.html) “3,000 syscalls a second, on an idle machine? That doesn't seem right. I just booted this machine. The only processes running are those required to boot the SmartOS Global Zone, which is minimal.” This is a story from 2014, about debugging a machine that was being slowed down by excessive syscalls and UDP traffic. It is also an excellent walkthrough of the basics of DTrace. “Well, at least I have DTrace. I can use this one-liner to figure out what syscalls are being made across the entire system.” dtrace -n 'syscall:::entry { @[probefunc,probename] = count(); }' “Wow! That is a lot of lwp_sigmask calls. Now that I know what is being called, it's time to find out who is doing the calling. I'll use another one-liner to show me the most common user stacks invoking lwp_sigmask.” dtrace -n 'syscall::lwp_sigmask:entry { @[ustack()] = count(); }' “Okay, so this mdnsd code is causing all the trouble. What is the distribution of syscalls for the mdnsd program?” dtrace -n 'syscall:::entry /execname == "mdnsd"/ { @[probefunc] = count(); } tick-1s { exit(0); }' “Lots of signal masking and polling. What the hell! Why is it doing this? What is mdnsd anyways? Is there a man page? Googling for mdns reveals that it is used for resolving host names in small networks, like my home network. It uses UDP, and requires zero configuration. Nothing obvious to explain why it's flipping out. I feel helpless. I turn to the only thing I can trust, the code.” “Woah boy, this is some messy looking code. This would not pass illumos cstyle checks. Turns out this is code from Darwin—the kernel of OSX.” “Hmmm…an idea pops into my computer animal brain. I wonder…I wonder if my MacBook is also experiencing abnormal syscall rates? Nooo, that can't be it. Why would both my SmartOS server and MacBook both have the same problem? There is no good technical reason to link these two. But, then again, I'm dealing with computers here, and I've seen a lot of strange things over the years—I switch to my laptop.” sudo dtrace -n 'syscall::: { @[execname] = count(); } tick-1s { exit(0); }' Same thing, except mdns is handled by a daemon called discoveryd on OS X. “I ask my friend Steve Vinoski to run the same DTrace one-liner on his OSX machines. He has both Yosemite and the older Mountain Lion. But, to my dismay, neither of his machines are exhibiting high syscall rates. My search continues.” “Not sure what to do next, I open the OSX Activity Monitor.
In desperation I click on the Network tab.” “HOLE-E-SHIT! Two-Hundred-and-Seventy Million packets received by discoveryd. Obviously, I need to stop looking at code and start looking at my network. I hop back onto my SmartOS machine and check network interface statistics.” “Whatever is causing all this, it is sending about 200 packets a second. At this point, the only thing left to do is actually inspect some of these incoming packets. I run snoop(1M) to collect events on the e1000g0 interface, stopping at about 600 events. Then I view the first 15.” “A constant stream of mDNS packets arriving from IP 10.0.1.8. I know that this IP is not any of my computers. The only devices left are my iPhone, AppleTV, and Canon printer. Wait a minute! The printer! Two days earlier I heard some beeping noises…” “I own a Canon PIXMA MG6120 printer. It has a touch interface with a small LCD at the top, used to set various options. Since it sits next to my desk I sometimes lay things on top of it like a book or maybe a plate after I'm done eating. If I lay things in the wrong place it will activate the touch interface and cause repeated pressing. Each press makes a beeping noise. If the object lays there long enough the printer locks up and I have to reboot it. Just such events occurred two days earlier.” “I fire up dladm again to monitor incoming packets in realtime. Then I turn to the printer. I move all the crap off of it: two books, an empty plate, and the title for my Suzuki SV650 that I've been meaning to sell for the last year. I try to use the touch screen on top of the printer. It's locked up, as expected. I cut power to the printer and whip my head back to my terminal.” No more packet storm. “Giddy, I run DTrace again to count syscalls.” “I'm not sure whether to laugh or cry. I laugh, because, LOL computers. There's some new dumb shit you deal with every day; better to roll with the punches and laugh. You live longer that way. At least I got to flex my DTrace muscles a bit. In fact, I felt a bit like Brendan Gregg when he was debugging why OSX was dropping keystrokes.” “I didn't bother to root cause why my printer turned into a UDP machine gun. I don't intend to either. I have better things to do, and if rebooting solves the problem then I'm happy. Besides, I had to get back to what I was trying to do six hours before I started debugging this damn thing.” There you go. The Internet of Terror has already been on your LAN for years. Making Getaddrinfo Concurrent in Python on Mac OS and BSD (https://emptysqua.re/blog/getaddrinfo-cpython-mac-and-bsd/) We have a very fun blog post today to pass along, originally authored by A. Jesse Jiryu Davis: specifically, the tale of one man's quest to make getaddrinfo concurrent in Python on Mac OS and BSD. To give you a small taste of this tale, let us pass along just the introduction: “Tell us about the time you made DNS resolution concurrent in Python on Mac and BSD. No, no, you do not want to hear that story, my friends. It is nothing but old lore and #ifdefs. But you made Python more scalable. The saga of Steve Jobs was sung to you by a mysterious wizard with a fanciful nickname! Tell us! Gather round, then. I will tell you how I unearthed a lost secret, unbound Python from old shackles, and banished an ancient and horrible Mutex Troll. Let us begin at the beginning.” Is your interest piqued? It should be.
I'm not sure we could do this blog post justice trying to read it aloud here, but we definitely recommend it if you want to see how he managed to get this bit of code working cross-platform (and it's highly entertaining as well). “A long time ago, in the 1980s, a coven of Berkeley sorcerers crafted an operating system. They named it after themselves: the Berkeley Software Distribution, or BSD. For generations they nurtured it, growing it and adding features. One night, they conjured a powerful function that could resolve hostnames to IPv4 or IPv6 addresses. It was called getaddrinfo. The function was mighty, but in years to come it would grow dangerous, for the sorcerers had not made getaddrinfo thread-safe.” “As ages passed, BSD spawned many offspring. There were FreeBSD, OpenBSD, NetBSD, and in time, Mac OS X. Each made its copy of getaddrinfo thread safe, at different times and in different ways. Some operating systems retained scribes who recorded these events in the annals. Some did not.” The story continues as our hero battles the Mutex Troll and quests for ancient knowledge: “Apple engineers are not like you and me — they are a shy and secretive folk. They publish only what code they must from Darwin. Their comings and goings are recorded in no bug tracker, their works in no changelog. To learn their secrets, one must delve deep.” “There is a tiny coven of NYC BSD users who meet at the tavern called Stone Creek, near my dwelling. They are aged and fierce, but I made the Sign of the Trident and supplicated them humbly for advice, and they were kindly to me.” Spoiler: “Without a word, the mercenary troll shouldered its axe and trudged off in search of other patrons on other platforms. Never again would it hold hostage the worthy smiths forging Python code on BSD.” *** Using release(7) to create FreeBSD images for OpenStack (https://diegocasati.com/2016/12/13/using-release7-to-create-freebsd-images-for-openstack-yes-you-can-do-it/) Following a recent episode where we covered a walkthrough on how to create FreeBSD guest OpenStack images, we wondered if it would be possible to integrate this process into the FreeBSD release(7) process, so the images could be generated consistently and automatically. Being the awesome audience that you are, one of you responded by doing exactly that. “During a recent BSDNow podcast, Allan and Kris mentioned that it would be nice to have a tutorial on how to create a FreeBSD image for OpenStack using the official release(7) tools. With that, it came to me that: #1 I do have access to an OpenStack environment and #2 I am interested in having FreeBSD as a guest image in my environment. Looks like I was up for the challenge.” “Previously, I've had success running FreeBSD 11.0-RELEASE on OpenStack, but more could/should be done. For instance, as suggested by Allan, wouldn't it be nice to deploy the latest code from FreeBSD? Running -STABLE or even -CURRENT? Yes, it would. Also, wouldn't it be nice to customize these images for a specific need? I'd say ‘Yes' for that as well.” “After some research I found that the current openstack.conf file, located at /usr/src/release/tools/, could use some extra tweaks to get where I wanted. I've created and attached that to a bugzilla on the same topic.
You can read about that here (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213396).” Steps: fetch the FreeBSD source code and extract it under /usr/src. Once the code is in place, follow the regular process of build(7) and perform a make buildworld buildkernel. Then change into the release directory (/usr/src/release) and perform a make cloudware-release WITH_CLOUDWARE=yes CLOUDWARE=OPENSTACK VMIMAGE=2G. “That's it! This will generate a qcow2 image of 1.4G in size and a raw image of 2G. The entire process uses the release(7) toolchain to generate the image and should work with newer versions of FreeBSD.” + The patch has already been committed to FreeBSD (https://svnweb.freebsd.org/base?view=revision&revision=310047) Interview - Rod Grimes - rgrimes@freebsd.org (mailto:rgrimes@freebsd.org) Want to help fund the development of GPU Passthru? Visit bhyve.org (http://bhyve.org/) *** News Roundup Configuring the FreeBSD automounter (http://blog.khubla.com/freebsd/configuring-the-freebsd-automounter) Ever had to configure the FreeBSD auto-mounting daemon? Today we have a blog post that walks us through a few of the configuration knobs you have at your disposal. First up, Tom shows us his /etc/fstab file and the various UFS partitions he has set up with the ‘noauto' flag so they are not mounted at system boot. His amd.conf file is pretty basic, with just options enabled to restart mounts and unmount on exit. Where most users will likely want to pay attention is in the crafting of an amd.map file. Within this file we have the various command-foo which performs mounts and unmounts of targeted disks / file systems on demand. Pay special attention to all the special characters, since those all matter and a stray or missing ; could be a source of failure. Lastly, a few knobs in rc.conf will enable the various services, and a reboot should confirm the functionality. *** l2k16 hackathon report: LibreSSL manuals now in mdoc(7) (http://undeadly.org/cgi?action=article&sid=20161114174451) Hackathon report by Ingo Schwarze: “Back in the spring two years ago, Kristaps Dzonsons started the pod2mdoc(1) conversion utility, and less than a month later, the LibreSSL project began. During the general summer hackathon in the same year, g2k14, Anthony Bentley started using pod2mdoc(1) for converting LibreSSL manuals to mdoc(7).” “Back then, doing so still was a pain, because pod2mdoc(1) was still full of bugs and had gaping holes in functionality. For example, Anthony was forced to basically translate the SYNOPSIS sections by hand, and to fix up .Fn and .Xr in the body by hand as well. All the same, he speedily finished all of libssl, and in the autumn of the same year, he mustered the courage to commit his work.” “Near the end of the following winter, i improved the pod2mdoc(1) tool to actually become convenient in practice and started work on libcrypto, converting about 50 out of the about 190 manuals. Max Fillinger also helped a bit, converting a handful of pages, but i fear i tarried too much checking and committing his work, so he quickly gave up on the task. After that, almost nothing happened for a full year.” “Now i was finally fed up with the messy situation and decided to put an end to it. So i went to Toulouse and finished the conversion of the remaining 130 manual pages in libcrypto, such that you can now view the documentation of all functions” Interactive Terminal Utility: smenu (https://github.com/p-gen/smenu) Ok, I've made no secret of my love for shell scripting.
Well today we have a new (somewhat new to us) tool to bring your way. Have you ever needed to deal with large lists of data, perhaps as the result of a long, specially crafted pipe? What if you need to select a specific value from a range and then continue processing? Enter ‘smenu', which can help make your scripting life easier. “smenu is a selection filter just like sed is an editing filter. This simple tool reads words from the standard input, presents them in a cool interactive window after the current line on the terminal and writes the selected word, if any, on the standard output. After having unsuccessfully searched the NET for what I wanted, I decided to try to write my own. I have tried hard to make its usage as simple as possible. It should work, even when using an old vt100 terminal and is UTF-8 aware.“ What this means is that in your interactive scripts you can much more easily present the user with a cursor-driven menu to select from a range of possible choices (without needing to craft a bunch of dialog flags). Take a look, and hopefully you'll find creative uses for it in your shell scripts in the future. *** Ubuntu still isn't free software (http://mjg59.dreamwidth.org/45939.html) “Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries. This does not affect your rights under any open source licence applicable to any of the components of Ubuntu. If you need us to approve, certify or provide modified versions for redistribution you will require a licence agreement from Canonical, for which you may be required to pay. For further information, please contact us” Mark Shuttleworth just blogged (http://insights.ubuntu.com/2016/12/01/taking-a-stand-against-unstable-risky-unofficial-ubuntu-images/) about their stance against unofficial Ubuntu images. “The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.” “The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu. [1]: And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise” “If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software.
Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.” “Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.” Beastie Bits OPNsense 16.7.10 released (https://opnsense.org/opnsense-16-7-10-released/) OpenBSD Foundation Welcomes First Iridium Donor: Smartisan (http://undeadly.org/cgi?action=article&sid=20161123193708&mode=expanded&count=8) Jan Koum donates $500,000 to FreeBSD (https://www.freebsdfoundation.org/blog/foundation-announces-new-uranium-donor/) In Soviet Russia, BSD makes you (https://en.wikipedia.org/wiki/DEMOS) Feedback/Questions Jason - Value (http://pastebin.com/gRN4Lzy8) Hamza - Shell Scripting (http://pastebin.com/GZYjRmSR) Blog link (http://aikchar.me/blog/unix-shell-programming-lessons-learned.html) Dave - Migrating to FreeBSD (http://pastebin.com/hEBu3Drp) Dan - Which BSD? (http://pastebin.com/1HpKqCSt) Zach - AMD Video (http://pastebin.com/4Aj5ebns) ***
Background: Somatic cell nuclear transfer (SCNT) is currently the most efficient and precise method to generate genetically tailored pig models for biomedical research. However, the efficiency of this approach is crucially dependent on the source of nuclear donor cells. In this study, we evaluate the potential of primary porcine kidney cells (PKCs) as a cell source for SCNT, including their proliferation capacity, transfection efficiency, and capacity to support full-term development of SCNT embryos after additive gene transfer or homologous recombination. Results: PKCs could be maintained in culture with a stable karyotype for up to 71 passages, whereas porcine fetal fibroblasts (PFFs) and porcine ear fibroblasts (PEFs) could hardly be passaged more than 20 times. Compared with PFFs and PEFs, PKCs exhibited a higher proliferation rate and resulted in a 2-fold higher blastocyst rate after SCNT and in vitro cultivation. Among the four transfection methods tested with a GFP expression plasmid, the best results were obtained with the Nucleofector™ technology, resulting in transfection efficiencies of 70% to 89% with high fluorescence intensity, low cytotoxicity, good cell proliferation, and almost no morphological signs of cell stress. Usage of genetically modified PKCs in SCNT resulted in approximately 150 piglets carrying at least one of 18 different transgenes. Several of those pigs originated from PKCs that underwent homologous recombination and antibiotic selection before SCNT. Conclusion: The high proliferation capacity of PKCs facilitates the introduction of precise and complex genetic modifications in vitro. PKCs are thus a valuable cell source for the generation of porcine biomedical models by SCNT.
Faculty of Biology - Digital Theses of the LMU (Digitale Hochschulschriften der LMU) - Part 02/06
Unrepaired DNA double-strand breaks can lead to apoptosis or tumorigenesis. In mammals, double-strand breaks are repaired mainly by nonhomologous end joining mediated by the DNA-PK complex. The core protein of this complex, DNA-PKcs, is a DNA-dependent serine/threonine kinase that phosphorylates protein targets as well as itself. To identify new proteins which contribute to double-strand break repair by nonhomologous end joining, we previously performed a yeast two-hybrid screen with fragments of human DNA-PKcs as bait. From the identified putative interaction partners of DNA-PKcs we chose two for further characterization: protein phosphatase 5 (PP5) and Ku70 binding protein 3 (Kub3). With PP5 we have identified the first protein phosphatase with a function in double-strand break repair. We show that protein phosphatase 5 interacts with DNA-PKcs and dephosphorylates, with surprising specificity, at least two of its functional sites. Cells with either hypo- or hyperphosphorylation of DNA-PKcs at these sites show increased radiation sensitivity. For the characterization of Kub3 we describe its correct reading frame and a putative metalloprotease domain. Using a rabbit polyclonal antibody against human Kub3, we demonstrate that Kub3 is a nuclear protein which co-precipitates with DNA-PKcs. When Kub3 is overexpressed in HeLa cells, DNA-PKcs phosphorylation at T2609 is increased after ionizing radiation. However, when Kub3 is knocked down in Drosophila cells by RNAi, cell survival after ionizing radiation is not affected. In summary, with PP5 and Kub3 we have characterized two new interaction partners of DNA-PKcs. While PP5 clearly functions in double-strand break repair, further experiments are necessary to confirm such a role for Kub3.
Faculty of Chemistry and Pharmacy - Digital Theses of the LMU (Digitale Hochschulschriften der LMU) - Part 01/06
In this work, the PKC-mediated inhibition of EGF-induced JNK activity by GPCRs, caused by transmodulation of the EGF receptor, was demonstrated for the first time. In addition, using a dominant-negative mutant, the effect of PKD on this process was demonstrated, and its role in the PDGF-mediated JNK inhibition previously described by Bagowski et al. (1999) was verified. Furthermore, it was shown that the ligands endothelin and LPA, which transmit their signals via different G proteins, follow different signaling pathways in Rat1 cells via the activation of PKD. Endothelin, which activates PKD via Gq proteins, mediates, like PDGF, the inhibition of EGF-induced JNK activation. In contrast, LPA, which additionally stimulates PKD via Gi proteins, leads to a doubling of EGF-induced JNK activity. In addition to EGF receptor transmodulation, this work also investigated its transactivation. Here, the PKC-dependent transactivation of the EGF receptor by PDGF was shown for the first time; in Rat1 cells this requires phorbol-ester-dependent PKCs, whereas in 3T3 L1 cells phorbol-ester-independent PKCs are required. Furthermore, this work demonstrated the constitutive association of PKD with the PDGF receptor, which occurs both in healthy rodent cells and in human glioblastoma cells. The juxtamembrane region of the PDGF receptor appears to be responsible for this binding, while on the PKD side the carboxy terminus is most likely involved. The interactions of PKD with PLC and Syk already known from B cells (Sidorenko et al., 1996) were also examined, and the PDGF-induced association of PKD with PLC was demonstrated in the same systems in which the association of PKD with the PDGF receptor was shown. An association of PKD with Syk, by contrast, was found only in HEK 293 cells overexpressing PKD and Syk.